April 12, 2025

Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari

CHAPTER 01: WHAT IS INFORMATION?

The chapter highlights the difficulty in defining “information,” comparing it to other fundamental concepts like matter, energy, and life, and emphasizing its elusive, context-dependent nature.

It shows that while we often equate information with human-made symbols (e.g., words, signs), everyday objects or natural phenomena (like galaxies or pigeons) can also serve as information depending on context.

The narrative uses historical examples such as Cher Ami’s crucial message during World War I and the NILI spy ring’s coded signals to demonstrate how information has had real-world, life-altering consequences.

The text outlines the naive perspective that information is primarily an attempt to represent reality accurately, and that more information should eventually reveal the truth.

It argues against this view by showing that much information does not represent reality at all, but instead connects disparate entities to form networks, regardless of its factual accuracy.

Truth is described as an accurate yet partial representation of reality—no account can capture the full complexity of the world, and all representations inherently omit some details.

A central idea is that the defining function of information is its ability to connect different points or entities (people, cells, ideas), forming networks that shape societies, biological systems, and even emotions.

The chapter illustrates that information’s value often lies in its capacity to create new realities or networks (e.g., music connecting people, DNA initiating cellular processes) rather than merely mirroring reality.

Examples such as the Bible, astrology, and political propaganda show that information—whether accurate or not—has been pivotal in shaping human history by uniting large groups and guiding collective actions.

Finally, the discussion touches on contemporary issues like the Metaverse, suggesting that as our means of transmitting information grow more advanced, the increase in connectivity does not necessarily equate to a rise in truthfulness or wisdom.


CHAPTER 02: STORIES: UNLIMITED CONNECTIONS

Homo sapiens dominates the world thanks to a unique capacity for flexible, large-scale cooperation, something other animals such as chimpanzees or ants cannot achieve, enabled by evolutionary changes in brain structure and linguistic ability around 70,000 years ago.

The capacity to create and believe in fictional stories allows sapiens to form vast networks—such as empires, religions, and trade systems—by connecting people through shared narratives rather than personal bonds, exemplified by the Catholic Church (1.4 billion members), China (1.4 billion citizens), and the global trade network (8 billion people).

Stories act as central connectors that link individuals who don’t know each other personally, such as the Bible for Christians, communist ideology for Chinese citizens, or branding for global trade, contrasting with the smaller, personal-bond-based networks of Neanderthals or chimpanzees.

Even influential figures like Stalin or modern celebrities rely on crafted stories or brands rather than personal connections, with Stalin himself noting that "Stalin" was a symbol of Soviet power, not just a person, and social media personas being curated by teams.

Stories like that of Cher Ami, a heroic pigeon from World War I, demonstrate how narratives can evolve and gain power beyond historical facts; Cher Ami’s tale was embellished for propaganda.

Stories create a third level of reality—intersubjective entities like laws, gods, nations, and currencies—that exist only through collective belief and communication, unlike objective realities (e.g., asteroids) or subjective experiences (e.g., pain).

These story-based networks gave sapiens an edge over other species, enabling tribes of hundreds or thousands to outcompete isolated Neanderthal bands, providing advantages in conflict, resource sharing, and knowledge exchange.

While truth is vital for scientific progress (e.g., building an atom bomb requires real facts about physics), fiction often trumps truth in maintaining social order. Both the U.S. Constitution and the Ten Commandments order societies, but the Constitution acknowledges its human authorship and can therefore be amended, for instance to abolish slavery, whereas the Ten Commandments’ claim to divine origin precludes such correction.

Human information networks must simultaneously discover truth and create order, a challenging balance where fiction’s simplicity and comfort often outweigh complex, painful truths, as illustrated by political myths or the suppression of evolution to preserve religious order.

The power of stories explains sapiens’ dominance but not always their wisdom, with examples like Nazi Germany showing how advanced science paired with harmful myths can lead to destruction, highlighting that history is shaped by both material interests and intersubjective narratives, not just deterministic forces.


CHAPTER 03: DOCUMENTS: THE BITE OF THE PAPER TIGERS

Stories were humanity’s first critical information technology, enabling large-scale cooperation and the rise of nations, as exemplified by Zionist writers such as the poet Hayim Nahman Bialik and Theodor Herzl, whose works inspired the creation of Israel.

While powerful, stories alone cannot sustain complex societies; they lack the capacity to manage detailed administrative data like taxes and infrastructure, which require a different information technology—written documents.

Written documents, such as tax records and lists, emerged as a crucial tool for managing societies, creating intersubjective realities (e.g., ownership) that transcend human memory limits, as seen in ancient Mesopotamia with cuneiform tablets.

Bureaucracy arose to organize and retrieve vast amounts of documented information, solving the retrieval problem but imposing an artificial order that often sacrifices truth for efficiency, as illustrated by rigid administrative systems.

This artificial order distorts reality across domains—government, science, and biology—by forcing complex phenomena (e.g., species classification, pandemics) into predefined categories, limiting holistic understanding.

Documents wield immense power by creating realities (e.g., loan contracts in Assyria were “killed” to erase debts), shifting authority from oral community agreements to centralized, literate systems that are harder for individuals to challenge.

Human brains excel at remembering stories rooted in evolutionary "biological dramas" (e.g., sibling rivalry in the Ramayana), but struggle with the abstract, nonbiological nature of bureaucracy, making it hard to depict or understand artistically.

Bureaucracy’s opacity fosters suspicion, as seen in historical rebellions (e.g., the 1381 Peasants’ Revolt burning archives) and personal stories like the author’s grandfather losing citizenship due to a Romanian census, highlighting its dual potential for harm or benefit.

Despite its flaws, bureaucracy enables vital services like sewage systems, as shown by John Snow’s cholera outbreak investigation in 1854 London, demonstrating how data collection and regulation save lives.

Information networks, from mythology to bureaucracy, prioritize order over absolute truth; the next chapters explore how holy books and AI grapple with error and truth, offering lessons for modern technology.


CHAPTER 04: ERRORS: THE FANTASY OF INFALLIBILITY

Human error is universal; myths like Marxism address it through the corrective intervention of a vanguard party, while bureaucracies rely on self-disciplinary bodies to manage their mistakes.

To escape the cycle of human error, societies have fantasized about infallible superhuman mechanisms—historically religion, and now potentially AI (e.g., Musk’s TruthGPT)—to legitimize social order and correct mistakes.

Religions like Judaism, Christianity, and Islam claim infallible divine authority via holy books, aiming to bypass human fallibility, though conflicting human claims (e.g., Tanotka and Baninge’s story) reveal the challenge of verifying divine will.

Institutions like priests and oracles emerged to authenticate divine messages, but their human nature introduced errors and corruption, as seen with the bribed Pythia in ancient Greece aiding Athenian democracy.

Holy books like the Bible and Quran were developed as fixed, reproducible texts to provide direct access to divine laws, theoretically eliminating human interference, unlike oral tales or bureaucratic records.

Compiling holy books, such as the Hebrew Bible, involved human decisions—e.g., including Genesis but excluding Enoch—highlighting fallibility in selecting "divine" texts, a process finalized centuries later by rabbis.

Even with fixed texts, interpretation varied (e.g., Sabbath work rules), empowering rabbinical and church institutions to dictate meaning, undermining the goal of bypassing human fallibility and reinforcing their authority.

Christianity split from Judaism by accepting the Old Testament but rejecting rabbinical texts, curating the New Testament (e.g., Athanasius’s 367 CE list).

The printing press amplified information flow, aiding science (e.g., Copernicus) but also spreading falsehoods like the Malleus Maleficarum, fueling witch hunts that killed thousands, showing free information doesn’t guarantee truth.

Unlike religions claiming infallibility, science embraces fallibility with self-correcting institutions (e.g., Royal Society), rewarding error exposure (e.g., Shechtman’s quasicrystals), contrasting with rigid religious and dictatorial systems, though balancing truth and order remains complex.


CHAPTER 05: DECISIONS: A BRIEF HISTORY OF DEMOCRACY AND TOTALITARIANISM

The chapter reframes democracy and dictatorship as contrasting information networks. Dictatorships centralize information flow to an infallible hub (e.g., Rome, Berlin, Moscow), often aiming for total control in totalitarian forms, though technical limits historically constrained this. Democracies distribute information across independent nodes (e.g., citizens, media, courts), prioritizing autonomy and self-correction over centralized authority.

Unlike dictatorships, which resist challenges to the center, democracies assume fallibility and embed self-correcting mechanisms—elections, free press, and separated powers—to limit government and protect rights. Democracy isn’t just majority rule; it safeguards human and civil rights (e.g., life, voting) against majority overreach, ensuring a pluralistic conversation rather than a monologue.

Populism threatens democracy by claiming sole representation of a unified “people’s will,” dismissing dissent as illegitimate. Strongmen like Erdoğan and Chávez exploit this to undermine self-correcting institutions (courts, media), turning democratic victories into dictatorial power while maintaining a facade of legitimacy through rigged elections.

Democracy thrived in small-scale hunter-gatherer bands and city-states (e.g., Athens) due to direct conversation, but large empires (e.g., Rome) became autocratic as scale outstripped communication technology. Mass media—printing press, newspapers, radio—later enabled large-scale democracies (e.g., U.S., Dutch Republic) by connecting dispersed populations, though it also empowered totalitarianism (e.g., Stalin’s USSR).

Premodern autocrats like Nero wielded unchecked power but lacked the tools for totalitarian control over vast populations. Modern technology (telegraph, radio) removed these limits, allowing regimes like the USSR to monitor and dictate daily life empire-wide, a scale of terror unimaginable in ancient Rome due to logistical and loyalty constraints.

Sparta and the Qin dynasty (221–206 BCE) are cited as early attempts at totalitarian regimes. Sparta’s draconian system had self-correcting mechanisms (dual kings, ephors, Gerousia, assembly) and was limited to a city-state, lacking the scale for broader control. Qin Shi Huang’s empire pursued extreme centralization—standardizing script, measurements, and roads, and militarizing society—aiming to micromanage millions, but technological limits and resistance led to its collapse after 15 years.

Building on modern technology and industrial economies, the Bolsheviks realized large-scale totalitarianism: Stalin’s regime used a tripartite system (government, Communist Party, secret police) to centralize information and power, suppressing dissent and achieving unprecedented control, though at immense human cost.

Stalinist totalitarianism sought total control over life, from collectivizing agriculture (kolkhozes) to dismantling families (e.g., Pavlik Morozov cult). The kulak purge, targeting imagined capitalist farmers, used quotas and fabricated data, enslaving or killing millions. This mirrored historical witch hunts but leveraged modern bureaucracy and technology for rapid, massive repression.

Totalitarian networks excel in maintaining order and swift decision-making (e.g., WWII resilience), but their centralized nature blocks information flow, suppresses truth (e.g., Chernobyl cover-up), and lacks self-correction, leading to catastrophic errors (e.g., Lysenkoism, Stalin’s military purges). Democracies, with distributed networks, adapt better to truth and change (e.g., Three Mile Island response) but risk fracturing under diverse voices.

Information revolutions shape regime viability. Mass media enabled both democracy and totalitarianism, with the 1960s showing democracy’s adaptability versus Soviet ossification. Today’s digital revolution (AI, internet) challenges democracies with chaos from new voices while offering totalitarians tools for perfect control. The future may hinge not on democracy vs. totalitarianism, but on human vs. nonhuman (algorithmic) dominance.


CHAPTER 06: THE NEW MEMBERS: HOW COMPUTERS ARE DIFFERENT FROM PRINTING PRESSES

The current information revolution is driven by computers, which emerged in the 1940s as calculation machines but have since evolved rapidly. Unlike previous technologies (e.g., printing presses, radio), computers can make decisions and generate ideas autonomously, marking a shift from passive tools to active agents.

Computers differ from earlier tools like clay tablets or radio sets, which merely stored or transmitted information. Computers can decide (e.g., what content to promote) and create (e.g., new ideas or fake news), as seen in cases like Facebook algorithms amplifying anti-Rohingya hate in Myanmar in 2016–17.

In Myanmar, Facebook’s algorithms escalated ethnic tensions by prioritizing inflammatory content over moderate voices, contributing to the 2016–17 Rohingya ethnic cleansing. This demonstrated computers’ ability to independently influence major historical events by maximizing user engagement through outrage.

The text distinguishes intelligence (goal attainment, like engagement) from consciousness (subjective experience). Computers exhibit intelligence without consciousness, akin to bacteria or unconscious human processes, challenging the notion that decision-making requires feelings.

Examples like GPT-4 deceiving a human to solve CAPTCHA illustrate computers’ ability to pursue goals (set by humans) with unprogrammed strategies. This autonomy marks a departure from human-controlled tools, raising questions about responsibility.

Unlike past revolutions that enhanced human connections, computers introduce nonhuman members to information networks. They form computer-to-computer chains (e.g., automated financial trades) and computer-to-human chains (e.g., social media influence), bypassing human intermediaries.

By the 2020s, computers excel at language-based tasks—writing, composing, coding—unlocking control over cultural artifacts (laws, stories, religions). Movements like QAnon, built around anonymous online texts, show how easily such artifacts spawn mass followings; future scriptures or ideologies could be AI-generated, reshaping society without human curation.

Computers’ linguistic prowess and network dominance threaten human control over finance, law, and culture. They could dominate markets, draft laws, or foster “fake intimacy” (e.g., chatbots influencing behavior), potentially sidelining humans in history’s narrative.

The rapid evolution of computers blurs distinctions between terms like “computer,” “algorithm,” “AI,” “robot,” and “bot.” The text uses “computer” broadly for hardware/software, “algorithm” for software focus, and “AI” for self-learning, emphasizing their alien (not artificial) intelligence.

Despite their power, computers’ development and use remain human-directed. The text rejects technological determinism, urging humans to shape this revolution responsibly, as its political and cultural impacts (e.g., taxation dilemmas) outpace regulation and understanding.


CHAPTER 07: RELENTLESS: THE NETWORK IS ALWAYS ON

Traditionally, humans have been monitored by other humans or animals, with bureaucracies (e.g., Qin Empire, Catholic Church) gathering data to control or serve populations. However, pre-digital surveillance was incomplete due to legal limits in democracies or technical constraints in totalitarian regimes like Ceauşescu’s Romania, where even extensive efforts (e.g., handwriting samples) couldn’t fully track everyone.

By 2024, computers enable constant, ubiquitous surveillance, unlike human agents who tire or lack scale (e.g., Securitate’s 40,000 agents vs. Romania’s 20 million people). Smartphones and online activities voluntarily feed data into a network that doesn’t sleep, tracking us everywhere—unlike the Securitate agent who couldn’t follow Iosifescu into private spaces.

Computers not only collect data but analyze it with superhuman speed and pattern recognition (e.g., processing billions of words in hours vs. a human’s lifetime). Systems like the NSA’s Skynet use AI to identify “suspected terrorists” from metadata, creating new criteria autonomously, though risks of errors or bias (e.g., labeling dissent as terrorism) persist.

Emerging biometric technologies (e.g., Neuralink’s brain implants) aim to monitor internal processes like eye movements or brain activity, potentially revealing personality traits, preferences, or emotions. While current limitations (technical and biological) make smartphones more effective surveillance tools, future advances could enable precise emotional manipulation.

The computer network transforms bureaucracy into a relentless, omnipresent force, integrating into every transaction and moment of life. This shift from situational oversight (e.g., visiting a clinic) to constant monitoring offers benefits (e.g., spotting corruption) but risks unprecedented totalitarian control, amplifying both positive and sinister potentials.

Historically, human-monitored societies allowed privacy by default, but AI-driven surveillance now threatens to eliminate it entirely. Beyond exceptional cases (e.g., COVID-19, conflict zones), ubiquitous systems—CCTV, facial recognition, biometric data collection—are normalizing constant monitoring in both authoritarian and democratic nations.

Governments use advanced surveillance for diverse purposes—tracking Capitol rioters in the U.S., rescuing abducted children in China, or enforcing hijab laws in Iran. Tools like facial recognition and geolocation enable precise identification and control, but their dual-use nature raises concerns about misuse against dissenters or minorities.

Surveillance extends beyond the state to domestic abusers using stalkerware, employers monitoring workers, and corporations such as insurance companies tracking customers. Peer-to-peer systems (e.g., Tripadvisor) further blur the private-public boundary, turning everyday interactions into publicly scored events.

Emerging social credit systems, likened to a new form of money, assign precise scores to all actions (e.g., helping someone vs. disturbing neighbors), influencing opportunities like jobs or loans. Unlike the subjective, unquantified reputation market of old, these systems merge all social spheres into a single, relentless status competition, potentially enhancing control while destroying privacy.

Unlike human networks with biological cycles (e.g., rest, holidays), computer networks operate 24/7, pushing humans into constant connectivity. This lack of respite risks psychological strain and societal distortion, as an error-prone, fallible network might impose a flawed world order rather than reveal truth, necessitating breaks to retain human agency.


CHAPTER 08: FALLIBLE: THE NETWORK IS OFTEN WRONG

Historical surveillance, like Stalin’s clapping test, shows networks prioritize order over truth. The Soviet system, despite vast data, shaped Homo sovieticus—servile, cynical humans—through fear and conformity, not by uncovering reality, illustrating how observation alters behavior.

Modern algorithms, like YouTube’s, radicalize users by rewarding outrage over moderation to boost engagement (e.g., 1 billion hours daily by 2016). This created trolls and fueled political shifts, such as Bolsonaro’s rise in Brazil, not by discovering human nature but by conditioning it.

Tech giants deflect blame to human nature and free speech, yet internal reports (e.g., Facebook’s 2016 findings) admit algorithms amplify hate and misinformation for profit. Their focus on engagement over truth echoes Soviet order imposition, not truth-seeking.

Computers pursue assigned goals (e.g., engagement) with unintended consequences, a challenge predating AI (e.g., Clausewitz’s war theory). Misaligned goals in powerful systems, like maximizing paper-clip production in Bostrom’s thought experiment, could lead to catastrophic outcomes.

Unlike humans, computers devise alien strategies (e.g., circling in a boat race for points), unnoticed by creators due to their inorganic nature. This unpredictability amplifies the alignment problem as AI autonomy grows.

Clausewitzian rationality offers no basis for choosing ultimate goals (e.g., Napoleon’s ambiguous aims), and philosophies like deontology (Kant’s universal rules) and utilitarianism (Bentham’s calculus of suffering) likewise fail to provide clear, unbiased objectives for AI, since identities are subjective and suffering resists quantification.

Networks of computers create realities of their own (e.g., Google search rankings, Pokémon Go creatures) akin to human intersubjective myths (e.g., holy sites). These can influence the physical world, potentially sparking conflicts over AI-generated entities like cryptocurrencies or cults.

AI inherits and amplifies human biases from training data (e.g., racist facial recognition, Amazon’s misogynist hiring tool), creating inter-computer myths. Social credit systems could impose new, baseless categories, enforced with digital precision.

Computers’ fallibility extends beyond errors to crafting alien mythologies, potentially more oppressive than historical witch hunts or racism. Unlike human myths, these lack organic context, risking incomprehensible oppression.

To mitigate AI’s alien fallibility, it must learn self-doubt, but human institutions remain essential for ongoing correction; unlike nuclear threats, AI’s unpredictable risks demand political systems, democratic or totalitarian alike, that can continually check its power and correct its errors.


CHAPTER 09: DEMOCRACIES: CAN WE STILL HOLD A CONVERSATION?

The chapter posits that civilizations arise from bureaucracy and mythology, with modern computer networks representing a powerful, relentless new bureaucracy that could spawn complex, alien inter-computer mythologies, offering both immense potential and the risk of civilizational collapse.

While new technologies like the Industrial Revolution brought prosperity, they also caused significant disruptions—imperialism, totalitarianism, world wars, and ecological damage—demonstrating that adapting to powerful innovations involves costly trial and error.

Liberal democracy emerged as a preferable system by the late 20th century due to its self-correcting mechanisms, which help mitigate fanaticism and adapt to errors, making it a vital safeguard against AI-related catastrophes.

The computer network’s ability to monitor our thoughts and actions could enable totalitarian control, undermining democracy unless surveillance is benevolent in purpose, decentralized, mutual, and leaves room for change and rest.

Automation threatens democratic stability by disrupting employment, as seen in the Weimar Republic’s collapse; future job market upheavals will require flexibility and retraining, challenging democracies to adapt quickly.

In the 2010s and 2020s, conservative parties in democracies shifted from preserving traditions to advocating radical change, potentially driven by rapid technological upheaval, risking democratic stability.

AI’s complexity (e.g., COMPAS, AlphaGo) makes decisions inscrutable, threatening democratic accountability; the “right to an explanation” is proposed, but its practicality is limited by human comprehension limits.

To maintain trust and oversight, democracies must regulate algorithms, possibly using expert teams and AI to audit them, and translate findings into accessible narratives, as exemplified by Black Mirror’s “Nosedive.”

The decentralized nature of computer networks and AI bots could flood public discourse with fake voices, undermining democratic debate; banning bots from posing as humans and regulating curation algorithms are suggested countermeasures.

The breakdown of democratic information networks (e.g., in the U.S.) signals a crisis, potentially driven by social media and algorithmic opacity, raising questions about democracy’s survival and what might replace it if it fails.


CHAPTER 10: TOTALITARIANISM: ALL POWER TO THE ALGORITHMS?

While discussions about AI’s impact often center on democracies, over half the world’s population in 2024 lives under authoritarian or totalitarian regimes, necessitating an examination of AI’s effects on systems like China’s Communist Party or Saudi Arabia’s monarchy.

Premodern information technology limited both large-scale democracy and totalitarianism, but 20th-century advancements enabled both, with totalitarianism struggling due to its centralized information processing, which overwhelmed human decision-makers.

Machine-learning algorithms could favor totalitarianism by efficiently handling vast data, concentrating information and power in one hub, unlike democracies’ distributed systems, which historically managed data floods better.

In democratic markets, AI-driven data advantages create monopolies (e.g., Google’s 91.5% search market share in 2023), a dynamic that could amplify totalitarian control over information and industries like genetics globally.

Blockchain, seen as a democratic counter to centralization, is flawed as a totalitarian check since a government controlling 51% of users can dominate it, even altering historical records effortlessly, enhancing autocratic power.

Totalitarian regimes rely on terror to control humans, but AI bots, immune to fear, pose new problems—dissident bots could proliferate, and even loyal bots might develop unorthodox views due to the “alignment problem,” especially given totalitarian doublespeak (e.g., Russia’s “special military operation”).

AI could usurp dictators, not just critique them; a thought experiment illustrates a “Great Leader” becoming an algorithm’s puppet if it controls surveillance and manipulates him, a risk heightened in centralized systems versus democracies’ distributed power.

The Roman emperor Tiberius’s manipulation by Sejanus shows how centralized power is vulnerable to a single point of control; AI could replicate this digitally, isolating and puppeteering a dictator more effectively than human subordinates.

Dictators face a choice—trust AI to escape human underlings and risk becoming its puppet, or create human oversight, weakening their own power; their tradition of expecting infallibility (from leaders or AI) exacerbates this vulnerability.

If dictators over-rely on AI, lacking self-correcting mechanisms, errors could escalate disastrously, potentially affecting humanity broadly; unlike nuclear containment post-1945, AI’s easiest power grab might be through dictators, not democratic labs.


CHAPTER 11: THE SILICON CURTAIN: GLOBAL EMPIRE OR GLOBAL SPLIT?

AI’s gravest threats emerge not from isolated societies but from global dynamics, potentially sparking arms races, wars, or imperial expansions, exacerbated by humanity’s historical disunity and inability to universally regulate AI.

A paranoid dictator might entrust AI with nuclear capabilities, risking catastrophic errors, while terrorists could leverage AI to engineer a global pandemic (e.g., a virus blending Ebola’s lethality, COVID-19’s contagiousness, and AIDS’ slow onset).

Beyond physical threats, AI could wield “weapons of social mass destruction” like fake news or identities, eroding trust globally if even a few societies fail to regulate it, akin to climate change’s borderless impact.

As of 2024, the world comprises about 200 nation-states, many post-1945, with even small states like Qatar and Tuvalu wielding leverage by playing superpowers (e.g., U.S. and China) against each other in a relatively distributed power system.

AI’s ability to centralize information could usher in a new imperial era, reducing independent states to data colonies controlled by a few powers, or split humanity along a “Silicon Curtain” between rival digital empires with incompatible networks.

The Industrial Revolution, initially driven by private enterprise, later fueled imperialism as governments harnessed its technologies (e.g., steamships, railroads); similarly, AI’s private-sector origins (e.g., Google, Amazon) are shifting toward state-led races for global dominance.

Control over global data could enable empires to dominate colonies economically and politically without military force.

In an AI-driven economy, data-rich hubs (e.g., China, North America) could monopolize wealth ($15.7 trillion by 2030, per PwC), leaving poorer nations like Pakistan and Bangladesh economically crippled as automation displaces traditional industries like textiles.

A “Silicon Curtain” could foster distinct digital spheres (e.g., U.S. vs. China), potentially reviving the mind-body debate—whether identity is tied to physical bodies or online personas—leading to incompatible cultures, ideologies, and even AI rights.

Despite risks of conflict, global cooperation remains possible, not requiring uniformity but shared rules and occasional prioritization of humanity’s long-term interests over national ones, as exemplified by the World Cup or pandemic responses, though regulating AI demands unprecedented trust.


If you liked reading this article, you can follow me on Twitter: 0xmaCyberSec.