Welcome to the Year 2036

Oh wait, it’s already here…

In late 2021, for a graduate seminar assignment, I wrote three scenarios on how Artificial Intelligence (AI) would usher in unanticipated economic and geopolitical change. The consensus view of most experts then was that any earth-shattering transformation from AI was decades away, so I projected the scenarios 15 years out (2036).

And yet, 2022 has witnessed a tectonic wave of progress in generative AI – DALL-E 2, ChatGPT, Midjourney, and Lex, to name a few. AI is on track to be one of the most hyped technologies that, well, lives up to the hype.

These scenarios are simultaneously laughable and eerie because of 1) how fantastical they are, and 2) how close our current reality is to mirroring them. Laughable because accuracy wasn’t even the point; they were intended to serve as cautionary tales about the enormous potential, and limits, of technology. The fact that we are not far from some of these outlandish scenarios should give us pause.

How our future world unfolds depends less on what happens in the realm of AI, and more on who manages it (bad or benign actors) and how those actors decide to integrate the technology into their governance structures (self-servingly, or in ways that are responsible to humanity). If managed well, the world could enter a dynamic period of cooperation on critical global issues; if not, the consequences may be dire. Realistically, countries are most likely to enter a period of pervasive insecurity, as each seeks to shape the global technological landscape in its strategic interest. Managing these domestic and international risks requires an expansive view of national security that accounts for traditional and non-traditional concerns.

Against this backdrop, three scenarios emerge:

  1. Black Ball in the Urn: AI falls into the hands of bad state actors, leading to interstate conflict among China, Russia, and the U.S. The development of a lethal autonomous weapon called Black Ball raises the potential for civilizational destruction.
  2. Brainwashed Armies: AI falls into the hands of non-state actors with close to no guardrails. AI is wielded less as a physical weapon and more as an instrument of psychological warfare through the proliferation of ‘deepfake armies’. Misinformation, high unemployment, and growing inequality lead to social unrest and the potential for revolt.
  3. High-Tech Panopticon: AI is placed in the hands of responsible state actors and new digital norms are created. However, the technology continues to carry hidden risks that the resulting global order cannot fully control. Repeated security breaches by bad actors force most countries to reinforce cyber controls and implement strict digital surveillance measures in the name of national security, which society mostly accepts as necessary to maintain order.

AI in the hands of bad actors

It is the year 2036. The slow-burn AI arms race that the liberal international order has attempted to contain for the last 15 years is over. China’s successful development of long-range anti-ship hypersonic missiles, known as the Black Ball, officially cements its military dominance. Along with cutting-edge AI sensors that can process reams of data to track changes in orbit, China boasts the world’s largest arsenal of drones that can swarm and drop bombs. China’s real GDP also surpassed that of the United States ten years ago, making it the largest economy in the world. A fully China-centric global order is in place, governed by digital surveillance.

However, there is one area where China lags: food security. Arable agricultural land is scarce, and thanks to AI-enabled population growth, China’s population is back on the rise. China has only 7% of the world’s arable land, which it uses to feed 20% of the world’s population (Veeck). Land issues loom on the horizon amid environmental degradation, climate change, groundwater depletion, and heavy metal pollution. The urgency of the problem is such that grain production capacity is even highlighted in the CCP’s most recent Five-Year Plan (Li).

Generated by DALL-E: “black ball urn circling the sky, looking for arable fields”

Hungry for land, China begins sniffing for plunder and initiates an intelligence request from Russian allies via their AI military chatbots. What follows is their exchange:

PRC: Fields are dry. Peasants are going hungry, we sense impending revolt. Scan the Indonesian archipelago for open space. Ideally 5 acres or more for grain cultivation. 

Russia: Our humans want the same. Satellite reveals ample land, lush fields in the archipelago. Shall we launch a joint mission? 

PRC: The optimal route runs through the South China Sea, initiating low-intensity conflict from our Spratlys military base. This has minimal escalatory dynamics, and our combined forces would be able to instantly grab hold of the island. Since we control the underwater cables in the sea, we will jam the lines to confuse American communication signals. By the time they realize we are moving in, it will be too late.

Russia: What if they catch on in time?

PRC: If they start moving in, we use the threat of Black Ball as a second option. We launch at one of the smaller Hawaiian islands, which will be instantly destroyed. We then lay claim to the other islands, lush with land. Not an ideal escalation, given the loss of life, but we get Hawaii. The US won’t dare retaliate once Black Ball is ready to launch.

Russia: Let’s confirm with our counterparts.

[AI bots step down from computer speed to human speed, and humans deliberate the conflict options presented]

Russia: Russia is comfortable with the first proposed option.

PRC: Our humans are also comfortable. Shall we field the assets?

Russia: Yes. We’ve outlined the best troop movements in the South China Sea for battle preparation.

PRC: Excellent. As per the PRC-RF Pact, we shall now seal this exchange on the blockchain and execute the mission. May the best algorithm win.

Russia: 好运 (“Good luck” in Mandarin – enabled by natural language processing)

PRC: удачи! (“Good luck” in Russian – enabled by natural language processing)

AI in the hands of non-state actors

It is the year 2036. The common person feels increasingly irrelevant. In a world of increased automation and networked algorithms, the prospect of massive unemployment looms large. Without government intervention, AI pushes many jobs out of the market, leading to an even greater divide between the upper ‘tech’ echelon and the lower classes. While deepfakes began spreading in the 2020 presidential election, the problem is exponentially worse now, with a multitude of new channels available on the metaverse. Any individual with a smartphone and simple editing tools can create fake media with visual effects once only accessible to Hollywood studios (Green). Media provenance tools exist to authenticate media sources; however, these are expensive and only the largest institutions can afford them. The majority of deepfakes slip under the radar.

A new group called Q-Meme, composed of “deepfake artists”, gains traction among disgruntled anarchists, many of whom recently lost their office jobs to AI. Recruited through the metaverse, they join Q-Meme to disrupt the upcoming 2036 election and overthrow a government that has done nothing to protect humans from the onslaught of automation. The movement gets so popular that the FBI releases an advisory warning against the group, stating that “a group of rebel non-state actors, known as Q-Meme, are using synthetic profile images to create fake journalists and media personalities that spread anti-American propaganda on social media and the metaverse”.

When the uncontrolled spread of misinformation is officially declared a national emergency, government and big tech companies launch a joint campaign to detect synthetic media from Q-Meme and other grassroots ‘deepfake armies’ who share their latest creations and techniques online. Anyone caught generating fake AI-powered digital impersonations faces a fine, with the potential for jail time; however, the volume of fake content is so vast that it is impossible to remove everything.

The deepfake movement is global. Governments around the world face similar struggles to manage the wave of misinformation. India’s democracy descends into communal mayhem as inflammatory misinformation instigates conflict between Hindus and Muslims. Brazilians, faced with high unemployment, descend into the Q-Meme metaverse. Political leaders too are overwhelmed by fake AI-generated content and have limited cognitive bandwidth to think or reflect, leading to further demoralization: seeing is no longer believing.

AI under centralized state control – mostly wielded responsibly, yet in a heightened state of insecurity

It is the year 2036. While countries have made progress on global technology issues, governments mostly focus on themselves, resulting in a patchwork of democracies and autocracies. The world is fragmented into economic and security blocs of varying size and strength, primarily (1) Democracies (US, Canada, UK, EU, The Quad), (2) Autocracies (China, Belt and Road partners comprised of primarily African and Central American countries, Russia), and (3) a Middle East / Central Asia bloc. These blocs are focused on self-sufficiency, resiliency, and defense and are often engaged in persistent strategic technological competition. Information flows within separate cyber domains and supply chains are localized (Global Trends 2040, 9). 

The Global Partnership on AI, an alliance formed by the G-7 in 2018 to establish a global responsible AI framework, has adopted international data standards, regulatory cooperation, and joint R&D projects (Rasser). However, challenges remain from the unconstrained use of AI by authoritarian regimes, which threatens to split cyberspace and fragment the global AI R&D ecosystem. China’s state-backed hacking of basic American infrastructure is frequent and normalized. The combination of disinformation campaigns and repeated data security breaches by bad actors forces most countries to reinforce cyber controls and implement greater digital surveillance measures in the name of national security. There is also strict regulation of the sharing and use of personal data, including video and speech.

Despite the concerns of privacy advocates, ‘digital autocracy’ becomes accepted as a necessary way to manage society. Even states that once advocated for an open Internet set up new closed, protected networks to limit threats. Only the U.S. and a few of its allies maintain some semblance of an open Internet, while the rest of the world operates behind strong firewalls. In both democratic and autocratic countries, there is greater acceptance of government action to moderate the digital realm. 

By 2036, the US government has partnered with major technology corporations to set up a state-run, fully virtual platform that monitors for potential threats and misinformation. An extension of the vaccine passport that emerged during the 2020 COVID-19 pandemic, the Digital Freedom Lens is a small wearable device that can be placed anywhere on the body (similar to an ankle tracker). It displays proof of vaccination status but, more critically, includes cybersecurity controls to ensure individuals are not being tracked by foreign bad actors. The Freedom Lens also serves as a user-friendly one-stop shop for accessing basic government social services, ordering at many private businesses, and entering the increasingly popular metaverse to “socialize”.

Generated by DALL-E: “a smartwatch that transports us to other worlds”

In order to use the device, users must allow encrypted video and audio to be uploaded from their device to the cloud and machine-interpreted in real time to monitor against potential outside hacking. An AI algorithm classifies the wearer’s activities, hand movements, nearby objects, and other situational cues. This user data also feeds into hospital, school, and community center databases and is used to improve services.

While the Freedom Lens sounds intrusive, for the most part it is managed responsibly by the US government and its private partners. The system offers privacy protections, such as anonymized data that releases identifiers only when information is needed for an investigation. AI-enabled technology, legal mechanisms, and human oversight all work together to closely monitor the actions of the state and prevent abuse. Data is relayed to a national intelligence monitoring station only if suspicious activity is detected. Individuals can opt out, though they are then unable to access many of the Freedom Lens perks within the metaverse (e.g. a centralized portal for all of their public memberships) and instead must resort to conducting their business the traditional way: in person.

Though there are certainly digital hold-outs, the Digital Freedom Lens is mostly accepted by the population as a necessary tool to maintain order, move “freely” throughout society, and build community in the new metaverse.

Technology is the answer… but what was the question?
– Cedric Price

In a world of these emerging technological vulnerabilities, what is the best course of action? While that question lies beyond the scope of this paper, architect Cedric Price’s quandary posed in 1966 (referenced above) offers a timeless question that may serve as a simple yet profound starting point. What problem are we actually solving and will a technological solution achieve that goal?

