Journal of Political Risk, Vol. 13, No. 1, January 2025
By Anders Corr
A new strategy for peacefully democratizing China and other autocratic states was published recently. By securing a democratic lead in artificial intelligence (AI), its benefits could be used to incentivize autocracies into nonaggression against democracies. AI could then unblock information and defeat autocratic censorship, ultimately improving education in autocracies to the point that Chinese people themselves could drive and achieve democratic reforms.
While using the stick of military AI to deter autocracies has long been discussed, the idea of using the carrots of AI to buy peace with autocracies, and then using the same technology to defeat censorship and achieve democratization in those countries, had not, to the best of this author's knowledge, been published before October 2024.
The idea, as it would apply to China, is that the significant incentives of AI-enhanced civilian goods and services could be offered to the country, but only if Beijing stops aggression against Taiwan, Japan, India, and in the South China Sea. Then, AI could defeat China’s censorship so Chinese citizens could more easily educate themselves about the benefits of democracy, which would help them build a political movement powerful enough to achieve democratic reforms.
That may sound unlikely, but it is possible if the democracies secure a monopoly over the most powerful forms of AI, and if AI becomes what some believe it could: an intelligence that knows 1 million times more than all of science knows today, bringing unimaginable health, educational, and economic benefits to humanity. Some believe that this event, called the “singularity” by some AI analysts, could happen as early as five years from now.
Dario Amodei, the CEO and co-founder of the leading ethical AI company Anthropic, developed the new strategy to promote democracy globally. He laid it out in a 15,000-word essay released in October.
In a tech field dedicated to moving fast and breaking things to get ahead, Mr. Amodei's company is the leader in safer approaches to AI through the use of guardrails. Ironically, and in part because of this safer approach, his company is now one of the top dedicated AI companies, alongside the likes of OpenAI, Character.AI, and Cohere. Mr. Amodei helped develop OpenAI's GPT-3, a predecessor of ChatGPT, before leaving to co-found Anthropic. The company's name derives from a Greek root meaning "relating to humans."
Anthropic has raised $7.7 billion in funding, including from Google, Amazon, and Salesforce, at a valuation estimated to be as much as $40 billion. Two unspecified "Asian telecoms," most likely Chinese, have also funded Anthropic and would most likely object to Mr. Amodei's new strategy.
His path-breaking essay is titled "Machines of Loving Grace: How AI Could Transform the World for the Better." This positive take on AI is unusual for him, given the company's focus on not only building AI models but also mitigating the risks of AI. In 2023, Mr. Amodei estimated a 10 to 25 percent chance that AI would destroy humanity. His essay therefore requires less of the salt with which other paeans to AI should be taken, though a grain or two remains prudent. Anthropic projects roughly $1 billion in annual revenue by the end of 2024, according to the company's own predictions.
In the paper, Mr. Amodei notes the scale of the China-related problem we face. “Twenty years ago US policymakers believed that free trade with China would cause it to liberalize as it became richer; that very much didn’t happen, and we now seem headed for a second cold war with a resurgent authoritarian bloc.” Mr. Amodei notes that internet technology, contrary to early hopes, likely advantages authoritarianism through providing autocrats with enhanced tools for propaganda and surveillance. This hits the nail on the head.
“Unfortunately, I see no strong reason to believe AI will preferentially or structurally advance democracy and peace, in the same way that I think it will structurally advance human health and alleviate poverty,” Mr. Amodei writes. “Human conflict is adversarial and AI can in principle help both the ‘good guys’ and the ‘bad guys’.” So, he says, we must fight to ensure that AI is used for good, not evil.
Mr. Amodei believes that democracies should seek to have the upper hand globally when powerful AI is created. “AI-powered authoritarianism seems too terrible to contemplate, so democracies need to be able to set the terms by which powerful AI is brought into the world, both to avoid being overpowered by authoritarians and to prevent human rights abuses within authoritarian countries.”
He believes an informal coalition of democracies, in close collaboration with private AI companies, should seek a clear advantage in powerful AI by controlling its supply chain, scaling quickly, and blocking authoritarian access to key inputs such as semiconductors and their manufacturing equipment. This has already started.
What had not been published prior to Mr. Amodei’s piece, to my knowledge, is his strategy of leveraging AI benefits as a sweetener to stop autocracies from competing with democracies, and then using the same AI tech to defeat their censorship, which would in turn help democratize their societies.
The democratic coalition “would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy (this would be a bit analogous to ‘Atoms for Peace’),” he writes. “The coalition would aim to gain the support of more and more of the world, isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world: give up competing with democracies in order to receive all the benefits and not fight a superior foe.”
The strategy would produce a world with democracies in the lead economically and militarily, Mr. Amodei writes, so that they can effectively mitigate the risk of being conquered by the autocracies while parlaying their AI superiority into a permanent advantage. Given the immense power of the AI singularity, it would not be much more of an ask to buy democratic reforms in autocracies, as well as an end to their aggressive behavior toward neighbors, even before their censorship is defeated. Autocracies could voluntarily lift their censorship rules if offered the unimaginable humanitarian benefits of the AI singularity.
Mr. Amodei's strategy, or something like it, is an excellent working start, and there is little time to lose. It should be fully supported by the United States and our strongest allies, including in Europe, the United Kingdom, and Japan. The G7 and NATO should at the very least form a joint AI task force to develop and implement the strategy within months rather than years. Remember, five years could be too late for democracy if the autocracies achieve AI singularity first. The major democratic economies that lead in semiconductor manufacturing, including not only the United States and the allies listed above but also Taiwan and South Korea, should be closely involved in leading this effort.
If there is even a small chance that AI is as powerful as some think it will be, then the democracies must get ahead, and quickly, to mitigate the risk of an autocratic singularity. The preservation of our freedoms and way of life hangs in the balance.
Anders Corr has a Ph.D. in Government from Harvard University and is the publisher of the Journal of Political Risk. His most recent book is The Concentration of Power: Institutionalization, Hierarchy & Hegemony (Optimum Publishing International, 2021).