
In March 2023, thousands of tech leaders, Elon Musk among them, signed an open letter asking artificial intelligence (AI) labs to stop developing next-generation training systems for at least six months. There is a precedent for such temporary pauses in other fields of research: In 2019, for example, scientists successfully called for a moratorium on any human gene editing that would pass along heritable DNA to genetically modified children.

While a pause in the field of AI is unlikely to happen, the letter at least signals that the United States is finally starting to recognize the importance of regulating AI systems. The reasons that a pause in AI won’t happen are manifold, and they are about more than just the research itself. Critics of the proposed pause argue that regulating or restricting AI would help China pull ahead in AI development, causing the United States to lose its military and economic edge.

To be sure, the United States must keep its citizens secure. But failing to regulate AI, or to coordinate with China in cases where doing so is in the United States’ interest, would also endanger US citizens. History shows us that this worry is more than just theoretical.

John F. Kennedy invented the “missile gap” narrative to make President Dwight D. Eisenhower seem weak on defense, claiming that the Soviet Union was overtaking the United States in nuclear missile deployment. Kennedy’s rhetoric may have helped him politically, but it also hindered cooperation with the Soviet leadership.

China, which is actually regulating AI much more tightly than the United States or even the European Union and is likely to be hamstrung by US semiconductor export controls in the coming years, is far behind the United States in AI development.

Historically, arms races are often driven more by domestic economics and politics than by rational responses to external threats. Much like the Cold War nuclear arms race, today’s US-China AI competition is heavily influenced by domestic forces such as private interest groups, bureaucratic infighting, electoral politics, and public opinion. By better understanding these domestic forces, policy makers in the United States can minimize the risks faced by the United States, China, and the world.

In the US-China AI competition, companies developing AI systems and promoting their own interests might lobby against domestic or international AI regulation. In 2001, for example, the United States rejected a protocol to strengthen the Biological Weapons Convention, in part because of pressure from the US chemical and pharmaceutical industries, which wanted to limit inspections of their facilities.

US AI companies appear to be aware of the risks posed by their products. OpenAI’s stated mission is “to ensure that artificial general intelligence benefits all of humanity.” DeepMind’s operating principles include a commitment to “act as responsible pioneers in the field of AI.” DeepMind’s founders have pledged not to work on lethal AI, and Google’s AI Principles state that Google will not deploy or design AI for weapons intended to injure humans, or for surveillance that violates international norms.

However, there are already worrisome signs that commercial competition may undermine these commitments. Google, fearing that OpenAI’s ChatGPT could replace its search engine, told employees it would “recalibrate” the amount of risk it is prepared to accept when deploying new AI systems. While not strictly relevant to international agreements, this move suggests that tech companies are willing to compromise on AI safety in response to commercial incentives.

Another potentially concerning development is the creation of links between AI startups and big tech companies. OpenAI partnered with Microsoft in January 2023, and Google acquired DeepMind in 2014. Acquisition and partnership may limit the ability of AI startups to act in ways that lower risk; DeepMind and Google, for example, have clashed over the governance of DeepMind projects since their merger.

The big tech companies are experienced lobbyists: Amazon spent $21.4 million on lobbying in 2022, making it the 6th-largest spender; Meta (the parent company of Facebook, Instagram, and WhatsApp) came in 10th with $19.2 million; and Alphabet (the parent company of Google) was 19th with $13.2 million.
