Karman Lucero at Project Syndicate: Central to the Cold War between the United States and the Soviet Union was a rivalry to develop the technologies of the future. First came the race to deploy nuclear weapons on intercontinental missiles. Then came the space race. Then came US President Ronald Reagan’s “Star Wars” program, which seemed to launch a new race to build missile-defense systems. But it soon became clear that the Soviet economy had fallen decisively behind.
Now, a new struggle for technological mastery is underway, this time between the US and China, over artificial intelligence. Both have signaled that they want to manage their competition through dialogue over the development, deployment, and governance of AI. But formal talks on May 14 made it painfully clear that no grand bargain can be expected anytime soon.
That should come as no surprise. The issue is simply too broad – and governments’ perspectives and goals too different – to allow for any single “treaty” or agreement on transnational AI governance. Instead, the potential risks can and should be managed through multiple, targeted bargains and a combination of official and unofficial dialogues.
In It to Win It
China and the US are each fully engaged in policymaking to shape the future of AI, both domestically and internationally. US President Joe Biden’s October 2023 executive order required US government agencies to step up their own use of AI and to update how they regulate its use in their respective sectors. Similarly, China’s central government has repeatedly signaled the importance of AI development, and the Cyberspace Administration of China (CAC) has issued stringent regulations on the use of algorithms, deepfakes, and AI-generated content.
As for shaping AI governance for the rest of the world, the US has already established multiple global partnerships focused on AI governance, and it led the drafting of a UN General Assembly resolution on “safe, secure, and trustworthy artificial intelligence systems for sustainable development.” Similarly, China announced a Global AI Governance Initiative in 2023 and now hosts an annual World AI Conference in Shanghai. With this year’s “Shanghai Declaration,” it unveiled additional plans to shape transnational AI governance. And not to be outdone by the US, China is co-sponsoring a UN resolution titled “Enhancing International Cooperation on Capacity-building of Artificial Intelligence,” which focuses on helping developing countries pursue AI in a “non-discriminatory” environment.
The US and China each recognize the importance not only of engaging in dialogue with each other, but also of being seen by the rest of the world to be doing so. The bilateral talks in May demonstrated that both countries will continue to pay lip service to dialogue despite their obvious rivalry. The US highlighted the importance of developing “safe, secure, and trustworthy” systems, and identified potential instances of abuse by China. The Chinese stated that AI development should be “beneficial, safe, and fair,” highlighted the UN’s role in global AI governance, and objected to US export controls.
But given all the attention that the US and China have devoted to AI governance and dialogue, why are their official statements so lukewarm? More to the point, why is it so hard to tackle real issues and come to an actual, substantive agreement? The answer can be found in each country’s domestic approach to AI governance, and how these domestic contexts affect the international dialogue.
The American Way
China and the US have starkly different views on what “AI governance” means, and on what “AI dialogue” entails and should aim to accomplish. In the US, governance is distributed by sector and generally focuses on addressing specific AI-related harms. This is partly due to normative policy goals like supporting innovation and avoiding excessive regulation; but it also reflects constitutional and practical limits on what the US government can actually do to regulate AI. Hence, Biden’s executive order instructs federal agencies to focus more on AI but does not seek to regulate the technology’s private use.
The administration likely determined that it lacks the authority to issue regulations on the use of AI by private actors. But Congress’s authority to regulate AI faces challenges of its own. A general AI law, like the one the European Union recently adopted, probably would be too broad to get through the House of Representatives and the Senate, and it would surely face legal challenges if it did. The Supreme Court’s decisions in Murthy v. Missouri (2024) and Moody v. NetChoice (2024) lend credence to the idea that code – including algorithm-based content moderation – qualifies as constitutionally protected speech in American jurisprudence, implying that the bar for regulatory intrusion would be quite high.
In practice, most AI governance in the US falls to sector-specific regulators – such as the Food and Drug Administration, with its rules on AI-assisted medical products. One exception is in the national-security context; the US government has broad authority to regulate the use of AI for military purposes, and – arguably – to impose export controls on advanced semiconductors in order to limit China’s ability to develop its own military AI. The White House and the federal government thus participate in multi-stakeholder discussions about AI risks, and influence the practical development of AI by setting policy goals and promoting collaborative, voluntary principles and standards.
More here.