China this week declined to sign an international “blueprint,” agreed to by some 60 nations including the U.S., that seeks to establish guardrails for the military use of artificial intelligence (AI).
More than 90 nations attended the Responsible Artificial Intelligence in the Military Domain (REAIM) summit hosted in South Korea on Monday and Tuesday, though roughly a third of the attendees did not support the nonbinding proposal.
AI expert Arthur Herman, senior fellow and director of the Quantum Alliance Initiative at the Hudson Institute, told Digital that the decision by some 30 nations to opt out of this important step in the race to develop AI is not necessarily cause for concern, though in Beijing’s case the refusal likely stems from its general opposition to signing multilateral agreements it did not shape.
“What it boils down to … is China is always wary of any kind of international agreement in which it has not been the architect or involved in creating and organizing how that agreement is going to be shaped and implemented,” he said. “I think the Chinese see all of these efforts, all of these multilateral endeavors, as ways in which to try and constrain and limit China’s ability to use AI to enhance its military edge.”
Herman explained that the summit, and the blueprint agreed to by some five dozen nations, is an attempt to safeguard the expanding technology surrounding AI by ensuring there is always “human control” over the systems in place, particularly as it relates to military and defense matters.
“The algorithms that drive defense systems and weapons systems depend a lot on how fast they can go,” he said. “[They] move quickly to gather information and data that you then can speed back to command and control so they can then make the decision.
“The speed with which AI moves … that’s hugely important on the battlefield,” he added. “If the decision that the AI-driven system is making involves taking a human life, then you want it to be one in which it’s a human being that makes the final call about a decision of that sort.”
Nations leading in AI development, like the U.S., have said maintaining a human element in serious battlefield decisions is hugely important to avoid mistaken casualties and prevent a machine-driven conflict.
The summit, co-hosted by the Netherlands, Singapore, Kenya and the United Kingdom, was the second of its kind; more than 60 nations attended the first meeting, held last year in the Dutch capital.
It remains unclear why China, along with some 30 other countries, opted not to agree to the building blocks that look to set up AI safeguards, particularly after Beijing backed a similar “call to action” during the summit last year.
When pressed for details of the summit during a Wednesday press conference, Chinese Foreign Ministry spokesperson Mao Ning said that upon invitation, China sent a delegation to the summit where it “elaborated on China’s principles of AI governance.”
Mao pointed to the “Global Initiative for AI Governance” put forward by Chinese President Xi Jinping in October that she said “gives a systemic view on China’s governance propositions.”
The spokesperson did not say why China did not back the nonbinding blueprint introduced during the REAIM summit this week but added that “China will remain open and constructive in working with other parties and deliver more tangibly for humanity through AI development.”
Herman warned that while the U.S. and its allies will continue to pursue multilateral agreements to safeguard military uses of AI, such agreements are unlikely to deter adversarial nations like China, Russia and Iran from developing malign technologies.
“When you’re talking about nuclear proliferation or missile technology, the best restraint is deterrence,” the AI expert explained. “You force those who are determined to push ahead with the use of AI – even to the point of basically using AI as kind of [an] automatic kill mechanism, because they see it in their interest to do so – the way in which you constrain them is by making it clear, if you develop weapons like that, we can use them against you in the same way.
“You don’t count on their sense of altruism or high ethical standards to restrain them, that’s not how that works,” Herman added.
Reuters contributed to this report.