Chinese and western scientists identify ‘red lines’ on AI risks

Leading western and Chinese artificial intelligence scientists have issued a stark warning that tackling risks around the powerful technology requires global co-operation similar to the cold war effort to avoid nuclear conflict.

A group of renowned international experts met in Beijing last week, where they identified “red lines” on the development of AI, including on the creation of bioweapons and the launching of cyber attacks.

In a statement seen by the Financial Times, issued in the days after the meeting, the academics warned that a joint approach to AI safety was needed to stop “catastrophic or even existential risks to humanity within our lifetimes”.

“In the depths of the cold war, international scientific and governmental co-ordination helped avert thermonuclear catastrophe. Humanity again needs to co-ordinate to avert a catastrophe that could arise from unprecedented technology,” the statement said.

Signatories include Geoffrey Hinton and Yoshua Bengio, who won a Turing Award for their work on neural networks and are often described as “godfathers” of AI; Stuart Russell, a professor of computer science at the University of California, Berkeley; and Andrew Yao, one of China’s most prominent computer scientists.

The statement followed the International Dialogue on AI Safety in Beijing last week, a meeting that included officials from the Chinese government in a signal of tacit official endorsement for the forum and its outcomes.

The gathering is part of a broader push by the academic community for tech companies and governments to collaborate on AI safety, in particular by bringing together the world’s two technology superpowers, China and the US.

US President Joe Biden and his Chinese counterpart Xi Jinping met in November, where they discussed AI safety and agreed to establish a dialogue on the issue. Leading AI companies around the world have also met Chinese AI experts behind closed doors in recent months.

In November, 28 nations, including China, and leading AI companies agreed broad commitments to work together to tackle the existential risks stemming from advanced AI during UK Prime Minister Rishi Sunak’s AI safety summit.

In Beijing last week, experts discussed threats related to the development of “artificial general intelligence”, or AI systems that match or surpass human capabilities.

“A core focus of the discussion were the red lines that no powerful AI system should cross and that governments around the world should impose in the development and deployment of AI,” said Bengio.

These red lines relate to increasingly autonomous systems, with the statement saying that “no AI system should be able to copy or improve itself without explicit human approval and assistance” or “take actions to unduly increase its power and influence”.

The scientists added that no systems should “substantially increase the ability of actors to design weapons of mass destruction, violate the biological or chemical weapons convention” or be able to “autonomously execute cyber attacks resulting in serious financial losses or equivalent harm”.

Additional reporting by Madhumita Murgia in London
