China’s ex-UK ambassador clashes with ‘AI godfather’ on panel
![Getty Images Photo illustration of AI apps on a smartphone screen.](https://ichef.bbci.co.uk/news/480/cpsprodpb/d31b/live/a7a0d820-e754-11ef-8435-d1fb9f73202b.jpg.webp)
A former Chinese official poked fun at a major international AI safety report led by "AI godfather" Prof Yoshua Bengio and co-authored by 96 global experts – in front of him.
Fu Ying, a former vice minister of foreign affairs and previously China's ambassador to the UK, is now an academic at Tsinghua University in Beijing.
The pair were speaking at a panel discussion ahead of a two-day global AI summit starting in Paris on Monday.
The aim of the summit is to unite world leaders, tech executives, and academics to examine AI’s impact on society, governance, and the environment.
Fu Ying began by thanking Canada’s Prof Bengio for the “very, very long” document, adding that the Chinese translation stretched to around 400 pages and she hadn’t finished reading it.
She also had a dig at the title of the AI Safety Institute – of which Prof Bengio is a member.
China now has its own equivalent, but it was named the AI Development and Safety Network, she said, because plenty of institutes already exist and the wording emphasised the importance of collaboration.
The AI Action Summit is welcoming guests from 80 countries, with OpenAI chief executive Sam Altman, Microsoft president Brad Smith and Google chief executive Sundar Pichai among the big names in US tech attending.
Elon Musk is not on the guest list, though it is not yet known whether he will decide to attend anyway.
A key focus is regulating AI in an increasingly fractured world. The summit comes weeks after a seismic industry shift as China’s DeepSeek unveiled a powerful, low-cost AI model, challenging US dominance.
The pair's heated exchanges were emblematic of the global political jostling in the race for powerful AI, but Fu Ying also expressed regret about the negative impact of current hostilities between the US and China on the progress of AI safety.
“At a time when the science is going in an upward trajectory, the relationship is falling in the wrong direction and it is affecting unity and collaboration to manage risks,” she said.
“It’s very unfortunate.”
She gave a carefully-crafted glimpse behind the curtain of China’s AI scene, describing an “explosive period” of innovation since the country first published its AI development plan in 2017, five years before ChatGPT became a viral sensation in the west.
She added that “when the pace [of development] is rapid, risky stuff occurs” but did not elaborate on what might have taken place.
“The Chinese move faster [than the west] but it’s full of problems,” she said.
Fu Ying argued that building AI tools on open-source foundations – meaning everyone can see how they work and help to improve them – was the most effective way to make sure the tech did not cause harm.
Most of the US tech giants do not share the tech which drives their products.
![Maria Axente China's former UK ambassador, Fu Ying, on stage at a panel discussion in Paris.](https://ichef.bbci.co.uk/news/480/cpsprodpb/900d/live/5ac41440-e759-11ef-a446-7b6e488c0d01.jpg.webp)
Open source offers humans “better opportunities to detect and solve problems”, she said, adding that “the lack of transparency among the giants makes people nervous”.
But Prof Bengio disagreed.
His view was that open source also left the tech wide open for criminals to misuse.
He did, however, concede that "from a safety point of view" it was easier to spot issues with the viral Chinese AI assistant DeepSeek, which was built on open-source architecture, than with ChatGPT, whose code has not been shared by its creator OpenAI.
On Tuesday it is the turn of world leaders including French president Emmanuel Macron, India’s PM Narendra Modi and US Vice President JD Vance to hold talks at the summit.
Discussions will include how AI will impact the world of work and be used in the public interest, and how to mitigate its risks.
A new $400m partnership between several countries has also been announced, aimed at supporting AI initiatives which serve the public interest, such as healthcare.
In a BBC interview, UK technology secretary Peter Kyle said he thought it would be dangerous for the UK to fall behind in its adoption of the tech.
Dr Laura Gilbert, who advises the government on AI, said she believed the tech was essential for maintaining the NHS because of the efficiencies it promised. "How are you going to fund the NHS without grabbing AI?" she asked.
Matt Clifford, who wrote the UK's AI Action Plan which the government has accepted in full, warned that the tech would be "more radical" than the shift from typewriters to word processors when computers first entered the workplace.
“The industrial revolution was the automation of physical labour; AI is the automation of cognitive labour,” said Marc Warner, the boss of the AI firm Faculty. He added that he did not believe his two-year-old child would “have a job as we know them today.”