British lawmakers accuse Google DeepMind of ‘breach of trust’ over delayed Gemini 2.5 Pro safety report
By Beatrice Nolan
A group of 60 U.K. lawmakers has signed an open letter accusing Google DeepMind of violating its commitments to AI safety with the release of Gemini 2.5 Pro. The letter, published
At an international summit co-hosted by the U.K. and South Korea in Seoul, Google signed the Frontier AI Safety Commitments, pledging, among other things, to publicly report on its models' capabilities and safety evaluations.
However, when Google released Gemini 2.5 Pro in March 2025, it failed to publish a model card, the document that details key information about how models are tested and built. This was despite the company’s assertions that the new model outperformed competitors on industry benchmarks.
The letter called Google’s delay a “failure to honour” the company’s commitment at the summit and “a troubling breach of trust with governments and the public.” The letter also took issue with what it called a “minimal ‘model card’” that lacked “any substantive detail about external evaluations,” as well as Google’s refusal to confirm whether government agencies like the U.K. AI Security Institute participated in testing.
A spokesperson for Google DeepMind previously told Fortune that any suggestion that the company had reneged on its commitments was “inaccurate.” The company also said in May that a more detailed “technical report” would come later, once the final version of the Gemini 2.5 Pro “model family” was made fully available to the public. The company appeared to provide a longer report in late June, months after the full version was released.
The company did not immediately respond to Fortune’s request for comment about the recent letter. However, as a spokesperson told Time: “We’re fulfilling our public commitments, including the Seoul Frontier AI Safety Commitments.”
“As part of our development process, our models undergo rigorous safety checks, including
When Google first released the preview version of Gemini 2.5 Pro, critics said that the missing system card appeared to violate several other pledges the AI company had made, including the 2023 White House Commitments and a voluntary Code of Conduct on Artificial Intelligence signed in October 2023.
Google wasn’t the only company to sign these pledges and then appear to pull back on safety disclosures. Meta’s model card for its frontier Llama 4 model was about as brief and limited in detail as the one Google released for Gemini 2.5 Pro, and it, too, drew criticism from AI safety researchers.
Earlier this year, OpenAI announced it would not publish a technical safety report for its new GPT-4.1 model. The company argued that GPT-4.1 is “not a frontier model,” since the company’s reasoning-focused systems like o3 and o4-mini outperform it on many benchmarks.
The recent letter calls on Google to reaffirm its commitments to AI safety, asking the tech company to clearly define deployment as the point at which a model becomes publicly accessible, to commit to publishing safety evaluation reports on a set timeline for all future model releases, and to provide full transparency for each release.
“If leading companies like Google treat these commitments as optional, we risk a dangerous race to deploy increasingly powerful AI without proper safeguards,” Lord Browne of Ladyton, a Member of the House of Lords and one of the letter’s signatories, said in a statement.
About the Author
Claire Dubois