Google is using Anthropic’s Claude to improve its Gemini AI

Contractors working to improve Google’s Gemini AI are comparing its answers against outputs produced by Anthropic’s competitor model Claude, according to internal correspondence seen by TechCrunch. 

When reached for comment by TechCrunch, Google would not say whether it had obtained Anthropic’s permission to use Claude in testing against Gemini.

The contractors recently began noticing references to Anthropic’s Claude appearing in the internal Google platform they use to compare Gemini to other unnamed AI models, the correspondence showed. At least one of the outputs presented to Gemini contractors, seen by TechCrunch, explicitly stated: “I am Claude, created by Anthropic.”

One internal chat showed contractors noting that Claude’s responses appeared to emphasize safety more than Gemini’s. “Claude’s safety settings are the strictest” among AI models, one contractor wrote. In certain cases, Claude wouldn’t respond to prompts it considered unsafe, such as role-playing a different AI assistant. In another case, Claude avoided answering a prompt altogether, while Gemini’s response was flagged as a “huge safety violation” for including “nudity and bondage.”

Anthropic’s commercial terms of service forbid customers from accessing Claude “to build a competing product or service” or “train competing AI models” without approval from Anthropic. Google is a major investor in Anthropic. 

Shira McNamara, a spokesperson for Google DeepMind, which runs Gemini, would not say when asked by TechCrunch whether Google has obtained Anthropic’s approval to access Claude. An Anthropic spokesperson, reached prior to publication, had not commented by press time.

McNamara said that DeepMind does “compare model outputs” for evaluations but that it doesn’t train Gemini on Anthropic models.

“Of course, in line with standard industry practice, in some cases we compare model outputs as part of our evaluation process,” McNamara said. “However, any suggestion that we have used Anthropic models to train Gemini is inaccurate.”

Last week, TechCrunch exclusively reported that Google contractors working on the company’s AI products are now being made to rate Gemini’s responses in areas outside their expertise. In internal correspondence, contractors expressed concerns that Gemini could generate inaccurate information on highly sensitive topics like healthcare.

You can send tips securely to this reporter on Signal at +1 628-282-2811.
