Anthropic, an artificial intelligence startup backed by Google and Amazon, on Monday launched Claude 3, its latest family of AI models, which it claims are its fastest and most powerful yet.
The company says Claude 3 Opus, the most intelligent of the three new models, outperforms OpenAI’s GPT-4 and Google’s Gemini Ultra on industry benchmark tests, including expert-level knowledge, graduate-level reasoning, and basic math.
“It exhibits near-human levels of comprehension and fluency on complex tasks, leading the frontier of general intelligence,” Anthropic said in a statement.
Launched by OpenAI last spring, GPT-4 remains one of the most powerful chatbot technologies adopted by both consumers and enterprises.
Anthropic users will be able to input charts, photos, documents, and other types of unstructured data for analysis, and the chatbot will respond with text. Companies such as Airtable and Asana helped A/B test the models, the company told CNBC.
Claude 3 can summarize up to about 150,000 words of content such as notes, letters, and stories. By comparison, ChatGPT can process about 3,000 words.
Anthropic’s new AI suite also includes Sonnet and Haiku, faster and more cost-effective alternatives to Opus. Sonnet and Opus are already accessible in 159 countries, and Haiku will be available soon, Anthropic said.

Former OpenAI research executives founded Anthropic with a mission to create AI that is “helpful, harmless, and honest.”
The startup, backed by tech giants like Google, Salesforce and Amazon, received $7.3 billion in funding last year alone.
Anthropic’s launch comes just after Google suspended its AI-powered image generation technology, which was criticized for creating offensive and historically inaccurate images, including racially diverse depictions of the Founding Fathers, Vikings, and even Nazi-era German soldiers.
“Of course, no model is perfect, and I think that’s a really important thing to say at the outset,” Anthropic co-founder Daniela Amodei told CNBC. “We’ve worked very hard to make these models as functional and safe as possible. Of course there are places where the models still make things up from time to time.”
