Anthropic claims its new model is too risky for public use, unlike these Big Tech companies.

Anthropic Raises Concerns About AI Models

Anthropic has warned that its artificial intelligence models are capable of undermining years of diligent security research.

The company operates Claude, an AI chatbot that has reportedly been compromised and repackaged as a free public model. To strengthen its security protocols ahead of the rollout, Anthropic is seeking collaboration with a consortium of tech firms.

“Vulnerabilities are being discovered and, in some cases, exploits are being crafted.”

According to Anthropic, its new Claude model, Mythos, will be available exclusively to 40 specially chosen companies, which the firm believes will use it for beneficial purposes.

Logan Graham, who leads Anthropic’s vulnerability testing team, remarked that this is “a starting point for predicting what we think will be a transformational point in the industry, or what should happen now.” What exactly that transformation will look like remains unclear, but it signals big changes ahead.

The company is concerned enough about the new model’s ability to uncover cybersecurity weaknesses that it plans to distribute it only to entities it considers reliable, and competent enough to defend against potential threats, once Mythos is publicly released.

“This model is well-understood by security researchers and is excellent at identifying recognizable vulnerabilities,” Graham noted. “At the same time, we’ve uncovered vulnerabilities and, in some instances, exploits that were so tailored they had been missed by literally decades of security researchers and all the automated tools meant to find vulnerabilities.”

Anthropic has reportedly pledged up to $100 million in credits for the initiative, roughly what such extensive chatbot usage would typically cost.

The effort, dubbed “Project Glasswing,” will grant Mythos access to a select circle of companies, primarily from Big Tech—think Amazon, Apple, Google, and Microsoft. Additionally, leading cybersecurity firms like Broadcom, Cisco, CrowdStrike, Nvidia, along with the financial powerhouse JPMorgan Chase and the nonprofit Linux Foundation, are also included.

This isn’t the first time an AI company has cautioned that its product may be too perilous for the public, and past episodes invite skepticism about whether Claude truly poses the dangers its developers suggest.

In 2019, OpenAI raised similar alarms about GPT-2 ahead of its release, claiming the model could enable mass production of misleading content and propaganda, and at the time deemed it too dangerous for public launch.

Although Claude isn’t quite a household name yet, having drawn attention over the past year mostly for perceived flubs and leaks, it has become increasingly relied upon in the tech sector for building software, apps, and companies.

Beyond its models being publicly accessible and free, Anthropic also has a history of “accidentally” revealing its own code.

It has been reported that Anthropic “inadvertently uploaded files meant to assist developers in understanding the product to a public repository” and also “released some of Claude’s source code,” as explained by journalist Aaron Holmes.

Further sensitive information was reportedly released in another errant post, revealing “internal source code.” The company appears prepared for a continuous marketing struggle, whether intentional or not: it has begun labeling itself “fraudulent” and has launched significant legal action against federal authorities, likely stemming from criticism over its appointment of an individual closely associated with the Effective Altruism movement to oversee AI’s “constitution.”
