
Trump administration establishes new AI partnership agreements with Google, Microsoft, and xAI

The Trump administration announced on Tuesday that it has formed new agreements with major tech firms—Microsoft, Google DeepMind, and Elon Musk’s xAI—to strengthen collaboration on research, artificial intelligence (AI), and security.

As part of this initiative, the Center for AI Standards and Innovation (CAISI), which operates under the National Institute of Standards and Technology within the Department of Commerce, will partner with AI companies to carry out pre-deployment assessments and targeted research focused on advanced AI capabilities. This effort is largely centered around enhancing AI security.

This partnership adds to an earlier collaboration between CAISI and these companies, aiming to promote information sharing, encourage voluntary improvements in products, and offer governments better insights into AI capabilities and the global landscape of AI development.

“Independent and thorough measurement science is crucial for grasping frontier AI and its implications for national security,” stated CAISI Director Chris Fall. “These expanded industry partnerships will enhance initiatives that benefit the public during pivotal times.”

The administration also noted that developers have been supplying CAISI with models whose safeguards have been reduced or removed—access that is critical for evaluating the national security capabilities and risks associated with AI.

Evaluators from various government agencies can participate in assessments and regularly offer feedback through the TRAINS Task Force—an interagency group focused on evaluating national security concerns linked to AI.

CAISI’s contract allows for testing in classified settings and is designed to adapt as AI technology advances.

Natasha Crampton, Chief AI Officer at Microsoft, said the agreement is pivotal for advancing the science of AI testing. It will focus on evaluating Microsoft’s leading models, assessing safety measures, and working collaboratively to mitigate national security and public safety risks.

“Ongoing rigorous testing is vital for fostering trust and confidence in advanced AI systems,” emphasized Crampton.

Crampton elaborated that well-designed tests are essential for understanding whether an AI system operates effectively and meets intended goals. They can also help mitigate risks, such as potential cyber attacks, which could emerge as advanced AI systems become more widespread globally.

In a separate announcement, Microsoft stated that a similar agreement has been reached with the British AI Security Institute (AISI), which oversees AI testing and evaluation.
