
Anthropic, OpenAI Sign Deals on AI Safety with U.S. Agency

Key Takeaways

  • Anthropic and OpenAI have signed agreements with the U.S. agency overseeing artificial intelligence safety to work together to mitigate risks from the new technology.
  • The U.S. Artificial Intelligence Safety Institute said the move will help to advance safe and trustworthy AI innovation.
  • Officials called the agreements the first of their kind between the U.S. government and industry aimed at establishing AI safety.

Artificial intelligence (AI) startups Anthropic and OpenAI have signed agreements with the federal agency overseeing AI safety to “enable formal collaboration on AI safety research, testing and evaluation.”

The announcement Thursday from the U.S. Artificial Intelligence Safety Institute at the Department of Commerce’s National Institute of Standards and Technology (NIST) called these the first-of-their-kind collaborations regarding AI between the U.S. government and industry to “help advance safe and trustworthy AI innovation for all.”

The statement said the memorandums of understanding will give the AI Safety Institute access to major new models from the two companies before and after their public release. It noted that this access will allow for “collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks.”

Teaming With U.K. Counterpart To Give Feedback

In addition, the AI Safety Institute expects to work with the U.K. AI Safety Institute to provide feedback to the two companies on how they can improve the safety of their models.

U.S. AI Safety Institute Director Elizabeth Kelly called the agreements with Anthropic and OpenAI just the beginning, but said they are “an important milestone as we work to help responsibly steward the future of AI.”

Anthropic co-founder Jack Clark wrote on the social media platform X that the company is looking forward to teaming up with the AI Safety Institute, adding that “third-party testing is a really important part of the AI ecosystem and it’s been amazing to see governments stand up safety institutes to facilitate this.”
