GAITHERSBURG, Md. — The U.S. Artificial Intelligence Safety Institute, part of the Department of Commerce’s National Institute of Standards and Technology (NIST), announced today that it has entered into agreements with Anthropic and OpenAI to formally collaborate on AI safety research, testing, and evaluation.

These Memoranda of Understanding will allow the U.S. AI Safety Institute to access and evaluate new AI models from both companies before and after their public release. This collaboration will focus on assessing capabilities and safety risks, as well as developing strategies to mitigate those risks.

“Safety is crucial for driving technological innovation,” said Elizabeth Kelly, director of the U.S. AI Safety Institute. “We are excited to begin our technical collaborations with Anthropic and OpenAI to advance AI safety science. These agreements mark an important milestone in our efforts to responsibly guide the future of AI.”

In addition, the U.S. AI Safety Institute will provide feedback to Anthropic and OpenAI on potential safety improvements to their models, working closely with its partners at the U.K. AI Safety Institute.

Building on NIST’s 120-year history of advancing measurement science, technology, and standards, these evaluations will deepen NIST’s work on AI by fostering collaboration and research on advanced AI systems across various risk areas.

The evaluations under these agreements will contribute to the safe, secure, and trustworthy development and use of AI, aligning with the Biden-Harris administration’s Executive Order on AI and the voluntary commitments made by leading AI model developers.

About the U.S. AI Safety Institute

The U.S. AI Safety Institute, housed within NIST at the U.S. Department of Commerce, was established in response to the Biden-Harris administration’s 2023 Executive Order on AI. The institute is dedicated to advancing the science of AI safety and addressing the risks posed by advanced AI systems by developing the testing, evaluations, and guidelines needed to promote safe AI innovation in the U.S. and around the world.

Source: www.nist.gov
