Johns Hopkins University is joining more than 200 artificial intelligence stakeholders in a new U.S. Department of Commerce initiative to develop and deploy trustworthy and safe AI. Led by the National Institute of Standards and Technology (NIST), the consortium brings together AI creators and users, academics, government and industry researchers, civil society organizations, and the nation's largest companies and most innovative startups to lay the foundations for ensuring AI safety. The Johns Hopkins Institute for Assured Autonomy (IAA), run jointly by the Whiting School of Engineering and the Johns Hopkins Applied Physics Laboratory, is leading JHU's involvement.

Housed under the U.S. AI Safety Institute, the consortium will contribute to priorities outlined in the Biden Administration’s Executive Order on AI, including developing guidelines and recommendations for red teaming, risk management, safety and security, and capability evaluations.

“As leaders in the area of safety and assurance of emerging technologies ranging from medical devices and self-driving cars to transportation systems, our researchers understand both AI’s tremendous potential and its risks,” explains Jim Bellingham, executive director of the IAA. “This new consortium is a wonderful and exciting opportunity for us to bring together diverse institutes, centers, and laboratories within Johns Hopkins to support the goal of assuring AI is developed and used in ways that support a safer, more prosperous, and more equitable society.”

In its announcement of the initiative, NIST said the consortium will focus on establishing the foundations of a new measurement science for AI safety. As consortium members, JHU faculty and researchers will help NIST develop guidelines related to AI policy, risk management, and safety and security.