An international, multidisciplinary team led by Johns Hopkins IAA faculty member Gregory Falco is proposing an approach to earning public trust in highly automated systems such as self-driving vehicles and medical diagnostic systems: independent audits to address the assurance challenges highlighted by recent high-profile crashes and incidents.

In their article “Governing AI safety through independent audits,” published this week in the journal Nature Machine Intelligence, the team of 20 authors recommends independent audits as a pragmatic governance approach to what they see as an otherwise burdensome and unenforceable assurance challenge for highly automated systems. Among Falco’s co-authors are two fellow IAA team members, the Institute’s Co-Directors Anton Dahbura and Cara LaPointe.

The authors write, “As proposed, independent audit of AI systems would embody three ‘AAA’ governance principles of prospective risk Assessments, operation Audit trails and system Adherence to jurisdictional requirements.” Read their Perspective article in Nature Machine Intelligence.