Experts in AI, the humanities, and ethics gathered at Johns Hopkins for the Ethical & Responsible AI University Collaborative to explore how academic institutions can guide AI development

The Johns Hopkins Institute for Assured Autonomy and the Berman Institute of Bioethics welcomed colleagues from academic institutions across the country for the Ethical & Responsible AI University Collaborative meeting held on February 19 at the Hopkins Bloomberg Center in Washington, D.C.

More than 20 faculty and staff members whose research and work are at the intersection of artificial intelligence and ethics gathered for panels and discussions. Chandra Bhat, Joe J. King Endowed Chair Professor in Engineering and University Distinguished Teaching Professor at the University of Texas at Austin, delivered opening remarks.

“The Collaborative serves as a community of practice that brings values-aligned researchers together to foster interdisciplinary learning in ethical AI research and teaching,” said Bhat, a founding member of Good Systems, a UT Austin initiative dedicated to defining, evaluating, and building ethical AI systems. “It is based on the fundamental premise that universities, when we work collectively, can do much more to expand the range of resources available for AI application to a variety of use cases and, by doing so, can harness the full potential of AI technologies for the benefit of society.”

The first panel discussion, moderated by Veljko Dubljevic, professor of philosophy at North Carolina State University, focused on teaching ethical AI to students. Panelists noted that students want AI to function effectively, and that faculty must be intentional when designing projects and assignments to help students develop critical assessment skills. Matthew Stone, professor of computer science at Rutgers University, shared that he challenges students to approach AI technology as researchers.

The second panel, led by Lauren Goodlad, Distinguished Professor of English and Comparative Literature at Rutgers University, focused on facilitating interdisciplinary collaboration. She noted that she reminds students that, in the future, “the most important jobs will go to people who know and understand what AI cannot do.”

Sherri Greenberg, Assistant Dean for State and Local Government Engagement at UT Austin’s LBJ School of Public Affairs, moderated the third and final panel discussion, which highlighted real-world applications of AI policy. Shyam Sundar, Evan Pugh University Professor and James P. Jimirro Professor of Media Effects at Penn State University, who trained as a psychologist and served on the panel, said that people are “much more trusting of technology than it deserves.” Participants discussed the importance of AI regulation and governance, how AI policy is shaped, and how it is applied.

After the morning session, two attendees gave lightning talks on AI-related initiatives at their respective institutions. Lauren Tilton, professor of digital humanities and E. Claiborne Robins Professor of Liberal Arts at the University of Richmond, discussed the university’s Center for Liberal Arts and AI, which aims to “foster analysis of interpretability and access to AI in the liberal arts.” The center is a collaboration involving 15 institutions across the southern United States, allowing the participating universities to share expertise and resources.

In addition to attending this annual in-person event, members of the collaborative also work together throughout the year to discuss challenges and successes. They are navigating these discussions against a backdrop of increasing adoption of AI and loosening regulations, says Debra Mathews, associate director for research and programs in the Berman Institute, IAA’s ethics and governance lead, and professor of genetic medicine at the Johns Hopkins School of Medicine.

“In the absence of federal guardrails, academics, institutions, and professional societies must step up to help maximize the benefits and minimize the harms of this transformational technology within our spheres of influence. Our consortium is an important venue not only for information sharing but also for collective action, to help ensure that as new AI-enabled technologies are developed and deployed, they protect and advance, rather than frustrate, the needs and interests of the public,” she said.