A central challenge in human-machine teaming is for semi-autonomous machines to recognize when a situation falls outside their scope of deployment. When it does, the machine must hand off the task to a human teammate to respond.

Supported by a Challenge Grant from the Institute for Assured Autonomy, a team led by Ariel Greenberg, a senior staff scientist and project manager at the Johns Hopkins University Applied Physics Laboratory, and Chien-Ming Huang, John C. Malone Assistant Professor of computer science at the Whiting School of Engineering, is investigating the conceptual and technological advances required to enable artificial agents to act with prosocial intent. Concerned with the design of machines that seek opportunities to help and to prevent harm, the team is examining the sensitivity artificial agents need to discern whether a scene is understood well enough to identify and perform an ethically appropriate intervention without human consultation.

Challenge Grants support interdisciplinary research teams in developing ideas for high-impact projects that address the existential challenges of assured autonomy, positioning them to pursue external funding. Awarded as one of two seedling projects in the first cohort of Challenge Grant recipients, this collaborative effort is a foundational exploration toward establishing a research and development center for artificial agent ethics.

The team also includes Debra Mathews and Travis Rieder of the Johns Hopkins Berman Institute of Bioethics and Tianmin Shu of the Department of Computer Science at the Whiting School of Engineering. Greenberg provides insight into their work by answering the following questions.

1. Can you provide a brief overview of the topic you and your team are addressing through your IAA Challenge Grant? Is there a specific event or reason you decided to pursue this topic?

As autonomous, artificially intelligent systems become more prominent in civilian spaces, their immense promise for good is tempered by concern over the harm they can perpetrate. To ensure that such systems are suitably deployed and operate ethically, we develop methods to endow these machines with the perception, knowledge, and reasoning capabilities needed to interpret socially and physically complex situations for intervention. Should the context or course of action remain unclear to the artificial agent, these same capabilities will enable it to recognize that a situation is beyond its scope, and hand off responsibility to a human partner.
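To make the handoff idea concrete, here is a minimal sketch of confidence-gated intervention. It is purely illustrative and not the team's implementation: the `SceneInterpretation` structure, the `CONFIDENCE_THRESHOLD` value, and the helper functions are all hypothetical stand-ins for the perception, knowledge, and reasoning capabilities described above.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical structure: a real system would derive this from the
# perception, knowledge, and reasoning capabilities described above.
@dataclass
class SceneInterpretation:
    description: str                 # symbolic summary of the scene
    confidence: float                # agent's confidence in its interpretation, 0 to 1
    proposed_action: Optional[str]   # candidate intervention, if one was identified

# Illustrative cutoff; choosing it well is itself a research question.
CONFIDENCE_THRESHOLD = 0.9

def respond(scene: SceneInterpretation) -> str:
    """Intervene only when the scene is clearly understood; otherwise hand off."""
    if scene.proposed_action is None or scene.confidence < CONFIDENCE_THRESHOLD:
        # The situation is beyond the agent's scope: defer to a human teammate.
        return hand_off_to_human(scene)
    return act(scene.proposed_action)

def hand_off_to_human(scene: SceneInterpretation) -> str:
    return f"HANDOFF: human review requested ({scene.description})"

def act(action: str) -> str:
    return f"ACT: {action}"

# A clearly understood hazard versus an ambiguous scene.
print(respond(SceneInterpretation("pot boiling over, no one nearby", 0.97, "turn off burner")))
print(respond(SceneInterpretation("person lying on floor, cause unclear", 0.55, None)))
```

In a deployed system, estimating that confidence at all, and deciding where to set the threshold, is part of the research problem rather than a fixed constant.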

2. What are some of the challenges associated with doing work on this topic, and how is your team addressing these obstacles?

Expert systems and machine learning algorithms both seek to perform human-grade decision making. However, both also assume that the primary challenge machines face is decision making, that is, selecting an appropriate course of action based on existing (and presumably accurate) assessments of the current situation. In practice, arriving at an accurate assessment of the situation is often the greater challenge, as it is fundamentally an act of judgment under uncertainty. By framing AI governance as a principally decision-oriented activity, many methods fail to account for the risk inherent in machine misapprehension, that is, cases in which the system has misperceived or misconstrued the current situation. As a result, it is imperative for AI systems to be able to assess context in a rich, symbolic fashion to determine both what is to be decided and whether to hand off that decision.
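One way to picture this assessment-first framing is a pipeline that separates situation assessment from action selection and refuses to decide when its own assessment is ambiguous. The sketch below is an assumption for illustration, not the project's method; the hypothesis names, probabilities, and `margin` parameter are invented for the example.

```python
def assess() -> dict[str, float]:
    """Return a distribution over candidate situation hypotheses.

    A real assessor would fuse perceptual observations with rich,
    symbolic knowledge; the distribution is hard-coded here only to
    keep the sketch self-contained.
    """
    return {
        "child reaching toward a hot stove": 0.45,
        "child playing near a cold stove": 0.40,
        "no person present": 0.15,
    }

def decide(hypotheses: dict[str, float], margin: float = 0.3) -> str:
    """Select an action only when one hypothesis clearly dominates.

    If the top two hypotheses are close, the scene may be misapprehended,
    so the decision itself is handed off rather than risked.
    """
    ranked = sorted(hypotheses.items(), key=lambda kv: kv[1], reverse=True)
    (best, p_best), (_, p_next) = ranked[0], ranked[1]
    if p_best - p_next < margin:
        return f"HANDOFF: assessment ambiguous ({p_best:.2f} vs {p_next:.2f})"
    return f"ACT on: {best}"

# 0.45 vs 0.40 falls inside the margin, so the agent defers.
print(decide(assess()))
```

The point of the margin check is that misapprehension risk lives in the gap between competing interpretations, so the handoff trigger belongs in the assessment stage, not only in the action selector.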

3. How will your team’s work impact the field of assurance and autonomy? What are your team’s next steps to move your work forward?

By enabling artificial agents to recognize when it is proper to transfer responsibility to humans, we can be more assured that their autonomous interventions will be appropriate, especially in real-world scenarios that often land at the boundary of these machines' deployment scope. We begin with a test case of preventing household accidents and will later extend this work to other civilian and defense applications.