To advance the benefits and safety of the technology behind unmanned vehicles and the array of artificial intelligence (AI) programs automating our devices, offices, homes and community grids, the Johns Hopkins Institute for Assured Autonomy (IAA) has invested in a portfolio of 10 state-of-the-art research projects. These two-year projects — uniting researchers across Johns Hopkins University (JHU) — have been underway since early 2020 and promise to transform the technology sector and society.
Supported by $6.5 million in funding over two years, the research spans a range of practical applications, such as:
- Developing a policy framework for autonomous vehicles
- Developing software for safe traffic management in national airspace
- Assuring safe operations of AI-enabled systems in offices, hospitals and other social spaces
- Assuring privacy and fairness in AI technologies
- Strengthening AI systems against adversarial attacks
Last year, JHU committed $30 million to establish the IAA as a national center of excellence for assured AI and smart autonomous systems, run jointly by the Johns Hopkins Applied Physics Laboratory (APL) and the JHU Whiting School of Engineering (WSE). In addition to propelling advanced research in the field, IAA is forming partnerships with stakeholders across sectors and convening top experts to help assure the autonomous world.
A year ago, IAA selected its first research projects after issuing a call for proposals from across Johns Hopkins. The effort is led by the institute’s research director, David Silberberg, an assistant program manager at APL, working closely with IAA Co-Directors Tony Dahbura (WSE) and Cara LaPointe (APL) and the institute’s extended research team.
Here is a look at the projects:
CREATING A POLICY FRAMEWORK
Assured Autonomous Vehicle Policy
The autonomous vehicle (AV) sector is working aggressively to improve vehicle safety through advances in AI technology and enhanced testing. But successful deployment of AVs is threatened by negative public perceptions and the lack of acceptable public policies for their use. Using simulations of traffic models and policies in a Baltimore community, combined with surveys of public opinion, this project will create a policy framework to ensure that AV technologies are acceptable both for their technological benefits and for their perceived benefit to society. The project will also produce new simulation tools to help inform policymakers and the public of both the positive and the potential negative impacts of AVs.
Johns Hopkins Researchers: Tak Igusa, Joshua Mueller, Jeff Michael, Johnathon Ehsani
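To make the simulation-plus-survey methodology concrete, here is a minimal, hypothetical sketch of a traffic policy simulation of the kind described above: it compares a toy travel-time metric across AV adoption rates. All model details and numbers are illustrative assumptions, not the project’s actual models.

```python
# Toy policy simulation: how does travel time change as AV adoption grows?
# Every modeling choice below (speeds, variances, corridor length) is an
# illustrative assumption, not a result from the IAA project.
import random

def simulate_corridor(av_share: float, n_vehicles: int = 1000) -> float:
    """Return mean travel time (minutes) for one simulated rush hour.

    Assumes AVs hold steadier speeds, modeled as lower variance.
    """
    total = 0.0
    for _ in range(n_vehicles):
        is_av = random.random() < av_share
        speed = random.gauss(30, 2 if is_av else 6)  # mph; assumed model
        speed = max(speed, 5.0)
        total += 10.0 / speed * 60.0  # 10-mile corridor, in minutes
    return total / n_vehicles

for share in (0.0, 0.25, 0.5, 0.75, 1.0):
    times = [simulate_corridor(share) for _ in range(20)]
    print(f"AV share {share:.0%}: mean travel time {sum(times)/len(times):.1f} min")
```

A real policy study would replace this toy model with calibrated traffic models and would pair the outputs with the public opinion surveys described above.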
ENSURING SAFETY IN NATIONAL AIRSPACE
Assuring Autonomous Airspace Operations
Many autonomous unmanned aircraft systems (UASs), including unmanned rotorcraft and fixed-wing aircraft, operate in uncontrolled national airspace below an altitude of 400 feet. Researchers are developing models and simulations of traffic management systems for these UASs, along with algorithms for planning flights, avoiding risks and obstacles, and identifying rogue aircraft. The goal of this research is to evaluate the safety and performance of autonomous traffic-management systems via simulation and to assure the algorithms in use. Key outcomes include a modeling and simulation tool, new technologies for real-time traffic management of autonomous airspace, and recommendations for policy and safety standards.
Johns Hopkins Researchers: Lanier Watkins, Louis Whitcomb
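As one illustrative building block of such traffic-management simulations, the sketch below checks for a predicted loss of separation between two UAS flying constant-velocity legs, using a closest-point-of-approach calculation. The 50-meter separation threshold and the geometry are assumptions for illustration, not values from the project or from regulation.

```python
# Pairwise conflict detection via closest point of approach (CPA).
import numpy as np

def closest_point_of_approach(p1, v1, p2, v2):
    """Return (t_cpa, distance) for two aircraft with positions p and
    constant velocities v (NumPy arrays; meters and meters/second)."""
    dp, dv = p1 - p2, v1 - v2
    dv2 = float(dv @ dv)
    # If relative velocity is ~zero, separation never changes; use t = 0.
    t = 0.0 if dv2 < 1e-9 else max(0.0, -float(dp @ dv) / dv2)
    return t, float(np.linalg.norm(dp + t * dv))

# Two UAS approaching head-on with a small lateral/vertical offset (assumed).
p1, v1 = np.array([0.0, 0.0, 100.0]), np.array([20.0, 0.0, 0.0])
p2, v2 = np.array([1000.0, 30.0, 110.0]), np.array([-20.0, 0.0, 0.0])

t, d = closest_point_of_approach(p1, v1, p2, v2)
if d < 50.0:  # assumed separation minimum, for illustration only
    print(f"Conflict predicted in {t:.1f} s at {d:.1f} m separation")
else:
    print(f"Separation maintained: minimum {d:.1f} m at t = {t:.1f} s")
```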
ADVANCING FAIRNESS, DEVELOPING PRIVACY DEFENSES
Fairness and Privacy Attacks in AI for Healthcare and Automotive Systems
Current AI has met or exceeded human abilities on tasks such as classifying images and recognizing facial expressions or objects, and the performance of medical AI is approaching that of human clinicians on diagnostic tasks. However, two of the most critical concerns impeding AI assurance are privacy, including compliance with the Health Insurance Portability and Accountability Act (HIPAA), and fairness. This project aims to develop algorithms that assure both. Researchers are using techniques that extend generative adversarial networks and other two-player adversarial methods to create representative medical images that improve diagnoses for underrepresented populations. They are also developing methods that protect privacy by removing sensitive information from data sets while maintaining data fidelity.
Johns Hopkins Researchers: Philippe Burlina, Yinzhi Cao
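A minimal sketch of the two-player adversarial idea mentioned above, under assumed toy data and architectures: an encoder learns a diagnostic task while an adversary tries to recover a sensitive attribute from the learned representation, and the encoder is trained to frustrate it. This is a generic adversarial-debiasing pattern, not the project’s actual method.

```python
# Two-player adversarial training: task head vs. sensitive-attribute adversary.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(16, 8), nn.ReLU())
task_head = nn.Linear(8, 2)   # stand-in diagnostic label
adversary = nn.Linear(8, 2)   # stand-in sensitive attribute (e.g., group)

opt_main = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(64, 16)           # stand-in features
y = torch.randint(0, 2, (64,))    # task labels
s = torch.randint(0, 2, (64,))    # sensitive attribute

for step in range(200):
    z = encoder(x)
    # Adversary step: learn to predict the sensitive attribute from z.
    opt_adv.zero_grad()
    ce(adversary(z.detach()), s).backward()
    opt_adv.step()
    # Main step: solve the task while *confusing* the adversary
    # (the 0.5 trade-off weight is an illustrative assumption).
    opt_main.zero_grad()
    loss = ce(task_head(z), y) - 0.5 * ce(adversary(z), s)
    loss.backward()
    opt_main.step()
```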
BUILDING TRUST IN AI SYSTEMS
Identifying Factors to Explain the Behavior of Deep Learning Systems
AI systems that are interpretable by humans, or that can explain their decisions, engender trust in the technology and increase usability. The goal of this research is to develop new methods for explaining AI behavior, exploring three in particular: inferring causal connections between system inputs and decisions, generating rules that explain system decisions, and selecting text summaries that justify model predictions. The proposed work promises to significantly advance the field of explainable AI, ultimately earning human trust.
Johns Hopkins Researchers: Mark Dredze, Anna Buczak
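To illustrate the first of the three methods, probing the influence of inputs on decisions, here is a minimal sketch that perturbs each input feature of a stand-in black-box model and measures the change in its output. The model and features are assumptions for illustration; the project’s methods go well beyond this finite-difference probe.

```python
# Perturbation-based influence estimate for a black-box model.
import numpy as np

def model(x: np.ndarray) -> float:
    """Stand-in black-box scorer (illustrative assumption)."""
    return float(1 / (1 + np.exp(-(2.0 * x[0] - 0.5 * x[2]))))

def feature_influence(x: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Estimate each feature's influence by nudging it and re-scoring."""
    base = model(x)
    scores = np.zeros_like(x)
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] += eps
        scores[i] = (model(x_pert) - base) / eps
    return scores

x = np.array([0.3, 1.2, -0.7])
print(feature_influence(x))  # largest magnitude = most influential feature
```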
BUILDING DEFENSES AGAINST ATTACKS
Physical Domain Adversarial Machine Learning for Visual Object Recognition
In domains such as transportation, medicine, and smart cities and campuses, deep learning (DL) systems can be led to incorrect decisions by image occlusions, whether malicious attacks or accidental obstructions. This vulnerability is a major obstacle to public trust in AI and autonomous systems. The primary focus of this research is to increase the resilience of DL systems against such attacks by identifying system vulnerabilities and deficiencies and improving robustness. It is poised to have far-ranging impact by increasing the reliability of autonomous systems that use DL for sensing and decision-making.
Johns Hopkins Researchers: Alan Yuille, Yinzhi Cao, Philippe Burlina
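The following sketch illustrates the kind of occlusion probing this line of work implies: slide a blacked-out patch across an image and record where it flips a classifier’s decision. The stand-in classifier, the test image, and the patch size and stride are all illustrative assumptions.

```python
# Occlusion sweep: find patch positions that change the model's decision.
import numpy as np

def classify(img: np.ndarray) -> int:
    """Stand-in classifier: reports 'object present' if a bright spot exists."""
    return int(img.max() > 0.5)

def occlusion_sweep(img: np.ndarray, patch: int = 8, stride: int = 8):
    """Yield (row, col) patch positions where occlusion flips the label."""
    base = classify(img)
    h, w = img.shape
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            occluded = img.copy()
            occluded[r:r + patch, c:c + patch] = 0.0  # blacked-out patch
            if classify(occluded) != base:
                yield r, c

img = np.zeros((32, 32))
img[12:16, 12:16] = 1.0  # the single feature the stand-in model keys on
print(list(occlusion_sweep(img)))  # positions where the decision flips
```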
ADVANCING THE STATE OF THE ART IN ASSURED SOFTWARE
Regression Analysis for Autonomy Performance Comparison
This project applies statistical learning techniques to provide assurance about, and improve, the performance of autonomy software. By comparing high-level performance criteria between two versions of the software, engineers can identify where a subsequent version may fail and ensure that more mature releases do not introduce new failure modes. The approach promises to advance the state of the art by delivering an algorithmic framework that reveals whether new versions of autonomous systems suffer unexpected failures or loss of capabilities.
Johns Hopkins Researchers: Paul Stankiewicz, Marin Kobilarov
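A minimal sketch of the version-to-version comparison the project formalizes, on assumed stand-in data: bootstrap the difference in mean performance between two releases and flag a regression when the confidence interval excludes zero. The project’s statistical-learning framework is considerably richer than this.

```python
# Bootstrap test for a performance regression between software versions.
import numpy as np

rng = np.random.default_rng(0)
scores_v1 = rng.normal(0.90, 0.03, size=40)  # stand-in per-run success scores
scores_v2 = rng.normal(0.86, 0.03, size=40)  # newer release, possibly worse

# Resample both score sets and collect the difference in means (v1 - v2).
diffs = [rng.choice(scores_v1, 40).mean() - rng.choice(scores_v2, 40).mean()
         for _ in range(10_000)]
lo, hi = np.percentile(diffs, [2.5, 97.5])

if lo > 0:
    print(f"Likely regression: v2 worse by ~{np.mean(diffs):.3f} "
          f"(95% CI {lo:.3f} to {hi:.3f})")
else:
    print("No significant performance regression detected")
```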
DEVELOPING AUTONOMOUS AGENTS THAT FOLLOW SOCIAL NORMS
Socially Aware Robot Navigation in Human Environments
For mobile robots to navigate safely and follow social norms in human spaces such as offices or hospitals, they need to anticipate how people move and interact. The goal of this research is to develop autonomous agents, such as robots and vehicles, that navigate indoors while respecting social and physical boundaries. Researchers are modeling dynamic social settings, developing hardware and software for more intuitive robot navigation, and deploying the autonomous agents in a test environment. They are also simulating public policy options across a multitude of metrics, including robotic performance, impact on pedestrians and measures of social norms. Successful demonstration of this research promises significant impact, including increased public trust and broader adoption of robotic technologies in everyday life.
Johns Hopkins Researchers: Chien-Ming Huang, I-Jeng Wang
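As a minimal illustration of one ingredient of socially aware navigation, the sketch below scores candidate robot waypoints with a Gaussian "personal space" penalty around predicted pedestrian positions and picks the least intrusive one. The proxemic model and its scale are assumptions for illustration, not the project’s models.

```python
# Proxemic waypoint scoring: penalize positions near predicted pedestrians.
import numpy as np

def social_cost(waypoint, pedestrians, personal_space=1.2):
    """Sum of Gaussian penalties centered on each predicted pedestrian."""
    cost = 0.0
    for p in pedestrians:
        d2 = float(np.sum((waypoint - p) ** 2))
        cost += np.exp(-d2 / (2 * personal_space ** 2))
    return cost

pedestrians = [np.array([2.0, 1.0]), np.array([3.5, -0.5])]
candidates = [np.array([2.2, 0.8]), np.array([4.0, 2.0]),
              np.array([0.5, -2.0])]
best = min(candidates, key=lambda w: social_cost(w, pedestrians))
print("least intrusive waypoint:", best)
```

A real planner would fold this social penalty into a full cost function alongside path length, dynamics and obstacle constraints.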
ASSURING THE SAFETY OF CRITICAL INFRASTRUCTURE SYSTEMS
Runtime Assurance of Distributed Intelligent Control Systems (RADICS)
Researchers have developed a traffic control testbed that simulates a highway network to ensure that AI-based algorithms move traffic efficiently under normal conditions and do not fail under outlier conditions. At the heart of the RADICS approach is a monitor that watches the core AI algorithms for signs of impending system failure; when a breach is anticipated or detected, it switches control to a safe, more traditional controller until the system is back in the clear. The approach would assure this intelligent traffic control system and could be applied to ensure the safety of other AI-controlled critical infrastructure systems.
Johns Hopkins Researchers: Yair Amir, Tamim Sookoor
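The switching logic at the core of RADICS can be pictured with a small sketch: a runtime monitor evaluates a safety predicate and hands control from the AI controller to a conservative fallback when risk is detected. The controllers and the 80-percent-of-capacity rule here are illustrative stand-ins, not the project’s actual logic.

```python
# Runtime assurance by controller switching (simplex-style pattern).
def ai_controller(state):
    """Stand-in for a learned, high-performance policy."""
    return state["demand"] * 1.2

def safe_controller(state):
    """Stand-in for a simple, verified fallback controller."""
    return min(state["demand"], state["capacity"])

def monitor_is_safe(state) -> bool:
    """Safety predicate: stay within a margin of capacity (assumed rule)."""
    return state["demand"] <= 0.8 * state["capacity"]

def control_step(state):
    """Use the AI controller only while the monitor deems the state safe."""
    if monitor_is_safe(state):
        return ai_controller(state)
    return safe_controller(state)  # fall back until conditions clear

print(control_step({"demand": 50.0, "capacity": 100.0}))  # AI in the loop
print(control_step({"demand": 95.0, "capacity": 100.0}))  # safe fallback
```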
REDUCING RISK, INCREASING RELIABILITY OF REAL-WORLD SYSTEMS
Risk-Sensitive Adversarial Learning for Autonomous Systems
Deep reinforcement learning (DRL) is an emerging family of machine-learning techniques that enable systems to learn complex behaviors through interaction with an environment. This project’s goal is to design online learning agents for real-world settings, such as a vehicle navigating among humans or a surgical robot assisting in a medical operation, that are sensitive to human risk considerations and avoid undesirable outcomes. The project creates a novel framework for learning models and combines diverse strategies using a game-theoretic approach that accounts for human risk factors in dynamic, adversarial environments. Success would increase the reliability of autonomous systems in multiple domains, potentially including health care, transportation, smart cities and national security.
Johns Hopkins Researchers: Raman Arora, Ryan Gardner
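One common way to make an agent sensitive to risk is to evaluate policies on the worst tail of their outcome distribution rather than on the average; the sketch below illustrates this with conditional value-at-risk (CVaR) on simulated returns. The return distributions and the alpha level are assumptions for illustration, and the project’s game-theoretic framework is far more involved.

```python
# Risk-sensitive policy comparison via conditional value-at-risk (CVaR).
import numpy as np

def cvar(returns: np.ndarray, alpha: float = 0.1) -> float:
    """Mean of the worst alpha-fraction of returns (lower is riskier)."""
    cutoff = np.quantile(returns, alpha)
    return float(returns[returns <= cutoff].mean())

rng = np.random.default_rng(1)
policy_a = rng.normal(10.0, 1.0, size=5_000)          # steady performer
policy_b = np.where(rng.random(5_000) < 0.05,         # occasional disaster
                    rng.normal(-50.0, 5.0, size=5_000),
                    rng.normal(14.0, 1.0, size=5_000))

for name, r in (("A", policy_a), ("B", policy_b)):
    print(f"policy {name}: mean {r.mean():.1f}, CVaR(0.1) {cvar(r):.1f}")
# B wins on mean return but loses badly on CVaR; the risk-sensitive choice is A.
```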
ADAPTING TO COMPLEXITY AND GUARANTEEING SAFE PERFORMANCE
Verified Assured Learning for Unmanned Embedded Systems (VALUES)
Advances in image recognition and reinforcement learning are changing the way autonomous systems perceive, decide and control, allowing them to learn and adapt to tasks and environments not anticipated by human designers. However, current approaches are vulnerable to small changes in their surroundings and cannot guarantee reliable operation in complex environments. Using an unmanned-vehicle testbed for validation, this project aims to assure autonomous systems by pioneering an architecture for the control, assessment and safety of system decisions. Successful research will enable autonomous systems to guarantee safe performance in increasingly complex situations, using new processes that are adaptable and scalable to modern AI algorithms for autonomy.
Johns Hopkins Researchers: Marin Kobilarov, Aurora Schmidt, Greg Hager
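To give a flavor of what a runtime control-and-safety architecture can enforce, here is a minimal sketch of a one-step safety filter: a learned controller proposes an action, a simple model predicts the next state, and the action is clipped whenever the prediction would leave a verified safe set. The dynamics, bounds and stand-in policy are illustrative assumptions, not the VALUES design.

```python
# One-step safety filter over a learned controller's proposed action.
import numpy as np

SAFE_MIN, SAFE_MAX = -1.0, 1.0  # verified safe position bounds (assumed)
DT = 0.1                        # control timestep (assumed)

def learned_action(state: float) -> float:
    """Stand-in for a learned policy's proposed velocity command."""
    return 5.0 * np.sin(state)  # may be aggressive near the boundary

def assured_action(state: float) -> float:
    """Clip the proposed action so the one-step prediction stays safe."""
    a = learned_action(state)
    predicted = state + DT * a  # simple integrator model (assumed dynamics)
    if predicted > SAFE_MAX:
        a = (SAFE_MAX - state) / DT
    elif predicted < SAFE_MIN:
        a = (SAFE_MIN - state) / DT
    return a

state = 0.9  # close to the safe boundary
print("proposed:", learned_action(state), "assured:", assured_action(state))
```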