Assurance and explainability of unmanned vehicle operations
PIs: Tinoosh Mohsenin (PI, JHU/WSE), Anna L. Buczak (PI, JHU/APL AMDS), Josh Squires (Co-PI, JHU/APL AMDS), Dr. Ben Baugher (Co-PI, JHU/APL AOS)

As autonomous systems take on greater roles and responsibilities in our world, questions about the safety and correctness of their decisions become more pressing. Unfortunately, the same procedure — the distillation of vast pools of data and simulated experiences into a deep neural network — that gives state-of-the-art autonomous systems a significant competitive advantage also makes their reasoning inscrutable, even to human experts. An autonomous agent cannot be assured if its decision-making process cannot be explained. In this proposal we outline a vision for assured autonomy through a novel mixture of explainable artificial intelligence, deep reinforcement learning, and human-guided feedback. Key to our approach is the idea that the decisions of deep neural networks may be converted into interpretable, fuzzy rule sets that can be analyzed for indicators of correct, reliable, and safe operation.
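The rule-extraction idea above can be illustrated with a minimal sketch: sample a black-box policy over its input space and score candidate fuzzy rules by how well each agrees with the policy's decisions. Everything here (the `policy` function, the membership functions, the variable names) is an illustrative stand-in, not the proposal's actual agent or method.

```python
# Hypothetical sketch: approximating a black-box policy's decisions with
# human-readable fuzzy rules. `policy` stands in for a trained network.

def policy(speed, obstacle_dist):
    """Stand-in for a trained network: returns 1 (brake) or 0 (cruise)."""
    return 1 if obstacle_dist < 0.5 * speed + 2.0 else 0

def tri(x, a, b, c):
    """Triangular fuzzy membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

SPEED = {"slow": (0, 0, 15), "fast": (10, 30, 30)}
DIST = {"near": (0, 0, 10), "far": (5, 30, 30)}

# Score each candidate rule "IF speed is S AND dist is D THEN brake"
# by its agreement with the black-box policy over sampled states.
samples = [(s, d) for s in range(0, 31, 2) for d in range(0, 31, 2)]
rules = []
for s_name, s_mf in SPEED.items():
    for d_name, d_mf in DIST.items():
        num = den = 0.0
        for s, d in samples:
            w = min(tri(s, *s_mf), tri(d, *d_mf))  # rule firing strength
            num += w * policy(s, d)
            den += w
        rules.append((s_name, d_name, num / den if den else 0.0))

for s_name, d_name, conf in rules:
    print(f"IF speed is {s_name} AND obstacle is {d_name} "
          f"THEN brake (confidence {conf:.2f})")
```

A high-confidence rule (e.g., "fast AND near implies brake") is exactly the kind of interpretable artifact an analyst can inspect for safe behavior; a rule whose confidence contradicts domain expectations flags a policy worth auditing.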

Real-World Deployment of Artificial Intelligence to Transform Healthcare Outcomes, Efficiency, Safety, and Value
PIs (OR/SDS focus): Richard Day (WSE, SOM, JHM), Jeffrey Jopling (SOM, JHM), Alan Ravitz (APL, WSE), Peter Najjar (SOM, JHM), Swaroop Vedula (WSE, SOM)
PIs (ICU/Ventilator focus): Kimia Ghobadi (WSE), Michael McShea (APL), Jim Fackler (SOM), Russell Taylor (WSE), Anton Dahbura (WSE), Jan Rizzuto (APL), Steven Griffiths (APL), Scott Swetz (APL), Aurora Schmidt (APL), Antwan Clark (WSE), Khalid Halba (WSE), Israel Gannot (SOM)

Johns Hopkins is on the path to becoming a global leader in the large-scale application of artificial intelligence (AI) in healthcare. Assured autonomy is key. Our current focus is operating rooms and intensive care units, where the most acute, complex, task-dense, and high-risk care is delivered. Clinical teams must integrate enormous amounts of data and make thousands of decisions in these high-tempo, operational environments. These environments are ripe for human-machine collaboration: assured autonomous functions and ambient intelligence can reinforce evidence-based clinical best practices and cognitively offload human experts, augmenting human capabilities, enabling optimal patient outcomes, and eliminating preventable complications, up to and including mortality. Hospitals will improve patient care and outcomes while consuming fewer resources through improvements in communication, supply management, workflow efficiency, and team performance. AI will enhance the training, performance, and satisfaction of staff, addressing the healthcare staffing crisis. The result will be higher quality at lower cost, equating to greater value in healthcare.

Artificial Agent Ethics
PIs: Ariel Greenberg (PI, APL), Chien-Ming Huang (PI, WSE), Debra Mathews (Co-PI, JHU), Travis Rieder (Co-PI, JHU)

Our team will produce a framework to guide the treatment of ethical concerns around artificial agents, as a pathway to the establishment of a Center for Artificial Agent Ethics. This framework will be honed through the development of a research roadmap that begins with the most pressing challenges, namely creating artificial agents that can:

  1. recognize and reason about potential harms (non-maleficence),
  2. act on duties to protect wellbeing (beneficence),
  3. determine when they are unable to resolve a dilemma, and how to escalate these matters for human attention (responsibility),
  4. do so in a way that ensures their actions are legible to audit and available for adjustment (transparency), so that these agents are licensable for release (trust).

The lessons learned in this initial phase of investigation will then be brought to bear on the remaining ethical principles to complete the framework.

Assured Reinforcement Learning Algorithms for Critical Infrastructure Systems
PIs: Enrique Mallada (PI, JHU), Mahyar Fazlyab (JHU), Yury Dvorkin (JHU), Yair Amir (JHU), Tamim Sookoor (Co-PI, APL), Joseph Maurio (Co-PI, APL), Ryan Silva (Co-PI, APL), and Jared Markowitz (Co-PI, APL)

The last decade has witnessed a resurgence of Reinforcement Learning (RL) as a core enabler of Artificial Intelligence (AI). The prominence of Deep RL today is pervasive and impressive, with notable successes in Jeopardy!, Atari, Go, StarCraft II, and even poker. However, this striking success has been overwhelmingly limited to virtual domains, where failing at a task carries little consequence, system integrity can be easily guaranteed, and data and processing are co-located and practically unbounded.

Safety-critical infrastructures differ starkly from the idealized environments in which RL has proven successful. In a safety-critical system, humans often interact with the AI system, data and time are limited, and small mistakes can have catastrophic consequences. Moreover, the distributed nature of these systems requires extensive communications, making RL agents vulnerable to cyber-attacks. The goal of this proposal is to build a research program aimed at overcoming the foundational limitations that prevent the use of deep reinforcement learning in safety-critical infrastructure systems. To achieve this goal, the proposed research will build on foundational theory and deep domain expertise from JHU faculty and APL researchers.

The program is organized around four foundational research pillars. Constraint Learning aims to improve the safety of Deep RL algorithms by imposing hard constraints. Rare Event Learning seeks to develop sampling strategies that capture critical scenarios. Verification and Domain Adaptation aims to develop methods for verifying the safety of trained policies and their robustness to domain changes. Finally, Security Assurance focuses on subsystem anomaly detection and protection against data poisoning. The program will further focus on two broad application domains: Energy Infrastructure Systems and Safety and Defense Systems.
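One simple way to impose a hard constraint on a learned policy, in the spirit of the Constraint Learning pillar, is a runtime shield: sample actions from the (unconstrained) policy but accept only those that provably keep the system inside a safe set, falling back to a known-safe action otherwise. The environment, policy, and constraint below are illustrative stand-ins, not the proposal's algorithms.

```python
# Hypothetical sketch: shielding an RL policy with a hard safety constraint.

import random

def is_safe(state, action):
    """Hard constraint: the next state must stay within bounds [0, 10]."""
    return 0.0 <= state + action <= 10.0

def unconstrained_policy(state):
    """Stand-in for a learned policy: proposes a random step."""
    return random.uniform(-3.0, 3.0)

def shielded_policy(state, fallback=0.0, tries=20):
    """Sample from the learned policy, but accept only safe actions."""
    for _ in range(tries):
        a = unconstrained_policy(state)
        if is_safe(state, a):
            return a
    return fallback  # provably safe no-op when sampling fails

# Roll out an episode; by induction, the state never leaves [0, 10].
random.seed(0)
state = 5.0
trajectory = [state]
for _ in range(50):
    state += shielded_policy(state)
    trajectory.append(state)
```

The design choice here is that safety is enforced by construction rather than learned: even a badly trained policy cannot drive the state out of bounds, which is the kind of guarantee virtual-domain RL typically lacks.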

Building Safe, Assured Connected Vehicular Systems
PIs: Ashutosh Dutta (PI, APL), Marin Kobilarov (PI, WSE), Ed Pavelka (PI, APL), Krishan Sabnani (PI, WSE)

The development and deployment of Connected Autonomous Vehicles (CAVs) have heightened the need for robust and assured V2X (Vehicle-to-Everything) communications for real-time situational awareness in at-scale Connected Transportation Systems (CTS). Investigating issues related to production-grade V2X networks is integral to the reliability of CTS deployments. There are also challenges that must be addressed, such as ensuring the safety and security of autonomous vehicles and resolving ethical and legal issues. We will pursue multiple research verticals and bring together the best people across many fields to continue to grow the S4 LAB (Safe, Secure, Smart, Scalable), with the goal of becoming the nation’s leading CTS research group. We are also establishing the Autonomous Transportation Assurance Center of Excellence (ATACE) as a collaboration among researchers, manufacturers, industry, and regulators to ensure that vehicles with advanced autonomy behave in safe, reasonable, and acceptable ways.

AutoSOC: Leveraging Collaboration with Maryland Cities and Counties to Build Cyber Smart Cities
PIs: Yinzhi Cao (PI, WSE), Tamim I. Sookoor (PI, APL), Abisheck Jain (Co-PI, WSE), Joshua M. Silbermann (Co-PI, APL), Neil Fendley (Co-PI, APL), Vineet Kamat (Co-PI, APL)

While many cities claim to be smart, no real-world smart city yet fully realizes the potential to intelligently use information from multiple sources to increase efficiencies, improve city governance, and improve the lives of residents. Within the United States, smart cities are at a nascent stage: many initiatives have only established the high-speed communication fabric (e.g., fiber-optic and 5G networks) on which smart technologies can be deployed, and operate very limited use cases (e.g., streetlights and parking meters). Moving beyond this will necessitate interconnecting networks and bringing online systems that were previously disconnected from the Internet, making cities far more vulnerable to cyber-attacks.

This grant will address the problem of ensuring that smart cities are inherently cybersecure by baking privacy-preserving and trustworthy cybersecurity mechanisms into the capabilities being deployed to make cities more intelligent. We will leverage the IAA grant to prepare and submit a couple of NSF Smart and Connected Communities (S&CC) proposals as well as an NSF Convergence Accelerator proposal to fund the research necessary to achieve our goals. These proposals would support the development of privacy-preserving distributed learning algorithms that enable secure data collection in smart cities and drive intelligent decision-making, enhancing residents’ quality of life and the efficiency of city infrastructure without increasing the cities’ exposure to cyber-attacks and data leakage.
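A common pattern for privacy-preserving distributed learning of the kind described above is federated averaging: each city subsystem trains on its own data and shares only model parameters, never raw sensor readings. The sketch below is purely illustrative (a one-parameter least-squares model with made-up "district" data), not the algorithms this grant would develop.

```python
# Hypothetical sketch of federated averaging: clients train locally and
# share only parameters with the server; raw data never leaves a client.

def local_update(weights, data, lr=0.1, epochs=5):
    """One client's local training: gradient descent on y ~ w * x."""
    w = weights
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_average(w_global, client_datasets, rounds=10):
    """Server averages the clients' locally updated parameters each round."""
    for _ in range(rounds):
        updates = [local_update(w_global, d) for d in client_datasets]
        w_global = sum(updates) / len(updates)
    return w_global

# Three "districts" with local readings of the same underlying trend y = 2x.
clients = [
    [(1.0, 2.1), (2.0, 4.0)],
    [(1.5, 2.9), (3.0, 6.2)],
    [(0.5, 1.0), (2.5, 5.0)],
]
w = federated_average(0.0, clients)
```

The privacy property is structural: the server only ever sees the scalar `updates`, so a compromised aggregator learns model parameters, not residents' data. (Real deployments would add secure aggregation and differential privacy on top of this basic pattern.)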

Trusted and Assured Autonomy at Scale for Resilient Socio-Physical Infrastructure in Extreme Events
PIs: I-Jeng Wang (PI, APL), Angie Liu (PI, WSE)

Ensuring the robustness of critical infrastructure under the stress of imminent extreme events, including both natural and man-made disasters, will require trusted decision-making under uncertainty, at scale, and across both social (health systems, civil and utility services, transport systems, public education, etc.) and physical (power grid, roadways, communications, etc.) infrastructures.  A system-level analysis of risks and potential vulnerabilities resulting from the AI/ML-based autonomy that is increasingly integrated into our infrastructure is critical to producing infrastructure that is resilient to extreme events.  In addition, strategic and effective use of trusted autonomy may significantly improve the ability of our infrastructure to prepare for and respond to challenging events. Large-scale and collaborative decision-making under uncertainty will be a key element of humanity’s response to imminent threats from climate change, conflict, and pandemics.

This project will bring together expertise across relevant disciplines in order to identify and articulate research challenges, explore novel concepts, and establish strategic partnerships. In addition, this project will produce a prototype response framework for imminent threats by integrating predictive modeling and uncertainty quantification, optimization, and human-AI decision-making to achieve trusted and assured autonomy at scale for resilient socio-physical infrastructure.

Assured Autonomy in Human-Machine Teaming for Spaceflight and Off-Planet Presence
PIs: Wes Fuhrman (PI, APL), Peter Kazanzides (PI, WSE), Amy Haufler (PI, APL), Mark Shelhamer (PI, JHU SOM)

NASA’s exploration objectives are on a path to go deeper into space for longer and longer durations. Does bringing humans along warrant the cost and risk, and how can we ensure that humans are able to perform the tasks for which they are uniquely capable? In the austere environments of deep space, human-machine teaming with extensive autonomy will be core to any successful mission. To bring humans to Mars and beyond, we must form human-machine teams that function as an autonomous unit, with each member of the team contributing to the assurance of safety, performance, serendipitous discovery, and ultimately the success of the mission. JHU has key expertise across its community that can come together to address this critical challenge to our Nation.

The key feature of this initiative is bringing together different domains of expertise and emphasizing their integration: an overarching mathematical model that combines physiological, psychological, and environmental sensing; scheduling constraints; human capabilities; mission goals and their levels of priority; semi-automated robotic assistance; and spacecraft status. This will provide a capability for on-site autonomous decision-making, independent of mission control teams. It would serve as a test bed for later Mars missions and also enable more ambitious lunar expeditions, including those out of communication with Earth.

The Moon presents the first setting in which we can legitimately build the autonomous capabilities and form the human-machine teams needed to achieve exploration objectives. Through the Artemis Program, NASA’s largest ongoing program, humans will move from interacting with isolated early autonomous units to teaming with units that support a sustained presence.