Upcoming Events

IAA Talk – John Winder
Tue, Jul 23 @ 11:00 am – 12:00 pm

Title: Bridging the Human-Machine Information Gap for AI Fighter Pilots

Abstract: We discuss the research and development of an AI copilot that predicts human pilot decision-making and offers time-critical recommendations to enable machine-precision decisions at machine speeds. The latest in a long line of innovation in AI for air combat at the Johns Hopkins University Applied Physics Laboratory (APL), this AI copilot is an autonomous agent that can function simultaneously as a situationally aware peer, a cognitive support assistant, and a high-performing fighter jet pilot. The agent can also serve as a loyal wingman, flying one or more uncrewed autonomous vehicles that respond dynamically in real time to human commands. Achieving this behavior required innovating new forms of machine learning, including recurrent variational autoencoders for imitation learning, graph neural networks for coordinated behavioral prediction, and novel transformer architectures for dynamic multi-tasking and zero-shot generalization. These copilot and wingman agents are integrated into a high-fidelity virtual reality cockpit that enables interactive demonstration of, and experimentation with, human-machine teaming scenarios.

Bio: Dr. John Winder is a computer scientist and supervisor of the Advanced AI Algorithms section at the Johns Hopkins University Applied Physics Laboratory (APL). At APL, he leads a team researching AI and machine learning for complex, real-world systems. His recent work encompasses multi-agent reinforcement learning (RL) for cooperative-competitive environments, multi-task knowledge transfer, and human-machine teaming. John joined APL in 2020 after receiving his PhD from UMBC, where his doctoral work focused on hierarchical RL for abstract decision making and concept formation to increase generalization and create more adaptable agent-based AI.

Zoom: https://jhuapl.zoomgov.com/j/1602659447?pwd=wuDPqeecjPYRwvbUbwUAKPIpT3HmZk.1
Meeting ID: 160 265 9447
Passcode: 241294

AI Ethics and Governance Symposium @ Johns Hopkins University Bloomberg Center
Mon, Oct 7 @ 8:45 am – 6:00 pm

The Johns Hopkins AI Ethics and Governance Symposium will bridge parallel, siloed conversations across disciplines and domains. We will bring together experts in AI, AI ethics, safety, assurance, and algorithmic fairness to synthesize and map an approach to shared and domain-specific issues in AI ethics and governance, in four key areas: Defense and Security; Biomedicine and Healthcare; Transportation; and Democracy. Bringing together diverse science, humanities, and engineering scholars, technologists, and policymakers, the Symposium will explore issues across the spectrum of AI innovation, from foundational computer science and engineering to large language models and autonomous machines. 

Details: https://bioethics.jhu.edu/research-and-outreach/projects/ai-ethics-symposium/

The 3rd International Conference on Assured Autonomy (ICAA’24) @ Vanderbilt University, Nashville, TN
Thu, Oct 10 – Fri, Oct 11, all day

The 3rd International Conference on Assured Autonomy will take place at Vanderbilt University in Nashville, TN, on October 10–11, 2024.

Important Dates

  • Paper submission deadline: 6/30/2024 (Anywhere on Earth)
  • Acceptance notification: 7/26/2024
  • Publication-ready papers due: 8/12/2024

Additional Details: https://iaa.jhu.edu/icaa-2024/

How Do We Create an Assured Autonomous Future?

Autonomous systems have become increasingly integrated into nearly every aspect of daily life. In response, the Johns Hopkins Institute for Assured Autonomy (IAA) focuses on ensuring that those systems are safe, secure, and reliable, and that they do what they are designed to do.

Pillars of the IAA

Technology

Autonomous technologies perform tasks with a high degree of autonomy and often employ artificial intelligence (AI) to simulate human cognition, intelligence, and creativity. Because these systems are critical to our safety, health, and well-being as well as to the fabric of our system of commerce, new research and engineering methodologies are needed to ensure they behave in safe, reasonable, and acceptable ways…

Ecosystem

Autonomous systems must integrate well with individuals and with society at large. Such systems often integrate into, and collectively form, an autonomous ecosystem. That ecosystem, comprising the connections and interactions among autonomous systems, over networks, with the physical environment, and with humans, must be assured, resilient, productive, and fair in the autonomous future…

Ethics and Governance

The nation must adopt the right policy to ensure autonomous systems benefit society. Just as the design of technology has dramatic impacts on society, the development and implementation of policy can also result in intended and unintended consequences. Furthermore, the right governance structures are critical to enforce sound policy and to guide the impact of technology…

  • In recent years, we have learned that the most important element about autonomous systems is – for humans – trust. Trust that the autonomous systems will behave predictably, reliably, and effectively. That sort of trust is hard-won and takes time, but the centrality of this challenge to the future of humanity in a highly autonomous world motivates us all.
    Ralph Semmel, Director, Applied Physics Laboratory
  • In the not-too-distant future we will see more and more autonomous systems operating with humans, for humans, and without humans, taking on tasks that were once thought of as the exclusive domains of humans. How can we as individuals and as a society be assured that these systems are designed for resilience against degradation or malicious attack? The mission of the Institute is to bring assurance to people so that as our world is populated by autonomous systems they are operating safely, ethically, and in the best interests of humans.
    Ed Schlesinger, Benjamin T. Rome Dean, Whiting School of Engineering