Upcoming Events

Graduate Student Lightning Talks @ Clark 110, Johns Hopkins University
Tue, Oct 28 @ 3:30 pm – 6:00 pm
IAA Seminar Series — Peter Najjar @ Malone Hall 107, Johns Hopkins University
Thu, Oct 30 @ 10:45 am – 12:00 pm

Title: Unlocking the Promise of Deployed Artificial Intelligence for Healthcare Quality and Safety

Abstract: In this forward-looking session, Dr. Peter Najjar, Vice President of Clinical Innovation at Johns Hopkins Health System, explores how data fusion and artificial intelligence are transforming quality and safety operations. He’ll share a practical blueprint for integrating AI into clinical workflows – balancing ambition with alignment, and innovation with rigor. Drawing from the Armstrong Institute’s work on ambient intelligence, event reporting, and machine-assisted data abstraction, this talk reveals how AI can accelerate the future of healthcare quality and safety.

Bio: Peter A. Najjar, MD, MBA is Vice President of Clinical Innovation for Johns Hopkins Health System and Assistant Professor of Surgery at Johns Hopkins School of Medicine. He leads system-wide efforts to advance care delivery through novel clinical systems, data infrastructure, and technology innovation based out of the Armstrong Institute for Patient Safety and Quality. He co-directs the JHM Health Systems Management Fellowship for budding physician-executives and serves on several hospital and digital health startup boards. Clinically, he practices complex and robotic colorectal surgery at The Johns Hopkins Hospital. Dr. Najjar attended the University of California, Davis before earning his M.D. from the University of Chicago and M.B.A. from Harvard Business School. He completed general surgery residency and fellowships in both colorectal surgery and patient safety and quality at Harvard’s Brigham and Women’s Hospital/Dana-Farber Cancer Institute. He is a Fellow of both the American College of Surgeons and the American Society of Colon and Rectal Surgeons.

Zoom: https://wse.zoom.us/j/97389251444?pwd=rve9qr6swKc8wr3x2vSd13jv2eaUfM.1
Meeting ID: 973 8925 1444
Passcode: 756160

IAA Seminar Series — Anqi Liu @ Malone Hall 228, Johns Hopkins University
Tue, Nov 18 @ 10:45 am – 12:00 pm

Title: Robust and Uncertainty-Aware Decision Making under Distribution Shifts

Abstract: Decision-making tasks like contextual bandits and reinforcement learning often need to be conducted under data distribution shifts. For example, we may need to use off-policy data to evaluate a target policy and/or learn an optimal policy from logged data. We may also need to deal with the sim2real problem, where there is a dynamics shift between training and testing environments. In this talk, I will introduce three threads of my work on robust decision making under distribution shifts. First, I will introduce distributionally robust off-policy evaluation and learning techniques that feature a more conservative uncertainty estimate in the reward estimation component. This pessimistic reward estimation benefits both off-policy evaluation and learning under various distribution shifts. Second, I will introduce our work on off-dynamics reinforcement learning, where we recognize that previous off-dynamics methods can suffer from a lack of exploration and propose a novel model-based approach. Finally, I will cover our current and future work on uncertainty-aware approaches to safe decision-making problems.
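For readers unfamiliar with off-policy evaluation, the sketch below illustrates the general idea of importance-weighted evaluation with a pessimistic (lower-confidence-bound) adjustment for a toy contextual bandit. It is only a rough illustration of the kind of conservative estimation the abstract describes, not Dr. Liu's method; the policies, the simulated data, and the confidence multiplier are all assumptions made for the example.

```python
# Toy sketch: off-policy evaluation for a contextual bandit with a
# pessimistic (lower-confidence-bound) reward estimate. Illustrative only;
# not the speaker's specific method.
import numpy as np

rng = np.random.default_rng(0)

def logging_policy(context, n_actions=3):
    # Uniform logging policy: every action is equally likely.
    return np.full(n_actions, 1.0 / n_actions)

def target_policy(context, n_actions=3):
    # Target policy prefers the action matching the sign of the context.
    probs = np.full(n_actions, 0.1)
    probs[int(context > 0)] = 0.8
    return probs / probs.sum()

# Simulate logged data: contexts, actions from the logging policy, noisy rewards.
n, n_actions = 5000, 3
contexts = rng.normal(size=n)
actions = np.array([rng.choice(n_actions, p=logging_policy(x, n_actions)) for x in contexts])
rewards = (actions == (contexts > 0).astype(int)).astype(float) + rng.normal(0, 0.1, size=n)

# Importance-weighted (IPS) estimate of the target policy's value.
weights = np.array([target_policy(x, n_actions)[a] / logging_policy(x, n_actions)[a]
                    for x, a in zip(contexts, actions)])
ips_estimate = np.mean(weights * rewards)

# Pessimistic estimate: subtract an uncertainty term that grows with the
# variance of the weighted rewards, so the value is not over-stated when
# the logging and target policies disagree (large, high-variance weights).
std_err = np.std(weights * rewards, ddof=1) / np.sqrt(n)
pessimistic_estimate = ips_estimate - 1.96 * std_err

print(f"IPS estimate:         {ips_estimate:.3f}")
print(f"Pessimistic estimate: {pessimistic_estimate:.3f}")
```

Reporting the lower bound rather than the point estimate is one simple way to be conservative: a policy only looks good if it looks good even after accounting for the estimator's uncertainty under the distribution shift.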

Bio: Anqi (Angie) Liu is an assistant professor in the Department of Computer Science at the Whiting School of Engineering, Johns Hopkins University. She is broadly interested in developing principled machine learning algorithms for building more reliable, trustworthy, and human-compatible AI systems in the real world. Her research focuses on enabling machine learning algorithms to be robust to changing data and environments, to provide accurate and honest uncertainty estimates, and to consider human preferences and values in AI interactions. She obtained her PhD in computer science from the University of Illinois Chicago. Prior to joining Johns Hopkins, she completed her postdoctoral research in the Department of Computing + Mathematical Sciences at the California Institute of Technology. She is a recipient of the JHU Discovery Award, the AI2AI Award, and an Amazon Research Award.

Zoom: https://wse.zoom.us/j/97720056194?pwd=aCNb14fShnXzVWXnOzCDtDKWbi8cNb.1
Meeting ID: 977 2005 6194
Passcode: 159069

How Do We Create an Assured Autonomous Future?

Autonomous systems are increasingly integrated into nearly every aspect of daily life. In response, the Johns Hopkins Institute for Assured Autonomy (IAA) focuses on ensuring that those systems are safe, secure, and reliable, and that they do what they are designed to do.

Pillars of the IAA

Applications

Autonomous technologies carry out tasks with a high degree of independence and often employ artificial intelligence (AI) to simulate human cognition, intelligence, and creativity. Because these systems are critical to our safety, health, and well-being, as well as to the fabric of our commerce, new research and engineering methodologies are needed to ensure they behave in safe, reasonable, and acceptable ways…

Foundational AI

Autonomous systems must integrate well with individuals and with society at large. Such systems often integrate into, and collectively form, an autonomous ecosystem. That ecosystem—the connections and interactions among autonomous systems, over networks, with the physical environment, and with humans—must be assured, resilient, productive, and fair in the autonomous future…

Ethics and Governance

The nation must adopt the right policy to ensure autonomous systems benefit society. Just as the design of technology has dramatic impacts on society, the development and implementation of policy can also result in intended and unintended consequences. Furthermore, the right governance structures are critical to enforce sound policy and to guide the impact of technology…

  • In recent years, we have learned that the most important element of autonomous systems is, for humans, trust. Trust that the autonomous systems will behave predictably, reliably, and effectively. That sort of trust is hard-won and takes time, but the centrality of this challenge to the future of humanity in a highly autonomous world motivates us all.
    Ralph Semmel, Director, Applied Physics Laboratory
  • In the not-too-distant future, we will see more and more autonomous systems operating with humans, for humans, and without humans, taking on tasks that were once thought of as the exclusive domains of humans. How can we as individuals and as a society be assured that these systems are designed for resilience against degradation or malicious attack? The mission of the Institute is to bring assurance to people so that, as our world is populated by autonomous systems, those systems operate safely, ethically, and in the best interests of humans.
    Ed Schlesinger, Benjamin T. Rome Dean, Whiting School of Engineering