Latest News

Upcoming Events

IAA Seminar Series – Cynthia Rudin (Duke), “Interpretable Neural Networks for Computer Vision: Clinical Decisions that are Computer-Aided, not Automated”
Tue, Feb 15 @ 11:00 am – 12:00 pm

ABSTRACT: Let us consider a difficult computer vision challenge. Would you want an algorithm to determine whether you should get a biopsy, based on an x-ray? That’s usually a decision made by a radiologist, based on years of training. We know that algorithms haven’t worked perfectly for a multitude of other computer vision applications, and biopsy decisions are harder than just about any other application of computer vision that we typically consider. The interesting question is whether it is possible that an algorithm could be a true partner to a physician, rather than making the decision on its own. To do this, at the very least, we would need an interpretable neural network that is as accurate as its black-box counterparts. In this talk, I will discuss two approaches to interpretable neural networks: (1) case-based reasoning, where parts of images are compared to other parts of prototypical images for each class, and (2) neural disentanglement, using a technique called concept whitening. The case-based reasoning technique is strictly better than saliency maps, and the concept whitening technique provides a strict advantage over the post hoc use of concept vectors. Here are the papers I will discuss:
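
For readers unfamiliar with the first approach, the sketch below illustrates the general idea of case-based reasoning with prototypes: patch embeddings of an input image are compared against learned class prototypes, and the best-matching patches supply the evidence for each class. This is only an illustrative sketch, not the speaker's actual architecture; the tiny backbone, prototype counts, and cosine similarity are assumptions chosen for brevity.

```python
# Minimal sketch of prototype-based (case-based) reasoning. Not the speaker's
# implementation; sizes, backbone, and similarity function are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeSketch(nn.Module):
    def __init__(self, num_classes=2, protos_per_class=5, dim=64):
        super().__init__()
        # Small convolutional backbone: maps an image to a grid of patch embeddings.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Learned prototype vectors, protos_per_class of them for each class.
        self.prototypes = nn.Parameter(torch.randn(num_classes * protos_per_class, dim))
        # Fixed evidence weights: each prototype votes for its own class.
        self.register_buffer(
            "class_weights",
            torch.eye(num_classes).repeat_interleave(protos_per_class, dim=1),
        )

    def forward(self, x):
        feats = self.backbone(x)                   # (B, dim, H, W) patch embeddings
        b, d, h, w = feats.shape
        patches = feats.permute(0, 2, 3, 1).reshape(b, h * w, d)
        # Similarity of every patch to every prototype ("this part looks like that part").
        sims = F.cosine_similarity(
            patches.unsqueeze(2),                  # (B, H*W, 1, dim)
            self.prototypes.view(1, 1, -1, d),     # (1, 1, P, dim)
            dim=-1,
        )                                          # (B, H*W, P)
        # Each prototype keeps only its best-matching patch; these scores are the
        # interpretable evidence that can be shown alongside the decision.
        proto_scores, _ = sims.max(dim=1)          # (B, P)
        return proto_scores @ self.class_weights.T # (B, num_classes) logits

model = PrototypeSketch()
logits = model(torch.randn(1, 3, 64, 64))          # e.g. one 64x64 RGB image
```

Because each class logit decomposes into per-prototype scores tied to specific image patches, the decision can be traced back to concrete visual evidence rather than treated as a black-box output.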

BIO: Coming Soon

IAA/Berman Seminar Series – Kadija Ferryman, JHU Berman Institute of Bioethics
Fri, Feb 18 @ 11:00 am – 12:00 pm

Title and Abstract forthcoming.

Read about the speaker here.

IAA Seminar Series – John Leonard, MIT
Thu, Mar 17 @ 11:00 am – 12:00 pm

Title and Abstract forthcoming.

Speaker info here.

How Do We Create an Assured Autonomous Future?

Autonomous systems are becoming increasingly integrated into nearly every aspect of daily life. In response, the Johns Hopkins Institute for Assured Autonomy (IAA) focuses on ensuring that those systems are safe, secure, and reliable, and that they do what they are designed to do.

Pillars of the IAA

Technology

Autonomous technologies perform tasks with little or no human direction and often employ artificial intelligence (AI) to simulate human cognition, intelligence, and creativity. Because these systems are critical to our safety, health, and well-being, as well as to the fabric of our commerce, new research and engineering methodologies are needed to ensure they behave in safe, reasonable, and acceptable ways…

Ecosystem

Autonomous systems must integrate well with individuals and with society at large. Such systems often connect with one another, collectively forming an autonomous ecosystem. That ecosystem—the connections and interactions between autonomous systems, over networks, with the physical environment, and with humans—must be assured, resilient, productive, and fair in the autonomous future…

Policy and Governance

The nation must adopt the right policy to ensure autonomous systems benefit society. Just as the design of technology has dramatic impacts on society, the development and implementation of policy can also result in intended and unintended consequences. Furthermore, the right governance structures are critical to enforce sound policy and to guide the impact of technology…

  • In recent years, we have learned that the most important element of autonomous systems is, for humans, trust: trust that the autonomous systems will behave predictably, reliably, and effectively. That sort of trust is hard-won and takes time, but the centrality of this challenge to the future of humanity in a highly autonomous world motivates us all.
    Ralph Semmel, Director, Applied Physics Laboratory
  • In the not-too-distant future, we will see more and more autonomous systems operating with humans, for humans, and without humans, taking on tasks that were once thought of as the exclusive domain of humans. How can we as individuals and as a society be assured that these systems are designed for resilience against degradation or malicious attack? The mission of the Institute is to bring assurance to people so that, as our world is populated by autonomous systems, those systems operate safely, ethically, and in the best interests of humans.
    Ed Schlesinger, Benjamin T. Rome Dean, Whiting School of Engineering