Upcoming Events

IAA & CS Department Seminar Series — Erik Rye @ Malone Hall 228, Johns Hopkins University
Apr 8 @ 10:45 am – 12:00 pm

Title: Building Haystacks to Find Needles

Abstract: The internet is a big place, comprising billions of users and tens of billions of network devices. Discovering and remediating vulnerabilities in these devices is an imperative for a more secure internet. Unfortunately, vulnerabilities that affect millions of hosts represent only a small fraction of the overall internet. Finding these “needles” at internet scale requires collecting an exponentially larger “haystack.” In this talk, Erik Rye will describe two novel techniques he developed to collect unprecedentedly large network datasets. He will describe how he used these datasets to enable the discovery of new network security and privacy problems at internet scale. These include stark, real-world security and privacy vulnerabilities, such as revealing troop positions in Ukraine and exposing previously unreachable Internet of Things devices, like smart light bulbs, in users’ homes. Rye’s findings have prompted design changes in systems run by Apple, SpaceX, and router manufacturers, and have improved the security and privacy of millions of affected individuals.

Bio: Erik Rye is a final-year PhD candidate at the University of Maryland, where he focuses on solving large-scale network security and privacy problems. He regularly publishes in venues like the ACM Special Interest Group on Data Communications Conference and IEEE Security & Privacy, and he has shared his work at industry conventions like Black Hat USA and in popular media like KrebsOnSecurity.com. Rye contributes to the network security and measurement communities by running the IPv6 Observatory, which publishes weekly insights into the state of the internet. He holds master’s degrees in computer science and applied mathematics from the Naval Postgraduate School, and also likes dogs.

Zoom: https://wse.zoom.us/j/98183817407

How Do We Create an Assured Autonomous Future?

Autonomous systems have become increasingly integrated into all aspects of every person’s daily life. In response, the Johns Hopkins Institute for Assured Autonomy (IAA) focuses on ensuring that those systems are safe, secure, and reliable, and that they do what they are designed to do.

Pillars of the IAA

Applications

Autonomous technologies perform tasks with a high degree of autonomy and often employ artificial intelligence (AI) to simulate human cognition, intelligence, and creativity. Because these systems are critical to our safety, health, and well-being as well as to the fabric of our system of commerce, new research and engineering methodologies are needed to ensure they behave in safe, reasonable, and acceptable ways…

Foundational AI

Autonomous systems must integrate well with individuals and with society at large. Such systems often integrate into—and form collectively into—an autonomous ecosystem. That ecosystem—the connections and interactions between autonomous systems, over networks, with the physical environment, and with humans—must be assured, resilient, productive, and fair in the autonomous future…

Ethics and Governance

The nation must adopt the right policy to ensure autonomous systems benefit society. Just as the design of technology has dramatic impacts on society, the development and implementation of policy can also result in intended and unintended consequences. Furthermore, the right governance structures are critical to enforce sound policy and to guide the impact of technology…

  • In recent years, we have learned that the most important element of autonomous systems is, for humans, trust: trust that the autonomous systems will behave predictably, reliably, and effectively. That sort of trust is hard-won and takes time, but the centrality of this challenge to the future of humanity in a highly autonomous world motivates us all.
    Ralph Semmel, Director, Applied Physics Laboratory
  • In the not-too-distant future, we will see more and more autonomous systems operating with humans, for humans, and without humans, taking on tasks that were once thought of as the exclusive domains of humans. How can we, as individuals and as a society, be assured that these systems are designed for resilience against degradation or malicious attack? The mission of the Institute is to bring assurance to people so that, as our world is populated by autonomous systems, those systems operate safely, ethically, and in the best interests of humans.
    Ed Schlesinger, Benjamin T. Rome Dean, Whiting School of Engineering