Upcoming Events

IAA & CS Department Seminar Series – Allison Koenecke
Tuesday, Dec 10, 10:45 am – 11:45 am
Malone Hall 228, Johns Hopkins University

Title: Fairness in Algorithmic Services

Abstract: Algorithmically guided decisions are becoming increasingly prevalent and, if left unchecked, can amplify pre-existing societal biases. In this talk, I use modern computational tools to examine the equity of decision-making in two complex systems: automated speech recognition and online advertising. In the former, I audit popular speech-to-text systems (developed by Amazon, Apple, Google, IBM, Microsoft, and OpenAI) and demonstrate disparities in transcription performance for African American English speakers and for speakers with language impairments (patterns likely stemming from a lack of diversity in the data used to train the systems). These results point to hurdles faced by non-“Standard” English speakers in using widespread tools driven by speech recognition technology. In the second part of the talk, I propose a methodological framework for online advertisers to determine a demographically equitable allocation of individuals being shown ads for SNAP (food stamp) benefits. This framework measures what different populations believe is a “fair” allocation of ad budgets in a constrained setting, given cost trade-offs between English-speaking and Spanish-speaking SNAP applicants; I uncover broad consensus across demographics for some degree of equity over pure efficiency. Both projects exemplify processes to reduce disparate impact in algorithmic decision-making.
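
As a rough illustration of the kind of transcription audit described in the abstract (this is not code from the talk; the transcripts, group labels, and results below are hypothetical), one can compare word error rates across speaker groups:

    # Illustrative sketch only: per-group word error rate (WER) comparison for an ASR audit.
    # All sample transcripts and group labels below are hypothetical.
    from collections import defaultdict

    def word_error_rate(reference: str, hypothesis: str) -> float:
        """WER = word-level edit distance (Levenshtein) divided by reference length."""
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edit distance between ref[:i] and hyp[:j]
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i
        for j in range(len(hyp) + 1):
            dp[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                               dp[i][j - 1] + 1,         # insertion
                               dp[i - 1][j - 1] + cost)  # substitution or match
        return dp[len(ref)][len(hyp)] / max(len(ref), 1)

    # Hypothetical audit records: (speaker group, human reference transcript, ASR output).
    samples = [
        ("group_a", "she is going to the store", "she is going to the store"),
        ("group_b", "she going to the store now", "she gone to this store now"),
    ]

    group_scores = defaultdict(list)
    for group, ref, hyp in samples:
        group_scores[group].append(word_error_rate(ref, hyp))

    # Disparities show up as gaps between group-level average error rates.
    for group, scores in sorted(group_scores.items()):
        print(f"{group}: mean WER = {sum(scores) / len(scores):.2f}")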

Bio: Allison Koenecke is an Assistant Professor of Information Science at Cornell University. Her research on algorithmic fairness applies computational methods, such as machine learning and causal inference, to study societal inequities in domains ranging from online services to public health. Koenecke is regularly quoted as an expert on disparities in automated speech-to-text systems. She was previously a postdoctoral researcher at Microsoft Research and received her PhD from Stanford’s Institute for Computational and Mathematical Engineering. She has received several NSF grants, a place on the Forbes 30 Under 30 in Science list, and the Cornell CIS DEIB Faculty of the Year Award.

Zoom: https://wse.zoom.us/j/98352725833

How Do We Create an Assured Autonomous Future?

Autonomous systems are increasingly integrated into nearly every aspect of daily life. In response, the Johns Hopkins Institute for Assured Autonomy (IAA) focuses on ensuring that those systems are safe, secure, and reliable, and that they do what they are designed to do.

Pillars of the IAA

Technology

Autonomous technologies perform tasks with a high degree of autonomy and often employ artificial intelligence (AI) to simulate human cognition, intelligence, and creativity. Because these systems are critical to our safety, health, and well-being as well as to the fabric of our system of commerce, new research and engineering methodologies are needed to ensure they behave in safe, reasonable, and acceptable ways…

Ecosystem

Autonomous systems must integrate well with individuals and with society at large. Such systems often integrate into, and collectively form, an autonomous ecosystem. That ecosystem (the connections and interactions among autonomous systems, over networks, with the physical environment, and with humans) must be assured, resilient, productive, and fair in the autonomous future…

Ethics and Governance

The nation must adopt the right policy to ensure autonomous systems benefit society. Just as the design of technology has dramatic impacts on society, the development and implementation of policy can also result in intended and unintended consequences. Furthermore, the right governance structures are critical to enforce sound policy and to guide the impact of technology…

  • In recent years, we have learned that the most important element of autonomous systems is, for humans, trust: trust that these systems will behave predictably, reliably, and effectively. That sort of trust is hard-won and takes time, but the centrality of this challenge to the future of humanity in a highly autonomous world motivates us all.
    Ralph Semmel, Director, Applied Physics Laboratory
  • In the not-too-distant future, we will see more and more autonomous systems operating with humans, for humans, and without humans, taking on tasks that were once thought of as the exclusive domain of humans. How can we as individuals and as a society be assured that these systems are designed for resilience against degradation or malicious attack? The mission of the Institute is to bring assurance to people so that, as our world is populated by autonomous systems, those systems operate safely, ethically, and in the best interests of humans.
    Ed Schlesinger, Benjamin T. Rome Dean, Whiting School of Engineering