Upcoming Events

IAA & Berman Seminar Series – Jon Penney, “Privacy Lessons for Risk-Based AI Regulation”
Nov 15 @ 11:00 am – 12:00 pm

Title: “Privacy Lessons for Risk-Based AI Regulation”

Abstract: The combination of ubiquitous computing, big data, and the development and deployment of artificial intelligence (AI) and machine learning (ML) systems across all sectors of society has created immense new possibilities, but also serious new risks and harms for privacy, safety, and human rights. Today, the international consensus on AI regulation is the risk-based approach: lawmakers in the United States, Europe, Canada, and beyond have all turned to risk-based regulatory tools and schemes to regulate and govern AI systems. Because data is essential to the use and development of AI systems, AI data governance is likewise seen as essential to comprehensive AI regulatory schemes. The result is that data protection and governance are often tacked onto, or bootstrapped to, these broader risk-based approaches; the EU's Artificial Intelligence Act—often described as the most robust and comprehensive AI regulatory scheme internationally—is a good example.

While there is a lively debate about the wisdom of risk-based approaches in AI scholarship and public policy, much less has been said about the wisdom of risk-based approaches for AI data privacy and governance. That is the focus of this talk. Drawing on lessons from privacy and data protection law, policy, and research, this talk argues that the risk-based approaches to AI regulation predominant today are not only largely incommensurable with robust protection for data privacy interests, but need to be fundamentally re-oriented—or entirely abandoned—to address the real risks and harms of AI systems today and tomorrow.

Bio: Jon Penney is a legal scholar and social scientist with expertise at the intersection of law, technology, and human rights, with an emphasis on emerging technologies and on interdisciplinary and empirical methods. Based in Toronto, he is an Associate Professor at Osgoode Hall Law School, York University; a Faculty Associate at Harvard’s Berkman Klein Center for Internet & Society; and a longtime Research Fellow at The Citizen Lab, based at the University of Toronto’s Munk School of Global Affairs and Public Policy. He recently spent time as a Visiting Scholar at Harvard’s Institute for Rebooting Social Media.

His award-winning research on privacy, security, and technology law and policy has received national and international attention, including coverage in the Washington Post, Reuters, the New York Times, WIRED, The Guardian, Le Monde, and The Times of India, among others, and has been chronicled in Harvard Magazine. Beyond research, he serves on the advisory committee for the Cyber Civil Rights Initiative; the Program Committee for the Generative AI Law (GenLaw) Workshop held annually at the International Conference on Machine Learning (ICML); and the Steering Committee for the Free and Open Communications on the Internet (FOCI) Workshop, co-located with the annual USENIX Security Symposium.

Zoom: https://jhuapl.zoomgov.com/j/1611113422?pwd=UKnjiR1bXR6bWtN7SceJaDbK1kZMEr.1&from=addon
Meeting ID: 161 111 3422
Passcode: 983534

IAA Seminar @ Malone Hall 107, Johns Hopkins University
Nov 19 @ 11:00 am – 12:00 pm

Details coming soon!

How Do We Create an Assured Autonomous Future?

Autonomous systems have become increasingly integrated into every aspect of daily life. In response, the Johns Hopkins Institute for Assured Autonomy (IAA) focuses on ensuring that those systems are safe, secure, and reliable—and that they do what they are designed to do.

Pillars of the IAA

Technology

Autonomous technologies perform tasks with a high degree of autonomy and often employ artificial intelligence (AI) to simulate human cognition, intelligence, and creativity. Because these systems are critical to our safety, health, and well-being as well as to the fabric of our system of commerce, new research and engineering methodologies are needed to ensure they behave in safe, reasonable, and acceptable ways…

Ecosystem

Autonomous systems must integrate well with individuals and with society at large. Such systems often integrate into—and form collectively into—an autonomous ecosystem. That ecosystem—the connections and interactions between autonomous systems, over networks, with the physical environment, and with humans—must be assured, resilient, productive, and fair in the autonomous future…

Ethics and Governance

The nation must adopt the right policy to ensure autonomous systems benefit society. Just as the design of technology has dramatic impacts on society, the development and implementation of policy can also result in intended and unintended consequences. Furthermore, the right governance structures are critical to enforce sound policy and to guide the impact of technology…

  • In recent years, we have learned that the most important element of autonomous systems is, for humans, trust: trust that the autonomous systems will behave predictably, reliably, and effectively. That sort of trust is hard-won and takes time, but the centrality of this challenge to the future of humanity in a highly autonomous world motivates us all.
    — Ralph Semmel, Director, Applied Physics Laboratory
  • In the not-too-distant future, we will see more and more autonomous systems operating with humans, for humans, and without humans, taking on tasks that were once thought of as the exclusive domains of humans. How can we as individuals and as a society be assured that these systems are designed for resilience against degradation or malicious attack? The mission of the Institute is to bring assurance to people, so that as our world is populated by autonomous systems, they are operating safely, ethically, and in the best interests of humans.
    — Ed Schlesinger, Benjamin T. Rome Dean, Whiting School of Engineering