Title: Privacy Lessons for Risk-Based AI Regulation
Abstract: A combination of ubiquitous computing, big data, and the development and deployment of artificial intelligence (AI) and machine learning (ML) systems across all sectors of society has created immense new possibilities, but also serious new risks and harms for privacy, safety, and human rights. Today, risk-based regulation is the international consensus approach to governing AI: lawmakers in the United States, Europe, Canada, and beyond have all turned to risk-based regulatory tools and schemes to regulate and govern AI systems. And because data is essential to the development and use of AI systems, AI data governance is likewise seen as essential to any comprehensive AI regulatory scheme. The result is that data protection and governance are often tacked onto, or bootstrapped to, these broader risk-based approaches; the EU’s Artificial Intelligence Act, often described as the most robust and comprehensive AI regulatory scheme internationally, is a good example of this.
While there is a lively debate in AI scholarship and public policy about the wisdom of risk-based approaches, much less has been said about the wisdom of risk-based approaches to AI data privacy and governance. That is the focus of this talk. Drawing on lessons from privacy and data protection law, policy, and research, this talk argues that the risk-based approaches to AI regulation predominant today are not only largely incommensurable with robust protection for data privacy interests, but also need to be fundamentally re-oriented, or abandoned entirely, to address the real risks and harms of AI systems today and tomorrow.
Bio: Jon Penney is a legal scholar and social scientist with expertise at the intersection of law, technology, and human rights, with an emphasis on emerging technologies and interdisciplinary and empirical methods. Based in Toronto, he is an Associate Professor at Osgoode Hall Law School, York University; a Faculty Associate at Harvard’s Berkman Klein Center for Internet & Society; and a longtime Research Fellow at The Citizen Lab, based at the University of Toronto’s Munk School of Global Affairs and Public Policy. He also recently spent time as a Visiting Scholar at Harvard’s Institute for Rebooting Social Media.
His award-winning research on privacy, security, and technology law and policy has received national and international attention, including coverage in the Washington Post, Reuters, The New York Times, WIRED, The Guardian, Le Monde, and The Times of India, among others, and has been chronicled in Harvard Magazine. Beyond research, he serves on the advisory committee for the Cyber Civil Rights Initiative; the Program Committee for the Generative AI and Law (GenLaw) Workshop held annually at the International Conference on Machine Learning (ICML); and the Steering Committee for the Free and Open Communications on the Internet (FOCI) Workshop, co-located with the annual USENIX Security Symposium.
Zoom: https://jhuapl.zoomgov.com/j/1611113422?pwd=UKnjiR1bXR6bWtN7SceJaDbK1kZMEr.1&from=addon
Meeting ID: 161 111 3422
Passcode: 983534

Title: Making AI Work in the Crucible: Perception and Reasoning in Chaotic Environments
Abstract: Disasters like wildfires and wars are increasing in frequency and severity, creating environments where chaos reigns. In these moments, AI holds the potential to revolutionize disaster response: helping first responders stay safe, saving lives, and guiding critical decision-making. Yet current AI systems often fail when faced with the realities of such environments; they assume clean data from reliable sensors, predictable conditions, and well-defined tasks, assumptions that collapse in the face of noisy inputs, shifting contexts, and incomplete information. In this talk, Ritwik Gupta will present a vision for building AI systems that thrive in these complex, high-stakes scenarios. He will explore the core challenges: working with gigapixel images that defy traditional compute paradigms, understanding data from non-visible modalities like synthetic aperture radar, integrating multimodal information from disparate sensors, and making sense of rapidly changing conditions. Tackling these challenges requires fundamentally rethinking AI architectures for scalability, adaptability, and robustness, whether by introducing physics-aware models, sensor-in-the-loop designs, or multimodal systems capable of reasoning over fragmented and noisy inputs. Beyond these technical challenges, Gupta will discuss how AI policy must evolve to bridge the gap between civilian and military applications: by addressing regulatory bottlenecks, dual-use technologies can be deployed responsibly and equitably in both disaster response and defense scenarios. This dual approach, spanning foundational AI research and policy innovation, will help unlock the potential of AI in the world’s most chaotic environments.
Bio: Ritwik Gupta is a PhD candidate at the University of California, Berkeley; the technical director for autonomy at the Defense Innovation Unit; and an advisor to the FBI on AI and AI policy. His research focuses on computer vision in complex and chaotic environments, as well as the policy implications of integrating dual-use AI into both civilian and military contexts. Gupta’s work has found widespread use in tasks such as assessing building damage after the 2023 Turkey-Syria earthquake and detecting and interdicting illegal activity at sea. His research has been widely covered in press outlets such as TIME, the Wall Street Journal, and CNN. Gupta is a graduate fellow with the Berkeley Risk and Security Lab, a research fellow at the Berkeley Human Rights Center, and an AI policy fellow at the Center for Security in Politics. He previously led a research lab at Carnegie Mellon University focused on AI for humanitarian assistance and disaster response, and investigated real-time machine learning for the Apple Vision Pro.