Title: “Machine learning: from research, application to downstream societal impacts”
Abstract: AI has come to exert an outsized influence on the world, particularly on marginalised communities. Yet machine learning research tends to develop in a silo, detached from its downstream societal impacts: the kinds of applications built upon the research and their uses, the underlying values of the field, and the uneven distribution of harms and benefits have largely been ignored. In this talk, I discuss the underlying values of machine learning research as well as the downstream impact of AI research in general and computer vision in particular. I present quantitative and qualitative analyses showing 1) the predominant values driving the field of machine learning and the concentration of power in the hands of the few, and 2) how computer vision research is powering mass surveillance. I highlight the ethical and societal implications of such work and the role machine learning researchers might play in disrupting the ‘computer vision to surveillance’ pipeline.
Bio: Abeba Birhane is a cognitive scientist, currently a Senior Advisor in AI Accountability at the Mozilla Foundation and an Adjunct Assistant Professor in the School of Computer Science and Statistics at Trinity College Dublin, Ireland. Her research focuses on AI accountability, with a particular emphasis on audits of AI models and training datasets – work for which she was featured in Wired UK and named to the TIME100 Most Influential People in AI list. Birhane also serves on the United Nations Secretary-General’s AI Advisory Body and the newly convened AI Advisory Council in Ireland.
Zoom: https://jhuapl.zoomgov.com/j/1615644606?pwd=N83DHG4pIXb6M4IZVa0x7CDycjzSEP.1
Meeting ID: 161 564 4606
Passcode: 173018
Title: “Assessing the Relationship Between Privacy Regulations and Software Development to Improve Rulemaking and Compliance”
Abstract: The advent of the surveillance economy on the modern Internet has significantly transformed understandings of privacy. Governments worldwide have proposed various legislative solutions to encourage responsible behavior by companies handling personally identifiable information. However, the relationship between regulation and software design, and the ultimate efficacy of enforcement paradigms at promoting widespread compliance with data protection standards, are difficult to measure. Our research project, funded under the NSF’s “Designing Accountable Software Systems” program, brings together a combined team of legal and engineering experts to provide the first tool to systematically evaluate how privacy laws shape approaches to personally identifiable information in software development. This work lays the foundation for a new regulatory paradigm based on proactive, rather than reactive, models of enforcement, which rely on mass automated notifications rather than labor-intensive individual enforcement actions. This presentation will focus on the law and policy aspects of the current project, the relationship between privacy, regulation, and the development of new technologies, and the impact of this research on global privacy enforcement structures.
Bio: Michael Karanicolas is the Executive Director of the UCLA Institute for Technology Law & Policy and, as of January 1, 2025, will be an associate professor and the James S. Palmer Chair in Law and Public Policy at the Schulich School of Law at Dalhousie University. Previously, he was the Wikimedia Fellow at the Yale Information Society Project, where he remains an affiliated fellow. Prior to his academic career, Michael spent a decade as a human rights advocate, working to develop legal frameworks supporting foundational rights for democracy. His research encompasses a number of thematic areas but generally revolves around the application of human rights standards in an online context. Michael has a B.A. (Hons.) from Queen’s University (Dean’s List), an LL.B. from the Schulich School of Law at Dalhousie University (Dean’s List), and an LL.M. from the University of Toronto. You can find his publications at: https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=3448585.
Zoom: https://jhuapl.zoomgov.com/j/1614814150?pwd=We2fvlWeBpELua4AExM5aWfI5JzpKk.1&from=addon
Meeting ID: 161 481 4150
Passcode: 846194
The Johns Hopkins Institute for Assured Autonomy (IAA) is hosting a special event focusing on graduate students and their work related to assured autonomy. The goal is to continue building the IAA community by raising awareness of the research underway by both PhD and Master’s students, and by building bridges between students and faculty, as well as among the graduate students themselves.
The event will consist of 5-minute lightning talks followed by additional time for discussion. A light dinner and refreshments will be provided.
Speakers must attend in person.
Planning to attend in person? Register here.
Zoom (for attendees only): https://wse.zoom.us/j/91529582930; Meeting ID: 91529582930
Details coming soon!