As many sectors work to keep up with the rapid evolution of artificial intelligence, government staff must also understand how AI is affecting different industries and technologies that impact their day-to-day work.
To help congressional staffers stay informed on the latest developments in AI and machine learning, the Johns Hopkins Data Science and AI Institute, Engineering Lifelong Learning, and Office of Federal Strategy hosted an information session at the Hopkins Bloomberg Center, connecting them with information, resources, and experts in AI and ML across many disciplines.
“This was really an opportunity for Johns Hopkins to be a resource to congressional staffers and to provide a forum for dynamic and informative discussions on AI and its potential policy implications,” said Chris Austin, director of federal strategy at Johns Hopkins.
Here are some of the key takeaways from each of the five AI specialty areas covered during the session.
- AI in ethics
- AI for speech and language
- AI for computer vision
- AI for national security
- AI in medicine
AI in ethics
AI raises many of the ethics challenges familiar from other technologies, particularly in health care, such as privacy, informed consent, and data ownership and sharing. Alongside those, it also presents a set of issues specific to AI.
Unique AI ethics challenges
AI has high adoption but low trust among Americans, according to a recent KPMG report. Only 29% believe current safeguards are sufficient, and 72% say regulation is needed. Strengthening those safeguards is challenging because of the nature of AI.
- Scale. AI is pervasive and invisible. People often don’t even know it’s being used.
- Implicit ethics. Unlike other emerging technologies, in which ethical issues arise in the application, in AI they’re often unintentionally baked in during design, usually by those coding the model, and then have impact at scale.
- Bias. Models are often trained on data that reflect the biases of the people who generated them, leading to tools that likewise reflect those biases in their decisions.
Potential principles to build AI ethics on
Experts have proposed many sets of AI principles over the years. According to a study of the emerging global alignment on AI ethics guidelines, five themes underpin them:
- Transparency
- Justice and fairness
- Non-maleficence
- Responsibility
- Privacy
“But the tricky bit is nobody agrees on how these are defined or whose responsibility they are,” said Debra Mathews, associate director for research and programs for the Johns Hopkins Berman Institute of Bioethics. “Further, we need more work to integrate ethical principles into actual governance decisions.”
This is a process that’s important for any emerging technology, Mathews noted, but is particularly critical for AI given its speed of change and prevalence. By considering the ethics of AI early, we can minimize the risks that come with rapid innovation without stunting its progress, she said.

Debra Mathews, associate director for research and programs
AI for speech and language
AI-driven speech and language technologies are transforming fields ranging from medicine to cybersecurity. Berrak Sisman, who leads the Speech & Machine Learning (SMILE) Lab at the Johns Hopkins Center for Language and Speech Processing, highlighted three key trends in how AI is reshaping speech and language processing. Her team at Hopkins develops state-of-the-art AI models to improve how machines understand and express themselves through human language.
Expressive speech generation and deepfakes stem from the same powerful technologies
The tools that bring voices to life in healthcare, education, or accessibility can also be exploited to create disinformation. Studying these technologies together, both their capabilities and vulnerabilities, is key to developing effective deepfake detection and building more secure, trustworthy systems.
Emotional intelligence is the missing layer in today’s AI
As AI systems get better at generating speech that sounds human, the next frontier is how they say things: tone, empathy, and emotion. In an AI-driven world, emotional intelligence is essential for trust, safety, and effective communication.
Large language models are changing what machines can understand and say
LLMs are already transforming education, healthcare, and national security. They’re not just chatbots; they’re becoming the backbone of how machines reason, generate, and speak.

Berrak Sisman, assistant professor
AI for computer vision
Computer vision focuses on enabling machines to interpret and understand visual data, such as images and videos. It has broad applications across sectors, including autonomous driving, assistive technologies, homeland security, medicine, and public health. The overarching goal is to automate visual tasks that the human visual system performs naturally.
There are a variety of impactful real-world applications, from commerce to health care, including:
- Identifying missing children
- Aiding people with low vision
- Enabling drones and robots to “see”
- Facial phenotyping for stroke monitoring and evaluation
- Mosquito identification to slow the spread of mosquito-borne disease
- Traffic flow estimations
- Self-driving cars, which embody the integration of vision with robotics and autonomous systems
AI for computer vision raises critical challenges, too.
- Data hunger. Heavy reliance on large datasets.
- Black-box nature. The lack of transparency in model decision-making.
- Bias. Sensitivity to domain shifts, along with concerns about privacy and fairness. A common question Rama Chellappa, chief scientist at the Johns Hopkins Institute for Assured Autonomy, said he asks is, “Will it work everywhere, and will it work for everyone?”
- Security. Vulnerability to adversarial attacks, small deliberate input changes that can fool a model (see the sketch below).
These limitations underscore the importance of addressing technical and generalization issues for real-world deployment, Chellappa said.
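To make that adversarial-attack vulnerability concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a standard textbook attack on vision models. The model, inputs, and perturbation budget `epsilon` here are illustrative assumptions, not details from the session.

```python
# Minimal FGSM sketch: a tiny, deliberate perturbation that can flip a
# vision model's prediction. Model, inputs, and epsilon are assumptions.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return `image` nudged in the direction that most increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Each pixel shifts by at most epsilon, so the change is nearly
    # invisible to a person yet can change the model's output entirely.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

The asymmetry is the unsettling part: the perturbation is bounded by a few gray levels per pixel, yet it can reliably change a strong model’s answer, which is why robustness is treated as a first-class deployment concern.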
Rama Chellappa, Bloomberg Distinguished Professor
AI for national security
AI is fundamentally changing war both on and off the battlefield. For instance, it’s enhancing drones, making them increasingly autonomous. It’s also accelerating the processes that help develop new military technology, such as the discovery of new materials and code generation.
The broad accessibility of AI produces asymmetric threats. Entities no longer need major physical infrastructure to pose a threat. Now, even threadbare militaries can overwhelm stronger forces by leveraging AI capabilities.
“Ukraine, in particular, is giving us a glimpse into the vulnerability of much of the heavy, large equipment that we have, and also an indication of how we might deploy some of our capabilities to be more effective in the future,” said James Bellingham, Bloomberg Distinguished Professor of exploration robotics and executive director of the Johns Hopkins Institute for Assured Autonomy.
Other ways AI can impact national security include:
- The development of and protection against cybersecurity threats
- Improved situational awareness for intelligence agencies managing conflicts
- The ability to anticipate the consequences of attacks
- Better precision and control of attacks
Dimensions of AI safety and assurance for national security
Many experts note that the same elements that enable AI to support national security can also generate many risks for it, including unclear attribution, better spoofing and deception, and outpacing the human control loop.
One of the top risks is the uneven nature of AI performance, which Bellingham said prompts a need for research on the science of AI and data with a particular focus on providing performance guarantees.
“We need systems-engineering processes with good guidelines for how you construct your data sets, how you validate your data sets, and then when you construct your AI, how you validate your AI,” he said, adding that experts should consider the application domain—land, sea, air, or space, for example—when designing this process.
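As a hedged illustration of what one step in such a systems-engineering gate might look like, the sketch below runs two basic dataset-construction checks (exact duplicates that could leak across train/test splits, and classes with too few examples) before training proceeds. The checks and thresholds are illustrative assumptions, not Bellingham’s guidelines.

```python
# Illustrative dataset-validation gate: reject a dataset before training
# if it fails basic construction checks. Thresholds are assumptions.
import hashlib
from collections import Counter

def validate_dataset(records, label_key="label", min_per_class=100):
    """Return a list of problems found in a list of dict records."""
    problems = []
    # Exact-duplicate check: duplicates can silently leak between the
    # training and evaluation splits and inflate measured performance.
    seen = set()
    for r in records:
        digest = hashlib.sha256(repr(sorted(r.items())).encode()).hexdigest()
        if digest in seen:
            problems.append("duplicate record found")
            break
        seen.add(digest)
    # Class-balance check: flag labels with too few examples to learn from.
    for label, n in Counter(r[label_key] for r in records).items():
        if n < min_per_class:
            problems.append(f"class {label!r} has only {n} examples")
    return problems

# Usage: refuse to train if any check fails.
# problems = validate_dataset(my_records)
# assert not problems, problems
```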

James Bellingham, Bloomberg Distinguished Professor and executive director of the Johns Hopkins Institute for Assured Autonomy
AI in medicine
AI can advance medicine in many ways, but AI in medicine isn’t new. Soon after AI methods were invented, they were used to analyze signals such as ECG and EEG in the 1960s, to diagnose diseases through expert systems in the 1980s, and, starting in the 1990s, to diagnose cancer by analyzing radiology images. Today, the FDA has approved 1,016 AI devices for use in patient care and medicine.
AI has a broad impact across medicine, from global to subcellular scales of human biology. It also affects all aspects of patient care, starting before the onset of disease, through diagnosis and treatment selection, and all the way to recovery and long-term health:
- Error prevention
- Enhanced measurement
- Discovery of new treatments
- Optimization and personalization of interventions
- Augmentation of human capacity
- Enabling autonomous agents
To realize the promise of AI in medicine, AI solutions must be valid, precise, and transparent, said Swaroop Vedula, who leads the AI for Surgery Lab at Johns Hopkins. Building such trustworthy solutions depends critically on trusted datasets held to high standards of accuracy, reliability, findability, availability, interoperability, and reusability.
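As a hedged illustration of how those dataset standards could be made checkable in practice, the sketch below scores a dataset’s metadata against each quality listed above. The metadata field names are hypothetical stand-ins, not an established schema or anything presented at the session.

```python
# Illustrative scorecard only: maps each dataset quality named above to a
# hypothetical metadata field that would document it.
REQUIRED_FIELDS = {
    "accuracy": "label_audit_report",            # e.g., results of a label audit
    "reliability": "inter_annotator_agreement",  # e.g., agreement between annotators
    "findability": "persistent_identifier",      # e.g., a DOI
    "availability": "access_url",                # where the data can be obtained
    "interoperability": "data_format",           # an open, documented format
    "reusability": "license",                    # terms under which reuse is allowed
}

def scorecard(metadata: dict) -> dict:
    """Report which trust qualities a dataset's metadata documents."""
    return {quality: field in metadata for quality, field in REQUIRED_FIELDS.items()}

# Example: this dataset documents findability and reusability but nothing else.
print(scorecard({"persistent_identifier": "doi:10.0000/example", "license": "CC-BY-4.0"}))
```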

Swaroop Vedula, associate research professor