Johns Hopkins engineers develop an artificial intelligence system that detects and catalogs traffic signs, paving the way for safer, more efficient roadways

Each day, millions of people across the United States travel on roads and highways, relying on a complex transportation infrastructure to reach their destinations. Keeping vehicle transportation safe, efficient, and sustainable depends on skillful management of roads, bridges, and traffic signs—critically important elements that help drivers navigate. However, maintaining this extensive network of traffic signs requires significant time and resources.

The scarcity of localized traffic sign data inspired a team of researchers, including Johns Hopkins' Hao (Frank) Yang, assistant professor of civil and systems engineering and member of the Johns Hopkins Institute for Assured Autonomy, and Chenxi Liu of the University of Utah, to develop an artificial intelligence system that can detect and catalog traffic signs with near-perfect accuracy. Their Traffic Sign Detection and Recognition (TSDR) architecture, described in the Journal of Transportation Engineering, could be used by transportation engineers and city planners to improve sign maintenance and reduce the cost of manual surveying.

“We developed an innovative architecture that automates capturing, classifying, and inventorying traffic signs specifically tailored for the U.S., addressing a critical gap caused by the lack of localized data,” explains Yang.

Using the Google Maps application programming interface and dash cameras, the research team collected 5,000 images of traffic signs from Washington State. After manually labeling the images into 43 classes, the team trained a machine learning model to detect traffic signs in the images and classify each one into the appropriate category.
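The detect-then-classify-then-inventory flow described above can be sketched in a few lines of Python. This is an illustrative toy only, not the team's TSDR model: a nearest-centroid classifier over made-up 2-D feature vectors stands in for the trained deep recognizer, and the three class names shown are hypothetical examples of the 43 real sign classes.

```python
# Toy sketch of a detect -> classify -> inventory pipeline.
# NOT the actual TSDR model: a nearest-centroid classifier over
# hypothetical 2-D features stands in for the deep recognizer,
# and the class names below are invented examples.
from collections import Counter

def classify(feature, centroids):
    """Assign a feature vector to the nearest class centroid
    (stand-in for the recognition step)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda name: sq_dist(feature, centroids[name]))

def build_inventory(detections, centroids):
    """Catalog detected sign features into per-class counts,
    the way a sign inventory aggregates survey results."""
    return Counter(classify(f, centroids) for f in detections)

# Hypothetical centroids: pretend each sign class maps to a 2-D feature.
centroids = {
    "stop": (1.0, 0.0),
    "yield": (0.0, 1.0),
    "speed_limit_25": (1.0, 1.0),
}
# Features extracted from four "detected" signs in dashcam imagery.
detections = [(0.9, 0.1), (0.1, 0.95), (1.1, 0.9), (0.95, 0.05)]

inventory = build_inventory(detections, centroids)
print(dict(inventory))  # {'stop': 2, 'yield': 1, 'speed_limit_25': 1}
```

In the real system the classifier is a model trained on the 5,000 labeled images, but the aggregation step at the end is the same idea: turning per-image predictions into a queryable inventory of sign counts by class.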

The result was an automated pipeline that produced the first localized traffic sign inventory of its kind for the United States. Although the images used to train and validate the model came from Washington State, the TSDR system that Yang and his collaborators developed applies to broader regions because federal guidelines mandate standardized traffic sign designs throughout the U.S. By focusing on key visual features and categories that are common nationwide, the model accurately captures the majority of sign types found across diverse geographic contexts. To further support scalability beyond Washington State, the team plans to continually refine the model with additional geographically diverse datasets, ensuring the inventory remains robust and representative nationwide.

“Leveraging over 5,000 manually labeled traffic sign images from Washington State, our TSDR model achieved impressive accuracies of 98.34% in detection and 97.10% in recognition. This approach significantly enhances asset management in transportation, ensuring safer, more efficient, and sustainable travel networks,” Yang says.

Looking forward, the research team plans to integrate large language models to enhance decision-making capabilities related to traffic sign maintenance. By leveraging these LLMs, the system could not only detect and classify signs but also intelligently predict maintenance needs, identify damaged or obscured signage, and prioritize repair tasks based on safety-critical factors. This AI-driven decision support approach could significantly streamline infrastructure management and improve road safety. Yang says that the team remains committed to reproducibility and transparency, and they plan to publicly release their annotated dataset and training algorithms, allowing transportation agencies nationwide to adapt and implement this technology. Future research directions include addressing challenges associated with poorly lit, obstructed, or damaged signs, ensuring that this innovative system robustly performs under diverse real-world conditions.