
Work Zone Safety Information Clearinghouse

Library of Resources to Improve Roadway Work Zone Safety for All Roadway Users


Publication

ROADWork: A Dataset and Benchmark for Learning to Recognize, Observe, Analyze and Drive Through Work Zones

Author/Presenter: Ghosh, Anurag; Zheng, Shen; Tamburo, Robert; Vuong, Khiem; Alvarez-Padilla, Juan; Zhu, Hailiang; Cardei, Michael; Dunn, Nicholas; Mertz, Christoph; Narasimhan, Srinivasa G.
Abstract:

Perceiving and autonomously navigating through work zones is a challenging and underexplored problem. Open datasets for this long-tailed scenario are scarce. We propose the ROADWork dataset to learn to recognize, observe, analyze, and drive through work zones. State-of-the-art foundation models fail when applied to work zones. Fine-tuning models on our dataset significantly improves perception and navigation in work zones. With ROADWork, we discover new work zone images with higher precision (+32.5%) at a much higher rate (12.8×) around the world. Open-vocabulary methods fail too, whereas fine-tuned detectors improve performance (+32.2 AP). Vision-Language Models (VLMs) struggle to describe work zones, but fine-tuning substantially improves performance (+36.7 SPICE). Beyond fine-tuning, we show the value of simple techniques. Video label propagation provides additional gains (+2.6 AP) for instance segmentation. For reading work zone signs, composing a detector and text spotter via crop-scaling improves performance (+14.2% 1-NED). Composing work zone detections to provide context further reduces hallucinations (+3.9 SPICE) in VLMs. We predict navigational goals and compute drivable paths from work zone videos. Incorporating road work semantics ensures 53.6% of goals have angular error (AE) < 0.5° (+9.9%) and 75.3% of pathways have AE < 0.5° (+8.1%).
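Two of the "simple techniques" the abstract names can be sketched in a few lines. The functions below are illustrative assumptions, not the authors' code: a crop-and-upscale step of the kind used to compose a sign detector with a text spotter (here, integer nearest-neighbor upscaling of a detected box), and one common formulation of the angular error (AE) used to score predicted goal directions against ground truth.

```python
import numpy as np

def crop_and_scale(img, box, scale=4):
    """Crop a detected sign region and upscale it before text spotting.

    img   -- HxW (or HxWxC) image array
    box   -- (x1, y1, x2, y2) pixel coordinates from a detector
    scale -- integer upscaling factor (nearest-neighbor via repetition)
    """
    x1, y1, x2, y2 = box
    crop = img[y1:y2, x1:x2]
    # Repeat rows then columns to enlarge the crop for the text spotter.
    return np.repeat(np.repeat(crop, scale, axis=0), scale, axis=1)

def angular_error(pred, gt):
    """Angle in degrees between predicted and ground-truth 2D direction vectors.

    Assumes goals are expressed as direction vectors; the paper's exact
    formulation may differ.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    cos = np.dot(pred, gt) / (np.linalg.norm(pred) * np.linalg.norm(gt))
    # Clip to guard against floating-point values just outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

With metrics like these, "53.6% of goals have AE < 0.5°" is simply the fraction of per-frame `angular_error` values falling under the 0.5° threshold.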

Source: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)
Publication Date: 2025
Full Text URL: Link to URL
Publication Types: Books, Reports, Papers, and Research Articles
Topics: Automatic Data Collection Systems; Computer Vision; Detection and Identification; Work Zones

Copyright © 2026 American Road & Transportation Builders Association (ARTBA). The National Work Zone Safety Information Clearinghouse is a project of the ARTBA Transportation Development Foundation. It is operated in cooperation with the U.S. Federal Highway Administration and Texas A&M Transportation Institute.