Designing Multi-Robot Ground Video Sensemaking with Public Safety Professionals

Puqi Zhou1, Ali Asgarov2, Aafiya Hussain2, Wonjoon Park3, Amit Paudyal1, Sameep Shrestha1, Chia-wei Tang2, Michael F. Lighthiser1, Michael R. Hieb1, Xuesu Xiao1, Chris Thomas2, Sungsoo Ray Hong1
1George Mason University    2Virginia Tech    3University of Maryland, College Park
ACM CHI 2026

Abstract

Videos from fleets of ground robots can advance public safety by providing scalable situational awareness and reducing professionals’ burden. Yet little is known about how to design and integrate multi-robot videos into public safety workflows. Collaborating with six police agencies, we examined how such videos could be made practical for everyday operations. In Study 1, we identified 38 events-of-interest (EoI) relevant to public safety monitoring and 6 design requirements aimed at improving current video sensemaking practice. We also developed a dataset of 20 robot patrol videos (10 day, 10 night), each covering at least seven EoI types. In Study 2, we built MRVS, a tool that streams patrol videos and applies a video understanding model prompt-engineered around our EoI definitions. Participants reported reduced manual workload and greater confidence with LLM-based explanations, while noting concerns about false alarms and privacy. We conclude with implications for designing future multi-robot video sensemaking tools.
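To give a flavor of how EoI definitions might drive the video understanding step, the Python sketch below is a minimal illustration, not the authors' implementation: it folds a few example EoI definitions into a detection prompt and parses a structured model reply. Everything here (the example definitions, build_eoi_prompt, the JSON reply format) is hypothetical; the paper itself identifies 38 EoI types and pairs detections with LLM-based explanations.

import json
from dataclasses import dataclass

# Illustrative EoI definitions (hypothetical wording); the paper identifies 38 types.
EOI_DEFINITIONS = {
    "abandoned_object": "An unattended bag or package left in a public area.",
    "vehicle_in_restricted_zone": "A vehicle parked or moving where vehicles are prohibited.",
    "person_down": "A person lying on the ground and not moving.",
}

def build_eoi_prompt(definitions: dict[str, str]) -> str:
    """Fold EoI names and definitions into a single detection prompt."""
    lines = [f"- {name}: {desc}" for name, desc in definitions.items()]
    return (
        "You are monitoring patrol video from a ground robot.\n"
        "Report every event-of-interest (EoI) you observe, using only these types:\n"
        + "\n".join(lines)
        + "\nAnswer as JSON: "
        '[{"eoi": <type>, "timestamp_s": <float>, "explanation": <string>}]'
    )

@dataclass
class Detection:
    eoi: str
    timestamp_s: float
    explanation: str  # natural-language rationale shown to the operator

def parse_detections(model_output: str) -> list[Detection]:
    """Parse the model's JSON reply; drop entries with unknown EoI types."""
    raw = json.loads(model_output)
    return [Detection(**d) for d in raw if d.get("eoi") in EOI_DEFINITIONS]

if __name__ == "__main__":
    print(build_eoi_prompt(EOI_DEFINITIONS))

Carrying a natural-language explanation alongside each detection mirrors the finding above that LLM-based explanations increased participants' confidence in the tool's outputs.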


Video Presentation

Poster

BibTeX

@inproceedings{zhou2026designing,
  title={Designing Multi-Robot Ground Video Sensemaking with Public Safety Professionals},
  author={Zhou, Puqi and Asgarov, Ali and Hussain, Aafiya and Park, Wonjoon and Paudyal, Amit and Shrestha, Sameep and Tang, Chia-wei and Lighthiser, Michael F. and Hieb, Michael R. and Xiao, Xuesu and Thomas, Chris and Hong, Sungsoo Ray},
  booktitle={Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems},
  year={2026}
}