Semantic-driven multi-camera pedestrian detection
Entity: UAM. Departamento de Tecnología Electrónica y de las Comunicaciones
Publisher: Springer Nature
Date: 2022-05-01
Citation: Knowledge and Information Systems 64.5 (2022): 1211-1237
ISSN: 0219-1377 (print); 0219-3116 (online)
DOI: 10.1007/s10115-022-01673-w
Funded by: This study has been partially supported by the Spanish Government through its TEC2017-88169-R MobiNetVideo project
Project: Gobierno de España. TEC2017-88169-R
Editor's Version: https://doi.org/10.1007/s10115-022-01673-w
Subjects: Multi-camera systems; Pedestrian detection; Semantic segmentation; Video surveillance; Telecomunicaciones
Note: The version of record of this article, first published in Knowledge and Information Systems, is available online at the publisher's website: http://dx.doi.org/10.1007/s10115-022-01673-w
Rights: © The author(s)
Abstract: In the current worldwide situation, pedestrian detection has reemerged as a pivotal tool for intelligent video-based systems aiming to solve tasks such as pedestrian tracking, social distancing monitoring, or pedestrian mass counting. Pedestrian detection methods, even the top-performing ones, are highly sensitive to occlusions among pedestrians, which dramatically degrades their performance in crowded scenarios. The generalization of multi-camera setups makes it possible to better handle occlusions by combining information from different viewpoints. In this paper, we present a multi-camera approach that globally combines pedestrian detections by leveraging automatically extracted scene context. Unlike most state-of-the-art methods, the proposed approach is scene-agnostic: it requires no tailored adaptation to the target scenario (e.g., via fine-tuning). Because no ad hoc training with labeled data is needed, the proposed method can be deployed quickly in real-world situations. Context information, obtained via semantic segmentation, is used (1) to automatically generate a common area of interest for the scene and all the cameras, avoiding the usual need to define it manually, and (2) to obtain detections for each camera by solving a global optimization problem that maximizes the coherence of detections both in each 2D image and in the 3D scene. This process yields tightly fitted bounding boxes that are robust to occlusions and missed detections. Experimental results on five publicly available datasets show that the proposed approach outperforms state-of-the-art multi-camera pedestrian detectors, even some specifically trained on the target scenario, demonstrating the versatility and robustness of the proposed method, which requires neither ad hoc annotations nor human-guided configuration.
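The automatic area-of-interest step described in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's implementation: it assumes each camera provides a binary "ground" mask from semantic segmentation and a known homography from the common ground plane to that camera's image; function names and the grid resolution are hypothetical.

```python
import numpy as np

def common_area_of_interest(ground_masks, homographies, grid_shape=(100, 100)):
    """Intersect per-camera walkable-ground masks on a shared ground plane.

    ground_masks: list of HxW boolean arrays (True = pixel labeled 'ground'
        by semantic segmentation; hypothetical input format).
    homographies: list of 3x3 arrays mapping ground-plane points (x, y, 1)
        to homogeneous image coordinates of each camera.
    Returns a boolean ground-plane grid: True where every camera sees ground.
    """
    gh, gw = grid_shape
    ys, xs = np.mgrid[0:gh, 0:gw]
    # 3 x N homogeneous ground-plane points (cell centers).
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(gh * gw)])
    aoi = np.ones(gh * gw, dtype=bool)
    for mask, H in zip(ground_masks, homographies):
        img_pts = H @ pts
        u = img_pts[0] / img_pts[2]          # image column
        v = img_pts[1] / img_pts[2]          # image row
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        visible = np.zeros(gh * gw, dtype=bool)
        ui = np.clip(u.astype(int), 0, w - 1)
        vi = np.clip(v.astype(int), 0, h - 1)
        # A cell counts as ground for this camera only if it projects
        # inside the frame onto a ground-labeled pixel.
        visible[inside] = mask[vi[inside], ui[inside]]
        aoi &= visible                        # intersection over cameras
    return aoi.reshape(gh, gw)
```

The intersection over all cameras yields a common area of interest without any manual scene annotation, mirroring the scene-agnostic spirit of the paper.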
Authors: López Cifuentes, Alejandro; Escudero Viñolo, Marcos; Bescos Cano, Jesús; Carballeira López, Pablo