TY - GEN
T1 - MSSP
T2 - 2024 International Joint Conference on Neural Networks, IJCNN 2024
AU - Di, Yang
AU - Phung, Son Lam
AU - Bouzerdoum, Abdesselam
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024/7/5
Y1 - 2024/7/5
N2 - Finding safe paths in autonomous or assistive navigation systems is a challenging task. In this paper, we introduce MSSP, a novel multi-perspective street scene perception benchmark dataset for navigation assistance. Compared to single-view perception, multi-view perception provides pedestrians with more comprehensive information about their surroundings, enhancing safety. The MSSP dataset includes 2,044 samples from complex scenes in both first-person and third-person perspectives, covering four categories: Sidewalk, Traffic lane, Verge, and Lawn. The dataset primarily focuses on pedestrian navigation assistance and is the first pedestrian multi-perspective dataset. Furthermore, in comparison to single-perspective datasets in other studies, MSSP contains more categories. The dataset supports pixel-level semantic segmentation approaches. We also provide experimental results of recent semantic segmentation methods on this dataset for evaluation. Specifically, the transformer-based SegFormer method outperforms the others on MSSP. Our dataset is accessible at: https://github.com/yangdi-cv/MSSP.
AB - Finding safe paths in autonomous or assistive navigation systems is a challenging task. In this paper, we introduce MSSP, a novel multi-perspective street scene perception benchmark dataset for navigation assistance. Compared to single-view perception, multi-view perception provides pedestrians with more comprehensive information about their surroundings, enhancing safety. The MSSP dataset includes 2,044 samples from complex scenes in both first-person and third-person perspectives, covering four categories: Sidewalk, Traffic lane, Verge, and Lawn. The dataset primarily focuses on pedestrian navigation assistance and is the first pedestrian multi-perspective dataset. Furthermore, in comparison to single-perspective datasets in other studies, MSSP contains more categories. The dataset supports pixel-level semantic segmentation approaches. We also provide experimental results of recent semantic segmentation methods on this dataset for evaluation. Specifically, the transformer-based SegFormer method outperforms the others on MSSP. Our dataset is accessible at: https://github.com/yangdi-cv/MSSP.
KW - Assistive navigation
KW - Benchmark dataset
KW - Multi-view
KW - Scene perception
KW - Semantic segmentation
UR - https://www.scopus.com/pages/publications/85204969345
U2 - 10.1109/IJCNN60899.2024.10651459
DO - 10.1109/IJCNN60899.2024.10651459
M3 - Conference contribution
AN - SCOPUS:85204969345
SN - 979-8-3503-5932-9
T3 - IEEE International Joint Conference on Neural Networks (IJCNN)
BT - 2024 International Joint Conference on Neural Networks, IJCNN 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 30 June 2024 through 5 July 2024
ER -