A temporal–spatial deep learning framework leveraging dynamic 3D attention maps for violence detection

Abstract
In intelligent systems for real-time security and safety monitoring, the proliferation of surveillance cameras has fueled growing interest in deep learning-based artificial intelligence (AI) models for violence detection. Most current approaches treat violence detection as a video classification task, overlooking the fact that violent activities occur within relatively small spatiotemporal regions. Moreover, these activities depend on relationships among multiple such regions, so single-region analysis is inadequate, especially for larger-scale violence. This paper proposes a novel temporal–spatial attention framework inspired by human visual perception, which dynamically focuses on multiple informative regions across space and time. By learning where, when, and for how long to attend within a video, using dynamic three-dimensional attention prediction networks, the model captures complex patterns of violent behavior more effectively. Experiments on four public benchmark datasets and a real-world dataset created for this study demonstrate that the proposed approach outperforms existing methods in accuracy and interpretability.
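To make the idea of a dynamic 3D attention map concrete, the following is a minimal NumPy sketch, not the authors' implementation: it assumes the attention is realized as a softmax-normalized map over the (time, height, width) grid that reweights a backbone's feature volume, so informative spatiotemporal regions contribute more to the final classification. All names and shapes here are illustrative assumptions.

```python
import numpy as np

def spatiotemporal_attention(features, attn_logits):
    """Apply a 3D attention map over (T, H, W) to video features.

    features:    (T, H, W, C) feature volume from a backbone CNN (assumed)
    attn_logits: (T, H, W) unnormalized attention scores (assumed)
    Returns the attended features and the normalized 3D attention map.
    """
    # Softmax over all T*H*W positions so the map sums to 1
    flat = attn_logits.reshape(-1)
    flat = np.exp(flat - flat.max())
    attn = (flat / flat.sum()).reshape(attn_logits.shape)
    # Broadcast the map over the channel axis and weight the features
    attended = features * attn[..., None]
    return attended, attn

# Toy example: 8 frames, a 4x4 spatial grid, 16 feature channels
T, H, W, C = 8, 4, 4, 16
rng = np.random.default_rng(0)
feats = rng.standard_normal((T, H, W, C))
logits = rng.standard_normal((T, H, W))
attended, attn = spatiotemporal_attention(feats, logits)
print(attended.shape, round(attn.sum(), 6))  # (8, 4, 4, 16) 1.0
```

Because the map is normalized jointly over time and space, a single softmax can concentrate mass on several disjoint regions at once, which matches the paper's premise that violence involves relationships among multiple small spatiotemporal regions rather than one.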
| Original language | English |
|---|---|
| Pages (from-to) | 26689-26709 |
| Number of pages | 21 |
| Journal | Neural Computing and Applications |
| Volume | 37 |
| Issue number | 32 |
| DOIs | |
| Publication status | Published - Nov 2025 |
Keywords
- 3D spatiotemporal attention maps
- Computer vision
- Residual convolutional neural network
- Video surveillance
- Violence detection