A temporal–spatial deep learning framework leveraging dynamic 3D attention maps for violence detection

Elizabeth B. Varghese*, Almiqdad Elzein, Yin Yang, Marwa Qaraqe

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In intelligent systems for real-time security and safety monitoring, the proliferation of surveillance cameras has fueled growing interest in deep learning-based artificial intelligence (AI) models for violence detection. Most current approaches treat violence detection as a video classification task, overlooking the fact that violent activities occur within relatively small spatiotemporal regions. Moreover, these activities depend on relationships among multiple such regions, making single-region analysis inadequate, especially for larger-scale violence. This paper proposes a novel temporal–spatial attention framework inspired by human visual perception that dynamically focuses on multiple informative regions across space and time. By learning where, when, and for how long to attend within a video via dynamic three-dimensional attention prediction networks, the model captures complex patterns of violent behavior more effectively. Experiments on four public benchmark datasets and a real-world dataset created for this study demonstrate that the proposed approach outperforms existing methods in accuracy and interpretability.
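To make the core idea concrete, the following is a minimal sketch of a 3D spatiotemporal attention map applied to a video feature volume, as the abstract describes. This is an illustrative toy, not the paper's architecture: the linear attention predictor `w`, the feature shapes, and the weighted-pooling step are all hypothetical simplifications of the dynamic attention prediction networks the authors propose.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over all elements
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_3d(features, w):
    # features: (T, H, W, C) video feature volume (e.g. from a 3D CNN backbone)
    # w: (C,) hypothetical linear attention-predictor weights
    scores = features @ w                               # (T, H, W) attention logits
    attn = softmax(scores)                              # normalized 3D map over space-time
    # weighted pooling: collapse space-time into a single clip descriptor,
    # so regions with high attention dominate the representation
    pooled = (attn[..., None] * features).sum(axis=(0, 1, 2))  # (C,)
    return attn, pooled

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 8, 16))  # toy clip: 4 frames, 8x8 grid, 16 channels
w = rng.standard_normal(16)
attn, desc = attend_3d(feats, w)
```

Because the attention map is normalized jointly over time and space, the model can concentrate weight on a few small spatiotemporal regions, which is the behavior the abstract contrasts with whole-clip classification.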

Original language: English
Pages (from-to): 26689-26709
Number of pages: 21
Journal: Neural Computing and Applications
Volume: 37
Issue number: 32
DOIs
Publication status: Published - Nov 2025

Keywords

  • 3D spatiotemporal attention maps
  • Computer vision
  • Residual convolutional neural network
  • Video surveillance
  • Violence detection

