CrisisMMD: Multimodal Twitter Datasets from Seven Natural Disasters

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

During natural and man-made disasters, people use social media platforms such as Twitter to post textual and multimedia content reporting on injured or dead people, infrastructure damage, and missing or found people, among other information types. Studies have revealed that this online information, if processed in a timely and effective manner, is extremely useful for humanitarian organizations to gain situational awareness and plan relief operations. In addition to the analysis of textual content, recent studies have shown that imagery content on social media can significantly boost disaster response. While extensive research has focused on extracting useful information from textual content, limited work has examined imagery content or the combination of both content types. One of the reasons is the lack of labeled imagery data in this domain. Therefore, in this paper, we aim to tackle this limitation by releasing a large multimodal dataset collected from Twitter during natural disasters. We provide three types of annotations, which are useful for addressing a number of crisis response and management tasks for different humanitarian organizations.
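As a rough illustration of how such annotation releases are typically consumed, the sketch below tallies labels in one column of a tab-separated annotation file. The column names (`text_info`, `image_info`) and label values are hypothetical stand-ins, not the dataset's actual schema; only the general TSV-of-labels shape is assumed.

```python
import csv
import io
from collections import Counter

# Hypothetical sample mimicking a multimodal annotation file:
# each row pairs a tweet's text label with its image label.
# Column names and label values are illustrative assumptions.
SAMPLE_TSV = """tweet_id\ttext_info\timage_info
1\tinformative\tinformative
2\tnot_informative\tinformative
3\tinformative\tnot_informative
"""

def label_counts(tsv_text, column):
    """Count annotation labels appearing in one column of a TSV file."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return Counter(row[column] for row in reader)

print(label_counts(SAMPLE_TSV, "text_info"))
# Counter({'informative': 2, 'not_informative': 1})
```

Comparing the per-column counts for the text and image labels is one quick way to see how often the two modalities of the same tweet receive different annotations.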
Original language: English
Title of host publication: Proceedings of the Twelfth International AAAI Conference on Web and Social Media (ICWSM 2018)
Publisher: AAAI Press
Pages: 465-473
Number of pages: 9
Volume: 12
Edition: 1
Publication status: Published - 15 Jun 2018

Keywords

  • Multimodal
  • Twitter datasets
  • Textual and multimedia content
  • Natural disasters
