Evaluating Robustness of LLMs on Crisis-Related Microblogs across Events, Information Types, and Linguistic Features

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The widespread use of microblogging platforms like X (formerly Twitter) during disasters provides real-time information to governments and response authorities. However, the data from these platforms is often noisy, requiring automated methods to filter relevant information. Traditionally, supervised machine learning models have been used for this task, but they lack generalizability. In contrast, Large Language Models (LLMs) show better capabilities in understanding and processing natural language out of the box. This paper provides a detailed analysis of the performance of six well-known LLMs in processing disaster-related social media data from a large set of real-world events. Our findings indicate that while LLMs, particularly GPT-4o and GPT-4, offer better generalizability across different disasters and information types, most LLMs face challenges in processing flood-related data, show minimal improvement despite the provision of examples (i.e., shots), and struggle to identify critical information categories like urgent requests and needs. Additionally, we examine how various linguistic features affect model performance and highlight LLMs' vulnerabilities to certain features like typos. Lastly, we provide benchmarking results for all events across both zero- and few-shot settings and observe that proprietary models outperform open-source ones in all tasks.

Original language: English
Title of host publication: WWW 2025 - Proceedings of the ACM Web Conference
Publisher: Association for Computing Machinery, Inc
Pages: 5117-5126
Number of pages: 10
ISBN (Electronic): 9798400712746
Publication status: Published - 22 Apr 2025
Event: 34th ACM Web Conference, WWW 2025 - Sydney, Australia
Duration: 28 Apr 2025 - 2 May 2025

Publication series

Name: WWW 2025 - Proceedings of the ACM Web Conference

Conference

Conference: 34th ACM Web Conference, WWW 2025
Country/Territory: Australia
City: Sydney
Period: 28/04/25 - 02/05/25

Keywords

  • disaster response
  • Large language models
  • LLM benchmarking
  • LLM evaluation
  • social media
