Overview of the CLEF-2025 CheckThat! Lab Task 1 on Subjectivity in News Articles

  • Federico Ruggeri*
  • Arianna Muti
  • Katerina Korre
  • Julia Maria Struß
  • Melanie Siegel
  • Michael Wiegand
  • Firoj Alam
  • Md Rafiul Biswas
  • Wajdi Zaghouani
  • Maria Nawrocka
  • Bogdan Ivasiuk
  • Gogu Razvan
  • Andreiana Mihail

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

7 Citations (Scopus)

Abstract

We present an overview of Task 1 of the eighth edition of the CheckThat! lab at the 2025 edition of the Conference and Labs of the Evaluation Forum (CLEF). The task required participants to determine whether individual sentences from news articles expressed subjective viewpoints, such as opinions or personal bias, or presented objective, fact-based information. The task was offered in nine languages: Arabic, Bulgarian, English, German, Italian, Greek, Polish, Romanian, and Ukrainian, as well as in a multilingual setting. We curated datasets for each language, comprising roughly 14,000 sentences sourced from diverse news outlets. Participants were tasked with developing classification systems to identify subjectivity (personal opinions or biases) and objectivity (factual information) at the sentence level. A total of 22 teams participated in the task, submitting 436 valid runs across all language tracks. Most systems were based on transformer models, with approaches ranging from fine-tuning language-specific and multilingual encoders to applying English-centric models in combination with machine translation. Several teams also experimented with ensemble techniques, handcrafted features, and in-context learning using large language models. Systems were evaluated using the macro-averaged F1 score to ensure equal weighting of the subjective and objective classes. Performance varied considerably by language: German, Italian, English, and Romanian yielded the highest results. In contrast, Greek and Ukrainian emerged as the most challenging languages, with no team surpassing the 0.65 and 0.51 F1 score marks, respectively. Task 1 offers a valuable benchmark for the development and evaluation of multilingual subjectivity detection systems. This paper presents an overview of Task 1, including datasets, system strategies, and outcomes, contributing to broader research efforts aimed at improving the transparency and trustworthiness of automated content analysis.
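The macro-averaged F1 evaluation mentioned in the abstract can be sketched as below. This is an illustrative implementation, not the lab's official scorer; the label names `SUBJ` and `OBJ` are assumptions for the two-class setting. Macro-averaging computes F1 per class and takes the unweighted mean, so the (often minority) subjective class counts as much as the objective class.

```python
def macro_f1(y_true, y_pred, labels=("SUBJ", "OBJ")):
    """Macro-averaged F1: compute F1 for each class separately,
    then average with equal weight, regardless of class frequency."""
    f1_scores = []
    for label in labels:
        # Per-class confusion counts, treating `label` as the positive class.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)
```

For example, a system that misses one of two subjective sentences is penalized more under macro-F1 than under accuracy, which is why the organizers use it on class-imbalanced sentence-level data.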

Original language: English
Pages (from-to): 681-694
Number of pages: 14
Journal: CEUR Workshop Proceedings
Volume: 4038
Publication status: Published - 2025
Event: 26th Working Notes of the Conference and Labs of the Evaluation Forum, CLEF 2025 - Madrid, Spain
Duration: 9 Sept 2025 - 12 Sept 2025

Keywords

  • fact-checking
  • misinformation detection
  • subjectivity classification
