TY - GEN
T1 - Overview of the CLEF-2025 CheckThat! Lab
T2 - 16th International Conference of the Cross-Language Evaluation Forum for European Languages, CLEF 2025
AU - Alam, Firoj
AU - Struß, Julia Maria
AU - Chakraborty, Tanmoy
AU - Dietze, Stefan
AU - Hafid, Salim
AU - Korre, Katerina
AU - Muti, Arianna
AU - Nakov, Preslav
AU - Ruggeri, Federico
AU - Schellhammer, Sebastian
AU - Setty, Vinay
AU - Sundriyal, Megha
AU - Todorov, Konstantin
AU - Venktesh, V.
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2026.
PY - 2026
Y1 - 2026
N2 - This paper presents the eighth edition of the CheckThat! lab, part of the 2025 Conference and Labs of the Evaluation Forum (CLEF). As in previous editions of CheckThat!, the lab offers tasks from the core of the verification pipeline, including check-worthiness, identifying previously fact-checked claims, supporting evidence retrieval, and claim verification, as well as auxiliary tasks addressing different facets of individual steps of the pipeline: Task 1 is on identification of subjectivity (a follow-up of the CheckThat! 2024 edition), which is related to the check-worthiness task; Task 2 is on claim normalization; Task 3 addresses fact-checking numerical claims; and Task 4 focuses on scientific web discourse processing. These challenging classification and retrieval problems are offered in mono-, multi-, and cross-lingual settings covering more than 20 languages. This year, CheckThat! was one of the most popular labs at CLEF-2025 in terms of team registrations: 177 teams registered, almost half of them (a total of 83 teams) actually participated, and 54 submitted system description papers.
AB - This paper presents the eighth edition of the CheckThat! lab, part of the 2025 Conference and Labs of the Evaluation Forum (CLEF). As in previous editions of CheckThat!, the lab offers tasks from the core of the verification pipeline, including check-worthiness, identifying previously fact-checked claims, supporting evidence retrieval, and claim verification, as well as auxiliary tasks addressing different facets of individual steps of the pipeline: Task 1 is on identification of subjectivity (a follow-up of the CheckThat! 2024 edition), which is related to the check-worthiness task; Task 2 is on claim normalization; Task 3 addresses fact-checking numerical claims; and Task 4 focuses on scientific web discourse processing. These challenging classification and retrieval problems are offered in mono-, multi-, and cross-lingual settings covering more than 20 languages. This year, CheckThat! was one of the most popular labs at CLEF-2025 in terms of team registrations: 177 teams registered, almost half of them (a total of 83 teams) actually participated, and 54 submitted system description papers.
KW - Check-Worthiness
KW - Claim verification
KW - Fact-Checking
KW - Subjectivity
UR - https://www.scopus.com/pages/publications/105023509798
U2 - 10.1007/978-3-032-04354-2_13
DO - 10.1007/978-3-032-04354-2_13
M3 - Conference contribution
AN - SCOPUS:105023509798
SN - 9783032043535
VL - 16089
T3 - Lecture Notes in Computer Science
SP - 199
EP - 223
BT - Experimental IR Meets Multilinguality, Multimodality, and Interaction, CLEF 2025
A2 - Carrillo-de-Albornoz, J
A2 - DeHerrera, AGS
A2 - Gonzalo, J
A2 - Plaza, L
A2 - Mothe, J
A2 - Piroi, F
A2 - Rosso, P
A2 - Spina, D
A2 - Faggioli, G
A2 - Ferro, N
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 9 September 2025 through 12 September 2025
ER -