TY - GEN
T1 - Overview of the CLEF–2022 CheckThat! Lab on Fighting the COVID-19 Infodemic and Fake News Detection
AU - Nakov, Preslav
AU - Barrón-Cedeño, Alberto
AU - da San Martino, Giovanni
AU - Alam, Firoj
AU - Struß, Julia Maria
AU - Mandl, Thomas
AU - Míguez, Rubén
AU - Caselli, Tommaso
AU - Kutlu, Mucahid
AU - Zaghouani, Wajdi
AU - Li, Chengkai
AU - Shaar, Shaden
AU - Shahi, Gautam Kishore
AU - Mubarak, Hamdy
AU - Nikolov, Alex
AU - Babulkov, Nikolay
AU - Kartal, Yavuz Selim
AU - Wiegand, Michael
AU - Siegel, Melanie
AU - Köhler, Juliane
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022/8/25
Y1 - 2022/8/25
N2 - We describe the fifth edition of the CheckThat! lab, part of the 2022 Conference and Labs of the Evaluation Forum (CLEF). The lab evaluates technology supporting tasks related to factuality in multiple languages: Arabic, Bulgarian, Dutch, English, German, Spanish, and Turkish. Task 1 asks to identify relevant claims in tweets in terms of check-worthiness, verifiability, harmfulness, and attention-worthiness. Task 2 asks to detect previously fact-checked claims that could be relevant to fact-check a new claim. It targets both tweets and political debates/speeches. Task 3 asks to predict the veracity of the main claim in a news article. CheckThat! was the most popular lab at CLEF-2022 in terms of team registrations: 137 teams. More than one-third (37%) of them actually participated: 18, 7, and 26 teams submitted 210, 37, and 126 official runs for tasks 1, 2, and 3, respectively.
AB - We describe the fifth edition of the CheckThat! lab, part of the 2022 Conference and Labs of the Evaluation Forum (CLEF). The lab evaluates technology supporting tasks related to factuality in multiple languages: Arabic, Bulgarian, Dutch, English, German, Spanish, and Turkish. Task 1 asks to identify relevant claims in tweets in terms of check-worthiness, verifiability, harmfulness, and attention-worthiness. Task 2 asks to detect previously fact-checked claims that could be relevant to fact-check a new claim. It targets both tweets and political debates/speeches. Task 3 asks to predict the veracity of the main claim in a news article. CheckThat! was the most popular lab at CLEF-2022 in terms of team registrations: 137 teams. More than one-third (37%) of them actually participated: 18, 7, and 26 teams submitted 210, 37, and 126 official runs for tasks 1, 2, and 3, respectively.
KW - Check-Worthiness
KW - COVID-19
KW - Disinformation
KW - Fact-Checking
KW - Fake News
KW - Misinformation
KW - Verified Claim Retrieval
UR - https://www.scopus.com/pages/publications/85136954875
U2 - 10.1007/978-3-031-13643-6_29
DO - 10.1007/978-3-031-13643-6_29
M3 - Conference contribution
AN - SCOPUS:85136954875
SN - 9783031136429
T3 - Lecture Notes in Computer Science
SP - 495
EP - 520
BT - Experimental IR Meets Multilinguality, Multimodality, and Interaction - 13th International Conference of the CLEF Association, CLEF 2022, Proceedings
A2 - Barrón-Cedeño, Alberto
A2 - Da San Martino, Giovanni
A2 - Faggioli, Guglielmo
A2 - Ferro, Nicola
A2 - Degli Esposti, Mirko
A2 - Sebastiani, Fabrizio
A2 - Macdonald, Craig
A2 - Pasi, Gabriella
A2 - Hanbury, Allan
A2 - Potthast, Martin
PB - Springer Science and Business Media Deutschland GmbH
T2 - 13th International Conference of the CLEF Association, CLEF 2022
Y2 - 5 September 2022 through 8 September 2022
ER -