Abstract
We present an overview of the third edition of the CheckThat! Lab at CLEF 2020. The lab featured five tasks in Arabic and English, and here we focus on the three English tasks. Task 1 challenged participants to predict which tweets from a stream of tweets about COVID-19 are worth fact-checking. Task 2 asked participants to retrieve, from a set of previously fact-checked claims, verified claims that could help fact-check the claims made in an input tweet. Task 5 asked participants to determine which claims in a political debate or speech should be prioritized for fact-checking. A total of 18 teams participated in the English tasks, and most submissions achieved sizable improvements over the baselines using models based on BERT, LSTMs, and CNNs. In this paper, we describe the data collection process and the task setup, including the evaluation measures used, and we give a brief overview of the participating systems. Last but not least, we release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important tasks of check-worthiness estimation and detecting previously fact-checked claims.
| Original language | English |
|---|---|
| Journal | CEUR Workshop Proceedings |
| Volume | 2696 |
| Publication status | Published - 2020 |
| Event | 11th Conference and Labs of the Evaluation Forum (CLEF 2020), Online, Greece |
| Duration | 22 Sept 2020 → 25 Sept 2020 |
Keywords
- COVID-19
- Check-worthiness estimation
- Computational journalism
- Detecting previously fact-checked claims
- Fact-checking
- Social media verification
- Veracity
- Verified claims retrieval
Title: Overview of CheckThat! 2020 English: Automatic Identification and Verification of Claims in Social Media