Abstract
In this study, we report our participation in the CheckThat! lab's Task 1, which aims to determine whether a claim made in unimodal or multimodal content is worth fact-checking. We applied standard preprocessing and fine-tuned the XLM-RoBERTa-large model. Additionally, we applied zero-shot learning and used a feed-forward network with embeddings for unimodal content. For the subtask 1A submission, we combined BERT-based models (BERT and multilingual BERT), ResNet50, and a feed-forward network, ranking 3rd in Arabic and 5th in English. For the subtask 1B submission, we used a feed-forward network with embeddings and ranked 3rd in Arabic and 6th in both English and Spanish. In further experiments, our evaluation shows that the XLM-RoBERTa-large model outperforms the other models.
| Original language | English |
|---|---|
| Pages (from-to) | 482-493 |
| Number of pages | 12 |
| Journal | CEUR Workshop Proceedings |
| Volume | 3497 |
| Publication status | Published - 2023 |
| Event | 24th Working Notes of the Conference and Labs of the Evaluation Forum, CLEF-WN 2023 - Thessaloniki, Greece, 18-21 Sept 2023 |
Keywords
- Checkworthiness identification
- Feed-Forward Network
- Misinformation
- Multigenre Fact-checking
- Multimodal Fact-checking
- ResNet50
- XLM-RoBERTa-large