Z-Index at CheckThat! 2023: Unimodal and Multimodal Check-Worthiness Classification

Prerona Tarannum, Md Arid Hasan*, Firoj Alam, Sheak Rashed Haider Noori

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

1 Citation (Scopus)

Abstract

In this study, we report our participation in CheckThat! lab's Task 1. The aim is to determine whether a claim made in either unimodal or multimodal content is worth fact-checking. We implemented standard preprocessing and fine-tuned the XLM-RoBERTa-large model. Additionally, we applied zero-shot learning and utilized a feed-forward network with embeddings for unimodal content. For the subtask 1A submission, we combined BERT-based models (BERT and multilingual BERT), ResNet50, and a feed-forward network, ranking 3rd (Arabic) and 5th (English). For the subtask 1B submission, we used a feed-forward network with embeddings and ranked 3rd in Arabic and 6th in both English and Spanish. In further experiments, our evaluation shows that the XLM-RoBERTa-large model outperforms the other models.
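As a rough illustration of the fine-tuning setup described in the abstract, the following is a minimal sketch of fine-tuning XLM-RoBERTa-large for binary check-worthiness classification using the Hugging Face transformers and datasets libraries. The file names, column names, and hyperparameters are illustrative assumptions, not the authors' actual configuration.

    # Minimal sketch of fine-tuning XLM-RoBERTa-large for binary
    # check-worthiness classification. Assumes hypothetical CSV files
    # with "text" and "label" columns; hyperparameters are illustrative.
    from datasets import load_dataset
    from transformers import (
        AutoModelForSequenceClassification,
        AutoTokenizer,
        Trainer,
        TrainingArguments,
    )

    model_name = "xlm-roberta-large"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    # Hypothetical data files; replace with the actual task splits.
    dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})

    def tokenize(batch):
        # Tokenize claim text, truncating/padding to a fixed length.
        return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

    dataset = dataset.map(tokenize, batched=True)

    args = TrainingArguments(
        output_dir="xlmr-checkworthy",
        learning_rate=2e-5,
        per_device_train_batch_size=16,
        num_train_epochs=3,
    )

    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=dataset["train"],
        eval_dataset=dataset["validation"],
    )
    trainer.train()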

Original language: English
Pages (from-to): 482-493
Number of pages: 12
Journal: CEUR Workshop Proceedings
Volume: 3497
Publication status: Published - 2023
Event: 24th Working Notes of the Conference and Labs of the Evaluation Forum, CLEF-WN 2023 - Thessaloniki, Greece
Duration: 18 Sept 2023 – 21 Sept 2023

Keywords

  • Checkworthiness identification
  • Feed Forward Network
  • Misinformation
  • Multigenre Fact-checking
  • Multimodal Fact-checking
  • ResNet50
  • XLM-RoBERTa-large
