Recipe1M+: A Dataset for Learning Cross-Modal Embeddings for Cooking Recipes and Food Images

  • Javier Marin*
  • Aritro Biswas
  • Ferda Ofli
  • Nicholas Hynes
  • Amaia Salvador
  • Yusuf Aytar
  • Ingmar Weber
  • Antonio Torralba

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

175 Citations (Scopus)

Abstract

In this paper, we introduce Recipe1M+, a new large-scale, structured corpus of over one million cooking recipes and 13 million food images. As the largest publicly available collection of recipe data, Recipe1M+ affords the ability to train high-capacity models on aligned, multimodal data. Using these data, we train a neural network to learn a joint embedding of recipes and images that yields impressive results on an image-recipe retrieval task. Moreover, we demonstrate that regularization via the addition of a high-level classification objective both improves retrieval performance to rival that of humans and enables semantic vector arithmetic. We postulate that these embeddings will provide a basis for further exploration of the Recipe1M+ dataset and food and cooking in general. Code, data and models are publicly available at http://im2recipe.csail.mit.edu.
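To make the image-recipe retrieval setting described above concrete, here is a minimal NumPy sketch, not the paper's model: once a joint embedding has been learned, each image and each recipe is a vector in the same space, and retrieval reduces to ranking recipes by cosine similarity to a query image. All function names and the toy vectors below are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Scale each vector to unit length so a dot product equals cosine similarity.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def retrieve(image_emb, recipe_embs):
    # Rank all recipe embeddings by cosine similarity to one image embedding;
    # returns recipe indices, best match first.
    sims = l2_normalize(recipe_embs) @ l2_normalize(image_emb)
    return np.argsort(-sims)

# Toy joint space: three recipe embeddings and an image embedding
# that lies closest to recipe 1 (hypothetical 2-D vectors for illustration).
recipes = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [0.7, 0.7]])
image = np.array([0.1, 0.9])

ranking = retrieve(image, recipes)
print(ranking)  # -> [1 2 0]: recipe 1 is retrieved first
```

The classification regularizer mentioned in the abstract would, during training, add a semantic-category prediction loss on top of the shared space; at retrieval time the procedure stays exactly this nearest-neighbor search.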

Original language: English
Article number: 8758197
Pages (from-to): 187-203
Number of pages: 17
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 43
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2021

Keywords

  • Cross-modal
  • cooking recipes
  • deep learning
  • food images

