
Budget-Constrained Online Retrieval-Augmented Generation: The Chunk-as-a-Service Model

  • Shawqi Al-Maliki
  • Ammar Gharaibeh
  • Mohamed Rahouti
  • Mohammad Ruhul Amin
  • Mohamed Abdallah
  • Junaid Qadir
  • Ala Al-Fuqaha*

*Corresponding author for this work

Affiliations: Hamad bin Khalifa University; German Jordanian University; Fordham University; Qatar University

Research output: Contribution to journal › Article › peer-review

Abstract

Large Language Models (LLMs) have revolutionized the field of natural language processing. However, they exhibit some limitations, including a lack of reliability and transparency: they may hallucinate and fail to provide sources that support the generated output. Retrieval-Augmented Generation (RAG) was introduced to address such limitations in LLMs. One popular implementation, RAG-as-a-Service (RaaS), has shortcomings that hinder its adoption and accessibility. For instance, RaaS pricing is based on the number of submitted prompts, without considering whether the prompts are enriched by relevant chunks, i.e., text segments retrieved from a vector database, or the quality of the utilized chunks (i.e., their degree of relevance). This results in an opaque and less cost-effective payment model. We propose Chunk-as-a-Service (CaaS) as a transparent and cost-effective alternative. CaaS includes two variants: Open-Budget CaaS (OB-CaaS) and Limited-Budget CaaS (LB-CaaS), the latter enabled by our Utility-Cost Online Selection Algorithm (UCOSA). UCOSA further extends the cost-effectiveness and accessibility of the OB-CaaS variant by enriching, in an online manner, a subset of the submitted prompts based on budget constraints and the utility-cost tradeoff. Our experiments demonstrate the efficacy of the proposed UCOSA compared to both offline and relevance-greedy selection baselines. In terms of the performance metric, the number of enriched prompts (NEP) multiplied by the Average Relevance (AR), UCOSA outperforms random selection by approximately 52% and achieves around 75% of the performance of offline selection methods. Additionally, in terms of budget utilization, LB-CaaS and OB-CaaS achieve higher performance-to-budget ratios of 140% and 86%, respectively, compared to RaaS, indicating their superior efficiency.
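The online selection idea described above can be illustrated with a minimal sketch. This is not the paper's UCOSA: it is an assumed greedy heuristic that, for each arriving prompt, pays for chunk enrichment only if the prompt's relevance-to-cost ratio clears a threshold that tightens as the budget depletes. The function name, the threshold rule, and all parameters are illustrative assumptions; the returned score mirrors the abstract's NEP × AR metric.

```python
# Illustrative sketch only (NOT the paper's UCOSA algorithm).
# An online, budget-constrained selector over a stream of prompts,
# each represented as a (relevance, cost) pair seen one at a time.

def select_online(prompts, budget, base_threshold=0.5):
    """Return (indices of enriched prompts, NEP * AR score)."""
    enriched = []          # indices of prompts chosen for enrichment
    spent = 0.0            # budget consumed so far
    total_relevance = 0.0  # sum of relevance over enriched prompts
    for i, (relevance, cost) in enumerate(prompts):
        remaining = budget - spent
        if cost > remaining:
            continue  # cannot afford this prompt's chunks; skip it
        # Assumed rule: raise the acceptance bar as the budget depletes,
        # so late, low-utility prompts do not exhaust the budget.
        threshold = base_threshold * (budget / remaining) ** 0.5
        if relevance / cost >= threshold:
            spent += cost
            total_relevance += relevance
            enriched.append(i)
    nep = len(enriched)                          # number of enriched prompts
    ar = total_relevance / nep if nep else 0.0   # average relevance
    return enriched, nep * ar                    # the NEP x AR metric

# Example: with budget 2.0, the selector enriches the two high-relevance
# prompts and rejects the cheap-but-irrelevant ones.
chosen, score = select_online(
    [(0.9, 1.0), (0.2, 1.0), (0.8, 1.0), (0.1, 1.0)], budget=2.0)
```

The square-root tightening of the threshold is one plausible choice for trading off budget utilization against selectivity; an offline baseline, by contrast, would sort all prompts by relevance-to-cost ratio before selecting.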

Original language: English
Journal: IEEE Transactions on Artificial Intelligence
Publication status: Accepted/In press - 2026

Keywords

  • Cloud Computing
  • Large Language Models (LLMs)
  • Limited-Budget RAG
  • Natural Language Processing (NLP)
  • Retrieval-Augmented Generation (RAG)
