Reinforcement learning for resource provisioning in the vehicular cloud

Research output: Contribution to journal › Article › peer-review

105 Citations (Scopus)

Abstract

This article presents a concise view of vehicular clouds that incorporates the various vehicular cloud models proposed to date. Essentially, they all extend the traditional cloud and its utility computing functionalities across the entities in the vehicular ad hoc network. These entities include fixed roadside units, onboard units embedded in vehicles, and personal smart devices of drivers and passengers. Cumulatively, these entities yield abundant processing, storage, sensing, and communication resources. However, vehicular clouds require novel resource provisioning techniques that can address the intrinsic challenges of dynamic resource demands and stringent QoS requirements. In this article, we show the benefits of reinforcement-learning-based techniques for resource provisioning in the vehicular cloud. The learning techniques can account for long-term benefits and are ideal for minimizing the overhead of resource provisioning for vehicular clouds.
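The abstract does not specify a particular algorithm, but the idea of reinforcement-learning-based provisioning can be illustrated with a toy Q-learning sketch. Everything below is a hypothetical assumption for illustration, not the article's method: demand is coarsened into three levels, the action is how many vehicular resource units to provision, and the reward penalizes QoS shortfalls more heavily than over-provisioning overhead.

```python
import random

# Hypothetical toy model (not from the article): states are coarse demand
# levels, actions are how many resource units to provision.
DEMAND_LEVELS = 3            # low, medium, high demand
ACTIONS = [0, 1, 2, 3]       # candidate numbers of units to provision

def reward(demand, provisioned):
    # Penalize unmet demand (QoS violation) more than over-provisioning overhead.
    shortfall = max(demand - provisioned, 0)
    surplus = max(provisioned - demand, 0)
    return -(2.0 * shortfall + 0.5 * surplus)

def train(episodes=5000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = [[0.0 for _ in ACTIONS] for _ in range(DEMAND_LEVELS)]
    state = rng.randrange(DEMAND_LEVELS)
    for _ in range(episodes):
        # Explore with probability epsilon, otherwise act greedily.
        if rng.random() < epsilon:
            a = rng.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[state][i])
        r = reward(state, ACTIONS[a])
        next_state = rng.randrange(DEMAND_LEVELS)  # demand fluctuates randomly
        # Standard Q-learning update toward the bootstrapped target.
        q[state][a] += alpha * (r + gamma * max(q[next_state]) - q[state][a])
        state = next_state
    return q

q = train()
# Greedy policy: the provisioning level learned for each demand level.
policy = [max(range(len(ACTIONS)), key=lambda a: q[s][a]) for s in range(DEMAND_LEVELS)]
print(policy)
```

Under this toy reward, the agent learns to match provisioning to demand, which mirrors the abstract's point: the learned value function captures long-term trade-offs between QoS violations and provisioning overhead without an explicit demand model.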

Original language: English
Article number: 7553036
Pages (from-to): 128-135
Number of pages: 8
Journal: IEEE Wireless Communications
Volume: 23
Issue number: 4
DOIs
Publication status: Published - Aug 2016
Externally published: Yes

