Efficient online learning with offline datasets for infinite horizon MDPs: A Bayesian approach

D Tang, R Jain, B Hao, Z Wen - arXiv preprint arXiv:2310.11531, 2023 - arxiv.org
In this paper, we study the problem of efficient online reinforcement learning in the infinite-horizon setting when there is an offline dataset to start with. We assume that the offline dataset is generated by an expert with an unknown level of competence, i.e., the expert is not perfect and does not necessarily use the optimal policy. We show that if the learning agent models the behavioral policy (parameterized by a competence parameter) used by the expert, it can do substantially better in terms of minimizing cumulative regret than if it does not. We establish an upper bound on the regret of the exact informed PSRL algorithm, which requires a novel prior-dependent regret analysis of Bayesian online learning algorithms for the infinite-horizon setting. We then propose an approximate Informed RLSVI algorithm that can be interpreted as performing imitation learning with the offline dataset followed by online learning.
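To make the idea of "informed" posterior sampling concrete, here is a minimal sketch of PSRL on a small tabular MDP whose posterior is shaped by an offline expert dataset. Everything beyond the abstract is an assumption made for illustration: rewards are taken as known, the setting is discounted with artificial episode boundaries, the expert's competence is modeled as a softmax temperature on a small grid, and the informed posterior over the MDP is approximated by importance resampling of Dirichlet draws weighted by the expert-data likelihood. This is not the authors' algorithm, only an illustrative skeleton of the general approach.

```python
# Sketch of "informed PSRL": offline expert data shapes the posterior used for
# posterior sampling online.  All modeling choices below (known rewards,
# softmax-rational expert, competence grid, importance resampling) are
# illustrative assumptions, not the construction from the paper.

import numpy as np

S, A = 5, 2                           # small tabular MDP
GAMMA = 0.95                          # discounted infinite-horizon setting (assumption)
BETAS = np.array([0.1, 1.0, 5.0])     # hypothetical grid over the expert's competence
rng = np.random.default_rng(0)

R = rng.uniform(0, 1, size=(S, A))               # known reward function (assumption)
P_true = rng.dirichlet(np.ones(S), size=(S, A))  # unknown true transition kernel

def q_values(P, n_iter=200):
    """Value iteration for the discounted Q-function of a given transition kernel."""
    Q = np.zeros((S, A))
    for _ in range(n_iter):
        Q = R + GAMMA * P @ Q.max(axis=1)
    return Q

# --- Offline phase: fold the expert dataset into the posterior -----------------
# Dirichlet transition counts start uniform and are incremented by offline
# transitions; the expert's (state, action) pairs are kept to score competence.
counts = np.ones((S, A, S))
offline_sa = []
Q_expert = q_values(P_true)           # the simulated expert acts near-greedily
s = 0
for _ in range(200):
    a = int(Q_expert[s].argmax()) if rng.random() < 0.8 else int(rng.integers(A))
    s_next = rng.choice(S, p=P_true[s, a])
    counts[s, a, s_next] += 1
    offline_sa.append((s, a))
    s = s_next

def expert_log_lik(Q, beta):
    """Log-likelihood of the expert actions under a softmax(beta * Q) model."""
    logits = beta * Q
    m = logits.max(axis=1, keepdims=True)
    logp = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
    return sum(logp[s, a] for s, a in offline_sa)

def sample_informed_mdp(n_candidates=20):
    """Approximate informed posterior sampling: draw candidate MDPs from the
    Dirichlet posterior, weight each by the expert-data likelihood marginalized
    over the competence grid, and resample one candidate."""
    cands, log_w = [], []
    for _ in range(n_candidates):
        P = np.array([[rng.dirichlet(counts[s, a]) for a in range(A)] for s in range(S)])
        Q = q_values(P)
        liks = np.array([expert_log_lik(Q, b) for b in BETAS])
        cands.append(P)
        log_w.append(np.logaddexp.reduce(liks) - np.log(len(BETAS)))
    w = np.exp(np.array(log_w) - np.max(log_w))
    return cands[rng.choice(n_candidates, p=w / w.sum())]

# --- Online phase: PSRL with the informed posterior -----------------------------
s = 0
for t in range(500):
    if t % 50 == 0:                   # resample at artificial episode boundaries
        policy = q_values(sample_informed_mdp()).argmax(axis=1)
    a = int(policy[s])
    s_next = rng.choice(S, p=P_true[s, a])
    counts[s, a, s_next] += 1         # posterior update from online interaction
    s = s_next
```

The intended takeaway mirrors the abstract: candidate MDPs that make the expert's offline actions likely under some competence level receive more posterior weight, so the prior the online learner starts from is substantially more informative than the uninformed Dirichlet prior alone.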