Statistics > Machine Learning

arXiv:2203.01707 (stat)
[Submitted on 3 Mar 2022 (v1), last revised 3 Jan 2025 (this version, v4)]

Title: Testing Stationarity and Change Point Detection in Reinforcement Learning

Authors: Mengbing Li, Chengchun Shi, Zhenke Wu, Piotr Fryzlewicz
Abstract: We consider offline reinforcement learning (RL) methods in possibly nonstationary environments. Many existing RL algorithms in the literature rely on the stationarity assumption that requires the system transition and the reward function to be constant over time. However, the stationarity assumption is restrictive in practice and is likely to be violated in a number of applications, including traffic signal control, robotics and mobile health. In this paper, we develop a consistent procedure to test the nonstationarity of the optimal Q-function based on pre-collected historical data, without additional online data collection. Based on the proposed test, we further develop a sequential change point detection method that can be naturally coupled with existing state-of-the-art RL methods for policy optimization in nonstationary environments. The usefulness of our method is illustrated by theoretical results, simulation studies, and a real data example from the 2018 Intern Health Study. A Python implementation of the proposed procedure is available at https://github.com/limengbinggz/CUSUM-RL.
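The repository name (CUSUM-RL) points to a CUSUM-type statistic. As a rough, self-contained illustration of the generic idea only, not the paper's actual procedure, which tests nonstationarity of the optimal Q-function estimated from historical data, here is a minimal sketch of maximum-type CUSUM change point detection on a scalar reward sequence; the threshold is a hypothetical tuning parameter that a calibrated test would set from the statistic's null distribution.

import numpy as np

def cusum_statistics(x):
    # Normalized CUSUM statistics over candidate change points k:
    # |S_k - (k/n) * S_n| / sqrt(n), where S_k is the partial sum.
    # Under stationarity these stay small; a mean shift inflates the maximum.
    x = np.asarray(x, dtype=float)
    n = len(x)
    partial = np.cumsum(x)[:-1]      # S_k for k = 1, ..., n-1
    k = np.arange(1, n)
    return np.abs(partial - k * x.sum() / n) / np.sqrt(n)

def detect_change_point(x, threshold):
    # Most likely change point, or None if the maximal statistic
    # does not exceed the (here, user-supplied) threshold.
    stats = cusum_statistics(x)
    k_hat = int(np.argmax(stats)) + 1
    return k_hat if stats[k_hat - 1] > threshold else None

# Toy example: i.i.d. rewards whose mean shifts at t = 60.
rng = np.random.default_rng(0)
rewards = np.concatenate([rng.normal(0.0, 1.0, 60),
                          rng.normal(1.5, 1.0, 40)])
print(detect_change_point(rewards, threshold=1.0))   # an index near 60

In the paper's sequential setting, such a test is reapplied as data accumulate so that policy optimization can adapt after a detected change; see the linked repository for the authors' actual implementation.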
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG)
Cite as: arXiv:2203.01707 [stat.ML]
  (or arXiv:2203.01707v4 [stat.ML] for this version)
  https://doi.org/10.48550/arXiv.2203.01707
arXiv-issued DOI via DataCite

Submission history

From: Chengchun Shi
[v1] Thu, 3 Mar 2022 13:30:28 UTC (1,084 KB)
[v2] Thu, 13 Oct 2022 07:58:58 UTC (1,077 KB)
[v3] Fri, 8 Mar 2024 01:00:47 UTC (1,545 KB)
[v4] Fri, 3 Jan 2025 23:17:28 UTC (4,697 KB)