Approximating ergodic average reward continuous-time controlled Markov chains
Official URL
Full text at PDC
Publication date
2010
Publisher
IEEE
Citation
T. Prieto-Rumeau and J. M. Lorenzo, "Approximating Ergodic Average Reward Continuous-Time Controlled Markov Chains," in IEEE Transactions on Automatic Control, vol. 55, no. 1, pp. 201-207, Jan. 2010, doi: 10.1109/TAC.2009.2033848. Keywords: convergence; optimal control; state-space methods; statistics; operations research; process control; adaptive control; terminology; approximation of control problems; ergodic Markov decision processes (MDPs); policy iteration algorithm.
Abstract
We study the approximation of an ergodic average reward continuous-time denumerable state Markov decision process (MDP) by means of a sequence of MDPs. Our results include the convergence of the corresponding optimal policies and of the optimal gains. For a controlled upwardly skip-free process, we present computational results that illustrate the convergence theorems.
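The keywords mention a policy iteration algorithm for ergodic (average reward) continuous-time MDPs. As a rough illustration of that general technique, not of the authors' specific construction, the sketch below runs policy iteration on a finite-state continuous-time MDP given per-action transition rate matrices `Q[a]` and reward rates `r[a]`; the two-state example data at the bottom is invented for illustration.

```python
import numpy as np

def policy_evaluation(Q_f, r_f):
    """Solve the Poisson equation g*1 - Q_f h = r_f with h[0] = 0,
    returning the gain g and bias vector h of a stationary policy
    with rate matrix Q_f and reward-rate vector r_f."""
    n = len(r_f)
    # Unknowns: g and h[1], ..., h[n-1] (h[0] is pinned to 0).
    A = np.zeros((n, n))
    A[:, 0] = 1.0             # coefficient of the gain g
    A[:, 1:] = -Q_f[:, 1:]    # coefficients of h[1:]
    sol = np.linalg.solve(A, r_f)
    return sol[0], np.concatenate(([0.0], sol[1:]))

def policy_iteration(Q, r, tol=1e-10):
    """Q: (n_actions, n, n) rate matrices (rows sum to 0);
    r: (n_actions, n) reward rates. Returns (gain, bias, policy)."""
    n_actions, n, _ = Q.shape
    policy = np.zeros(n, dtype=int)
    while True:
        states = np.arange(n)
        g, h = policy_evaluation(Q[policy, states, :], r[policy, states])
        # Improvement step: in each state pick an action maximizing
        # r(s, a) + (Q(a) h)(s); keep the old action on (near-)ties.
        scores = r + Q @ h                       # shape (n_actions, n)
        new_policy = scores.argmax(axis=0)
        keep = scores[policy, states] >= scores[new_policy, states] - tol
        new_policy[keep] = policy[keep]
        if np.array_equal(new_policy, policy):
            return g, h, policy
        policy = new_policy

# Invented two-state, two-action example: action 0 earns more in state 0,
# action 1 earns a steady reward; rates control how states are occupied.
Q = np.array([[[-1.0, 1.0], [2.0, -2.0]],
              [[-3.0, 3.0], [1.0, -1.0]]])
r = np.array([[2.0, 0.0],
              [1.0, 1.0]])
gain, bias, policy = policy_iteration(Q, r)
```

For this example the optimal stationary policy uses action 0 in state 0 and action 1 in state 1, giving gain 1.5 (the stationary distribution of the resulting rate matrix is uniform over the two states).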