Approximating ergodic average reward continuous-time controlled Markov chains
dc.contributor.author | Lorenzo Magán, José María | |
dc.date.accessioned | 2024-10-04T11:37:14Z | |
dc.date.available | 2024-10-04T11:37:14Z | |
dc.date.issued | 2010-01 | |
dc.description.abstract | We study the approximation of an ergodic average reward continuous-time denumerable state Markov decision process (MDP) by means of a sequence of MDPs. Our results include the convergence of the corresponding optimal policies and of the optimal gains. For a controlled upwardly skip-free process, we present computational results that illustrate the convergence theorems. | |
dc.description.department | Depto. de Economía Financiera y Actuarial y Estadística | |
dc.description.faculty | Fac. de Ciencias Económicas y Empresariales | |
dc.description.refereed | TRUE | |
dc.description.status | pub | |
dc.identifier.citation | T. Prieto-Rumeau and J. M. Lorenzo, "Approximating Ergodic Average Reward Continuous-Time Controlled Markov Chains," in IEEE Transactions on Automatic Control, vol. 55, no. 1, pp. 201-207, Jan. 2010, doi: 10.1109/TAC.2009.2033848. | |
dc.identifier.doi | 10.1109/TAC.2009.2033848 | |
dc.identifier.essn | 1558-2523 | |
dc.identifier.issn | 0018-9286 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14352/108643 | |
dc.issue.number | 1 | |
dc.journal.title | IEEE TRANSACTIONS ON AUTOMATIC CONTROL | |
dc.language.iso | eng | |
dc.page.final | 207 | |
dc.page.initial | 201 | |
dc.publisher | IEEE | |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | en |
dc.rights.accessRights | restricted access | |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | |
dc.subject.keyword | Approximation of control problems | |
dc.subject.keyword | Ergodic Markov decision processes (MDPs) | |
dc.subject.keyword | Policy iteration algorithm | |
dc.subject.ucm | Estadística | |
dc.subject.unesco | 1209 Estadística | |
dc.title | Approximating ergodic average reward continuous-time controlled Markov chains | |
dc.type | journal article | |
dc.type.hasVersion | VoR | |
dc.volume.number | 55 | |
dspace.entity.type | Publication | |
relation.isAuthorOfPublication | c1ee52ed-409c-4df3-b640-f490b9a5caa1 | |
relation.isAuthorOfPublication.latestForDiscovery | c1ee52ed-409c-4df3-b640-f490b9a5caa1 |
Download
Original bundle
- Name: Approximating ergodic average.pdf
- Size: 296.69 KB
- Format: Adobe Portable Document Format