Simulation-based Algorithms for Markov Decision Processes
Title | Simulation-based Algorithms for Markov Decision Processes PDF eBook
Author | Hyeong Soo Chang |
Publisher | Springer Science & Business Media |
Pages | 202 |
Release | 2007-05-01 |
Genre | Business & Economics |
ISBN | 1846286905 |
Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. This book brings the state-of-the-art research together for the first time, and provides practical modeling methods for many real-world problems with high dimensionality or complexity which hitherto could not be treated with Markov decision processes.
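As background for the algorithms these books treat, the classical dynamic-programming solution of an MDP can be sketched in a few lines of value iteration. The two-state model, rewards, and discount factor below are illustrative, not taken from the book:

```python
# Value iteration for a tiny discounted MDP (illustrative example).
# P[s][a] is a list of (next_state, probability); R[s][a] is the immediate reward.

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    n = len(P)
    V = [0.0] * n
    while True:
        # Bellman optimality update: V(s) = max_a [ R(s,a) + gamma * E[V(s')] ]
        V_new = [
            max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in range(len(P[s])))
            for s in range(n)
        ]
        if max(abs(V_new[s] - V[s]) for s in range(n)) < tol:
            return V_new
        V = V_new

# Two states, two actions: action 0 stays put, action 1 moves to the other state.
P = [[[(0, 1.0)], [(1, 1.0)]],
     [[(1, 1.0)], [(0, 1.0)]]]
R = [[0.0, 1.0],   # from state 0: stay earns 0, move earns 1
     [2.0, 0.0]]   # from state 1: stay earns 2, move earns 0
V = value_iteration(P, R)
```

Here the optimal policy stays in state 1 forever (V(1) = 2/(1-0.9) = 20) and moves there from state 0 (V(0) = 1 + 0.9 * 20 = 19).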
Simulation-based Algorithms for Markov Decision Processes
Title | Simulation-based Algorithms for Markov Decision Processes PDF eBook
Author | Ying He |
Publisher | |
Pages | 326 |
Release | 2002 |
Genre | Algorithms |
ISBN |
Simulation-Based Algorithms for Markov Decision Processes
Title | Simulation-Based Algorithms for Markov Decision Processes PDF eBook
Author | Hyeong Soo Chang |
Publisher | Springer Science & Business Media |
Pages | 241 |
Release | 2013-02-26 |
Genre | Technology & Engineering |
ISBN | 1447150228 |
Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. Many real-world problems modeled by MDPs have huge state and/or action spaces, and so suffer from the curse of dimensionality, which makes practical solution of the resulting models intractable. In other cases, the system of interest is too complex to allow explicit specification of some of the MDP model parameters, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based algorithms have been developed to overcome the difficulty of computing an optimal solution in terms of a policy and/or value function. Specific approaches include adaptive sampling, evolutionary policy iteration, evolutionary random policy search, and model reference adaptive search.
This substantially enlarged new edition reflects the latest developments in novel algorithms and their underpinning theories, and presents an updated account of the topics that have emerged since the publication of the first edition. It includes:
· innovative material on MDPs, both in constrained settings and with uncertain transition properties;
· game-theoretic methods for solving MDPs;
· theories for developing rollout-based algorithms;
· details of approximate stochastic annealing, a population-based, on-line, simulation-based algorithm.
The self-contained approach of this book will appeal not only to researchers in MDPs, stochastic modeling and control, and simulation, but will also be a valuable source of tuition and reference for students of control and operations research.
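The core sampling idea behind such simulation-based algorithms can be sketched briefly: when transition probabilities and costs are not specified explicitly but a simulator is available, Q-values are estimated by averaging sampled outcomes. The simulator, toy model, and parameters below are illustrative assumptions, not code from the book:

```python
import random

# One-step Monte Carlo estimation of Q-values from a black-box simulator --
# a minimal sketch of the sampling idea behind simulation-based MDP algorithms.

def mc_greedy_action(simulate, state, actions, value_fn,
                     gamma=0.95, n_samples=200, rng=None):
    """Estimate Q(state, a) by averaging simulator samples; return the greedy action."""
    rng = rng or random.Random(0)
    best_a, best_q = None, float("-inf")
    for a in actions:
        total = 0.0
        for _ in range(n_samples):
            # The simulator returns a sampled next state and reward.
            next_state, reward = simulate(state, a, rng)
            total += reward + gamma * value_fn(next_state)
        q = total / n_samples
        if q > best_q:
            best_a, best_q = a, q
    return best_a, best_q

# Toy simulator: action 1 earns a noisy reward around 1, action 0 earns around 0.
def simulate(s, a, rng):
    return s, (1.0 if a == 1 else 0.0) + rng.gauss(0.0, 0.1)

a_star, q_star = mc_greedy_action(simulate, 0, [0, 1], value_fn=lambda s: 0.0)
```

The book's adaptive-sampling algorithms refine exactly this scheme, allocating more samples to promising actions rather than sampling uniformly.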
Simulation-based Optimization of Markov Decision Processes
Title | Simulation-based Optimization of Markov Decision Processes PDF eBook
Author | Peter Marbach |
Publisher | |
Pages | 169 |
Release | 1998 |
Genre | |
ISBN |
Simulation-Based Optimization
Title | Simulation-Based Optimization PDF eBook
Author | Abhijit Gosavi |
Publisher | Springer |
Pages | 530 |
Release | 2014-10-30 |
Genre | Business & Economics |
ISBN | 1489974911 |
Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of static and dynamic simulation-based optimization. Covered in detail are model-free optimization techniques – especially designed for those discrete-event, stochastic systems which can be simulated but whose analytical models are difficult to find in closed mathematical forms. Key features of this revised and improved Second Edition include:
· Extensive coverage, via step-by-step recipes, of powerful new algorithms for static simulation optimization, including simultaneous perturbation, backtracking adaptive search, and nested partitions, in addition to traditional methods such as response surfaces, Nelder-Mead search, and meta-heuristics (simulated annealing, tabu search, and genetic algorithms)
· Detailed coverage of the Bellman equation framework for Markov Decision Processes (MDPs), along with dynamic programming (value and policy iteration) for discounted, average, and total reward performance metrics
· An in-depth consideration of dynamic simulation optimization via temporal differences and Reinforcement Learning: Q-Learning, SARSA, and R-SMART algorithms, and policy search via API, Q-P-Learning, actor-critics, and learning automata
· A special examination of neural-network-based function approximation for Reinforcement Learning, semi-Markov decision processes (SMDPs), finite-horizon problems, two time scales, case studies for industrial tasks, computer codes (placed online), and convergence proofs, via Banach fixed point theory and Ordinary Differential Equations
Themed around three areas in separate sets of chapters – Static Simulation Optimization, Reinforcement Learning, and Convergence Analysis – this book is written for researchers and students in the fields of engineering (industrial, systems, electrical, and computer), operations research, computer science, and applied mathematics.
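As a taste of the reinforcement-learning material the blurb describes, tabular Q-learning can be sketched on a made-up two-state chain (not code from the book; all parameters and the toy model are illustrative):

```python
import random

# Tabular Q-learning with epsilon-greedy exploration (illustrative sketch).
# step(s, a, rng) plays the role of the simulated system: it returns (s', r).

def q_learning(step, n_states, n_actions, episodes=2000, horizon=20,
               alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = rng.randrange(n_states)
        for _ in range(horizon):
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r = step(s, a, rng)
            # Q-learning update toward the sampled Bellman target.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# Deterministic two-state chain: action 1 moves to the other state and earns
# reward 1 (from state 0) or 2 (from state 1); action 0 stays and earns 0.
def step(s, a, rng):
    if a == 1:
        return 1 - s, (1.0 if s == 0 else 2.0)
    return s, 0.0

Q = q_learning(step, n_states=2, n_actions=2)
```

After training, the learned Q-values favor action 1 in both states, which is the optimal alternating policy for this chain.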
Handbook of Simulation Optimization
Title | Handbook of Simulation Optimization PDF eBook
Author | Michael C Fu |
Publisher | Springer |
Pages | 400 |
Release | 2014-11-13 |
Genre | Business & Economics |
ISBN | 1493913840 |
The Handbook of Simulation Optimization presents an overview of the state of the art of simulation optimization, providing a survey of the most well-established approaches for optimizing stochastic simulation models and a sampling of recent research advances in theory and methodology. Leading contributors cover such topics as discrete optimization via simulation, ranking and selection, efficient simulation budget allocation, random search methods, response surface methodology, stochastic gradient estimation, stochastic approximation, sample average approximation, stochastic constraints, variance reduction techniques, model-based stochastic search methods, and Markov decision processes. This single volume should serve as a reference for those already in the field and as an introduction for those new to the field seeking to understand and apply the main approaches. The intended audience includes researchers, practitioners, and graduate students in the business/engineering fields of operations research, management science, operations management, and stochastic control, as well as in economics/finance and computer science.
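One of the surveyed techniques, stochastic approximation, can be illustrated with a short sketch: a Kiefer-Wolfowitz finite-difference scheme that minimizes a noisy simulation output. The objective, noise level, and step-size schedules below are illustrative choices, not taken from the handbook:

```python
import random

# Kiefer-Wolfowitz stochastic approximation: gradient descent using noisy
# finite-difference estimates from a simulation (illustrative sketch).

def kiefer_wolfowitz(sim, x0, n_iters=5000, a=0.5, c=0.5, seed=0):
    rng = random.Random(seed)
    x = x0
    for k in range(1, n_iters + 1):
        ak = a / k              # diminishing gain sequence
        ck = c / k ** (1 / 3)   # diminishing perturbation width
        # Two noisy simulation runs give a finite-difference gradient estimate.
        grad = (sim(x + ck, rng) - sim(x - ck, rng)) / (2 * ck)
        x -= ak * grad
    return x

# Noisy simulation of f(x) = (x - 3)^2; the true minimizer is x* = 3.
def sim(x, rng):
    return (x - 3.0) ** 2 + rng.gauss(0.0, 0.1)

x_star = kiefer_wolfowitz(sim, x0=0.0)
```

The diminishing step sizes are what let the iterate settle near the minimizer despite never observing an exact gradient, which is the defining idea of stochastic approximation.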
Constrained Markov Decision Processes
Title | Constrained Markov Decision Processes PDF eBook
Author | Eitan Altman |
Publisher | Routledge |
Pages | 256 |
Release | 2021-12-17 |
Genre | Mathematics |
ISBN | 1351458248 |
This book provides a unified approach to the study of constrained Markov decision processes with a finite state space and unbounded costs. Unlike the single-objective case considered in many other books, the author considers a single controller with several objectives, such as minimizing delays and loss probabilities, and maximizing throughput. It is desirable to design a controller that minimizes one cost objective, subject to inequality constraints on the other cost objectives. This framework describes dynamic decision problems that arise frequently in many engineering fields. A thorough overview of these applications is presented in the introduction. The book is then divided into three sections that build upon each other.
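The constrained setup can be illustrated with a simple Lagrangian-relaxation sketch on a made-up one-state model: scalarize reward minus a multiplier times cost, solve the unconstrained MDP by value iteration, and grid-search the multiplier until the greedy policy meets the cost constraint. (This is only a sketch; exact constrained optima generally require randomized policies, which the book develops rigorously.)

```python
# Lagrangian scalarization for a constrained MDP (illustrative toy model).
# P[s][a] is a list of (next_state, probability); R is reward, C is cost.

def solve_scalarized(P, R, C, lam, gamma=0.9, tol=1e-9):
    """Greedy deterministic policy for the scalarized reward R - lam * C."""
    n = len(P)
    V = [0.0] * n
    while True:
        V_new, pi = [], []
        for s in range(n):
            qs = [R[s][a] - lam * C[s][a]
                  + gamma * sum(p * V[s2] for s2, p in P[s][a])
                  for a in range(len(P[s]))]
            best = max(range(len(qs)), key=lambda a: qs[a])
            V_new.append(qs[best])
            pi.append(best)
        if max(abs(V_new[s] - V[s]) for s in range(n)) < tol:
            return pi
        V = V_new

def policy_cost(P, C, pi, s0=0, gamma=0.9, tol=1e-9):
    """Discounted cost of a deterministic policy, from start state s0."""
    n = len(P)
    J = [0.0] * n
    while True:
        J_new = [C[s][pi[s]] + gamma * sum(p * J[s2] for s2, p in P[s][pi[s]])
                 for s in range(n)]
        if max(abs(J_new[s] - J[s]) for s in range(n)) < tol:
            return J_new[s0]
        J = J_new

# One state, two actions: action 1 earns reward 1 per step but also cost 1 per step.
P = [[[(0, 1.0)], [(0, 1.0)]]]
R = [[0.0, 1.0]]
C = [[0.0, 1.0]]
# Increase the multiplier until the greedy policy's discounted cost is <= 5.
for lam in [0.0, 0.5, 1.0, 1.5, 2.0]:
    pi = solve_scalarized(P, R, C, lam)
    if policy_cost(P, C, pi) <= 5.0:
        break
```

Here the constraint forces the cheap action: the loop stops at the first multiplier whose greedy policy is feasible, a conservative deterministic answer where the book's occupation-measure formulation would allow a randomized policy meeting the cost bound exactly.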