Machine Learning in Non-Stationary Environments
Title | Machine Learning in Non-Stationary Environments PDF eBook |
Author | Masashi Sugiyama |
Publisher | MIT Press |
Pages | 279 |
Release | 2012-03-30 |
Genre | Computers |
ISBN | 0262300435 |
Theory, algorithms, and applications of machine learning techniques to overcome “covariate shift” non-stationarity. As the power of computing has grown over the past few decades, the field of machine learning has advanced rapidly in both theory and practice. Machine learning methods are usually based on the assumption that the data generation mechanism does not change over time. Yet real-world applications of machine learning, including image recognition, natural language processing, speech recognition, robot control, and bioinformatics, often violate this common assumption. Dealing with non-stationarity is one of modern machine learning's greatest challenges. This book focuses on a specific non-stationary environment known as covariate shift, in which the distributions of inputs (queries) change but the conditional distribution of outputs (answers) is unchanged, and presents machine learning theory, algorithms, and applications to overcome this variety of non-stationarity. After reviewing the state-of-the-art research in the field, the authors discuss topics that include learning under covariate shift, model selection, importance estimation, and active learning. They describe such real-world applications of covariate shift adaptation as brain-computer interfaces, speaker identification, and age prediction from facial images. With this book, they aim to encourage future research in machine learning, statistics, and engineering that strives to create truly autonomous learning machines able to learn under non-stationarity.
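The covariate-shift setting the blurb describes is easy to illustrate. Below is a minimal sketch, not taken from the book: the Gaussian training and test input densities, the toy target function, and all variable names are assumptions chosen for the example. It compares ordinary least squares with an importance-weighted fit, where the weights p_test(x)/p_train(x) are computed from the known densities; estimating those weights from samples is the importance-estimation problem the book treats in depth.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def f(x):
    # Fixed conditional mean of y given x: unchanged between training and test.
    return np.sinc(x)

# Covariate shift: training inputs ~ N(1, 0.5^2), test inputs ~ N(2, 0.3^2).
x_tr = rng.normal(1.0, 0.5, 200)
y_tr = f(x_tr) + 0.1 * rng.standard_normal(200)
x_te = rng.normal(2.0, 0.3, 200)
y_te = f(x_te) + 0.1 * rng.standard_normal(200)

# Importance weights w(x) = p_test(x) / p_train(x); both densities are known
# here, whereas in practice they must be estimated from samples.
w = norm.pdf(x_tr, loc=2.0, scale=0.3) / norm.pdf(x_tr, loc=1.0, scale=0.5)

def fit_line(x, y, weights=None):
    # (Weighted) least squares for the straight-line model y ~ a*x + b.
    if weights is None:
        weights = np.ones_like(x)
    X = np.column_stack([x, np.ones_like(x)])
    sw = np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef

ols = fit_line(x_tr, y_tr)              # ignores the shift
iwls = fit_line(x_tr, y_tr, weights=w)  # importance-weighted fit

def test_mse(coef):
    return np.mean((coef[0] * x_te + coef[1] - y_te) ** 2)

print("test MSE, ordinary least squares:", test_mse(ols))
print("test MSE, importance-weighted   :", test_mse(iwls))
```

Because the test inputs concentrate in a region the training data barely covers, the importance-weighted fit typically reaches a lower test error than the unweighted one in this toy setup.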
Learning in Non-Stationary Environments
Title | Learning in Non-Stationary Environments PDF eBook |
Author | Moamar Sayed-Mouchaweh |
Publisher | Springer Science & Business Media |
Pages | 439 |
Release | 2012-04-13 |
Genre | Technology & Engineering |
ISBN | 1441980202 |
Recent decades have seen rapid advances in automation, supported by modern machines and computers. The result is a significant increase in system complexity, more frequent state changes, more information sources, the need for faster data handling, and the integration of environmental influences. Intelligent systems equipped with a range of data-driven system identification and machine learning algorithms can handle these problems only partially. Conventional learning algorithms operating in a batch, off-line setting fail whenever the process changes dynamically due to non-stationary environments and external influences. Learning in Non-Stationary Environments: Methods and Applications offers a wide-ranging, comprehensive review of recent developments and important methodologies in the field. The coverage focuses on dynamic learning in unsupervised problems, in supervised classification, and in supervised regression. A later section is dedicated to applications in which dynamic learning methods serve as keystones for achieving models with high accuracy. Rather than relying on a mathematical theorem/proof style, the editors highlight numerous figures, tables, examples, and applications, together with their explanations. This approach offers a useful basis for further investigation and fresh ideas, and it motivates and inspires newcomers to explore this promising and still emerging field of research.
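As a concrete illustration of why batch, off-line learners struggle in such settings, the sketch below uses recursive least squares with a forgetting factor, a standard way for an online linear model to track a drifting process; it is one of many dynamic-learning approaches and not a method specific to this volume. The drift model, the forgetting factor, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Recursive least squares with a forgetting factor: old observations are
# geometrically discounted, so the online estimate can track a drifting
# process where a single batch fit would average old and new behaviour.
lam = 0.98                      # forgetting factor (< 1 discounts old data)
theta = np.zeros(2)             # current estimate of [slope, intercept]
P = 1e3 * np.eye(2)             # inverse information matrix

true_slope, true_intercept = 1.0, 0.0
for t in range(2000):
    true_slope += 0.002         # the underlying process drifts slowly
    x = rng.uniform(-1.0, 1.0)
    y = true_slope * x + true_intercept + 0.05 * rng.standard_normal()

    phi = np.array([x, 1.0])                 # regressor vector
    k = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + k * (y - phi @ theta)    # correct the estimate
    P = (P - np.outer(k, phi @ P)) / lam     # discount old information

    if t % 500 == 499:
        print(f"t={t + 1:4d}  true slope={true_slope:.2f}  estimate={theta[0]:.2f}")
```

Because the forgetting factor is below one, old samples carry geometrically decreasing weight and the estimated slope keeps following the true, drifting slope.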
Machine Learning in Non-stationary Environments
Title | Machine Learning in Non-stationary Environments PDF eBook |
Author | Masashi Sugiyama |
Publisher | MIT Press |
Pages | 279 |
Release | 2012 |
Genre | Computers |
ISBN | 0262017091 |
Dealing with non-stationarity is one of modern machine learning's greatest challenges. This book focuses on a specific non-stationary environment known as covariate shift, in which the distributions of inputs (queries) change but the conditional distribution of outputs (answers) is unchanged, and presents machine learning theory, algorithms, and applications to overcome this variety of non-stationarity.
Learning in Non-Stationary Environments
Title | Learning in Non-Stationary Environments PDF eBook |
Author | Moamar Sayed-Mouchaweh |
Publisher | Springer |
Pages | 454 |
Release | 2012-04-01 |
Genre | |
ISBN | 9781441980212 |
Learning from Data Streams in Evolving Environments
Title | Learning from Data Streams in Evolving Environments PDF eBook |
Author | Moamar Sayed-Mouchaweh |
Publisher | Springer |
Pages | 320 |
Release | 2018-07-28 |
Genre | Technology & Engineering |
ISBN | 3319898035 |
This edited book covers recent advances in techniques, methods, and tools for learning from data streams generated by evolving non-stationary processes. Its goal is to survey the advanced techniques, methods, and tools dedicated to managing, exploiting, and interpreting data streams in non-stationary environments. The book includes the notions, definitions, and background required to understand the problem of learning from data streams in non-stationary environments, and it synthesizes the state of the art in the domain, discussing advanced aspects and concepts and presenting open problems and future challenges in the field. It provides multiple examples to facilitate the understanding of data streams in non-stationary environments, presents several application cases to show how the methods solve different real-world problems, and discusses the links between methods to help stimulate new research and application directions.
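A deliberately simple sketch of the kind of stream monitoring this literature covers: compare a learner's recent error rate against a longer reference window and flag drift when the gap exceeds a threshold. The window sizes, the threshold, and the simulated error stream are assumptions chosen for illustration, not an algorithm taken from the book.

```python
import random
from collections import deque

random.seed(0)

# Sliding-window drift monitor: compare a stream learner's recent error rate
# with a longer reference window and flag drift when the gap is too large.
reference = deque(maxlen=500)   # long-term error history
recent = deque(maxlen=50)       # short-term error history
THRESHOLD = 0.15

def error_rate(window):
    return sum(window) / len(window) if window else 0.0

for t in range(3000):
    # Simulated stream: the learner's error probability jumps at t = 1500,
    # mimicking a concept change in the underlying process.
    p_error = 0.10 if t < 1500 else 0.35
    err = 1 if random.random() < p_error else 0
    reference.append(err)
    recent.append(err)

    if len(recent) == recent.maxlen and \
            error_rate(recent) - error_rate(reference) > THRESHOLD:
        print(f"drift suspected at t={t}: recent={error_rate(recent):.2f} "
              f"vs reference={error_rate(reference):.2f}")
        reference.clear()       # restart the statistics; a real system would
        recent.clear()          # also adapt or retrain the model here
```

In a production stream learner the detection step would trigger model adaptation or retraining rather than merely resetting the monitoring windows.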
Reinforcement Learning, second edition
Title | Reinforcement Learning, second edition PDF eBook |
Author | Richard S. Sutton |
Publisher | MIT Press |
Pages | 549 |
Release | 2018-11-13 |
Genre | Computers |
ISBN | 0262352702 |
The significantly expanded and updated new edition of a widely used text on reinforcement learning, one of the most active research areas in artificial intelligence. Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives while interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the field's key ideas and algorithms. This second edition has been significantly expanded and updated, presenting new topics and updating coverage of other topics. Like the first edition, this second edition focuses on core online learning algorithms, with the more mathematical material set off in shaded boxes. Part I covers as much of reinforcement learning as possible without going beyond the tabular case for which exact solutions can be found. Many algorithms presented in this part are new to the second edition, including UCB, Expected Sarsa, and Double Learning. Part II extends these ideas to function approximation, with new sections on such topics as artificial neural networks and the Fourier basis, and offers expanded treatment of off-policy learning and policy-gradient methods. Part III has new chapters on reinforcement learning's relationships to psychology and neuroscience, as well as an updated case-studies chapter including AlphaGo and AlphaGo Zero, Atari game playing, and IBM Watson's wagering strategy. The final chapter discusses the future societal impacts of reinforcement learning.
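The tabular algorithms mentioned above are compact enough to sketch. The example below implements Expected Sarsa with an epsilon-greedy behaviour policy on a toy corridor; the environment, hyperparameters, and all names are assumptions for illustration, not code from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tabular Expected Sarsa on a toy corridor: states 0..N-1, actions left/right,
# reward +1 for reaching the terminal state N-1 at the right end.
N = 10
ACTIONS = (-1, +1)              # left, right
alpha, gamma, eps = 0.1, 0.95, 0.1
Q = np.zeros((N, 2))

def eps_greedy_probs(q_row):
    # Action probabilities of an epsilon-greedy policy over the two actions.
    probs = np.full(2, eps / 2)
    probs[np.argmax(q_row)] += 1.0 - eps
    return probs

for episode in range(500):
    s = 0
    while s != N - 1:
        a = rng.choice(2, p=eps_greedy_probs(Q[s]))
        s_next = min(max(s + ACTIONS[a], 0), N - 1)
        r = 1.0 if s_next == N - 1 else 0.0

        # Expected Sarsa target: expectation of Q(s', .) under the current
        # policy, instead of the sampled next action that Sarsa would use.
        expected_q = 0.0 if s_next == N - 1 else eps_greedy_probs(Q[s_next]) @ Q[s_next]
        Q[s, a] += alpha * (r + gamma * expected_q - Q[s, a])
        s = s_next

print("greedy action per non-terminal state (1 = right):", np.argmax(Q, axis=1)[:-1])
```

Replacing the sampled next-action value used by Sarsa with the expectation of Q(s', ·) under the current policy removes that source of sampling variance from the update.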
Markov Decision Processes
Title | Markov Decision Processes PDF eBook |
Author | Martin L. Puterman |
Publisher | John Wiley & Sons |
Pages | 544 |
Release | 2014-08-28 |
Genre | Mathematics |
ISBN | 1118625870 |
The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "This text is unique in bringing together so many results hitherto found only in part in other texts and papers. . . . The text is fairly self-contained, inclusive of some basic mathematical results needed, and provides a rich diet of examples, applications, and exercises. The bibliographical material at the end of each chapter is excellent, not only from a historical perspective, but because it is valuable for researchers in acquiring a good perspective of the MDP research potential." —Zentralblatt für Mathematik ". . . it is of great value to advanced-level students, researchers, and professional practitioners of this field to have now a complete volume (with more than 600 pages) devoted to this topic. . . . Markov Decision Processes: Discrete Stochastic Dynamic Programming represents an up-to-date, unified, and rigorous treatment of theoretical and computational aspects of discrete-time Markov decision processes." —Journal of the American Statistical Association