Learning Robot Policies from Imperfect Human Teachers

Learning Robot Policies from Imperfect Human Teachers
Title Learning Robot Policies from Imperfect Human Teachers PDF eBook
Author Taylor Annette Kessler Faulkner
Publisher
Pages 0
Release 2022
Genre
ISBN

Download Learning Robot Policies from Imperfect Human Teachers Book in PDF, Epub and Kindle

The ability to adapt and learn can help robots deployed in dynamic and varied environments. In the wild, the data that robots have access to includes input from their sensors and from the humans around them. Utilizing human data increases the usable information in the environment; however, human data can be noisy, particularly when acquired from non-experts. Rather than requiring expensive expert teachers for learning robots, my research addresses methods for learning from imperfect human teachers. These methods use Human-in-the-loop Reinforcement Learning (HRL), which gives robots a reward function and input from human teachers. This dissertation shows that actively modifying which states receive feedback from imperfect, unmodeled human teachers can improve the speed and dependability of HRL. This body of work addresses a bipartite model of imperfect teachers, in which humans can be inattentive or inaccurate. First, I present two algorithms for learning from inattentive teachers, which take advantage of intermittent attention from humans by adjusting state-action exploration to improve the learning speed of a Markovian HRL algorithm and to give teachers more free time to complete other tasks. Second, I present two algorithms for learning from inaccurate teachers who give incorrect information to a robot. These algorithms estimate areas of the state space that are likely to receive incorrect feedback from human teachers, and can be used to filter messy, inaccurate data into information that is usable by a robot, performing dependably over a wide variety of inputs. The primary contribution of this dissertation is a set of algorithms that enable learning robots to adapt to imperfect teachers and to learn policies more quickly and dependably than existing HRL algorithms. These findings will enhance the ability of robots to learn new tasks from laypeople, requiring less time and less knowledge of how to teach a robot than prior work. These advances are a step towards ubiquitous robot deployment in homes, public spaces, and other environments, with less demand for expensive expert data and an easier experience for novice robot users.
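
The core loop the abstract describes — a reward function plus human feedback, with feedback trusted more in some states than in others — can be sketched as a toy example. The snippet below is only an illustrative tabular Q-learning sketch under assumed names (HumanInTheLoopQLearner, the per-state reliability estimate, and all parameters are placeholders), not the dissertation's algorithms:

```python
import random
from collections import defaultdict

class HumanInTheLoopQLearner:
    """Toy tabular Q-learner that mixes the environment reward with optional
    human feedback, down-weighted in states where the teacher seems unreliable."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)                   # Q[(state, action)]
        self.reliability = defaultdict(lambda: 0.5)   # per-state trust in the teacher
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy action selection over the current Q estimates.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, env_reward, next_state, human_feedback=None):
        # Fold human feedback into the reward only where the teacher seems reliable.
        reward = env_reward
        if human_feedback is not None:
            reward += self.reliability[state] * human_feedback
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error
```

In this sketch the reliability estimate is a fixed prior; the dissertation's contribution is precisely how such estimates and the choice of which states receive feedback are handled.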

Robot Learning from Human Demonstration

Robot Learning from Human Demonstration
Title Robot Learning from Human Demonstration PDF eBook
Author Sonia Dechter
Publisher Springer Nature
Pages 109
Release 2022-06-01
Genre Computers
ISBN 3031015703

Download Robot Learning from Human Demonstration Book in PDF, Epub and Kindle

Learning from Demonstration (LfD) explores techniques for learning a task policy from examples provided by a human teacher. The field of LfD has grown into an extensive body of literature over the past 30 years, with a wide variety of approaches for encoding human demonstrations and modeling skills and tasks. Additionally, we have recently seen a focus on gathering data from non-expert human teachers (i.e., domain experts but not robotics experts). In this book, we provide an introduction to the field with a focus on the unique technical challenges associated with designing robots that learn from naive human teachers. We begin, in the introduction, with a unification of the various terminology seen in the literature as well as an outline of the design choices one has in designing an LfD system. Chapter 2 gives a brief survey of the psychology literature that provides insights from human social learning relevant to designing robotic social learners. Chapter 3 walks through an LfD interaction, surveying the design choices one makes and state-of-the-art approaches in prior work. First is the choice of input: how the human teacher interacts with the robot to provide demonstrations. Next is the choice of modeling technique. Currently, there is a dichotomy in the field between approaches that model low-level motor skills and those that model high-level tasks composed of primitive actions; we devote a chapter to each. Chapter 7 is devoted to interactive and active learning approaches that allow the robot to refine an existing task model. Finally, Chapter 8 provides best practices for evaluating LfD systems, with a focus on how to approach experiments with human subjects in this domain.
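
To make the basic LfD setup concrete, here is a minimal behavior-cloning sketch: a policy is fit directly to state-action pairs collected from a teacher. The data, dimensions, and model choice are placeholder assumptions for illustration, not material from the book:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical demonstration data: each row pairs an observed state with the
# action the human teacher demonstrated in that state.
demo_states = np.random.rand(500, 4)    # placeholder state features
demo_actions = np.random.rand(500, 2)   # placeholder continuous actions

# Behavior cloning: fit a function approximator that maps states to actions.
policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000)
policy.fit(demo_states, demo_actions)

# The learned policy proposes an action for a previously unseen state.
new_state = np.random.rand(1, 4)
print(policy.predict(new_state))
```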

Learning from Imperfect Demonstrations

Learning from Imperfect Demonstrations
Title Learning from Imperfect Demonstrations PDF eBook
Author Zhangjie Cao
Publisher
Pages 0
Release 2022
Genre
ISBN

Download Learning from Imperfect Demonstrations Book in PDF, Epub and Kindle

Imitation learning is one of the most promising robot learning paradigms: it attempts to learn robot policies from demonstrations. Standard imitation learning algorithms assume that the demonstrations are provided by optimal experts who can perfectly perform the task on the robot of interest (i.e., in the target environment). Even under these assumptions, imitation learning usually requires large amounts of demonstrations, which is limiting in many environments where collecting optimal demonstrations is difficult, for example because of the difficulty of controlling robots with many degrees of freedom or because of limited interaction with the environment. In practice, we often have access to large amounts of imperfect demonstrations, which may be suboptimal or produced by different agents with different morphologies or dynamics. Such imperfect demonstrations contain valuable information that can help in learning the optimal policy, but directly imitating them leads to a suboptimal policy. Instead of direct imitation, we propose new algorithms that use these imperfect demonstrations to learn an optimal robot policy. In this thesis, we categorize imperfect demonstrations into: i) suboptimal demonstrations, ii) cross-domain demonstrations, and iii) infeasible demonstrations. Suboptimal demonstrations contain non-optimal sequences of states and actions; for example, when reaching for an object, the robot might take a longer path toward the goal. Cross-domain demonstrations are collected from agents with different morphologies or dynamics, but can still correspond to behaviors of the target agent. Finally, infeasible demonstrations are drawn from other agents and may have no correspondence to the target agent. Prior work on learning from imperfect demonstrations focuses on only one of these categories. In this thesis, we address the problem comprehensively: we formalize the different categories of imperfect demonstrations, introduce a set of robot learning algorithms that tackle each category, and discuss under what assumptions each of our methods should be used. We conduct experiments on a number of robotic manipulation tasks in simulation and on real hardware to demonstrate the developed algorithms.
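
As one illustrative way to use suboptimal demonstrations rather than imitating them directly, the sketch below weights each demonstration by an assumed per-sample optimality score; the scoring scheme, data, and model are placeholders for the general idea, not the thesis's methods:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical imperfect demonstrations: states, actions, and a rough
# per-sample optimality score in [0, 1] (in practice this would be estimated,
# e.g. from trajectory length or a learned discriminator).
states = np.random.rand(300, 4)
actions = np.random.rand(300, 2)
optimality = np.random.rand(300)

# Weighted behavior cloning: suboptimal demonstrations still contribute,
# but only in proportion to how optimal they appear to be.
policy = LinearRegression()
policy.fit(states, actions, sample_weight=optimality)
print(policy.predict(states[:1]))
```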

Robots in Education

Robots in Education
Title Robots in Education PDF eBook
Author Fady Alnajjar
Publisher Routledge
Pages 238
Release 2021-07-29
Genre Education
ISBN 1000388840

Download Robots in Education Book in PDF, Epub and Kindle

Robots in Education is an accessible introduction to the use of robotics in formal learning, encompassing pedagogical and psychological theories as well as implementation in curricula. Today, a variety of communities across education are increasingly using robots as general classroom tutors, tools in STEM projects, and subjects of study. This volume explores how the unique physical and social-interactive capabilities of educational robots can generate bonds with students while freeing instructors to focus on their individualized approaches to teaching and learning. Authored by a uniquely interdisciplinary team of scholars, the book covers the basics of robotics and their supporting technologies; attitudes toward and ethical implications of robots in learning; research methods relevant to extending our knowledge of the field; and more.

Artificial Intelligence Brings Social Economic

Artificial Intelligence Brings Social Economic
Title Artificial Intelligence Brings Social Economic PDF eBook
Author Johnny Ch LOK
Publisher
Pages 374
Release 2020-07-29
Genre
ISBN

Download Artificial Intelligence Brings Social Economic Book in PDF, Epub and Kindle

What difficulties arise in a robot's learning process? The scientist indicated that this question concerns what influences a robot's learning abilities: does reinforcement learning (RL) have the desirable qualities, such as the ability to explore and learn from unsupervised experience? Many also question RL as a viable technique for learning in complex real-world environments because of practical problems such as long training times, non-scaling state representations, sparse rewards (resulting in slow utility propagation), and the need for safe exploration strategies. As a result, reinforcement learning has been used to teach robots and game characters by incorporating real-time human feedback, with a person supplying reward and/or punishment as an additional input to the reward function. Consequently, the scientist found that the human's teaching method is the most important factor influencing a robot's learning abilities among all environmental factors. He also argued that reinforcement-based learning approaches should be reformulated to more effectively incorporate a human teacher. To do this properly, the educational robot must understand the human teacher's contribution: how does the human teach, and what is the teacher trying to communicate to the robot learner? The scientist suggested that human trainers use these methods so that robot learners can learn more easily. His main findings indicate that reward influences human teachers' motivation to teach robot learners, because greater reward encourages trainers to pass on their knowledge and skills. He also found that people read the behavior of the robot learner and adjust their training strategies as their mental model of the robot changes. Viewing the human input as a traditional RL reward signal does not take advantage of the fact that a teacher adjusts their training behavior to best suit the learner. Beyond the related RL work mentioned above, every human trainer needs to consider human input for machine learning systems more broadly. Personalization agents and adaptive user interfaces are examples of software that learns by observing human behavior, modeling human preferences or activities. Various works address trainable software and robotic agents that use explicit human input: learning classification and navigation tasks via natural language, robots that learn by demonstration, and software agents that learn from training. How the robot's learning software is designed will also influence the robot's learning abilities. Thus, the trainer's learning software and how reward is given to encourage the trainer's teaching behavior both influence how well the robot can learn and, in turn, how well it can be applied in the education industry to assist human teachers. How much knowledge and skill the robot can learn from the human trainer determines how much educational knowledge it has available for teaching students. Hence, the human trainer's knowledge and skill must be of sufficient quality to satisfy the learning needs of primary, secondary, and university students. Can robots really be teachable agents for students?
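
The idea of supplying human reward and punishment as an additional input to the reward function can be sketched as simple reward shaping. The decaying-weight scheme below is an assumption chosen for illustration (so the learner gradually relies on the task reward alone), not a method from the book:

```python
class HumanShapedReward:
    """Sketch of folding a trainer's +1/-1 feedback into an RL reward signal.
    The human signal's weight decays over time, so the learner gradually
    relies on the task reward alone."""

    def __init__(self, initial_weight=1.0, decay=0.999):
        self.weight = initial_weight
        self.decay = decay

    def __call__(self, env_reward, human_signal=None):
        combined = env_reward
        if human_signal is not None:        # trainer pressed reward or punish
            combined += self.weight * human_signal
        self.weight *= self.decay           # rely on the explicit signal a bit less over time
        return combined

# Example: the trainer rewards the robot early in learning.
shaper = HumanShapedReward()
print(shaper(env_reward=0.0, human_signal=+1.0))   # task reward plus human reward
print(shaper(env_reward=0.5))                      # no feedback this step
```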

Trust in Human-Robot Interaction

Trust in Human-Robot Interaction
Title Trust in Human-Robot Interaction PDF eBook
Author Chang S. Nam
Publisher Academic Press
Pages 614
Release 2020-11-17
Genre Psychology
ISBN 0128194731

Download Trust in Human-Robot Interaction Book in PDF, Epub and Kindle

Trust in Human-Robot Interaction addresses the gamut of factors that influence trust in robotic systems. The book presents the theory, fundamentals, techniques, and diverse applications of the behavioral, cognitive, and neural mechanisms of trust in human-robot interaction, covering topics such as individual differences, transparency, communication, physical design, privacy, and ethics. It presents a repository of the open questions and challenges in trust in HRI; includes contributions from the many disciplines participating in HRI research, including psychology, neuroscience, sociology, engineering, and computer science; examines human information processing as a foundation for understanding HRI; and details the methods and techniques used to test and quantify trust in HRI.

Robot-Assisted Learning and Education

Robot-Assisted Learning and Education
Title Robot-Assisted Learning and Education PDF eBook
Author Agnese Augello
Publisher Frontiers Media SA
Pages 167
Release 2021-01-04
Genre Technology & Engineering
ISBN 2889663256

Download Robot-Assisted Learning and Education Book in PDF, Epub and Kindle