
Mental Development and Representation Building through Motivated Learning
Janusz A. Starzyk, Ohio University, USA; Pawel Raif, Silesian University of Technology, Poland; Ah-Hwee Tan, Nanyang Technological University, Singapore
2010 International Joint Conference on Neural Networks, Barcelona
Outline
• Embodied Intelligence (EI)
• Embodiment of Mind
• Computational Approaches to Machine Learning
• How to Motivate a Machine
• Motivated Learning (ML)
• Building representation through motivated learning
  – ML agent in "Normal" vs. "Graded" Environment
  – ML agent vs. RL agent in "Graded" Environment
• Future work
Traditional AI vs. Embodied Intelligence
• Abstract intelligence – attempts to simulate the "highest" human faculties: language, discursive reason, mathematics, abstract problem solving
• Environment model – a precondition for solving problems abstractly – the "brain in a vat"
• Embodiment – knowledge is implicit in the fact that we have a body; embodiment supports brain development
• Intelligence develops through interaction with the environment – it is situated in the environment, and the environment is its own best model
Embodied Intelligence
Definition: Embodied Intelligence (EI) is a mechanism that learns how to minimize the hostility of its environment.
• Mechanism: a biological, mechanical, or virtual agent with embodied sensors and actuators
• EI acts on the environment and perceives the results of its actions
• Environmental hostility is persistent and stimulates EI to act
• Hostility: direct aggression, pain, scarce resources, etc.
• EI learns, so it must have an associative, self-organizing memory
• Knowledge is acquired by EI itself
Intelligence
An intelligent agent learns how to survive in a hostile environment.
Embodiment of a Mind
• Embodiment is the part of the environment under the control of the mind
• It contains the intelligence core and the sensory-motor interfaces used to interact with the environment
• It is necessary for the development of intelligence
• It is not necessarily constant
Embodiment of Mind
• Changes in embodiment modify the brain's self-determination
• The brain learns its own body's dynamics
• Self-awareness results from identification with one's own embodiment
• Embodiment can be extended by using tools and machines
• Successful operation depends on correct perception of both the environment and one's own embodiment
Computational Approaches to Machine Learning
• Supervised
• Unsupervised
• Reinforcement
  – problems with complex environments
  – lack of motivation
• Motivated Learning
  – definition
  – need for benchmarks
How to Motivate a Machine?
A fundamental question is how to motivate an agent to do anything, and in particular, to enhance its own complexity. What drives an agent to explore the environment, build representations, and learn effective actions? What makes it a successful learner in changing environments?
How to Motivate a Machine?
• Although artificial curiosity helps the agent explore the environment, it leads to learning without a specific purpose.
• We suggest that the hostility of the environment, required for EI, is the most effective motivational factor.
• Both are needed: hostility of the environment, and intelligence that learns how to reduce the pain.
Motivated Learning
Definition*: Motivated learning (ML) is pain-based motivation, goal creation, and learning in an embodied agent.
• It uses externally defined pain signals.
• The machine is rewarded for minimizing the primitive pain signals.
• The machine creates abstract goals based on the primitive pain signals.
• It receives internal rewards for satisfying its abstract goals.
• ML applies to EI working in a hostile environment.
*J. A. Starzyk, "Motivation in Embodied Intelligence," Frontiers in Robotics, Automation and Control, I-Tech Education and Publishing, Oct. 2008, pp. 83-110.
Neural self-organizing structures in ML
Goal creation scheme:
• a primitive pain is directly sensed
• an abstract pain is introduced by solving a lower-level pain
• thresholded curiosity-based pain
Motivations and selection of a goal:
• a WTA (winner-take-all) competition selects the motivation
• another WTA selects the implementation
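The selection scheme above can be sketched in a few lines of Python. This is a hedged illustration, not the authors' implementation: the pain names, values, and the simple `max`-based winner-take-all are assumptions chosen only to show how the strongest pain signal wins the competition and becomes the current goal.

```python
# Illustrative sketch of WTA goal selection (assumed names and values).

def wta(signals):
    """Winner-take-all: return the key with the largest signal value."""
    return max(signals, key=signals.get)

# Hypothetical mix of primitive, abstract, and curiosity-based pains:
pains = {"low_sugar": 0.9, "no_food": 0.4, "no_money": 0.2, "curiosity": 0.1}

goal = wta(pains)   # the strongest pain wins and becomes the motivation
print(goal)         # -> low_sugar
```

A second WTA of the same form could then choose among competing implementations (actions) for the winning goal.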
Building representation through motivated learning
Experiments…
Base Task Specification
• Environment
  – The environment consists of six different categories of resources.
  – Five of them have limited availability.
  – One, the most abstract resource, is inexhaustible.
Resources, from the least abstract to the most abstract: Food, Grocery, Bank, Office, School, Sandbox.
Base Experiment - Task Specification
The agent uses resources by performing proper actions. There are 36 possible actions, but only six of them are meaningful, and in a given situation (the environment's and the agent's state) there is usually one best action to perform. The problem is to determine which action should be performed to renew in time the most needed resource.
Meaningful sensory-motor pairs and their effect on the environment:

Id | Sensory | Motor    | Increases         | Decreases         | Pair Id
0  | Food    | Eat      | Sugar level       | Food supplies     | 0
1  | Grocery | Buy      | Food supplies     | Money at hand     | 7
2  | Bank    | Withdraw | Money at hand     | Spending limits   | 14
3  | Office  | Work     | Spending limits   | Job opportunities | 21
4  | School  | Study    | Job opportunities | Mental state      | 28
5  | Sandbox | Play     | Mental state      | -                 | 36
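The resource chain in the table can be sketched as a tiny Python environment. This is a simplified illustration under assumed mechanics (unit increments, a starting quantity of 5 per resource), not the simulation used in the experiments: each meaningful action renews one resource by consuming the next, more abstract one, and only sandbox play consumes nothing.

```python
# Sketch of the six meaningful actions and their resource effects.
ACTIONS = {  # action -> (resource increased, resource decreased)
    "eat":      ("sugar", "food"),
    "buy":      ("food", "money"),
    "withdraw": ("money", "spending_limit"),
    "work":     ("spending_limit", "job"),
    "study":    ("job", "mental_state"),
    "play":     ("mental_state", None),  # the sandbox is inexhaustible
}

def step(state, action):
    """Apply one action: +1 to the renewed resource, -1 to the consumed one."""
    inc, dec = ACTIONS[action]
    new = dict(state)
    new[inc] += 1
    if dec is not None:
        new[dec] -= 1
    return new

state = {r: 5 for r in
         ["sugar", "food", "money", "spending_limit", "job", "mental_state"]}
state = step(state, "eat")
print(state["sugar"], state["food"])   # -> 6 4
```

The agent's task is then to pick, at every step, the action that renews the most depleted (most "painful") resource in time.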
How to simulate the complexity and hostility of the environment
1. Complexity: different resources are available in the environment. The agent should learn the dependencies between the resources and its actions in order to operate properly.
2. Hostility: a function that describes the probability of finding resources in the environment.
[Figure: probability of finding each resource (Food, Grocery, Bank, Office, School) in a mild environment (1) vs. a harsh environment (2)]
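One simple way to model such a hostility function is sketched below. The exponential form and the depletion rates are assumptions for illustration only (the paper's actual function is not reproduced here); the point is that in a harsh environment the probability of finding a resource falls off much faster with repeated use.

```python
import math

def find_probability(uses, depletion_rate):
    """Assumed model: probability of finding the resource decays
    exponentially with the number of times it has been extracted."""
    return math.exp(-depletion_rate * uses)

MILD, HARSH = 0.05, 0.5          # illustrative depletion rates
print(find_probability(10, MILD))   # still high: mild environment
print(find_probability(10, HARSH))  # near zero: harsh environment
```

Under this kind of model, an agent in a harsh environment must learn the full resource hierarchy quickly, because low-level resources alone cannot sustain it.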
Base Experiment Results
The RL agent can learn dependencies between only a few basic resources. In contrast, the ML agent is able to learn the dependencies between all resources. In a harsh environment the ML agent is able to control its environment (and limit its 'primitive pain'), but the RL agent cannot.
[Figure: dependencies learned by the RL agent (1, left) vs. the ML agent (2, right)]
ML agent in "Normal" vs. "Graded" Environment
• Two kinds of environments: "normal" (1) and "graded" (2).
• The "graded" environment corresponds to gradual development and representation building.
• Simulations were run in four environments with 6, 10, 14, and 18 different hierarchy levels, each level representing a different resource.
[Figure: resource availability over time in the normal (1) and graded (2) environments]
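The "graded" setup can be sketched as a schedule that unlocks hierarchy levels one at a time. The function below is an assumed illustration of that idea (the unlock interval and the one-level-per-interval rule are hypothetical), in contrast to a "normal" environment where all levels are present from the start.

```python
def available_levels(t, total_levels, interval):
    """Graded environment (assumed mechanics): one new hierarchy level
    is unlocked every `interval` time steps, up to `total_levels`."""
    return min(total_levels, 1 + t // interval)

TOTAL, INTERVAL = 6, 100  # illustrative values
print([available_levels(t, TOTAL, INTERVAL) for t in (0, 150, 600)])  # -> [1, 2, 6]
```

Facing only a small part of the hierarchy early on lets the agent build low-level representations before the more abstract resources appear.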
ML agent in "Normal" vs. "Graded" Environment
• The ML agent learns more effectively in the "graded" environments with gradually increasing complexity.
• In a complex environment this difference becomes more significant.
• "Gradual" learning is beneficial to mental development.
ML agent vs. RL agent in "Graded" Environment
• The second group of experiments compares the effectiveness of ML- and RL-based agents.
• In these simulations we used "graded" environments with gradually increasing complexity.
• We simulated environments with 6, 10, 14, and 18 levels of hierarchy.
[Figure: resource availability over time in a graded environment]
ML agent vs. RL agent in "Graded" Environment
6 levels of hierarchy / 10 levels of hierarchy
• Initially the ML agent experiences a primitive pain signal Pp similar to that of the RL agent.
• The ML agent converges quickly to a stable performance.
• Initially the RL agent experiences a lower primitive pain signal Pp than the ML agent.
• The RL agent's pain increases as the environment becomes more hostile.
ML agent vs. RL agent in "Graded" Environment
14 levels of hierarchy / 18 levels of hierarchy
• The ML agent keeps learning, while the RL agent exploits its early knowledge.
• In effect, RL does not learn all the dependencies in time to survive.
• The results are similar to the 10- and 14-level cases.
Future work
[Diagram: combining an RL module (state, action, reward) with a goal-creation (GC) module that produces goals (motivations)]
References:
• J. A. Starzyk, P. Raif, A.-H. Tan, "Motivated Learning as an Extension of Reinforcement Learning," Fourth International Conference on Cognitive Systems, CogSys 2010, ETH Zurich, January 2010.
• J. A. Starzyk, P. Raif, "Motivated Learning Based on Goal Creation in Cognitive Systems," Thirteenth International Conference on Cognitive and Neural Systems, Boston University, May 2009.
• J. A. Starzyk, "Motivation in Embodied Intelligence," Frontiers in Robotics, Automation and Control, I-Tech Education and Publishing, Oct. 2008, pp. 83-110.
Questions?