

  • Number of slides: 61

10 Multistrategy Learning. Prof. Gheorghe Tecuci, Learning Agents Laboratory, Computer Science Department, George Mason University. 2003, G. Tecuci, Learning Agents Laboratory.

Overview: Introduction; Combining EBL with Version Spaces; Induction over Unexplained; Guiding Induction by Domain Theory; Plausible Justification Trees; Research Issues; Basic references.

Multistrategy learning is concerned with developing learning agents that synergistically integrate two or more learning strategies in order to solve learning tasks that are beyond the capabilities of the individual learning strategies that are integrated.

Complementariness of learning strategies. Case Study: Inductive Learning vs Explanation-based Learning

                           Learning from examples   Explanation-based learning   Multistrategy learning
Examples needed            many                     one                          several
Knowledge needed           very little              complete knowledge           incomplete knowledge
Type of inference          induction                deduction                    induction and/or deduction
Effect on agent's behavior improves competence      improves efficiency          improves competence and/or efficiency

Multistrategy concept learning. The Learning Problem. Input: One or more positive and/or negative examples of a concept. Background Knowledge (Domain Theory): Weak, incomplete, partially incorrect, or complete. Goal: Learn a concept description characterizing the example(s) and consistent with the background knowledge by combining several learning strategies.

Multistrategy knowledge base refinement. The Learning Problem: Improve the knowledge base so that the Inference Engine solves (classifies) correctly the training examples. [Training Examples + Knowledge Base (DT) + Inference Engine] -> Multistrategy Knowledge Base Refinement -> [Improved Knowledge Base (DT) + Inference Engine]. Similar names: background knowledge / domain theory; knowledge base refinement / theory revision.

Types of theory errors (in a rule-based system). How would you call a KB where some positive examples are not explained (classified as positive)? Overly specific. Causes: an Additional Premise (width(x, small) & insulating(x) & shape(x, round) -> graspable(x)) or a Missing Rule (has-handle(x) -> graspable(x) absent from the KB). How would you call a KB where some negative examples are wrongly explained (classified as positive)? Overly general. Causes: a Missing Premise (insulating(x) -> graspable(x), with the width(x, small) premise dropped) or an Extra Rule (shape(x, round) -> graspable(x)). What is the effect of each error on the system's ability to classify graspable objects, or other objects that need to be graspable, such as cups? With an overly specific KB, proofs for some positive examples cannot be built (e.g. positive examples that are not round); with an overly general KB, proofs can be built for some negative examples (e.g. negative examples that are round, or are insulating).
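The two error types above can be made concrete with a toy sketch. This is a hypothetical Python encoding (the attribute names and example objects are illustrative, not from the slides): the "correct" theory is has-handle(x) or [width(x, small) & insulating(x)], and each faulty variant misclassifies one of the examples.

```python
# Hypothetical encoding of the graspable(x) theory and its faulty variants.

def graspable_correct(x):
    # has-handle(x) OR [width(x, small) AND insulating(x)]
    return x["has_handle"] or (x["width_small"] and x["insulating"])

def graspable_overly_specific(x):
    # additional premise: also requires shape(x, round)
    return x["has_handle"] or (x["width_small"] and x["insulating"] and x["round"])

def graspable_overly_general(x):
    # missing premise: the width(x, small) condition was dropped
    return x["has_handle"] or x["insulating"]

# Illustrative objects: a cup (positive) and a wide insulating pan (negative)
cup = {"has_handle": False, "width_small": True, "insulating": True, "round": False}
pan = {"has_handle": False, "width_small": False, "insulating": True, "round": False}

assert graspable_correct(cup) and not graspable_correct(pan)
# Overly specific KB: a proof for the positive example cannot be built
assert not graspable_overly_specific(cup)
# Overly general KB: a proof for the negative example can be built
assert graspable_overly_general(pan)
```

The asserts mirror the slide's question: the overly specific theory misses a true positive, while the overly general one wrongly explains a negative.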

Overview: Introduction; Combining EBL with Version Spaces; Induction over Unexplained; Guiding Induction by Domain Theory; Plausible Justification Trees; Research Issues; Basic references.

EBL-VS: Combining EBL with Version Spaces • Apply explanation-based learning to generalize the positive and the negative examples. • Replace each example that has been generalized with its generalization. • Apply the version space method to the new set of examples. Produce an abstract illustration of this algorithm.
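The three steps above can be sketched in a minimal Python toy. This is an assumption-laden illustration, not Hirsh's implementation: the EBL step is stubbed as "keep only the attributes the domain theory can explain", and the version-space step is reduced to computing the specific boundary (Find-S) and checking it against the negatives.

```python
# Minimal sketch of the EBL-VS idea over hypothetical attribute-value examples.

def ebl_generalize(example, explainable_attrs):
    """Stub for the EBL step: keep only attributes the (assumed) domain
    theory can explain; generalize the rest to '?' (don't-care)."""
    return {a: (v if a in explainable_attrs else "?")
            for a, v in example.items()}

def find_s(positives):
    """Specific boundary: most specific conjunction covering all positives."""
    h = dict(positives[0])
    for ex in positives[1:]:
        for a in h:
            if h[a] != "?" and h[a] != ex.get(a, "?"):
                h[a] = "?"  # generalize the mismatching attribute
    return h

def covers(h, ex):
    return all(v == "?" or ex.get(a) == v for a, v in h.items())

# Toy "cup" examples (hypothetical attributes)
pos = [{"light": "yes", "handle": "yes", "color": "red"},
       {"light": "yes", "handle": "yes", "color": "blue"}]
neg = [{"light": "no", "handle": "yes", "color": "red"}]

# Step 1+2: replace each example with its EBL generalization; here the
# theory is assumed to explain 'light' and 'handle' but not 'color'
gen_pos = [ebl_generalize(e, {"light", "handle"}) for e in pos]

# Step 3: version-space pass on the transformed examples
h = find_s(gen_pos)
assert all(covers(h, e) for e in gen_pos)
assert not any(covers(h, e) for e in neg)
print(h)  # -> {'light': 'yes', 'handle': 'yes', 'color': '?'}
```

Note how the EBL step lets a single positive example already discard the irrelevant `color` attribute, which pure induction would need more examples to eliminate.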

EBL-VS features • Apply explanation-based learning to generalize the positive and the negative examples. • Replace each example that has been generalized with its generalization. • Apply the version space method to the new set of examples. Justify the following EBL-VS feature, considering several cases: • Learns from positive and negative examples

EBL-VS features • Apply explanation-based learning to generalize the positive and the negative examples. • Replace each example that has been generalized with its generalization. • Apply the version space method to the new set of examples. Justify the following EBL-VS feature: • Can learn with an incomplete background knowledge

EBL-VS features • Apply explanation-based learning to generalize the positive and the negative examples. • Replace each example that has been generalized with its generalization. • Apply the version space method to the new set of examples. Justify the following EBL-VS feature: • Can learn with different amounts of knowledge, from knowledge-free to knowledge-rich

EBL-VS features summary and references • Learns from positive and negative examples • Can learn with an incomplete background knowledge • Can learn with different amounts of knowledge, from knowledge-free to knowledge-rich. References: Hirsh, H., "Combining Empirical and Analytical Learning with Version Spaces," in Proc. of the Sixth International Workshop on Machine Learning, A. M. Segre (Ed.), Cornell University, Ithaca, New York, June 26-27, 1989. Hirsh, H., "Incremental Version-Space Merging," in Proceedings of the 7th International Machine Learning Conference, B. W. Porter and R. J. Mooney (Eds.), Austin, TX, 1990.

Overview: Introduction; Combining EBL with Version Spaces; Induction over Unexplained; Guiding Induction by Domain Theory; Plausible Justification Trees; Research Issues; Basic references.

IOU: Induction Over Unexplained. Justify the following limitation of EBL-VS: Limitation of EBL-VS • Assumes that at least one generalization of an example is correct and complete. IOU • Knowledge base could be incomplete but correct: - the explanation-based generalization of an example may be incomplete; - the knowledge base may explain negative examples. • Learns concepts with both explainable and conventional aspects

IOU method • Apply explanation-based learning to generalize each positive example • Disjunctively combine these generalizations (this is the explanatory component Ce) • Disregard negative examples not satisfying Ce and remove the features mentioned in Ce from all the examples • Apply empirical inductive learning to determine a generalization of the reduced set of simplified examples (this is the non-explanatory component Cn). The learned concept is Ce & Cn.
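The IOU loop can be traced in a small sketch. This is a hypothetical boolean-feature encoding loosely modeled on the cup illustration that follows: the EBL step is stubbed as a fixed set of explained features, and the empirical step simply keeps the non-explained features that separate the reduced positives from the kept negatives.

```python
# Sketch of IOU over hypothetical boolean features.

EXPLAINED = {"flat_bottom", "light", "up_concave"}  # assumed EBL output (Ce features)

def Ce(ex):
    """Explanatory component: conjunction of the explained features."""
    return all(ex[f] for f in EXPLAINED)

def reduce(ex):
    """Remove the features mentioned in Ce from an example."""
    return {k: v for k, v in ex.items() if k not in EXPLAINED}

pos = [{"flat_bottom": 1, "light": 1, "up_concave": 1, "volume_small": 1},
       {"flat_bottom": 1, "light": 1, "up_concave": 1, "volume_small": 1}]
neg = [{"flat_bottom": 1, "light": 1, "up_concave": 1, "volume_small": 0},  # mug-like
       {"flat_bottom": 0, "light": 1, "up_concave": 1, "volume_small": 1}]  # fails Ce

# Disregard negatives not satisfying Ce
neg_kept = [e for e in neg if Ce(e)]

# Empirical step on the reduced examples: features true in every reduced
# positive but falsified by at least one kept negative
cand = set.intersection(*[{k for k, v in reduce(e).items() if v} for e in pos])
Cn = {f for f in cand if not all(e[f] for e in neg_kept)}

def concept(ex):
    """The learned concept is Ce & Cn."""
    return Ce(ex) and all(ex[f] for f in Cn)

assert Cn == {"volume_small"}          # mirrors the slides' Cn = volume(x, small)
assert all(concept(e) for e in pos)
assert not any(concept(e) for e in neg)
```

The second negative example is excluded by Ce alone (as Can1 is in the illustration), so only the mug-like negative constrains the induced Cn.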

IOU: illustration. Positive examples of cups: Cup1, Cup2. Negative examples: Shot-Glass1, Mug1, Can1. Domain Theory: incomplete - contains a definition of a generalization of the concept to be learned (e.g. contains a definition of drinking vessels but no definition of cups). Ce = has-flat-bottom(x) & light(x) & up-concave(x) & {[width(x, small) & insulating(x)] v has-handle(x)}. Ce covers Cup1, Cup2, Shot-Glass1, Mug1 but not Can1. Cn = volume(x, small). Cn covers Cup1, Cup2 but not Shot-Glass1, Mug1. C = Ce & Cn. Mooney, R. J. and Ourston, D., "Induction Over Unexplained: Integrated Learning of Concepts with Both Explainable and Conventional Aspects," in Proc. of the Sixth International Workshop on Machine Learning, A. M. Segre (Ed.), Cornell University, Ithaca, New York, June 26-27, 1989.

Overview: Introduction; Combining EBL with Version Spaces; Induction over Unexplained; Guiding Induction by Domain Theory; Plausible Justification Trees; Research Issues; Basic references.

Enigma: Guiding Induction by Domain Theory. Justify the following limitations of IOU: Limitations of IOU • Knowledge base rules have to be correct • Examples have to be noise-free. ENIGMA • Knowledge base rules could be partially incorrect • Examples may be noisy

Enigma: method. Trades off the use of knowledge base rules against the coverage of examples: • Successively specialize the abstract definition D of the concept to be learned by applying KB rules • Whenever a specialization of the definition D contains operational predicates, compare it with the examples to identify the covered and the uncovered ones • Decide between performing: - a KB-based deductive specialization of D - an example-based inductive modification of D. The learned concept is a disjunction of leaves of the specialization tree built.
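The deductive half of this loop can be sketched as rule-driven specialization of an abstract definition into operational leaves, scored against the examples. This hypothetical encoding covers only the KB-based specialization branch; Enigma's example-based inductive modification of leaves (and its noise handling) is omitted for brevity.

```python
# Sketch of Enigma's deductive specialization (hypothetical encoding).
# Nonoperational predicates are rewritten via KB rules; fully operational
# leaves are kept only if they cover positives and no negatives.

RULES = {  # nonoperational head -> alternative operational bodies
    "Liftable": [{"light", "has_handle"}],
    "Stable": [{"has_flat_bottom"}, {"body_above_support"}],
    "Open_vessel": [{"up_concave"}],
}

def expand(definition):
    """All fully operational specializations of the abstract definition."""
    leaves = [set()]
    for pred in definition:
        bodies = RULES.get(pred, [{pred}])
        leaves = [leaf | body for leaf in leaves for body in bodies]
    return leaves

def covers(leaf, example):
    return leaf <= example  # every required operational feature holds

pos = [{"light", "has_handle", "has_flat_bottom", "up_concave"},
       {"light", "has_handle", "body_above_support", "up_concave"}]
neg = [{"light", "has_flat_bottom", "up_concave"}]  # no handle

# Specialize Cup(x) <- Liftable(x) & Stable(x) & Open-vessel(x), keep good leaves;
# the learned concept is the disjunction of the kept leaves
learned = [leaf for leaf in expand(["Liftable", "Stable", "Open_vessel"])
           if any(covers(leaf, e) for e in pos)
           and not any(covers(leaf, e) for e in neg)]

assert all(any(covers(l, e) for l in learned) for e in pos)
assert not any(covers(l, e) for l in learned for e in neg)
```

In the full method, a leaf that covers examples badly would trigger the inductive branch instead of (or in addition to) further deduction.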

Enigma: illustration. Examples (4 positive, 4 negative). Positive example 4 (p4): light(o4) & support(o4, b) & body(o4, a) & above(a, b) & up-concave(o4) -> Cup(o4). Background Knowledge: Liftable(x) & Stable(x) & Open-vessel(x) -> Cup(x); light(x) & has-handle(x) -> Liftable(x); has-flat-bottom(x) -> Stable(x); body(x, y) & support(x, z) & above(y, z) -> Stable(x); up-concave(x) -> Open-vessel(x). KB: - partly overly specific (explains only p1 and p2) - partly overly general (explains n3). Operational predicates start with a lower-case letter.

Enigma: illustration (cont.). Classification is based only on operational features: (to cover p3, p4) (to uncover n2, n3).

Learned concept. light(x) & has-flat-bottom(x) & has-small-bottom(x) -> Cup(x) (covers p1, p3). light(x) & body(x, y) & support(x, z) & above(y, z) & up-concave(x) -> Cup(x) (covers p2, p4).

Application • Diagnosis of faults in electro-mechanical devices through an analysis of their vibrations • 209 examples and 6 classes • Typical example: 20 to 60 noisy measurements taken in different points and conditions of the device • A learned rule: IF the shaft rotating frequency is w0 and the harmonic at w0 has high intensity and the harmonic at 2w0 has high intensity in at least two measurements THEN the example is an instance of C1 (problems in the joint), C4 (basement distortion) or C5 (unbalance)

Application (cont.). Comparison between the KB learned by ENIGMA and the hand-coded KB of the expert system MEPS. Bergadano, F., Giordana, A. and Saitta, L., "Automated Concept Acquisition in Noisy Environments," IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(4), pp. 555-577, 1988. Bergadano, F., Giordana, A., Saitta, L., De Marchi, D. and Brancadori, F., "Integrated Learning in a Real Domain," in B. W. Porter and R. J. Mooney (Eds.), Proceedings of the 7th International Machine Learning Conference, Austin, TX, 1990. Bergadano, F. and Giordana, A., "Guiding Induction with Domain Theories," in Machine Learning: An Artificial Intelligence Approach, Volume 3, Y. Kodratoff and R. S. Michalski (Eds.), San Mateo, CA, Morgan Kaufmann, 1990.

Overview: Introduction; Combining EBL with Version Spaces; Induction over Unexplained; Guiding Induction by Domain Theory; Plausible Justification Trees; Research Issues; Basic references.

MTL-JT: Multistrategy Task-adaptive Learning based on Plausible Justification Trees • Deep integration of learning strategies: integration of the elementary inferences that are employed by the single-strategy learning methods (e.g. deduction, analogy, empirical inductive prediction, abduction, deductive generalization, inductive specialization, analogy-based generalization). • Dynamic integration of learning strategies: the order and the type of the integrated strategies depend on the relationship between the input information, the background knowledge and the learning goal. • Different types of input (e.g. facts, concept examples, problem solving episodes) • Different types of knowledge pieces in the knowledge base (e.g. facts, examples, implicative relationships, plausible determinations)

MTL-JT: assumptions. Input: • correct (noise free) • one or several examples, facts, or problem solving episodes. Knowledge Base: • incomplete and/or partially incorrect • may include a variety of knowledge types (facts, examples, implicative or causal relationships, hierarchies, etc.). Learning Goal: • extend, update and/or improve the knowledge base so as to integrate new input information

Plausible justification tree. A plausible justification tree is like a proof tree, except that some of the individual inference steps are deductive, while others are nondeductive or only plausible (e.g. analogical, abductive, inductive).

Learning method • For the first positive example I1: - build a plausible justification tree T of I1 - build the plausible generalization Tu of T - generalize the KB to entail Tu • For each new positive example Ii: - generalize Tu so as to cover a plausible justification tree of Ii - generalize the KB to entail the new Tu • For each new negative example Ii: - specialize Tu so as not to cover any plausible justification of Ii - specialize the KB to entail the new Tu without entailing the previous Tu • Learn different concept definitions: - extract different concept definitions from the general justification tree Tu

MTL-JT: illustration from Geography. Knowledge Base. Facts: terrain(Philippines, flat), rainfall(Philippines, heavy), water-in-soil(Philippines, high). Examples (of fertile soil): soil(Greece, red-soil) -> soil(Greece, fertile-soil); terrain(Egypt, flat) & soil(Egypt, red-soil) -> soil(Egypt, fertile-soil). Plausible determination: rainfall(x, y) >= water-in-soil(x, z). Deductive rules: soil(x, loamy) -> soil(x, fertile-soil); climate(x, subtropical) -> temperature(x, warm); climate(x, tropical) -> temperature(x, warm); water-in-soil(x, high) & temperature(x, warm) & soil(x, fertile-soil) -> grows(x, rice).

Positive and negative examples of "grows(x, rice)". Positive Example 1: rainfall(Thailand, heavy) & climate(Thailand, tropical) & soil(Thailand, red-soil) & terrain(Thailand, flat) & location(Thailand, SE-Asia) -> grows(Thailand, rice). Positive Example 2: rainfall(Pakistan, heavy) & climate(Pakistan, subtropical) & soil(Pakistan, loamy) & terrain(Pakistan, flat) & location(Pakistan, SW-Asia) -> grows(Pakistan, rice). Negative Example 3: rainfall(Jamaica, heavy) & climate(Jamaica, tropical) & soil(Jamaica, loamy) & terrain(Jamaica, abrupt) & location(Jamaica, Central-America) -> ¬grows(Jamaica, rice).

Build a plausible justification of the first example. Example 1: rainfall(Thailand, heavy) & soil(Thailand, red-soil) & terrain(Thailand, flat) & location(Thailand, SE-Asia) & climate(Thailand, tropical) -> grows(Thailand, rice).

Build a plausible justification of the first example (cont.). Justify the inferences from the above tree: analogy. Facts: terrain(Philippines, flat), rainfall(Philippines, heavy), water-in-soil(Philippines, high). Plausible determination: rainfall(x, y) >= water-in-soil(x, z).

Build a plausible justification of the first example (cont.). Justify the inferences from the above tree: deduction. Deductive rules: soil(x, loamy) -> soil(x, fertile-soil); climate(x, subtropical) -> temperature(x, warm); climate(x, tropical) -> temperature(x, warm); water-in-soil(x, high) & temperature(x, warm) & soil(x, fertile-soil) -> grows(x, rice).

Build a plausible justification of the first example (cont.). Justify the inferences from the above tree: inductive prediction & abduction. Examples (of fertile soil): soil(Greece, red-soil) -> soil(Greece, fertile-soil); terrain(Egypt, flat) & soil(Egypt, red-soil) -> soil(Egypt, fertile-soil).
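The mixed-inference chain built above for Example 1 can be traced in a toy script. This is a hypothetical encoding of the slides' facts and rules; each step records which kind of elementary inference produced it (analogy via the determination, inductive prediction from the fertile-soil examples, deduction via the KB rules).

```python
# Toy trace of the plausible justification of grows(Thailand, rice).
facts = {("rainfall", "Thailand", "heavy"),
         ("climate", "Thailand", "tropical"),
         ("soil", "Thailand", "red-soil"),
         ("rainfall", "Philippines", "heavy"),
         ("water-in-soil", "Philippines", "high")}
steps = []

# Analogy via the determination rainfall(x, y) >= water-in-soil(x, z):
# Thailand matches Philippines on rainfall, so predict the same water-in-soil.
if (("rainfall", "Thailand", "heavy") in facts
        and ("rainfall", "Philippines", "heavy") in facts):
    facts.add(("water-in-soil", "Thailand", "high"))
    steps.append("analogy")

# Inductive prediction from the fertile-soil examples: red-soil -> fertile-soil.
if ("soil", "Thailand", "red-soil") in facts:
    facts.add(("soil", "Thailand", "fertile-soil"))
    steps.append("induction")

# Deduction with the KB rules.
if ("climate", "Thailand", "tropical") in facts:
    facts.add(("temperature", "Thailand", "warm"))
    steps.append("deduction")
if {("water-in-soil", "Thailand", "high"),
    ("temperature", "Thailand", "warm"),
    ("soil", "Thailand", "fertile-soil")} <= facts:
    facts.add(("grows", "Thailand", "rice"))
    steps.append("deduction")

assert ("grows", "Thailand", "rice") in facts
print(steps)  # -> ['analogy', 'induction', 'deduction', 'deduction']
```

The point of the trace is that no single inference type suffices: removing the analogy or the induction step breaks the final deduction, which is exactly why the justification tree is only "plausible".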

Multitype generalization.

Multitype generalization. Justify the generalizations from the above tree: generalization based on analogy. Facts: terrain(Philippines, flat), rainfall(Philippines, heavy), water-in-soil(Philippines, high). Plausible determination: rainfall(x, y) >= water-in-soil(x, z).

Multitype generalization. Justify the generalizations from the above tree: inductive generalization. Examples (of fertile soil): soil(Greece, red-soil) -> soil(Greece, fertile-soil); terrain(Egypt, flat) & soil(Egypt, red-soil) -> soil(Egypt, fertile-soil).

Build the plausible generalization Tu of T.

Positive example 2. Instance of the current Tu corresponding to Example 2. Plausible justification tree T2 of Example 2:

Positive example 2. The explanation structure S2: The new Tu:

Negative example 3. Instance of Tu corresponding to the Negative Example 3: The new Tu:

The plausible generalization tree corresponding to the three input examples.

Learned knowledge. New facts: water-in-soil(Thailand, high), water-in-soil(Pakistan, high). Why is it reasonable to consider these facts to be true?

Learned knowledge. New plausible rule: soil(x, red-soil) -> soil(x, fertile-soil). Examples (of fertile soil): soil(Greece, red-soil) -> soil(Greece, fertile-soil); terrain(Egypt, flat) & soil(Egypt, red-soil) -> soil(Egypt, fertile-soil).

Learned knowledge. Specialized plausible determination: rainfall(x, y) & terrain(x, flat) >= water-in-soil(x, z). Facts: terrain(Philippines, flat), rainfall(Philippines, heavy), water-in-soil(Philippines, high). Positive Example 1: rainfall(Thailand, heavy) & climate(Thailand, tropical) & soil(Thailand, red-soil) & terrain(Thailand, flat) & location(Thailand, SE-Asia) -> grows(Thailand, rice). Positive Example 2: rainfall(Pakistan, heavy) & climate(Pakistan, subtropical) & soil(Pakistan, loamy) & terrain(Pakistan, flat) & location(Pakistan, SW-Asia) -> grows(Pakistan, rice). Negative Example 3: rainfall(Jamaica, heavy) & climate(Jamaica, tropical) & soil(Jamaica, loamy) & terrain(Jamaica, abrupt) & location(Jamaica, Central-America) -> ¬grows(Jamaica, rice).

Learned knowledge: concept definitions. Operational definition of "grows(x, rice)": rainfall(x, heavy) & terrain(x, flat) & [climate(x, tropical) v climate(x, subtropical)] & [soil(x, red-soil) v soil(x, loamy)] -> grows(x, rice). Abstract definition of "grows(x, rice)": water-in-soil(x, high) & temperature(x, warm) & soil(x, fertile-soil) -> grows(x, rice).
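The learned operational definition can be checked directly against the three training examples. A minimal sketch (hypothetical dictionary encoding of the examples from the earlier slides):

```python
# The operational definition of grows(x, rice) learned by MTL-JT.
def grows_rice(ex):
    return (ex["rainfall"] == "heavy" and ex["terrain"] == "flat"
            and ex["climate"] in ("tropical", "subtropical")
            and ex["soil"] in ("red-soil", "loamy"))

thailand = {"rainfall": "heavy", "climate": "tropical",
            "soil": "red-soil", "terrain": "flat"}       # Positive Example 1
pakistan = {"rainfall": "heavy", "climate": "subtropical",
            "soil": "loamy", "terrain": "flat"}          # Positive Example 2
jamaica  = {"rainfall": "heavy", "climate": "tropical",
            "soil": "loamy", "terrain": "abrupt"}        # Negative Example 3

assert grows_rice(thailand) and grows_rice(pakistan)
assert not grows_rice(jamaica)   # excluded by the terrain(x, flat) condition
```

Note that it is the terrain(x, flat) conjunct, added when specializing against the negative example, that keeps Jamaica out.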

Learned knowledge: example abstraction. Abstraction of Example 1: water-in-soil(Thailand, high) & temperature(Thailand, warm) & soil(Thailand, fertile-soil) -> grows(Thailand, rice).

Features of the MTL-JT method and reference • Is general and extensible • Integrates dynamically different elementary inferences • Uses different types of generalizations • Is able to learn from different types of input • Is able to learn different types of knowledge • Exhibits synergistic behavior • May behave as any of the integrated strategies. Tecuci, G., "An Inference-Based Framework for Multistrategy Learning," in Machine Learning: A Multistrategy Approach, Volume 4, R. S. Michalski and G. Tecuci (Eds.), San Mateo, CA, Morgan Kaufmann, 1994.

Features of the MTL-JT method. Justify the following feature: • Integrates dynamically different elementary inferences

Features of the MTL-JT method. Justify the following feature: • May behave as any of the integrated strategies. What strategies should we consider for the presented illustration of MTL-PJT? Explanation-based learning; Multiple-example explanation-based learning; Learning by abduction; Learning by analogy; Inductive learning from examples.

MTL-JT as explanation-based learning. Assume the KB would contain the knowledge: ∀x, rainfall(x, heavy) -> water-in-soil(x, high); ∀x, soil(x, red-soil) -> soil(x, fertile-soil).

MTL-JT as abductive learning. Assume the KB would contain the knowledge: ∀x, rainfall(x, heavy) -> water-in-soil(x, high).

MTL-JT as inductive learning from examples.

MTL-JT as analogical learning. Let us suppose that the KB contains only the following knowledge that is related to Example 1: Facts: rainfall(Philippines, heavy), water-in-soil(Philippines, high). Determination: rainfall(x, y) --> water-in-soil(x, z). Then the system can only infer that "water-in-soil(Thailand, high)", by analogy with "water-in-soil(Philippines, high)". In this case, the MTL-JT method reduces to analogical learning.

Overview: Introduction; Combining EBL with Version Spaces; Induction over Unexplained; Guiding Induction by Domain Theory; Plausible Justification Trees; Research Issues; Basic references.

Research issues in multistrategy learning • Comparisons of learning strategies • New ways of integrating learning strategies • Synergistic integration of a wide range of learning strategies • The representation and use of learning goals in multistrategy systems • Dealing with incomplete or noisy examples • Evaluation of the certainty of the learned knowledge • General frameworks for multistrategy learning • More comprehensive theories of learning • Investigation of human learning as multistrategy learning • Integration of multistrategy learning and knowledge acquisition • Integration of multistrategy learning and problem solving

Exercise. Compare the following learning strategies: - Rote learning - Inductive learning from examples - Explanation-based learning - Abductive learning - Analogical learning - Instance-based learning - Case-based learning, from the point of view of their input, background knowledge, type of inferences performed, and effect on the system's performance.

Exercise. Identify general frameworks for multistrategy learning, based on the multistrategy learning methods presented.

Basic references • Proceedings of the International Conferences on Machine Learning, ICML-87, … , ICML-04, Morgan Kaufmann, San Mateo, 1987-2004. • Proceedings of the International Workshops on Multistrategy Learning, MSL-91, MSL-93, MSL-96, MSL-98. • Special Issue on Multistrategy Learning, Machine Learning Journal, 1993. • Special Issue on Multistrategy Learning, Informatica, vol. 17, no. 4, 1993. • Machine Learning: A Multistrategy Approach, Volume IV, Michalski R. S. and Tecuci G. (Eds.), Morgan Kaufmann, San Mateo, 1994.