

  • Number of slides: 34

Annotating Student Emotional States in Spoken Tutoring Dialogues
Diane Litman and Kate Forbes-Riley
Learning Research and Development Center and Computer Science Department, University of Pittsburgh

Overview
• Corpora and Emotion Annotation Scheme: student emotional states in spoken tutoring dialogues
• Analyses:
  – our scheme is reliable in our domain
  – our emotion labels can be accurately predicted
• Motivation: incorporating emotional processing can decrease the performance gap between human and computer tutors (e.g. Coles, 1999; Aist et al., 2002)
• Goal: implementation of emotion prediction and adaptation in our computer tutoring spoken dialogue system to improve performance

Prior Research on Emotional Speech
• Actor- or Native-Read Speech Corpora (Polzin and Waibel 1998; Oudeyer 2002; Liscombe et al. 2003)
  – many emotions, multiple dimensions
  – acoustic/prosodic predictors
• Naturally-Occurring Speech Corpora (Litman et al. 2001; Ang et al. 2002; Lee et al. 2002; Batliner et al. 2003; Devillers et al. 2003; Shafran et al. 2003)
  – Kappas around 0.6; fewer emotions (e.g. E / -E)
  – acoustic/prosodic + additional predictors
• Few address the spoken tutoring domain

(Demo: Monday, 4:15 pm!)

Spoken Tutoring Corpora
• ITSPOKE Computer Tutoring Corpus: 105 dialogs (physics problems), 21 subjects
• Corresponding Human Tutoring Corpus: 128 dialogs (physics problems), 14 subjects
• Experimental Procedure:
  1) Students take a physics pretest
  2) Students read background material
  3) Students use the web and voice interface to work up to 10 physics problems with ITSPOKE or a human tutor
  4) Students take a post-test

Emotion Annotation Scheme for Student Turns in Spoken Tutoring Dialogs
• 'Emotion': emotions/attitudes that may impact learning
• Perceived, intuitive expressions of emotion
• Relative to other turns in context and tutoring task
• 3 Main Emotion Classes:
  – negative: strong expressions of e.g. uncertain, bored, irritated, confused, sad; question turns
  – positive: strong expressions of e.g. confident, enthusiastic
  – neutral: no strong expression of negative or positive emotion; grounding turns

Emotion Annotation Scheme for Student Turns in Spoken Tutoring Dialogs
• 3 Minor Classes:
  – weak negative: weak expressions of negative emotions
  – weak positive: weak expressions of positive emotions
  – mixed: strong expressions of positive and negative emotions; case 1) multi-utterance turns, case 2) simultaneous expressions
• Specific Emotion Labels: uncertain, confused, confident, enthusiastic, …

Annotated Dialog Excerpt: Human Tutoring Corpus
Tutor: Suppose you apply equal force by pushing them. Then uh what will happen to their motion?
Student: Um, the one that's heavier, uh, the acc-acceleration won't be as great. (WEAK NEGATIVE, UNCERTAIN)
Tutor: The one which is…
Student: Heavier (WEAK NEGATIVE, UNCERTAIN)
Tutor: Well, uh, is that your common.
Student: Er I'm sorry, the one with most mass. (POSITIVE, CONFIDENT)
Tutor: (lgh) Yeah, the one with more mass will- if you- if the mass is more and force is the same then which one will accelerate more?
Student: Which one will move more? (NEGATIVE, CONFUSED)

Analyses of Emotion Annotation Scheme
• 2 annotators: 10 human tutoring dialogs, 9 students, 453 student turns
• Machine-learning method in (Litman & Forbes, 2003) (HLT/NAACL'04: Tuesday, 2:20 pm)
  – learning algorithm: boosted decision trees
  – predictors: acoustic, prosodic, lexical, dialogue, and contextual features
• Analyses optimize annotation for:
  – inter-annotator reliability
  – predictability
  – use for constructing adaptive tutoring strategies to increase student learning

6 Analyses of Emotion Annotation
• 3 Levels of Annotation Granularity:
  – NPN: Negative, Positive, Neutral (Litman & Forbes, 2003)
  – NnN: Negative, Non-Negative (Lee et al., 2001); positives and neutrals are conflated as Non-Negative
  – EnE: Emotional, Non-Emotional (Batliner et al., 2000); negatives and positives are conflated as Emotional, neutrals are Non-Emotional
• 2 Possible Conflations of Minor Classes:
  – Minor Neutral: conflate minor and neutral classes
  – Weak Main: conflate weak with negative/positive; conflate mixed and neutral classes
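The granularities and conflations above amount to simple label mappings. A minimal sketch, assuming the six-label scheme from the preceding slides (the function and label names are mine, not from the paper):

```python
# Illustrative sketch (names are not the authors'): collapsing the six
# annotation labels into the granularities described above.

def minor_neutral(label):
    # Minor Neutral: weak and mixed labels are conflated with neutral
    return label if label in ("negative", "positive") else "neutral"

def weak_main(label):
    # Weak Main: weak labels join their main class; mixed joins neutral
    mapping = {"weak negative": "negative",
               "weak positive": "positive",
               "mixed": "neutral"}
    return mapping.get(label, label)

def to_NnN(label):
    # NnN: positives and neutrals are conflated as Non-Negative
    return "negative" if label == "negative" else "non-negative"

def to_EnE(label):
    # EnE: negatives and positives are Emotional; neutral is Non-Emotional
    return "non-emotional" if label == "neutral" else "emotional"

print(to_NnN(weak_main("weak negative")))  # negative
```

Crossing the two conflations with the three granularities gives the six analyses on the following slides.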

Analysis 1a: NPN Minor Neutral
• 385/453 agreed turns (84.99%, Kappa 0.68)

              Negative  Neutral  Positive
  Negative          90        6         4
  Neutral           23      280        30
  Positive           0        5        15

• Predictive accuracy: 84.75% (10x10 cross-validation)
• Baseline (majority = neutral) accuracy: 72.74%
• Relative improvement: 44.06%
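The agreement figures on this slide follow from the confusion matrix by the standard Cohen's Kappa computation (a generic check, not the authors' code):

```python
# Cohen's Kappa from the NPN Minor Neutral confusion matrix
# (rows: annotator 1, columns: annotator 2; Negative, Neutral, Positive).
matrix = [
    [90,   6,  4],   # Negative
    [23, 280, 30],   # Neutral
    [ 0,   5, 15],   # Positive
]
n = sum(sum(row) for row in matrix)                     # 453 turns
observed = sum(matrix[i][i] for i in range(3)) / n      # raw agreement
row_tot = [sum(row) for row in matrix]
col_tot = [sum(matrix[r][c] for r in range(3)) for c in range(3)]
expected = sum(row_tot[i] * col_tot[i] for i in range(3)) / n**2
kappa = (observed - expected) / (1 - expected)
print(f"{observed:.4f} {kappa:.2f}")  # 0.8499 0.68
```

The same computation reproduces the agreement and Kappa values on the other analysis slides.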

Analysis 2a: NnN Minor Neutral
• 420/453 agreed turns (92.72%, Kappa 0.80)

                  Negative  Non-Negative
  Negative              90            10
  Non-Negative          23           330

• Predictive accuracy: 86.83% (10x10 cross-validation)
• Baseline (majority = Non-Negative) accuracy: 78.57%
• Relative improvement: 38.54%

Analysis 3b: EnE Weak Main
• 350/453 agreed turns (77.26%, Kappa 0.55)

                   Emotional  Non-Emotional
  Emotional              169             19
  Non-Emotional           84            181

• Predictive accuracy: 86.14% (10x10 cross-validation)
• Baseline (majority = Non-Emotional) accuracy: 51.71%
• Relative improvement: 71.30%

Summary of the 6 Analyses
• Tradeoff: reliability, predictability, annotation granularity

                        KAPPA   ACCURACY   BASELINE   REL. IMP.
  minor neutral   NPN    0.68    84.75%     72.74%     44.06%
                  NnN    0.80    86.83%     78.57%     38.54%
                  EnE    0.67    85.07%     71.98%     46.72%
  weak main       NPN    0.60    79.29%     53.24%     55.71%
                  NnN    0.74    82.94%     72.21%     38.61%
                  EnE    0.55    86.14%     51.71%     71.30%
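The REL. IMP. column is consistent with relative error reduction over the majority-class baseline, i.e. (accuracy − baseline) / (100 − baseline). The sketch below (my check, not the authors' code) verifies every row of the table against that formula:

```python
# Relative improvement as error reduction over the majority baseline
# (all values in percent).
def rel_improvement(accuracy, baseline):
    return 100 * (accuracy - baseline) / (100 - baseline)

# (analysis, accuracy %, baseline %, reported rel. imp. %) from the table
rows = [
    ("minor neutral NPN", 84.75, 72.74, 44.06),
    ("minor neutral NnN", 86.83, 78.57, 38.54),
    ("minor neutral EnE", 85.07, 71.98, 46.72),
    ("weak main NPN",     79.29, 53.24, 55.71),
    ("weak main NnN",     82.94, 72.21, 38.61),
    ("weak main EnE",     86.14, 51.71, 71.30),
]
for name, acc, base, reported in rows:
    # each reported value matches the formula to rounding precision
    assert abs(rel_improvement(acc, base) - reported) < 0.01, name
```

This explains why EnE weak main shows the largest relative improvement despite a middling Kappa: its majority baseline (51.71%) leaves the most room to improve.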

Extensions to the 6 Analyses: Consensus Labeling
• Ang et al., 2002: consensus labeling increases the data set to include the difficult student turns
• The original annotators revisit disagreements and through discussion try to reach a consensus label
• Consensus: 445/453 turns (98.23%, 8 discarded)
• Machine-learning results: predictive accuracy decreases across the 6 analyses, but remains better than baseline

Extensions to the 6 Analyses: Including Minor Emotion Classes
• Only the last 5 dialogs were fully annotated for Minor Classes
• 142/211 agreed turns (67.30%, Kappa 0.54)

           neg  w.neg  neu  w.pos  pos  mixed
  neg       48      2    0      0    0      2
  w.neg      6     10    3      2    2      0
  neu        2     11   70     22    3      3
  w.pos      0      1    1      9    2      0
  pos        0      0    1      1    1      0
  mixed      1      1    2      1    0      4

Extensions to the 6 Analyses: Specific Emotion Labels
• Only the last 5 dialogs were fully annotated
• 66 turns agreed negative (weak or strong)
  – 45/66 also agreed on the specific negative label (of 5): uncertain > confused > bored, sad, irritated (68.18%, Kappa 0.41)
• 13 turns agreed positive (weak or strong)
  – 13/13 also agreed on the specific positive label (of 2): confident > enthusiastic (100%, Kappa 1.0)

ITSPOKE Computer Tutoring Corpus
ITSPOKE: What else do you need to know to find the box's acceleration?
Student: the direction (NEGATIVE, UNCERTAIN)
ASR: add directions
ITSPOKE: If you see a body accelerate, what caused that acceleration?
Student: force (POSITIVE, CONFIDENT)
ASR: force
ITSPOKE: Good job. Say there is only one force acting on the box. How is this force, the box's mass, and its acceleration related?
Student: velocity (NEGATIVE, UNCERTAIN)
ASR: velocity
ITSPOKE: Could you please repeat that?
Student: velocity (NEGATIVE, IRRITATED)
ASR: velocity

ITSPOKE Computer Tutoring Corpus
• Differences from the human tutoring corpus make annotation and prediction more difficult
• Computer inflexibility limits emotion expression and recognition: shorter student turns, no groundings, no questions, no problem references, no student initiative, …

ITSPOKE Computer Tutoring Corpus
• (Litman & Forbes-Riley, ACL'04): 333 turns, 15 dialogs, 10 subjects
• Best reliability and predictability: NnN, weak main
  – 78% agreed turns (Kappa 0.5)
  – 73% accuracy (RI 36%): subset of predictors
• Predictability: add log features, word-level features
• Reliability: strength disagreements across the 6 classes can often be viewed as shifted scales
  [Figure: three example turns placed on the Neg – weak Neg – Neu – weak Pos – Pos scale, with annotators A and B one step apart on each turn]

Conclusions and Current Directions
• Emotion annotation scheme is reliable and predictable in the human tutoring corpus
• Tradeoff between inter-annotator reliability, predictability, and annotation granularity
• ITSPOKE corpus shows differences that make annotation and prediction more difficult
• Next steps: 1) label human tutor reactions to the 6+ analyses of emotional student turns, 2) determine which analyses best trigger adaptation and improve learning, 3) develop adaptive strategies for ITSPOKE

Affective Computing Systems
• Emotions play a large role in human interaction (how we say it is as important as what we say) (Cowie et al., 2002; psychology, linguistics, biology)
  – Affective Computing: add emotional processing to spoken dialog systems to improve performance
  – Good adaptation requires good prediction: focus of current work (read or annotated natural speech)
• Emotion impacts learning, e.g. poor learning → negative emotions; negative emotions → poor learning (Coles, 1999; psychology studies)
  – Affective Tutoring: add emotional processing to computer tutoring systems (non-dialog, typed dialog, spoken dialog) to improve performance
  – Few yet annotate/predict/adapt to emotions in spoken dialogs

Prior Research: Affective Computer Tutoring
• (Kort, Reilly and Picard, 2001): propose a cyclical model of emotion change during learning; develop a non-dialog computer tutor that uses eye-tracking/facial features to predict emotion and support change to positive emotions.
• (Aist, Kort, Reilly, Mostow & Picard, 2002): adding human emotional scaffolding to an automated reading spoken dialog tutor increases student persistence.
• (Evens et al., 2002): CIRCSIM, a computer typed dialog tutor for physiology problems; hypothesize adaptive strategies for recognized student emotional states; e.g. if detecting frustration, the system should respond to hedges and self-deprecation by supplying praise and restructuring the problem.
• (de Vicente and Pain, 2002): use human observation of student motivation in videoed interaction with a non-dialog computer tutor to develop detection rules.
• (Ward and Tsukahara, 2003): a spoken dialog computer "tutor" uses prosodic and other features of the user turn (e.g. "on a roll", "lively", "in trouble") to infer an appropriate response as users recall train stations; preferred over randomly chosen acknowledgments (e.g. "yes", "right", "that's it").
• (Conati and Zhou, 2004): use Dynamic Bayesian Networks to reason under uncertainty about abstracted student knowledge and emotional states through time, based on student moves in a non-dialog computer game, and to guide selection of "tutor" responses.

Sub-Domain Emotion Annotation: Adaptation Information for ITSPOKE
• 3 Sub-Domains:
  – PHYS: emotions pertaining to the physics material being learned, e.g. uncertain whether "freefall" is the correct answer
  – TUT: emotions pertaining to the tutoring process (attitudes towards the tutor or being tutored), e.g. tired, bored with the tutoring session
  – NLP: emotions pertaining to ITSPOKE NLP processing, e.g. frustrated or amused by speech recognition errors
• PHYS = main/common strong emotions in the human tutoring corpus

Example Adaptation Strategies in ITSPOKE
• PHYS:
  – EnE: if Emotional, ask for a student contribution, e.g. "Are you ok so far?"
  – NnN: only respond to negative emotions, e.g. engage in a sub-dialog to solidify
  – NPN: respond to positives too, e.g. if positive and correct, move on
• NLP: if negative, apologize; redo sound check
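The strategies above can be read as a lookup from the predicted label to a tutor action. A purely illustrative Python sketch (the function, labels, and response strings are hypothetical, not part of ITSPOKE):

```python
# Hypothetical sketch (not the ITSPOKE implementation): dispatching an
# adaptation strategy from the predicted emotion label, per scheme.
def adapt(scheme, label, answer_correct=True):
    if scheme == "EnE" and label == "emotional":
        return "Are you ok so far?"            # invite student contribution
    if scheme == "NnN" and label == "negative":
        return "start remediation sub-dialog"  # respond to negatives only
    if scheme == "NPN":
        if label == "negative":
            return "start remediation sub-dialog"
        if label == "positive" and answer_correct:
            return "move on"                   # respond to positives too
    return "continue as planned"
```

The point of the sketch is the design tradeoff: coarser schemes (EnE, NnN) need fewer distinct strategies but give the tutor less to react to.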

Excerpt: Annotated Human-Human Spoken Tutoring Dialogue
Tut: The only thing asked is about the force whether the force uh earth pulls equally on sun or not that's the only question
Stud: Well I think it does but I don't know why I d-don't I do they move in the same direction I do-don't… (NEGATIVE, CONFUSED)
Tut: You see again you see they don't have to move. If a force acts on a body.
Stud: It- (WEAK POSITIVE, ENTHUSIASTIC)
Tut: It does not mean that uh uh I mean it will um.
Stud: If two forces um apply if two forces react on each other then the force is equal it's the Newton's third law (POSITIVE, CONFIDENT)
Tut: Um you see the uh actually in this case the motion is there but it is a little complicated motion this is orbital motion
Stud: Mm-hm (WEAK POSITIVE, ENTHUSIASTIC)
Tut: And uh just as.
Stud: This is the one where they don't touch each other that you were talking about before (MIXED, ENTHUSIASTIC + UNCERTAIN)
Tut: Yes just as earth orbits around sun
Stud: Mm-hm (NEUTRAL)

Wavesurfer (H-H Transcription &) Annotation

Perceived Emotion Cues (post-annotation)
• Negative cues: lexical expressions of uncertainty or confusion (questions, "I don't know"), disfluencies ("um", "I do-don't"), pausing, rising intonation, slow tempo
• Positive cues: lexical expressions of certainty or confidence ("right", "I know"), little pausing, loud speech, fast tempo
• Neutral cues: moderate tempo, loudness, pausing, etc., as well as lexical groundings ("mmhm", "ok")
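As a toy illustration of how such lexical cues could be flagged in a transcribed turn (the cue lists and function are my own, not the authors' feature extractor):

```python
# Toy sketch (not the paper's feature set): flagging the lexical cues
# listed above in a transcribed student turn.
NEGATIVE_CUES = ("i don't know", "um", "uh", "?")
POSITIVE_CUES = ("right", "i know")
GROUNDINGS = ("mmhm", "ok")

def lexical_cues(turn):
    t = turn.lower()
    return {
        "negative": [c for c in NEGATIVE_CUES if c in t],
        "positive": [c for c in POSITIVE_CUES if c in t],
        # groundings count only when the whole turn is the grounding word
        "grounding": [g for g in GROUNDINGS if t.strip(" .") == g],
    }

print(lexical_cues("Um, I don't know?")["negative"])
# ['i don't know', 'um', '?']
```

In the actual system, lexical features like these are combined with the acoustic/prosodic cues (pausing, intonation, tempo) listed above.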

Analysis 1b: NPN Weak Main
• 340/453 agreed turns (75.06%, Kappa 0.60)

              Negative  Neutral  Positive
  Negative         112        9         9
  Neutral           31      181        53
  Positive           0        5        47

• Predictive accuracy: 79.29% (10x10 cross-validation)
• Baseline (majority = neutral) accuracy: 53.24%
• Relative improvement: 55.71%

Analysis 2b: NnN Weak Main
• 403/453 agreed turns (88.96%, Kappa 0.74)

                  Negative  Non-Negative
  Negative             112            18
  Non-Negative          32           291

• Predictive accuracy: 82.94% (10x10 cross-validation)
• Baseline (majority = Non-Negative) accuracy: 72.21%
• Relative improvement: 38.61%

Analysis 3a: EnE Minor Neutral
• 389/453 agreed turns (85.87%, Kappa 0.67)

                   Emotional  Non-Emotional
  Emotional              109             11
  Non-Emotional           53            280

• Predictive accuracy: 85.07% (10x10 cross-validation)
• Baseline (majority = Non-Emotional) accuracy: 71.98%
• Relative improvement: 46.72%

Analysis 5: Consensus Labeling
• 445/453 consensus turns (98.23%, 8 discarded)
• Label distributions over the 445 consensus turns:

                 minor neutral               weak main
  NPN   neg 99, neu 321, pos 25     neg 119, neu 265, pos 61
  NnN   neg 99, non-neg 346         neg 119, non-neg 326
  EnE   emo 124, non-emo 321        emo 180, non-emo 265

ITSPOKE: Intelligent Tutoring SPOKEn Dialogue System
• Back-end is the text-based Why2-Atlas tutorial dialogue system (VanLehn et al., 2002)
• Student speech is digitized from microphone input; Sphinx2 speech recognizer
• Tutor speech is played via headphones or speakers; Cepstral text-to-speech synthesizer

Annotated Dialog Excerpt: Human Tutoring Corpus
Tutor: Suppose you apply equal force by pushing them. Then uh what will happen to their motion?
Student: Um, the one that's heavier, uh, the acc-acceleration won't be as great. (NEGATIVE, UNCERTAIN)
Tutor: The one which is…
Student: Heavier (NEGATIVE, UNCERTAIN)
Tutor: Well, uh, is that your common.
Student: Er I'm sorry, the one with most mass. (POSITIVE, CONFIDENT)
Tutor: (lgh) Yeah, the one with more mass will- if you- if the mass is more and force is the same then which one will accelerate more?
Student: Which one will move more? (NEGATIVE, CONFUSED)
Tutor: Mm which one will accelerate more?
Student: The- the one with the least amount of mass (NEGATIVE, UNCERTAIN)