A critical look at task-based learning research methodologies
Barry O'Sullivan
Centre for Research in Testing, Evaluation & Curriculum (CRTEC)
Roehampton University
Focus of this talk
- Briefly present an overview of the socio-cognitive approach to test validation
- Present results from three strands of research in language testing
- Make some observations on the implications of these findings for TB learning and testing
The Socio-Cognitive Approach
[Diagram: relationship between the Test Taker and the Test Task]
Strand 1 - O'Sullivan & others
- Explored the effect on candidate performance of affective reactions to variables associated with their interlocutor in an interactive task
- Variables included: age, gender, acquaintanceship, perceived language level, personality
Strand 1 - Results
- When variables are isolated, significant effects are observed
- When variables are explored in more complex designs, equally complex interactions are observed
- Variation in examiner behaviour and rater performance has also been reported
Strand 2 - Weir & Wu
- Explored test and task difficulty in monologic long-turn tasks
- Conducted in the context of a national, tape-mediated test in Taiwan
- Different (equivalent) test forms and task versions were administered
Strand 2 - Results
- Evidence that it is possible to generate equivalent test forms from a systematically described specification
- Evidence that there is variation to be found at the task level
- Operational procedures for generating equivalent test forms were suggested
Strand 3 - O'Sullivan, Weir & Horai
- Set out to explore how task difficulty is affected by manipulating performance parameters such as planning time and amount of speaking time
- The design called for four equivalent task versions
- The complexity of the study was multiplied by this requirement
Strand 3 - Establishing Equivalence
- Set of 9 "equivalent" tasks supplied by the test developer
- Qualitative review by stakeholders
- Performed by a group of 54 learners
- Reduced to 8 (one rejected; one modified)
- Scored by trained raters
- Review by an "expert panel" using an instrument based on Skehan (1996)
- Reduced to 4 'truly' equivalent task versions
Overview
- Strand 1 suggests that the interlocutor (and the examiner/rater) is a systematic, if not always predictable, variable in test task performance
- Strand 2 suggests that it is possible to create equivalent task-based tests, but this is a complex (& expensive) process
- Strand 3 suggests that it is even more difficult to create 'truly' equivalent monologic test tasks
Implications - Language Testing
- It appears to be possible to create operationally equivalent versions of task-based tests
- Generating evidence of test and task equivalence (& therefore validity) is not easy
- All test tasks should be specified in terms of the test-taker, the performance conditions, and the associated cognitive processing involved (i.e. from a socio-cognitive perspective)
Implications - TB research
- A review of the TB literature (& SLA literature) reveals little awareness of the importance of either the interlocutor or of task equivalence
- Researchers should consider these research strands when:
  - they explore interactive language
  - they rely on 'similar' or 'equivalent' versions
  - they try to define task difficulty
CONTACT
Dr Barry O'Sullivan
Centre for Research in Testing, Evaluation & Curriculum (CRTEC)
Erasmus House, Roehampton University
Roehampton Lane, London SW15 5PU, United Kingdom
Tel: +44 (0)20 8392 3348
b.osullivan@roehampton.ac.uk
barry.sullivan1@ntlworld.com