Enhancing Small Group Consensus Discussions in Academic Settings

Explore the framework and tasks involved in criteria-driven small group consensus discussions at Newcastle University. Discover the key elements candidates must demonstrate, the conceptual assumptions guiding the discussions, and the processes involved in assessing spoken interaction for differential performance. Gain insights into listener behaviors, collaborative points, task management, and criteria development in group discussions.

  • Group Discussions
  • Criteria-Driven Tasks
  • Newcastle University
  • Academic Settings
  • Spoken Interaction


Presentation Transcript


  1. Small group consensus discussion tasks: CA-driven criteria. Chris Heady, INTO Newcastle University

  2. Task example

  3. What is the task? Candidates must:
     1. Give their views
     2. Be able to respond to the task and the materials
     3. Be able to respond appropriately to others in the group
     4. Discuss their views and the views of others
     5. Work towards a consensus (not necessarily achieve one?)
     6. Show they can work collaboratively
     7. Be able to listen actively to others
     8. Show they can make useful, relevant and sensible points
     9. And more...

  4. Conceptual assumptions
     • Construct: small group study discussions (in class or outside)
     • Weir (2005) model of test validity: criterion, scoring, consequential
     • Learning on the Foundation programme
     • Learning in School(s) at Newcastle University
     • Co-construction, cf. IELTS, FCE, CAE?
     • University co-construction is commonplace

  5. Process 1: dissertation
     • "Features of spoken interaction in a peer-group oral English test and evidence of differential performance"
     • Foundation Architecture EAP module
     • Video recordings, middle vs upper band: 5.0-5.5 vs 6.5-8.0
     • CA transcription and analysis = criterial features which support evidence of discrimination

  6. Summary
     Upper band (6.5-8.0):
     • Listenership (McCarthy, 2003): clarifications, confirmations, back-channelling, overlaps, turn completion, comments
     • Mix of long and short turns
     • Able to pick up and develop other group members' points from turn to turn and across turns
     • Collaborative points (Galaczi, 2008)
     • Complexity of points
     Middle band (5.0-5.5):
     • Listenership: fewer overlaps, back-channelling, rare turn completions
     • Shorter turns
     • Responds to others but little development of points
     • Agrees and disagrees: responding (but not always developing)
     • Separate opinions = parallel (Galaczi, 2008)
     • Some task management

  7. Process 2: criteria development
     • Collaborative process: what should we rate? What can we rate?
     • Watch, observe, identify and reflect: narrowing of features
     • Consensus: balance between interactional features and linguistic features
     • Trialled with old and new sets of criteria (three examiners)
     • Standardisation and on-task moderation
     • All assessments video-recorded for EE consideration
     • User guide and student-facing guide

  8. Criteria v1

  9. Rater feedback
     • Concerns about listenership and task preparation
     • Language effectiveness sometimes problematic
     • Positive about ease of use and a movement away from adverbs!
     • Positive about use of the criteria as a teaching aid
     • "Listen and respond is an excellent addition and really differentiates this assessment from the presentation"
     • "It disadvantages our higher-level students who really strive to reach top marks of 90. They dislike ending with a lower score than the one they came in with at entry."
     • Accuracy of language is missing (links with the point above)
     • Further development: more clarification needed for assessors on specialist or topic vocabulary
     • Some disagreement amongst teachers over the overlapping/finishing turns section
     • The language aspects can be difficult to keep track of, e.g. emphatic language and longer noun phrases

  10. Opportunities / limitations
     • Multimodality
     • Paralinguistic features
     • Washback
     • Consequential validity evidence collection
     • Task achievement?
     • Reference outside the test context?
     • Scoring debates: should turn completion over-ride others?
     • Can students prepare for backchannelling, etc.?

  11. Very selected bibliography
     • Bachman, L. (1990) Fundamental Considerations in Language Testing. Oxford: Oxford University Press
     • Bonk, W. J. and J. G. Ockey (2003) A many-facet Rasch analysis of the second language group oral discussion task, Language Testing 20, 89-110
     • Brooks, L. (2009) Interacting in pairs in a test of oral proficiency: co-constructing a better performance, Language Testing 26:3, 341-366
     • Galaczi, E. (2008) Peer-peer interaction in a speaking test: the case of the First Certificate in English examination, Language Assessment Quarterly 5:2, 89-119
     • Gan, Z. (2010) Interaction in group oral assessment: a case study of higher- and lower-scoring students, Language Testing 27:4, 585-602
     • Lazaraton, A. (1998) An analysis of differences in linguistic features of candidates at different levels of the IELTS Speaking Test. Report prepared for the EFL Division, University of Cambridge Local Examinations Syndicate, Cambridge

  12. Very selected bibliography (continued)
     • McCarthy, M. (2003) Talking back: "small" interactional response tokens in everyday conversation, Research on Language and Social Interaction 36:1, 33-63
     • May, L. (2011) Interactional competence in a paired speaking test: features salient to raters, Language Assessment Quarterly 8:2, 127-145, published online at http://dx.doi.org/10.1080/15434303.2011.565845, accessed 06 June 2014
     • Seedhouse, P. (2012) What kind of interaction receives high and low ratings in Oral Proficiency Interviews? English Profile Journal, Volume 3, August 2012, available at http://journals.cambridge.org/EPJ, last accessed 23/05/14
     • Van Moere, A. (2006) Validity evidence in a university group oral test, Language Testing 23, 411, available at http://ltj.sagepub.com, last accessed 06/06/2014
     • Van Moere, A. and M. Kobayashi (2003) Who speaks most in this group? Does that matter? Paper presented at the Language Testing Research Colloquium
     • Weir, C. J. (2005) Language Testing and Validation. Basingstoke, UK: Palgrave Macmillan
