University Module Evaluation Questionnaire Impact Study

A process and impact evaluation conducted by Sheffield Hallam University in November 2021 assessed the effectiveness of Module Evaluation Questionnaires (MEQ) in enhancing the student experience and informing quality assurance and enhancement processes. Findings indicate varying response rates and the need for realistic expectations about student engagement if reliable data are to be collected.

  • University
  • Evaluation
  • Student Experience
  • Quality Assurance
  • Data

Presentation Transcript


  1. A Process and Impact Evaluation of a University's Module Evaluation Questionnaire (MEQ). Sheffield Hallam University Impact and Evaluation Group: November 2021. Alan Donnelly (Lecturer in Student Engagement, Evaluation and Research); Caroline Heaton (Senior Lecturer in Student Engagement, Evaluation and Research).

  2. Context: When they were created in 2014/15, Module Evaluation Questionnaires (MEQ) were anticipated to deliver several benefits, including: improving the University's understanding of its taught provision; improving the module review process by incorporating student feedback, to help the University identify areas needing improvement and validate actions taken previously; and developing the link between the student experience of the module and national measures of the course experience, such as the National Student Survey (NSS). In 2020/21, the Sheffield Hallam University Leadership Team commissioned a review to evaluate the processes and impact of the MEQ.

  3. Evaluation: Overview. The evaluation captured evidence about the implementation of processes and the impact of the initiative (Parsons, 2017). A range of evidence was gathered and drawn upon for this mixed-methods study: focus groups with 60 module leaders; reflections from teaching and learning (T&L) portfolio leads across 10 departments; interviews with 19 student course representatives; monitoring data; and relevant literature and research. A Task and Finish Group was established, formed of staff from across colleges and central teams, and of students, with an interest in: MEQ processes; the student experience or evaluation; teaching and learning; and the use of MEQ data within quality assurance and enhancement processes. The evaluation took place while courses and modules were being delivered using a blended approach (a combination of online and in-person delivery).

  4. Literature: Perceptions of MEQ purpose. Arthur (2020): relationships between academics, university managers and students.

  5. Findings: Response Rates. Monitoring data shows that response rates on modules in 2020/21 were predominantly between 15% and 20%, which led to some concerns about data reliability and the extent to which the data was fulfilling its intended purposes (quality assurance, quality enhancement). There is no one-size-fits-all minimum response rate for achieving reliable data (Oermann, Conklin & Rushton, 2018). Institutions may have unrealistic expectations about students' engagement with shaping their educational experience (Lowe & Bols, 2020).

  6. Findings: Design and Engagement. The standardised design of the MEQ was seen as a factor that restricted its potential usefulness: being able to remove or add specific questions at module level was suggested as a potential way of improving the questionnaires' relevance and alignment with disciplines. A standardised questionnaire can help establish links to the NSS (T&L portfolio staff), but its impact on improving practices is considered limited (Arthur, 2020; Borch, 2021). While there are practices that can promote engagement with the questionnaire, there is a limit to their effectiveness: low response rates were not solely indicative of a lack of promotion or effort by module leaders; online delivery of MEQs made it difficult for staff to personalise them and explain their purpose to students; and there were varied levels of motivation among student reps to complete the MEQs.

  7. Findings: Informing module enhancements. MEQs were used variably by staff in the evaluation of their practices, but their role in informing changes to modules was limited: qualitative comments were helpful for clarifying issues, but they could lack volume and be too broad to provide significant insight. Some staff and students perceived that the MEQ is more likely to be used to raise complaints than to identify positive aspects.

  8. Findings: Quality assurance. The role of MEQs in quality assurance varied across local areas: some processes were in place for MEQ responses to inform quality assurance processes (e.g., module review). There was recognition that current response rates had resulted in MEQs having a lower profile than in previous years.

  9. Findings: Responding back to students and students' comments. There was mixed evidence of module leaders responding back to students about MEQ responses: module leaders felt that there needed to be more realistic expectations about what modifications were possible from student feedback mechanisms, particularly given the challenges in responding back (e.g., the timing of MEQs at the end of modules, conflicting comments). The idea of providing students with support and guidance on the characteristics of effective feedback, to help improve the relevance, criticality and professionalism of their responses, was suggested by many staff participants. Pedagogical changes can take several iterations before achieving their potential (Rox et al., 2021), and changes should not be made in a reactive manner (Jones-Devitt & LeBihan, 2018).

  10. Recommendations. Explore the feasibility and potential benefits of allowing module leaders to add a small number of their own questions, or to remove irrelevant questions, to increase the MEQ's relevance to disciplines. Reduce the number of standardised core closed questions. Review the current survey schedule for modules that have a non-standard delivery (e.g., in relation to the flexibility of timings). Explore the feasibility of introducing a course-level questionnaire, within the existing platform, to replace MEQs. Alternatively, allow module/course teams to determine their own approach to module evaluation.

  11. Recommendations. Liaise with the existing platform supplier to explore current and future technological developments that could facilitate in-class participation. Continue to raise module leaders' awareness of the practices that encourage, but cannot guarantee, higher levels of student participation (e.g., dedicating in-class time to complete MEQs, discussing the uses of the data). Provide guidelines for all staff groups on the appropriate interpretation of MEQ data based on response rates, module sizes and other factors.

  12. Selected References*
  Arthur, L. (2020). Evaluating student satisfaction-restricting lecturer professionalism: outcomes of using the UK national student survey questionnaire for internal student evaluation of teaching. Assessment & Evaluation in Higher Education, 45(3), 331-344.
  Borch, I. H. (2021). Student evaluation practice: A qualitative study on how student evaluation of teaching, courses and programmes are carried out and used (Doctoral thesis, The Arctic University of Norway, Tromsø, Norway). Retrieved from https://munin.uit.no/bitstream/handle/10037/21920/thesis.pdf?sequence=2&isAllowed=y
  Clayson, D. E. (2020). A Comprehensive Critique of Student Evaluation of Teaching: Critical Perspectives on Validity, Reliability, and Impartiality. Routledge.
  Hornstein, H. A. (2017). Student evaluations of teaching are an inadequate assessment tool for evaluating faculty performance. Cogent Education, 4(1), 1304016.
  Jones-Devitt, S., & LeBihan, J. (2018). Use and abuse of the student voice. AdvanceHE. https://www.advance-he.ac.uk/knowledge-hub/use-and-abuse-student-voice
  Lowe, T., & Bols, A. (2020). Higher education institutions and policy makers: The future of student engagement. In A Handbook for Student Engagement in Higher Education (pp. 267-284). Routledge.
  Oermann, M. H., Conklin, J. L., Rushton, S., & Bush, M. A. (2018). Student evaluations of teaching (SET): Guidelines for their use. Nursing Forum, 53(3), 280-285.
  Scheepers, A. (2019). SET Project: Student Evaluations of Teaching - Measuring and Enhancing Course Quality and Teaching Quality.
  Shah, M., Cheng, M., & Fitzgerald, R. (2017). Closing the loop on student feedback: The case of Australian and Scottish universities. Higher Education, 74(1), 115-129.
  Stein, S. J., Goodchild, A., Moskal, A., Terry, S., & McDonald, J. (2021). Student perceptions of student evaluations: enabling student voice and meaningful engagement. Assessment & Evaluation in Higher Education, 46(6), 837-851.
  Varwell, S. (2018). Engaging Students in Online Distance Learning. Sparqs. https://www.sparqs.ac.uk/ch/ODL%20Guidance.pdf
  Varwell, S. (2021). Models for exploring partnership: Introducing sparqs' student partnership staircase as a reflective tool for staff and students. International Journal for Students as Partners, 5(1), 107-123.
  Wiley, C. (2019). Standardised module evaluation surveys in UK higher education: Establishing students' perspectives. Studies in Educational Evaluation, 61, 55-65.
  * Some references are listed that were used in the project but have not been directly cited in this presentation.
