Utilizing large language models for EFL essay grading: an examination of reliability and validity in rubric-based assessments

Date
2024-05
Authors
Yavuz, Fatih
Çelik, Özgür
Yavaş Çelik, Gamze
Publisher
Wiley
Abstract
This study investigates the validity and reliability of generative large language models (LLMs), specifically ChatGPT and Google's Bard, in grading student essays in higher education based on an analytical grading rubric. A total of 15 experienced English as a foreign language (EFL) instructors and two LLMs were asked to evaluate three student essays of varying quality. The grading scale comprised five domains: grammar, content, organization, style & expression and mechanics. The results revealed that the fine-tuned ChatGPT model demonstrated a very high level of reliability with an intraclass correlation (ICC) score of 0.972, the default ChatGPT model exhibited an ICC score of 0.947, and Bard showed a substantial level of reliability with an ICC score of 0.919. Additionally, a significant overlap was observed in certain domains when comparing the grades assigned by LLMs and human raters. In conclusion, the findings suggest that while the LLMs demonstrated notable consistency and potential for grading competency, further fine-tuning and adjustment are needed for a more nuanced understanding of non-objective essay criteria. The study not only offers insights into the potential use of LLMs in grading student essays but also highlights the need for continued development and research.
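The reliability figures above are intraclass correlation coefficients. The abstract does not specify which ICC form the authors used, so as an illustrative sketch only (not the study's analysis), the common two-way random-effects, absolute-agreement, single-rater form ICC(2,1) can be computed from an essays × raters score matrix; the grades below are hypothetical:

```python
# Illustrative ICC(2,1): two-way random effects, absolute agreement,
# single rater. Input is an n-essays x k-raters matrix of scores.
# NOTE: the sample grades are hypothetical, not data from the study.

def icc_2_1(scores):
    n = len(scores)        # essays (rows)
    k = len(scores[0])     # raters (columns)
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]

    # Two-way ANOVA decomposition of total variability
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)    # essays
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)    # raters
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical grades: three essays, each scored by two raters.
print(round(icc_2_1([[80, 90], [60, 50], [90, 85]]), 3))  # prints 0.889
```

Because this is the absolute-agreement form, the rater mean-square term penalizes systematic leniency or severity differences between raters; consistency-type ICCs drop that term.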
Keywords
AI-based grading, automated essay scoring, generative AI, large language models, reliability, rubric-based grading, validity
Citation
Yavuz, F., Çelik, Ö., & Yavaş Çelik, G. (2024). Utilizing large language models for EFL essay grading: An examination of reliability and validity in rubric-based assessments. British Journal of Educational Technology, 00, 1–17. https://doi.org/10.1111/bjet.13494