Utilizing large language models for EFL essay grading: an examination of reliability and validity in rubric-based assessments
dc.authorid | 0000-0003-2645-2710 | |
dc.contributor.author | Yavuz, Fatih | |
dc.contributor.author | Çelik, Özgür | |
dc.contributor.author | Yavaş Çelik, Gamze | |
dc.contributor.authorid | 131069 | |
dc.date.accessioned | 2024-06-27T08:37:02Z | |
dc.date.available | 2024-06-27T08:37:02Z | |
dc.date.issued | 2024-05 | |
dc.department | Rektörlüğe Bağlı Birimler, Ortak Dersler Koordinatörlüğü | |
dc.description.abstract | This study investigates the validity and reliability of generative large language models (LLMs), specifically ChatGPT and Google's Bard, in grading student essays in higher education based on an analytical grading rubric. A total of 15 experienced English as a foreign language (EFL) instructors and two LLMs were asked to evaluate three student essays of varying quality. The grading scale comprised five domains: grammar, content, organization, style & expression, and mechanics. The results revealed that the fine-tuned ChatGPT model demonstrated a very high level of reliability with an intraclass correlation coefficient (ICC) of 0.972, the default ChatGPT model exhibited an ICC of 0.947, and Bard showed a substantial level of reliability with an ICC of 0.919. Additionally, a significant overlap was observed in certain domains when comparing the grades assigned by LLMs and human raters. In conclusion, the findings suggest that while the LLMs demonstrated notable consistency and potential grading competency, further fine-tuning and adjustment are needed for a more nuanced understanding of non-objective essay criteria. The study not only offers insights into the potential use of LLMs in grading student essays but also highlights the need for continued development and research. | |
dc.description.sponsorship | TÜBİTAK | |
dc.identifier.citation | Yavuz, F., Çelik, Ö., & Yavaş Çelik, G. (2024). Utilizing large language models for EFL essay grading: An examination of reliability and validity in rubric-based assessments. British Journal of Educational Technology, 00, 1–17. https://doi.org/10.1111/bjet.13494 | |
dc.identifier.doi | 10.1111/bjet.13494 | |
dc.identifier.endpage | 17 | |
dc.identifier.issn | 0007-1013 | |
dc.identifier.startpage | 1 | |
dc.identifier.uri | https://dspace.mudanya.edu.tr/handle/20.500.14362/202 | |
dc.identifier.wos | WOS:001237843800001 | |
dc.identifier.wosquality | Q1 | |
dc.institutionauthor | Yavuz, Fatih | |
dc.language.iso | en | |
dc.publisher | Wiley | |
dc.relation.journal | British Journal of Educational Technology | |
dc.relation.publicationcategory | Makale- Uluslararası- Hakemli Dergi- Kurum Öğretim Elemanı | |
dc.rights | info:eu-repo/semantics/openAccess | |
dc.subject | AI-based grading | |
dc.subject | automated essay scoring | |
dc.subject | generative AI | |
dc.subject | large language models | |
dc.subject | reliability | |
dc.subject | rubric-based grading | |
dc.subject | validity | |
dc.title | Utilizing large language models for EFL essay grading: an examination of reliability and validity in rubric-based assessments | |
dc.type | Makale |
Files
Original bundle
- Name: Fatih Yavuz - Utilizing large language models for EFL essay grading An examination of.pdf
- Size: 1.76 MB
- Format: Adobe Portable Document Format
License bundle
- Name: license.txt
- Size: 1.8 KB
- Format: Item-specific license agreed to upon submission