Browsing by Organisation Author "Yavuz, Fatih"
- Publication: Enhancing reading strategies through literature selection in EFL classrooms (CV. Yudhistt Fateeh, 2024-04) Durak, Ayşegül; Yavuz, Fatih; 392831; 131069
  This study presents a detailed examination of how to enhance reading strategies with selected literary genres, using the library research method. In addition to library resources, information was gathered from various academic publications, articles and books; the data obtained were analysed and the results evaluated. The findings demonstrate that authentic texts enhance reading comprehension and foster learners' abilities. The study also identifies key strategies such as skimming, scanning, guessing, and distinguishing between implied and literal meaning. These techniques play a significant role in learners' ability to understand authentic texts. Drawing on theoretical frameworks, the study underscores the importance of integrating these strategies into the teaching of reading to foster deeper understanding and critical engagement with literature. The selected strategies and literary sources can thus effectively equip EFL learners to make sense of unfamiliar parts of complex texts, and navigating authentic texts facilitates language acquisition and proficiency improvement.
- Item: Utilizing large language models for EFL essay grading: an examination of reliability and validity in rubric-based assessments (Wiley, 2024-05) Yavuz, Fatih; Çelik, Özgür; Yavaş Çelik, Gamze; 131069
  This study investigates the validity and reliability of generative large language models (LLMs), specifically ChatGPT and Google's Bard, in grading student essays in higher education based on an analytical grading rubric. A total of 15 experienced English as a foreign language (EFL) instructors and two LLMs were asked to evaluate three student essays of varying quality. The grading scale comprised five domains: grammar, content, organization, style & expression, and mechanics. The results revealed that the fine-tuned ChatGPT model demonstrated a very high level of reliability with an intraclass correlation (ICC) score of 0.972, the default ChatGPT model exhibited an ICC score of 0.947, and Bard showed a substantial level of reliability with an ICC score of 0.919. Additionally, a significant overlap was observed in certain domains between the grades assigned by the LLMs and those of the human raters. In conclusion, the findings suggest that while the LLMs demonstrated notable consistency and potential for grading competency, further fine-tuning and adjustment are needed for a more nuanced understanding of non-objective essay criteria. The study not only offers insights into the potential use of LLMs in grading student essays but also highlights the need for continued development and research.
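The reliability figures reported in the abstract above are intraclass correlation coefficients. As an illustration only (not the study's own code), a two-way random-effects ICC(2,1) can be computed from a ratings matrix of essays by raters; the function name and sample scores below are hypothetical:

```python
def icc2_1(ratings):
    """ICC(2,1), two-way random effects, absolute agreement, single rater.

    ratings: list of lists; rows are targets (essays), columns are raters.
    """
    n = len(ratings)        # number of targets (essays)
    k = len(ratings[0])     # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

    # Two-way ANOVA sums of squares
    ss_total = sum((ratings[i][j] - grand) ** 2
                   for i in range(n) for j in range(k))
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = ss_total - ss_rows - ss_cols

    ms_r = ss_rows / (n - 1)                # between-targets mean square
    ms_c = ss_cols / (k - 1)                # between-raters mean square
    ms_e = ss_err / ((n - 1) * (k - 1))     # residual mean square

    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical rubric totals: 4 essays scored by 3 raters
scores = [[85, 82, 80], [60, 65, 58], [90, 88, 92], [70, 72, 69]]
print(round(icc2_1(scores), 3))
```

Raters who agree closely, as in the sample above, yield an ICC near 1; values above roughly 0.9 are conventionally read as excellent reliability, which is the sense in which the abstract interprets its 0.919-0.972 range.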