Abstract
Providing detailed and timely feedback on student writing is one of the most persistent challenges in large ESL classrooms. At Ahmedabad University, more than 1,200 undergraduates annually complete a required advanced writing-intensive communication course in which each student is expected to receive two rounds of formative feedback on long-form essays. In practice, heavy workloads often result in delayed or uneven feedback, raising both efficiency and equity concerns. To address this, we developed and piloted the COM Essay Assessor, a rubric-aligned, Generative AI-powered feedback tool designed to operate in a human-in-the-loop mode. The tool generates first-pass feedback mapped directly to the course rubric, which instructors then review, edit, and share with students. Findings from the pilot suggest that the tool substantially reduces instructor workload, improves consistency across sections, and provides actionable, rubric-based feedback that students can use to improve their writing. This paper situates the tool within broader discussions of AI-assisted teaching, highlights its design principles, and reflects on its pedagogical and ethical implications. Limitations and future directions are also discussed, including rubric validation, API-based scalability, and integration into writing-intensive courses across disciplines.
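To make the described workflow concrete, the sketch below illustrates one way a rubric-aligned, human-in-the-loop feedback pass could be structured. This is a minimal illustration, not the COM Essay Assessor's actual implementation: the rubric criteria, the `call_llm` callable, and all function names are assumptions introduced here for clarity.

```python
# Minimal sketch of rubric-mapped, human-in-the-loop feedback generation.
# The rubric criteria and `call_llm` are illustrative placeholders, not the
# tool's real rubric or model interface.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CriterionFeedback:
    criterion: str
    draft_comment: str       # AI first-pass comment
    final_comment: str = ""  # instructor-approved version

# Hypothetical rubric categories; the course's actual rubric is not shown here.
RUBRIC = ["Thesis and argument", "Organization", "Evidence", "Language and mechanics"]

def first_pass_feedback(essay: str, call_llm: Callable[[str], str]) -> list[CriterionFeedback]:
    """Generate one comment per rubric criterion so every comment stays rubric-mapped."""
    feedback = []
    for criterion in RUBRIC:
        prompt = (
            f"Give formative feedback on the criterion '{criterion}' "
            f"for this student essay. Be specific and actionable.\n\n{essay}"
        )
        feedback.append(CriterionFeedback(criterion, call_llm(prompt)))
    return feedback

def instructor_review(items: list[CriterionFeedback]) -> list[CriterionFeedback]:
    """Human-in-the-loop step: the instructor accepts or edits each draft comment
    before anything is shared with the student."""
    for item in items:
        edited = input(f"[{item.criterion}] {item.draft_comment}\nEdit (blank = accept): ")
        item.final_comment = edited or item.draft_comment
    return items
```

Structuring the first pass as one comment per rubric criterion, with an explicit instructor-edit step before release, mirrors the two design commitments the abstract names: rubric alignment and human oversight.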
Presenters
Juhi Bansal
Co-Director, Centre for Learning Futures, Ahmedabad University, Gujarat, India
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
Keywords
Generative AI, Formative Feedback, ESL Writing, Rubric-Based Assessment, Human-In-The-Loop
