Integrating Deep Learning into an Automated Feedback Generation System for Automated Essay Scoring
Chang Lu, Maria Cutumisu
Jul 01, 2021 15:05 UTC+2
Session D2
Keywords: automated essay scoring, deep learning, feedback generation, assessment, machine learning, natural language processing
Abstract:
Digitalization and automation of test administration, score reporting, and feedback provision have the potential to benefit large-scale and formative assessments. Many studies on automated essay scoring (AES) and feedback generation systems have been published in the last decade, but few have connected AES and feedback generation within a unified framework. Recent advances in machine learning enable researchers to develop models that further explore the potential of automated assessment in education. This study makes the following contributions. First, it implements, compares, and contrasts three AES algorithms that combine word embeddings with deep learning models (CNN, LSTM, and Bi-LSTM). Second, it proposes a novel automated feedback generation algorithm based on Constrained Metropolis-Hastings Sampling (CMHS). Third, it builds a classifier that integrates AES and feedback generation into a systematic framework. Results show that (1) the AES algorithm outperforms state-of-the-art models in scoring accuracy and (2) the CMHS method generates semantically related feedback sentences. The findings support the feasibility of an automated system that combines essay scoring with feedback generation. Implications include the development of models that reveal linguistic features while achieving high scoring accuracy, as well as the creation of feedback corpora for generating more semantically related and sentiment-appropriate feedback.
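For illustration, the following is a minimal sketch of a word-embedding plus Bi-LSTM scoring model of the kind the abstract describes, assuming a Keras/TensorFlow stack; the vocabulary size, sequence length, embedding dimension, and layer widths are hypothetical placeholders, not the configurations evaluated in the study.

```python
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 20_000  # hypothetical vocabulary size
MAX_LEN = 500        # hypothetical maximum essay length in tokens
EMBED_DIM = 100      # hypothetical word-embedding dimension

model = keras.Sequential([
    layers.Input(shape=(MAX_LEN,)),           # one token-id sequence per essay
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),  # word-embedding lookup
    layers.Bidirectional(layers.LSTM(128)),   # Bi-LSTM essay encoder
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # essay score normalized to [0, 1]
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
```

The CNN and LSTM variants compared in the study would swap the Bidirectional(LSTM) encoder for a Conv1D/pooling stack or a unidirectional LSTM, respectively.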
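Likewise, a toy sketch of the Metropolis-Hastings editing loop underlying constrained sampling: word-level replace/insert/delete proposals are accepted with the MH ratio under a symmetric-proposal simplification. Here score_fn, VOCAB, and KEYWORDS are illustrative stand-ins for the study's language model, vocabulary, and constraint set, not its actual components.

```python
import math
import random

VOCAB = ["good", "clear", "structure", "argument", "evidence", "improve", "essay"]
KEYWORDS = {"essay", "improve"}  # hypothetical constraint: words that must remain


def score_fn(sentence):
    """Stand-in for a language-model likelihood: rewards sentences that
    keep every constraint keyword and stay near a target length."""
    if not KEYWORDS.issubset(sentence):
        return 1e-9  # near-zero score for violating the constraint
    return math.exp(-abs(len(sentence) - 6))


def propose(sentence):
    """Propose a local edit: replace, insert, or delete one word."""
    new = list(sentence)
    pos = random.randrange(len(new))
    action = random.choice(["replace", "insert", "delete"])
    if action == "replace":
        new[pos] = random.choice(VOCAB)
    elif action == "insert":
        new.insert(pos, random.choice(VOCAB))
    elif len(new) > 1:  # never delete the last remaining word
        del new[pos]
    return new


def cmhs(init, steps=1000):
    current = list(init)
    for _ in range(steps):
        candidate = propose(current)
        # MH acceptance ratio, treating the proposal as symmetric (a simplification)
        alpha = min(1.0, score_fn(candidate) / score_fn(current))
        if random.random() < alpha:
            current = candidate
    return current


print(" ".join(cmhs(["improve", "the", "essay"])))
```

In the full method, score_fn would be a neural language model over a feedback corpus, so accepted edits drift toward fluent, semantically related feedback sentences that still satisfy the constraints.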