Analysis of stopping criteria for Bayesian Adaptive Mastery Assessment
Androniki Sapountzi, Sandjai Bhulai, I. Cornelisz, Chris Van Klaveren
Jul 02, 2021 14:10 UTC+2
—
Session PS2
—
Gather Town
Keywords: adaptive assessment, performance model, mastery criteria, stopping policy, individualization
Abstract:
Computer-based learning environments offer the potential for automatic adaptive assessments of student knowledge and personalized instructional policies. In prior work, we introduced an individualized Bayesian model to achieve this goal. Based on the observed response time and accuracy for each answered question, we illustrated that the model converges to the student's true knowledge level. In this paper, we extend the assessment model with a stopping policy that determines when the assessment can be terminated. We evaluate several criteria based on how performance measures change as questions are presented, such as the mean assessment level, the Kullback-Leibler divergence, and a statistical t-test. Student performance is simulated across different educational cases, taking into account its sensitivity to the prior belief about mastery. Our results indicate which criteria offer an efficient assessment and which can effectively handle wheel-spinning students. We further show that the model performs well when student knowledge changes during the assessment, thereby generalizing its effectiveness.
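To make the stopping idea concrete, the following is a minimal sketch of one of the criteria mentioned above: stopping once the Kullback-Leibler divergence between consecutive posterior beliefs over the student's knowledge level falls below a threshold. The discretized mastery grid, the Bernoulli likelihood, and the names `kl_divergence`, `should_stop`, `threshold`, and `patience` are illustrative assumptions, not the paper's exact model.

```python
import numpy as np


def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions on the same grid."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))


def should_stop(posteriors, threshold=1e-3, patience=3):
    """Stop once the KL divergence between consecutive posteriors stays
    below `threshold` for `patience` consecutive questions."""
    if len(posteriors) < patience + 1:
        return False
    recent = [kl_divergence(posteriors[-i], posteriors[-i - 1])
              for i in range(1, patience + 1)]
    return all(d < threshold for d in recent)


# Toy usage: a uniform prior over discretized mastery levels, updated with
# a simple Bernoulli likelihood after each simulated response.
grid = np.linspace(0, 1, 101)             # discretized mastery levels
belief = np.ones_like(grid) / len(grid)   # uniform prior belief
history = [belief]
stopped = False
for correct in [1, 1, 0, 1, 1, 1, 1, 1]:  # simulated response correctness
    likelihood = grid if correct else (1 - grid)
    belief = belief * likelihood
    belief = belief / belief.sum()
    history.append(belief)
    if should_stop(history):
        stopped = True
        break

print(f"Questions asked: {len(history) - 1}, stop criterion met: {stopped}, "
      f"posterior mean mastery: {np.sum(grid * belief):.3f}")
```

The same loop structure could be reused for the other criteria discussed in the paper, for example by comparing the change in the posterior mean assessment level or applying a t-test to recent posterior samples instead of the KL divergence.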