Going Online: A simulated student approach for evaluating knowledge tracing in the context of mastery learning
Abstract: Intelligent Tutoring Systems (ITS) are widely applied in K-12 education to help students learn and master skills. Knowledge tracing algorithms are embedded within tutors to track what students know and do not know and to better focus their practice. While knowledge tracing models have been extensively studied in offline settings, very little work has explored their use in online settings. This is primarily because conducting experiments to evaluate and select knowledge tracing models in classroom settings is expensive. To fill this gap, we introduce a novel way of using machine-learning models to generate simulated students. We conduct experiments using agents generated by the Apprentice Learner (AL) Architecture to investigate the online use of different knowledge tracing models (Bayesian Knowledge Tracing, the Streak model, and Deep Knowledge Tracing). We were able to successfully A/B test these different approaches using simulated students. An analysis of our experimental results revealed an error in the initial implementation of one of our knowledge tracing models that was not identified in our previous work, suggesting that AL agents provide a practical means of evaluating knowledge tracing models prior to more costly classroom deployments. Additionally, our analysis found a positive correlation between the model parameters estimated from human data and those obtained from simulated learners. This finding suggests that it may be possible to initialize the parameters of knowledge tracing models using simulated data when no human-student data is yet available. Lastly, we discuss the limitations of Deep Knowledge Tracing and the possibility of optimizing this approach for better prediction in online settings.
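For readers unfamiliar with the first of the knowledge tracing models named above, the following is a minimal sketch of the standard Bayesian Knowledge Tracing update; the parameter values are illustrative placeholders, not the fitted values from this study.

```python
# Minimal sketch of a Bayesian Knowledge Tracing (BKT) update.
# Parameter values below are illustrative defaults, not fitted estimates.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """Return the updated mastery estimate after one observed response."""
    if correct:
        # Posterior probability of mastery given a correct response.
        posterior = (p_know * (1 - p_slip)) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        # Posterior probability of mastery given an incorrect response.
        posterior = (p_know * p_slip) / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Account for the chance of learning on this practice opportunity.
    return posterior + (1 - posterior) * p_learn

# Example: trace mastery over a short sequence of responses.
p = 0.25  # prior probability the skill is already known
for obs in [True, True, False, True]:
    p = bkt_update(p, obs)
print(round(p, 3))
```

In a mastery-learning tutor, practice on a skill typically continues until this estimate exceeds a threshold (commonly 0.95).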