Student Strategy Prediction using a Neuro-Symbolic Approach
Anup Shakya, Vasile Rus, Deepak Venugopal
Jul 02, 2021 20:30 UTC+2
—
Session I3
—
Keywords: Strategy Prediction, Neuro-Symbolic AI, Deep Networks, Markov Logic
Abstract:
Predicting student problem-solving strategies is a complex problem, but one that can significantly improve automated instruction systems, since knowing a learner's strategy allows the system to adapt or personalize instruction to that learner. While learning experts may be able to manually analyze small datasets to infer student strategies, this approach is infeasible for large datasets. We develop a Machine Learning model to predict strategies from student data consisting of discrete interaction steps. Deep Neural Network (DNN) based methods such as LSTMs are a natural fit for this task since the goal is to model sequential data. However, purely LSTM-based methods often have long convergence times on large datasets and, like several other DNN-based methods, are prone to overfitting. To address these issues, we develop a Neuro-Symbolic approach for strategy prediction, namely a model that combines the strengths of symbolic AI (which can encode domain knowledge) with those of DNNs. Specifically, we encode relationships in the data using Markov Logic and use symmetries among these relationships to train an LSTM more efficiently. In particular, we use an importance sampling approach in which we sample the training data such that, for clusters/groups of symmetrical instances (instances whose strategies are likely to be symmetric), we pick only representative samples for training the model instead of using the whole group. Further, since some groups may contain more diverse strategies than others, we adapt the importance weights based on previously observed samples. We run a detailed empirical evaluation on the publicly available KDD EDM challenge datasets from Mathia and show that, by exploiting symmetries, we can learn a model that is both scalable and accurate.
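To make the sampling idea in the abstract concrete, the sketch below illustrates cluster-based importance sampling with adaptive weights: instances are grouped into clusters of (assumed) symmetric strategies, only representatives are drawn from each cluster, and a cluster's weight is adjusted when its sampled instances turn out to contain more diverse strategies. This is a minimal illustration, not the authors' implementation; the names `cluster_of`, `diversity`, and the update rule are hypothetical stand-ins for whatever clustering (e.g., from Markov Logic symmetries) and diversity measure the actual method uses.

```python
import random
from collections import defaultdict

def sample_training_batch(instances, cluster_of, weights, batch_size, rng=random):
    """Draw a batch of representative instances, sampling clusters in
    proportion to their current importance weights (hypothetical sketch)."""
    clusters = defaultdict(list)
    for inst in instances:
        clusters[cluster_of(inst)].append(inst)

    cluster_ids = list(clusters)
    total = sum(weights.get(c, 1.0) for c in cluster_ids)
    batch = []
    for _ in range(batch_size):
        # Pick a cluster with probability proportional to its weight,
        # then pick one representative instance uniformly from it.
        r, acc = rng.random() * total, 0.0
        for c in cluster_ids:
            acc += weights.get(c, 1.0)
            if r <= acc:
                batch.append(rng.choice(clusters[c]))
                break
    return batch

def update_weights(weights, observed_batch, cluster_of, diversity, lr=0.1):
    """Adapt importance weights from previously observed samples: clusters
    whose sampled instances show more strategy diversity get larger weights,
    so they are sampled more heavily in later batches."""
    by_cluster = defaultdict(list)
    for inst in observed_batch:
        by_cluster[cluster_of(inst)].append(inst)
    for c, insts in by_cluster.items():
        weights[c] = (1 - lr) * weights.get(c, 1.0) + lr * diversity(insts)
    return weights
```

The point of the design is the one made in the abstract: if a cluster's strategies really are symmetric, a few representatives carry most of the training signal, so the LSTM sees far fewer (and less redundant) sequences per epoch; the adaptive weights guard against clusters that are less homogeneous than assumed.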