Knowing both when and where: Temporal-ASTNN for Early Prediction of Student Success in Novice Programming Tasks
Ye Mao, Yang Shi, Samiha Marwan, Thomas Price, Tiffany Barnes, Min Chi
Jun 30, 2021 19:30 UTC+2
—
Session C1
Keywords: Machine Learning, Early Prediction, Abstract Syntax Tree, Deep Learning, Program Classification
Abstract:
In this work, we present a fully data-driven approach, named Temporal-ASTNN, for early prediction of student success in open-ended programming tasks. Temporal-ASTNN combines a novel abstract syntax tree (AST)-based model, ASTNN, which captures the spatial nature of student programs, with Long Short-Term Memory (LSTM), which captures their temporal nature. For the spatial component, we focused on program classification (correct vs. incorrect) and compared the effectiveness of ASTNN against another state-of-the-art algorithm, Code2Vec, and other token-based models across two programming domains: iSnap and Java. For tasks involving both spatial and temporal data, we focused on early prediction of student success and compared Temporal-ASTNN against the original ASTNN and other baseline models. Our results show that Temporal-ASTNN achieves the best performance using only the first four minutes of data, and it continues to outperform the other models given only the first ten minutes of student sequences.
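The combination described in the abstract can be sketched at a high level: each program snapshot in a student's trace is encoded into a vector (the role ASTNN plays), and an LSTM consumes the sequence of snapshot embeddings to produce a success probability, which allows prediction from only the snapshots available so far (e.g., the first few minutes). The sketch below is purely illustrative and uses NumPy with random weights and a stand-in snapshot encoder; `encode_snapshot`, the dimensions, and all parameter names are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

IN, EMB, HID = 8, 6, 5  # assumed feature, embedding, and hidden sizes


def encode_snapshot(ast_vec, W_enc):
    # Stand-in for ASTNN: map a raw snapshot feature vector to an embedding.
    # (The real ASTNN encodes statement subtrees recursively and pools them.)
    return np.tanh(W_enc @ ast_vec)


def lstm_step(x, h, c, P):
    # One LSTM cell update over a single snapshot embedding x.
    z = P["W"] @ np.concatenate([x, h]) + P["b"]
    i, f, o, g = np.split(z, 4)
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)  # update cell state
    h = o * np.tanh(c)          # update hidden state
    return h, c


def predict_success(snapshots, W_enc, P, w_out):
    # Run the LSTM over however many snapshots are available so far
    # (e.g., only those from the first 4 minutes) and output P(success).
    h, c = np.zeros(HID), np.zeros(HID)
    for s in snapshots:
        h, c = lstm_step(encode_snapshot(s, W_enc), h, c, P)
    return 1.0 / (1.0 + np.exp(-(w_out @ h)))


# Random parameters purely for illustration (an actual model would be trained).
W_enc = rng.standard_normal((EMB, IN)) * 0.5
P = {"W": rng.standard_normal((4 * HID, EMB + HID)) * 0.5, "b": np.zeros(4 * HID)}
w_out = rng.standard_normal(HID) * 0.5

# A student's trace: one dummy feature vector per program snapshot.
trace = [rng.standard_normal(IN) for _ in range(10)]
p_early = predict_success(trace[:4], W_enc, P, w_out)  # early: first snapshots only
p_full = predict_success(trace, W_enc, P, w_out)       # full trace
print(f"early P(success)={p_early:.3f}, full P(success)={p_full:.3f}")
```

The key design point mirrored here is that the same model can be queried at any prefix of the trace, which is what makes early prediction possible without retraining per time window.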