Text Representations of Math Tutorial Videos for Clustering, Retrieval, and Learning Gain Prediction
Pichayut Liamthong, Jacob Whitehill
Jun 30, 2021 20:40 UTC+2
—
Session PS1
—
Gather Town
Keywords: Open educational resources (OER), Crowdsourcing, Information Retrieval
Abstract:
With the goal of making vast collections of open educational resources (YouTube, Khan Academy, etc.) more useful to learners, we explored how automatically extractable text representations of math tutorial videos can help to categorize the videos, search through them for specific content, and predict the individual learning gains of students who watch them. In particular, (1) we devised novel text representations, based on the output of an automatic speech recognition system, that consider the frequency of different tokens (symbols, equations, etc.) as well as their proximity to each other in the transcript. These representations use either (a) a set of tokens that depends on a fixed library of math problems that the tutorial videos explain, or (b) a set of tokens that is library-agnostic. Unsupervised learning experiments, conducted on 208 videos that explain 18 math problems about logarithms, show that the clustering accuracy of our proposed methods reaches 85%, surpassing that of standard TF-IDF features (78% using log normalization). (2) In a video search setting, the proposed text features can significantly reduce the number of videos (up to 88% reduction on our dataset) and the amount of video time (up to 82%) that users need to spend looking for desired content in large video collections. Finally, (3) in an experiment on Mechanical Turk with n=541 participants, who watched a randomly assigned tutorial video between a pretest and a posttest, the text features and their multiplicative interactions with students' prior knowledge provide a statistically significant benefit to predicting individual learning gains.
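As context for the baseline mentioned in the abstract, the following is a minimal sketch of TF-IDF with log (sublinear) term-frequency normalization applied to tokenized transcripts. This is an illustration of the standard baseline the proposed features are compared against, not the authors' implementation; the function name and the exact weighting scheme (tf = 1 + log(count), idf = log(N/df)) are assumptions chosen from common TF-IDF conventions.

```python
import math
from collections import Counter

def tfidf_log(docs):
    """Compute TF-IDF vectors with log-normalized term frequency.

    Weighting (one common convention, assumed here):
        tf(t, d)  = 1 + log(count of t in d)
        idf(t)    = log(N / df(t)), where df(t) = number of docs containing t

    `docs` is a list of token lists (e.g., ASR transcript tokens).
    Returns one {token: weight} dict per document.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency: count each token once per doc
    vectors = []
    for doc in docs:
        counts = Counter(doc)
        vectors.append({
            tok: (1 + math.log(cnt)) * math.log(n / df[tok])
            for tok, cnt in counts.items()
        })
    return vectors
```

In a pipeline like the one described, these vectors would then be fed to an unsupervised clustering algorithm (e.g., k-means over the resulting sparse vectors) to group videos by the math problem they explain.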