Say What? Automatic Modeling of Collaborative Problem Solving Skills from Student Speech in the Wild
Samuel Pugh, Shree Krishna Subburaj, Arjun Ramesh Rao, Angela Stewart, Jessica Andrews-Todd, Sidney D'Mello
Jun 30, 2021 15:00 UTC+2
Session A3
Keywords: collaborative problem solving, natural language processing, collaborative interfaces
Abstract:
We investigated the feasibility of using automatic speech recognition (ASR) and natural language processing (NLP) to classify collaborative problem solving (CPS) skills from recorded speech in noisy environments. We analyzed data from 44 dyads of middle and high school students who used videoconferencing to collaboratively solve physics and math problems (35 and 9 dyads in classroom and lab environments, respectively). Trained coders identified seven cognitive and social CPS skills (e.g., sharing information) in 8,660 utterances. We used a state-of-the-art deep transfer learning approach for NLP, Bidirectional Encoder Representations from Transformers (BERT), with a special input representation enabling the model to analyze adjacent utterances for contextual cues. We achieved a micro-average AUROC score (across the seven CPS skills) of .80 using ASR transcripts, compared to .91 for human transcripts, indicating a decrease in performance attributable to ASR error. We found that the noisy school setting introduced additional ASR error, which reduced model performance (micro-average AUROC of .78) compared to the lab (AUROC = .83). We discuss implications for real-time CPS assessment and support in schools.
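The contextual input representation described in the abstract lends itself to a compact illustration. The sketch below is not the authors' code; it shows one plausible realization using the Hugging Face transformers library, where the previous and current utterances are encoded as a BERT sentence pair so adjacent utterances supply context, and a sigmoid output layer scores the seven CPS skills independently (multi-label classification). The model name, sequence length, and example utterances are illustrative assumptions.

```python
# Minimal sketch, assuming the Hugging Face transformers library (not the authors' code).
# The previous and current utterances are encoded as a BERT sentence pair
# ([CLS] previous [SEP] current [SEP]) so adjacent utterances provide contextual cues,
# and a sigmoid head scores the seven CPS skills independently. In practice this head
# would be fine-tuned on the human-coded utterances before use.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=7,                               # seven cognitive/social CPS skills
    problem_type="multi_label_classification",  # sigmoid + BCE rather than softmax
)
model.eval()

def score_utterance(previous_utterance: str, current_utterance: str) -> torch.Tensor:
    """Return one probability per CPS skill for the current utterance, given its context."""
    inputs = tokenizer(
        previous_utterance,
        current_utterance,
        truncation=True,
        max_length=128,     # illustrative cap on utterance-pair length
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**inputs).logits          # shape: (1, 7)
    return torch.sigmoid(logits).squeeze(0)      # per-skill probabilities

# Hypothetical ASR-style utterances (lowercase, unpunctuated), purely for illustration.
probs = score_utterance(
    "i think we need more resistance in the circuit",
    "yeah lets add another resistor and see what happens",
)
print(probs)
```

A micro-average AUROC across the seven skills, as reported in the abstract, could then be computed from stacked per-skill probabilities and binary labels with, for example, sklearn.metrics.roc_auc_score(labels, probs, average="micro").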