Beyond Tutor Logs: Utilizing Sensor Data for Measuring Student Behavior
David G. Cooper
West Chester University
dcooper@wcupa.edu

ABSTRACT

This half-day tutorial will provide an overview of the ways sensors have been used to augment tutor logs, the leading-edge sensors and hardware available for use, and the open source software that can be modified to log detected behavioral features.

After the overview, participants will select one or more of the open source software packages to create a logging capability for facial expressions, vocal affective features, and/or webcam-based eye tracking. In addition, more advanced sensors such as the OAK-D camera, the Orbbec Astra Pro, and the ReSpeaker microphone array will be available to try out.

Advances in sensor data processing in recent years have yielded many options for detecting behavior in educational settings. However, using sensor data in education research adds a level of complexity and cost that may be daunting. This tutorial will provide participants with a foundation for successfully adding sensors to their research protocols.

Keywords

Sensors, Tutorial, Audio-Visual, Facial Features

1. DETAILS

The details are summarized below:

Length of Tutorial: half-day
Proposed Format: tutorial, hybrid
Expected target audience: anyone interested in using sensors
Previous version: Beyond Tutor Logs, EDM 2013

2. PROPOSED FORMAT

The tutorial will start in a lecture format with an overview of sensors, past work that uses them, and the current hardware and software available to the average researcher. The overview will take between half an hour and an hour, with time for questions.

After the overview, participants will break into groups to explore different software for processing video and audio, and to learn how to convert the processed sensor data into time-indexed activity data that can be processed alongside tutor logs or online activity logs such as Poll Everywhere responses, LMS quizzes, PrairieLearn quizzes, Runestone activities, or other activity sources. A minimal sketch of this alignment step appears below.
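As a simple illustration of what that alignment can look like, the following sketch merges a sensor-derived event log with a tutor log by timestamp using pandas. The file names, column names, and 2-second matching tolerance are hypothetical assumptions for illustration, not part of the tutorial materials.

```python
# Minimal sketch: align sensor-derived events with tutor log events by timestamp.
# "sensor_events.csv", "tutor_log.csv", and the column names are placeholders.
import pandas as pd

# Sensor output: one row per detected behavior, with a wall-clock timestamp.
sensors = pd.read_csv("sensor_events.csv", parse_dates=["timestamp"])
# Tutor log: one row per student action (hint request, answer attempt, etc.).
tutor = pd.read_csv("tutor_log.csv", parse_dates=["timestamp"])

# merge_asof requires both frames to be sorted on the key column.
sensors = sensors.sort_values("timestamp")
tutor = tutor.sort_values("timestamp")

# Attach the most recent sensor reading (within 2 seconds) to each tutor event.
aligned = pd.merge_asof(
    tutor,
    sensors,
    on="timestamp",
    direction="backward",
    tolerance=pd.Timedelta("2s"),
)
aligned.to_csv("aligned_events.csv", index=False)
```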

Each activity will have a worksheet describing the goals, a slide deck, and a video walkthrough of important steps.

The goal of each activity is to take an open source tool that extracts some behavior and add a logging capability that can be associated with the other data logs that are part of your study; a sketch of one such logging step follows the activity list below.

Proposed activities include:

  1. Extracting Facial Action Units with OpenFace 2.2 [2]
  2. Extracting basic emotions with the Human library [7]
  3. Segmenting pauses between speech using Praat [3]
  4. Making a custom annotation in ELAN [6]
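As a minimal sketch of the logging step for activity 3, the Python code below uses the praat-parselmouth bindings to Praat to find pauses from an intensity contour and writes them to a time-indexed CSV. The audio file name, the 40 dB silence threshold, and the 0.3 s minimum pause length are illustrative assumptions, not values from the tutorial worksheet.

```python
# Sketch of pause segmentation with praat-parselmouth; thresholds are assumed.
import csv
import parselmouth

snd = parselmouth.Sound("student_audio.wav")  # hypothetical classroom recording
intensity = snd.to_intensity()                # intensity contour in dB
times = intensity.xs()                        # frame times in seconds
values = intensity.values[0]                  # one dB value per frame

# Collect contiguous low-intensity stretches longer than the minimum duration.
pauses, start = [], None
for t, db in zip(times, values):
    if db < 40 and start is None:
        start = t                             # pause begins
    elif db >= 40 and start is not None:
        if t - start >= 0.3:
            pauses.append((start, t))         # keep pauses of at least 0.3 s
        start = None

# Write a time-indexed pause log that can be associated with other data logs.
with open("pause_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["pause_start_s", "pause_end_s"])
    writer.writerows(pauses)
```

Once the pause times are converted to the same clock as the other logs, this pause log can be merged with tutor or online activity logs using the timestamp-alignment approach sketched in the previous section.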

3. PLANS FOR SUPPORTING REMOTE ATTENDEES

Since the tutorial will be held in person, a fully hybrid session may be difficult, so a video of the introductory talk will be made available to remote participants. In addition, I can either set up breakout rooms for remote participants using my Zoom account or, if there is an EDM remote collaboration tool, that tool could be used.

Since participants will work through the practice activities in self-directed groups, the materials available to in-person participants (the worksheet, slide deck, and video walkthrough of important steps) will also be made available to online participants.

4. DESCRIPTION OF TUTORIAL CONTENT AND THEMES

The tutorial focuses on collecting data about activity, affective expressions, and behaviors that can be detected by various sensors. Since behavioral data are often hand coded, using sensors to help automate that process may be useful to any researcher collecting educational data in a classroom setting. Ideally, if more people know how to incorporate sensor data into their research, there will be richer open data sets for educational data mining.

The content will include a description of experiences using sensors as a source of behavior logs, recommendations about hardware and software that can be used for this purpose, and a methodical, incremental approach to adopting sensors that alleviates some of the risk and cost of incorporating them into the research process.

5. ABOUT THE TUTORIAL CHAIR

David G. Cooper, Ph.D., dcooper@wcupa.edu, is an Assistant Professor at West Chester University. David has spent over 20 years working with different types of sensor data. He was in charge of the integration software for the "Emotion Sensors Go to School" [1] project, where he deployed about 30 sets of 4 sensors to a classroom at a time in 3 different schools. He was instrumental in curating a crowd-sourced audio-visual data set [4] with labeled emotions that has been used by over 500 researchers over the last 7 years. Recently, David has been creating an audiovisual sensor platform with real-time annotating capabilities that works with a variety of low-cost audio and visual sensors for deployment in classrooms and other research environments where recording activity could be useful; this platform was recently demonstrated at SIGCSE TS 2023 [5]. One of David's classes focuses on the use of sensors for affect detection, and David has led graduate students to successful integration of sensors into their final projects for the class. His last tutorial on this subject was at EDM 2013, and since then David has gained experience with the latest low-cost hardware and open source software available for use in this type of research.

6. REFERENCES

  1. I. Arroyo, D. G. Cooper, W. Burleson, B. P. Woolf, K. Muldner, and R. Christopherson. Emotion sensors go to school. In Proceedings of the 2009 Conference on Artificial Intelligence in Education: Building Learning Systems That Care: From Knowledge Representation to Affective Modelling, pages 17–24, 2009.
  2. T. Baltrusaitis, A. Zadeh, Y. C. Lim, and L.-P. Morency. OpenFace 2.0: Facial behavior analysis toolkit. In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pages 59–66, 2018.
  3. P. Boersma and D. Weenink. Praat: Doing phonetics by computer. Computer program, 1992–2022.
  4. H. Cao, D. G. Cooper, M. K. Keutmann, R. C. Gur, A. Nenkova, and R. Verma. CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset. IEEE Transactions on Affective Computing, 5(4):377–390, 2014.
  5. D. G. Cooper. The Audiovisual Labeled Emotion (ALE) research platform. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 2, SIGCSE 2023, pages 1263–1263, New York, NY, USA, 2023. Association for Computing Machinery.
  6. Max Planck Institute for Psycholinguistics, The Language Archive. ELAN (version 6.7). Computer software, https://archive.mpi.nl/tla/elan, 2023.
  7. V. Mandic. Human library. https://github.com/vladmandic/human, 2024.