Data distilled by educational data miners, after logging, but before data mining
(Notes from a discussion at the EDM workshop at ITS2006)

  • Idleness
  • Time breakdown by category
  • Common action sequences or patterns
  • Plausibility: filtering out improbable or meaningless sequences
  • Errors caused by putting the right answer in the wrong place
  • What information may have altered the student's actions
  • Number of previous encounters or actions in the current context/skill
  • Time, normalized (how much faster is this action than the same action performed by other students? see the first sketch below)
  • Number of past attempts and hint requests
  • Number of actions on an element, by number of people
  • Number of actions on the current problem step/skill, or recently
  • Heuristic guesses about actions (example: which skill to blame)
  • Comparison between the current action and the pre-test
  • Expected action / probability of the action
  • Predicted probability of knowing the skill (see the second sketch below)
  • Model-based assessments
  • Putting the student in context of the overall class or a group subdivision
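
As a concrete illustration (not part of the original workshop notes), the first sketch below shows one way a few of these features, normalized time plus counts of prior encounters and past hint requests, might be distilled from a raw action log with pandas; the log layout and column names (student, skill, action, duration_sec) are assumptions.

```python
import pandas as pd

# Hypothetical raw action log; a real one would come from tutor log files.
log = pd.DataFrame({
    "student":      ["s1", "s2", "s1", "s3", "s1", "s2"],
    "skill":        ["frac-add"] * 6,
    "action":       ["attempt", "attempt", "hint", "attempt", "attempt", "hint"],
    "duration_sec": [12.0, 30.0, 5.0, 18.0, 25.0, 7.0],
})
log["hint_request"] = log["action"].eq("hint").astype(int)

# Time, normalized: z-score of this action's duration against all students'
# durations on the same skill and action type (NaN if observed only once).
grp = log.groupby(["skill", "action"])["duration_sec"]
log["time_z"] = (log["duration_sec"] - grp.transform("mean")) / grp.transform("std")

# Number of previous encounters with the skill, and number of past hint
# requests on it, per student in log order (current row excluded).
per_student_skill = log.groupby(["student", "skill"])
log["prior_encounters"] = per_student_skill.cumcount()
log["past_hint_requests"] = (
    per_student_skill["hint_request"].cumsum() - log["hint_request"]
)

print(log[["student", "skill", "time_z", "prior_encounters", "past_hint_requests"]])
```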

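The second sketch covers "predicted probability of knowing the skill". One common way this is distilled in the EDM literature is a Bayesian Knowledge Tracing update; the parameter values below (p_learn, p_slip, p_guess) are purely illustrative and not taken from the notes.

```python
def bkt_update(p_know, correct, p_learn=0.1, p_slip=0.1, p_guess=0.2):
    """Return P(skill known) after observing one response on the skill."""
    if correct:
        # Posterior that the skill was already known, given a correct answer.
        posterior = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess
        )
    else:
        # Posterior that the skill was already known, given an incorrect answer.
        posterior = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess)
        )
    # Account for the chance of learning the skill at this opportunity.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # illustrative prior probability that the student knows the skill
for correct in [True, False, True, True]:
    p = bkt_update(p, correct)
    print(round(p, 3))
```
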
Is this list missing anything that you distill?
Post to EDM-DISCUSS (if you are already subscribed) or email the website maintainer to let us know!