Abstract: Students can draw on a number of novel exercise types while learning Computer Science, including coding exercises, debugging tasks, output prediction, Parsons Puzzles, self-explanation, and even typing exercises. Each of these exercises requires a different level of complexity and interaction as outlined by the ICAP Framework: some are Interactive, like solving coding problems; Constructive, like explaining code; Active, like retyping source code; and Passive, like reviewing slides. However, there has been little research on which exercise types students practice and when they do so. In this study, we present our findings on student activity sequences from an online professional development course tailored to teaching non-traditional students Python in preparation for a larger AI/Data Science program. We isolated student activities into sessions, using 60-minute gaps between interactions to mark the end of a practice session. We then produced activity transition visualizations to compare completers and non-completers of the course. Finally, we used multiple factor analyses on these sessions to examine which exercises students interacted with and their `next activity' selections. Based on our findings, we confirm the presence of platform silos as introduced in prior work. Further, we extend this concept to activity silos based on the ICAP Framework, in which students focus on only one ICAP mode of exercise during a session. To our surprise, non-completer sessions also showed platform silos and weak activity silos. Finally, we discuss our findings and how instructors and researchers may use this information to help students show persistence through practice.
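The gap-based sessionization described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the `split_sessions` function and the example log are hypothetical, and only the 60-minute gap threshold is taken from the abstract.

```python
from datetime import datetime, timedelta

def split_sessions(timestamps, gap=timedelta(minutes=60)):
    """Group sorted interaction timestamps into practice sessions.

    A new session starts whenever the time since the previous
    interaction exceeds `gap` (60 minutes, as in the study).
    """
    sessions = []
    current = []
    for t in sorted(timestamps):
        if current and t - current[-1] > gap:
            sessions.append(current)  # gap exceeded: close the session
            current = []
        current.append(t)
    if current:
        sessions.append(current)
    return sessions

# Hypothetical log: three interactions close together, then one over
# two hours later, which should start a second session.
log = [
    datetime(2024, 1, 1, 9, 0),
    datetime(2024, 1, 1, 9, 20),
    datetime(2024, 1, 1, 9, 45),
    datetime(2024, 1, 1, 12, 0),
]
print(len(split_sessions(log)))  # 2
```

Each resulting session can then be summarized by the exercise types it contains, which is the unit of analysis for the transition visualizations and factor analyses mentioned above.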