Computer-based testing systems increasingly provide learners with interactive dashboards that visualise performance and offer feedback. Prior work has shown that such dashboards can improve achievement and technology acceptance, yet we still know relatively little about how students actually use these tools and which patterns of interaction are associated with better learning outcomes and self-regulated learning (SRL) behaviours. This paper presents an xAPI-based process analytics study of an interactive test dashboard for Grade 8 physics. We extend a prior diagnosis-and-feedback mechanism by instrumenting the system to log fine-grained learner actions, including navigation events, item-level reviews, time-on-dashboard, and the sequence in which students address different types of diagnostic items. Using these logs, we identify distinct profiles of dashboard use and examine how they relate to learning gains and changes in physics self-efficacy, controlling for prior knowledge. The paper contributes (i) a generic event model and analytics pipeline for studying dashboard use in computer-based testing environments, (ii) empirical characterisation of typical and atypical dashboard use patterns, and (iii) evidence on how SRL-oriented behaviours—such as prioritising slow-wrong items and engaging in repeated review cycles—mediate the impact of dashboards on learning. The results suggest design principles for dashboards that not only present feedback, but also scaffold productive self-regulation during formative assessment.
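To make the logging concrete: the abstract's "fine-grained learner actions" would typically be recorded as xAPI statements. The sketch below shows one plausible shape for an item-level review event; the verb and extension IRIs, function name, and field choices are illustrative assumptions, not the paper's actual event model.

```python
import json

def make_review_statement(student_id, item_id, item_type, duration_s):
    """Build a hypothetical xAPI-style statement for an item-level
    review event on the test dashboard. IRIs are placeholders."""
    return {
        "actor": {"account": {"homePage": "https://example.org",
                              "name": student_id}},
        "verb": {"id": "http://id.tincanapi.com/verb/reviewed",
                 "display": {"en-US": "reviewed"}},
        "object": {"id": f"https://example.org/items/{item_id}",
                   "definition": {
                       "type": "http://adlnet.gov/expapi/activities/question"}},
        # Duration uses the ISO 8601 format xAPI expects; the item-type
        # extension (e.g. "slow-wrong") is an assumed custom key.
        "result": {"duration": f"PT{duration_s}S",
                   "extensions": {
                       "https://example.org/ext/item-type": item_type}},
    }

stmt = make_review_statement("s042", "phys-12", "slow-wrong", 37)
print(json.dumps(stmt, indent=2))
```

Statements of this form can be stored in any learning record store and then grouped per student to reconstruct review sequences and time-on-dashboard measures.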