Assessment for Learning MOOC: Updates
Virtual Performance Assessments: A New Frontier in Computer-Mediated Evaluation
The digital age offers powerful new possibilities for assessment, including real-time tracking of learning, adaptive tests that personalize difficulty, and multimodal or simulation-based tasks that allow students to demonstrate understanding in more authentic and creative ways. These innovations can provide richer insights into student progress and support more individualized learning. However, they also bring significant risks: increased surveillance and data-privacy concerns, algorithmic bias in automated scoring systems, unequal access to technology that can exacerbate existing inequities, and new challenges to academic integrity. Over-reliance on automated tools may also narrow what is valued in learning and reduce the role of human judgment. Balancing the potential benefits with careful attention to these dangers is essential for creating fair and meaningful assessments in a digital world.
One strong example of an innovative, computer-mediated assessment is Virtual Performance Assessments (VPAs) used in some science and engineering programs. These are immersive, simulation-based tasks in which students perform complex problem-solving activities inside a digital environment that mirrors real-world contexts.
A typical VPA places students in a virtual lab or field setting—such as investigating an ecological disturbance in a virtual forest or diagnosing a mechanical failure in a simulated engineering system. Students interact with tools, run experiments, collect data, manipulate variables, and draw conclusions. The assessment software records their actions, the sequence of decisions they make, their data analysis behaviors, and their final explanations or solutions. Instead of answering multiple-choice questions, students do the work of scientists or engineers in a structured digital space.
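To make the idea of process capture concrete, here is a minimal sketch of how a VPA platform might log a student's actions as timestamped events. The class names, action labels, and fields are illustrative assumptions, not the design of any real VPA system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class VPAEvent:
    """One recorded student action inside the simulation (hypothetical schema)."""
    student_id: str
    action: str                      # e.g. "set_variable", "run_experiment", "revise_hypothesis"
    details: dict[str, Any] = field(default_factory=dict)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class SessionLog:
    """Ordered record of everything a student did during one assessment session."""
    events: list[VPAEvent] = field(default_factory=list)

    def record(self, student_id: str, action: str, **details: Any) -> None:
        self.events.append(VPAEvent(student_id, action, details))

# Example: a student investigating an ecological disturbance in a virtual forest
log = SessionLog()
log.record("s001", "set_variable", name="rainfall_mm", value=40)
log.record("s001", "run_experiment", trial=1)
log.record("s001", "revise_hypothesis", text="Drought, not disease, explains the tree loss")
```

Because every action is stored in sequence, the log preserves not just the final answer but the path the student took to reach it.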
Analysis:
VPAs offer several advantages over traditional assessments. They measure applied skills—problem formulation, experimental design, data interpretation, and iterative decision-making—which are difficult to assess with paper tests. Because the environment is computer-mediated, the system can track nuanced process data: where students click, how they adjust parameters, and how they revise hypotheses when evidence changes. This produces a deeper understanding of how students think, not just what final answer they give. VPAs can also increase engagement by situating tasks in meaningful, story-driven contexts.
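The kind of process data described above can be summarized into simple behavioral indicators. The sketch below, with an entirely illustrative event list and action names, shows one way a teacher dashboard might count experiments run and hypothesis revisions from a session log.

```python
from collections import Counter

# Hypothetical event stream from one student's VPA session
events = [
    {"action": "set_variable", "param": "rainfall_mm"},
    {"action": "run_experiment"},
    {"action": "set_variable", "param": "rainfall_mm"},
    {"action": "run_experiment"},
    {"action": "revise_hypothesis"},
    {"action": "run_experiment"},
]

counts = Counter(e["action"] for e in events)

# Simple process indicators a dashboard might surface:
trials_run = counts["run_experiment"]      # how many experiments the student ran
revisions = counts["revise_hypothesis"]    # how often the student revised their hypothesis
```

Even crude counts like these reveal something a final answer cannot: whether a student experimented iteratively and responded to evidence, or guessed once and stopped.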
However, VPAs come with challenges. They require significant technological infrastructure and can disadvantage students with limited digital experience or lower-quality hardware. Scoring can be complex: capturing and interpreting process data demands sophisticated analytics, and algorithms risk embedding hidden biases about what counts as “effective” problem-solving. Designing high-quality simulations is costly, and teachers may need training to interpret the results. Despite these challenges, VPAs illustrate how computer mediation can expand assessment beyond recall-based testing toward richer, more authentic demonstrations of learning.

