Overview

Learning Experience Design (LxD) is about structuring our work so that learning comes not just from success but also from mistakes. In our Solitaire project and JavaScript OOP coding lesson, we designed the sprint around a cycle of diagnostic evaluation, formative peer review, and summative reflection.

We also actively gathered feedback from our target audience—other CSSE students—to shape how the game was built and refined. Every bug, fix, and redesign became part of the learning process.

Learning Through Mistakes

The most valuable lessons this week came from bugs in our Solitaire prototype:

  • Bug Logs: For example, early on, our shuffle function returned duplicate cards. We logged the issue in GitHub.
  • Fix Notes: After debugging, we adjusted the array handling so each card appeared exactly once (a sketch of the corrected approach appears after this list).
  • Before/After Comparison:
    • Before: Decks sometimes contained multiple kings or missing cards.
    • After: Proper 52-card deck with consistent rules.
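
To make the fix concrete, here is a minimal sketch of the kind of approach we landed on, assuming a deck of simple rank/suit objects; the names buildDeck and shuffleDeck are illustrative, not our exact code.

```javascript
// Illustrative sketch (not our exact code): build all 52 cards once,
// then shuffle the existing array in place with a Fisher–Yates shuffle,
// so every card still appears exactly once.
const SUITS = ["hearts", "diamonds", "clubs", "spades"];
const RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"];

function buildDeck() {
  const deck = [];
  for (const suit of SUITS) {
    for (const rank of RANKS) {
      deck.push({ rank, suit });
    }
  }
  return deck; // exactly 52 unique cards
}

function shuffleDeck(deck) {
  // Walk from the end of the array, swapping each card with a random
  // position at or before it; no card is copied or dropped.
  for (let i = deck.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [deck[i], deck[j]] = [deck[j], deck[i]];
  }
  return deck;
}
```

The key idea is to rearrange the array we already have rather than picking random cards with replacement, so duplicates can't sneak in.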

Bug Log

This mistake taught me that testing edge cases early saves time later and reinforced why Agile emphasizes “working software” at every stage.
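
As an example of the kind of edge-case check that would have caught this early, here is a small sanity test, assuming the hypothetical buildDeck and shuffleDeck sketched above.

```javascript
// Quick sanity check (illustrative): after shuffling, the deck should
// still contain exactly 52 distinct cards.
function deckIsValid(deck) {
  const ids = new Set(deck.map((card) => `${card.rank}-${card.suit}`));
  return deck.length === 52 && ids.size === 52;
}

console.assert(
  deckIsValid(shuffleDeck(buildDeck())),
  "Deck has duplicate or missing cards"
);
```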

Evaluation Timeline & Process

We structured our sprint evaluations in three phases, following diagnostic, formative, and summative evaluation principles:

🟢 Start of Sprint – Diagnostic Phase

  • Individual Diagnostic: Each team member completed a self-check and shared their coding baseline with the teacher.
  • Purpose: Establish starting point and individual learning goals before beginning Solitaire development.

📋 Evidence: Notes from initial skill assessment + teacher feedback.

🟡 During Sprint – Formative Phase

  • Peer Review: We interviewed CSSE students to gather feedback about our lesson.
  • Purpose: Continuous improvement, catching mistakes early, and collaborative learning.
  • Example: Many students requested a “How-To-Play” popup, which we are working on incorporating into our game (see the sketch below).

Peer Feedback
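
As a rough sketch of what that popup could look like, assuming a page with a button and a built-in <dialog> element; the ids are placeholders, not our final markup.

```javascript
// Illustrative "How-To-Play" popup using the built-in <dialog> element.
// The ids below are placeholders; the page would need matching
// <button id="how-to-play-btn"> and <dialog id="how-to-play-dialog"> elements.
const helpButton = document.getElementById("how-to-play-btn");
const helpDialog = document.getElementById("how-to-play-dialog");

helpButton.addEventListener("click", () => helpDialog.showModal());

helpDialog.addEventListener("click", (event) => {
  // Clicking the backdrop (outside the dialog's content) closes it.
  if (event.target === helpDialog) helpDialog.close();
});
```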

🔴 End of Sprint – Summative Phase

  • Self-Assessment: I wrote a reflection comparing my starting point to my progress.
  • Teacher Evaluation: Oral discussion with the teacher about strengths (debugging persistence) and areas to grow (efficiency in OOP).
  • Documentation: All bug logs, peer review notes, and before/after code were captured here as baseline evidence for the next project.

Reflection

The mistake that taught me the most was not explaining clearly enough how the game is played in our lesson. It forced us to slow down and think through every use case instead of assuming that everyone would immediately understand our game.

Working through diagnostic → formative → summative phases showed me that learning isn’t a straight line. Each evaluation type offered a different lens:

  • Diagnostic made me aware of my baseline.
  • Formative gave me peer insights I wouldn’t have caught alone.
  • Summative forced me to articulate how I grew and what I still need to work on.

Overall, this sprint reinforced that mistakes are checkpoints that make both the game and my coding skills stronger.