Skill Badges Design Case Study

Problem

Our team leader, Emily Lutrick, wanted to give educators a way to show that they had invested in developing specific educator skills, both for their own sense of accomplishment and to demonstrate to leadership their commitment to continuous professional growth.

Discussion

Once Emily had laid out the goal, the team began discussing how to achieve it, each of us offering a perspective from our own area of expertise: Emily and Dani as classroom educators, Christine as a developer of materials for adult learners, Jessica on technical considerations, and me on design. Our discussion centered on these questions:

  1. At what level should we evaluate educator knowledge — groups of skills or individual skills?
  2. When do we prompt educators to claim a badge?
  3. Do we build our own assessment builder or use a third-party tool? If we build our own, what types of assessment items can we offer? Who will author the assessments?
  4. How many items are required to sufficiently gauge the educator's depth of knowledge?
  5. Can educators attempt assessments a limited number of times or until they reach the threshold? What is the threshold?
  6. Who can see what badges an educator has claimed? Are badges private by default? How can educators share their badges?
  7. Does each badge have a unique visual representation?

First Iteration

A first pass at the design assumed that we would assess groups of skills and that badges would be shown on the map. Download the v1 Badges document (1.2 MB PDF).

Solution

Later that year, we returned to the Badges feature. In the intervening months, we'd made changes elsewhere in the product that necessitated redesigning aspects of it. For example, based on a combination of Google Analytics data and reports from trainers in the field, we had reworked how learning resources were displayed so the interface was easier to use on small laptop screens. This meant a Badges view for the map no longer made sense; instead, the badge assessment prompt would appear in the skill details view. Because assessment creation and completion had been conceived as separate areas of the interface, they remained largely as first designed.

PLM Skill Badges preview
The guide used to communicate the user flow logic and functionality of the skill badges feature. Download the Skill Badges document (4.5 MB PDF).

We settled the questions from the discussion as follows:
  1. To keep assessments short and emphasize the educator's momentum, we chose to assess individual skills. This also fit better with our focus on individual skill learning and the supporting resources.
  2. Once the educator had marked all of a skill's required resources as done, we would prompt them to claim that skill's badge (this claim-and-pass logic is sketched in code after this list).
  3. Not seeing a good assessment builder available that would integrate with our development platform, we chose to build our own. The professional development designers who authored many of our educator materials would use this internal tool to craft short assessments from a few different item types, assessing to a level 2 depth of knowledge.
  4. Christine determined that 10 items could sufficiently assess knowledge of a skill.
  5. We chose to allow the educator to attempt the assessment as many times as needed to reach an 80% threshold. Locking the assessment or imposing other requirements before additional attempts seemed punitive and fairly arbitrary.
  6. Keeping with our focus on giving individual educators control, badges would remain private unless the educator chose to share their map with leaders of their choice. Educators could also share a link to their badges page and print individual badges for their records or inclusion in their yearly portfolio.
  7. While individual badge graphics would have made for a nice demo, given the sheer number of skills (over 40 on several learning maps) and the fact that I was the sole designer by this point, they did not seem like a good use of time and energy.
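To make the flow logic in answers 2 and 5 concrete, here is a minimal TypeScript sketch of the claim-and-pass rules. The type and function names (Resource, AssessmentAttempt, shouldPromptBadgeClaim, attemptPasses) are illustrative assumptions rather than the shipped code; only the rules they encode, prompting once all required resources are marked done, an 80% passing threshold, and unlimited attempts, come from the decisions above.

```typescript
// Hypothetical data shapes for illustration; the real data model is not
// shown in this case study.
interface Resource {
  id: string;
  required: boolean;
  markedDone: boolean;
}

interface AssessmentAttempt {
  correctCount: number;
  itemCount: number; // each skill assessment has 10 items
}

// Educators pass at 80% or better; attempts are unlimited, so a failed
// attempt never locks the assessment.
const PASS_THRESHOLD = 0.8;

// Prompt the educator to claim a skill badge only once every required
// resource for that skill has been marked done.
function shouldPromptBadgeClaim(resources: Resource[]): boolean {
  return resources
    .filter((resource) => resource.required)
    .every((resource) => resource.markedDone);
}

// A single attempt passes when it meets or exceeds the threshold.
function attemptPasses(attempt: AssessmentAttempt): boolean {
  return attempt.correctCount / attempt.itemCount >= PASS_THRESHOLD;
}
```

Keeping the pass rule behind one threshold constant and one predicate reflects the decision to treat retakes as unrestricted: there is no attempt counter or lockout state to track.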

Implementation

By this point, Jessica (our Technical Lead) and I had a solid process for moving a design into reality. Working from my guide to the feature's user flow and interface, Jessica began writing code. As she ran into questions, I incorporated clarifications to the flows, interface, and interactions into updated versions of the guide and shared them with the whole team to keep everyone synchronized. Once Jessica had largely completed the code, I double-checked that the feature worked as designed and took a pass through the front-end code to polish the markup and styling.