Learning Science & Insights

Learning design and iterative refinement

We iterate with students to evolve and refine products that encourage improvement and growth

Learning design begins with prototypes. This study plan prototype was initially piloted with 1,200 students to learn what they liked and disliked about the workflow and design, and how they preferred to use the study plan. The pilot was followed by deep, iterative research with a number of students, an extensive student survey, and detailed data mining of the nuanced behaviors of the 1,200 pilot students, all of which informed the evolution of the design.

Seven iterations later, research revealed that most students preferred to use the diagnostic as a “gut check” and an “opportunity to grow.” However, seeing performance data presented as a score of any kind (a colored bar, a percentage, or a number) provoked a variety of anxieties: some cohorts felt their ability was predetermined and could not improve, others fixated on the score itself, and still others experienced test-taking anxiety. Moreover, sophisticated learning analytics revealed that raw score gains were not the best way to measure growth or mastery.
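
To make that last point concrete: a student who starts near the ceiling can only post a small raw gain, however much they actually grow. The source does not name the metric that replaced raw gains, so the following is a minimal sketch assuming a Hake-style normalized gain, which measures the fraction of remaining headroom a student closed:

```python
def raw_gain(pre: float, post: float) -> float:
    """Raw score gain on a 0-100 scale."""
    return post - pre

def normalized_gain(pre: float, post: float) -> float:
    """Hake-style normalized gain: the fraction of the *available*
    headroom the student closed between pre- and post-test."""
    if pre >= 100:
        return 0.0  # already at ceiling; no gap left to close
    return (post - pre) / (100 - pre)

# Raw gain makes the second student look stagnant (5 vs. 20 points),
# yet both students closed exactly half of their remaining gap.
print(raw_gain(60, 80), normalized_gain(60, 80))  # 20, 0.5
print(raw_gain(90, 95), normalized_gain(90, 95))  # 5, 0.5
```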

The evolved design helped students get “into a learning mode”: they understood that they were closing gaps in their knowledge without feeling tested or pressured, which in turn increased their self-regulation and self-efficacy. The language used in the design reinforced the appropriate use and emotional response. Overall, the design drives a growth mindset.


We iterate with instructors to evolve and refine products that improve their efficiency and enhance their touch and reach

For this writing tool, we began with in-depth user research with 11 leading writing instructors. This was complemented by deep analysis of the writing assignments of more than 16,000 students, along with the detailed grading and written feedback that instructors provided in more than 700 classes.

From this analysis we identified classes of instructors with distinct teaching and scoring strategies. With those personas identified, four further iterations, spanning more than 20 rounds of testing with instructors, enabled us to optimize how scoring rubrics, learning objectives, and feedback tools were presented, improving instructors’ efficiency and enhancing the feedback they give to individual students.
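
The source does not say how these instructor classes were derived; one plausible sketch is unsupervised clustering over grading-behavior features distilled from the assignment and feedback data (the feature choices below are illustrative, not the product’s):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-instructor features: mean score given, score
# standard deviation, mean feedback length (words), and fraction
# of rubric criteria used per assignment.
instructor_features = np.array([
    [78.0, 6.2, 120.0, 0.90],
    [85.0, 3.1,  15.0, 0.40],
    [70.0, 9.8, 200.0, 0.95],
    [88.0, 2.4,  30.0, 0.35],
    [74.0, 7.5, 150.0, 0.85],
    [83.0, 4.0,  25.0, 0.50],
])

# Standardize so no single feature dominates the distance metric,
# then group instructors into a small number of candidate personas.
X = StandardScaler().fit_transform(instructor_features)
personas = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```

Each cluster is only a candidate persona; rounds of testing like those described above are what validate it against how instructors actually teach and score.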

In a later iteration, we found that ‘nudging’ instructors, by presenting all scoring and rubric options with equal weight, led them to give more targeted and specific feedback.
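
As an illustration of ‘presenting options equally’, one might render every rubric level with no preselected default and identical visual styling, so that no score reads as the expected choice. A hypothetical sketch (the field names and rubric levels are ours, not the product’s):

```python
def present_rubric_equally(levels: list[str]) -> list[dict]:
    """Display specs for rubric levels with no visual bias:
    no default selection and identical styling, so the
    instructor must make an active, deliberate choice."""
    return [
        {"label": level, "preselected": False, "style": "neutral"}
        for level in levels
    ]

options = present_rubric_equally(
    ["Exemplary", "Proficient", "Developing", "Beginning"]
)
```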