In anticipation of the launch of our new game, the studio team recently began to design and implement user tests with the help of over 20 children at a local school in North London. It was the first in a series of tests that we're running in partnership with the school, and we're keenly looking forward to gaining even more insight from our upcoming sessions.
Before getting into the trenches, we did our fair share of homework: trawling the blog posts of leading studios specialising in children’s media for any tidbit of knowledge we could harvest; and then perusing academic literature on how best to run co-design and testing sessions with children. Simply put, planning and implementing our first session was incredibly educational and rewarding.
Boy, did we learn a lot! In fact, after the last child had hurried out the door, the entire team exchanged nervous glances before collective laughter broke the silence. “That was intense,” I distinctly remember one of us saying afterward. In this post, our aim is to share the lessons we’ve learned in user testing with children – and at the end, link to some of the resources we consulted before we designed our tests.
Our overall process for preparing a user test held at a school (aside from obtaining informed consent from all parties, complying with data-protection laws and handling other administrative matters) was the following:
First, we created a test script. This was a document that laid out the question we wanted to answer about our game during the user test, what we were testing and how each of us would run the test with our assigned group of children. Second, we deployed builds to our available tablets, and each facilitator took one variation of a navigational feature we were looking to test (similar to A/B testing). Each facilitator was assigned 2-3 children at a time, each of whom had a tablet. We all followed the test script and took notes – a part of the process we're looking to streamline as we continue. Third, we compared notes during a team debrief. Here, we shared our findings and discussed what adjustments we should make to the feature next time.
We're testing across a rather broad range of children aged 3 to 8. Generally, the recommendation is to split this group into two smaller cohorts – children aged 3 to 5, and children aged 6 to 8 – as the two ranges have considerably different sets of developmental abilities.
In creating our “test script” – a document which outlined the general process that each team member would follow when assessing a certain user interface's usability – we decided to mix these groups and carefully observe the differences in levels of attention, engagement, ability to take and act on instructions, and ease of understanding the interface.
Predictably, the younger group (ages 3 to 5) was less able to engage with an interface for a prolonged period of time – especially when parts of the game were rendered inaccessible, since our first user test was confined to determining the viability of only a few smaller features. The older group was more likely to stick with the test script for longer.
Lesson: Decide whether you want to split into two cohorts to account for developmental differences.
Children as young as 3 years old were well-versed in device controls and had an almost instinctive understanding of user interfaces. We ended up testing far more than we had initially planned – because children were able to manipulate our game's UI (which is what we were testing) with such fluency and ease that we had extra time to determine the viability of other features.
So why is this important? Does it mean that we don’t really need to test our UI if children have such an innate understanding of them anyway? On the contrary – we learned that if children couldn’t come to grips with a feature we thought was new or interesting, then something we’d designed wasn’t seamless and intuitive enough. If they were so naturally good at knowing how to manipulate smartphones and iPads – and they didn’t understand how to find a help button or move a draggable element – then we’d clearly done something that needed improvement.
Lesson: When you’re testing, try not to be too married to cool new design elements or features that you think make the app. If children – who are pretty well-versed in using apps – don’t get it, get back into the workshop to rethink your approach.
Our team was very specific about the feature that we intended to test. In fact, our build contained four toggles that allowed our facilitators to easily turn specific elements on and off, so that we could refine the user experience based on our hypotheses in real-time. In practice, this was a variation of A/B testing, where over the course of the session we could determine how well children responded to one element or some combination of elements.
Try selecting 1-2 UI elements (e.g. navigation or a type of button) to test; brainstorm variations alongside your design and art team; and narrow the options down to four or fewer. Ensure your build allows you to move between these, and then either divide the variations among your facilitators to test, or have everyone test all of them over the course of the session.
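To make the toggle idea concrete, here's a minimal sketch in Python of how a build might expose up to four switchable UI variants. The variant names, fields and `TestBuild` class are our own illustration, not Brightlobe's actual build configuration:

```python
# Hypothetical sketch: a build with up to four toggleable UI variants,
# so a facilitator can switch between them mid-session (A/B-style).

VARIANTS = {
    "A": {"nav_style": "icon_bar", "help_button": True},
    "B": {"nav_style": "icon_bar", "help_button": False},
    "C": {"nav_style": "swipe",    "help_button": True},
    "D": {"nav_style": "swipe",    "help_button": False},
}

class TestBuild:
    """Holds the currently active variant; facilitators can flip it live."""

    def __init__(self, variant="A"):
        self.set_variant(variant)

    def set_variant(self, name):
        if name not in VARIANTS:
            raise ValueError(f"Unknown variant {name!r}")
        self.variant = name
        self.config = VARIANTS[name]

# A facilitator assigned variant "C" would start their session like this:
build = TestBuild("C")
print(build.config["nav_style"])  # swipe
```

Keeping every variant in one build, behind a single switch, means each facilitator runs the same software and only the element under test differs – which is what keeps the observations comparable afterwards.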
Lesson: Select one feature or aspect of the app you want to test. Focus on a single question that you would ideally like answered by the end of the session. Make sure that your test script will assist you in answering it, and configure the builds in your testing release process to make switching between variations seamless.
When working with children – who all have differing levels of ability and attention spans – it’s sometimes difficult to stick to a “script.” Let your test script be a guide, but also make sure your facilitators don’t veer too far off script (e.g. letting children explore parts of the game that they shouldn’t access or play). This is to ensure that the data your team is generating to answer the question is directly comparable – and doesn’t get influenced by other types of interaction and activity.
During our first session, our facilitators used a variety of methods to note how children were interacting with our app. One of our facilitators created a chart with a simple checklist and tabulated how each child was interacting with each navigational element, writing additional observations in the margins over the course of the session. During our team debrief afterward, it was clear that this approach was probably the best way of jotting down important takeaways.
Lesson: For the next test script, we've provided facilitators with a checklist that will easily allow them to tabulate how well each child interacted with an individual feature. This enables us to quickly see whether an approach worked. In addition, room for additional notes is provided.
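As a rough sketch of what that tabulation looks like once the checklists come back, the snippet below tallies per-feature success rates in Python. The observation fields and feature names here are illustrative, not the actual sheet we used:

```python
from collections import Counter

# Hypothetical checklist rows: one entry per child per feature,
# recording whether the child completed the interaction unaided.
observations = [
    {"child": 1, "feature": "help_button", "succeeded": True},
    {"child": 2, "feature": "help_button", "succeeded": False},
    {"child": 3, "feature": "help_button", "succeeded": True},
    {"child": 1, "feature": "drag_element", "succeeded": True},
    {"child": 2, "feature": "drag_element", "succeeded": True},
]

# Tally successes per feature so the team can see at a glance
# which approach worked and which didn't.
successes = Counter()
totals = Counter()
for row in observations:
    totals[row["feature"]] += 1
    if row["succeeded"]:
        successes[row["feature"]] += 1

for feature in totals:
    print(f"{feature}: {successes[feature]}/{totals[feature]} children succeeded")
```

Even a tiny summary like this makes the debrief faster: the team compares counts rather than deciphering each facilitator's free-form notes.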
User testing with children, especially when they’re young, is an unpredictable experience. That’s a good thing! While we went in armed with our test scripts, prepared with newly-installed builds and crayons for good measure, some of our greatest insights came from what we didn’t expect to hear.
Our cohort had a lot to say. Why was this character not animated? Shouldn’t they talk when I tap on them? Had we considered setting part of the story in a different scenario? Why couldn’t they access the games yet? Be open to new ideas and learn to see your creation through a child’s eyes.
Make sure you're in compliance with safeguarding laws
Adults working with children must obtain the appropriate background clearance. The specific obligations differ across countries, obviously – but in any case, it's incredibly important to check beforehand what your studio might need to do (e.g. enhanced background checks) before moving further into testing.
Ensure that teachers or teaching assistants are on-hand to help if you're testing at a school
Assuming you're working alongside a school, it's always a good idea to have full buy-in from teachers, teaching assistants and administrators while testing is performed. They've known the children far longer than you have, and when things become unpredictable, they'll be your best bet for continuing the testing without too many interruptions.
Always have a back-up activity on-hand if children are having an off-day
We devised a co-design activity to augment our user testing activity. Our game is in production as we continue to test characters, narratives and functionality – and sometimes, providing something off-script and away from the tablet can ensure that children continue to meaningfully participate. Let's face it: sometimes testing UI or a navigation element isn't the most interesting task, so have some markers, crayons and colored paper on-hand if you need it!
We're building a brighter future for our kids, one line of code at a time.
Brightlobe specialises in game-based digital diagnostic tools for neurodevelopmental conditions, like autism and ADHD. We empower families and children by reducing time to diagnosis and cost, and provide clinicians with precision insight for greater diagnostic accuracy.