UX testing in the lab is great, but will only get you so far
Don’t get me wrong. Lab-based UX testing is, and should remain, a core component of any customer-focussed business’s arsenal. But, like any research methodology, it has limitations that sometimes call for other approaches.
Lab-based UX testing is brilliant for getting reactions to new products and features, optimising how successfully users complete key user journeys, and working out whether their attention goes where you expect (or hope) it will – especially when you can add eye-tracking into the mix. When observed by stakeholders, it’s also an extremely effective way of increasing engagement with the customer and hammering home where any issues lie in a product’s design.
The Importance of Context
The main issue with lab-based UX testing is that it is, of course, constrained by the four walls that contain it. For existing products, this isn’t always an issue – users’ experience of using the product means that a lot of the outside context can be brought inside those four walls.
But when you are dealing with new products, particularly technologies that aim to disrupt behaviour, lab-based testing forces users to imagine the context they might be using it in. Users’ ability to do this can be variable and it’s difficult to know how much of this imagined context and any claimed intentions can be relied upon, especially if we’re dealing with complex products and services fitting into complex lives.
Someone might be able to run through a user journey in a nice, comfy lab with a moderator providing some additional prompts, but can they do it on their own on a wet and windy Tuesday night in Stoke?
Rise of the MVP
The logical step, therefore, is to break out of those four walls and fast-track getting your products into the real world, so you can learn how they’ll actually be used. This approach has surged in popularity with the movement towards lean and agile: product teams focus development on building Minimum Viable Products (MVPs) to start capturing user data as early as possible, so they can understand their early users’ behaviours and then evolve the product accordingly.
This yields a lot of rich data telling you what your customers are doing, but doesn’t necessarily tell you why they are behaving in a certain way.
Over the last few years, we’ve seen a real increase in demand for understanding the why. Data on its own is not enough – it only tells half the story.
The behavioural data may give you some insight into user interactions, but is unlikely to give you the broader context to understand specifically what encouraged or prevented users from completing key actions.
- What goal or underlying need were users trying to satisfy when they used the product?
- What environmental or habitual triggers caused users to engage with the product?
- What’s preventing users from engaging with your product?
- Did the experience of using the product live up to expectations?
- Did the product have the desired impact on the user’s broader behaviour?
- What aspect of the experience gave users a reason to use the product again?
- What aspect of the experience deterred users from using the product again?
- When using the product in-context, do other needs emerge which the product could satisfy (better) in future?
Getting early answers to these sorts of questions is incredibly important – especially for high-profile product launches. Although many people will argue that it is best to launch a new product as fast as possible to learn quickly, it is always worth weighing the benefits of this approach against the risk of negative impacts on the product or brand:
- What impact will negative reviews / word of mouth have on future product sales?
- What impact will a negative experience have on the broader brand?
- How quickly and easily can any issues or sub-optimal executions be identified and fixed once the product has launched?
- What might the potential cost be of resolving any customer complaints?
- How will a troubled launch impact future investment in your product?
Recently, we were researching a new app that a household name had launched and heavily advertised around six months earlier. One user we asked to download the app said he thought it must be fake, given its 2-star rating in the app store! There is an expectation that big brands will get it right from the start.
Piloting new products ‘in the wild’
To help manage these risks, it is vital to run a pilot launch of your new product (e.g. at the MVP or alpha/beta stage) with a limited number of users. We’ve used a range of methods, often in combination, to gather feedback from first users: surveys, online discussions, tele-depth interviews, guerrilla interviews, and feedback via WhatsApp. This approach is stronger still when combined with behavioural and customer-support data, giving a holistic view of users’ behaviours, motivations, triggers, barriers and experiences.
From my experience, the key benefits of testing your products ‘in the wild’ include:
- Real-world context: understand usage, motivators and barriers in a highly valid, real-world environment – uncovering insight you wouldn’t find when testing in a lab.
- Get launch ready: test whether the experience lives up to the product’s ‘promise’ and surface any issues you hadn’t expected, so you can head them off before they bite.
- Protect the broader brand: ensure the experience of using the product reflects positively on the brand.
- Plan for the future: pre-launch feedback helps you identify which improvements and potential new features matter most to users, based on real (not imagined) usage.
‘In the Wild’ testing works for us, as part of a broader array of approaches. I would be keen to hear your views on what approaches, methods and tools you’ve found most effective when developing and launching new products, and how they’ve helped your business. Please let us know your thoughts in the comments below.