Usability testing: top survival tips from the BP forecourt

By The App Business under Insights 05 April 2016

Many of us who are responsible for creating, managing and maintaining digital products have already accepted that it is a good idea to test any product with its end users before releasing it to market.

Doing so represents a golden opportunity to answer some burning questions. Do users like the idea of my product? Would users find my product useful? How easy is my product to use? Will my product achieve my business objectives?

If run well, user testing can provide you with the crucial answers to these questions, and serve up insights that will allow you to make your product the best it can be before it goes live.

However, if run badly, user testing can severely hinder the success of your product - providing you and your business with a set of false conclusions and bad data that actively prevent you from getting to the right answers.

When usability testing goes wrong

When it comes to product design, it is easy to rely on assumptions. A lot of businesses still assume they know what their users want, but often this is based on their own company experience and industry knowledge - not validated user insight. That experience and knowledge are invaluable, but they don’t replace the need to really understand the outcomes users are trying to achieve.

Additionally, customer expectations of what makes a ‘good’ product or service are constantly in flux. This is doubly true for products in the digital space, where new technology and new behaviours emerge faster than most businesses can keep up. In turn, this can make it easy for decision-makers to lose sight of what is most important to customers. A pre-existing set of customer insights can be rendered out of date very quickly - leaving decision-makers ill-equipped to design the right products for their customers.

Luckily, many organisations do recognise that their own assumptions need to be challenged. They often do so by using a combination of customer surveys, focus groups and controlled user testing to validate whether their product resonates with the end user. And herein lies another challenge.

Not everyone is an Oscar winner

The primary problem with controlled and online user testing is that both create too much distance between the business and the problem it is trying to solve.

During a controlled, in-house usability testing session, a user will typically be brought into a market research viewing room, given an app to play with, asked to perform several key tasks - and then asked how they felt about the whole experience.

The first problem here is that the user is often asked to use too much of their imagination. Most of your users are not actors, let alone Oscar winners, and find it difficult to visualise the situation where they might actually use the app in ‘real life’. For most of them, the mental energy spent on imagining hypothetical scenarios causes them to tense up and behave in an unnatural way.

We saw this recently ourselves when testing a prototype for BP. We wanted to find out whether BP customers found paying for fuel quicker and easier when using the prototype we had provided. The issue was that we were asking our users to imagine that they were at a forecourt - when they were actually just sitting on a sofa in our King’s Cross HQ. As a result, a few participants did not understand the task given to them and got lost during the testing process. One such participant became visibly angry when asked if they found the app useful, telling us that the concept seemed like a “massive waste of time”. Clearly, the abstraction was a step too far.

However, when we tested the same prototype with BP customers on an actual forecourt, the responses were overwhelmingly positive - with all of the users stating that they thought our application would make the experience faster and easier. That is not to say our prototype was perfect (things can always be improved), but by testing in a ‘real-life’ scenario our users were not distracted by the limits of their own imagination and were able to focus on what really mattered - the product.

You don’t know what you don’t know

The second big limitation of in-house usability testing is that, by observing the user in such a controlled environment, you ignore the hundreds of small environmental factors that might affect how they actually use your product.

During our visits to BP forecourts, for example, we noticed that it took a long time for elderly and disabled people to leave their cars and embark on the long walk into the shop. Up until that moment, we had presumed that the app’s target audience would be the young and digitally savvy market with no time on their hands. But this observation revealed the limitation of our assumption. If we had not been observing BP customers ‘in the wild’, we would not have recognised the full potential of our application.

Principles for successful in-the-wild testing

Each ‘in-the-wild’ testing session will have its own unique set of challenges to overcome. However, here are three key principles from our BP testing sessions that will stand you in good stead to get the most accurate results.

1. Fake it to make it

Always try your best to recreate the real-life context in which your product will be used, even if you have to fake the missing pieces. During our BP prototype sessions, the supporting technology was not yet in place, making it impossible to create a realistic experience out of the box. To get around this, we used a secondary mobile device as a Bluetooth beacon to trigger actions on the user’s device, so that our users thought their interaction was real. The more your users believe they are playing with the real thing, the better.
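
To make that concrete, here is a minimal sketch of how the listening side of a set-up like this could look on iOS, assuming an iBeacon-style arrangement where a second phone broadcasts a known UUID and the prototype reacts when it comes into range. The UUID, the region identifier and triggerPaymentFlow() are hypothetical placeholders rather than the code from our sessions.

```swift
import CoreLocation

// A sketch of the user-device side: listen for a researcher-controlled
// beacon and fire a prototype action when it appears. All identifiers
// below are illustrative placeholders.
final class ForecourtBeaconListener: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()
    private let beaconRegion = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!,
        identifier: "prototype-forecourt-beacon"
    )

    func start() {
        locationManager.delegate = self
        locationManager.requestWhenInUseAuthorization()
        // Ranging works while the app is in the foreground, which is all
        // a moderated test session needs.
        locationManager.startRangingBeacons(in: beaconRegion)
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        // Only react when the researcher's phone is genuinely close by.
        if beacons.contains(where: { $0.proximity == .immediate || $0.proximity == .near }) {
            triggerPaymentFlow()
        }
    }

    private func triggerPaymentFlow() {
        // In the prototype this would present the 'pay at pump' journey.
        print("Beacon detected - presenting payment flow")
    }
}
```

The broadcasting side can be as simple as a spare phone running a beacon-simulator app; what matters is that, from the participant’s point of view, their device appears to respond to the forecourt on its own.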

2. Always keep a record

Most of the time you will be too busy watching your users to write down your best observations. Keeping track of what happens in each session is always valuable in case you have missed anything crucial. A great piece of software that we used with our prototype was a third-party SDK called ‘Lookback’. This recorded a video of the user’s device screen as well as their facial expressions, allowing us to relive every moment of their experience.
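
If a dedicated research SDK is not an option, a rough stand-in can be built with Apple’s ReplayKit, sketched below on the assumption that capturing the device screen and the participant thinking aloud over the microphone is enough for your review sessions. This is an illustration of the same idea, not Lookback’s API.

```swift
import ReplayKit
import UIKit

// A rough stand-in for a dedicated session-recording SDK: capture the
// device screen and microphone with ReplayKit so the session can be
// replayed later. This is an illustrative sketch, not Lookback's API.
func startSessionRecording() {
    let recorder = RPScreenRecorder.shared()
    recorder.isMicrophoneEnabled = true // keep the participant's commentary

    recorder.startRecording { error in
        if let error = error {
            print("Could not start recording: \(error.localizedDescription)")
        }
    }
}

func stopSessionRecording(presentingFrom viewController: UIViewController) {
    RPScreenRecorder.shared().stopRecording { previewController, error in
        // ReplayKit hands back a preview controller from which the clip
        // can be saved or shared for later review.
        if let preview = previewController {
            viewController.present(preview, animated: true)
        }
    }
}
```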

3. Silence is golden

It is always best to step back and resist the temptation to get involved once the testing has begun, in case you influence the results - so hold your silence as long as possible. It is often when a user gets lost or confused during the test that you get the best insights into how to improve your product.

To wrap up

Ultimately, we test products with users to see if they work. And simply put, there is no better place to test products than where they will actually be used. As a researcher or tester in the wild, your job is to get out of the way and let users tell you - directly and clearly - whether your product works.

---

If you are interested in learning more about our testing process, get in touch with us here.