Amazon’s Dash buttons have drawn a lot of attention since they were unveiled in the US in 2015. The premise is simple: remove the arduous task of reordering household items by placing small buttons all over your home. One in the bathroom for toothpaste, and one in the kitchen for cat food... another in the kitchen for washing up liquid. Oh, and another for dishwasher tablets...
...Wait a minute. It doesn’t seem very frictionless, does it? Or even convenient, for that matter. Surely consumers would prefer a solution that doesn’t lock them in, and doesn’t need a proliferation of different buttons for all of the different consumer products that a household gets through in a week?
And is ‘easy reordering’ enough of an incentive for consumers to stick branded buttons all over their houses? What’s more, relying on the actual friction of a physical button press to reorder products seems curiously clunky when compared to the ‘no-UI is the ultimate UI’ trends in product and service design.
When an FMCG client asked us recently to participate in a hackathon, challenging teams to make reordering day-to-day products a friction-free, planet-saving experience, we saw an exciting opportunity to investigate and prototype a different approach to the Dash buttons.
Instead of physical buttons, we wanted to explore the ways in which sensors could offer a better way forward - particularly when integrated with loyalty schemes.
The camera’s untapped potential
Pondering how sensors could be used to crack our brief led us to think about cameras.
They are, after all, the most familiar digital sensor. With one or more in every smartphone sold today, it’s hard to imagine the world without the ever-present snapping, Instagramming and filming that these connected cameras have enabled. Their inclusion in smartphones has also driven their price down dramatically, even as quality keeps climbing.
But digital cameras, whether inside your smartphone or otherwise, can be so much more. Emerging computer vision techniques, coupled with advances in machine learning, are making the light sensor one of the most versatile tools in the sensor box.
Computer vision techniques can already help us rapidly detect text, colour, shape and motion in a field of view. With new machine learning APIs, it will be possible to make these sensors more intelligent over time, improving their ability to pick out text, or even particular brands, and to rapidly identify what’s passed in front of the lens.
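To illustrate the motion-detection side of this, here is a minimal frame-differencing sketch using NumPy only. The frame sizes, threshold and the fraction-of-changed-pixels heuristic are all arbitrary assumptions for illustration, not the technique behind any particular product:

```python
import numpy as np

def motion_mask(prev_frame: np.ndarray, frame: np.ndarray, threshold: int = 25) -> np.ndarray:
    """Boolean mask of pixels whose greyscale intensity changed by more
    than `threshold` between two consecutive frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def motion_detected(prev_frame, frame, threshold=25, min_changed_fraction=0.01):
    """Flag motion when more than `min_changed_fraction` of pixels changed."""
    return motion_mask(prev_frame, frame, threshold).mean() > min_changed_fraction

# Two synthetic 64x64 greyscale frames: a static background, then the same
# scene with a bright square (our "product") entering the field of view.
background = np.zeros((64, 64), dtype=np.uint8)
with_object = background.copy()
with_object[20:40, 20:40] = 200

print(motion_detected(background, background))   # False: nothing moved
print(motion_detected(background, with_object))  # True: object appeared
```

In a real pipeline this kind of cheap motion gate would simply decide when to wake the heavier classification step, rather than do any recognition itself.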
With these thoughts in the back of our minds, we set out to explore and test the ways in which this most familiar of sensors could help us crack our brief.
The idea was simple on paper but difficult to execute: if we could train a light sensor to automatically detect which products passed by it, we could build a log of all the packaging that a household recycled.
The data captured by the sensor would provide a goldmine of valuable information for retailers and manufacturers. A customer indicating that a product has been exhausted could not only prompt a quick and easy reordering system, driving sales, but it could also provide a way to really fine-tune packaging volumes and sizes, reducing waste and cost in the manufacturing process. Similarly, for new product variants, it’s a great opportunity to capture consumer feedback about what they liked, or disliked, about the product.
For consumers, instead of branded push buttons all over the house, the process of reordering everyday goods becomes ‘invisible’ and discreet. To encourage the recycling itself, integrating the experience with a loyalty scheme would let retailers reward customers with points for every item recycled, while tracking the total landfill space saved.
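The loyalty-scheme accounting described above could be as simple as the following sketch. The product names, point values and packaging volumes are made-up assumptions, not figures from any real scheme:

```python
# Illustrative loyalty accounting: points per recycled item, plus a running
# total of packaging volume diverted from landfill. All values are invented.
PRODUCT_INFO = {
    "toothpaste_box": {"points": 5, "volume_ml": 120},
    "catfood_pouch": {"points": 3, "volume_ml": 90},
}

def apply_recycling_reward(account: dict, item: str) -> dict:
    """Return an updated account after one recognised item is recycled;
    unrecognised items leave the account unchanged."""
    info = PRODUCT_INFO.get(item)
    if info is None:
        return account
    return {
        "points": account["points"] + info["points"],
        "landfill_saved_ml": account["landfill_saved_ml"] + info["volume_ml"],
    }

account = {"points": 0, "landfill_saved_ml": 0}
for item in ["toothpaste_box", "catfood_pouch", "toothpaste_box"]:
    account = apply_recycling_reward(account, item)
print(account)  # {'points': 13, 'landfill_saved_ml': 330}
```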
Saving the earth, saving money and saving time reordering everyday goods seems like a much more well-rounded proposition than just the push button reordering of a single, specific product.
Given the time constraints, we didn’t focus on making something pretty. Instead, our attention centred on training the light sensor to pick out packaging that ‘fell’ in front of it, logging the products as it did so. This, in turn, triggered a push notification relating to what had been thrown into our makeshift recycling bin.
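The detect-log-notify loop can be sketched roughly as below. The classifier is stubbed out with a lookup table, and `classify_packaging`, the product labels and the notification hook are all illustrative assumptions, not our actual hackathon implementation:

```python
from datetime import datetime, timezone

def classify_packaging(frame):
    """Stand-in for a trained image classifier: maps a captured frame to a
    product label, or None if nothing recognisable passed the lens.
    (Illustrative stub; 'frames' here are just strings.)"""
    known = {"toothpaste_box": "Toothpaste 100ml", "catfood_pouch": "Cat food pouch"}
    return known.get(frame)

class RecyclingLog:
    """Records each recognised item and fires a push-notification callback."""

    def __init__(self, notify):
        self.entries = []
        self.notify = notify

    def item_dropped(self, frame):
        product = classify_packaging(frame)
        if product is None:
            return None  # unrecognised packaging: ignore silently
        entry = {"product": product, "at": datetime.now(timezone.utc).isoformat()}
        self.entries.append(entry)
        self.notify(f"Recycled: {product}. Reorder?")
        return entry

notifications = []
log = RecyclingLog(notify=notifications.append)
log.item_dropped("toothpaste_box")
log.item_dropped("mystery_wrapper")  # not recognised: no log entry, no notification
print(notifications)  # ['Recycled: Toothpaste 100ml. Reorder?']
```

In a production version the callback would hand off to a real push-notification service and the log would feed the retailer’s ordering flow.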
Working with our fellow hackathon team members, who joined us from Superdrug, we also explored and demonstrated how this could be integrated into a mobile ordering flow for the retailer.
Technical investigations in this area are an ongoing part of our R&D efforts as we look to refine this proposition and make it more accurate. It is clear that we are just beginning to uncover the opportunities that can come with harnessing these connected light sensors.
Our hackathon team came back to TAB HQ with even more ideas about the potential of using light sensors more in our everyday, connected world - so stay tuned for more updates in the future.