The Higher Lower Game has been a great success across web and app platforms, played over half a billion times in all corners of the globe.
With the release of the Amazon Echo, we saw an opportunity to explore a new context for the game. It set us up with a fascinating design challenge: how do you make a game that relies heavily on visual cues using only voice?
Interestingly, the game started out 'verbally' across the desks at Code: 'Who gets searched for more, Kanye or Beyonce?'. At the time, turning that into a screen-based game seemed trivial. It turned out to be a challenge in itself. We aimed to get the interface communicating the question and the game mechanic as clearly and quickly as possible. After a lot of iteration and testing, the game could be played without any instruction.
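The core mechanic described above can be sketched in a few lines. This is purely illustrative, not the game's actual implementation: the function name and the search-volume figures are hypothetical, invented for the example.

```python
# Minimal sketch of one Higher Lower round (hypothetical, not the real code).
# Each term is a (name, monthly_searches) pair; the player guesses whether the
# challenger is searched for more ('higher') or less ('lower') than the current term.

def play_round(current, challenger, guess):
    """Return True if the player's guess about the challenger is correct."""
    actually_higher = challenger[1] >= current[1]
    return guess == ("higher" if actually_higher else "lower")

# Sample figures, invented for illustration only.
kanye = ("Kanye", 5_000_000)
beyonce = ("Beyonce", 7_000_000)
print(play_round(kanye, beyonce, "higher"))  # True with these sample numbers
```

On screen, the two names and the higher/lower choice carry this whole loop visually; the design challenge was conveying the same state purely through speech.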
The game UI could be understood immediately, which was vital in making the game accessible to a broad spectrum of people. It also heavily informed how we communicated the game via Alexa: we knew we had a strong foundation for presenting the questions, but we had yet to find out how that would translate to a voice-based interaction.
The question format seems to work well, but here is a list of all the challenges we faced:
We've got a lot of ideas for improving the game, such as bringing in custom sound clips when you pass a certain score threshold, or party modes where you can set up teams. We're also playing around with a 'confer' mode, where Alexa won't listen while you discuss the question. Here's a short video of it in action:
Visit the Skills section in the Alexa app to play The Higher Lower Game.