My first non-visual interface project. This is an Alexa Skill that tells a horror story, letting the user choose which character to follow.
To comply with my non-disclosure agreement, I have omitted and obfuscated confidential information in this case study. The information in this case study is my own and does not necessarily reflect the views of Wizeline or Coca Cola México.
This was one of my most challenging projects because it was the first time I had worked on a non-visual interface. The client's requirement was to build an Amazon Alexa skill that would narrate a storyline and, at certain points, let the user choose which path of the story to follow.
The client already had all the production assets and the script defined, so it was up to me to gather the available information and define what was needed to start building this voice app.
After a kick-off call with the client, I identified opportunities for a discovery phase, with the goal of gathering useful information about the business goals and the potential users before starting to build the product.
With the requirements gathered from the clients, I defined the following process:
The stakeholder interviews allowed me to identify the stakeholders' business goals, but I noticed gaps on the users' side. Since I couldn't get access to potential users to interview, I conducted a remote Proto-Personas workshop with the stakeholders, working from the assumptions and information we already had.
The stakeholders had identified two potential “profiles” that would use this product, so that was our starting point. They first thought of building their own “activation experiences” inside malls, so we built that Proto-Persona first. Then they considered the users who would enable the skill on their own devices, so we built that profile as a secondary user.
As we discussed and made progress, the stakeholders realized that their main potential users should be the ones who would enable the skill on their own devices, so we shifted our focus to that kind of user. Their business goals centered on gaining “viral” awareness, so they wanted to define an MVP that would be an awesome experience for these specific users.
With that information defined, I started on something similar to a User Flow, but better suited to the needs of this particular challenge. I named it the Conversational & Smart Things flow: first I mapped the logic behind the narrative and its different paths, then I added another lane to the map defining the user's touchpoints with the product.
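The narrative logic in that flow can be thought of as a simple branching state machine. Here is a minimal sketch of the idea; the node names, narration lines, and choices are invented for illustration and are not the actual script:

```python
# Hypothetical sketch of a branching-story state machine, like the
# narrative paths mapped in the Conversational & Smart Things flow.
# All node names, narration, and choices are invented examples.

STORY = {
    "intro": {
        "narration": "Two friends step into the abandoned house...",
        "prompt": "Do you want to follow Ana or Luis?",
        "choices": {"ana": "ana_basement", "luis": "luis_attic"},
    },
    "ana_basement": {
        "narration": "Ana descends the creaking stairs...",
        "prompt": None,
        "choices": {},  # terminal node: story branch ends here
    },
    "luis_attic": {
        "narration": "Luis climbs toward a flickering light...",
        "prompt": None,
        "choices": {},
    },
}

def next_node(current: str, user_choice: str) -> str:
    """Resolve the user's spoken choice to the next story node.

    An unrecognized choice keeps the user at the current node so the
    skill can re-prompt instead of breaking the story.
    """
    node = STORY[current]
    return node["choices"].get(user_choice.lower(), current)
```

For example, `next_node("intro", "Ana")` moves the story to the `ana_basement` branch, while an unrecognized answer leaves the user at `intro` so the skill can repeat the prompt.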
Finally, I added timestamps for a nice-to-have feature: smart things triggered by events in the story's narrative. We found that this greatly improved the experience of the story, so we decided to go ahead and implement it too!
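The timestamped cues can be sketched as a list of offsets into the narration, each paired with a device action. This is a hypothetical illustration; the device names, actions, and timings are invented, not the ones we shipped:

```python
# Hypothetical sketch of timestamped smart-thing cues: each cue pairs
# an offset (seconds into a narration clip) with a device action.
# Devices, actions, and timings are invented for illustration.

from typing import List, Set, Tuple

# (seconds_into_clip, device, action)
CUES: List[Tuple[float, str, str]] = [
    (3.0, "living_room_light", "flicker"),
    (12.5, "speaker", "play_thunder"),
    (20.0, "living_room_light", "off"),
]

def due_cues(elapsed: float,
             already_fired: Set[Tuple[float, str]]) -> List[Tuple[str, str]]:
    """Return (device, action) pairs whose timestamp has passed and
    that have not fired yet, so each cue triggers exactly once."""
    fired = []
    for ts, device, action in CUES:
        if ts <= elapsed and (ts, device) not in already_fired:
            already_fired.add((ts, device))
            fired.append((device, action))
    return fired
```

Polling `due_cues` as the narration plays fires each cue once as its timestamp passes, which keeps the physical effects in sync with the story events.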
We delivered a great experience for the users, judging by the many positive comments published on the skill's Amazon Skill Store page. The stakeholders were also very happy, because we surpassed their expectations: coming from the company's marketing area, they were looking for metrics on usage, downloads, and positive user comments.
I was amazed by what we achieved as a team. The discovery phase was crucial for building strong foundations, and the flows facilitated both development and the conversation between me, the dev team, the stakeholders, and the testers. Once we tested the final product with the smart things connected, we were blown away by the overall experience!
I’m very happy that I had the chance to be part of this project. I hope I can be part of more projects like this in the future!