Nike Coach

Brief

The problem:
First time runners struggle to find the guidance they need to start their journey. Running is the cheapest and most accessible way of getting fit, but novices can find it challenging to know what to wear or how to build a habit of running.

The challenge:
Nike Coach is a personalised running assistant (powered by Google Home) that gives first time runners everything they need to find their first pair of trainers and get running.

Create a conversational interface that understands what kind of runner you are and speaks to you in a way that is appropriate to your experience.

My role
I was part of a large team delivering the Nike Coach experience, and I was responsible for the user research, strategy, design and user testing.


Overview

Project length
3 months

Team
1 User Experience Designer
1 Strategist
1 Data Analyst
3 Copywriters
1 Creative Technologist
1 Technical Lead
1 Project Lead


Discovery


Who is our audience?

Primary audience:

  • Non-runners (users who have never run before)

  • Runners with limited experience (have run in the past but have limited knowledge, or have fallen out of the habit)

  • 25-35 years old

  • Male and female (slight skew to females)

  • People looking for help and guidance to begin running

How will we measure success?
We will know we have succeeded when we see:

  • Users continue to use the assistant beyond 21 days (the point at which a habit is formed)

  • Users who migrate to Nike Run Club during their journey through the Google Assistant

  • Users who purchase a pair of shoes recommended by the assistant

  • Users who purchase a pair of shoes and keep them beyond the point of return/refund

Scope and constraints

  • The assistant’s focus and overall duration for the pilot will be 31 days

    • The pilot timing should consider (but is not limited to) the first day of January in order to support new year’s resolutions.

  • The assistant will need to consist of two core features:

    • An intelligent shoe finder

    • Tips and tricks to support the running journey

  • The assistant will be accessible from:

    • Google Home (voice)

    • Google Assistant (text)

    • Nike.com

KPIs
Key criteria that the phase 1 pilot launch will be measured against:

  1. Get users running

  2. Get users to buy the right pair of trainers for them

  3. Keep users running 31 days after purchase

Trainer research
In-store research was carried out in various trainer stores - Nike, Adidas, Runners Need, The Marathon Store. We observed the types of questions the shop assistants would ask us in order to find the right shoe for the user’s needs:

  • All brand & specialist store experiences (except Nike) reaffirm the idea of form versus function. It appears that you can’t have both in their eyes.

  • They are not speaking intuitively to a new runner (words like energised, responsive, power run).

  • They are not clearly explaining the reasons behind their choices (why X brand, why X size, etc.) but are making decisions for you

  • This is compounded by overly scientific analyses (gait analysis, etc.)

  • Nike are the only store to understand that many runners (especially women) want style + substance.

  • Nike’s testing is also ‘understandable’: you wear the trainer and can feel for yourself whether it gives you the support you’re looking for.

The journey
Establishing the user journey early on was vital in order to convey the experience to the client, who was new to this technology (Google Home). Two journeys for a first time runner were looked at:

  1. The ideal journey the user would take

  2. The user doesn’t know what type of shoe they would like

First time runner: Ideal

First time runner: Don’t know

Understanding the needs of the user
Top level conversation structures were mapped out to start forming the logic of the experience.

How do we quickly assess what level a runner is?

Understanding user needs to inform personalisation

From this, we were able to create a recommendation logic where, based on different variables, a more personalised result is generated for the user.

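As a rough, simplified sketch of that logic (the questions, variable names, thresholds and shoe mappings below are illustrative placeholders, not the shipped rules), assessing the runner’s level from a couple of quick answers and then branching the recommendation might look like this:

# Hypothetical sketch of the recommendation logic. The questions, thresholds
# and shoe mappings are illustrative placeholders, not the shipped rules.

def assess_runner_level(runs_per_month: int, longest_run_km: float) -> str:
    """Quickly bucket a user into an experience level from two answers."""
    if runs_per_month == 0:
        return "non-runner"
    if runs_per_month < 4 or longest_run_km < 5:
        return "limited experience"
    return "regular"

def recommend_shoe(level: str, surface: str, wants_style: bool) -> str:
    """Combine the variables into a more personalised shoe suggestion."""
    if surface == "trail":
        base = "a trail shoe with extra grip"
    elif level == "non-runner":
        base = "a cushioned, supportive road shoe"
    else:
        base = "a lighter, more responsive road shoe"
    return base + " in a style-led colourway" if wants_style else base

level = assess_runner_level(runs_per_month=0, longest_run_km=0)
print(recommend_shoe(level, surface="road", wants_style=True))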

Mapping out the full logic
Twine was used to create the full structure of the logic. Twine is a tool for building branching structures, such as choose-your-own-adventure stories or conversational interfaces.

At its most basic, Twine has two main views: the ‘story map’ and the ‘preview’.

Mapping out the full conversation helped to determine all the copy needed for the conversations, as well as the assets for Google Assistant on screen. We were also able to prototype the conversation through Twine and get client sign-off before the build began.
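To give a feel for the shape of that structure outside of Twine, here is a minimal sketch of a passage map: each passage holds the copy Nike Coach says plus the choices that branch the conversation. The passage names and copy are placeholders, not the final script.

# Minimal, illustrative passage map in the spirit of the Twine story map.
# Passage names and copy are placeholders, not the final Nike Coach script.
conversation = {
    "welcome": {
        "say": "Hi, I'm Nike Coach. What would you like to do today?",
        "choices": {
            "find my first pair of trainers": "shoe_finder",
            "get tips for my first run": "first_run_tips",
        },
    },
    "shoe_finder": {
        "say": "Great. Where will you mostly be running?",
        "choices": {"on the road": "shoe_result", "on trails": "shoe_result"},
    },
    "shoe_result": {
        "say": "Based on that, here's a trainer to try.",
        "choices": {},
    },
    "first_run_tips": {
        "say": "Start slow: a gentle 20 minutes is plenty for run one.",
        "choices": {"give me another tip": "first_run_tips"},
    },
}

def next_passage(current: str, answer: str) -> str:
    """Follow the user's answer to the next passage; unknown answers restart."""
    passage = conversation.get(current, conversation["welcome"])
    return passage["choices"].get(answer, "welcome")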


User testing

First round of testing
Internal testing sessions were carried out, in which tasks were set to uncover any usability problems and to assess the viability of the experience.

The sessions were moderated using these techniques:

• Effective: Watch the results of each section of the experience, and see how often tasks are completed accurately and completely.

• Efficient: Time users as they work to see how long the experience takes to complete.

• Engaging: How well the experience draws the user into the interaction and how pleasant and satisfying it is to use.

• Error Tolerant: How well the product prevents errors and can help the user recover from mistakes that do occur.

• Easy to Learn: How well the experience supports both the initial orientation and continued learning throughout the complete lifetime of use.

A post-session interview was then carried out to validate our assumptions, which were:

• The Assistant will get people running

• The Assistant is motivational in the way the conversation flows and encourages people to get running

• The Assistant is educational in the advice and tips given

• The Assistant will help users achieve the goal they established in the initial stage of the experience

• The conversation flows naturally

• Users will want to carry on using the Assistant after initial usage

Findings
From the testing sessions that were carried out internally, we found that the users:

• Understood that the bot is to onboard beginner runners, to educate and encourage them to run

• Found the experience simple, easy, straightforward, and helpful. No negative feedback was given

• Felt overwhelmed when presented with 3 long options on the menu

• Felt that the bot understood their needs

• Were motivated to go out for a run

• Felt some of the language used was slightly patronising

• Found some of the copy too long and felt it needed to get to the point a lot quicker

Next steps
We looked at addressing the issues uncovered in the testing sessions by liaising with the copywriters to work on the language and length of copy, and to look at shortening the 3 menu options, as it was vital for users to remember all the options in order to get the most out of the experience.

The plan for external user testing
We wanted to understand how easy the product is to use, how users found the experience, and whether they would come back again. We also wanted to validate the personas and the assumptions we had. Usefulness is defined as the combination of utility and usability, i.e. whether the experience delivers:

• What the user needs
• How easy and pleasant it is to use

Breaking the user testing into several parts allowed us to refine the early product logic and ensure that the final design was implemented during the build. We also continuously tested and refined the product within Rehab as the designs developed.

For each of the five dimensions of usability, we thought about how it was reflected in the requirements for each of the user groups.

From the user testing sessions, we wanted to find out:

  • What does our audience think of Nike Coach?

  • What features are they most interested in?

  • What features or changes do they believe would improve the experience?

In order to find answers to these questions we planned to conduct five testing sessions.

These sessions were carried out onsite at Rehab and covered both the Google Home and Google Assistant version of the experience.

To qualify for our five tests participants had to:

  • Be new to running

  • Be based within Greater London

  • Be aged between 20 and 30

Nike Coach was designed for a predominantly female audience, so our testing sessions reflected this with a split of three women to two men.

Quantitative findings
Working with the data analyst, I was able to draw key data insights to steer the user testing.

New vs returning users
Of the 2k users who went through the experience (23.12.17-18.03.18), the majority (96%) visited once, with 4% returning to the experience. Nearly all of these returning users came back for a second session.

Returning user behaviours:

  • Spend longer (+38%) in the experience than new users (2 mins 32 secs vs 1 min 50 secs, respectively; see the quick check after this list)

  • Average number of messages per conversation remains broadly in line, with new users averaging 5 messages per conversation versus 6 for returning users.
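A quick sanity check on that +38% figure, using the session lengths quoted above:

# Sanity check on the +38% uplift, using the averages quoted above.
new_secs = 1 * 60 + 50        # new users: 1 min 50 secs
returning_secs = 2 * 60 + 32  # returning users: 2 mins 32 secs
uplift = (returning_secs - new_secs) / new_secs
print(f"{uplift:.0%}")        # -> 38%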

Content
Looking at the content areas accessed by the 2k users, the largest cohort (47%) accessed the shoe finder. Nearly a quarter (24%) of users accessed content relating to getting started with running, a further 13% accessed beginner’s tips, and 9% accessed start-running content.

Combined, 16% accessed motivational or inspirational content in-experience.

Looking at the data, it was key to find out why there was such a low percentage of returning users and why there was a low engagement rate for the motivation and inspiration content.

This would be investigated further through the post launch user testing.

Qualitative findings

From the user testing sessions that were carried out, we were able to synthesise insights and put together our recommendations on how we could improve and elevate the experience.

What worked well
On a scale of 1 (strongly disagree) to 5 (strongly agree), users were asked to rate the experience against certain factors.

What needs improvement
On a scale of 1 (strongly disagree) to 5 (strongly agree), users were asked to rate the experience against certain factors.


Insight 1
Because of the Nike Coach name, testers frequently mentioned their desire for the bot to know more about running, particularly those who had run in the past.

The majority of testers felt they were mostly getting tips they already knew about or could find by Googling instead.

This insight correlates with the data showing a low rate of returning users: if users are not finding the running tips useful, they have no need to return to the experience.

“I expected a coach, this didn’t feel like I was learning anything I didn’t already know”
- Becky, Female, Regular runner

Insight 2
Testers noted that parts of the conversation (particularly in the Knowledge section) felt long, making it difficult to recall all of the options that were available.

As a general theme, the shorter the request Nike Coach made, the more confident and comfortable our testers were in answering.

“I couldn’t hear all the options, so I just picked the final one because it was the only option I remembered. It all blurred into one.”
- Neil, Male, Infrequent runner

Insight 3
Every tester remarked on the quality of the design and visual content when using the Shoe Finder.

The use of GIFs to explain the shoe components was considered helpful and informative, and the images used to describe surfaces were seen as an easier way to decide on an answer.

“The design was really good”
- Barrie, Male, Regular runner

Recommendation 1 - Name of the experience
The name of the experience was the clearest point of contention for our testers.

Our recommendation would be to either update the content of the bot to better reflect the expectation users had of a personal trainer, or rename the experience to better summarise its current feature set.

Recommendation 2 - Content refresh
Where we do provide tips and advice we should present them in a more engaging format.

As an example, presenting a user with Nike Coach’s ‘Top 5 Tips for your 1st Run’ is a more efficient way of sharing our expertise, without requiring the user to say “Give me more first run tips” at the end of every tip.
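As a rough sketch of the difference (the tip copy and function name are placeholders, not the shipped content), batching the tips lets one response cover what previously took a follow-up prompt per tip:

# Placeholder tips - illustrative only, not the shipped Nike Coach copy.
FIRST_RUN_TIPS = [
    "Start with a gentle 20 minutes.",
    "Walk whenever you need to; it still counts.",
    "Run at a pace where you could hold a conversation.",
    "Plan a short, familiar route before you head out.",
    "Stretch and rehydrate when you get back.",
]

def top_five_tips() -> str:
    """Return all five tips in a single response, rather than one per prompt."""
    lines = [f"{i}. {tip}" for i, tip in enumerate(FIRST_RUN_TIPS, start=1)]
    return "Top 5 Tips for your 1st Run:\n" + "\n".join(lines)

print(top_five_tips())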


Nike Coach Phase 2

Learnings from phase 1 were implemented in phase 2, an optimised experience which went live on 14th September 2018.

NB - I did not have much involvement in phase 2, as it was mostly a content refresh, and I only consulted on the project on an ad-hoc basis.