
Determining Customer Behaviors

Using multiple research approaches to understand users' behaviors and frustrations.

Project Goal

Understand who our users are and what motivates them, and build proto-personas based on the findings.

Timeframe

Define Needs: 3 weeks

Design Study: 4 weeks

Perform Study: 3 weeks

Data Analysis: 3 weeks

Overall: 3 months

Methodologies

  • Self-Reported Survey

  • Remote Moderated Usability Studies

  • Website Analytics

  • Customer Journey Maps

  • Proto-personas

Parature Ticketing Behaviors

When Parature was acquired by Microsoft, the plan was to merge the best of Parature and Microsoft Dynamics CRM into a new product, the Interactive Service Hub.  The new service desk would integrate the Knowledge Base and Ticketing capabilities from both companies.  But to ensure Parature customers would be ready to move to the new product, I had to determine whether it would fully fit their behaviors and needs.  We had to find out how Parature customers were actually using the product.

 

The Question

"Who is Parature's user base?  In what ways are they using Parature?  Where are their frustrations so we can avoid repeating them, and what essentials do we have to keep in the redesign?"

 

The Approach

For such a broad question, I took a three-pronged approach.

 

Customer Self-Reporting Survey

To get the broadest picture of who our users were, I decided to start with a survey.  The advantage of reaching every single customer outweighed the disadvantage of self-reporting bias.  Because I was focusing on quantitative results, I took extra care to write very specific questions, while still giving respondents the option to clarify any answer.  The questions were structured so I could break down the results by role, experience, and other pivots.  The survey was sent out to roughly 1,000 end customers.
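To make that structure concrete, here is a minimal pandas sketch of the kind of breakdown I mean; the file name and columns (role, experience_years, wants_multi_view) are hypothetical stand-ins, not the actual survey schema.

    import pandas as pd

    # Hypothetical export of the survey responses; column names are
    # illustrative stand-ins, not the real survey schema.
    responses = pd.read_csv("survey_responses.csv")

    # Bucket experience so small cells don't dominate the breakdown.
    responses["experience_band"] = pd.cut(
        responses["experience_years"],
        bins=[0, 1, 3, 10, float("inf")],
        labels=["<1 yr", "1-3 yr", "3-10 yr", "10+ yr"],
    )

    # Share of respondents per role/experience cell who want a multi-item view.
    pivot = responses.pivot_table(
        index="role",                # ticketing, phone, chat, knowledge base
        columns="experience_band",
        values="wants_multi_view",   # 1 = wants to view multiple items at once
        aggfunc="mean",
        observed=False,
    )
    print(pivot.round(2))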

 

A total of 334 participants completed the survey.  A few issues held consistent even when roles were taken into account; for example, 92% of respondents wanted to view multiple items at a time so they could multitask.  The most commonly reported workaround was opening multiple windows and tabs to switch between tasks without losing their place.  Armed with this information, I ensured the new designs let users view multiple items at once so they could multitask.
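As a quick sanity check on a headline number like that 92%, a normal-approximation confidence interval for a proportion is enough; the arithmetic below is illustrative and was not part of the original analysis.

    import math

    n = 334   # completed survey responses
    p = 0.92  # share who wanted to view multiple items at once

    # 95% confidence interval via the normal approximation for a proportion.
    se = math.sqrt(p * (1 - p) / n)
    margin = 1.96 * se
    print(f"92% +/- {margin:.1%} -> ({p - margin:.1%}, {p + margin:.1%})")
    # -> 92% +/- 2.9% -> (89.1%, 94.9%)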

 

Remote Moderated Usability Studies

To counteract the lack of detail and the potential bias in the survey, I also moderated remote usability studies via web conference.  In-person studies would have been ideal but were not possible at the time.  I defined the participant requirements as a minimum of 10 users covering the main roles (ticketing, phone, chat, and knowledge base); half would be current Parature users with at least 6 months of experience, and half would be new users.

 

To satisfy these requirements, I met with a total of 12 customer service representatives, ranging from tier 1 support to manager.  I selected eligible participants across a range of ages, genders, companies, and experience levels to reduce any potential skew.

 

In the study, I measured several metrics, including:

  • Time on Task (TOT)

  • Task Success Rate

  • Ease of Use

  • System Usability Scale (SUS)


I also tracked markers to qualify these metrics, such as the number of times a participant needed prompting, alternate ways of completing a task or using a feature, and any custom metrics as they arose.
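For reference, the SUS score is computed with a fixed formula over ten 1-5 questionnaire ratings: odd-numbered (positively worded) items contribute their rating minus 1, even-numbered (negatively worded) items contribute 5 minus their rating, and the sum is multiplied by 2.5 to give a 0-100 score.  A minimal scoring helper, with made-up sample ratings:

    def sus_score(ratings):
        """Score ten 1-5 SUS ratings on the standard 0-100 scale."""
        assert len(ratings) == 10, "SUS has exactly ten items"
        total = 0
        for i, r in enumerate(ratings, start=1):
            # Odd items are positively worded, even items negatively worded.
            total += (r - 1) if i % 2 == 1 else (5 - r)
        return total * 2.5

    # Example with made-up answers from one participant:
    print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0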

 

From these studies, I learned about a common frustration for phone and chat support agents: creating a new case for a customer took too long to load and required too much information before it could be saved.  This led to the design of the "Quick Create" function, which let agents enter the minimum amount of information and then move on.
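Here is a rough sketch of the Quick Create idea in code; the field names, defaults, and draft mechanism are assumptions for illustration, not the shipped implementation.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class QuickCase:
        """A case an agent can save mid-call with only the essentials;
        everything else is filled in afterwards."""
        customer_name: str               # required up front
        summary: str                     # required up front
        channel: str = "phone"           # auto-populated where possible
        details: Optional[str] = None    # fleshed out after the call
        status: str = "draft"

        def finalize(self, details: str) -> None:
            self.details = details
            self.status = "open"

    # During the call: capture only the essentials and save as a draft.
    case = QuickCase(customer_name="A. Agentsson", summary="Portal login loop")
    # After the call: complete the record and move it into the queue.
    case.finalize("Customer stuck in a redirect loop after a password reset.")
    print(case.status)  # -> open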

 

Web Analytics

It can be difficult to gain meaningful insights from web analytics alone, but combined with the survey and usability studies, they helped flesh out the data.  I requested a month's worth of data and noted where users spent the most and least time on the site, in order to develop better questions for my survey and study.
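That first pass might look something like the sketch below, assuming a simple page-view log; the file and columns (page, session_id, time_on_page, bounced) are hypothetical.

    import pandas as pd

    # A month of page-view logs; the file and columns are hypothetical.
    hits = pd.read_csv("analytics_month.csv")

    summary = (
        hits.groupby("page")
            .agg(
                views=("session_id", "count"),
                avg_seconds=("time_on_page", "mean"),
                bounce_rate=("bounced", "mean"),
            )
            .sort_values("avg_seconds", ascending=False)
    )
    print(summary.head(10))  # pages where users spend the most time
    print(summary.tail(10))  # pages where users spend the least time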

 

Using the survey and usability studies to provide context for the web analytics, we saw that the high hit counts on ticket pages were due in large part to agents switching between tickets.  Some pages with low hit counts and high bounce rates turned out to be places where agents quickly looked up information and then left, rather than the low-value pages we had originally believed them to be.

 

The Results

Based on the feedback from these studies, we created a new design that integrated the findings.  We added easy navigation between tickets, which ticketing managers could organize into queues if desired.  "Quick Create" let agents enter information in real time, creating tickets with less up-front data, auto-populating fields where possible, and saving a draft that the agent could flesh out after a call.  A quick-notes section replaced the Post-it notes on users' monitors.  The remote usability studies also served as a benchmark against the new design, which improved efficiency by roughly 32% on the core tasks tested.
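For reference, the roughly 32% figure is a relative reduction in time on task between the benchmark and the redesign, aggregated across the core tasks; the numbers below are placeholders to show the arithmetic, not the study data.

    # Times on task in seconds; values are illustrative, not the study data.
    baseline = {"create_case": 95.0, "find_article": 60.0, "switch_ticket": 22.0}
    redesign = {"create_case": 58.0, "find_article": 48.0, "switch_ticket": 14.0}

    for task in baseline:
        gain = (baseline[task] - redesign[task]) / baseline[task]
        print(f"{task}: {gain:.0%} faster")

    overall = 1 - sum(redesign.values()) / sum(baseline.values())
    print(f"overall: {overall:.0%} improvement")  # -> 32% with these numbers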

 

Findings from this initial effort also helped inform future efforts on customer journey maps and proto-personas.

 
