
"In order to know where you are going, you have to know where you've been."

Cvent Baseline and Benchmark Study

Project Goal

Understand the current state of the product in order to measure key performance metrics like Time on Task, Task Success Rate, and SUS score, as a basis of comparison against later iterations.

Timeframe

Define Needs: 4 weeks

Design Study: 6 weeks

Perform Study: 3 weeks

Data Analysis: 3 weeks

Overall: 4 months (plus ongoing benchmark studies)

Methodologies

  • Discovery Research

  • Unmoderated Remote User Research

  • Baseline and Benchmark Studies

  • Comparative Analysis

  • Longitudinal Studies

Establishing a Baseline

After a year at Cvent, although the research team was consistently cranking out new findings, we were struggling to get buy-in: the Product team and other stakeholders had a hard time understanding how our findings could translate into design changes.  For one product line in particular (a WYSIWYG website editor similar to Wix or Squarespace), the Product team was shipping constant changes, but when my team spoke with the Customer Support and Client Success teams, ticket volumes and complaints were higher than ever.

I concluded that "one-off" research projects outside the context of use did not paint a clear enough picture for the Product team, so I decided to perform a baseline study against which we could benchmark all of the Product team's changes.

 

The Questions

"How is our product offering currently doing?  Are the changes we are making actually affecting the user experience?  How does it perform compared to other products?"

 

The Approach

In order to gauge how the product was performing, I decided we needed a larger sample size to really understand how our users were doing with the product.  This had the added advantage of feeding the team's data-driven personas.

Knowledge Refresh

Before jumping into the project, I refreshed my memory by reading articles and books on how best to conduct baseline studies, to ensure rigor and repeatability.

I also signed up for the User Experience Professionals Association's short course titled "Benchmarking the User Experience," led by Jeff Sauro of www.MeasuringU.com.

Armed with all the answers I needed, I began to frame up the study.

Vetting Tools

Since I was targeting between 20 and 30 participants for this study, I knew I could not moderate the research myself while keeping up with my other responsibilities.  I determined that an unmoderated study would be our best bet and would allow for repeatability with minimal effort when we decided to benchmark against the original baseline.

I compared over 15 unmoderated research tools and, due to timeline and budget constraints, settled on Validately (now UserZoom Go).

Stakeholder Buy-In

While I researched the best tool for our small team, I started conversations with key stakeholders.  In previous projects, we had struggled with certain parties dismissing our findings by disagreeing with our methodologies.  To avoid this situation, we established the practice of presenting our methodologies before starting the research, allowing stakeholders to air any concerns about how we would reach our conclusions.  This created a verbal agreement that our findings would be accepted as valid, as well as a discussion of what ranges would be considered acceptable for the product.

 

The Study

Writing the Tasks and Acceptance Criteria

In order to ensure the tasks matched users' goals and would be repeatable in future studies, I kept the overall tasks generic and high level, while listing the specific steps required for success in the annotation guide.

 

For these tasks we measured the following metrics:

  • Task Success Rate

  • Number of times the user struggled with the task

  • Single Ease Question (SEQ)

  • Task Confidence Rate

  • Time on Task (TOT)

 

At the end of the study, we also captured the System Usability Scale (SUS) score, overall Task Confidence Rate, and Net Promoter Score (NPS), in addition to any other qualitative feedback users wished to add.
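For readers unfamiliar with how a SUS score is derived, the snippet below is an illustrative Python sketch of the standard scoring rule for the ten-item questionnaire (odd-numbered items contribute their rating minus 1, even-numbered items contribute 5 minus their rating, and the 0-40 sum is scaled by 2.5 to a 0-100 score).  It is a generic example, not the scoring script used in this study.

```python
# Illustrative sketch of standard SUS scoring (0-100).
# Responses are the usual 1-5 Likert ratings for the ten SUS statements;
# this is not the actual analysis script used in the study.

def sus_score(responses):
    """Compute a single participant's SUS score from ten 1-5 ratings."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten item responses")
    total = 0
    for i, rating in enumerate(responses):
        if i % 2 == 0:               # odd-numbered items (1, 3, 5, 7, 9)
            total += rating - 1      # contribute (rating - 1)
        else:                        # even-numbered items (2, 4, 6, 8, 10)
            total += 5 - rating      # contribute (5 - rating)
    return total * 2.5               # scale the 0-40 sum to 0-100

# Example: one participant's ratings for items 1 through 10
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
```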
 

Finding the Participants

In order to find participants, I collaborated with the Client Success Team to identify early adopters of the product.  Two hundred ninety-two users and eight referrals opted into the study; each was asked to take a screener survey, which entered them into a prize drawing.  Participants who completed the full unmoderated study were incentivized with a $100 Amazon gift card.

 

In total, 27 participants qualified and completed the study.

 

The Results

Analysis

As results trickled in, I started analyzing the recordings.  After two weeks, I was spending all my time evaluating them, finding patterns, and pulling particularly compelling video clips and quotes.

I enlisted the team intern to help with some of the more automated data analysis.
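To give a flavor of that automated portion, here is a minimal sketch of how per-task metrics could be rolled up from a session export.  The file name and column names (task, success, time_on_task_sec, seq) are hypothetical and do not reflect the actual Validately export, and the adjusted-Wald confidence interval shown is simply a common choice for small-sample completion rates, not something specified in the original write-up.

```python
# Minimal sketch of rolling up per-task metrics from an unmoderated study export.
# The file name and column names are hypothetical placeholders.

import csv
import math
from collections import defaultdict

def adjusted_wald(successes, n, z=1.96):
    """95% adjusted-Wald confidence interval for a task completion rate."""
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Group the exported session rows by task.
sessions_by_task = defaultdict(list)
with open("baseline_sessions.csv", newline="") as f:
    for row in csv.DictReader(f):
        sessions_by_task[row["task"]].append(row)

for task, sessions in sorted(sessions_by_task.items()):
    n = len(sessions)
    successes = sum(int(s["success"]) for s in sessions)
    times = [float(s["time_on_task_sec"]) for s in sessions]
    # Geometric mean is less distorted by a few very slow sessions than the arithmetic mean.
    geo_mean_time = math.exp(sum(math.log(t) for t in times) / n)
    mean_seq = sum(float(s["seq"]) for s in sessions) / n
    low, high = adjusted_wald(successes, n)
    print(f"{task}: success {successes}/{n} (95% CI {low:.0%}-{high:.0%}), "
          f"geometric mean time {geo_mean_time:.0f}s, mean SEQ {mean_seq:.1f}")
```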

Findings

I created a full write-up of the study and handed it off to the Product team.

Key findings included:

  • Users had trouble adding content to a page

  • Users did not publish their site before previewing or testing

  • Users got lost navigating between pages as well as within the right-hand panel

  • Users had trouble understanding how to apply global versus local changes

The Outcome

After we delivered the results, with both quantitative and qualitative outcomes, the Product team took the learnings to heart.  They were able to fully understand the issues within the context of the product ecosystem and make changes that drastically improved scores in subsequent benchmark iterations.
