
At CivicScience, a common theme surfaced every time I spoke with our users: "I just want to compare these things."

At the core of everything we do within the system, we needed a way for people to compare things: against each other, against another variable, or over time, and to see if the changes were significant.

Crosstabs

Project Goal

Give users an easy and intuitive way to find and compare multiple variables across several different factors, while still allowing control and freedom for our power users.

Timeframe

Initial Project: 8 months

  • Discovery: 1 month

  • Design: 1 month

  • Development: 6 months

My Proposal: 1 month

  • Discovery: 0 (reused existing)

  • Design: 2 weeks

  • Development: 2 weeks

Methodologies

  • Competitor research

  • Conversations with Subject Matter Experts

  • Moderated usability studies

  • Internal product research

  • Web analytics

The Goal

As we upgraded our product to a new technology stack, we had the opportunity to make improvements. Our second most used analytical tool, after Question Search, was Scorecards. Despite being highly powerful and frequently used internally, Scorecards were too complex and time-consuming for the vast majority of our clients.

The Challenges

Despite having the needed functionality, Scorecards were cumbersome to use. Conversations with daily users of the tool uncovered several key usability issues, including:

  • the inability to edit a column once added

  • a lack of flexibility when comparing items

    • such as forcing user segments into columns and questions into rows, even though segments are themselves defined by question responses

  • a lack of filtering, forcing users to create custom user segments to filter users

    • this also led to an overabundance of user segments in other areas of the application

  • the inability to save a draft and edit it later

The Process

Research

When talking with users of the system, I started with the internal Client Success team. Since they spent the most time using Scorecards to fulfill client requests, they knew what customers were truly looking for in the data. They were also full of handy tips and tricks for shortcutting as much of the Scorecard creation process as possible.

 

Through these discussions, I learned that, at the end of the day, Client Success and customers alike just wanted to compare how people answered each question against one another, and how statistically significant those findings were. When asked why they didn't use the "Question Compare" feature, two main themes emerged:

  1. Users believed they could only compare two questions against each other (which was not enough)

  2. Users often forgot the functionality existed

 

Additionally, our power users often referred to competitors’ functionality, which allowed them to drag and drop any question into a row and any segment into a column to see a results table.

 

After talking with members of the Technology team, we realized we could "stack" this "Question Compare" feature into a table that showed a full comparison of questions crossed with other questions, without being limited to segments as columns. After an initial proof of concept, we were set to design a more complex interaction.
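To make the "stacking" idea concrete, here is a minimal sketch of the underlying concept, written in TypeScript with a hypothetical respondent shape rather than CivicScience's actual data model: each cell of a crosstab is simply a count of respondents who chose a given pair of answers across two questions.

  type Respondent = { answers: Record<string, string> }; // question id -> selected option

  // Count how many respondents selected each pair of answers across two questions.
  function crosstab(respondents: Respondent[], rowQuestion: string, colQuestion: string) {
    const table: Record<string, Record<string, number>> = {};
    for (const r of respondents) {
      const row = r.answers[rowQuestion];
      const col = r.answers[colQuestion];
      if (row === undefined || col === undefined) continue; // skip non-respondents
      table[row] ??= {};
      table[row][col] = (table[row][col] ?? 0) + 1;
    }
    return table;
  }

  // Example: cross favorite ice cream against favorite color.
  const sample: Respondent[] = [
    { answers: { iceCream: "Vanilla", color: "Blue" } },
    { answers: { iceCream: "Vanilla", color: "Red" } },
    { answers: { iceCream: "Chocolate", color: "Blue" } },
  ];
  console.log(crosstab(sample, "iceCream", "color"));
  // { Vanilla: { Blue: 1, Red: 1 }, Chocolate: { Blue: 1 } }

Because neither axis is privileged in this structure, the same table works whether the columns hold segments, other questions, or time periods.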

A Design-First Approach

 

An initial attempt to build a Crosstab feature, intended to keep us competitive with other industry tools, had failed, and I was asked to create a proposal for a minimum viable product (MVP).

Armed with my initial round of research from the previous iteration and a deep understanding of our current capabilities, I went to work.

Initially, supporting the complex Boolean logic of mixing questions together (e.g., show me people who answered "Vanilla" as their favorite ice cream AND "Blue" as their favorite color, but NOT people born after the year 2000) looked like a sticky web to untangle. However, after talking with the Client Success team, we discovered that the most common filters didn't stack questions on each other like the example above; they fell into one of two use cases: 1) comparing how different user groups respond to the same questions, or 2) comparing the same user group over time. By removing time as a filter and instead making it a layer that can be applied to a question, we eliminated the need for Boolean logic in over 90% of use cases.

Logic tables used to communicate the Boolean logic to the PM.
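As a rough illustration of that simplification (all names here are hypothetical, not the production model), a column becomes a segment defined by a single question/answer pair with an optional time window layered on top, which covers both common use cases without any Boolean composition:

  type Segment = { question: string; answer: string };   // e.g. favorite color = "Blue"
  type TimeWindow = { start: Date; end: Date };          // optional layer applied to a column
  type Column = { segment: Segment; time?: TimeWindow };

  // Use case 1: different user groups, same questions, same time period.
  const byGroup: Column[] = [
    { segment: { question: "color", answer: "Blue" } },
    { segment: { question: "color", answer: "Red" } },
  ];

  // Use case 2: the same user group compared across two time periods.
  const overTime: Column[] = [
    { segment: { question: "iceCream", answer: "Vanilla" },
      time: { start: new Date("2022-01-01"), end: new Date("2022-06-30") } },
    { segment: { question: "iceCream", answer: "Vanilla" },
      time: { start: new Date("2022-07-01"), end: new Date("2022-12-31") } },
  ];

  console.log(byGroup.length + overTime.length, "columns defined without Boolean operators");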

Interactions

Next, the drag-and-drop interactions had to be proven out as a concept. The question search in this feature was a simplified version of our main Question Search, resized and reflowed using mobile-first principles. After creating some initial concepts explaining how the design would show affordances for where a question was being placed, I collaborated with the dev team to find development libraries that came close enough to our needs to begin the next proof of concept. After we settled on a library that met most of our requirements, they began building prototypes where they could drag and drop questions onto a canvas.

Wireframes used to illustrate how the drag-and-drop functionality should work.
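For illustration only, here is a minimal browser sketch of the affordance idea using the native HTML5 drag-and-drop events; the element ids and class names are hypothetical, and the shipped feature was built on the third-party library we selected with the dev team.

  const card = document.getElementById("question-card")!;     // a draggable question
  const canvas = document.getElementById("crosstab-canvas")!; // the drop target

  card.setAttribute("draggable", "true");
  card.addEventListener("dragstart", (e: DragEvent) => {
    e.dataTransfer?.setData("text/plain", card.dataset.questionId ?? "");
  });

  canvas.addEventListener("dragover", (e: DragEvent) => {
    e.preventDefault();                   // required so the browser allows a drop
    canvas.classList.add("drop-target");  // visual affordance while hovering
  });

  canvas.addEventListener("dragleave", () => canvas.classList.remove("drop-target"));

  canvas.addEventListener("drop", (e: DragEvent) => {
    e.preventDefault();
    canvas.classList.remove("drop-target");
    const questionId = e.dataTransfer?.getData("text/plain");
    console.log(`Question ${questionId} dropped onto the canvas`);
  });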

Modes

After drag-and-drop proved to work well, I decided to split editing the canvas into multiple "modes":

  1. Create 

  2. Add/Edit Variables

  3. Filter

  4. View Results

Breaking these out into separate steps allowed each mode to stand on its own without interfering with the interactions of the other stages.
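Conceptually, the modes act as exclusive states on the canvas; a tiny hypothetical sketch of that separation (the names are illustrative):

  type CanvasMode = "create" | "addEditVariables" | "filter" | "viewResults";

  let activeMode: CanvasMode = "create";

  // Only the interactions belonging to the active mode are enabled, so dragging,
  // filtering, and result viewing never compete for the same gestures on the canvas.
  function setMode(next: CanvasMode) {
    activeMode = next;
    console.log(`Canvas switched to ${activeMode} mode`);
  }

  setMode("addEditVariables");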

An early iteration showing how the "view" panel might allow a user to filter once a question was selected. This approach was discarded in favor of an "edit" mode.

The Results

At the time of writing, this is an ongoing project, and I do not have screenshots of the final implemented product. However, some examples of the final designs are shown below:
