Cary, NC  |  Jan - May 2015

ROLE
Human Factors Intern

IMPACT
Analyzed client usage data to suggest improvements to the software product and consulting practices



Summary

•  Worked as a member of the software team to improve Horizon's product

•  Brought a cognitive psychology perspective to research projects and weekly development meetings

•  Reviewed literature on human factors methodology and technology acceptance

•  Designed a study to examine the efficacy of Horizon's product compared to existing tools



MAIN PROJECT

Horizon Performance is a consulting and software-as-a-service company that assists clients with the selection, assessment, and development of personnel.

One feature of Horizon's software product is the ability to capture and record behavior in real time. My work focused on analyzing how clients used this Behavioral Observations tool.

An example observation.


To log an observation, users select from a preset list of behaviors.  This list is tailored to the client's specific needs.  Through consultation and communication, a list of 70 behavior options was developed for this client.

Data set of ~5,000 observations.


I was presented with the client's usage data for the Behavioral Observations tool. This data included every observation they had recorded over the previous 15 months.

I was given the freedom to explore this data to find interesting patterns and takeaways. My goal was to learn how this client was using our product and to better understand their user experience.
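To give a sense of the kind of exploratory analysis this involved, here is a minimal sketch of how the option frequencies could be tallied. The file name and column names are hypothetical placeholders, not Horizon's actual data format.

```python
import pandas as pd

# Hypothetical export of the ~5,000 logged observations: one row per
# observation, with a column recording which behavior option was selected.
observations = pd.read_csv("behavioral_observations.csv")

# Share of all observations accounted for by each of the 70 behavior options.
option_share = observations["behavior_option"].value_counts(normalize=True)

print(option_share.head(10))   # the most heavily used options
print(option_share.tail(10))   # the options that are almost never used
```

A simple frequency table along these lines is enough to surface patterns like the two takeaways below.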


Key Takeaway #1

2 behavior options...

... accounted for over half of all observations.

Although we worked with the client to develop the list of behaviors, the data suggest that something was lost in translation.

The top 2 options far overshadowed the other 68, and the majority of options were almost never used. There was a stark contrast between users' perceived needs and their actual behavior.

Further research and user testing are required to determine the best course of action, but potential solutions include:

1.  Redesign the list of behavior options.

70 options may be too many for users to manage. A shorter list could be more comprehensible and equally effective. An interface could also be implemented to organize the behavior options and facilitate interaction, which could help moderate the salience of list items.

2.  Reassess how users are trained to use the system.

70 options may in fact be the right number. Users may have different mental models for how to define and distinguish between behaviors. They may also have different goals in mind when using the product.


Key Takeaway #2

The "Other" option was used in 80% of all observations.

The product designers intended the "Other" category to be used only as a supplement to the specific behavior options.  After all, considerable time and effort was dedicated to creating a tailored list of behaviors for the client.

"Other" was supposed to be for observations that did not fit into any of the existing options.

Potentially, data tagged as "Other" could inform designers of behaviors that need to be added to the list.
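As a rough sketch of how the "Other" entries could be mined for such candidates, assuming the export includes a free-text note alongside each observation (the note field and column names here are hypothetical, not Horizon's actual schema):

```python
import pandas as pd
from collections import Counter

# Hypothetical export: one row per observation, with the selected option and
# a free-text note describing what was observed (column names are assumptions).
observations = pd.read_csv("behavioral_observations.csv")
other = observations[observations["behavior_option"] == "Other"]

# Crude first pass: tally common words in the notes attached to "Other"
# observations to spot recurring behaviors missing from the preset list.
word_counts = Counter(
    word
    for note in other["note"].dropna()
    for word in note.lower().split()
)
print(word_counts.most_common(25))
```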

Two possible interpretations are:

1.  "Other" is being used as intended.  Users are observing many behaviors that do not fit any of the list options.

Despite our efforts, the tailored list may not represent what the client truly needs. Communication between us and the client could be improved, and the list of options should be updated to better reflect the client's interests.

2.  Users perceive the function of "Other" differently from designers.

The observations tagged as "Other" could actually be categorized into one of the specific list options, but users are not doing so. This could be due to a lack of knowledge or understanding of the categories, or to a usability issue that needs to be addressed (e.g., users select "Other" out of convenience and ease of use).