
Resources: UPA 2004 Idea Markets

Why Usability Findings & Recommendations Don’t Get Used

Activator: Janice James, Simply Usable By Design

Description

This session examined why usability findings & recommendations don’t get used. People in the User Experience field often see their recommendations and design changes applied only partially or incorrectly. Why? Perhaps because the findings reports that usability experts produce aren't themselves usable. Reports may be readable, attractive and reasonable, yet still fail to meet the needs of their audiences.

The questions posed to the participants included the following:

  • Why do recommendations and design changes not get implemented?
  • What percent of recommendations or design changes get rejected or ignored?
  • Which ones get acted on and which ones don't?
  • Who rejects what? Why?
  • Who is the audience for your usability findings and recommendations?
  • What kind of analysis and research needs to happen to understand the audience enough to make recommendations that get acted on?
  • What will make a findings report more useful and effective overall?


Each question is listed below, along with the comments collected and discussed by the participants.

Why do recommendations and design changes not get implemented?

  • Time and money
  • Lack of trust
  • Organizational politics (one team vs. another)
  • Teams want validation vs. recommendations for changes
  • Teams want to make small changes vs. redesign the app/website
  • Teams just want to make incremental changes
  • Teams don’t want to make more changes that users have to ‘deal’ with
  • Team members perceive themselves as designers, so they can’t possibly have been wrong about their design decisions
  • The changes may not fit with other areas of the application/website

What percent of recommendations or design changes get rejected or ignored?

  • Developers may implement changes based on ‘their’ experiences of observing users during the test
  • Teams are willing to make only small changes, not larger, higher-priority ones
  • Teams typically make about 30% of the changes initially and about 30% more in the next release
  • The number of changes made is affected by the IT budgets
  • Changes tend to be made in the areas of the app/website that have some political backing
  • Smaller changes that cost less to make are made
  • The least important and easiest-to-fix changes are made
  • The fixes that are made may apply to ‘features’ but not to the problem areas
  • Areas that are related to higher business priorities get fixed
  • 50/50
  • Low-hanging-fruit items always get fixed
  • Easy-to-manage/easy-to-fix items get fixed

Which ones get acted on and which ones don't?

  • Less costly/lower risk items
  • Inconsistencies
  • Those that affect the business in a major way
  • Those that are politically influenced
  • Showstoppers
  • Those things that are not an accurate representation of customers’ requests or that will cause customers to fail (in their tasks)
  • Small vs. large fixes
  • Even high impact problems don’t get fixed
  • Internal tools generally don’t get fixed (tools used by the company’s employees)
  • Things without good solutions don’t get fixed
  • Things that are more fun for developers to fix get fixed

Who rejects what? Why?

  • Developers
    • They don’t want to ‘customize’ the app/website
    • They justify why the app/website is the way it is
    • They take on the attitude “it’s not my job”
  • Business owners/managers
    • They take the attitude that you’re telling them they’re doing business wrong
    • They say they don’t have enough validation/justification to make the changes (they’re not ‘ready’ to make changes)
    • Culture is slow to change
    • Time/Schedules and budgets don’t allow for changes
    • Whether or not changes should be made is based on their personal experiences with the customers
  • Business leaders
    • Changes would clash with business objectives

Who is the audience for your usability findings and recommendations?

  • Design teams
  • Business owners/leaders
  • Developers
  • Training dept.
  • Technical communicators

What kind of analysis and research needs to happen to understand the audience enough to make recommendations that get acted on?

  • Learn and speak the audiences’ language
  • Understand what the audience cares about (how they keep track of problems)
  • Survey to determine what the audience feels are the useful aspects of a report
  • Collect feedback
  • Visit teams (audiences)
  • Learn what the cost/benefit is to the audience
  • Find out what the audience’s hot buttons are (job performance, lost revenue, etc.)
  • Get an M.B.A.
  • Find ways to better relate to the audiences

What will make a findings report more useful and effective overall?

  • Include design ideas for solutions
  • Add user quotes to the reports
  • Become respected by the teams—establish good relationships
  • Include a section in the report on ROI
  • Include video highlights
  • Make the recommendations actionable
  • Include screenshots in the reports
  • Include screenshots in the report of how the competitors handle the ‘situation’
  • Include solutions—not just problems
  • Include a spreadsheet of problems with assigned owners and recommended solutions
  • Take ownership of some of the problems
  • Recommend iterative changes to ensure greater buy-in
  • Make the report short and concise
  • Develop the report for different audiences (executive summary, detail, and more detail)
  • Know/learn what the audience (readers of the report) will be most accepting of (e.g., a 12-minute video)
  • Include only the highest priority problems
  • Serve food when presenting the report
