The UPA Voice
February 2005

Guerilla Facilitation

by Ronnie Battista
Vice President of User Experience at D&B

Having facilitated a few hundred one-on-one usability tests and focus groups, I've come away with several thoughts about the art of facilitation. I've seen too much written about the “right” way to facilitate. Consider these thoughts my first attempt at what I’ll call “Guerilla Facilitation.”

Anyone who has studied facilitation methodology knows the prevailing themes: Be as unobtrusive as possible. Give users a task. Let them run through it. Don't “reward” the customer. Don't give any instruction until they've spent x amount of time on a screen. Interject only when the participant is clearly stuck or not speaking up. When forced to speak, use only non-leading, non-loaded, Socratic responses (answer a “what is this?” question with “well, what do you think it is?”). There are even instructor-led classes where facilitators practice using an audible, non-specific, neutral response to encourage feedback without showing bias (it actually sounds a lot like Homer Simpson's “mmmmm...” sound, with a bit less pleasure). Rigid stuff indeed. That said, in a perfect world I believe in such methods, and I do try to follow “the rules” as best I can.

However, the world (and test participants) are not perfect, and what you want to do “right” does not always work. When I first began facilitating, it didn't take long before I felt a familiar sense of disillusionment, reminiscent of my “Karate is cool” phase when I was thirteen. Back then, I dabbled in self-defense, read the books, and took a couple of lessons. At some point, I learned a series of actions that I branded “the killer move.” I practiced this “killer move” to show off to friends, and I mastered it. I would approach a friend and say, “Stand there. OK, now I'll punch you here... here... then here, and it will break this, this, and this on you in two seconds.” Finally, a good friend (who was a black belt) physically demonstrated to me that my killer move only works in the real world if my attacker is standing still. I've found that when facilitating, my test participants rarely stand still.

So, if you find yourself in a facilitator role and you're beginning to think “that voodoo that we do” is looking like doo doo, here are a few things that I suggest might help:

  • Think like a stand-up comedian - A comedian usually starts by talking about stuff he finds amusing. If people laugh, he continues; if not, he moves on to something else. Sometimes, the same material that keeps them laughing at one club completely bombs at the next. Jokes about the Garden State Parkway (a New Jersey comedy club staple) won't go over half as well in Minnesota. Successful comedians wouldn't try it. Try to think the same way as you work with each new test participant. Continuously adjust your tasks and facilitation style as you read that person. This is far less science, far more first impression. Gauge the user (e.g., are they demonstrative or diminutive? Do they appear eager to please or eager to punish?). With this in mind, consider the tasks they are performing, and determine if there are ways to capitalize on the value this user can provide. For example, one tester I had was clearly not interested in colors and fonts, one of the big issues we needed feedback on. Try as I might, I knew I couldn't make him care about shades of blue in the hour we had together. And if he picked one over another, it was only because I pressed him on it. However, this person was very interested in the layout of content, and had plenty to say about how he disliked it. Knowing this, I chose to refocus and tailor future tasks to hit content hard.

  • Don't beat a dead horse - Given the non-invasive methods of facilitation, there's often a tendency not to get involved even when it is evident that intervention is necessary. Some would say “well, that's the point, right? A facilitator wouldn't be there in the real world.” To remedy this, “flounder-time” is usually agreed upon up front with your observers (giving users, say, 1-2 minutes tops to flounder through something before you jump in to help). That said, don't stare at the second hand, as these situations can throw a wrench in the pace and objectives of a test session. In one session, the participant knew how to minimize and maximize windows with a single click when multiple browsers were open, but in practice didn't do it. He would drag the tops and sides of windows, which was time-consuming and painful to watch. At first, I bit my tongue as he slowly moved through any task that required moving between windows. Yes, this was important to see (i.e., relatively novice Windows users will have problems with multiple browsers), but it didn't take long for the observers to “get the picture.” I let him go on for less than a minute, then pointed out a few tips to help. Again, I only had an hour with this person, and I wasn't about to school him on better Windows navigation in that time. So when he got stuck again, I gave it only a few seconds before jumping in.

  • Make the most of the debrief time between tests – As you start the day, take a few minutes to explain to observers the “right” ways to facilitate so they understand how it works, and take some time to explain your individual style as well. This helps set expectations and gets buy-in at the get-go. As you conduct each session debrief (and you should be setting aside at least 10 minutes between sessions), take a minute to point out when you went “off the preferred path.” Tell the observers what you did and why you felt the need to do it. Ask them if they were comfortable with how the test was conducted. Most observers aren't married to rigid usability practices, and soliciting their feedback reinforces that you are working with their best interests in mind. Sometimes it helps to offer them an opportunity to speak up during future tests if they disagree. Explain that they can, if it makes sense, interrupt the session. Clarify that this must be used sparingly, but make them feel empowered to hit the Esc key if needed.

  • Give your observers the final word - As a further way to empower your observers, give them a chance to pipe in toward the end. The traditional testing sequence has users leaving after their tasks are complete (perhaps filling out a questionnaire before they go). As an alternative, cut the session short by 5-10 minutes. Ask the user to fill out the questionnaire while you step out of the room for a minute, letting them know that you are going to speak with the observers to see if they have any last-minute questions, clarifications or tasks. This ensures that the observers have a chance to raise their specific issues with the current participant, or to test something that came out of their discussions during the session. Based on how well the test is going, you may decide to probe a previous task a bit deeper, try a new task, or cut the evaluation short. No matter how well you facilitate, a small percentage of people who screen well during recruiting will fail to provide articulate, engaging and/or valuable feedback during live sessions. It's painful for all involved, and many observers feel a sense of relief when they can say “no further questions.” They appreciate that option, especially over the course of a long, intensive day.

  • Tasks and scenarios are a guide, not gospel - Sometimes the participant and the task are just not working out. If you see that happening, find a logical stopping point and either start fresh with another task or, better yet, make the transition seamless by segueing from what participants are doing into the new task. Don't panic if users break from a specific task to do something else. If they are interested in something that gets them moving or animated AND it looks like a good opportunity to capture customer experience data, let them pursue it. Just be ready to get them back on task. For example, some power users like to go for a spin off the beaten path every now and then. In one test I conducted, as I was having the “final word” discussion in the observation room, one observer excitedly pointed out, “Look, she's playing around!” The user had finished the questionnaire and was looking things up on her own. Some of the best insight comes out of these renegade missions.

All of these suggestions relate to the one guiding principle that we must never forget...

  • Time is money - get the best bang for your buck - Yes, getting real customers to test your application provides invaluable insight. Facilitated in their purest form, these sessions will continue to enlighten the business owners and developers who observe. I believe we can do even better, but sometimes you need to break a few methodology eggs to make a bigger insight omelet (oh god of metaphors, strike me down!). As facilitator, you are central to the client's perceived and actual return on investment. In addition to typical project fees, the most important cost is the time and expense of your observers. Participants don't care if you break the rules a bit, as long as you pay them their $100-$200 at the end, and your observers clearly want the sessions to be as fruitful as they can be. A testing day is intense, and can seem very long indeed. It's your job to make the most of every minute.

If you find yourself in the role of facilitator, I hope you find these suggestions useful. If you know your client's goals and use a little creative license to help reach them, so be it. Trust your instincts, and you'll do fine.
