The Magazine of the Usability Professionals' Association
By Larry Constantine
How do we know what users need? Arguably the best-known, and possibly the most widely practiced, approach is field study using contextual inquiry, an ethnographic technique popularized by Karen Holtzblatt and Hugh Beyer. But not every project has the resources to undertake the substantial field study needed for good results. Model-driven inquiry is an alternative approach that, by turning contextual inquiry upside down, can dramatically cut the need for field research yet still deliver solid, dependable requirements quickly and in a highly useful format.
Contextual inquiry is rooted in anthropology and borrows its primary method, participant observation, from ethnographic research. Like ethnography, which literally means “writing about people,” it seeks in-depth and detailed understanding through immersive data gathering. It is a proven technique for eliciting user requirements based on the core principles of (a) understanding in context, (b) partnership with subjects, and (c) focused investigation.
Understanding in context means simply that knowledge and insights about users and user needs must be based on direct observation and discussions of work or other practice in the actual ordinary setting in which it takes place. Partnership means that the investigators regard themselves not as researchers dispassionately studying subjects, but as collaborators engaged with practitioners in a joint effort to understand and make sense of their work practice.
Focus means making explicit the assumptions and beliefs about what an investigation is about, comparable to the guiding question in ethnography. Focus not only shapes the inquiry process but also helps in exposing and clarifying the observer’s frame of reference. Focus setting usually relies on brainstorming and discussion, but can also use other techniques, such as surveys or focus groups, to narrow and frame the inquiry.
Once objectives for the investigation are established, field study can begin, which, if carried out diligently, may involve extended periods of participant observation generating great quantities of data. It is not uncommon to end up with hundreds, or even thousands, of sticky notes or index cards filled with sundry bits and pieces of observational data. Using techniques such as affinity clustering, the amassed data is categorized and organized to develop models. This is an altogether rational and scientific research paradigm that clearly reflects methodological roots in the social sciences, notably anthropology: field study generates data that are analyzed to yield theory or models.
Contextual inquiry uses five fairly elaborate models: a Flow Model representing the coordination, interaction, and responsibilities of participants within a work practice; a Sequence Model outlining the steps to accomplish an activity; a Cultural Model capturing the norms, influences, and pressures within the work environment; an Artifact Model that includes the documents or other work products that structure or contribute to work; and a Physical Model that describes the physical environment where work is accomplished.
In contrast, model-driven inquiry is a purely pragmatic child of necessity, growing out of projects with compressed schedules, inadequate budgets, and little or no institutional support for rigorous research in the field. Model-driven inquiry turns the scientific rationale of contextual inquiry on its head. Instead of gathering quantities of data as the basis for building models, it builds models as the basis for limited, sharply focused data gathering.
Model-driven inquiry is based on exploratory modeling, developed at Siemens in Germany, and borrows from Joint Essential Modeling. The approach has evolved through practice on multiple projects into a streamlined and simplified agile alternative to conventional ethnographic approaches.
In outline, model-driven inquiry is straightforward:
1. Build exploratory models
2. Compile emerging questions or issues
3. Select expeditious means for resolution
4. Conduct limited, highly focused inquiry
5. Refine and complete the initial models
6. Review and validate the models
Model-driven inquiry begins by building simplified, provisional models of user requirements based on whatever information and insight might be available at the time. Typically these models are in the form of simple inventories or lists, such as a list of all activities in which users are, or might be, engaged. The purpose of exploratory modeling is twofold: first, to create preliminary models for later refinement and use; and second, through modeling, to uncover areas for investigation.
A common objection is that there can be no basis for modeling users and their needs without first doing research. But no project is a blank slate; we always know something. We can draw on past experience, related systems and applications, previous versions, and insights emerging from group process. Typically we will have a variety of resources already available, including mission statements, preliminary requirements documents, strategic project plans, market studies, or competitive analysis. All of these can contain valid and informative material that is, unfortunately, intermixed with noise, nonsense, contradictions, and incorrect conclusions.
We build systematic models instead of merely discussing the matter because the discipline of more rigorous modeling facilitates uncovering unknowns and ambiguities. In principle, almost any models might serve, but it makes sense to build moderately formal models that are useful as input to interaction design. For example, use cases and personas are popular models that can be used in exploratory modeling.
My own work takes an activity-centered approach using three simple inventories to capture the essence of user needs: an Activity Inventory that catalogs the immediate and closely related human activities within which the system will be used; a User Role Inventory that identifies the roles that users will play within those activities in relation to the system being designed; and a Task Inventory that compiles all the tasks (essential use cases) that users will need to perform in the course of carrying out their roles within those activities. Activities and roles may sometimes be described briefly or elaborated with a more precise profile, but detail and refinement are kept to a minimum at this stage.
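As a concrete illustration of how these three inventories cross-reference one another, they can be thought of as a lightweight linked data model. This sketch is my own, not part of the method as published, and the activity, role, and task names are invented examples:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    description: str = ""  # optional brief description, elaborated later if needed

@dataclass
class UserRole:
    name: str
    activity: str  # name of the activity within which the role is played
    profile: str = ""  # optional brief profile

@dataclass
class Task:
    name: str  # an essential use case
    role: str  # role that performs the task

# Hypothetical entries for an imagined library circulation system
activities = [Activity("checking out materials")]
roles = [UserRole("circulation clerk", activity="checking out materials")]
tasks = [Task("renewing a loan", role="circulation clerk")]
```

Keeping entries this spare mirrors the point made above: detail and refinement stay minimal at the exploratory stage.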
Exploratory modeling is best carried out in a workshop setting with three to a dozen participants. In the expert-driven variation, participants might include interaction designers, information architects, and other design and usability professionals, along with business analysts, domain experts, and marketing staff, as well as system architects, software engineers, and developers. A more collaborative variant involves end users along with managers, decision makers, customers, and other stakeholders. Simple, relatively non-technical models—such as activity, role, and task inventories—that require little or no explanation or special expertise are strongly preferred when non-technical people are included. In my experience, more diverse participation yields better results and saves time on the subsequent inquiry process.
As exploratory models are being brainstormed and refined, questions and issues will become apparent. These are collected in the form of inquiries: things needed to complete and correct the provisional models.
Exploratory modeling can highlight unknowns, ambiguities, matters of policy or priority, and points of disagreement, debate, or dissent within the group. Rather than arguing or attempting to resolve every disagreement, an inquiry is formulated and recorded.
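One simple way to keep such an inquiry log—a sketch only; the article prescribes no particular format, and the field names and sample question are my own invention—is as structured entries that record what is unknown and which provisional model the answer will refine:

```python
from dataclasses import dataclass

@dataclass
class Inquiry:
    question: str         # what needs to be found out
    affects: str          # which provisional model the answer will refine
    status: str = "open"  # open -> assigned -> resolved
    resolution: str = ""  # the answer, once obtained

# Hypothetical inquiry recorded during an exploratory modeling workshop
log = [
    Inquiry(
        question="Do clerks ever check out materials on behalf of one another?",
        affects="User Role Inventory",
    ),
]

# Later, once the cheapest adequate source has answered it:
log[0].status = "resolved"
log[0].resolution = "Yes, routinely, when covering breaks."
```

Recording disagreements this way, rather than arguing them out in the workshop, keeps the modeling session moving.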
Here are a few examples of inquiries generated in the course of two projects:
The next step is to pick a source and a method of investigation for each inquiry. Sometimes all that is needed is a phone call to the right person. In picking that person, consider: Who is likely to know? Who can make an educated guess or informed estimate? How important is an exact answer? How confident must you be that you have the right answer? What is the easiest way to find out?
Inquiry resources include not only end users and customers, but also sales and marketing people, tech support staff and logs, user assistance specialists, and technical writers, as well as business decision makers. Informants—that is, the subjects of inquiry to observe or interview—may be selected on various criteria, such as specific knowledge, location, or ease of access. Informants who represent user roles already identified can be particularly useful. Marketing and sales people can often help gain access to informants.
Besides participant observation, inquiries can be resolved through simple questions, small-scale experiments, interviews, or even surveys. An inquiry can be conducted in person, in context, or in any medium, from telephone and email to instant messaging or web forms. In all cases the aim is to take the simplest, cheapest, most efficient route to the information sought.
Complete, Consistent, and Correct
New findings are incorporated into revised models. Once completed, the models are carefully reviewed for the classic “three Cs” of quality: completeness, consistency, and correctness.
When possible and practical, the revised models should be validated with users and subject matter experts as appropriate. Informants are asked: Does this cover everything you would need to do or all the roles you play? Is there anything you think we might have missed? Within your responsibilities, are all of these things needed? Business decision makers may also be recruited to review the models. Here the question is whether all the business objectives appear to be covered by the activities, roles, and tasks to be supported by the system.
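Part of such a review—the consistency check among the inventories—can even be partly mechanized. As a sketch (the inventory structure here is my own illustration, with invented entries, not something the method specifies), a few lines of code can confirm that every task refers to a known role and every role to a known activity:

```python
# Hypothetical inventories: each role names its parent activity,
# and each task names the role that performs it.
activities = {"checking out materials", "shelving returns"}
roles = {
    "circulation clerk": "checking out materials",
    "page": "shelving returns",
}
tasks = {
    "renewing a loan": "circulation clerk",
    "reshelving a cart": "page",
}

def consistency_gaps(activities, roles, tasks):
    """Report dangling cross-references among the three inventories."""
    gaps = []
    for role, activity in roles.items():
        if activity not in activities:
            gaps.append(f"role '{role}' refers to unknown activity '{activity}'")
    for task, role in tasks.items():
        if role not in roles:
            gaps.append(f"task '{task}' refers to unknown role '{role}'")
    return gaps

print(consistency_gaps(activities, roles, tasks))  # an empty list when consistent
```

A check like this catches only dangling references; completeness and correctness still require human reviewers, as described above.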
Payoffs, Problems, and Potential
There can never be a guarantee that model-driven inquiry will, in any given project, yield a sufficiently complete and valid picture of user requirements. As with contextual inquiry—or any other approach to eliciting requirements—much depends on the skills and persistence of the practitioners.
The biggest problems, of course, are not with what you don’t know, but rather with what you don’t even know that you don’t know. It is impossible to generate inquiries for what you are unaware of or for contradictions that never become apparent. But similarly, one can never be certain that more participant observation with additional informants might not have uncovered something previously missed. In most cases, exploratory modeling will expose areas of ignorance or blurry edges to the known world that can shape model-driven inquiry.
Model-driven or Contextual?
Contextual inquiry has certainly proved its worth, but there still might be good reason for usability and design professionals to try model-driven inquiry. For one thing, it provides very early, day-one draft versions of core models. In areas where confidence is high, provisional models may themselves be sufficient to begin selected design activities. The provisional models also generally provide an earlier overview of needed functionality and its organization, supporting sound preliminary decisions about overall structure or architecture of the user interface. By reducing the time and money invested in field research, model-driven inquiry can potentially cut initial lead time, reduce total project costs, and free up resources for other purposes.
For those committed to contextual inquiry and contextual design techniques, model-driven inquiry could still be a valuable adjunct. Exploratory modeling can serve as a more structured approach to focus setting. When combined with participant observation, model-driven inquiry has the potential for substantially shortening field study and data analysis. Moreover, the usual contextual design models can be developed through exploratory modeling, providing a potentially useful framework to guide and speed up the later modeling activities.
Larry Constantine, IDSA, ACM Fellow, heads a research lab and is a professor at the University of Madeira, Portugal. A design methodologist and award-winning designer, he specializes in interaction design for safety-critical applications. He has published more than 175 papers and seventeen books, including the award-winning Software for Use.
This article was originally printed in User Experience Magazine, Volume 8, Issue 3, 2009.
© Usability Professionals' Association
Contact UPA at http://www.usabilityprofessionals.org/about_upa/contact_upa.html