Field Research: Another View

In marketing research vernacular, my company is in the business of “fielding” research—especially research that requires face-to-face interviewing.  By fielding I mean we facilitate the gathering of data—recruiting the appropriate individuals; hosting and managing the interviewing/data-gathering process with tools tailored specifically for the research design; recording and organizing the data in a readily usable form.

Our involvement in usability research is based on the assumption that, while the philosophical underpinnings, objectives, and techniques of that discipline may differ considerably from those used in marketing research, the underlying principles of good recruiting and seamless field management do not.

Accordingly, these services can and do affect the quality of the final results. Clearly, a poorly managed process can lead to flimsy or useless results.

The etymology of the word fielding suggests why it’s common in the vernacular of marketing research (but rarely used among usability practitioners), and also lends some insight into the differences between the two disciplines’ approaches to seeking out appropriate subjects to query in pursuit of their particular brand of truth.

Marketing research and usability research, although they each represent hybrid applications derived from a more-or-less common ancestry (the social sciences), evolved quite differently.

Early on in the development of marketing research, the object was to estimate the relative distribution of people’s expectations, perceptions, reactions to, and behavior patterns surrounding, a given product or service (already thriving, failing, or under consideration). Accordingly, marketing research required sociologists, anthropologists, and psychologists to venture out from the confines of academia into “the field” to take measurements across a broad universe of consumers. Marketing research focused on learning based on distributions of broad-scale measurements. The task of taking those measurements became known as “field” work.

The early stages in the development of usability research had to do with the application of psychology and physical anthropology to the task of engineering intuitive interfaces between people and machines. This work was well suited to laboratory exploration, the primary objectives being readily met by carefully observing a relatively small sampling of individuals representing some variance in physical or mental abilities, but rarely demanding broad-scale parsing of response distributions. Design, data collection, and analysis of usability research were generally accomplished by focusing on developing workable solutions among a handful of “typical” users of the instruments under consideration.

Perhaps the biggest difference between the fielding of marketing and usability projects can be found in the complexity of respondent screening. Finding the right respondents is assigned different priorities by these two disciplines. Marketing research has made screening a highly formal process, while usability researchers’ requirements are frequently somewhat less formal.

Thanks largely to the immediacy of the Internet, marketers are able to design appeals targeted with great specificity. Accordingly, marketing researchers feel compelled to do so simply because they can. They succumb to what Kaplan calls the “law of the instrument,” which can be paraphrased as, “Give a boy drumsticks and he immediately finds that everything needs drumming.”

Today we see screening questionnaires from marketing research clients running ten or more pages and upwards of fifty questions, the answers to which are processed to isolate potential focus-group respondents who, in reality, represent truly obscure segments of the consumer public. But does the fact that Internet-based recruiting permits such exceedingly rigorous granularity make it advisable? While we may now have the capacity to implement such a search, doing so rarely enhances the value of the research; frequently it does just the opposite.

Usability researchers frequently lean toward the opposite extreme. It is not unheard of to get a request to recruit “some people” for a usability project. When pressed, these requests are fleshed out to a bare minimum: a mix of males and females, all of whom access the Internet daily. Usability researchers are relatively unfocused on “sampling.”

Both extremes can produce data that are marginally applicable or, worse, misleading. On one hand, marketing researchers run the risk of imputing great precision to their findings based on a meticulous set of selection criteria, despite the statistical unreliability of the small number of observations they make. On the other hand, the problem with the usability approach is that inattention to sampling rigor allows usability engineers to interview friends and colleagues, people with whom they are comfortable largely because they communicate on the same geek wavelength.

Both disciplines would benefit by carefully considering the assumptions they impose with their approaches to sampling.