Social psychologists tend to think about things like trust a lot. But we are not software designers. When I read what usability experts have to say about trust, the precision with which they talk about designing for it is impressive. Trust assurances, third-party security seals, and lists of FAQs all seem (or so the evidence suggests) successfully geared toward engendering user trust in a website. Software design looks at trust from the sharp end of things—with precise ideas and interventions to affect a user’s trust.
It’s sort of discouraging for a social psychologist, because we tend to operate on the other end of the continuum—the blunt end. When your object of study is the human being, you can’t be overly wed to precision, as much as you might like to be. Just the same, it is sometimes helpful for organizations to think at the blunt end of the continuum when developing strategies to promote trust in e-commerce. The lessons of social psychology may not always be specific and detailed, but they are informative about what is going on between the user’s ears, and about what the user brings to the context that affects trust.
In the wake of Facebook’s IPO, one of the great debates that will ultimately affect the trajectory of its stock price is the value of advertising through that interface. Consumers are increasingly asked to trade privacy for convenience in e-commerce, social networking, online banking, and so forth. User trust is likely part of the key to unlocking that value. Some users see e-commerce and social networking as a sort of “Mother” presence that looks out for them, tries to connect them with other people and products of value to them, and has their best interests at heart. These folks will probably be a lot looser with privacy than users who view social networking sites as “Big Brother.” Big Brother keeps an eye on you, but he isn’t looking out for you. You have to watch your back!
So what types of online social interactions shift our perceptions from “Mother” to “Big Brother”? Social psychologist Leon Festinger might have something to say about this.
Leon Festinger is credited with proposing the theory of cognitive dissonance (Festinger, 1957). Cognitive dissonance occurs when a person simultaneously holds two beliefs that are psychologically inconsistent. The classic early demonstration of dissonance (Festinger & Carlsmith, 1959) involved getting seventy-one undergraduate males to do a boring task. A really, really boring task. Festinger then paid these guys either $1 or $20 to tell the ostensibly waiting next participant that the task was fun. Keep in mind this was 1959, so in today’s money it was probably more like being paid either $8 or $160 to lie. Of course, being poor students, they all took the deal.
At a later time, in a different room, Festinger asked the participants to rate how much they enjoyed the boring task. According to the prevailing thinking of the time, the participants who were paid the larger amount to lie should have come to rate the task as more enjoyable—more money, more enjoyment. What Festinger found was the exact opposite: the cheap liars rated the task as far more enjoyable than the expensive liars did. Why?
Festinger explained this as the result of dissonance: you can’t really think “I’m an honest person” and “I just lied for a very small amount of cash” at the same time. They are psychologically inconsistent. The expensive liars don’t have this problem. Their large compensation justified the small lie. To resolve the dissonance this situation aroused, the cheap liars shifted their perceived enjoyment of the boring task so as to reduce the inconsistency. If they really did enjoy the task on some level, then they didn’t lie!
Since Festinger’s contemporaries were quite skeptical of his findings, he and his colleagues and students spent a lot of time in the laboratory, replicating this sort of finding and looking for the edges of it. One of the factors that they stumbled upon was the importance of free choice in dissonance arousal. Jack Brehm (1956) posed as a marketing researcher and asked several women to rate the attractiveness of eight different appliances. Afterwards, as a reward, each woman was told she could have one of the appliances as a gift, and was offered a choice between two appliances she had rated as being equally attractive. After she made the choice, it was wrapped up and given to her. A few minutes later she was asked to rate the appliances again. Lo and behold, the ratings had changed! The women rated the chosen appliance as significantly more attractive than the rejected appliance in the second round of ratings. Again, why?
Brehm argued that this shift in ratings occurred as a way of justifying the choice the women had made. The choice was between two “equal” objects, so thinking about negative aspects of the chosen object was inconsistent with the behavior of having selected it. To resolve this inconsistency, after making the choice, the women focused on the positive aspects of the chosen object and the negative aspects of the rejected object. Freedom of choice created a motive within the women to justify the choice, making it consistent with their broader cognitive landscape.
So what does this have to do with user trust, exactly?
User Trust, Attributions, and Dissonance
Human beings are constantly generating explanations for why they engage in a variety of behaviors. Psychologists call this process “attribution.” Participation in e-commerce is no different. I engage in e-commerce because it is simple and convenient for me, or perhaps for other reasons.
One of the major ways we slice these explanations is into “internal” and “external” attributions. Internal attributions deal with intrinsic motives—I do something because I want to. External attributions focus on extrinsic incentives—I do something because I have to. Extrinsic motivation includes trying to avoid some form of punishment as well as trying to acquire some incentive.
Let’s take online banking as an example. When users have the choice of whether or not to engage in online banking, and they choose to do so, they are likely to attribute this choice to intrinsic motives. They could go to a brick-and-mortar bank and have a face-to-face interaction, but they choose to use the online application. For a lot of users, this is the end of the story. They decide they like online banking—it’s way more convenient, saves them time, and it’s simple to do. They do it for themselves. This creates a dynamic exchange with the online banking interface that lays the groundwork for trust to develop. After all, no one is making me use the online application, so I must be using it because I trust it.
But for a whole segment of the population, online banking feels a lot more like something they’ve been forced into. Their bank started charging fees for receiving paper statements, for transactions made with tellers, and so forth. If the fees become exorbitant to the point where users feel “forced” into online banking, they probably are going to do it, but it isn’t going to feel like a free choice. They are going to know exactly why they are doing it—to avoid the penalties their bank is imposing for choosing otherwise. This is an extrinsic motivation, and it doesn’t engender trust.
Trust develops when users attribute the decision to engage in a relationship (for example, online banking or social networking) to their own free choice—just like the women in Brehm’s study. When users feel forced into the relationship, it undermines their trust. Users who are enticed to interact online with too big a stick or carrot are going to attribute their interaction to the punishment or reward they are avoiding or seeking, not to trust. This is sort of a “Big Brother” perception. “Big Mama” uses a lighter touch.
So the key principle of dissonance research in user trust is this: minimal pressure equals maximal trust.
Dynamics and Details
Cognitive dissonance highlights how the dynamics of an interaction influence the development of user trust in an online application. This is the broader contribution social psychology has to make toward understanding user trust. Trust falls squarely between the ears of the user, and just as the brain itself develops, trust develops over time—along a temporal continuum, not as a dichotomy. Although we can design for user trust with specific, discrete manipulations (the presence or absence of security features, differing levels of system reliability, and so forth), trust typically unfolds over an interaction history. Usually, an interaction history builds trust as people slowly disclose more and more to each other, assessing the risk and reward of the relationship as they go. It is a dynamic dance, with both parties assessing each other along the way.
The slow, incremental exchange of information is akin to the notion of free choice in cognitive dissonance. Requests for small bits of a user’s information aren’t as likely to arouse defenses as large requests for extremely sensitive personal data. If users slowly offer more information over time under relatively little pressure, the chances increase that they will attribute their participation in the interaction to their own trust in the online application. Giving users more ways to engage in this slow exchange of increasingly sensitive information over time should help sustain the dance and result in greater levels of trust.
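For designers, one way to picture this principle is as a staged-disclosure flow. The sketch below is purely illustrative—the tier names, class, and method names are my own hypothetical choices, not anything prescribed by the research—but it captures the idea of asking for one small, optional piece of information at a time and escalating only after the user has freely given the previous one:

```python
# A minimal, hypothetical sketch of staged disclosure.
# Tiers are ordered from least to most sensitive; every request is optional.

DISCLOSURE_TIERS = [
    "email address",    # low sensitivity: ask early
    "display name",
    "phone number",
    "mailing address",
    "payment details",  # high sensitivity: ask only after trust has built
]

class DisclosureFlow:
    """Requests one small, optional piece of information at a time."""

    def __init__(self):
        self.granted = []

    def next_request(self):
        """The next item to (optionally) ask for, or None if all are granted."""
        if len(self.granted) < len(DISCLOSURE_TIERS):
            return DISCLOSURE_TIERS[len(self.granted)]
        return None

    def record_consent(self, item):
        # Advance only if the user freely provided the *current* item;
        # declining is always allowed, and the flow never skips ahead.
        if item == self.next_request():
            self.granted.append(item)
            return True
        return False

flow = DisclosureFlow()
print(flow.next_request())            # starts with the least sensitive item
flow.record_consent("email address")
print(flow.next_request())            # escalates one small step at a time
```

The design choice doing the work here is that the flow can never jump straight to “payment details”: each escalation is gated on a prior, freely given disclosure, which is the code-shaped version of the light touch the dissonance research recommends.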
One of the challenges in studying trust in online applications is the seductive nature of details—in some ways they are so much easier to focus on. They are precise and their effects are often easier to measure. But this misses the dynamics that undergird the development of trust. Dynamics are messy and unpredictable, but if we ignore them, we risk pushing users for too much, too fast. When you put dynamics and details together, you’ve got a recipe for how an online application can seem a lot more benevolent, evoking “Big Brother” a little less and “Big Mama” a lot more.