
Intra- and Inter-Cultural Usability in Computer-Supported Collaboration

Ravi Vatrapu and Dan Suthers

Journal of Usability Studies, Volume 5, Issue 4, August 2010, pp. 172 - 197



Discussion

The empirical findings of the cultural usability analysis can be summarized as follows: Culturally different participants were found to engage at different levels in different collaborative activities, such as refreshing the shared workspace and cross-referencing knowledge-map objects in the threaded discussion. The participants also created different quantities of artifacts, such as evidential relation links and knowledge-map nodes. Further, participants from the two cultural groups gave different user interface satisfaction ratings and produced comments that differed in quality and quantity. The results of our cultural usability analysis add to the empirical findings, discussed in the Introduction section of this paper, that document cultural effects on usability evaluation processes and products.

Specifically, Chinese participants, on average, reported higher overall user interface satisfaction scores but gave significantly lower ratings for the information display and terminology aspects of the system (see Figure 4). This finding does not completely agree with the cultural-cognitive difference of holistic vs. analytical reasoning (Nisbett & Norenzayan, 2002; Nisbett, Peng, Choi, & Norenzayan, 2001). Following Nisbett and Norenzayan (2002) and Nisbett et al. (2001), participants from cultures with a holistic thinking style (such as the Chinese participants of this study) would not be expected to rate the individual components of the system with more specificity than the overall system. In other words, in usability evaluation settings, holistic thinking style participants would be expected to be less discriminating in their ratings of the constituent parts of the system than in their rating of the overall system. The empirical trend we found does not agree with these expectations: Chinese participants rated the components of the socio-technical system with greater specificity than the overall system, whereas American participants gave lower overall system ratings and higher ratings for the constituent parts. More research is needed on this discrepancy between culturally different participants in ratings for the whole vs. the parts of the system.

A possible explanation for the significant differences in participant ratings for the Screen section of the QUIS questionnaire could be the cultural-cognitive differences between field-dependent and field-independent modes of perception among the Chinese and American participants, respectively (Nisbett & Miyamoto, 2005; Nisbett & Norenzayan, 2002; Witkin, 1967; Witkin & Goodenough, 1977). Briefly, field-dependent participants rely more on external cues than field-independent participants with respect to perceptual organization.

Even though not statistically significant, an interesting empirical trend was observed: Chinese participants made more positive comments and fewer negative comments than the American participants in the experimental study. Chinese participants' cultural concerns about deference and harmony maintenance might account for the greater number of positive comments relative to negative comments. However, Chinese participants gave lower ratings to the parts of the system than the American participants did. Moreover, Chinese participants found more usability problems and made more usability suggestions. This empirical trend suggests that, when measuring user satisfaction, it might be beneficial to solicit open-ended comments in addition to Likert-type ratings on a questionnaire.

In prior work, we have proposed a design evaluation framework of usability, sociability, and learnability for computer-supported collaborative learning (CSCL) environments (Vatrapu, Suthers, & Medina, 2008). Despite the differences in usability reported in this paper and in sociability reported elsewhere (Vatrapu, 2008; Vatrapu & Suthers, 2009b), a preliminary analysis of individual learning outcomes on the essays shows no significant differences (Vatrapu, 2008; Vatrapu & Suthers, 2009a). So, even though the subjective perception of the ease of tool learning was marginally significant, domain-learning outcomes were not statistically different. This corroborates prior findings in HCI and human factors that show a discrepancy between objective measures of performance and subjective indicators of preference, which we have termed the performance-preference paradox (Vatrapu et al., 2008).

Research Limitations: Linguistic Relativity and Cultural Relativity

English language comprehension and articulation remains a potential mediating variable in the performance of the Chinese participants. To empirically evaluate this mediating variable, session verbosity (total words individually produced by a participant in the collaborative session) and essay verbosity (total words produced by a participant in the individually written essay) were calculated. A two-way analysis of variance for session verbosity showed significant main effects for both culture, F(1,56) = 4.46, p = 0.04, and gender, F(1,56) = 6.70, p = 0.01. On average, American participants produced more words in the collaborative session than the Chinese participants, and female participants produced more words than the male participants. Similar results were obtained for the essay verbosity measure. It should be noted that information copied and pasted from the source materials is included in both verbosity measures; given the nature of the experimental task, there were no strong theoretical reasons to control for copied-and-pasted words. Future studies using Chinese-language materials and instruments should help disambiguate the language effect. That said, the experimental results are relevant to computer-supported collaborative settings with English as the medium of interaction.
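As an illustration of this kind of analysis, the following is a minimal sketch of a two-way ANOVA of session verbosity on culture and gender. The data frame, column names (culture, gender, session_verbosity), and values are hypothetical stand-ins for illustration only; they are not the study's data or analysis code.

```python
# Minimal sketch: two-way ANOVA on session verbosity (culture x gender).
# Column names and values below are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical per-participant data: one row per participant.
data = pd.DataFrame({
    "culture": ["American"] * 4 + ["Chinese"] * 4,
    "gender": ["F", "F", "M", "M"] * 2,
    "session_verbosity": [520, 480, 410, 395, 430, 415, 360, 350],
})

# Fit a model with main effects for culture and gender plus their interaction.
model = ols("session_verbosity ~ C(culture) * C(gender)", data=data).fit()

# The ANOVA table reports F and p values for each main effect and the interaction.
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)
```

With a real data set, the degrees of freedom and F values would correspond to those reported above; the sketch only shows the shape of the computation.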

Implications for Cultural Usability Research

Honold (2000, p. 341) identified eight factors to be taken into account when investigating product usage across cultural contexts: objectives of the users, characteristics of the users, environment, infrastructure, division of labor, organization of work, mental models based on previous experience, and tools. Objectives of users have been the focus of recent work in cultural usability on evaluator-participant interpersonal relations (Shi & Clemmensen, 2007; Vatrapu & Pérez-Quiñones, 2006). Cultural usability researchers should carefully reconsider the debate around “demand characteristics” (Orne, 1962) in experimental psychology. Orne (1962) defined demand characteristics as the “totality of cues that convey an experimental hypothesis to subjects [and] become significant determinants of subject’s behavior” (p. 779). In the context of cultural usability in particular, and usability evaluation in general, demand characteristics refer to the study expectations, evaluator-participant relationships, and cultural norms for appraisal and attribution. The experimental study reported in this paper was designed taking into account the Contact Hypothesis (Allport, 1954), which holds that beneficial intergroup impressions and outcomes result when equal status, equal institutional support, equal incentives, and a super-ordinate goal are provided for group interactions.

Regarding characteristics of users, the concept of culture is often used but rarely operationalized in experimental studies. Cultural characteristics are attributed to individuals by virtue of ethnic affiliation or nation-state membership. Given the “fading quality of culture” (Ross, 2004), such assumptions of cultural homogeneity are unwarranted. We conducted a manipulation check to verify that the cultural characteristics of the participants were in fact different. As stated in the culture measures in the Results section, cultural characteristics of participants in this experimental study were assessed, evaluated, and documented at the level of individual values as well as at the level of group cultural dimensions.
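For readers unfamiliar with manipulation checks of this kind, the sketch below compares a single cultural value score between the two groups with an independent-samples t-test. The column names (culture, individualism_score) and values are hypothetical; the instruments actually used and the results obtained are described in the Results section, not here.

```python
# Minimal sketch: manipulation check comparing one hypothetical cultural
# value score between the two participant groups. Data are placeholders.
import pandas as pd
from scipy import stats

scores = pd.DataFrame({
    "culture": ["American"] * 5 + ["Chinese"] * 5,
    "individualism_score": [5.8, 6.1, 5.5, 6.0, 5.7, 4.2, 4.6, 4.1, 4.4, 4.0],
})

american = scores.loc[scores["culture"] == "American", "individualism_score"]
chinese = scores.loc[scores["culture"] == "Chinese", "individualism_score"]

# Welch's t-test (no equal-variance assumption) on the value dimension;
# a significant difference supports treating the groups as culturally distinct.
t_stat, p_value = stats.ttest_ind(american, chinese, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

In practice such a check would be repeated for each measured value dimension, or carried out at the group level against established cultural dimension scores.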

In our opinion, cultural usability research needs to move beyond documenting cultural variation in usability assessment methods and outcomes, toward an empirically informed theoretical understanding, explanation, and prediction of cultural HCI phenomena. Computer-supported collaboration presents some unique challenges that do not entirely fall under the purview of usability as typically conceived of in terms of efficiency, effectiveness, and satisfaction. Research in the established HCI field of computer-supported cooperative work (CSCW), the emerging HCI field of human-information interaction (HII; Jones et al., 2006), the technology-enhanced learning paradigm of computer-supported collaborative learning (CSCL; Stahl, Koschmann, & Suthers, 2006), and the emerging field of computer-supported intercultural collaboration (CSIC; Vatrapu & Suthers, 2009b) is investigating how users interact with each other as well as with computers. Currently, the Internet is undergoing a profound shift toward a participatory mode of interaction. With the advent of fundamentally social software such as social networking sites (Orkut, Facebook, MySpace, Mixi, etc.), cultural usability needs to be expanded to include technological intersubjectivity. We need a richer understanding of the phenomenon at the human-computer interactional level. In our opinion, there is a real need for a real-time and real-space interactional account of cultural cognition for the field of HCI. A first attempt at a socio-technical interactional theory of culture can be found in Vatrapu (2009a). We believe that an interactional understanding of cultural variance at the level of perception and appropriation of affordances and of the structures and functions of technological intersubjectivity can better inform the design, development, and evaluation of computer-supported collaboration systems than the current homeostatic conceptions of cultures as dimensional models or typologies. This is not to say that cultural dimensional models are not useful or insightful; rather, it is to say that cultural HCI phenomena should also be theorized and empirically evaluated at the human-computer interactional level.
