
Adapting Web 1.0 Evaluation Techniques for E-Government in Second Life

Alla Keselman, Victor Cid, Matthew Perry, Claude Steinberg, Fred B. Wood, and Elliot R. Siegel

Journal of Usability Studies, Volume 6, Issue 4, August 2011, pp. 204 - 225

Pilot Exercises: Methodological Approach

The goal of this phase of the project was to develop and conduct exercises that would pilot-test the experts’ recommendations, assessing (a) ease of implementation, (b) potential usefulness, and (c) methodological and technical challenges to obtaining informative results.

Usability and User Feedback

In the previous sections of this paper, we used the term usability testing broadly, to refer to the full range of usability analysis methods, including user testing, focus groups, and heuristic reviews. The adaptability of all these methods to VWs was of general interest to us. However, with regard to usability testing during the pilot exercise, our main interest was to develop a Second Life analogue of Web 1.0 user testing. In Web 1.0, this testing typically involves an active intervention session in which a facilitator guides a participant through a series of targeted tasks, while eliciting think-aloud protocols and capturing the data via Morae software. The expert panel suggested that the usability evaluation should reflect the social interactivity and quest-like nature of the VW experience. To incorporate these features, the exercise was designed as a group scavenger hunt that we called the “Infothon.” In two user sessions, three groups of three users completed the tasks.

The Infothon participants were IT professionals with variable experience in interactive gaming and moderate-to-no prior experience in Second Life. All had college or graduate-level degrees, but only one had a science background (biochemistry) that might have enabled her to judge the accuracy of the technical information provided in Tox Town. During the preliminary screening and recruitment, all expressed a moderate-to-high level of interest in the environmental impact on human health as an issue with practical relevance to their lives. At the same time, because this was a convenience sample, their motivation to persist on tasks might have been lower than that of spontaneous users of environmental health Web sites. A facilitator, stationed in a central location, received answers and interviewed participants delivering quest responses, using a combination of structured, pre-scripted questions and spontaneous probes. The primary goal of the interview was to obtain participants’ answers and to inquire about the paths that led to them. Three additional observers roamed in-world, occasionally asking participants clarifying questions.

The Infothon procedure involved the following:

  1. Interactive Second Life training session (45 min).
  2. Activity orientation, including team assignments and instructions about the scavenger hunt tasks, process, and communication with the facilitator (15 min).
  3. Scavenger hunt (2 hours): Teams of three participants collaborated on eight usability tasks and completed in-world or browser-based user feedback surveys (see below).
  4. In-world focus group discussion of the scavenger hunt experience (30 min).

User tasks

The tasks aimed to investigate users’ experience with Tox Town in Second Life, rather than Second Life avatar controls. The focus was on the effectiveness and efficiency of information retrieval rather than deep learning of the toxicology information available in Tox Town. In particular, we hoped to be able to identify sub-optimally placed information (e.g., in low-traffic areas or in locations that were not commonly associated with that information). The wording of the tasks was general enough to permit users to select a variety of paths and modes of transport (e.g., foot, flight, teleportation). We were particularly interested in the effect of information placed directly in the interactive 3-D environment (e.g., information on water pollution appears upon interaction with a water fountain) versus embedded in flat Web 1.0 information products (e.g., clicking on a virtual library poster opens a Web 1.0 page via an in-world browser). Tasks also tested the ease of object control, the impact of non-educational VW features on user satisfaction, and the effect of social interactions and multitasking on users’ performance, among other things. Tasks also involved answering multiple-choice questions about the location of information and objects, such as in the following example:

Q: How does Tox Town define a “brownfield”? a) Unused property scheduled for redevelopment; b) Open chemically burned space where no plants can grow; c) Zones of a city where pollution is permitted.

Survey design

During the exercise, all participants had multiple opportunities to complete a survey that focused on dimensions of satisfaction in the VW experience (based on Isbister & Schaffer, 2008), referred to throughout this document as the Satisfaction Survey. The survey questions incorporated Likert scales: participants were asked to rate how strongly they agreed or disagreed with statements about the VW.

The pilot varied two features of survey presentation: the mode (Web-based vs. in-world) and the trigger (static, pop-up on proximity, invitation by a facilitator). A Web-based version of the survey was accessible through a link from an invitational poster and was also explicitly offered to participants by the facilitator. This version could be viewed from within the VW through the Second Life Web browser or outside the VW in another browser. An in-world version could be accessed by touching other copies of the poster. An invitational pop-up window for the survey also appeared when a participant came into the vicinity of mushroom objects scattered throughout the various virtual gardens.

Data capture and analysis

A lead usability expert watched users interact and interviewed team members who presented answers to him either through avatars appearing “in person” or through non-local, in-world text chat windows (voice communication via microphones was also attempted but soon abandoned owing to audio quality and bandwidth issues). Two additional observer avatars were present who followed participants and occasionally asked clarifying questions about their actions and statements. Records of each avatar’s text chats with other avatars were downloaded for later review. All Infothon sessions were video recorded with Morae software and reviewed after the sessions.
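The paper does not describe how the downloaded chat records were processed before review. As one plausible way to consolidate them, the sketch below merges per-avatar transcripts into a single table; it assumes the common viewer transcript layout of “[date time] Speaker Name: message” per line, and the directory and file names are hypothetical, not details from the study.

    import csv
    import re
    from pathlib import Path

    # Assumed transcript layout (viewer exports vary, e.g., seconds may be present):
    #   [2011/08/15 14:32]  Avatar Name: message text
    CHAT_LINE = re.compile(r"^\[(?P<stamp>[\d/: ]+)\]\s+(?P<speaker>[^:]+):\s*(?P<message>.*)$")

    def parse_transcript(path):
        """Yield (timestamp, speaker, message) tuples from one chat log file."""
        with open(path, encoding="utf-8", errors="replace") as handle:
            for line in handle:
                match = CHAT_LINE.match(line.rstrip("\n"))
                if match:
                    yield match["stamp"], match["speaker"].strip(), match["message"]

    def merge_to_csv(log_dir, out_file):
        """Combine every observer's transcript into one CSV for session review."""
        with open(out_file, "w", newline="", encoding="utf-8") as out:
            writer = csv.writer(out)
            writer.writerow(["source_file", "timestamp", "speaker", "message"])
            for log_path in sorted(Path(log_dir).glob("*.txt")):
                for stamp, speaker, message in parse_transcript(log_path):
                    writer.writerow([log_path.name, stamp, speaker, message])

    if __name__ == "__main__":
        merge_to_csv("chat_logs", "infothon_chat_review.csv")   # hypothetical paths

A single merged table of this kind makes it easier to line chat activity up against the Morae session recordings during review.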

Performance

At the time of this research, no commercial services or tools were available to measure the performance of user-generated information applications on Second Life or other VW platforms. One of the complications of measuring application performance in Second Life is that a specific virtual “region” can be simulated by different computer servers over time. Changes in performance may reflect a change on the “simulator” and not necessarily changes to the user-generated content. We chose to focus on two performance indicators that reflect the performance of a specific simulated scene as perceived by its users: rendering time and accessibility. The first indicator is the rough equivalent of “page download time” on the Web, and the second is a measure of whether a particular simulation remains accessible to users over time. We developed proof-of-concept tools that could help us determine the feasibility of creating such a monitoring capability for user-generated applications in Second Life. The performance evaluation tests were not performed concurrently with the Infothon.

Our prototypes made use of the standard Second Life client and a text-based Second Life viewer (METAbolt). We used scripts to simulate a user logging in and viewing specific Second Life content once every 15 minutes for 24 hours. We approximated the rendering time by the time it took the Second Life client to download all the objects needed to render a specific scene, as determined by the contents of the viewer’s cache.
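The published description stops at this level of detail. The sketch below is a rough illustration of how such a probe loop might be organized, under two assumptions that are not from the study: the viewer is launched and pointed at the scene by a separate automation step, and its cache directory is known. Rendering time is approximated as the time until the cache stops growing, and a probe that never settles is logged as an accessibility failure.

    import csv
    import time
    from datetime import datetime
    from pathlib import Path

    CACHE_DIR = Path("viewer_cache")      # assumed location of the viewer's cache
    PROBE_INTERVAL_SECONDS = 15 * 60      # one probe every 15 minutes
    PROBE_COUNT = 24 * 4                  # 24 hours of probes
    CACHE_SETTLE_SECONDS = 5              # rendering "done" once the cache stops growing this long
    TIMEOUT_SECONDS = 10 * 60             # treat the scene as inaccessible after this

    def cache_size():
        """Total bytes currently held in the viewer cache."""
        return sum(p.stat().st_size for p in CACHE_DIR.rglob("*") if p.is_file())

    def measure_rendering_time():
        """Approximate rendering time as seconds until cache downloads settle."""
        start = time.monotonic()
        last_size, last_change = cache_size(), time.monotonic()
        while time.monotonic() - start < TIMEOUT_SECONDS:
            time.sleep(1)
            size = cache_size()
            if size != last_size:
                last_size, last_change = size, time.monotonic()
            elif time.monotonic() - last_change >= CACHE_SETTLE_SECONDS:
                return last_change - start
        return None   # downloads never settled: count the probe as an accessibility failure

    def run_probes(out_file="region_performance.csv"):
        with open(out_file, "a", newline="") as out:
            writer = csv.writer(out)
            for _ in range(PROBE_COUNT):
                # A separate automation step (not shown) would clear the cache,
                # log the probe avatar in, and point the camera at the scene.
                seconds = measure_rendering_time()
                writer.writerow([datetime.now().isoformat(),
                                 "ok" if seconds is not None else "inaccessible",
                                 seconds])
                out.flush()
                time.sleep(PROBE_INTERVAL_SECONDS)

The settle threshold and timeout are illustrative values only; the study does not report the thresholds its prototypes used.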

Usage

Linden Lab does not offer per-region usage statistics to subscribers, only some overall usage statistics for the entire Second Life platform. At the time of this research, few companies offered usage monitoring services to Second Life subscribers. These companies are Second Life subscribers themselves and are examples of the user entrepreneurship that the Second Life environment has enabled over the years. We tested the capabilities of one such company, Maya Realities (MR); the services offered by other companies were similar in nature. MR used the scripting capabilities available to users of the environment to create “avatar detectors” that, after being strategically placed in the virtual region owned by the users of the MR service, report visitors and their locations to a central server on the Internet. The methods used by MR have some potential limitations, mainly related to the product being implemented with the user-level scripting capabilities of the Second Life environment.
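MR’s own protocol is not documented in the paper, but the general pattern (in-world scripted sensors posting visitor reports to an external server, for example via LSL’s llSensorRepeat and llHTTPRequest) can be illustrated from the receiving side. The sketch below is not MR’s implementation; the endpoint, port, and field names are assumptions.

    import json
    from datetime import datetime, timezone
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs

    VISIT_LOG = "visits.jsonl"   # hypothetical output file

    class VisitReportHandler(BaseHTTPRequestHandler):
        """Accept POSTed visitor reports from in-world detectors and log them."""

        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            fields = parse_qs(self.rfile.read(length).decode("utf-8"))
            record = {
                "received": datetime.now(timezone.utc).isoformat(),
                "avatar": fields.get("avatar", [""])[0],    # assumed field names
                "region": fields.get("region", [""])[0],
                "position": fields.get("pos", [""])[0],
            }
            with open(VISIT_LOG, "a", encoding="utf-8") as log:
                log.write(json.dumps(record) + "\n")
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), VisitReportHandler).serve_forever()

Because such detectors run as user-level scripts inside the region, they inherit the platform’s script and sensor-range limits, which is consistent with the limitations the authors note.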

MR produces summary usage reports that include the following data:

 
