The Magazine of the Usability Professionals' Association
By Negar Moshiri
The television has evolved substantially since it was introduced more than half a century ago: black and white to color, analog to digital, CRT to flat screen, and 2D to 3D. Content sources have evolved, too, from a dozen broadcast channels to thousands of digital channels, plus DVR recordings, video-on-demand libraries, and even the Internet. But through these many transformations, the remote control has changed very little. New buttons have been added to support new features, and designs have become sleeker with better ergonomics. The control paradigm, however, remains the same: dozens of buttons to control the TV functions, and up, down, left, and right arrows to navigate the screen (see Figure 1). The TV remote control is needlessly complex, and navigating through content choices one item at a time is inefficient and cumbersome. The TV user interface (UI) as a whole does not meet the needs of today's TV users.
Hillcrest Labs has embarked on a mission to solve this problem. Frustrated by the inadequacy of the TV user interface, we found ourselves asking, “Why can’t I just point at the screen to get what I want?” After all, pointing is an innate human behavior to select things. That question led us to design a pointing UI for the TV, which was a collaborative effort that involved many disciplines, including human factors, industrial design, UX design, software, hardware, and sensor engineering.
Understanding the TV User Experience
The first step in the process was to experiment with pointing at the TV using off-the-shelf equipment. A PC was attached to a TV and used to run early prototypes of applications, such as a mini-guide and video-on-demand, controlling the experience with a mouse. There was a great deal to leverage from the personal computer and its mature pointing interface, but we didn't want to replicate the PC user interface on the TV. We wanted to design specifically for the TV experience.
TV Usage is About Entertainment and, Hence, “Lean-Back”
Viewers expect the TV to deliver entertainment content, mostly in the form of video. Although the TV can provide information, enable communication, and also offer productivity functions, these experiences should fit within the context of a video-oriented entertainment experience.
Watching TV is generally associated with relaxation and comfort. The viewer is typically seated on a sofa or lying on a bed with the expectation that the content will be delivered to them. This mode of usage is passive; the viewer is only active long enough to select a program, but then passive while viewing the content. The user expects to exert minimum effort to control the experience.
In contrast, computer usage is a “lean-forward” activity. The user actively seeks information and tends to multi-task. That is not to say that TVs are never used in a lean-forward fashion and computers are never used in a lean-back way. Video consumption on the computer and gaming on the TV break these patterns, but each has a dominant use case that drives design decisions.
TV is a Communal Device
TV is typically placed in a room where the family congregates, and viewers are accustomed to watching TV in a group setting. Selection of content and control of the user experience are subject to group dynamics. There is one remote control and, much to the chagrin of certain family members, it must be shared. In contrast, computers are personal. Even when shared in a family, only one family member at a time uses the computer.
TV is Viewed and Controlled from a Distance
TVs are often placed across the room from the viewing area. Therefore, watching TV is typically done from a distance. Because TVs are also placed in kitchens, bedrooms, and workout rooms, controlling the TV should be possible from a variety of different positions, such as when sitting, standing, or lying down, and at different angles and distances. But in all of these settings, the viewer is looking up at the TV screen. Therefore, controlling the experience should not require looking down or away from the TV.
User Interface for the TV
Understanding the context of the TV experience meant designing a pointing UI specifically for the TV. Inspired to transform the TV experience, but also cognizant of the need to please the proverbial “couch-potato,” we got to work.
The design of the pointing UI involved both the input device and the Graphical User Interface (GUI). Designing the two components together allowed us to define an interaction system holistically from the ground up.
A pointing UI allows the designer to trade off hard buttons for on-screen controls. Thus, a pointing remote control can be designed with very few buttons and still enable the full range of functions TV viewers expect. Our research had shown that consumers were frustrated by the typical 50+ button remote control, so we designed an interaction system using only five buttons (see Figure 2). We decided to use the power of pointing to simplify the experience and deliver just what the user needs, no more, no less. Quoting John Maeda from The Laws of Simplicity, "Simplicity is about subtracting the obvious and adding the meaningful."
Pointing Remote Control
A pointing remote control must allow the user to remotely move the position of a cursor on a TV screen. The pointing solution should require minimal physical effort, and it should be operable with one hand without a supporting surface. The remote control should be designed so that it can be used comfortably in a variety of positions, and it should work within a reasonable range of angles and distances from the TV.
A number of pointing technologies were evaluated, such as a joystick, trackball, touch-pad, and in-air motion-based mice. We found motion control provided the most natural interaction. It works by gently waving the hand in the air to position the cursor, much like actual pointing. Early prototypes of motion pointing devices were tested extensively with users. Some of our research included contextual inquiries where we tested our prototypes and UI concepts in users’ homes within their TV-viewing environments. We also ran many tests in our “living room” lab.
Our user research validated that motion pointing is, indeed, an intuitive control method for the TV, and feedback from users allowed us to refine our solution. We found several factors to be critical when applying motion control technology to the TV remote control:
Immediate feedback: To make the interaction system intuitive, we had to ensure it was readily apparent how to use it from the moment the user picked up the remote control. We designed the pointing interaction so that the cursor appears in response to the user picking up the device, eliminating the need to press a button. Turning the cursor on with an explicit button press increases the effort to learn the interface for novice users and increases the cognitive load for repeat users.
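One way to detect that the remote has been picked up, without any button press, is to watch the accelerometer for deviations from steady gravity. The sketch below is an illustrative assumption, not Hillcrest's actual algorithm; the function name, gravity constant, and threshold are all hypothetical.

```python
def is_picked_up(accel_magnitudes, gravity=9.81, threshold=0.6):
    """Return True when any recent accelerometer magnitude (m/s^2)
    deviates from steady gravity by more than the threshold,
    suggesting the remote has been lifted rather than left at rest."""
    return any(abs(a - gravity) > threshold for a in accel_magnitudes)

# A remote at rest reads close to 1 g; lifting it produces transients.
resting = [9.80, 9.82, 9.81]
lifted = [9.80, 11.20, 8.40]
```

In practice such a detector would be debounced over a short window so that table vibrations do not wake the cursor.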
Pointing without pointing: We found that having to point directly at the screen could be tiring and not always comfortable in typical TV-watching situations. We wanted the user to be able to relax his or her arm and not be forced to point at the screen. So, we designed a relative pointing system using inertial sensors. The user can point a relative pointing device in virtually any position without the need to point explicitly at the TV. This approach is in contrast to absolute pointing using optical sensors, where the user has to explicitly point at a camera mounted near the TV.
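The core of a relative pointing scheme is mapping angular rates from the inertial sensors to cursor displacements, so wrist rotation moves the cursor regardless of where the device is aimed. This is a minimal sketch under assumed gain and sample-interval values; the actual mapping in a shipping device would be more sophisticated.

```python
def angular_rate_to_cursor_delta(yaw_rate, pitch_rate, gain=400.0, dt=0.01):
    """Convert angular rates (rad/s) from a gyroscope into a cursor
    displacement in pixels over one sample interval dt.

    Rotating the wrist left/right (yaw) moves the cursor horizontally;
    tilting it up/down (pitch) moves the cursor vertically. Because the
    mapping is relative, the device need not be aimed at the screen."""
    dx = yaw_rate * gain * dt
    dy = -pitch_rate * gain * dt  # tilting up should move the cursor up
    return dx, dy
```

Accumulating these deltas each sample yields the cursor position, which is why the user can hold the remote in a relaxed, arbitrary orientation.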
Tremor reduction: Tremor is the involuntary oscillation of the hand, which is natural for humans and varies from person to person. A motion device will respond to these small oscillations and move the cursor on screen. We found that it is critical to the usability of the motion remote control to cancel the effects of natural tremor and maintain the position of the cursor.
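The simplest form of tremor cancellation is a deadband: sub-threshold motion is discarded so the cursor holds still, while larger, intentional motion passes through. This sketch is an illustrative assumption, far simpler than a production tremor filter, and the threshold value is hypothetical.

```python
import math

def suppress_tremor(dx, dy, deadband=0.5):
    """Zero out cursor deltas whose magnitude falls below the deadband,
    so natural hand tremor does not jitter the cursor; intentional
    motion above the threshold is passed through unchanged."""
    if math.hypot(dx, dy) < deadband:
        return 0.0, 0.0
    return dx, dy
```

Real implementations typically combine a deadband with frequency-domain filtering, since tremor occupies a characteristic band distinct from deliberate pointing motion.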
There were many other considerations to the design of the remote control, including power consumption, ergonomics, and button design.
Graphical User Interface and UX Design
The design of the TV GUI and applications should fit the lean-back, entertainment-oriented context of the TV. The primary use case is to watch video, so controls should be arranged on the edges of the screen with minimal coverage of the video. Ideally, controls should only appear when needed. We decided to use motion to activate the UI and make icons visible. Explicit motion indicates that the user is about to engage with the system; likewise, lack of motion indicates that the user is consuming content and controls must get out of the way.
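Showing controls on motion and hiding them on inactivity can be modeled as a small timeout state machine. The class below is a hypothetical sketch; the class name and the three-second timeout are illustrative assumptions, not values from the actual product.

```python
class OverlayController:
    """Show on-screen controls when the remote moves; hide them after
    a period of inactivity so the video remains unobstructed."""

    def __init__(self, hide_after=3.0):
        self.hide_after = hide_after   # seconds of stillness before hiding
        self.last_motion = float("-inf")  # no motion observed yet

    def on_motion(self, now):
        """Called whenever the remote reports motion (timestamp in s)."""
        self.last_motion = now

    def controls_visible(self, now):
        """Controls stay visible only while motion is recent."""
        return (now - self.last_motion) < self.hide_after
```

Driving visibility from motion rather than a button press keeps the interaction consistent with the pick-up-to-activate behavior of the cursor itself.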
In order to use a pointing interface for the TV, the visual elements of the GUI must be designed to be both viewable and selectable from a distance. The Society of Motion Picture and Television Engineers (SMPTE) provides guidelines for TV screen size and distance to the screen. We used these guidelines to arrange our living room lab setup and fine-tune icon and font sizes for optimal visibility for different users and different TV-size-versus-distance configurations.
We developed tests based on Fitts’ Law to size icons for the best point-and-click performance. A Fitts’ Law test presents targets of varying sizes appearing at random locations on the screen and measures the time it takes for the user to move from one target to the next and acquire that target. The larger the icon, the easier it is for the user to acquire, so the key is to find the minimum size that achieves the desired performance.
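Fitts' Law formalizes the trade-off described above: movement time grows with an index of difficulty that combines the distance to a target and its width. The functions below use the standard Shannon formulation; the regression constants are illustrative assumptions, standing in for values that would be fitted from user test data.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits.
    Farther or smaller targets yield a higher index."""
    return math.log2(distance / width + 1.0)

def predicted_movement_time(distance, width, a=0.2, b=0.15):
    """Predicted target-acquisition time (s): MT = a + b * ID, where
    a and b are hypothetical constants fitted per device and user group."""
    return a + b * index_of_difficulty(distance, width)
```

Fitting a and b separately for each age group turns raw test timings into the kind of minimum-icon-size guideline the article describes.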
Since motor skills and manual dexterity vary by age, we tested users in three different age groups: children 5-11 years, whose motor skills are still developing; adults 65-79 years, whose manual dexterity may be degraded by aging; and adults 18-60 years, who are not subject to age-related limitations. Using data from these tests, we were able to characterize pointing performance for different target sizes and age groups, and derive best practice guidelines appropriate to the pointing technology.
Recommendations and Guidelines
We encountered many interesting and challenging problems as we designed a new user interface for the TV. There is far more to designing for the television, and to motion-based input devices, than can be included in this article. But the following are a few recommendations:
Negar Moshiri is a senior consultant at NavigationArts, where she manages UX strategy and design engagements. She is also the former VP of user experience and application design at Hillcrest Labs, where she worked on next generation user interfaces, including motion controlled UIs and visualization methods for entertainment applications.
This article was originally printed in User Experience Magazine, Volume 11, Issue 1, 2012.
© Usability Professionals' Association
Contact UPA at http://www.usabilityprofessionals.org/about_upa/contact_upa.html