
GNOME Accessibility Team

/!\ The following content is being kept here for historical preservation only. The information contained herein may or may not be accurate. It is definitely not being maintained.

Designing for Disabilities

Arguments typically leveled against designing for users with disabilities include the claims that costs are too high, and the benefits serve too small a market. These arguments should sound familiar to HCI practitioners, who have historically faced initial resistance or even opposition to the introduction of HCI processes into product development. Just as organizational understanding of the design process must be changed for HCI to be accepted, so too must the standard HCI conceptualization of "the user" change to recognize the needs of people with disabilities.

Ranges of User Capabilities

The traditional view of people "having a disability" or "not having a disability" is overly simplistic. All users have a range of capabilities that varies across many dimensions depending on the user and his or her life stage, task, and environment. As more of the population reaches their mid-40s, there are increasing numbers of users peering through bifocals at their screens. A nontrivial portion of the population experiences some degree of hearing loss, and may not always notice software alert sounds. As we age, more of us will develop age-related disabilities -- 25% by age 55, jumping to 50% by age 65.

In addition to normal consequences of aging, people may experience sudden temporary or permanent changes in capabilities at any time in their lives. If a computer user falls and breaks her wrist, she will spend several weeks or more with much the same keyboard capabilities as many people with spinal cord injuries or missing limbs. Every user's ability to interact with systems varies over short as well as long periods of time. A sprained wrist can restrict use of a mouse. A late night working or a lost contact lens can transform small fonts into a suddenly challenging experience. Any user who does not currently have a disability could someday have a stroke, car accident, or other event resulting in a temporary or permanent disability.

In fact, a significant number of user requirements for people with disabilities apply to almost any user, given the right circumstance or task context. Whether a user's hand is broken, painful due to repetitive strain injury, or permanently paralyzed, there are similar needs. Whether someone is unable to look at a screen because he is driving, or cannot see a screen because he is blind, the user requirements have much in common. Whether a user does not hear because she is talking to somebody on the phone, paying attention to her task, working in a noisy environment, or happens to be deaf is less important than the fact that users in these contexts need alternate sources of information. As McMillan (1992) observed, "From the point of view of a computer, all human users are handicapped" (p. 144).

Universal Design

In recent years, as the notion of accessibility has been broadened to encompass much more than design for people with disabilities, the concept of "universal design" has gained visibility. Traditional design has focused on filling the needs of the "average" person, with the assumption that design for the average provides for the needs of most. The universal design argument is that designing for the "average" is by definition exclusionary, because the "average" user is a fictitious construct.

Attempts to design for this fictitious "average" user may not account for effects of multiple individual differences. Tognazzini related an anecdote illustrating the pitfalls of ignoring such overlapping differences:

Several years ago, the Air Force carried out a little test to find out how many cadets could fit into what were statistically the average-size clothes. They assembled 680 cadets in a courtyard and slowly called off the average sizes--plus or minus one standard deviation--of various items, such as shoes, pants, and shirts. Any cadet that was not in the average range for a given item was asked to leave the courtyard. By the time they finished with the fifth item, there were only two cadets left; by the sixth, all but one had been eliminated.

The Universal Design philosophy emerges from a recognition of the idea central to this story -- that there is no average user. Universal designs target the broadest possible range of user capabilities. Examples of products that embody this theme include automatic doors, remote control thermostats, and velcro. Using no assistive technology, people who were previously unable to open a door, operate a thermostat, or tie their shoes are able to perform these tasks, whereas "the rest of us" find these tasks easier as well. Proponents of universal design do not assume that all users will be able to use all designs, but instead argue that by redefining our definition of the user, a much wider range of users can be accommodated without significant extra effort.

Watch out for that Ramp

Universal Design is a worthy goal, but it is important to acknowledge that there are complex customization-related HCI issues that must be resolved before it can be achieved with computers. In discussing user interface design, Lewis and Rieman wrote, "Experience shows that many ideas that are supposed to be good for everybody aren't good for anybody." We agree that in human-computer interaction, as in much of life, what is "good for you" is not always "good for me."

An example of this principle in action was illustrated to us by a colleague who caught her foot at the base of a wheelchair ramp and tripped. The resulting fall produced injuries that included a sprained wrist and numbness in one hand. The injuries could easily have been more severe. The irony of a person acquiring a temporary or perhaps permanent disability because of an artifact designed to help people with disabilities strikes us as an appropriate alert that there may be stumbling blocks on the path to Universal Design.

One computer-related stumbling block is apparent in considering a simplified scenario of a public information kiosk. If we assume blind users must have access, then it becomes important to provide voice and sound output. There may be a tone when a control has been activated, and voice output to provide information about graphics and video displayed on screen. If a deaf user steps up to the kiosk, she will need visual notifications such as closed captions and visual alerts as alternatives to voice and sound alerts. If a user with no significant disability steps up to the kiosk, how will it interact with her? Surely she will not wish to deal with a cacophony of voice descriptions, closed captions, beeping dialogs and flashing screens?

Environments are needed that allow users to tailor system input and output modalities to their capabilities and preferences. Recent research has suggested that information can be represented in a form abstracted from the particulars of its presentation. The technical solution of providing multiple redundant interface input and output mechanisms is not, in itself, sufficient to resolve conflicting user needs. In the absence of any means for intelligently designing and customizing their use, highly multimodal interfaces could lead to as many usability problems as they resolve, causing some users to trip over features designed to help other users. Determining how users will interact with such systems is a challenging HCI issue.
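The kind of modality tailoring described above can be sketched in a few lines. This is a hypothetical model, not any particular toolkit's API: the profile fields, renderer strings, and example profiles are all illustrative assumptions. The point is that one abstract alert is rendered through whichever output channels the user has enabled, rather than through all of them at once.

```python
# Hypothetical sketch: one abstract alert, rendered only through the
# modalities a user's profile enables. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class UserProfile:
    """Modality preferences chosen by the user, not assumed by the design."""
    speech: bool = False        # speak messages aloud
    captions: bool = False      # show text captions
    sound: bool = True          # play non-speech audio cues
    visual_flash: bool = False  # flash the screen on alerts

def render_alert(message: str, profile: UserProfile) -> list[str]:
    """Return the presentations to emit for one abstract alert."""
    outputs = []
    if profile.speech:
        outputs.append(f"speak: {message}")
    if profile.captions:
        outputs.append(f"caption: {message}")
    if profile.sound:
        outputs.append("play: alert-tone")
    if profile.visual_flash:
        outputs.append("flash-screen")
    return outputs

# A deaf user gets visual output only; a blind user gets audio only,
# so neither faces the kiosk "cacophony" described above.
deaf_user = UserProfile(speech=False, captions=True, sound=False, visual_flash=True)
blind_user = UserProfile(speech=True, captions=False, sound=True)
```

Under this model, adding a new modality means adding a renderer and a profile field, while the abstract message itself stays unchanged.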

DISABILITY SPECIFIC DESIGN GUIDELINES

In reading this section it is important to remember that as with all users, the needs of users with disabilities vary significantly from person to person. Many users with disabilities do not use assistive technologies, but can benefit from small design changes in the user interface. Other users have significant investments in assistive technologies, but they too can benefit from software that better responds to their interaction needs.

Physical Disabilities

Many users with physical disabilities use computer systems without add-on assistive technologies, and these users can especially benefit from small changes in interface accessibility. Others use assistive technologies to aid their interactions. Common hardware add-ons include alternative pointing devices such as head-tracking systems and joysticks. The MouseKeys keyboard enhancement, available for MS Windows, Macintosh, and X Window System-based workstations, allows users to move the mouse pointer by pressing keys on the numeric keypad, with other keys substituting for mouse button presses. Because system-level alternatives like these are available, applications need not provide mouse substitutes of their own.

It is important, however, that applications provide keyboard access to controls, features, and information in environments that have keyboard navigation. Comprehensive keyboard access helps users who cannot use a mouse. Many environments allow users to use the tab and arrow keys to navigate among controls in a window, the space bar and enter to activate controls, and key combinations to move focus across windows. In some cases, extra engineering may be required to ensure that these features work in all areas of an interface.
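The MouseKeys behavior described above can be modeled simply: the eight keypad digits surrounding 5 map to the eight pointer directions, matching the keypad's physical layout. The key names and step size below are illustrative assumptions, not the actual implementation in any windowing system.

```python
# Illustrative model of MouseKeys: numeric-keypad keys move the pointer
# in eight directions mirroring the keypad layout. Not a real implementation.

KEYPAD_DELTAS = {
    "KP_7": (-1, -1), "KP_8": (0, -1), "KP_9": (1, -1),
    "KP_4": (-1,  0),                  "KP_6": (1,  0),
    "KP_1": (-1,  1), "KP_2": (0,  1), "KP_3": (1,  1),
}

def move_pointer(pos, key, step=5):
    """Return the new (x, y) pointer position after one MouseKeys keypress.

    Unrecognized keys leave the pointer where it is. The y axis grows
    downward, as in typical screen coordinates.
    """
    dx, dy = KEYPAD_DELTAS.get(key, (0, 0))
    x, y = pos
    return (x + dx * step, y + dy * step)
```

Real implementations also add acceleration as a key is held, but the direction mapping is the core of the feature.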

In addition to keyboard navigation, keyboard accelerators and mnemonics are helpful for users with physical disabilities (as well as for blind and low-vision users). Whenever practical, commonly used actions and application dialogs should be accessible through buttons or keyboard accelerators. Unfortunately, few of the standard accelerator sequences were designed with disabilities in mind, and many key combinations are difficult for users with limited dexterity (e.g., holding down Alt-Shift-Tab in Motif to change to the previous window). Nonetheless, key mappings consistent with the guidelines for the local application environment not only speed use of applications for users with movement difficulties, but also increase the effectiveness of alternate input technologies such as speech recognition. Assistive technologies often allow users to define macro sequences to accelerate common tasks. The more keyboard access an application provides, the better users can customize assistive technology to work with that application.
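One way to audit an application's accelerators for difficult chords is a simple heuristic like the following sketch. The two-key threshold and the "Mod+Mod+Key" string format are assumptions chosen for illustration, not a standard rule.

```python
# Hypothetical heuristic: flag accelerators that require holding many keys
# simultaneously (e.g. Alt+Shift+Tab), which can be difficult for users
# with limited dexterity or one-handed typists.

def chord_size(accelerator: str) -> int:
    """Number of keys held at once in an accelerator like 'Alt+Shift+Tab'."""
    return len(accelerator.split("+"))

def hard_to_press(accelerator: str, max_keys: int = 2) -> bool:
    """True if the chord needs more simultaneous keys than max_keys."""
    return chord_size(accelerator) > max_keys

def audit(accelerators):
    """Return the accelerators in a list that exceed the chord threshold."""
    return [a for a in accelerators if hard_to_press(a)]
```

Note that system features such as StickyKeys let users press chords one key at a time, which is another reason not to conflict with the environment's keyboard accessibility features.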

Repetitive Strain Injuries (RSI)

Frequently repeated keyboard tasks should not require excessive reach or be nested deep in a menu hierarchy. The needs of users already having symptoms of RSI overlap significantly with the needs of users with other types of physical disabilities. Assistive technologies such as alternate pointing devices, predictive dictionaries, and speech recognition can benefit these users by saving them keystrokes, reducing or eliminating use of the mouse, and allowing different methods of interacting with the system.

Low Vision

Users with low vision have a wide variety of visual capabilities. Estimates suggest that there are approximately 9-10 million people with low vision. For the purposes of this discussion, consider a person with low vision to be someone who can only read print that is very large, magnified, or held very close.

The common theme for low-vision users is that it is challenging to read what is on the screen. All fonts, including those in text panes, menus, labels, and information messages, should be easily configurable by users; there is no way to anticipate how large is large enough. The larger the fonts can be scaled, the more likely it is that users with low vision will be able to use software without additional magnification software or hardware. Although many users employ screen-magnification hardware or software to enlarge their view, performance and image quality are improved if larger font sizes are available prior to magnification.
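In practice, making fonts configurable mostly means never hard-coding point sizes. A minimal sketch, in which the settings dictionary and role names are hypothetical stand-ins for whatever preference mechanism the environment actually provides:

```python
# Illustrative sketch: derive every font size from a user-controlled scale
# factor instead of hard-coding sizes at each call site. The settings dict
# stands in for a real preference store.

DEFAULT_SIZES = {"body": 10, "menu": 10, "heading": 14}  # base sizes in points

def resolved_size(role: str, settings: dict) -> int:
    """Base size for a text role, scaled by the user's preference."""
    scale = settings.get("font-scale", 1.0)  # 1.0 = no magnification
    return round(DEFAULT_SIZES[role] * scale)
```

Because every size flows through one scale factor, a low-vision user can enlarge the whole interface consistently rather than fighting individual hard-coded fonts.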

A related problem for users with low vision is their limited field of view. Because they use large fonts or magnify the screen through hardware or software, a smaller amount of information is visible at one time. A limited field of view means that these users easily lose context. Events in an interface outside of their field of view may go unnoticed. These limitations in field of view imply that physical proximity of actions and consequences is especially important to users with low vision. In addition, providing redundant audio cues (or the option of audio) can notify users about new information or state changes.

Interpreting information that depends on color (e.g., red = stop, green = go) can be difficult for people with visual impairments. A significant number of people with low vision are also unable to distinguish among some or all colors, and a significant portion of any population is "color blind". For these reasons, color should never be used as the only source of information -- color should provide information that is redundant to text, textures, symbols, and other cues.

Some combinations of background and text colors can result in text that is difficult to read for users with visual impairments. Again, the key is to provide both redundancy and choice. Users should also be given the ability to override default colors, so they can choose the colors that work best for them.
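One way to evaluate whether a foreground/background pair is readable is the relative-luminance contrast ratio later standardized in WCAG 2. That formula postdates this document and is included here only as a modern illustration of the problem the paragraph describes.

```python
# Contrast ratio between two sRGB colors, per the WCAG 2 formula
# (a later standard, used here as an illustrative check).

def _channel(c8):
    """Linearize one 8-bit sRGB channel."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    """Relative luminance of an (r, g, b) tuple of 0-255 values."""
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Ratio from 1.0 (identical) to 21.0 (black on white)."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

Whatever the default palette scores, the document's point stands: users must still be able to override the defaults, since no single pair of colors works for everyone.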

Blindness

There is no clear demarcation between low vision and true blindness, but for our purposes, a blind person can be considered to be anybody who does not use a visual display at all. These are users who read braille displays or listen to speech output to get information from their systems.

Screen reader software provides access to graphical user interfaces by providing navigation as well as a braille display or speech synthesized reading of controls, text, and icons. The blind user typically uses tab and arrow controls to move through menus, buttons, icons, text areas, and other parts of the graphic interface. As the input focus moves, the screen reader provides braille, speech, or non-speech audio feedback to indicate the user's position. For example, when focus moves to a button, the user might hear the words "button -- Search", or when focus moves to a text input region, the user might hear a typewriter sound. Some screen readers provide this kind of information only in audio form, while others provide a braille display (a series of pins that raise and lower dynamically to form a row of braille characters).

Blind users have screen-reading software that can read the text contents of buttons, menus, and other control areas, but screen readers cannot read the contents of an icon or image; they can only read the descriptive label or accessible name associated with it. Meaningful names should therefore be provided for user interface objects in code. Meaningful names allow screen-reading software to provide useful information to users with visual impairments: rather than naming an eraser graphic "widget5", for example, the code should call it "eraser" or some other descriptive name that users will understand when it is spoken by a screen reader.

Without such descriptive information, blind or low vision users may find it difficult or impossible to interpret unlabeled, graphically labeled, or custom interface objects. Providing descriptive information may provide the only means for access in these cases. As an added selling point to developers, meaningful widget names make for code that is easier to document and debug.
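The fallback a screen reader depends on can be sketched as follows. The widget record, field names, and role strings are hypothetical stand-ins for a real toolkit's accessibility objects; the logic is what matters: speak the explicit accessible name if one exists, fall back to visible label text, and flag widgets that expose neither.

```python
# Hypothetical sketch of accessible-name fallback: a screen reader can only
# announce what the code exposes, so widgets without a name or label are
# effectively opaque to blind users.

def spoken_name(widget: dict) -> str:
    """What a screen reader could announce for a widget record."""
    name = widget.get("accessible_name") or widget.get("label")
    if name:
        return f"{widget['role']} -- {name}"
    # Nothing to speak: this widget is inaccessible as coded.
    return f"unlabeled {widget['role']}"
```

Running an interface's widget tree through a check like this during development catches the "widget5" problem before a blind user ever encounters it.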

Hearing Disabilities

People with hearing disabilities either cannot detect sound or have difficulty distinguishing audio output from typical background noise. Interfaces should not depend on the assumption that users can hear an auditory notice. In addition to users who are deaf, users sitting in airplanes, in noisy offices, or in public places where sound must be turned off also need visual notification. Additionally, some users can only hear audible cues at certain frequencies or volumes, so the volume and frequency of audio feedback should be easily configurable by the user.

Sounds unaccompanied by visual notification, such as a beep indicating that a print job is complete, are of no value to users with hearing impairments or others who are not using sound. While such sounds can be valuable, designs should not assume that sounds will be heard. Sound should be redundant to other sources of information. Visual notices can include changing an icon, posting a message in an information area, or providing a message window as appropriate. If visual notification does not make sense as a redundant or default behavior, then it can be provided as an option.

Other issues to consider include the fact that many people who are born deaf learn American Sign Language as their first language and English as their second. These users will therefore have many of the same problems with text information as any other user for whom English is a second language, making simple and clear labeling especially important.

TABLE 5. Design Guidelines

| Design Guideline | Physical | RSI | Low Vision | Blind | Hearing |
|------------------|----------|-----|------------|-------|---------|
| Provide keyboard access to all application features | X | X | X | X | |
| Use a logical tab order (left to right, top to bottom, or as appropriate for locale) | X | | | X | |
| Follow key mapping guidelines for the local environment | X | X | X | X | |
| Avoid conflicts with keyboard accessibility features (see Table 6) | X | X | | | |
| Where possible, provide more than one method to perform keyboard tasks | X | X | | | |
| Where possible, provide both keyboard and mouse access to functions | X | X | X | X | |
| Avoid requiring long reaches on frequently performed keyboard operations for people using one hand | X | X | | | |
| Avoid requiring repetitive use of chorded keypresses | X | X | | | |
| Avoid placing frequently used functions deep in a menu structure | X | X | X | X | |
| Do not hard code application colors | | | X | | |
| Do not hard code graphic attributes such as line, border, and shadow thickness | | | X | | |
| Do not hard code font sizes and styles | | | X | | |
| Provide descriptive names for all interface components and any object using graphics instead of text (e.g., palette item or icon) | | | | X | |
| Do not design interactions to depend upon the assumption that a user will hear audio information | | | | | X |
| Provide visual information that is redundant with audible information | | | | | X |
| Allow users to configure frequency and volume of audible cues | | | X | X | X |

Accessibility/Documentation/GNOME2/Designing For Disabilities (last edited 2011-07-21 18:55:22 by JoanmarieDiggs)