2002 Conference Proceedings



CHALLENGES OF ACCESSIBLE WEB DESIGN:
STANDARDS, GUIDELINES, AND USER TESTING

John M. Slatin
Institute for Technology and Learning
University of Texas at Austin
John_slatin@forum.utexas.edu

Introduction

This paper describes the AIR-Austin University Challenge, a Web design contest held at the University of Texas at Austin to build developers' awareness of the needs of Web users with disabilities. The rules of AIR-Austin, an annual community-wide Accessibility Internet Rally, were adapted to suit the conditions of Web development in the complex University environment. Results of the competition may help to identify ways of changing Web design practices to support improved accessibility for people with disabilities.

AIR-Austin

AIR-Austin is an annual event produced by the Austin nonprofit, Knowbility, Inc., to promote awareness of accessibility issues in Austin's high tech community. First organized in 1998, the event has spread beyond Austin to cities such as Dallas/Fort Worth, Denver, and San Francisco.

Teams of professional Web developers are matched with local nonprofit organizations seeking to create a new Web site or redesign an existing one. Staff from the nonprofits participate in a half-day workshop about maintaining a Web presence, while Web developers attend a half-day workshop on accessibility. The curriculum for the accessibility workshop includes material developed by Knowbility as well as lessons based on IBM's Accessibility Checklist. A copy of the judging form is provided.

After developers and nonprofits are matched with one another, they have a few days during which they can meet and prepare for the Rally, but they are not allowed to work on the actual Web site. On the day of the Rally, all teams meet at the St. Edward's University Professional Education Center, where they have from 8:00 AM to 5:00 PM to construct their sites. Judges are present throughout the day to answer questions.

The University Challenge

For the University Challenge, developers were encouraged to enter their own sites into the competition. Some teams were redesigning existing sites to mesh seamlessly with the University's evolving portal architecture. Others were focused on providing a Web-based interface to legacy mainframe applications, and still others were working on sites serving specific groups of faculty or students. The complexity of these tasks was incompatible with the idea of a one-day, make-or-break contest. Instead, developers were told in May that they had until September 15, the Rally date for the community-wide AIR-Austin event, to complete their work.

Judging

Judging began the day after the Rally, and the awards ceremony was held about two weeks later. AIR-Austin and University Challenge sites were judged together, though separate prizes were awarded.

The nine-member panel of judges included Jim Allan of the Texas School for the Blind and Visually Impaired; Phill Jenkins of the IBM Accessibility Center; Jim Thatcher, formerly of IBM and now an independent accessibility consultant; and five University of Texas graduate students: Olin Bjork, Jason Craft, Aimee Kendall, Neelum Wadhwami, and Bill Wolff. All had prior experience in evaluating Web accessibility, including use of the AIR judging form.

Each site underwent an extensive manual review using either the JAWS screen reader or IBM's talking browser, Home Page Reader. CAST's automated accessibility checker, BOBBY, was used as a supplemental tool to speed identification of such problems as images without ALT text or forms without appropriate labels, and to estimate download times.
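
To make the division of labor between manual and automated review concrete, the sketch below shows the kind of rule-based check an automated tool such as BOBBY applies: scanning markup for images that lack ALT text and form controls with no associated label. It is a minimal illustration written for this paper, not BOBBY itself and not part of the AIR judging process; real checkers apply many more rules and still cannot replace manual review.

    # Minimal sketch of an automated accessibility check: flag IMG elements with
    # no ALT text and form controls with no associated LABEL. Illustrative only;
    # not BOBBY, and far less thorough than the manual review described above.
    from html.parser import HTMLParser

    class AccessibilityCheck(HTMLParser):
        def __init__(self):
            super().__init__()
            self.labeled_ids = set()   # ids referenced by <label for="...">
            self.controls = []         # form controls seen while parsing
            self.problems = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "img" and "alt" not in attrs:
                self.problems.append("IMG without ALT text: " + attrs.get("src", "?"))
            elif tag == "label" and "for" in attrs:
                self.labeled_ids.add(attrs["for"])
            elif tag in ("input", "select", "textarea"):
                self.controls.append(attrs)

        def report(self):
            # Label association can only be judged once the whole page is read.
            for control in self.controls:
                if control.get("type") in ("hidden", "submit", "reset", "button"):
                    continue
                if control.get("id") not in self.labeled_ids:
                    self.problems.append("Form control without a label: "
                                         + control.get("name", "?"))
            return self.problems

    page = '<form><input type="text" name="query"></form><img src="logo.gif">'
    checker = AccessibilityCheck()
    checker.feed(page)
    for problem in checker.report():
        print(problem)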

The criteria listed on the AIR Judging Form are grouped under the following headings (a tally of the maximum possible score appears after the list):

* High Impact Accessibility (6 items, 10 points each)
* General Accessibility (11 items, 5 points each)
* Usability (12 items, 3 points each)
* Appropriateness (1 item, holistic scoring, up to 10 points)
* Aesthetics (1 item, holistic scoring, up to 10 points)
* Bonus items (8 items, 1 point each)
* Extraordinary effort (up to 5 points)
* Discretionary deductions for severe problems (requires agreement of 3 judges)
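
Treating the point values above as data, the maximum raw score works out to 184 points before any discretionary deductions. The tally below is an illustration added here, not part of the AIR form itself.

    # Tally of the maximum possible AIR score, using the point values listed
    # above. The dictionary is an illustration of the rubric, not the form.
    rubric = {
        "High Impact Accessibility": (6, 10),   # (items, points per item)
        "General Accessibility":     (11, 5),
        "Usability":                 (12, 3),
        "Appropriateness":           (1, 10),
        "Aesthetics":                (1, 10),
        "Bonus items":               (8, 1),
        "Extraordinary effort":      (1, 5),
    }
    maximum = sum(items * points for items, points in rubric.values())
    print(maximum)   # 184 points, before any discretionary deductions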

Results

Results are both encouraging and disappointing. More than three times as many teams expressed interest in competing in the 2001 University Challenge as had done so the previous year. More than 80% of the participating sites fell into the administrative category, indicating that the administrative computing group had begun to get the message and had taken steps to improve the accessibility of its services. It's worth noting, too, that many developers, again especially in administrative computing, who did not or could not enter their sites into the competition were nonetheless working to enhance accessibility. This group included members of the team developing the interface for the University portal and the group developing the new Information Technology Services site. Meanwhile, the Webmaster's office dedicated a part-time position to accessibility consulting, meaning that any University developer can receive up to four hours' consulting at no charge.

We've learned a couple of things. First, we have to acknowledge that, as a strategy for promoting large-scale change in the practice of Web designers and developers at UT Austin, the University Challenge leaves a lot to be desired. A third of the teams dropped out of the contest, either because they were unable to meet the deadline or because plans had changed.

The total page-count for all the sites participating in this year's competition is less than 1,000, not even a noticeable ripple on the 900,000-page surface of the University Web. Faculty participation was negligible; we're not doing an effective job of persuading faculty to take accessibility seriously.

We have much to learn from the University Challenge, however. Several of the participating teams have agreed to serve as subjects for a study jointly conducted by Kay Lewis and Matt Bronstad in the AccessFirst group at my Institute for Technology and Learning, and Austin Usability, Inc., to investigate the relationship between accessibility for people with disabilities and usability for the general user population. In addition to the sites submitted for competition, we have collected copies of the pages as they were before being redesigned, along with the BOBBY reports and the score sheets completed by the AIR-Austin judges. We will soon begin running user tests on both the "before" and "after" versions of each site, with a test population that includes users with and without visual disabilities. Users will be assigned specific tasks to carry out; their sessions will be recorded in the AccessFirst Design and Usability Studio. The audio/video record will include the subjects talking about their decisions and actions as they try to do what we've asked of them. We'll thus get a measure of accessibility as operationally defined in Section 508: we'll see whether people with disabilities are able to use these Web resources as effectively as people without disabilities, and with any luck we'll be able to correlate those findings with specific features of the site.
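
As a purely hypothetical illustration of how that Section 508-style comparison might be summarized once the sessions are recorded, the sketch below tallies task-completion rates by site version and user group. Every name and figure in it is an invented placeholder; the actual study design and measures belong to the AccessFirst and Austin Usability teams.

    # Hypothetical summary of task-completion data from the planned user tests.
    # Every record below is an invented placeholder, not a study result.
    from collections import defaultdict

    # Each record: (site version, user group, task completed?)
    sessions = [
        ("before", "blind",   False), ("before", "blind",   True),
        ("before", "sighted", True),  ("before", "sighted", True),
        ("after",  "blind",   True),  ("after",  "blind",   True),
        ("after",  "sighted", True),  ("after",  "sighted", False),
    ]

    totals = defaultdict(lambda: [0, 0])   # (version, group) -> [completed, attempted]
    for version, group, completed in sessions:
        totals[(version, group)][0] += int(completed)
        totals[(version, group)][1] += 1

    for (version, group), (done, tried) in sorted(totals.items()):
        print(f"{version:6s} {group:8s} {done}/{tried} tasks completed")

    # Comparable completion rates for users with and without disabilities on the
    # "after" sites would be one operational sign that the redesign improved access.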

Conclusion

Even before we start running these tests, several questions have begun to emerge.

First, accessibility is harder than it looks. Taken together, AIR-Austin and the University Challenge yielded some 35 new or redesigned Web sites whose developers were making a conscious effort at accessibility; they had the criteria in front of them as they worked, and they had access to the judges. Yet many still struggled, and their efforts fell short of the mark. Why?

Second, it isn't easy to define accessibility. There were nine judges this year. Two of our judges are members of the working group that produced the Web Content Accessibility Guidelines, and a third was instrumental in writing the Section 508 federal standards. Each year we have revised the form and refined the criteria to reflect what we've seen in the sites submitted to us and to record decisions we had made by consensus as we sought to resolve unexpected ambiguities. The four newest judges also had experience using the AIR scoring instrument in judging another competition earlier in the year. And yet we found ourselves differing, often sharply, in our assessments of the work we were given, and again found ourselves confronting situations not fully accounted for in the criteria we had developed. At other points, we articulated different judgments as to how a specific criterion should be applied in a particular instance. Is this a lack of clarity, a flaw in the criteria? Or is it something endemic to the situation?

Third, I would anticipate that user testing will reveal even more differences. Observing users (with and without disabilities) in other cases has shown a surprising and quite wonderful variety of behaviors as different people are faced with the same tasks; we will almost certainly find similar differences as we test the University Challenge sites on real people doing real things. We believe that we'll see a measurable relationship between the success those users experience in carrying out the tasks we assign them and the efforts developers have made to solve accessibility problems. But it may be difficult to pin down. It has already become evident that few if any of the developers participating in our study limited themselves to improving the accessibility of existing features as they revamped their sites. On the contrary, they often made sweeping changes in content and overall design as well. In short, we may find it harder to be sure we're comparing apples to apples than we might like.

All of which only serves to highlight the importance of the guidelines and standards documents we have, and to make clear yet again that such documents do their work best when they give rise to new practices. Stay tuned.

REFERENCES

For information about the Accessibility Internet Rally, see http://www.knowbility.org/airevents.html.

For information about the University Challenge, see http://www.utexas.edu/events/air-ut/.

For information about ITAL and the AccessFirst Design and Usability Studio, see http://www.ital.utexas.edu/accessfirst/.




Reprinted with author(s) permission. Author(s) retain copyright.