2003 Conference Proceedings



ACCESSIBILITY AND UBIQUITOUS COMPUTING

Presenter
Marja-Riitta Koivunen,
World Wide Web Consortium
Web Accessibility Initiative,
MIT/LCS, 200 Technology Sq.,
Cambridge, MA 02139, U.S.A.
Email: marja@w3.org

Email: icdri@icdri.org">icdri@icdri.org

1. Introduction

Ubiquitous computing makes computational services available in our environment as seamlessly and transparently as possible. The services are delivered through different devices and modalities while we go about our normal activities in spaces related to work, hobbies, and general living. The devices range from portable or specialized devices to intelligent rooms with permanently placed generic devices, such as cameras, microphones, and screens of different sizes, embedded in the room as part of the architecture. A step further still are cave-like environments where the physical space is organized virtually around the user.

The aim of ubiquitous computing environments is to let users communicate by their natural means and abilities, with several senses and modalities working as a whole. For instance, in the "Put-That-There" experiment [Bolt80] the computer knew from the spoken commands that the activity was "put", while the pointing gestures told what was being put and where it should be put. Surrogates for distant users [Sellen92, Greenberg2000] can be positioned around a meeting table so that people use natural cues, such as turning their heads towards, and looking into the eyes of, the participants they are talking to.

Ubiquitous computing is helpful for many users. However, it also needs to be accessible to participants who may have permanent or temporary disabilities, or who use special devices or low bandwidth. For instance, a user who cannot see, or a user who cannot move his or her hands, needs alternative means to use a "Put-That-There" kind of user interface. The same is true for a user who has broken an arm, or who has a sore throat and cannot speak the commands. We also need to offer means for users with visual disabilities to retrieve information from a drawing drafted on a whiteboard in an intelligent meeting room, and to experiment with what nonvisual information the surrogates should offer.

It is a challenge to take the accessibility viewpoint into consideration from the beginning when designing ubiquitous computing environments. Multimodal user interfaces need alternatives that can be understood without the audio, visual, or tactile channels, and alternative ways to produce information without them. The end result is an even more flexible interface that serves all users better. For instance, distant users who do not share the intelligent environment may often benefit from these alternatives.

This paper discusses technologies used for sharing information and ideas in ubiquitous computing environments, and how these technologies could take accessibility into account either on their own or in combination with other technologies. The main focus is on environments with at least some physical artifacts.

2. Technologies for sharing material and context

2.1 Shared documents

The Web is a natural way to share documents and comments ubiquitously. A Web user may have the same information available almost anytime and anywhere through a broad set of devices. If the Web Content Accessibility Guidelines 1.0 (WCAG) [WCAG99] are followed, the information is also accessible and available in alternative ways. Other WAI guidelines provide advice for accessible user agents and authoring tools. With location information, such as GPS, it is possible to provide users with environment-dependent information. For instance, a user walking towards a bus stop can ask for information about approaching buses, or a user sitting in a bus can receive descriptions of the objects along the route.

A user telecommuting to a meeting may easily see the same material as the other people in the room if it is published on the Web. In the room itself the information can be presented on the wall to provide a common context for the participants. When the page is changed on the wall, that information should also be available to remote Web clients so that they can change the page automatically or alert the user. It is possible to include this information in the meeting records, so that users can search the audio for what was discussed while a certain page was presented. Local participants may also follow the pages at their own pace on their own computers if they want to check something that is not on the wall.

Often a portion of a document on a screen is pointed to or highlighted while talking about it. This location can be stored by using an XPointer, an SVG outline on an image, or some other addressing scheme. With XPointer, it is important to keep a copy of the document version as it appeared during the discussion; otherwise, following the records later against updated documents will be confusing.
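As a sketch, a meeting record might attach a discussion to an XPointer fragment like the following (the document URI and element ids here are invented for illustration):

    http://example.org/minutes/design-overview.html#xpointer(id("module2")/p[3])

The xpointer() part addresses the third paragraph inside the element with id "module2"; the record should resolve it against the document version that was actually displayed.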

2.2 Shared drawings

Images and drawings help participants to better understand the concepts being discussed and their structure. However, it is difficult to create and understand drawings without seeing them. In addition, physical disabilities may create obstacles to creating the images. Some technologies can help to overcome these problems.

IC2D [Hesham2001, Hesham2000] is a drawing tool that uses a 3x3 recursive grid to help nonvisual users create and understand simple drawings in two-dimensional, and even three-dimensional, space. The designer can easily label the parts and create hierarchical structures that explain the image at different abstraction levels.

Assist [Alvarado2002] is an intelligent drawing tool that can be trained to recognize shapes from drawing gestures in physics and other domains. Assist can label the recognized parts generically and present them in a finished form. Additional labeling may be needed for accessibility, to attach the purpose of each part in finer detail.

Scalable Vector Graphics (SVG) [SVG2001] is a mark-up language for scalable vector images. SVG offers applications the means to structure, group, and label image components [Koivunen2001]. Additional metadata can provide more semantics explaining the relations of the parts of the image. The user interface of an SVG application affects the accessibility of the images it produces.
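As a minimal sketch of this kind of labeling (the drawing and its ids are invented for illustration), an SVG fragment can group related shapes and give both the image and each group a short title and a longer description:

    <svg xmlns="http://www.w3.org/2000/svg" width="300" height="200">
      <title>Bus stop sketch</title>
      <desc>A bus stop shelter next to the road, drawn during the meeting.</desc>
      <g id="shelter">
        <title>Shelter</title>
        <desc>The shelter: a roof resting on two posts.</desc>
        <rect x="40" y="60" width="120" height="10"/>   <!-- roof -->
        <rect x="50" y="70" width="8" height="60"/>     <!-- left post -->
        <rect x="140" y="70" width="8" height="60"/>    <!-- right post -->
      </g>
      <g id="road">
        <title>Road</title>
        <desc>The road passing the shelter.</desc>
        <line x1="0" y1="150" x2="300" y2="150" stroke="black"/>
      </g>
    </svg>

A nonvisual user agent can then present the title and desc elements at different abstraction levels instead of, or in addition to, the rendered shapes.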

Users with physical disabilities may not be able to draw with a mouse or a pen. With a good user interface, however, they may use drawing commands instead, for instance "go left 50, up 60", "draw a circle here", or "increase the circle radius by 2", as sketched below.
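As a hypothetical sketch, assuming the tool keeps a current cursor position and emits SVG, the example commands above might map to markup as follows (the coordinates and the default radius are invented):

    <!-- "go left 50, up 60": the cursor moves from (200,140) to (150,80); no markup is produced yet -->
    <!-- "draw a circle here": a circle at the cursor position, with a default radius of 10 -->
    <circle id="c1" cx="150" cy="80" r="10"/>
    <!-- "increase the circle radius by 2": the same element is updated -->
    <circle id="c1" cx="150" cy="80" r="12"/>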

2.3 Shared comments, annotations and minutes

Comments are often shared by editing them into Web documents or by sending them to discussion lists and describing their location in words. Shared Web annotations make this more transparent and let the users see the comments in the document context. Annotea is a framework for creating shared Web annotation systems [Kahan2001]. The annotation framework may also be extended to bookmark interesting places in documents and to search for those places later.
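As a rough sketch of such an annotation (the URIs, names, and dates here are invented; see [Kahan2001] for the actual schema), Annotea stores an annotation as RDF metadata that points both to the annotated document and to a separate body resource holding the comment text:

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:a="http://www.w3.org/2000/10/annotation-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/">
      <a:Annotation>
        <!-- the document being annotated -->
        <a:annotates rdf:resource="http://example.org/design/overview.html"/>
        <!-- the exact place in the document, addressed with an XPointer -->
        <a:context>http://example.org/design/overview.html#xpointer(id("module2"))</a:context>
        <dc:creator>A. Reviewer</dc:creator>
        <a:created>2003-03-20T14:00Z</a:created>
        <!-- the comment text lives in a separate body resource -->
        <a:body rdf:resource="http://annotations.example.org/bodies/17"/>
      </a:Annotation>
    </rdf:RDF>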

Real-time meetings are important forums in which to share comments about a design. Minutes of a meeting can be recorded in several modalities, such as video, audio, and text, for later information searching [Hammond2002]. Comments can easily be stored and searched on IRC or a textual chat, and several bots, e.g. [Zakim, Chump], support storing comments with some semantics. If locations in the document or image, e.g. a module in the design, are stored in addition to the voice discussions and video, it is easy to search later for discussions related to a specific location or module in the design.

Textual comments and mark-up can easily be converted to speech or braille. Recorded speech and video are more difficult to convert to text automatically, so other alternatives should also be made available. Key points of a meeting can be marked with keywords to support searching and accessibility.

3. Technologies for sharing presence and location

Telecommuting users need to feel that they are part of a meeting or a working group. Transparent information about what is going on in a meeting room or in a virtual collaborative space helps to create that feeling. Users can see who is available for discussions, and when, or exactly where the other users are located, so that advice can be tailored to that location.

3.1 Presence in the background

Sometimes users need just some background awareness of other people in the group, and there are several experiments with tools supporting this concept. Portholes [Dourish92, Girgensohn99] shows images of the participants in a collaborating group at a selected frequency, e.g. every 5 minutes. Movements in the images can give cues that someone is actively working in the room. People who cannot see the images can still benefit from information about who seems to be active; this information can be transmitted through many senses, for example as subtle background sound cues tailored for each participant. Silence then indicates that the shared collaborative space is empty.

In an instant messaging approach, users themselves state whether they are active or not. This is more informative but less transparent: users need to tell the system what their state is, and others need to monitor the messages. Participants may let the system know their state through messages on an IRC channel or SMS-type messages rather than through snapshot images. This can all be organized in the working environment so that activity is detected automatically, e.g. when a certain device is used.

3.2 Presence as surrogates

People participating from a distance can be presented in the surrounding physical space as physical surrogates [Buxton97]. These surrogates can be located anywhere: around a table, in a visitor chair near a user's desk, at the door, sitting in a car or on a bicycle, etc. If the user's space is virtual, remote participants may also be presented as virtual avatars.

Users who have visual disabilities cannot see the surrogates, but can benefit from audio cues. Sound coming from the direction of a surrogate may indicate who is talking and where they are located. When someone else is speaking, it might be good to still get faint noise cues telling that a surrogate person is present, similar to how a blind person knows where the people around him or her are. A deaf person may need a close-up view of the surrogate's mouth on his or her own device, and maybe an additional window for signing services.

A person appearing through a door surrogate should be presented in a similar way to a person peeking into a room from the door, including the approaching sound of steps. At the door it is possible to check whether the person in the room is busy. Reaching the same conclusion without a visual channel is much more challenging. It is possible to analyze the video to see whether the person is alone, or how many visitors are seated around a meeting table. It is also possible to identify some people and check whether they belong to a known group. There may be other information available to support the conclusion, for instance on a shared calendar. The final information naturally comes from the owner of the room, and his or her reaction to the disturbance.

For a blind user, surrogates may be presented with voice cues, such as announcing names as steps approach the virtual door. To prevent distractions, the cues need to be designed carefully for different situations; e.g. in important meetings no cues are presented unless the request has high priority. In informal meetings it is possible to use short cues, or maybe even tunes that can also be used on a cell phone. The cues may be directed to earphones so as not to distract others. It is also possible to use a small device vibrating in a pocket to indicate that someone is approaching, perhaps even with different vibration sequences for different people or purposes.

3.3 Location of users

Sometimes it is helpful to know exactly where the participants are in their environment [Lin2002]. For instance, it is easier to give users better directions or information related to their location. Location information may let you know who is moving to the whiteboard or sitting opposite you, even if you cannot see them. It might also help users control distant machines. In a virtual 3D environment, such as a cave, where everything is presented visually, location information is often needed to translate the locations of people and objects into nonvisual cues.

4. Conclusions

Ubiquitous computing interfaces are demanding for designers and developers, but they also make life easier and more natural for many users. While it is even more challenging to think about accessibility and alternative ways to interact in ubiquitous environments, it is also quite plausible. This paper presented some existing technologies used in ubiquitous computing interfaces and discussed how they could better support accessibility. Accessibility requirements add restrictions, but at the same time they help make the user interfaces more flexible for many purposes.

References

[Alvarado2002] Christine Alvarado and Randall Davis. A Framework for Multi-Domain Sketch Recognition. Proceedings of 2002 AAAI Spring Symposium on Sketch Understanding. http://fracas.ai.mit.edu/drg/pubs/alvarado/alvarado-aaai-ss2002.pdf

[Bolt80] Bolt, R.A. “Conversing with Computers,” In R. Baecker, W. Buxton, (Eds.). Readings in Human-Computer Interaction: A Multidisciplinary Approach, California: Morgan-Kaufmann, 1987.

[Buxton97] Buxton, W. (1997). Living in Augmented Reality: Ubiquitous Media and Reactive Environments. In K. Finn, A. Sellen & S. Wilber (Eds.). Video Mediated Communication. Hillsdale, N.J.: Erlbaum, 363-384.

[Chump] annoChump home page. http://www.w3.org/2001/09/chump/

[Dourish92] Dourish, P. and Bly, S. (1992). Supporting Awareness in a Distributed Work Group. Human Factors in Computing Systems, CHI'92 Conference Proceedings (Monterey, CA), New York, pp. 541-547.

[Girgensohn99] Girgensohn, A., Lee, A., and Turner, T. (1999). Being in Public and Reciprocity: Design for Portholes and User Preference. In Human-Computer Interaction INTERACT '99, IOS Press, pp. 458-465, 1999.

[Greenberg2000] Greenberg, S. and Kuzuoka, H. (2000). Using Digital but Physical Surrogates to Mediate Awareness, Communication and Privacy in Media Spaces. Personal Technologies, 4(1), January, Elsevier.

[Hammond2002] Tracy Hammond, Krzysztof Gajos, Randall Davis, and Howard Shrobe. An Agent-Based System For Capturing And Indexing Software Design Meetings. Proceedings of the International Workshop On Agents in Design - WAID'02. Cambridge, MA, August 2002. To appear. http://www.ai.mit.edu/projects/iroom/publications/WAID02.pdf

[Hesham2001] Hesham M. Kamel and James A. Landay, "The Use of Labeling to Communicate Detailed Graphics in a Non-visual Environment." In Extended Abstracts of CHI 2001: Conference on Human Factors in Computing Systems. Seattle, WA, March 31-April 5, 2001. pp. 243-244. http://guir.berkeley.edu/projects/ic2d/pubs/ic2d-chi2001.pdf

[Hesham2000] Hesham M. Kamel and James A. Landay, "A Study of Blind Drawing Practice: Creating Graphical Information Without the Visual Channel." In Proceedings of the Third International ACM SIGCAPH Conference on Assistive Technologies: ASSETS 2000, Washington, DC, November 13-15, 2000, pp. 34-41. http://guir.berkeley.edu/projects/ic2d/pubs/ic2d-assets.pdf

[Kahan2001] Kahan, J., Koivunen, M., Prud'Hommeaux, E., and Swick, R. (2001) Annotea: An Open RDF Infrastructure for Shared Web Annotations. In Proc. of the WWW10 International Conference, Hong Kong, May 2001.

[Koivunen2001] Koivunen, M., and McCathieNevile, C. Accessible Graphics and Multimedia on Web. In Proc. of HFWeb Conference, June 4, 2001, Madison, U.S.A. http://www.optavia.com/hfweb/7thconferenceproceedings.zip/Koivunen.pdf

[Lin2002] Justin Lin, Robert Laddaga, and Hirohisa Naito. Personal Location Agent for Communicating Entities (PLACE). In Proceedings of The Fourth International Symposium on Human Computer Interaction with Mobile Devices. Pisa, Italy, 2002. To appear. http://www.ai.mit.edu/projects/iroom/publications/mobilechi02.pdf

[Sellen92] Sellen, A., Buxton, W. & Arnott, J. (1992). Using spatial cues to improve videoconferencing. Proceedings of CHI '92, 651-652. Videotape in CHI '92 Video Proceedings.

[SVG2001] Jon Ferraiolo (Ed.). Scalable Vector Graphics (SVG) 1.0 Specification. W3C Recommendation, 04 September 2001. http://www.w3.org/TR/2001/REC-SVG-20010904/

[WCAG99] Chisholm, W., Jacobs, I., Vanderheiden, G. (Eds.) (1999). Web Content Accessibility Guidelines 1.0. May 5, 1999. World Wide Web Consortium. http://www.w3.org/TR/WCAG10/

[Zakim] Zakim home page. http://www.w3.org/2001/12/zakim-irc-bot




Reprinted with author(s) permission. Author(s) retain copyright.