2000 Conference Proceedings


The development of UNICORN -- a multi-lingual communicator for people with cross-language communication difficulties

Mamoru Iwabuchi, Norman Alm, Peter Andreasen
Department of Applied Computing, Dundee University, Dundee, Scotland, UK

Kenryu Nakamura
Department of Special Education, Kagawa University, Takamatsu, Kagawa, Japan

A prototype multilingual communication system has been developed for people with speech and language difficulties. The system could also be used by people whose only communication difficulty is that they are unable to speak another language. This study has demonstrated that, in circumstances where the communication follows routine and predictable paths, the effect of instant translation can be provided.


Since the users of augmentative and alternative communication (AAC) are already making use of computer-mediated communication, it may be possible to provide them with a multi-lingual capability as part of the system. This would depend on some sort of automatic translation of the words to be spoken. Performing this on unique text, however, is not yet really feasible.

Machine translation has been developed in response to the need for rapid translation of an increasing amount of text and documentation. The performance of automatic translation is still far from perfect, however, and its output can be unusable without human editing (Varile and Zampolli, 1997). As an illustration of the problems encountered, the following translations were generated by typical machine translation systems. In the first example, the original English text was translated into Spanish. The second was between English and Japanese. All translations were done by machine translation systems without human intervention.

Original text:

Sorry, your wheelchair must be checked in. We will lend you one of ours.

Example 1

After the Spanish translation :

Apesadumbrado, su sillon de ruedas se deben llegar. Le prestaremos uno el nuestros.

Which literally means --

Grieved, your armchair of wheels must arrive itself. We will lend one the ours.

Example 2

After the Japanese translation :

Sumimasen, anata no kuruma-isu wa touchaku o kiroku sare nakutewa naranai.

Wareware wa anata ni wareware no no 1-tsu o kasudearou.

Which literally means --

Sorry, the arrival of your wheelchair must be recorded. We will lend one of ours to you.

Providing a multilingual capability for the user of an augmentative and alternative communication system presents two challenges. One is the often unsatisfactory performance of un-edited machine translation. The other is the requirement that the text to be translated must first be produced in the original language. This remains a time-consuming process, despite the continuing improvements to AAC systems.

Conversation modeling has been shown to be a potentially useful technique for improving the rate and effectiveness of augmentative and alternative communication. Conversational features such as openings, feedback, closings, and story telling have been modeled successfully in AAC systems. Another recent approach has been to provide help for transactional type communication by means of script-based communication systems (Alm et al., 1989; Newell et al., 1991; Waller, 1992; Dye et al., 1997).

It may be that these techniques can be used to provide AAC users with a multi-lingual capability relatively easily. At the same time, the idea may have wider applications, suggesting ways to provide speaking people with a rapid and effective multilingual communication system. The potential cross-fertilization of ideas between work on complex systems for people with disabilities and the wider field of human-computer interaction has been pointed out by Newell, who calls this concept Ordinary and Extra-Ordinary Human-Computer Interaction (Newell, 1990).


The UNICORN (Universal Communicator over Remote Networks) prototype consists of a large store of reusable conversational material and a model of the conversation which allows the system to link the items together into appropriate sequences. It is designed to be operated by both speakers so that they can have smoothly flowing dialogues. The prestored materials are translated manually beforehand, taking into account the language and cultural differences between the speakers. Each user is provided with an interface in their own language, but the system speaks in the other language they have chosen. The output is also multimodal, consisting of speech output together with an interface composed of text, symbols and pictures. Adding icons and pictures to the interface helps users to recognize prestored utterances quickly and also makes the system accessible to non-literate people.

The prototype is intended to be operated by both speakers, who may have different disabilities or mother tongues. While an utterance is being composed, the system is displayed in the speaker's own language; when the utterance is finished, it and all the other components on the screen are automatically translated and spoken in the other language.
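The core of this design can be sketched as a store of pre-translated phrases, each keyed by a language-independent identifier: the speaker selects a phrase shown in their own language, and the system emits the manually prepared translation in the listener's language. The following Java sketch illustrates the idea only; the class, method, and phrase-identifier names are hypothetical and are not taken from the UNICORN source.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of a prestored, pre-translated phrase store.
 * Each utterance has a language-independent key; the speaker
 * composes in their own language, and the system emits the
 * manually translated equivalent in the listener's language.
 * (All names here are illustrative, not from the UNICORN source.)
 */
public class PhraseStore {
    // phraseId -> (languageCode -> text)
    private final Map<String, Map<String, String>> phrases = new HashMap<>();

    public void addPhrase(String id, String lang, String text) {
        phrases.computeIfAbsent(id, k -> new HashMap<>()).put(lang, text);
    }

    /** Text shown on the speaker's own interface. */
    public String display(String id, String speakerLang) {
        return phrases.getOrDefault(id, Map.of()).get(speakerLang);
    }

    /** Text spoken (and shown) in the listener's language. */
    public String speak(String id, String listenerLang) {
        return phrases.getOrDefault(id, Map.of()).get(listenerLang);
    }

    public static void main(String[] args) {
        PhraseStore store = new PhraseStore();
        store.addPhrase("checkin.greeting", "en", "Can I help you?");
        store.addPhrase("checkin.greeting", "ja",
                "Irasshaimase. Nanika otetsudai shimashou ka?");
        // The staff member selects the phrase on an English interface...
        System.out.println(store.display("checkin.greeting", "en"));
        // ...and the system speaks it in Japanese to the traveller.
        System.out.println(store.speak("checkin.greeting", "ja"));
    }
}
```

Because every translation is prepared by hand in advance, lookup is instantaneous and avoids the machine-translation errors illustrated earlier.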

The interface of the prototype was designed to be easy to recognize and efficient in use. Eight different cards are provided on the prototype, which was developed initially for the situation of being at an airport check-in counter. Each card has a tag with a symbol which corresponds to its category so that the user can easily recognize which card is currently chosen. Symbols also accompany words and phrases as a visual prompt. This also makes the system usable by non-readers.

Conversational pragmatics were taken into account for the interface design of the prototype. Pre-stored conversation materials are arranged in accordance with the frequency of use, conceptual grouping and conversation flow.

A screen shot of the same state of the interface in the three different language versions is shown below.


Figure 1 : An example of the UNICORN interface, shown in the three current languages.

At the top is a log of the entire conversation. The staff member, using their own language, has asked if they can help. The other person replies, in their own language, that they have a speech problem. Both speakers are using the CHAT card, which contains general-purpose phrases. Other cards contain phrases and words relevant to the particular situation, in this case checking in at an airport. Notice the mixture of text and symbols to aid quick understanding and to assist people with literacy problems.

The code to produce the prototype was written in Java, which allows it to be run on any operating system without major change. This prototype is composed of two parts, the Java program and the data. Because the function and interface of the program are all created by the information in the data files, the stored conversation materials and the arrangement of the components can be modified simply by editing these data files without any alteration to the program. The intention was for the prototype to be usable both in a face-to-face situation, and for remote communication across the internet.
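The data-driven arrangement described above can be sketched as a loader that parses cards and their phrase entries from plain-text data lines, so that editing the data alone changes the interface. The delimited line format used here ("card|phraseId|lang|text") is an assumption for illustration; the actual UNICORN data format is not documented in this paper.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of a data-driven interface loader: cards and phrase
 * entries are parsed from data lines rather than hard-coded,
 * so the stored conversation materials can be modified without
 * touching the program. The line format is an assumed example.
 */
public class CardLoader {
    public static class Entry {
        public final String card, phraseId, lang, text;
        Entry(String card, String phraseId, String lang, String text) {
            this.card = card; this.phraseId = phraseId;
            this.lang = lang; this.text = text;
        }
    }

    public static List<Entry> parse(List<String> lines) {
        List<Entry> entries = new ArrayList<>();
        for (String line : lines) {
            if (line.isBlank() || line.startsWith("#")) continue; // skip comments
            String[] f = line.split("\\|", 4); // card|phraseId|lang|text
            entries.add(new Entry(f[0], f[1], f[2], f[3]));
        }
        return entries;
    }

    public static void main(String[] args) {
        List<Entry> entries = parse(List.of(
            "# card|phraseId|lang|text",
            "CHAT|chat.sorry|en|Sorry, I have a speech problem.",
            "CHECKIN|checkin.bag|en|I would like to check in this bag."
        ));
        for (Entry e : entries) {
            System.out.println(e.card + ": " + e.text);
        }
    }
}
```

Keeping the program and the data separate in this way is what allows new situations (beyond the airport check-in counter) to be supported simply by writing new data files.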


A comparison between the prototype and a commercial multi-lingual phrase book was performed using cross-language pairs in three languages: English, Japanese, and Spanish. It was concluded that using the prototype helped the participants to produce a more natural and successful conversation, particularly due to an increased use of social signs and the ability to take the initiative in a conversation. Although this first evaluation showed no significant difference between the two methods in terms of conversation efficiency, it is anticipated that improved interface design will increase the ease of use and hence the conversational efficiency of the system.

References
Alm, N., Arnott, J. L., Newell, A. F. (1989). Discourse analysis and pragmatics in the design of a conversation prosthesis. Journal of Medical Engineering and Technology, 13 (1/2), pp.10-12.

Dye, R., Alm, N., Arnott, J.L., Harper, G., Morrison, A.I. (1997). A script-based AAC system for transactional interaction. Natural Language Engineering, 1 (1), pp.1-13, Cambridge: Cambridge University Press.

Foulds, R. (1980). Communication rates of nonspeech expression as a function of manual tasks and linguistic constraints. Proceedings of the International Conference on Rehabilitation Engineering. Washington, DC: RESNA, Association for the Advancement of Rehabilitation Technology. pp. 83-87.

Newell, A. F. (1990). Speech technology: Cross fertilization between research for the disabled and the non-disabled. Proceedings of the First ISAAC Research Symposium in Augmentative and Alternative Communication, Stockholm 1990.

Newell, A.F., Arnott, J.L., Alm, N.A. (1991). The use of models of human conversation patterns within a prosthesis for non-speaking people. Bulletin of the Institute of Mathematics and Its Applications. 27 (12), Dec 1991, pp.225-231.

Varile, G. B., Zampolli, A. (Ed.). (1997). Survey of the State of the Art In Human Language Technology. Cambridge: Cambridge University Press.

Waller, A. (1992). Providing Narratives in an Augmentative Communication System. Ph.D. Thesis, University of Dundee, Dundee, Scotland, U.K.


Reprinted with author(s) permission. Author(s) retain copyright.