CDS 485 Computer Applications in Communication Disorders and Sciences
MODULE 4
AUGMENTATIVE AND ALTERNATIVE COMMUNICATION (AAC) DEVICES FOR COMMUNICATIVELY
HANDICAPPED INDIVIDUALS
SECTION III
AAC and the Manipulation of Modalities
I. What is a Modality?
Augmentative and Alternative Communication is a story about Modalities and how whole or sub-modalities may be switched when one becomes hopelessly disabled. But what is a Modality? The word Modality, like the word Spouse, can mean different things depending upon who is using it. To some, the word Spouse means a companion, a friend and an object of affection. To others it may mean a monster, a liar or a jailor. Likewise, many disciplines find a different use for the word Modality.
To the Theologian, the word Modality
is used in Christianity to refer to the structure and organization of the local
church. The Universal Catholic Church is the modality as described in Catholic
Theology.
To the Lawyer Modality refers to the
basis of legal argumentation in United States constitutional law.
To the Musician, it is a subject concerning certain diatonic scales known as musical modes. It's also why I never learned to play the piano!
To the Sociologist, it is a concept
in structural theory.
To the Philosopher it is the
qualification in a proposition that indicates that what is affirmed or denied
is possible, impossible, necessary, contingent and some other things. So much
for these notions, and there are others. But let's look at some that are more relevant to a discussion of AAC.
II. A MODALITY IS A SENSORY SYSTEM
AND MORE.
To the Medical Profession, a Modality may be the faculty through which the external world is apprehended. Hence, it can refer to a sense organ or a specific sensory channel (system), such as vision or hearing. Seeing and hearing, of course, are not just functions of the transducers (i.e., the eyes and ears, which convert stimuli from the environment into analogous patterns of electrochemical impulses in the brain). They involve a complex neurological infrastructure that exists beyond the transducer to organize and interpret the stimuli. This infrastructure can be portrayed adequately for our purpose by Mysak's Model, an "oldie but a goodie," with a few adaptations.
[Figure: Mysak's Model]

A. Transduction -- The first four green boxes in Mysak's Model (green boxes 1-4), which he calls the Receptor Unit, are the Receptive Transducers, such as the eyes, the ears, and the receptors for touch and kinesthetic feedback. These transducers put us in touch with the environment by changing external stimuli into internal electrochemical impulses. At this point, however, there is no meaning involved in the stimulation. Nevertheless, damage to these transducers can imperil cognition, speech and language development, and communication simply by the isolation it creates. Assistive Technologies, like hearing aids, glasses, and structures to facilitate standing or walking, can help to breach this isolation and enable the development of cognition, language and communication.
[Figure: Mysak's Model -- Receptor Unit (green boxes 1-4)]
B. Perception -- The next two boxes beyond the transducers (blue boxes 1 and 2) represent neural networks that organize the inflow of electrochemical impulses from the transducers to achieve meaning quickly and efficiently. This is the process of Perception, and it is represented in Mysak's Model by the Integrator Unit. But why are there two boxes in this unit?

[Figure: Mysak's Model -- Integrator Unit (blue boxes 1 and 2)]
There are two boxes because perception has two processes that contribute to the organization function. The first (blue box 1) represents the contribution of the Innate Genetic Inheritance that creates the specific neural infrastructure we possess because we are human. Our neural structure is different from that of, say, a cat, and we perceive the environment differently than a cat does. The other (blue box 2) refers to the influence that stored past experiences and learning have on the way we organize and interpret incoming stimuli.
Disruption to the former (blue box 1), the innate perceptual networks, can create severe processing problems that greatly reduce our ability to organize and understand what is being transduced. Poor figure-ground discrimination and spatial and temporal confusions are a few of the possible consequences.
A dearth of good experiences, or a plethora of bad experiences or training (which pertains to blue box 2), can create or exacerbate a perceptual problem. A child, for example, who because of a severe motor impairment has had little or no experience touching square, round or rectangular objects may have difficulty later perceiving drawings that include those shapes, such as those found in printed letters. In this case, providing AT early in life, such as specially constructed body supports that free a child's hands to physically explore objects in the environment, is one key to rehabilitation.
C. Memory -- the next red box represents the Storage Unit of Mysak's Model. By all rights, this box should be ten times as large as it is, because this is where humans are the World Masters. Our ability to develop and maintain in memory thousands upon thousands of concepts, symbols, recollections of the past and a plethora of motor patterns is unparalleled on this planet.
Memory, of course, is a bundle of processes, including short-term memory (and all its mechanisms), long-term memory (and all its mechanisms), and the conversion process from one to the other. Included here also are a number of forgetting processes.

[Figure: Mysak's Model -- Storage Unit]
Severe memory disorders due to retardation or brain trauma may inhibit not only
communication but the ability to cope with daily living routines. AT can be
very useful here in providing systems and/or mechanisms to bridge the gap.
Picture schedules generated by a computer, for example, can allow a child or adult with a severe memory deficit to participate in, and even take responsibility for, some daily activities.
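For the curious, here is one way a computer might generate such a picture schedule. This is only a minimal sketch in Python; the activities and image file names are invented for the example, and a real schedule would of course be built around the individual's actual routine and pictures.

```python
# A minimal sketch of a computer-generated picture schedule (illustrative only).
# The activities and image file names below are invented for this example.
schedule = [
    ("8:00",  "Breakfast",   "breakfast.png"),
    ("9:00",  "Brush teeth", "toothbrush.png"),
    ("10:00", "Story time",  "book.png"),
]

# Build a simple web page with one row per activity: the time, the picture, the label.
rows = "\n".join(
    f"<tr><td>{time}</td>"
    f"<td><img src='{image}' alt='{activity}' width='120'></td>"
    f"<td>{activity}</td></tr>"
    for time, activity, image in schedule
)
page = f"<html><body><h1>My Day</h1><table>{rows}</table></body></html>"

with open("schedule.html", "w") as f:
    f.write(page)   # open schedule.html in a browser, then print it for the child
```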
D. Concepts, Language and the Seat of Consciousness -- the next brown box in Mysak's Model is the Governor, which represents the processes of consciousness, decision making and, ultimately, the formulation of an idea to be communicated symbolically. Descartes said, "I think, therefore I am!" This threatens my very existence, so I don't want to go into that too deeply.

[Figure: Mysak's Model -- Governor]
But Language is a process of going from an un-symbolized idea or concept in the Governor (Deep Structure) to a symbolic expression (Surface Structure). This involves the application of phonemic, morphologic, syntactic, semantic and pragmatic rules, which may eventually be expressed through speech, Signing, writing or some other form. A lack of opportunities to socialize early in life, before the emergence of language (i.e., to communicate with others), which is characteristic of the lives of severely motor impaired, autistic, deaf and other disadvantaged children, can in itself retard or forestall the development of these language skills. AT and AAC that enable social interaction, if provided early, can significantly ameliorate this lack of development.
E. Movement -- The Black Box which holds the secret of most motor behaviors, including Speech, is the Mixer in Mysak's Model. It answers a riddle that many of us may have pondered in an idle moment (before, of course, we took a course in Phonetics) -- How do we talk? We do it effortlessly, and copiously (if you are a professor), and yet we have little awareness of the specific movements we make to do it. Indeed, if we did, speech would cease, because the number of movements and the speed and precision required are almost incomprehensible, unless you make a study of it (and why would anyone be crazy enough to do that!!!). Speech movements exceed by far, in timing and number, the finest ballet routines you can imagine.

[Figure: Mysak's Model -- Mixer]
In reality, the Mixer is like a neural "Jukebox" (for the ancient generation) or an iPod (for today's younger students) which stores records (thousands upon thousands) of all the motor behaviors we have learned throughout life. This includes walking, eating, brushing our teeth, tying our shoelaces, dressing, playing a musical instrument, performing sports and, of course, speech, etc., etc., etc. The Mixer, like an old-fashioned jukebox, brings up a motor pattern upon request. If we wish to say something, the message is sent from the Governor to the Mixer, which selects the proper motor pattern and sends it to the appropriate muscles designated to do the work.
If the Mixer fails to work, movement becomes totally uncoordinated and ineffective. This is called Apraxia. It can be mild or so severe that an individual can make no voluntary movements. It can affect an arm (hence no Signing or writing), the vocal mechanism (hence, no speech) and/or the whole body (hence, almost total isolation). To have a simulated experience like this, stand in the middle of the room. Then raise the toe of the right foot (keep the heel down), then raise the heel of the right foot (keep the toes down). This is a simple rocking motion. Then do the same thing with the left foot. Alternate this rocking movement between the right and left feet, and then increase the speed. You may find that this simple motion becomes all confused because the Mixer is not yet trained to handle it. (After this exercise, it is best that you remain seated for three hours to avoid appearing intoxicated as you walk across the room. You can avoid a lot of work too.)
If that demonstration was a little too wild for you, here is another one I learned at my PAGE Club (Professors And Geriatrics Exercise Club), as demonstrated personally in the 3 pictures below by its 95-year-old Class Instructor, Madam Celia.

Cross your hands at the wrists, right on top of left, rotate the wrists clockwise so that the palms face each other, clasp the hands with alternating fingers (Figure 1 above), and then draw them toward you and up (Figure 2 above). When the clasped hands are up, have someone point to a finger which you are then to move (Figure 3 above). You may find it difficult (at least for a short moment) to find the proper motor pattern to move the designated finger. Other fingers may move instead. This is a taste of what it is like to have Apraxia, except that for those who have a real and severe case of verbal apraxia or worse, this is the rule rather than the exception. But AAC devices can fill the gap where the motor patterns have failed to go. Hence, verbal communication through a computer with speech output can make a huge difference in the life of a person with severe verbal apraxia and no speech!
F. Speech -- The EFFECTOR UNIT of Mysak's Model is the Speech Mechanism, which receives the messages from the Mixer. It consists of three parts (represented by three gray boxes):
The Motor, which is the muscle system that serves as the source of energy for speech (e.g., the Diaphragm and others);
the Generator, which is the mechanism for creating the sound for speech (e.g., the Larynx); and
the Modulator, which is a system of resonating air chambers that shape the sound waves from the larynx to make them sound, among other things, human and recognizable as phonemes.

[Figure: Mysak's Model -- Effector Unit (Motor, Generator, Modulator)]
Damage to these mechanisms can disable speech partially or totally. The speech structure may be impaired or missing (as after a laryngectomy or a glossectomy), or the motor function may be compromised (as in ALS or Cerebral Palsy).
The job of the SLP in these cases is to assess the total person and find the
best match in terms of assistive/alternative devices or strategies. This is not
as simple as it may sound because there are many variables that must be
considered. This will be the focus of the next module.
It must be noted that the Effector Unit in Mysak's model relates to only one form of communication: speech. There are, of course, many other forms, which depend mainly on the functional integrity of the first box (the motor). On a linguistic level, there are Sign Language, Braille and Morse Code. On a non-linguistic level, there are the forms of communication we discussed in the previous discussion of Graded and Combinative Nominal and Expressive Communication. These can include gesturing, raising an eyebrow, eye-gaze, body proximity and even coughing, to name a few. Typically, Speech is the dominant form of communication and is supported simultaneously by the other forms. When speech becomes disabled, the other forms may be able to carry an increased role in transmitting a message. A lot depends, of course, on the integrity and capability of the motor system.
G. The Channels of Communication in Mysak's Model relate to speech production. They are the routes in the environment for the propagation of the speech sounds. For the speaker (though not for the listener), there are two channels of propagation.

[Figure: Mysak's Model -- Channels of Communication]
Channel 1 is the air through which sound waves travel. Everyone hears those sounds, including the speaker.
Channel 2 (for the speaker only) is the bone of the skull through which his/her own speech sounds travel. That is why, when we speak, we hear ourselves slightly differently than the rest of humanity does.
The two Channels have a medical significance to Audiologists, who compare them to differentiate a Conductive from a Sensorineural Hearing Loss by measuring and comparing the air- and bone-conduction hearing thresholds: the so-called "Air-Bone Gap."
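As a rough worked example of that comparison, here is a small Python sketch. The threshold values are invented, and the 10 dB cutoff used below is only a common rule of thumb for flagging a conductive component, not a full clinical interpretation.

```python
# Invented audiogram thresholds (in dB HL) at a few test frequencies.
air  = {500: 45, 1000: 50, 2000: 50}   # air-conduction thresholds (Channel 1)
bone = {500: 10, 1000: 15, 2000: 20}   # bone-conduction thresholds (Channel 2)

for freq in sorted(air):
    gap = air[freq] - bone[freq]       # the "Air-Bone Gap" at this frequency
    note = "suggests a conductive component" if gap > 10 else "no significant gap"
    print(f"{freq} Hz: air {air[freq]} dB, bone {bone[freq]} dB, gap {gap} dB -- {note}")
```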
In AAC, where oral speech is not the output source, these channels can have a different meaning. For example, in Computer Technology, a Modality is a path of communication between the human and the computer. Hence, these channels may represent modalities by which a patient may communicate with a computer, such as by touch (direct selection), by a mouse (proportional control), or through a switch in conjunction with a scanning system. This extends the concept of a modality beyond the limits of the body's Sensory System to an AAC device that now becomes part of the communication pathway. In fact, this notion is embodied in the medical use of the word "Modality," which refers to the employment of any therapeutic agent.
The number of different kinds of devices in the world of AAC that are available
to fit into this pathway, of course, is quite large. This provides a plethora
of options for the SLP to consider in planning a program of rehabilitation.
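To make the switch-plus-scanning pathway just described a little more concrete, here is a minimal, hypothetical sketch of single-switch linear scanning in Python. The program "highlights" one message at a time, and the Enter key stands in for a switch closure; a real device would add clinician-adjustable timing, row-column scanning, and auditory cues, but the underlying loop is the same idea.

```python
import threading

# A minimal sketch of single-switch linear scanning (not any real device's code).
# Items are "highlighted" one at a time; a single switch closure -- simulated
# here by pressing Enter -- selects whichever item is highlighted at that moment.
items = ["YES", "NO", "HELP", "DRINK", "MORE"]
scan_delay = 1.5                     # seconds per item; clinicians tune this to the user
switch_pressed = threading.Event()

def watch_switch():
    input()                          # stand-in for the switch; a device reads real hardware
    switch_pressed.set()

threading.Thread(target=watch_switch, daemon=True).start()

index = 0
while not switch_pressed.is_set():
    print(f"--> {items[index]}")     # the highlight; a device might also speak this
                                     # aloud as an auditory cue for users with low vision
    switch_pressed.wait(scan_delay)  # pause, but wake immediately if the switch closes
    if not switch_pressed.is_set():
        index = (index + 1) % len(items)

print(f"Selected: {items[index]}")   # the device would now speak or display the message
```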
H. Feedback -- The four green
Sensor boxes in Mysak's Model are the Receptive Transducers again, used this time
time to monitor the signals produced by the Expressive Transducer. That
includes the sounds that are being produced, the sensations of touch from the
tongue and lips, and kinesthetic feedback from the jaw movements, to mention a
few. This is part of the essential feedback process.

[Figure: Mysak's Model -- Sensor Units]
For the AAC user, however, these transducers can play an even greater role.
The selection of a particular communication device or strategy for a
communicatively handicapped person, for example, may depend on the integrity of
these Sensor Units. Vision, for example, must be assessed to determine its
functionality. Some individuals cannot focus, while others can't see the entire
visual field. For those with poor vision, a device may be modified or it may have
auditory cues available (as in the use of auditory feedback during scanning).
For patients both Deaf and Blind, tactile devices (featuring Braille or Morse
Code or mechanical hands for Touch Signing) may be considered.
I. Repair -- The Comparator
in Mysak's Model (the yellow box) uses the feedback from the Sensor Units to
monitor and repair the movement produced by the Expressive Transducer. The name
of the game is FEEDBACK. It is virtually impossible to learn or maintain any
motor behavior without feedback. Before any movement ever actually takes place,
there is a flurry of activity in the brain.

[Figure: Mysak's Model -- Comparator]
Expectancies of what is to be accomplished by the movement are set up in the Comparator, which is probably a number of locations in the brain--such as the frontal lobe, the cerebellum, and the brainstem, to mention a few. If the feedback from the Sensors does not match the expectancies, error messages are sent to the Mixer to correct the movement, and to the Governor so that we are aware of the process. This process applies to all motor movements, whether we are making a sentence or pointing a finger at a picture, letter or word on an AAC device.
Some individuals with motor impairments, like cerebral palsy, may have less than accurate repair processes. In these cases, adaptations to the keyboards that access a computer may be indicated. This may take the form of special software to modify the response of the keyboard, or devices to help channel the user's movement to the right location.
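One common example of such software is an "acceptance delay" (sometimes called slow keys or filter keys): a keystroke counts only if the key is held down for a minimum time, so brushed or tremoring contacts are ignored. Here is a minimal, hypothetical sketch of the idea in Python, driven by simulated key-down/key-up times rather than a real keyboard driver.

```python
# A minimal sketch of an "acceptance delay" (slow-keys) filter. The key events
# below are simulated; a real adaptation would hook into the operating system's
# keyboard handling, and the delay would be set for the individual user.
ACCEPT_DELAY = 0.50    # seconds a key must be held before it is accepted

# (key, press_time, release_time) -- invented sample data
events = [
    ("h", 0.00, 0.70),   # held long enough: accepted
    ("j", 0.75, 0.80),   # brushed accidentally on the way: rejected
    ("i", 1.10, 1.90),   # accepted
    ("i", 2.00, 2.10),   # tremor repeat: rejected
]

accepted = [key for key, down, up in events if (up - down) >= ACCEPT_DELAY]
print("".join(accepted))   # prints "hi"
```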
III. LEARNING MODALITIES
To some Education Theorists, Modalities are Learning Systems which can be reduced to channels such as the visual, auditory and motor modalities. Visual children tend to learn by watching and looking at pictures, and may be easily distracted by movement and action in the classroom. Auditory children tend to learn by being told, respond to verbal instructions, and may be easily distracted by noise. Those who respond to motor/kinesthetic stimuli tend to be involved and active, would rather do than watch, and prefer 'hands on' projects. Language skills have also been similarly classified by modality. The Illinois Test of Psycholinguistic Abilities is a case in point. It makes an assessment of the Auditory and Visual modalities to determine which is functioning the best for learning and communication, and which may be significantly impaired, and if so, where the breakdown in the pathway may have occurred.
The concept of Learning Modalities is useful in AAC. It provides a framework for understanding the redundancy of language in the pathways of the brain. We will review three Learning Modalities for this purpose: the Auditory, Visual and Haptic Modalities. Of course, I am hedging to save paper, as professors love to do, because the Haptic Modality itself is really a composite of three separate Modalities: Tactile, Kinesthetic and Vestibular. So there are really five! Well, there are more than that if we consider the gustatory and olfactory tracts, but we won't, at least for now. Each modality, with the possible exception of the Vestibular (the sense of balance), can support language and the communication processes. When communication breaks down (or fails to develop), it is necessary to examine each modality in detail to determine where the break may have occurred and what alternate routes may be possible without and/or with the aid of technology. This is referred to as Task Analysis. Failure to do this can have tragic consequences! Take, for example, the story of Julia.
Julia was a young woman who was afflicted with a stroke which left her almost
totally paralyzed from head to toe. All she could do, finally, was make a kind of guttural sound; she had no speech. The doctors, nurses and family assumed she had
no language, and hence everyone talked in front of her. Frequently they
referred to her as a vegetable, and made jokes or other unkind statements,
assuming she could not understand. In truth, Julia had considerable language
capacity. Only her expressive language was impaired. In terms of receptive and
inner language, she was quite normal. Hence, she understood and endured with
anguish all that was being said. In addition, she suffered from the terrible isolation
that occurs when the language bridge is broken. It was six years before someone
became suspicious that she was not a "vegetable," and began to
explore her actual language abilities. Finally she was freed from her body
prison through AAC. Had someone analyzed her language processes more
thoroughly in the beginning, she would not have had to suffer so severely so
long!
Please
click here to see the complete Julia file.
(PLEASE NOTE THAT TO
GET BACK TO THE LESSON, SIMPLY CLOSE THE PAGE. That is because if
you use the return link at the bottom of the Julia page, you may get hopelessly
sucked into cyberspace and never find your way back!!! The good news is that
we've only lost one student this way from Cohort I and she finally showed up in
Cohort XI.)
History is full of similar cases. The well-known term "deaf and dumb" is testimony to the old notion that deaf individuals who cannot speak have little language capacity. To the contrary, they are lacking in only one area of language processing (the receptive transducer for the Auditory Modality). The classic repudiation of this idea, of course, is the life of Helen Keller. She was both Deaf and Blind and still learned to communicate by speaking!
[Photo: Helen Keller]
And here is a fascinating film strip of Helen Keller and her teacher Anne Sullivan taken in 1930.
Please click HERE, and then on the Arrow in the
Center of the Picture, to see the Helen Keller Video, THEN USE THE BROWSER RETURN
BUTTON TO GET BACK TO THE LESSON.
A. What is a Task Analysis of Language?
We have discussed the processes that occur in the Sensory System. We are still concerned with those processes, but now we will examine them within the Auditory, Visual and Haptic Modalities.
[Figure: The Auditory, Visual and Haptic Neural Systems]
For each of the modalities, these processes may be
distributed among the Receptive, Inner, or Expressive components of Language.
It is important to analyze these components individually when we are assessing
the language abilities of a child, or an adult. This would have saved Julia a
lot of grief by identifying both her language weaknesses and strengths.
A. Receptive and Expressive Language Skills: What identifies a Receptive (or Inner) Task as opposed to an Expressive Task? A Test of Receptive and Inner
functioning relies on minimal voluntary responses, using forms such as
pointing, nodding, grunting, blinking, twitching and other similar behaviors,
which I often see in the back row of my AAC class on campus. The emphasis is on
the timing, not the efficiency of the movements. Gestures could also fall under
this category.
Expressive tests, on the other hand, focus on the efficiency and competency of the response. Hence, tests that require a complex response are expressive tests. Speaking, writing, drawing, Signing, and pantomime are some examples of the complicated responses that we might observe for an expressive task.
B. The Auditory Modality consists of the neural system that
extends between the receptive transducer (the ear) and the expressive
transducer, the mechanisms for speech. Phonemes, of course, are the basic unit
for social communication in this modality. Hence, speech is the most typical
form of encoding involved. Morse Code would be another possibility. These, as we discussed, are based on a system of symbols.
Because in Semiotics the manner in which information is encoded is considered
a modality, both the stimuli and any motor responses to the stimuli could be
considered as additional segments of the modality structure. Other modality
segments for the Auditory modality then would include messages that are encoded
as signs. Examples, as we discussed earlier, are nominal graded signs (viz.,
stomach rumbling, burping etc.); nominal combinative signs (echolalia);
expressive graded signs (viz., moans, shouts, crying and laughing, etc.); and
expressive combinative signs (viz., swearing, singing, social speech forms such as "How are you today?", idioms, and proverbs, etc.). These provide many
possible options for variation in the Auditory Modality, some of which can and
do operate simultaneously. These options also provide a redundancy in
communication for alternative routing when a segment of the modality is
disabled. If speech fails, for example, communication may still be possible
through Morse Code. Yes, it is true that friends of a disabled patient may not
be able or willing to learn Morse Code, but a computer will, and can even
convert the code into speech!
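Here is a minimal sketch of that kind of conversion: a few lines of Python that decode Morse code into text, which a speech synthesizer could then read aloud. (The speech-output step is only indicated in a comment, since the synthesizer available depends on the device.)

```python
# A minimal Morse-to-text decoder. Dots and dashes for one letter are written
# together, letters are separated by spaces, and words by " / ".
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E", "..-.": "F",
    "--.": "G", "....": "H", "..": "I", ".---": "J", "-.-": "K", ".-..": "L",
    "--": "M", "-.": "N", "---": "O", ".--.": "P", "--.-": "Q", ".-.": "R",
    "...": "S", "-": "T", "..-": "U", "...-": "V", ".--": "W", "-..-": "X",
    "-.--": "Y", "--..": "Z",
}

def decode(morse: str) -> str:
    words = morse.strip().split(" / ")
    return " ".join(
        "".join(MORSE.get(letter, "?") for letter in word.split())
        for word in words
    )

message = decode(".... . .-.. .-.. --- / .-- --- .-. .-.. -..")
print(message)   # HELLO WORLD
# A text-to-speech engine (whatever the device provides) would now speak the message.
```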
At this point, then, our Auditory Modality Structure may look something
like this:

[Figure: Auditory Modality Structure]
C. The Visual Modality includes those neural systems that
extend between the receptive transducer (the eye) and the expressive
transducers (the motor mechanisms required for writing, and/or Sign Language, Pantomime and gestures). Writing and Sign Language are based on symbols such as graphemes (written letters) and visual patterns of space and movement (the Signs of Sign Language). But there are many other communications based on signs. Examples of these are nominal graded (viz., thrashing and crying); nominal combinative (viz., pointing, gesturing, etc.); expressive graded (viz., body "language"); and expressive combinative (viz., swearing gestures, and social
routines like opening a door and letting someone else go first).

At this point, then, our Visual Modality Structure may look something like this:

[Figure: Visual Modality Structure]
D. The Haptic Modality is yet
another channel within the brain that can support the processes of language.
This modality, however, is a composite of two more basic modalities--the
Tactile and the Proprioceptive Modalities.
1. The Tactile Modality is the sense of touch and, of course, is very familiar to us. Its receptive transducer is the system of nerve endings just under the skin. The role of the Tactile Modality in our cognitive development may be underestimated by most of us. The Tactile Modality plays a major role in the child's exploration of the environment. It helps a baby to develop an awareness of the body's limits, of which the newborn baby is unaware. It helps us to keep tabs on where we are in space. We can gauge much about our body position from what we feel through our feet on the floor and from our seat and back against a chair. Knowing where we are in space is paramount to the development of many language concepts (e.g., prepositional phrases) and language skills (e.g., discriminating "b" from "d" from "q" from "p"). Hence, as you observe a baby, you may notice that they spend much time touching and rubbing against things with their hands, feet, legs, lips and tongue. This is as much a process of serious study and exploration as is the busy bustling of a scientist about his laboratory.
2. The Proprioceptive Modality is itself two sub-modalities experienced as one: the Kinesthetic and Vestibular Modalities.
3. The Kinesthetic Modality is tantamount to our "eyes" looking inward to our own body. The transducer for the Kinesthetic Modality is the system of nerve endings in the joints of the body, and in the muscles. Like the sense of touch, it is very important to the development of body awareness. In fact, in cases where this process fails, a person can totally lose the awareness of a body part! To the baby, the kinesthetic sense is also a basic ruler for exploring and understanding the environment. Initially, visual (or auditory) images provide no real information to the baby about the properties of referents (things), such as angles, sizes, shapes, distances or mass. This information is obtained first hand (no pun intended) as the baby comes into physical contact with and manipulates the referents in their environment. The baby's hands, feet or mouth are constantly probing objects that are within their grasp--rattles, blocks, rails on the crib, balls, table legs, fingers, etc. These objects' properties are measured by the kinesthetic sensory system, which calculates and stores body angle, tension, fatigue, etc. This information is cross-referenced with the Visual and Auditory Modalities to give them a basis for meaning. The Vestibular Modality also plays a role in this exploration by providing a reference in space around which positions of up and down can be determined. Symbolically, the Kinesthetic Modality alone can support language in the form of Braille, writing, typing and touch Signing, to name a few forms.
One goal of early education is to provide young children with as many opportunities as possible to examine many different referents. The ultimate goal is to develop concepts upon which language can be mapped. The story of Montessori is a grand example of using the Haptic Modality for this purpose. The Montessori approach stresses hands-on exploration at an early age. But it is just this modality and these types of experiences that are denied to the severely motor-disabled child. This child, because of his/her impairment, is unable to interact with and explore the environment. The consequence is a lack of information upon which to develop basic concepts about the world and around which language can be developed. Later in life, the motor-impaired child may not have as much to communicate about because of this dearth of concepts. Hence, language and communication are impaired twice over: because of a language delay, and because of the motor impairment.
At this point, then, our Haptic
Modality Structure may look something like this:

[Figure: Haptic Modality Structure]
E. Cross Modality Processing: When a person listens and then speaks, the communication process is confined to a single modality. But frequently the action may involve two or all of the modalities in concert. For example, in the Peabody Picture Vocabulary Test, the stimuli are words (auditory) and pictures (visual). Hence, an additional process of cross-modality conversion becomes involved.

In this manner, all modalities may be involved simultaneously. For example, in a reading-instruction strategy called the "Writing Road to Reading," a pupil writes the word (e.g., in the sand) for Haptic processing, says it aloud for Auditory processing, and looks at it for Visual processing, all at the same time.
F. Modality Extensions: To a Medical Doctor, a Modality may refer to the employment of, or the method of employment of, a therapeutic agent. Hence, in the world of AAC, for a patient who cannot speak, the therapeutic agent may be a Communication Board, Pictures used in a particular manner, or a Computer with speech output, among others. This extends the reach of a modality structure
beyond the limits of the body to incorporate an external technology, like a
computer.

1. In Computer Technology, a Modality is also a path of communication between the human and the computer. Hence, a patient may communicate with a computer by touch (direct selection), by a mouse (proportional control), or through a switch in conjunction with a scanning system. And of course, there are many different kinds of computers to be considered that can be used as part of this path. Now our Modality Structure may look like this on the Expressive side:

[Figure: Extended Modality Structure -- Expressive side]
Ironically, when the expressive modality is impaired to the extent that there is only a minimal motor response available for communication, and an AAC device is deemed to be an appropriate rehabilitative strategy, the role of the Receptive segment of the Modality System takes on a new importance. In order to use an AAC device, a patient must be able to see, hear and/or touch it discriminately. This in itself may require Assistive Technology, like glasses or a hearing aid. But then the question becomes, "What is the patient able to decode?" That is, what should there be on the device for the patient to choose from: words, letters, pictures, photos?
2. In Computer Science, especially Computer Imaging, the type of input is considered to be a modality. For example, Black and White would be one modality, and Color would be another. This is equally true for input from an AAC device, like a computer, which a patient might be using for communication. But there is more. The input may be linguistically symbolic, like phrases, words, and letters; or it may use graphic symbols like pictures or Bliss Symbols; or it might use signs like Happy, Sad, Yes, No or Stop. The pictures may have abstract meaning or iconic meaning or both. A picture of an Apple, for example, may mean "apple," but in combination with a picture of a truck, it may mean "red." Bliss Symbols are another example of a symbol system that has a high degree of iconicity (i.e., the symbol looks like what it signifies). Then again, pictures may be photographs, colored drawings, two-dimensional black-and-white sketches, or stick figures. Each in its own right would be considered a modality segment available for plugging in or out of our communication modality structure. So our extended modality structure may look like this on the Receptive side:

[Figure: Extended Modality Structure -- Receptive side]
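One way to picture these interchangeable modality segments is as a configuration that a clinician could plug in or out for a particular patient. The little Python sketch below is purely illustrative; the fields, values and access settings are invented, not any real device's format.

```python
# An invented configuration showing how different "modality segments" --
# symbol types, their degree of iconicity, and the access method -- might be
# plugged in or out of an AAC setup for one patient. Illustrative only.
selection_set = [
    {"message": "apple", "symbol": "photograph",            "iconicity": "iconic"},
    {"message": "red",   "symbol": "apple + truck picture", "iconicity": "abstract"},
    {"message": "happy", "symbol": "Bliss symbol",          "iconicity": "mixed"},
    {"message": "stop",  "symbol": "printed word",          "iconicity": "symbolic"},
]
access = {"method": "single switch + row-column scanning", "auditory_cues": True}

for cell in selection_set:
    print(f'{cell["message"]:>6}: shown as {cell["symbol"]} ({cell["iconicity"]})')
print("Access:", access["method"], "| auditory cues:", access["auditory_cues"])
```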
When we combine ALL the modalities into one picture and take into consideration the wide variety of computers, low-tech devices, and no-tech strategies that are available, it becomes apparent that there is a vast array of options for the SLP to choose from in modifying these modalities to meet the needs of a patient. And of course, there is a plethora of issues associated with each, which will help determine the choice that the SLP makes. Many of these will be examined in the next Section. For now, here are several examples of multimodality communication using speech, gestures, body language, and an AAC Device, among others:
Please
click HERE, and then on the Arrow in
the Center of the Picture, to see the FIRST Video, THEN USE THE BROWSER RETURN BUTTON TO GET BACK TO
THE LESSON.
Now, please click on the Picture Arrow below to see the Second Video. (You do not need to use the Return Button on the Browser for these)
Now, please click on the Picture Arrow to see the Third Video. (You do not need to use the Return Button on the Browser for these)
Now, please click on the Picture Arrow to see the Fourth Video. (You do not need to use the Return Button on the Browser for these)
Now, please click on the Picture Arrow to see the Fifth Video, which is rather amazing. (You do not need to use the Return Button on the Browser for these)