1995 VR Conference Proceedings



Virtual Holography in Diagnosis and Therapy of Sensorimotor Disturbances

Torsten Kuhlen*, Karl-Friedrich Kraiss*, Axel Szymanski*, Christian Dohle**, Harald Hefter**, Hans-Joachim Freund**

*Institute of Technical Computer Science, Aachen Technical University (RWTH) Ahornstrasse 55, 52074 Aachen, Germany
Tel.: +49 241 803636
Fax.: +49 241 8888 308
impact@dali.techinfo.rwth-aachen.de

**Department of Neurology, Heinrich Heine University Duesseldorf
Moorenstrasse 5, 40225 Duesseldorf, Germany
Tel.: +49 211 311 8678
Fax.: +49 211 311 8469

INTRODUCTION

This paper introduces an "interactive motor performance analysis and classification tool" (IMPACT), which is being developed in a joint effort of the Institute of Technical Computer Science at Aachen Technical University and the Department of Neurology at Heinrich Heine University Duesseldorf. IMPACT employs Virtual Reality (VR) techniques in order to improve the diagnosis and therapy of sensorimotor disturbances. It is based on the optoelectronic motion recording system Selspot II and a Silicon Graphics Indigo Extreme UNIX workstation, which provides the computing and graphics performance necessary for the generation and presentation of VR scenarios.

In IMPACT, the use of VR comprises two aspects: on the one hand, patients can be brought into virtual scenarios where they perform specific tasks - either to measure their performance for diagnosis or to train certain actions for rehabilitation purposes. On the other hand, the physician can benefit from three-dimensional visualization capabilities during motion analysis. Both aspects are integrated into IMPACT together with existing "conventional" motor tasks and analysis methods. Figure 1 gives an overview of IMPACT.

The next section briefly describes sensorimotor disturbances and the problematic nature of their diagnosis and therapy, whereas the following sections discuss the components of IMPACT in more detail.

SENSORIMOTOR DISTURBANCES

Patients with sensorimotor disturbances are handicapped to various degrees. Some deficits (such as ideomotor apraxia) may show up only in special situations. Others handicap the patients so severely that they can hardly care for themselves (ideational or tactile apraxia, or ataxia) [Freund92]. In some individuals, mainly sensory deficits occur; in others, pure motor deficits are observed. The usual examination of sensorimotor disturbances is based on various functional tests. Patients have to perform motor tasks, which are then analyzed for diagnostic purposes and therapy planning. Motions can be performed on request or as imitations and may include interaction with one or more objects.

In clinical practice, the motor disturbances resulting from cortical lesions are distinguished from those resulting from subcortical lesions, but quantitative comparisons are scarce. Cortical and subcortical brain areas extensively interchange information, yet little is known so far about this mutual interaction in the control of sensorimotor integration. Therefore, complex test situations are necessary to examine this sensorimotor network and to bring out even subtle impairments. Complex tasks are also important for therapy. The tasks used in motor therapy today are of a rather simple nature; however, better performance of these simple motor tasks does not guarantee an improvement in the handling of complex tasks [Mai93]. Even patients with focal lesions may suffer from a complex motor deficit (such as optic ataxia) caused by the inability to perceive one particular type of information or to match different types. For example, Goodale and co-workers [Goodale91] described a patient who was able to grasp an object, but failed to perceive its properties. In order to perform accurate visually guided movements, it is necessary to integrate visual information with information about the spatial relationships between the object, the body and the limbs. Thus, an adequate examination of visuomotor interaction necessarily has to test the use of 3-D spatial information. Only recently has this become possible, through VR.

VR-BASED MOTOR TASKS

The use of virtual instead of real test environments in motor tasks has significant advantages:
For diagnostic purposes, VR can be used to present separately input streams that normally cannot be distinguished from one another. For example, virtual objects can be presented visually while lacking tactile feedback during manipulation. In healthy subjects, this should lead to some kind of compensation strategy, which may differ from the strategies found in certain groups of patients (e.g. patients with severe sensory deficits). Because the different input streams can be modified separately, they can also be made to convey inconsistent information. For healthy subjects, it should then be possible to recognize dominant input streams, an organization behaviour which can be compared to the patients' behaviour.

VR paradigms are also useful for investigating situations that are difficult or even impossible to create with conventional setups. Grasping tasks with perturbation are an example of such motor tasks; they are of special interest for the analysis of mental information processing [Paulignan91] [Gentilucci93] [Castiello93]. Here, the object's size, position, color, orientation or shape is changed while the patient is grasping. Even combinations of different perturbations, which are especially difficult to create in real test environments, become feasible using computer-generated instead of real objects. For instance, it becomes possible to test how much information is necessary to estimate the three-dimensional trajectory of a moving, possibly accelerating target that is temporarily made invisible to the patient. We expect such setups to allow new insights into the information interchange between different brain areas.
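
As a minimal, hypothetical sketch of such a perturbation paradigm (not the IMPACT implementation), the following Python fragment jumps a virtual target to a new position once the hand has covered a given fraction of the reach; the class name, positions and the trigger fraction are illustrative assumptions.

    import math

    def distance(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    class PerturbationTrial:
        """Double-step style trial: the target jumps once during the reach."""
        def __init__(self, start, target, perturbed_target, trigger_fraction=0.3):
            self.start = start                        # hand start position (x, y, z), metres
            self.target = target                      # initial target position
            self.perturbed_target = perturbed_target  # target position after the jump
            self.trigger_fraction = trigger_fraction  # fraction of the reach at which to perturb
            self.perturbed = False

        def update(self, hand_position):
            """Called once per tracker sample; returns the target to display this frame."""
            travelled = distance(hand_position, self.start)
            total = distance(self.target, self.start)
            if not self.perturbed and total > 0 and travelled / total >= self.trigger_fraction:
                self.perturbed = True                 # jump the target mid-movement
            return self.perturbed_target if self.perturbed else self.target

    # Example: the target jumps 5 cm to the right once 30% of the reach is completed.
    trial = PerturbationTrial(start=(0.0, 0.0, 0.0),
                              target=(0.30, 0.00, 0.10),
                              perturbed_target=(0.35, 0.00, 0.10))
    print(trial.update((0.05, 0.0, 0.02)))   # before the trigger: original target
    print(trial.update((0.15, 0.0, 0.05)))   # after the trigger: perturbed target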

Finally, a 'redirection' of output channels can be performed, enabling patients with deficits in one defined mode to use other modes of control. For example, patients suffering from certain movement disorders have lost the capability to organize their movements spatially. It is not clear, however, whether this is due to a synchronization deficit of their own limbs or to a deficit of movement representation in general. Here, VR can be used to control motor tasks through 'channels' other than limb positions (e.g. muscle activation patterns).
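
A minimal sketch, assuming a single rectified muscle-activation channel as the alternative control signal (the signal name, gain and smoothing constant are assumptions and not part of the IMPACT design): the activation envelope, rather than limb position, drives a one-dimensional cursor in the virtual task.

    class ActivationController:
        """Map a raw muscle-activation sample to a cursor position in [0, 1]."""
        def __init__(self, gain=2.0, smoothing=0.05):
            self.gain = gain
            self.smoothing = smoothing   # per-sample exponential smoothing factor
            self.envelope = 0.0

        def update(self, emg_sample):
            rectified = abs(emg_sample)
            # an exponential moving average acts as a simple low-pass filter
            self.envelope += self.smoothing * (rectified - self.envelope)
            return min(1.0, self.gain * self.envelope)

    controller = ActivationController()
    for sample in (0.1, -0.4, 0.5, -0.6, 0.2):
        print(round(controller.update(sample), 3))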

If specific deficits have been established, motor tasks for therapy purposes can be performed by the patients. However, the benefit from motor rehabilitation has been limited so far, even though considerable amounts of money are spent on such rehabilitation procedures. The simple reason may be that suitable training procedures could not yet be established and controlled. Recent animal experiments (on rats and hamsters) have demonstrated that the enforced use of an impaired limb after a motor cortical lesion leads to even worse results than not influencing the animal at all. Thus the strategy should be to diagnose a deficit, but not to train this specific deficit in isolation; instead, VR techniques should be used to embed compensation of the deficit in a more complex arrangement. In addition, organizing rehabilitation procedures not in an abstract but in a game-like manner results in significantly better rehabilitation success [Wann93].

Several VR-based motor tasks to be used in IMPACT are under development at our Institutes. To date, the implementation of grasping tasks with perturbation has been completed, so that patients can now be tested.

VIRTUAL HOLOGRAPHY

A very important question is how virtual objects should be presented to the patients for the execution of motor tasks. VR systems often use a head-mounted display, allowing full immersion into a virtual world. From our experience, however, even healthy persons are frequently irritated by the complete loss of visual contact with the real environment. Moreover, in fully immersive VR environments the patient's hands have to be represented by computer-generated counterparts, and the inevitable time lag of today's VR systems is incompatible with the accuracy of measurement needed for our purpose.

Therefore, a non-immersive visualization technique is preferable, especially with regard to those motor tasks in which patients have to interact with objects using their own hands. As a consequence, IMPACT uses a visualization technique based on a high resolution graphics screen and stereo glasses. Since virtual objects can be visualized stereoscopically by this technique, they seem to be situated in free space in front of the screen as shown in figure 2.
However, the interaction with virtual objects also requires that they are perceived by the subject as stationary. In other words, the virtual scenario must be visualized in such a way that it is perceived exactly like a scenario of real objects. Therefore, the visualization of a virtual scenario has to be adapted dynamically to the patient's viewing position and direction. The authors regard the inclusion of these variables in the projection of the current frame, known as "Virtual Holography" or "Fish Tank Virtual Reality", as essential for the implementation of motor tasks. At present, an electromagnetic tracking system is used in IMPACT to measure the patient's head position and orientation.
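
As a rough illustration of how such a head-coupled projection can be computed (a simplified sketch, not the authors' code), the following Python function derives an asymmetric viewing frustum from the tracked eye position, assuming the screen lies in the plane z = 0 of the tracking coordinate system and the viewer is at z > 0; screen extents and clipping distances are illustrative. The returned values correspond to the parameters of a glFrustum-style projection call.

    def off_axis_frustum(eye, screen_left, screen_right, screen_bottom, screen_top,
                         near=0.1, far=10.0):
        """Return (left, right, bottom, top, near, far) for an off-axis projection."""
        ex, ey, ez = eye                 # eye position in screen coordinates (metres)
        if ez <= near:
            raise ValueError("the eye must lie in front of the near plane")
        scale = near / ez                # project the screen edges onto the near plane
        return ((screen_left - ex) * scale, (screen_right - ex) * scale,
                (screen_bottom - ey) * scale, (screen_top - ey) * scale,
                near, far)

    # Example: a 0.36 m x 0.27 m screen, eye 0.6 m in front and slightly to the right.
    print(off_axis_frustum(eye=(0.05, 0.0, 0.6),
                           screen_left=-0.18, screen_right=0.18,
                           screen_bottom=-0.135, screen_top=0.135))

The view transformation must additionally translate the scene by the negated eye position each frame, so that objects in front of the screen plane stay perceptually fixed in space as the head moves.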

The ideal computer display would be a 360-degree sphere viewed from its center, so that the display surface remains at a constant distance regardless of the viewing direction. Real computer screens never meet this demand of presenting a concave surface to our field of vision: most screens currently available are either flat or convex cathode ray tubes (CRTs). Curvature and the refraction index of the screen glass cause projection errors that depend to a great extent on the current viewpoint. At the Institute of Technical Computer Science in Aachen, algorithms have been developed which compensate for these nonlinear distortions [Walter94]. The pictures in figure 3 give an impression of the accuracy achievable with these algorithms even on extremely curved older screens: while the lower cube is real, the other one is a holographic virtual cube.
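
As a purely geometric, hypothetical sketch of such a compensation (ignoring refraction in the glass, and not the algorithm of [Walter94]): for a virtual point to appear at its intended position, the corresponding pixel must be drawn where the line from the eye through that point pierces the curved screen surface, modelled here as a sphere; the radii and positions below are illustrative assumptions rather than measured CRT parameters.

    import math

    def draw_position_on_curved_screen(eye, point, sphere_center, sphere_radius):
        """Intersect the eye->point ray with a spherical screen surface."""
        # ray: p(t) = eye + t * d, with d the normalized direction from eye to point
        d = [p - e for p, e in zip(point, eye)]
        norm = math.sqrt(sum(c * c for c in d))
        d = [c / norm for c in d]
        oc = [e - c for e, c in zip(eye, sphere_center)]
        b = 2.0 * sum(dc * occ for dc, occ in zip(d, oc))
        c = sum(occ * occ for occ in oc) - sphere_radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            return None                              # the ray misses the screen surface
        t = (-b - math.sqrt(disc)) / 2.0             # nearer intersection, facing the viewer
        if t < 0:
            t = (-b + math.sqrt(disc)) / 2.0
        return [e + t * dc for e, dc in zip(eye, d)]

    # Example: eye 0.6 m in front of a gently curved screen (sphere radius 1.0 m).
    print(draw_position_on_curved_screen(eye=(0.0, 0.0, 0.6),
                                         point=(0.05, 0.02, -0.05),
                                         sphere_center=(0.0, 0.0, -1.0),
                                         sphere_radius=1.0))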

MOTION RECORDING

The clinical tools used for diagnosis today are usually restricted to qualitative descriptions of the performance of motor tasks, initiated on request or by imitation [Goldenberg93]. Only single-joint movements have been investigated in detail so far, whereas quantitative descriptions of complex motor performance have only recently been used in clinical studies [Mueller91]. A more precise evaluation could of course be derived from a direct analysis of complex motor performance itself, which we are planning to do with IMPACT. Quantitative measurements of motor tasks are of central importance not only for diagnosis but also for validating the therapeutic steps to be taken, because they reveal the efficiency of a therapy even if the extent of the disorder changes only slightly.

In IMPACT, motion recording is used off-line for later motion analysis as well as on-line for interaction with virtual objects. Even though IMPACT has been equipped with, and can be used with, a dataglove in combination with an electromagnetic tracking system attached to the patient's wrist (see figure 4), our preferred input device for IMPACT is the Selspot II optoelectronic motion recording system. Since dataglove systems still lack sufficient accuracy and speed, they are not suitable for our concrete medical application. Selspot II uses infrared light-emitting diodes (LEDs) as markers. Up to 32 of these markers can be attached to the patient's arm and hand. Two or three high-speed infrared cameras are positioned at different viewpoints above and beside the patient. In operation, the LEDs are illuminated sequentially while the cameras track their movements at speeds of up to 1000 Hz. By analyzing the pictures delivered by the cameras, the computer integrated into Selspot II calculates the three-dimensional coordinates of the attached LEDs at maximum sample rates of 100 Hz, providing a very compact, real-time input data stream for the VR system. The time lag caused by this measurement technology is minimal, while the compactness of the Selspot II output data eases the long-term storage of measurement data for later off-line diagnosis.
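
As a minimal sketch of the on-line use of such marker data (the marker names, units and distance threshold are assumptions for illustration), a grasp of a virtual object could be registered once the thumb and index fingertip markers both come close enough to the object's centre:

    import math

    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    def grasp_detected(markers, object_center, threshold=0.03):
        """markers: dict of marker name -> (x, y, z) in metres, for one sample."""
        return (dist(markers["thumb_tip"], object_center) < threshold and
                dist(markers["index_tip"], object_center) < threshold)

    sample = {"thumb_tip": (0.31, 0.01, 0.09), "index_tip": (0.29, -0.01, 0.11)}
    print(grasp_detected(sample, object_center=(0.30, 0.00, 0.10)))   # True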

VR-BASED MOTION ANALYSIS

In using "conventional" methods for motion analysis, the handling of the recorded data on movement trajectories is especially problematic, because neither detailed quantitative analysis nor motion visualization and validation of the measured data are sufficiently supported by contemporary computer techniques.

Typically, validation of the recorded trajectories supplied by optoelectronic systems is done in the form of numeric tables or by visualizing the three-dimensional trajectories as two-dimensional plots. Stick diagrams, connecting marker positions at corresponding times by lines, are often used to demonstrate a patient's movement. Values of single joint angles or kinematic parameters, such as velocity and acceleration, are calculated and presented as two-dimensional diagrams or tables [Winter91].
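
A minimal sketch of how such kinematic parameters can be derived from a recorded marker trajectory sampled at 100 Hz (the trajectory below is artificial and purely illustrative): velocity and acceleration are estimated by central differences.

    import math

    def central_differences(samples, dt):
        """samples: list of (x, y, z) positions; returns speeds and acceleration vectors."""
        def diff(seq):
            # central difference of a sequence of equally spaced vector samples
            return [tuple((b - a) / (2.0 * dt) for a, b in zip(seq[i - 1], seq[i + 1]))
                    for i in range(1, len(seq) - 1)]
        velocities = diff(samples)
        accelerations = diff(velocities)
        speeds = [math.sqrt(sum(c * c for c in v)) for v in velocities]
        return speeds, accelerations

    # 100 Hz sampling gives dt = 0.01 s; a short, artificial wrist-marker trajectory.
    trajectory = [(0.001 * i * i, 0.0, 0.0) for i in range(10)]
    speeds, accelerations = central_differences(trajectory, dt=0.01)
    print(speeds[:3])          # tangential velocity in m/s
    print(accelerations[:2])   # acceleration vectors in m/s^2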

For the analysis of all movement tasks, VR can play a decisive role in enhancing the existing visualization concepts by using animated models displayed in three dimensions [Kuhlen95]. This intuitive form of presentation enables the physician to concentrate on the specific aspects of the motor deficit. Motions can be animated within VR at different rates, thus allowing precise assessment and interpretation. A snapshot of an animation in IMPACT can be seen in figure 5. To realize such an animation, it is essential to establish an adequate hand/arm-modeling procedure as described in the next section.

In IMPACT, three-dimensional visualization is achieved using a high-resolution graphics monitor in combination with stereo shutter glasses (see figure 6). Thus, the same computer hardware is used for motion analysis as well as for VR-based motor tasks, integrating both into one single system.
During an animation, the physician can change view position, view direction and view angle in real time, using an input device with six degrees of freedom: three for translation and three for rotation. The input devices currently in use are SpaceMaster, SpaceBall and SpaceMouse. Specifications for other devices are implemented in the code, but have not yet been tested.

HAND/ARM-MODELING

Each motion recording technique that describes the state of the upper extremities provides different kinds of information in its own data format. The Selspot II system, for example, provides the three-dimensional coordinates of markers attached to the patient's skin, and the VR computer has to calculate all arm and hand joint angles from these. In contrast, a dataglove system provides joint angles for all fingers plus one three-dimensional coordinate from the electromagnetic tracker attached to the wrist; here, the absolute positions of all fingers have to be calculated by the VR system.
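
A minimal sketch of this kind of computation (marker placement and coordinates are illustrative assumptions): the elbow angle obtained as the angle between the upper-arm and forearm segment vectors defined by shoulder, elbow and wrist markers.

    import math

    def joint_angle(proximal, joint, distal):
        """Angle in degrees at `joint` between the segments joint->proximal and joint->distal."""
        u = [p - j for p, j in zip(proximal, joint)]
        v = [d - j for d, j in zip(distal, joint)]
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        cos_angle = max(-1.0, min(1.0, dot / (nu * nv)))   # guard against rounding errors
        return math.degrees(math.acos(cos_angle))

    shoulder = (0.00, 0.40, 0.00)
    elbow = (0.00, 0.10, 0.00)
    wrist = (0.25, 0.10, 0.00)
    print(joint_angle(shoulder, elbow, wrist))   # 90.0 degrees in this configuration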

The IMPACT system is being built to be open to new input and output devices. Therefore, it is most desirable to have a uniform data format that describes as many state variables of hand(s) and arm(s) as exactly as possible. This way, all tasks like motion display and motion analysis, as well as interaction with virtual objects, will be independent of the recording technique used. In order to compute the state values of the uniform data format that are not supplied directly by the respective motion recording technique, an adequate modeling technique including anthropometric data has to be used. The modeling procedure currently used by IMPACT combines inverse kinematics and vector calculations [Schmitt94]. As the position of a specific location on the human skin can shift by up to an inch relative to a joint while a limb is moving, these inaccuracies, which are inherent to the Selspot recording principle, also affect our current model. We hope to get around these skin-movement problems with an improved model that is already under development and will use neural networks to convert joint angles to three-dimensional coordinates and vice versa.
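
As a hypothetical sketch of the conversion needed for dataglove input (segment lengths, angles and the planar simplification are assumptions; the actual IMPACT model combines inverse kinematics and vector calculations as stated above), simple forward kinematics can turn the joint angles of one finger into fingertip coordinates relative to the wrist tracker, thereby filling in missing values of the uniform data format:

    import math

    def fingertip_position(base, segment_lengths, joint_angles_deg):
        """Chain the finger segments in a single plane; flexion angles accumulate distally."""
        x, y = base
        angle = 0.0
        for length, flexion in zip(segment_lengths, joint_angles_deg):
            angle += math.radians(flexion)
            x += length * math.cos(angle)
            y += length * math.sin(angle)
        return x, y

    # Index finger: three phalanges (lengths in metres), moderately flexed at each joint.
    print(fingertip_position(base=(0.0, 0.0),
                             segment_lengths=(0.045, 0.025, 0.020),
                             joint_angles_deg=(20.0, 30.0, 25.0)))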

FUTURE DEVELOPMENTS

The IMPACT system is currently at the prototype stage. Since an open system like this can be extended to serve almost any new purpose, it can hardly be called complete at any time. We are constantly improving the system, and some extensions will be functional soon. The development of motor tasks for diagnostic purposes is largely complete. An improved arm model, including a uniform data format for any kind of position tracking system, and a new, intuitive user interface for non-experts are on their way.

A subsystem using neurocomputers for diagnosis is also under development, and motor tasks suitable for training based on Virtual Holography will follow the sets of motor tasks for diagnostic purposes. Obviously, the addition of tactile feedback during motor tasks would further improve realism for the patient and thereby again increase the patient's motivation.

SUMMARY

It has been shown that VR techniques can be usefully applied in the field of sensorimotor disturbances. Virtual scenarios can be created in which patients perform specific motor tasks. Using an optoelectronic position tracking system for motion recording as well as for interaction with virtual objects, and using Virtual Holography as an adequate visualization technique, we are now developing and testing suitable scenarios. Furthermore, an advanced visualization and animation tool using VR technology for diagnostic purposes and therapy planning has been developed, from which the physician can profit during motion analysis.

REFERENCES



Reprinted with author(s) permission. Author(s) retain copyright.