2004 Conference Proceedings



ASSISTIVE TECHNOLOGY PROGRAMS IN HIGHER EDUCATION AND PROGRAM EVALUATION

Presenters
James Bailey
University of Oregon
Email: jbailey@uoregon.edu

Introduction

Assistive Technology (AT) programs have grown spontaneously over the past 15 years. What started as a single PC in a library has grown into expansive programs encompassing web accessibility, distance learning, sophisticated technological supports for students with learning disabilities, innovative technological supports for students with physical disabilities, and large-scale alternative-text production centers. AT support systems have become established programs and, as such, are now subject to review by program evaluation.

The origins of most AT programs put them at a disadvantage for this type of classic program evaluation, but at the same time this formal approach is the one best understood by administrators in higher education. Traditional program evaluation includes at least a half-dozen well-known methodologies and as many that are lesser known. Nevertheless, these methods begin with similar definitions of "program," and that definition shapes the expectations of the evaluators.

Traditional program evaluations are intended to work with programs that have had deliberate designs. These programs start with stated goals and outcomes and are driven by philosophies expected to produce those outcomes. AT programs have grown incrementally by responding to individual student needs and requests. Such origins do not easily lend themselves to appraisal by traditional program evaluation methods.

Rather than attempt to create an evaluative method that fits the evolutionary and spontaneous nature of many AT programs, time is better spent aligning the existing program with a logic model that works with traditional program evaluation. Embedded in AT programs are philosophies and expected outcomes; they have just rarely been articulated in any organized way.

This presentation will outline some of the key features of traditional program evaluation and discuss them within the context of a higher educational AT program.

Start With a Committee

A good place to start is to form a committee to carry out the evaluation. This does not have to be a large group; in fact, it might be small in the formative stages to help the project get focused. In time, you may want to enlarge the committee. A good initial committee composition might be: the disability services (DS) coordinator, the AT person (if there is one), a representative from campus information technology (IT), and a representative from the academic student services area. Enlarging the committee could include adding a student member and a faculty member.

Determine Who Conducts the Evaluation

After the committee is formed, a good place to start is to determine whether your evaluation is going to be a self-evaluation or conducted by an outside evaluator. It is generally recommended to use an outside evaluator. An experienced evaluator can save time in both evaluation design and implementation, and the perspective of an independent evaluator usually brings fresh insights to the evaluation. The outside evaluator also strengthens the evaluation's validity and integrity. Unfortunately, independent evaluators add to the cost of the evaluation. A self-study, carefully designed and implemented, can yield results that are both valid and very meaningful in assessing your program. A good compromise can be to bring in a consultant to assist with areas where evaluation experience is valuable.

One person should be designated the evaluator; this person may also be the committee chair and is responsible for all aspects of the process from start to finish. It is preferable if the evaluator and the coordinator of the program being evaluated are different people. That may not be feasible, and in such cases a co-evaluator may be named. This might be someone from the same campus with a background in research.

Decide the "What" and "Why" of Your Evaluation

The committee first needs to answer two questions: why are you doing an evaluation, and what are you going to evaluate? At first, the answers to these questions appear to be ridiculously obvious, but on closer examination it is clear that they are not so apparent. You might think you are evaluating your AT program, and that is correct. However, if you ask a student to describe a successful AT program and then ask a college administrator to describe one, you would likely get two different answers. Most programs in higher education have multiple stakeholders, and each of these constituents will have a different definition of success.

The "what" and the "why" of a program evaluation tend to run hand-in-hand. If you want to determine student satisfaction with your program (the what), then the reason (the why) is to see if students are feeling well served. On the other hand, if you want to ensure that you are getting good value for your AT expenses (the what), then the reason (the why) is to ensure that your AT budget is being responsibly allocated. Again, this, at first blush, seems very obvious, but it is time well spent to clearly define what you are evaluating and why you are doing so.

Determine the Audience for the Results

Another seemingly obvious step is to define who the evaluation is for and how the results are expected to be used. If the study is on student satisfaction, then the audience might be DS, seeking to improve the AT program. If the study is on access to technology across campus for students with disabilities, then the results might be used by the academic IT group to measure the success of integrating AT with student computing in general. Both of the examples just cited are completely valid purposes for an evaluation, and yet both would require completely different studies to accomplish their goals. It is possible to assess more than one aspect of a program (and do so for more than one audience) in a single evaluation, but doing so will increase the workload and logistics of conducting the evaluation.

Detecting the "What"

Once you have decided specifically what you are evaluating, you then need to define indicators that will detect it. Staying with the student satisfaction example, one indicator might be "Does AT help you succeed in school?" If a student strongly felt that AT facilitated success, then that could be interpreted as a good indicator of student satisfaction. Please note: books have been written about survey questions and how to interpret them; this is a (very) simple example of a possible indicator.

Having determined the sorts of information that will give you the clearest picture of what you are evaluating, you need to decide how to acquire that information. There are various ways to collect data.

Questionnaires and surveys

  1. easy to disseminate
  2. protects anonymity
  3. design is critical (wording etc.)
  4. restricts responses

Interviews

  1. provides a broader picture of respondents' feedback
  2. greater logistical overhead
  3. analysis can be difficult

Evaluating existing data

  1. transparent to program operation
  2. may provide only a thin slice of information
  3. requires careful interpretation

Observations

  1. derived from actual program operations
  2. may be very difficult to acquire (specialized viewing rooms, etc.)
  3. may require skilled and experienced observers

The list goes on, but this is a fair sampling of what might be considered helpful in evaluating an AT program.

Data collection methods are selected for the type of information desired, for the audience expected to respond, and for logistical reasons (costs, ease of dissemination, etc.). If the evaluation is being conducted by DS staff (with or without the assistance of an outside consultant), then the data collection process should be kept relatively simple. Simple does not have to mean inadequate.

No discussion of program evaluation would be complete without mentioning the difference between qualitative and quantitative research. In essence, your evaluation is research. At the risk of oversimplifying to the extreme, quantitative data are numbers and qualitative data are words. The numbers can be "real," for example the number of students in a program, or they can be numbers assigned to attitudes, for example, the degree of agreement with a statement (strongly agree, somewhat agree, etc.). And while there are exceptions, it is helpful to think of interviews as qualitative data. This is particularly true when you are reviewing the interviews looking for trends.
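As a simple illustration of assigning numbers to attitudes, the short sketch below (written in Python for convenience) codes agreement-scale responses as values from 1 to 5. The scale labels and the sample responses are assumptions made only for this example, not a prescribed instrument.

  # Illustrative only: coding Likert-style agreement responses as numbers
  # so they can be treated as quantitative data. The scale and the sample
  # responses below are hypothetical.
  SCALE = {
      "strongly disagree": 1,
      "somewhat disagree": 2,
      "neutral": 3,
      "somewhat agree": 4,
      "strongly agree": 5,
  }

  responses = ["strongly agree", "somewhat agree", "neutral", "strongly agree"]
  coded = [SCALE[r] for r in responses]
  print(coded)  # prints [5, 4, 3, 5]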

Once the data collection methods have been determined (and they can be a mix of several), the actual instruments or observation instructions need to be designed. Instruments are the actual questionnaires, interview questions, and so on, that you will use. This is one area where an experienced hand is helpful. Poorly constructed questionnaires or interviews may lead to worthless data or, worse, data that gives an inaccurate reflection of the respondents' true feelings.

Dealing with Data

This is a tricky part for experienced evaluators and that much more so for inexperienced self-evaluators. If your evaluator is not experienced and you are at a college or university, you can likely find assistance in this area on campus. The results from a simple Likert-type questionnaire, analyzed with relatively basic statistics, can yield very useful information for a program director or coordinator.
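To make this concrete, here is a minimal sketch of the kind of basic descriptive summary a self-evaluator might produce from coded Likert responses. The ratings are invented for illustration, and ratings of 4 or 5 are treated as agreement; this is only one reasonable convention, not a required one.

  # A minimal sketch of basic descriptive statistics on coded Likert data.
  # The ratings are hypothetical; 4 and 5 are counted as "agree".
  from collections import Counter
  from statistics import mean

  ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]  # one question's coded responses

  counts = Counter(ratings)
  print("Response counts:", dict(sorted(counts.items())))
  print("Mean rating:", round(mean(ratings), 2))
  agree_share = sum(1 for r in ratings if r >= 4) / len(ratings)
  print("Percent agreeing:", f"{agree_share:.0%}")

Even a summary this simple (counts, a mean, and the share of respondents agreeing) can tell a coordinator whether students, on balance, feel the program helps them succeed.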

Reporting Results

Conducting an evaluation inevitably makes people aware that it is happening, so the initial design should include some mechanism for reporting results or conclusions. It may be that you consider the evaluation internal and private; in that case, state that decisions about publishing will be made later. Obviously, all the key players and committee members should receive some document regarding results. It is recommended, however, to make your findings and conclusions available to the public.

Conclusion

With the growth of AT programs, particularly in higher education, it is inevitable that they will be increasingly subject to evaluation, either as a part of DS or as a stand-alone program. While such an evaluation may initially cause anxiety in a DS or AT coordinator, many positive benefits can come out of a thoughtful and well-implemented assessment. Learning that your program is considered successful by those it serves will be well received by the administrators of your school, and knowing your budget is being wisely spent will warm the hearts of your school's accountants. On the other hand, if there is a problem (even if it is only a "perceived" problem), it is crucial that you know it, understand it, and resolve it.

DS and AT coordinators need to familiarize themselves with at least the fundamentals of program evaluation so that, when an evaluation does come, they can positively contribute to and influence the process.




Reprinted with author(s) permission. Author(s) retain copyright.