Return to 1999 Conference Table of Contents
Edward L. Meyen
This paper is based on the experience of developing, teaching and testing two fully online graduate level courses, one focusing on the process of curriculum development and the other on designing and teaching online instruction in education (http://busboy.sped.ukans.edu/sped798d). Enrollees included traditional full-time graduate students and employed professionals. All instruction was asynchronous and lectures were presented using audio streaming in a multimedia format. Students were able to control their progress through the lectures, that is, they could stop and review a lecture at any time. Online course support features included the syllabus, lesson schedule, a listserv, roster, and technical support information. Lesson instructional features included a lesson page, advanced organizer, the lecture, an outline, notes, activities, lesson assessments, readings, a glossary, focus presentations, collaborative projects, and exams.
To document those elements of the course design and technical delivery system in need of revision, all course features were subjected to formative evaluation, and performance data were collected on participating students. Data were collected during a pilot study, a field test under the normal conditions of graduate online offerings, and a series of focus group sessions.
The pilot study involved 14 students from disciplines outside of education. They were selected to test only the navigation system and related features, not the content. Students' ability to use the navigation system with ease was important because we did not want the technical aspects of the course to interfere with the instructional process. Many difficulties with the navigation system were reported during the pilot test, and several revisions of the human-computer interface were made.
Field testing was conducted in the spring semester of 1996, at which time the combined enrollment of the two courses was 31 graduate students. Twenty-two were female and nine male. Eleven lived in the university community, 18 lived within 50 miles, and two lived 150 miles from campus. Seventeen were employed full time, six worked part time, and the remainder were not employed.
Focus groups were held during the second half of the semester, after all students had completed at least half of the 16 lessons. The course developer did not participate in the focus groups, and appropriate revisions were made based on their feedback. Students wanted the ability to print hard copies of all text materials, to access the outline and notes before completing the lecture, and to see more frequent images of the instructor on the monitor. Other frequent comments were that students valued the flexibility of the format, the personal interaction with the instructor, and the ability to review all material in its original form. Students also reported that their progress varied, but most completed on time and had become very conscious of course design. Students further felt that students' technical skills need not be an important consideration, that collaboration could be achieved effectively online, that students did better work where products were involved, and that expectations for response time increased as students progressed.
The formative evaluation consisted of two instruments: a
survey form and a university-wide instrument, the Curriculum and
Instruction Survey (C & I survey). The C & I survey,
which focuses on principles of effective teaching, includes 10
items intended to be used by all faculty administering the
instrument to students in their classes. In addition, there are
28 optional items faculty may elect to use. The C & I survey
was administered online and scored electronically.
Figure 1. Ten common items used in all C & I surveys.
| 10 Items on the C & I Survey | A* | B* | C* | D* | E* |
|---|---|---|---|---|---|
| 1. Has command of subject. | 4.73 | 4.79 | 4.52 | 4.86 | 4.52 |
| 2. Successfully communicates subject matter. | 4.53 | 4.04 | 4.07 | 4.52 | 4.07 |
| 3. Availability of professor to students. | 4.80 | 4.26 | 4.23 | 4.50 | 4.23 |
| 4. Is sensitive to the responses of the class. | 4.53 | 4.29 | 4.18 | 4.58 | 4.18 |
| 5. Assignments are pertinent and helpful in learning subject. | 4.80 | 4.36 | 4.18 | 4.63 | 4.18 |
| 6. Provides critiques of students' work. | 4.53 | 4.19 | 3.86 | 4.39 | 3.86 |
| 7. Is fair. | 4.80 | 4.49 | 4.28 | 4.69 | 4.28 |
| 8. Overall, (s)he is an effective teacher. | 4.80 | 4.26 | 4.17 | 4.65 | 4.17 |
| 9. Objectives and methods are clearly explained. | 4.60 | 3.92 | 4.14 | 4.59 | 4.14 |
| 10. Overall, course goals and objectives are achieved. | 4.67 | 4.16 | 4.17 | 4.62 | 4.17 |

* A = online courses; B = WWW-supported courses; C = Teaching and Learning Department; D = Special Education Department; E = university-wide.
The mean responses for students in the online courses were compared with the mean responses for graduate courses taught in the departments of Special Education and Teaching and Learning (the home departments of the students enrolled in the online courses), with university-wide means, and with the means for traditional on-campus courses that also provide web-based supports, such as posting the syllabus, assignments, and activities online. Figure 1 reports the mean values for the 50 potential comparisons. In 48 of these comparisons, the mean ratings favored the online courses. On all items, the mean score for the online courses exceeded the mean for the university-wide group.
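The last claim can be checked directly against Figure 1. A brief sketch, with the item means transcribed from columns A (online courses) and E (university-wide):

```python
# Item means transcribed from Figure 1.
online = [4.73, 4.53, 4.80, 4.53, 4.80, 4.53, 4.80, 4.80, 4.60, 4.67]  # column A
university_wide = [4.52, 4.07, 4.23, 4.18, 4.18, 3.86, 4.28, 4.17, 4.14, 4.17]  # column E

# On every item, the online mean exceeds the university-wide mean.
all_higher = all(a > e for a, e in zip(online, university_wide))
print(all_higher)  # True
```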
Mean scores on 10 of the optional items selected from the C & I survey are reported in Figure 2. The optional items selected included those judged to be most appropriate for evaluating online instruction. Because they were optional, comparisons were not made across groups. However, the scores were consistently high for the online course.
Figure 2. C & I survey optional items.

| Optional item | Mean |
|---|---|
| 1. Instructor was willing to help me outside of class. | 4.87 |
| 2. This course aroused my intellectual curiosity. | 4.73 |
| 3. I made an honest effort to learn from the course. | 4.93 |
| 4. The instructor raised questions and posed problems to the class. | 4.60 |
| 5. When making generalizations the instructor made good use of examples. | 4.86 |
| 6. The readings were appropriate in length. | 4.97 |
| 7. Exams covered material emphasized in this course. | 4.87 |
| 8. The requirements and dealings of the course were made clear. | 4.79 |
| 9. Various aspects of this course were well integrated. | 4.73 |
| 10. Instructor's presentations were clear and understandable. | 4.67 |
Formative data regarding the students' learning environment were also collected using a questionnaire of eight multiple-choice items. Each student was asked to complete the questionnaire anonymously after each of the 16 lessons. Selected items indicate that a majority of the students completed the lessons at their workplace (51%), without experiencing any technical difficulty during the lesson (67%), and in one sitting without a break (53%).
Formative data were also collected by means of a daily log of the time required to develop and teach the online courses. Instructional development time averaged 40 hours per lesson, totaling 640 hours per course. Once the lesson material was handed to the technical developer, it was in a form specifically structured for the technical design. For example, on the lecture scripts, notations were recorded where each illustration and note was to be inserted once the audio recording of the script had been digitized. Technical development time averaged 56 hours per lesson, totaling 256 hours per course. This included time devoted to all tasks associated with placing the lessons online: transferring the illustrations, notes, and audio lecture to a digitized form appropriate for the Internet; creating the various feedback forms for student activities; ensuring a functional email and listserv system; synchronizing the lecture, notes, and illustrations using audio-streaming technology; linking course features and supports; and, finally, verifying that all systems were functional when accessed through the Internet.
The time the instructor devoted to responding to activities submitted by students, and the number of responses to students, are reported in Figure 3. The amount of response time is high because the instructor prepared original feedback tailored to each student's activity responses.
Figure 3. Teaching/response time.

| | Online Development Course | Curriculum Course | TOTAL |
|---|---|---|---|
| Number of responses | 1,104 | 508 | 1,612 |
| Responses per student | 50 | 56 | 106 |
| Number of minutes | 13,147 | 8,635 | 21,782 |
| Minutes per student | 598 | 959 | 1,557 |
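The per-student rows in Figure 3 are consistent with the totals. A short sketch reproduces them; the class sizes (22 and 9) are not stated in the paper but are inferred from the response ratios (1,104/50 and 508/56) and sum to the combined enrollment of 31:

```python
# Totals from Figure 3; class sizes are inferred, not reported in the paper.
total_minutes = {"online development": 13147, "curriculum": 8635}
students = {"online development": 22, "curriculum": 9}

for course, minutes in total_minutes.items():
    per_student = round(minutes / students[course])
    print(course, per_student)  # 598 and 959, matching Figure 3
```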
Students' grades were based on the mid-term and final exams, focus presentations, and collaborative projects. Student performance on activities and lesson assessments was not graded; rather, these were treated as instructional tools, with feedback provided to students. Data from the lesson assessments (a 10-item quiz specific to the content of each lesson) were collected to assess student understanding of the lesson content and provide them feedback, as well as to obtain a formative analysis of performance data for all students. This facilitated identification of content needing modification within lessons and/or items requiring revision. Responses to each item were scored automatically and aggregated. Students scored a mean of 8.34 (SD = 0.77) in the curriculum course and a mean of 8.77 (SD = 1.08) in the online development course.
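The automatic scoring and aggregation described above can be sketched as follows. The scores here are illustrative stand-ins, not the actual student responses:

```python
import statistics

# Hypothetical quiz scores, one per student; each lesson assessment
# is a 10-item quiz, so scores range from 0 to 10.
scores = [8, 9, 10, 7, 8, 9, 8, 10, 9, 8]

mean = statistics.mean(scores)
sd = statistics.stdev(scores)  # sample standard deviation
print(f"mean = {mean:.2f}, SD = {sd:.2f}")  # mean = 8.60, SD = 0.97
```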
Asynchronous instruction via the Internet is fast becoming the educational delivery mode of choice for employed professionals who are unable to conveniently access higher education on traditional campuses, as well as for K-12 schools seeking student and staff development. Given the national movement toward offering online instruction, it is essential that online instruction be subjected to evaluation and that formative procedures be employed.