Leader: Zane Berge
EDIT 704, Fall 2001
Berge, Z. and Myers, B. (2000). Evaluating computer-mediated communication
courses in higher education. Journal of Educational Computing Research,
23(4), 431-450.
Introduction
This article reviews various approaches to the evaluation of online higher
education courses that are either totally asynchronous or have an asynchronous
component constituting a significant portion of the course grade.
The review is intended to serve as the foundation for identifying common
elements that would help instructional designers improve future programs
in distance education, and help institutions determine the value of distance
education programs.
Berge has written extensively on computer-mediated instruction, as has
Myers. The context for the authors’ interest in evaluation, the
topic of this article, is the lack of clarity as to what constitutes sound
evaluation of distance education courses. Following on Clarke’s
advice to use proven evaluation methods (cited on p. 431), the authors
take Kirkpatrick’s four steps to measuring training effectiveness
(cited on p. 432) as the starting point. The authors begin with Kirkpatrick
because his model is well-known in the corporate training world, and as
the authors note, is used by the vast majority of companies tracked by
the American Society for Training and Development for benchmarking. The
authors seek to understand how (or if) Kirkpatrick’s four levels
of evaluation – reaction, learning, behavior, and results –
make sense in higher education. Ultimately, the authors dismiss two of the
four levels (behavior and results) as too workplace-specific and, as such,
not applicable to higher education. They then turn their attention to a
single level, reaction, because participant reaction is critical to overall
program evaluation.
The authors go on to describe their review of the literature and the discovery
of some ten published course evaluation instruments administered exclusively
to students (non-faculty and non-staff participants) in computer-mediated
higher education courses. Although the majority of the evaluation instruments
were intended for post-course evaluation, the authors found several common
threads of questions across the pre-, mid-, and post-evaluation instruments
examined, including familiarity with the required technology and limitations/restrictions
to participation in online discussions. The authors conclude that, for
distance education course designers to be more effective and for institutions
to make informed decisions about the value of distance education programs,
these evaluation instruments must be modified to include more questions
about online participation, familiarity with the various aspects of the technology,
comparison to the face-to-face classroom experience, and the degree of
student involvement in discussion groups and online seminars.
Discussion
Berge and Myers’ assessment of the 10 evaluation instruments is grounded
in cognitivism. According to Saettler (1990), cognitivism views education
as a process in which the learner is an active participant. Effective
learning depends largely on what the learner knows and on the active cognitive
processing that takes place during instruction. Berge and Myers also contend
that education is a process whose primary purpose is to “impart”
knowledge and develop the way learners use their mental faculties
(p. 440). Since education is not primarily concerned with job performance,
only student reaction to a course/program and the extent to which learning
took place can be measured. Behavior in “real world” situations
and results – levels 3 and 4 of Kirkpatrick’s model –
are, in the authors’ view, irrelevant to higher education. That is why the
authors focus exclusively on strengthening level 1 (reaction) evaluation.
It is true that college/university course content, particularly at the
undergraduate level in liberal arts, is not performance-based in the way
that the business world would define performance. However, there are a
variety of soft skills that are highly valued in the “real world”
and that can be acquired in higher education courses and measured. For
example, cooperative learning - an instructional process in which small,
intentionally-selected groups of students work interdependently on a well-defined
learning task while the instructor serves as a facilitator/consultant
(Cuseo, 1992) – has been used successfully in computer-mediated instruction
for several years. The collaborative process is nearly identical to the
team building and teamwork processes essential to the corporate environment.
Why, then, shouldn’t we consider the evaluation of cooperative
projects – which receive a group grade as well as an individual
grade for each participant’s contribution – as an evaluation
of performance-based behavior as defined by Kirkpatrick? Further, student
retention affects the institution’s bottom line. If students do
not enjoy the online experience, they will avoid (where possible) that
institution’s distance education courses, adversely affecting the
institution’s revenue stream. Why, then, shouldn’t we consider
student satisfaction and willingness to take another distance education
course at the institution as an evaluation of business results as defined
by Kirkpatrick?
Berge and Myers are correct in noting that pre-course evaluation of learner
familiarity with technology, background, and expectations is essential
in helping instructional designers improve future programs. That recommendation
is as applicable to on-ground courses as it is to online courses. However,
the lack of immediate feedback and the absence of non-verbal cues in the
online course format make detailed and early input into the design process
critical for online courses. That is why the authors’ apparent agreement
with Clark’s contention that there is no pedagogical difference
between online and offline instruction is difficult to accept. Even a casual
review of the literature (Feenberg, 1989; Khan, 1997; Cyrs, 1997) shows that
successful online courses require a very different pedagogy from that of
face-to-face courses. Although there may be “no significant difference”
in the quality and quantity of what is learned online versus on-ground, the
methods of instruction (should) vary.
Conclusions
Research into evaluation instruments for distance education courses is
ongoing, just as the authors recommend. For example, the University of
Wisconsin’s Student Assessment of Learning Gains (SALG) instrument
(2001) probes each of the areas recommended by Berge and Myers and can be
used at any point during a course for formative feedback, as well as at
the end for program improvement. The University of Idaho College of Engineering
has produced an online guide to evaluation for distance educators (2001)
that focuses on a blend of qualitative and quantitative instruments to
measure student perceptions of the distance education experience and to obtain
insights for design development and improvement. Interestingly, these
efforts do seek to measure behavior and results – Kirkpatrick’s
levels 3 and 4 – because they are deemed essential to assessing
the quality and worth of distance education offerings.
References
Cuseo, J. (1992). Cooperative learning vs. small-group discussion and
group projects: The critical differences. Cooperative Learning and College
Teaching, 2(3), 5-10.
Cyrs, T.E. (Ed.) (1997). Teaching and learning at a distance: What it
takes to effectively design, deliver and evaluate programs. San Francisco:
Jossey-Bass Publishers.
Feenberg, A. (1989). The written world: On the theory and practice of
computer-conferencing. In R. Mason & A. Kaye (Eds.), Mindweave: Communication,
computers and distance education. Oxford: Pergamon Press.
Khan, B.H. (Ed.). (1997). Web-based instruction. Englewood Cliffs: Educational
Technology Publications.
Saettler, P. (1990). The evolution of American educational technology.
Chapter 11: Cognitive science and educational technology: 1950-1980. Englewood,
CO: Libraries Unlimited.
Student Assessment of Learning Gains (SALG). (2001). (Internet resource).
University of Wisconsin. Located at http://www.wcer.wisc.edu/salgains/instructor/default.asp.
Guide #4: Evaluation for Distance Educators. (2001) (Internet resource).
University of Idaho College of Engineering. Located at http://www.uidaho.edu/evo/dist4.html.