Online learning has emerged
from its infancy to a dynamic age of adolescence. According to US News
and World Report, enrollment in online courses increased
by almost 20 percent in 2004 over the previous year; 11 percent of postsecondary
students will take at least one course online. Additionally, over 90 percent
of public colleges offer at least one online course (Boser, 2004). It is estimated
that by 2005, the E-learning market will top $4 billion (Boser, 2004). Driven
by convenience, economics, and flexibility, online learning has become an integral
part of the higher education landscape. A variety of online courses have been
designed and offered using new and emerging online communication tools but the
pedagogical frameworks were little different from those used in traditional,
face-to-face course offerings. Only recently has the research focus turned to
the design of online courses and serious thought given to fostering meaningful
communication among students and between instructors and students.
The initial focus of online learning research has been on the tools with
which users interact. The online discussion board is a tool that gives
users the opportunity
to communicate asynchronously with members of a group. The effective use of discussion
boards in online courses relies on robust communication and interactions between
the members of a group. Collaborative activities, shared goals, and common tasks
provide the context for interaction, but without a strong working relationship,
even the best designed courses will prove ineffectual. Therefore, a greater understanding
of the dynamics of online interactions is necessary in order to provide the optimum
learning environment for our students.
Recent research has addressed student perceptions of their online learning
experience. However, these studies have produced mixed results due
to the diversity of student
characteristics that are brought to the teaching-learning process (Pena-Schaff,
Altman, & Stephensen, 2005). In a meta-analytical study of research on distance
education, Zhao, Lei, Yan, Lai, and Tan (2005) recommend that future studies
examine learner characteristics, such as gender, study habits, learning styles,
learning environment, access to resources, experiences with distance learning,
and technology proficiency, which may interact with learning outcomes. Also, Zhao
et al. (2005) reported that one of the difficulties in determining factors which
make a difference in online learning is that not all online courses are equal.
Design, content, implementation, and student composition vary from course to
course. With this in mind, it becomes important to study the relationships between
factors and attitudes within a particular course context in order to find meaning
and practical relevance.
One example of an online course is the George Mason University course
entitled Web-Based Learning. The objective of this graduate level course
is to introduce
practicing teachers to a variety of activity structures found on the Internet
and to promote a model for teaching and learning in an online environment. The
course design is based on instructional strategies supported by research. The
course has been implemented over the years using three different delivery models.
These models include an expert mentor model, in which the student interacts one
on one with an expert mentor throughout the course; a peer-facilitated model,
in which students work collaboratively with peers to complete and assess the
assignments; and the traditional instructor-led model, in which student
work is directed and assessed by a faculty member.
The purpose of this study is to examine student perceptions in a specific
web-based learning course and to see if these perceptions relate to gender differences,
teaching level, and attitudes towards delivery models. Three questions will be
addressed by this research:
1. Are there gender differences in students’ perceptions of online learning
and do these differences depend on teaching level?
2. Are measures of accessibility, view of course design, and sense of enjoyment
predictors of interaction?
3. How does each of the delivery treatments compare to the instructor-led control
group for measures of student perceptions of Web-Based Learning?
Method
Sample
In the summer semester of 2005, 137 students
enrolled in a graduate level course focusing on Web-based Learning. This
course was part
of a Master’s degree program in Curriculum and Instruction with an emphasis
on the integration of technology into the classroom. Of that number, 57
of the students were male and 80 were female. The students were from a number
of different, widely distributed school divisions. Participants in this study
were all practicing classroom teachers from various grade levels and content
areas representing elementary, middle, and high school. They ranged in age from
23 to 60 years old, with anywhere from 2 to 28 years
of experience. Table 1 provides participant demographic information.
As part of their coursework related to Web-Based Learning, a program
of study was designed to provide opportunities for groups of students
to participate in
online discussions centered on the content and collaborative projects. In addition,
three delivery models were used for instruction. One model was the Expert Mentor
model. In this model, students worked through the Web-Based Learning course one-on-one
with an expert mentor, who had graduated from the program and who had taken the
course previously. A second model was the Peer Facilitated model, in which students
were randomly assigned to groups. Each member of the group took a turn at facilitating
and leading the group through the activities within the course. They assessed
their own projects and a faculty member was available to them as the course Moderator.
In the third delivery model, the course was conducted online with a faculty Instructor
teaching the group. In the Peer Facilitated model and the Instructor-led model, Blackboard,
a course management system, was used to manage the course and the discussion boards.
For the Expert Mentor group, the course was accessed through Blackboard, but
the students communicated individually with the expert mentor through email discussions.
All material used in the course was identical across the three delivery models.
Instrument
In order to assess students’ attitudes and beliefs concerning their learning
and experiences during an online course, the Web-Based Learning Environment Inventory
(WEBLEI) (Chang & Fisher, 2001) was given post treatment. This instrument
was developed to assess students’ perceptions of online learning.
This instrument incorporates students’ usage patterns (for example, students’ access,
convenience of materials), students’ learning attitudes (for example, students’ participation
and enjoyment), students’ learning process (for example, level of activity
and interactivity among students and between students and the lecturer), and academic
factors (for example, scope, layout, presentation, and links of the web-based
learning materials) (Chang & Fisher, 2001).
The WEBLEI instrument was designed to capture students’ perceptions of
web-based learning environments. The instrument assesses those perceptions according
to four scales. The first three scales are adapted from Tobin’s work on
Connecting Communities Learning (CCL) and the final scale focuses on information
structure and the design aspect of the web-based material (Chang & Fisher,
2001). The WEBLEI considers Web-based learning effectiveness in terms of a cycle
that includes access to materials, interaction, students’ perceptions of
the environment, and students’ determinations of what they have learned.
Chang and Fisher (2001) describe these four scales, and the characteristics
of the learning environment they measure, as follows:
Access: convenience, efficiency, autonomy
Interaction: flexibility, reflection, interaction, feedback, collaboration
Response: enjoyment, confidence, accomplishment, success, frustration, tedium
Results: clear objectives, planned activities, appropriate content, material design
and layout, logical structure
The instrument itself consists of 31 questions related to the participant’s
learning in a web-based environment. Participants are asked to indicate how
often the situation described in each question occurs. Participants answer each statement
on a Likert scale with the following choices: 5 – Always; 4 – Often;
3 – Sometimes; 2 – Seldom; 1 – Never. The survey is divided
into four sections – each addressing a different scale. Thus, data concerning
the participants’ responses for each of the scales can be tallied.
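The tallying of scale scores described above can be sketched in a few lines of Python; the item-to-scale grouping below is purely illustrative, not the published WEBLEI key.

```python
# Illustrative tally of WEBLEI scale scores for one participant.
# Likert coding from the instrument: 5 = Always ... 1 = Never.
responses = [4, 5, 3, 4, 5, 2, 4, 3]  # eight made-up item responses

# Assumed (hypothetical) grouping of items into two of the four scales
scales = {"Access": range(0, 4), "Interaction": range(4, 8)}

# Sum the participant's responses within each scale section
scores = {name: sum(responses[i] for i in idx) for name, idx in scales.items()}
print(scores)  # {'Access': 16, 'Interaction': 14}
```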
Statistical Analysis
The first question in the study was: Are there gender differences in
students’ perceptions
of online learning and do these differences depend on teaching level? In order
to address this question, a two-way analysis of variance (ANOVA) was performed
for each of the dependent variables, student perceptions of the ease of access
(Access), student perceptions of the degree of interaction (Interaction), student
perceptions of enjoyment (Response), and student view of course design and appropriateness
(Results) with the independent variables of gender and teaching level.
The second question was: Are measures of accessibility, view of course
design, and sense of enjoyment predictors of interaction? A multiple
regression was performed
with the student perceptions of the degree of interaction as the dependent variable
and the independent variables of Access, Response, and Results as the predictors.
In addition, the data were reviewed for outliers using the residual data from
the multiple regression analysis.
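As a sketch, the regression and residual-based outlier screen described in this paragraph might look like the following NumPy code; the data and effect sizes are synthetic, not the study's values.

```python
import numpy as np

# Synthetic illustration of regression model 1: Interaction predicted by
# Access, Response, and Result (invented effect sizes, not study values).
rng = np.random.default_rng(0)
n = 137  # matching the study's sample size
access = rng.normal(25, 4, n)
response = rng.normal(30, 5, n)
result = rng.normal(28, 3, n)
interaction = 2.0 + 0.3 * access + 0.6 * response + rng.normal(0, 2, n)

# Ordinary least squares fit with an intercept column
X = np.column_stack([np.ones(n), access, response, result])
coef, *_ = np.linalg.lstsq(X, interaction, rcond=None)

# Residual-based outlier screen: flag cases whose standardized
# residual exceeds 3 in absolute value
resid = interaction - X @ coef
z = resid / resid.std(ddof=X.shape[1])
outliers = np.flatnonzero(np.abs(z) > 3)
```

A residual screen like this mirrors the outlier review described above; statistical packages also report case-influence diagnostics such as Cook's Distance directly.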
The third question addressed in this study was: How do each of the delivery
treatments compare to the instructor-led control group for measures
of student perceptions
of Web-Based Learning? A multiple regression for analysis of variance using dummy
coding was performed to test the null hypotheses that the mean of each treatment
group was equal to the mean of the control group. Three tests were performed
for each of the dependent variables for measures of student perceptions in the
Web-Based Learning course. These dependent variables were Access, Interaction,
and Response. The independent variables were the dummy coding schemes, x1 and
x2, which compared the treatment of the Expert Mentored model with the control
group, Instructor-led model and the treatment of the Peer Facilitated model with
the control group, Instructor-led model.
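The dummy-coding scheme x1 and x2 can be illustrated with a small NumPy example; the group labels follow the study's design, but the scores are invented.

```python
import numpy as np

# Groups: 0 = Instructor-led (control), 1 = Expert Mentor, 2 = Peer Facilitated
group = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
y = np.array([20.0, 22.0, 21.0, 24.0, 23.0, 25.0, 19.0, 21.0, 20.0])

x1 = (group == 1).astype(float)  # Expert Mentor vs. control
x2 = (group == 2).astype(float)  # Peer Facilitated vs. control
X = np.column_stack([np.ones_like(y), x1, x2])

b, *_ = np.linalg.lstsq(X, y, rcond=None)
# b[0] is the control-group mean; b[1] and b[2] are each treatment-group
# mean minus the control mean, so testing b[1] = 0 and b[2] = 0 tests the
# null hypotheses that each treatment mean equals the control mean.
```

With these invented scores, b comes out to [21.0, 3.0, -1.0]: a control mean of 21 and treatment-minus-control differences of +3 and -1.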
Results
The first research question seeks to determine if there are gender differences
in student perceptions of their access to technology and the Web-Based Learning
course, the degree of interaction, their sense of enjoyment, and their view of
course design. An ANOVA was performed for each of the dependent variables, which
were each of the measures of student perceptions. The results in Table 3 show
that in student perceptions of Access, there is a statistically significant main
effect for gender, F(1, 131) = 57.52, p < .001, and for teaching level, F(2, 131)
= 3.58, p = .031. In addition, the results show that there is a statistically significant
interaction (G x TL), F(2, 131) = 10.00, p < .001. The post hoc test results
in Table 5 show that elementary school teachers have a statistically significant
(p = .002) higher perception of online access than middle school teachers by
at least 1.13 points but not more than 6.29 points. Figure 1 shows that male
middle school teachers had the lowest perception of online access and that female
middle school teachers had a higher perception of online access than their male
counterparts. In addition, female teachers had a higher perception of online
access overall.
In terms of gender differences in student perceptions of their degree
of online interaction, Table 4 shows that there is a statistically
significant main effect
for teaching level, F(2, 131) = 5.89, p = .004. The results in Table 6 show that
in the post hoc test, elementary school teachers had a statistically significant
(p = .002) higher perception than middle school teachers for the degree of online
interaction by at least 1.51 points but not more than 7.75 points.
The results in Table 7 show that there were no statistically significant
main effects for gender or teaching level in student perceptions of
their sense of
enjoyment in the course (Response), nor was there any statistically significant
interaction between gender and teaching level. Additionally, the results in Table
8 show no statistically significant main effect for gender or teaching level
in student views on the appropriateness of the course design (Results), and there
was no interaction between gender and teaching level.
The results in Table 9 show that at least one outlier could be expected
in the data for the dependent variable as well as in the independent
variables. The
Cook’s Distance revealed that there were no influential data points. The
identified outliers were removed, and the results for regression models 1 and 2
are reported with the outliers excluded.
The second research question asked if student perceptions of access,
sense of enjoyment, and view of the course design were predictors of
the degree to which
a student interacted throughout the course. The multiple regression model was
significant, p < .001; however, the results in Table 10 show that only the predictors
of Access (p = .001) and Response (p < .001) were statistically significant predictors
of Interaction. Therefore, the predictor, Results, was eliminated from the model
and a second regression was performed using only Access and Response as the predictors
of Interaction.
In the second multiple regression model, the results show that the model
was statistically significant, p < .001, with R2 = .408. This indicated
that 40.8%
of the variance in the degree of interaction was explained by the predictors,
Access and Response. Table 11 shows that the predictor Access is a statistically
significant predictor (p = .008) over and above Response for Interaction, and
the predictor Response is a statistically significant predictor (p < .001) over
and above Access for Interaction. Figure 3 shows the regression equation generated
by the model. Ranking the predictors shows that Response has more importance
in predicting Interaction (β = .516) than Access (β = .192). The
magnitude of each predictor was calculated by squaring the part correlations. From
the results, ry(1.2) = .189 and ry(2.1) = .507, and therefore, r2y(1.2) = .036
and r2y(2.1) = .257. Therefore, 3.6% of the variance in the degree of Interaction
is uniquely explained by Access and 25.7% in the degree of Interaction is uniquely
explained by Response. The tolerance for each predictor indicated little overlap
of the predictors with each other, Access Tolerance = .967 and Response Tolerance
= .967.
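The unique-variance arithmetic in this paragraph can be checked directly; the values below are the ones reported above.

```python
# Reported values from the second regression model
r2_model = 0.408        # R^2 for Access + Response predicting Interaction
part_access = 0.189     # part correlation ry(1.2)
part_response = 0.507   # part correlation ry(2.1)

# Squared part correlations give the variance uniquely explained
unique_access = part_access ** 2      # ~0.036 -> 3.6%
unique_response = part_response ** 2  # ~0.257 -> 25.7%

# Whatever R^2 remains is variance the two predictors explain jointly
shared = r2_model - unique_access - unique_response
print(round(unique_access, 3), round(unique_response, 3), round(shared, 3))
# 0.036 0.257 0.115
```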
The third research question asks: How does each of the delivery treatments
compare to the instructor-led control group for measures of student
perceptions of Web-Based
Learning? The Expert Mentor treatment group was compared to the Instructor-led
control group and the Peer Facilitated treatment group was compared to the Instructor-led
control group. The results in Table 12 show that there is no statistically significant
difference in the means of the Expert Mentor treatment group when compared to
the control group and there is no statistically significant difference in the
means of the Peer Facilitated treatment group when compared to the control group.
Therefore, the three delivery models appear equivalent in terms of student perceptions
of ease of access, degree of interaction, and sense of enjoyment.
Discussion
Online learning has added a new dimension to the identity of the educator.
Whereas in face-to-face classrooms the instructor can directly observe student
characteristics, the online environment can obscure some of those differences.
Students’ perceptions of their online learning environment can be very important
as educators work to develop useful and successful virtual environments for
their students.
In some respects the results of the study were surprising. The higher
perceptions among females of the accessibility of online materials, courses,
instructors, and online classmates contradict reports in the literature about
the digital divide between males and females. The lower perceptions of access
and interaction among male middle school teachers might reflect the small sample
size for male teachers; if a larger sample were taken, these differences might
not be found. Alternatively, there may be characteristics among this group of
males that prevented them from accessing online materials.
The course is traditionally held during the summer months when many teachers
take on summer school duties, and it may be that the male participants in this
study were the primary income providers in their respective families. Their online
experiences may have been more limited as well. Therefore additional characteristics
such as availability and commitment to online courses, as well as an understanding
of previous online experiences of the participants would be additional variables
to study. It would be beneficial to interview the participants to discover why
they believed these differences in perception of access exist.
There was no surprise, however, in the fact that ease of access and sense
of enjoyment about the course were predictors of the degree of interaction.
When
a student believes he or she can access the course, the materials, the participants,
and the instructors without problems, it allows the student to have more opportunities
to communicate within the course. With a sense of accomplishment and enjoyment,
students are more likely to participate, communicate and interact with the other
members of the group.
Finally, the results of the study showed that the delivery model used to implement
the course did not matter. This is of particular importance because it offers
the instructor several choices of model for delivering the course to students. At
the higher education level, universities often rely on adjuncts to support courses,
especially those offered in the summer when regular faculty take on other responsibilities.
The study shows that the use of expert mentors does not change the student perceptions
of Web-Based Learning in terms of access, enjoyment or degree of interaction.
This study provided insight into how two particular student characteristics,
gender and teaching level, might relate to student perceptions of online learning.
It also showed that online courses should be designed so that a student’s
sense of accomplishment and ability to access the technology are considered in
order to promote interaction. Finally, as long as the delivery model is framed
not only in good pedagogy but also using instructional strategies developed through
research in online learning, the instructor can expect student perceptions to
be the same between models. This study focused on one web-based learning course.
It is recommended that online course designers analyze their own courses in terms
of student perceptions and characteristics.
References
Boser, U. (2004). Working on what works best. US News and World Report.
Retrieved April 28, 2006, from
http://www.usnews.com/usnews/edu/elearning/articles/03good.htm
Chang, V., & Fisher, D. (2001, December). The validation and application
of a new learning environment instrument to evaluate online learning
in higher education. Paper presented at the meeting of the Australian
Association for Research in Education Conference, Fremantle, Australia.
Pena-Schaff, J., Altman, W., & Stephensen, H. (2005). Asynchronous online
discussion as a tool for learning: Students’ attitudes,
expectations, and perceptions. Journal of Interactive Learning Research,
16, 409-430.
Zhao, Y., Lei, J., Yan, B., Lai, C., & Tan, S. (2005). What makes
the difference? A practical analysis of research on the effectiveness
of distance education. Teachers College Record, 107, 1836-1884.