Mixed Method Validity Threats Memo 3

While the purpose of Memo 3 is to address validity and strategies to mitigate validity threats in my study, I believe that I first need to modify portions of my research matrix from Memo 2. My questions, data collection methods, and analyses require more explanation, clarification, and justification. This memo therefore also provides further explanation and justification of the chosen methods, which were missing from Memo 2. Some of the material in this memo reiterates information from Memos 1 and 2; this is solely intended to facilitate my thinking about the design process of my study.

A number of studies in recent years have focused on the features, strategies, and delivery options that foster successful student learning in online environments (Swan, Shea, Fredericksen, Pickett, & Maher, 2000; Smith & Rose, 2002; Singleton, Quilter & Weber, 2004; Song, 2005). Many of these studies focus on students' perceptions of their own learning to determine which features of online learning are most important. As a result, the instructional quality of online courses can be assessed in terms of design, content and objectives, interactivity and collaboration, higher order thinking, articulation of expectations, and evaluation. Now that research has established what a quality online course should look like, it is time to investigate the extent to which these quality features are used in practice.

My goals in this study are to understand how online environments, delivery methods, and features are used in online coursework offered at the university level, to determine the extent to which quality features as cited in the literature appear in these online courses, and to assess the impact of these courses on student learning. The hope for this study is to provide evidence that attention needs to be given to how faculty use online learning in their practice. While the aim of this report will be to heighten awareness of best practices in the use of online learning, I must pay close attention to language and adopt a non-threatening posture throughout.

Data Collection:
Since the purpose of this study is to better understand the use of online learning and its effectiveness on student learning in a university setting, a large sample is needed. Surveys are the best method to elicit perceptions from a large number of people. I will administer two surveys. The first will be used to identify online course use across the University; it will also identify the sampling population for the second survey. The sampling population for this second survey causes some concern. An electronic survey would reach more participants; however, the return rate may be very low. I need to consider narrowing my survey population to those courses that have some kind of face-to-face component, such as a hybrid or blended environment. This decision may exclude many online courses and the availability of participants who would be able to provide data. However, according to the literature, the majority of online courses, or courses offering online aspects, are conducted in a hybrid environment. I would then be investigating a more common use of online environments, but I also run the risk of interfering factors caused by the presence of a face-to-face component. My decision about administering the second survey may not come until the results of the University-wide survey to identify online course use are analyzed.

Research Question 1: How is online learning used at the University level?
An initial University-wide survey will be conducted to identify uses of online learning. An email will be sent to all University faculty listed on courses, inquiring about the use of online learning practices in their coursework. This survey will require a simple reply to the message. I could also use course catalogs and course descriptions to identify online courses, as well as contact technology personnel within each college and department to make inquiries about the use of online learning. A potential validity threat here is a poor response rate and/or the exclusion of the use of online learning in a particular area. I may not have a good picture of online learning across the University because of a poor response rate. I may need to enlist the help of senior faculty and the Provost's office to act as gatekeepers.

Research Question 2: What are students' perceptions about the quality of their learning in online environments?
I will construct survey questions to determine the extent to which quality course criteria are present or absent in current online courses, as well as student perceptions of the impact of the presence or absence of these quality criteria on their own learning. The survey will collect background information such as age, gender, past online course experience, and academic level. For research question 2a (What features of quality online courses are recognized as present or absent by students?), the survey will use an ordinal scale in one column to determine the extent to which online features are used (example: Discussion boards were used in this course: to a very great extent, to a great extent, to some extent, to a small extent, not at all). To address research question 2b (What are student perspectives about the effectiveness of these features in their learning?), an ordinal scale in another column will determine student perceptions of the extent to which the presence or absence of each feature impacted their learning (example: Discussing topics with my peers promoted my thinking on the topic: to a very great extent, to a great extent, to some extent, to a small extent, not at all).
I have also considered using interval scales because these scales “provide the most variation of responses and lend themselves to stronger statistical analysis” (Creswell, 2005, p. 168). Therefore the questions would be designed for responses such as: strongly agree, agree, disagree, and strongly disagree. The online course quality features will be taken from the literature on online course quality and best practices in online course design and delivery. By adopting the online course quality features from the literature, my bias as a designer and instructor in online learning environments should not affect the development of my survey questions.

Data for these questions will be analyzed using quantitative methods, specifically descriptive statistics to determine the frequency of use.
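The frequency analysis described here could be carried out with standard statistical software; as a minimal illustration, the sketch below (in Python, with entirely hypothetical response data and variable names) shows the kind of descriptive tally intended for each survey item:

```python
from collections import Counter

# Hypothetical ordinal responses to one survey item, e.g.
# "Discussion boards were used in this course: ..."
responses = [
    "to a very great extent", "to a great extent", "to some extent",
    "to a great extent", "not at all", "to some extent", "to a great extent",
]

# Descriptive statistics: frequency and percentage of each response category
counts = Counter(responses)
total = len(responses)
for category, n in counts.most_common():
    print(f"{category}: {n} ({n / total:.0%})")
```

In practice each item on the survey would produce one such frequency table, indicating the extent to which a given feature was present across the sampled courses.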

There is a possible validity threat to my conclusions stemming from the fact that my researcher-created survey has not been tested on a wide scale. I hope to address the clarity of the questions by asking designers and instructors of online courses to review my survey and provide feedback. I will also administer the survey to some of my graduate students from a summer online course I teach. These students would not be eligible for the research study, but their feedback will give me an indication of how the survey will be received by other students.

I will also combine my survey with an established survey reported in the literature and frequently used by me and other colleagues to assess student perceptions about their online learning experiences. This survey, the Web-Based Learning Environment Inventory (WEBLEI) (Chang & Fisher, 2001), does not meet my needs as a stand-alone instrument: it will not specifically determine the presence or absence of a particular quality feature, but it does survey student perceptions across four domains that incorporate the online quality course features reported in the literature. The WEBLEI was designed to capture students' perceptions of web-based learning environments, assessed according to four scales. The first three scales are adapted from Tobin's work on Connecting Communities Learning (CCL), and the final scale focuses on information structure and the design aspect of the web-based material (Chang & Fisher, 2001). The WEBLEI considers web-based learning effectiveness in terms of a cycle that includes access to materials, interaction, students' perceptions of the environment, and students' determinations of what they have learned.

Chang & Fisher (2001) describe these four scales and the characteristics of the learning environment they measure as:
Access: convenience, efficiency, autonomy
Interaction: flexibility, reflection, interaction, feedback, collaboration
Response: enjoyment, confidence, accomplishment, success, frustration, tedium
Results: clear objectives, planned activities, appropriate content, material design and layout, logical structure

The instrument itself consists of 31 questions related to the participant's learning in a web-based environment. Participants are asked to indicate how often the factor in each question occurs, answering each statement on a Likert scale with the following choices: 5 – Always; 4 – Often; 3 – Sometimes; 2 – Seldom; 1 – Never. The survey is divided into four sections, each addressing a different scale. Thus, data concerning the participants' responses for each of the scales can be tallied.
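The per-scale tallying described above amounts to summing or averaging the Likert scores within each section. A minimal sketch (with made-up item counts and scores; the WEBLEI's actual 31 items are distributed differently across its sections):

```python
# Hypothetical Likert responses (5=Always ... 1=Never) from one participant,
# grouped by WEBLEI scale. Item counts here are illustrative only.
responses = {
    "Access":      [5, 4, 4, 5],
    "Interaction": [3, 4, 2, 3, 4],
    "Response":    [4, 4, 3, 5],
    "Results":     [5, 5, 4, 4],
}

# Tally each scale: total score and mean item score
for scale, items in responses.items():
    total = sum(items)
    mean = total / len(items)
    print(f"{scale}: total={total}, mean={mean:.2f}")
```

Aggregating these scale scores across participants would then give the per-domain perception profile the analysis relies on.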

I believe that inclusion of this established survey will provide me with information on students' overall perceptions of their experience without tying the effectiveness of a specific feature to student learning. I can compare the results of the WEBLEI with the results of the researcher-created survey to determine any differences, with the hope that the two do not differ significantly; that is, student perceptions about the effectiveness of their learning and the presence or absence of quality features should not differ significantly from their overall perceptions in each of the domains surveyed in the WEBLEI. I can group my survey questions on student perspectives of learning according to the WEBLEI domains and make the comparison for validity purposes. In addition, the WEBLEI results will provide an overall picture of student perceptions of learning, which will add information to my conclusions about students' online learning experiences at the University.

Survey questions are written in the words of the researcher and may not completely represent student perceptions. To give students the opportunity to describe their perceptions in their own words, an open-ended question will be included in the survey. This open-ended question will allow me to compare responses with the closed-ended survey questions to check for agreement or discrepant information. These responses may also produce unexpected information, such as influences of factors other than the online quality course features on students' perceptions of their learning. The responses from the open-ended question will be analyzed using qualitative methods: developing categories and segmenting the text, coding similar ideas, and developing connections between the categories. The hope is that these connections will support the descriptive statistical data about the presence or absence of quality features and their impact on student perceptions of their learning.

Approximately 15 interviews will be conducted, with interviewees randomly selected from the surveyed population. These interviews will be used as a member-checking tool and to triangulate the results from the surveys. They will also provide me with the opportunity to explore discrepant information or to dig more deeply into what students believed about their online learning experiences. Participants who responded with discrepant information on the open-ended portion of the survey will also be asked to be interviewed.

As part of research question 2a (What features of quality online courses are recognized as present or absent by students?), I would also like to investigate whether there is a relationship between the presence of a quality feature in an online environment and student perception of the effectiveness of that feature in their learning. In addition, I would like to know whether there is a relationship between the absence of the quality feature and student perception of the effectiveness of that feature in their learning. Here the quality features can be categorized into domains, much like the WEBLEI domains, and the relationships examined through correlation testing. I believe that this component of the study would provide information that either supports or downplays the importance of certain features of online courses in student perceptions of their learning in online environments as reported in the literature.
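Because both columns of the survey are ordinal, a rank-based correlation such as Spearman's rho would be an appropriate choice for the correlation testing mentioned above. The sketch below (Python, with entirely hypothetical paired scores coded 1–5; in practice a statistics package would be used) illustrates the computation, including the average-rank handling of ties that ordinal data requires:

```python
# Hypothetical paired ordinal scores (coded 1-5) for one quality feature:
# extent of presence vs. perceived effectiveness for learning.
presence = [5, 4, 4, 3, 2, 5, 1, 3]
effectiveness = [4, 4, 5, 3, 2, 5, 2, 3]

def average_ranks(values):
    """Assign average ranks, handling ties (needed for ordinal data)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: the Pearson correlation of the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

rho = spearman(presence, effectiveness)
print(f"Spearman rho: {rho:.2f}")
```

A rho near +1 would suggest that students who report a feature as strongly present also rate it as effective for their learning; significance testing of rho would still be needed before drawing conclusions.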

In the process of thinking about validity threats to my study, using the matrix as a framework for this thinking, I quickly saw that the third question in my original matrix (How does online course quality compare with student perception of quality learning?) was not a valid question. Despite the feedback I received about this question, it wasn't until I reworked and revised my matrix that I saw that this question did not fit my goals and therefore was not valid.

This process also brought to light many concerns in my design that I still need to address, such as the best method to administer my surveys, the relationships I need to form in order to have access to participants, and whether there are other quantitative methods I could employ to triangulate my data. I also believe that I need to further investigate the use of the WEBLEI. I have used this survey in the past, and it has never revealed statistical significance, which is why I would like to use it in conjunction with my own survey. It is possible that, since all of my online courses contain the quality features cited in the literature, my students report that they have learned in a high-quality course. Since my focus in this study is on online learning environments developed and/or used by faculty who may not have experience in online learning and teaching, and by students who may not have extensive backgrounds in online learning and/or design, I believe it will be interesting and informative to see how the WEBLEI survey fares within this population.

By acknowledging possible validity threats in the context of the matrix, I was able to see more clearly the benefits of employing a mixed methods approach to address validity issues as well as to produce different kinds of data that might prove useful in the investigation of students' perspectives about their learning in these online courses.

References
Chang, V., & Fisher, D. (2001, December). The validation and application of a new learning environment instrument to evaluate online learning in higher education. Paper presented at the meeting of the Australian Association for Research in Education Conference, Fremantle, Australia.

Creswell, J. (2005). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research. New Jersey: Pearson.

Quilter, S. & Weber, R. (2004). Quality assurance for online teaching in higher education: Considering and identifying best practice for E-learning. International Journal on E-Learning, 3(2), 64-73.

Smith, A. & Rose, R. (2002). Essential elements: Prepare, design, and teach your online course. In G. Richards (Ed.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2005, 2723-2724. Chesapeake, VA: AACE.

Song, H. (2005). Improving the quality of online courses using CMS: Findings and implications. In G. Richards (Ed.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2005, 2408-2418. Chesapeake, VA: AACE.

Swan, K., Shea, P., Fredericksen, E., Pickett, A., & Maher G. (2000). Course design factors influencing the success of online learning. In Proceedings of 2000, 513-518. Chesapeake, VA: AACE.