Pretests are given to determine "what learners already know with regard to the objective(s) at hand" (Smith & Ragan, 1999, p. 95). With a pretest, an instructor can determine what the students need to learn and then help the students focus on the pieces of the instruction that have not previously been learned. A pretest might be given before a lesson starts or as a way to gain students' attention and provide them with the objective(s) of the lesson.
Posttests are usually given toward the end of a lesson. Posttests "will assess whether the learner can achieve both the enabling objectives and the terminal objectives of a lesson" (Smith & Ragan, 1999, p. 95). By testing enabling as well as terminal objectives, the teacher has more information about where learning has "gone wrong." Items on a posttest should differ from the items on the pretest.
Smith, P., & Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons.
According to Nitko (2001), "A performance assessment (1) presents a hands-on task to a student and (2) uses clearly defined criteria to evaluate how well the student achieved the application specified by the learning target" (p. 240). During a performance assessment, students must apply their knowledge and skills from multiple areas to show they can perform a learning target.
Unlike short answer or multiple choice items used in other types of assessments that require indirect demonstration, performance tasks require direct demonstration of achievement of a learning target.
Nitko, A. (2001). Educational assessment of students (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.
Peer/self-evaluation assessment strategies
Peer and self-evaluation assessment strategies ask students to "reflect on, make a judgment about, and then report on their own or a peer's behavior and performance" (Alaska Department of Education & Early Development, 1996, Self and Peer-Evaluations, para. 1). Both performance and attitude can be evaluated with peer and self-assessments. Assessment tools for this type of evaluation might include sentence completion, Likert scales, checklists or holistic scales.
Alaska Department of Education & Early Development. (1996). A collection of assessment strategies. Retrieved September 22, 2002, from the Curriculum Frameworks Project Web site: http://www.educ.state.ak.us/tls/frameworks/mathsci/ms5_2as1.htm#selfandpeerevaluations
"A portfolio is a limited collection of a student's work that is used to either present the student's best work(s) or demonstrate the student's educational growth over a given time span" (Nitko, 2001, p. 254). A portfolio is a collection limited to only the work that best serves the portfolio's purpose, rather than a collection of all of a student's work. The pieces contained in a portfolio are carefully and deliberately selected.
Nitko, A. (2001). Educational assessment of students (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.
A person's score on a norm-referenced test (NRT) shows where the person stands relative to other people who have taken the test (Seels & Glasgow, 1990). The Scholastic Aptitude Test (SAT) is an example of an NRT because it shows how a person stands relative to other potential college students. Success is measured by how far ahead a person is of the other test takers. NRTs are usually used for selection purposes. NRTs are designed to measure relative standing in a group rather than mastery of a specific skill.
Seels, B., & Glasgow, Z. (1990). Exercises in instructional design. Columbus, OH: Merrill Publishing Company.
Criterion-referenced tests (CRTs)
Criterion-referenced tests (CRTs) are also referred to as content-referenced or objective-referenced tests. "A test is criterion-referenced when its score can be translated into a statement about what a person has learned relative to a standard; a CRT score provides information about a person's mastery of a behavior relative to the objective and reflects that person's mastery of one specific skill" (Seels & Glasgow, 1990, p. 147). A state's automobile driving test is an example of a CRT.
Typically a cutoff score is set, and those who meet or exceed that score pass the test. Any number of test takers can pass a CRT.
Seels, B., & Glasgow, Z. (1990). Exercises in instructional design. Columbus, OH: Merrill Publishing Company.
Achievement tests assess the knowledge, abilities and skills that are at the center of direct instruction in schools (Nitko, 2001). Achievement tests may be standardized or nonstandardized. Standardized tests are created by professional agencies and use the same materials and administration procedures for all students. Nonstandardized tests have not had the assessment materials tried out by a publisher, nor has any student-based data been collected concerning how well the test is functioning.
Nitko, A. (2001). Educational assessment of students (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.
Observation is usually used to make an informal assessment of student behaviors, attitudes, skills, concepts or processes (Alaska Department of Education & Early Development, 1996). Observations may be recorded through anecdotal notes, checklists, video, audio recordings or photos. Observations may be used to collect data about behaviors that are difficult to evaluate by other methods.
Sometimes learners are assessed in on-the-job situations. According to Smith and Ragan (1999), "Probably the best way to see if students have learned what we want them to learn at the necessary level is to take them into the real world and have them perform what they have been instructed to do" (p. 99). Rating scales and checklists can be used to record the quality of the process as the worker performs.
Alaska Department of Education & Early Development. (1996). A collection of assessment strategies. Retrieved September 22, 2002, from the Curriculum Frameworks Project Web site: http://www.educ.state.ak.us/tls/frameworks/mathsci/ms5_2as1.htm#observation
Smith, P., & Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons.
Interviews are used to get a better idea of students' "attitudes, thinking processes, level of understanding, ability to make connections, or ability to communicate or apply concepts" (Alaska Department of Education & Early Development, 1996, Interviews, para. 1). Interviewing consists of observing and questioning the students. Interviews can be both formal and informal and are a good tool for diagnosing students' strengths as well as needs.
Alaska Department of Education & Early Development. (1996). A collection of assessment strategies. Retrieved September 22, 2002, from the Curriculum Frameworks Project Web site: http://www.educ.state.ak.us/tls/frameworks/mathsci/ms5_2as1.htm#interviews
In addition to being an instructional strategy, simulations are also useful for assessment purposes, especially for assessing higher-order rule learning and attitude change (Smith & Ragan, 1999). Simulations can be delivered using print-based or interactive multimedia tools. A case study is an example of a print-based simulation. Case studies are often used to assess in fields such as management, law and medicine.
The use of personal computers is a common way to administer an interactive multimedia simulation assessment. By using computers, simulations can easily be administered to an individual or a group. Some of the more elaborate examples of simulation testing using multimedia include pilot and astronaut training (Smith & Ragan, 1999).
Smith, P., & Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons.
Essays are usually used as an assessment tool in two general situations (Nitko, 2001). The first is in subject areas like social studies, mathematics, science or history to evaluate how well "students can explain, communicate, compare, contrast, analyze, synthesize, evaluate, and otherwise express their thinking about several aspects of the subject" (pp. 184-185). The second is to evaluate students' ability to write in standard English with appropriate use of language and to write for various purposes, including exposition, persuasion and communication.
There are two basic varieties of essay items: restricted response and extended response. Restricted response items limit what the student is allowed to answer in both content and form, whereas extended response items give students the freedom to express their own ideas and organize those ideas in their own way (Nitko, 2001).
To help eliminate subjectivity in the evaluation of essay items, designers usually develop checklists, rating scales, model answers or use multiple graders to evaluate the exam (Smith & Ragan, 1999).
Nitko, A. (2001). Educational assessment of students (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.
Smith, P., & Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons.
Recall items are used to assess declarative knowledge objectives. In response to recall items, students are asked to reproduce what they were presented with during instruction, whether verbatim, paraphrased, or summarized (Smith & Ragan, 1999). Recall items are usually in a written format, such as short answer, fill-in-the-blank or completion items. These items place heavy demands on memory but call for relatively little higher-order reasoning.
Smith, P., & Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons.
In response to recognition items, learners are required "to recognize or identify the correct answer from a group of alternatives" (Smith & Ragan, 1999, p. 101). Declarative knowledge that has been memorized can be assessed with recognition items. Multiple choice, matching and true/false items can all be used as recognition items. These types of questions can be constructed in such a way as to require the use of higher cognitive skills. Learners may be asked to apply learned principles or concepts in order to recognize a correct answer.
Smith, P., & Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons.
With constructed answer items, learners are required to produce or construct a response (Smith & Ragan, 1999). Responses may take a written or performance-based format. Constructed answer items require more intellectual skill than recall items and demand more memory and cognitive strategies than recognition items because students receive fewer cues and the range of possible responses is less limited. Additionally, constructed answer items are often more closely aligned with real-life situations and usually provide a more valid assessment. Here is an example of a constructed response item:
What combination of shutter speed and aperture setting would lead to the same exposure as f/8 at 1/125 sec? (p. 102)
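As an aside, the equivalence this item asks about follows from the standard exposure value relation EV = log2(N²/t), where N is the f-number and t the shutter time in seconds: settings with equal EV admit the same total light. The formula and the specific equivalent setting below (f/5.6 at 1/250 s, one of several valid answers) are not from the source text; this is only a sketch of how such an answer could be checked.

```python
import math

def exposure_value(f_number, shutter_s):
    """EV = log2(N^2 / t); two settings with (nearly) equal EV
    yield the same exposure."""
    return math.log2(f_number ** 2 / shutter_s)

# Reference exposure from the item: f/8 at 1/125 s
ev_ref = exposure_value(8, 1 / 125)

# Opening the aperture one stop (f/8 -> f/5.6) while halving the
# shutter time (1/125 -> 1/250) trades light-gathering area against
# duration, so the total exposure stays the same.
ev_alt = exposure_value(5.6, 1 / 250)

print(round(ev_ref, 2), round(ev_alt, 2))  # values nearly coincide
```

The small residual difference comes from nominal f-stop labels (5.6 ≈ 8/√2) being rounded, which is why the comparison is approximate rather than exact.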
Smith, P., & Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons.
Formative assessment helps the instructor monitor student learning while instruction is in progress. Formative assessments are usually less formal than summative assessments and, although the instructor may keep a record of the results, they are not used to report official achievement progress, such as a letter grade (Nitko, 2001). Formative assessments may help diagnose individual students' learning needs and help to plan instruction.
Nitko, A. (2001). Educational assessment of students (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.
Summative assessment is used to evaluate students, as well as the instructor, after one or more units have been taught. Results from summative assessments are usually used to count toward a final grade (Nitko, 2001). Summative assessments are useful tools for reporting student progress to parents, school authorities, managers, etc.
Nitko, A. (2001). Educational assessment of students (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.