Assessments answer the questions “Where are we?” and “How will we know when we’re there?” Assessments include any activity designed to determine how well students have met learning goals or outcomes. They provide you and your students with both accountability for previous learning and guidance for subsequent learning.
Assessments are an essential element of pondering and proving. For students, as both learners and teachers, to accept responsibility in learning and teaching, they must have opportunities to try, test, prove, examine, and judge what they have learned. These assessments help learners and teachers identify misunderstandings and guide future preparation and instruction. They enable students to better love, serve, and teach one another (see Learning Model principles 4 and 5).
What should I assess?
Your assessments should focus on the learning outcomes you’ve outlined for the course. If you see discrepancies between your assessments and learning outcomes, one of the two should be adjusted. Assessment is only meaningful if you know where you are going. Begin with your outcomes and what kind of learning each represents (see Learning Outcomes Overview).
You should try your best to assess the full range of content, skills, and difficulty defined by your outcomes. Because testing everything exhaustively is rarely practical, you’ll usually have to pick and choose representative scenarios to test. Creating an assessment plan (e.g., a table listing your outcomes, the assessment activities for each, and the number of items) can help you map out how each outcome will be assessed sufficiently (see Assessment Planning).
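For example, a simple assessment plan for a hypothetical unit might look like the sketch below (the outcomes, activities, and item counts are illustrative, not prescribed):

```
Outcome                          Assessment activity            Items
-------------------------------  -----------------------------  -----
Define key terms and concepts    In-class quiz (MC/matching)     10
Apply a procedure to new data    Take-home problem set            4
Interpret and critique results   Short essay item on the exam     1
```

Even a rough table like this makes coverage gaps visible: an outcome with no row, or only one item, may not be sampled sufficiently.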
What type of assessments should I use?
In its simplest form, an assessment consists of a question or task, an observable response, and evaluation criteria. Assessments may include individual pondering, in-class activities, assignments, group interviews, exams, and simulations. They can be simple, such as an informal verbal question, or complex, like a formal semester-long simulation.
Although there are no simple rules for deciding what type of assessment to use, understanding the factors involved in each will help. To determine what kind of assessment to use in a given situation, first consider: the type of learning your outcomes represent, if and how it will be graded, and how authentic you need the assessment to be. These will help you design a fitting setting, medium, and response format to use. Consider the following:
- Settings: individual, group, in-class, take-home, interview, presentation, non-graded, self-graded, peer-reviewed, instructor graded, etc.
- Medium: mind, live interaction, paper, web, audio/video recording, etc.
- Response Formats: multiple choice, matching, context-dependent, short answer, essay/verbal response, performance, etc.
You will need to decide on what type of assessment to use for each outcome and how many items would be appropriate to sample the range of content and difficulty.
How do I create effective items?
The first step in writing effective items (questions and tasks) is to follow your plan for how many items of each type you will need (e.g., multiple-choice and matching items, context-dependent item sets, essay or performance items).
You can also create effective assessments by continually improving the ones you’ve got. Each time you use an assessment activity with students, keep track of how they did and what was difficult for them. Ask yourself if the assessment measured what you intended. Make any necessary modifications or notes to improve the activity before its next use. Item analysis statistics from the testing center can be used to help you identify multiple choice items that may need a closer look.
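Two statistics commonly reported in item analysis are the difficulty index (the proportion of students who answered an item correctly) and the discrimination index (how much better high-scoring students did on the item than low-scoring students). The sketch below is a hypothetical illustration of those two standard calculations, using the conventional top/bottom 27% grouping; it is not the testing center’s actual procedure.

```python
def item_statistics(results):
    """Compute difficulty and discrimination for one multiple-choice item.

    results: list of (item_correct, total_score) tuples, where item_correct
    is 1 if the student answered this item correctly, 0 otherwise, and
    total_score is the student's overall exam score.
    """
    n = len(results)
    # Difficulty index: proportion of all students answering correctly.
    difficulty = sum(correct for correct, _ in results) / n

    # Discrimination index: correctness rate in the top 27% of students
    # (by total score) minus the rate in the bottom 27%.
    ranked = sorted(results, key=lambda r: r[1], reverse=True)
    k = max(1, round(0.27 * n))
    upper = sum(c for c, _ in ranked[:k]) / k
    lower = sum(c for c, _ in ranked[-k:]) / k
    return difficulty, upper - lower

# Illustrative data: 10 students; stronger students tend to get the item right.
data = [(1, 95), (1, 90), (1, 88), (1, 80), (0, 75),
        (1, 70), (0, 65), (0, 60), (0, 55), (0, 50)]
difficulty, discrimination = item_statistics(data)
```

An item that is very easy or very hard (difficulty near 1.0 or 0.0), or that discriminates poorly (index near or below zero), is a good candidate for the closer look described above.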
- Group project interviews. Engineering students work in groups to design and produce a solution to a problem. The intended outcomes are assessed by evaluating both the group’s process and their solution through periodic interviews with the instructor, who grades students during each interview using a rubric (see Rubrics).
- Mix exam types. One effective approach to graded exams is to assess the bulk of your individual concepts and principles with multiple-choice and matching items (selected-response) while saving a few of the more holistic outcomes for papers, projects, etc. (performance assessments).
- Peer-reviewed projects. One good way to include authentic tasks (e.g., design plans, problem solutions, etc.) without creating an impossible grading workload for yourself is peer review. Create an effective grading rubric and then have two students or groups anonymously grade each other’s projects. Take the average of the scores and allow for an appeal process (see Peer Evaluation).
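The averaging-with-appeal approach above can be sketched in a few lines. This is a hypothetical illustration: the disagreement threshold that triggers an appeal is an assumption you would set yourself, not part of any prescribed policy.

```python
# Points of disagreement (on a 100-point rubric) that flags a project
# for instructor review instead of silently averaging. Assumed value.
APPEAL_THRESHOLD = 15

def peer_score(score_a, score_b):
    """Average two anonymous reviewers' rubric scores.

    Returns (final_score, needs_appeal): the averaged score, and whether
    the reviewers disagreed enough that the instructor should take a look.
    """
    final = (score_a + score_b) / 2
    needs_appeal = abs(score_a - score_b) > APPEAL_THRESHOLD
    return final, needs_appeal

# Reviewers who mostly agree are simply averaged; a wide split is flagged.
print(peer_score(88, 92))  # close scores: averaged, no appeal needed
print(peer_score(60, 95))  # large gap: flag for instructor review
```

Flagging large disagreements automatically keeps the appeal process focused on the few projects where the rubric was applied inconsistently.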
- Beyond recall. To assess more than just recall (e.g., understanding, applying), even with multiple-choice assessments, give students a situation they’ve never seen before that requires them to come up with a solution using both their knowledge of the subject and their own reasoning skills. Don’t tell them which principle to use; let them decide.
- Reliability. Help ensure that your scoring is consistent by working for objectivity: use selected-response items, rubrics, model responses, etc. Include enough items to get a sufficient sample, and make sure the instructions are clear.
- Reviews. Consider reviewing assessments with your students, allowing them to explain their thinking on difficult items. This will allow you to understand why it was difficult and improve the assessment or instruction for next time. You may want to first establish your policy for rescoring to avoid a complaint session.
- Validity. Be sure you are measuring what you really intended (sometimes students can memorize the definition but not understand the concept).
- Practicality. The optimally valid and reliable assessment would include several items for every objective and several performance items with multiple raters. This is usually not practical. Make careful trade-offs among validity, reliability, and practicality, using creative options such as peer review to keep the workload manageable.