Alternative Authentic Assessment Methods

Authentic Assessment

Authentic assessment is a form of assessment in which students demonstrate meaningful application of knowledge and skills by performing real-world tasks. These tasks require students to effectively and creatively address problems faced by professionals, consumers, and citizens in the field. Student performance is evaluated using a rubric.

Authentic assessment is a form of direct assessment because it provides direct evidence of application of knowledge, skills, and attitudes. It is often referred to as performance assessment or alternative assessment.

With traditional assessments, instructors are often discouraged from “teaching to the test.” With authentic assessment, instructors are encouraged to “teach to the test,” because students need to learn how to perform the meaningful tasks associated with real-world experience. To prepare students to perform well, the instructor should show them models of both strong and inadequate or inaccurate performance. Sharing the scoring rubric with students is also encouraged: by sharing the rubric, the instructor is not providing the answers to the assessment, but helping students understand the key focus areas and what is considered a strong performance.
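
For instructors who want to see the mechanics, a rubric can be thought of as a set of weighted criteria, each scored at one of several performance levels. The sketch below is a minimal illustration of that idea, not a prescribed implementation; the criteria, weights, and level descriptors are hypothetical placeholders for whatever a real rubric would define.

```python
# Minimal sketch of weighted rubric scoring. All criteria, weights, and
# performance levels here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float            # relative importance; weights should sum to 1.0
    levels: tuple[str, ...]  # ordered performance descriptors, lowest first

RUBRIC = [
    Criterion("Accuracy of content", 0.4,
              ("inaccurate", "partially accurate", "accurate")),
    Criterion("Application to the scenario", 0.4,
              ("not applied", "partially applied", "fully applied")),
    Criterion("Communication", 0.2,
              ("unclear", "mostly clear", "clear")),
]

def rubric_score(ratings: dict[str, int]) -> float:
    """Combine per-criterion level indexes (0 = lowest) into one 0-1 score."""
    return sum(
        c.weight * ratings[c.name] / (len(c.levels) - 1)
        for c in RUBRIC
    )

# Example: top level on the first two criteria, middle level on the third.
print(rubric_score({"Accuracy of content": 2,
                    "Application to the scenario": 2,
                    "Communication": 1}))   # 0.9
```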

Examples of authentic assessments

  • Oral interviews
  • Writing samples
  • Exhibitions
  • Experiments
  • Observation
  • Producing a commercial
  • Composing a song
  • Creating a flyer
  • Debating
  • Portfolios

Authentic versus traditional

Authentic and traditional assessments differ from each other in key ways:

Authentic assessment | Traditional assessment
Perform a task | Select a response
Real-life experience/scenario | Contrived by the instructor
Focuses on inquiry (higher-level Bloom’s) | Focuses on bits of information (lower-level Bloom’s)
Assumes knowledge has multiple meanings | Assumes knowledge has a single meaning
Treats learning as active (student-structured) | Treats learning as passive (teacher-structured)
Direct evidence of learning | Indirect evidence of learning

Combining traditional and authentic assessments

Traditional and authentic assessments complement each other when used in combination, and instructors do not need to limit themselves to only one type in a course. Combining the two may prove stronger than either alone: student knowledge can be evaluated with a traditional assessment, such as multiple-choice questions or essays, while the ability to apply that knowledge in real-life scenarios requiring skill demonstration can be evaluated with an authentic assessment. For example, a medical student’s knowledge of a medical condition can be tested with a traditional assessment, and the student’s ability to appropriately treat a patient with that condition can then be assessed on medical rounds.

Tips:

  • Design backwards.  As with all teaching, instructors should start with intended learning objectives. By knowing what the student should be able to do when learning is complete, the instructor can easily plan the assessment and the learning experience.
  • Break the real-world experience down into small steps. To avoid overwhelming students, instructors can break the steps necessary to complete the experience into smaller chunks.
  • Don’t get frustrated. Developing a strong authentic assessment can be challenging but very rewarding. Rubric development, in particular, can be challenging for instructors. Expect challenges and work through them. Repeated experience with authentic assessments will improve the experience, the rubric itself, and the comfort of instructors and students with the process and tools.
  • Never underestimate the power of student reflection. By reflecting on the experience and the assessment, students further evaluate and recognize what they have learned. The reflections also help the instructor identify challenges the students experienced.

Additional resources:

  • Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. New York: Longman.
  • Meyer, C. A. (1992). What's the difference between authentic and performance assessment? Educational Leadership, 49, 39-40.
  • Newmann, F. M., & Wehlage, G. G. (1993). Five standards of authentic instruction. Educational Leadership, 50, 8-12.
  • Rolheiser, C., Bower, B., & Stevahn, L. (2000). The portfolio organizer: Succeeding with portfolios in your classroom. Alexandria, VA: Association for Supervision and Curriculum Development.
  • Steffe, L. P., & Gale, J. (Eds.). (1995). Constructivism in education. Hillsdale, NJ: Erlbaum.
  • Stiggins, R. J. (1987). The design and development of performance assessments. Educational Measurement: Issues and Practice, 6, 33-42.
  • Wiggins, G. P. (1993). Assessing student performance. San Francisco: Jossey-Bass Publishers.
  • Wiggins, G. P. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco: Jossey-Bass Publishers.
  • Wiggins, G. P., & McTighe, J. (1998). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.
  • Worthen, B. R., White, K. R., Fan, X., & Sudweeks, R. R. (1999). Measurement and assessment in schools. New York: Longman.

Summary of Indirect Assessment Techniques

(Adapted from Allen, M. J. (2004). Assessing Academic Programs in Higher Education.)

Surveys

Potential strengths:

  • Are flexible in format and can include questions about many issues
  • Can be administered to large groups of respondents
  • Can easily assess the views of various stakeholders
  • Usually have face validity – the questions generally have a clear relationship to the objectives being assessed
  • Tend to be inexpensive to administer
  • Can be conducted relatively quickly
  • Responses to closed-ended questions are easy to tabulate and to report in tables or graphs (a short tabulation sketch follows this summary)
  • Open-ended questions allow faculty to uncover unanticipated results
  • Can be used to track opinions across time to explore trends
  • Are amenable to different formats, such as paper-and-pencil or online formats
  • Can be used to collect opinions from respondents at distant sites

Potential limitations:

  • Provide indirect evidence about student learning
  • Their validity depends on the quality of the questions and response options
  • Conclusions can be inaccurate if biased samples are obtained
  • Results might not include the full array of opinions if the sample is small
  • What people say they do or know may be inconsistent with what they actually do or know
  • Open-ended responses can be difficult and time-consuming to analyze

Interviews

Potential strengths:

  • Are flexible in format and can include questions about many issues
  • Can assess the views of various stakeholders
  • Usually have face validity – the questions generally have a clear relationship to the objectives being assessed
  • Can provide insights into the reasons for the participants’ beliefs, attitudes, and experiences
  • Interviewers can prompt respondents to provide more detailed responses
  • Interviewers can respond to questions and clarify misunderstandings
  • Telephone interviews can be used to reach distant respondents
  • Can provide a sense of immediacy and personal attention for respondents
  • Open-ended questions allow faculty to uncover unanticipated results

Potential limitations:

  • Generally provide indirect evidence about student learning
  • Their validity depends on the quality of the questions
  • Poor interviewer skills can generate limited or useless information
  • Can be difficult to obtain a representative sample of respondents
  • What people say they do or know may be inconsistent with what they actually do or know
  • Can be relatively time-consuming and expensive to conduct, especially if interviewers and interviewees are paid or if the no-show rate for scheduled interviews is high
  • The process can intimidate some respondents, especially if asked about sensitive information and their identity is known to the interviewer
  • Results can be difficult and time-consuming to analyze
  • Transcriptions of interviews can be time-consuming and costly

Focus Groups

Potential strengths:

  • Are flexible in format and can include questions about many issues
  • Can provide in-depth exploration of issues
  • Usually have face validity – the questions generally have a clear relationship to the objectives being assessed
  • Can be combined with other techniques, such as surveys
  • The process allows faculty to uncover unanticipated results
  • Can provide insights into the reasons for the participants’ beliefs, attitudes, and experiences
  • Can be conducted within courses
  • Participants have the opportunity to react to each other’s ideas, providing an opportunity to uncover the degree of consensus on ideas that emerge during the discussion

Potential limitations:

  • Generally provide indirect evidence about student learning
  • Require a skilled, unbiased facilitator
  • Their validity depends on the quality of the questions
  • Results might not include the full array of opinions if only one focus group is conducted
  • What people say they do or know may be inconsistent with what they actually do or know
  • Recruiting and scheduling the groups can be difficult
  • Time-consuming to collect and analyze data

Reflective Essays

Potential strengths:

  • Are flexible in format and can include questions about many issues
  • Can be administered to large groups of respondents
  • Usually have face validity – the writing assignment generally has a clear relationship to the objectives being assessed
  • Can be conducted relatively quickly
  • Allow faculty to uncover unanticipated results
  • Can provide insights into the reasons for the participants’ beliefs, attitudes, and experiences
  • Can provide direct assessment of some learning objectives

Potential limitations:

  • Generally provide indirect evidence about student learning
  • Their validity depends on the quality of the questions
  • Conclusions can be inaccurate if biased samples are obtained
  • Results might not include the full array of opinions if the sample is small
  • What people say they do or know may be inconsistent with what they actually do or know
  • Responses can be difficult and time-consuming to analyze
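
The surveys entry above notes that closed-ended responses are easy to tabulate. As a minimal illustration, here is one way to tabulate Likert-style responses to a single closed-ended question; the scale labels and response data are hypothetical, and any survey tool’s built-in reporting would serve equally well.

```python
# Minimal sketch: tabulating closed-ended (Likert-style) survey responses.
# The scale and the response data below are hypothetical examples.
from collections import Counter

SCALE = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

responses = ["Agree", "Agree", "Strongly agree", "Neutral",
             "Agree", "Disagree", "Strongly agree", "Agree"]

counts = Counter(responses)
total = len(responses)

# Print a simple frequency table: count and percentage per scale point.
for option in SCALE:
    k = counts.get(option, 0)
    print(f"{option:<18} {k:>3}  ({k / total:.0%})")
```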


Summary of Direct Assessment Techniques

Download PDF


Choosing the Right Assessment Tool

Download PDF


Pros and Cons of Various Assessment Tools

Download PDF
