
What if… All Educators Were Assessment Literate?

Doug Wren is a consultant and author of the book Assessing Deeper Learning: Developing, Implementing, and Scoring Performance Tasks. He recently retired from his position as Educational Measurement & Assessment Specialist with Virginia Beach City Public Schools. Before coming to Virginia Beach, Doug was Director of Research & Evaluation for the DeKalb County School District in Georgia. To learn more, go to https://wrenedconsult.com or check out his free course in the Edjacent Teachable School, Introduction to Assessment Literacy at https://edjacent.teachable.com/.


Like many educators, I experienced teaching as a calling. Like some, I came to it as a second career. Returning to grad school after earning my certification was inevitable—I couldn’t get enough education about education.


A Little Learning Is a Dangerous Thing

The lines that follow this oft-misquoted phrase are “Drink deep, or taste not the Pierian Spring: There shallow draughts intoxicate the brain.” What English poet Alexander Pope meant was that a little learning is exhilarating, but it’s better to drink your fill at the Pierian Spring, a sacred source of knowledge in Greek mythology. My first taste at the Pierian Spring of assessment literacy came during EPY 700: Educational Tests and Measurement, a class I found both intoxicating and relevant to my classroom. Drinking my fill of assessment literacy involved a doctoral program of study rich in ed psych coursework; however, educators don’t need to go to such lengths to become assessment literate.


“Assessment literacy consists of an individual’s understanding of the fundamental assessment concepts and procedures deemed likely to influence educational decisions,” wrote assessment icon W. James Popham. Two of these concepts are test reliability and validity. For a full explanation of the what, why, and how of assessment literacy, read Chapter 1 of Popham’s 2018 book, Assessment Literacy for Educators in a Hurry.
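To make those concepts a bit more concrete, here is a minimal sketch (mine, not Popham’s) of Cronbach’s alpha, one common estimate of a test’s internal-consistency reliability. It assumes Python with NumPy, and the item scores are invented for illustration:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Estimate internal-consistency reliability (Cronbach's alpha).

    item_scores: 2-D array with rows = examinees, columns = test items.
    """
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented data: 5 examinees x 4 items (1 = correct, 0 = incorrect)
scores = np.array([
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # ~0.74; closer to 1 = more consistent
```

Reliability alone isn’t enough, though: a test can produce consistent scores and still lack validity evidence for the decisions being made with those scores.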


What We’ve Got Here Is Failure to Educate

A few years before No Child Left Behind became the law of the land, Jim Popham’s buddy Rick Stiggins made this prediction: “Teachers will be expected to be far more assessment literate in the future than they are today or have been in the past.” Whether teachers are more assessment literate than they were in 1998 is debatable. A 2012 study of 180 teacher preparation programs in 30 states indicated that only one-fifth of the programs adequately covered assessment literacy topics. Fewer than 2% of the programs appropriately covered the topics of analyzing assessment data and making instructional decisions based on the data.


Last fall I asked a friend of mine—a former K-12 educator who teaches full-time at a university—why most preservice teachers lacked assessment knowledge when they graduated. He told me that typically, colleges and universities “expect” content area instructors to incorporate assessment topics into their courses. The (false) assumption is that these instructors are assessment literate and know what to cover.


Exacerbating the problem, the majority of states and school districts either ignore the importance of assessment literacy or move it to the back burner. Not one state includes assessment literacy standards in its teacher certification requirements. Few districts offer effective professional development on the basics of tests and measurement.


So… What if All Educators Were Assessment Literate?

Before I describe my ideal scenario, here are three real-life examples of tests and test results being misused, along with the unintended consequences:


Example #1 – Elementary and middle school students spend a significant amount of class time taking and retaking reading tests that mimic the all-important state test given at the end of each school year. I used to work with central office curriculum coordinators who believed these benchmark tests—as unreliable and lacking in validity evidence as they were—should be long and arduous so children would build up stamina for the end-of-year test. There is no research to support this belief; test prep isn’t the same as preparing for a 10K run. The worst part about making kids read test passages ad nauseam is the message that reading is laborious and not something to be enjoyed.


Example #2 – Some building administrators use physical or digital data walls with benchmark test scores to track the perceived progress of students and classes toward big-test readiness. But data walls can be counterproductive. There was a school in my district where teachers were so afraid of looking bad on the wall of shame that they gave their students inappropriate help on benchmark tests. As a result, every student had artificially high scores, and the reading and math specialists were unable to identify the students who needed real help.


Example #3 – A group of high-level district administrators saw that the percentage of high school students who met the College and Career Readiness Benchmark on the SAT Math section was well below the percentage of students who met the benchmark in Evidence-based Reading & Writing (ERW). To increase the percentage of students who meet the Math benchmark, the administrators decided that all high school kids should be encouraged to take advanced math courses. There was no discussion about whether it was in the best interests of every HS student to take additional math courses beyond Algebra I and Geometry, nor was there any mention of improving the teaching effectiveness of the district's HS math instructors. I was at the meeting where this decision was hastily made, but I was not given the opportunity to speak because that privilege was above my pay grade. If these high-level administrators had been (a) assessment literate and (b) willing to take the time to look at the available data, they would have recognized that:

  • Considerably fewer male and female SAT test takers in every racial/ethnic group meet the College & Career Readiness Benchmark in Math than meet the ERW benchmark. This is because college-level math courses are generally harder than college-level social science and English courses. Comparing the percentage of SAT test takers who meet the Math benchmark with the percentage who meet the ERW benchmark is like comparing apples and oranges.

  • Increasing the number of students who take advanced math courses doesn’t mean that the percentage of SAT test takers meeting the Math benchmark will automatically increase. There are many variables at play here, such as how well the students are taught and each student's ability to apply specific math concepts on a college entrance exam (the sketch below illustrates this point).
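
As a purely hypothetical illustration (every number below is invented), here is a quick Python sketch of how adding test takers can leave the benchmark percentage flat or even lower it:

```python
# Hypothetical numbers only -- invented to illustrate a composition effect.
met, takers = 200, 400   # baseline: 200 of 400 SAT takers meet the Math benchmark
print(f"Before: {met / takers:.0%} meet the Math benchmark")   # Before: 50%

# Policy change: 100 more students take advanced math, then take the SAT.
# Suppose only 30 of those additional test takers meet the benchmark.
met, takers = met + 30, takers + 100
print(f"After:  {met / takers:.0%} meet the Math benchmark")   # After: 46%
```

More students took advanced math, yet the percentage meeting the benchmark went down, because the additional test takers met it at a lower rate than the original pool.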

Now here’s my best-case scenario if all educators were assessment literate. First, superintendents would do their best to convey important assessment knowledge to school board members and other politicians. These decision makers would then realize that one-shot standardized tests are not the best way to gauge teaching and learning. Next, after giving teachers long-overdue raises, districts and states would invest in high-quality assessments that measure deeper learning. They would use the results formatively (e.g., to improve instruction) instead of summatively (e.g., to judge students, teachers, schools, or districts). Time spent on meaningful learning tasks would increase, and time spent taking tests would decrease. If every teacher were assessment literate, students would understand why they were taking tests, and each test would be reliable and engaging and would provide valid results.


Becoming Assessment Literate

If you’d like to learn more about assessment literacy, here are a few suggestions:

