In my prior post, I assigned you homework. And, no, you may not be excused to use the restroom at this particular moment – I’ve used that trick a thousand times and know it well. Nice try, though.
If you found yourself developing a few alter egos to affirm or deny each statement, driven by contradictory responses or an overwhelming desire to negotiate compromise answers that often ended simply in, “Meh,” take heart.
You’re not alone.
I often suggest that although we deal with myriad challenges from those outside our profession, we have plenty of work to do when it comes to getting our own house in order. Our love-hate relationship with assessment is often the lumbering elephant in the room. In short, we do what humans traditionally do: we bend our opinions, or our tolerance of an assertion, to match the running narrative in our minds, and the questions from the “homework” were written to force answers to contradictory statements at nearly the same time.
It is important to note, however, that the psychological acrobatics I describe above are not performed consciously; this is not a nefarious exercise in rationalizing, ignoring, or disputing conflicting data. It is a human reaction, occurring just below the surface of our conscious awareness. Quite simply, we all do it.
The purpose of this two-part blog is not to hand down answers on assessment from on high; I don’t have solutions – only prompting questions and points that I believe we must consider and reconcile.
As a profession, we need to develop a cohesive and thoughtful platform on high-stakes assessment, one that pledges consistent support of policy-makers’ mandates and a promise to fulfill those expectations faithfully. Doing so, however, does not strip our field of its right to have professional educators offer concurrent alternatives or forceful opinions on how the model could be improved.
Perhaps you’re wondering what my answers to those questions would be, because I certainly have opinions on each. Unfortunately, this is not the appropriate forum for a conversation too complicated for a textbook, let alone a blog. I do, however, leave you with some random thoughts to consider:
- Most (all?) high-stakes tests measure only a fraction of what is outlined in the standards, so it’s a fruitless argument to suggest that we “test all that matters.” Even if one believed the standards exhaustively outlined all such learning, we simply don’t assess their entire scope.
- For years, there have been issues – in Arizona, often centered on the scoring of writing – that raise questions about how much we can read into assessment data. Statewide fluctuations from year to year might indicate increased attention from schools, or might simply reflect changes in test design or scoring. Either way, those results launch us into a frantic response mode. Only in a system where such data affirms trends seen in well-designed teacher- and district-level assessments can we make effective programming decisions. Ideally, government agencies would be curious about that level of alignment, but I’m not holding my breath.
- We must be careful when we openly criticize high-stakes assessments as unreliable, yet loudly embrace results when they show improvement that elevates our school’s status; doing so undermines our credibility. Tests are either reliable and valid, or they are not. Whatever meaning we want to derive from them does not change the quality of their design.
I close this post with no more homework, but rather a proposal about our professional obligation to explore a unified voice on high-stakes testing, finding a position that embraces elements of seemingly contradictory statements and is not influenced by how helpful or harmful the current data is to our status. I remain curious about your thoughts and am offering extra credit to those who make assertions we don’t often hear. After all, the clichés of high-stakes testing support and criticism have only gotten us here:
A place full of unhappiness and “Meh.”