How NBPTS will kill public education
Which brings us to the latest news in the saga of the National Board for Professional Teaching Standards.
For those of you unfamiliar with NBPTS, it is an organization that certifies teachers who successfully complete a screening process designed to identify effective educators. Founded in 1987, it has certified more than 70,000 teachers, most of whom receive annual bonuses or increased pay from their states as a result of their certification. States spend millions and millions of dollars every year on certification and teacher incentives.
It’s a great idea. There’s only one hitch: it doesn’t work. Research shows that certified teachers are no more effective than teachers who have not gone through the certification process.
What does NBPTS do with this research? Here’s what they don’t do: they don’t learn from it, retool, and figure out how to identify and certify truly effective teachers. Instead, they try to bury, discredit, or dispute any research that doesn’t agree with their assertions.
The latest attempt: a bad piece of PR called “Measuring What Matters,” in which a group of 10 NBPTS-certified teachers do their best to question the existing body of research, mostly by arguing that the available research measures effectiveness on the basis of independent assessments (i.e., state tests), and that what they do – the superior value they bring to the classroom – cannot be measured using independent measures of student knowledge and skills.
They argue instead for “authentic” assessments, portfolios, and other components of a “multiple measures” model, all of which rest on subjective analysis of student work – presumably by the same people responsible for teaching those students in the first place. (I’ve written about grade inflation, and the need for independent assessment, here.)
It’s clearly detached thinking – but how will this mentality kill public education?
As I’ve said, public education is going to have to start reaching out to stakeholders (not government, but the rest of us) for support. And if educators and administrators toe the line on this thinking – “we’re really good at what we do, but you can’t possibly measure it” – in the face of public awareness of poor K-12 outcomes (dropout rates, postsecondary remediation rates, international comparisons, and more), the public will simply balk. They’ll be polite – “good luck with that” – but they’ll quickly realize that it’s pointless to invest in a system governed by this mentality, and walk away. The opportunity to find outside support will be lost – and public education will face the wrenching cutbacks it could otherwise have avoided.
I’ll close with a quote from Jim Collins’ “Good to Great and the Social Sectors”:
To throw up your hands and say, “But we cannot measure performance in the social sectors the way you can in a business” is simply lack of discipline. All indicators are flawed, whether qualitative or quantitative. Test scores are flawed, mammograms are flawed, crime data are flawed, customer service data are flawed, patient-outcome data are flawed. What matters is not finding the perfect indicator, but settling upon a consistent and intelligent method of assessing your output results, and then tracking your trajectory with rigor. What do you mean by great performance? Have you established a baseline? Are you improving? If not, why not? How can you improve even faster toward your audacious goals?
To attract stakeholders, this is the kind of thinking education will have to adopt – and emphatically not the kind that argues against independent outcomes data simply because you don’t like what the data show.