Many supposedly highly educated people do not understand data. They use it all the time, rely on it, even revere it to a level that approximates worship, but since they don’t understand it, it becomes merely fuel for confirmation bias. Any study that tells them what they want to hear is “hard data”, and any study that doesn’t has “flawed methodology” or is otherwise suspect.
In education, one of the current buzzwords is “data-driven”. As with most buzzwords, its meaning is unclear. The picture that comes most immediately to my mind is of those drivers who go off a bridge or into a lake because their GPS told them to, but I actually support the concept, if not the name. I just don’t think it is anything new. The tools may be new, but not the concept.
The fundamental thing about data that most people don’t seem to get is that a data point is not actually anything in and of itself. It is a symbolic representation of something, and like all symbols, it can mean a lot of different things. The data-related skill a teacher needs is interpretation, and this is something all good teachers have always done.
If one student fails a multiplication test, it may be because she doesn’t understand multiplication, or because her mother just died, or because the boy behind her kept pulling her hair during the test, or for any number of other reasons. A good teacher who knows her students will have a pretty good hypothesis and then investigate. If a whole class fails a multiplication test, it may be because they don’t understand multiplication, but it may be because the test was confusing, or because there was a big earthquake that morning and they couldn’t concentrate. Again, a good teacher will have a pretty good hypothesis and then investigate.
A single test score is rarely a reliable indicator of anything. Aggregated test scores more frequently indicate something, but what exactly they indicate is not obvious. This is more true the more impersonal and standardized the test. I have written questions for standardized tests. It is an extremely difficult thing to do well, because you can never predict all the possible ways a test taker could misunderstand the question, and no question ever tests only one skill. For example, a teacher who knows his students well can craft math questions that use only vocabulary the students know, phrased in ways the students will understand. The teacher can then be fairly sure that the questions are actually measuring the students’ ability to do the math. This is not true of a test developer unaware of those particular students’ language skills.
Many people discussing education these days use the term “data” to mean “test scores”. They also use “technology” to mean computers and smartphones and the like. Both usages annoy me. A pencil is technology too. So is a book. In the same vein, the fact that a particular student’s father is in jail is data. The fact that another student’s mother has cancer is also data. Neither is a reason not to try to help the students do their best work, but both are possible explanations for a student’s low performance on a particular test, or a particular series of tests. This is one of the many reasons no test should have high stakes attached to it. All tests provide data, but not all of it is of any use, and there are other kinds of data that matter at least as much.
This is why I don’t like the term “data-driven”. What is the alternative? Utter obliviousness? All good teaching is guided by data, but that data may come from many sources. If a teacher needs a computer algorithm to know how his students are doing, then either he is not a very good teacher, or he has too many students. (I will digress here to mention that the studies I have seen that “prove” that class size doesn’t matter tend to look at the difference between having, say, 18 students in a class and having 22. No, at that size, the difference is not significant. Why don’t they look at places like the LAUSD, where class sizes of 35-40 are not unusual? Is there a difference between 20 students and 40? You betcha.)
I’m not saying computer algorithms can’t be useful for teachers. They can, but they are not reliable enough to supersede good judgement based on daily interactions with students and all the other data available to teachers in the course of their work. One of the rallying cries of people who protest high-stakes tests is, “Students are not data points!” The reverse is also true. Data points are not students. Test scores are just representations of particular students’ performance on particular measures at particular times. They may indicate situations to investigate, but they are not sufficient evidence to declare students or their teachers failures.
Here is some data:
Self-described education reformers have spent many millions of dollars to successfully influence education policy over the last couple of decades. While claiming to want to make teaching a more attractive career, they have promulgated a narrative that blames the gross inequities in our public education system on lazy, uncaring, unintelligent and/or racist teachers who are too hard to fire, and on the unions that protect them at the expense of children. They claim that high-stakes tests are the best way to determine who these bad teachers are.
Experienced, dedicated teachers, even the recipients of prestigious awards, have been leaving the field in droves, some even discouraging young people from entering it. Now we are seeing reports of growing teacher shortages.
Are these events related? There is not enough information here to be sure, but I could make a pretty good hypothesis.