HILTL HB2223: Talk data to me

In the first installment of “How I learned to love HB2223,” I talked about finding love through contextualized learning, which led me to propose a couple of ideas for pairing that might seem, well, nuts. That led to an observation: There are a lot of questions about CoReq-ing, some of which even involve data.

Consider this little data snapshot from last year, courtesy of Sam Echevarria-Cruz:

Discipline   CoReq N   Status      % Success   Success Gap
HIST 1301    116       CoReq       64%         15%
                       Non-CoReq   79%
ENGL 1301    403       CoReq       80%         8%
                       Non-CoReq   88%
EDUC 1301    331       CoReq       85%         2%
                       Non-CoReq   87%
SOCI 1301    60        CoReq       87%         -4%
                       Non-CoReq   83%

Let’s consider the outcome gap between CoReq students and Non-CoReq: a big success gap in history, less big for English, and so on. One plausible explanation for this gap is that, by definition, CoReq students come into the credit course with skill deficits. That’s what makes the CoReq students CoReq students, so the explanation makes sense. But, however appealing it may be, that explanation is merely consistent with the data; other explanations are consistent with the same data too. For instance, it’s plausible that the reading load maps nicely onto the gap: the more reading assigned, the bigger the success gap (except that this explanation might imply soc profs are somehow taking reading away from students).
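
If you want to poke at the snapshot yourself, the gap arithmetic is simple enough to script. Here’s a minimal sketch, with the numbers lifted from the table above (the structure and names are mine, not the actual dataset):

```python
# Success rates from the snapshot above: course -> (CoReq N, CoReq %, Non-CoReq %).
# The dictionary and names are illustrative, not the real dataset.
snapshot = {
    "HIST 1301": (116, 64, 79),
    "ENGL 1301": (403, 80, 88),
    "EDUC 1301": (331, 85, 87),
    "SOCI 1301": (60, 87, 83),
}

for course, (n, coreq_pct, non_coreq_pct) in snapshot.items():
    gap = non_coreq_pct - coreq_pct  # positive = CoReq students succeed less often
    print(f"{course}: CoReq N = {n}, gap = {gap:+d} points")
```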

My point is not to disparage explanations like these; on the contrary, my point is to encourage a different mindset about data. It’s very easy to look at data with an eye to consistency with a favored explanation and call it “support” or worse, “confirmation.”

Let’s resist this temptation, and look at data for the questions it allows us to pose.

That shift is often going to mean that data seems to be asking for . . . more data. Take this entertaining little snapshot, for instance: We’ve aggregated all the students who were CoReq-ed, irrespective of any peculiarities. What if there are CoReq students who passed the TSI but who wanted the extra support anyway? What about students who convinced someone to let them take the pair because they were desperate for the credit course? It’s not a stretch to imagine that such students are going to skew the gap, right?

What about grouping by more familiar demographic characteristics? Are some groups more “gapped” than others? And what about grouping outcomes by ranges of scores on the TSI? Wouldn’t that be interesting?
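
For what it’s worth, once you have student-level records, that kind of cut is a few lines of code. Everything below is invented for illustration; the column names and scores are my assumptions, not fields in anyone’s actual system:

```python
import pandas as pd

# Hypothetical student-level records; every column name and value is invented.
students = pd.DataFrame({
    "coreq":     [True, True, True, False, False, False],
    "passed":    [True, False, True, True, True, False],
    "tsi_score": [332, 318, 341, 355, 361, 348],
})

# Bucket TSI scores into ranges, then compare success rates within each band.
students["band"] = pd.cut(students["tsi_score"], bins=[310, 330, 350, 370])
print(students.groupby(["band", "coreq"], observed=True)["passed"].mean())
```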

And look at the flipside: How many of those Non-CoReq students are TSI exempt but, had they not been exempt, would have been mandated into a CoReq? In other words, how many Non-CoReq students are actually incognito dev ed students?
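
If our records carried flags for those cases, pulling them out would be trivial. The flags below ("tsi_met", "tsi_exempt") are hypothetical; whether we actually capture them is part of the question:

```python
import pandas as pd

# Hypothetical flags -- whether our records actually capture these is the question.
students = pd.DataFrame({
    "coreq":      [True,  True,  True,  False, False, False],
    "passed":     [True,  False, True,  True,  True,  False],
    "tsi_met":    [True,  False, False, True,  True,  False],
    "tsi_exempt": [False, False, False, True,  False, True],
})

# CoReq students who met TSI but took the pair anyway (the volunteers).
volunteers = students[students["coreq"] & students["tsi_met"]]

# Non-CoReq students who are exempt: the incognito dev ed crowd.
incognito = students[~students["coreq"] & students["tsi_exempt"]]

print("volunteer CoReq success:", volunteers["passed"].mean())
print("incognito Non-CoReq success:", incognito["passed"].mean())
```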

So far, these new questions are really about what’s hidden behind the data, which means we haven’t even started on alternate explanations. Here’s one (thanks, Herb!): What if the gap is smaller when the course integrates metacognitive issues, challenges, and strategies? No offense, historians, but maybe history courses don’t isolate and emphasize reading and writing skills specifically for history. In a way, that’s the whole point of an EDUC course, right? (I’m not sure what that means for you, sociologists, but I am tempted to make a few really bad jokes about the meta in metacognition.)

What’s the moral of these reflections on this straightforward little data set?

First and foremost, we don’t actually know what this data is telling us. At best, it’s descriptive and not explanatory. And it’s descriptive only in very broad strokes.
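
To put a number on “broad strokes”: with a CoReq N of only 60, the sociology gap is well inside sampling noise. Here’s a rough two-proportion check (the Non-CoReq N below is my guess, since the snapshot doesn’t report it):

```python
import math

# Rates and CoReq N from the snapshot; the Non-CoReq N is an assumption.
n_coreq, p_coreq = 60, 0.87
n_non, p_non = 200, 0.83  # hypothetical Non-CoReq N

gap = p_non - p_coreq
se = math.sqrt(p_coreq * (1 - p_coreq) / n_coreq + p_non * (1 - p_non) / n_non)
print(f"gap = {gap:+.2f}, 95% CI ({gap - 1.96*se:+.2f}, {gap + 1.96*se:+.2f})")
```

Under those assumptions, the interval runs from about -14% to +6%. A -4% gap that could just as easily be +6% isn’t a reverse gap; it’s a shrug.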

Second, consistency misleads us precisely because it is so satisfying. If you don’t think so, ask Emerson: consistency is a hobgoblin not only when fussy attachment to it makes our thought sterile; it also haunts us when we look at data through the lens of an article of faith. Then consistency makes us think the data is speaking to us more clearly than it really is.

All is not lost, because we have a way of exorcising this particular hobgoblin: Focus more on questions. We have to be made of pretty stern stuff, though, because we aren’t always going to get clear, straightforward answers. And that brings me to the final point in our discussion of data: Validity.

No one is more attached to the notion of validity than I, so I feel for anyone pained by the pervasiveness of “anecdote as data” in our profession. Of course we need better studies, validated by and because of sound method. Of course. But a little voice in my head asks, What do we tell the students who haven’t heard that we don’t have a validated study yet, but show up for classes anyway?

I admit up front that my perspective is influenced (warped, if you prefer) by my previous life as a psychotherapist, but it’s the inveterate therapist in me that says that doing something plausible now is better than waiting for something validated later. For some people, later doesn’t come. Incidentally, that’s also the history of medicine, in twenty words or less.

Let me put it another way: We know that, today, we cannot create a perfectly germ-free environment. But it does not follow from today’s limitations that we should do appendectomies in the student lounge.

Next up: Mandate II and still loving it.

Author: Matthew

philosopher, iconoclast, technoboy, musician, conjuration battle-mage, dean
