End of semester.  Winter.  Grading and student feedback, plus those odd moments that make you stop and think, like the shine on a wet pavement. Students who write to say that they appreciated being in class with someone whose first language wasn’t English because they learned something about a bigger world.  Students whose first language isn’t English writing to say that it was being in an online class that made it possible for them to participate at all. Given the rising panic about universities playing Titanic to the iceberg of online learning, these are voices that perhaps ought to be heard, even if they only constitute an anecdote (that’s for you, More or Less Bunk!).

But this isn’t what’s odd.  Clouds and silver linings etc.  The odd part is that quality-managed institutions are so daunted by the idea that these grace moments might be worth knowing about. Unless they can be collated as part of an evidence package for an institutional audit, strategic plan, curriculum review, competitive grant, a press release or a bar chart in a research article, they’re just so … unproductive.

It’s a long-standing face-off in methods between data collection and personal reflection, between number and narrative, survey and ethnographic encounter.  Of course we all know that these uncountable moments have a rhizomatic relationship to good learning; it’s just that they don’t line up as neatly as more conventionally measured “student outcomes”.

And as most Australian universities don’t have a spare Picasso in their hope chest, and are operating in a very narrow budgetary corridor between a rock and a hard place, they have a strong, practical incentive to focus on collecting evidence of quality teaching that can be converted to a range of charts and graphs for the annual report or the national audit.

Pass/fail rates, and grades in general, fall into this category of both collectible and representable, so they’re pretty significant, as tea leaves go. This is one of the background reasons why the individual academic judgment that drives the grading process is the subject of so much intervention, hedged in by policies and protocols lined up three deep.  The practical result of all this policywork is that final grades go through the wash of committee oversight two or three times, as multiple pairs of eyes scroll through results searching for gaps, anomalies or disturbing curves.

Universities also take student evaluations very seriously; they’re the pepper sprinkled liberally across our strategic planning for quality teaching and learning. Here we typically collect both quantitative (Likert scale) data and qualitative (free text) responses. To academics, the free text responses are often the most useful, even if they can be a bit bracing to read. But universities have a very strong preference for the aggregated quantitative responses as the most important insight into the overall quality of the learning experience.

Typically, this feedback emerges from the data crunch looking like this: “5.2”.  The great thing about “5.2” is that it allows teachers to be compared against teachers, so it’s very incentivising, even though the ratings aren’t produced by consistent cohorts of students, so they aren’t the strongest basis for comparison, really.  Nor are the surveys compulsory for students, and there’s no minimum response rate required to make them robust. They’re typically conducted in the last week of class, so “5.2” skews towards the state of mind of those who are there at the end, whether out of duty, convenient timetabling, sudden exam prep panic, or genuine engagement.

This might have made sense five years ago, but now that attendance is becoming much, much patchier, the integrity of the survey system is really starting to shudder.  If what’s being evaluated is lectures, for instance, then the weight of the student who has attended two lectures all semester (week one and this one) is exactly the same as that of the student who has come along every time, rain or shine, and for whom this is a key subject in which she is engrossed, and about which she has some useful and informed critical commentary for the lecturer.

You might think this would give us pause for thought, but it turns out we’ve got the answer to this riddle of contrary human behaviour: “5.2”.

None of this is news.  In general, the well-known limitations of the process mean that we keep this kind of thing in house. But the University of Texas is one of an undisclosed number of institutions upping the stakes by including this data as part of the personal profile of its academic staff.  It turns out that this vast institution is keeping tabs on the value-for-salary of 13,000 individual academics in terms that include “pay, the sources of their pay, their rank, tenure status, teaching load, research grants and, for some, the average of the grades they awarded and their average student satisfaction score.”

Writing in The Australian, Gavin Moodie describes the collation of individual productivity diagnostics in Texas as symptomatic of “an erosion of trust between US higher education staff, institutions, parents who are expected to pay ever increasing fees, the general public and state and federal governments.”  He notes that it’s driven by the demands of external stakeholders concerned to make university performance more open and accountable. Although he thinks that measuring individual value in this way would never catch on here, it’s worth thinking about the claims made in March last year to promote the government’s proposed MyUniversity web publication of “rich performance information”, as reported by the Brisbane Times: “Transparency does place pressure on people,” Ms Gillard told reporters yesterday. “Pressure to improve. That’s a good kind of pressure.”

So this seems like exactly one of those times when the United States does Australia a favour by creating a cautionary example. The quality assurance processes sloshing round Australian higher education at the moment are already making academics feel as though we can’t be left alone with the chickens, and the question of trust is—rightly—a sensitive one. At the very least, if the complex human practice of teaching and learning is going to be boiled down into rich performance information, then let’s have a serious conversation about developing measures that include much greater capacity for reflection and dialogue, speaking not only to the rigour but also to the subtlety of the learning experience.

It’s hard to measure the quality of the gleam on a wet pavement, but that doesn’t mean we should overlook it.

2 Responses

  • MfD:

    I’m not panicked about universities playing the Titanic to the online learning iceberg. I’m panicked about faculty playing the Titanic to the online learning iceberg. I actually think there’s a big difference there.

  • Mmm. I’m not sure the nuances of your distinction are clear to me. That is, I think the distinction between the two is generally important, but in the case of online learning it often seems to me that it’s the coercion of one by the other that’s causing the unease.
