US/not us

We need to have more conversations with people who are not us.

Chris Gilliard, #DigPed, August 2016

1.

It’s 5am. It’s dark outside, and cold inside. My daughter’s in the kitchen banging cupboard doors and making coffee. She’s up to watch the Olympics, and she wants company. Blearily we straggle out to join her and slump on the couch under blankets, trying to figure out what’s happening. Skeet shooting, what is that?

Divers fall from the sky in apparently perfect synchronisation. They enter the water like needles. Judges manage to find something wrong. We marvel at the judges.

The television advertising of Australia’s major Olympic sponsors relays us back to ourselves over and over. Look, it’s us, up in the dark, our sleepy faces lit by the television screen, watching what’s happening on the other side of the world.

We show up.

2.

It’s 5am. I’m up early to be part of a time-synchronised workshop for the Digital Pedagogy Lab in Fredericksburg, Virginia. I can’t point to Fredericksburg on a map, so I look it up. Wait, it’s that Fredericksburg.

I grew up near that Stonehenge so I know what it means to live in a place that has an overbearing past. In thick places, the tourist economy alibis history, sustains its double bluff: that we’re both done with its troubles, and so vigilant about it that we won’t repeat it.

Except until we do, in some form or another.

The workshop participants are collaborative, generous, thoughtful. They make time in their lives for us all to put our thoughts together, to try to understand what we think we know when we know where someone is from, and guess where they were born, and double somersault from there to the impressions we have about places, countries, cultures. They write their hopes for the workshop left-handedly to get a sense of what it feels like to be using techniques and technologies designed for (and by) a dominant culture.

People who are left-handed recognise each other at this moment, like two Australians at a northern hemisphere conference.

3.

In a Google document we crowdsource knowledge of South Africa, Egypt and Australia, where we three facilitators work. The Australian field fills up in a familiar way:

Coral Reef, Great Ocean Road, Rabbit Proof fence, Kangaroos, outback, Vast and funky landscape, PY Media, the Opera House, Sydney Island

Cricket

Crocodile dundee

But that’s not all. Because someone knows about the Nauru files, and that the Australian government we have just re-elected are destroying a generation of already homeless refugee children, on the grounds that this might save others from drowning at sea. Australians seem to have gone along with this lesser-of-evils calculation. But the details are becoming too much to bear.

This is the report of a witnessed assault by a guard on a five-year-old child because she was running through a tent.

With his left hand he hit her across the back of the head. It was very forceful – he hit her so hard it lifted her off her feet and sent her crashing to the ground.

Our Minister for Immigration responds to the stories contained in the Nauru Files with a lack of compassion so astonishing that our mouths fill with sand:

People have self-immolated to get to Australia.

That is what he said, clearly never having met a fourth-degree burn survivor.

4.

Back in the workshop, we raise questions of power and silencing. We think about whether we need more rules, or fewer rules, for international online learning. We wonder if organically forming communities have an inherent tendency to marginalise the unexpected visitor—and not just in spite of the diligently inclusive language they use to value all their members, but because whenever belonging is made visible in the formation of a community, it is always coded by those who control the invitation to belong.

Derrida’s conditional hospitality is never far away, when we speak about what we can do to make others feel included.

Last week, a brief exchange on whether a call for papers on the experiences of women of colour in education meant to say “US education” or was really open to others sent me back to Barthes’ discussion of exnomination. In his essay on the function of myth in distributing power, Barthes points out that the most powerful in any situation will not need to name themselves, and indeed will seek to demonstrate their power by reserving for themselves the default position. The most powerful are those who can establish their own status as the one that never needs to be qualified.

President. Woman president.

There, you saw it.

Barthes’ focus is the bourgeoisie, the class who do not wish to name themselves. His idea was picked up in 2000 by linguist Robin Lakoff, who expanded it usefully to look at dominant groups in general, and the tactical unnaming of privilege.

If you are a member of the dominant group, your attributes are invisible, as your role in making things the way they are is not noticeable.

For all of us who work as educators, and especially those of us who work in edtech, the American college system has fully achieved this status. It is the default that doesn’t have to name itself. I have sat in LMS demonstrations watching a video of everyday US college life presented as the roadmap for the vendor’s plans for us. And no one raised an eyebrow, because we’re used to this across every surface of soft cultural power, where the US dominates to the point that we forget we’re not thinking our own thoughts.

Hi Professor Bowles,

I hope your summer is going well! 

I wanted to reach out to invite you to participate in our ‘Professor Pulse’ study. This project aims to collect data and insights into professors’ sentiments on current issues and topics in higher education – everything from tenure, to student apathy, to school administration.

Hi Lauren. It’s winter here. Our professorial system is entirely different to yours. You don’t mean me, you really don’t.

But if Nauru teaches us anything, it’s that we don’t change global power by wrestling a bit of it for ourselves, and then punching down.

5.

Here’s the hopeful part. International online networks are becoming a new kind of everyday, and they sensitise us all to the defaults we each use, and impose on others. This morning’s workshop was followed by a conversation about identity and difference in digital pedagogy with educators Sherri Spelic, Annemarie Perez, Miriam Neptune and Chris Gilliard. I asked Chris what he expects US educators to learn from the presence of others in their workshops, their conversations, their sense of the scope of “education” when they say it.

Chris’ answer went to the heart of how we achieve change by showing up. So if we want Americans to stop thinking of the rest of the world as the exotic, the underserved market, being present is the place to begin. We need to make time to hear from each other in workshops like this, at a scale that we can work with. We need to promote listening well as an activist practice. And as educators we have to lead this process, and centre it in our teaching.

We need to have more conversations with people who are not us.

Warmest thanks to all the workshop participants, co-facilitators Paul Prinsloo and Maha Bali, and Chris Gilliard, activist educator.

Hovering

University participation has risen spectacularly. The target of 40% participation should be comfortably met by 2025. The nation has quickly moved from an elite to a mass higher education system. The second equity target has proven more challenging, but progress is being made. The relative proportion of low-SES undergraduate students rose from 16.2% to 17.7% between 2009 and 2014. In the same period, the overall number of undergraduate low-SES students increased by 44%, while other cohorts increased by 30%.

Andrew Harvey, ‘Uncapping of university places achieved what it set out to do’, June 2016

1

It takes a while to notice something’s wrong. There’s a sound that doesn’t quite belong, although not by much—it’s not like a siren right in your street, or a breaking window. So you catch yourself noticing it, and forget to look up. But five, ten minutes later it’s still there, and look, it’s a helicopter hovering, hanging in the air like a kite. Then it’s looping out in a wide arc and coming back to exactly the same spot. Round and back, round and back, all morning.

Rescue helicopter, 2012, elleneka102, shared on Flickr CC BY-NC 2.0

Once you’ve seen it, you don’t unhear it. Explanations start unspooling, tumbling over each other, tangling up. It’s hovering over a major intersection, it’s scanning the escarpment where a hiker might have fallen, it’s following a car chase, it’s filming something, it’s hunting for someone. Neighbours come out of their houses and look up. How long have you been hearing that sound? When did it start?

2

In the past few weeks I’ve exchanged thoughts with people about the rise of analytics in higher education, and especially the arrival of personalisation. What separates personal from personalised? This email came:

Dear Kate,

Hope this email reaches you well!! Hurry up, Don’t miss SAFe AGILEST Training Program … It is our sincere hope that this new version helps you and your enterprise achieve the benefits you all deserve.

Dear Alice, I worry about sincerity in your hands.

Personalisation is the endgame of consumer analytics. It’s the point at which wide surveillance morphs into individual care, without the actual cost of staff. In universities, social data about students layers over all their tracks and patterns as learners, their collisions and intersections, all the half-cooked queries and false starts that no one much intended to share; personalisation lets us zoom in with an unmanned drone to drop off a map to a journey, crafted just for them.

And if we notice they’re drifting from the trail, how could it be a bad thing for us to use our insights to recover them, adjust their progress, set them straight?

It turns out learning analytics is a field where people say “intervention” without unease. We intervene like good people stopping a fight, like bystanders who step in and rescue someone. We come between someone and what fate seems to have stored up for them. It’s a salvationist theology: we know what’s best for others, and we can see when someone’s tilting, and possibly falling right off the wagon.

The problem is that this is exactly the kind of reformism that drives the other kind of intervention, the tough love kind, the governmental kind. We intervene out of faith, prejudice and self-interest. We intervene to help failing students become their better, more successful selves in ways that worked for us. We intervene because we’ve gone on selling the graduate earnings premium like a cheap watch despite all the evidence that the labour market is falling to bits. And we’re hardly disinterested. Our intervening zeal has a grubby side: students can’t be left alone to fail, to make a plan that doesn’t involve us, because their completion has a dollar value, and their success grows our reputation and our market for the future.

So we also intervene because we’re sandbagging our business plan, and our revenue stream. We intervene to ensure that every student who enrols in year X sticks around until year Z, all doing the exact same amount of stuff, at a foreseeable unit cost that enables us to plan. And in service of our interventions, a well scaffolded curriculum works for us like a movie of standard length works for a movie theatre: a business efficiency sold as a unique and transforming experience.

This is why we’re seeing whole divisions appearing whose role is to hover over learning, to track all the things that learners do, to gather data so that our interventions are precisely targeted. This is also why so much effort in the governance of digital learning is focused on getting more students doing more things in the LMS, even though this is one of the least engaging environments for actual learning; it’s why there is increasing policy focus on placing data capture points in curriculum, assessment and feedback; and increasing responsibilities for staff in managing the digital records of student learning.

Behind all this local busywork, there’s a powerful and well-funded research effort that’s being sustained by these changes, and that’s constantly searching for new action. What new data can institutions recruit? What new insights can be drawn out of fresh combinations of things we’ve always known? If for example we can pinpoint the exact moment when attrition risk begins and we can personalise the perfectly automated intervention, can we enrol more and weaker students in better conscience? With sufficient personalisation, and perhaps some upfront investment in digital resources, could more students self-manage their learning, and could someone still be prepared to pay for their experience? What if those students were in large and underserved education markets in developing economies? 

This is the lesson that MOOC pioneers have left behind for us to think about as they pivot into the next phase of their business plan: that analytics, automation and personalisation are the basis of a low-cost and skeleton staff educational experience that can be rolled out anywhere, and that only needs a modest fee-for-access to cover its costs, providing the market reach is wide enough.

But the patterns that analytics can make visible are those that should be starting human conversations, not replacing them. This is why we need to be far less sanguine about twinning analytics with cheap labour—let alone tutor bots—because if this human conversation is going to help students personalise their own learning (and they are surely the right ones to be doing it), it needs staff who are resourced with time, stability, experience and the confidence to hear what students have to say.

3

No two students who quit university do so for the same reason. The decision to leave is part of a complicated story that began long before they arrived, and will go on to deliver future outcomes none of us can see. It involves families, friends, and a muddle of hopes and fears that are political, social and contradictory. This semester I’ve had the privilege of listening to students who left and came back, who are on the verge of leaving, who have changed direction and changed again. The toughest stories to hear are from those who are staying because the risk of leaving seems worse, in this employment market, in this region, at this time, with those family hopes backed up behind them.

Thanks to the data we hold on enrolment, retention and completion, we know these students only as the basis of our claims of policy success. They’re here, they’re meeting all the deadlines and earning grades and moving through the curriculum right on time. Analytics based on tracking failure and discontinuation won’t help them, because their problem isn’t in this terrain at all, but in messier zones of self-doubt, fatigue and anxiety. To understand more about the experience of the student who detaches without leaving, and why this should matter more to us, we need to show up in person, to listen fully, and to let each story stay a whole one.

There are challenges and opportunities facing social and narrative researchers in education: scale, replicability, transferability are all troubled when the focus is on the stories learners tell rather than the observable things they do. But there are explanations that can’t be found by any other means, that can’t be seen by hovering. So let’s have this conversation openly and optimistically, and see what we can add.

#sonar

Sightings

Updates below

In a bizarre coincidence, when I opened the book to scan the contents I found myself looking at the section about sharks.  In particular, “surviving if you are in a raft and you sight sharks—”… I wonder if anyone would be interested in using this as a model for an edtech field manual for surviving the Higher Ed apocalypse.

Jim Groom, “Survival: the manual”, July 7 2014

Thanks to Jim Groom, I’ve been thinking about Jaws in this plainly bizarre week in the short history of commercial MOOCs. For all its singular qualities, and for all the symbolic load placed on it by film theorists, Jaws is at heart an ordinary mystery: something unexpected and unexplained happens, someone goes missing, and everyone else spends the movie piecing together clues, disputing priorities, and dealing with what comes next.

But there’s a small scene in the middle that often gets forgotten, where two kids prank the holiday crowds at the beach with a cardboard fin—and in doing this set up the perfect opportunity for the real shark to glide in to calm water, unnoticed by everyone until it’s too late.

This week’s edtech weirdness had both mystery and something like a distracting prank, involving a MOOC in which the professor was yanked from view, then bobbed up again briefly, before vanishing again. Paul-Olivier Dehaye, a maths lecturer from the University of Zurich, put up a three-week course: “Teaching Goes Massive: New Skills Required” (#massiveteaching) through Coursera. The landing pages raise questions about the Coursera approval pathway and standards: two weeks of short RSA Animate style videos, and a final week where students will do more or less whatever they like in an “Experiment Area”. Dehaye is likeable, clear and thoughtful about his topic, but the videos aren’t elite brand rocket science—certainly, nothing that an informed and curious teacher in the office next door to you couldn’t have thought up.

And that should have been the first clue, I think. The course goal is “personal growth”, for which—thankfully—no certificate is offered, and the content is quite vague: “‘Readings’ will come naturally during the course as basis for discussion. … In the first week, we will provide a short summary of proposed content of the course. The content of the later weeks will be decided on by the students, and should cover the proposed content and more.”

But after the first week some or all of the content was deleted, and then Dehaye was himself removed, leaving enigmatic clues on Twitter, some participation in Metafilter discussions, some blog comments here and there (including on George Siemens’ blog), and a deleted Etherpad document that he wrote to explain his actions.

MOOCs can be used to enhance privacy, or really destroy it. I am in a real bind. I want to fight scientifically for the idea, yet teach, and I have signed contracts, which no one asks me about. If you ask me something, I can tell you where to look for the information. My plan becomes to flip the tables. I want to “break out” and forge an identity outside of the course, on Twitter, because I realize this is the only way for me to fight for this identity, engage with my students, and those big shots all simultaneously (journalists, educational analytics people, etc).  … Meanwhile I want everyone to organize their own learning, which I know is happening by looking a bit around. Some people don’t like my course, which is fine. It’s your choice, that’s part of the point. Still, I get lots of emails from coursera asking what is going on. A lot of pressure from them now. They are confused just like you were, and I intended to confuse them even more because they were not ready to challenge their own pedagogical practices fast enough, judging from past experience.

After blogger Apostolos K pointed out that these strange goings-on hadn’t attracted much coverage, and George Siemens wrote “Something Weird is Happening at Coursera”, the story was quickly picked up. Carl Straumsheim treated it as “The Mystery of the Missing MOOC” for Inside Higher Ed; Steve Kolowich covered it for The Chronicle first as a mystery (“In a MOOC Mystery, a Course Suddenly Vanishes”) and then as an experiment (“University of Zurich Says Professor Deleted MOOC to Raise Student Engagement”). Jonathan Rees had two goes at it, both worth reading: “The worst of the best of the best” on his own blog, and “Even super professors deserve academic freedom” for The Academe Blog. Rolin Moe, whose MOOC blog is touchingly subtitled “Debating, debriefing and defining the learning trend of 2012-”, wrote it up as “Dr Famous is Missing”.

By the end of the week, opinions diverged. Yesterday Michael Gallagher argued in a beautiful post that to exploit students in a research project raises questions of trust that can’t be overlooked even if the intent is to criticise (“Teaching vs. research and MOOC brouhahas”); today George Siemens congratulated Dehaye on starting a conversation about our vulnerability to commercial data mining by companies like Coursera.

I’m still absorbed by the freakishly odd coincidence of Dehaye’s co-authored take on a probability problem that’s apparently well known to mathematicians, involving 100 Prisoners And A Lightbulb, with George Siemens’ July 5 post (published just before all this turned into a thing) on the latent knowledge in any class, involving 100 learners in a room. This is Siemens, but could be Dehaye:

The knowledge and creative capacity of any class is stunning. Unfortunately, this knowledge is latent as the system has been architected, much like a dictatorship, to give control to one person. In many cases, students have become so accustomed to being “taught” that they are often unable, at first, to share their knowledge capacity. This is an experience that I have had in every MOOC that I’ve taught. The emphasis in MOOCs that I’ve been involved with is always on learners taking control, learners joining a network, or learners becoming creators. In a Pavolovian sense, many learners find this process disorienting and uninviting. We have been taught, after a decade+ of formal schooling, to behave and act a certain way. When someone encourages a departure from those methods, the first response is confusion, distrust or reluctance.

I’ll call my theory of knowledge and learning “100 people in a room”. If we put 100 people in a room, the latent knowledge capacity of that room in enormous. Everyone in this room has different life experiences, hobbies, interests, and knowledge. We could teach each other math, physics, calculus. We could teach poetry, different languages, and political theory. The knowledge is there, but it is disconnected and latent. Much of that knowledge is latent for two reasons: 1) We don’t know what others know, 2) connections aren’t made because we are not able with our current technologies to enable everyone to speak and be heard.

At the moment, I’m not sure we know enough to say what the plan was with #massiveteaching. So I’m keeping an open mind to the possibility that what looked like a prank was an attempt to start a different conversation—including, and perhaps especially, with students—about the risks of corporate data mining and the lessons from Google advertisements or Facebook’s experiments with emotional manipulation. The fact that it didn’t work smoothly, and might make Coursera much more twitchy about allowing experimental course design in the future, shouldn’t necessarily be the measure by which it’s finally judged.

Meanwhile let’s keep one eye on the ocean where the real sharks are. As ever, the timely counsel in confusing times is from Jim Groom, who seems to me to be looking in the right direction:

I don’t know what it is, but Sharks remind me we are deeply vulnerable always.

Me too.

Update

People are still writing about this. Two very good posts today:

According to Apostolos K, Coursera/U Zurich have resumed the course without Paul-Olivier Dehaye, which seems to me a reasonably complicated thing to do if the whole designed purpose of the enterprise originates with him. It’s a bit like the Mayor of Amity Island putting on the cardboard fin to prove that there’s no shark.

Down on main street

“We think it’s fair to ask the student to pay $3 extra a week to get the chance to earn a million dollars more over a lifetime than Australians without a university qualification. … Mr and Mrs Mainstreet are paying almost 60 per cent of the tuition fees of a uni student and they are also paying back the loan at the 10-year government bond rate of 3.8 per cent, whereas the student’s loan is indexed at CPI, currently 2.5 per cent,” Mr Pyne said.

Uni loan changes ‘cost $5 a week’, June 4

Since Christopher Pyne made fairness in higher education the surprise water cooler topic in this budget, there have been strongly negative reactions to the hiking up of student debt from all over the place. The government is now campaigning hard on the idea that fee reforms are both essential and inconsequential: the impact is tiny, the freedom is vast, and the overall costs are just as likely to go down as up (this is what the Minister calls the magic of the market, so do clap if you believe him).

There are some practical problems with trying to pass off education debt as similar to other kinds of reputable middle-class debt, like mortgages or business loans, rather than, say, experience debts or gambling debts. Education might pay dividends in the end, but while it doesn’t, there’s no asset: no car to repossess, no house to put on the market, no shares to sell. Graduates who don’t go on to the full-time career for which they trained not only don’t see the promised premium earnings, but they can’t get a refund or put their degree on eBay. They’ve had the experience, and their numbers haven’t come up. Now they’re in a hole.

Behind this is the more important problem that there are no standards of responsible lending applied to education debt. If you’re offered a university place, you’re entitled to go into debt to complete your degree, just like that. It’s a no-doc loan of the worst kind, because it has to be — your future capacity to repay is itself the asset you’re going into debt to acquire. So no one’s responsible for even minimal risk evaluation of prospective undergraduates and their families. To put it brutally, universities can recruit underprepared students to make up numbers and protect their revenue stream, and at the moment have no real skin in the game when it comes to graduate employment.

Until now, the risk has more or less worked for Australian students even in non-vocational degrees because interest rates have been low, and it hasn’t worked for the lender because the incentive to repay is correspondingly weak. Students who have been able to pay fees up front have been better off, but not to a life-changing degree. But still, graduates have got stuck below the repayment threshold for a wide range of reasons, or have nicked off overseas, or have died with their debts unpaid. All of this amounts to a prediction that Australia could have $13bn in doubtful debt by 2017—a hill of beans compared to the trillion dollar toxic debt swamp in the US, but significant for a small education market like ours.

So it’s obvious why the government wants to adjust repayment terms: both to get more money back from those who repay tidily, and to use the threat of compounding interest to round up those who aren’t repaying much at all. It should be a low risk strategy: as owners of the national education debt pipeline, the government clearly expected to be able to tweak both interest rates and repayment thresholds while still offering a better deal than any commercial lender, and by these means to turn education debt into a more attractive asset.

But this is proving a hard sell. Having spent a lot of time at home this year, I’ve come to think that if Christopher Pyne had watched more daytime TV, he would understand why we’re not jumping at the idea. It’s because we know more than he realises about disreputable debt: last resort borrowing, predatory lending, and household debt that’s being juggled across multiple credit accounts. Australians at home are hassled all day long by TV commercials focused on compounding debts owed to intimidating lenders, and financial underpreparedness for illness, accident and death. This is what’s in the basement of our national consumer confidence: a realistic sense of how quickly debt picks off the most vulnerable in this prosperous economy.

Like someone spruiking a raw food juicer or a funeral plan to this frightened audience, the Minister has to work hard to convince us to turn a blind eye to what’s lurking in the shadows of deferred payment, and to focus instead on the transformative power of the product. It’s why he’s making his case at the highest perch of generalisation, glossing over earning disparity between male and female graduates, graduates in different disciplines, graduates living in different parts of the country (especially in the country parts of the country), graduates from different social backgrounds, and with variable levels of educational preparedness before they start their degrees. He’s also hoping we don’t understand the impact of part-time and precarious employment, regional employment, misadventure, illness, disability, parenting, or the fact that the economy itself is slowing down.

In fact, everything that makes a real difference to graduate lifetime earnings is invisible from the Minister’s penthouse, leaving us with the simplification repeated in speech after speech after speech: graduates will make 75% more than non-graduates, and in case we’re not sure what that is, why—it’s a million dollars.

Jackpot.

Or not. Just as with cancer mortality modelling—about which I know a thing or two—the aggregates, multipliers and generalisations across a demographic slice that make up this million dollars are all bundled inside speculation about external variables, and can’t possibly predict what will happen with the accuracy required to judge the personal risk of going into long-term debt. When someone says “X life expectancy” or “Y lifetime earnings”, they’re pretty much saying “83% reduction in wrinkles”—it’s really up to you what you make of this as you stand at the counter with the wrinkle cream in your hand.

And yet the Minister’s gone on repeating his million dollar pitch long after even the friendliest economist has quietly pointed out that the facts are more complicated. Because this is exactly what you have when you don’t have responsible lending guidelines: a cheap and shouty sales pitch involving lifetime guarantees, a sprinkle of FOMO, and a miracle product. And he’s energetically trying to nudge Australian taxpayers into resenting university graduates, despite the evidence that Australian graduates themselves go on to become Australian taxpayers to a very significant degree.

Yesterday Stephen Matchett, in his excellent daily newsletter on Australian higher education, suggested that student debt has become the equivalent of the $7 Medicare co-payment to health reform: it’s the pill that the electorate just won’t swallow, no matter how it’s sugar coated. I think he’s right. What’s taken us all by surprise in this budget is that across every portfolio, with remarkable tin-ear consistency, the stakes have been pushed too high, the reasoning has been too lazy and too divisive, and the reactions of Australians to the central topic of budget fairness have been really widely misjudged.

Oh, and also, the rustling up of patronising stereotypes to explain it all is really wearing thin.

Aftermath

This is a short post, because I don’t know what to do with my sadness at a well-known educational technology blogger with a huge following who’s so enamoured of his own popularity that he writes an April Fools’ farewell note to blogging, referencing the personal impact of blogging on him in terms of hate mail, threats, and clinical depression, and then spends the aftermath passing on the supportive tweets he got from people who responded with concern for his wellbeing.

This, edtech, is our own tiny little version of the #CancelColbert satire moment.  The pressure’s on to get to the joke, to joke back, to be the first to spot the cryptic clues in the post itself and to not be fooled. 

Because of course all those people who did stop to comment on his blog, or send him kind messages on Twitter, now realise that they are the straight guys to this hilarious set-up: they are the fooled. Without them, the joke is a tree falling silently in a forest, but now the fooled are part of the spectacle. They are its very ka-thump.

Maybe you need to know Steve, to be a mate of his, to view this stunt with affection. Maybe you need to be male, or British, or something or other.  I can’t imagine.

But here’s the thing: like Dave Cormier, when I read the post I just got stuck thinking of the people I know who are on the blunt end of this foolish play, the people (interestingly, mostly women) whose blogging really is controversial enough to bring them threats, or those who have recently shown the extraordinary courage to write out in public — in front of their colleagues, their friends, their families, their bosses, their children — that clinical depression is the name for what they walk with every day. Every single day.

To these people, as to those of us writing with illness, the internet has been a place where we have been trusted to be dealing with this stuff simply because we say we are. There’s no other proof. And those who take the risk of showing kindness towards us have made an incredible difference to how we experience whatever it is we experience. We’re all frankly a bit amazed when someone is unmasked as having invented serious illness or loss to get attention online (not to mention cash), because that fabrication does something to the fragility of trust in these networks of concerned strangers that’s quite hard to repair. If you’re fooled once, you’re much less likely to trust the next stranger who asks for help.

And the result is that we become bystanders: people who look away when someone says that they are being harmed or threatened, when they say they are struggling. We become the people who rationalise their looking away as a healthy scepticism. Because, you know, we’re not fools.

So now I think I want to say to Steve: please just take a step back from your joke, and go read those bloggers who really do deal with trolls, or those for whom alcoholism and depression aren’t quite so backslappingly funny. Because it’s obvious that you get that satire has its limits, otherwise surely you’d have announced your April Fool retirement from blogging due to, oh, cancer, or your upcoming rape trial, or the death of a child.

See? These things don’t work, do they?

And for me, neither do the things you joked about.

 

UPDATE: On April 2, Steve Wheeler published a follow-up post explaining his joke:

Of course blogging carries with it the risk of misunderstanding and even rejection, and some bloggers are the targets of those who overstep the mark and who are aggressive or even abusive. No matter who you are, there will be people who oppose you. Some bloggers do indeed suffer from depression and may even resort to alcohol or other substance abuse to escape from the pressure of sustaining their writing. Others are profoundly affected by harsh comments on their blogs. It’s not always a bed of roses. Anyone who is a public author must try to come to terms with such issues if they are to make any progress with their writing. Most of the comments I receive on my blog are very constructive and even those that disagree fundamentally with what I have written are generally presented in a firm but polite manner. Discuss: Is a ‘joke’ like this a valid way to promote discussion?

There’s really nothing I could add to this. – KB

Ratfarming: let’s not

But in our minds the answer to the question “Should I blog?” is now a clear and resounding “Yes”, at least, if conventional indicators of academic success are your aim. Blogging is now part of a complex online ‘attention economy’ where social media like Twitter and Facebook are not merely dumb ‘echo chambers’ but a massive global conversation which can help your work travel much further than you might initially think.

Inger Mewburn and Pat Thompson, “Academic blogging is part of a complex attention economy leading to unprecedented readership” (LSE blogs, this week)

Now that I’m on sick leave, the question of what counts as work has slanted a bit. Academics are chronically prone to working while sick. Often this is accompanied by a little self-justifying jig in which we explain that we love to do our research/teaching etc so much that it doesn’t count as work. But it’s just as often the way deadlines don’t wait, and what we do is essentially contract project work that’s hard to pass on to others. Lying in bed staring at the ceiling, we all feel the pressure of the snowcave of email in which we’re slowly being buried. And so we chip away at it anyway, because it’s easier to maintain an airway of sorts by tunnelling constantly than to have to dig ourselves out from six feet of snow at the end of it.

This is fantastically good news for universities, who couldn’t stay open without the human chain of volunteer labour that extends from graduate students and other adjuncts working well beyond their paid casual hours, to salaried workers taking their email, grading, peer reviewing and online teaching into doctors’ waiting rooms, hospitals and bed.

But the funny thing about cancer is that it seems so extreme, that everyone is advising me to make sure that I’m not working. So I’ve had to think again about what counts as work, and figure out what it is that I want to protect during this confronting, confusing time.

It turns out that I don’t think of either blogging or Twitter as part of my paid employment. Quite the opposite: I started writing online secretively in 2011 because I was looking for somewhere to think for myself. I didn’t want a platform; I didn’t want to promote my research or improve my profile. I didn’t even want people to know who I was, in case this troubled my employer. I wanted to make a bower for collecting things of value to me: thoughts, information, other people’s words that would amount to a better grasp of why higher education felt like a difficult place to be.

A whole lot has changed since then. Academics are now being advised and trained to blog; and as Inger and Pat suggest in their thoughtful discussion of the impact of blogging profile on journal article downloads, it may be that they will eventually be considered delinquent if they don’t.

As it happens, I don’t dispute the utility of blogging in the “attention economy”. It does work, if that’s what you’re after. And I really can see why people get into Twitter for its amplifying capacity. We’re all unsettled by the way that academic publishing encloses writing that was already paid for by a public that then would have to pay a second time actually to read it. As citations become both the carrot and the stick for survival, it makes sense that research managers are becoming more interested in the way that academics could deploy social media to make themselves more upworthy.

But I have some serious reservations about hitching public online conversations to the pseudo-productivity of formal academic publication. It’s not about the impact of social media on the academy, but the reverse. Academic publishing is collapsing as a meaningful forum for the circulation of ideas precisely because its true function is now to maintain the scarcity of repute, in an economy that trades individual reputation for institutional reputation, all of which washes back to the journals themselves. Journals pride themselves on equating difficulty with quality: how hard it is for anything to be published, and how long it takes. They do this because they need to maintain their own business models and market value; these are very hard times for them too. So for prestige to attach to publication, a huge volume of written work that has already gone through many drafts and redrafts has to be rejected.

The squandering of human time that closed, peer-reviewed academic publishing represents is truly astonishing. It’s similar in scale, nature and damage to the other competitive systems on which higher education stakes its claim to excellence: hiring, tenure, grant-getting, ranking schemes. For all of these to be meaningful in the current scheme, they require massive failure rates. This required failure ratio then expresses itself as a kind of personal shame that works as an inducement to further overwork, which is exactly how the human cost is becoming so significant.

And the idea of publication as a means of making funded research genuinely useful has been substituted by the work of counting and factoring up research outputs. The classic story told about perverse incentives is ratfarming under colonial rule in Hanoi: in an economy where peasants are paid per rat kill, the sensible response is to farm rats to kill and turn in for reward. In other words, the rational decision that the system triggers is the exact opposite of the system’s goal. The hyphenation of citation to rankings means that higher education is very close to perfecting in its workers its own ratfarming calculation, and we all know it.

Sure, social media has versions of all this, but it’s still possible to make a space within its generous and substantially ungovernable folds for practices of thinking, sharing and listening that are self-managed, and that work just because they work for you. You can chase followers, or not mind at all.  You can spend all day listening to three people and no more.  You can maintain a valuable and engaging life on Twitter in the same way that you’re stimulated by listening to the radio in the car or having coffee with three friends: you don’t need the whole world in your ear at one time. And above all, you can do this without having to file an end-of-year report.

These not-work practices now need protecting against the seductive but ultimately quite sleazy pull of the attention economy. Surreptitiously joined up networks of people thinking quietly, on their own time, now offer higher education a whole range of models for learning and discovering that will still be here after the MOOC circus moves on.

That is, unless blogging becomes professionally compulsory, in which case we’ll all be in the ratfarm.

Thanks to Bon Stewart for the conversation today that got me thinking to write this down.

Also, health update: I’m off to hospital tomorrow for further surgery. Getting a big diagnosis as a higher education worker, and writing about it here, has taught me that there’s far, far more in this gift economy than the attention economy will ever offer.  Thank you.

Irreplaceable time

Part one: the hamster wheel

The majority of Australians working extra hours or hours outside of normal work hours do so in order to meet the expectations of their job. Almost 60 per cent of respondents report this, with 45 per cent saying that this extra work is necessary often or sometimes. This represents 5.2 million Australian workers who are working extra hours to keep their workload under control and on target.

Prue Cameron and Richard Denniss, “Hard to Get a Break“, for the Australia Institute, November 2013

It’s the crazy time in Australian universities. Research grants are announced, thousands of student grades are being shovelled into student management systems, next year’s business plans are being drafted, graduation ceremony planning is at its fraughtest, and northern hemisphere visitors are showing up to give talks because they’re bundling the southern hemisphere conference season with side trips here and there to make it all more tax deductible.

Last week, the fifth annual Go Home On Time Day campaign pointed out quietly that if their survey findings are extrapolated across the whole population, half the Australian workforce are unhappy with the hours they work. Both the overworked and the underemployed are becoming frantic in this economy of the hamster wheel. Some 2.8 million Australians may need more working hours than they can piece together from casual, seasonal employment, just to make ends meet; and both casual and permanent employees are now suffering from the culture of unpaid overtime:

More than half (54 per cent) of survey respondents report that working extra hours without pay is expected or not expected but not discouraged in their workplace. More than one in five (22 per cent) respondents say that it is expected and more than one in three respondents (32 per cent) say their workplace does not expect but does not discourage it. In other words, the practice and culture of the workplace make this the norm. This normative pressure is felt more by women.

Cameron and Denniss, p. 11

I’ve been thinking about this because on Go Home on Time Day this year, I was sitting in a surgeon’s office. It turns out that I have breast cancer, and I found out that very day.  And here’s the thing: I first thought about getting something checked out exactly 12 months ago. I found time at the end of 2012 to take a day off work, got a referral from my GP, and then the vague unease passed. So I didn’t chase it up.

Over a busy year being both a full-time worker and a parent to three school-age children, I noticed now and then that the unease came back, and I fought with it in the middle of the night, along with to-do lists and unsent emails and ideas for projects and the anxieties of my co-workers and all of my misgivings about working for an institution whose driving mission is to be in the top 1% of world universities, which seems to me as shallow and demoralising an idea as any I’ve heard since I started working in higher education.

And now here we are.

Part two: irreplaceable time

I have breast cancer. A week ago, I had breast cancer, and the week before that, and the week before that. Maybe five, eight, even ten years ago, the first bad cell split inside me, secretly. But I didn’t know. This is how I arrived at knowing.

Xeni Jardin, Diagnosis

It’s been just over a week since the Moment. A routine visit, friendly chit chat about Christmas shopping, and then suddenly a quiet chill in the room, professionals looking at each other but not at me, an emergency biopsy, a result. I’ve had a thyroid scan, a chest X-ray, a CT scan, and tomorrow I’m having a bone scan.

And through all of this I’ve been thinking back to a planning day I recently sat through, which for all sorts of banal reasons left me feeling completely exasperated with the corporate culture of team-building that is so reckless with people’s time and trust. I followed the instructions, more or less, while thinking about how much I’d like to quit my job, and the thing that went round and round in my head as we were hustled through a series of exercises designed to show how perfectly team building is created from the will to win was this: you don’t have my consent to use my remaining time in this way.

Afterwards I puzzled about this a bit: why had it come to me so strongly that it was important to speak back to this kind of dispiriting and divisive activity, however well-intentioned it might be?

I’ve come to this conclusion: I really have a problem with the culture of work in higher education. Having this diagnosis doesn’t make me special, because it doesn’t make me differently mortal than anyone else.  We are neither vampires nor zombies, whatever the craze for playing with these ideas: we are humans, and we are all here together for a very short time, historically speaking. And so that being the case, the question facing us all is this: what do we do about work?

What do we do about the way in which overwork is the price that is now demanded for participating at all? What do we do about the thousands of higher education workers consigned to underwork that prevents them from making their irreplaceably good contribution to the mission of universities or the communities that they care about? Do we really believe that our colleagues in the precarity are there because they deserve it? Do we really think sustainable and healthy workplaces will result from us giving up all of our evenings and weekends just to keep up with the standards set by the most driven, or those with the fewest external ties or interests?

If we have created a culture in which only those who are most single-minded about work are applauded, promoted and respected, we have made something whose capacity for harm is pervasive and long-term. A couple of weeks ago I listened to a senior executive colleague talk in public about how our children value and respect the things we women achieve at work. I don’t disagree that our children recognise that we pour their time into the institutions we work for, but my three daughters are telling me clearly that they experience this as harmful to them and harmful to me. And for those of us who work as educators, this is the at-all-costs behaviour we’re modelling to students who will graduate into an economy that is fuelled on the empty-tank fumes of unpaid labour.

I’ve been thinking for several weeks about a comment Richard Hall made on Twitter, about the need for courage in higher education, not hope. After debating this with him a bit, and taking a while to reflect on my own situation, I’ve come to think he’s right. Hope is the alibi for inaction: what we need is the courage to put work itself at risk.

So this is the choice I’m making, in this irreplaceable time.

These have been part of my thinking this week:

Thanks to Pat Lockley, who is far more sentimental than you might think, this lovely video has been as good a metaphor as any for how things feel:

And finally, personal thanks to Agent Zed, a stranger I know only from Twitter, who answered all of my frantic questions about cancer diagnosis while I was sitting in the surgeon’s waiting room and then checked in afterwards to see how things went.
Note: This is a longer-than-usual post that was once much shorter. For the first time since I began blogging two years ago, I published something entirely accidentally before it was written. So if you came by this through an email subscription, I’m so sorry — that was only half the story, and as a result it’s been rapidly edited since then. I guess this is one of the odd symptoms of trying to process the whole situation. It’s finished now. KB

Pieces of the sky

None of this is inevitable — not MOOCs, not funding cuts, not the death of the giant brick-and-mortar research university or the death of the small liberal arts college, no matter how gleefully the libertarians in Silicon Valley rub their hands as they craft their hyperbolic narratives about the end of the university and the promise of education technology — all their stories about innovation and doom and profit.

Audrey Watters, minding the future, 15 Oct 2013

“Normality’s threatened by the monster” movies sell us a proposition about the human response to threat. Whether we’re facing sharks, aliens or legumes, they tell us that the drama will be extended by people who don’t get it. Some fail to act because they find the risks too preposterous to accept; more venally, they engage in a cover-up in order to protect their own short-term business interests. Then there are crowds who get so excited and optimistic about change that they welcome the end of the world by partying on rooftops with placards.

Over the last couple of days I’ve found myself skirmishing with Jonathan Rees in a way that makes me wonder which of these groups he thinks I belong to. Round and round we’ve gone, more or less like this:

It’s the most we’ve disagreed since we first clashed over whether doing stuff online is the end of the world as we know it. And if you’re called revisionist by a historian, you need to pay attention. But I think what we disagree on is what we’re defending. Jonathan and I both occupy tenured positions. That we have jobs at all is thanks to the willingness of our adjunct colleagues to keep our institutions running. There’s so much wrong here that I can no longer see the difference between Coursera and my local university. Both claim transformative effects, and both seek to maximise market share using minimum labour costs. Jonathan, who is a labor historian and activist, takes a different view, which he expresses much better than I can. But the business bottom line is that he and I depend in our day jobs on labour that is exploited, and on people who are harmed by this, in complex, awful, obvious ways.

So there’s that.

Then I went back to the post that Jonathan threw up as evidence that I’ve been some kind of MOOC apologist, and found there isn’t anything I’d say differently now. This is what I think:

  1. Sebastian Thrun’s argument that there will be 10 universities left in 50 years’ time reduces education to content in a way that fails to understand how limited and provisional content is proving to be. As Patrick Masson points out over and over, the Internet is the only MOOC we’ll ever need, if content is the thing. Audrey Watters’ inspiring response to Thrun’s miserable vision also clearly explains why we should resist this banal reduction.
  2. Content is culturally distinctive and locally relevant, but neither of these makes it economically sensible to produce locally. Sometimes I think you have to live in import-dependent cultural markets to get why this is so important. Australian filmmakers certainly know a thing or two about Sebastian Thrun’s prediction. So we need to take seriously the public good arguments for the preservation of locally relevant educational content, but we can’t do this simply by forcing a diet of local produce on the consumer. Just as we have had to with movies, we need to plan for market failure, because anything else really does involve heads and sand.
  3. To preserve the opportunity to learn locally against the logic of market and massification, we need to co-produce regionally and internationally. Leading Australian universities who’ve taken the FOMO route and partnered with VC-funded providers suggest that Coursera has pitched its exclusive club strategy well; and FutureLearn is following the same path. But this isn’t the only model. It now makes sense to get together those who have most to gain if we change the way status itself is calculated and horse-traded: the world’s small-economy regional education providers.
  4. To understand what locally relevant learning could mean in transnational partnerships we might look at the ways in which UNESCO have framed cultural diversity as critical to the health of the world’s overall cultural system.  Of course, the US hasn’t lobbied enthusiastically for the protection of cultural diversity, but France and Canada have both played a leading role in encouraging us to think beyond the nation as the most important player here.  Let’s work together with people who think as we do.
  5. The world’s knowledge isn’t a resource, it’s the ocean that connects us. This means that it’s much more than just a source of fish for powerful nations to trawl and trade. We all share cooperatively in its health, and we are all responsible for its depletion.

To achieve most of the things that are good under these difficult conditions, some of the time we need to be online, because we need to get out there to think together, and to find our fellow travellers. We do need to stop calling anything a MOOC, let alone everything. This hopeless label now actively prevents us from thinking about differences between a whole range of things because they have “online” in common, even as we try to reclassify them with enigmatic qualifiers like “x” and “c”. In this field, “x” really can be anything, so let’s just say that the problem is MOOC itself. It can’t be recuperated because in the past 12 months it’s been associated with too many toxic enthusiasms: educational colonialism, brand fetishism, and AI as a substitute for the subtle gift of time that humans share with each other.

Yesterday I sat outside watching smoke haze from catastrophic bushfires drift over the place where I live. The fires are a very long way from here, but the burned gum leaves are falling in my yard. They’re visitations from a terrible struggle that someone else just went through, somewhere else, not me. But in falling here they make clear, like pollution or rising sea levels or global finances, that we’re in this time together.

I was waiting to connect to a small live online conversation with educators from Canada, New Zealand and the US, who are all working together in an online course for open educators that’s the first put out by a Canadian start-up, Wide World Ed.  It’s six weeks of talking and thinking, and it’s not all that M, nor entirely O. It’s also not necessarily C.  But it’s online, and that’s how come I’m in it.

Maybe in the rush to market we’ve all forgotten how to acknowledge that the live online presence of other people still feels magical and strange, like falling leaves from somewhere else.  As I listened to the sounds of someone typing at a keyboard after midnight on the other side of the world, I realised that I’m still a believer in what this can mean for locally relevant education.

I just no longer think universities are the best or only way to let this happen.

This is written in gratitude for all the other writers like Jonathan who help me make sense of MOOCs, but especially Melonie Fullick, who was on it very early.

Circus skills

What gets you into it is a love of books and idealising wisdom. What keeps you there is exhaustion and rank fear. … The academy has become the circus.

“annamac”, comment on “There Are No Academic Jobs and Getting a PhD Will Make You Into a Horrible Person”, Slate magazine

I’ve been thinking about what it feels like to be working in a university at the moment, particularly one that’s focused on change. Change is an easy project to pursue, and it always feels good to be proposing to achieve it.

But there’s another conversation about change within universities, that has everything to do with Rebecca Schuman’s sad, important, strategically naive article in Slate on the US job market in the humanities. This is where I found “annamac”, generously sharing the journey she took from believing in university work as “a life devoted to finding the truth” to “the reality – petty rivalries, forced writing about nothing, unreasonable expectations, and the disregard of you as a thinker.”

How well do we support people to weather the changes that are occurring in universities, as well as the changes in their own hopes and expectations? Universities are filled with people who have good ideas about achieving small-scale change to their everyday work practices, ideas that together really would make a difference, but who have no confidence that their ideas will be appreciated or encouraged. This leaves them with few options for changing the situation they’re in, other than leaving, and for this reason they come to see even self-reflection as a kind of waste. Why reflect, when you aren’t entitled or empowered to act?

I’ve been thinking about this because I’m off to a professional development opportunity that has come at an awkward time, given my own ambivalence. Professional development is an expensive exercise for which the return on investment is an employee who has been developed to become a more useful part of the system in which they work, rather than someone who has become even less convinced by it.

Step one in this case has been to answer a survey on the values that I practice when I’m leading a team, which has generated a fancy visualisation. I’m happy to answer these generic psychometric questions, and I’m sure they’re based on compelling research, but they’re not the questions I’m asking myself. These have much more to do with the way my job fits into my overall values as a person, the way I live in my family and my community, and in particular the way that the irregular and unregulated nature of academic work means continually failing to be present with my children, who will be leaving home by the time I get to inbox zero.

There’s no other way to say it: just keeping up with an academic job means that I habitually shortchange the people that I really love, and I’ve made very little contribution to the community where I live, even on things that are important to me. Half a kilometre from my house is a community garden; right under my nose is a mountain of email, grading, a wildly overdue book contract and administration. Take a guess.

I’m not an exception in this. Looking around me I see colleagues figuring out how much work they can secretly do while their kids are watching TV, how many emails they can answer or papers they can grade at the soccer game, how many family occasions they can miss or somehow multitask, whether or not they really have time to go to the gym. Then there are quieter conversations about alcohol, fatigue, shame and depression.

I also hear colleagues celebrating the way that technology has made it easier for them to work “when it suits”, arranging with each other to “do this over the weekend on email”, without looking at the personal, health or community impact of “when it suits” meaning “all the time”. And community impact is one of the most insidious: how many university workers, trained at public cost, make good neighbours to the elderly, or give up whole weekends to volunteering?

It’s not just academic staff. The sense that something really needs to change about the way we work is increasingly shared by many administrative and professional colleagues who are getting stuck in rigidly defined career silos, hemmed in by performance planning, reporting obligations and timekeeping systems that actively limit their capacity to create change, except for the changes that were proposed as actions in the last strategic plan.

Students are also trudging through their enrolment without hope of being able to ask for change, other than by filling out the zillionth survey or feedback form. A while back I ran a workshop for first year students to reflect on the choice they had made to come to university, and the choice they were continuing to make by staying. As I expected, most had thought about quitting. What surprised me was that so many had actively prepared to leave, but had then stalled because they were more afraid of not graduating than they were of boredom, for which high school had prepared them exceptionally well. So instead of figuring out either how to change their lives by leaving, or change the university by staying, they were readying themselves to buckle down, converting curiosity and optimism into minimum-effort pragmatism.

These students were facing a dilemma I recognise. There’s a sustainable familiar situation that will persist if you do nothing, and there’s the potential to take risks and strike out for some kind of unknown. The familiarity of the sustainable situation weighs heavily, especially if others depend on you, but the devil-you-know calculation includes a hidden risk that the gradient you’re on will continue its subtle decline: the unhappy relationship becomes unbearable; the job that bores you starts to make you sick; disappointment hardens into bitterness and anger.

So where there was a simple problem that was external, now you’ve entered into a contractual relationship with the problem, and your decisions—even decisions to do nothing but plod on—are feeding it.

Universities that are serious about creating change and not just reacting to it could listen with a more open mind to the stories and experiences of their own employees and students who have ideas for changing the way we work. We so love to measure things; let’s measure how effectively we support those who come up with ideas for working differently—rather than just filtering them through preset beliefs about successful types and failures, or generic assumptions about the right remedies, offsets, or incentives to improve their performance.

(for C.B.)

And the best piece I’ve read on the original Slate article:

The view from here

Here come the planes
They’re American planes.  Made in America.

(Laurie Anderson, O Superman)

Being a terrifically slow learner, I’ve signed up for another MOOC.  In my defense, I enrolled a while back and forgot, and now it’s come around just as I’ve been forced to admit that there are only so many chocolates you can eat or stuff down your cleavage before it all falls over.

So now I’m in with x thousand others, trying a constructivist MOOC focused on the current and future state of higher education.

But this time, something’s different.  I’ve scanned the assigned readings, and I’ve even printed one. (Although as ever, being a MOOC student is causing my sympathy for all students to double by the minute, as I realise how much of an obstacle to engagement these practical steps prove to be, and how misleading the sense of achievement is when the staple finally goes in. That’s it!  My work is done. The reading is on my desk. OK, back to email.)

Now I’m looking at achieving a personal best by completing the first task, which is why I’ve slumped into a deckchair to reflect on the pressures causing change in higher education, and their possible consequences.

This is a whole skip bin of questions, so I just want to grab a bit I can reach: why isn’t higher education a powerhouse of change, given the innovation talent pool a university typically represents? I have a feeling the devil’s in the small print on this one.  We can change big things, but in the banal and everyday routines we’re not seeing anywhere near the rate of change that most commentators predict. A significant cause of this is that most higher education institutions—whatever the impression created by international rankings—are at heart really parochial. We compare internationally, but we compete locally, and we’re governed by local cultural habits as much as by our locally enabling legislation.

I’ve been thinking about how parochialism operates as a brake on change since reading Ferdinand von Prondzynski’s discussion of the introduction of a Higher Education Achievement Report for British students. To Australians, the idea of a transcript that looks at what students have actually done while at university isn’t revolutionary, but the view from the British system is this:

The expectation that students, employers and others will abandon grades [degree classifications] in favour of a general report is probably naive. Grades are too much part of the culture of higher education and recruitment for employment, to mention nothing else, for that to happen.

And this is how change doesn’t come about: because people look at the way things have always been done in the system of which they’re a part, and they can’t imagine how it could be otherwise, no matter how much evidence there is that this change has already happened somewhere else and everyone is going about their business without fuss.

Taken-for-grantedness is buried deep in our capacity to evaluate the properness of any higher education innovation within our own culture, but it’s also highly exportable if you have enough cultural muscle. This is why education systems in many younger, smaller economies stick with taken-for-granted habits borrowed from somewhere else, from the Oxbridge-esque sandstone quadrangles of Australia’s Group of Eight, to the ceremonial language and even the canned music of our graduation ceremonies. And don’t get me started on hats with tassels.

It’s also how the whole world got used to “Facebook”, even though a facebook was a distinctly North American campus phenomenon before it was a social network.

At one level, it does look as though MOOCs have driven a truck through this, by being so big, so free-floating, so global. But what’s actually happening is that MOOCs are still mostly made in North America, and the rest of us have an interesting opportunity to experience first hand how they do it, watching their classes, seeing into their lecture theatres, learning about the culturally particular interaction between professors and TAs, figuring out what typical assessments they use. And in this case, we’re also using resources that are for the time being predominantly drawn from North American media commentary on changes to North American systems, even though there’s a clear mission by the (Canadian) team involved to challenge this somehow.

And there are global taken-for-granteds in play, the hardest ones to unthink—despite our mission as researchers (and teachers) to make change thinkable in many other spheres. Here’s one: even an open, constructivist course that’s not delivering itself as a form of potted TV can’t do without a selection of weekly readings. George Siemens refers to these preselected readings as a “starting point that people want — a contract“, and this expectation certainly matches my experience of removing assigned readings from my own teaching, at which point people looked as though I’d told them I was planning to teach in my underwear.

The capacity to assign the right sort of readings turns out to be a habitual signal of academic expertise, one that we don’t even notice ourselves reinforcing. I know there’s a risk of disingenuous countersignalling in choosing to avoid this when I teach. But for me the alternative is riskier: that we focus our entire teaching strategy on replicating our own expertise in the minds of others, and we close off the possibility that learners may engage more effectively by finding their own resources to share and then seeing how others respond. That’s what keeps Twitter ticking over for many academics, after all.

We were asked in week one what CFHE12 could do better, and after a bit of brooding this is my practical answer: a way of thinking about how higher education could change one of its most unexamined habits, and how, in the same move, MOOCs could really make good on their global promise.

Instead of asking participants to introduce themselves “to the class” (awkward, given the constituency) in the first forum, and then respond to the assigned readings in the next, what if participants had all introduced themselves by linking to a locally relevant reading that speaks to the way in which higher education is changing (or not) right where they are?  Curating these in a wiki or social bookmarking system would have created an instant bibliography of the most up-to-date higher education research and commentary, sorted on a country-by-country basis.
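(For what it’s worth, the sorting step is trivially automatable. Here’s a minimal sketch, not anything proposed in the course itself, assuming participants’ links could be exported with a country tag; the field names and sample entries are entirely hypothetical.)

```python
from collections import defaultdict

# Hypothetical sample entries: in practice these would come from whatever wiki or
# social bookmarking tool the course used. None of this data is from the post.
introductions = [
    {"country": "Australia", "title": "Higher Education Achievement Report debate", "url": "https://example.org/au-1"},
    {"country": "Canada", "title": "Open education start-ups", "url": "https://example.org/ca-1"},
    {"country": "Australia", "title": "Rankings and regulation", "url": "https://example.org/au-2"},
]

def build_bibliography(entries):
    """Group submitted readings by country, with titles sorted within each country."""
    by_country = defaultdict(list)
    for entry in entries:
        by_country[entry["country"]].append(entry)
    return {country: sorted(items, key=lambda e: e["title"])
            for country, items in sorted(by_country.items())}

# Print the country-by-country bibliography described above.
for country, readings in build_bibliography(introductions).items():
    print(country)
    for reading in readings:
        print(f"  - {reading['title']}: {reading['url']}")
```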

It’s a concrete example of something the constructivist MOOCs—which seem to me to treat their mass enrolment as a capable resource, not just an audience—have the capacity to create, and that your local university can’t.