University participation has risen spectacularly. The target of 40% participation should be comfortably met by 2025. The nation has quickly moved from an elite to a mass higher education system. The second equity target has proven more challenging, but progress is being made. The relative proportion of low-SES undergraduate students rose from 16.2% to 17.7% between 2009 and 2014. In the same period, the overall number of undergraduate low-SES students increased by 44%, while other cohorts increased by 30%.

Andrew Harvey, ‘Uncapping of university places achieved what it set out to do’, June 2016

1

It takes a while to notice something’s wrong. There’s a sound that doesn’t quite belong, although not by much—it’s not like a siren right in your street, or a breaking window. So you catch yourself noticing it, and forget to look up. But five, ten minutes later it’s still there, and look, it’s a helicopter hovering, hanging in the air like a kite. Then it’s looping out in a wide arc and coming back to exactly the same spot. Round and back, round and back, all morning.

Rescue helicopter, 2012, elleneka102, shared on Flickr CC BY-NC 2.0

Once you’ve seen it, you don’t unhear it. Explanations start unspooling, tumbling over each other, tangling up. It’s hovering over a major intersection, it’s scanning the escarpment where a hiker might have fallen, it’s following a car chase, it’s filming something, it’s hunting for someone. Neighbours come out of their houses and look up. How long have you been hearing that sound? When did it start?

2

In the past few weeks I’ve exchanged thoughts with people about the rise of analytics in higher education, and especially the arrival of personalisation. What separates personal from personalised? This email came:

Dear Kate,

Hope this email reaches you well!! Hurry up, Don’t miss SAFe AGILEST Training Program … It is our sincere hope that this new version helps you and your enterprise achieve the benefits you all deserve.

Dear Alice, I worry about sincerity in your hands.

Personalisation is the endgame of consumer analytics. It’s the point at which wide surveillance morphs into individual care, without the actual cost of staff. In universities, social data about students layers over all their tracks and patterns as learners, their collisions and intersections, all the half-cooked queries and false starts that no one much intended to share; personalisation lets us zoom in with an unmanned drone to drop off a map to a journey, crafted just for them.

And if we notice they’re drifting from the trail, how could it be a bad thing for us to use our insights to recover them, adjust their progress, set them straight?

It turns out learning analytics is a field where people say “intervention” without unease. We intervene like good people stopping a fight, like bystanders who step in and rescue someone. We come between someone and what fate seems to have stored up for them. It’s a salvationist theology: we know what’s best for others, and we can see when someone’s tilting, and possibly falling right off the wagon.

The problem is that this is exactly the kind of reformism that drives the other kind of intervention, the tough love kind, the governmental kind. We intervene out of faith, prejudice and self-interest. We intervene to help failing students become their better, more successful selves in ways that worked for us. We intervene because we’ve gone on selling the graduate earnings premium like a cheap watch despite all the evidence that the labour market is falling to bits. And we’re hardly disinterested. Our intervening zeal has a grubby side: students can’t be left alone to fail, to make a plan that doesn’t involve us, because their completion has a dollar value, and their success grows our reputation and our market for the future.

So we also intervene because we’re sandbagging our business plan, and our revenue stream. We intervene to ensure that every student who enrols in year X sticks around until year Z, all doing the exact same amount of stuff, at a foreseeable unit cost that enables us to plan. And in service of our interventions, a well scaffolded curriculum works for us like a movie of standard length works for a movie theatre: a business efficiency sold as a unique and transforming experience.

This is why we’re seeing whole divisions appearing whose role is to hover over learning, to track all the things that learners do, to gather data so that our interventions are precisely targeted. This is also why so much effort in the governance of digital learning is focused on getting more students doing more things in the LMS, even though this is one of the least engaging environments for actual learning; it’s why there is increasing policy focus on placing data capture points in curriculum, assessment and feedback; and increasing responsibilities for staff in managing the digital records of student learning.

Behind all this local busywork, there’s a powerful and well-funded research effort that’s being sustained by these changes, and that’s constantly searching for new action. What new data can institutions recruit? What new insights can be drawn out of fresh combinations of things we’ve always known? If for example we can pinpoint the exact moment when attrition risk begins and we can personalise the perfectly automated intervention, can we enrol more and weaker students in better conscience? With sufficient personalisation, and perhaps some upfront investment in digital resources, could more students self-manage their learning, and could someone still be prepared to pay for their experience? What if those students were in large and underserved education markets in developing economies? 

This is the lesson that MOOC pioneers have left behind for us to think about as they pivot into the next phase of their business plan: that analytics, automation and personalisation are the basis of a low-cost and skeleton staff educational experience that can be rolled out anywhere, and that only needs a modest fee-for-access to cover its costs, providing the market reach is wide enough.

But the patterns that analytics can make visible are those that should be starting human conversations, not replacing them. This is why we need to be far less sanguine about twinning analytics with cheap labour—let alone tutor bots—because if this human conversation is going to help students personalise their own learning (and they are surely the right ones to be doing it), it needs staff who are resourced with time, stability, experience and the confidence to hear what students have to say.

3

No two students who quit university do so for the same reason. The decision to leave is part of a complicated story that began long before they arrived, and will go on to deliver future outcomes none of us can see. It involves families, friends, and a muddle of hopes and fears that are political, social and contradictory. This semester I’ve had the privilege of listening to students who left and came back, who are on the verge of leaving, who have changed direction and changed again. The toughest stories to hear are from those who are staying because the risk of leaving seems worse, in this employment market, in this region, at this time, with those family hopes backed up behind them.

Thanks to the data we hold on enrolment, retention and completion, we know these students only as the basis of our claims of policy success. They’re here, they’re meeting all the deadlines and earning grades and moving through the curriculum right on time. Analytics based on tracking failure and discontinuation won’t help them, because their problem isn’t in this terrain at all, but in messier zones of self-doubt, fatigue and anxiety. To understand more about the experience of the student who detaches without leaving, and why this should matter more to us, we need to show up in person, to listen fully, and to let each story stay a whole one.

There are challenges and opportunities facing social and narrative researchers in education: scale, replicability, transferability are all troubled when the focus is on the stories learners tell rather than the observable things they do. But there are explanations that can’t be found by any other means, that can’t be seen by hovering. So let’s have this conversation openly and optimistically, and see what we can add.

#sonar

10 Responses

  • francesbell

    When I read this (lovely post BTW), the ideas of the very wonderful @josiefraser popped into my head and I did a bit of googling. Here’s what Josie had to say about personalisation in 2006 and I think it might be interesting for you 🙂 http://fraser.typepad.com/edtechuk/2006/10/personalisation.html
    How about the institution thinking about how it engages with the learner? Let’s think about the agency of students.
    It’s a bit depressing really that things might be worse than they were in 2006:(

    • Kate Bowles

      Frances (and Josie) this is a terrific link, thank you. I was interested to see Dave Cormier also engaged in that conversation in the comments, and so I find myself wondering: what went wrong? Why are we where we are now? For Australian public universities, the answer is in the budget; for all of us it’s the increasing pressure from global competition to do more (research) with less. If teaching revenue sustains the operation, then teaching is very obviously subsidising research, and eventually something has to give.

      So we end up in a situation where technical capacity that has the potential to do very good and useful things gets redeployed to do cost-cutting things, however they’re labelled.

      But I think since 2006 a whole lot of other things have happened in the field of consumer personalisation that have brought some ideals into education without sufficient attention to the assumptions behind them, and whether they really are a good fit for how learners think and behave.

      • francesbell

        Kate and Vanessa – you both know much more about tech/labour issues than me (and I learn much from you). I am finding it quite fruitful to think about work that influenced me in the mid to late 90s when I was getting interested in practicing and researching web-based learning, as we called it then:) One was http://uncommonculture.org/ojs/index.php/fm/article/view/569/490#d4 by David Noble that is interesting to reread today.
        Another strand of work was from Stephen Ehrmann who worked on the Flashlight project at Annenberg. He said https://ns.calico.org/html/article_596.pdf
        “If we want an educational strategy that can expand the envelope of possibilities for outcomes, accessibility, and education per dollar, we must: 1) create coherent, pervasive change in the organization of teaching and learning in ways that respond successfully to the Triple Challenge; 2) use computers, video and telecommunications in a crucial supporting role (or not at all); and 3) use those crucial technologies in a pervasive, coherent way to support those pervasive, coherent changes in teaching and learning.”
        Unfortunately, I think that the educational strategies that emerged have focused on the education per dollar/ pound/ Euro at the expense of outcomes and accessibility leading to scenarios closer to what David Noble considered.

    • What popped into my mind was a 2005 Savage Chickens cartoon about doing more with less, http://www.savagechickens.com/2005/11/do-more-with-less.html. OK so did a lot more — I’ve been thinking about and collecting on the tech and academic labor issue for some time now, ditto personal v personalized learning — just that the chicken came first.

  • Kate,
    Thanks for the insightful post. I think it’s always a positive thing to question what players in academia propose to “do” with analytics. I wanted to clarify something you referred to. In Section 2, you say:

    “It turns out learning analytics is a field where people say “intervention” without unease. We intervene like good people stopping a fight, like bystanders who step in and rescue someone.”

    Later in that section, you say:

    “But the patterns that analytics can make visible are those that should be starting human conversations, not replacing them”.

    I don’t quite see the statements as opposing as you do. I think the term “intervention” is conflated with “prescription”, and I’m not a fan of that. When I talk about intervention, I say it’s like tapping someone on the shoulder and saying “Hi there…I just wanted to see if there is anything you might need help with”. This is all about starting a conversation. I’ve said it numerous times before — analytics should be used to complement the human decision-making process, not replace it.

    It’s true…there are those who profess that the machine knows what’s best. Hopefully, we get more folks calling these players out and shining a light on the fallacy. However, I don’t think that these squeaky wheels (or really, wheels with large marketing budgets) should skew what’s really going on. I think that most institutions are looking to provide resources to help students with coaches/advisors, not with bots.

    Just my opinion on the piece. I’d love to have a synchronous conversation on it if you think differently. Feel free to reach out.

    Mike Sharkey

    VP of Analytics, Blackboard (mike dot sharkey at blackboard dot com)
    Previously: Blue Canary (founder), PAR Framework (charter member), University of Phoenix (Dir. of Academic Analytics)

    • Kate Bowles

      Hi Mike, welcome and thanks for this useful reply.

      The trouble that we’re seeing, I suspect, isn’t caused by the emergence of analytics in principle or in practice, but by the specific combination of analytics and casualisation emerging at the same time (and for fairly similar budgetary reasons).

      So institutions have access to significant data, provided student learning is corralled through the right systems, and ideally this enables faculty to reach out in exactly the way that you describe. But when the growing majority of faculty are adjuncts, many also working across institutions, often very tenuously linked to the institutional circulation of data, there’s a big disconnect between the conversations data could trigger and support, and the working reality of the faculty who need to be supported to have those conversations with students.

      I’ve been wondering for a while how these college employment trends affect the development roadmap for companies like Blackboard, and I’d be interested to know your thoughts on this. It seems from this vantage point that there’s so much innovation in edtech to enable platforms to provide more and more nuanced data, and I’m not against this at all, but while it’s so disconnected from this parallel trend in faculty hiring, we aren’t creating the conditions in which human insight can exploit the larger patterns that data can reveal. This is where we keep making the mistake of thinking analytics will cut corners for us, instead of creating new vantage points to see round corners more intelligently.

      As a narrative researcher I’m optimistic that we can figure out new ways to harness the strengths and insights of both. Very specifically, I’m curious to know more about how data based on capturing observable behaviour fits, or doesn’t, with our understanding of student decision-making. To me, this decision-making is something we all really care about, that’s framed by a far more complicated sense of self than the behavioural snapshot shows, that has a longer history and some very subtle issues in relation to imagining the future.

      • I’m intimately familiar with the adjunct trend. I was one of them, having taught at the University of Phoenix for 10 years (both online and face-to-face). While I don’t have an answer for how this trend affects technology companies in general, I can tell you two effects it has on someone like me who has the ability to impact some product roadmap decisions:

        1) Help adjuncts offload administrative tasks
        My feeling has always been that faculty should spend less time doing things like counting forum posts and calculating grades, and more time doing what helps the students most — providing quality feedback. If we can leverage data to automate the former, we allow a fixed-rate employee like an adjunct instructor to spend more of those precious hours giving feedback.

        2) Focus on coaches/advisors as much as (if not more than) faculty
        We can have a lively discussion about whether the shift to adjunct is related to a shift towards student success coaches/advisors. Regardless, it’s definitely a trend (in the U.S., at least). While many adjuncts care deeply about student success, some are more pragmatic and see their primary responsibility as teaching the content. Therefore, we want to focus analytics like predicting student risk on BOTH the faculty and the advisor, instead of a traditional faculty-centric model.

        I like your narrative around the adjunct model trend. I’d agree that there are both unintended consequences (like the impact of data you discuss) and intended consequences (you get what you pay for). Thanks for beating the drum on this. I’ll reiterate my position that data should complement the human decision-making process. If there are ever intimations that data should wholly supplant qualitative instructor feedback, then start raising the red flags.

        Mike

  • Great discussion Kate. From what you say here, it seems to me that what “intervention” lacks is recognition, in the strong, normative sense. Turning up, listening – these are the actions of one who recognises the other person as just that, another person. Not a business unit, not a series of digital actions or grades. And being recognised as competent, intelligent, and individual is a part of what enables a student to be free to learn.

  • Kate Bowles

    Andrew, this is exactly right, and I’m thinking now that it’s also why intervention and recognition are polarised political options. We’re not saying nearly enough about the way in which faux personalisation is degrading our capacity to recognise each other in complex ways. And yet we know exactly when the email is fake, and we know exactly how it makes us feel to be addressed in this way. So the risk is that if we automate the intervention, students will also know. Unless we complement the investment in data with the humans who can genuinely, personally follow up, we are committing ourselves to fake emails.

  • It’s already a problem, and that’s without the analytics uses.

    My own teaching experience suggests that many students, and precisely the ones who are newer to university, often despise (i.e. ignore) their official email accounts, because they are filled with impersonal announcements from various arms of the Uni which they deem irrelevant or unimportant.

    I wholly agree about the investment in genuine people – I’d like a university to invest in me!

