That we have found the tendency to conformity in our society so strong that reasonably intelligent and well-meaning young people are willing to call white black is a matter of concern. It raises questions about our ways of education and about the values that guide our conduct.

Solomon Asch, “Opinions and Social Pressure”, 1955

1.

Lava Lamp, Ged Carroll 2011, CC BY 2.0

It’s been a week of sitting and thinking as the presentations slide by. University strategic planning is a bit like a lava lamp: ideas rise and fall gently, and come back up again later in much the same shape. We’re mesmerised by incremental change on slow repeat.

So, full of coffee and fancy catering, we stew over trends and brainstorm ideas for budget repair. Corporate euphemism bingo is an easy mark. People who haven’t taught for a while say “at the coalface” a bit awkwardly. Students are represented only in charts. Percentages make us feel sciency, and tempt us to compare things of incomparable size. The data is so convincing, the narrative so authoritative, it feels naive to ask whether the problems we’re facing might be messier, less obvious, in their causes.

While I was looking away, I noticed Mike Caulfield on Twitter pointing out that data can only see what it has been trained to see. If an algorithmic image search has never seen an emotional support peacock being taken through an airport before, then “fish” is a good enough guess. And if an algorithm tells us that a peacock is a fish, the natural human response is to sort of see it that way too. We’re trained sympathisers.

Google image search misidentifies this peacock as a “fish” which I find fascinating (because I can sort of see it!) pic.twitter.com/2y6fXrjOfW

Solomon Asch’s famous conformity tests of the 1950s demonstrated that an individual can sometimes be persuaded that the evidence of their own eyes is wrong, if the majority claim to see things differently. Asch experimented on small groups of male students, planting an individual among actors who had been coached to give the wrong answer to an obvious test of line length. Left alone to think, or allowed to report privately, the unknowing individual gave the wrong answer less than 1% of the time; under the pressure of a unanimous wrong answer, and having to report publicly, he yielded to the group in roughly 37% of trials.

This is the part that history has chosen to remember, and that crops up in the business and leadership literature. But in his post-test interviews, Asch documented more nuanced accounts of what participants thought they were doing as they tried to make sense of what was happening to them. Humans are social: attending to contradictory reports of phenomena we expect to experience in common is part of an intricate ethical negotiation over how we hope to get along together. It’s critical to understand this, because it hits us hard when it fails.

Ronald Friend and his colleagues map out the erroneous reproduction of the conformity thesis in the social psychology literature from 1953 to 1984, and point their readers instead to Asch’s underlying view of the way we all encounter the world as different members of a shared social field. Asch believed that we start with an expectation that others see the world as we do. That’s the starting point for responding to statements that contradict our own perception: we accept that someone else, standing where they stand, might see things differently, while acknowledging the epistemological trouble this brings us. To Asch, consensus isn’t simply a practice of yielding to untruths but of placing confidence carefully in the possibility of sufficient cohesion; and this is exactly how the risk of conformity is introduced. So in the social field, we balance the need for productive consensus with the need to call out data that we know to be misleading.

And as Mike knows, this balance is now radically undone. He’s driving a key initiative in the US to raise understanding of digital polarisation; he really thinks about algorithmic judgment as a new political formation, one that we’ve underestimated. We’re not alone together in Asch’s social field any more: we’ve outsourced the work of telling the peacock from the fish to non-human actors, even though as humans we will go on trying to make sense of their outputs using the same social efforts that Asch observed. We will learn to sort of see it.

And so the more we squint and try to see students as enrolment data points on charts, the more they start to look like fish too.

2.

While we’re watching the charts glide by, my daughter is moving to another city to become someone else’s commencing enrolment data point. Is it worth the debt she’ll take on? And what responsibilities do universities have for recruiting students into debt on the promise of employability, when we have so little influence on the deterioration of the labour market?

The future of work we’re selling to students like her looks a bit like the new Amazon campus in Seattle, all natural light and four-storey plant walls and treehouse pod meeting points. We hope our graduates will drift among the unassigned workspaces, cherished for their creativity and problem-solving energy and critical thinking skills. We tell them that the jobs we’re preparing them for haven’t been invented yet, or at least that all the jobs we’re doing now have been so transformed by technology that they might as well be new. (For a deep look at the history of this ruse, read Benjamin Doxtdator’s marvellous Field Guide to “Jobs That Don’t Exist Yet”.)

But the social impact of the future of work is more complicated. This week tech media has discovered Amazon’s 2016 patent application for a tracker to record worker hand movements, reducing the need for local human supervision.

Ultrasonic tracking of a worker’s hands may be used to monitor performance of assigned tasks. … The management module monitors performance of an assigned task based on the identified inventory bin.

This is undeniably futuristic too. And as every tech journalist points out, it doesn’t matter whether there are active plans to use this device this year, or even this decade. It’s just a patent.

But this is our culture making sense of something: this is group human consensus forming around what’s acceptable in disruptive innovation. For Amazon’s corporate employees to enjoy the benefits of 40,000 plants from 400 species, specially chosen to thrive at temperatures comfortable to humans, its warehouse operations need to be optimised to the point of cruelty. And so there would have been college graduates at corporate level involved in every step of this awful thing, from vision to design to patent preparation and submission, apparently seeing black as white at every step, apparently not speaking up.

So we come back to the real value of what we do. As Alex Usher points out, the debate over the economic value of education pivots on whether it improves skills and has the potential to raise productivity; or whether it’s a signals game, in which case benefit is primarily private. Universities need to stop hovering on this one. We need to stop carrying on about employability, and take a wider view.

Sure, we need to know what college degree will help this year’s 18-year-olds survive for the next 40 years in a future where work is being transformed so aggressively. But let’s set a more ambitious strategic goal for ourselves. The role our graduates play in shaping this future can’t be confined to whether they survive and what they earn. Our real future lies where it always has: in what our graduates will do to build a socially just future for themselves and others.

So what kind of strategic courage can we embed in our planning now, and what values should guide our conduct, to make this more likely?

5 Responses

  • Scott Johnson

    Maybe we feel uncomfortable about our ability to name which changes will stick and this leaves us unable to predict? If we can’t predict, even wrongly, we drift without purpose and need to learn to forgive ourselves so we can be SURPRISED by the future and not feel dumb. If we weren’t prepared for the future, we might take more interesting paths to resolutions.

    I like the headline below as an illustration of the incompleteness of algorithms to solve all problems. Or even better, that algorithms are without pretense to knowing what they don’t know.
    Volvo admits its self-driving cars are confused by kangaroos
    https://www.theguardian.com/technology/2017/jul/01/volvo-admits-its-self-driving-cars-are-confused-by-kangaroos

    • Kate Bowles

      This is a perfect example. We know that algorithmic image recognition is capable of something like learning — or at least, something that gets called learning as a kind of metaphor. And I take your point that non-human actors don’t bluff, because bluffing is a particular kind of social skill.

      Nice to see you here, Scott.

      • Scott Johnson

        We can’t be familiar with everything and it’s absurd to blame education for leaving us unprepared for the unexpected. In fact it may be advantageous to be unready or new at something.

        Some time ago I read an article on how NASA made up a qualification list for people applying for the newly invented profession of exo-biologist. Think it had more to do with curiosity and imagination than certainty.

  • Dr Ann Lawless

    Thanks Kate. Any thoughts on how this applies to whistleblowers in the academy?

  • Kate Bowles

    I’m not sure if I’ve understood you, Ann, but my thought is that the ways in which universities treat their own workers connect us directly to the deterioration of work in other sectors. So to that extent I think both precarity and whistleblowing inside the academy are something like moral tests: how do we treat our own workers, and what do students learn from watching how we conduct ourselves?

