“Chat GPT is like an e-bike for the mind”

Sam Altman, CEO of OpenAI, Twitter, 16 December 2022

“We think the casualisation problem will be solved by AI in about 15 years or so.”

University CEO, 2019


Hello, it’s been a while.

May 2020: something about refusal, and then three years of careful-what-you-wish-for. Three years of writing that refused to be written, as we all tried to find our footing in a pandemic that wouldn’t end, and wouldn’t end, and wouldn’t end. I’m not the only person who couldn’t write, and then couldn’t return to writing when I thought my feet might be on more solid ground. I kept wading forward with everyone, holding things above my head, bracing for the next thing.

In a more literal way, during that time I walked around my neighbourhood, trying to figure out how working from home could improve on working all the time. Following bike paths I noticed something new: teenage girls on two-seater e-bikes. I knew nothing about e-bikes; what I was looking at was only their effect. No one was sitting on handlebars, everyone was getting up hills easily. It looked fun and safe and liberating. The environmental impact was surely an improvement on driving to the shops; girls not walking home alone in the dark looked like a win.

“Do you have any idea how much those things cost?”

No, I didn’t. But I do now.


In 2023, e-bikes are commonplace in our town. I read the stories about their lithium batteries catching fire. The consumer reality has made its brutal point: teens riding e-bikes in my neighbourhood are part of the real estate updraft that’s put million-dollar price stickers on every house, and candles and cushions in all the stores. We’ve become an upmarket weekender community, and e-bikes are the casual reminders that hanging out at the beach is just as much about Australia’s housing inequity crisis as anything else.

This summer’s craze is ChatGPT. ChatGPT is the specific tool that’s become a shorthand for near-Artificial General Intelligence (AGI) in the same way that Google made itself the brand verb for “search”. Of course, there’s more than one product in this market, just as there’s more than one e-bike. But for the moment we seem to have agreed in education that the generic term is ChatGPT. The same employer that briefed me in a panic in 2020 on the best way to style my home for Zoom teaching is now sending anxious emails about ChatGPT, encouraging me to safeguard academic integrity with new rules while also embracing AI for education as the revolution we can’t refuse—as if we weren’t all already stuck to the flypaper of its major investor, Microsoft.

Microsoft already organises our communications and our calendars. Microsoft greets me by name at the start of the week and suggests ways I might be more productive. Microsoft filters my email into “focused” and “other”. Microsoft nudges and corrects and cajoles and soothes all day long. Microsoft is unfurling the roadmap for integrating ChatGPT into Teams, so that we can become more productive together and the minutes and action items will write themselves, and just quietly Microsoft will notice who arrived late and left early.

Microsoft injects AI into Teams so no one will ever forget what the meeting decided

In the virtual corporate world, slackers have nowhere to hide

This is the wedge that has already brought AI into universities as corporate workplaces. Educators and teaching are not the target market, any more than we were the original market for Zoom. Nevertheless, just like the pandemic, AI is driving erratic and panicky policy change. We’re trying to seem both progressive and principled at once, but the reality is that we’ve been caught out. This is much bigger than helping college students cheat, or helping them learn, and it didn’t happen overnight. The risk isn’t just to research integrity or assessment practices, but to job security: people who work in universities, especially in student services (already casualised to the bone) should expect that the insertion of AI into college customer service will change their working experience sooner rather than later.

AI usage is exploding in facial and voice recognition technology, medical diagnostics, algorithmic trading, and automated customer service bots. … the AI software market is expected to jump 21.3% to $62.5 billion in 2022, forecasts market research firm Gartner. The research group adds the worldwide AI semiconductor market will grow to more than $70 billion by 2025, up from $23 billion in 2020.

Artificial Intelligence Stocks To Watch: Big Tech Expands AI Products, Services, January 30 2023

In January I signed up for an account with OpenAI, and began a conversation with ChatGPT to see how it felt to play with a toy from such a pricey toybox. And it’s true, it’s like an e-bike for the mind: it’s fast and fun, it sails along. I ask a question and it spools out an answer. After enough of this, I no longer care if the answer is wrong or is accidentally useful. I’m not even reading, I’m retaliating. I’m trying to get it to inform on itself, and it’s chatting away, tactfully avoiding my interest in its training models and the ethics of its trade, but turning out readable prose nonetheless. I’m starting to recognise its habit of numbered list arguments, and the way it keeps finishing its answers with a smug little pirouette: it is important to note however. (If you think ChatGPT is smart to have picked up this academic writing tell, get along to Ludwig.)

At least it’s making whole sentences, which is more than I’ve been doing. I can’t remember why I was so worried.

ChatGPT I hear you calling
ChatGPT I feel so blue
ChatGPT I think I’m falling
I’m falling for you

(That’s not ChatGPT, that’s me. Less sincerely than L. Cohen, to give you a clue.)

But like an e-bike, I also don’t know how it’s made, or how much it costs. It’s free to me. Well, it was.


If you don’t know much about e-bikes, this is one article you could read. It’s the story of two teenage girls, an e-bike, a fatal accident, and how e-bikes are made profitable through the use of cheaper parts, including brake parts. Read with care: this is a story about a terrible family loss. But it’s also a generic story of how a craze works: business booms, risk is underestimated, and we don’t think enough about what could go wrong, until it does. Suddenly there’s more than one person saying that they also couldn’t get their brakes working properly, or that e-bikes aren’t for kids. Courageously, the writer—an experienced cycling journalist—tells their own story of buying e-bikes for their teenagers during the pandemic:

I was aware that certain compromises were made to hit that price point, but I initially felt it was a solid and extremely fun bike for the money.

Crazes are moral honeytraps. They work because of compromise. Developers and investors dive in early, and go hard. Component and labour costs can be cut by using low-wage workers in China (for bike parts) or Kenya (for content moderation of the LLM training data used in ChatGPT), and markets can be persuaded to look the other way. No one thinks cheap brake parts are a problem until the brakes actually fail. Market forgiveness is assumed, because this is how the start-up and investor classes learn. Fail early, fail often. As the CEO of the company that made the e-bike in this story wrote last month to his customers: “As a young company, we recognize that we have made mistakes. Now we are dedicated to learning from them.”


As educators, we don’t have the same liberty to accept and even to embrace near-AGI in its adolescence, while it’s still trying to sort out problems with racism in its training data, or while its celebrity champions are still equating ethics with censorship, and certainly not while it’s using sweatshop labour. When we invite students to engage with it for any reason, we exploit the working lives of the employees of the company in Kenya who were hired to clean out content from its training data that included “graphic detail” of exactly the laundry list of internet gore you would expect. One worker described the experience as “torture”. In return, a spokesperson for OpenAI said this:

Our mission is to ensure artificial general intelligence benefits all of humanity, and we work hard to build safe and useful AI systems that limit bias and harmful content. … Classifying and filtering harmful [text and images] is a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content.

OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic, January 18 2023

This explicit devaluing of the workers who stared and stared at harmful content so that AGI can benefit “all of [the rest of] humanity” matters to universities. They’re in our wheelhouse now. So instead of protesting or celebrating that ChatGPT can write an exam question or a sonnet or a bit of code, we need to focus on how this was made hygienic for privileged users like us. The answer is simple: low-waged workers in Kenya were recruited at piecemeal wages to clean up sexual abuse, hate speech and violence, and neither the US-based outsourcing company nor the global “impact sourcing” recruiting company thought to refuse until the human damage was done. That’s happened, and it happens all the time. What’s up to us is whether we think it matters enough to make ChatGPT a moral problem for us.


So what do educators do with ChatGPT, other than pour our own free labour — and that of our students — into continuing to train its model? At the moment, I don’t have an answer except to keep learning.

These recent articles on ChatGPT, education and labour have impacted my thinking [updated with new links 7 February 2023]:

Autumm Caines’ excellent blogs are thorough on ethics, privacy, data, and safety. Start with this one and work back.

Donna Lanclos and Lawrie Phipps credit Autumm’s work and have provided candid boilerplate text for students to use when crediting the use of ChatGPT, that acknowledges “that the system was trained in part through the exploitation of precarious workers in the global south.”

Brenna Clarke Gray has three excellent posts up and is still thinking. Start with this one on writing assignments and work forward.

Christina Hendricks is a philosopher who writes about ed tech ethics, and ethics in general, and has looked at the ethics of LLM in education here.

Anne-Marie Scott has a frankly worded rant about it all on her blog, and her focus on dignity and equity in education really matters.

Guy Hoffman questions the value of students outsourcing their writing to ChatGPT, and speaks up for writing as thinking process here.

Dan McQuillan looks at the labour issues masked by AI hype here.

Rob Horning has a beautiful essay that looks deeply into what it means to be a human facing all of this. Make time for this one.

Writing in Slate, surveillance researcher Chris Gillard and Pete Rorabaugh, who have also been studying the outsourcing story, argue for the urgent regulation of this industry, and advise educators to stop and think about whose interests are served when we are told that our only option is to live with it.

Robin Winslow is keeping a close eye on Microsoft’s predatory interests, and the centralising of everything.

Nabil Alouani takes a closer look at the case of the outsourced digital labelling and puts it bluntly: “legal slavery is an open secret in the tech industry.”

On gender issues in tech and outsourcing, pair this with this.


One other thing that changed while I was away: I’m no longer active on Twitter except to read. I’m keeping my words away from that company, with some misgivings about leaving but many more misgivings about that CEO. So if you’re looking for me, you’ll find me on the fediverse.

4 Responses

  • It’s a joyous day in my feed reader when the light flickers on for Music For Deckchairs… I knew/hoped you’d be back.

    I find myself woefully ignorant of e-bikes, and reading Molly’s story I am crushed at how unnecessary a motor is for a bicycle experience. The joy of cycling (like I need to explain to you!) for me was the freedom from the noise and capability of a machine, to have pride in locomotion, to be closer to and notice the terrain, see bugs on the sidewalk, flowers, weird discarded objects, all things you miss when you are moving faster, on a machine.

    Maybe I am way out of touch, and I am missing something about the experience of e-biking.

    Thanks for returning to the longer thoughts here Kate, I have missed immersing myself in drawn out thoughts rather than the tasty, but brief morsels of public streams. Best to you and I will read anything you write here.

    • Kate Bowles

      It’s been such a while that I’d forgotten I had to approve comments, I’m so sorry you had to wait. Your comment really brings up something that I want to think more about: what do we miss, while moving at speed? I too am here for the weird discarded objects.

  • Kate! Fantastic to read your reflections and so important to learn of the history of how such a ‘disruption’ got to this point. Hello, also. I also got back to writing today, and also about Chat GPT. So maybe it is prompting some simultaneous human responses.

    I miss the Illawarra so and am sad to hear about the changes there, although that trajectory has been going for a while of course.

    Anyway. Hello.

  • Vanessa Vaile

    I meant to comment on how glad I was to see your post when I saw the WP notice, tagged and bagged it for my Chat GPT collection. Commenting now before I forget again. Chat GPT powering deliberate disinformation is another dimension.

