
Comment

AI – not a human – will decide your career future. That might be a good thing

We’ve seen its worst impulses and we’ve heard from its detractors. But we’re missing a lot of the nuance, writes AI startup co-founder Edmund Cuthbert

Thursday 20 March 2025 12:57 GMT

By now, you’ve heard the horror stories. AI that’s racist, sexist, or both. Screening tools ranking white men’s resumés higher, language models reinforcing gender stereotypes, and artificial intelligence that seems to have a homophobic bias.

And should we really be surprised? Generative AI gets its training data from humans, after all, and humans are biased. If you mix their views together in a statistical soup, you get all the bad opinions, all the snap assessments we make every day based on stereotypes — and then you create systems that get to work enacting those prejudices.

But there’s a heck of a lot of nuance being lost in this debate. As with a lot of recent discussions that have become hugely polarized and politicized (raw milk, fluoride, home-schooling, electric vehicles), the truth here is inconveniently complicated.

It’s easy to be suspicious of AI. In fact, it’s important that we are. It’s not just a composite of human intelligence — it makes its own, new assumptions, and businesses are keen to put it to work. It scans medical files for malpractice, advises lawyers and parses resumés, reads PDFs at lightning speed and constructs graphics out of sentences. And as we run out of authentic, human-generated data to feed the models, companies are turning to AI models themselves to generate more training data. This synthetic data can be even more prone to bias, and over-reliance on it could lead to many more embarrassing moments — like Google’s Gemini AI generating images of Black Nazi soldiers in 1940s Germany.

Nevertheless, large language models — known as LLMs — like ChatGPT represent the biggest technological leap forward of most of our lifetimes, as big a shift as the development of the internet. They’re here to stay, and we need to work out how to use them to positive effect.

Luckily, when AI is built correctly, it’s much, much better at furthering social good than a human is.

High-quality AI can detect cancer better than human radiologists; forecast floods five days in advance, well beyond what human calculation has managed in the past; and locate lost hikers twice as effectively as traditional search methods.

So how do we make sure that big decisions — hiring, healthcare, policing, and all those other society-wide issues — are made properly in the future? It will require careful thought about when and where we want a human in the loop, and which tasks can be entirely handled by autonomous agents.

A lot of this also involves knowing when to use AI, and when not to. In hiring, for instance, AI can be an excellent resource for filtering resumés or headhunting candidates. It can surface unconventional candidates at far higher rates than a human recruiter ever would: the applicant who lacks the PhD but has exactly the right experience, or the one who has done the same job as your customers and so brings a baked-in empathy for them.

But do we really want to see AI-led interviews, where people who apply for jobs are directed to stare into screens and respond to robotic voices rather than coming into the office and speaking with a human? I don’t think so. Doing so fundamentally misunderstands what a hiring process is supposed to be about, which is making sure that the job is a good fit for the applicant as much as it is the other way around.

In education, the first place most people go with AI is helping teachers — automating grading, summarizing student performance, generating lesson plans. And sure, that’s useful. But it’s also a marginal gain — a slight efficiency boost to a system that’s already overworked and under-resourced.

What’s less obvious, but far more transformative, is using AI to create individualized tutors: one for every student, available at any time, infinitely patient, and tailored to that child’s exact learning style and pace. This isn’t about replacing teachers; it’s about finally delivering to every child the kind of education that only the wealthiest students have had access to in the past.

That kind of social change for schoolchildren is not a marginal improvement. That’s a revolution.

AI has the profound potential to revolutionize entire systems, provided we’re bold enough to rethink how we build it rather than simply replicating outdated methods. Its greatest promise lies not in automating bias or patching up the inefficiencies of the past, but in building something fundamentally new and deeply equitable.

The key is embracing AI as a tool for radical, thoughtful innovation, not merely as a bandage for broken systems. Those who fear that artificial intelligence could bake prejudice in at scale are right. But the answer doesn’t have to be Luddite: AI is already here, and the task now is for those committed to equity to take charge of this technological shift.

Edmund Cuthbert is the co-founder of AI startup Boolio
