This post was sparked by a thoughtful question from a General Counsel during a recent client session. She didn’t just raise a good point – she reminded me how rarely we talk about ethics in any meaningful way. And how urgently we need to!
The speed of change is unlike anything we’ve ever experienced. New AI tools are landing faster than most people can evaluate them, let alone understand what they’re for. It’s exciting. It’s messy. And it’s deeply uncomfortable.
But here’s the thing: while we’re all trying to make sense of what’s next, AI is already making decisions for us – quietly, invisibly, and without much public debate. The big ethical question isn’t “what can AI do?” It’s “who decides what we use it for?”
And right now, the answer seems to be: whoever builds it first.
Ethics Isn’t Just About Right and Wrong. It’s About Who Gets a Say.
The word ‘ethics’ comes from the Greek ethos, meaning habit, custom, belief. In short, ethics is what we collectively decide matters. But AI isn’t “collective” in any meaningful sense.
The tech is being shaped by venture capital, defence departments, hedge funds, and quarterly targets. Not by democratic process. Not by public debate. And certainly not by those most vulnerable to its misuse.
We’ve already got care robots making medical decisions, facial recognition rolled out in public spaces without consent, and predictive policing systems that reinforce racial bias. Fridges that can lock themselves because you’re on a diet? That’s not fiction. It’s being prototyped.
And let’s not pretend there’s some great philosophical framework behind these deployments. Often, it’s: Can we build it? followed by Will it scale? – with ethics duct-taped on afterwards.
From Hybrid Humans to Digital Companions
Back in 1991, when the World Wide Web went public, we began a slow merge between the digital and the biological. We didn't notice at first. But it's been happening for decades – through the phone in your pocket, the apps on your watch, the algorithms shaping your feed.
Now we’re on the cusp of something new. We’re not just connected – we’re being joined on the network by something alien.
Joined by chatbots, avatars, AI companions, and virtual lovers. Some of them will work with us. Some will feel like they understand us better than our actual friends. Some already comfort the bereaved by mimicking dead relatives.
The ethical questions here are enormous. Should AI companions be allowed to simulate grief support? What happens to closure when you can’t let go of someone who still “talks” to you? Could this cause more harm than healing?
And are we heading towards a world where someone’s first intimate relationship is with a machine?
There are no easy answers. But these are not sci-fi hypotheticals. This is what’s already happening. So again – who’s deciding what direction we go in?
We’ve Moved Past Generative AI – Now It’s Agentic
Most people are just getting their heads around Generative AI: "I type a prompt, it gives me something back." Simple enough. But the next wave is Agentic AI – AI that acts on your behalf, subject to the parameters you give it.
Think executive assistants that send your emails, file expenses, book travel, and manage your calendar – not because you told them to, but because they know you want them to.
That shift from reactive to proactive brings a whole new set of ethical questions. If an AI agent makes a decision in your name, who’s accountable for the outcome? What happens when it misunderstands context or prioritises the wrong thing?
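To make that concrete, here's a minimal sketch of the kind of guardrail an agentic system needs (all names and numbers are hypothetical, not any vendor's API). The agent proposes actions; a policy the user wrote down decides which ones it may execute on its own and which get escalated back to a human.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str             # e.g. "send_email", "book_travel", "file_expense"
    description: str
    cost_estimate: float  # rough financial impact

# Hypothetical user-set policy: what may the agent do without asking?
AUTONOMY_POLICY = {
    "send_email":   {"allowed": True,  "max_cost": 0.0},
    "file_expense": {"allowed": True,  "max_cost": 200.0},
    "book_travel":  {"allowed": False, "max_cost": 0.0},  # always escalate
}

def authorise(action: ProposedAction) -> str:
    """Return 'execute' or 'escalate' based on the user's standing policy."""
    rule = AUTONOMY_POLICY.get(action.kind)
    if rule is None or not rule["allowed"]:
        return "escalate"      # unknown or forbidden: a human decides
    if action.cost_estimate > rule["max_cost"]:
        return "escalate"      # within remit, but too consequential
    return "execute"

# The agent proposes; the policy (i.e. the human) disposes.
print(authorise(ProposedAction("file_expense", "Taxi receipt", 35.0)))      # execute
print(authorise(ProposedAction("book_travel", "Flight to Berlin", 420.0)))  # escalate
```

The code itself is trivial. The point is that every "execute" path is a delegation of accountability a human consciously signed off on, and that's exactly what's missing when such defaults ship pre-configured.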
For me, this is where humans have to stop being the “inefficient bit of the system” and start being the point. Listening, reflecting, making meaning – that’s where our value lives now.
I even run exercises in my workshops where people practise pausing for three seconds before replying. Sounds simple. But in a world where AI executes fast, the competitive advantage might just come from slowing down and listening better.
Justice, Bias and the Illusion of Objectivity
Let’s get real: AI systems are not neutral.
In the US, a tool called COMPAS has been used for years to assess the likelihood that someone will reoffend. A 2016 ProPublica investigation found it was deeply biased – falsely labelling Black defendants who did not go on to reoffend as high-risk at nearly twice the rate of white defendants. Courts used this data in decisions about bail, parole, and sentencing.
This is what happens when you plug historical bias into futuristic tech. You don’t fix injustice. You scale it.
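To see how that scaling gets measured, here's a toy audit. The numbers are entirely invented (not COMPAS data); the standard check is to compare false positive rates across groups: of the people who did not reoffend, what fraction did the tool still flag as high-risk?

```python
# Synthetic illustration of a disparate false-positive-rate audit.
# These records are invented for the example; they are not COMPAS data.
records = [
    # (group, labelled_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  False), ("A", False, False),
    ("A", True,  True),  ("A", False, True),
    ("B", True,  False), ("B", False, False), ("B", False, False),
    ("B", True,  True),  ("B", False, True),
]

def false_positive_rate(group: str) -> float:
    """Of people in `group` who did NOT reoffend, what fraction
    were still labelled high-risk?"""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    false_positives = [r for r in non_reoffenders if r[1]]
    return len(false_positives) / len(non_reoffenders)

for g in ("A", "B"):
    print(f"Group {g}: FPR = {false_positive_rate(g):.0%}")
# Group A: FPR = 67%  (2 of 3 non-reoffenders flagged high-risk)
# Group B: FPR = 33%  (1 of 3 non-reoffenders flagged)
```

A system can post a respectable overall accuracy while hiding exactly this gap, which is why audits have to slice error rates by group instead of quoting one headline number.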
Even simple search terms tell a story. Search “schoolboy” on Google Images and you’ll get what you’d expect. Try “schoolgirl” and it’s a different story – more stylised, more sexualised, more shaped by cultural bias than by reality. That’s not just unfortunate. It’s dangerous. Because this is the kind of data that trains the next generation of AI.
What goes in is what comes out – only louder, faster, and with more authority than ever before.
If AI is going to be used in hiring, in justice, in healthcare, in education – then explainability and transparency aren’t “nice to haves.” They’re essential. Without them, we risk embedding inequality so deeply into our systems that we forget it was ever human-made to begin with.
And again – who decides what datasets we use? Who decides what gets filtered, what’s considered “truth,” and what we leave out because it’s messy?
Addiction by Design – And the Coming Era of Digital Overindulgence
We already know that tech can be addictive. But combine that with AI and things get a lot more personal.
Imagine a VR environment that adapts in real time to your heart rate, your pupil dilation, your microexpressions. One that flatters you when you need it. One that mirrors your mood. One that delivers the exact emotional cocktail to keep you inside, distracted, and endlessly gratified.
That’s not a game anymore. That’s a dopamine hose – with no hangover and a tailored UI.
This is where ethical design becomes urgent. If we don’t embed friction, restraint, and transparency into these systems, we’re building the perfect environment for psychological dependency – especially for younger users whose neuroplasticity makes them even more susceptible.
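What might "embedding friction" look like in practice? Here's one deliberately crude sketch, a hypothetical policy rather than anyone's shipping product, where sustained engagement triggers escalating cooldowns instead of the next perfectly tuned stimulus:

```python
import time

class FrictionPolicy:
    """Hypothetical 'friction by design': escalating pauses after
    sustained continuous engagement, instead of endless reward."""

    def __init__(self, session_limit_s: float = 1200.0):
        self.session_limit_s = session_limit_s  # 20 minutes of continuous use
        self.session_start = time.monotonic()
        self.breaks_taken = 0

    def cooldown_before_next_item(self) -> float:
        """Seconds of enforced pause before the next piece of content."""
        elapsed = time.monotonic() - self.session_start
        if elapsed < self.session_limit_s:
            return 0.0        # inside the session: no friction yet
        self.breaks_taken += 1
        # Escalating pauses: 10s, 20s, 40s... capped at five minutes.
        return min(5.0 * (2 ** self.breaks_taken), 300.0)

# In the content loop: time.sleep(policy.cooldown_before_next_item())
policy = FrictionPolicy(session_limit_s=1200.0)
```

The specific numbers are arbitrary. What matters is that restraint becomes an explicit design parameter someone has to own, rather than something bolted on after the outcry.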
The solution? We need digital wellness therapists. Cognitive behavioural tools. Reintegration programmes. Maybe even limits on certain types of interaction – not to restrict freedom, but to protect our ability to make meaningful choices. The same way we treat alcohol or gambling.
And schools need to get their heads around this now – not in five years when the damage is already done.
The Age of Fake Media – and the Battle for What’s Real
In early 2024, a finance worker in Hong Kong transferred $25 million after a video call with what he thought were his company's CFO and several colleagues. Every other participant on the call was a deepfake. Fully believable. Completely fake.
We’re in a trust crisis – and AI is accelerating it.
Intel’s FakeCatcher uses blood-flow signals to detect authenticity in facial videos. The EU is pushing hard on watermarking and traceability. Finland has media literacy built into every stage of education. But platforms themselves? Still largely profit-driven. The more outrageous the content, the more engagement it gets.
So we need cultural fixes (like BBC fact-checking), educational fixes (media literacy in schools), and technical ones (deepfake detection, real-time verification, and maybe even embedded prompts that highlight suspicious content automatically).
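For the curious, here's the intuition behind blood-flow detectors like FakeCatcher, reduced to a toy. This is emphatically not Intel's method, just the underlying idea: a real face shows a faint periodic colour change in the heart-rate band (roughly 0.7 to 4 Hz), while many synthetic faces don't. Assume a hypothetical `green_means` series: the average green-channel intensity of a face crop, one value per frame.

```python
import numpy as np

def has_pulse_signal(green_means: np.ndarray, fps: float = 30.0) -> bool:
    """Toy remote-photoplethysmography check (NOT Intel's FakeCatcher):
    does the mean green-channel intensity of a face region show dominant
    spectral power in the human heart-rate band (0.7 to 4 Hz)?"""
    signal = green_means - green_means.mean()        # remove the DC offset
    power = np.abs(np.fft.rfft(signal)) ** 2         # spectral power
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)           # plausible pulse rates
    # Crude heuristic: the pulse band must carry most of the energy.
    return power[band].sum() > 0.5 * power[1:].sum()

# Synthetic demo: a 1.2 Hz "pulse" (72 bpm) buried in noise, vs pure noise.
t = np.arange(0, 10, 1 / 30.0)
rng = np.random.default_rng(0)
real = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)
fake = 0.1 * rng.standard_normal(t.size)
print(has_pulse_signal(real))   # True: heartbeat-like periodicity present
print(has_pulse_signal(fake))   # False: no dominant pulse-band energy
```

Real detectors are far more sophisticated, and they're in an arms race with the generators – which is why detection alone can't be the whole answer.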
Because if we lose our grip on what's real, we lose our ability to act meaningfully. Common knowledge itself is at stake. Could we end up disagreeing that water boils at 100 degrees centigrade?
AGI, Utility, and the Danger of Underestimation
Let’s talk about AGI – Artificial General Intelligence. Human-level intelligence. Maybe beyond.
If it arrives, it won’t come with a fireworks show. It’ll be more like electricity: invisible, ambient, everywhere. And we’ll underestimate it massively once the novelty wears off.
Steve Wozniak once proposed a test for AGI: can a robot walk into an unfamiliar house, find the kettle, and make a cup of coffee? I add to that: can it go backpacking in a foreign country and sort itself out without help?
If it can do those things, we’ve crossed a line. Intelligence becomes infrastructure. So then the ethical questions aren’t about how smart the system is – they’re about how much we trust it to make decisions when no one’s watching.
And once again: who decides how much autonomy we give away?
Superintelligence, Uncomprehended Outcomes, and the Governance Gap
We talk a lot about models "hallucinating". (That's AI-speak for confidently bullshitting, by the way.) But what happens when future AI systems don't hallucinate – they just operate on levels we can't comprehend?
We're likely heading into a world where models operate with an effective "IQ" far beyond anything human – 50,000-plus, if the notion even still applies. Not just beating us at chess or Go, but seeing patterns in medicine, climate, and geopolitics that no human can consciously follow. At that level, AI will see us the way we see ants.
The real question becomes: if an AI presents a solution that saves lives but we don’t understand how it got there – do we act on it? Or do we hold back?
And who decides what level of black-box reasoning is acceptable? Most of us wouldn’t put experimental drugs into our bodies without knowing what’s in them – but we’re putting AI into our societies every day without asking those questions.
We need global governance. An AI Geneva Convention. An independent watchdog like an “AI EPA” – not just for regulation, but for coordinated oversight across countries, sectors, and technologies.
Because the private sector won’t self-limit. And the public sector’s already late.
A Manifesto for Digital Rights in an AI World
We can’t stop the machine age. But we can decide what kind of humans we want to be inside it.
That starts by affirming some basic rights:
- The right to remain biological
- The right to be inefficient if efficiency strips away our dignity
- The right to go offline
- The right to verified truth
- The right to emotional and intellectual self-determination
- The right to choose human connection even when machine alternatives are faster, cheaper, or more convenient
These aren’t nostalgic. They’re necessary. Because without them, we risk building a world that’s optimised, automated, and fundamentally alienating.
So, What Now?
I don’t have a neat ending for this. And I think that’s the point. Ethics isn’t a final destination. It’s a live, continuous, uncomfortable practice.
But here’s what I believe.
We’re not just building tools. We’re building the conditions for future life. And the most important decision we can make is this:
Do we let the loudest voices and fastest builders shape that future? Or do we stop, reflect, and ask: “Is this helping us become better humans – or just better systems?”
AI is not the enemy. But unexamined power is.
We still have a choice. Let’s use it while we can!