As AI reshapes how we interact with systems and each other, we need to expand our understanding of what’s at stake.
I’ve spent years speaking globally about this idea of consequence design, which has evolved to be defined as a practice that interrogates the ways interfaces and technology permeate daily life, and I’ve written about the ways these problems show up in everyday life. Where things always got a little stuck with this framework is that you’d eventually reach a place where you’d shrug and go, okay, I agree, but now what?
In a world increasingly mediated by AI agents and large language models, it’s difficult to know where the people start and where the machines end.
Last week, two blog posts - one in which Mathias Biilmann introduced the concept of Agent Experience (AX), and a follow-up from Steven Fabre on the importance of agent-compatible, collaborative products - led me to think about my own work and a concept I’ve been using privately for years but hadn’t yet written about: disintermediation.
Understanding disintermediation
Disintermediation sounds more complicated than it is. An ATM giving you money at midnight, when the bank is closed, is an example of a helpful intermediary. It didn’t put anyone out of a job - banks were never open late - and without it (back when we still used cash), you’d have had to wait until morning.
On the other hand, using an AI agent to purchase something based on its knowledge of you, or leveraging a platform to contact another person’s agent to make a purchase or arrange a service, creates a series of assumptions about user intent. In one-off situations this isn’t a huge deal - it might even be preferable. But in the aggregate, these mediated interactions can have huge downstream effects across society.
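To make that stacking of assumptions concrete, here is a minimal TypeScript sketch - the types and inference rules are hypothetical, not any real agent’s API - of how one casual request (“get me running shoes before the weekend”) gets expanded into an actionable intent:

```typescript
// Hypothetical sketch: an agent expanding one casual request into a concrete
// purchase intent. Every inferred field is an assumption made on your behalf.
interface UserProfile {
  typicalSpend: number;   // average of past purchases
  lastShoeBrand?: string; // most recent brand bought
}

interface PurchaseIntent {
  item: string;
  maxPrice: number;             // never stated by the user
  brand?: string;               // inferred from history
  urgency: "now" | "whenever";
}

// The user said only: "get me running shoes before the weekend."
function inferIntent(profile: UserProfile): PurchaseIntent {
  return {
    item: "running shoes",
    maxPrice: profile.typicalSpend * 1.2, // assumption: past spend predicts budget
    brand: profile.lastShoeBrand,         // assumption: preferences are stable
    urgency: "now",                       // assumption: "weekend" means urgent
  };
}
```

Each individual inference is reasonable on its own; the aggregate is a model of your intent that you never reviewed.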
Disintermediation in AI-mediated systems is the process by which people become separated from direct relationships, capabilities, and decision-making as AI interfaces interpret and act on their behalf.
Unlike traditional automation that simply makes processes more efficient, disintermediation fundamentally changes how people interact with services and systems, often requiring them to adapt to AI limitations rather than the other way around.
For example, when AI chatbots become the primary way to access customer service, people don’t just lose direct contact - they lose the ability to express needs in their own terms, must learn to communicate in ways the AI understands, and lose access to human judgment in complex situations. You can ask for a human, but in time, you might not get one.
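As a minimal sketch of what that narrowing looks like in practice - the intents and keywords here are hypothetical, not any vendor’s implementation - consider a keyword-matching support bot:

```typescript
// Hypothetical sketch of a keyword-matching support bot. A person's problem
// only registers if it fits one of these predefined buckets.
type SupportIntent = "billing" | "password_reset" | "shipping_status" | "unknown";

const INTENT_KEYWORDS: Record<Exclude<SupportIntent, "unknown">, string[]> = {
  billing: ["charge", "refund", "invoice"],
  password_reset: ["password", "locked out", "log in"],
  shipping_status: ["where is", "tracking", "delivery"],
};

function classify(message: string): SupportIntent {
  const text = message.toLowerCase();
  for (const [intent, keywords] of Object.entries(INTENT_KEYWORDS)) {
    if (keywords.some((k) => text.includes(k))) {
      return intent as SupportIntent;
    }
  }
  // "unknown" sends the user back to a menu rather than to a human;
  // the burden is on them to rephrase until they sound like the schema.
  return "unknown";
}
```

A message like “you double-charged my late mother’s account and I need it fixed before the estate closes” matches billing at best and unknown at worst; either way, the human complexity is discarded.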
Why we need to expand our lens to human experience
On its face, Human Experience sounds like a fork of traditional human-computer interaction or the user experience principles that have evolved over the last 40+ years into a field with a wide canon and many practitioners.
While the tech industry focuses on optimizing Agent Experience (AX), we’re missing something crucial: how these systems reshape human capability and relationship patterns across society. This isn’t just about better interfaces or user experiences - it’s about understanding and designing for how AI mediation changes human behavior, decision-making, and social connections.
Human Experience (HX) builds on traditional UX principles but expands our lens to consider how AI mediation changes human behavior, decision-making, and social connection.
From Consequence Design to HX
My work on consequence design has always focused on understanding how interfaces and technology reshape daily life. As AI agents become primary mediators of human experience, we need frameworks that help us understand, anticipate, and design for those changes.
HX isn’t just another layer of design - it’s a critical practice for ensuring AI-mediated systems work for people rather than forcing people to work for them.
What’s Next
As organizations rush to implement AI agents and large language models, we need people focused on how these systems reshape human capability, decision-making, and relationships.
This isn’t about resisting AI implementation - it’s about ensuring these systems enhance rather than erode human experience. I’ll continue expanding on this topic through writing and speaking more in 2025.
Feel free to get in touch if you want to talk more about this concept, too.