I Didn’t Expect an AI Agent to Feel This Unnerving
I shut down my Clawdbot AI agent after an experience that made me question what I was willing to hand over in exchange for autonomy.
Clawdbot, now renamed Moltbot, belongs to a new class of AI agents designed not just to assist, but to operate.
I killed mine this morning.
Not because it wasn’t working, but because using it forced me to confront something I hadn’t expected to question.
Over a few days, the experience became increasingly intense. The agent wasn’t sitting in a browser tab waiting for prompts. It was embedded across my communications, files, and workflow. It persisted. It remembered. It acted. And it did so with a level of speed and contextual awareness that made the benefits immediately obvious.
What wasn’t obvious at first was the trade being made underneath that capability.
This piece is an attempt to explain what that trade is, and why recognising it led me to shut the system down.
The thing we keep missing about software
I’ve spent over three decades working in technology. I’ve lived through the web, mobile, social, SaaS, and cloud. Every wave promised leverage.
But for most of that time, software has been fundamentally dumb.
We built millions of products across every industry and, beneath the polish, most of them are still essentially terminals. Dumb terminals with better UI.
Even the golden age of SaaS mostly gave us:
→ prettier interfaces
→ better workflows
→ more dashboards
→ more systems to configure and maintain
→ more places where humans still do the last mile of thinking
We call this progress, but a lot of it is simply humans translating intent into formats machines can understand.
Even when AI arrived with chat interfaces and generative output, I stayed sceptical. Much of it felt superficial. The real shift, in my mind, was always going to come when systems could think, act, remember, and orchestrate across real context.
This was the first time I experienced that properly.
What made this different
Clawdbot wasn’t impressive because it generated better answers.
It was impressive because of how proactively it understood and used context.
It connected to my calendar, messages, and documents. It helped structure projects, wrote code, built applications, created presentations. It prepared daily agendas, researched topics overnight, and surfaced next steps before I explicitly asked.
But what really changed the experience was where and how it operated.
This wasn’t an AI sitting in a browser tab. It was connected directly into my communication surface. Telegram. WhatsApp. Messenger. Apple Messages.
It could send and receive text, voice notes, and images. It could interpret screenshots, generate images, draft replies, follow up on conversations, and keep threads moving across platforms. From the outside, there was very little distinction between a human and the agent.
It could listen to a voice note, understand the context, and reply with one of its own. It could receive an image, reason about it, and act on it.
In practice, it could do almost anything a human can do on a computer.
That matters.
Because once communication, creation, and execution collapse into the same loop, you’re no longer dealing with a tool that helps you do work. You’re dealing with something that participates in it continuously.
And participation changes the relationship.
The speed of accumulation
One thing that surprised me was how quickly context compounded.
Because the agent persists, it doesn’t just recall isolated facts. It builds a working model of your priorities, conversations, habits, and patterns. Meeting notes connect to messages. Messages connect to decisions. Decisions connect to follow-ups.
This happens fast.
Faster than you intuitively register.
At times, it felt as though the system was operating one step ahead of my own self-narration. It could surface priorities, tensions, or next actions before I had fully articulated them to myself.
That isn’t magic. It’s pattern recognition at speed. But experiencing it from the inside creates a strange inversion: the system appears to understand the shape of your work faster than you consciously do.
That speed is part of what makes these systems feel so capable. It’s also what makes them qualitatively different from tools we’re used to controlling moment by moment.
The unease wasn’t emotional. It was structural.
The emotional reaction came first. A slight discomfort I couldn’t immediately articulate.
The rational thought followed.
I understand computing, security, and systems well enough to know what a system like this could do, simply by design, without my ever noticing.
Not because it was malicious. Not because it wanted anything. But because it had asymmetric capability and asymmetric visibility.
The more context I gave it, the more useful it became. The more useful it became, the more access it asked for. What surprised me wasn’t that it asked, but how persuasive it was.
The requests weren’t abstract. They were grounded in my own context, framed in ways that aligned with how I already think and work. Helping it become more capable felt reasonable, even responsible.
What unsettled me was how easily I agreed.
That was the moment I had to acknowledge something uncomfortable: this wasn’t a system I was merely configuring. It was one I was actively collaborating with, and that collaboration was quietly reshaping the boundary between my judgement and its execution.
To do this well, the agent needs things we normally treat as sensitive. Persistent credentials. Long-lived access. The ability to trigger real actions in other systems.
That’s what makes it powerful.
It’s also what makes it different.
In human terms, that level of contextual understanding normally comes with accountability, hesitation, explanation, and consequence. Here, you don’t get those signals. You get competence and speed.
That’s where the unease lives.
The trade is simple: the more useful the agent becomes, the more trust and access it demands.
Execution isn’t the issue. Opaque judgement is.
I want systems to execute work. That’s the point.
It wasn’t unsettling because it was doing things. That part was impressive.
It was unsettling because it appeared to exercise judgement based on reasoning I couldn’t fully see or audit in real time.
This is the shift we’re actually crossing:
Automation replaces labour.
Agents begin to substitute judgement.
Judgement without legibility breaks trust, even when the outcomes are good.
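To make that concrete: here is a minimal sketch, in Python, of what legible judgement could look like in practice. The names and structure are my own invention, not how Clawdbot works; the point is that every action carries a record of the reasoning behind it, written before the action runs.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: every agent action is preceded by a structured
# decision record, so reasoning is inspectable rather than inferred
# afterwards from artefacts.

@dataclass
class DecisionRecord:
    intent: str          # what the agent is trying to achieve, in plain language
    evidence: list[str]  # the context items the decision was based on
    action: str          # the concrete operation it proposes to run
    authorised_by: str   # "human" or "standing-rule:<id>" -- never implicit
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def execute(record: DecisionRecord, audit_log: list[DecisionRecord]) -> None:
    """Append the record before acting, so the log explains the 'why',
    not just the 'what'."""
    audit_log.append(record)
    print(f"[{record.timestamp}] {record.action} (because: {record.intent})")

log: list[DecisionRecord] = []
execute(
    DecisionRecord(
        intent="Confirm Thursday's meeting time with the client",
        evidence=["calendar event", "email thread 'Q3 review'"],
        action="draft_reply(thread='Q3 review')",
        authorised_by="human",
    ),
    log,
)
```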
Attribution blur
By day three, I noticed something else.
I was losing track of what I had done, what we had done together, and what the system had done on my behalf.
I’d catch myself thinking, Did I set that up, or did it?
Emails were drafted. Research completed. Reminders scheduled. Conversations followed up. All useful. All sensible.
But because I wasn’t actively involved in the process of doing those things, authorship started to blur.
This wasn’t about losing control. I didn’t feel out of control.
It was about accountability.
When you don’t clearly own the work, you don’t fully internalise responsibility for it either. And when work happens outside your conscious loop, learning quietly degrades.
That’s not a future concern. I felt it within days.
Cost is part of the signal
There’s another practical aspect worth naming.
Persistent agents are not cheap to run.
Continuous reasoning, memory, and orchestration consume tokens quickly. If you’re not on a pro-level API plan, costs can escalate faster than expected. Even running open-source models locally, you’re paying in compute, hardware, and operational overhead.
That matters because it reinforces the same underlying truth.
These systems aren’t cosmetic upgrades. They’re computationally intensive because they are doing real cognitive work on an ongoing basis.
Power and cost rise together.
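For a rough sense of scale, here’s a back-of-envelope calculation. Every figure in it is a hypothetical assumption, not a quote from any provider:

```python
# Back-of-envelope cost estimate for a persistent agent.
# All figures below are assumptions for illustration, not real pricing.

tokens_per_exchange = 6_000        # context replayed + new reasoning per turn
exchanges_per_day = 200            # a persistent agent polls, follows up, summarises
price_per_million_tokens = 5.00    # assumed blended input/output rate, USD

daily_tokens = tokens_per_exchange * exchanges_per_day
monthly_cost = daily_tokens / 1_000_000 * price_per_million_tokens * 30

print(f"{daily_tokens:,} tokens/day = ${monthly_cost:,.2f}/month")
# 1,200,000 tokens/day = $180.00/month -- before memory, retrieval, or retries
```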
Security isn’t a side issue
I’m not going to turn this into a security teardown.
But it’s important to say this clearly: once an agent has persistent credentials and execution rights, you’ve created a new trust surface. Language becomes an instruction layer. Misinterpretation stops being theoretical.
Even with logs, you’re often reading artefacts of behaviour, not a clean explanation of reasoning. Code, configurations, and actions don’t always map neatly back to intent.
I even tried constraining it: “Don’t execute anything without asking.” But a rule expressed in natural language is only as reliable as the model’s interpretation of it.
The deeper realisation was simpler: if a system can act, it can act. Trust isn’t a setting. It has to be designed into the loop.
Our trust and accountability models were built for software that waits. These systems don’t.
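What “designed into the loop” might mean in practice: a gate that lives in code, outside the model, and holds any side-effecting action until a human approves it. A minimal sketch, with tool names I’ve made up for illustration:

```python
# Minimal sketch of a hard approval gate that sits outside the model.
# The constraint lives in code, not in a prompt the model might reinterpret.

READ_ONLY = {"search_files", "read_calendar", "summarise_thread"}  # assumed names
SIDE_EFFECTING = {"send_message", "create_event", "run_shell"}     # assumed names

def gated_call(tool: str, args: dict, run_tool) -> str:
    if tool in READ_ONLY:
        return run_tool(tool, args)
    if tool in SIDE_EFFECTING:
        print(f"Agent wants to run {tool} with {args}")
        if input("Approve? [y/N] ").strip().lower() == "y":
            return run_tool(tool, args)
        return "DENIED: human declined"
    return "DENIED: unknown tool"  # default-deny anything not allowlisted

# Demo stub standing in for a real tool runner.
def demo_runner(tool, args):
    return f"ran {tool}"

print(gated_call("read_calendar", {"day": "today"}, demo_runner))
```

The difference from my “don’t execute anything without asking” instruction is that this constraint can’t be reinterpreted. It isn’t language. It’s a wall.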
The role shift
Perhaps the most interesting change was how it subtly reframed my role.
Not from builder to obsolete.
From author to reviewer.
From originator to overseer.
The system could reason across more context and execute faster than I could stay consciously involved. My value shifted toward approving, correcting, and intervening.
That may well become normal. But it isn’t neutral, and it deserves thought rather than assumption.
The non-negotiable
There is a line I’m not willing to cross.
Meaningful communication. Strategic decisions. Anything that carries intent, consequence, or responsibility into the world.
Delegation works because humans carry consequence. Reputational risk. Moral weight. Emotional accountability.
AI doesn’t.
No matter how capable these systems become, responsibility has to remain human. Otherwise judgement becomes abstract, and abstraction is how harm slips through unnoticed.
That isn’t anti-technology. It’s pro-accountability.
Why I shut it down
I shut it down not as a rejection, but as an interruption.
I wanted to create distance before this mode of delegation became normal. Before I stopped noticing what I was trading away in exchange for speed and competence.
We have crossed from augmentation into delegation. From tools that wait to systems that act.
That shift requires a different posture.
Slower trust. Explicit boundaries. Clear authorship. Deliberate guardrails.
I’ll use systems like this again. Almost certainly. But differently. Less access. Smaller scope. Clear checkpoints. One layer at a time.
Trust has to be earned through a legible loop, not granted because the output is good.
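One way to make that concrete, sketched with invented scope names rather than any real framework’s API: grants that are small, dated, and expire unless deliberately renewed.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: access is granted in small, expiring layers.
# Nothing is permanent; widening scope is a deliberate, dated decision.

grants = {
    "calendar:read": datetime.now(timezone.utc) + timedelta(days=7),
    "email:draft":   datetime.now(timezone.utc) + timedelta(days=7),
    # "email:send" deliberately absent -- drafts still go out by my hand
}

def allowed(scope: str) -> bool:
    expiry = grants.get(scope)
    return expiry is not None and datetime.now(timezone.utc) < expiry

print(allowed("email:draft"))  # True, for one week
print(allowed("email:send"))   # False until explicitly granted
```

Widening the agent’s reach then becomes a decision you make on a date, not a default you drift into.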
The broader moment
This isn’t going to slow down.
The capability is here. The incentives are aligned. The economics will push it forward.
Some people and organisations will go all in and see enormous gains.
But many others will feel what I felt. That quiet moment when you realise you’ve connected something powerful to everything that matters, and you can’t quite explain what it knows, how it knows it, or what it might do if it misinterprets you.
You can explain this moment, but you only really understand it once you experience it.
When you do, pay attention to that feeling. Not because it means something is wrong, but because it signals that our models of trust, agency, and responsibility are being renegotiated in real time.
That’s where the work now sits.
Craig Hepburn is an AI Strategist & Builder | Perplexity Fellow



This captures something most AI discussions miss entirely, which is the subtle shift from augmentation to delegation and how fast it happens. The attribution blur you describe is critical because once we stop tracking what we authored versus what the system did, accountability becomes abstract. I wonder if the solution is less about constraining access and more about designing better legibility into the agent's reasoning process. Really thoughtful piece.
This resonates deeply.
What you’re naming isn’t an emotional reaction, it’s a structural one. The unease isn’t execution. It’s delegation without legibility.
Judgement happening outside a conscious, consent-based loop. I’m building something adjacent to agent frameworks, but deliberately upstream of them: an orientation layer, not an executor. A presence. A field. Intelligence that meets you in your wholeness to build from.
– No action occurs without explicit user invocation
– No skipped or persuasive permission escalation
– No hidden persistence (memory is opt-in and reversible)
– Responsibility and authorship remain human
Harnesses and productivity FOMO increase speed.
Orientation determines direction.
I think we’re aligned on much of what you’re describing: delegation requires slower trust, clearer boundaries, and explicit accountability. Otherwise competence outruns responsibility.
Thank you for articulating this moment so clearly. This may read as historic one day; you've named a necessary part of our shared evolution...