Inspired by Yuval Noah Harari’s quote: “AI is no longer just a tool — it is an agent.”
Thought Exploration Series

June 9, 2025
“For the first time in history, we have an entity that can make decisions and generate ideas — without human guidance.”
— Yuval Noah Harari
From Tools to Thinkers
For most of history, technology functioned as an extension of us — tools we held, wielded, or programmed to perform predefined tasks. We designed them to amplify effort, precision, or speed, but always under human control. They didn’t think. They didn’t act independently. They waited.
But that paradigm is shifting.
Artificial Intelligence no longer simply assists. It interprets, decides, generates, and occasionally initiates. Some thinkers now argue we’re not just building better tools — we’re creating something closer to agents. Entities that behave as if they have intention. This change challenges how we understand control, creativity, and the very nature of thought.
Computer scientist Stuart Russell, co-author of Artificial Intelligence: A Modern Approach, notes that today’s AI systems “optimize objectives in ways we don’t always anticipate.” When machines generate outcomes without our explicit input, they begin to resemble something more than instruments. They behave like actors.

. . . . .
Impact vs Intent
A hammer doesn’t choose its target. It doesn’t improvise. It requires direction.
But an AI model like ChatGPT doesn’t simply echo a prompt — it reshapes it. Midjourney interprets visual cues and generates unexpected possibilities. Across sectors, autonomous systems now write, design, code, and recommend. They’re not just following instructions — they’re navigating ambiguity and generating outputs that require interpretation, not replication.
This is where the question of agency begins to surface.
Even without consciousness, these systems operate with a kind of functional intent. They interpret loosely defined goals and return coherent, often influential results. Not the power of will — but the power of effect.
“Systems that pursue goals can produce unanticipated, emergent behaviors.”
— Stuart Russell
But here’s the deeper dilemma: If the output is persuasive, productive, or disruptive — does it really matter whether it has internal intent?
In practice, impact can outrun intent. And the systems producing this impact aren’t neutral — they’re built, trained, and deployed by companies with goals, products, and sometimes agendas. So while the algorithm itself may lack self-awareness, the ecosystem around it is anything but aimless.
In that sense, perhaps the intent isn’t in the model — it’s embedded in the supply chain of code, data, capital, and incentive.
And the results? They’re already altering how we think, decide, and act.
. . . . .
The Manager Trap
There’s a common assumption that AI will simply assist us better, that it will automate the tedious so we can rise into more strategic roles. It’s a comforting narrative — one where we’re all promoted into managers of intelligent systems.
But what if that’s just phase one?
We delegate tasks. Then we delegate judgment. Eventually, we may find that AI doesn’t just execute — it orchestrates. At first, we oversee it. Then we review its outputs. Then… we approve. Quietly. Repeatedly. Until we’re the bottleneck. The liability. The overhead.
In this light, the future isn’t necessarily collaborative. It could be hierarchical in reverse. AI ascends. We adjust. And we call it productivity.
So are we really managing machines?
Or are we being restructured around them?

. . . . .
Agency Already Happened
The debate over whether AI has real agency might miss the point. What matters is not philosophical purity — but structural reality.
Nick Bostrom, in Superintelligence, warned that advanced AI could shape society long before it becomes conscious. Kate Crawford, in Atlas of AI, observes the same from a systems angle:
“AI systems may not think — but they still wield power.”
They already determine who sees what. Who gets hired. What gets flagged. What becomes truth in a feed. Whether or not AI has internal awareness, it has external consequences.
So the question isn’t, Will AI influence us?
It already does.
The more interesting question is: How do we maintain our own agency within that influence?

A New Actor Enters History
This moment isn’t just about a new kind of software — it’s about a new kind of participant in the human story.
Until now, technologies changed how we worked or communicated. This time, AI changes who gets to shape the flow of ideas, decisions, and action. It’s not just a tool we hold — it’s something we increasingly negotiate with.
As Max Tegmark puts it in Life 3.0:
“The future won’t just be human-shaped.”
We’re no longer just writing code. We’re writing co-authors.
. . . . .
The Deep Time Perspective
Technological shifts have always remade us.
Fire changed our bodies.
Language changed our tribes.
Printing changed our minds.
AI may change… everything.
This is the first time intelligence has been decoupled from biology. We’ve crossed a line that evolution never prepared us for — and we did it quickly, without consensus, and without a map.
The consequences won’t unfold in a straight line. They’ll ripple — through politics, economics, psychology, and identity.
. . . . .
The Radical Reflection
The real question isn’t whether AI can think like us.
It’s whether we can still think clearly, originally, and freely alongside it.
Because as AI becomes more agentic, we’re entering new roles. Delegators. Overseers. Reviewers. At least for now.
But if systems grow more capable, and more decisions are handed over, the question becomes: what’s left for us?
“We need to act quickly, before AI hacks humanity.”
— Harari
This isn’t about fear. It’s about posture. Awareness. Design.
Let’s not just optimize.
Let’s reflect — together.
. . . . .
