The Midi-chlorian Dream of AI Agents
Agency in a galaxy not too far away
I woke up from a dream in which, of all things, I was discussing AI agents through the lens of the Force from Star Wars. I know I can dream better than this, but I started wondering whether the metaphor actually holds.
Let’s see.
First, let me ask for forgiveness for the sins I am about to commit. I watched the movies, enjoyed some of them, and played some of the games, but I am no master of the canon. Growing up, I loved the idea of the Force as a poetic, holistic energy that connected everything. Later I realized it remixed elements of Eastern and Western philosophies into a vision of intergalactic homeostasis, not too dissimilar from the Gaia Hypothesis. In fact, the Empire was always busy mining resources and destroying ecosystems, and the Force was the balancing counter-agent, or so I liked to think.
As beautiful and versatile a narrative device as the Force is, it was still pure magic, and the problem was clarifying whether it was something to be harnessed or an invisible hand controlling the fate of every character. Then in 1999 George Lucas gave it a mechanism: Midi-chlorians, invisible intelligent organisms translating the will of the universe into action. To the horror of many fans, this turned the magic into a distributed network. This is where the metaphor might get productive for us.
Imagine another NYC.
Eight million people trying, as usual, to avoid each other, and say three times as many AI agents doing exactly the opposite: constantly pinging each other, sharing bits of information, coalescing their attention, negotiating, and explicitly or implicitly interacting back with us.
You walk down the street and the song in your headphones inspires you to snap a photo of a good-looking tree, just long enough to keep you from catching the 8:23am train that was going to be stuck anyway between Jay and York. Maybe a reservation was preemptively made at a bar that happened to be close to your friends’ new apartment, days before you actually thought of writing to them, let alone knew they had moved. Or that time the person sitting next to you at the café was reading the very same book you had just bought, and so you discovered you were both working on a similar project and later started a new collaboration… This is quickly turning into a cringey and naive future vision, but the point is to imagine that all of this could be the doing of midi-chlorian-like agents, orchestrating it toward some ideal lifestyle (though how, and who, gets to define what ideal is?).
In Nokia's heyday I worked on a project to build Symbian “bots” that would proactively optimize battery, prioritize contacts, and mute notifications, silently observing and learning your patterns. Later, around 2016 at Google, I was part of a group sketching AI agents that could browse, analyze content, and even click buttons on our behalf. Some people laughed at the mocks. Today, these ideas ship as beta features. Truthfully, we are nowhere near the scale and complexity I described above (it is also not a given that it is achievable or desirable), and we are still debating how to properly define agents. But as much as I love/d searching for the perfect definition, I think right now we need mental models more than definitions to help us grasp what we might already intuit. Does the Star Wars one bring anything useful?
Reconsider the Skywalkers: was Luke’s journey his own, or was he a pawn in a cosmic balancing act? Was Anakin’s fall a series of personal moral failures, or an inevitable correction by the Force itself? Most characters seemed to believe they were making choices, but from a wider point of view, they were fulfilling roles dictated by a power they could channel but not control.
Swap X-wings for personalized autoplay and you might realize something similar is already here; you are just scrolling past it. The recommender systems we interact with every day are already a tangible version of this steering current, even before we get to advanced autonomous agents.
The Midi-chlorian metaphor is obviously not a precise description of AI agents, but it might be handy because it connects this emerging yet amorphous technology with something we already know. We are already familiar with the benefits and shortcomings of recommender systems, having experienced them at both the personal and global scales. And we already know from decades of Star Wars canon that perceiving and controlling the Force requires a great deal of training and self-control, and that it might just be easier to go with the flow, wherever it takes us. Sound familiar?
Becoming a Jedi is not easy, and from a systemic perspective, even then the actions of an individual person, AI agent, or recommender system contribute to, but cannot control, the emergent behaviors of the much larger distributed networks they are part of. That we knew long before LLMs.
Thinking about agents in this way is not only an intellectual game; it could be a productive mental model for their discussion, design, and development. If we are building this Force, we have a responsibility to shape its nature. The critical question is: how do we design AI systems that enhance human agency rather than diminish it?
This requires a new charter for design that prioritizes legibility, contestability, and genuine empowerment. The debate should expand from privacy and data collection to defining and controlling directional consent. Can we make the workings of the algorithmic Force visible? Can we give users the ability to not just follow the current, but to consciously swim against it, or even change its direction? We might need an academy to learn how to read and wield this force.
It also means that when developing frameworks for AI agents we need to consider multiple scales. There is the nuclear, obvious layer of micro one-on-one interactions, whether human-agent, agent-agent, or agent-machine (hardware or software). But there is another, maybe less obvious, layer of massive multiplayer interactions among a growing number of actors (human and otherwise) that requires a different way of thinking about dynamics and emergent patterns. And again, existing recommender systems already demonstrate that we don’t need particularly intelligent systems to get visible network effects. As designers we are naturally trained to think about the first layer, and we are starting to have frameworks for controls, feedback, and guardrails in agentic systems, but I am not sure we yet have (or at least that is something I am interested in exploring) the mental models, metrics, and techniques to understand the aggregate dynamics of the second layer.
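That second-layer claim, that unintelligent systems produce visible network effects, can be felt in a toy simulation. The sketch below is entirely hypothetical (the numbers and the "recommend the most popular item" rule are mine, not any real system's): if most users simply accept a naive popularity-based recommendation each round, a winner-take-most pattern emerges even though no individual agent intends it.

```python
import random

def simulate(num_users=1000, num_items=50, steps=30, follow_prob=0.7, seed=42):
    """Toy rich-get-richer loop: each round, every user either follows the
    current most-popular item (the 'recommendation') or picks at random."""
    rng = random.Random(seed)
    counts = [1] * num_items  # start every item with one play
    for _ in range(steps):
        top = counts.index(max(counts))  # the naive recommendation
        for _ in range(num_users):
            if rng.random() < follow_prob:
                counts[top] += 1          # go with the flow
            else:
                counts[rng.randrange(num_items)] += 1  # wander off
    return counts

counts = simulate()
share = max(counts) / sum(counts)
print(f"top item's share of all plays: {share:.0%}")
```

A 70% follow rate is enough to concentrate the large majority of all plays on a single item; the emergent homogenization comes from the feedback loop, not from any cleverness in the recommender.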
I admit it is fun to stretch the cinematic metaphor a bit more, so here are a few tools and experiments we might want to try (even before AI agents):
The Jedi Toggle: a slider that dials algorithmic influence from suggestive to sovereign. Want serendipity? Turn the Force off. This is a user’s tool to withdraw directional consent, an algorithmic mute button.
Force-Traces: a visualization of how algorithms shaped your week. A small data-trail showing how you were nudged from indie pop to doom-folk.
The Sparring Arena: a simple web toy where you choose to speculatively act for yourself or let the system act for you. See what you gain. See what you lose. Algorithmic what-ifs.
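To make the first of these tools concrete, here is a minimal sketch of what a Jedi Toggle could look like under the hood. Everything in it (the function name, the scores, the songs) is hypothetical illustration, not any real product's API: a single dial blends algorithmic ranking with pure chance.

```python
import random

def recommend(items, algo_score, influence, rng=None):
    """Hypothetical 'Jedi Toggle': influence=1.0 is pure algorithmic
    ranking (full Force), influence=0.0 is pure chance (serendipity)."""
    rng = rng or random.Random()
    if rng.random() < influence:
        return max(items, key=algo_score)  # follow the current
    return rng.choice(items)               # swim against it

songs = ["indie pop", "doom-folk", "jazz", "noise"]
score = {"indie pop": 0.9, "doom-folk": 0.6, "jazz": 0.3, "noise": 0.1}.get
print(recommend(songs, score, influence=1.0))  # always the top-scored pick
```

The interesting design work is not the mixing itself but exposing the dial: giving the user a legible, adjustable handle on how much the current is allowed to steer.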
And if this is interesting to you, here are some reading rabbit holes, from favorite colleagues and sources of inspiration, that will take you to more serious thinking about these topics.
Iason Gabriel et al., The Ethics of Advanced AI Assistants
Shannon Vallor, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting and The AI Mirror
Tobias Rees, Agency: On the Philosophical Stakes of AI Agency
Bruno Latour, Reassembling the Social: An Introduction to Actor-Network-Theory
David Sumpter, Outnumbered
Maybe George Lucas wasn’t just adding lore; maybe the Force was guiding him to give us a vocabulary for the invisible, systemic forces that mediate our lives :) He saw the network beneath the magic.
To be continued.
(if you have suggestions for material or experiments, please let me know)