This is the first of three essays about the technologist’s responsibility to change the world for the better – to create more of what they want to see in the world. This requires an understanding of one’s environment and the effects of one’s work, and a rootedness in core beliefs.
The essay you’re about to read is an evaluation of the situation at hand: what is new about our current environment? What is valuable in this environment? Why is this moment so critical?
The most dangerous idea in the world is that action alone breeds good outcomes. Technologists have turned to accelerationism while ignoring their responsibility to consider what it is we’re accelerating towards. We're building a world where anyone can do anything, while systematically destroying our ability to decide what's worth doing.
This isn’t just another essay about agency, mostly because I think agency is widely misunderstood. Technologists have co-opted agency through the meme "you can just do things." Substack and Twitter are filled to the brim with takes on why agency is important, how to become more agentic, and so on. But much of this analysis is rooted in a shallow understanding: agency = action. If you're doing things—and if you're traditionally successful in doing them—then you're considered agentic. You must simply be “in the arena.”
I, too, think agency is extremely important. But being a hustler is not necessarily high-agency. Agency is not just about action; it's about the relationship between action and belief. It's the ability to define a clear vision and act on it with intention – “conscious actions.” Yes, you can just do things, but which things should you do?
Being acted upon makes you no more agentic than the person who does nothing. “Gaining agency is gaining the capacity to do something differently from, or in addition to, the events that simply happen to you,” says Simon Sarris.
True agency is our scarcest resource.
1. The World is More Malleable Than Ever
From Marshall McLuhan’s insights in Understanding Media to Davidson and Rees-Mogg’s The Sovereign Individual, the trajectory of technological progress hasn’t come as a surprise. These ideas have become part of the technologists’ canon.
The internet, as McLuhan foresaw, was not just a new medium but a transformative force that would reshape society. It dismantled the monopolies of traditional institutions, giving rise to a new era of individual empowerment. Kevin Kelly’s 1000 True Fans theory crystallized this shift, showing how creators could thrive by cultivating direct relationships with their audiences. The Creator Economy emerged, and with it, the tools for individual expression and economic independence became more accessible than ever.
Then came crypto, which redefined ownership and value exchange, and now LLMs, which have democratized creation and problem-solving on an unprecedented scale. Each wave of innovation has followed the same pattern: individuals gaining powers that were once the exclusive domain of institutions and organizations.
The world is more malleable than ever. It is a fluid, ever-changing landscape where the rules are constantly being rewritten. But this malleability comes with a cost. Trust in traditional institutions – media, corporations, even governments – has crumbled. The solid structures we once relied on have dissolved into what Zygmunt Bauman called Liquid Modernity. In this state, change is not just frequent; it is the only constant.
McLuhan once said, “It’s hard to have an objective in a world that’s changing faster than that objective can be fulfilled.” To further use his words, the medium is no longer a stable platform for delivering messages. Our environments are liquid, oozing entities that resist definition. Without active intervention, the default outcome is slop: the endless stream of content, clickbait, and recycled ideas that dominate our digital landscapes.
The challenge of our time is not just to navigate this fluidity but to shape it. How do we maintain direction and purpose in a world where everything is constantly shifting?
2. The Weight of Infinite Opportunity
The tools of our age—open protocols, LLMs, algorithmic feeds—present the paradox of freedom versus agency: while they grant unprecedented freedom to create and connect, this very permissiveness undermines our capacity to influence our environment with purpose.
Take algorithmic feeds. They promise to surface the most relevant content, but their logic is not aligned with our intentionality. They optimize for engagement, not meaning. The result is a flood of slop—endless streams of AI-generated articles, clickbait videos, and recycled memes. These feeds don’t just distract us; they shape what we see, how we think, and ultimately, how we act.
Or consider large language models (LLMs) like GPT. They are powerful tools for writing, brainstorming, and problem-solving. But without a clear sense of purpose, they default to generating generic, uninspired content that lacks depth or direction. The tool itself is neutral, but its design encourages passivity and ease rather than active creation. This has led to a resurgence in discussion around the importance of “taste.”
L.M. Sacasas calls this "the enclosure of the human psyche." Just as the enclosure of the commons turned shared land into private property, the tools of the digital age commoditize our attention, our creativity, and our agency. They transform the human psyche into a resource to be mined, optimized, and monetized. The more we use these tools, the more they shape us—often in ways that undermine our ability to act with intention.
Ivan Illich, in Tools for Conviviality, warned of this dynamic. He wrote, "Tools are intrinsic to social relationships. An individual relates himself in action to his society through the use of tools that he actively masters, or by which he is passively acted upon." Market forces have shaped these tools to capture our attention, as passive users generate more predictable profits than active ones. At best, our tools are “neutral,” but they are not convivial. They do not "serve the individual’s [chosen purposes]," largely because they do not encourage the user to make choices or to be opinionated.
This creates a vicious cycle. The more we use these tools, the more they erode our intentionality. And the less intentionality we have, the harder it becomes to use our tools with agency.
3. Designing For Agency
The challenge of agency operates on multiple levels: individual, systemic, and societal.
First, individuals can learn to use technology in ways that enhance rather than diminish their agency. I often return to Visa’s concept of “having a vector”: instead of being buffeted by the endless streams of content and possibilities, we can maintain a clear direction—a personal vector—that guides our technological choices and interactions. This means deliberately choosing tools that align with our goals rather than defaulting to whatever captures our attention, and building systems of thought and action that compound over time.
Having a vector doesn't mean ignoring the serendipity that technology enables—rather, it means having a framework for integrating new discoveries into our existing direction without losing momentum.
Second, as technologists and builders, we have a responsibility to design systems that increase the floor of agency.
“The power of agency is, in practice, the power to build: to direct our intelligence toward the work of understanding the world and adapting it to our needs. It’s how we impose a purposeful order on nature’s chaos.” – Gena Gorlin
The current paradigm of technology design often optimizes for engagement and potential at the expense of user agency. This creates a form of learned helplessness, where users become increasingly dependent on systems with no worldview.
We’ll talk about this more in the next essay.
Finally, why is this so urgent? Because agency compounds over time, both positively and negatively. Those who maintain their agency in this age will find their capability and influence growing exponentially. They'll be able to leverage new tools and platforms to pursue their vision with increasing effectiveness.
Agency begets agency; passiveness begets passiveness.
But the inverse is equally true and more concerning: those who lose agency become increasingly vulnerable to manipulation and exploitation. The negative effects of technological progress disproportionately impact the least agentic members of society, who find themselves being acted upon. They become passive consumers rather than active creators, their choices increasingly shaped by algorithms and agendas rather than personal intention.
This is why the challenge of agency is perhaps the most critical issue of our time. It's not just about individual success or failure—it's about preserving human capability and autonomy in an increasingly complex world. From childhood education to spirituality to interpersonal relationships, it’s critical that we learn how to instill and cultivate agency at every level. The systems we build and the choices we make today will determine whether technology serves to enhance our humanity or diminish it.
The path forward requires action at all levels: individuals must learn to maintain their vector amidst technological chaos, builders must take responsibility for designing agency-enhancing systems, and society must recognize the existential importance of preserving and promoting human agency. The stakes are too high to do otherwise.
My friend Kasey pointed out that the elephant in the room here is more economic than technological. Alas, that is a discussion for another essay. While profit incentives shape how technology develops and deploys, our focus here is on the tangible systems humans interact with daily. These technologies are the medium through which larger economic forces influence human behavior and agency.