Agency and Customization in the Age of AI
A couple of weeks ago, I stumbled on Sublime (not the ska band!) through a Substack note by Evie, one that went viral after she shared her lineup of favourite note-taking and PKM apps.
Sublime’s logo looked pretty cool, playful and mysterious, so I just downloaded it for shits and giggs. Little did I know I was about to PLUNGE into a rabbit hole that would reshape how I think about AI and the internet itself.
Turns out Sublime isn’t just an app; it’s also a newsletter, run by Sari Azout.
It explores technology, taste, and the future of individuality in the age of AI. I read one post, “What Matters in the Age of AI Is Taste,” and immediately subscribed. Before I knew it, I was watching Sari’s interviews and talks, trying to absorb more of her thinking.
Everything was so clear and resonant, and just made sense! In fact, I even upgraded to premium, adding yet another line item to my expensive-ass stack of subscriptions. As someone trying to understand and build meaningful things with these new powers AI is granting us, this all felt deeply relevant.
So this post is my attempt to crystallize everything I’ve been learning about where our future with AI lies: a future built around customization, individuality, and context as the foundations of a more personal kind of intelligence. One that’s quietly reshaping the world around us.
1. The age of algorithmic sameness
The internet, and by extension the world we now live in, is overrun by algorithms. Everything we see, click, and even think about online passes through a system built to predict what will keep us scrolling and returning to whatever app we’re on.
We live in what The New Yorker’s Kyle Chayka calls an “age of algorithmic sameness”.
In his book “Filterworld: How Algorithms Flattened Culture”, he explains something we’ve all come to know all too well: the platforms shaping our digital lives (you know the ones: TT, IG, YT) aren’t built to reward originality but… wait for it… engagement. They optimize for what’s familiar, emotionally efficient (quick to feel, easy to forget), and easily digestible. The result is a kind of cultural homogenization: a flattening of taste where differences in style, context, and intent start to disappear in favour of what “performs” (🤢).
It always makes me sad when a friend posts a video with a song I’ve never heard before, one that sounds fresh, cool, intentional : D Then I hear it recycled across a bunch of other reels and realize damn… it was just a trending song… and that’s probably why they picked it : (
It wasn’t always like this. I grew up in the early aughts and still remember how the early web felt like a sprawl of personal taste: handmade websites, weird forums, messy originality.
But then came the platforms. Over time the internet learned to optimize itself. It stopped rewarding what was interesting and started rewarding what was effective.
Fast forward to today: this iteration of AI, this whole wave of “generative AI,” feels like the next chapter of that story, the part where the algorithm no longer just recommends but also creates.
The similarity lies in the underlying pattern: both systems learn from massive datasets of what already exists.
The difference is what happens next. Engagement algorithms push us toward sameness because their job is to maximize broad appeal.
Generative AI can follow the same logic, producing conventional, algorithmically safe outputs.
Which is why I’ve always disliked the word “generated.” To me it has an ugly twin lurking beside it: “generic”. “Generated” and “generic” share the same root problem. Both are outputs of pattern matching. Both feel like they were assembled by gravity rather than shaped by taste.
What unsettles me most is generative AI’s stochastic nature when left unchecked. These models don’t decide; they sample probabilities. Every sentence is a statistical best guess. Every image is the most likely next pixel.
Without direction, without context, without taste, it becomes expression without a point of view. Creativity without a creator.
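To make “they sample probabilities” concrete, here’s a toy sketch. The numbers are made up (they don’t come from any real model), but the mechanism is the same idea: at each step the model holds a probability for every possible next word and just rolls weighted dice over them, which is why the safe, most likely option wins most of the time.

```python
import random

# A made-up next-word distribution for the blank in
# "the sunset was ___". Illustrative numbers only.
next_word_probs = {
    "beautiful": 0.45,   # the safe, statistically dominant pick
    "nice": 0.30,
    "stunning": 0.20,
    "luminous": 0.05,    # the interesting word almost never surfaces
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick one word, weighted by its probability — no deciding, just sampling."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Sample the blank many times: the generic options dominate.
counts = {w: 0 for w in next_word_probs}
for _ in range(1000):
    counts[sample_next_word(next_word_probs)] += 1
print(counts)
```

Run it and “beautiful” and “nice” pile up roughly three-quarters of the draws, which is the whole problem: left unsteered, the most probable output is also the most generic one.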
So now the question becomes who is doing the steering, and toward what? These tools don’t have perspectives, so are you willing to give them yours?
2. Customization as agency
The first wave of AI felt like magic tricks: impressive, instant, but ultimately ephemeral.
The novelty wore off quickly, and the technology’s limitations forced everyone into the same shallow use cases.
But as those constraints dissolve, as models gain memory, context, and the ability to understand nuance, something more interesting becomes possible: ACTUAL COLLABORATION.
Intelligence shaped by your taste, your context, your way of thinking.
Better models are important no doubt, but the real antidote to generic generation is AI that knows you.
This Sublime essay was my entry point into this idea (please read it, it’s really good).
“AI is powerful, but taste-blind. It can make anything, but it just has no idea what’s actually worth making.
But AI + your taste? That’s the game-changer.
The more AI can execute, the more your eye for what’s interesting – your ability to discern and curate what matters and why – becomes everything.”
This is Sublime’s whole premise, and it made so much sense to me because I already do a version of this compulsively (I have no doubt you do too). Screenshots, screen recordings, notes scattered across apps. References I can’t quite explain why I’m saving, but they feel important. Been collecting this stuff for years!
Sari’s essay crystallized something I’d been feeling: all those references I’ve been hoarding are raw material. They’re what will make AI outputs feel like mine, not just the internet’s statistical average.
So that’s when I realized: customization is about agency.
There’s a difference between personalization and customization.
Personalization is what algorithms have always done to you. TikTok learns your patterns, Spotify builds your playlists, Netflix queues your next binge.
You’re still consuming what gets served. The algorithm knows you, but you’re not shaping it, merely reacting.
Customization is different. When you curate references, craft prompts, decide what context to feed an AI, you’re making choices that reflect how you see the world. What you value. What you think is worth making. To design, prompt, or fine-tune an AI is authorship. It externalizes your taste, your priorities, your creative instincts.
This is where agency lives: not in consuming personalized content, but in shaping the tools that help you think and create.
Yes, that takes more work. It’s slower than just accepting whatever the algorithm serves or settling for bland AI outputs.
But that effort is the entire point.
Passive algorithmic consumption trains you to receive, not to choose.
Active customization does the opposite. It keeps you engaged in the process of deciding.
In an age where everything’s instant and friction-free, that deliberate engagement might be the only thing keeping our thinking sharp.
3. The individuated future
I’m not going to pretend I know what the future looks like lol. But I’ve been paying attention to the conversations happening around it, so I’ll mostly be sharing the patterns I’ve been seeing and the questions that only time will answer.
One of those conversations was an episode of Sinead Bovell’s podcast where she sits down with futurist Alexander Manu. There was one moment that stuck with me. Sinead raised a concern: doesn’t hyper-personalization kill community? If we all see different products, want different things, we lose the shared experiences that bring us together.
Manu’s response: we already customize everything we can. Your iPhone home screen looks nothing like mine. But we still find ways to connect. To him, personalization isn’t the problem; it’s what we’ve always wanted. We just haven’t extended that logic everywhere else yet.
The next era of AI will likely follow this pattern. Not one generic chatbot for millions, but personally generated intelligence shaped by individual context.
Which raises an interesting question: what happens to platforms built on sameness?
Social media today optimizes for virality, right? But if everyone’s AI reflects genuinely different contexts, how do these platforms function? When the internet stops being a town square and becomes millions of personalized realities, what then?
But highkey, this future has real problems.
A few comments under the video highlighted them.
One captured it perfectly:
“Democracy is only possible if we can all agree on the facts. If we all live in separate realities we can’t hold a conversation.”
Others pointed out that we’ve already seen what filter bubbles did to political discourse and so hyper-customized AI could expand that fragmentation everywhere.
And maybe we’re overestimating how much people actually want to be unique. One commenter argued: “Style is a group thing... Most people don’t want to stand out. Most people don’t have taste.” Shared products and cultural touchstones give us common ground.
So isn’t hyper-customization just trading one problem (sameness) for another (fragmentation)?
I think it comes down to how you get there.
Algorithmic sameness is passive. The platform decides what you see based on what keeps you scrolling. You’re not choosing the filter bubble, you’re just stuck in it.
Individuated AI, done right, is active. You’re deliberately curating the references. You’re choosing what context to feed it. And you can customize toward expansion, not just confirmation. You can build a knowledge base that challenges you, that includes perspectives you disagree with, that pulls from sources the algorithm would never surface.
And again there’s real risk here too. If we all retreat into personalized realities with no shared frame of reference, we lose collective conversation. But the alternative, generic AI trained on the internet’s lowest common denominator, feels worse.
So I think instead of asking whether personalization is good or bad, we ought to ask how do we design it and what values guide that design.
How do we get individualization without fragmentation? How do we build AI that amplifies personal taste while still creating space for shared culture?
And then there is the practical side. Current AI already depends on massive energy and infrastructure. How do we scale personalized intelligence without burning through resources we cannot afford? How do we make sure customization is accessible, not just a premium feature for people who can pay for it?
I have no certainty about where this goes. But I do know the value worth protecting. If the choice is between systems that flatten us into sameness or tools that let people actively shape their own intelligence, me personally… I choose agency.
The future of the internet might not depend on everyone seeing the same thing. It might depend on everyone having the ability to decide what they want to see, and learning how to stay in conversation across those differences.