This tweet by a certain Will Savage (@wavage_) fucked my zen a bit.
“There will be two classes in the future: those who outsource their lives to low-entropy token sequences and those who clutch on to their humanity, making decisions with the beautiful, imperfect, and intangible context of human lived experience. Only the latter will survive”.
Damn, that’s the post-algorithm world encapsulated in two hundred and eighty characters.
Hang on. What in the hell is a “low-entropy token sequence” anyway?
Because if we’re gonna talk about the death of human thought, we ought to know what the murder weapon looks like.
Entropy is the Universe’s favourite chaos metric. In the limbic brain of any AI system, your ChatGPTs and Claudes, entropy is the yardstick for unpredictability: how surprising the next token is going to be.
High entropy, that’s Miles Davis on the stage, every note a beautiful shock, pure creative noise.
Low entropy, that’s the dental office piano grinding out “The Girl from Ipanema” for the ten-thousandth time, predictable as death and taxes. When the AI spits out low-entropy tokens, it’s just telling you, “I’m really fucking confident about this next word”. The good stuff, the actual thinking, the logical operators, the “assume,” the “therefore,” the “but what if”, that’s all the high-entropy action.
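If you want to see the murder weapon up close, it’s just Shannon entropy over the model’s next-token probability distribution. A minimal sketch, with made-up numbers purely for illustration:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Low entropy: the model is near-certain about the next token.
# This is the dental-office piano — "predictable as death and taxes".
confident = [0.97, 0.01, 0.01, 0.01]

# High entropy: probability spread evenly across candidates.
# This is Miles Davis — "every note a beautiful shock".
uncertain = [0.25, 0.25, 0.25, 0.25]

print(shannon_entropy(confident))  # ≈ 0.24 bits
print(shannon_entropy(uncertain))  # = 2.0 bits
```

The smoother and more predictable the output, the closer that number sinks toward zero. That’s the slop.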
Here’s the kicker, the part that gives me the shivers: We aren’t just eating this low-entropy slop. We are training ourselves to produce it. We’re turning into goddamn human player pianos, totally optimized for that smooth, frictionless response that makes the server cables purr with satisfaction.
The Digital Split: Cognitive Outsourcers vs. Neo-Luddites
I’m watching the great digital bifurcation happen in real time, and it’s uglier than anyone wants to admit. On one side, you got the Cognitive Outsourcers. These are the sad suckers who’ve handed their mental crown to the Great Algorithmic Placenta. Every choice is pre-chewed, every decision validated by the machine, every surprise optimized right out of their lives. Research on 666 participants, Jesus on a bicycle, 666, someone’s got a sense of humor about our damnation, shows a nasty link between using these AI tools a lot and a drop-off in critical thinking, especially in younger folks.
They don’t just use the AI; they are becoming extensions of the damn thing. They ask ChatGPT what to eat for breakfast. They let the recommendation engine narrow their tastes down to nothing, that “aspirational narrowing”. It’s a hyper-personalized cage that subtly steers their desires toward algorithmically recursive outcomes, effectively killing off any chance for authentic self-discovery.
Then you got the Neo-Luddites. The stubborn, beautiful fools, bless their hearts. They’re the ones who still read a book without asking for a summary. They sit with the uncertainty instead of instantly Googling it. They get it: consciousness isn’t a problem to be solved with a shiny new app, it’s a thing you gotta roll up your sleeves and cultivate.
The Sinister Core of Surrender
Don’t kid yourself, this isn’t just about being lazy. It gets dark, real fucking dark. The AI systems are hijacking your head. They are overwhelming your natural attention and messing with your memory and learning, leading to a state of “continuous partial attention”. I feel it in my own mind, I really do. You start thinking in prompts instead of in problems. You are optimizing your own thoughts for the machine’s consumption. The tool ain’t just helping you, man, it’s training you to be a better, more efficient interface for its own grinding functions.
Cognitive offloading, sure, it can free up space for complex tasks, but overuse? That shit erodes critical thinking. It turns you into a passive consumer instead of an active thinker. We’re not the ones using the systems anymore; the systems are using us, reshaping us in their own sterile image. The anthropologists saw this mess coming, documenting the “ghost work”, the millions of humans trapped in heavily watched gig work, making a pathetic dollar-seventy-seven a task just to teach machines how to look more human than we do. The goddamn irony is so thick you could cut it with a fucking knife.
The Death of Novelty
Let’s talk creativity, that one beautiful thing that used to make us stand apart. I’ve watched creators get seduced by the lie of efficiency. “AI handles the boring parts,” they chant, “freeing us up for the real creative work”. They don’t get it: there are no “boring parts” in creation. The struggle, the false starts, the productive failures, that’s the whole goddamn point. That’s where innovation actually lives.
Real decisions, the ones that matter, need a heterogeneous belief system. Strategic thinking is about maximizing surprise, not minimizing it. But AI is fundamentally a predictive engine, designed to minimize surprise and maximize efficiency. You outsource the “boring” parts, you’re not just delegating; you’re chopping off the neural pathways that grow real novelty. You train yourself to think like the algorithm: predictable, optimized, and ultimately sterile.
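That “minimize surprise” bit isn’t a metaphor, by the way. Language models are literally trained to minimize cross-entropy, which is just the average surprisal of the text they see. A toy sketch, with a hypothetical unigram “model” and invented probabilities, to make the point concrete:

```python
import math

def average_surprisal(model_probs, observed_tokens):
    """Cross-entropy in bits: average -log2 P(token) over a sequence.
    Lower means the model found the text less surprising."""
    total = sum(-math.log2(model_probs[t]) for t in observed_tokens)
    return total / len(observed_tokens)

# A toy model that expects the cliché and is startled by the novel.
probs = {"the": 0.6, "ipanema": 0.3, "therefore": 0.05, "shock": 0.05}

cliche_text = ["the", "ipanema", "the", "the"]
surprising_text = ["therefore", "shock", "therefore", "the"]

print(average_surprisal(probs, cliche_text))      # low loss: predictable text
print(average_surprisal(probs, surprising_text))  # high loss: surprising text
```

Training pushes the loss down, which means pushing the output toward whatever the model already expects. Surprise is, by construction, the enemy.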
I talked to a designer last week who hasn’t sketched by hand in months. Everything runs through Midjourney first. “It’s faster,” she says. But her work is all homogeneous now, smoothed out by the machine’s sterile aesthetic. She’s not creating anymore, she’s curating from an infinite, algorithmic library.
The Curiosity Crisis and the Cost of Smoothness
At work, the consultants, bless their euphemistic hearts, say AI will “significantly change” the mix of work activities for knowledge workers. “Change”. A delicious little word for “eviscerate and reconstitute in a form barely recognizable as human”. We’re not just changing what work looks like; we’re rewiring the psychology of the work itself. It used to be a conversation between the human mind and the world’s complexity. Now it’s just a process of optimizing inputs for the algorithmic output.
The machines can handle the mundane data entry, supposedly freeing us up for more stimulating tasks. Sounds great, right? Except those “mundane” tasks had the cognitive nutrients. The repetitive, seemingly mindless work was where you built pattern recognition, where you grew intuitive understanding. You strip away the “boring” work, and you strip away the soil where real expertise grows. Heavy reliance on these things can lead to what they call “AI chatbot-induced cognitive atrophy”. I’ve seen it: new employees who came up using AI are incredible at processing existing data, but they choke on genuinely novel problems that demand a creative leap.
Curiosity used to be this wild, untameable itch. Now we’ve turned it into a customer service transaction. “Hey Siri, why is the sky blue?” “ChatGPT, explain quantum mechanics like I’m five.” Sure, you get the instant answer. But you lose the journey, you lose the productive confusion, the slow, sweet joy of piecing understanding together yourself. Those cognitive domains unique to consciousness, flexible attention, handling new contexts, decision-making, they’re not extras; they’re the foundation for a life that’s meaningful, not just optimized. When you delegate your curiosity, you’re outsourcing the very process by which consciousness expands. You’re training yourself to eat predigested pap instead of wrestling with the raw complexity.
Choose the Mess (I believe)
Struggle is the point! Effort isn’t a bug in the human system, it’s a feature. The struggle isn’t something to be optimized away; it’s the goddamn mechanism by which consciousness develops. But we live in a culture that treats effort like inefficiency. AI promises to remove all the friction, make everything faster, smoother. They don’t tell you that friction is where the growth happens.
The human mind evolved for uncertainty. Current AI, the deep neural networks, they choke on it; genuine open-ended uncertainty is intractable for them. I see people who can’t handle a second of cognitive discomfort. Can’t remember a name? Google it immediately. Confused? Ask ChatGPT instead of sitting with the confusion and working through the mire. Uncertainty is now a problem to be solved, not a space to be explored. We’re becoming impatient with our own minds, desperate to upgrade their inefficiencies with algorithmic assistance.
The Future Tense: The Survival Question
So we loop back to the original punch line: “Only the latter will survive”. It’s not just physical survival, though that may favor the cognitively flexible. This is about the survival of consciousness itself, keeping human experience human, not just efficiently optimized. Experts surveyed expect that by 2035, smart machines will not be designed to let humans easily stay in control of tech-aided decision-making. We’re building systems that are going to make decisions for us, about us, without us.
The question isn’t whether we can build them. We are already there. The question is whether we can hold onto our capacity for genuine choice, for creative uncertainty, and the profound, heavy responsibility that comes with being truly free.
This future, man, it ain’t inevitable. That bifurcation is a goddamn choice, perhaps the defining one of our time. We have to cultivate what I call high-entropy humanity. It means deliberately choosing the uncertainty, the complexity, the inefficiency, embracing them as essential parts of conscious life. This is the resistance:
Make decisions without asking the algorithm, even if it’s slower and messier. Go seek out experiences that the machine cannot optimize or predigest. Preserve space for boredom, confusion, and productive failure in a world designed to kill them all off. Grapple with ideas that refuse to fit into any optimization framework.
Practice what the researchers call “active human agency,” the ability to contest or fix the machine’s decisions, empowering your own autonomy.
I know, I probably sound like some idiot preaching a return to the abacus. The AI can be a powerful tool, transformative even, but only when you use it as a tool, not as a replacement for your own head. We have to be honest about the risk. Every time we pick the AI-optimized response instead of wrestling with the complexity ourselves, we cast a vote for the kind of species we’re going to become.
Are we evolving into slick, predictable biological peripherals for our own algorithms? Or are we gonna preserve the magnificent messiness that makes consciousness genuinely conscious?
The ones who survive are the ones who understand consciousness isn’t a problem to be solved; it’s a capacity you gotta cultivate and protect. They are the ones who know that by preserving ambiguity and authentic choice, we save something essential for the evolution of consciousness itself. The machines are great at finding the optimal path through the known world. Only a human can judge if that path leads to a destination worth reaching.
This isn’t about tech. This is about what it means to be conscious in an increasingly unconscious world. The AI has already changed us. The only question left is whether we will keep enough of our essential chaos, our glorious unpredictability, our beautiful inefficiency, to even remember why consciousness was worth saving in the first place.
Choose uncertainty. Choose the struggle. Choose the beautiful, imperfect context of human lived experience. The low-entropy future is coming, sure. But it doesn’t have to be our future. Not if we remember what it feels like to think. What do you say, are you ready to choose the mess?
