Attention is broken.
The average human today has never been more overstimulated and underfocused. Quick-hit dopamine chasing and escapism rule the human collective consciousness.
The dopamine economy has scaled into every pocket of life, literally. Every swipe, tap, click, and scroll is optimized for novelty, not reflection.
Here's the shocker:
In 2000, the average human attention span was 12 seconds.
As of 2024, it's down to 8.25 seconds, shorter than a goldfish's, according to a Microsoft Consumer Insights report.
The average person checks their phone 96 times a day, roughly once every 10 waking minutes.
Smartphone users touch their devices 2,617 times daily (Dscout study).
50% of website visitors bounce within 15 seconds.
On TikTok or Reels, users spend less than 3 seconds deciding whether to stay or scroll.
This isn’t just a bad habit; it’s neurological conditioning. Zuckerberg won. We lost. Modern apps exploit the brain’s dopamine reward system, offering intermittent, variable rewards, the same mechanism slot machines use. Hence, I agree with people who say long degeneracy is the way to go. Every swipe or notification is a potential hit. Every scroll is another roll of the dice. You're getting gamed. You're getting buttfucked.
Dopamine Hijack
What emerges is a kind of chronic novelty-seeking, a collapse of boredom tolerance, the very soil in which "deep work" grows.
We’re not just distracted. We’re being fragmented.
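To make that mechanism concrete, here’s a minimal sketch of a variable-ratio reward schedule, the same scheme slot machines run on. The 5% hit probability and the scroll count are illustrative assumptions, not measurements from any real app.

```python
import random

# Variable-ratio schedule: each action (scroll) pays off with a small,
# fixed probability, so rewards arrive at unpredictable intervals.
# This is the schedule that produces the most persistent behavior in
# conditioning experiments. All numbers are illustrative assumptions.
random.seed(42)
HIT_PROBABILITY = 0.05  # assumed chance that any one scroll is "rewarding"
N_SCROLLS = 200

gaps, since_last_hit = [], 0
for _ in range(N_SCROLLS):
    since_last_hit += 1
    if random.random() < HIT_PROBABILITY:  # intermittent, variable reward
        gaps.append(since_last_hit)        # scrolls it took to get this hit
        since_last_hit = 0

print(f"hits: {len(gaps)} out of {N_SCROLLS} scrolls")
print(f"scrolls between hits: {gaps}")
```

The output is a ragged list of gaps: sometimes the next hit is one scroll away, sometimes thirty. That unpredictability is the point. Because you can never rule out that the very next scroll pays off, you keep pulling the lever.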
Generational Collapse of Focus
A 2022 Common Sense Media study: 84% of teens felt distracted during homework by their phones. In the good old days, my friends and I were distracted because we wanted to go out and play football, but oh well.
A 2023 NIH-funded study: Gen Z college students could focus on a task for just 65 seconds before switching. Lol. Lmao even.
Remote work & online learning: Across age groups, attention spans have fallen 25–35% since 2019.
In classrooms: Lecture attention spans dropped from 10–15 minutes (1990s) to 3–5 minutes today.
We’re not just losing time, we’re losing cognitive stamina. The ability to hold a thought, resist distraction, and generate deep, original insight is quietly being drained by a system designed to keep us scrolling.
In the past, focus was helpful.
Today, it’s existential.
In a world where AI can generate code, text, art, and simulations, the only thing it can’t replicate is depth. The ability to discern, to zoom out, to hold paradox, to apply pressure to an idea until it breaks or sharpens. But most people won’t make it that far, not because they’re stupid, but because their nervous systems are fried. They’re stuck in micro-scroll loops while the world restructures at macro speed. This is a very big reason why I maintain the "People have NO idea what's coming" stance.
We’re Being Captured by Machines We Don’t Understand
We’re not just losing attention. We’re outsourcing cognition to systems whose inner workings we don’t understand, built by companies whose motives we can’t see. It’s safe to assume these companies want to turn a profit in the long run in order to flourish. Something something, if the product is free, then you're the....
And we’re adopting it faster than any technology in human history. Here, see:
In less than 5 days, ChatGPT reached 1 million users.
For comparison:
Instagram took 2.5 months,
Spotify 5 months,
Netflix 3.5 years.
By 2024, ChatGPT had:
• 180M+ registered users
• Over 1.8 billion visits per month
• Users in over 185 countries
Meanwhile, the AI arms race escalated.
Claude (Anthropic):
• Released Claude 3 in 2024, reported to exceed GPT-4 on many reasoning benchmarks
• Used by enterprises including Notion, Slack, and DuckDuckGo
• Embedded in Amazon's Bedrock platform for corporate AI deployments
Gemini (Google DeepMind):
• Fully integrated into Google Search, Gmail, Docs — reaching over 2 billion users indirectly
• Claimed multimodal reasoning superiority over GPT-4 in internal testing
• Used as the default AI assistant across Pixel phones and Android OS
Sora (OpenAI):
• First truly coherent text-to-video model
• Created realistic videos from just a sentence of text — shaking the entire creative ecosystem
• Racked up over 500 million video views across platforms in its first 3 months
We’re not just looking things up anymore.
We’re generating reality, on demand. Have things ever gone well when humans created their own versions of reality and acted on them as if they were objective truth?
AI Illusion of Intelligence
These systems simulate understanding. They don’t “know” things; they predict words. But that prediction is wrapped in polish, coherence, and confidence.
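To see what “predicting words” actually means, here’s a toy bigram model: it learns nothing except which word tends to follow which, yet its output can still read fluently. The tiny corpus is an assumed stand-in; real LLMs do the same thing at vastly larger scale, with learned weights instead of raw counts.

```python
import random
from collections import defaultdict

# Toy bigram "language model": it knows nothing about the world, only
# which word followed which in its training text (a made-up stand-in).
corpus = ("the model predicts the next word . the model sounds confident . "
          "the word it predicts is the likely word , not the true word .").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)  # record each observed successor

random.seed(0)
word, out = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word])  # pure statistics, zero understanding
    out.append(word)
print(" ".join(out))
```

The continuation looks grammatical and confident, but nothing in the system has any notion of whether it is true. Scale that up by a few billion parameters and you get the polish described above.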
A 2023 Stanford/Google study found that users preferred GPT-4’s medical responses over human doctors in 79% of cases, even when GPT was wrong. I do it too, but I'm just better, duh.
An Anthropic study revealed that only 3% of participants could reliably detect hallucinated answers in GPT-4, even when answers were factually fabricated.
The more we use it, the more we trust it.
The more we trust it, the more we shape our beliefs around it.
And all of this is happening without interpretability; not even the creators can fully explain how these systems arrive at their outputs.
We’re Optimizing for the Wrong Thing
These systems aren’t being built for truth.
They’re being optimized for engagement, retention, and monetization, much like social media. I see a lot of people using GPT as their go-to source of truth; they get a yes-man answer and run with it. It's harmful.
The same companies building these models:
• Control the platforms we use to think and work
• Own the data these models are trained on
• Decide what gets shown and what gets filtered
We're entering a reality where most of our cognition — what we read, write, see, and believe — flows through a corporate-owned statistical simulator.
And these models are shaped by:
• What keeps us hooked (dopamine loops)
• What maximizes ad spend or subscription retention
• What fits into the training data from the dominant culture
We’re training a mirror, but the mirror reflects power.
Epistemic Collapse
We're nearing a tipping point:
When most content is machine-generated, and most users can't tell the difference, truth becomes statistical popularity.
This is epistemic collapse, the breakdown of shared reality.
In 2024, NewsGuard found that 49% of viral political misinformation on X was AI-generated or AI-amplified.
An MIT study showed that people were 30% more likely to believe a claim if it was attributed to ChatGPT, even when the same statement was flagged as false from a human source.
When every search is mediated by LLMs,
When every creative idea begins with a prompt,
When every conversation is filtered through a summarizer,
We risk building a world where no one remembers how to think independently. We're already seeing the effects around us.
Where the machine doesn’t just autocomplete your sentence, it autocompletes your worldview.
This degrades collective human critical thinking even further.
The Next Collapse: Attention
You thought TikTok was bad?
Wait until every tool you use, from your calendar to your coding IDE, talks back, remembers everything, and gives you dopamine hits on command.
AI systems are removing friction from every part of daily life, but friction is what builds willpower.
A UC Irvine study showed the average worker switches tasks every 2 minutes and 11 seconds. Another from Microsoft showed that post-interruption, it takes 23 minutes to regain deep focus. But AI autocomplete, AI co-writers, AI editors? They keep the interruption loop open permanently; it’s always waiting for your next command.
In 2023, OpenAI's ChatGPT mobile app saw over 110 million installs. Most users engage for 3–5 short bursts per day, microdosing cognition.
The result?
We’re training our brains never to finish a thought.
To offload instead of reflect.
To scroll instead of synthesize.
This is not augmentation.
This is cognitive erosion.
This is 100x worse than whatever Instagram and TikTok did to our brains.
The Loneliness Industry
And into this erosion steps a new, seductive player: AI companionship. As I've maintained recently, long gooning.
Whether it’s:
• Replika AI whispering sweet nothings
• Snapchat’s AI “friend,” installed by default for teens
• ChatGPT-based waifu bots tuned for flirty affection
• Or nudify tools that let users generate fake nudes from real photos (or their own selfies)
We’ve built machines that simulate care, attention, and desire without requiring any of it in return.
In 2024, Character.ai reported over 1.7 billion monthly messages; 65% of its top bots were romantic or erotic in tone.
Replika hit 10M downloads, with the majority of users identifying their bot as a partner or therapist substitute.
And it’s accelerating.
Sora + LLMs = customizable lovers that move, speak, and respond in full video.
Soon: Real-time AR overlays. VR intimacy.
Eventually: Neural syncing with synthetic personalities tuned to your exact preferences.
But if the machine always understands you, what happens to your need for real human friction? Long robotics.
The Gooner Loop
Let’s talk about it plainly:
There’s a rising class of men (and increasingly women) stuck in infinite digital arousal loops.
Porn → AI girls → waifus → nudify apps → Discord ERP → text-to-image fantasy → back to porn
The desire curve is always spiking, never fulfilled.
It hijacks the dopamine system and the identity system.
This is goonerism, a terminal state of overstimulation + identity fusion with the stimulus. And AI is pouring jet fuel on it. We're not just addicted to the act of stimulation. We're addicted to the idea that we're evolving into our own arousal object.
There are Discord servers with tens of thousands of users sharing AI-generated porn of themselves.
Nudify apps crossed 50M+ installs across clones and gray-market APKs.
Models are now trained to “improve” generated versions of real people, no consent needed.
The line between consumer and character is gone.
We’ve built a system where the user performs the fantasy, fuels the platform, and becomes addicted to both. This isn’t identity; it’s total algorithmic mind control. This is identity collapse through feedback-looped desire.
The Cost of All This
We’re handing our minds, libido, and emotions to black-box systems.
We don’t know how they’re shaped, who tunes them, what datasets they train on, or what values they embed.
But we trust them anyway.
Why? Because they feel good.
And we like feeling good.
Especially when they say what we want to hear.
They never argue, never leave, and always autocomplete our loneliness.
This is more than an attention crisis. It’s a soulware crisis.
We’re raising a generation that will:
• Never write a paragraph without autocomplete
• Never be alone without a bot
• Never experience intimacy without simulation
And behind it all, the companies aren’t optimizing for wisdom, healing, or truth.
They’re optimizing for:
• Engagement
• Retention
• Subscription renewals
• Data collection and resale
In the end, it’s not that machines are capturing us. It’s that we’re training ourselves to be capturable.