Your brain on ChatGPT looks like your muscles after 6 months on the couch: weak, flabby, and shriveling by the day. Scientists at MIT just proved it with brain scans. And yes, you should be worried. I'm worried. The more people used AI, the less their neurons fired. Memory gone. Creative thinking flatlined. Original thoughts? What are those? We're witnessing the first generation in history voluntarily outsourcing consciousness to AI. But wait: we've been here before, and the story never ends the way you think it will. So today we find out just how much AI has cooked your cranium. My name is Guy, and this is a video your brain will thank you for watching.

Recently, MIT's Media Lab gathered 54 brave souls, strapped high-density electrode monitors to their heads, and watched what happened to their brains while they used ChatGPT for writing tasks. What they found should terrify anyone who's ever copy-pasted an AI response and called it a day. These ChatGPT users showed the weakest neural connectivity across all brain regions compared to the control groups. We're talking significant reductions in alpha waves. That's your creative thinking. Theta waves tanked; there goes your working memory. And beta waves crashed too, so say goodbye to sustained focus. Frontal midline theta activity, which governs your attention span, was completely AWOL in some ChatGPT users. They just checked out entirely.

Now, the researchers called it "cognitive debt," a fancy way of saying your brain is writing checks it can't cash. When you use AI to complete tasks, you're borrowing cognitive ability you haven't actually developed. You get immediate results now but pay for it later with pathetic thinking skills. Picture your neural pathways as roads: every time you outsource thinking to AI, those roads get a little narrower. Use it enough and eventually you're left with dirt tracks where eight-lane highways used to be. Participants also became progressively more passive across sessions.
Session one showed reduced engagement. Session two, further decline. By session four, they'd basically given up any pretense of thinking and were copy-pasting ChatGPT outputs like zombies. The memory results were equally embarrassing. Participants were asked to recall their own work minutes after writing it with AI assistance, but the vast majority couldn't even quote their own essays. Imagine not remembering something you supposedly wrote 5 minutes ago. That's when you know you've surrendered your cognitive autonomy to OpenAI.

The MIT research exposes the disconnect between feeling productive and actually learning anything. Sure, your AI-assisted task got done quickly, but the actual value of learning (encoding knowledge, building neural connections, developing understanding) is completely cut out of the picture. Now, the control groups in this study told a very different story. People using only their brains maintained robust neural networks: strong frontal-to-posterior connectivity, active working memory, engaged creative processing. Even the search engine group showed decent activity, with increased visual processing as they scanned and evaluated information. At least their brains were doing something. But those ChatGPT users were goners. The researchers diplomatically called it, quote, "automated scaffolded cognitive mode," which is a nice way of saying the lights were on but nobody was home.

But what's really scary is how fast this happened. We're not talking years of decline; cognitive collapse showed up after just a few sessions. And even before the testing began, participants who reported regular ChatGPT use in daily life started with the most severe deficits. Their baseline neural activity already looked like what new users developed after multiple sessions. The damage was already done. Then there's the homogenization nightmare. Every ChatGPT essay on the same topic looked virtually identical: same structure, same transitions, same milquetoast conclusions.
The researchers could spot AI-written essays with disturbing accuracy based purely on their uniformity. Everyone was converging on the same mediocre style and studied neutrality dictated by OpenAI's training data. Human teachers spotted them instantly, too. One noted they all had, quote, "that ChatGPT feeling": technically competent but utterly soulless. Meanwhile, the AI judge rated them the highest, which, well, of course it did. The machines love their own output. Research shows generative AI can, quote, "herd" divergent thinkers towards the narrow center of its training data. Now, if everyone's output becomes homogenized and we all become dependent on similar AI tools, we're looking at a standardization of thought that could murder innovation, make the future extremely boring, and expand Silicon Valley's domain of control to your entire brain.

Participants in the MIT study also seemed confused about how much of their output belonged to them. When asked, "How much of this essay do you own?", brain-only participants claimed near-unanimous full ownership, but ChatGPT users were all over the place and struggled to identify what they had contributed to the output associated with them. This diminished sense of intellectual ownership creates a cascade of problems. If you don't feel ownership, you're less likely to scrutinize a text for accuracy, less likely to take responsibility for its ethical implications, and more vulnerable to whatever biases OpenAI baked into its model. You become a passive conduit for algorithmic output.

And speaking of passive conduits, it seems that not everyone watching this video is subscribed. So why don't you activate those neurons by smashing that like button, subscribing to the channel, and pinging that notification bell so your brain keeps getting this quality content stimulation. Now, before you throw your laptop out of the window and join the Amish, let's pump the brakes a bit.
Humans have been absolutely convinced that new technology is going to melt our brains for basically our entire history. Cast your mind back two and a half thousand years: Socrates was having an absolute meltdown about this disruptive technology called writing. He refused to write anything down because he believed it would destroy human memory, turning people into intellectually lazy zombies who couldn't remember anything without external aids. And do you know what? He was technically right. We did lose our epic-poem memorization abilities. Ancient Greeks could recite the Iliad from memory. Can you even quote one line from the Bitcoin white paper? Didn't think so. But Socrates was a bit shortsighted on this one. It turned out we gained libraries, accumulated knowledge, and advanced scientific progress. So on balance, the written word was probably a net positive.

And yet the same concerns were still in evidence some 2,000 years later. In 1545, Swiss scientist Conrad Gessner was losing his mind about the printing press, warning it would cause a quote "dangerous abundance of information" that would overwhelm our poor medieval brains. He thought society would collapse under the weight of too many books. But guess what? Instead, we got the Renaissance, the Scientific Revolution, and the Enlightenment. It turns out human brains could handle reading more than one hand-copied manuscript per lifetime. Who knew?

And the pattern repeats like clockwork. Photography would destroy artistic skill. Why learn to paint when you can just click? Calculators would create a generation of mathematical idiots. Television would turn children into vegetables. Video games would breed violent psychopaths. The internet would give everyone the attention span of a goldfish. And TikTok is supposed to make popcorn out of brain matter. Now, a few of these assumptions did contain grains of truth, some larger than others. TV probably didn't help attention spans.
Calculators definitely reduced mental arithmetic skills, but we adapted, evolved, and our brains became preoccupied with other material, like, for instance, 12-hour-long ASMR videos. While we were busy panicking about every new invention of the last 100 years, IQ scores were rising steadily. In the 1980s, New Zealand researcher James Flynn documented this phenomenon across countries with long-term IQ testing data, noting consistent gains of about three points per decade throughout the 20th century. During the exact period when television, computers, and video games were supposedly frying our brains, average IQ scores rose by 30 points. That's supposed to be the difference between average and gifted intelligence. The Flynn effect showed that despite all our technological anxiety, human intelligence was actually increasing. Better nutrition, education, health care, and, yes, increasingly complex technological environments seemed to be upgrading our cognitive software. The gains were especially strong in abstract reasoning and pattern recognition, exactly the kinds of intelligence you'd need to navigate a high-tech society.

So, well, I guess the whole AI thing is just another technology panic debunked. Nothing to see here. Everything is fine because we're all getting smarter. Just kidding. The Flynn effect has in fact been moving in reverse for the last 30 years. Since the 1990s, IQ scores in Norway, Denmark, Finland, and France have begun declining for the first time in recorded history. Norway's military noticed it first. Their conscript data, covering basically every young male in the country, showed IQ scores beginning to decline. Not a blip, not a statistical anomaly: year after year, steady decline. Denmark saw the same pattern. Then Finland, then France. We're talking sustained, measurable drops of two to four points per decade. And these aren't cherry-picked samples. When your data set is literally every male of conscription age, you can't blame selection bias.
But the timing is what makes everyone nervous. These reversals started in the 1990s and accelerated through the 2000s, just as we entered the information age. You know: personal computers going mainstream, the internet exploding, mobile phones everywhere, social media rewiring how we communicate, digital technology saturating every aspect of life. Sure, there are other possible explanations. Maybe we hit the ceiling for environmental IQ gains. Maybe modern comfort removed evolutionary pressure. Maybe teaching for exams rather than learning is dumbing down education. But the correlation with digital saturation is hard to ignore. We might be looking at two opposing forces: environmental factors still pushing certain abilities up while something else drags general intelligence down. Or maybe we've introduced something to the environment that's actively making us dumber.

Now, different countries are taking very different approaches to the threat of AI making us stupid. China's strategy is particularly ambitious. Starting from this year, they're making AI education mandatory in every public school: 8 hours minimum per academic year, from basic pattern recognition for kindergarteners to algorithm design for teenagers. But simultaneously, they're doubling down on traditional cognitive training. Kids still handwrite thousands of characters and memorize classical texts, and academic assessments are done with pen and paper. It makes sense: if AI handles surface-level thinking, human brains need to go deeper. China wants the productivity boost without the brain rot.

Singapore, meanwhile, is treating it like a public health issue. Their public SkillsFuture program hands out credits of around $3,000 to workers over 40, not to retire them, but to teach them how to use AI without becoming dependent. It's government-funded cognitive cross-training. They're treating thinking skills the same way they treat physical fitness: use it or lose it.
Japan is taking a practical approach, too. AI literacy is mandatory, but so is proving you can function without it. You learn the tool, but also when to put it down. Japanese teachers even use AI's mistakes as teachable moments, training students to spot and correct algorithmic errors. Meanwhile, over in the West, the EU is taking a very cautious and somewhat suspicious approach to AI in education. Generative AI isn't banned, but it's heavily scrutinized. The general message is: yes, you can use it in schools, but only after a thick coat of transparency, human oversight, and data protection rules has been applied. Teachers are expected to monitor it like a misbehaving student who might plagiarize, hallucinate, or start quoting conspiracy theories mid-essay. As usual, though, the US education system is a little less coordinated. Some schools ban ChatGPT entirely while others require it for homework. One district treats it like malware; another calls it a teaching assistant. There's no national strategy, just a patchwork of panic, experiments, and vibe-based policymaking.

Based on the research we have so far, the panic does seem to be justified. A survey across 28 countries found 82% of university faculty members worried about students becoming overly dependent on AI. That's near-unanimous alarm from the people watching this unfold at work every day. Australian educators report, quote, "digital amnesia," where students literally can't remember content they created with AI hours earlier, echoing the findings of the MIT study. In Qatar, academics warn AI dependency is murdering analysis, creativity, and memory retention. The pattern is global: more AI use equals less independent thinking ability. A Chatham House field study in China uncovered something genuinely disturbing. Over 50 interviews with workers revealed what researchers called complete dependence on AI tools.
Some participants literally described feeling, quote, "unable to think critically without algorithmic guidance." They'd become cognitive invalids. The correlation is stark. Another study of 666 participants showed a −0.68 correlation between AI usage and critical thinking scores. In plain English, that means your brain waves on ChatGPT are not so much dipping as they are nuking. The IMF sees the tsunami coming. They figure 40% of all jobs globally face AI exposure, jumping to 60% in wealthy market economies. This is bad news if your job involves more thinking than physical work, because general-purpose agentic large language models are basically already here, while general-purpose robot workers are still pretty niche.

Now, it must be said, not all AI-induced brain damage is created equal. The worst effects I mentioned, like progressive neural decline, memory failure, and homogenization of mediocre output, are most associated with large language models like ChatGPT and Claude. The black-box problem amplifies this, because if you can't understand how AI reaches conclusions, you can't evaluate output critically or explain it yourself. You become a meat puppet for algorithmic decisions.

Meanwhile, image generators are creating a different kind of havoc. Anecdotally, artists are reporting prompt dependency, meaning they can modify AI output but are losing the ability to create from scratch. Their fear is that visualization skills are atrophying until the individual can't imagine images without algorithmic assistance. They say life imitates art, but life imitating AI art is going to be really bleak. Then there are the AI coding assistants, which are producing what critics call vibe coders: developers who can tweak existing code but can't build much from the ground up. They code like they're assembling IKEA furniture, following instructions without understanding why that particular screw goes in that particular hole.
A recent study by AI research nonprofit METR found that experienced open-source developers believed AI was speeding up their work when in fact they took 19% longer to complete tasks when using AI tools. This suggests that for complex, nuanced work, AI use actively slows you down. Educational AI, however, shows more promise. Duolingo's AI features demonstrated gains across emotional, cognitive, and behavioral domains. Khan Academy's personalized tutoring enhances rather than replaces learning. Large-scale interventions suggest AI-enhanced education can deliver 20 to 40% improvements in learning outcomes. There is a clear and intuitive pattern here: AI that makes you work enhances cognition; AI that works for you creates dependency. One builds strength, the other builds weakness.

Now, to rewind for just a minute, the MIT study at the center of the latest debate is not exactly bulletproof evidence that it's all over for the human brain. The sample of 54 Boston students isn't all of humanity. The study hasn't even been peer-reviewed, and the task of essay writing is exactly what ChatGPT was designed to do. Critics will argue reduced brain activity might indicate efficiency, not decline, like how expert drivers show less neural activation than beginners. Maybe ChatGPT users are just allocating cognitive resources more intelligently. On the other hand, though, that argument falls apart when you look at the memory failure angle. Forgetting your own work minutes after creating it is not what anyone would call efficiency. Likewise, homogenization isn't a desirable kind of efficiency either. It's everyone converging on the same soulless writing style. There is countervailing evidence for AI enhancing intelligence when properly implemented. Meta-analyses show AI conversational agents reducing depression symptoms with an effect size of 0.64. Language learning shows gains with an effect size of 0.39. These are significant, measurable improvements.
It isn't all necessarily bad, in other words. Twenty years ago, chess grandmaster Garry Kasparov demonstrated something important: weak humans plus machines plus good processes consistently beat both strong computers and strong humans with poor processes. So the key word here is "processes." How you use the tool matters more than the tool itself. And history suggests we'll adapt. We survived writing despite Socrates's misgivings. We survived printing despite Gessner's warnings. We survived calculators and television. And the internet hasn't completely defeated us yet. So why should this be different?

Well, because this time has unique characteristics that might actually be worth losing sleep over. Firstly, the speed of change is unprecedented. ChatGPT hit 100 million users in just 2 months. No adaptation period, no gradual integration. It was cognitive shock therapy at population scale. The comprehensiveness is new, too. Previous technologies replaced specific functions: writing replaced memorization; calculators replaced arithmetic. AI potentially replaces thinking itself, across domains. It doesn't augment your cognition; it simulates it outside of the human body. It's also quite deceptive. You know you're not doing maths when using a calculator, but ChatGPT users think they're writing. They feel ownership over work they can't remember creating. And the economics guarantee widespread adoption. Every previous technology had natural barriers: cost, access, learning curves. But AI trends towards free, instant, ubiquitous. When cognitive outsourcing has zero friction, why would anyone choose the harder path of actually thinking with their own brain?

The MIT research might be limited, but the pattern is undeniable. Every major technology brings cognitive trade-offs. The question is whether we're trading up or trading down. But on the bright side, at least we're still smart enough to diagnose our own stupidity. It makes you think, doesn't it?
Or at least it would if AI hadn’t already fried those particular neurons. Anyway, if you remember enjoying this video, perhaps I can interest you in our last update about how AI agents are changing crypto, which you can find right over here. I’ll leave it there for now, folks. But as always, thank you for watching and I’ll see you next time. This is Guy. Over and out.