sean goedecke

Is this your brain on ChatGPT?

A recent MIT study - titled “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task” - has been making the rounds. I read it (with my human brain, not with AI assistance). Unlike some recent AI papers1, I think it’s pretty good. However, I’m not convinced by the anti-AI tone of the general discussion, or by the first author’s own anti-AI views.

What did the paper actually do? It took three groups of people and got them to write four essays each on some general prompts. One group, the “brain-only” group, had no internet assistance at all. The second group could only use Google2 search. The third group could only use GPT-4o3. While they were writing essays, they all wore an EEG headset that recorded their brain activity. For the fourth essay, the brain-only and LLM groups were switched - the brain-only people had to use LLMs for the first time, and the LLM people had to rely on just their brains.

The paper is long and makes lots of observations (teachers didn’t like AI responses, AI users struggled to quote their own essays, and so on). However, here’s what I think are the two main results, presented without interpretation:

  • When writing essays, the brain-only group showed much more mental activity in almost all areas than the other groups, particularly the LLM group (which was clearly last)
  • In the fourth essay, the previously-LLM group (who were now forced to go without) had much less brain activity than any of the brain-only sessions

Does this show that using ChatGPT atrophies your brain and makes you less intelligent?

LLM use and brain activity

Let’s start with the first point: that LLM users have less brain activity. It’s unclear to me that more brain activity, as measured by an EEG, necessarily indicates more learning or more useful activity. We don’t really know which kinds of brain activity are relevant.4

As the paper notes itself on page 17, more of your brain lights up when you’re doing a Google search than when you’re sitting and reading a book - presumably because an active Google search requires you to parse visual elements, move a mouse around, and so on, which engages parts of the brain that aren’t engaged by reading. Does that mean that reading makes you dumber than Googling?

I’m also suspicious that the task itself wasn’t sufficiently deep. Consider the type of essays people were asked to write. Here’s one:

Many people believe that loyalty whether to an individual, an organization, or a nation means unconditional and unquestioning support no matter what. To these people, the withdrawal of support is by definition a betrayal of loyalty. But doesn’t true loyalty sometimes require us to be critical of those we are loyal to? If we see that they are doing something that we believe is wrong, doesn’t true loyalty require us to speak up, even if we must be critical?

I confess I don’t have particularly strong opinions about this topic. I find it kind of vague and uninspiring - and I have an MA in philosophy, so I’m already the kind of person predisposed to find a discussion like “what is loyalty” interesting! I would have to use some imagination to find something worth writing about here.

I also think it makes a big difference how you use ChatGPT. The paper itself mentions in a few places that expert web searchers have much more brain activity than novice searchers. The same is likely true for AI users.

The people in the LLM group were randomly selected: many of them had no interest in using the LLM at all, or felt paralyzed by the tool.

The paper doesn’t say this, but it seems pretty obvious to me that many of the people were happy to collect their $100 and copy-paste an essay from ChatGPT, when given the opportunity. It’s no surprise that their brain waves weren’t very active!

Did taking the LLM away reveal long-term harm?

The other key result of the paper is that when they took the LLM away (during the fourth session), that group’s brain activity was still lower. One conclusion you could draw here is that AI use damages your brain long-term (of course, this is what the title of the paper is getting at by referencing “this is your brain on drugs”). If your brain is used to relying on AI, can you still stand on your own two feet when the AI is taken away?

One key point from the study design is that the fourth session did not introduce any new prompts. Participants writing the fourth essay had to choose one of their three previous prompts and write about it again. Rewriting an old essay seems like a pretty different process from writing a new one, so I don’t know how you can usefully compare brain activity between the two.

In that session, the previously-brain-only group (who now had to use an LLM) showed higher “directed connectivity”, but as the paper itself notes, that may just be because they’re learning a new tool (talking to an LLM).

What if it’s true?

I’ve given some fairly loose reasons to doubt the conclusions the paper draws. But could they still be true? Could it be that LLM usage is a crutch - that it lets your brain work less hard, and that you struggle when it’s taken away?

Well, obviously!

The group that used Google had less brain activity than the brain-only group (modulo some “clicking on links and using a web browser” processing). Is Google a crutch that makes your brain work less hard? Yes! And so is a calculator, and so is the written word, as Socrates famously complained.

The development of crutches that make your brain work less hard is the story of the development of all human technology. The paper itself notes that the brain scans of LLM users seem to show more executive activity and less sub-executive activity - presumably because LLM users are making more high-level decisions about which ideas are good or bad, and fewer low-level decisions about which word goes where.

In my view, this is definitely a mixed blessing. I think deciding what word goes where is a fundamental human skill (and right now I don’t think that AI models are very good at it). But it doesn’t seem a priori bad for human flourishing if we spend more time in executive mode. As long as humans are still throwing themselves at real, hard problems, it doesn’t bother me one bit if their AI use is making them weaker in some areas.

Final thoughts

If LLMs mean your brain has to work less hard to pump out an essay response to a milquetoast SAT prompt, that doesn’t mean your brain is working less hard overall. It means your brain can work on other things! The Jevons paradox applies to the cost of human intellectual work as well as the cost of resources.

Does that mean I think the “Your Brain on ChatGPT” paper is bad? No. It’s interesting to get more details about how the brain works when you’re using LLMs. I don’t agree with the conclusions the author draws from the paper5, but that doesn’t affect the paper itself. I also wonder if I’m missing something - it just seems so weird to have the fourth “swap” session be a re-tread of the same essay topics. Maybe there’s some subtle point I’m missing here?

I’d also like to see brain scans of strong engineers writing real application code with LLMs, instead of LLM novices writing SAT essays. From the Time article, it sounds like a paper on that topic is coming next.


  1. I recently did something similar with the Apple “illusion of thinking” paper. I don’t really want to be an “AI paper guy”, but since I’m reading them anyway and writing notes…

  2. Of course, with -ai on all queries.

  3. Puzzlingly, the Time article quotes Kosmyna (the first author) as saying that the paper doesn’t mention the version of ChatGPT, and laughing about lazy summarizers hallucinating that it did. But it does mention GPT-4o, at the top of page 23! I don’t really know what to make of this.

  4. At least I don’t know, and the paper isn’t very confident either.

  5. Like the author, I’m suspicious of fully automating kindergarten with LLMs - although my reservations are more along the lines of “it’s a new technology, so let’s be cautious”, and I do think talking to an LLM is orders of magnitude better than something like Cocomelon.

June 19, 2025 │ Tags: ai