For many, patience is the killer LLM feature
Sam Altman, CEO of OpenAI, famously said that his goal was to make intelligence “too cheap to meter”. Right now, buoyed by venture capital, we’re living in that world. Every human (with internet access) on the planet has free access to language models that are smart enough to assist with a wide variety of problems. And people are using them! ChatGPT has more traffic than Wikipedia1.
However, there doesn’t seem to be huge consumer pressure towards smarter models. Claude Sonnet had a serious edge over ChatGPT for over a year, but only the most early-adopting software engineers moved over to it. Most users are happy to just go to ChatGPT and talk to whatever’s available. Why is that?
It could just be that people know about ChatGPT, and it takes a lot of marketing to get people to use a tool they don’t already know about. It could be that 4o is already “good enough”: depending on how cynical you are, either smart enough to do most ordinary tasks, or smart enough to be smarter than many of the people chatting with it. Or it could be that intelligence is not the main value most users are getting out of LLMs.
Patience
What is, then? I’ve thought for a while that the answer is patience. Consider this article by Julia Baird, where she describes how she and other people are using ChatGPT for therapy:
> Dimity said she had been having “regular (intensely chaotic and cathartic) chats” with a version of therapy, ChatGPT, in order to “offload everything I don’t have time, money, or sometimes sanity to process elsewhere”. She said convenience is crucial: “Professional therapy isn’t super accessible for me, I’m prioritising my kids’ mental health needs, which means my own support has to be… well, free and available at 11:47pm when I’m feeling feelings and eating toast over the sink.”
>
> So I asked ChatGPT about it. And this damn robot was kind, empathetic, understanding and gentle. It told me, in short, to acknowledge the massive love I had for her, to have some compassion for myself, to write her a letter. It sounds simple, I know, but I was gobsmacked.
I have not used ChatGPT for personal advice or therapy myself. But I can see the appeal. Most good personal advice does not require substantial intelligence. You could go a long way by just writing “breathe, be kind to yourself, don’t react in anger” and similar platitudes on a sheet of paper and reading it periodically. They’re platitudes because they’re true!
LLMs are not superhuman at giving this kind of advice. However, they are fundamentally a good fit for doing it because they are:
- Always available, no matter where or when you are
- Never judgmental or mean
- Willing to listen indefinitely without getting frustrated
In short, they are superhumanly patient, and have been so for many years now. Even GPT-3.5 fit these criteria, though it was probably just below the line of intelligence that let it reliably understand whatever users were saying to it2.
Concerns
I do worry that superhuman patience is a magnifier for whatever advice-giving problems the language model already has. For instance, 4o’s recent sycophancy problem was certainly made worse by the fact that the model was patient enough to validate the user indefinitely.
Language models are also not therapists. This isn’t to say that professional therapists are all better than the LLMs - like doctors, there’s a huge variety in ability and effort among therapists - but therapists do have escalation pathways that language models don’t.
It’s also possible that some users might get used to AI patience and be frustrated with human beings for not being able to measure up, which doesn’t seem healthy either.
Summary
As software engineers, we’re focused on how language models can help with (or supplant) knowledge work. We’re obsessed with smarter and smarter models. But it may be that one of the most transformative capabilities of language models is already here. Until the early 2020s, everything that could talk could also run out of patience - or at the very least need to sleep, or eat, or do things besides talk. Never before in human history could every human talk with something infinitely patient. Patience has become “too cheap to meter”.