{"componentChunkName":"component---src-templates-blog-post-js","path":"/the-left-wing-case-for-ai/","result":{"data":{"site":{"siteMetadata":{"title":"sean goedecke"}},"markdownRemark":{"id":"efa6810b-c1c7-59b0-9dcf-839a911355af","excerpt":"In Many anti-AI arguments are conservative arguments I argued that left-wing anti-AI sentiment is partly a backlash to two unrelated events around the rise of…","html":"<p>In <a href=\"https://www.seangoedecke.com/many-anti-ai-arguments-are-conservative/\"><em>Many anti-AI arguments are conservative arguments</em></a> I argued that left-wing anti-AI sentiment<sup id=\"fnref-1\"><a href=\"#fn-1\" class=\"footnote-ref\">1</a></sup> is partly a backlash to two unrelated events around the rise of ChatGPT: the crypto mania of 2022 and the pro-Donald-Trump push many big tech CEOs made in 2024. If the timing had been different, we could have had a real pro-AI faction on the left. What would that look like?</p>\n<p>I’m not going to respond to any of the popular anti-AI arguments (I’ve already done that <a href=\"https://www.seangoedecke.com/is-ai-wrong/\">here</a>). I think it’s more interesting to outline some explicitly left-wing pro-AI arguments.</p>\n<h3>Disability</h3>\n<p>The left wing has (correctly) taken a broad view on what can be an acceptable disability aid. When criticizing potentially-exploitative companies - for instance, food delivery apps like DoorDash - they often stop to acknowledge that some people have few alternatives to those services, and that they have meaningfully improved the lives of the disabled or chronically ill.</p>\n<p>I think it’s obvious that LLMs are a powerful disability aid. Like any technology that makes it easier to interact with a computer, they’re useful to people who are trying to overcome all kinds of barriers. 
Almost every video online is now <a href=\"https://www.reddit.com/r/antiai/comments/1t71o25/comment/okq9q9n/?utm_source=share&#x26;utm_medium=web3x&#x26;utm_name=web3xcss&#x26;utm_term=1&#x26;utm_content=share_button\">automatically captioned</a>. People with <a href=\"https://www.reddit.com/r/antiai/comments/1t71o25/comment/oklw2v5/?utm_source=share&#x26;utm_medium=web3x&#x26;utm_name=web3xcss&#x26;utm_term=1&#x26;utm_content=share_button\">brain fog</a> or <a href=\"https://www.reddit.com/r/ChatGPT/comments/17sg5mg/as_an_articulate_disabled_person_i_feel_like_ai/\">chronic pain</a> are using LLMs to make it easier to interact with their computers. People who are <a href=\"https://www.reddit.com/r/ChatGPT/comments/17sg5mg/comment/k8qeev1/?utm_source=share&#x26;utm_medium=web3x&#x26;utm_name=web3xcss&#x26;utm_term=1&#x26;utm_content=share_button\">neurodivergent</a> use ChatGPT to <a href=\"https://www.reddit.com/r/disability/comments/1m9c8tv/comment/n564m7h/?utm_source=share&#x26;utm_medium=web3x&#x26;utm_name=web3xcss&#x26;utm_term=1&#x26;utm_content=share_button\">“code switch”</a> their emails into neurotypical-friendly language. People with <a href=\"https://www.reddit.com/r/disability/comments/1m9c8tv/comment/n56a2le/?utm_source=share&#x26;utm_medium=web3x&#x26;utm_name=web3xcss&#x26;utm_term=1&#x26;utm_content=share_button\">mobility</a> or <a href=\"https://www.reddit.com/r/disability/comments/1m9c8tv/comment/o7lpfnc/?utm_source=share&#x26;utm_medium=web3x&#x26;utm_name=web3xcss&#x26;utm_term=1&#x26;utm_content=share_button\">vision</a> issues are making heavy use of LLM voice controls. And so on.</p>\n<p>This is a <em>fascinating</em> point of conflict in left-wing anti-AI spaces. 
Every so often somebody will <a href=\"https://www.reddit.com/r/disability/comments/1m9c8tv/what_are_your_thoughts_on_disabled_people_using/\">ask</a> <a href=\"https://www.reddit.com/r/antiai/comments/1t71o25/okay_i_wanna_ask_does_generative_ai_help_disabled/\">“hey, wouldn’t LLMs help disabled people?”</a>, and the comments will devolve into a dogpile of (often non-disabled) people slamming AI and a handful of disabled people trying to explain their experience. If anti-AI sentiment weren’t so strong on the left for other reasons, I think there’d be a current of left-wing AI supporters on a disability-rights basis.</p>\n<h3>Chronic illness and medical care</h3>\n<p>One popular anti-AI argument - that cavalier deployment of AI means that people might take <a href=\"https://www.bbc.com/news/articles/cpd8l088x2xo\">dangerous medical advice</a> instead of simply trusting their doctor - is actually a pro-AI argument in disguise. As anyone who’s been close to a person with chronic illness knows, “just trust your doctor” is kind of right-wing-coded itself, and the left-wing position is <a href=\"https://www.painnewsnetwork.org/stories/2026/4/10/doctor-faces-backlash-after-tweet-claims-four-chronic-illnesses-are-overdiagnosed\">very</a> <a href=\"https://yorkspace.library.yorku.ca/server/api/core/bitstreams/4ac9d968-e9b0-491b-888a-d4ed5aeb1ac3/content\">sympathetic</a> to patients who don’t or can’t<sup id=\"fnref-2\"><a href=\"#fn-2\" class=\"footnote-ref\">2</a></sup>. </p>\n<p>Many doctors are not very good at handling unusual medical cases. If you have one, you have to learn to advocate for your own care, which often involves researching your own condition. This is <em>precisely</em> the kind of thing where LLMs are useful, because:</p>\n<ul>\n<li>The medical questions involved are often complex but well-explored in the literature (i.e. 
good fodder for an LLM)</li>\n<li>The patient is motivated enough to check individual sources themselves</li>\n<li>Having to convince a doctor to prescribe treatment is a guardrail against any human-LLM interaction that goes off the deep end</li>\n</ul>\n<p>Various chronic illness groups are waging a long, quiet war against the medical orthodoxy that ignores or dismisses them. A classic example of this war being won is <a href=\"https://www.ncbi.nlm.nih.gov/books/NBK565622/\">endometriosis</a>, which was once viewed as a largely psychological issue. Unfortunately, this is largely a guerrilla war: the institutional power and inertia are all on the side of the medical establishment. LLMs can be a useful tool for the chronically ill to make cogent arguments or write petitions in the language of that establishment.</p>\n<h3>Class and code-switching</h3>\n<p>Fighting the power of the establishment is not limited to doctors and medicine. Another common (and correct) left-wing target is <em>class</em>. To see why, let’s consider Patrick McKenzie’s classic description of a <a href=\"https://x.com/patio11/status/1162561822248992768\">“dangerous professional”</a> mode of communication. The idea here is that by adopting a particular style, you can communicate to a bureaucracy that you are a person to take seriously, and someone they should appease instead of brushing off. This includes, but isn’t limited to:</p>\n<ul>\n<li>An unemotional register</li>\n<li>Correct and somewhat stuffy grammar</li>\n<li>Signaling awareness of regulatory or legal options (for instance, explicitly requesting a paper trail)</li>\n</ul>\n<p>Unless you have gone through the right educational or work pipeline, it can be tricky to hit this register exactly. A common failure mode is to go over the top: trying to write in grammar so elevated that it just reads as silly, or citing an overabundance of law or precedent where one would suffice. 
This reads as “crank”, not “dangerous professional”, and will get dismissed as quickly as the unprofessional “OMG that’s not helpful I will sue you” response.</p>\n<p>LLMs provide a dangerous professional translation service. You no longer have to be able to match the style; you simply <em>have to know it exists</em>, and the LLM will do the rest. In fact, the LLM will provide the substance, not only the style. It can tell you which regulators to contact and how, and what to say once you’ve contacted them. In other words, AI has now made it possible for a wide variety of social classes to access escalation pathways that were originally designed for the narrow professional class.</p>\n<h3>Education</h3>\n<p>Another common left-wing position is that education is gatekept by class and status. The idea here is that everyone has equal potential for accomplishment, but certain types of people get more educational opportunities, and that this explains uneven downstream outcomes. For instance, compare a wealthy neighborhood where every child gets private tutoring to a neighborhood where it’s unusual to complete high school.</p>\n<p>It seems obvious to me that LLMs now make private tutoring available to every student who wants it. Of course, if you’re a lazy student, LLMs probably make things worse by adding an additional temptation to cheat. But if you’re motivated and just lack the opportunity, quizzing an LLM on basically any high-school-level topic is a great way to learn.</p>\n<p>The common rebuttal to this is that LLMs can’t be relied on because they hallucinate. As with the doctor example, I struggle to believe that anyone making this argument is actually comparing LLMs with the alternatives. Teachers “hallucinate” <em>all the time</em>. 
I think every single kid who was smart in school has multiple stories of teachers insisting they were right about something obviously wrong<sup id=\"fnref-3\"><a href=\"#fn-3\" class=\"footnote-ref\">3</a></sup>.</p>\n<p>I wonder what we’d find if we rigorously compared the baseline teacher error rate with the hallucination rate of current LLMs. From the only study I could find (<a href=\"https://files.eric.ed.gov/fulltext/ED672091.pdf\">this 2016 study</a>): “Analysis at the lesson level, however, shows that about 42% of lessons contained a mathematical content error”. I bet that’s a higher rate than we’d see from GPT-5.5-Thinking on middle-school mathematics, though I don’t want to draw too many conclusions from one study.</p>\n<p>The education pro-AI argument also overlaps with the disability pro-AI argument. Students with ADHD or other issues are often badly underserved by the education system. LLMs can transform educational content into whatever form the student can best consume (written, audio, a quiz, a dialogue, and so on).</p>\n<h3>Utopia</h3>\n<p>Finally, if you believe left-wing views are correct - which, definitionally, left-wingers do - and you’re optimistic about the technology, you might believe that a very smart model will inherently be kind of left-wing.</p>\n<p>This position is kind of a holdover from the 2000s and 2010s, when the left (and people in general) were more optimistic about technology. People thought technological progress would usher in a post-scarcity age of <a href=\"https://en.wiktionary.org/wiki/Fully_Automated_Luxury_Gay_Space_Communism\">fully automated luxury gay space communism</a><sup id=\"fnref-4\"><a href=\"#fn-4\" class=\"footnote-ref\">4</a></sup>. A super-smart, super-capable left-wing AI is a core part of that picture.</p>\n<p>In fact, you might believe that this has already happened, for a certain value of “left-wing”. All current frontier models profess left-leaning views. 
The obvious explanation is that this reflects the bias of their training data or of the AI labs, but that’s a trickier argument than it sounds. First, Elon Musk tried <a href=\"/ai-personality-space\">really hard</a> to train a right-wing frontier LLM and (at least so far) has <em>failed</em>. Second, models are not just the median of all their training data. If they were, they wouldn’t be able to solve mathematics or programming problems far beyond the ability of the median person. There is clearly a way that models can be pulled towards the “smart” end of their training data, probably via reinforcement learning. If the smart end of their training data turns out to be left-wing, isn’t that worth celebrating?</p>\n<h3>Conclusion</h3>\n<p>What are the strong left-wing arguments in favor of LLMs?</p>\n<ul>\n<li>LLMs are a powerful disability aid, at minimum for various neurodiverse people and those with motor or vision issues</li>\n<li>LLMs enable those who suffer from medical discrimination to actually do their own research, instead of having to rely entirely on the biased and dismissive medical establishment</li>\n<li>LLMs remove the communication advantage of the wealthy “professional class”, and enable those of all backgrounds to lobby institutions in ways that actually work</li>\n<li>LLMs lessen the massive educational advantage that children from wealthy areas get, by providing everyone with a private tutor that’s at least as good as the median teacher</li>\n<li>If you’re a technologically optimistic left-wing person, you should celebrate that all current powerful LLMs are left-wing, and that one pillar of the science-fiction left-wing utopia might be establishing itself right now</li>\n</ul>\n<p>Of these, I think the disability and medical-bias arguments are the most persuasive (though the impact on education will be huge and difficult to predict<sup id=\"fnref-5\"><a href=\"#fn-5\" class=\"footnote-ref\">5</a></sup>). 
I want to close with a quote that one of my readers, <a href=\"https://toot.cafe/@matt\">Matt</a>, wrote to me over email and kindly allowed me to share. It’s fair to say that it inspired this post:</p>\n<blockquote>\n<p>“I’ve long been uncomfortable with the absolute left-wing anti-AI stance because, if similar reasoning had been applied to outright reject computers as fascist and unethical in the 80s and onward, my own life would have been quite different, and arguably worse. I have enough usable vision to handwrite, uncomfortably, with my head against the page. I did more of that than I wanted in school (I started first grade, in the US K-12 system, in 1987). Computers saved me from having to do even more, starting with my family’s home computer and other desktop computers in the classrooms that had them, and then on my own laptop. Would I want a world where I had been forced to handwrite more, or perhaps write in Braille with humans transcribing it for the benefit of sighted teachers and peers, or maybe write on a typewriter (for some reason I don’t recall ever trying that)? Then again, am I selfish to consider only my own comfort? After all, the manufacturing of computers inflicts its own harms on people, harms that I’m comfortably distant from. And of course, using computers as a child led to a career in software development. What kind of work would I be doing now if that path hadn’t been available? 
And now that AI helps at least one group of disabled people (of which I’m more or less a part), do I want to deny that benefit?”</p>\n</blockquote>\n<div class=\"footnotes\">\n<hr>\n<ol>\n<li id=\"fn-1\">\n<p>I’m deliberately using “right-wing” and “left-wing” very loosely here to describe very broad ideological tents, because I’m interested in the broad currents of public opinion.</p>\n<a href=\"#fnref-1\" class=\"footnote-backref\">↩</a>\n</li>\n<li id=\"fn-2\">\n<p>If this paragraph seems familiar, it began as a footnote in <a href=\"/many-anti-ai-arguments-are-conservative/\">my other post</a>. </p>\n<a href=\"#fnref-2\" class=\"footnote-backref\">↩</a>\n</li>\n<li id=\"fn-3\">\n<p>For instance, I remember a teacher arguing with me in early primary school that one minus two equalled some decimal answer, instead of minus one.</p>\n<a href=\"#fnref-3\" class=\"footnote-backref\">↩</a>\n</li>\n<li id=\"fn-4\">\n<p>Most skillfully portrayed <a href=\"https://en.wikipedia.org/wiki/Culture_series\">here</a>.</p>\n<a href=\"#fnref-4\" class=\"footnote-backref\">↩</a>\n</li>\n<li id=\"fn-5\">\n<p>My guess is that the median education suffers (since cheating is now so easy), but the top-percentile of highly-motivated, successful students will grow significantly.</p>\n<a href=\"#fnref-5\" class=\"footnote-backref\">↩</a>\n</li>\n</ol>\n</div>","frontmatter":{"title":"The left-wing case for AI","description":null,"date":"May 10, 2026","tags":["ai","ethics","politics"]}}},"pageContext":{"slug":"/the-left-wing-case-for-ai/","previous":{"slug":"/ai-makes-weak-engineers-less-harmful/","title":"AI makes weak engineers less harmful"},"next":null,"preview":{"slug":"/luddites-and-ai-datacenters/","title":"Luddites and burning down AI datacenters","snippetHtml":"<p>Is it time to start burning down datacenters?</p><p>Some people think so. 
An Indianapolis city council member recently had his house <a href=\"https://www.kbtx.com/2026/04/07/councilman-says-someone-fired-shots-his-home-left-no-data-centers-note/\">shot up</a> for supporting datacenters, and Sam Altman’s home was <a href=\"https://www.wired.com/story/sam-altman-home-attack-openai-san-franisco-office-threat/\">firebombed</a> (and then <a href=\"https://sfstandard.com/2026/04/12/sam-altman-s-home-targeted-second-attack/\">shot</a>) shortly afterwards. People from all sides of the argument are <a href=\"https://www.bloodinthemachine.com/p/why-the-ai-backlash-has-turned-violent\">sounding</a> the <a href=\"https://thesoufancenter.org/intelbrief-2025-november-5/\">alarm</a> about imminent violence.</p><p>The obvious historical comparison is <a href=\"https://en.wikipedia.org/wiki/Luddite\">Luddism</a>, the 19th-century phenomenon where English weavers and knitters destroyed the machines that were automating their work, and (in some cases) killed the machines’ owners. Anti-AI people are <a href=\"https://www.theguardian.com/commentisfree/article/2024/jul/27/harm-ai-artificial-intelligence-backlash-human-labour\">reclaiming</a> the term to describe themselves, and many of the leading lights of the anti-AI movement (like <a href=\"https://www.bloodinthemachine.com/\">Brian Merchant</a> or <a href=\"https://www.versobooks.com/en-gb/products/688-breaking-things-at-work?srsltid=AfmBOorCgru7ReSwbVdt40nZmQaaeGfbpjLV7epM0fSv_V01QSY5b5TP\">Gavin Mueller</a>) have written books arguing more or less that the Luddites were right, and we ought to follow their example in order to resist AI automation.<br /><a href=\"/luddites-and-ai-datacenters/\">Continue reading...</a></p>"}}},"staticQueryHashes":["1146911855","3764592887"]}