LLMs &c.
There are a lot of fucking opinions out here on these Internet streets about LLMs, and while we sure as hell don’t need another one, it turns out that I do need a place to put mine. And while a personal notebook or the circular file might arguably be a better (and safer) place for them, I sometimes find myself in a position where I wish I could just link people to arguments I’ve already made about LLMs. Maybe that’s what this will be, eventually.
So. LLMs, techno-savior or spicy autocomplete?
I like that line, but I don’t think it’s the right question. (but the answer is stochastic parrot)
The right question is: why on Earth would you ever fucking trust OpenAI?
Okay, fine, there are lots of good and right questions to be asking. But seriously, as far as I can tell, (almost) everyone involved in the LLM industry is a lying con artist or someone about to get bilked. All of the GenAI models are built on stolen data that these multi-trillion dollar companies continue to refuse to pay for. Why wouldn’t they steal your (company’s) data? Because their license says they won’t? What are you going to do, sue them? Good luck with that. As Zuckerberg has so well demonstrated for us, this generation of tech CEOs don’t give a flying fuck—if a little fascism and genocide is the cost of doing business, do you really think a lawsuit is going to register?
Why worry about them stealing your data when there’s so much other people’s data out there on the Internet to steal? SEO slop farms are already starting to poison that well, accelerating the pace of model collapse [citation needed].
Okay, felt good to get some ranty bits out of my system, time to do some link dumping:
- If you want a more cogent and well-written list of reasons to hate AI, look no further than this piece by Marcus Hutchins. Though I think there’s less actual belief in AGI than he suggests. Mostly it’s greed, in the form of replacing human labor with compliant, “good enough” AI, plus hype maestros beating their drums as they desperately search for a problem for which LLMs are actually a solution.
- The phrase “Potemkin comprehension” is just :chef’s kiss:
- The Hype is the Product from @rysiek (via Stef Walters)
- The Hater’s Guide to the AI Bubble - Ed Zitron bringing the receipts
- ELIZA and the ELIZA Effect
- GitHub Copilot Research Finds ‘Downward Pressure on Code Quality’ (from 2024)
I have seen some interesting uses for LLMs, though, and I want to keep an eye on those too. While I don’t think they can be trusted to build production systems, I understand that there are many other kinds of software.
- I have a couple in this thread from 2024.
- Simon Willison is building a lot of tooling that I think I would probably love if I were into LLMs