cross-posted from: https://programming.dev/post/36394646

In the last two years I’ve written no less than 500,000 words, many of them dedicated to debunking myths, both current and longstanding, about the state of technology and the tech industry itself. While I feel no resentment — I really enjoy writing, and feel privileged to be able to write about this and make money doing so — I do feel that there is a massive double standard between those perceived as “skeptics” and “optimists.”

To be skeptical of AI is to commit yourself to near-constant demands to prove yourself, and endless nags of “but what about?” with each one — no matter how small — presented as a fact that defeats any points you may have. Conversely, being an “optimist” allows you to take things like AI 2027 — which I will fucking get to — seriously to the point that you can write an entire feature about fan fiction in the New York Times and nobody will bat an eyelid.

In any case, things are beginning to fall apart. Two of the actual reporters at the New York Times (rather than a “columnist”) reported last week that Meta is “restructuring” its AI department for the fourth time, and that it’s considering “downsizing the A.I. division overall,” which sure doesn’t seem like something you’d do if you thought AI was the future.

Meanwhile, the markets are also thoroughly spooked by an MIT study, covered by Fortune, which found that 95% of generative AI pilots at companies are failing. MIT NANDA has now replaced the link to the study with a Google Form to request access (the kind of move that screams “PR firm wants to try and set up interviews”), but you can find the full PDF here. Not for me, thanks!

In any case, the report is actually grimmer than Fortune made it sound, saying that “95% of organizations are getting zero return [on generative AI].” The report says that “adoption is high, but transformation is low,” adding that “…few industries show the deep structural shifts associated with past general-purpose technologies such as new market leaders, disrupted business models, or measurable changes in customer behavior.”

Yet the most damning part was the “Five Myths About GenAI in the Enterprise,” which is probably the most withering takedown of this movement I’ve ever seen:

  • AI Will Replace Most Jobs in the Next Few Years → Research found limited layoffs from GenAI, and only in industries that are already affected significantly by AI. There is no consensus among executives as to hiring levels over the next 3-5 years.
  • Generative AI is Transforming Business → Adoption is high, but transformation is rare. Only 5% of enterprises have AI tools integrated in workflows at scale and 7 of 9 sectors show no real structural change.
    Editor’s note: Thank you! I made this exact point in February.
  • Enterprises are slow to adopt new tech → Enterprises are extremely eager to adopt AI, and 90% have seriously explored buying an AI solution.
  • The biggest thing holding back AI is model quality, legal, data, risk → What’s really holding it back is that most AI tools don’t learn and don’t integrate well into workflows.
    Editor’s note: I really do love “the thing that’s holding AI back is that it sucks.”
  • The best enterprises are building their own tools → Internal builds fail twice as often.

These are brutal, dispassionate points that directly deal with the most common boosterisms. Generative AI isn’t transforming anything, AI isn’t replacing anyone, enterprises are trying to adopt generative AI but it doesn’t fucking work, and the thing holding back AI is the fact it doesn’t fucking work. This isn’t a case where “the enterprise” is suddenly going to save these companies, because the enterprise already tried, and it isn’t working.

An incorrect read of the study has been that it’s a “learning gap” that makes these things less useful, when the study actually says that “…the fundamental gap that defines the GenAI divide [is that] users resist tools that don’t adapt, model quality fails without context, and UX suffers when systems can’t remember.” This isn’t something you learn your way out of. The products don’t do what they’re meant to do, and people are realizing it.

Nevertheless, boosters will still find a way to twist this study to mean something else. They’ll claim that AI is still early, that the opportunity is still there, that we “didn’t confirm that the internet or smartphones were productivity boosting,” or that we’re in “the early days” of AI, somehow, three years, hundreds of billions of dollars, and thousands of articles in.

I’m tired of having the same arguments with these people, and I’m sure you are too. No matter how much blindingly obvious evidence there is to the contrary, they will find ways to ignore it. They continually make smug comments about people “wishing things would be bad,” or suggest you’re stupid — and yes, that is their belief! — for not believing generative AI is disruptive.

Today, I’m going to give you the tools to fight back against the AI boosters in your life. I’m going to go into the generalities of the booster movement — the way they argue, the tropes they cling to, and the ways in which they use your own self-doubt against you.

They’re your buddy, your boss, a man in a gingham shirt at Epic Steakhouse who won’t leave you the fuck alone, a Redditor, a writer, a founder or a simple con artist — whoever the booster in your life is, I want you to have the words to fight them with.