AI or Smoke-and-Mirrors? Spotting False AI Claims Before You Sign the LOI
Gartner tracked over 1,000 vendors marketing "AI-powered" products in the past 18 months, many of them little more than rules-based workflows dressed up with buzzwords. Meanwhile, a 2025 McKinsey survey reports that only 1% of global companies consider their AI capabilities mature. That gap between marketing and reality is your risk — as a buyer or investor, you are often being asked to pay an innovation premium for tech that is not real, scalable, or defensible.
If you are evaluating a company pitching AI as part of its value story, especially pre-LOI, there are five areas where you need hard evidence. Can they show real ML assets: working models, active feature stores, or infrastructure that supports model development, such as GPU usage logs? Is there a data backbone that supports inference and retraining, or is everything stitched together manually? Look for MLOps maturity: monitoring, rollback plans, version control, and some form of ethical risk handling; if the whole thing runs out of Jupyter notebooks, that is a red flag. Do they have actual ML engineers, or just one "AI product lead" with no delivery history? And have they monetised the AI in a shipping product, or is it still stuck in demos? Most AI projects die after the pilot stage, so you need to be sure that what you are seeing is built to last and tied to revenue.
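One way to make the five areas above operational is a simple scorecard. The sketch below is illustrative only: the area names, descriptions, equal weighting, and function names are assumptions for this example, not a standard diligence framework.

```python
# Illustrative diligence scorecard for the five evidence areas.
# Names, descriptions, and equal weighting are assumptions, not a standard.
DILIGENCE_AREAS = {
    "ml_assets": "Working models, feature stores, GPU usage logs",
    "data_backbone": "Pipelines supporting inference and retraining",
    "mlops_maturity": "Monitoring, rollback, versioning, risk handling",
    "team": "ML engineers with real delivery history",
    "monetisation": "AI tied to a shipped, revenue-generating product",
}

def diligence_score(evidence: dict) -> float:
    """Fraction of areas backed by hard evidence (0.0 to 1.0)."""
    return sum(bool(evidence.get(a)) for a in DILIGENCE_AREAS) / len(DILIGENCE_AREAS)

def flag_gaps(evidence: dict) -> list:
    """Areas with no hard evidence; each one is a pre-LOI follow-up."""
    return [a for a in DILIGENCE_AREAS if not evidence.get(a)]
```

Used on a hypothetical target where three of five areas check out, `diligence_score` returns 0.6 and `flag_gaps` returns the two areas to press on before signing; anything short of 1.0 is a reason to discount the premium, not ignore it.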
Boards do not want another glossary; they want clarity, which means translating AI claims into three questions: Is it real? Is it working? Does it scale profitably? If the answer is not yes to all three, the AI should not justify a premium valuation.
AI-washing is real and expensive. Finding out after the close that the "platform" is just a thin wrapper around ChatGPT and some Zapier flows is the kind of discovery that turns a good deal into a write-down. Due diligence is where you catch it.