It hallucinates numbers that look right
You ask it for last month's MRR. It returns a number. The number sounds reasonable. You paste it into a board update. Three days later, finance pulls the same metric and gets something different. Turns out the AI joined the wrong tables, pulled from a stale event source, and confidently produced a number that was off by 18%. The worst part: you didn't know to question it.
Why? No validation layer. The model doesn't know what "MRR" actually means in your business; it only knows what the term looks like in examples scattered through its training data.
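A validation layer can be surprisingly thin. Here is a minimal sketch of the idea: before trusting an AI-generated metric query, check it against a governed definition of the metric. Everything here is hypothetical for illustration (`METRIC_DEFINITIONS`, `validate_query`, the table and filter names); a real version would use a proper SQL parser and your own semantic layer.

```python
import re

# Hypothetical governed definitions: which tables a metric may touch
# and which filter it must include. In practice this lives in a
# semantic layer or metrics catalog, not a dict.
METRIC_DEFINITIONS = {
    "mrr": {
        "allowed_tables": {"subscriptions", "invoices"},
        "required_filter": "status = 'active'",
    },
}

def validate_query(metric: str, sql: str) -> list[str]:
    """Return a list of violations; an empty list means the query passes."""
    spec = METRIC_DEFINITIONS[metric]
    problems = []
    # Naive table extraction: fine for a sketch, not a real SQL parser.
    tables = set(re.findall(r"\b(?:from|join)\s+(\w+)", sql, re.IGNORECASE))
    for t in sorted(tables - spec["allowed_tables"]):
        problems.append(f"uses unapproved table: {t}")
    if spec["required_filter"] not in sql:
        problems.append(f"missing required filter: {spec['required_filter']}")
    return problems

# An AI-generated query that joins the wrong table and skips the filter:
bad_sql = "SELECT SUM(amount) FROM events JOIN invoices ON invoices.id = events.invoice_id"
print(validate_query("mrr", bad_sql))
```

The point isn't the regex; it's the shape. The model proposes, a deterministic check disposes, and a query that touches an unapproved table or drops a required filter gets flagged before the number reaches your board update.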