LLMs are AI - always have been. The term “artificial intelligence” has always been broad in computer science: it covers anything that performs a cognitive task normally requiring human intelligence. A chess engine from 1999 is AI. A spam filter is AI. An LLM is AI. Narrow AI, sure, but still AI.
The confusion comes from people equating “AI” with sci-fi AGI (human-level general intelligence: HAL, JARVIS, Skynet, etc.). That’s a specific subset, not the whole category. When companies say “AI-powered” they’re not claiming AGI - they’re saying the product uses machine learning or pattern recognition in some way. Marketing inflates the language, yes, but the underlying tech is real and fits the definition.
If/when we reach actual AGI, it will be a civilization-level shift - far beyond today’s spell-checker-that-sometimes-hallucinates. People will look back and say “we had AI for years,” but they’ll mean narrow tools, not the thing that can invent new science or run a company autonomously. The goalposts aren’t moving; the hype is just using the broad term loosely.