The fear of an AI-driven crash misses the real story: an economic upheaval in disguise.
Democracy Dies in Darkness
There are genuine reasons to be unsettled by artificial intelligence, but a looming financial crash isn't one of them, and that distinction matters.
February 13, 2026 at 2:23 p.m. EST
Covid-19 delivered a brutal lesson about exponential growth, and that lesson colors how we assess AI today. Things may look stable right now, but so did early March 2020, and appearances were deceiving. By month's end, the world had shifted: lockdowns, supply shortages, and stockpiled essentials became the new normal.
In short, the exponential nature of crises doesn't vanish just because technology advances; it sneaks up on us, often when we least expect it. As AI progresses, the same pattern can emerge: steady performance today masking rapidly accelerating risks tomorrow. The warning isn't about doom. It's about staying prepared, vigilant, and thoughtful as the tools grow more capable.
So, what does this mean for everyday decisions and policy? It means we should monitor growth curves and systemic impacts with the same seriousness we used for pandemic projections—anticipating tipping points, preparing contingencies, and having candid conversations about trade-offs.
And this is the part most people miss: the pace of change matters just as much as the change itself. A slow, steady rise can feel manageable until a sudden leap redefines markets, jobs, and everyday life. Where do you stand on readiness—are we overselling AI’s benefits or underselling its potential risks, and how should we balance ambition with caution? Share your perspective in the comments.