By Stephen DeAngelis
If the world feels jumpy, it's because the world is jumpy. The turbulence isn't in your head — it's in your headlines, your supply chains, your social feeds, and your weather reports.
If you’ve spent any time in a boardroom or a strategy session over the past three decades, you’ve heard the diagnosis: we live in a VUCA world — volatile, uncertain, complex, ambiguous. The term was born at the U.S. Army War College in the late 1980s, military shorthand for the post-Cold War mess. Business consultants loved the martial ring of it. They adopted it eagerly, and for years it served as the organizing grammar of corporate strategy: the world is VUCA, therefore be agile, be adaptive, be ready.
I used that language myself. In the aftermath of September 11th, I founded my first company to help the government fix a problem that had suddenly become existential: national security agencies sitting on mountains of data they couldn’t talk to each other about, let alone integrate. We were building resilience models at Carnegie Mellon’s Software Engineering Institute, trying to figure out how critical infrastructure — public and private — could withstand threats that no one had gamed out. Volatility was the right word for what we faced. Uncertainty was the water we swam in. And complexity? That was the landscape we were trying to map with tools that hadn’t been built yet.
But something shifted. VUCA didn’t fail because it was wrong — it failed because it became wallpaper. It described the weather without offering an umbrella. Worse, it normalized perpetual crisis management. Leaders got comfortable saying “the world is VUCA” the way they might say “the sky is blue” — true, but not useful. As Arthur D. Little’s Shift Institute argued just this month, VUCA “presents a depiction of the world with four separate dimensions rather than interconnected and intertwined elements.” Diagnosis without a recipe.
Meanwhile, VUCA masked the deeper fragility building beneath the surface of globalization. Just-in-time supply chains juiced efficiency but left systems brittle as glass. Deregulated financial instruments spread risk so wide that no one could see where it had pooled. We chased short-term wins and offshored not just jobs but institutional memory. That’s strategic nearsightedness at its worst — and it echoes Keynes’s warnings about Versailles, where the architects of peace mistook order for stability and kicked the can down the road toward the next collapse.
Enter BANI
In 2018, the futurist Jamais Cascio named what many of us were feeling but hadn’t yet articulated. He called it BANI: brittleness, anxiety, non-linearity, incomprehensibility. Where VUCA described conditions, BANI describes what those conditions do to us — the emotional and structural damage of living inside systems that fail not gracefully but catastrophically.
I want to be precise about what Cascio got right, because it matters.
Start with brittleness. This isn’t volatility wearing a different hat. A volatile market swings — it moves, and you can ride the movement if you’re paying attention. A brittle supply chain doesn’t swing. It snaps. No warning, no gradual degradation. One day it works; the next day you’re on the phone with your board explaining why you can’t ship product.
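To make the difference concrete, here is a toy simulation (the noise levels and the hidden stress threshold are invented for illustration, not drawn from any real supply chain). The volatile series swings visibly, so an attentive operator can see trouble coming; the brittle one reads as perfectly healthy right up until it reads as zero.

```python
import random

random.seed(42)

def volatile_series(steps=30):
    # Volatile: output swings around a baseline. The swings are
    # visible, so you can observe them and react.
    level = 100.0
    for _ in range(steps):
        level += random.uniform(-15, 15)
        yield max(level, 0.0)

def brittle_series(steps=30, hidden_threshold=0.5):
    # Brittle: output looks flat while unseen stress accumulates,
    # then the system fails completely once a threshold is crossed.
    stress = 0.0
    for _ in range(steps):
        stress += random.uniform(0.0, 0.05)
        yield 100.0 if stress < hidden_threshold else 0.0

vol = [round(x) for x in volatile_series()]
brit = [round(x) for x in brittle_series()]
print("volatile:", vol[:12], "... keeps swinging, keeps shipping")
print("brittle: ", brit, " <- fine, fine, fine, then gone")
```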
Anxiety is the one that gets under my skin, because I watch it warp decision-making in real time. Uncertainty is an intellectual state — you don’t know what’s coming, and you can work with that. Anxiety is different. It’s physiological. It’s the room full of smart people who can’t stop bracing for impact long enough to think clearly.
Non-linearity is the one that keeps the mathematicians up at night. (I say this as someone who has spent far too many nights in that category.) Complexity tells you there are many moving parts. Non-linearity tells you that a tiny input over here can produce a wildly disproportionate output over there — and good luck predicting which one.
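The classic illustration is the logistic map. In the sketch below (a standard textbook example, nothing proprietary), two starting points that differ by one part in a million end up nowhere near each other after fifty iterations; r = 3.9 puts the map in its chaotic regime.

```python
def logistic_trajectory(x0, r=3.9, steps=50):
    # Iterate the logistic map x' = r * x * (1 - x): one of the
    # simplest systems with sensitive dependence on initial conditions.
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # input differs by one millionth
print(f"trajectory A ends at {a:.4f}")
print(f"trajectory B ends at {b:.4f}")
print(f"input gap 0.000001, output gap {abs(a - b):.4f}")
```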
And then there’s incomprehensibility, which is genuinely new. Ambiguity means the picture is fuzzy. Incomprehensibility means there may not be a picture at all.
Cascio’s book, Navigating the Age of Chaos, published last October with Bob Johansen and Angela Williams, makes a compelling case that BANI is the honest description of our present condition. I agree. His proposed responses — resilience, empathy, improvisation, intuition — are admirable human capacities.
But here’s where I part company — respectfully, and based on three decades of building operational systems for Fortune 500 companies and governmental agencies.
The Missing Layer
BANI’s response set is entirely human and behavioral. Resilience as a personal quality. Empathy as a leadership practice. Improvisation as a cultural capacity. Those are necessary. They are not sufficient.
Here’s why. The challenge of our moment is not merely that people feel anxious and leaders need more empathy. The challenge is that the systems on which eight billion people depend — energy grids, financial networks, health care logistics, food supply chains, democratic governance itself — are operating beyond the cognitive capacity of any human or team of humans to monitor, diagnose, and correct in real time.
I sit with Fortune 500 executives regularly. When someone’s supply chain spans 40 countries and 10,000 SKUs, and the conversation turns to “how do we build resilience,” the answer cannot be “practice empathy.” I don’t say that to be glib. I say it because I’ve watched empathetic, well-led organizations get blindsided by cascading failures that no amount of emotional intelligence could have prevented. The answer has to be engineered. It has to be mathematical. It has to be embedded in the architecture of the system itself — not layered on as a leadership seminar after the fact.
The behavioral responses and the engineered responses are complementary. But only one of them scales.
What Engineering Resilience Actually Looks Like
I — we, America — learned this lesson the hard way. After September 11th, the national security community had more data than any human could process — signals intelligence, financial flows, communications intercepts, satellite imagery — and no way to integrate it fast enough to act. The volume was the enemy. That’s what drove me to applied mathematics and the intersection of complexity science and artificial intelligence: not because I was fascinated by algorithms (though I am — ask anyone who’s made the mistake of sitting next to me at dinner), but because the data volume from counterterrorism operations demanded it.
The same structural problem now confronts every enterprise. The CEO of a consumer goods company managing demand signals across six continents faces a version of what I faced in 2001: too much data, too many interdependencies, too little time, and frameworks built for a simpler world that no longer hold.
VUCA told leaders to be adaptive. Useful, but vague. BANI tells leaders to be empathetic and improvisational. Also useful. Also vague. What neither framework offers is a systems-level engineering specification for how to build organizations, technologies, and governance structures that can actually function under the conditions both frameworks describe.
The Profitable Equilibrium of Chaos
Here’s the part no one wants to say out loud: BANI is not just an unfortunate side effect of technological progress. It’s a profitable equilibrium.
Platforms monetize anxiety. Consultancies thrive on permanent crisis. And opacity? Opacity feeds authoritarian appeal. If the world is incomprehensible, the strongman who claims to have simple answers gains traction precisely because the complexity exhausts everyone else.
Erich Fromm diagnosed this in Escape from Freedom back in 1941. Individuals flee from the isolating anxiety of modern freedom into authoritarian submission. That psychological pattern hasn’t changed. What’s changed is the scale — and the technology that amplifies it.
Brittleness and incomprehensibility are not bugs in the system. For some actors, they are features. A world that can’t understand itself is a world that can be governed by those who claim to understand it — even when they don’t. We’ve seen this movie before. Weimar Germany. The interwar period. The same structural conditions producing the same political consequences.
Breaking out of that equilibrium requires more than a new acronym. It requires building alternative systems that don’t reduce complexity but render it governable — through transparency, through mathematical rigor, through technology that shows its work.
The Explainer Gap
My friend and longtime collaborator Tom Barnett — the strategist behind The Pentagon’s New Map — and I have spent more than two decades watching a pattern that genuinely alarms us. In the mid-twentieth century, figures like Einstein, Keynes, von Neumann, Oppenheimer, Churchill, and FDR served as what we call “Explainers-in-Chief.” They took the most complex, terrifying developments of their era — quantum physics, economic collapse, nuclear weapons, totalitarianism — and translated them into shared understanding that allowed democratic societies to act coherently.
Where are those figures today?
Complexity has accelerated. The density of high-visibility explainers has declined. Today’s equivalents publish academic papers and give TED talks, but they lack the cultural penetration to create the kind of shared understanding that a democracy needs to govern itself through radical transformation.
That gap — the explainer gap — is not incidental to our current crisis. It’s a root cause. When the people who understand the systems can’t explain them, and the people who can explain things don’t understand the systems, the field is wide open for demagogues offering false clarity.
What Comes Next
I don’t have a bumper sticker for the answer yet, and frankly, I’m suspicious of anyone who does. But I’ve spent twenty-five years building pieces of what I think it looks like, and I want to think through it out loud.
Take brittleness. The response can’t just be personal toughness or “grit” — it has to be engineered resilience. Systems designed to absorb shocks, learn from them, and reconfigure without waiting for a human to intervene at every step. The principles behind that draw on decades of complexity science and applied mathematics — and they’re already working in practice, across industries.
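One pattern from that tradition is the circuit breaker. The sketch below is a minimal, generic version (the thresholds and timings are placeholders, and this is not a description of any particular production system): a component that notices repeated failure, stops hammering the broken dependency so the failure doesn't cascade, and probes for recovery on its own schedule, no human in the loop.

```python
import time

class CircuitBreaker:
    # Wraps calls to a flaky dependency. After repeated failures it
    # "opens" (fails fast instead of cascading), then automatically
    # probes for recovery after a cooldown period.
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means closed (healthy)

    def call(self, fn, *args, fallback=None):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback  # fail fast: absorb the shock locally
            self.opened_at = None  # cooldown over: probe the dependency
        try:
            result = fn(*args)
            self.failures = 0  # success resets the count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            return fallback
```

A production version would add telemetry and learned thresholds; the point is that the recovery behavior lives in the architecture itself, not in a playbook someone consults after the outage.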
Anxiety is the human signal that our systems have become too complex to trust without seeing how they think. Trusting intuition alone ignores that signal. What people crave instead is radical transparency: AI and decision systems that show their reasoning, trace their logic from input to output, and submit to audit. The black-box era in AI is ending, and not because of some philosophical preference for openness. It’s ending because regulators are mandating it. The EU AI Act’s explainability provisions take full effect in August 2026. Every enterprise deploying high-risk AI systems will need to demonstrate traceability or face penalties of up to 35 million euros. The market is about to discover that opacity is not just an ethical problem — it’s a commercial liability.
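In miniature, traceability can be as simple as a decision that carries its own audit trail. The rules and thresholds below are hypothetical, invented for illustration; the point is the shape: every output arrives with the chain of reasoning that produced it, inspectable end to end.

```python
from dataclasses import dataclass, field

@dataclass
class TracedDecision:
    # A decision that carries its own audit trail: every rule that
    # fired, in order, from input to output.
    verdict: str = "approve"
    trace: list = field(default_factory=list)

def score_shipment(risk_score, country_flagged, value_usd):
    d = TracedDecision()
    d.trace.append(f"inputs: risk={risk_score}, flagged={country_flagged}, value=${value_usd}")
    if country_flagged:
        d.verdict = "hold"
        d.trace.append("rule 1 fired: origin country on watchlist -> hold")
    elif risk_score > 0.8:
        d.verdict = "review"
        d.trace.append(f"rule 2 fired: risk {risk_score} > 0.8 -> manual review")
    else:
        d.trace.append("no rule fired -> default approve")
    return d

decision = score_shipment(risk_score=0.9, country_flagged=False, value_usd=50_000)
print(decision.verdict)           # "review"
print(*decision.trace, sep="\n")  # the full reasoning, auditable end to end
```

Swap the hand-written rules for a model and the discipline is the same: the system logs what it saw and why it ruled.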
Non-linearity demands more than improvisation — fine for jazz, but brittle for systems under stress. It requires anticipatory capacity: modeling multiple futures and preparing for disruptions before they strike, instead of reacting to the wreckage. Most people miss the key distinction: prediction extrapolates backward from what happened; anticipation projects forward, mapping what could happen. In a non-linear world, that difference is everything.
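A minimal sketch of the distinction, with invented probabilities: instead of extrapolating one trendline, simulate thousands of possible futures, including the rare shock, and look at the tail. Prediction reports the median of the fan; anticipation plans for the 95th percentile.

```python
import random
import statistics

random.seed(7)

def one_future(weeks=12):
    # One possible future: ordinary demand noise plus a small chance
    # of an outsized shock (a supplier going dark). The 3% and 0.4
    # are invented numbers, stand-ins for whatever your data says.
    supply, shortfall = 100.0, 0.0
    for _ in range(weeks):
        supply *= random.gauss(1.0, 0.05)
        if random.random() < 0.03:
            supply *= 0.4
        shortfall += max(0.0, 100.0 - supply)
    return shortfall

futures = sorted(one_future() for _ in range(10_000))
print(f"median shortfall:          {statistics.median(futures):6.0f} units")
print(f"95th-percentile shortfall: {futures[9_499]:6.0f} units")
# Prediction reports the median. Anticipation plans for the tail.
```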
Incomprehensibility — arguably the scariest of Cascio’s four horsemen — demands lucidity: a shared, culturally embedded commitment to clarity that slices through fog, resists manipulation, and prioritizes understanding over sloganeering. Lucidity is democracy’s lifeline in a world of trillion-parameter models and converging crises. Without it, we fulfill Fromm’s warning: an escape from freedom into the arms of anyone promising to make the confusion stop.
The Stakes
I’ve been working at the intersection of AI, complexity science, and national security since before most people knew what a neural network was. I started because the data volumes from post-9/11 intelligence operations demanded computational tools that didn’t yet exist. I continued because I discovered something I didn’t expect: the same mathematical frameworks that helped the national security community integrate chaotic data could help a consumer goods company optimize a global supply chain — or help a government anticipate a pandemic’s second-order effects before they cascaded.
That connection isn’t accidental. It reflects a deeper truth: the problems of the 21st century are not sector-specific. They are structural. And the tools to address them must be structural too.
Next time, I want to dig into the first of those capacities — and it’s one that most AI systems are currently designed to prevent: the ability to break and recover. Not as a metaphor, but as a measurable, diagnosable, engineerable property of a system. The field of resilience engineering is older than most people think, and its lessons are more urgent than most leaders realize.
VUCA is dead. BANI named the body. The question now is what we build on the grave.
Stephen F. DeAngelis is the founder and CEO of Enterra Solutions and co-founder of Massive Dynamics. He has served as a Visiting Professional Executive at Princeton University, a Visiting Scientist at Carnegie Mellon’s Software Engineering Institute and Oak Ridge National Laboratory, and a collaborator with MIT CSAIL.