The scenarios no one will put in a deck
Why analysts imagine extreme scenarios but don't say them—and what AI changes about that
My 15-year-old son said something to me this week that I can’t stop thinking about.
We were driving to the physiotherapist. He’d strained his hamstring playing soccer—the kind of injury that happens when you’re competing hard, six days a week, at a level that doesn’t leave much room for rest. It came at the end of an intense week of trials, and I made the mistake of saying, in hindsight, he should have dialed things back before it happened.
His response: “Well, Dad, I didn’t know I was going to be injured until it happened.”
He’s right, of course. And he’s also describing the exact problem that keeps organizations flat-footed when the world shifts beneath them.
The Greenland test
We’re watching this play out in real time with Greenland this week.
Over the past year, financial markets have shown a striking tolerance for Donald Trump’s rhetoric. While tariff threats and diplomatic provocations have triggered brief bouts of volatility, they have rarely produced sustained market disruption. Equities have largely recovered, volatility has remained contained by historical standards, and many analysts have treated political shocks as episodic rather than systemic. Over time, that pattern has sent its own signal: much of this risk is seen as manageable, already priced in, and unlikely—on its own—to be the thing that breaks the system.
Then Trump briefly refused to rule out military force over Greenland, and something shifted. Gold went up. Markets went down. Suddenly, there was heightened recognition that the operating assumptions no longer held.
But here’s the thing: this wasn’t unimaginable. Someone, somewhere, almost certainly thought about it. Analysts who track US foreign policy, researchers who study Arctic geopolitics, strategists who game out territorial ambitions—at least a few of them probably entertained this scenario.
They just didn’t say it.
The consensus window
The conventional wisdom is that crises catch us off guard because we fail to imagine them. Black swans. Unknown unknowns. The vocabulary of surprise has become so familiar that it functions as an explanation in itself.
But I’m not sure imagination is actually the problem.
The real gap is between what’s thinkable and what’s sayable.
Consider what happens when an analyst prepares a risk assessment for a publicly traded company, an industry association, a government ministry. They build a dashboard. They map scenarios. They assign probabilities.
And then they filter.
Not consciously. Not maliciously. But instinctively. That’s because the professional identity of a public affairs analyst—of anyone whose value comes from expert political judgment—depends on being credibly right. Not imaginatively useful. Credibly right.
So the scenarios that make the cut are the ones that other serious professionals wouldn’t fault you for including. They fall within the consensus window: the range of possibilities that, even if they don’t happen, won’t make you look foolish for naming.
Everything outside that window gets quietly discarded because it would feel professionally embarrassing to name.
This is why risk dashboards fail in ways that matter most.
A dashboard can track sentiment. It can monitor media coverage, political temperature, market signals. It can aggregate data and surface trends. What it cannot do is capture scenarios that no analyst would put in a deck for fear of looking like they’ve lost the plot.
The Greenland scenario was thinkable. It just wasn’t sayable—not without risking your credibility as a serious professional.
And so it lived in what I’ve started calling the silent zone: the space between private imagination and professional speech where ideas go to die because no one will sponsor them.
The usual responses to this problem aren’t working.
Red teams help, but they’re expensive and infrequent. Scenario planning works, but it tends to produce five to ten carefully curated possibilities—each one selected, at least in part, because it won’t embarrass the person presenting it. Pre-mortems can surface risks, but participants often stay in familiar ruts. The creative leap required to name something absurd rarely survives the social pressure of a room full of colleagues.
All of these approaches share the same structural limitation: they depend on humans being willing to say out loud what they think might sound ridiculous.
That’s asking a lot. Maybe too much.
What AI changes
Here’s where I think AI changes the equation—but not in the way most people are talking about.
The obvious use case is having AI generate more creative scenarios. Feed it the parameters, let it imagine possibilities you haven’t considered. That’s useful, but it’s not new. Quant teams have run Monte Carlo simulations for decades. Brute-force computation isn’t the breakthrough.
The breakthrough is what AI does to permission.
When a human analyst names an extreme scenario, they own it. Their judgment is on the line. Their credibility is attached to the claim. If the scenario sounds absurd, they look absurd.
When an AI simulation surfaces the same scenario, no one owns it. The analyst can bring it to the table without putting their reputation at stake. “The simulation flagged this” is a fundamentally different sentence than “I think this might happen.”
AI provides institutional cover—a mechanism for extreme possibilities to enter the conversation without anyone having to personally vouch for them.
I’ve started calling this surfacing silent risks. Running thousands of AI-generated scenarios, then deliberately extracting the ones that fall outside the consensus window. Not because they’re more likely. Because they’re the ones no human analyst would feel safe naming on their own.
A friend of mine is a data analyst at an English Premier League club. He told me the club runs 10,000 simulations of a match before kickoff. Not ten. Not a hundred. Ten thousand.
There’s something instructive in that number. You’re not looking to find the most probable outcome as much as you’re mapping the full distribution of possibilities—including the tails that seem unlikely but matter enormously if they happen.
Now imagine applying that same logic to geopolitical risk, regulatory change, or reputational exposure. On the surface, it may seem like you’re trying to predict what will happen. In reality, you’re trying to surface what could happen that no one is willing to say.
The axis that matters is distance from consensus.
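To make the idea concrete, here's a minimal sketch in Python. Everything here is a stand-in: the scenario generator is just a heavy-tailed random draw of "impact scores," not a real AI pipeline, and the 3-standard-deviation cutoff is an arbitrary choice of consensus window. The shape is the point: generate many scenarios, then deliberately keep only the ones far from consensus rather than the most probable ones.

```python
import random
import statistics

def run_simulations(n=10_000, seed=42):
    """Stand-in for an AI scenario generator: produce n hypothetical
    impact scores. A lognormal draw gives a long right tail, so a
    handful of runs land far outside the consensus."""
    rng = random.Random(seed)
    return [rng.lognormvariate(0, 1) for _ in range(n)]

def silent_risks(scores, k=3.0):
    """Extract the tail: scenarios more than k standard deviations
    above the mean impact. These are the 'silent zone' candidates,
    selected for distance from consensus, not for likelihood."""
    mean = statistics.fmean(scores)
    sd = statistics.stdev(scores)
    return [s for s in scores if (s - mean) / sd > k]

scores = run_simulations()
tail = silent_risks(scores)
print(f"{len(tail)} of {len(scores)} scenarios fall outside the consensus window")
```

In a real system the scores would come from model-generated narratives with some scoring rubric attached, but the filtering logic stays the same: you keep the tail precisely because no human analyst would volunteer it.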
Pre-processing the shock
What does an organization actually get from this?
Not prediction. No system can reliably tell you that a US president will threaten military action over the Arctic territory of a NATO ally. But you can do something almost as valuable: pre-process the shock.
Think about what happens when an extreme scenario becomes real. Six weeks earlier, Organization A had run a silent-risk surfacing exercise. A simulation flagged "US attempts territorial expansion through coercion." The C-suite read it. They had a 15-minute conversation. They moved on.
Organization B didn’t. They’re hearing this for the first time with everyone else.
On Monday morning, Organization A can begin planning immediately. They don’t need to waste a day—or worse, a week—convincing themselves this is really happening. They’ve already done the cognitive work of accepting the scenario as possible. They skipped the disbelief stage.
Organization B is still stuck in that stage. Not analyzing. Not planning. Processing.
That gap—the time lost to shock—is where institutions lose power long before they lose control.
I realize this might sound like a small advantage. It’s not.
The organizations that respond fastest to rupture aren’t the ones that predicted it. They’re the ones that had already made peace with the possibility. They’d held the scenario in their minds long enough that when it arrived, it felt like recognition rather than revelation.
That’s what surfacing silent risks actually produces. Not foresight in the traditional sense. Something closer to pre-processed shock. You haven’t planned for the scenario. But you’ve imagined it, discussed it, sat with it. And that’s enough to collapse your response time when the absurd becomes real.
From unthinkable to unlikely
There’s one more thing worth saying.
Running an AI simulation that surfaces extreme scenarios doesn’t obligate you to plan for any of them. It doesn’t mean you have to take every possibility seriously. Many organizations will look at what gets surfaced and reasonably decide it’s too remote to warrant action.
That’s fine. The point isn’t to turn every silent risk into a strategic priority. The point is to move those risks from the private imaginations of individual analysts into the shared awareness of the organization.
Once it’s been named, it’s no longer unthinkable. It’s just unlikely.
And “unlikely” is a much better starting point than “we never saw it coming.”