Tenet 6: Health 3.0 promotes the antifragile — an introduction to antifragility.

Two years ago I hid away in every crevice of a Princess Cruises ship to Alaska, doing something that meant life would never be the same again.

No, it wasn’t illegal or scandalous. I was rereading Nassim Taleb’s latest book Antifragile: Things That Gain from Disorder. And it started for me with this:

Photo by Danielle MacInnes on Unsplash

A porcelain cup is sitting on your table. You drink from it and set it back on the table. The cup’s fine. You drop it from an inch up. This makes you a little nervous. But the cup stays intact. You drop it from four inches. The cup cracks. You drop it from a foot. The cup shatters into a hundred pieces.

This is the simple observation that helped Nassim Taleb define what’s fragile. Something is fragile when shocks of higher intensity cause greater and greater harm.

If you drop the cup twelve times from an inch up, it’s not the same as dropping it once from a foot up. The response is nonlinear. The damage to the cup from one twelve-inch drop is clearly greater than the damage from twelve one-inch drops. If the response were linear, even the slightest drop would damage the cup a little, the damage would add up, and the cup couldn’t exist.

And as Taleb explains, anything fragile doesn’t like volatility. The cup doesn’t like earthquake-prone locations.

In volatile situations, things that are fragile have more to lose than to gain. If you drop the cup from higher and higher up, it suffers more and more damage. If you just pick it up and put it down gently, the best it can do is stay the same.

So anything fragile in a nonlinear environment has more downside than upside. Especially under conditions of volatility.
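
To make that concrete, here’s a toy sketch. The quadratic damage curve is an illustrative assumption of mine, not real porcelain physics or anything from Taleb’s book; the point is only that the same total “dose” of shocks does far more harm delivered all at once than spread over many gentle drops.

```python
# Toy illustration: model damage as a convex (accelerating) function of shock
# size. The quadratic curve is an assumption chosen for illustration only.

def damage(drop_inches: float) -> float:
    """Harm grows with the square of the shock: a convex, nonlinear response."""
    return drop_inches ** 2

# Twelve gentle shocks vs. one large shock with the same total "dose".
twelve_small_drops = sum(damage(1) for _ in range(12))   # 12 * 1**2 = 12
one_large_drop = damage(12)                              # 12**2     = 144

print(twelve_small_drops, one_large_drop)  # the single big drop does 12x the harm
```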

Now say your exercise program consists of lifting one-pound weights. Your muscles grow slightly. You move up to ten-pound weights. Your muscles grow faster and bigger. Eventually you move up to fifty pounds. Your muscles hypertrophy even more.

In his book, Taleb coins a new word to describe the exact opposite of fragile: antifragile.

Something is antifragile when shocks of higher intensity cause greater and greater benefit.

Again, the response is nonlinear. Lifting a twenty-pound weight once stimulates more muscle growth than lifting a one-pound weight twenty times. If the response were linear, every trivial lift would count toward growth, and we would all be Arnold Schwarzeneggers.

Antifragile things thrive on volatility, as they have more to gain than to lose. If your exercise program had no resistance training with weights, you would still have your muscles. But with variable resistance training, your muscle gain would accelerate (up to a point).

So anything antifragile in a nonlinear environment has more upside than downside. Especially under volatile conditions.
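
Here’s a toy way to see that asymmetry. The payoff curves below are made-up stand-ins of mine for antifragile-like and fragile-like responses, not anything from the book: hold the average shock fixed and add volatility, and the convex payoff improves while the concave one gets worse.

```python
# Toy illustration: a convex payoff stands in for antifragile, a concave payoff
# for fragile. Both curves are arbitrary assumptions; the point is what
# volatility does when the average shock stays the same.
import random

random.seed(0)

def antifragile_payoff(x: float) -> float:
    return x ** 2            # convex: gains accelerate with bigger shocks

def fragile_payoff(x: float) -> float:
    return -(x ** 2)         # concave: losses accelerate with bigger shocks

def average_payoff(payoff, shocks):
    return sum(payoff(x) for x in shocks) / len(shocks)

steady = [1.0] * 10_000                                        # no volatility, mean 1
volatile = [random.uniform(0.0, 2.0) for _ in range(10_000)]   # volatile, mean ~1

print(average_payoff(antifragile_payoff, steady),
      average_payoff(antifragile_payoff, volatile))   # ~1.00 -> ~1.33: gains from volatility
print(average_payoff(fragile_payoff, steady),
      average_payoff(fragile_payoff, volatile))       # ~-1.00 -> ~-1.33: loses from volatility
```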

Taleb emphasizes that antifragile isn’t the same thing as robust. If you’re robust, you stay the same in volatile situations. You’re resilient. Antifragile goes a step further: if you’re antifragile, you gain in volatile situations. You actually need volatility in order to benefit.

If you’re antifragile, you have a trial-and-error strategy. You tinker and tweak. You make mistakes along the way but limit their magnitude. You’re not interested in the rightness or wrongness of things. You’re interested in payoffs. Especially the big one.

In Taleb’s lexicon, a Black Swan is a rare event that we cannot predict with mathematical models or evidence analysis, no matter how intelligent or advanced we think we are. Taleb says these outliers have deep historical impact, but human nature leads us to rationalize them after the fact.

If you’re antifragile, you mitigate risks and are even willing to take small losses during regular times. When the rare Black Swan event occurs, you’re in a position to exploit it for a big payoff.

Fragile systems seek to control volatility in order to collect small, steady gains. But unpredictable Black Swan events wipe them out. Then they model retrospectively for those events, only to be exposed to the next Black Swan.

Natural systems are inherently antifragile. Indeed, natural selection rewards antifragility. Evolution itself can be seen as the promotion of the antifragile: the survival of things with greater upside than downside in nonlinear systems of high volatility.

Photo by Shannon Kelley on Unsplash

And though we consider the physiosphere a linear system, biological, cultural, and socioeconomic systems are decidedly nonlinear.

So if we attempt to engineer away the volatility, the randomness, and the stressors in these systems in order to make them more uniform and (apparently) predictable, we’re fragilizing them. That makes them more prone to blowing up when the unpredictable inevitably occurs.

Taleb says it’s a lot easier to figure out if something is fragile than to predict a Black Swan event that would harm it. If something has more downside than upside from random shocks, it’s fragile.
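
That rule of thumb can even be written down mechanically. Here is a rough sketch of the idea, not a quote of Taleb’s published heuristic, and the response curves are made-up examples: hit a system with the same-sized shock in both directions and compare what it stands to lose with what it stands to gain.

```python
# Rough sketch of the rule of thumb above, with made-up response curves.
# Probe a system with symmetric shocks and compare downside to upside.

def more_downside_than_upside(payoff, baseline: float, shock: float) -> bool:
    up, down = payoff(baseline + shock), payoff(baseline - shock)
    gain = max(up, down) - payoff(baseline)   # the most the shock can help
    loss = payoff(baseline) - min(up, down)   # the most the shock can hurt
    return loss > gain                        # fragile by the rule of thumb

cup = lambda stress: -(stress ** 2)    # harm accelerates: fragile-like
muscle = lambda stress: stress ** 2    # benefit accelerates: antifragile-like

print(more_downside_than_upside(cup, baseline=1.0, shock=0.5))     # True
print(more_downside_than_upside(muscle, baseline=1.0, shock=0.5))  # False
```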

And he says that our modern, top-down world has been fragilizing just about everything: education, politics, food, parenting, the monetary system, and the economy.

The most obvious recent example of this, the one Taleb put his credibility on the line to warn about before it occurred, is the financial system. Taleb had identified in 2003 that Fannie Mae and the banks were fragile and that the banking system would collapse. Other economists apparently ran simulations showing otherwise.

The crucial difference is that Taleb wasn’t predicting the chance of rare events. He was predicting fragility. With a simple rule-of-thumb. So simple that academics and sophisticated policy wonks missed it.

We know, of course, what happened in 2008.


Health 2.0, as it’s being crafted, is increasingly making our health care system fragile.

(One of those economists who felt Fannie Mae was safe, Peter Orszag, went on to help create Obamacare — a model of Health 2.0.)

Health 3.0 is antifragile.

In the next several posts, I will explain why.

It starts with health care turning into Chase Bank: too big to fail.