Fragility, Robustness, and Antifragility

Last time, we discussed a new way of looking at the concept of “risk” and our inability to scientifically define it. Today, we go a little deeper and break systems into three categories based on their exposure to volatility: fragile, robust, and antifragile.

First we need to realize that volatility, time, and risk are really the same thing. This sounds bizarre, right? But let’s return to our least-worst definition of risk: “exposure to negative outcomes brought about by volatility.” With this in mind, we can see that given enough time, every possible extreme of volatility will be experienced. To return to our coffee mug example, if you leave that mug on a table in San Francisco, eventually an earthquake will knock it off and break it. So, the combination of its exposure to negative outcomes (falls break it) plus time (eventually something will knock it off) means that the more time passes, the more likely the mug is broken. I should point out, of course, that this concept is statistical and empirical in nature, which has three important implications: 1) it is only valid probabilistically, 2) it can’t predict the outcome for any one single thing, and 3) it makes no theory-based claims as to why time will eventually break most coffee mugs.
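If you like to tinker, here’s a toy simulation of that idea. The 2% yearly break chance is a number I made up purely for illustration - the point is only the shape of the curve, not the rate:

```python
import random

def survival_probability(p_break_per_year, years, trials=100_000):
    """Estimate the chance a mug survives `years` years, given a
    fixed per-year probability of a breaking event (a toy model)."""
    survived = 0
    for _ in range(trials):
        # The mug survives a trial only if no year produces a break.
        if all(random.random() > p_break_per_year for _ in range(years)):
            survived += 1
    return survived / trials

# With a 2% yearly break chance, survival decays toward zero:
for years in (1, 10, 50, 200):
    print(years, round(survival_probability(0.02, years), 3))
```

Run it and you’ll see the survival probability decay toward zero as the years pile up - no theory about earthquakes required, just exposure plus time.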

We now know that exposure is the most practical thing to care about, since it definitely matters and we can actually observe it. Contrast this with the other elements of “risk” usually considered - the timing or intensity of the outcomes concerned, which we can’t ever really know ahead of time. With our focus squarely on exposure, we can now categorize systems based on it, and it turns out that there are three categories: fragile, robust, and antifragile. The first two will be familiar and intuitively obvious, but the last is novel and gives Taleb’s book its name. Let’s go through what each category means and give some examples.

Fragile things are hurt by volatility - eventually something goes wrong and they break. The easy example we’ve been using is the coffee mug. An important concept we’ve skipped over so far is that these categories are defined by a thing’s relationship with volatility, which can be changed by circumstances - fragility is not an intrinsic quality, even though some things tend toward it more than others.

To understand this distinction, I like to think about potential energy. A brick sitting on top of a ladder has more potential energy than one sitting on the ground. Nothing about the brick changes except its relationship with the gravity well it’s sitting in. Likewise with our coffee mug: if you wrapped it in bubble wrap, stuffed it in a box, put that box in a waterproof bag, then put that inside a nuke-proof bunker filled with packing peanuts, it would be significantly less fragile than sitting on your coffee table, which is less fragile still than balancing on top of a stack of books as you try to open a door.

The most relevant real-world examples are traditional stocks and artificial systems that are overly optimized. Sure, traditional stocks go up when volatility goes in their favor, but they go down when it doesn’t, and you need bigger growth to offset your loss. Highly optimized systems, on the other hand, work great until they catastrophically don’t. These failures are usually due to an unforeseen change in environmental circumstances or to demand exceeding predicted capacity. Traffic is a great example. Highways don’t get linearly slower as cars are added during rush hour - once their capacity is exceeded, they go from zooming to standstill with shocking speed. If you like guns, think about the (somewhat exaggerated) difference between an M16 and an AK-47: an M16 is more accurate and generally higher functioning, but stops working after much less abuse than the less optimized, cruder AK.
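The traffic example can be made concrete with the classic Bureau of Public Roads travel-time formula. The 0.15 and 4 coefficients are the standard BPR defaults; the 20-minute free-flow trip is a number I picked for illustration:

```python
def travel_time(free_flow_minutes, volume, capacity, alpha=0.15, beta=4):
    """Bureau of Public Roads travel-time function: delay grows with
    the fourth power of the volume/capacity ratio, so it is barely
    noticeable at low load and explodes past capacity."""
    return free_flow_minutes * (1 + alpha * (volume / capacity) ** beta)

# A 20-minute trip barely notices load until capacity is exceeded:
for load in (0.5, 0.9, 1.0, 1.3, 1.5):
    print(f"{load:.0%} of capacity -> {travel_time(20, load, 1.0):.1f} min")
```

At half capacity the delay is pennies; at 150% of capacity the same trip takes nearly twice as long, and the curve keeps steepening - the signature of an optimized, fragile system.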

So, the AK-47 can be described as “more robust” or “less fragile” than the M16. Robust is usually considered the opposite of fragile, but that’s not quite accurate. Robust things are indifferent to volatility - shit happens and they abide. Volatility neither particularly hurts nor particularly helps them. Think of Notre Dame in Paris - through centuries of wars, invasions, and revolutions, it has remained beautiful and solid and standing. Sure it’s a little rougher around the edges, but it’s not a pile of rubble. Nor is it larger or more beautiful or superior in some way. It simply is.

When it comes to examples of increasing robustness in day-to-day life, I like to think about my finances and my schedule. As I mentioned before, I keep somewhat large cash savings around, because it makes me more indifferent to volatility. I can pay for the unexpected without indebting myself or scrambling around. Likewise, when scheduling, I build in buffers of time (a trick I originally learned in the army) so that if something unexpected happens (a major accident on the freeway, a train stopping traffic, a marathon I didn’t know about - whatever), I still get there on time.

Finally, we get to the star of the show, the most novel category: antifragile, the true opposite of fragile. Antifragile systems actually benefit from volatility - eventually something will happen that makes them better or stronger. These systems are most often found in nature. Think of your immune system: once you have chicken pox, you’re immune. Or your muscles: lift weights and you get stronger. Most interesting of all, look at evolution: species and even ecosystems get more efficient and better adapted to their environment over time by responding to the various stressors that come up due to volatility.

Because this is exciting and sounds like a superpower, everyone immediately thinks “how can I be more antifragile?” Well, there’s good news and bad news. The bad news is that antifragility applies more to systems than to individual entities. The good news is that we constantly create systems in our lives, and we can engineer those to be as antifragile as possible.

Taleb became wealthy by betting against the market with out-of-the-money put options - when the financial crisis hit and the market tanked, those options paid off. He had no precise knowledge of when a crash would happen or how big it would be, but he knew that, given enough time, one would come. So he constructed a system with convex exposure: he constantly accepted a small, strictly limited downside in exchange for nearly unlimited upside in the event of extreme volatility.
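Here’s a rough sketch of that payoff structure, with completely made-up numbers (Taleb’s actual positions were far more sophisticated, and real options involve fees, timing, and pricing I’m ignoring entirely):

```python
def put_payoff(strike, spot, premium):
    """Net profit at expiry for one long put option (toy model, no
    fees): you always pay the premium, and you collect the strike
    minus the market price only if the market falls below the strike."""
    return max(strike - spot, 0) - premium

# Hypothetical numbers: market near 100, far out-of-the-money strike
# of 70, cheap premium of 1. In calm markets you bleed the premium;
# in a crash the payoff dwarfs all those small losses:
for spot in (105, 100, 90, 50, 30):
    print(spot, put_payoff(70, spot, 1))
```

Notice the asymmetry: no matter how high the market goes, the loss is capped at the premium, while the gain grows with the size of the crash. That capped-downside, open-upside shape is what “convex exposure” means.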

For ourselves, it helps to remember that robust is usually a necessary stepping stone to antifragile. Remember the savings we talked about? Step 1 is not caring about negative outcomes. Step 2 is creating ways to actively benefit from them - by buying undervalued stocks when there is “blood in the streets,” or by setting up positions like Taleb’s that benefit automatically. Or you might find other ways that disorder in your life can work for you - have something to read or listen to whenever you find yourself unexpectedly delayed, for example. Another favorite of mine is Jocko Willink’s habit of responding to all negative situations with “Good.” Or perhaps invest in business ideas that take advantage of the seemingly negative, like staffing agencies in a world of declining full-time employment.

The basics of Taleb’s framework come pretty easily, but there is a lot of depth beyond them. For example, I didn’t even touch the math behind it, because that’s not my forte. I promise that as you think about these concepts, you will begin seeing them in your life and finding ways to apply them. If you figure out anything interesting, please let me know!


Want more stuff like this post delivered straight to your inbox every week? Sign up for my newsletter below.