Antifragility

Antifragility is a property of systems that increase in their ability to thrive as a result of stressors, shocks, volatility, noise, mistakes, faults, attacks, or failures. The concept was developed by Nassim Nicholas Taleb in his book Antifragile and in technical papers. As Taleb explains in his book, antifragility is fundamentally different from resilience (the ability to recover from failure) and robustness (the ability to resist failure). The concept has been applied in risk analysis, physics, molecular biology, transportation planning, engineering, aerospace (NASA), and computer science.

Taleb defined the term in a letter to Nature responding to an earlier review of his book in that journal.

Antifragile versus robust/resilient
In his book, Taleb stresses the differences between antifragile and robust/resilient: "Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better." The concept has since been applied to ecosystems in a rigorous way. In that work, the authors review the concept of ecosystem resilience in its relation to ecosystem integrity from an information-theory approach. The work reformulates and builds upon the concept of resilience in a way that is mathematically defined and can be heuristically evaluated in real-world applications: ecosystem antifragility. The authors also propose that for socio-ecosystem governance, planning, or any decision-making perspective in general, antifragility may be a more valuable and desirable goal than resilience. In the same vein, Pineda and co-workers have proposed a simply calculable measure of antifragility, based on the change of "satisfaction" (i.e., network complexity) before and after adding perturbations, and applied it to random Boolean networks (RBNs). They also show that several well-known biological networks, such as the Arabidopsis thaliana cell-cycle network, are, as expected, antifragile.
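The perturbation-based measure attributed to Pineda and co-workers can be illustrated with a toy computation: simulate a small random Boolean network with and without random state flips, and estimate antifragility as the change in a simple complexity proxy. This is a minimal sketch under stated assumptions, not the authors' exact method: the complexity proxy C = 4p(1−p) per node, the perturbation scheme, and the sign convention (positive difference read as antifragile) are all illustrative choices.

```python
import random

def random_boolean_network(n, k, seed=0):
    """Build a toy RBN: each node reads k random inputs through a random truth table."""
    rng = random.Random(seed)
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from the current values of its inputs."""
    new = []
    for i in range(len(state)):
        idx = 0
        for j in inputs[i]:
            idx = (idx << 1) | state[j]
        new.append(tables[i][idx])
    return new

def complexity(history):
    """Mean per-node complexity C = 4*p*(1-p): zero for frozen nodes, maximal
    when a node is 'on' half the time (a rough proxy, not the paper's formula)."""
    n, t = len(history[0]), len(history)
    total = 0.0
    for i in range(n):
        p = sum(s[i] for s in history) / t
        total += 4 * p * (1 - p)
    return total / n

def run(state, inputs, tables, steps, flip=0, rng=None):
    """Simulate the network; optionally flip `flip` random node states per step."""
    hist = [state]
    for _ in range(steps):
        state = step(state, inputs, tables)
        if flip and rng is not None:
            state = list(state)
            for i in rng.sample(range(len(state)), flip):
                state[i] ^= 1
        hist.append(state)
    return hist

n, k = 20, 2
inputs, tables = random_boolean_network(n, k, seed=1)
rng = random.Random(2)
init = [rng.randrange(2) for _ in range(n)]
base = complexity(run(list(init), inputs, tables, 200))
pert = complexity(run(list(init), inputs, tables, 200, flip=2, rng=rng))
antifragility = pert - base  # > 0: perturbations raised complexity (antifragile)
```

A positive value means the perturbed dynamics are more complex than the unperturbed baseline, which is the qualitative signature the measure looks for.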

Antifragile versus adaptive/cognitive
An adaptive system is one that changes its behavior based on information available at time of utilization (as opposed to having the behavior defined during system design). This characteristic is sometimes referred to as cognitive. While adaptive systems allow for robustness under a variety of scenarios (often unknown during system design), they are not necessarily antifragile. In other words, the difference between antifragile and adaptive is the difference between a system that is robust under volatile environments/conditions, and one that is robust in a previously unknown environment.

Mathematical heuristic
Taleb proposed a simple heuristic for detecting fragility. If $$f(a)$$ is the output of some model at input $$a$$, then fragility exists when $$H<0$$, robustness when $$H=0$$, and antifragility when $$H>0$$, where

$$H = \frac{f(a-\Delta)+f(a+\Delta)}{2}-f(a)$$.

In short, the heuristic is to adjust a model input both higher and lower by the same amount. If the average of the two adjusted outcomes is significantly worse than the model baseline, then the model is fragile with respect to that input; if it is better, the model is antifragile.
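Assuming larger $$f$$ is the better outcome, the heuristic above translates directly into code. The function names, tolerance, and example payoffs below are illustrative, not from Taleb's papers:

```python
def fragility_heuristic(f, a, delta):
    """Taleb's convexity heuristic H: average of the model output at
    a-delta and a+delta, minus the baseline output at a."""
    return (f(a - delta) + f(a + delta)) / 2 - f(a)

def classify(h, tol=1e-9):
    """Map the sign of H to fragile (H<0), robust (H=0), antifragile (H>0)."""
    if h < -tol:
        return "fragile"
    if h > tol:
        return "antifragile"
    return "robust"

# A concave payoff loses more from a down-move than it gains from an up-move:
h_concave = fragility_heuristic(lambda x: -x * x, a=1.0, delta=0.5)
print(classify(h_concave))  # fragile

# A convex payoff gains from volatility:
h_convex = fragility_heuristic(lambda x: x * x, a=1.0, delta=0.5)
print(classify(h_convex))  # antifragile
```

A linear model gives $$H=0$$ exactly: swings up and down cancel, which is the robust case.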

Applications
The concept has been applied in physics, risk analysis, molecular biology, transportation planning, urban planning, engineering, aerospace (NASA), megaproject management, and computer science.

In computer science, there is a structured proposal for an "Antifragile Software Manifesto" as a reaction to traditional system design. The central idea is to achieve antifragility by design: to build systems that improve in response to inputs from their environment.