Last summer I wrote a post highlighting the fact that the global financial system is a scale-free network. This in itself is not particularly insightful – although I wonder how many of the most senior executives, regulators and politicians understand this explicitly and more importantly use it as an intellectual framework on which to base their ideas on systemic risk management and regulation. This is important because understanding the mathematical underpinnings and topology of such networks is crucial if we ever hope to construct a system of monitoring and regulation that is robust and well adapted. I was reminded of this late last night as I was re-reading an article written in 2003 by Albert-Laszlo Barabasi and Eric Bonabeau published in Scientific American on scale-free networks where they (presciently) note that:
Understanding how companies, industries and economies are interlinked could help
researchers monitor and avoid cascading financial failures.
For anyone wanting an introduction to scale-free networks, this paper is an excellent place to start, but as a reminder (via John Robb):
A scale-free network is one that obeys a power law distribution in the number of connections between nodes on the network. Some few nodes exhibit extremely high connectivity (essentially scale-free) while the vast majority are relatively poorly connected. The reason that scale-free networks emerge, as opposed to evenly distributed random networks, is due to these factors: Rapid growth confers preference to early entrants. The longer a node has been in place the greater the number of links to it.
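Robb's description of preferential attachment can be illustrated with a short simulation. The sketch below (mine, not from the original article) grows a network in which each new node links to existing nodes with probability proportional to their current degree – the "rich get richer" mechanism Barabási and Albert proposed – and then shows that a handful of hubs end up vastly more connected than the median node:

```python
import random
import collections

def preferential_attachment(n, m, seed=42):
    """Grow a network of n nodes where each newcomer attaches m links
    to existing nodes chosen proportionally to their current degree."""
    rng = random.Random(seed)
    # start with a small fully connected core of m+1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # 'stubs' repeats each node once per link, so a uniform draw
    # from it is a degree-proportional draw
    stubs = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.append((new, t))
            stubs += [new, t]
    return edges

edges = preferential_attachment(2000, 2)
degree = collections.Counter(v for e in edges for v in e)
top = sorted(degree.values(), reverse=True)
median = sorted(degree.values())[len(degree) // 2]
# a few heavily connected hubs, a long tail of weakly connected nodes
print("top 5 degrees:", top[:5], "| median degree:", median)
```

The early core nodes accumulate links for the entire run, which is exactly the "early entrants" advantage Robb describes; most nodes never get beyond their initial two links.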
This in a nutshell is why some financial institutions are ‘too big to fail’, or (as we heard much chatter about when first Bear Stearns, then Lehman Brothers went down) more accurately, ‘too connected to fail’. Scale-free networks are extremely resilient to random failure but highly vulnerable to specific failure of the most important hubs (Barabasi and Bonabeau):
In general, scale-free networks display an amazing robustness against accidental failures, a property that is rooted in their inhomogeneous topology. The random removal of nodes will take out mainly the small ones because they are much more plentiful than hubs. And the elimination of small nodes will not disrupt the network topology significantly, because they contain few links compared with the hubs, which connect to nearly everything. But a reliance on hubs has a serious drawback: vulnerability to attacks.
…The Achilles’ heel of scale-free networks raises a compelling question: how many hubs are essential? Recent research suggests that, generally speaking, the simultaneous elimination of as few as 5-10% of all hubs can crash a system.
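A toy experiment makes this robustness/vulnerability asymmetry concrete. The following sketch (illustrative only – parameters are arbitrary) grows a preferential-attachment network, then compares the size of the largest surviving connected component after knocking out 10% of nodes at random versus knocking out the 10% most connected hubs:

```python
import random
import collections

def grow(n, m, rng):
    # preferential attachment via a degree-weighted stub list
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    stubs = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.append((new, t))
            stubs += [new, t]
    return edges

def giant_component(alive, edges):
    """Size of the largest connected component among surviving nodes."""
    adj = collections.defaultdict(set)
    for a, b in edges:
        if a in alive and b in alive:
            adj[a].add(b)
            adj[b].add(a)
    seen, best = set(), 0
    for s in alive:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best

rng = random.Random(0)
n = 1000
edges = grow(n, 2, rng)
degree = collections.Counter(v for e in edges for v in e)
k = n // 10  # knock out 10% of the nodes

hubs = [v for v, _ in degree.most_common(k)]  # targeted attack on hubs
randoms = rng.sample(range(n), k)             # accidental random failures

survive_attack = giant_component(set(range(n)) - set(hubs), edges)
survive_random = giant_component(set(range(n)) - set(randoms), edges)
print("after random failure:", survive_random, "| after hub attack:", survive_attack)
```

Random failures mostly hit the plentiful small nodes and barely dent the network; removing the same number of hubs fragments it far more severely – the Achilles' heel in miniature.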
Hopefully readers will recognize in this why the failure of ‘hubs’ like Bear Stearns or Lehman Brothers was potentially so damaging, setting off a cascading epidemic throughout the financial system. It is also why the Madoff failure in and of itself was not at all systemically threatening, whereas LTCM was – the key difference being ‘connectedness’, not size per se. A further consideration – based on the application of diffusion theories used to predict the propagation of a contagion throughout a population – is that the critical threshold (for propagation of an ‘infection’) is effectively zero for a scale-free network. That is, all ‘viruses’, no matter how weakly contagious, will spread and persist in the system. In other words, it is mathematically impossible to eradicate such sources of failure from a scale-free network. More bluntly, any attempt to eradicate or prevent financial viruses – say, poorly conceived sub-prime mortgages – is an act of futility.
Why is this important? Because most financial regulation is conceived and implemented with this objective as a founding principle and, worse, ignores the topology and structure of the network it is trying to protect. Not only does this vastly increase the probability that the regulatory framework will ultimately fail to achieve its goal, but it imposes severe additional costs on the system for no greater gain in stability or robustness. Current financial regulation distinguishes far too little between the different nodes in the network, the vast majority of which are of no consequence to the overall robustness of the system. Fifty percent of financial firms could probably fail without any risk of catastrophic systemic failure, as long as none of those firms were important hubs. I’m exaggerating of course (but not as much as you think). That is why, for instance, the EU’s recent draft legislation on alternative investment funds – with rules predicated uniquely on size and leverage – is so wrong-headed: it misses the point. Not completely, but only because the correlation between size and connectedness is not zero (all other things being equal, bigger firms are likely to be more connected).
However, wouldn’t it make much more sense if the regulatory framework focused explicitly on the root cause of systemic vulnerability, rather than addressing it accidentally or obliquely? Before any agitated readers get too excited, I realize that what I have outlined has been grasped (belatedly) to some extent by regulators, bankers and politicians and has started to shape the discussion on the reform of financial regulation, especially in the US, where it seems increasingly likely that the new regulatory proposals will be much more concerned with the effective systemic impact of a market participant than with its legal or organizational structure. The recognition that whether an organization is a bank, insurance company, hedge fund or whatever matters less than the exact types of activities it undertakes and its connectedness to the rest of the system is obviously a welcome development, but it doesn’t go far enough.
Wouldn’t it make much more sense to build a set of rules that explicitly addresses the vulnerabilities of a scale-free network and, as such, focuses disproportionate attention and resources on protecting the hubs from attack or failure? The beauty is that the digital global financial system of the 21st century and advances in the science of networks now actually allow us to do this: we can empirically and quantitatively observe, measure and manage the ‘connectedness’ of institutions. Forget the rating agencies; companies like Bonabeau’s Icosystem and others could help the regulators create, maintain and monitor network ‘maps’ and score each market participant in terms of their connectivity. This should be the defining core metric of financial regulation and, mirroring the power law distribution of the underlying network, financial regulation should focus its attention and resources in geometrically increasing fashion.
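To make the idea of a connectivity score concrete, here is a deliberately toy sketch: the institution names and exposure links are entirely made up, and simple degree centrality (a counterparty count) stands in for whatever richer connectedness measure a regulator would actually use:

```python
from collections import Counter

# hypothetical interbank exposure links (names are illustrative only)
exposures = [
    ("MegaBank", "Broker-A"), ("MegaBank", "Broker-B"),
    ("MegaBank", "Insurer-X"), ("MegaBank", "Fund-1"),
    ("MegaBank", "Fund-2"), ("Broker-A", "Fund-1"),
    ("Broker-A", "Insurer-X"), ("Fund-3", "Broker-B"),
    ("Fund-4", "Broker-B"),
]

# connectivity score: count each firm's counterparties (degree centrality)
score = Counter()
for a, b in exposures:
    score[a] += 1
    score[b] += 1

# ranked network 'map': the hub jumps out immediately
for firm, s in score.most_common():
    print(f"{firm:10s} connectivity={s}")
```

Even this crude map would tell a regulator where to look first: MegaBank is the hub, and the long tail of funds barely registers.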
This would have a number of (self-reinforcing) beneficial effects:
- It would impose (geometrically) increasing costs on institutions as they grow in complexity and systemic connectedness, creating a natural optimal equilibrium that balances the benefits (to the institution) of such growth against the external costs it imposes on the system. It effectively puts a price on the negative externalities and avoids the tragedy of the commons without needing to dictate to firms how big or complex they are allowed to become (which is doomed to failure due to the law of unintended consequences and the problem of quantum thresholds, i.e. clustering just below the threshold). I doubt very much that a firm like Citigroup would have come into being under such a regime.
- The size of a financial institution would not be a driver and so simple, relatively unconnected firms could operate with a very light regulatory touch. This would allow the system to naturally exploit economies of scale that don’t give rise to incremental systemic risk.
- Innovation would be allowed to flourish without anyone – regulators, executives, politicians, super-intelligent alien forces – needing to decide which innovations were toxic and which were beneficial. As long as the key players in the system were vaccinated against these viruses and protected against mutations, you could let Darwinian evolution progress more or less unimpeded in the long tail of systemically unimportant firms. Indeed by allowing an increased rate of failure in the overall network, you would be able to more quickly and less painfully identify dangerous risks as they emerge in the network.
- Resource allocation for regulators becomes much easier and more transparent. The amount of regulation and regulatory attention each firm would receive would become directly proportional to their systemic importance.
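As a back-of-the-envelope illustration of the first bullet, a fee schedule in which regulatory cost grows geometrically with a firm's connectivity score might look like this (the base cost and growth factor are arbitrary assumptions, purely for illustration):

```python
def regulatory_burden(connectivity_score, base=1.0, factor=2.0):
    """Toy schedule: each extra unit of connectedness multiplies the
    regulatory cost by `factor`, so hubs pay geometrically more than
    the long tail of simple, unconnected firms."""
    return base * factor ** connectivity_score

# long-tail firm vs. mid-sized broker vs. systemic hub
for score in (1, 3, 10):
    print(f"connectivity {score:2d} -> burden {regulatory_burden(score):8.1f}")
```

A lightly connected firm pays next to nothing, while a hub pays orders of magnitude more – which is precisely the pricing of the externality the bullet describes, without any hard cap on size.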
We can’t prevent dangerous risks from developing in the financial system, but we can work with the grain of the underlying structure to mitigate the systemic danger, instead of working against it or, at best, ignoring it. The robustness of scale-free networks to accidental failure has many advantages, in that it allows our financial system to operate very efficiently and robustly most of the time. And by explicitly recognizing, in our approach to regulation, the mechanisms by which catastrophic failure can occur, we will be much less likely to suffer such failures in the future, and the costs of regulation will be appropriately borne within the system, creating a virtuous circle that drives the system to self-organize into the optimal configuration of complexity and connectedness.
If you know Tim Geithner or Charlie McCreevy or Lord Turner, please send them this link. Hopefully it’s not too late! 😉
And if you are looking for the perfect Father’s Day gift for the financial regulator or Senate Banking Committee member in the family, you could do worse than Bonabeau’s book Swarm Intelligence: From Natural to Artificial Systems.