
The science of financial regulation.

[Image: Scale-free network generated by the Barabási–Albert model, via Wikipedia]

Last summer I wrote a post highlighting the fact that the global financial system is a scale-free network. This in itself is not particularly insightful, although I wonder how many of the most senior executives, regulators and politicians understand this explicitly and, more importantly, use it as an intellectual framework on which to base their ideas on systemic risk management and regulation. It matters because understanding the mathematical underpinnings and topology of such networks is crucial if we ever hope to construct a system of monitoring and regulation that is robust and well adapted. I was reminded of this late last night as I re-read a 2003 article on scale-free networks by Albert-László Barabási and Eric Bonabeau, published in Scientific American, in which they presciently note that:

Understanding how companies, industries and economies are interlinked could help
researchers monitor and avoid cascading financial failures.

For anyone wanting an introduction to scale-free networks, that article is an excellent place to start, but as a brief reminder (via John Robb):

A scale-free network is one that obeys a power law distribution in the number of connections between nodes on the network. Some few nodes exhibit extremely high connectivity (essentially scale-free) while the vast majority are relatively poorly connected. The reason that scale-free networks emerge, as opposed to evenly distributed random networks, is due to these factors: Rapid growth confers preference to early entrants. The longer a node has been in place the greater the number of links to it.
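The growth-plus-preferential-attachment mechanism Robb describes is easy to see in a toy simulation. The sketch below is purely illustrative (the function name, parameters and seed are my own, not anyone's production model): each new node attaches to a few existing nodes chosen in proportion to their current degree, and a handful of hubs emerges.

```python
import random
from collections import Counter

def barabasi_albert_edges(n, m, seed=42):
    """Toy Barabasi-Albert generator: each new node attaches to m existing
    nodes chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    edges = set()
    targets = list(range(m))      # the first new node links to the seed nodes
    repeated = []                 # node ids repeated once per link end
    for new in range(m, n):
        for t in targets:
            edges.add((new, t))
        repeated.extend(targets)
        repeated.extend([new] * m)
        targets = []
        while len(targets) < m:   # degree-weighted sampling, no repeats
            candidate = rng.choice(repeated)
            if candidate not in targets:
                targets.append(candidate)
    return edges

edges = barabasi_albert_edges(1000, 2)
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

hub = max(degree.values())
median = sorted(degree.values())[len(degree) // 2]
print(hub, median)   # a few heavily connected hubs, a poorly connected majority
```

Run it and the asymmetry is stark: the biggest hub ends up with an order of magnitude more links than the typical node, exactly the power-law signature Robb describes.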

This in a nutshell is why some financial institutions are ‘too big to fail’ or, as we heard much chatter about when first Bear Stearns and then Lehman Brothers went down, more accurately ‘too connected to fail’. Scale-free networks are extremely resilient to random failure but highly vulnerable to the targeted failure of their most important hubs (Barabási and Bonabeau):

In general, scale-free networks display an amazing robustness against accidental failures, a property that is rooted in their inhomogeneous topology. The random removal of nodes will take out mainly the small ones because they are much more plentiful than hubs. And the elimination of small nodes will not disrupt the network topology significantly, because they contain few links compared with the hubs, which connect to nearly everything. But a reliance on hubs has a serious drawback: vulnerability to attacks.

…The Achilles’ heel of scale-free networks raises a compelling question: how many hubs are essential? Recent research suggests that, generally speaking, the simultaneous elimination of as few as 5-10% of all hubs can crash a system.
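That attack-tolerance result is simple to reproduce in simulation. The sketch below is a toy, not Barabási and Bonabeau's methodology, and every parameter is arbitrary: build a scale-free network, then compare knocking out 5% of nodes at random with knocking out the top 5% of hubs, measuring what remains of the largest connected component.

```python
import random
from collections import deque

def scale_free_adj(n, m, seed=1):
    """Adjacency dict for a Barabasi-Albert style network (toy version)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    repeated = list(range(m))            # link-ends, repeated by degree
    for new in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(repeated))
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)
        repeated.extend([new] * m)
    return adj

def largest_component(adj, removed):
    """Size of the biggest connected component after deleting `removed`."""
    seen = set(removed)
    best = 0
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:                     # breadth-first search
            node = queue.popleft()
            size += 1
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        best = max(best, size)
    return best

n, m = 2000, 2
adj = scale_free_adj(n, m)
k = n // 20                              # knock out 5% of nodes
randomly_failed = random.Random(7).sample(sorted(adj), k)
hubs = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k]

after_random = largest_component(adj, randomly_failed)
after_attack = largest_component(adj, hubs)
print(after_random, after_attack)
```

Random failure barely dents the giant component; removing the same number of hubs fragments it far more severely, which is the whole systemic-risk argument in two numbers.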

Hopefully readers will recognize in this why the failure of ‘hubs’ like Bear Stearns or Lehman Brothers was potentially so damaging, setting off a cascading epidemic throughout the financial system. It is also why the Madoff failure was not in itself systemically threatening, whereas LTCM’s was: the key difference being ‘connectedness’, not size per se. A further consideration, based on the diffusion models used to predict the propagation of a contagion through a population, is that the critical threshold for propagation of an ‘infection’ is effectively zero in a scale-free network. That is, all ‘viruses’, no matter how weakly contagious, will spread and persist in the system. In other words, it is mathematically impossible to eradicate such sources of failure from a scale-free network. More bluntly, any attempt to eradicate or prevent financial viruses, say poorly conceived sub-prime mortgages, is an act of futility.

Why is this important? Because most financial regulation is conceived and implemented with exactly this objective, eradicating sources of failure, as a founding principle, and worse, it ignores the topology and structure of the network it is trying to protect. Not only does this vastly increase the probability that the regulatory framework will ultimately fail to achieve its goal, it imposes severe additional costs on the system for no gain in stability or robustness. Current financial regulation distinguishes far too little between the different nodes in the network, the vast majority of which are of no consequence to the overall robustness of the system. Fifty percent of financial firms could probably fail without any risk of catastrophic systemic failure, as long as none of those firms were important hubs. I’m exaggerating of course (but not by as much as you might think). That is why, for instance, the EU’s recent draft legislation on alternative investment funds, with rules predicated uniquely on size and leverage, is so wrong-headed: it misses the point. Not completely, but only because the correlation between size and connectedness is not zero (all other things being equal, bigger firms are likely to be more connected).

However, wouldn’t it make much more sense if the regulatory framework targeted the root cause of systemic vulnerability explicitly rather than accidentally or obliquely? Before any agitated readers get too excited: I realize that what I have outlined has been grasped (belatedly) to some extent by regulators, bankers and politicians, and has started to shape the discussion on the reform of financial regulation, especially in the US, where it seems increasingly likely that the new regulatory proposals will be much more concerned with a market participant’s effective systemic impact than with its legal or organizational structure. Recognizing that whether an organization is a bank, insurance company, hedge fund or whatever matters less than the exact types of activities it undertakes and its connectedness to the rest of the system is obviously a welcome development, but it doesn’t go far enough.

Wouldn’t it make much more sense to build a set of rules that explicitly addresses the vulnerabilities of a scale-free network, and as such focuses disproportionate attention and resources on protecting the hubs from attack or failure? The beauty is that the digital global financial system of the 21st century, together with advances in the science of networks, now actually allows us to do this: we can empirically and quantitatively observe, measure and manage the ‘connectedness’ of institutions. Forget the rating agencies; companies like Bonabeau’s Icosystem and others could help the regulators create, maintain and monitor network ‘maps’ and score each market participant in terms of their connectivity. This should be the defining core metric of financial regulation and, mirroring the power-law distribution of the underlying network, financial regulation should focus its attention and resources in geometrically increasing fashion.
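To make the idea concrete, here is a deliberately toy sketch. The network map, the institution names and the quadratic weighting rule are all invented for illustration; a real metric would weight edges by exposure and use proper centrality measures rather than raw link counts. The point is only the shape of the outcome: supervisory attention concentrating super-linearly on the hubs.

```python
# Hypothetical network map: who is connected to whom. All names invented.
network_map = {
    "MegaBank":   {"BrokerA", "BrokerB", "FundX", "FundY", "Insurer1", "ClearingCo"},
    "ClearingCo": {"MegaBank", "BrokerA", "BrokerB", "FundX"},
    "BrokerA":    {"MegaBank", "ClearingCo"},
    "BrokerB":    {"MegaBank", "ClearingCo"},
    "FundX":      {"MegaBank", "ClearingCo"},
    "FundY":      {"MegaBank"},
    "Insurer1":   {"MegaBank"},
}

# Connectivity score: each firm's share of all link-ends in the map.
total_links = sum(len(v) for v in network_map.values())
scores = {firm: len(links) / total_links for firm, links in network_map.items()}

# Regulatory attention grows geometrically with connectedness: a simple
# quadratic weighting here, so a firm twice as connected gets four times
# the supervisory resources out of a fixed budget.
budget = 100.0
weights = {firm: s ** 2 for firm, s in scores.items()}
scale = budget / sum(weights.values())
attention = {firm: round(w * scale, 1) for firm, w in weights.items()}

for firm in sorted(attention, key=attention.get, reverse=True):
    print(firm, attention[firm])
```

Under this toy weighting the single big hub absorbs over half the supervisory budget while the unconnected periphery gets a very light touch, which is exactly the allocation the power-law topology calls for.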

This would have a number of (self-reinforcing) beneficial effects:

  • It would impose (geometrically) increasing costs on institutions as they grow in complexity and systemic connectedness, creating a natural equilibrium that balances the benefits (to the institution) of such growth against the external costs it imposes on the system. It effectively puts a price on the negative externalities and avoids the tragedy of the commons, without needing to dictate how big or complex firms are allowed to become (an approach doomed to failure by the law of unintended consequences and the problem of quantum thresholds, i.e. clustering just below the threshold). I doubt very much that a firm like Citigroup would have come into being under such a regime.
  • The size of a financial institution would not be a driver and so simple, relatively unconnected firms could operate with a very light regulatory touch. This would allow the system to naturally exploit economies of scale that don’t give rise to incremental systemic risk.
  • Innovation would be allowed to flourish without anyone – regulators, executives, politicians, super-intelligent alien forces – needing to decide which innovations were toxic and which were beneficial. As long as the key players in the system were vaccinated against these viruses and protected against mutations, you could let Darwinian evolution progress more or less unimpeded in the long tail of systemically unimportant firms. Indeed by allowing an increased rate of failure in the overall network, you would be able to more quickly and less painfully identify dangerous risks as they emerge in the network.
  • Resource allocation for regulators becomes much easier and more transparent. The amount of regulation and regulatory attention each firm would receive would become directly proportional to their systemic importance.

We can’t prevent dangerous risks from developing in the financial system, but we can mitigate the systemic danger by working with the grain of the underlying structure instead of against it, or at best ignoring it. The robustness of scale-free networks to accidental failure is a real advantage: it allows our financial system to operate efficiently and robustly most of the time. And by explicitly recognizing, in our approach to regulation, the mechanisms by which catastrophic failure can occur, we will be much less likely to suffer such failures in the future, and the costs of regulation will be borne appropriately within the system, creating a virtuous circle that drives the system to self-organize into the optimal configuration of complexity and connectedness.

If you know Tim Geithner or Charlie McCreevy or Lord Turner, please send them this link. Hopefully it’s not too late! 😉

And if you are looking for the perfect Father’s Day gift for the financial regulator or Senate Banking Committee member in the family, you could do worse than Bonabeau’s book Swarm Intelligence: From Natural to Artificial Systems.

  1. […] Excerpt from: The Park Paradigm – The science of financial regulation. […]

  2. At 3:21 pm on 05 Jun 09 Chuck Farley said:

    Nowadays we are far beyond having a nuclear-war-proof Internet, and the network is actually much more scale-free than most people realize. A few key backbones (hubs) carry a lot of the packets. So I've thought a bit about a connectedness metric that might be shared between TCP/IP and financial (capital) networks. (But nothing as hip as the Icosystem stuff.)

    You can't just measure how much traffic a website gets, because that can be more about size than connectedness. That would be similar to using market cap as a proxy for financial connectedness. (Not good.) However, if a server suddenly slows down because of too much traffic *and* the server is “equivalently slow” from a set of disparate nodes, then it is probably more connected. What is a proxy for whether a financial organization is (un)responsive from the perspective of a variety of clients, not just you?

    – lag before the financial organization confirms an OTC trade via documentation
    – lag before getting a response to an RFQ on a structured, exotic product
    – long-distance phone charges, as a proportion of market cap
    – ping response time to the organization's main email (SMTP) server

    Again we would not care about the average level of these metrics, but their range (variance) across a population of clients. I would imagine, back in the day, that LTCM responded to a structured bond bid rapidly, regardless of whether or not you were the Bank of Japan, a market maker in Chicago, or a Cypriot hedge fund.
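    The variance-across-clients idea sketched above can be made concrete in a few lines; the firm names and lag figures below are invented purely for illustration.

```python
# Illustrative sketch: what matters is not a firm's average response lag
# but its spread across disparate clients. All data here is invented.
from statistics import pvariance, mean

confirm_lags_hours = {               # OTC trade confirmation lag, by client
    "HubBank":   {"central_bank": 2.0, "chicago_mm": 2.1, "cyprus_fund": 1.9},
    "NicheFirm": {"central_bank": 1.0, "chicago_mm": 9.0, "cyprus_fund": 30.0},
}

for firm, lags in confirm_lags_hours.items():
    print(firm,
          "mean", round(mean(lags.values()), 1),
          "variance", round(pvariance(lags.values()), 2))
# Low variance across very different clients suggests a highly connected
# hub; high variance suggests a peripheral node that plays favourites.
```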

    Oh, the Space Syntax people at UCL do this sort of thing for pedestrian and motor traffic.

  3. At 5:01 am on 07 Jun 09 Paul Wallis said:


    During centuries of global economic development, understanding precisely how business interacts with flows (first of water, then steam, then electricity, then oil, petroleum and components) has been vital for businesses, the economy and society:

    1770s – mechanisation, factories, and canals – water
    1830s – steam engines, coal, and iron railways – steam
    1870s – steel and heavy engineering, telegraphy, refrigeration – electricity
    1910s – oil, mass production, and the automobile – oil; components; petrol

    Over time, the professions of architecture and engineering, and various sciences, co-operated to develop standards and practices of measurement, management, safety, optimisation and valuation to accurately understand these various flows.

    The result was clarity about how the business worked.

    This created trust.

    That is why huge industrial plants are often sited relatively close to centres of population. It is why we don’t think twice about starting a car, turning on a kettle or lighting a gas ring.

    There is always operational risk. But all over the modern world, no matter the political system, no matter the economic system, no matter the regulatory regime, the above holds true. The understanding of flow is critical to understanding how the business works and to creating trust.

    And where there is trust people are more likely to do business.

    Today, most business sectors rely heavily on IT and flows of data.

    But unlike, for example, the utilities, finance does not know precisely how data (money) flows through the assets of the business.

    There are many technical and business reasons for this circumstance, but in a nutshell, in today’s world finance doesn’t have enough understanding of how everything is put together to make the business, and the financial system, work.

    Andrew Haldane of the Bank of England discussed this in his recent speech “Rethinking the financial network”, where he spoke of the financial network as a place where ‘complexity caused seizures in certain financial markets’ and ‘financial innovation…increased complexity and uncertainty.’

    He went on to discuss ‘appropriate control of the damaging network consequences of the failure of large, interconnected institutions’, and the need ‘to ensure the financial network is structured so as to reduce the chances of future systemic collapse.’

    Financial institutions must see things clearly and understand exactly how the financial system works, so as to rebuild trust.

    Clarity will be achieved by understanding precisely how data flows.

  4. At 2:08 pm on 24 Jun 09 parkparadigm said:

    Thanks Paul. I agree that we are in an 'Age of Data'; this is a key pillar in our investment thesis and shapes much of our thinking about the future. Interestingly, your four dates correspond nicely to Perez's first four techno-economic paradigms, the fifth being the Age of Communications (microprocessors, 1970s). And the sixth? Well, perhaps it's the Age of Ubiquitous Computing, in which data is what 'flows'.

  5. At 7:57 pm on 24 Jun 09 Paul Wallis said:

    Hi Sean,

    You are correct re Carlota Perez.

    Recently I wrote to thank her for the inspiration; her ideas have helped me describe clearly, I hope, why finance has had problems, and why it (and many other business sectors), and the economy, will continue to have problems despite revised ‘regulation’.

    Most modern businesses do not know precisely how they work. Finance does not know precisely how data (money) flows.

    The age of data flows started about thirty years ago and it continues today.

    But because we haven’t understood data flows in the way we understood the previous flows, we have seen, for example, billions blown on failed government IT projects around the world; banks operating in silos and pumping out too much data (money); and extraordinary lapses in the protection of critical data in the public and private sector.

    And that is why there is a danger in things like, for example, ‘The Cloud’. Any reasonably complex organisation that puts critical data, or flows of ‘money’, in the Cloud is asking for trouble, because using traditional techniques there is no way of knowing how likely it is to flow ‘safely’. How do you know which individual parts of the global IT infrastructure your data will flow through? How robust are those parts? Are certain parts overloaded? Are all the parts secure? Can someone tap into your dataflow and alter it?

    When oil or electricity flows we trust that each asset/component/machine that enables the flow has been individually tested to meet safety/operational standards. We know that they will all have been engineered and put together to meet rigorous standards.

    There are no such standards in IT. So when something like a server fails and data flow stops it can cause immense disruption, if it hasn’t been anticipated.

    You may have noticed recently that Barclays systems failed for an afternoon. With many mergers and demergers in the offing we can expect more incidents in finance. We have to hope they don't cause serious disruption to the economy, with knock-on effects in society at a time of political unrest.

    If I had been an investor in a ‘new’ business during the past two hundred years, I like to think I would have followed the advice of Warren Buffett, who,

    “will only invest in businesses he can understand and analyse, rejecting those …. where he is unsure of their operating model…He has largely ignored the technology sector because he claims not to fully understand their business.”

  6. At 8:41 pm on 24 Jun 09 parkparadigm said:

    At the risk of sounding ridiculous, I disagree slightly with Mr. Buffett's view on new businesses – at least as it has been mythologized. I'm not taking issue with his investment approach – it clearly has worked for him – but rather with the idea that one cannot invest in and profit from the build-out of new paradigms. Clearly this is a different risk/reward equation, but approached robustly – in the spirit of Mr. Buffett: analytical and questioning – I think it can be every bit as interesting. Indeed I would look more (in terms of visionary role models) to the likes of John Rockefeller (putting aside the ruthless monopolistic tendencies and manifest character flaws!) as an example of someone who profited from identifying and leading a fundamental economic paradigm shift.

    I agree with you that big businesses and banks today don't have a deep understanding of how data flows, but in this I see an enormous opportunity – for new businesses, new leaders, new approaches – that do understand data flows. Businesses that in fact are predicated on this understanding. Naturally such a paradigm shift does not happen smoothly, nor is it without risks; accidents are sure to happen. But I am optimistic that “the other side” of this valley does exist and is attainable. But probably not by those – people, companies and institutions – that are fundamentally of this side…

  7. At 5:56 pm on 21 Aug 09 The Park Paradigm - On financial networks. said:

    […] been trying to articulate (much less completely and articulately) for some time now. (see The science of Financial Regulation (June 2009) and Averting (financial) ecological disasters (August 2008)) I hope his voice is listened to and […]

  8. At 6:32 pm on 14 May 10 investor relations said:

    I think the Scale Free Network is the best option.

  9. At 9:04 am on 28 May 12 The Park Paradigm - A Damascene Conversion? said:

    […] The science of financial regulation. ( […]
