

This is no way to run a financial system

The micro-cracks are turning into fissures, soon to be gaping crevasses as (finally) the obsolescence of our industrial age banking system plays itself out in spectacular front page headlines. Meanwhile it would seem that our society and our leaders are (mostly) frozen in some kind of macabre trance – eating popcorn and mesmerized by the inevitable Crash.

If you look at the LIBOR scandal in the context of the technology of the fast emerging information economy, it is absolutely mind-boggling that such an anachronistic process even exists in the world of 2012. In a world where every financial flow is digitized and only really exists as an entry in a database. In a world where truly enormous real-time data sets (ones that make the underlying data required for a true LIBOR look puny) are routinely captured and analyzed in the time it takes to read this sentence. In a world where millions (soon billions) of people have enough processing power in their pocket to compute complex algorithms. In a world where a high school hacker can store terabytes of data in the cloud.  In this world, we continue to produce one of the most important inputs into global financial markets using the equivalent of a notebook and a biro… WTF???

You think I’m joking? Libor is defined as:

The rate at which an individual Contributor Panel bank could borrow funds, were it to do so by asking for and then accepting inter-bank offers in reasonable market size, just prior to 11.00 London time.

For each of the 10 currencies, a panel of 7-18 contributing banks is asked to submit its opinion (yes, you read that right) each morning on what each rate (by maturity) should be. The published rate is then the “trimmed arithmetic mean”: basically they throw out the highest and lowest submissions and average the rest. No account is taken of the size, creditworthiness or funding position of each bank, and the sample size after the “trimming” for each calculation is between 4 and 10 banks. However, the BBA assures us that this calculation method means that:

…it is out of the control of any individual panel contributor to influence the calculation and affect the bbalibor quote.

You don’t need to be a banker or a quantitative or statistical genius, or an expert in sociology, or even particularly clever to figure out that this is a pretty sub-optimal way to calculate any sort of index, let alone one that has an impact on the pricing and outcomes of trillions of dollars worth of contracts…
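To see just how thin the veneer is, the entire published methodology fits in a few lines of code. A sketch, with all submission figures invented for illustration:

```python
def bba_trimmed_mean(submissions, trim=0.25):
    """Discard the top and bottom quartile of submissions, average the rest.

    This mirrors the BBA method: with 16 contributors, the 4 highest and
    4 lowest quotes are thrown out and the remaining 8 are averaged.
    No weighting by size, creditworthiness or funding position.
    """
    ranked = sorted(submissions)
    k = int(len(ranked) * trim)  # number of quotes trimmed at each end
    kept = ranked[k:len(ranked) - k] if k else ranked
    return sum(kept) / len(kept)

# Hypothetical 3-month submissions from a 16-bank panel (percent):
quotes = [0.450, 0.452, 0.455, 0.460, 0.461, 0.462, 0.463, 0.465,
          0.466, 0.468, 0.470, 0.471, 0.473, 0.480, 0.490, 0.520]
fixing = bba_trimmed_mean(quotes)
```

Note what the trimming doesn’t protect against: with only 8 quotes surviving, a single bank shading its submission within the kept range still moves the published rate by an eighth of the shade.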

In the 1980s when LIBOR was invented – and (lest the angry mob now try to throw the baby out with the bathwater) it should be said it was an important and good invention – this methodology might just have been acceptable as the “best practical solution available given the market and technological context.” Banks used to have to physically run their bids in Gilt auctions to the Bank of England (which is why historically banks were located in the City; tough to compete on that basis from the West End or Canary Wharf, at least without employing a few Kenyan middle-distance Olympians…) But you know what? And this is shocking, I know… They don’t do it that way anymore!!!

So if LIBOR is important (and it is), how should we be calculating this in the 21st century? Here’s a few ideas:

  • include all banks participating in the market – and not necessarily just those in London – how about G(lobal)IBOR??
  • collect and maintain (in quasi-real time) important meta-data for each contributing bank (balance sheet size and currency breakdown of same by both deposits and loans, credit rating, historical interbank lending positions, volatility/consistency of submissions, derivative exposure to LIBOR rates, etc.)
  • collect rates and volumes for all realized interbank trades and live (executable) bids and offers (from say 9-11am GMT each day)
  • build robust, complex (but completely transparent and auditable) algorithms for computing a sensible LIBOR fixing arising from this data; consider open-sourcing this using the Linux model (you might even get core LIBOR and then forks that consenting counterparties might choose to use for their transactions, which is ok as long as the calculation inputs and algorithms are totally transparent and subject to audit upon request1)
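As a flavour of what the “robust, transparent algorithm” bullet might look like in practice: a fixing computed as the volume-weighted median of realized trades, rather than an average of opinions. A toy sketch, with the morning tape entirely invented:

```python
def volume_weighted_median(trades):
    """Benchmark fixing from realized interbank trades.

    Each trade is a (rate, volume) pair; the fixing is the rate at which
    half of the traded volume lies at or below. Unlike a trimmed mean of
    opinions, moving this number requires committing real volume.
    """
    trades = sorted(trades)                      # order by rate
    half = sum(volume for _, volume in trades) / 2.0
    cumulative = 0.0
    for rate, volume in trades:
        cumulative += volume
        if cumulative >= half:
            return rate
    raise ValueError("no trades on the tape")

# Invented morning tape: (rate in percent, volume in millions)
tape = [(0.45, 120), (0.46, 300), (0.47, 80), (0.48, 40), (0.55, 5)]
fixing = volume_weighted_median(tape)  # the thin 0.55 outlier barely registers
```

The real algorithm would of course weight in the metadata from the second bullet too; the point is only that anchoring to executed volume makes the lone off-market print nearly irrelevant.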

This is not only possible, but in fact relatively trivial today. Indeed companies like the Climate Corporation*, Zoopla*, Metamarkets*, Palantir, Splunk (and dozens and dozens more, including newcomers like Indix* and Premise Data Corp) regularly digest, analyze and publish analogous datasets that are at least as big and complex as – and almost certainly far bigger than – the newLIBOR I’m suggesting.

Indeed, the management of this process could easily be outsourced to one – or better, many – big data companies, with a central regulatory authority playing the role of guardian of standards (the heavy lifting of which could actually be outsourced to other smart data processing auditors…) In theory this “standards guardian” could continue to be the BBA (the “voice of banking and financial services”) but the political and practical reality is that it should almost certainly be replaced in this role, perhaps by the Bank of England; but given the global importance of this benchmark, I think it is also worth thinking creatively about what institution could best play this role. Perhaps the BIS? Or ISO? Or a new agency along the lines of ICANN or the ITU – call it the International Financial Benchmarks Standards Institute (IFBSI)? The role of this entity would be to set the standards for data collection, storage and computation, and to vet and safekeep the calculation models and the minimum standards (including the power to audit at any time) required to be a calculation agent (a kitemark of sorts.) Under this model, you could have multiple organizations – both private and public – publishing the calculation, and in principle, if done correctly, they should all get the same answer (same data in + same model = same benchmark rate.) Pretty basic “many eyes” principle to improve robustness and quickly identify corrupt data or models.
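The “many eyes” check is itself trivially mechanical: collect each agent’s published number, and any divergence flags corrupt data or a corrupt model. A sketch, with the agent names and published figures invented:

```python
def cross_check(fixings, tolerance=1e-9):
    """Compare fixings published by independent calculation agents.

    Same data in + same model = same benchmark rate, so every agent
    should match; anyone who diverges from the consensus (taken here
    as the median publication) gets flagged for audit.
    """
    reference = sorted(fixings.values())[len(fixings) // 2]
    return {agent: abs(rate - reference) <= tolerance
            for agent, rate in fixings.items()}

# Hypothetical publications for one morning's fixing:
published = {"AgentA": 0.46575, "AgentB": 0.46575, "AgentC": 0.46975}
status = cross_check(published)  # AgentC stands out for audit
```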

As my friend (and co-founder of Metamarkets and now Premise Data Corporation) David Soloff points out:


And it’s not just LIBOR as Gillian Tett highlights in the FT:

If nothing else, this week’s revelations show why it is right for British political figures, such as Alistair Darling, to call for a radical overhaul of the Libor system. They also show why British policy makers, and others, should not stop there. For the tale of Libor is not some rarity; on the contrary, there are plenty of other parts of the debt and derivatives world that remain opaque and clubby, and continue to breach those basic Smith principles – even as bank chief executives present themselves as champions of free markets. It is perhaps one of the great ironies and hypocrisies of our age; and a source of popular disgust that chief executives would now ignore at their peril.

Rather than join the wailing crowd of doomsayers, I remain optimistic. The solutions to this – and other similar issues in global finance – either exist or are emerging at a tremendous pace. I know this because this is what we do here at Anthemis. But I’m clear-headed enough to know that we have only a tiny voice. Clearly it would seem that our long-predicted Financial Reformation is starting to climb up the J-curve. I just hope that if Mr. Cameron does launch some sort of parliamentary commission, voices that understand both finance and technology are heard and listened to. Excellent, robust, technology-enabled solutions are entirely within our means; I’m just not confident that the existing players have the willingness to bring these new ideas to the table.

* Disclosure: I have an equity interest, either directly or indirectly in these companies.

1 There may exist good reasons for keeping some of the underlying data anonymous, but I think it would be perfectly possible to find a solution whereby the data was made available to all for calculation purposes but the actual contributor names and associated price, volume and metadata were kept anonymous and known only to the central systemic guardian. Of course you’d have to do more than just replace the bank name with some static code; it would need to change dynamically, with different keys for different calculation agents, etc. – but all very doable I’m sure. You’d be amazed what smart kids can do with computers these days.
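One way the “dynamically changing codes” could work is with keyed hashes from the standard library: the guardian holds a separate secret per calculation agent and derives a fresh pseudonym per bank per fixing date. A sketch – the bank name, secrets and dates are all invented for illustration:

```python
import hashlib
import hmac

def pseudonym(bank_name, agent_secret, fixing_date):
    """Per-agent, per-day code for a contributor bank.

    Only the central guardian, holding every agent's secret, can map a
    pseudonym back to a bank; agents cannot correlate codes with each
    other or across days, yet each can still run the full calculation.
    """
    message = (bank_name + "|" + fixing_date).encode()
    return hmac.new(agent_secret, message, hashlib.sha256).hexdigest()[:12]

a = pseudonym("Barclays", b"agent-A-secret", "2012-07-06")
b = pseudonym("Barclays", b"agent-B-secret", "2012-07-06")
c = pseudonym("Barclays", b"agent-A-secret", "2012-07-09")
# a, b and c all differ, so no static code ever leaves the guardian
```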


From our cold dead hands.

A phrase popularized by the late Charlton Heston in his crusading role as the poster boy for the NRA. But I’m surprised it hasn’t yet been officially adopted by more old economy industry groups as a rallying cry to marshal support to save and protect their dying business models. To the bitter end.

I was reminded of this when my dad sent me this Globe & Mail article from the home country:

An Ontario court has shut the door on attempts to create new web sites to repackage real estate listings using data from the Multiple Listings Service system.

In a ruling released Monday, Mr. Justice David Brown of the Ontario Superior Court said Toronto real estate broker Fraser Beach did not have the right to provide broad public access to MLS data through a web site he helped create while working for BCE Inc. division Bell New Ventures in 2007.

The decision comes after the Toronto Real Estate Board (TREB) shut down several attempts in recent years to create new web sites allowing members of the public to sort MLS data – including an operation started by Mr. Beach.

That the Canadian Real Estate Association would want to protect its MLS data is entirely reasonable; indeed it is a very valuable dataset. However, one would hope that they would take this as a wake-up call and start thinking very hard about developing a new business model around this data – one that reflects the modern realities of a fully connected, digitized economy. Perhaps they are. To be honest I have no idea. So, acknowledging that this is pure unadulterated speculation, I suspect they aren’t. I suspect that, like the newspaper, music, bookselling, banking, etc. sectors before them, the main focal point of their efforts is to keep the bloody genie in the bottle – at least for long enough for the old hands to ride off into the sunset and let the next generation deal with it.

It’s a shame really, because on paper – as for most incumbents – not only do they have the most (everything) to lose when the paradigm shifts, but they are also by far the best positioned to maintain a leadership position so long as they adapt (in time.) Inertia, installed base and brand recognition take care of that. Basically they’ve got a strong hand. But time and time again it seems that these kinds of companies and institutions can’t help themselves but to overplay it. Taking another card while holding two Jacks kind of thing. Admittedly it would be hard work for someone to build up a competitive offering to the MLS from scratch, but I suspect not impossible. I don’t know what the public information access laws are like in Canada but if they are similar to those in the UK for instance, a smart entrepreneur might mimic the route taken by Zoopla and bootstrap prices starting from public sales records. And even if they do manage to maintain a data monopoly, they and their member agents will be faced with an increasingly angry client base who won’t readily accept being held hostage by secretive data trolls.

If I were a Canadian real-estate broker, I would be leading the charge to flip the MLS and traditional broker roles on their heads. Having read this excellent post on the future of my profession, I would understand that my customers are (mostly) not looking to do away with me but to get real value from my services and insights and conversely will become annoyed and resentful if they get the feeling they’re just paying a toll to a glorified data monkey.

The way a broker creates value in a world of abundance (vs a world of scarcity) is fundamentally different. Someone forgot to tell the record companies. Let’s not make the same mistake again. Save a real estate broker: free the data.


Bittersweet mint.

A couple years ago, I had just decided to try to build what would become Nauiokas Park.  I wasn’t entirely sure exactly how I was going to go about it but I had a vision of what it might look like and I knew the market opportunity – to develop technology-enabled disruptive business models in financial services and markets – was vast.  Also, Saul and Reshma’s inaugural seedcamp had given me an excuse (or a push) to stop ‘mulling it over’ and ‘get started’ even if I didn’t exactly know what ‘it’ was yet.

One of the first things I did was to start building a database of startups and private growth companies that I thought fell into my embryonic firm’s new investment universe, and one of the first companies I added (on August 29th, 2007 to be exact) was Mint. I had first heard of them early that year when they were raising a Series A round and the concept had always appealed to me (and I had always wondered why banks had been so oblivious to it.) I had definitely hoped to be able to take a closer look once I had raised outside investment capital (they were already past the seed stage where I could have contemplated trying to play as an angel) and so it was one of the first companies on our internal ‘radar screen’. Well, as they say in the start-up game, it always takes longer than you expect, and here we are – one giant financial crisis later – in the fall of 2009, and Mint will now be coming off our radar screen (into our archives) having gone and gotten itself acquired by Intuit for $170mn.


On the one hand, it is exciting to see innovation in the space we are calling our own succeed and be rewarded. And although I’ve never had the pleasure of meeting Aaron, I would like to congratulate him and wish him continued success with Mint and Intuit. Who knows, perhaps I’ll get to meet him in the future. Maybe when he’s contemplating his next venture? On the other hand, I can’t help but wonder if they sold too soon. I have to insert a disclaimer here – I have absolutely no idea what Mint’s financials looked like – so my view is entirely speculative, but I can’t shake the suspicion that if they had enough traction to get $170mn from Intuit, they had already hit and passed the inflection point and could have aimed at becoming (at least) a billion dollar company and owned the space.

Bittersweet? Well partly for not having invested as an angel but that’s just back-trading, so not really. Mainly it’s because – if the company was for sale – I would have really liked to have been in a position to run our slide-rule over it and, if it made sense, put in a bid, either alone or as part of a club deal with one or two private equity peers. If they have attained critical mass – which it looks like they may well have – it doesn’t take too much imagination (if you live in the sixth paradigm) to see them developing into a multi-billion dollar business over the next 5 years or so. Don’t get me wrong, I understand why management, the angels and the VCs, might find this exit attractive, especially given events of the past 24 months, but I can’t help thinking they’d done the hardest part and instead of letting a winner run, took their profits too soon.

PS If anyone knows where I can find Mint’s financials and projections, I’d love to have a look.

Smarter finance.

I finally got the chance this weekend to take a closer look at IBM’s Smarter Planet initiative and I was impressed.

We can make our world smarter.
Intelligence can be infused into how we manufacture and sell… move goods, people and money…
The world is ready for a smarter planet.
Find out how to build it together.

If you would rather avoid wading through the inevitable corporate speak on IBM’s website, a good place to find out about what they are doing and how they are thinking is this recent article “IBM’s Grand Plan to Save the Planet” from Fortune:

In the parlance of the information technology industry, these situations all represent “dumb network” problems. The term sounds pejorative, but it simply means that we don’t truly understand commuter traffic or electricity flow or the inner workings of the cacao genome, and as a result our highways, utility grids, and cash crops are not managed as effectively as they could be.

The good news is that we now have the technology to convert these analog distribution systems into multidirectional “smart” networks. Readily available sensor technologies like RFID chips and digital video can track movements in granular detail. Cheap data storage, powerful analytics software, and abundant computing capacity give us the ability to warehouse and make sense of all that information. With the knowledge we’re gaining, we can remake our world in a more efficient way…

…So Palmisano is encouraging his employees to think even bigger, to scout out any dumb network that can be made smarter. Because, as any self-respecting capitalist knows, in great pain lies dormant profit. “We are looking at huge problems that couldn’t be solved before. We can solve congestion and pollution. We can make the grids more efficient,” he says. “And quite honestly, it creates a big business opportunity.”

By now, you probably understand why this resonated with me; there is significant congruence with the themes explored here and that underpin the foundations of our investment thesis at Nauiokas Park. In particular, applying the amazingly powerful computing technologies that exist today to make sense of highly complex systems and networks, and of course to analyze and extract meaning from enormous and growing data sets. (Of course it’s also nice that they seem to have been inspired by our logo when designing their icon for ‘Smarter Money’!) On their website, IBM describes the opportunity they see for Smarter Money for a Smarter Planet:

Money, in other words, has been reduced to zeros and ones. It’s intangible, invisible. It’s information. Which is central both to the problem we face and to its solution.

Without question, the replacement of physical money with electronic money — and the spectrum of financial innovations that have accompanied it — have helped the world’s economy grow and prosper. But our technical and management systems haven’t kept pace. They couldn’t provide warning signals of risk concentrations, over-leveraging or underpricing. Banks could repackage risk and sell it, but they couldn’t value an individual loan in order to unwind the debt when needed. However, the same digitisation that has helped create this challenge is starting to provide the means to solve it. Intelligence is being infused into the way the world works, including our financial systems.
We’re all aware of advances like online banking, but the transformation happening underneath is far more profound.

Unprecedented computing power and advanced analytics can turn oceans of ones and zeros into insights, in realtime. Which means we could potentially have a more transparent, predictable and intelligent financial system for a smarter planet.

While it is very exciting to see a giant like IBM get behind such an intelligent and forward-thinking strategy, I must admit I was a little disappointed not to find more substance on the Smarter Planet websites. It’s not that I suspect this is just a nice marketing campaign; rather that the communications department needs to work a bit harder to plug in to the projects and ideas IBM is working on in the trenches, so to speak, to make this vision a reality. And I think they could do more to engage a wider community through their Smarter Planet Blog and/or other social communication tools. As it is now, it seems a bit sterile and very much a one-way broadcast as opposed to a two-way dialog. Indeed, one of the things I’ve tried to do – both through this blog and with our company – is to help build a community of people interested in debating and shaping the future of financial services and markets. I think we have had some success; however, I have nothing like the reach or resources of a giant like IBM, and so it would be fantastic if they were to join the conversation and amplify it far beyond our modest community.

The Fortune article concludes:

Leadership positions, as the company knows all too well, come and go. But with luck, the tone of “Smarter planet” will remain. The message – that technology can be deployed to greater ends than creating the next fetishized cellphone – is bigger than any single company. And so, too, is Palmisano’s epiphany. He deftly led IBM out of the dotcom doldrums. Perhaps more important, he has revealed a model for monetizing scientific research in a way that benefits humanity.

Sure, not everyone can afford $6 billion a year for R&D. But real innovation rarely comes from big, rich companies. With luck, IBM’s ad campaign, coupled with its blowout 2008, will call scientists and entrepreneurs to arms. They’ll see our archaic global shipping infrastructure, a dilapidated educational system, disappearing honeybees, the fraud on Wall Street, and think, I know how to fix that. And I can make a killing doing it.



Some things change everything

Carlota Perez is one of my heroes. Her fantastic articulation (in Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages) of how technological revolutions mark turning points in long economic cycles, building on the work of Schumpeter and Hayek, is in my opinion an incredible lens through which to understand long term economic growth and its effect on financial markets. Her approach is a key foundational pillar for our investment thesis, and is why we feel confident that it is possible to generate excess returns by catching the long term secular economic waves that ultimately govern capital markets. (Think of it as the polar opposite of day trading.)

In her thesis, each successive long wave of the economic cycle is initially catalyzed by a technological revolution, usually only visible in hindsight:

Perez - Recurring phases of each great surge in the core countries

My suspicion is that we are living through a “phase change” now (be careful, “now” in this context means a period of a few years, not “today” or “this quarter”…) – and so I’ve been wondering what will come to be seen as the foundational technological revolution of the sixth paradigm. The previous five, as defined by Perez, are below:
Perez - Approximate dates of the installation and deployment periods of each great surge of development

There are many possibilities, but I’m starting to think that the transition to cloud computing (enhanced by ubiquitous wireless connectivity) just might be it. And for the sake of taking a punt on what might be a good symbolic starting point for this revolution, how about the launch of Amazon’s S3 and EC2 in 2006?

Google Trends:

And it just keeps getting better (via GigaOm):

Amazon today said it would bring web-scale computing power for use in workloads such as web indexing and data mining to just about anyone. The bookseller now offers MapReduce (a programming model created by Google to help deal with incredibly large data sets) using Hadoop on Amazon’s Elastic Compute Cloud and Simple Storage Service. This allows AWS customers to access the power of a Google- or Yahoo-style server and programming infrastructure to model business decisions and analyze huge sets of customer or corporate data without having to invest in thousands of servers (as well as dozens of programmers). Dana Gardner over at ZDNet says one could think of it as having access to a personal supercomputer.
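The MapReduce model mentioned above is conceptually tiny – the hard part Hadoop and AWS solve is sharding the same two functions across thousands of machines. A single-machine sketch, with toy data invented for illustration:

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal MapReduce: scatter (key, value) pairs, then fold per key."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):       # map phase
            groups[key].append(value)
    return {key: reducer(key, values)           # reduce phase
            for key, values in groups.items()}

# Toy example: count trades per currency from raw ticket lines
tickets = ["USD 100", "EUR 250", "USD 75", "GBP 10", "EUR 30"]
counts = map_reduce(
    tickets,
    mapper=lambda line: [(line.split()[0], 1)],
    reducer=lambda key, values: sum(values),
)
# counts == {"USD": 2, "EUR": 2, "GBP": 1}
```

Swap the five ticket strings for five billion and rent the machines by the hour, and you have the “personal supercomputer” Gardner describes.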

Just as Intel’s 4004 microprocessor was the catalyst for a wave of creative destruction in the 70s and 80s, will AWS prove the same for the 00s and 10s? Probably. We’re seeing it already. And it’s going to disrupt the hell out of the mastodons of industry across most sectors of the economy. Why? Because their cultures and leaders are entirely ill-equipped to face such a fundamental paradigm shift. They know how to play by the old rules. The strategic competitive advantages they built up over decades risk suddenly – poof! – becoming obsolete. (from Dana Gardner:)

Think of it as having your own tuned supercomputer that you can plug gigantic data sets into and ask questions that will determine the course of your businesses for the next decade. Oh, and you can pay for the pleasure on a credit card.

This high-end BI value has pretty much been the sole purview of large, skilled and deep-pocketed enterprises. But there are plenty of people, researchers, government agencies, academics, small to medium enterprises, venture capitalists and the like that would hugely benefit from sussing out important trends and findings from the growing reams of raw data generated by modern businesses and societies. Talk about metadata on steroids!

“This high-end BI value has pretty much been the sole purview of large, skilled and deep-pocketed enterprises.” Not anymore… Think about that for a moment.

Size used to be an advantage in almost any industry…now? Not so much. New rules, new winners.

Thought experiment: Let’s take, oh say…banking. Which would you rather run (if say your life depended on success, which I know these days is a bit far-fetched but humor me…)?

  • A greenfield start-from-scratch-bank (assuming you had access to sufficient capital to get started, say $100 million or so)? Or,
  • [insert favorite megabank here] (assuming you had access to sufficient capital to not be immediately insolvent, say $100 billion or so)?

Well, unless you are a sociopath as per Hugh MacLeod of Gapingvoid, and see the key metric of success as being how many people report to you and whether or not global political leaders will take your call, I think the answer is pretty bloody obvious.

So what does all this mean? Well, for us it means investing in companies that are positioned to ride this wave (not build a levee against it, hoping it won’t break.) Some – like CohesiveFT – are right in the heart of the technology facilitating this new paradigm. Others, like our most recent investments Zoopla and FX Capital Group, are building new business models adapted to the new technological landscape that will allow them to disrupt and extend existing markets. But it also means remembering that you can be right (about the future) but still not come out on top:

The network is the computer. Sun Microsystems (1982-2009)

These really are incredibly exciting times.

UPDATE (Jan 2011): Great graph from Cloudkick via The Economist:

Growth in virtual machines on AWS

UPDATE (Apr 2011): Nice summary of a recent MSFT white paper on the (overwhelming) economic advantages of cloud computing.


Semantic, shemantic…rich, open data is what we want.

A few weeks ago, I issued a call to action with respect to creating a W3C working group focused on advancing the implementation of semantic web technologies and approaches in the financial services domain. Chris kindly responded, and in particular took issue with the usefulness/appropriateness of using (the existing) semantic web toolkit (RDF triples, OWL, etc.) due to their innate complexity. He writes:

At the moment though it seems that you have a problem if you need somebody that understands derivatives OR XML schema, and a real headache if you want somebody that understands derivatives AND XML schema.

Two thoughts. Firstly, I’m not a developer, and so my enthusiasm for any particular solution or outcome with respect to software and code is necessarily (due to my lack of knowledge) more conceptual than practical: i.e. it is hard for me to have a robust opinion on the underlying path taken to achieve a certain result. What excites me, and what I think is important, is to build on the technologies of the web and ultra-cheap storage and bandwidth to create rich, linked, open data and metadata sets in finance. Much of course already exists, as Chris correctly points out, but so much more remains to be done. Secondly, it might be ‘a real headache’ but what 21st century finance needs is exactly people that understand derivatives and XML schemas.* I’m sorry, but it isn’t that hard to develop these kinds of people, and this, in a nutshell, encapsulates the vision we had for Digital Markets at DrKW several years ago. I suspect that many Digital Generation finance professionals already fit (or could easily fit) these criteria. (The barriers, however, are cultural. When we built Digital Markets, while I knew there would be many challenges, I completely underestimated just how threatening such a vision was to the status quo: in particular, the very idea of calling into question the distinction between front and back office staff, even if just a subtle blurring of the line for a few dozen employees, caused the corporate antibodies to go on full alert. Removing the distinction between star-belly and plain-belly Sneetches was not something the organization was ready to condone.)
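Whatever one thinks of RDF and OWL as toolkits, the core idea of linked financial data is just subject-predicate-object statements that anyone can query and join. A plain-Python sketch – the bond, issuer and predicate names are all invented for illustration:

```python
# Each fact is a (subject, predicate, object) triple; much of the
# "semantic" machinery reduces to querying sets of these.
triples = {
    ("bond:XS0001", "has_issuer",   "issuer:ACME"),
    ("bond:XS0001", "has_currency", "EUR"),
    ("bond:XS0001", "has_coupon",   "5.25"),
    ("issuer:ACME", "has_rating",   "BBB+"),
}

def query(triples, subject=None, predicate=None, obj=None):
    """Return every triple matching the non-None fields."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# "Everything we know about bond XS0001":
facts = query(triples, subject="bond:XS0001")
```

The value appears when one firm’s triples about the bond link (via the shared identifiers) to another’s triples about the issuer – which is exactly the rich, linked, open data I’m after, whatever serialization wins.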

Indeed one of the most important and valuable objectives of setting up such a working group (whether or not it is under the auspices of the W3C – although I lean toward not reinventing the wheel and building on the existing infrastructure of such a collaborative industry forum, and think there is a better cultural fit with the objectives of such a project at the W3C than at, say, any financial sector industry association…) is to create a focal point – not a gatekeeper (!) – for the community to innovate around a common theme and purpose. Another is to cultivate a shared respect for and understanding of the value of open standards, something that is taken for granted in many other industries but is still anathema on Wall Street and in the City. Of course there are glimmers of light to be seen in things like the FIX Protocol, but even here the underlying cultural mindset was more Microsoft than Unix… Way back in 2003-ish (?), when I was running syndicate at DrKW, we published (on the web) an XML schema describing a new bond issue, the goal being to help others create e-bookbuilding platforms that would be able to communicate with ours. (I tried to find the link but was unable to; any current DKIB folks know if it is still live?) Pretty tame stuff, right? Well, suffice to say the reactions of our/my peers were various combinations of:

  1. what the hell is XML and what are you guys on about?
  2. you guys have the best e-bookbuilding platform, why on earth would you give away your data structure???
  3. is this some kind of trojan horse? what are you trying to pull?
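For the curious, the kind of machine-readable new-issue announcement described above takes minutes to produce with standard tools. A hypothetical sketch – the original DrKW schema is lost to me, so every element name and value here is invented for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical new-issue announcement (invented names, not the DrKW schema)
issue = ET.Element("NewIssue")
ET.SubElement(issue, "Issuer").text = "ACME Finance BV"
ET.SubElement(issue, "Currency").text = "EUR"
ET.SubElement(issue, "Size").text = "500000000"
ET.SubElement(issue, "Maturity").text = "2010-06-15"
ET.SubElement(issue, "CouponGuidance").text = "mid-swaps + 45-50bp"

announcement = ET.tostring(issue, encoding="unicode")
# Any rival e-bookbuilding platform that parses this structure can now
# exchange deal terms with yours -- which was precisely the point.
```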

I no longer work day to day in a big institutional banking environment, so it’s hard for me to judge how much, if at all, these attitudes have evolved over the past couple of years. I may be naive, but I don’t see why we assume finance and derivatives professionals can understand (and apply) concepts like convexity, yet balk at expecting them to understand ontology and its implications. I thought these folks were supposed to be clever. In my view it’s about leadership: if the folks in the corner office think ontology is important, so will the rank and file.

So maybe semantic web tools aren’t the only – or even the most important – path to enabling my vision; I’d still think it would be useful to catalyze a more formal community of interest around creating a truly rich set of linked data in financial services and markets. And I hope Chris, and others like him, would be keen to get involved.

* If a few more of these kinds of people had populated the top of securitization groups over the last several years, we might have avoided some of the worst excesses; securitization is nothing but managing vast and complex sets of (inter-related) data. Data quality is more important than credit quality: garbage in, garbage out…


e^x is for data analytics


As you know, one of the foundational pillars of our investment thesis here at Nauiokas Park is the migration of value in many (most?) markets from transactions (matching, broking) to data. Quick and dirty: technology is driving the marginal cost of matching buyers and sellers to zero, and is taking the ability to collect, store and analyze previously unimaginable amounts of data and metadata to a different dimension. The value (and the creativity and innovation, from a business-model point of view) now lies in thinking up ways to harness this new ability to good effect. The possibilities seem vast to us, and we love discovering clever entrepreneurs and technologists who identify opportunities along this vector.

If you are a data geek (or just a wannabe/groupie like me), you need to add Joshua Reich’s i2pi blog to your RSS feed. Not only does he know a lot about data and technology, but he can leverage that knowledge through his excellent and lucid understanding of markets and business:

The premise that led us to this mess was that with only a modicum of data and some threadbare models trading would be the final arbiter of value and the collective intelligence of efficient markets would result in fundamentally sound pricing. Now that liquidity has gone from the markets, traders of these illiquid instruments are bulking up their data and models to try and better their understanding of fundamental value. And so it is that when markets are liquid the market relies on trading to assimilate the information of individual agents. Without this method of price discovery these agents need to gather their own data as the market no longer performs the role of grand aggregator. Data trades inversely to liquidity.

And he gives great math lessons too (which is great for those of us having mid-life worries about having forgotten more than we’ve remembered…) He’s just (re)started his consulting business i2pi, but I’ve got my eye on him for my new bank, so if you are interested in his services, you’d better move quickly! 😉


Call to action: cultivating the Semantic Web for Finance.

Serendipity. Possibly my favorite word. I was doing a quick scan of my RSS feeds this morning and saw Juliana’s post on Tim Berners-Lee and DBpedia – which I was thrilled to learn is collaborating with Freebase (created by Danny Hillis’ MetaWeb – a company I’ve been following since its inception). From there I stumbled across the W3C Semantic Web Activity homepage, where I noticed there was a Semantic Web Health Care and Life Sciences (HCLS) Interest Group:

The mission of the Semantic Web Health Care and Life Sciences Interest Group, part of the Semantic Web Activity, is to develop, advocate for, and support the use of Semantic Web technologies for biological science, translational medicine and health care. These domains stand to gain tremendous benefit by adoption of Semantic Web technologies, as they depend on the interoperability of information from many domains and processes for efficient decision support.

The group will:

Document use cases to aid individuals in understanding the business and technical benefits of using Semantic Web technologies.
Document guidelines to accelerate the adoption of the technology.
Implement a selection of the use cases as proof-of-concept demonstrations.
Explore the possibility of developing high level vocabularies.
Disseminate information about the group’s work at government, industry, and academic events.
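The triple-based model the HCLS group applies to life sciences maps naturally onto finance. Here is a toy sketch of the idea in plain Python (a real deployment would use an RDF store and SPARQL; every identifier below is invented for illustration): facts become subject–predicate–object triples that can be queried and joined across sources.

```python
# Toy linked-data store: facts as (subject, predicate, object) triples.
# All identifiers are made up for illustration.
triples = {
    ("bond:XS0001", "hasIssuer", "issuer:AcmeCorp"),
    ("bond:XS0001", "hasCurrency", "ccy:EUR"),
    ("bond:XS0001", "hasCoupon", "4.25"),
    ("issuer:AcmeCorp", "hasSector", "sector:Industrials"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern (None = wildcard)."""
    return {
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    }

# Everything we know about one bond, in one query:
print(query(subject="bond:XS0001"))
```

The power comes when a regulator, investor or journalist can follow the `hasIssuer` link from a bond into a completely different dataset describing that issuer, with no bilateral data-mapping project required.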

Now if I were 20 years younger, I might well be diving feet first into the realm of data, metadata, and the semantic web. In 1990, there was a lot of opportunity and value to extract if you were skillful and comfortable understanding and manipulating cashflows; being a bond or interest-rate-swap trader was both financially and intellectually rewarding. That time has passed. (Although this didn’t stop the banks from flogging the horse until well after it was dead and decomposing…) In the 2010s (the teens?), I suspect an analogous opportunity will exist for those who have mastered the art of managing or “trading” data. Hal Varian at Google articulates this well:

I keep saying the sexy job in the next ten years will be statisticians. People think I’m joking, but who would’ve guessed that computer engineers would’ve been the sexy job of the 1990s? The ability to take data—to be able to understand it, to process it, to extract value from it, to visualize it, to communicate it—that’s going to be a hugely important skill in the next decades, not only at the professional level but even at the educational level for elementary school kids, for high school kids, for college kids. Because now we really do have essentially free and ubiquitous data. So the complementary scarce factor is the ability to understand that data and extract value from it.
I think statisticians are part of it, but it’s just a part. You also want to be able to visualize the data, communicate the data, and utilize it effectively. But I do think those skills—of being able to access, understand, and communicate the insights you get from data analysis—are going to be extremely important. Managers need to be able to access and understand the data themselves.

I may no longer be young enough to master a completely new domain like this, but I think I’m wise enough to spot something important when I see it. And the semantic web and financial markets were made for one another. But even if I had the time, I don’t have the knowledge or the skills to get a Financial Services and Markets Interest Group up and running, even though the mission statement is pretty much a cut and paste from the one above. But I am fairly confident that amongst the very clever readers of the Park Paradigm and beyond – amongst your network of friends and colleagues – there is the Ocean’s 11 dream team needed to make this happen. And I’d be thrilled just to ‘hang around the edges’ shouting out ideas from the peanut gallery and pouring coffee so to speak.

This is big. This is important. President Obama calls for transparency in financial markets (hallelujah!): the financial semantic web is an important piece in that puzzle. Perhaps Secretary Geithner and the President’s Working Group on Financial Markets can lend moral and financial support to this project?

JP? Malcolm? Phil? Don? Pat? Chris? Roger? Bueller? Perhaps this is something that David Leinweber at CIFT can help catalyze?

As they say, ideas on a postcard!


If I had one two mbillion dollars…

If I had a billion dollars. (If I had a billion dollars.)
Well I would buy you a Skype. (I would buy you a Skype.)
I would buy a Twitter for your Skype (so you could tweet and chat and call all your friends.)

(…with apologies to those great Canadians – Barenaked Ladies)

CNET asks “Is Skype for Sale?”

The news has left many in the industry wondering if eBay will put Skype, which it paid a hefty $2.6 billion to buy in 2005, on the auction block. Donahoe had said last year that eBay would consider selling the business unit if it couldn’t be integrated with its auction or PayPal payment system.

And according to statements made during the conference call, it looks like Donahoe doesn’t think there is much the Skype technology can do to help eBay’s other businesses. When asked what eBay was doing to add shareholder value to Skype, Donahoe admitted that “the synergies between Skype and the other parts of our portfolio are minimal,” the paper said.

Well if it were up to me, I’d sell eBay – maybe Ken Lewis at BoA might be interested, would look innovative and might distract the federales from the Afghanistan that is the Merrill acquisition – and keep Skype. eBay could have been the Betfair of consumer goods, instead it became the Microsoft of marketplaces…

Anyhow, I’d buy Skype. Maybe not for $2 billion, but I think it is potentially a very valuable asset and I’m convinced that it is not even scratching the surface of its potential. The problem is that they seem to be trapped in linear thinking with respect to their business model. Selling minutes and value-added telco services. A telco. An alternative and innovative telco. But a telco. Nothing wrong (well, you know what I mean…) with telcos, but if you want to buy a telco, buy BT – it’s a lot cheaper. And it’s not just management (that can’t think out of the box) – it’s the press, analysts etc.:

So an acquirer would likely be buying Skype for its 370 million registered users, which is nothing to sneeze at. But the big question is how much money can be made from these users? Sure, people love using Skype’s free services, but most of its revenue is made from a small portion of its users. Skype generates most of its revenue from its SkypeOut service, which charges users to make calls from the Skype service to regular landline phones and cell phones.
The SkypeOut revenue stream is sufficient to sustain Skype’s business model today, but as IP networks are deployed throughout the world and all communications becomes IP-enabled, there will be fewer opportunities to make money from connecting Skype calls to the regular phone network. What’s more, as Skype adds more subscribers, those users are more likely to talk to one another over the free Skype-to-Skype network rather than paying to call these friends and family on regular phones. Of course, it will likely take years for this scenario to play out, but this fact could color a potential acquirer’s willingness to pay a premium for the service.
“As more people adopt Skype, there’s potential for the asset to peak in value,” Friedland said. “It won’t likely happen for another five to eight years. And unless Skype comes up with a new meaningful revenue driver, it could start to decline.”

370 million registered users. Three hundred and freakin’ seventy million. And growing. Fast. And more people joining is a bad thing?!?

Let’s just pause here for a moment. So Mr. Friedland, if Skype ended up having say one or two billion – BILLION – registered users and so like became the de facto communications substrate for the vast majority of the connected citizens of the planet, that would be…ummmm…bad?

There are a hundred and one ways to bootstrap amazing, profitable, cash-generative businesses off of Skype’s brilliant platform and installed base, and they are all in my new book: Managing Skype for Dummies. Actually, I didn’t write it. And its usual title is The Cluetrain Manifesto, but still…

1. Markets are conversations.

I don’t know what Meg was thinking (those of you who listened to the eBay analyst webcast and pored over the accompanying presentation the day eBay announced it was buying Skype will surely remember that at the end of both you were even more confused than at the beginning…) But even if it was by accident, she was on to something (admittedly she did get a bit punchy with the pricing, although if she had paid in paper instead of cash…) It’s just that that something wasn’t being able to call EvilRabbit467 and haggle over the price of an iPod nano to ‘close the deal’…

Seriously, if I were the captain of some vast private investment capital pool, I would be sitting around with my partners and a handful of clever young associates putting together a plan for Skype. But if I were Donahoe, I’d spin Skype out to my shareholders as a separate listing; this would create value and, possibly more importantly – especially in these interesting times – give Skype an explicit valuation and an acquisition currency. Then it gets interesting.

Let’s talk.


Data, AI, Web, Repeat.

Today Zoopla! announced a further GBP3.75 million investment round, in which we are very excited to be participating alongside Atlas Venture and Octopus Ventures. I will also be joining their advisory board. We were first introduced to the very talented founder and CEO, Alex Chesterman, by my friend Fred Destin almost a year ago, after I had congratulated him on Atlas’ original investment in Zoopla and had expressed my admiration for Zoopla’s site and approach.

So what is Zoopla!? In their own words: Zoopla! is a unique property website offering users information and tools to help them make better-informed property decisions. Our aim is to provide the most comprehensive source of residential property market information in the UK to help buyers, sellers, owners and estate agents alike and give them an advantage in the property market…

…We have started by providing FREE value estimates, sold prices and local information as well as letting users add content by editing information and uploading photos. We are the UK’s fastest growing property website and by far the largest and most active property community in the UK, with over a million user contributions to our website in 2008 alone…

…Our value estimates are calculated using a proprietary algorithm (a secret formula) that we have developed by analysing millions of data points relating to property sales and home characteristics throughout the UK. The algorithm works by comparing relationships between home prices, economic trends and property characteristics in given geographic areas. Our estimates are constantly refined, using the most recent data available and a variety of statistical methodologies, in order to provide the most current information on any home.

We are still testing and improving our features and tools and recognise that things aren’t perfect yet…
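For intuition only – Zoopla’s actual algorithm is, as they say, a proprietary secret – here is a deliberately naive “comparables” estimator in the same spirit: take recent nearby sales, price them per square foot, and weight them by similarity to the target home. The similarity kernel and all the numbers are invented:

```python
def estimate_value(target_sqft, comparables):
    """Naive comparables-based valuation (illustrative only).

    comparables: list of (sold_price, sqft) for nearby recent sales.
    """
    weighted = []
    for price, sqft in comparables:
        # Closer in size -> higher weight (hypothetical similarity kernel).
        weight = 1.0 / (1.0 + abs(sqft - target_sqft))
        weighted.append((price / sqft, weight))
    total_weight = sum(w for _, w in weighted)
    price_per_sqft = sum(pps * w for pps, w in weighted) / total_weight
    return price_per_sqft * target_sqft

comps = [(300_000, 1000), (390_000, 1200), (280_000, 950)]
print(round(estimate_value(1100, comps)))
```

A real model layers on location, property characteristics, economic trends and constant re-estimation – which is exactly why the dataset, not the formula, is the moat.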

So what’s so interesting about Zoopla!? Or perhaps more specifically, how does Zoopla fit into Nauiokas Park’s investment universe? Two words: rich data.

  1. In Zoopla, Alex and Simon Kain (co-founder and CTO) have leveraged the web to feed intelligent algorithms that allow them to bootstrap basic, publicly available data into an increasingly robust, accurate, rich and granular dataset of UK residential property.
  2. They have built the site in a way that naturally compels visitors to improve and enrich the dataset. This user-generated data is not only very valuable but is itself subject to Metcalfe’s Law, and so adds tremendously to the sustainable advantage of the site and their database. This is not trivial. When I was running a Credit Trading business, the quality of complex data was absolutely critical to running the business efficiently and to effective risk management. We, like other banks, were plagued with bad-quality (inconsistent, out-of-date, missing, etc. etc.) data. As part of the ‘web-ification’ of our business (pre-Digital Markets stuff), one of the single most effective things we did was to expose our various data structures to broad populations of users within the bank and allow them to correct and enhance the data on an ad hoc basis. Of course the ‘data priests’ were aghast…but it worked. Really, I think it’s just applying a variation of Linus’ Law: “given enough eyeballs, all bugs are shallow.”
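That “enough eyeballs” approach to reference data can be sketched as a simple promotion rule: a user-proposed correction only replaces the golden-copy value once enough independent users have proposed the same thing. The threshold and the field values below are invented for illustration:

```python
from collections import Counter

def reconcile(current_value, proposals, threshold=3):
    """Promote a crowd-proposed correction into the golden record.

    proposals: values submitted by distinct users for the same field.
    The current value is kept unless some proposed value reaches the
    (hypothetical) agreement threshold.
    """
    counts = Counter(proposals)
    value, votes = counts.most_common(1)[0] if counts else (current_value, 0)
    return value if votes >= threshold else current_value

# Two users say "Acme Corp PLC", three say "Acme Corp plc":
fixes = ["Acme Corp PLC", "Acme Corp plc", "Acme Corp plc",
         "Acme Corp PLC", "Acme Corp plc"]
print(reconcile("Acme Crop plc", fixes))  # → Acme Corp plc
```

Our actual process at the bank was ad hoc rather than algorithmic, but the principle is the same: many independent users converge on the right answer faster than any priesthood.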

But how does a unique, rich, ever-improving, granular, transparent database of UK property prices fit with Nauiokas Park’s focus on disruptive business models and technologies in financial services and markets? Well, we think Zoopla is ideally positioned to drive and benefit from a fundamental shift in the economic structure underlying the property markets (a theme regular readers will recognise): the shift from a market predicated on information scarcity to one built on information abundance. And you don’t even have to be particularly clever to work out how this is likely to play out, as property is the ith market in a series of [N] markets to have this thrust upon them. I don’t want to give too much away, but for the City types out there, just think back to the bond markets of 1990. (For Wall Street types, you only have to think back to, oh, about 2004…) All other things being equal, as this “phase change” occurs in an industry, value moves away from transactions (matching) to data. (Think Merrill Lynch vs. Bloomberg LP over the past few years as a reasonable pair trade in this vein. Or all investment banks vs. Markit Group…)

Post-2008, even the proverbial man in the street knows there was a data… how would you say… “issue”… when it came to the intersection of residential property and finance… Now I’m not suggesting (not quite, anyways) that had Zoopla existed and been well established globally years ago, the sub-crimeprime crisis would not have occurred (stupid is as stupid does)…but having easy access to the kind of readily “digestible” data available from Zoopla would clearly have been a boon to any responsible mortgage underwriter or securitization professional. In fact, I’d go so far as to say that were I an institutional investor in UK RMBS today, I would require that the underwriters/originators of the pools provide me with an FTP feed of the individual Zoopla data for every property in the pool. And if I were running, say, a big UK mortgage book and/or originator, I would certainly be interested in having an independent, automated, external mark-to-market run at least monthly, probably weekly…you get the idea.

And finally, whenever you have good, digital, reproducible data, well, there my friend you have the makings of a myriad of listed and OTC markets in that underlying. Think Case-Shiller, only better.
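To see how tradable contracts could hang off such a database, consider a deliberately naive price index: mean price per square foot each period, rebased to 100. (Real indices like Case-Shiller use repeat-sales regressions, so treat this purely as a sketch with made-up numbers.)

```python
def price_index(periods):
    """Naive house-price index, rebased to 100 at the first period.

    periods: list of periods, each a list of (sold_price, sqft) sales.
    """
    levels = []
    for sales in periods:
        avg_price_per_sqft = sum(p / s for p, s in sales) / len(sales)
        levels.append(avg_price_per_sqft)
    base = levels[0]
    return [round(100 * level / base, 1) for level in levels]

q1 = [(300_000, 1000), (250_000, 900)]
q2 = [(320_000, 1000), (270_000, 900)]
print(price_index([q1, q2]))
```

A settlement-grade index would need to control for the changing mix of homes sold each period – which, again, is precisely what a rich, granular property database makes possible.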

We are truly excited by the myriad of business opportunities available to Zoopla as it continues to grow and improve its core database and builds products and services on top, but perhaps most exciting is being able to participate once again at the early stages of a company that is set to play a key role in transforming an important and large marketplace, reducing friction and creating an entirely new value paradigm. Even reminds me a little of another UK start-up you might have heard of called Betfair… And we can’t wait to see what Alex and the team will achieve in the next few years and look forward to helping them in any way we can.

So, if you live in the UK, what are you waiting for? Go Zoopla! your home, claim it, enhance the data and presto, you now have effectively a pretty good proxy ticker-tape for (probably) the most important asset you own.
