Chris Anderson, the author of the paradigm-shifting book The Long Tail, is reportedly writing a book that examines the economics of abundance, appropriately called Free. In it, the argument is that money can be made by giving things away to customers. In the internet/web 2.0 space, you can see that happening everywhere. I can already get more free storage from my Google Mail account than I am allowed to have on our corporate network, for which there is a relatively hefty charge to my cost centre.
The economics of free might work on the internet, but they certainly don’t in bank-grade computing. At least, they don’t when you do it with a traditional approach.
So I’ve begun to wonder what the trend towards utility computing really means for banks. Initially, the argument went, we’d provide IT services to the business in a pay-as-you-go format, with capacity sharing and a net reduction in costs all round. All that was required was decent virtualisation technology and a robust billing engine, and we were there. But lately, I’ve been wondering if this is more of an accounting innovation than anything else, and if the true opportunity is waiting just around the corner.
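To make the pay-as-you-go idea concrete, here is a minimal sketch of the metering a chargeback billing engine might do. Everything in it – the resource names, the rates, the usage figures – is hypothetical, purely for illustration.

```python
# Hypothetical internal chargeback: bill each cost centre only for
# the virtualised capacity it actually consumed this month.
UNIT_RATES = {                 # illustrative rates, not real prices
    "cpu_hours": 0.12,         # per virtual CPU hour
    "gb_stored": 0.05,         # per GB-month of storage
    "gb_transferred": 0.02,    # per GB moved over the network
}

def monthly_charge(usage: dict) -> float:
    """Sum metered usage multiplied by rate for one cost centre."""
    return sum(UNIT_RATES[resource] * amount
               for resource, amount in usage.items())

# A department that used 5,000 CPU hours, stored 200 GB and moved 50 GB:
print(monthly_charge({"cpu_hours": 5000, "gb_stored": 200, "gb_transferred": 50}))
# -> 611.0
```

The engine itself is trivial, which is rather the point: the hard, expensive part is the up-front virtualisation work that makes usage meterable at all.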
I’m asking myself if we actually need to run a data centre. Or a network. Is there anything so specific in our portfolio of workloads that we must hand-craft all these infrastructures to make the bank go? Our workloads have special requirements driven by the nature of the banking business, but that’s true of all industries and sectors. Our special requirements are probably no more complicated than anyone else’s.
Why wouldn’t we just take these workloads and run them wherever cheap cycles are available? Cheap cycles mean cost-effective data centres with cheap, clean power and good network connections – not necessarily the attributes of our current data centres. The same goes for storage: where can we buy the most reliable bytes for the buck? The network? Just use the public internet and run secure virtual links over the top. Google and its ilk have been in the data centre business for a far, far shorter time than we have, but their capabilities seem well advanced compared to our own. That’s because the data centre is their key focus, and it isn’t ours. We’re a bank. We’re about managing money, not compute resources.
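As a toy illustration of chasing cheap cycles, here is a sketch that places a workload purely on price. The locations and prices are invented for the example; a real placement decision would also weigh latency, regulation and data-residency constraints.

```python
# Toy placement decision: run the workload wherever compute is
# cheapest right now. Locations and prices are invented.
PRICE_PER_CPU_HOUR = {
    "iceland_dc": 0.08,      # cheap geothermal power
    "london_dc": 0.21,       # our own kit: expensive power and property
    "us_east_cloud": 0.10,   # commodity cloud capacity
}

def cheapest_location(prices: dict) -> str:
    """Pick the location with the lowest price per CPU hour."""
    return min(prices, key=prices.get)

print(cheapest_location(PRICE_PER_CPU_HOUR))  # -> iceland_dc
```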
In this ultimate vision of utility computing, we’d pay only for what we need and, more importantly, we wouldn’t pay for what we don’t. That’s the basic problem with utility computing when consumer and provider are, ipso facto, the same entity. We’re working hard on virtualisation at Lloyds TSB, but there’s quite a bit of investment you have to put in place up front before you can play. And then you have to find enough internal customers to make it all worthwhile.
On the other hand, the virtual data centre lets someone else put in all that capital. As we’re a bank, lending that money out gives us a much better return than buying servers and racks would.
With the right encryption layered over the top, the public internet can be as secure as leased lines, and it makes it possible to eliminate a proprietary network altogether. At the same time, eliminating fixed point-to-point connections gives us access to data centres everywhere to run our workloads. The oft-repeated argument against this – that you can’t guarantee availability and reliability – is fallacious. You can’t guarantee the availability and reliability of a single connection to the internet, but you can reasonably expect that the cloud as a whole will be there. The internet is self-healing, and adapts to failures within it. Have several connections to the cloud, and I bet you can achieve availability and reliability as good as any proprietary network’s.
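The arithmetic behind that bet is straightforward. Assuming each connection fails independently – a simplification, since links that share local infrastructure won’t – combined availability climbs rapidly with every extra link:

```python
# Combined availability of n independent connections: the service is
# unreachable only if every link is down at the same time.
def combined_availability(per_link: float, links: int) -> float:
    return 1 - (1 - per_link) ** links

# Two ordinary 99% ISP links already beat a single 99.9% leased line.
for n in (1, 2, 3):
    print(n, round(combined_availability(0.99, n), 6))
# 1 0.99
# 2 0.9999
# 3 0.999999
```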
Storage, when you do it yourself, absolutely defies the industry trend of tending towards a cost of nothing. You have to have a SAN, it has to be backed up, and it has to be redundant and disaster-resistant. Then, to manage costs in the face of constantly increasing demands from applications and users, you must have near-line and off-line archiving. And all of that must be redundant and disaster-resistant as well. These factors make storage expensive, if you do it the standard way. In the cloud, though, things are different.
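For contrast, here is what storage looks like from the consumer’s side of the cloud. This is a minimal sketch using Amazon S3 through the boto3 library, with a made-up bucket name; redundancy, backup and disaster resistance become the provider’s problem, priced into the per-gigabyte fee.

```python
import boto3

# Minimal cloud storage: durability and replication are handled by
# the provider, so "doing storage" collapses to a pair of API calls.
# "example-bank-archive" is a made-up bucket name for illustration.
s3 = boto3.client("s3")

def archive(local_path: str, key: str) -> None:
    s3.upload_file(local_path, "example-bank-archive", key)

def restore(key: str, local_path: str) -> None:
    s3.download_file("example-bank-archive", key, local_path)

archive("/backups/ledger-2008-01.tar.gz", "ledger/2008-01.tar.gz")
```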
BNP Paribas signed a deal with IBM last year to lease capacity on a shared infrastructure for compute-intensive applications, and at the time, I criticised the deal by comparing a hypothetical price against what they could have achieved using the Amazon EC2 service. I may have missed the point, which is that by taking those cycles out of their data centres in the first place, they were driving towards being totally independent of their own infrastructure. As IT is usually the second-largest cost in any bank, that’s going to give them a substantial competitive advantage if they continue the trend.
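My original comparison was back-of-the-envelope arithmetic along these lines. Every figure below is hypothetical; the shape of the calculation, not the numbers, is the point.

```python
# Hypothetical comparison: flat lease on shared infrastructure versus
# paying per CPU hour on demand. All figures are invented.
cpu_hours_per_year = 2_000_000       # compute-intensive grid workload

leased_annual_fee = 400_000          # paid whether the capacity is used or not
on_demand_rate = 0.10                # per CPU hour, billed only when used

on_demand_annual_cost = cpu_hours_per_year * on_demand_rate
print(f"Leased: {leased_annual_fee:,}  On demand: {on_demand_annual_cost:,.0f}")
# -> Leased: 400,000  On demand: 200,000
```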
In the end, we’re nowhere close to being able to implement this kind of thing right now. Even my back-of-the-envelope calculations suggest we’d be talking about a payback period of several decades, given what we’d have to change in our own application estate to make all this a reality. On the other hand, our core banking system has been delivering returns for us for nearly 30 years now, so long-term investments in technology aren’t unfamiliar.
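For what it’s worth, the envelope itself is just migration cost divided by annual savings. The figures below are invented, but they show why the answer comes out in decades when a whole application estate has to be re-engineered first.

```python
# Back-of-the-envelope payback period. All figures are invented:
# re-engineering a large legacy application estate dwarfs the
# infrastructure savings for a very long time.
estate_migration_cost = 500_000_000          # rework the application estate
annual_infrastructure_savings = 20_000_000   # data centres, network, storage

payback_years = estate_migration_cost / annual_infrastructure_savings
print(f"Payback period: {payback_years:.0f} years")  # -> 25 years
```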
Would anyone care to bet on how long it will be until everyone is getting rid of their infrastructures? Ten years? More? I think the former, with signs of it starting already: we’re getting to the point where everything is so complicated and expensive that a phase shift will have to occur before we can start to drive forward again.