Virtualise everything and put it in the cloud

Chris Anderson, the author of the paradigm-shifting book The Long Tail, is reportedly writing a book that examines the economics of abundance, appropriately called Free. In it, the argument is that money can be made by giving things away to customers. In the internet/web 2.0 space, you can see that happening everywhere. I can already get more storage for free from my Google Mail account than I am allowed to have on our corporate network, for which there is a relatively hefty charge to my cost centre.

The economics of free might work on the internet, but they certainly don’t in bank-grade computing. At least, they don’t when you do it with a traditional approach.

So I’ve begun to wonder what the trend to utility computing really means for banks. Initially, the thought was that we’d provide IT services to the business in a pay-as-you-go format, with capacity sharing and a net reduction in costs all round. All that was required was decent virtualisation technology and a robust billing engine, and we were there. But lately, I’ve been wondering if this is more of an accounting innovation than anything else, and if the true opportunity is waiting just around the corner.
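
As a rough illustration of what that pay-as-you-go model boils down to, here is a minimal chargeback sketch in Python. The rates and the usage record are entirely hypothetical placeholders, not anything drawn from a real billing engine.

```python
from dataclasses import dataclass

# Hypothetical internal chargeback rates -- illustrative placeholders only.
RATE_PER_CPU_HOUR = 0.05   # currency units per virtual CPU hour
RATE_PER_GB_MONTH = 0.20   # currency units per GB stored per month

@dataclass
class UsageRecord:
    cost_centre: str
    cpu_hours: float
    storage_gb_months: float

def monthly_charge(record: UsageRecord) -> float:
    """Pay-as-you-go: each cost centre is billed only for what it consumed."""
    return (record.cpu_hours * RATE_PER_CPU_HOUR
            + record.storage_gb_months * RATE_PER_GB_MONTH)

if __name__ == "__main__":
    usage = UsageRecord("retail-payments", cpu_hours=12_000, storage_gb_months=500)
    print(f"{usage.cost_centre}: {monthly_charge(usage):.2f}")
```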

I’m asking myself if we actually need to run a data centre. Or a network. Is there anything so specific in our portfolio of workloads that we must hand-craft all these infrastructures to make the bank go? Our workloads have special requirements driven by the nature of the banking business, but that’s true of all industries and sectors. Our special requirements are probably no more complicated than anyone else’s.

Why wouldn’t we just take these workloads and run them wherever there are cheap cycles available? Cheap cycles means cost-effective data centres with cheap, clean power and good network connections – not necessarily the attributes of our current data centres. The same goes for storage: where can we buy the most reliable bytes for the buck? The network? Just use the public internet and run some secure virtual links over the top. Google and its ilk have been in the data centre business for a far, far shorter time than we have, but their capabilities seem to be quite advanced compared to our own. That’s because their key focus is the data centre, and ours isn’t. We’re a bank. We’re about managing money, not compute resources.
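
To make the “run it wherever cycles are cheap” idea concrete, here is a toy placement sketch. The site names, prices and latency figures are invented for illustration; the point is simply that placement becomes a pricing decision rather than an infrastructure one.

```python
# Toy placement: send a workload to whichever site currently offers the
# cheapest compute, provided it meets the workload's latency needs.
# Site names, prices and latencies are hypothetical.
sites = [
    {"name": "dc-iceland", "price_per_cpu_hour": 0.030, "latency_ms": 45},
    {"name": "dc-london",  "price_per_cpu_hour": 0.055, "latency_ms": 5},
    {"name": "dc-oregon",  "price_per_cpu_hour": 0.028, "latency_ms": 140},
]

def place_workload(max_latency_ms: float) -> dict:
    """Pick the cheapest site whose latency is acceptable for the workload."""
    eligible = [s for s in sites if s["latency_ms"] <= max_latency_ms]
    return min(eligible, key=lambda s: s["price_per_cpu_hour"])

print(place_workload(max_latency_ms=100)["name"])   # -> dc-iceland
```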

In this ultimate vision of utility computing we’d pay for only what we need. More importantly, we wouldn’t pay for what we don’t need. That’s the basic problem with utility computing when consumer and provider are, ipso facto, the same entity. We’re working hard on virtualisation at Lloyds TSB, but there’s quite a bit of investment you have to put in place up front before you can play. And then you have to find enough internal customers to make it all worthwhile.

On the other hand, the virtual data centre lets someone else put in all that capital. As a bank, lending that money gives us a much better return than buying servers and racks.

The public internet, with the right encryption layered over it, can be as secure as leased lines, and it makes it possible to eliminate a proprietary network altogether. At the same time, eliminating fixed point-to-point connections gives us access to data centres everywhere to run our workloads. The oft-repeated argument against this – that you can’t guarantee availability and reliability – is fallacious. You can’t guarantee the availability and reliability of a single connection to the internet, but you can reasonably expect that the cloud as a whole will be there. The internet is self-healing and adapts to failures within it. Have several connections to the cloud, and I bet you can achieve availability and reliability as good as any proprietary network’s.
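
A quick back-of-the-envelope illustration of why several connections change the picture, assuming purely for the sake of argument that each independent link is available 99% of the time:

```python
# Availability of n independent connections to the cloud, each up a fraction
# `per_link` of the time. The 99% figure is an illustrative assumption only.
def combined_availability(per_link: float, n: int) -> float:
    return 1 - (1 - per_link) ** n

for n in (1, 2, 3):
    print(f"{n} connection(s): {combined_availability(0.99, n):.6f}")
# 1 connection(s): 0.990000
# 2 connection(s): 0.999900
# 3 connection(s): 0.999999
```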

Storage, when you do it yourself, absolutely defies the industry trend towards a cost of nothing. You have to have a SAN; it has to be backed up, redundant and disaster-resistant. Then, to manage costs in the face of constantly increasing demands from applications and users, you need near-line and off-line archiving. And all of that must be redundant and disaster-resistant as well. These factors make storage expensive if you do it the standard way. In the cloud, though, things are different.
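
From an application’s point of view, “different” looks something like the sketch below: a write and a read against a cloud object store, with replication, backup and archiving left to the provider. The sketch assumes an S3-compatible store and the boto3 client library; the bucket and key names are made up.

```python
import boto3

# Assumes an S3-compatible object store with credentials configured in the
# environment; the bucket and key names below are hypothetical.
s3 = boto3.client("s3")

# Write: durability, replication and archiving are the provider's concern,
# not a SAN, a backup schedule and an off-line archive we run ourselves.
s3.put_object(
    Bucket="example-bank-archive",
    Key="statements/2007/09/acct-123.pdf",
    Body=b"...statement bytes...",
)

# Read it back on demand.
obj = s3.get_object(Bucket="example-bank-archive",
                    Key="statements/2007/09/acct-123.pdf")
print(len(obj["Body"].read()), "bytes retrieved")
```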

BNP Paribas signed a deal with IBM last year to lease capacity on a shared infrastructure for compute-intensive applications, and at the time I criticised the deal by comparing a hypothetical price against what they could have achieved using Amazon’s EC2 service. I may have missed the point, which is that by taking those cycles out of their own data centres in the first place, they were driving towards being totally independent of their own infrastructure. As IT is usually the second largest cost in any bank, that’s going to give them a substantial competitive advantage if they continue the trend.

In the end, we’re nowhere close to being able to implement this kind of thing right now. Even my back-of-the-envelope calculations suggest we’d be talking a payback of several decades, given what we’d have to do to our own application estate to make all this a reality. On the other hand, our core banking system has been delivering returns for us for nearly 30 years now, so long-term investments in technology aren’t unfamiliar.
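
For what it’s worth, the shape of that back-of-the-envelope sum is nothing more sophisticated than the following; every figure is a made-up placeholder rather than a real number from our estate.

```python
# Back-of-the-envelope payback period. All figures are hypothetical
# placeholders, not the actual numbers behind the estimate above.
migration_cost = 600_000_000   # one-off cost of re-architecting the application estate
annual_saving = 20_000_000     # yearly infrastructure saving once migrated

payback_years = migration_cost / annual_saving
print(f"Payback in roughly {payback_years:.0f} years")   # ~30 years: several decades
```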

Would anyone care to bet how long it will be until everyone is getting rid of their infrastructures? 10 years? More? My money is on the former, with signs of it starting already: we’re getting to the point where everything is so complicated and expensive that a phase shift will have to occur before we can start to drive forward again.

6 Responses to “Virtualise everything and put it in the cloud”

  1. September 5, 2007 at 8:32 pm

    James–
    I’ve been thinking some of the same things lately. Banks and CUs aren’t in the business of running data centers; they are in the business of serving their customers and members. The cloud concept you’ve spoken of could dramatically change the fundamental ways that banks and credit unions currently run their networks and core processing operations.
    I do think, however, that many credit unions, and I’m assuming banks as well, have many security concerns with outsourced virtualization and cloud computing. To be successful, I think people need to address those security concerns first. If you can overcome the security hurdle, it will look pretty promising.

  2. September 6, 2007 at 8:49 am

    Yes, the security concerns are the most significant blockers. It requires a fundamental shift in the way that applications are architected – they have to start taking care of themselves. That is something most people would be doing anyway as part of the move to service-oriented architectures, where you simply can’t trust that the caller is going to be well behaved…

  3. September 6, 2007 at 10:48 am

    You’re right, James. The Internet provides the world’s most resilient and modern network architecture. Its non-specific hardware framework and multi-ownership model facilitate “viral” development, where it is constantly refreshed with faster and better networking components. Something any corporate network manager could only dream about!
    But it is just that, a network. When the subject broadens to storage, you’re now talking about a single owner, such as Google, Amazon or whoever.
    With small to medium and even some larger business models, this is a compelling proposition. No backups, no maintenance and no hardware refresh requirements can save businesses a fortune. But banks?
    I’ve always been an advocate of looking at a technology problem from a human perspective. Banks in the UK at least are merely agents of the Bank of England. Our currency is merely a promissory note for the gold held in the Bank of England’s vaults. So why not manage the information storage the same way?
    All the banks could share a central storage facility at the BofE “VLANed” off to individual banking houses, but connected via the Internet for ultimate resilience. A dual mirror site elsewhere would provide DR and load balancing.
    This shared service wouldn’t be free, I grant you, with costs shared five, six or more ways. Savings would come from not requiring specific, individual SOX, Basel II, FSA accountability, compliance and governance processes, which a free storage model as you described would demand.
    Imagine the freedom that having no storage management worries would give a bank.
    They would be free to really think about smart data presentation solutions without the constraints they have currently. Data payments would be instant, as transactions would be within the same network area. The benefit implications of this would be truly enormous.
    Thanks for stimulating thought on this, James, well done!

  4. anon
    September 12, 2007 at 8:10 pm

    Your utility processing cycles are ready and here, at http://www.sun.com/service/sungrid/index.jsp.
    As a bank, we just need to move to a forward-looking vendor instead of our current incumbent supplier.
    Good talk this morning. I realise it probably means I’m out of a job – but finally someone at your level is talking openly and sensibly.

  5. September 13, 2007 at 10:28 pm

    I can’t see how the utility model can really be animated until there is some kind of identity and identity management infrastructure that is integrated into it. The cost of “security” must surely be far greater than the cost of the processor cycles, right?

  6. November 16, 2007 at 7:20 pm

    IT the second biggest cost for banks? Actually for ours it is real estate. In terms of IT – network/voice costs are #1.
