How a Private Cloud Stacks Up at State Street
March 22, 2011
Chris Perretta and a project team at State Street are putting in place an "infinitely scalable software mainframe."
In today's jargon, they're installing a private cloud of computing power at the $9 billion a year asset management, asset servicing, investment research and trading firm.
"There's hype in it,'' Perretta says of 'cloud computing,' "But there's substance as well."
State Street's private cloud is not one that customers reach over the public Internet, on computers run and maintained by someone else. State Street runs and maintains its own systems -- and is directly responsible for making sure its cloud of services is safe and secure for doing business, and for delivering the savings that thousands of commodity servers and special-purpose storage units can yield when capacity is constantly apportioned to tasks as needed.
"The fact of the matter is we're big enough to get the economies even to ourselves." Perretta told Money Management Executive, rather than relying on data centers operated by someone else that is combining and overseeing the computing activities of a large number of technology customers.
Meeting the 'Cloud' Standard
The State Street private cloud -- to be launched later this year -- will run on the Linux operating system and low-cost servers using Intel chips that feature multiple "core" processors on each. It is what Perretta calls the "Lintel" standard in cloud computing.
"The definition of cloud is, I'm using a large collection of hardware that is easily extendable, that I can provision in real time, that I can (use to) requisition hardware with minimal management expertise, that it's scaleable, that it's metered," he said. "Internal clouds do meet all the standard definitions of clouds."
The only key difference: "We control the cloud,'' he said.
The effort got started in late 2009 and early 2010, in the wake of the global financial crisis. State Street typically spends somewhere between 20 and 25 percent of all its operating budget on information technology.
And when credit markets seized and business activity slumped, the cost of computing the conventional way -- where a half-dozen servers might be devoted to a single application, and that application only -- drove the search for a new way to operate.
State Street's operating revenue had peaked in 2008 at $10.5 billion and fell to $8.1 billion the following year. Its earnings, on a per-share basis, plunged from $5.61 to $3.32.
And overall computing costs were climbing.
Information technology costs were "growing faster than revenue, growing faster than the business was growing, growing faster than productivity improvements,'' he said. "So you kind of get the sense that the model is broken.''
If State Street were to continue to operate in the traditional way -- with asset utilization running at about 10 percent to 20 percent of capacity -- there was only one way to save money. Cut discretionary spending. Cut spending on new product development. In effect, cut one's own throat.
"When you press on that model and you say, oh, gee, you have to limit spending, well, you go after the discretionary spend which is your product development lifetime,'' Perretta said. "And there was consensus, even at the board level, that that's not a long-term proposition."
'Stack and Cloud' Initiative
Thus was launched, as 2010 arrived, the 'Stack and Cloud' initiative at State Street. The cloud would be its effort to put limitless compute power at its technologists' fingertips -- and to raise utilization of the computing capacity under its roofs past 50 percent at any given time.
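The arithmetic behind that target can be sketched in a few lines. The utilization rates come from the article (roughly 10 to 20 percent the traditional way, a 50-percent-plus goal for the cloud); the 150-server workload figure is an assumed example for illustration, not a State Street number.

```python
# Illustrative arithmetic only: how raising average utilization shrinks the
# server footprint needed to carry the same workload. Only the utilization
# rates come from the article; the workload size is an assumed example.

def servers_needed(workload_servers_worth, utilization):
    """Servers required to carry a workload at a given average utilization."""
    return workload_servers_worth / utilization

# Suppose the real work amounts to 150 fully busy servers' worth of compute.
workload = 150

traditional = servers_needed(workload, 0.15)  # ~15% utilization, the old way
cloud = servers_needed(workload, 0.50)        # the 50% cloud target

print(f"Traditional footprint: {traditional:.0f} servers")
print(f"Cloud footprint:       {cloud:.0f} servers")
print(f"Reduction:             {1 - cloud / traditional:.0%}")
```

At these assumed numbers, the same work that occupied about 1,000 lightly loaded servers fits on roughly 300 well-utilized ones -- the kind of shift that makes the cost argument below concrete.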
The "stack" would be a rationalization of its technology stack, the conglomeration of methods, tools and software to build apps.
The apps would be built in a standard way, outside the cloud, then injected into the cloud in an automated fashion, without any alteration on the way in or after being placed on a bank of servers.
"The point was, let's build common frameworks that everyone can use that allow everyone to exploit the cloud,'' said Perretta.
"What we are really after is to change the mix in the IT organization from what I spend in running my systems into what I spend in building new stuff,'' said Perretta. ' "We want to spend more of our new efforts in building new stuff.''
The "Stack and Cloud" effort got started in early 2010 with the testing of a couple internal applications related to technology operations on a "small cloud" that used 100 processors.
In June, a second test began, on a "large cloud" that used 500 processors. This test simulated the creation and operation of two "virtual" data centers. The idea was to see if a cloud could be operated in "an industrial-strength way."