As the saying goes, every great journey begins with a first step. One set of IT managers at the investment banking arm of JPMorgan Chase (JPMC) might add that it's okay if the first step is virtual.
This is not to say that the first step isn't real; quite the contrary. JPMC has some very concrete reasons for experimenting with virtualization. The most obvious one is the growth in the credit derivatives business, which is driving the application teams and the business side to move to the next application platform for trading. "The credit side's appetite to recode or spend money on the legacy applications did not exist," says Shawn Findlan, vice president of investment bank technology, global credit infrastructure at JPMC, who has been playing a leading role in the bank's virtualization project.
"They did not want to spend time recoding the applications or migrating them over to Linux or things like that," says Findlan, who has been working closely with Ciaran P. Henry, chief technology officer (CTO) for the credit and emerging markets businesses at JPMC.
This new step is also not a grid.
The grid in question is the pioneering Compute Backbone (CBB), which was featured in the October 2002 edition of Waters. Findlan and Henry say that the grid-computing project is not competing against the virtualization project. "The project is related to the CBB in that it uses some of the same concepts, such as service orientation, repurposing systems and better utilization, but expands this to the application and database layers," Findlan says. But the CBB, which is one form of virtualization, focuses on high-performance computing while the virtualization project targets end-to-end services for an entire application. "If the application can utilize the CBB for compute services, it will be sent to the CBB," Findlan says. "Virtualization separates the service from dedicated hardware and allows an application component to request service from any number of providers of that component."
In short, the virtualization project "allows a heterogeneous environment to be managed centrally, based on events in the overall environment and, like grids, it can be managed independently of the services it provides," says Findlan. "The key difference between grid computing and virtualization is that grid computing is typically limited to compute components."
The CBB and virtualization projects are complementary, Henry says. "I think about grid as high-performance compute services making use of high-speed interconnects and faster, cheaper processors. The concept of virtualization as we define it adds the end-to-end management at the application and database layers," he says.
"We have seen tremendous results from CBB, led by Adrian Kunzle," Henry points out, referring to the vice president and global co-head of investment bank technology architecture for the firm. But CBB and virtualization together "provide better, cheaper and faster services for the entire environment," he says. "Having both solutions in place, such as we do at JPMorgan, allows a company to reap the benefits across the entire application stack."
The credit group was the first to realize the rewards of virtualization after a successful pilot program. "The project is operational for credit derivatives," Findlan says. "We are planning and implementing the CDIR (Credit Derivatives Infrastructure Refresh) solution at various stages in other lines of business." This first brush with success for the "Virtualized Infrastructure Service" of the CDIR could provide the firm with another option for delivering infrastructure technology for trading.
"What we had to do is figure out a way to deliver the same types of results that you see in grid computing," Findlan says. The firm is hoping to wind up with a highly scalable, high-throughput path to CPU resources that keeps pace with the demands of a cutthroat market. If virtualization gains more ground at JPMC, it would allow traders who need on-demand computing to access big iron hardware and process highly complex transactions much faster than before. "We want to find a way to do that across multiple hardware platforms, multiple operating systems, and multiple layers of the application," Findlan says.
The Promise of Going Virtual
What the two computing modes do have in common are the many steps they require and the changes they promise.
One of the major changes to come will be a gradual elimination of the credit group's hodgepodge of hardware systems. The eclectic collection lacks the total cost of ownership (TCO) advantages of state-of-the-art servers, and has become expensive to maintain. The virtualization project is allowing the bank to take a holistic view of the IT infrastructure for the group. "If we took out a clean sheet of paper, how would we redesign this infrastructure and that platform? We moved it down from 20 hardware platforms to three," Findlan says.
The legacy of garden-variety combinations of vendors, models and hardware architectures includes two-way, four-way, eight-way and 32-way servers. This has been streamlined to three hardware models: one from IBM and two from Sun Microsystems. "This alone has yielded significant operational savings and improved resiliency of the environment through simplification," Findlan says.
For this part of the project, JPMorgan has had some help. Sun is providing high-end Sun Fire F15K platforms, and IBM is supplying the IBM x335 two-way servers. "Sun Microsystems has been involved in this from the beginning and stepped up to help JPMorgan design this solution," Findlan says. "We used several consultants that Sun Microsystems brought on board to help work through the various phases of the project and supplement our internal team." In addition to the paid consultants from Sun, the vendor provided time in its iForce Lab in California to test the applications and virtualization solution, loaned JPMorgan equipment and provided significant pre-sales consulting work.
JPMorgan's virtualization team consists of a core of 15 people, covering application development and IT infrastructure. They were helped intermittently by about 15 others from groups that interacted with the business side. "The program started as a concept in October 2003. It was officially funded and started in March 2004 with the first phase, Server Consolidation and Refresh," Findlan says. "It was our intention to do this well, which meant that we took the necessary R&D time to review, test and build new technologies, concepts and processes that needed to be introduced to the bank and to the vendors with whom we were dealing. Now that we have this down, the team can build this significantly faster the next time it's deployed."
Thus the days of the old hodgepodge of hardware are numbered. "A lot of it is being rolled off. Some of it is being used for development," Findlan says. "The end result is that it will all go away." What will remain are the "legacy, internally developed applications, which I think is a key part of the story," he says.
Service On Demand
The applications have been spared revisions and are treated to "what we call service on demand," Findlan says. "This is taking that infrastructure architecture that already exists in legacy applications and allowing you to create a virtual environment, a system that does not exist on a day-to-day basis, and creating it on the fly." This new approach can be quite useful for a disaster recovery event or for quickly applying more resources for key applications such as end-of-day processing. "You're taking that entire application service and you're giving it to the users," he says.
Virtualization also "breaks fixed, inflexible links between systems and processes, allowing any component to request a service from any number of providers of that service," says Duncan Johnston-Watt, CTO of Enigmatec, a key supplier in the virtualization project. An industry veteran, Johnston-Watt has a history of building large-scale platforms inside investment firms. "The power of virtualization lies in decoupling consumers from producers, allowing producers, such as a grid of compute engines or a Web server farm, to scale and be managed independently from the consumers they service," he says. The producers and consumers of the credit derivatives group got to see this upfront in early September.
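Johnston-Watt's point about decoupling can be sketched in miniature: a broker keeps a registry of interchangeable providers for each named service, so a consumer asks for a service by name rather than binding to a fixed host. All names here (ServiceBroker, the service and engine names) are illustrative; this is not Enigmatec's actual API.

```python
import random

class ServiceBroker:
    """Minimal illustration of consumer/producer decoupling:
    consumers request a service by name; the broker hands back
    any currently registered provider of that service."""

    def __init__(self):
        self.providers = {}  # service name -> list of provider endpoints

    def register(self, service, endpoint):
        self.providers.setdefault(service, []).append(endpoint)

    def deregister(self, service, endpoint):
        self.providers[service].remove(endpoint)

    def request(self, service):
        pool = self.providers.get(service)
        if not pool:
            raise LookupError(f"no provider available for {service!r}")
        return random.choice(pool)  # any provider of the service will do

broker = ServiceBroker()
broker.register("pricing", "engine-01")
broker.register("pricing", "engine-02")

endpoint = broker.request("pricing")  # the consumer never names a host
print(endpoint)
```

Because consumers hold no fixed link to a host, providers can be added, removed or re-purposed without the consumer noticing, which is the scaling and management independence Johnston-Watt describes.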
With a virtualized approach, the derivatives applications used by the credit staff can tap the power of high-end servers on a real-time basis. Other benefits from the effort are becoming clear.
"We have several benchmarks that we're working off of," Findlan says. "The cost per transaction is one of those. I think we can say that we've cut that down to a third of the cost per transaction, from an infrastructure perspective. We've come to that calculation by taking the total cost of ownership for the infrastructure and dividing that by the number of transactions that we're putting through the system right now," he says.
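The benchmark Findlan describes is simple division: infrastructure TCO over transaction volume. A sketch with hypothetical figures (the article discloses no actual dollar amounts or volumes):

```python
def cost_per_transaction(infrastructure_tco, transaction_count):
    """Findlan's benchmark: total cost of ownership for the
    infrastructure divided by the transactions put through it."""
    return infrastructure_tco / transaction_count

# Hypothetical figures only -- the article gives no real numbers.
before = cost_per_transaction(9_000_000, 1_000_000)  # pre-virtualization
after = cost_per_transaction(3_000_000, 1_000_000)   # post-virtualization
print(after / before)  # roughly a third of the original unit cost
```

Note the ratio can improve either by shrinking the TCO numerator or by pushing more transactions through the same infrastructure; virtualization, as described here, attacks both.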
As for overall system recovery, it is much faster. "CDIR detection and alerting is tightly linked with execution automation and allows full system recoveries and migration of the application state from one data center to another in well under 30 minutes," Findlan says. "The solution also allows us to automate complex end-to-end procedures that once took many hours or were out of range for current technologies due to long distances and network latency issues."
The IT staff has also seen a marked improvement in systems usage. "Utilization has been increased in the virtualized environments by re-purposing systems when they were not in use or when other, higher-priority event-driven needs came up," Findlan says. "This meant that we didn't need to purchase additional systems to meet some of the growth demands."
In another key benchmark, the firm has seen end-of-day calculation times drop from four hours to one. "We are working to improve this further," Findlan says.
The key technologies for the virtualized infrastructure are internally developed applications and third-party products, including Enigmatec's EMS, IBM and Sun hardware, IBM's Tivoli system management software, Microsoft Windows Server 2003, the Sybase relational database, and the WebLogic application server from BEA Systems, Findlan says. Sybase and Sun officials confirm that they are part of the virtualization project. BEA and IBM officials did not respond to inquiries by press time.
Five Fronts
With the IT tools in place, the journey to virtualization for the global credit infrastructure group is proceeding on five major fronts: Consolidation and Refresh; Standardized Architecture and Deployment Methodology; Infrastructure and Application Provisioning; Execution Management; and Monitoring and Event Management.
The technology refresh is governed by the Standardized Architecture and Deployment Methodology, which serves as a blueprint to guarantee that future procurement of technology is orderly. These safeguards are intended to prevent systems from spreading in an unwieldy fashion. For the policy enforcement of the Execution Management aspect of the project, JPMC has deployed Enigmatec's EMS to help the firm govern system failure management, disaster recovery, workload management and the optimization of CPU power.
The policy management functions are slated to be tied into another important element of the virtualization push: the monitoring solution, which creates a view of the virtualized infrastructure in production. "The solution delivers monitoring from the infrastructure layer all the way up to in-house coded applications and allows us to see all of that in one place," Findlan says. "The monitoring piece has improved the resiliency and uptime of the applications because we can notice events that may cause a failure before they actually happen now. We're tying that monitoring solution to Enigmatec's EMS to allow it to execute on failures. If there's a failure that's noticed in the monitoring solution, the EMS kicks off an event to re-purpose the system and move the users to it."
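The event-driven loop Findlan describes, where a monitor spots a pre-failure condition and the execution manager re-purposes a standby system and moves users across, might look like this in outline. Everything here (the callback wiring, the class and action names) is a hypothetical sketch, not the actual Tivoli or Enigmatec EMS interfaces.

```python
# Hypothetical sketch of monitor -> execution-manager wiring;
# not the real Tivoli or Enigmatec EMS integration.

class ExecutionManager:
    def __init__(self):
        self.actions = []  # recovery actions taken, in order

    def handle(self, event):
        # Policy: on an impending failure, re-purpose a standby
        # system and migrate users across to it.
        if event["type"] == "impending_failure":
            standby = "standby-dc2"
            self.actions.append(f"repurpose {standby} as {event['system']}")
            self.actions.append(
                f"migrate users from {event['system']} to {standby}")

class Monitor:
    def __init__(self, ems):
        self.ems = ems

    def observe(self, system, metric, value, threshold):
        # A breached threshold is treated as a pre-failure signal
        # and forwarded to the execution manager immediately.
        if value > threshold:
            self.ems.handle({"type": "impending_failure",
                             "system": system, "metric": metric})

ems = ExecutionManager()
monitor = Monitor(ems)
monitor.observe("credit-app-01", "queue_depth", value=9500, threshold=5000)
print(ems.actions)
```

The design point is the one Findlan makes: the monitor only detects and alerts, while the policy about what to do next lives in the execution manager, so recovery procedures can be automated and changed without touching the monitoring layer.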
The monitoring system is heavily dependent upon the systems management offerings of Tivoli, to which JPMorgan has tied in-house monitoring controls, scripts and third-party tools such as the Formula visualization tool, now called End-to-End Service Manager, from Managed Objects. "The key of the monitoring solution itself is that everything had to be tied into Tivoli to act as the single bus of data," Findlan says. "Once we have everything in the Tivoli repository, we can visualize, monitor and manage it." The monitoring piece is likely to be the first of many deliverables to come.
Another key area is finding the right provisioning tool so that systems can be fitted with operating systems and applications automatically and "on-the-fly." Third-party provisioning products are being reviewed, Findlan says. "Instead of taking weeks to build out systems, it will happen in a few short minutes," he says. "The goal is to have this completely tied together so that Enigmatec can call out to a provisioning tool and say we've had a failure; rebuild system Y into system X."
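The call-out Findlan envisions, "rebuild system Y into system X," amounts to applying a stored build recipe (operating system image plus application stack) to whichever spare machine is available. The catalog keys, host names and recipe format below are purely illustrative assumptions, not any actual provisioning product's interface.

```python
# Illustrative only: a provisioning catalog keyed by role, so a failed
# system's role can be rebuilt onto any spare machine on the fly.

BUILD_RECIPES = {
    "pricing-node": {"os": "solaris9", "apps": ["weblogic", "pricing-svc"]},
    "db-node": {"os": "windows2003", "apps": ["sybase"]},
}

def rebuild(spare_machine, failed_role):
    """Apply the failed role's recipe to a spare machine and return
    a description of the newly provisioned system."""
    recipe = BUILD_RECIPES[failed_role]
    return {"host": spare_machine, "role": failed_role, **recipe}

# "We've had a failure; rebuild system Y into system X."
new_system = rebuild(spare_machine="x335-spare-07", failed_role="pricing-node")
print(new_system["role"], new_system["apps"])
```

Keeping the recipes in a catalog rather than on the machines themselves is what lets the rebuild take minutes instead of weeks: the spare hardware carries no identity until a role is stamped onto it.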
Ultimately, the journey could bring high-performance computing, if not to the masses, then to key constituencies within the firm's trading groups, through efforts such as another technology refresh for the credit business and the global emerging markets unit, Findlan says. "I think it can go quite a long way. If we get the right focus on this, I think there's a lot this project can do in terms of moving the technology forward and changing the way people think about delivering technology services," he says.
For now, the infrastructure for the initial implementation of CDIR virtualization is based in North America, but is innately flexible and can be deployed globally. "It can go wherever we feel it can fit," Findlan says.
Eugene Grygo is the editor of Dealing with Technology.