Architecting the Cloud with Vblock: New Data Center Shows Value

There are big IT projects at every company and in everyone’s career.  I was fortunate enough to be part of the largest IT infrastructure project in EMC’s history.  The brief was simple: open a data center, migrate all of the applications, close a data center.

One of our Massachusetts data centers had served EMC and Data General well for decades.  However, we were constrained by power, cooling and space.  It was also far too close to our other data center to protect EMC from a regional disaster like Hurricane Sandy.  EMC selected Durham, North Carolina to build out a new 20,000-square-foot, state-of-the-art data center.

First mover advantage

We’ve written a lot about our Durham Cloud Data Center in the past. We purchased the Durham site in October 2009 and planned to close the near-capacity Massachusetts data center by December 31, 2012.  If the migration took longer, we estimated that it would cost EMC millions of dollars in 2013 to extend the lease and staff, power, cool and insure the facility.  Three years, no problem—except the Durham facility was a warehouse, not a data center.  The facility remodel wouldn’t be ready until October 2010, giving us eight quarters to migrate more than 2,500 servers and 500 applications and a ninth quarter to decommission the Westborough facility.

No trucks, new architecture

Data center migrations aren’t anything new, but EMC IT was challenged to execute this migration in an entirely new way: migrate over the WAN and re-architect to the new Vblock concept.  VMware, Cisco and EMC had just inked an agreement to create a joint venture (VCE) to sell and support the Vblock, a converged infrastructure of compute, storage and network.

In early 2010, the Vblock reference architecture was published and the new Cisco UCS servers and switches were available. But VCE had just started up and had only a few reference installations.  Its factory wasn’t online yet either, so it would build and configure your Vblock at your site.

Blank whiteboard

We pulled together a small team of EMC IT’s most senior architects to design our new, 100 percent virtualized cloud data center.  The architecture team was made up of about a dozen key members from storage, networking, backup, security, compute and virtualization. We would be breaking new ground, but EMC IT is EMC’s first and best customer. It’s our job to lead from the front and prove out EMC’s products and vision.

At the first meeting I was hoping many of the team members would already have some draft design documents so that we had something, anything to start from—a Visio, a napkin. We had nothing with a side of nothing. Literally a blank whiteboard.

And we didn’t have a lot of time.  Accounting for vendor lead times and the requisition and purchase order approval processes, we had only about two months to complete the design and submit the requisitions so that the gear would be available on day one in October 2010. We had only eight quarters; every day was precious.

Everyone was overwhelmed by the scope of the task at hand. The Vblock architecture was still being finalized. Even more daunting was the fact that it had taken EMC 20 years to build out the data center architecture that was then running the business from Massachusetts. And we were asking the team to do it all over again in a matter of weeks using totally virtualized hardware and infrastructure.

Brainstorming over Vblocks

Determined, I gathered the team, grabbed a marker, went up to the whiteboard and began extracting details from them and writing them down. How many Vblocks did we need? How were we going to organize and purpose them? How were we going to connect them? Would we have database grids? How many? What would they run on? Were they supported running on a VM (virtual machine)?

We spent the better part of two hours brainstorming, with everybody taking a turn at the whiteboard and adding their section to the design. After the first day, we had a whole whiteboard full of colored lines, links and scribbles, and the following details:

  • Very big projects, like our line-of-business or CRM applications, would have their own Vblocks.
  • Production would run in Hopkinton, with Disaster Recovery, Dev, Test and Performance environments in Durham, where there was more space and power.
  • For the remaining applications, we decided to segregate the infrastructure by service level, purpose and environment. For example, there would be one Vblock for mission-critical production applications and another Vblock for Dev/Test (a rough sketch of this mapping follows the list).
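To make that segregation scheme concrete, here is a minimal, hypothetical Python sketch of how an application might be routed to a Vblock by service level, purpose and environment. The tier names, applications and Vblock labels are illustrative assumptions, not our actual inventory or naming.

```python
from dataclasses import dataclass

# Hypothetical sketch: route an application to a Vblock based on service
# level, purpose and environment. All names and labels are illustrative.

@dataclass(frozen=True)
class App:
    name: str
    service_level: str       # e.g. "mission-critical" or "standard"
    environment: str         # e.g. "production" or "dev-test"
    dedicated: bool = False  # very big projects get their own Vblock

def assign_vblock(app: App) -> str:
    """Return the (illustrative) Vblock label an application lands on."""
    if app.dedicated:
        return f"vblock-dedicated-{app.name.lower()}"
    if app.environment == "dev-test":
        return "vblock-dev-test"
    if app.service_level == "mission-critical":
        return "vblock-mc-prod"
    return "vblock-std-prod"

if __name__ == "__main__":
    apps = [
        App("CRM", "mission-critical", "production", dedicated=True),
        App("Expenses", "standard", "production"),
        App("Expenses", "standard", "dev-test"),
    ]
    for app in apps:
        print(f"{app.name} [{app.environment}] -> {assign_vblock(app)}")
```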

A simplified process

Over the next several weeks, that original whiteboard drawing evolved quite quickly into the high-level design that became the basis for our new data centers. The Vblock reference architecture radically reduced the number of options, which is its purpose. And then it was just a matter of defining things like how we would organize the infrastructure, what size storage arrays we would need, and how we would back things up. With the core design complete, we began the procurement process, just in the nick of time.

Since the VCE factory wasn’t online yet, we decided to build the 12 Vblocks for our data center ourselves. We didn’t get the benefit of the pre-integrated Vblock technology that now makes deployment even easier.

Lessons learned

So what did we learn?

  • First, Vblocks are awesome. The performance is amazing. They are incredibly easy to install, configure and grow. Additional compute and storage can be added very easily to an existing Vblock with no disruption or downtime.
  • We also learned that all of the database vendors fully support running their databases on VMs, and that those VMs can be clustered.  Database performance and reliability are outstanding.
  • Probably the most significant lesson was that you don’t need to be perfect; you just need to start.  Since compute and storage capacity can be brought online so quickly, don’t worry about figuring out every detail for every application.

In hindsight, I think we should have segregated workloads by service level and purpose at the virtual level, not the physical level. Rather than building out 12 physical Vblocks of different sizes and purposes, we should have started with one or two generic Vblocks and created differentiated service levels at the VM or ESX cluster level.

The key benefit of that approach is that it spreads the spend over the life of the project. Not everyone is EMC, and not everyone is going to go “all in” on their entire infrastructure. Additionally, as soon as the first Vblock is online, you can begin migrating applications to it. Having a mixture of service levels and environments on shared infrastructure gives you the flexibility to turn down lower-priority applications during peak loads or a hardware failure, keeping Production up and running.
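To illustrate what tiering at the virtual layer could look like, here is a hypothetical sketch using the pyVmomi library: one generic cluster is carved into child resource pools whose CPU and memory shares differ by service tier. The vCenter host, credentials, tier names and share levels are all assumptions for illustration; this is not our production tooling.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical sketch: express service tiers as vSphere resource pools with
# different CPU/memory shares on a generic cluster, rather than dedicating
# physical Vblocks per tier. Host, credentials and tier values are assumptions.
TIERS = {
    "mission-critical": vim.SharesInfo(level="high", shares=0),
    "standard":         vim.SharesInfo(level="normal", shares=0),
    "dev-test":         vim.SharesInfo(level="low", shares=0),
}

def _alloc(shares: vim.SharesInfo) -> vim.ResourceAllocationInfo:
    """Build an expandable, unlimited allocation with the given shares."""
    return vim.ResourceAllocationInfo(
        reservation=0, limit=-1, expandableReservation=True, shares=shares)

def create_tier_pools(cluster: vim.ClusterComputeResource) -> None:
    """Create one child resource pool per service tier under the cluster."""
    for name, shares in TIERS.items():
        spec = vim.ResourceConfigSpec(
            cpuAllocation=_alloc(shares), memoryAllocation=_alloc(shares))
        cluster.resourcePool.CreateResourcePool(name=name, spec=spec)

if __name__ == "__main__":
    # Illustrative connection details only.
    si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                      sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        for cluster in view.view:
            create_tier_pools(cluster)
        view.DestroyView()
    finally:
        Disconnect(si)
```

The point of the sketch is the design choice rather than the specific values: differentiated shares at the cluster level let one pool of generic Vblock capacity serve multiple service levels, instead of committing that differentiation to hardware up front.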

Overall, I’m pleased to report that we proved the value of a large-scale Vblock deployment in our groundbreaking, successful and efficient data center construction and data migration.

Stay tuned for my next blog about EMC’s Cloud Data Center journey, when I will write about the First 90 Days of this exciting transformation project.  

About the Author: Stephen Doherty