WEB EXCLUSIVE: Data Centres are in demand
If data centres are not on your radar screen, you’ll be surprised to learn what a hot item they’ve become in corporate real estate circles. Most large firms globally are involved in, or will soon undertake, a major upgrade of their IT services. Indeed, data centres are growing so quickly that power grids can hardly keep up with them. For instance, San Francisco-based Digital Realty Trust, which builds data centres throughout North America and Europe, is already the second-biggest electricity user in the Chicago area, behind only O’Hare International Airport (the fourth-busiest airport in the world). “Data centres are on a trend to consume more power than will be available in the next 10 years,” says Matt Parker, Practice Leader at Stantec’s office in Raleigh, N.C., and an electrical engineer specializing in critical facilities.
First, let’s define “data centre” (with thanks to Wikipedia and techweb.com): a facility housing computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls such as air conditioning and fire suppression, and security devices. Regardless of the extent to which computers are distributed within an organization, larger enterprises always seem to need a centralized data centre.
Many factors are driving the demand for newer, bigger and better data centres. First, computers have been getting faster and more efficient, as ordained by Moore’s Law, named for Intel cofounder Gordon E. Moore. In a 1965 paper, he observed that the number of transistors that can be placed inexpensively on an integrated circuit (and hence processing power) doubles approximately every two years.
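As a rough illustration of how quickly that compounds, here is a back-of-envelope sketch in Python. The two-year doubling is the idealized rule described above; the 1971 baseline (the Intel 4004, roughly 2,300 transistors) is an illustrative assumption, not a figure from the article.

```python
# Back-of-envelope sketch of Moore's Law: transistor counts doubling
# roughly every two years. The 1971 baseline (~2,300 transistors,
# roughly the Intel 4004) is an illustrative assumption.
def transistors(year, base_year=1971, base_count=2300, doubling_period=2):
    """Idealized transistor count under a strict two-year doubling schedule."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

for year in (1971, 1985, 2000, 2010):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```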
As they grow more powerful, computers are also shrinking. Back in the old days of big mainframe computers, dimensions for computer equipment were standardized around components that slid into a rack, a frame measuring 24 inches wide and 84 inches tall. Until about five years ago, a single computer component, or server, was a box about the size of a CD player. The standard “1U,” or one-rack-unit, configuration, which defines the minimum possible size of a server, was 19 inches wide and 1.75 inches tall.
Then manufacturers realized they could dispense with the metal box around the circuit board, and the rack evolved into a chassis that holds naked circuit boards, or blades. Where a rack could accommodate 42 1U servers, it can now house 128 blades.
“You can get equal or better performance in half the space,” Parker says. “The fundamental dilemma is, those blades are individually more efficient, but you still have twice as much power and heat being consumed and generated in half the space. The rate of rise for power and cooling is growing almost exponentially.”
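To see why that dilemma bites, consider a quick back-of-envelope comparison. The 42-server and 128-blade counts come from the article; the per-unit wattages and the rack depth in the sketch below are illustrative assumptions.

```python
# Rough rack power-density comparison: 42 x 1U servers vs. 128 blades
# in the same footprint. The per-unit wattages and the 42-inch rack
# depth are illustrative assumptions; the unit counts are from the article.
rack_footprint_sqft = (24 / 12) * (42 / 12)   # 24" wide (per the article) x assumed 42" deep

configs = (
    ("42 x 1U servers", 42, 350),    # assumed ~350 W per 1U server
    ("128 blades", 128, 250),        # assumed ~250 W per blade (each more efficient)
)

for label, count, watts_each in configs:
    total_kw = count * watts_each / 1000
    print(f"{label}: {total_kw:.1f} kW total, "
          f"{total_kw / rack_footprint_sqft:.1f} kW per sq ft of floor space")
```

Each blade draws less than each 1U server, yet the rack as a whole pulls more than twice the power from the same patch of floor, which is exactly the power-and-cooling squeeze Parker describes.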
To cope with the demand, infrastructure engineers went back to the future and revived techniques for mainframe rooms that had lain dormant for 20 years. These were the futuristic-looking, air-conditioned, dust-free, glassed-in rooms at corporate headquarters with rows of big CPU cabinets with blinking, coloured lights. Tall, skinny tape drives held spinning, shiny, 10-inch metal reels of inch-wide magnetic tape that bobbed up and down as it was sucked into a vacuum column to ensure a fast, smooth read across the tape heads. “We turned back to the big mainframe rooms,” Parker says, “where you dump a boatload of power and cooling into a room and allow the individual racks to take out of it what they need.”
Then there’s the redundancy, or safety data-backup, issue. “One reason our infrastructure got so out of control is that people kept using old rule-of-thumb concepts for reliability,” Parker adds. “The more pieces of equipment you make available, the more reliable it will be. And, if you need one server and you have three, all connected the right way, you won’t have an outage. The problem is, you have to have those three things running all the time, 24/7, to support one piece of IT equipment.”
The cost of downtime has to be balanced against the cost of redundancy. “Most newly designed data centres have Tier Three redundancy,” says Dan McMullen, Sales Executive, Site and Facilities, IBM Global Technology Services in Markham, Ont. “In a Tier Four installation, all mechanical and electrical infrastructure has dual redundancy. This is extremely expensive to build. There are very few such installations in Canada.”
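Parker’s point about rule-of-thumb redundancy is easy to put in rough numbers. The sketch below assumes, purely for illustration, a unit that is available 99 per cent of the time and failures that are independent; neither figure comes from the article.

```python
# Rough availability math for N-redundant equipment: the system is down
# only if every unit fails at once (assuming independent failures).
# The 99% per-unit availability is an illustrative assumption.
unit_availability = 0.99
hours_per_year = 8760

for units in (1, 2, 3):
    system_availability = 1 - (1 - unit_availability) ** units
    downtime_hours = (1 - system_availability) * hours_per_year
    print(f"{units} unit(s): {system_availability:.6%} available, "
          f"~{downtime_hours:.2f} hours of downtime a year, "
          f"with {units}x the equipment running 24/7")
```

Going from one unit to three cuts expected downtime from days to minutes, but it also triples the gear that must be powered and cooled around the clock, which is the trade-off behind the Tier Three versus Tier Four decision McMullen describes.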
Other trends driving the need for more processing power in data centres include:
- Cloud computing (Your computer acts as a “dumb terminal” that connects via the Internet to remote servers where the software and data reside.)
- Electronic record-keeping (Conducting business with digital rather than paper records. In healthcare, this includes the digital imaging of X-rays and MRI scans etc.)
- Internet mail and social networks (Gmail and Hotmail, Facebook and LinkedIn contacts, and Twitter Tweets all live in the cloud, not your desktop.)
- Mobility (Apps and data for smartphones running Android, BlackBerry, iPhone and Windows Phone also live in the cloud.)
- Outsourcing software (Renting expensive software on an as-needed, pay-as-you-go basis, such as an accounting firm that only needs tax-return programs during income-tax season. The software stays in the cloud.)
- Virtualization (Hosting multiple operating systems, such as Windows and Mac, and their applications, on a single server.)
Virtualization lets users extract more performance out of their servers, at the price of increased power consumption. “Data centre infrastructure evolution is like the Wild West out there right now. In 1996, we built a Microsoft data centre for 1.7 kilowatts (1,700 watts) per rack – about the same power consumption as a hair dryer. Now we routinely build for 12 to 15 kW per rack and we see as high as 30 kW or more per rack [IBM data centres can scale up to 50 kW per rack],” says Terry Rennaker, Director, Critical Properties, C.B. Richard Ellis Global Corporate Services in Toronto.
“This has put enormous stress on existing data centres, and it is why you are seeing a renewal of data centres around the world. The old infrastructure (anything built more than five years ago) is really outdated. You can buy some time, but, ultimately, you have to build out again. Densification catches you at some point. You can run but you can’t hide.”
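Rennaker’s per-rack densities translate directly into facility-scale electrical demand. Here is a quick sketch using the figures he quotes, with an assumed 200-rack room and an assumed cooling-and-distribution overhead (PUE) of 1.6; both assumptions are illustrative, not from the article.

```python
# Facility load implied by the per-rack densities quoted in the article.
# The 200-rack room size and the 1.6 PUE (overhead) factor are assumptions.
racks = 200
pue = 1.6   # assumed power usage effectiveness: total power / IT power

densities_kw_per_rack = {
    "1996 build": 1.7,
    "typical new build": 13.5,       # midpoint of the quoted 12-15 kW range
    "high-density": 30.0,
    "upper end cited for IBM": 50.0,
}

for label, kw_per_rack in densities_kw_per_rack.items():
    it_load_mw = racks * kw_per_rack / 1000
    print(f"{label}: {it_load_mw:.1f} MW of IT load, "
          f"~{it_load_mw * pue:.1f} MW including cooling and distribution")
```

At 1996 densities the assumed room draws roughly half a megawatt; at today’s upper end it needs utility service measured in the tens of megawatts, which is why siting now starts with the power question.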
How does corporate real estate fit into this picture? Time was, every company had its own computer facility and the IT staff to run it. In today’s economy, with people doing more with less, companies are hiring other companies to build, maintain and operate remote data centres that house their computers.
“People using the network and outsourced software just log in and do whatever it is they always did,” Parker says. “But the reality is that there is a whole new layer of business opportunity here for corporate real estate.” Companies like Digital Realty Trust find existing space in under-utilized buildings in industrial parks. “Instead of leasing square footage, they lease rack space where you put your blade in.”
This is good news for the vast stock of old warehouse loft buildings languishing throughout the Rust Belt. “There’s an opportunity for adaptive reuse of a lot of existing empty space,” Parker says. “A warehouse, for example, can make a perfect shell for a mission-critical space because it has a nice, solid concrete floor and plenty of room where you can do pretty much whatever you need to do.”
But there’s more to it than just the physical plant, he cautions. “The biggest challenge, and where the most time and effort is spent, is in checking site feasibility. If you don’t have access to large-bandwidth communications and highly reliable power, then an existing warehouse will not serve you well.”
He cites Guelph, Ont., as an area that meets these conditions, which explains why many financial institutions have located their data centres there.
As for Toronto, however, “some of the infrastructure is pretty well tapped out. And Silicon Valley, in California, being the home of many high-tech companies, has essentially reached its limit for mission-critical facilities. It’s a capacity issue.” In response, the California Energy Commission is leading the way, among states and provinces, in creating incentives for energy-efficient retrofits and new facilities.
“Picking a site for a data centre is a completely different skill set from picking a warehouse or an office,” says Rennaker. “We’ve run into situations where there is great fibre-optic connectivity, great physical isolation from other buildings and great power. But then you find out there isn’t enough water near the site, so you move on. Most data centres have a water-cooling aspect, including a cooling tower for evaporation.”
Water? Mixing computers and H2O may sound risky, but water removes heat far more effectively than air alone and the technology is proven. “We’ve built data centres that have water-cooling down to the chip level,” IBM’s McMullen says.
Northern cities have a leg up over southern Sunbelt locations, McMullen adds. “In Canada, there is a trend to take advantage of our climate and get free cooling by leveraging the environment. The GTA and Montreal, for example, offer up to six months of free cooling, which dramatically reduces energy costs.”
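The value of those free-cooling months can be estimated roughly. The sketch below assumes a 1 MW IT load, a chiller that otherwise draws about 30 per cent of the IT load, six economizer months and a 10-cent-per-kWh rate; all four figures are illustrative assumptions, not numbers from the article.

```python
# Rough estimate of the energy saved by six months of "free" (economizer)
# cooling. The IT load, chiller ratio, and electricity price are assumptions.
it_load_kw = 1000             # assumed 1 MW of IT load
chiller_fraction = 0.30       # assumed chiller draw as a fraction of IT load
free_cooling_hours = 6 * 730  # roughly six months of the year
price_per_kwh = 0.10          # assumed electricity price, $/kWh

saved_kwh = it_load_kw * chiller_fraction * free_cooling_hours
print(f"~{saved_kwh:,.0f} kWh saved per year, roughly ${saved_kwh * price_per_kwh:,.0f}")
```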
Finally, Rennaker points out the advantage that a large, vertically integrated company, such as C.B. Richard Ellis, offers to data-centre clients. “We act as corporate advisors, first with the overall strategy, then in helping execute the site acquisition, project management of the design and delivery of the site, and facilities management for the ongoing life of the building.
“We can do all this in bits and pieces, but we are at our best when we do all of it. We are fully integrated and can do all of it in-house. A project manager will see certain aspects of a site differently from a broker and a facility manager. We make sure we have input from all three sources. And during the project management phase, you want to keep the broker in the loop, in case you need an exit strategy. You should also keep the facility manager in because he or she will operate it once you build.”
This article first appeared in Connect, the quarterly journal of the Canadian Chapter of CoreNet Global (corenetglobal.org). Reprinted with permission.