Data Center Services
Talking 'Connectivity Du Jour' and Synching with Data Center Services Provider NGD
For the past few weeks, I have been sharing parts of a conversation I had with Simon Taylor, Next Generation Data’s (NGD) chairman – a discussion that has covered why data center services provider NGD was founded and where NGD is headed.
NGD is a specialist in the operation and ongoing management of mission-critical environments, offering flexible, cost-effective and resilient data center solutions. The company created Europe’s largest Tier 3 data center, NGD Europe, based in South Wales, UK, which is currently one of only a handful of high-quality Tier 3 data centers in Europe located outside London.
Below, my conversation with Taylor continues. For last week’s conversation, click here.
I take it that you have redundant bandwidth going into and out of this facility so that you’re able to handle technical problems, natural disasters and such.
Taylor: Oh yes. We’ve got fiber coming in from BT, Cable & Wireless and NTL in the UK. We maintain carrier neutrality with interconnect points for each of the major carriers. All of these carriers are flexible and capable of installing multiple gigabit-per-second data pipes on relatively quick time scales that can ultimately connect to the rest of the world. Moreover, we have a submarine cable from the US coming ashore at Highbridge, in Somerset, so we’ve got a transatlantic link as well, offering direct international connectivity to America. We therefore enjoy considerably diverse and resilient connectivity options. A customer can choose from among dark fiber, 10 gigabit-per-second long-range optical Ethernet, and multi-homed IP transit services. Additional major carriers have been looking at our site, and we are investigating the idea of obtaining bandwidth from them too.
The building itself has four independent site entry points, with multiple ducting routes supporting separate carrier cables. We have two physically separate and independent “Meet Me Rooms” per floor, and secure cross-connects and connection raceways can be found throughout the building.
To sync or not to sync?
Taylor: There are two major aspects to the data center market. There’s something called synchronous replication and then there’s non-synchronous replication. Both techniques involve replicating data to a remote secondary site.
To explain synchronous replication, consider a trading floor in a city such as London. Because of the way the regulations work and the rules they have to follow, if you happen to be buying £20 billion worth of oil from BP, that transaction is time-critical, and yet the transaction must be almost instantly backed-up in case some sort of failure occurs.
As its name implies, synchronous replication ensures that a remote copy of the data, identical to the original “primary” copy, is created almost instantly after the primary copy is updated. First data is written to the primary storage system, then data is written to the remote system, and then the remote storage system sends a write acknowledgement back to the primary or host unit. That last part is very important—in synchronous replication, an Input/Output update operation or block level data transaction is considered finished only when completion is confirmed at both the primary and remote mirrored storage sites. The host application simply will not proceed until the data is successfully committed to all storage systems. If during this process a problem occurs, the resulting incomplete operation is rolled back at both locations, guaranteeing that the remote copy is always a precise mirror image of the primary copy.
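The write-then-mirror-then-acknowledge sequence Taylor describes can be sketched in a few lines of Python. This is an illustrative toy, not any vendor’s actual replication engine; the class and method names are hypothetical.

```python
# Minimal sketch (hypothetical classes, not a real storage API) of the
# synchronous write path described above: the host's write completes only
# after BOTH the primary and the remote mirror acknowledge it, and a
# failure at either site rolls the update back so the copies stay identical.

class Volume:
    """Toy block store: a dict of block_id -> data."""
    def __init__(self):
        self.blocks = {}

    def write(self, block_id, data):
        self.blocks[block_id] = data
        return True  # write acknowledgement


class SyncReplicator:
    def __init__(self, primary, remote):
        self.primary = primary
        self.remote = remote

    def write(self, block_id, data):
        old = self.primary.blocks.get(block_id)
        self.primary.write(block_id, data)             # 1. commit locally
        try:
            acked = self.remote.write(block_id, data)  # 2. mirror remotely
        except Exception:
            acked = False
        if not acked:
            # 3. incomplete operation: roll the primary back so the
            #    remote copy remains a precise mirror of the primary
            if old is None:
                self.primary.blocks.pop(block_id, None)
            else:
                self.primary.blocks[block_id] = old
            return False
        return True  # only now may the host application proceed


primary, remote = Volume(), Volume()
rep = SyncReplicator(primary, remote)
assert rep.write(7, "trade: GBP 20bn oil")
assert primary.blocks == remote.blocks  # mirror is always identical
```

The key design point is step 3: because the host blocks until both acknowledgements arrive, every round trip to the remote site sits directly on the transaction's critical path, which is what makes the scheme so latency-sensitive.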
As it happens, synchronous data replication involves some exquisite timing issues, and for this reason it is extremely sensitive to network latency and bandwidth. The ultimate shortcoming of synchronous data replication is the latency resulting from the propagation delay associated with the speed of light, which is 300,000 kilometers per second in a vacuum but only about 200,000 kilometers per second in optical fiber. The propagation delay grows as the distance to the remote storage site increases, at a rate of about 1 millisecond for every 200 kilometers.
The laws of physics thus impose a “distance limitation” on synchronous data replication.
Most equipment vendors will tell you that the safest practical distance between the primary system and a remote data center is somewhere between 35 and 50 kilometers, or 20 to 30 miles. That distance is obviously not sufficient to protect data against an extensive, catastrophic natural or man-made disaster.
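The 1-millisecond-per-200-kilometers figure follows directly from the fiber speed quoted above, and a synchronous write pays it in both directions (the write out, plus the acknowledgement back). A few lines of Python make the arithmetic concrete; the function names are ours, for illustration only.

```python
# Propagation delay over optical fiber, using the figures quoted above:
# light travels roughly 200,000 km/s in fiber, i.e. about 1 ms per 200 km
# one-way. A synchronous write incurs this delay in BOTH directions,
# which is what imposes the practical distance limit on mirroring.

FIBER_KM_PER_SEC = 200_000  # approximate speed of light in optical fiber

def one_way_delay_ms(km):
    """One-way propagation delay in milliseconds over `km` of fiber."""
    return km / FIBER_KM_PER_SEC * 1000

def round_trip_ms(km):
    """Round-trip delay: write out plus acknowledgement back."""
    return 2 * one_way_delay_ms(km)

for km in (50, 200, 1000):
    print(f"{km:>5} km: {one_way_delay_ms(km):.2f} ms one-way, "
          f"{round_trip_ms(km):.2f} ms round trip")
```

At the vendors' 50-kilometer ceiling the round trip is only half a millisecond, but at 1,000 kilometers it is already 10 milliseconds per write, on top of any switching and storage overhead, which is why the laws of physics cap synchronous mirroring at short distances.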
For greater distances, some sort of non-synchronous data replication process is called for, such as asynchronous or semi-synchronous data replication.
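The essential difference in the asynchronous case is that the primary acknowledges the write immediately and ships the update to the remote site later, so the mirror may briefly lag. A minimal sketch, with hypothetical names and again no real storage API:

```python
from collections import deque

# Contrast with the synchronous scheme described earlier: here the host's
# write returns as soon as the PRIMARY commits; the remote update is merely
# queued, so the remote copy can briefly lag behind. That lag is acceptable
# for disk-to-disk backups and compliance archiving, but not for
# time-critical transactions.

class AsyncReplicator:
    def __init__(self):
        self.primary = {}
        self.remote = {}
        self.pending = deque()  # replication queue, drained in background

    def write(self, block_id, data):
        self.primary[block_id] = data          # commit locally
        self.pending.append((block_id, data))  # ship to remote later
        return True                            # host proceeds immediately

    def drain(self):
        """Background replication: apply queued writes to the remote copy."""
        while self.pending:
            block_id, data = self.pending.popleft()
            self.remote[block_id] = data


rep = AsyncReplicator()
rep.write(1, "nightly backup")
lagging = rep.remote != rep.primary  # remote copy is momentarily behind
rep.drain()
in_sync = rep.remote == rep.primary  # sites converge once the queue drains
```

Because the remote round trip is off the critical path, distance no longer limits the design, which is why, as Taylor notes, a data center hundreds of kilometers from London can serve this market.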
We were never interested in pursuing the synchronous replication market. We really were interested in non-synchronous replication, used for such things as disk-to-disk backups, helping companies to comply with Sarbanes-Oxley, and so forth. Our research indicates that whereas only 15 percent of businesses demand synchronous data replication for certain specific processes, 85 percent of businesses use non-synchronous data replication techniques for the vast majority of their online and offline processes. Hence, our data center in Newport, Wales, is capable of capturing a great deal of this potential market.
You know, engineers are an incredible breed. Britain’s data center capital at the moment is London and what we call the M25, the big ring road that circles London; all of that tends to be within 25 kilometers of the center of London. The truth is, if a business demands synchronous data replication, there are complexity issues and a very specialized, high-bandwidth network is needed, which means they will pay about three times the price of what we charge at our data center in Newport. The fact is, only 15 percent of the market needs synchronous replication. But, ironically, there are CIOs out there who are very keen on visiting and admiring their data centers in London’s Park Royal or the Docklands, and all the while they remain unperturbed by the fact that they are costing their shareholders an extra £20 million or perhaps even £30 million a year over a 15-year contract. The CIO is able to influence the CFO because the CFO is usually not a technical guru. This kind of thing has been going on for years in Britain. Businesses in several other countries, however, have figured out that not everything has to be backed up in synchronous mode.
I imagine it’s a delicate selling point.
Taylor: Getting the word out about this is one of our big marketing challenges. Corporate shareholders everywhere should realize that millions upon millions of pounds are being needlessly extracted from the bottom line when it comes to data centers. My message to them is that their businesses need not be plunked down in the middle of London or on the M25. It’s all a facade, something of a humbug, really. But people get away with it all the time.
The other thing that happens in London is that companies will employ the services of huge global commercial property agents—I like to call them “massive estate agents”— who advise them that they should be situated in London and on the M25 because the agents’ transaction fees are much bigger on commercial properties there.
This is the reality of the world. When you pick up the phone to make a call, you don’t know where that dial tone is coming from, do you? These days, because of such technological innovations as voice-over-IP and efforts such as BT’s 21CN in Britain, your dial tone could be coming from anywhere, not just the traditional exchange down the road. It’s the same with data centers.
Please check back at the Data Center Services Channel next week for more from Taylor.
Edited by Carrie Schmelkin