
Equinix – Proxy for the Internet

Lane Patterson, Chief Technologist of Equinix (EQIX), shared his thoughts on data centers and the challenges facing his industry at the 2007 Gilder Telecosm conference. He coined the term ‘bitmile’ and shed some light on how application providers such as CDNs are adjusting their optical transport architectures to optimize cost.

Equinix provides data centers for colocation and interconnection. They build their own facilities or lease property from companies like Digital Realty Trust (DLR) and equip it with connectivity and additional infrastructure. They then sublet the space to companies looking for data center capacity – everyone from carriers to CDNs to Web 2.0 service providers. Google, Yahoo and a few others build their own mega data centers; for everyone else there is Equinix. Lane used an airline analogy: Equinix is not a carrier but more like an airport where various carriers meet to exchange passengers.


Profile of a large Equinix data center:

  • Supports up to 30 Megawatts of power
  • 250k square feet of floor space
  • 40% of this space is consumed by support infrastructure and cannot be leased to datacenter tenants.
  • 200km of inside fiber plant
  • 7-8 dark fiber providers connected to the building with 144, 432, or 864 fibers each.

Lane indicated that demand for optical capacity is surging and that the backbone is being ‘refreshed’ with higher speed technology, though with a much sharper focus on cost. The metric Equinix uses is Bitmiles, defined as the cost of sending a bit one mile. This is identical to the speed*distance/cost concept I like to use, but Bitmiles is a much better word.
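
As a quick illustration of the metric, here is a back-of-the-envelope sketch in Python; every figure in it is an invented assumption, not anything Equinix quoted.

```python
# Rough bit-mile sketch. All figures below are invented for illustration,
# not Equinix or carrier pricing.

def cost_per_bitmile(monthly_cost_usd, capacity_gbps, distance_miles, utilization=0.5):
    """Cost of carrying one bit one mile over a month on a given link."""
    seconds_per_month = 30 * 24 * 3600
    bits_carried = capacity_gbps * 1e9 * utilization * seconds_per_month
    return monthly_cost_usd / (bits_carried * distance_miles)

# Hypothetical $20k/month 10G wave on a 500-mile route at 50% average utilization
print(cost_per_bitmile(20_000, 10, 500))
```

The absolute number is vanishingly small; what matters is comparing it across transport options.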

Equinix data centers face the same problem as carriers – when you exceed the capacity of a router chassis (or the square footage of a datacenter) the only option is to mesh chassis together. Equinix is doing this in a metro area using dark fiber from providers like AboveNet (ABVT.pk) and multiple 10G wavelengths to virtualize datacenter capacity across the metro. They call this IBXLink, and it allows customers to place servers in separate data centers while having them appear locally connected. This is only possible if optical transport costs can be minimized.
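
To see why dark fiber is what makes this viable, here is a hedged sketch (all prices invented) of a metro interconnect between two IBXs: leased 10G waves scale linearly with wavelength count, while a dark-fiber pair plus your own WDM terminals stays roughly flat up to the filter's channel count.

```python
# Hedged sketch: metro interconnect cost, dark fiber + owned WDM vs leased
# 10G wavelengths. Every price here is an assumption for illustration.

def dark_fiber_monthly(fiber_lease=5_000, wdm_capex=120_000, amort_months=36):
    """One metro dark-fiber pair plus owned WDM terminals, amortized monthly.
    Cost stays roughly flat up to the passive filter's channel count."""
    return fiber_lease + wdm_capex / amort_months

def leased_waves_monthly(n_waves, price_per_wave=4_000):
    """Leasing individual managed 10G wavelengths from a metro carrier."""
    return n_waves * price_per_wave

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} waves: dark fiber ${dark_fiber_monthly():,.0f}/mo "
          f"vs leased ${leased_waves_monthly(n):,.0f}/mo")
```

With numbers like these the dark-fiber approach wins once more than a couple of wavelengths are needed, which is exactly the regime IBXLink operates in.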

Lane mentioned two approaches he felt were successfully addressing the bitmile problem.

1. Build your own WDM transport network by putting long-haul optical transponders right on the router and installing standalone passive WDM filters. The resulting dirt-cheap optical network is completely passive except for EDFAs. Equinix customers purchase optical modules from suppliers such as Finisar (FNSR) and insert them directly into their switching equipment. This can yield savings of around 60% compared with buying separate transport equipment, but it is not manageable and requires all failure protection to be done in the router (a rough cost sketch follows this list). My opinion – it’s a cheap hack, but sometimes that is all one needs.

2. Stop spending money on QoS when 90% of your traffic is best effort. Move this traffic to a secondary network using best-effort Ethernet L2 switching at roughly 1/10 the per-port cost of high-end routers (see the second sketch after this list). My opinion – this is the future.
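
Here is the rough cost sketch for option one. Every price is an invented placeholder chosen only to show the shape of the comparison; the ~60% figure above comes from the talk, not from these numbers.

```python
# Hedged sketch of the "transponders in the router" economics (option 1).
# All prices are invented placeholders.

WAVES = 8  # 10G wavelengths on one fiber pair

# Approach A: grey optics in the router feeding a standalone transport shelf
grey_optic  = 1_000    # short-reach module in the router, per wave
transponder = 10_000   # transport-shelf transponder, per wave
shelf       = 15_000   # transport chassis, mux/demux and management, per terminal
cost_a = WAVES * (grey_optic + transponder) + shelf

# Approach B: colored DWDM pluggables straight in the router plus a passive filter
dwdm_pluggable = 4_500  # long-reach DWDM module in the router, per wave
passive_filter = 5_000  # passive WDM mux/demux, per terminal
cost_b = WAVES * dwdm_pluggable + passive_filter

print(cost_a, cost_b, f"saved {1 - cost_b / cost_a:.0%}")
```

What the sketch leaves out is exactly what gets flagged above: no transport-layer management or protection, so the router has to carry that burden.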
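
And a similarly hedged sketch for option two, treating port count as proportional to traffic share and using the roughly 10:1 port-cost ratio mentioned above; the absolute prices are made up.

```python
# Hedged sketch of the off-load argument (option 2): keep premium traffic on
# router ports, push best-effort traffic to cheap L2 Ethernet ports.
# Prices are assumptions; only the ~10:1 ratio comes from the article.

ROUTER_PORT = 30_000   # fully featured 10G router port (assumed)
SWITCH_PORT = 3_000    # best-effort 10G Ethernet switch port (assumed)

total_ports = 100
best_effort = 0.9      # "90% of your traffic is best effort"

all_router = total_ports * ROUTER_PORT
offloaded  = (total_ports * (1 - best_effort) * ROUTER_PORT
              + total_ports * best_effort * SWITCH_PORT)

print(all_router, offloaded, f"saved {1 - offloaded / all_router:.0%}")
```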

Option one isn’t a new approach, but the combination of pluggable optics with embedded OAM&P functionality (flexible SONET/OTN/GE provisioning) and new ultra-cheap datacenter switches like those made by Force10 is a truly disruptive solution that combines elements of both options. I believe this solution will be very attractive once this low-cost hardware can implement emerging Metro Ethernet Forum standards.

While option one is good from a cost perspective, at some point network manageability and protection requirements call for a standalone transport solution. To the degree that Equinix is focused on driving down transport costs, the fact that they are a major customer of Infinera (INFN) lends credibility to Infinera’s claims of having a lower-cost transport solution.

Author owns positions in Abovenet and Finisar.

Discussion


  1. Thanks for another insightful analysis. I always felt that Cisco’s vulnerability lies with solution #2. If I can switch at cheap L2 prices, why do I need expensive routers? Off-loading is the way to go.

    Posted by Bill Baker | November 7, 2007, 7:20 PM
  2. Interesting. The interplay between transport (L1), switching (L2), routing (L3), and “services” (multi-layer) boxes is always confusing, and the beauty is in the eye of the beholder (i.e., the network designer/user).

    I’ve always wondered why option 2 hasn’t been used more often. The L2/L3 switch industry has made very few inroads here (they are mainly deployed as L2 boxes), while big routers continue to sell well. It’s been happening for a while, too. Over 5 years ago I visited a major Canadian cable provider’s datacenter that was built like option 2. It consisted of one GSR, lots of low-end Cisco switches, and a backbone of Extreme Black Diamond modular switches. It covered a huge geographic region – all of Alberta and BC – but not a big population. One reason I believed they could get away with this was that they owned the whole network.

    It seems like transport is only required to support the “data-center”. The “end-customers’” money comes from services and connections to routers; thus the good equipment business continues to be routers and services boxes. The transport gear is not directly revenue generating (which does not bode well for it).

    On the Cisco front, I’ve found it interesting that they dominate everything above L1. There are notable chinks in the armour with F5, JNPR, etc., but they still dominate the revenue-generating “functions” and look set to continue.

    Posted by Iain Verigin | November 9, 2007, 1:49 PM
  3. I think if you look at research networks such as SURFnet6 of The Netherlands two years ago, and a growing number of networks on NLR, along with the upsurge in MPLS and now Carrier Ethernet transport options (PBT, e.g.), and then look at how many large corporates are now outfitting their private (dark-fiber) networks at L2, you’ll find that the trend of collapsing Layer 3 down to L1 wavelengths and L2 Ethernet is already well along. Why did we have, and why do we continue to maintain, so many hierarchical instances of routers where they are not really needed in the first place? I chalk this phenomenon up to an absolutely superb selling job by those who brought us “The New World Network” template (consisting of four layers of routing in every node building and service provider location) about eight to ten years ago, as one reason. Also, a greater familiarity with IP, which accounted for the new kids on the block remaining comfortable with routers, and not just a touch of bellhead vs. nethead bigotry, too, I’m sure.

    Posted by Frank Coluccio | November 9, 2007, 4:37 PM
  4. So one thing that is not mentioned here, and rarely in discussions of large data centers, is supplied power per customer. 20A per rack really doesn’t cut it for big routers these days. If I have to purchase large equipment such as Cisco CRS multi-chassis devices, I have to buy an inordinately large amount of space just to get the power to plug this ‘single’ device into, and even then, large colo facilities don’t like it. As power consumption for high-bandwidth backbone gear grows, whether it carries SONET/SDH or Ethernet interfaces, this problem will get bigger. 30MW sounds impressive, but if one customer can only get 2-4kW per cage, is that a very efficient use of space and HVAC?

    Posted by Charlie Brown | December 4, 2007, 3:11 AM

Trackbacks / Pingbacks

  5. Why I didn’t like Transport Gear Markets before and like them less now « Iain’s Chips & Tech | November 9, 2007, 2:25 PM