A few weeks back, while in CA for OFC/NFOEC, I was lucky to get a late-night tour of 365 Main, a massive, state-of-the-art data center near the Embarcadero in San Francisco. My guide, Peter Kranz, is someone I worked (and co-adventured) with in college, and is now the owner and CEO of Unwired, a San Francisco-based CLEC. His equipment is in this data center, and he invited me in for a look.
While I spent over a decade analyzing, understanding, and helping architect optical telecommunication silicon, I had never had the opportunity to set foot in a data center and see it all in action. Thanks, Peter!
Here’s an informal list of my impressions:
- The infrastructure surrounding the networking hardware was the most impressive part of the tour. The building sits on a completely isolated base, allowing it to sway 15 inches without interior damage. The pipes, cables, and walkways are all flexible where they interface with the building. The building itself is an old Marine supply depot, built on bedrock adjacent to the Bay Bridge support pilings.
- The building has 20 megawatts of backup power generation capability that can operate for three days on fuel stored on-site, with contracts in place to rapidly secure more in case of a ‘major event’. There are no batteries in the building; in the event of a sudden outage, a massive flywheel provides power for the few seconds required for the diesel generators to start and reach the RPMs necessary for generation. The same flywheel also provides power conditioning. I don’t know what that flywheel must weigh (a rough sizing sketch follows the list), but I sure would hate to see what happens if it comes off its bearings.
- Power and Heat. Customers told me over and over again how important power consumption was, but standing in a room half the size of a football field stuffed with servers drove years of customer comments home. I could feel the cool A/C blowing into the room, but it seemed to be barely winning the battle against all of the heat radiating from the racks as I stood close. Peter informed me that for some of the customers in the building, the power bills exceed the cost of the square footage they lease. Each customer pays for the electricity their installation uses, and given all of the exotic measures taken to prevent an outage, it costs 2-3x what you pay per kilowatt-hour at home. A PC that costs $10 a month to operate in your home would cost $30 in this data center (a quick sanity check of those numbers follows the list). Google has published papers on how electrical costs are closing in on actual hardware costs, while Sun Microsystems (SUNW) is making a bold bet on this metric against commodity Intel (INTC) and AMD (AMD) CPUs.
- There was a very motley collection of servers throughout the facility, everything from high-end IBM blade servers with perfectly maintained cabling to no-name Taiwanese rack-mount servers with a rat’s nest of CAT5. Walking through the facility and seeing the sheer amount of computing hardware made me realize that there is little future in building commodity server hardware. The profits will come from the management software and other value-add areas. Servers will be bought on the basis of reliability, form factor, and power consumption. Given that these metrics are determined largely by the component vendors that supply the parts the servers are built from, being an assembler of server hardware seems like a low-margin opportunity, unless your specialty is having the best low-margin supply chain, like Dell (DELL).
- Where the server farms were crowded and chaotic, the network colocation room, where all of the broadband carriers terminate and interconnect their WANs, was more of an open cathedral. Security was particularly tight in this area, as a pulled fiber could disrupt a significant amount of traffic. Each carrier had its equipment grouped in a particular area. There was also a clear class system when it came to equipment: SBC/AT&T (T) had high-end Cisco GSRs and Ciena (CIEN) transport equipment, whereas AboveNet (ABVT.PK) had a WDM system from the now-defunct LuxN (bankrupt, acquired by Sorrento, which was in turn acquired by Zhone).
- 365 Main is a hub for much of the fiber owned by PG&E, the local power company. When PG&E runs an electrical cable into the basement of a commercial building, it adds a fiber cable as well. CLECs then contract with a company designated by PG&E to lease this dark fiber and provide services to customers in those buildings. It would seem that these assets are now significantly more valuable given the recent rulings on business line sharing.
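On the flywheel: here’s a back-of-the-envelope sizing sketch in Python. The 20 MW figure comes from the tour; the ride-through time, rotor speed, and rotor geometry are my own guesses, and the real installation is almost certainly split across multiple smaller flywheel units rather than one giant rotor.

```python
import math

# Rough flywheel sizing -- a sketch under assumed numbers, not a description
# of the actual 365 Main installation.
bridge_power_w = 20e6            # backup generation capacity from the post (20 MW)
bridge_time_s = 15               # assumed ride-through until the diesels are at speed
energy_j = bridge_power_w * bridge_time_s   # ~300 MJ of kinetic energy needed

# Kinetic energy of a spinning solid cylinder: E = 0.5 * I * w^2, with I = 0.5 * m * r^2
rpm = 3600                       # assumed rotor speed
omega = rpm * 2 * math.pi / 60   # rad/s
radius_m = 1.0                   # assumed rotor radius

# Solve E = 0.25 * m * r^2 * w^2 for the rotor mass
mass_kg = energy_j / (0.25 * radius_m**2 * omega**2)
print(f"required rotor mass: {mass_kg / 1000:.1f} tonnes")   # roughly 8 tonnes
```

Even with generous assumptions it works out to several tonnes of spinning steel, and in practice the number is higher, since a flywheel can only give up energy down to some minimum speed. So yes, staying clear of those bearings seems wise.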
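And on the power bills, a quick sanity check of the $10-at-home vs. $30-in-the-data-center comparison. The 2-3x multiplier is what Peter quoted; the residential rate per kWh is my assumption.

```python
# Quick sanity check of the home-vs-data-center electricity comparison.
home_rate = 0.10             # assumed residential rate, $/kWh
dc_multiplier = 3            # post quotes 2-3x; take the high end
hours_per_month = 24 * 30

home_bill = 10.0                                      # $/month, from the post
kwh_per_month = home_bill / home_rate                 # ~100 kWh/month
avg_draw_w = kwh_per_month * 1000 / hours_per_month   # ~140 W continuous draw

dc_bill = kwh_per_month * home_rate * dc_multiplier
print(f"average draw ~{avg_draw_w:.0f} W, data-center bill ~${dc_bill:.0f}/month")
```

A box drawing ~140 W around the clock is modest by server standards, which is exactly why the power bill can overtake the rent for a dense installation.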
All in all, a great time for a hardware geek like me. Unfortunately, I have no photos to share, and even if I did, I’m not sure I would be comfortable posting them. Also, I took no notes during my visit, so please be careful quoting any of the above, as my memory is not always to be trusted.
Looks like a great place to work if you’re a Giants fan, but it seems like an expensive spot for compute resources. How will 365 compete with a company that provides the same services but is leasing property in… say, Livermore?
I’ve always used data centers to host my servers (co-location). Now that Andrew mentions it, my data centers were in prime locations in downtown Los Angeles and Marina del Rey. I think there’s got to be more to it than just opening one wherever you want to.
It would be interesting to find out why data centers always seem to be put in expensive areas.
As a colo customer, I’m interested in having servers within walking distance of the office. Perhaps this is a contributing factor for a lot of the customers of CBD-located facilities?
Also, access to various communications and power circuits would have to be better in the CBD.