Intel (INTC) has nearly completed its clawback of server market supremacy with today’s announcement that Sun Microsystems (SUNW) will closely collaborate with Intel. After substantially improving its devices and surpassing the benchmarks set by AMD (AMD), Intel is back in the driver’s seat when it comes to high-margin server CPUs. This was an outcome I felt was never in doubt.
This was a very interesting debate among some very heavy hitters who operate data centers about where the bottlenecks in the data center are, and whether the new model of massively distributed computing in one centralized data center is sustainable.
Continue reading ‘Storewidth Bottlenecks - Gilder Telecosm 2006’
It’s very profitable for distributors - while it lasts. I examine the recent disclosures from Vitesse Semiconductor and the potential impact on their largest distributor, Nu Horizons.
Continue reading ‘Trickle Down Economics and Channel Stuffing’
Perhaps investors will pause for a moment from their ritualistic dead-horse-beating about the demise of Intel (INTC) and the ascendancy of AMD (AMD). They should read a recent performance benchmark from AnandTech, “Intel Woodcrest, AMD’s Opteron and Sun’s UltraSparc T1: Server CPU Shoot-out“.
It’s a great overview of the performance of Intel’s new Woodcrest Server chip, and how the Core architecture is improving the performance of their chips across the board. In my discussion of the New Four Horsemen, I also linked to an article comparing the Intel Core versus AMD’s K8 architecture.
Intel is not without troubles. But the idea that AMD is going to continue to take large share gains is absurd. AMD clearly took advantage of the horrific Xeon performance in the server space. Much of Intel’s share loss, and most of the profit hit, came from losing this high-margin business. Read the AnandTech article and it’s clear to me that the days of easy share gains for AMD are over.
Add to that the fact that one analyst is seeing the signatures of a nasty price war developing, and I think we are seeing what Andy Grove likes to call an inflection point.
Now they just need to fix their awful marketing.
The real loser here? Sun Microsystems (SUNW), whose build-their-own-CPU strategy was pulled from the playbook of virtually every computer company that went out of business in the last 20 years, as we pointed out six months ago.
In the semiconductor business, the clock restarts every two years. You put yourself out of business or the other guy will. There is no permanent advantage. And the clock just restarted. Game on, boys.
Cisco (CSCO), Oracle (ORCL), Sun (SUNW), and EMC (EMC) were the darlings of the internet boom and were referred to as the ‘Four Horsemen’. Your broker was overheard in 2000: “Yes, things are in fact a bit irrational, but these companies have real products, revenues, and earnings, and are investment-grade leaders of the new economy.”
Your broker neglected to mention that the horses of the biblical Four Horsemen of the Apocalypse were ridden by Conquest, War, Famine, and Death. I’ll leave it to my readers to pair them appropriately.
Scott McNealy, ex-CEO of Sun Microsystems (SUNW), talking about Net Neutrality (term defined) (my opinions here) in an interview with the Washington Post:
A few weeks back, while in CA for OFC/NFOEC, I was lucky to get a late-night tour of 365 Main, a massive, state-of-the-art data center near the Embarcadero in San Francisco. My guide, Peter Kranz, was someone I worked and adventured with in college; he is now the owner and CEO of Unwired, a San Francisco-based CLEC. His equipment is in this data center, and he invited me in for a look.
While I spent over a decade analyzing, understanding, and helping architect optical telecommunication silicon, I never had the opportunity to set foot in a data center and see it all in action. Thanks Peter!
Here’s an informal list of my impressions:
- The infrastructure surrounding the networking hardware was the most impressive part of the tour. The building sits on a completely isolated base, allowing it to sway 15 inches without interior damage. The pipes, cables, walkways - all are flexible at the interface to the building. The building itself is an old Marine supply depot, built on bedrock adjacent to the Bay Bridge support piling.
- The building has 20 megawatts of backup power generation capability that can operate for three days on fuel stored on-site, with contracts to rapidly secure more if needed in case of a ‘major event’. There are no batteries in the building; in the event of a sudden outage, a massive flywheel provides power for the few seconds required for the diesel generators to start and reach the RPMs necessary for generation. The same flywheel also provides power conditioning. I don’t know what that flywheel must weigh, but I sure would hate to see what happens if it comes off its bearings.
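The flywheel-bridging idea above is simple rotational-energy arithmetic: stored energy E = ½Iω² has to cover the critical load until the diesels spin up. A rough sketch, with entirely made-up flywheel parameters (365 Main’s actual specs are not public to me):

```python
import math

def flywheel_energy_joules(mass_kg, radius_m, rpm):
    """Kinetic energy of a solid-disk flywheel: E = 1/2 * I * w^2, with I = 1/2 * m * r^2."""
    inertia = 0.5 * mass_kg * radius_m ** 2          # kg*m^2
    omega = rpm * 2 * math.pi / 60.0                 # rad/s
    return 0.5 * inertia * omega ** 2

# Assumed flywheel: 1 tonne, 0.5 m radius, spinning at 3600 RPM.
energy_j = flywheel_energy_joules(1000, 0.5, 3600)

# If the critical load during the transfer is 2 MW, how long can the flywheel
# bridge the gap? (Ignores conversion losses and the minimum usable speed.)
load_w = 2e6
ride_through_s = energy_j / load_w
print(f"Stored energy: {energy_j / 1e6:.1f} MJ, ride-through at 2 MW: {ride_through_s:.1f} s")
```

With these assumed numbers the flywheel stores roughly 9 MJ, enough for a few seconds at megawatt-scale load, which is consistent with the “few seconds until the diesels catch” role described above.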
- Power and heat. Customers had told me over and over how important power consumption was, but standing in a room half the size of a football field stuffed with servers drove home years of customer comments. I could feel the cool A/C blowing in the room, but it felt like it was barely winning the battle against all of the heat I felt radiating from the racks as I stood close. Peter informed me that for some of the customers in the building, the power bills exceeded the cost of the square footage they leased. Each customer pays for the electricity their installation uses, and given all of the exotic measures taken to prevent an outage, it costs 2-3x what you pay per kilowatt-hour at home. A PC that costs $10 a month to operate in your home would cost $30 in this data center. Google has published papers on how electrical costs are closing in on actual hardware costs, while Sun Microsystems (SUNW) is betting the company on this metric in a bold bet against commodity Intel (INTC) and AMD (AMD) CPUs.
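The 3x multiple above is easy to sanity-check with back-of-the-envelope arithmetic. A quick sketch, using an assumed residential rate and a typical mid-2000s server load (both are illustrative numbers, not actual tariffs):

```python
def monthly_cost(load_watts, rate_per_kwh, hours=24 * 30):
    """Electricity cost for a constant load running a whole (30-day) month."""
    kwh = load_watts / 1000.0 * hours
    return kwh * rate_per_kwh

home_rate = 0.12           # $/kWh, assumed residential rate
dc_rate = 3 * home_rate    # the ~3x premium cited for conditioned datacenter power

server_w = 300             # assumed draw for a mid-2000s 1U server under load
print(f"home:       ${monthly_cost(server_w, home_rate):.2f}/month")
print(f"datacenter: ${monthly_cost(server_w, dc_rate):.2f}/month")
```

At these assumed numbers a single always-on server runs about $26/month at home versus about $78/month in the facility, which makes it easy to see how a few racks’ power bill can outgrow the floor-space lease.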
- There was a very motley collection of servers throughout the facility, everything from IBM high-end blade servers with perfectly maintained cabling to Taiwanese no-name rackable servers with a rat’s nest of CAT5. Walking through the facility and seeing the sheer amount of computing hardware made me realize that there is little future in building commodity server hardware. The profits will come from the management software and other value-add areas. Servers will be bought on the basis of reliability, form factor, and power consumption. Given that these metrics are largely determined by the component vendors supplying the parts the servers are built from, being an assembler of server hardware seems like a low-margin opportunity - unless your specialty is having the best low-margin supply chain, like Dell (DELL).
- Where the server farms were crowded and chaotic, the network colocation room, where all of the broadband carriers terminate and interconnect their WANs, was more of an open cathedral. Security was particularly tight for this area, as a pulled fiber could disrupt a significant amount of traffic. Each carrier had its equipment bunched in a particular area. There was also a clear class system when it came to equipment - SBC/AT&T (T) had high-end Cisco GSRs and Ciena (CIEN) transport equipment, whereas AboveNet (ABVT.PK) had a WDM system from the now-defunct LuxN (bankrupt, acquired by Sorrento, acquired by Zhone).
- 365 Main is a hub for much of the fiber owned by PG&E, the local power company. When PG&E puts an electrical cable into the basement of a commercial building they add in a fiber cable as well. CLECs then contract with a company designated by PG&E to lease this dark fiber and provide services to customers in those buildings. It would seem that these assets are now significantly more valuable given the recent rulings on business line sharing.
All in all, a great time for a hardware geek like me. Unfortunately, I have no photos to share, and even if I had them I am not sure I would be comfortable posting them. Also, I took no notes during my visit, so please be careful quoting the above, as my memory is not always to be trusted.