Last month I wrote that it's not possible for you to build VDI cheaper than a huge DaaS provider like Amazon can sell it to you. Amazon can literally sell you DaaS and make a profit all for less than it costs you to actually build and operate an equivalent VDI system on your own. ("Equivalent" is the key word there. Some have claimed they can do it cheaper, but they're achieving that by building in-house systems with lower capabilities than what the DaaS providers offer.)
One of the reasons huge providers can build VDI cheaper than you is because they're doing it at scale. While we all understand the economics of buying servers by the container instead of by the rack, there's more to it than that when it comes to huge cloud providers. Their datacenters are not crammed full of HP's or Dell's latest rack mount, blade, or Moonshot servers; rather, they're stacked floor-to-ceiling with heaps of circuit boards you'd hardly recognize as "servers" at all.
Building Amazon's, Google's, and Facebook's "servers"
For most corporate datacenters, rack-mounted servers from vendors like Dell and HP make sense. They're efficient in that they're modular, manageable, and interchangeable. If you take the top cover off a 1U server, it looks like everything is packed in there. On the scale of a few dozen racks managed by IT pros who have a million other things on their mind, these servers work wonderfully!
But what if you worked at Amazon, and your boss just told you that you have to pick the hardware to run a VDI environment for 100,000 users? What would you buy? Sure, you can do the back-of-the-napkin calculation to see that you're looking at 20,000 CPU cores and 400,000 gigabytes of memory, but how do you get that? Do you go out and buy 2,000 servers from Dell or HP? Probably not. 2,000 1U servers take up a lot of space—they'd be almost 300 feet high if stacked one on top of the other.
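That napkin math can be sketched out explicitly. The per-user ratios below (5 concurrent users per core, 4 GB of RAM per user, 10 cores per 1U box) are my assumptions, reverse-engineered from the article's round numbers:

```python
# Back-of-the-napkin sizing for a 100,000-user VDI farm.
# Assumed ratios (implied by the figures above): 5 users per CPU core,
# 4 GB of RAM per user, and 10 cores per 1U server.
users = 100_000

cores_needed = users // 5        # 20,000 cores
ram_gb_needed = users * 4        # 400,000 GB

cores_per_server = 10
servers_needed = cores_needed // cores_per_server   # 2,000 servers

# Stack 2,000 1U (1.75-inch) servers: how tall is the pile?
stack_feet = servers_needed * 1.75 / 12             # just under 300 ft

print(cores_needed, ram_gb_needed, servers_needed, round(stack_feet))
```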
Instead you take a commercial off-the-shelf 1U server and look at it—I mean really look at it. It has a lot of nice features for customers buying a few dozen at a time. But you're buying a few thousand at a time. So you open it up. What do you find? A lot of air. You're paying your server vendor to enclose a lot of air into a 1.75" tall metal box, stacks of which you'll place into an even larger metal box (which, conveniently, you'll also buy from them).
So the first things to go are those metal boxes. No server chassis and no racks. You just need the guts.
Next up are the power supplies. Why does each server need its own? Power supplies are expensive in every sense of the word: they cost money, they take up space, and they waste power since they're not too efficient at converting AC to DC. Let's take a fresh look at this. The power coming into your datacenter is, what, 480 volts? Maybe 277? Then it's cut down to 110, run down your aisle where a pair of power supplies (in every server!) convert it to DC with outputs at 12, 5, and 3.3 volts. Why are we going through all that conversion effort, thousands and thousands of times, in tiny little inefficient circuits in metal boxes inside other metal boxes inside bigger metal boxes that we bought from our server vendor?
Instead, why don't we just install a big power supply at the end of the aisle that takes 277 volts AC and converts it directly to 12vdc with enough current to directly power a few thousand of our motherboards? Then we can design (or buy) our custom box-less server motherboards that only require a single 12vdc input.
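To see roughly why centralizing the AC-to-DC conversion pays off, here's a sketch of the waste-heat savings. The efficiency figures (85% for a commodity per-server PSU, 95% for a large centralized rectifier) and the 300 W per-board load are illustrative assumptions, not measurements:

```python
# Rough comparison: per-server PSUs vs. one big aisle-level rectifier.
# All efficiency and load numbers below are illustrative assumptions.
servers = 2_000
dc_watts_per_server = 300            # assumed DC load per board

psu_efficiency = 0.85                # typical commodity server PSU
rectifier_efficiency = 0.95          # large centralized AC->DC unit

dc_load = servers * dc_watts_per_server             # 600 kW of useful load

ac_draw_per_server_psus = dc_load / psu_efficiency
ac_draw_centralized = dc_load / rectifier_efficiency

saved_kw = (ac_draw_per_server_psus - ac_draw_centralized) / 1000
print(f"~{saved_kw:.0f} kW less waste heat to pay for and then cool")
```

And that's double savings: every watt not lost in conversion is also a watt you don't have to remove with air conditioning.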
Oh yeah, I should mention at this point that we're designing or buying our own custom motherboards... It's not as daunting as it sounds, because 99% of a motherboard's design is already done by Intel, and everything outside of what they provide is just bloat we don't need anyway.
So now that we're looking at custom-built motherboards, let's see what else we don't need. For example, do our servers need VGA ports? We're running thousands of servers...will we ever be in a situation where we need to plug a monitor into one? Does our DaaS engine even have a UI? No. So get rid of the VGA port and the display controller along with its costs (capital, power, heat, and space). Gone-zo!
And those USB ports? Zap! The USB controller? Buh-bye!
Okay, so what else is inside that 1U server that we don't need? How about that smaller pair of 3.5" metal boxes (which are mostly empty) that we call hard drives? Sure, dedicating twenty cubic inches per drive made sense when we had magnetic spinning platters and controller boards filled with discrete components, but what's inside the SSD drives that we're using now? More chips? Umm... yeah, you're gone. Rip the chips out of the SSD drives and solder them directly to our motherboard. While we're at it we can rip out the SATA and SAS controllers and connect the SSD controller chip directly to the PCI bus.
Hey, this is getting fun! What else can we do? What about all those eight tiny circuit boards standing up on edge with more chips on them? What's that, memory? Why are we wasting space and money with cute little edge connectors and angled boards? Get rid of them, and solder all those chips directly to our motherboard too.
So what are we left with when our little game is over? We have a single circuit board with a couple of Intel CPUs, memory, and SSD chips, with the only connections to the outside world being a couple of Ethernet ports and a 12vdc power connection. Most importantly, our new "server" costs less, consumes less power, generates less heat, and takes up less space than the smallest commercial rack-mount, blade, (or even Moonshot) server you can buy.
We'll keep this thing up-to-date with the times, too. Maybe we'll buy a few off-the-shelf Nvidia GRID K2 cards for testing, but when it comes time to roll out GPUs for our DaaS platform, do you think we're going to pay a few thousand bucks each for a stack of GRID cards? Hell no! We're going to call Jen-Hsun and say, "Hey, send us ten thousand GRID GPUs—that's right, just the chips—and we'll take it from there." ("Also we will pay you $500 each.")
How real is this?
Okay, so that was fun. But how realistic is it? More real than you might think. Back in 2011, Facebook announced plans to openly share their datacenter designs (with custom servers like this) via an initiative they're calling the Open Compute Project. Google has shared a bit about what they're doing too as they aim to be more transparent about their efficiency efforts. (Check out their efforts from five years ago. Crazy back then, and I'm sure even crazier now!)
Of course we don't actually know what's going on inside those datacenters today, but we can be sure those cloud providers are thinking more along these lines rather than sending RFQs to Dell and HP. And the Open Compute Project means that even smaller DaaS and IaaS providers who don't have electrical engineers on staff can still buy these types of systems from white box builders in Asia.
The scale of these providers means they strive to minimize the "value add" from their suppliers. They don't need a reseller, distributor, or server-box maker to do anything for them that they can do in house. They only go outside to get the lowest-level stuff they need, buying it from the people who literally build it (Intel, Nvidia, Samsung, SanDisk, etc.). The truly massive cloud providers are not buying from HP or Dell.
What does this mean for DaaS and VDI?
So here's the thing: Amazon is paying significantly less than you to buy servers. They're paying less for real estate to house them, they're paying less for electricity to power them, and they're paying less for the energy to cool them. This is why there is absolutely no way you can compete with them on price. (They also have more VDI experts than you, but that's an article for a future day. Also their kids are prettier.)
To be clear, I'm not saying you should never build VDI on your own or that you should only use DaaS (since there's so much more that goes into the decision). I'm just saying that if it turns out that the VDI you need has the same specs as what you can buy from a DaaS provider, you will never beat someone like Amazon on price.
What does this mean for HP and Dell?
While slightly off-topic, it's interesting to think about what this custom hardware trend means for server hardware vendors like HP, Dell, all the storage vendors who sell hardware, and, well, pretty much all of the hardware vendors other than those who manufacture the boards and chips.
While it sounds like a crazy future, if you believe that more of our on-premises datacenters will move to the cloud, and that the cloud providers need to have scale to compete, and that having scale means that you're not paying for bezels and metal boxes, then, err, what does this mean for HP's and Dell's server hardware business long term? Yikes! :(