Why huge DaaS providers don't use Dell and HP, and why they can do VDI cheaper than you!

Last month I wrote that it's not possible for you to build VDI cheaper than a huge DaaS provider like Amazon can sell it to you. Amazon can literally sell you DaaS and make a profit all for less than it costs you to actually build and operate an equivalent VDI system on your own. ("Equivalent" is the key word there. Some have claimed they can do it cheaper, but they're achieving that by building in-house systems with lower capabilities than what the DaaS providers offer.)

One of the reasons huge providers can build VDI cheaper than you is that they're doing it at scale. While we all understand the economics of buying servers by the container instead of by the rack, there's more to it than that when it comes to the huge cloud providers. Their datacenters are not crammed full of HP's or Dell's latest rack mount, blade, or Moonshot servers; rather, they're stacked floor-to-ceiling with heaps of circuit boards you'd hardly recognize as "servers" at all.

Building Amazon's, Google's, and Facebook's "servers"

For most corporate datacenters, rack-mounted servers from vendors like Dell and HP make sense. They're efficient in that they're modular, manageable, and interchangeable. If you take the top cover off a 1U server, it looks like everything is packed in there. On the scale of a few dozen racks managed by IT pros who have a million other things on their mind, these servers work wonderfully!

But what if you worked at Amazon and your boss just told you to pick the hardware to run a VDI environment for 100,000 users? What would you buy? Sure, you can do the back-of-the-napkin calculation to see that you're looking at 20,000 CPU cores and 400,000 gigabytes of memory, but how do you get that? Do you go out and buy 2,000 servers from Dell or HP? Probably not. 2,000 1U servers take up a lot of space: stacked one on top of the other, they'd be almost 300 feet high.
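(If you want to check my napkin, here's the math spelled out. The ratios and the per-server specs in the sketch below are illustrative assumptions that happen to land on those round numbers, not anyone's real sizing guide.)

```python
# The back-of-the-napkin sizing written out. The ratios (5 users per core,
# 4 GB of RAM per user) and the 10-core / 200 GB commodity 1U server are
# illustrative assumptions, not vendor specs.

users = 100_000
users_per_core = 5                 # assumed consolidation ratio
ram_per_user_gb = 4                # assumed memory per desktop

cores_needed = users // users_per_core        # 20,000 cores
ram_needed_gb = users * ram_per_user_gb       # 400,000 GB

cores_per_1u = 10                  # hypothetical off-the-shelf 1U server
ram_per_1u_gb = 200

servers = max(cores_needed // cores_per_1u, ram_needed_gb // ram_per_1u_gb)
stack_height_ft = servers * 1.75 / 12         # 1U is 1.75 inches tall

print(f"{cores_needed:,} cores, {ram_needed_gb:,} GB of RAM")
print(f"{servers:,} x 1U servers, ~{stack_height_ft:.0f} feet stacked one on top of the other")
```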

Instead you take a commercial off-the-shelf 1U server and look at it—I mean really look at it. It has a lot of nice features for customers buying a few dozen at a time. But you're buying a few thousand at a time. So you open it up. What do you find? A lot of air. You're paying your server vendor to enclose a lot of air into a 1.75" tall metal box, stacks of which you'll place into an even larger metal box (which, conveniently, you'll also buy from them).

So the first things to go are those metal boxes. No server chassis and no racks. You just need the guts.

Next up are the power supplies. Why does each server need its own? Power supplies are expensive in every sense of the word: they cost money, they take up space, and they waste power since they're not too efficient at converting AC to DC. Let's take a fresh look at this. The power coming into your datacenter is, what, 480 volts? Maybe 277? Then it's cut down to 110 and run down your aisle, where a pair of power supplies (in every server!) converts it to DC with outputs at 12, 5, and 3.3 volts. Why are we going through all that conversion effort, thousands and thousands of times, in tiny, inefficient circuits in metal boxes inside other metal boxes inside bigger metal boxes that we bought from our server vendor?

Instead, why don't we just install a big power supply at the end of the aisle that takes 277 volts AC and converts it directly to 12vdc with enough current to directly power a few thousand of our motherboards? Then we can design (or buy) our custom box-less server motherboards that only require a single 12vdc input.
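To put rough numbers on it, here's a quick sketch comparing thousands of little per-server power supplies to one big rectifier at the end of the aisle. The efficiency figures and the per-board load are assumptions for illustration only, not measurements from anyone's datacenter.

```python
# Rough sketch of the power argument: thousands of small per-server AC/DC
# power supplies vs. one large rectifier per aisle. The efficiency figures
# (85% for a small PSU, 95% for a bulk 277 VAC to 12 VDC rectifier) and the
# 300 W DC load per board are illustrative assumptions, not measured numbers.

boards = 2_000
dc_load_per_board_w = 300          # assumed draw of one stripped-down board

per_server_psu_eff = 0.85          # assumed efficiency of a small server PSU
bulk_rectifier_eff = 0.95          # assumed efficiency of one big rectifier

total_dc_w = boards * dc_load_per_board_w
ac_in_per_server_psus = total_dc_w / per_server_psu_eff
ac_in_bulk_rectifier = total_dc_w / bulk_rectifier_eff

saved_kw = (ac_in_per_server_psus - ac_in_bulk_rectifier) / 1000
print(f"~{saved_kw:.0f} kW less turned into waste heat across {boards:,} boards")
```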

Oh yeah, I should mention at this point that we're designing or buying our own custom motherboards... It's not as daunting as it sounds, because 99% of a motherboard's design is already done by Intel, and everything outside of what they provide is just bloat we don't need anyway.

So now that we're looking at custom-built motherboards, let's see what else we don't need. For example, do our servers need VGA ports? We're running thousands of servers...will we ever be in a situation where we need to plug a monitor into one? Does our DaaS engine even have a UI? No. So get rid of the VGA port and the display controller along with its costs (capital, power, heat, and space). Gone-zo!

And those USB ports? Zap! The USB controller? Buh-bye!

Okay, so what else is inside that 1U server that we don't need? How about that smaller pair of 3.5" metal boxes (which are mostly empty) that we call hard drives? Sure, dedicating twenty cubic inches per drive made sense when we had magnetic spinning platters and controller boards filled with discrete components, but what's inside the SSD drives we're using now? More chips? Umm... yeah, you're gone. Rip the chips out of the SSD drives and solder them directly to our motherboard. While we're at it, we can rip out the SATA and SAS controllers and connect the SSD controller chip directly to the PCIe bus.

Hey, this is getting fun! What else can we do? What about those eight tiny circuit boards standing up on edge with more chips on them? What's that, memory? Why are we wasting space and money on cute little edge connectors and angled boards? Get rid of them, and solder all those chips directly to our motherboard too.

So what are we left with when our little game is over? We have a single circuit board with a couple of Intel CPUs, memory, and SSD chips, with the only connections to the outside world being a couple of Ethernet ports and a 12vdc power connection. Most importantly, our new "server" costs less, consumes less power, generates less heat, and takes up less space than the smallest commercial rack-mount, blade, or even Moonshot server you can buy.
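Here's the whole thought experiment boiled down to a checklist, purely as a recap of the steps above (it's not a real bill of materials from any vendor):

```python
# The thought experiment as a simple checklist: what a stock 1U server ships
# with vs. what the stripped-down cloud board keeps. This just restates the
# article; it is not an actual vendor bill of materials.

stock_1u_server = {
    "chassis and rack", "dual power supplies", "VGA port + display controller",
    "USB ports + controller", "3.5in drive bays", "SATA/SAS controllers",
    "DIMM slots", "CPUs", "memory chips", "flash chips", "Ethernet",
}

cloud_board = {
    "CPUs",
    "memory chips",        # soldered straight to the board
    "flash chips",         # SSD controller hangs off the PCIe bus
    "Ethernet",
    "12vdc input",         # fed by the big rectifier at the end of the aisle
}

print("Stripped out:", sorted(stock_1u_server - cloud_board))
print("What's left: ", sorted(cloud_board))
```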

We'll keep this thing up to date with the times, too. Maybe we'll buy a few off-the-shelf Nvidia GRID K2 cards for testing, but when it comes time to roll out GPUs for our DaaS platform, do you think we're going to pay a few thousand bucks each for a stack of GRID cards? Hell no! We're going to call Jen-Hsun and say, "Hey, send us ten thousand GRID GPUs (that's right, just the chips) and we'll take it from there." ("Also we will pay you $500 each.")
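Just to show why that phone call is worth making, here's the ballpark. The roughly $3,000 boxed-card price is my reading of "a few thousand bucks each"; the $500 bare-chip price is the hypothetical from the paragraph above. Neither is a real quote.

```python
# Ballpark on that GPU phone call. Both prices are hypotheticals taken from
# the paragraph above, not real quotes.

gpus = 10_000
price_per_boxed_card = 3_000       # assumed reading of "a few thousand bucks each"
price_per_bare_chip = 500          # the article's hypothetical bare-chip price

print(f"Buying boxed GRID cards: ${gpus * price_per_boxed_card:,}")  # $30,000,000
print(f"Buying bare chips:       ${gpus * price_per_bare_chip:,}")   # $5,000,000
```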

How real is this?

Okay, so that was fun. But how realistic is it? More real than you might think. Back in 2011, Facebook announced plans to openly share their datacenter designs (with custom servers like this) via an initiative they call the Open Compute Project. Google has shared a bit about what they're doing too as they aim to be more transparent about their efficiency efforts. (Check out their efforts from five years ago. Crazy back then, and I'm sure even crazier now!)

Of course we don't actually know what's going on inside those datacenters today, but we can be sure those cloud providers are thinking more along these lines rather than sending RFQs to Dell and HP. And the Open Compute Project means that even smaller DaaS and IaaS providers who don't have electrical engineers on staff can still buy these types of systems from white box builders in Asia.

The scale of these providers means they strive to minimize the "value add" from their suppliers. They don't need a reseller, distributor, or server-box maker to do anything for them that they can do in house. They only go outside to get the lowest-level stuff they need, buying it from the people who literally build it (Intel, Nvidia, Samsung, SanDisk, etc.). The truly massive cloud providers are not buying from HP or Dell.

What does this mean for DaaS and VDI?

So here's the thing: Amazon is paying significantly less than you to buy servers. They're paying less for real estate to house them, they're paying less for electricity to power them, and they're paying less for the energy to cool them. This is why there is absolutely no way you can compete with them on price. (They also have more VDI experts than you, but that's an article for a future day. Also their kids are prettier.)
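If you want to sanity-check that claim against your own environment, the comparison boils down to something like the sketch below. Every input is a placeholder assumption to show the shape of the math; plug in your own hardware, facilities, licensing, and labor numbers and compare against the DaaS list price you're actually quoted.

```python
# A skeletal way to frame the comparison: per-desktop monthly cost of DIY VDI
# vs. a DaaS list price. Every input is a placeholder assumption to show the
# shape of the math; none of these are real quotes or benchmarks.

def diy_monthly_cost_per_desktop(
    hw_capex_per_desktop=600.0,     # assumed server/storage share per desktop
    amortization_months=36,         # assumed hardware refresh cycle
    facilities_per_month=8.0,       # assumed power, cooling, and space
    licensing_per_month=10.0,       # assumed broker/hypervisor/OS licensing
    ops_labor_per_month=12.0,       # assumed admin time spread across desktops
):
    return (hw_capex_per_desktop / amortization_months
            + facilities_per_month + licensing_per_month + ops_labor_per_month)

daas_list_price = 35.0              # placeholder DaaS price per desktop/month

print(f"DIY estimate:  ${diy_monthly_cost_per_desktop():.2f} per desktop/month")
print(f"DaaS estimate: ${daas_list_price:.2f} per desktop/month")
```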

To be clear, I'm not saying you should never build VDI on your own or that you should only use DaaS (since there's so much more that goes into the decision). I'm just saying that if it turns out that the VDI you need has the same specs as what you can buy from a DaaS provider, you will never beat someone like Amazon on price.

What does this mean for HP and Dell?

While slightly off-topic, it's interesting to think about what this custom hardware trend means for server hardware vendors like HP, Dell, all the storage vendors who sell hardware, and, well, pretty much all of the hardware vendors other than those who manufacture the boards and chips.

It sounds like a crazy future, but if you believe that more of our on-premises datacenters will move to the cloud, and that cloud providers need scale to compete, and that having scale means not paying for bezels and metal boxes, then, err, what does this mean for HP's and Dell's server hardware business long term? Yikes! :(

Join the conversation

4 comments


It's certainly a valid point, although I propose that no one really should care whether the Amazons of the world can 'do it for cheaper'. What everyone cares about is whether they can buy it from the Amazons of the world cheaper than they can do it themselves.


It would be interesting for you to do a post that expands on the cost comparison of a couple of DaaS providers vs. a few on-prem scenarios. It is easy to conclude that the Amazons of the world are really smart and have massive scale, but that doesn't mean they aren't making huge profit margins, or that they're doing things efficiently on top of the hardware (i.e., overcommitment). From our internal cost calculations, we continue to be considerably cheaper doing it ourselves, although this is going to be a YMMV situation (depending on how efficiently one does this themselves). I suspect the DaaS providers will continue to drop prices at a rapid pace, as they have with their IaaS products, so this will continue to change over time and we will continue to watch this space. This is not going to be the case for everyone.


It may be surprising to many, but many IaaS providers are not overcommitting resources on their platforms. This results in less cost efficiency, as you can imagine, and higher prices to purchasers. For people performing heavy levels of overcommitment in their own internal infrastructure, this can be material enough to make their costs lower than the IaaS providers' (generally speaking here, not specific to DaaS). I've yet to see any published information on DaaS providers and their levels of overcommitment, so I'm not sure if this same dynamic will play out in the DaaS space as it has in the server IaaS space.



Hi Brian,


As with the previous article about AWS, this disregards both latency and the fact that customers will be looking for something specific to *them*, not a cookie-cutter model that is the same for all. If it were not for these two points I would agree with you wholeheartedly.


As it is, all of the SMB customers are going to want something a little more personal than what AWS will be able to provide. And on the point of latency: unless SMBs are prepared to commit to moving *ALL* of their compute to AWS, then just moving the VDI portion is simply going to add latency to the whole equation?


I'm happy to be disproved, but it seems that latency should still be a valid concern?


Cheers,


Dave



I am an SMB with 50 users, and 40 of them just use applications which will happily run on a desktop in the cloud, with no data 'proximity' issues, and where latency really isn't too much of an issue.


I still need to provide some more challenging applications to my remaining 10 users and as David says, latency and data proximity ARE an issue.


So, do I cloud host my 40 and deal with the hassle of hosting the remaining 10 myself?


At what point does the ratio of Cloud hosted desktops versus on-premise desktops make the economics of the 'Cloud' piece viable?


How many SMBs are in a position where they can realistically push all of their desktops into the cloud and make the numbers add up? I really don't see this flying as a mainstream solution in the enterprise!


When network bandwidth becomes ubiquitous (been waiting for that for 15 years, ubiquity is always about a year away!) and Cloud providers can guarantee latency, maybe this will be a viable proposition.



I cannot help but feel this scenario is a little far-fetched, even if believable. We know Google and Facebook build their own custom servers and that Amazon does too, but I am not entirely sure this makes a DaaS service more cost-effective.


Taking one look at Amazon WorkSpaces, it's expensive compared to our own DaaS offering and those of our nearest competitors, plus they do not offer on-boarding services.


So if Amazon is building its DaaS on highly customized platforms, then it is not passing the cost advantages on to its customers.


Google and Facebook don't do DaaS, but I can see their cookie-cutter approach to custom server building working for DaaS. I just don't see it working the way it should: to deliver a cost-effective service to customers.


If you are looking for a custom DaaS provider who can build you a private DaaS cloud to specification, check out my own company tuCloud : tucloud.com/DaaS_Provider_Info.html


We can build cost-effective DaaS platforms, and we do.


