If HP's "machine" really shrinks a datacenter to the size of a fridge, what will it mean for us?

You've probably read about the HP Labs research project called "The Machine"—which HP says will use "electrons for computation, photons for communication, and ions for storage." This is one of those things that we hear about every so often that seems really cool, but always leaves me scratching my head about how it will affect our little corner of the IT world. (I can think about "our little corner" both in terms of enterprise end user computing as well as enterprise IT in general.)

So I wonder... let's imagine that this thing exists and we suddenly have a 1000x leap in terms of computing power, storage, and memory for the same size and power of today's devices. Given that, what's the impact to enterprise IT? (In other words, I don't want to talk about whether this thing will actually exist, rather, let's imagine that we suddenly have 1000x the power. What does that mean for us? What do we do with it?)

I guess that rather than renting little virtual servers from cloud providers, we'll all run the equivalent of an entire cloud datacenter in our own server rooms (or potentially on our own personal devices). That's cool and all, but so what? We'd still need all the connectivity to get them to do anything more useful than they already do today.

When it comes to desktop virtualization, that's all about Windows, so what would Windows even do with 1000x the power? Sure, all my users have VMs that boot instantly and always run at full speed, but now what? The first thing we'd need is to really dial in our management and automation systems. (If you thought "VM sprawl" was a problem before, imagine what it would be like when firing up a new VM as powerful as any server in your environment costs only a few tenths of a penny per month.)

I could envision new types of apps being created in a world where memory and persistent storage are essentially one and the same (and unlimited), but we'll still have the same challenges we have today, since most of the big enterprise apps are so ingrained into the roots of a company that it would take years or decades to rewrite them. I wonder if that's something 1000x the power could do? Some kind of really incredible app transformation? Then again, are today's app transformation products hamstrung by a lack of MIPS? I would think not.

I'm curious as to whether anyone has any thoughts on this? If we had 1000x the computing power in our datacenters for the same costs of operation today, how would that change your world?

Join the conversation



My perspective is that the HP vision goes beyond the world of VMs and Windows.  I’ve always seen VMs and VDI as a stop-gap solution to the inefficiencies of how we use our computing resources, bridging the old world of OSes and applications designed for standalone and underutilized physical computers to a new world of shared, clustered resources that can be more effectively utilized.

In HP’s vision I would think the model of containerized apps (the “Docker” model) is more appropriate, with those containers being able to run just as easily on a local device as the cloud (and maybe that distinction no longer matters).  Since there is no longer a memory vs storage distinction, the current client OS does not make sense.  I would think applications integrate and communicate on a device in the same way they do in the cloud.  The Chrome OS model may be the closest to this today.

Another way to think about it: instead of managing thousands of individual virtual operating systems running on a big cluster of machines, how about managing one single operating system that runs across thousands of little physical machines?  It's really just an expansion of clustered computing.

As for enterprise “legacy” applications and systems, I really do not see them migrating as-is to this new world of HP’s vision.  I imagine they will be stuck on old traditional architectures, or possibly further virtualized on top of the new architecture, but maybe the costs of maintaining them would encourage re-writing to take advantage of the new capabilities.

So personally I would not worry about this new vision adding management headaches to the current environment in the way it exists today, but in the long run I would see it (hopefully) drastically changing how we manage our environments and deliver services.


The people who would most likely be able to take advantage of this new stuff the quickest are, unfortunately, the bad guys.  Think more systems hacked into faster, more data breached... better encryption methods now breakable with less effort.  Thanks, progress.


Windows 24 will need 4TB of RAM, 32 3.0GHz CPUs, and 16TB of disk space, so you'll still only get around 100 VMs per physical host. Office will scale equally efficiently.

Seriously, if Windows' resource requirements had remained static for the past 10 years, a typical server today would appear the same as the future system you are talking about.

Software will grow in size and complexity exponentially and consume all available resources, regardless of how hardware capacity increases.

I still look back with fondness on the days when I had to squeeze NetBEUI, NBTCP and the Novell ODI stack into just 256k of memory :-)


DR could be much simpler.

Pick up your Rubik's-cube-sized data centre hardware device and relocate to Starbucks, connect to their ubiquitous 100Gb free WiFi, place your data centre on the coffee table and take power from their free inductive charger.

The coffee will be just as bad as the stuff in your normal data centre.


Agree with driving up utilization and the bad guys exploiting it, and I'll add that the porn industry will take advantage of it too.


While 1000 times the power would be nice, Fink's Machine is targeted to deliver only 5 times the performance of current-generation hardware and will do nothing to change how we compute. Its big advantage is that it will do so with 80 times less energy than today's servers.

Back when I was directly concerned about the data center footprint of thousands of XenApp servers, I spent a lot of time looking at energy consumption. In 2007 the EPA produced a report showing that US data centers consumed 1.5% of US electricity production ( www.energystar.gov/.../Report%20to%20Congress%20on%20Server%20and%20Data%20Center%20Energy%20Efficiency.pdf), which was projected to rise to 3% by 2011. If I were considering hardware for the future, a new hardware platform that consumes less energy (and therefore produces less waste heat) could save me a significant amount of money in data center energy costs, and would mean I could increase my compute power per square foot without worrying about exceeding the maximum capacity of my data center. That in turn could save millions of dollars in new data center construction costs.

If HP delivers here then, as well as doing very nicely in the enterprise, it will do huge business selling this technology to Google, Facebook, Amazon, Microsoft, and anyone else with cloud-scale applications that can be readily tweaked to take advantage of specific hardware, either by selling boatloads of boxes or by licensing the technology so that the cloud giants can incorporate it directly into their own bespoke platforms.