Is converged storage (compute + storage in one box) a good thing or bad?

There's a trend in the datacenter called "converged storage" which is where you combine compute and storage into the same box. In our world Nutanix is probably the best known example of this, though we've written about others too (like Simplivity and Scale Computing).

I loved this idea when I first heard about it back in 2011. In fact, Gabe and I gave Nutanix the "Best of Show" award in the desktop virtualization category at VMworld that year. (And they've won Best of Show or been a runner-up in 2012 and 2013 too.)

It seems that the rest of the world is catching on too, with more people talking about how converged storage is the next big thing. (Over 1m results in Google for ["converged storage"].) But I'm starting to wonder... is this a good thing or a bad thing?

One of the problems with converged storage is that you have to buy your compute and storage at the same time, so it's only really useful for greenfield projects. That hasn't really bothered me, though, since I focus on VDI and most VDI projects are new. (Or at least they're not reusing existing storage that isn't tuned for desktops.) So my doubts about whether converged storage makes sense are not a greenfield thing.

Rather, I'm wondering how smart it is to hitch your storage purchases to your compute (and vice versa). Back in 2011 this was cool because it meant we could do all sorts of "special" things, since the compute had certain guarantees about the storage and the storage had certain guarantees about the compute. But now there are literally dozens of storage vendors providing features that were previously available only on converged platforms. So in today's world, do we want to lock the two of these together? It seems like the exact opposite of the whole modular / flexible promise of virtualization.

With a converged system, if I want to buy more storage, I have to buy more compute. It's like cable TV bundling or record labels forcing me to buy a whole album even though I only like the one song. 

The original premise was that converged systems were easier to configure and their performance was more predictable. But again, today there are so many storage vendors, and many of them talk about automatic storage with no management required. I truly wonder how much time the converged systems actually save me. What, I can't hook into a SAN on my own?

The other problem with converged systems is that I'm now at the mercy of the vendor as to whether they'll let me pop the lid off the thing and install my own add-ons. They claim they need that control to guarantee performance, and that people don't want the complexity of the options. But if you can't figure out how to insert a GPU card, then you shouldn't be allowed to touch a VDI environment.

So how have the converged storage vendors addressed this? By creating "families" of products with different options for CPU, memory, and storage.

Nutanix now has 11 different models, 9 different storage options, and 27 different processor + memory configurations. For example, their NX-6000 series alone has six different processor options and three storage-size options. Then specific combinations of those come with 32, 64, 128, 256, and/or 512GB memory options. So I have 14 different possible configurations just within the 6000 series alone, and that's one of four different product series they offer!

So what exactly am I getting with convergence besides some crazy Beautiful Mind-type purchasing process?


Can converged systems really offer advantages that I can't get by buying whatever compute and storage I want as separate entities?

11 comments

One of the things I like about the VMware VSAN beta is that you can strike a bit more of a balance between compute and storage, since you can have compute nodes that have no storage.  Obviously that is limited to the beta's eight nodes, two disk groups per node, and a few other things, but I think those limitations will be reduced when VSAN gets out of beta.


I think other vendors like Nutanix will likely add this capability in the future.  


Happy Thanksgiving



@Rick - SimpliVity is there. The SimpliVity OmniCube solution exposes the storage as an NFS mount. This allows for the use of existing compute/memory (i.e., HP ProLiant), or if there is only a need for compute, go purchase your vendor of choice.
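(For illustration, here's roughly what consuming an NFS export like that from arbitrary compute looks like when scripted. This is a minimal pyVmomi sketch, not SimpliVity's own tooling; the vCenter address, credentials, appliance hostname, and share path are all hypothetical placeholders.)

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Hypothetical vCenter and credentials.
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Grab the first ESXi host in the inventory (kept simple for the sketch).
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]
    view.Destroy()

    # Mount the appliance's NFS export as an ordinary datastore on that host.
    spec = vim.host.NasVolume.Specification()
    spec.remoteHost = "omnicube.example.com"  # storage appliance (hypothetical)
    spec.remotePath = "/exports/datastore1"   # exported share (hypothetical)
    spec.localPath = "omnicube-ds"            # datastore name ESXi will show
    spec.accessMode = "readWrite"
    host.configManager.datastoreSystem.CreateNasDatastore(spec)

    Disconnect(si)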



Scale Computing HC3 addresses this by selling storage-only nodes that can join a hyper-converged cluster to expand storage capacity and performance (additional IOPS, bandwidth) ... this is one of the big advantages of not running storage as a VSA (VM-based storage appliance) ... you can't run VSA VMs without that compute resource (and VMware licensing) ... both limitations that HC3 avoids by building distributed / redundant storage functionality directly into the hypervisor (and not using VMware).  We also offer compute-heavy nodes as well, so customers can expand by adding just the resources they need.



On a strategic level, I completely agree with Brian's thesis for large companies. I would expect that converged storage makes sense for small to medium-sized businesses that have new projects and where simplicity makes a big difference.


I am curious to see how converged storage is being bought - is it mostly SMBs, or are large companies also buying?



Brian,


I appreciate your arguments, but "converged storage" appears to assume that SANs are the natural order of things, and that what Nutanix offers is a variation of that. But "storage" should be reserved for partially used paint cans, boxes of old records or even archived data. When it comes to frequently accessed information and high IOPS, “storage” shouldn’t even be used in the same sentence.


"Converged storage" is no more applicable to Nutanix than to the Google GFS architecture. Disk and flash should be bundled with the compute where the workloads are instantly accessed. Relegating the disks and flash to proprietary arrays introduces latency by requiring access across the network. It is a far more complex environment that includes lower resiliency, higher cost and an inability to linearly scale.


Nutanix has several options so that organizations can purchase the appropriate ratio of compute to disk/flash. But a Nutanix BOM is only a few line items. Compare this to the BOM for the leading converged infrastructure product, which can run to five pages.


The conventional model of separate compute + storage is so complex that VCE spends a whole lot of marketing dollars bragging about how Vblocks can be delivered in only 45 days, and can then be up and running in only 2-3 days. In contrast, Nutanix typically ships in under a week, and is up and running in under an hour. And then it's entirely managed by the virtualization team from the virtualization console.


So in answer to your question is Compute & Storage in one box a good thing?  For the answer, we only need look to the most demanding computing environments on the planet: firms such as Google, Facebook, Amazon, Twitter, Azure, etc.  They all use this exact architecture. It is only a matter of time until it also wins in the enterprise.



> So in answer to your question is Compute & Storage in one box a good thing?


Yeah. Servers and storage have been around since day dot.


Depends on whether organizations mind getting locked into another vendor on both compute + storage in one box.  With commodity servers and (increasingly) commodity storage, it's still compelling to get the best bang for your buck on these items and use generic server/VMware/Hyper-V/storage skills to put them together.


Anyhow... that's if you still want on-premises. For most SMBs there's a lot on offer with cloud providers and new offerings like WorkSpaces.



Yes – convergence hitches the storage purchase to the server acquisition. But I would argue that if this seems constraining, it’s only because of what we have endured over the past 10 to 15 years. Namely, that storage must be a datacenter resource to be separately purchased, configured, managed and scaled.


Today, however, the VM is the common unit of datacenter management. Everything should be VM-centric. When it's not, look at the hoops that a virtualization manager has to leap through in order to provision storage for a new VM. Today the process resembles something like:


1. Contact storage team to request a particular size LUN be created on SAN


2. SAN team creates LUN on top of a particular RAID group (or creates a new RAID group)


3. SAN team ensures LUN is appropriately zoned and masked for requested use case


4. Virtualization manager scans for new LUNs on each host in cluster


5. Virtualization manager adds the new LUN and formats it with VMFS into a VMFS datastore


Rinse and repeat for every VM. And then trust the SAN manager to continuously monitor capacity utilization and hot spots.
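For the curious, here's roughly what steps 4 and 5 look like when scripted against the vSphere API rather than clicked through by hand. This is a minimal pyVmomi sketch under assumed placeholders (vCenter address, credentials, datastore name); real code would add error handling and pick disks more carefully.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Hypothetical vCenter and credentials.
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Collect every ESXi host in the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    hosts = list(view.view)
    view.Destroy()

    # Step 4: rescan each host's HBAs so the newly zoned LUN shows up.
    for host in hosts:
        host.configManager.storageSystem.RescanAllHba()
        host.configManager.storageSystem.RescanVmfs()

    # Step 5: format the first unclaimed disk on one host as a VMFS datastore.
    ds_sys = hosts[0].configManager.datastoreSystem
    disks = ds_sys.QueryAvailableDisksForVmfs()
    if disks:
        options = ds_sys.QueryVmfsDatastoreCreateOptions(disks[0].devicePath)
        spec = options[0].spec
        spec.vmfs.volumeName = "new-vm-datastore"  # hypothetical name
        ds_sys.CreateVmfsDatastore(spec)

    Disconnect(si)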


With a “converged” infrastructure like Nutanix (and we’re not the only ones), software automatically provisions - and actively manages - the required storage for a new VM.  All of the above steps are eliminated. Adding more storage = add node to cluster. And yes, you CAN scale storage resources without adding compute.


Maybe this is why there is nary a SAN to be found in most cloud and web-scale datacenters?



Hi Brian,


Interesting thoughts and discussion.


Maxta's software-only solution addresses the concerns that you have highlighted here.


Maxta's groundbreaking software-defined, VM-centric storage platform dramatically simplifies and streamlines IT while delivering significant cost savings. It enables the convergence of compute and storage on standard servers, leveraging server-side flash and disk drives to optimize performance and capacity. Maxta enables shared storage with enterprise-class data services and full scale-out without performance degradation, scaling compute and storage independently.



@Greg - "Today, however, the VM is the common unit of datacenter management. Everything should be VM-centric. When it's not, look at the hoops that a virtualization manager has to leap through in order to provision storage for a new VM."


>>Excellent framing; thanks to you & Steve for your input here.  Also, thanks for the input from Scale; I really need to get that in my lab.



@Greg Yup, we've done a ton of TCO analysis on not only the CAPEX hardware cost but also the operational cost: the dollar amount for each one of those tasks. Customers are taken aback when they see it. CI eliminates that. We eliminate complexity even more by not forcing our customers to pay for VMware licenses, deal with licensing structures, etc. Our customers don't want to deal with that.


@TKO Let's see if we can get Scale in your lab, ping me valvarez at scalecomputing dot com



> look at the hoops that a virtualization manager has to leap through in order to provision storage for a new VM


Yeah it's huge... every month or so (for example)


Server guy "Give me a new LUN thanks"


Storage dude "Done"


For small shops it's usually the one guy - so no one else to blame  :-)


