Datacore releases a "nirvana" VDI storage solution. Full local virtual storage that's really cheap!

Back in September, I wrote an article describing a product that I wanted that didn't exist: a local "virtual" storage option for VDI. Basically I described why I didn't like SANs for VDI and that I thought it would be cool if there was some sort of software that could virtualize access to the local hard drives that are in a VDI host server. I was thinking a solution like that could create the best of both worlds: fast flexible storage without the overhead costs of a SAN.

The new "IntelliCache" feature that's part of XenServer 5.6 SP1 is kind of along those lines, although it only works with shared disk images today (while most of today's VDI solutions use persistent disks).

In a new white paper from Datacore (direct PDF link), they claim that their SANmelody software running on a VDI host fulfills my fantasy storage requirements. And they claim they can do it with full multi-server redundancy at a cost of less than $70 per user. (That's $70 for everything... the VM host, the SANmelody software, the disks you need for storage... everything!)

Frequent readers know that I'm not one to republish vendor papers. But in this case, the Datacore paper (by Ziya Aral & Jonathan Ely) is actually really, really good. They take a no-BS look at VDI storage, and they validate their architecture with standard tools like Login Consultants' VSI benchmark.

From the paper:

Previous publications have reported on configurations which use thousands of virtual desktops to defray the cost of these controllers. Reading between the lines, it becomes immediately apparent that per-virtual desktop hardware costs rise very sharply as such configurations are scaled downward. Yet, it is precisely these smaller VDI configurations which are the more important from most practical standpoints. On the other hand, this configuration may also be scaled upwards, in a linear fashion, to thousands of virtual desktops, thus eliminating distended configurations created by the search for artificial "sweet spots" at which costs are optimized.

They continue:

The problem for Virtual Desktop implementations is that SANs are often implemented with large and costly storage controllers and complex external storage networks. While these have the advantage of achieving reasonable scalability, they introduce a very large threshold cost to VDI implementations. To overcome this burden, hardware vendors typically benchmark with one-to-several thousand virtual desktops.

Many companies, while understanding the potential benefits of the technology, are introducing pilot programs or attempting to fit initial VDI implementations into existing structures. If the granularity of VDI implementations is to be in the thousands, then the user is forced to consume, not just the “whole loaf”, but an entire bakery truck full of loaves at one sitting ... and this before even knowing whether the bread tastes good. The alternative is equally unappetizing. The user “bites the bullet” and accepts the high threshold cost and complexity of a full-blown SAN while running far less than the optimal number of virtual desktops. [In this case] the per-desktop cost of the implementation becomes much larger than it would have been if the “old scheme” of discrete desktops had remained. This is quite an introduction to a new, “cost-saving” technology... as an increasing number of ever-practical bloggers have noted.

The authors give an overview of their testing scenario, which started with two identical servers, each configured to host 110 virtual desktops. Each server had five drives: a boot drive and two Datacore-controlled 2-disk pools, one used for that server's own storage and the other used as a backup mirror of the other server's pool. They used simple SATA drives since they wanted to establish the simplest baseline possible:

We were loath to use anything but the most standard, easily configured SATA components. SAS, Fibre Channel spindles, fast devices, hybrid disks, and SSDs all have their place in the real world and are well understood to deliver a multiple of ordinary SATA performance. Still, the incorporation of such devices in baseline benchmarking has the same effect as building VDI architectures around this or that storage array. Such optimizations may become the dozen different tails attempting to wag an increasingly confused dog. Not only do they make comparisons difficult, but they lead to the subtle intrusion of a specific hardware architecture into the realms in which the VDI requirements themselves should be paramount.

Their hardware worked out to $3,848 per server (~$35 per user), with the full price going up to ~$67 total per user once you factored in the Datacore software. They went on to explain:

The principle innovation in this benchmark result is the use of DataCore’s storage virtualization system on the same hardware platforms used to host Virtual Desktops. While conventional wisdom might suggest that the DataCore software would thus become a competitor for the same scarce resources of the hardware platform, such as memory and CPU, numerous experiments proved the opposite to be true. In each case, such co-located configurations easily outperformed configurations with external storage.

The reasons are easy to isolate, retrospectively. The Virtual Desktop application is not particularly I/O intensive and proves to be easily served by DataCore at the cost of very few CPU cycles. The elimination of the lion’s share of external channel traffic serves to further reduce those demands. In return, block-cache latencies become nearly nonexistent, channel overheads disappear, and I/O latencies are eliminated. The resident SANmelody “pays” for itself without losing anything in the way of capability or portability.
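As a quick sanity check on those cost claims, here's a back-of-the-envelope calculation (Python) using only the figures reported in the paper: $3,848 of hardware per server, 110 desktops per server, and roughly $67 per user all-in. The implied per-server SANmelody price is derived from those numbers, not something Datacore states directly.

```python
# Back-of-the-envelope check of the paper's per-user cost figures.
DESKTOPS_PER_SERVER = 110
HARDWARE_PER_SERVER = 3848      # USD per server, as reported in the paper
ALL_IN_PER_USER = 67            # USD per user, hardware plus SANmelody, per the paper

hardware_per_user = HARDWARE_PER_SERVER / DESKTOPS_PER_SERVER
print(f"Hardware only: ~${hardware_per_user:.0f}/user")            # ~$35/user

# Implied SANmelody cost per server -- derived here, not stated by Datacore.
implied_software = (ALL_IN_PER_USER - hardware_per_user) * DESKTOPS_PER_SERVER
print(f"Implied software cost: ~${implied_software:.0f}/server")   # ~$3,500/server
```

In other words, the software roughly doubles the per-server price and still lands under $70 per user, which is the headline claim.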

While the test scenario was limited to two servers, you can also scale up via a "star" topology. The idea there is that you still use local drives in your VDI hosts (each running Datacore) as your primary storage, while the center hub in the star topology is a continuously synced replica of each VDI host to ensure no data is lost if you lose a VDI host. (And if you need ultimate high availability, you can keep a spare VDI host that could boot VMs from the central hub storage pool temporarily.) They also pointed out another advantage of using local storage, namely that "these types of configurations ... are inherently “self-tuning”. Each time a group of Virtual Desktops are added, so too is the storage infrastructure necessary to support them."
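To make the "self-tuning" point concrete, here's a minimal sketch of the star layout described above. This is an illustrative model, not Datacore's software or API; the class names, the 110-desktops-per-host density, and the 2 TB pool size are assumptions for the example.

```python
# Illustrative model of the "star" scale-out: each VDI host serves its own
# desktops from local disk and continuously replicates to a central hub,
# so adding a host adds both desktop capacity and the storage to serve it.
from dataclasses import dataclass, field

@dataclass
class VdiHost:
    name: str
    desktops: int = 110         # per-host density from the benchmark
    local_pool_tb: float = 2.0  # primary storage served locally (assumed size)

@dataclass
class StarTopology:
    hub_replica_tb: float = 0.0  # hub holds a synced replica of every host
    hosts: list = field(default_factory=list)

    def add_host(self, host: VdiHost) -> None:
        self.hosts.append(host)
        self.hub_replica_tb += host.local_pool_tb  # only the hub's replica capacity grows

    @property
    def total_desktops(self) -> int:
        return sum(h.desktops for h in self.hosts)

star = StarTopology()
for i in range(4):
    star.add_host(VdiHost(name=f"vdi-host-{i + 1}"))
print(star.total_desktops, "desktops;", star.hub_replica_tb, "TB of replicas on the hub")
```

Each host you add brings its own I/O capacity along with its desktops; the only thing that has to grow centrally is the hub's replica capacity.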

The Datacore authors were aware of how different this approach is versus typical VDI storage architectures:

On the whole, the benchmarked architecture differs from most of the results published by other storage vendors. Instead of constructing a VDI configuration around a storage controller or pre-existing SAN architecture, it builds the SAN around the VDI servers themselves.

Finally, they poked fun at the "tuner" crowd (themselves included), suggesting that no amount of "tuning" a typical VDI environment would yield price-to-performance gains comparable to their virtual storage option:

Memory tweaks, enhanced hardware, and various tuning “tricks” produce the expected results. Still, increases in costs and complexity produce a system which is little better in price/performance than the one tested here. Sadly, for a group of “old tuners”, we have had to conclude that the benchmarked configuration is optimized enough to cross the threshold for practical VDI configurations and that additional low-level optimizations make only small, incremental changes.

To me this is one of the best vendor papers I've read in a long time. What do you think? What are we missing?

6 comments

Brian, I wonder why you're so blatantly anti-SAN.  I realize that traditional storage vendors are bulky, expensive, and slow, but throwing a bunch of equally expensive technology layers locally is also messy.


WhipTail is a XenDesktop- and View-certified solid-state storage appliance that can scale to 8,500 VDI users on a 2U appliance drawing less than 180 watts of power, for less than $30/user.


The problem with storage and VDI is that hard drives are slow. If WhipTail solves that problem for $30/user, why go through the administrative and risky process of local storage and additional servers and software layers and COST?


WhipTail offers 200,000 IOPS random read and 250,000 IOPS random WRITE performance at 4k block size over 10 GbE, 4 Gb FC, or 40 Gb IB.


Plug it in, power it up, cable it, LUN and DONE - VDI deployed.  Move on to the next strategic project.


$30 per user.



So for WhipTail, how many users do you need for $30/user? If I have 50 users can I buy it for $1,500?


That's my problem with SAN vendors... Yeah, everyone can get prices down, but usually it's a huge commitment to get started, like thousands of users. That's why I like the Datacore thing so much. What's the WhipTail small end look like?



I'm totally with Brian about getting that large-scale ("Walmart") economics from the very first client ("banana")!


Do you know how this scales from a storage space perspective?


When rolling out persistent disks at large scale, I guess the deltas would grow pretty big, so how will the same servers still be able to replicate all of the data when scaling from 200 to 2,000 users?



@Ron


As Brian noted, when you scale out such a configuration from hundreds to thousands of users, you automatically bring additional DataCore I/O processing capacity into the environment each time you add another host.


Adding storage space is the easy bit.  All DataCore storage is thin-provisioned, based on hot-add Disk Pools.  Add more physical disk to a host, assign it to the DataCore disk pool and keep on rolling.


DataCore licensing is based on a capacity model, in 1 TB increments.  The 'hub' server of an HA configuration can accommodate PBs of storage.  iSCSI, FC and FCoE transports are all options when it comes to supporting the demands of synchronous replication.



Hello Brian, I didn’t realize the focus of this article was on small environments of 50-100 users, maximum.   You are correct in assuming WhipTail is focused on customers who are looking to scale VDI; maybe they begin with 50-100 users, but they want to scale higher, and require centralized, shared storage for the inherent benefits versus local storage (and/or third and fourth party software/hardware).


In the case of someone only going to 100 users max, WhipTail would still be cost-competitive versus the 2+ trays of Fiber Channel 15k hard drives you would need for a centralized, shared storage approach, in addition to providing unmatched, instantaneous storage performance.


However, I definitely get your point that for the ultra-cost-conscious 100 VDI max end-user your article could certainly appear as “nirvana” to them from a CAPEX perspective – as long as the IOPS numbers remain low.


To answer your very astute question, our entry level Virtual Desktop XLR8r is 1.5 terabyte and retails for $49,000 (this 2U unit can be upgraded to as much as 12TB).  As you know, the capacity requirements for VDI vary depending on how efficient the end-user can be with their images.  We’ve seen our more efficient customers in the 1-4 GB / user range.  So that is anywhere from 375 – 1500 users on our entry level appliance which makes the cost range from $32 - $130 per user.  At 5,000 users, you actually get closer to $21 / user – we like to safely average it around $30 / user.  However, we don’t charge by user, so our customers don’t have to think that way – we charge by capacity – so they are free to scale.
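A quick check of those figures, using only the numbers stated in this comment (the $49,000 entry-level price, 1.5 TB of capacity, and the 1-4 GB/user range); the per-user results here are simple division and round slightly differently from the figures above.

```python
# Re-running the commenter's stated numbers for the entry-level XLR8r.
APPLIANCE_PRICE_USD = 49_000     # list price, per the comment
APPLIANCE_CAPACITY_GB = 1_500    # 1.5 TB, per the comment

for gb_per_user in (1, 4):       # the 1-4 GB/user image range cited above
    users = APPLIANCE_CAPACITY_GB // gb_per_user
    print(f"{gb_per_user} GB/user -> {users} users -> ${APPLIANCE_PRICE_USD / users:.0f}/user")
# 1 GB/user -> 1500 users -> $33/user
# 4 GB/user -> 375 users  -> $131/user
```

So the quoted $32-$130 range holds only if every desktop fits within 1-4 GB of unique storage on the appliance.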


Very soon we will be releasing a small business XLR8r with half the capacity (or less – TBD) for less than half the price.  But, again, you are right: for a customer looking to do only 100 users max, they will need to decide whether they want to pay a premium for centralized, shared storage.  The other factor that comes into play is that even a 100-user VDI workload can be so I/O intensive that the local model you propose would require local SSD drives in addition to the storage virtualization software in order to perform to end-user satisfaction, which can quickly drive the TCO over the cost of a WhipTail.


And remember, VDI can’t go down – and that (plus capacity utilization and ease of management) is the value of a SAN and/or NAS.


So, in conclusion, I concede that for the 100 VDI max environments, you’re on to something here, but as soon as that number grows to as low as 150 – 200, they are entering into the TCO of a WhipTail.


And for 1000+, forget about it!  Lol…  WhipTail appreciates your response and continued effort to help everyone think through the challenges of VDI.  We all share the goal of solving the puzzle to enable IT to provide this strategic tool to the business.



Expounding on Brian’s point regarding sizing: one example where I see this being a great fit is customer-adoption demo units, in scenarios where you are trying to provide a large number of small (200 or fewer) to medium (500 or fewer) demo environments for customers to test-drive the technology. This would drastically reduce the cost of building and shipping demo units to customers. Once they are sold on VDI as a technology, I would presume you would then determine the customer’s requirements for the next 3 to 5 years. They might have a large EMC presence they wish to leverage from a shared-services perspective, in which case you might simply snap into their existing Fibre Channel; or, if the intent is to meet a specific business need (which is generally the case with VDI), you might start with this option as your “economy” solution and work your way up from there.


I see this as an immediate opportunity to lower my cost on Demo units.


I do take issue with one thing about the product – and please don’t take this the wrong way, it is not my intent to cause debate, but I feel it should be mentioned.  I was curious why they left Citrix XenServer, or even the open source version of Xen, off the list of hypervisors.  I’m wondering whether there is a technical interoperability reason or they just left it off.  I was surprised not to see it listed, given that XenServer is the majority leader in the “cloud computing” space compared to VMware and Hyper-V.  VMware might be the leader in Fortune 1000 companies from a server virtualization perspective, but XenServer has earned its wings in the cloud computing space, where it is utilized by some of the top cloud service providers in the world.


This is not an anti-VMware/Hyper-V statement, because I leverage all three.


