F*** the SAN. VDI storage should be local!


(Be sure to check out Ron Oglesby's rebuttal to this: Brian's wrong...about VDI and local storage)

When discussing storage in the context of VDI, people often talk about things like SAN sizing and IOPS and linked clones and thin provisioning and disk image streaming and... the list goes on. But one of the most important aspects of storage design for VDI has to do with where the disk image files live for each user’s Windows VM.

What exactly is “storage” for VDI?

I guess before we talk too much about storage for VDI, we should define “storage.” I mean obviously we’re talking about disks and SANs and stuff, but in the context of VDI, “storage” applies to:

  • Windows OS disk image file location. Where is the system disk image file (VHD or VMDK) for each VDI virtual machine?
  • User data. Home drives, application storage, etc. This could also include user environment settings.
  • Applications. Where do the applications live once they’re installed?
  • Backup.

So when we’re talking about storage for VDI, which “storage” are we talking about? For the user data, apps, and backup, whether you’re using VDI or not doesn’t really impact how your storage is designed. (After all, you’re still going to store home drives on a NAS or file server regardless of whether you’re using VDI or not.)

The big discussion point around storage for VDI has to do with the OS disk image file locations for each VM. Where will the actual VHD or VMDK files that make up each user’s VM be stored? There are a few options:

  • Each VHD/VMDK is stored locally on the VM host server.
  • Each VHD/VMDK is stored on a SAN, and the VM host server mounts them via FC HBA or iSCSI.
  • Each VHD/VMDK is stored on a file server / NAS and mounted/streamed across the network.
  • Some other technology is used to present/build the disk image, like Atlantis, Unidesk, etc.

There’s a lot that goes into this decision: Will your users share a single master disk image that “resets” each time they log off, or does each user have his or her own personal persistent image? How many VM host servers do you have? Are you designing your VDI environment specifically for desktops, or are you taking what you did for server virtualization and just copying for your desktops?

VDI disk image storage

Most of us have learned that the biggest constraint / bottleneck for desktop disk image storage is not storage capacity, but IOPS. (Read Ruben & Herco’s amazing article for more on this.) This problem is magnified in environments where you have shared master images since you now have many users (tens? hundreds?) accessing a single master file, and if you thought your own personal hard drive could get bogged down, imagine 200 people sharing the same bits on the same drive! Various SAN-based “solutions” exist to address this, like storing the master file in some very fast way (SSD, striped, cache) or by making lots of full replicas of it so that each master file is only accessed by a subset of your users.

But there’s another great way to “fix” your master disk file oversubscription problem: Don’t mount your shared files from the SAN! Instead you can store your master files locally on each VM host server.

Don’t get me wrong: I’m all about using a SAN where it makes sense. But storing “disposable” stuff on the SAN does not make sense. (It’s like a gold-lined trash bag.) If you’re just going to throw it away, why even bother storing it in the most expensive place you have?

Of course this means that you’ll need to load up your servers with drives, but eight 2.5” 15k SAS drives aren’t too much more expensive than a Fibre Channel HBA. (And of course the local drive option doesn’t require a SAN, so it’s actually much cheaper.) Choosing local storage doesn't prevent you from using things like thin provisioning and linked clones--it just means that you'll need one master on each host (the management of which is scriptable and much cheaper than a SAN).
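To get a feel for whether eight local 15k drives are enough, you can do some back-of-the-envelope math. This is only a sketch: the ~175 IOPS per 15k spindle, the RAID-10 write penalty of 2, the 20/80 read/write mix, and the ~10 steady-state IOPS per desktop are illustrative assumptions, not vendor figures.

```python
# Rough desktops-per-host estimate from local spindle count.
# All numbers here are illustrative assumptions for the sketch.

def usable_iops(spindles, iops_per_spindle=175, read_pct=0.2, write_penalty=2):
    """Effective front-end IOPS after the RAID write penalty.

    Each logical write costs `write_penalty` physical I/Os (RAID-10),
    so the raw spindle IOPS are discounted by the workload mix.
    """
    raw = spindles * iops_per_spindle
    return raw / (read_pct + (1 - read_pct) * write_penalty)

def desktops_per_host(spindles, iops_per_desktop=10):
    """How many desktops the local array could sustain at steady state."""
    return int(usable_iops(spindles) // iops_per_desktop)

print(desktops_per_host(8))  # eight local 2.5" 15k SAS drives
```

With these assumptions a single host's local drives comfortably cover a typical 35-40 desktop density, which is the point of the article: the IOPS land on cheap local spindles instead of the SAN.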

Do SANs ever make sense for boot disk image storage?

I’m making a pretty strong point that in the vast majority of cases, it doesn’t make sense to boot your desktop images from a SAN. But that’s not to suggest that booting from a SAN never makes sense in general; I just think it doesn’t make sense for desktops.

If I had a truly flexible virtual datacenter infrastructure where all my servers were disk images and I wanted to be able to boot any image from any VM host and to do live migration and everything, then yes, absolutely, booting those disk images from a SAN makes sense.

If I had a bunch of servers that grow and shrink and need to move on demand, then yes, booting from the SAN makes sense.

But for desktops, when I’m not using live migration and when I’m in “rack ‘em and stack ‘em” mode, I can’t possibly see how it makes sense to boot them from a SAN. Even if I had a scenario where each user had his or her own personal persistent desktop, I still think I’d store those on a NAS and stream them down to the VM with Citrix Provisioning Services, Doubletake Flex, or Wyse Streaming Manager. All of those would allow me to move the IOPS to the VM host where they’re much cheaper than on the SAN.

Maybe this will change in the future?

For the record, I love Chetan’s grand vision about us someday being able to deliver a better desktop via VDI than local. But that’s not here today. If a compelling reason—like a huge performance bump—emerges that requires a SAN, then of course I’ll reconsider. But right now the only thing a SAN tends to introduce to most VDI deployments is increased cost.

What do you think? Do you boot your desktops from a SAN? If so, why?

Join the conversation



Hi Brian - I have been a reader for a couple of years - but this is my first comment post.

I can't decide whether I agree with you on this point or not.

You have mentioned the positives of using local storage (i.e. a reduction in solution cost), but this approach WILL lead to an increase in solution complexity when designing for resilience.

Using XenDesktop as an example, you would need to make sure a pool's VMs are spread across all hosts evenly. A clear naming convention such as host(X)vm(Y) would be required so that the VMs on a particular host can be put into maintenance mode when host maintenance is required. And of course, without live migration this maintenance will need to be planned further in advance (not a terrible thing, I suppose)
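The spread-and-name scheme described above could be scripted along these lines. This is a rough sketch; the host(X)vm(Y) names and the simple round-robin placement are illustrative, not a XenDesktop feature.

```python
# Sketch: spread a pool's VMs evenly across hosts using a
# host(X)vm(Y)-style naming convention, so every VM on a given
# host can be located when that host needs maintenance.

def assign_pool(num_vms, hosts):
    """Round-robin VMs across hosts; returns {vm_name: host}."""
    placement = {}
    per_host = {h: 0 for h in hosts}
    for i in range(num_vms):
        host = hosts[i % len(hosts)]            # even spread
        per_host[host] += 1
        placement[f"{host}vm{per_host[host]:03d}"] = host
    return placement

def vms_on_host(placement, host):
    """Everything that must go into maintenance mode with this host."""
    return sorted(vm for vm, h in placement.items() if h == host)

pool = assign_pool(10, ["host1", "host2", "host3"])
print(vms_on_host(pool, "host1"))
```

The name itself encodes the placement, so an admin (or script) can find every affected VM for a host reboot without querying the broker.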

With a small number of VM pools this setup would be a little more complicated - but managing a larger environment...

However, despite this I find myself weighing up the increase in complexity with the cost reduction which could very well be the difference in getting a desktop project the green light or not.

After typing this I think I have just swayed towards agreeing with you


This really boils down to a couple of features: HA and live migration. Do you need them for VDI? I don't know about the rest of you, but I like the ability to live-migrate active sessions, something WinTel TS has NEVER been able to do thanks to NTFS.


For pooled, reusable desktops such as those provided by standard mode vDisks in Provisioning Server, this makes a great deal of sense. However, in my experience, a majority of large enterprise customers cannot use this model as it breaks their app delivery processes. For one-to-one desktop assignments, using local storage could become a management nightmare, as you need to know whose desktop is on which server, and backup/recovery and DR processes start to look more complex.

The IOPS question remains... even with 8 local drives spinning at 15,000 rpm and a reasonable amount of BBWC, you still have a finite number of IOPS at your disposal, so the need still exists to properly size each server. Will we end up hosting just 5 desktops per server if some users have a high storage I/O profile?

Where is the tipping point between spending X number of additional $$$ on local storage capabilities, and pooling these $$$ and buying a decent SAN? Add to that the hidden cost inherent in the management of many local storage repositories over a centralised array.

If we use local storage we also end up with the possibility of needing to 'silo' our virtual desktops in a similar way to what we used to do with TS/XenApp. Some desktops will need to be hosted on servers with fewer users for performance reasons. Some desktops may need to be silo'd along management boundaries, or for reasons of security, corporate politics or mission criticality.

IMHO, using local storage is a great idea for smaller businesses where pooled desktops can be used, but this looks increasingly like a very small proportion of use cases.


Why can't 1-to-1 desktops also be "local" via something like Citrix Provisioning Services? All the users' images would live on the Provisioning Server--they'd be managed in one central place and backed up and everything, but then when  each user logs on, he or she is routed to the least-loaded VM host with the desktop streamed on demand?

In that case the IOPS "hit" is local (esp. if you keep the page file out of the central image), but you still get the central management of central images and 1-to-1 images... It's really like you're just using the local disks in the VM hosts as temporary cache points for persistent disks?


Pooled non-persistent desktops are easy - there's no need for a SAN if configured properly. At least with Provisioning services, you can have the same vDisk stored on each PVS and users will hit the least loaded one. If there's enough memory on the PVS servers, it'll store the disk bits in the system cache and not even touch the disk itself. The difference file ("write cache" in Citrix terminology) can also be stored local to the hypervisor, although that means either a number of 15k's or an SSD, depending heavily on # of desktops and usage profiles. In the rare case that a hypervisor crashes, the user will just load up an identical desktop from a different hypervisor.

Assigned desktops need a little more thought depending on the reason for going to assigned, but you can still sometimes go SANless.


I mentioned this in my session at BrForum and it is something that seems to be becoming a religious war.

If your SLAs allow for a user to get disconnected from their desktop in the event of a host failure, and then allow them to reconnect to another desktop... why not?

My biggest point is that most of us run desktops TODAY on local storage with Terminal Services/XenApp. We are fine with that, but we are just dropping desktops on top of virtualization designs that we created for servers (which have completely different SLAs).

Local disk can provide a low-cost solution and higher performance at a much lower cost (local SSD drives as compared to centralized SSD).

As to the HA/VMotion features mentioned, you are correct: those features go away. You liken it to terminal servers that can't move sessions? Often we wanted to move sessions not for hardware maintenance, but for OS patching and application updates (hard installs). In these cases you STILL have to shut down/reboot the VM or whatever, so the user session is still impacted. Hardware maintenance/changes are a tiny % of the number of planned OS outages.

As for HA: valid point. If you need that for some reason, or have got mgmt to sign off on replicating every desktop to a secondary DC or something, then centralized storage is needed. But I would think replicating the user's personalization/workspace and their base image and giving them a "new" desktop in the event of a DR event would be fine.

Anyway, if you aren't considering local disk and at least making an educated/reasoned selection on storage, you are making a mistake.

Spend time and find the cost-effective solution for your DESKTOP SLAs and needs; don't just keep using a SAN because "that's what we did with servers".



I am a big fan of local storage for desktop virtualization. Let's face it, in the hosted virtual desktop model with XA, people have been doing this for years... I think it makes a lot of sense in many cases, and I speak to people all the time who implement this way for the VDI model. I think the whole live migration thing for a desktop is a niche use case for most people, and frankly I don't think it's widely used even with server virtualization. I've never had to live-migrate a physical desktop in my career, so I don't really see a true need. I doubt the likes of Google are using expensive storage solutions for Gmail etc., so I think this warrants further thought.

To me the harder problem is organizational process. Where and how do you place your desktop VMs so a business unit is not taken out in the event of host hardware failure? How/when do you set maintenance schedules so that a reboot of the host doesn't impact the productivity of your users, who are salt-and-peppered across hosts? Tip: in my past, I've always been a proponent of a fixed, organization-wide maintenance window that over time people get used to. Of course you may need an exception process etc., but it solves a ton of issues. These are things that are going to have to be baked into your organizational process, if you have any. To me that is the greater challenge I see many people facing: their inability to drive central organizational process. Until that happens, I think many are stuck looking at technology to solve what is really an organizational problem in many cases.


I think the idea of storing the master image locally presents many limitations. For example, using linked clones or any similar disk differencing technology would require the users' delta disks to reside locally as well. For persistent desktops, these deltas will probably need to be backed up, thereby complicating the backup strategy since each virtualization host will now have to become a backup target.

We ran benchmark tests at my company and came to the conclusion that local storage can get bogged down pretty quickly, even with eight 2.5” 15k SAS drives.

Regarding the master image bottleneck, this could be easily solved by front-ending the physical disk with lots of cache. Isn't this one of the hallmarks of Atlantis Computing?

One should run, not walk, to solutions such as Atlantis Computing. And if Unidesk offers a similar solution, kudos to them as well. Forget EMC and Netapp because their solutions were not built from the ground up for VDI. VMware's linked clone and Microsoft diff disks are good for instant provisioning of desktops, but it's been shown that the deltas tend to grow quickly, and therefore they're not necessarily storage-saving solutions.

Finally, Citrix Provisioning Server and similar solutions are not "streaming" solutions, at least not the last time I checked. They don't stream the bits down to the local client disk and subsequently get out of the way. They mount the "virtual disk" across the network just like iSCSI, even though in some cases they employ a proprietary protocol. CPS uses a UDP-based protocol, while DoubleTake Flex (the emboot acquisition) uses iSCSI.

I think a new breed of VDI-specialized storage solutions has to emerge, and this is being pioneered today by the likes of Atlantis Computing and Unidesk.


@Brian - Not 100% sure I understand your reply and apologies if I am being dense and missing the point entirely!!

Are you talking about having a vDisk per user, centrally hosted on the PVS server, with the write cache created locally at the virtualisation tier? This approach would have interesting implications for management and scalability on the PVS side, or whatever component does the job of PVS in the wider discussion.

I agree that the IOPS hit is local, but this still needs to be sized and factored into any scalability equations.

As a CTP you no doubt understand that leveraging local disk resources at the virtualisation tier is something that Citrix and other vendors are looking at for some of the very reasons you mention above.


I just want to reiterate this point one more time: You can still have your disk images mounted from local storage AND have stateless VM hosts. If you have 1-to-1 VMs with persistent images, you can use local disks while storing the images somewhere else and streaming them.

Stateful images + stateless VM hosts + local storage = Yes!


Got it, Harry!!! So if Live Migration is not so important, then I dare you to strip out Load Balancing from your next XA release!


Sorry Brian, "streaming" in the context of CPS, DT Flex, etc, is not much different than mounting a disk over iSCSI. Six of one, half a dozen of the other.


EdgeSeeker AND Brian M

Obviously this topic is going to float around the big boys, so I will hit that first. Yes, linked clones will require centralized storage of some kind, and if you look to use 1:1 in the traditional sense it is much tougher to use local disk, as failure of the server requires a certain amount of downtime if you're using a traditional "fat" VMDK/VHD for each VM/user.

But that doesn't mean it can't be done, for a couple of reasons:

1- Loss of a host from a hardware failure isn't that common anymore. Host servers are built with a pretty solid level of redundancy to ensure that. But let's say you do "lose" a host.

What % of the environment is out?

How long is it out?

Can you get the users to a new desktop with their workspace/personalized stuff? And if so, how long will that take?

Let's say you have a 20-server, 500-800 user environment, at 35-40 desktops per host or so.

If you have a hardware failure that takes down a single host more than once per year, I would be surprised. But let's say it's an 8-hour outage, 1 time per year, of 1 server (this is a non-planned outage, not an "oh, a drive went red, let me replace it" type of outage).

8 hours out of 2,000+ work hours per user.

800 users have 1.6 million working hours a year.

40 desktops out for 8 hours is 320 lost hours (this assumes you have to repair the server and not just move the users to a "new desktop" with their "stuff").

The 320 hours is two hundredths of a percent of non-planned downtime. That's 99.98% uptime.

So the question becomes... pay 2 or 3 or 5 or 10 times more for the storage line item for your VDI desktop to MAYBE turn that 99.98 into a 99.99....

see where I am going?
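Ron's arithmetic checks out. As a quick sketch using his stated inputs (800 users, 2,000 work hours per user per year, one unplanned 8-hour outage of one 40-desktop host per year):

```python
# Availability check for the scenario above. All inputs come from
# the comment's stated assumptions.

users, hours_per_user = 800, 2000
desktops_down, outage_hours = 40, 8

total_hours = users * hours_per_user        # 1,600,000 working hours/year
lost_hours = desktops_down * outage_hours   # 320 lost hours

downtime_pct = 100 * lost_hours / total_hours
uptime_pct = 100 - downtime_pct
print(f"{downtime_pct:.3f}% downtime -> {uptime_pct:.2f}% uptime")
```

That is 0.02% unplanned downtime, i.e. 99.98% uptime, which is the figure the cost comparison hinges on.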

I don't "pimp" my products. Unidesk can do this all with local storage (sorry, Prov. Server and Composer), but that is beside the point. The point is that some pretty sharp people have done this math and it can work... Hell, one of the big VDI users that gets talked about a lot (a big ole bank in NYC) has used NOTHING BUT local disk... for several years now.

If they can do it, why can't it be considered? Just sayin'.


Kaviza has taken this route to simplified VDI. We hear it's best to consume food grown locally. Seems to apply to VDI storage, too. Their grid overcomes manual/scripted placement of your gold image. Really can't wait for them to add Hyper-V support and try out RemoteFX.


Local disk all the way. The reality is that all you can deploy is persistent VDI. Linked clones don't work; the child delta just grows like a fat kid and you end up with tons of expensive shared storage. Waste of time, just as is PVS with persistent VDI; it offers no value and just makes things hard. Atlantis does not work in the real world. It's too finicky to set up and is really immature. To get Atlantis set up requires a PhD and tons of hand-holding. They are not even close to a real out-of-the-box experience. I challenge Atlantis to prove me wrong there, and for the record, Atlantis can also accelerate local disk IOPS if you can ever get it to work. Granted, they have great vision, a good idea, but the execution is not there. Sorry Chetan, not a personal attack on you, just keeping it real; I actually like a lot of what you say. I also don't buy that Atlantis is the right solution for the medium term, and I am sure the storage and hypervisor vendors are the right people to fix it. That would leave me with yet more complexity with Atlantis in my desktop infrastructure, if it ever worked. Unidesk is also unproven, although they come with less hype than Atlantis; if what they claim is true, I don't understand why a VDI or storage vendor has not pounced on them already. Perhaps they understand it's architecturally not the future and unproven. So don't run towards these solutions; be real about where they are today and keep an open mind. For the record I wish Atlantis well, but IMO it is not a real simple out-of-the-box solution today.

I do agree a new breed of storage solutions is required for VDI, but I have to deploy Windows 7 next year, so none of these solutions help. Back to local disk, people; it works, is cheap, and I agree with other posters that you have to figure out how to manage that in the data center. If the vendors ever truly enable single image then we can have a different discussion, but they are not even close.

@edgeseeker, live migration is a BS use case; nobody cares for the desktop, more hassle and cost than it's worth. Nothing to do with XA load balancing; it's just a wish list from legacy TS guys who never got what they wanted. If people really wanted it and there were real $$$$ reasons to do it, it would have been done. I see no way for any vendor to make money from this BS niche ask. Anyway, if you use local disk and are simply deploying VHD files, then you can simply use things like iSCSI to copy VHDs over from one datacenter to another for disaster recovery. This is very different from live session migration, which is BS. Desktops need reboots weekly, so I actually agree: set a maintenance schedule; it's all you need, as opposed to massively complex infrastructure.

@joegasper WTF is Kaviza anyway. It's just a persistent VM with local storage. Why can't you do that yourself, and have something far more scalable? Hmmmm Citrix does not allow you to use HDX standalone…..Ugghhh!!


@Edgeseeker - Live session migration, an old  item on the TS wish list, is indeed "mostly" without merit. The idea was that if a TS host required a reboot to replenish its resources, one would migrate all the sessions to another TS host before rebooting the problematic one. However, the running sessions are the VERY reason why the problematic host needs rebooting. This means that migrating sessions to another host would soon require that the new host be rebooted as well. However, a case had been made in the past that if a particular TS host were chosen to run a particular session, this session would be stuck on that host for its entire lifetime. And the use case was that if well-behaved sessions are being compromised by other misbehaving sessions, it would make sense to live-migrate them to undersubscribed hosts. But of course, this would have brought about a wave of over-engineered solutions that would have probably quickly caused us to reach a point of diminishing returns. Solution? CPU utilization management. That's how the problem was ultimately mitigated.

Resource utilization management is built into just about every virtualization platform, and therefore can be leveraged to mitigate the problem of misbehaved desktops compromising well-behaved ones. Now that I've rationalized it to myself, I agree that live migration is, for the most part, a case of over-engineering.

@Ron Oglesby - Could you please explain to us how your solution can bolster the case for using local disks? How do you differ from Atlantis Computing? And aside from the layering value prop, do you also increase IOPS the way Atlantis does, and in a dramatic fashion, might I add?


"Why can't 1-to-1 desktops also be "local" via something like Citrix Provisioning Services? All the users' images would live on the Provisioning Server--they'd be managed in one central place and backed up and everything, but then when  each user logs on, he or she is routed to the least-loaded VM host with the desktop streamed on demand?"

If you go that route, deploy 5 images, and then provision number 6, your whole host blows up.

Provisioning Server kills local disks on IOPS. Provisioning an image takes up to 140 IOPS, where local disks usually do not have more than 200 available.


@edgeseeker, not trying to exclude a use case, just expressing its relative importance in the desktop world. Too often things are over-engineered and sit in pilot, missing the point for many. If you can print money and hand it over to storage vendors for a use case, I am sure they will buy you a round of golf :-)


@Harry - Sorry, but I've already posted my thoughts on this issue. Yes, it is indeed a limited-value use case, but look at Load Balancing in XA; one can argue that it too is over-engineered. The point I'm trying to make is that you're simply discounting the Live Migration use case because your Xen platform doesn't do it nearly as well as VMware. This has been one of Citrix's hallmarks: if we don't have it, or if ours isn't as good as our competition's, then it must not be so important.


@Controvirtual - Provisioning Server is a form of SAN, and I don't care what sort of spin Citrix wants to put on it. Let's not drink the "streaming" Kool-Aid, please!!!


@ edgeseeker

Ignoring the layering for a second: Unidesk essentially provides the C: drives to the desktop VMs by the "desktop" connecting to our CachePoint. The CachePoint is a virtual appliance running on the same host. This appliance can have its VMDK on local storage or on a SAN, NFS, whatever.

To get the "gold image" to that CachePoint, we do our replication and updates to layers via TCP. So if I update the "gold image" on my master, the changes are pushed to the other CachePoints over the network. So the location of the VMDK for our virtual appliances is irrelevant.

Hope that makes sense.



I ignored the IO thing. Sorry. Too fast to write/comment. Anyway, we don't (today) do much directly with IO. We do have some caching, but not along the lines of PAM modules from NetApp, Atlantis, etc. We started with the idea of "how do we do the file system and registry layering correctly"; then we'll look at improving IO.

Specifically on the IO stuff, I recommend using our stuff to shrink the footprint as much as possible, then use the "right" disk underneath to handle the IO load.

Personally I think the right disk in 50% or more of cases is local (mostly SCSI, some SSD).

Then move to centralized storage if SLAs require it, and if they do, I am still big on storage with IO optimization (such as NetApp w/ PAM, or the new EQ arrays with SSDs and the ability to move data, or Whiptail arrays for straight low-cost SSD).

Sorry for the tour of the storage I use... but all my lab here, with 6 servers, 4 brokers, etc., etc., uses local storage, with the exception of an NFS volume I use for some specific server VMs.


@ Ron - I don't know much about your CachePoint approach. Personally, I would prefer a solution like LeftHand Networks VSA, which transforms all the local storage in multiple hosts into a cluster-wide SAN. I don't know what HP intends to do with LeftHand now that they've acquired them. Of course, Unidesk's value prop could be in the layering, but Atlantis has already done that quite masterfully, not to mention the incredible IOPS gains that they're eking out of their solution.

Does VDI have to be so complex? Do we need a storage solution that has to have replication and complexity on top of complexity? If VDI needs all of these over-engineered solutions to justify it, then it shouldn't have been conceived in the first place.

I still say that a modernized RDS, with advanced isolation properties similar to those found in Parallels Virtuozzo Containers, and coupled with application and user environment layering capabilities similar to those found in RingCube and others, would put this flawed approach to VDI to rest once and for all.

Isolated Containers + App Layering + User Environment Layering = Real VDI


@ edgeseeker

Essentially our layering is the secret sauce. Picture a virtualized file system and registry, up to the point of the personalization layer (what you call User Environment), including changes made by the user such as app installs, HKLM registry changes, etc.

We call our layers "OS layer, Application layers, Personalization layer"



@Ron - Excuse me, but this "secret sauce" is not so secret. It's been talked about for years. Many vendors have been doing it in one way or another.



Really? Grab any workspace vendor, or PVS, or linked clones, etc. Have them spin up a VM and use it.

Install iTunes. Record the domain and computer SID, etc.

Then have them update the gold image with a service pack and push that to your VM.

Let me know how that works out :-)



Brian, Excellent post, thank you for bringing up this topic!

From looking at the responses, many on this thread seem to equate local storage to no HA and no linked clones.  I certainly understand why people might be reaching this conclusion - VDI has used shared storage because it is easier to keep state once you go beyond one server.

But this ease of implementation from a vendor perspective comes at a high cost, both in $$s and in complexity and bottlenecks for the users.

This does not have to be the case. You CAN build VDI using local storage with HIGH-AVAILABILITY, ON-DEMAND SCALING.  

Don't want to plug our stuff, but I just want to point out that Kaviza has done exactly this. While it is a lot harder to implement (there's a lot of patent-pending IP around it), we have developed a Google-esque architecture that uses just local direct-attached storage to create a logical sense of shared storage and provide built-in high availability and scaling.

The beauty of this design is that it scales out horizontally, like Google search - you don't have the bottleneck of a SAN, you don't have the high cost and complexity of shared storage, and yet you get on-demand scaling and high-availability.  Distributed topology done right is, in our opinion, the best architecture for VDI.



@Ron - If you've read my posts, you would have understood that I'm not insinuating an apples-to-apples comparison between Unidesk and block-level solutions such as VMware linked clones, Microsoft diff disks, and the many SAN vendors that claim to optimize VDI storage.

Unidesk just released their first version, right? OK, so let's compare that to Atlantis Computing. Furthermore, the idea of segregating the OS, apps, and personalization into distinct layers has been floating around since before Unidesk even became a company. In fact, many app virtualization solutions already do this. Among them are RingCube and InstallFree.

Nothing personal here. I value what your company is trying to do. All I'm saying is that there's really nothing innovative here that we haven't already heard about. There's no clear departure from the status quo; it's just more of the same.


Wow! This topic struck a nerve. I meant to comment, but that's superfluous now. Anyways, I just wanted to tip my cap for a good topic and great discussion.


@appdetective Thanks for the always honest, mostly acerbic, sometimes accurate notes on Atlantis and my view of where the industry is headed. I’m glad that you agree with a few of my viewpoints. I’m not going to debate or argue about my product, but instead offer to set the record straight with you. How about setting up a chat-based WebEx or a GoToMeeting that keeps you anonymous (since that seems really important to you), where I get to set the record straight? If that doesn’t work for you, maybe you have a better suggestion. In any case, I would like to take you up on your offer to prove you wrong. I can be reached at chetanATatlantiscomputing.com if you want to discuss.

@edgeseeker – thank you for bringing balance to the force ☺  Let me take a stab at articulating the key difference between Atlantis and Unidesk.  Both of us are solving two distinct sides of the same problem (how do you make storage smarter from a Windows perspective).  This means that there are some areas of overlap (layering, desktop decomposition) but mostly the two companies focus on two very different value propositions:

For Atlantis, this is resolving the infrastructural challenges of desktop virtualization in the datacenter (storage performance, IOPS, and physical capacity) by making the storage fabric and network smarter, while Unidesk (based on my interactions with Chris Midgley and Don Bulens) is, I believe, offering a better way to manage desktops while offering new degrees of personalization. This means that Atlantis focuses on reducing the CAPEX spend of desktop virtualization in the datacenter (the infrastructure you need to buy), while Unidesk focuses on reducing the OPEX spend around management.

Both are important problems and tend to mesh together at the block storage layer, but I think both Atlantis and Unidesk have until now done a poor job of addressing the important question: "How are you different, or do you compete?" So hopefully this response goes some way toward bringing perspective to the issue.

At Atlantis, our focus is on turning CPU cycles into IOPS by making the storage fabric/network smarter. If the storage fabric/network path can become content aware, then it can better service the vast majority of IO requests that desktops generate without involving the storage system. We do this via a content-aware cache (the cache speaks NTFS and Windows) and IO de-duplication, which eliminate the need for FC SANs and allow you to leverage any mix of storage for VDI (DAS, NAS, or two-buck-chuck iSCSI SAN; you know who you are!)
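Atlantis doesn't publish its internals, but the general idea behind an IO-de-duplicating, content-addressed read cache can be sketched in a few lines. This is a toy illustration only (the class and method names are hypothetical, not Atlantis's API): identical blocks, like the same Windows DLL read by a hundred VMs, are keyed by content hash so duplicates share one in-memory copy and repeat reads never touch the backend storage.

```python
import hashlib


class DedupReadCache:
    """Toy sketch of a content-addressed read cache: repeat reads of a
    block are served from RAM, and blocks with identical content (common
    across many Windows VMs) share a single cached copy."""

    def __init__(self, backend_read):
        self.backend_read = backend_read  # function: block_id -> bytes
        self.block_to_hash = {}           # block_id -> content hash
        self.store = {}                   # content hash -> one shared copy
        self.backend_ios = 0              # how often we hit real storage

    def read(self, block_id):
        h = self.block_to_hash.get(block_id)
        if h is not None:
            return self.store[h]          # cache hit: no disk IO at all
        data = self.backend_read(block_id)
        self.backend_ios += 1
        h = hashlib.sha256(data).hexdigest()
        self.block_to_hash[block_id] = h
        self.store.setdefault(h, data)    # duplicate content stored once
        return data
```

The point of the sketch is the claim in the comment above: once the cache is content aware, the number of IOs reaching the spindles is bounded by the number of *unique* blocks, not the number of reads.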

The $0.02 I'd like to add to Brian's post is that the real question to ask is this: should storage be dumb but highly reliable, as it historically has been in the datacenter, getting around data entropy through block/chunk duplication via RAID (one of the key culprits behind why we have an IOPS crisis in the first place)? Or should storage be smarter about what it's storing (i.e., content aware), and therefore do other things to be fast while still being reliable, without being constrained by the mechanical limits of rotational media?

In many ways this question has an analog in the whole oil and energy debate: should we continue to use an expensive and limited energy source like oil, or use cheap and abundant sources like sunlight once we figure out how to harness them effectively? We are paralleling this in the datacenter when we use an expensive and limited source of IOPS (spindles) when we could try to harness the almost free and unlimited supply of IOPS we get year after year with Moore's law.

For people who have lived all their lives in an oil-soaked world, it is hard to imagine a transformative technology that will change our way of life within our lifetime. Indeed, we are all vested in some way in the old way of doing things because of the enormous costs the transformation will impose on us, both financially and in behavioral change. Similarly, we approach VDI and the datacenter with prejudice and scar tissue, tend to be overly pessimistic about any technology that is potentially transformative for the desktop storage and IOPS problem, insist that storage needs to be dumb, reliable, and fast, and get frustrated when it doesn't work that way.

As we progress through 2011 we will see the mainstreaming of many of these transformative storage technologies (Atlantis being one of many). There is so much innovation in the datacenter/network/storage ecosystem, and so much investment in Silicon Valley around the sector, that many interesting ideas from the datacenter will get applied to desktops and change the way we compute.


Chetan Venkatesh

CTO & Founder

Atlantis Computing


@edgeseeker I do agree, and more than that, I've written MANY times that to get VDI even 'half working' today requires combining several pieces from several vendors that, in the end, simply give you a solution that is 100% unsupported in Microsoft's eyes.

What does that mean? Well, once you add the Atlantis storage layer, the Unidesk software layering, the XenDesktop agent, and the ESX hosting infrastructure to make VDI work, and then things go wrong, who are you going to call?

If you call Microsoft, are we 100% certain they will support the 'VDInstein' monstrosity we are creating?

That in itself would be a reason for anyone not to go down the VDI route. To be honest, I never officially asked Microsoft (or any other vendor, for that matter) what their support policy is if I call them with an issue with the OS or an application and tell them I am running it on a hosted desktop managed/constructed with all these third-party layers that exist to make Windows work in a VDI world.

As I said before, the problem with VDI is Windows, and we are doing patchwork to get it going.

Not saying we should all stop and shelve VDI. Nope. But we need to carefully understand where we are heading and what we are getting into when going down the VDI path.

Someone should actually write an article on the supportability of a VDI environment so we can all hear what the major vendors have to say once they get a call from a customer with such a 'nice and well put together' setup. :-)



@chetan. No need to talk to me; convince the world with customer traction at scale and have them come out and say Atlantis is a piece of cake to set up and works like a charm. While you are at it, can you let @edgeseeker know whether you are still in the layers business or not? Seriously dude, I like what you write, but now it's time to see some real-world execution so people believe that the vision is not delusion. That's all I am pushing you towards. F what I think; care what the market thinks of your solution.

@edgeseeker, containers again, huh. Has Parallels gotten traction yet with Windows and desktops? I like the idea, but it's just not going to happen without MS making that a platform feature. As for whether @ron has something unique, I think they do, although I see your point that layers are not new. However, their approach with layers is, just like Moka 5's. They are all focusing on management of the desktop OpEx, which @chetan correctly points out. That is a big deal, when we already know app virtualization gets us only part of the way, and containers, well....... That said, there is a long way to go.

@crod, I disagree with you. I could make the same argument about application virtualization and 3rd party support. Reality is nobody gives a $h1t. When it comes to paying maintenance, they all support what sells. These things are just short-term constraints that forward thinkers fight through. If I spent my career worrying about what may go wrong, I would do nothing. That is a major mindset difference I have seen between guys at the top and the ground soldiers. The leaders embrace change, while the rest just stick to the status quo for comfort. Of course VDI is supportable; it already is by some, including their vendors. VDI adoption is a different question, and no, iPeak is not going to solve it :-)


I have a simple question for the so-called experts on this thread (excluding vendors): what solutions have you tested in the past 3 months running 200 virtual desktops? For those of you bashing the vendors, please explain why the solution did not work. Let's see what comes of that!


OK. There seems to be a whole lot of misunderstanding here, and people giving too much love to the storage and software vendors.

Firstly, edgeseeker, Citrix PVS is NOT a type of SAN. The write cache can be in any one of three locations when using non-persistent images: on the PVS host, on local disk (this is what Brian is getting at), or in RAM. Each location has benefits and drawbacks. Which is best? Do the math and work it out for yourself given your particular use case. IMHO, PVS is of limited use for persistent images, as are linked clones, and so is local disk.

What is the best solution? Atlantis? Unidesk? PVS? Linked clones? Individual virtual disks? Local storage? Shared storage??? Dunno; I don't know what your particular use case is. Each technology has its place, and no one size fits all.

Honestly, guys, there is no point in getting into religious wars over which tech is better than the other, supporting the vendor that last bought you lunch or took you out on a jolly, or, worse still, feeling you have to support a decision just to justify having made it.

Great post Brian, keep the quality coming.

BTW, Dan Feller @ Citrix discusses local disks for PVS Write Cache in his (excellent) reference architecture for the (Fictional???) ABC school district.


@Edgeseeker - Yes, I happen to be a big fan of containers, because I can't imagine this mishmash of over-engineered solutions that we call VDI being the long-term answer to desktop management.

I do agree that containers won't take off unless Microsoft shows some out-of-the-box thinking. But I wouldn't bet on that, because this company is devoid of anything remotely resembling innovation. I doubt Parallels is making much headway as far as VDI is concerned, even though I'm sure they've had a few wins here and there. Microsoft doesn't want its partners to do anything that can possibly be construed as "platform," because the platform is Microsoft's baby, and partners had better stick to extending this platform by building management features on top of it.

I suspect VDI will remain a niche just like RDS, and that's what Microsoft would like to see. Otherwise, it will cannibalize the Windows desktop business, which has been Microsoft's mainstay for as long as there has been a Microsoft. You can simply observe this by watching Microsoft's stock: it only surges whenever a Windows refresh cycle is upon us.


@Mr. Incredible - Sorry that you don't agree. Just because PVS offers a client cache option doesn't change the fact that vDisks are mounted across a network connection. You can't simply fill the client cache and disconnect from the PVS server. Therefore, it's essentially a SAN-like architecture. If your disk is being served up from a host, then it's a SAN-like solution, whether you agree or not.

I haven't seen anyone shoving any vendor's solution down anyone else's throat. As far as I can see, this thread has been nothing short of a healthy discussion. It sounds to me like Dan Feller is the one taking you out to lunch.


Sorry, I meant @appdetective in my previous post. That's twice in one day. Hey, what kind of car do you drive? I bet my space shuttle can take you out.



Why don't you challenge the vendors on this thread to make a 1 hour video on how to deploy 1000 virtual desktops.

I'll be more than happy to add a whole bunch of usage scenarios  and what to audit to prove the solution works. (VDI only no offline)

You could call it the VDI low cost storage Smackdown!

By the way HA is a must have!


I agree that a SAN is overkill for non-persistent desktops. We've been using Sychron for desktop virtualization for several years (before View and XenDesktop were viable), and they demonstrated the benefits (cost, complexity, and performance) of local storage to us early on. As a smaller company (<200 desktops total), a local storage approach gives us greater density per physical server, further helping to reduce costs. HA has never been an issue for us, as we were a TS shop before starting the move to virtual desktops; what has always been an issue is cost.


@edgeseeker, I 100% agree with you: MS will protect their monopoly, don't innovate, and would love to keep VDI niche. I'm actually thinking of writing a BM.com post on this topic; I'll put my thinking cap on.

@Watson, I've tried getting Atlantis to work; it's not easy and way too painful, so I gave up trying to get it scaled out. Unidesk I have played with, not deployed at scale. It's still too early, and I am not sure how all the layers are going to be replicated globally. Well, I should say it's unproven in my world currently, so I'm keeping an open mind. There is a ways to go IMO, and I am still not 100% sure that all apps will work if layered. I already have an app compat problem with TS...

@Watson, I would love to understand why you think HA is a must-have. HA for what? Do you deploy HA desktops today? I have a better idea: why don't you write down your use cases and let's start a discussion on what matters.


@appdetective Look at my other posts: I have Atlantis up and running to reduce IOPS and storage, and it took 1 day to deploy. What are you talking about! I've never tried Unidesk, which is one of the reasons I posed the question. Frankly, I don't know why I would need it. I run Symantec, which is a kernel-level driver (almost the same as Unidesk), and I only use it for applications that will not play nice together. There is no way I would try to virtualize 100% of my apps. It won't work, and I don't care what anyone says!

HA is a must for a couple of reasons:

1. I did not bother with all the layering crap. In my environment every user has a persistent desktop with local profiles.

2. I enjoy being employed and collecting a paycheck, so HA is a must. I don't know where you work, but if I brought down 500 users and they lost everything they were working on, I would be fired.

My use cases are simple:


Remote Call center workers

Remote developers

Within 6 months:

Standard office workers

Why? I can deploy VDI for close to the same price as PCs. OPEX goes down, and my company is happy.


@Watson. My Atlantis issues were around the stability of ILIO and the lack of management features, such as making changes to existing configurations. The system died and got corrupted many times, and I had to set it up again from scratch. Eventually I just gave up. It was premature to even put it through a security audit to see if what they are doing represents any risk. Great to hear it's working for you.

Which part of the Symantec suite do you use? It's unclear what you mean above by kernel driver; which product are you referring to? I agree with you that it's not possible to virtualize all your apps; it makes no sense to try, and it will never happen.

I don't understand, if you use persistent VDI (like everybody else in the real world) with local profiles (unlike most VDI folks) for personalization, how you claim this is cheaper in the datacenter, if I understand your situation correctly. Sure, Atlantis has the potential to reduce storage in the DC and accelerate IOPS if it works, but is that really cheaper? That's an amazing claim that I doubt scales for the masses today without a ton of hand-holding. Not saying it doesn't work for you; I just don't buy it, otherwise many more people would be using Atlantis today.....

I also don't get why you would bother with local profiles (a terrible legacy practice) that will just grow your disk in the datacenter over time. No wonder you need HA. Ouch, bad architecture IMO that won't scale to complex users. If you're serving remote call center workers today, why not an RDS-based solution, which is far cheaper? What is VDI buying you for remote call center workers, especially if you have to use Atlantis to get the costs down, which I still find hard to believe is cheaper than a PC?


@Brian, I have asked myself the same question in the past and published an article about it. I agree with you that there are some use cases where local storage is a fit, and I have a really large implementation use case for that. In these cases, HA and vMotion may not be as important.

Expensive storage array not required for VDI?


Andre Leibovici


Hi Brian

Really good article and many good comments (noticed this on your twitter :-)

But could you maybe make a simple drawing of how you see this configuration with local storage and Provisioning Server?

We are about to set up a completely new environment, with XenDesktop, Provisioning Server, XenApp for streaming, etc. (running on VMware).

But I can't really seem to get an overall overview of your theory, so maybe if you could just explain it with a quick drawing, it might be easier to understand?




@Chetan - Thanks for the clarification on the differences between Atlantis and Unidesk. However, I don't buy this huggy-kissy explanation, as I'm fairly sure both Atlantis and Unidesk are working to plug any perceived holes. The two companies are inevitably destined to become competitors.

@Appdetective - How long ago did you set up ILIO? In all fairness, it worked great for us, and we really didn't encounter any problems. We did seek some hand-holding from Atlantis during the early stages, however.

I still feel that VDI-aware storage solutions are the way to go. Traditional SANs should be dubbed F***SANs, or FSANs, from this point on. VDI-aware SANs should be content aware, and therefore both Atlantis and Unidesk are on the right path. I do agree that storage should be local, but not in the manner Brian is describing. A VDI-aware SAN should preferably create a cluster-wide array leveraging the local storage found in each individual host in the virtualization cluster. This is similar to LeftHand Networks' VSA architecture, but now that they've been gobbled up by HP, I expect smaller, more nimble players to emerge. But again, content awareness at the storage level is key. Let's call it CAS.

I guess politics, real estate, and now VDI storage, do have something in common after all - They're all LOCAL.



"@crod, I disagree with you. I could make the same argument about application virtualization and 3rd party support. Reality is nobody gives a $h1t. When it comes to paying maintenance, they all support what sells. These things are just short-term constraints that forward thinkers fight through. If I spent my career worrying about what may go wrong, I would do nothing. That is a major mindset difference I have seen between guys at the top and the ground soldiers. The leaders embrace change, while the rest just stick to the status quo for comfort. Of course VDI is supportable; it already is by some, including their vendors. VDI adoption is a different question, and no, iPeak is not going to solve it :-)"

I disagree. :-)

I think, correct me if I am wrong, that you work for a company full time in IT, and not as a consultant dealing with hundreds of different customers a year. Is that the case?

My experience through my companies (one is a consulting firm) is that we have seen MANY projects where a certain technology was indeed canned/dumped because it led to an unsupported configuration. A very typical example was Microsoft App-V. For a very long time, they supported apps deployed using it but NOT apps with MUI, a must-have requirement in Canada. One of the environments we worked on, with 50,000 concurrent users, simply canned App-V for that reason.

I have several other examples showing that. The main question is: if there is indeed a chance, or even a small possibility, that a 100,000-user environment will have parts that are NOT supported by a major vendor, will the decision makers (here I am talking about C-level people, not the IT guys implementing anything) agree and sign off on such a solution? Again, in my experience, if they are indeed aware of it, they NEVER sign off. Where I saw it happen, it happened because the IT people, for several reasons, simply ignored the fact and decided not to tell anyone what was really going on.

My point again is that any new technology usually stays in an 'unsupported' mode for a while, until it indeed becomes mainstream, and then, at that stage, vendors have no option but to support it.

As of today, I do NOT think that is the case with VDI. First, because it is NOT mainstream, no matter how many people here post that they are doing it. There are more people in the world than us. :-)

Second, because yes, as of today, you need to glue together MANY parts to get it working properly.

I guess we will find out what the real policy is by simply setting up a fake call to Microsoft's PSS and going through the exercise of trying to get support for a ThinApped Office 2010 running on a hosted Windows 7 VM with layering. Then we post the results. Anyone want to try that? :-)

As this is off topic, feel free to continue the discussion by email.



@appdetective - I appreciate the merit of your views, and it's good to know that secretly, behind all those *** herniating wedgies you try to give Atlantis on the comment stream, you're actually rooting for us. But clearly your views on Atlantis's maturity are dated. We will be at VMworld, and maybe you'll come look us up (anonymously) and update them. We have put an enormous effort behind simplicity and ease of deployment. In addition to our software, we now offer a hardware appliance completely pre-configured for 500 and 1000 desktops, with defaults for enabling XenDesktop, Quest vWorkspace, and VMware View. The actual implementation timeframe is under 2 hours from rack-and-stack to user login. Both software and hardware support any storage backend (local disk, DAS, NAS, SAN (iSCSI/NFS), and Fibre Channel) as well as Ethernet (1GbE and 10GbE).

@edgeseeker - You totally called me out there, and you are spot on :). I think there is weight to what you say about the inevitable competition that will result once holes are plugged. But I don't see Atlantis as a desktop management company, which Unidesk aspires to be (there is a BM.com interview where Chris Midgley says as much). I don't want to put words in their mouth, but to me that means the companies inevitably have two different contexts from which they will build their businesses.

@appdetective: We are absolutely interested in layering and continue to offer those capabilities as an add-on module to our VDI acceleration and storage optimization product. But our primary focus is on building a VDI storage foundation around our IO acceleration and storage optimization capabilities that enable at-scale desktop virtualization in the datacenter. Layering is not front and center at Atlantis, but a useful and important after-market add-on. This was a change in direction from when we first went to market in early 2009. Why did this change? Because I learned that none of my customers care about enabling end-user-installed apps or thought a richer personalization capability for the user was vital to a better VDI experience. And the customers who thought UIA was useful, or who gave those kinds of administrative privileges to their users, tended to be small shops (the S in SMB), and they don't have datacenters and therefore don't have interesting and expensive storage problems when they virtualize their desktops. If customers want to provide some form of UIA, they will do it through things like Citrix Dazzle and Microsoft's upcoming (rumored) app store. Those apps will come pre-virtualized through App-V, Citrix XA (and streaming), ThinApp, Symantec Altiris, etc. Our layering add-on is purely to support those few (ever-shrinking but still inevitable) Windows applications that can't be virtualized, and for containerizing OS settings and customizations and managing those as independent layers.

This has been by far one of the most exhilarating discussions on VDI and storage on the internet EVER. Kudos to everyone who has posted their views and made this such a rich learning experience.

Chetan Venkatesh

CTO & Founder

Atlantis Computing

twitter: @chetan_


Bottom line: vendors have a vested interest in capturing a piece of the lucrative desktop market. They are making promises and suggestions that in practice are far more complex and expensive to deploy than even they know. Coming to the recognition that local storage is a valid option is a natural conclusion after deploying a few systems, especially ones that involve HA, high SLAs, scale, and any real complexity.

The real lesson here is not to trust market hype and vendor assertions. They have gotten particularly unrealistic in the last few years around VDI.

Right now, "VDI" is a very specialized tool that does great things for the right uses, but those are very specific cases, nowhere near the mass-adoption case that vendors want you to put your $$ behind.

In this regard, Kaviza and Virtual Bridges do get it. Atlantis is also on the right track, as they address the key challenge of providing scale to a VDI solution. I can't speak to the production readiness of these solutions, but the insight they convey is exactly in line with real-world experience.

Thanks for the useful and honest discussions


@Steve Greenberg, thank you for your comment on Kaviza, we agree :), and here's why.

Here are the choices folks seem to be contending with:

Option 1: Shared storage. You get high availability, dynamic load balancing, etc but cost is high.

Option 2: Local storage with traditional VDI. None of the above but cheap.

Where is Option 3??

The conclusion here seems to be that Option 2 is good enough for VDI. Or is it? What are you losing? The main reasons people go to VDI are to streamline management and increase desktop uptime. Will Option 2 give you these?

i) What happens when you go beyond 1 server? Do you manage all the desktop images manually on each server? Keep them up-to-date? Statically provision to each server?

ii) What happens when a server fails? These are not server workloads, but users care about their desktops being up. No redundancy means downtime. Imagine a server running 50 desktops goes down. Do you want 50 irate users?

The critical question is whether we can retain the inexpensive, high-performance, and obvious choice of local storage in VDI when we need scalability and high availability. Can we get a scalable, highly available desktop service without the high upfront and ongoing costs of shared storage?

So, what if you had an Option 3?

Option 3: Local storage with a new VDI architecture designed to deliver high availability and dynamic load balancing on DAS.

This can be done - in fact, it's exactly what Kaviza has. For a technical explanation of how we achieve this, please see kaviza.blogspot.com/.../yes-we-can-vdi-with-local-storage-and.html


@Watson - For your number two, I have several issues. A good percentage of users have had physical desktops for years with their data and profiles on centralized servers, so their desktops would not need to be backed up. Why is it that for these same users we now need HA? And are we confusing HA with ZERO downtime?

HA does NOT need to include a SAN; with load-balanced TS we always had HA if we had N+1. When a user logged on after a host failure, they lost whatever unsaved work they had, but were directed to an available host for a new session. The same goes for VDI. I never needed a SAN for TS load balancing, so why assume you need one for load balancing Hyper-V or ESX servers?

The answer to me is obvious: this is what people are used to doing for mission-critical servers that are virtualized, so that's what they are designing into virtual desktop deployments. The people screaming that they need HA are (for the most part) people who have only done server virtualization, and the ones saying a SAN is overkill for non-persistent desktops are the people who have been doing TS for 10-15 years.

There is no one size fits all.


I was also curious what server hosts 500 concurrent virtual desktops, as most hosts are in the 30-75 range. And do SANs never fail? SANs are useful for storing my image and application libraries, persistent virtual machines, user data, server VMs... but for a non-persistent desktop, where the desktop a user gets from server 6 does the same as the one on any other server, how is a SAN necessary, less expensive, more performant, or even more highly available?


The challenge with an N+1 model for high availability is how the desktop images stay in lock-step across these machines. Who is doing this? It cannot just be assumed.

That is why traditional VDI relies on a shared storage pool: it assumes that every server can reach every image. If you use local DAS across multiple servers with a traditional VDI architecture, someone else has to do this work. Who?

Which is why, to leverage local storage and still provide high availability via N+1, plus scalability, you need to redesign the VDI stack.
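The lock-step problem krishnas raises is, at its core, a replication job: some management process has to notice that a host's local copy of the golden image has drifted from the master and push a fresh copy. A minimal sketch of that idea (illustrative only; this is not Kaviza's or any vendor's actual tooling, and the function names are made up) might compare content hashes and re-copy only stale replicas:

```python
import hashlib
import pathlib
import shutil


def sync_golden_images(master: pathlib.Path,
                       replicas: list[pathlib.Path]) -> list[pathlib.Path]:
    """Keep per-host local copies of a golden image in lock-step with the
    master by comparing content hashes and re-copying stale or missing
    replicas. Returns the list of replicas that were updated."""

    def digest(path: pathlib.Path) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    want = digest(master)
    updated = []
    for replica in replicas:
        if not replica.exists() or digest(replica) != want:
            shutil.copyfile(master, replica)  # push the current master image
            updated.append(replica)
    return updated
```

Since hosts that are already in sync are skipped, updating 20 or 100 servers is mostly parallel copy time, which is the point made above: it needs a management process, not an additional body.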




@Krishnas - You can have shared storage in a clustered environment without having to use VMware HA. There's no reason why individual hosts cannot all connect to the same set of storage volumes. Should host A fail, any stateful desktop VM registered on host A can be programmatically re-registered on host B. That would be the task of the VDI management layer (the broker), and we all know that not all brokers are created equal. From what I know of Kaviza, you're actually using VMware linked clones on the local drives of each ESX host, correct? Please correct me if I'm wrong.
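The broker-side re-registration described above can be sketched in a few lines. This is a hypothetical illustration (the data structures and names are mine, not any broker's API): desktops registered on the failed host get spread, round-robin, across the survivors.

```python
def fail_over(registrations: dict[str, str],
              hosts: list[str],
              failed: str) -> list[str]:
    """Hypothetical broker failover sketch: re-register every desktop VM
    that was on the failed host onto the surviving hosts, round-robin.
    Returns the list of desktops that were moved."""
    survivors = [h for h in hosts if h != failed]
    if not survivors:
        raise RuntimeError("no surviving hosts to fail over to")
    to_move = [vm for vm, host in registrations.items() if host == failed]
    for i, vm in enumerate(to_move):
        registrations[vm] = survivors[i % len(survivors)]
    return to_move
```

In a real broker this step would also restart each re-registered VM on its new host and redirect the user's session, but the bookkeeping above is the essence of the "programmatic re-registration" argument.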

@ Patrick - You failed to mention that your comprehensive solution obviates the need for roaming profiles, regardless of whether the desktops are stored locally or centrally on a SAN. Don't you have some sort of built-in profile management feature?

Also, let's distinguish between VMware's HA and the generic need to achieve "high availability". VMware's HA is not the only way to do HA.


@krishnas - You're right, the parent disks need to be in sync, just as they do when using something like VMware View and linked clones, where a replica of the master image is replicated out to the different datastores. Treat each server's local disk like a datastore, where each server has a replica. If one can update 20 or 100 servers in about the same time as one server, then there isn't much of a problem; there just needs to be a management process to do this, not an additional body.

You are correct to assume nothing, as assumptions get people in over their head.


@Patrick - You are correct, but the difference, as you know, is that when a single PC breaks (corrupt profile, etc.), one user loses a couple of hours of work. When virtual desktop sessions go down and 500 users might lose 15 minutes of productivity, the impact is that much greater. Most importantly, based on my company's internal SLAs, I would be looking for another job.

That's cool for a TS environment; in fact, we run a single call center app on TS, and that is exactly how we have it configured. The same does not go for VDI, because the business is not used to having all their users lose connectivity for 15 minutes out of the workday, including what they were working on. Too much risk!

We are all entitled to our opinions. Unfortunately, in my organization, my opinion is moot when it comes to not meeting uptime SLAs. Therefore, HA is a must!

500 concurrent desktops run against one ILIO controller. We have blades deployed in the same rack hosting the hypervisors and guests. Sorry, I don't understand your follow-up statement; can you please explain?


@Watson - You keep saying that all 500 users are going to be out of commission. Can you clarify how? If you have 20 servers with 25 desktops each, I would assume that in the unlikely event of a server failure, only 25 desktops would be temporarily down. Depending on which VDI management solution you have in place, as well as your deployment architecture, those 25 desktops could be brought back up on another server pretty quickly. You don't need a "luxury HA" feature that costs an arm and a leg to achieve high availability. Talk to Patrick and he'll show you.


@edgeseeker -

When individual hosts connect to a shared volume, is that not shared storage? That is what we want to avoid; not because it is bad, but because it is expensive.

So, without using ANY shared storage for the desktop images, how do you achieve redundancy, failover, and the ability to dynamically scale and load balance?

With Kaviza, everything is stored on local DAS: both the golden images and the linked clones. Kaviza handles as-needed replication, optimal load balancing, dynamic scaling, and high availability automatically, using just DAS.

The key is how a VDI solution supplies this functionality out of the box, so that a desktop IT admin, not a storage or virtualization expert, can set up and manage virtual desktops, and how it keeps costs low enough that every virtual desktop costs less than a PC to deploy, even if you are only doing a small number like 25 desktops.




@krishnas - Yes, I know it's shared storage. Just because it's shared storage doesn't mean you have to have VMware HA. Besides, you can procure a robust shared storage solution nowadays without having to pay an arm and a leg. Of particular interest to me are the virtual appliance-based solutions, like LeftHand Networks, that transform the local disks found in each virtualization host into a cluster-wide shared storage system. Come on, who in their right mind would want to replicate master images and deltas to multiple hosts? That's f****** crazy!!! VDI deserves better than this clunky approach. On top of that, many features in Hyper-V and ESX are only available if you use shared storage. Excuse me, but your solution is VMware-centric, correct? So, let's not open another can of worms as far as the $$$ you have to dish out to buy the VMware "Rolls Royce" from f****** Palo Alto. And you do use linked clones, right? Well, they suck!!! Anyone will tell you they're a far cry from the content-aware storage solution that VDI is in dire need of. Sorry buddy, but I'm not buying it. Now I'll give you one thing, though: your solution is probably fine for 50 to 100 users (and I'm being generous here). Anything above that, and it becomes a crazy management proposition.



Our goal is to bring down the cost of virtual desktops; the solutions you mention are good technologies, but they all add to deployment costs.

Kaviza is hypervisor-independent - we currently support both free XenServer and the cheapest ESX, and Hyper-V support is on the roadmap.

We do not rely on any of the added management features of those frameworks. Our all-in-one VDI solution uses our own distributed, desktop-specific management layer for HA, so you do not incur the added expense of elements such as the ones you mention, e.g. LeftHand Networks or vCenter.

In terms of scaling, Kaviza scales on demand; you can add servers on the fly and grow the grid. We actually have some videos on our site showing how easy this is.



@Krishnas - Will give you the benefit of the doubt and watch the videos. Good points you made there.



Racks go down! Our first deployment was for offshore developers; I'm not going to bother explaining what would happen if a rack went down and I did not have HA.

I agree with those of you who say you can use local storage with profile solutions. In fact, we have a really cool lab deployment with LWL ProfileUnity and Win 7 (not supported by VMware, but it works so far) running off local drives. Density is to be determined, as we are tweaking it now.


@Watson - Yes, racks do go down, but how often? And if that's the way you think, then SANs go down too, and maybe the entire data center. How many 9's are you trying to get to?


@edgeseeker - Not sure where you work, but burn the VP of the XYZ business group once and they won't take any risk moving forward. It's called playing it safe.

Without VDI - Little to no perceived risk

With VDI - Lots of perceived risk

Over-architect and everyone is happy. Frankly, I'm not convinced I'm being overly cautious, because it's been hard to find a profile solution that truly does the job. More importantly, I probably would not have to go through this if Gartner and every other analyst were not bashing the cost and complexity of VDI. When we adopt new technology, the sharks start to circle, looking to move in for the kill. One screw-up and I'm chum! Sorry, not going to happen!


@watson, you talk about reliability, which I understand, but you seem to be missing the fact that the higher probability of failure is in the broker itself. No vendor provides a way to connect directly around this failure (Ericom does, to a point), so over-engineering the wrong component for the sake of it makes little sense to me.

@Chetan. I am a fan of solutions that work and enable the ecosystem at scale. Sorry, my information is not overly dated (6 months), and I’ve verified it with people more recently who see only marginal improvement. Atlantis keeps changing its story. First it was layers, rah rah rah; then it was "we do this great storage thing" (not proven); and now it's "oh yeah, we have this layers thing too." This is mostly vendor hype, and I will call out BS. You are correct that this does not mean I want you to fail, but it does mean that you need real-world traction, which you do not have right now, no matter what you may claim. If you do have it, I’d love those real-world customers to come forward and speak on your behalf, and I will gladly shut up and eat humble pie. Right now it’s too difficult to set up, and you have to keep reconfiguring the F’ing ILIO every time you make a change. The product also crashed often in testing, so it's very scary to scale up for production workloads. A lot of TLC is needed to pilot. It's an immature product, not a product without potential, and I do sincerely wish you luck.

@claudio, I have worked for companies and as a consultant for many, many years. At the C level, 100% yes, risk is assumed to carry the business forward. Vendors are aholes and resist change; one can’t let those dead-brained vendors hold one back. Of course, if a large % of your environment is not going to be supported because you are so far out there, that represents a ton more risk. I don’t think that describes the vast majority of virtual desktop deployments. Still today there are plenty of vendors who say TS is not supported, and who really cares..... with the exception of very specific use cases, as you mention, for some businesses. As to your point about complexity, I also have to take exception to some extent. The hypervisor can’t matter if something is running on it. F the vendor who says they won’t support it, including MS. I don’t think that leg is valid to stand on. It’s a risk worth assuming. And yes, go call PSS; if a customer is dumb enough to accept what PSS tells them and not insist and create a fire drill, then they deserve to stay behind the times and #15 in their respective industry. Every F’ing place I have worked is filled with constraint-minded IT people who have no clue how to present risk to a C-level person who can then make an intelligent decision about how to move forward. I.e., C-level people have vision; IT people represent resistance. Talk to a business user and ask them how much they like IT: they don’t. It’s a joke in most organizations. Business users are sick of IT resistance, and especially of information security idiots' resistance to everything. That world is a dead world. Business users don’t give a crap whether IT supports it if they can get what they want from elsewhere, and that happens all the time, which is why users ask for things like UIA. How about IT starts acting like business users and demands the same from its vendors....


You need a SAN:  

1 reason, 2 letters.  

D    R    

-jbird has spoken.


D R = Dumb Reason. If you store data on your desktop, that's fault number 1. If you are smart, use home directories for data on a SAN or NAS by all means. To DR system files that hold no data = Dumb Reason.


@appdetective - do you even work in IT? That was the dumbest comment I have ever heard! Let me paraphrase: IT, which is a cost center, is going to tell the business units, which generate revenue, to do it the way IT tells them to!


@Watson I'm not the one posting all over, getting my panties in a twist, and not understanding desktop 101: that CapEx < OpEx. I'm not the one posting and crying that I can't implement VDI due to CapEx because I don't understand how to build a use case. I'm not the one looking at just helpdesk tickets to confuse the hell out of myself. I'm not the one recommending expensive SAN solutions, introducing Frankenstein solutions like Atlantis to get it to work, and exaggerating a vendor's claims. Dumb is people who really think they need a SAN to build DR for desktops and cry like a schoolgirl over projects stuck on CapEx. Dumb is people who need to back up a stateful desktop as opposed to being smart about what to back up on the desktop. There is very little point in further debating junior IT guys who have no clue how to manage to the P&L of the business, because they suck up business spend with stupid IT architecture and then wonder why the CIO thinks they are idiots.


@appdetective - you're not stating anything, which is not helping those of us who are trying to collaborate and find solutions to our problems. I wish the past couple of months of posting had done that, but alas, that's not the case. I concede you're smarter than me and obviously know how to implement VDI/TS in a more cost-effective manner. Too bad you don't want to share any of that knowledge and help idiots like myself learn how to do it better. I'll check back in on BM next year!


While I'm sure this message won't get seen now that the article is about to exit the front page of BM.com, I have something to raise...

I think using the SAN for DR of desktops is wrong, and HA can be achieved in other, cheaper ways. But what happens if you're using blades? You have limited options for local storage: what, 2 x 15K disks? Not exactly packing a punch in IOPS. If we went down the blade route (we still might in the future), I would expect to get at least 30-40 Win7 VMs per blade.

Depending on the desktop solution, rapid-clone (differencing disk / linked clone) technology is still an option on local disks, so capacity isn't my number 1 concern - though it's still a concern.

Just a thought.
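To put some rough numbers on the blade concern: a single 15K spindle is commonly budgeted at roughly 175-210 IOPS, and Win7 VDI desktops are often planned at 10-20 IOPS each in steady state. All of the figures below are rule-of-thumb planning assumptions, not measurements, but the back-of-the-envelope math shows why two local disks look tight:

```python
# Back-of-the-envelope IOPS budget for 2 x 15K local disks in a blade.
# All figures are common rule-of-thumb planning numbers, not measurements.
iops_per_15k_disk = 175      # conservative per-spindle estimate
disks = 2
raid1_write_penalty = 2      # mirrored pair: every write hits both disks
write_ratio = 0.8            # VDI steady state is often assumed write-heavy

raw_iops = disks * iops_per_15k_disk  # 350 back-end IOPS

# Effective front-end IOPS once the RAID-1 write penalty is applied:
effective_iops = raw_iops / (write_ratio * raid1_write_penalty
                             + (1 - write_ratio))

iops_per_desktop = 10        # light steady-state planning figure
supported_desktops = int(effective_iops / iops_per_desktop)
print(f"~{effective_iops:.0f} effective IOPS -> ~{supported_desktops} desktops")
```

Under these assumptions the pair of disks supports somewhere around 19 light desktops, well short of the 30-40 Win7 VMs the blade's CPU and RAM could otherwise host, which is exactly why IOPS rather than capacity is the sticking point.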

SANs should be used for servers, data and apps.

Personally I love the idea of using local storage for desktops.



apologies for the odd spelling.... winmo predictive txt...


My response to this would be: why do people purchase SANs today? An overwhelming and unavoidable ROI, business continuity, and flexibility. If you want to go backwards to local, distributed storage, that is certainly your choice, but be prepared to unwind that TCO and IT agility.

The only reason this conversation exists is because SSD SANs have not been viable and cost-effective until now, when WhipTail released the Virtual Desktop XLR8r. Now you have a cost-effective and performance-effective way to do SAN-based VDI: 250,000 random write IOPS in a 2U form factor requiring less than 200 watts of power, at 1/10th the cost of other VDI SAN storage options.


LOL. I figured this out for myself last week. I plan to use local storage in each of my 4 XenServers and use PVS to provision the W7 desktops with XenDesktop.

The only thing I haven't figured out yet is how to add the local storage to a XD pool and load balance among the XenServers.