Brian’s wrong… about VDI and Local Storage.

A couple of weeks ago Brian wrote an article titled “F*** the SAN. VDI storage should be local!” It had about 40 comments and 3,000 views in its first day, showing the emotion and interest in this topic. It sparked a small war within the community (as seen in the comments section) and a number of debates about the various use cases for SAN, where it fits, and where it doesn’t. I commented in favor of local disk, but only when it makes sense, and with appropriate caveats. Well, today I am going to take the other side of the argument and say that “Brian is wrong”.

Let me state that I don’t have a dog in this fight. I don’t sell storage (local or centralized), nor do I pimp a product that requires either type of storage (Unidesk will work with either). And while I believe that ignoring local storage when designing a solution is essentially “IT malpractice”, I do not believe you can make a flat-out statement (like Brian did) that all “VDI storage should be local!” Also, I should note that I met up with some of the EMC VCE guys (like Fred Nix and Brian Whitman) last week, who raked me over the coals pretty well about local disk vs. SAN and supplied some of the pro-SAN ideas for this article.

With all of this said, let’s get the party started. People are interested in this topic for two basic reasons:

  • Storage is the biggest line item (in terms of dollars) for most VDI implementations
  • Everyone is looking to “solve” the disk problem (loosely defined as reducing cost and handling the IO)

Obviously, just reducing the cost of the storage does you no good if you ignore the IO issue. Yes, you could get really cheap SAN storage by using nothing but the biggest, lowest-cost SATA drives around, but you may be doing so without being able to handle the IO demands of the desktops and have to purchase better storage later on anyway. Of course SAN storage is expensive when compared strictly GB-for-GB with local storage, and local storage has been used for desktops for a long time, or so the story goes.
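
To put rough numbers on the GB-versus-IO trade-off, here is a quick back-of-the-envelope sketch; the prices and per-drive IOPS below are illustrative assumptions, not quotes from any vendor:

```python
# Rough $/GB vs $/IOPS comparison for three drive types.
# All prices and IOPS figures are illustrative assumptions, not vendor data.
drives = {
    # name:            (price $, usable GB, steady-state IOPS per drive)
    "7.2K SATA 1TB":   (150, 1000,   80),
    "15K SAS 146GB":   (250,  146,  180),
    "SSD 100GB":       (900,  100, 5000),
}

for name, (price, gb, iops) in drives.items():
    print(f"{name:14s}  ${price / gb:5.2f}/GB   ${price / iops:5.2f}/IOPS")
```

Cheap SATA wins handily on cost per GB and loses just as badly on cost per IO, which is exactly the trap described above.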

So why can’t I use local storage?

  • Not always enough drives/spindles in the server
  • DR / Replication
  • Fault Tolerance / Failover
  • Power management / load management
  • Rapid Provisioning and Reclamation

Not enough drives/spindles in the chassis

When looking at today’s hot enterprise computing models, the growing use of blade servers cannot be ignored. Cisco’s UCS, HP’s blade series, and even Dell blades are being purchased at an ever-increasing rate. The reasons behind the use of blades are irrelevant here; what is relevant is the number of spindles you can get in any given blade.

If we look at the majority of two-socket blade configurations, your max number of drives is going to be two (2). Assuming you are using 15K SAS you would have about 400-500 total IO available (obviously variable based on configuration). If maximum user density is your goal, you may not have enough IO to support more than 20 or 30 desktops on this blade and will need to move to a centralized storage configuration that has some high-end caching and the ability to move “hot” data to faster spindles or maybe SSD (local or remote). But if you move to SSD locally, your cost per GB may be just as high as your centralized storage anyway.
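
For a rough sanity check on that density number, here is a quick sketch using the 400-500 total IO estimate above; the headroom factor and the per-desktop IOPS figures are planning assumptions, not measured results:

```python
# How many desktops fit on a two-spindle blade, keeping ~20% IO headroom?
# Per-spindle and per-desktop IOPS values are assumptions for illustration.
def max_desktops(spindles, iops_per_spindle, iops_per_desktop, headroom=0.8):
    usable = spindles * iops_per_spindle * headroom
    return int(usable // iops_per_desktop)

for per_desktop in (10, 15, 20):
    low = max_desktops(2, 200, per_desktop)    # ~400 IOPS total
    high = max_desktops(2, 250, per_desktop)   # ~500 IOPS total
    print(f"{per_desktop:>2} IOPS/desktop -> roughly {low}-{high} desktops per blade")
```

At 15-20 IOPS per desktop you land right around the 20-30 range, and a heavier workload pushes it lower still.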

Of course you can always move to a 1U or 2U rack server configuration, but oftentimes the “VDI guy” doesn’t own the hardware selection in enterprises and is stuck with what he can get…

DR/Replication

Using local storage will require that you use “some other tool” for DR. SAN-based storage allows you to replicate the VMs, leverage things like SRM, and know that the desktop VMs are replicated just like any other VMs in the environment.

Does this mean you COULDN’T do DR with local disk? No, but it does mean having the desktops in a second location, spun up or ready to be spun up. These DR desktops have to be patched and managed as any “stand-by machines” have to be. This is often a “hidden” cost we don’t look at much.

Fault Tolerance/Failover

Regardless of the hypervisor, recovery from a server/host OS failure is going to be faster with centralized storage. Those of us who are VMware guys have fallen in love with HA services and the ability for VMs to be restarted within minutes on another host in the cluster. If you are using local storage this option goes away and recovery must be done another way. If you are using persistent desktops this becomes a much stickier issue than if you are using “throw away” pooled desktops.

Power Management and Load Management

While not important to every environment, the idea of being able to consolidate loads dynamically during off-peak hours, and possibly even power down hosts or place them in standby, is very attractive. If 80% of your users use their desktops during the day, then 60% (or more) of the time the VMs are unused, and there is potentially a large power and cooling savings sitting out there.
Without centralized storage these features are not available, or require numerous hoops to jump through.
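
To put a hedged number on that, here is a small sketch; the host count, wattage, working hours, and standby ratio are all placeholder assumptions, not measurements:

```python
# Illustrative off-peak consolidation estimate. All inputs are assumptions.
busy_hours_per_week = 10 * 5                 # desktops busy ~10h on weekdays
total_hours_per_week = 24 * 7
idle_fraction = 1 - busy_hours_per_week / total_hours_per_week
print(f"Idle fraction of the week: {idle_fraction:.0%}")       # ~70%

hosts, host_watts = 20, 400                  # assumed VDI cluster size and draw
standby_ratio = 0.75                         # share of hosts that could power down off-peak
idle_hours_per_year = (total_hours_per_week - busy_hours_per_week) * 52
kwh_saved = hosts * standby_ratio * host_watts / 1000 * idle_hours_per_year
print(f"Potential savings: ~{kwh_saved:,.0f} kWh/year, before cooling")
```

Even with conservative inputs the idle window is large; the question is whether your platform can actually drain and power down hosts, which is where centralized storage (or a lot of extra tooling) comes in.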

Rapid Provisioning and Reclamation

Another benefit of centralized storage is the rate at which new desktops (or any VM) can be deployed. Provisioning to local storage will often take longer, or sometimes require a manual process. Let’s face it, copying a template from disk to disk on the array is going to be faster than copying from local disk to local disk over the network.

Of course, reclamation is an even bigger thing to me. If you are allocating persistent desktops, the ability to track your storage use and reclaim it as desktops become “stale” is going to be very important. Much like the “VM sprawl” that occurred once VMware kicked in the datacenter door, it is likely you will wind up with the same issues here: unused desktops, old templates, etc. I remember a MetaFrame consolidation project I did once where we found 6 servers in a silo that hadn’t been used by a user in over a year… no one ever noticed.

The dynamic datacenter demands centralized control, provisioning and reclamation of resources. Without centralized storage your VDI implementation is essentially NOT part of that dynamic datacenter.

Is this for everyone?

Now, to be realistic, if you work in an environment that only has 20 or 30 servers and is planning to use 2, 3 or 4 of them for desktops, you may not see anything in this list that applies to you. If that is the case, that’s OK!!! But those in enterprise environments, looking over their IT landscape and all the requirements placed on systems for recovery, compliance, DR, availability, limited facilities, etc., are starting to see that local disk may reduce CAPEX right now, but create a number of other problems for them and their teams.

With all of that said… Brian is wrong. Not ALL VDI storage should be local. There is a place for local storage, and you should be able to articulate WHY you are (or are not) going to use it. But to ignore SAN storage and wave it off for VDI projects is just as much “IT malpractice” as saying there is no local disk option.

Join the conversation

45 comments


It seems this dialog around disk is driven from a distributed compute model perspective.  The desktop PC had a hard drive in it, ergo the virtualized PC should have a hard drive too, makes sense...


What happens when you virtualize the hard drive?  When the hard drive is virtualized you have delivered Citrix Provisioning services.  A read only disk image delivering a consistent OS platform from which to launch additional client (ICA) and application (the app that must run from single user kernel space) services.


Citrix Provisioning Server does offer a distributed-model approach to the hard drive, in that we carve out a hard disk, format it with NTFS, and then attach it to a template leveraged to stream the OS to a physical endpoint or hypervisor within the data center. The purpose of all this is to support the write cache for the read-only streamed image.


As the PvS server handles the read IO, the assigned NTFS disk hosting the write cache (on the SAN or local) is tasked with delivering the write IO for the session. Now we are into spindles and IO-per-session counts to support user sessions.


What about RAM?


There is an opportunity, however, to deliver this write IO function to RAM assigned to the OS. I worked a project recently streaming the OS via Citrix PvS to physical devices. We did not fret over SAN storage for the delivery of "PC" workloads. Of the 2 GB of RAM available on the physical host, 1.5 GB is set aside for the write cache function, and there you go. This same process would apply should these workloads instead be delivered to a hypervisor with 2 GB of RAM allocated to the machine template.
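
For a rough sense of how far that 1.5 GB RAM write cache stretches, a quick sketch; the per-session write rates are assumptions for illustration, not PvS guidance:

```python
# How long does a 1.5 GB RAM write cache last at a steady write rate?
# Write rates below are assumptions, not measured PvS figures.
cache_mb = 1.5 * 1024

for mb_per_hour in (50, 100, 200):
    hours = cache_mb / mb_per_hour
    print(f"{mb_per_hour:>3} MB/hour of writes -> cache fills in ~{hours:.0f} hours")
```

Plenty for a working day on an image that resets at reboot; a sustained write-heavy workload would exhaust it much sooner.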


Ron, you have more experience with this sort of thing than many of us. What is the impact of delivering the required WC to RAM as opposed to disk, in the context of your article?


How much disk can we cut out of the equation, local or SAN?


I see the VDI play as a cost modeling tool, a sum of the parts proposition.  It is not a matter of being right or wrong.  It is a matter of knowing what you pay for, why you pay for it, and right sizing the annual spend to drive flexibility and opportunity for the business.  



Ron


This is a complex and heavily nuanced area for discussion so forgive me if I don't express this as accurately as I could face-to-face.  


I really don't think that there is one right answer here, with too much depending not only on functional and nonfunctional requirements (including, for example, factors such as the nature of a typical desktop workload, as well as return-to-operation and recovery point objectives for business continuity or DR SLAs) and the data center environment, but also on organizational issues (chiefly centered around the ability of all of the different technology verticals to work together effectively).


Certainly many of the arguments in favor of local storage do not always apply, but at the same time many of the arguments against local storage are addressable today.


I'm sure if we look at this argument again in six months' time the balance will have shifted. From what I have seen from startups like Atlantis Computing, whose ILIO product is equally applicable to solving SAN and local disk IO issues, it is possible that local storage will become more accepted. Of course the same goes if SSD drops in price sufficiently (and the technology improves with respect to write performance). At the same time, if EMC can provide sufficient IOPS at sensible prices we could see the pendulum swinging towards the SAN.


Either way, we can say that with two competitive directions available to those of us implementing server-hosted virtual desktop solutions, we can expect intense competition to lead us towards more cost-effective solutions.


Regards


Simon



@ Ron -


"Assuming you are using 15K SAS you would have about 400-500 total IO available (obviously variable based on configuration). If maximum user density is your goal, you may not have enough IO to support more than 20 or 30 desktops on this blade..."


Honestly, do you really think this two-drive configuration can handle 20 or 30 desktops? I bet you won't be able to do 10 desktops without the users raising a fit about performance.



@ Ron -


DR/Replication of desktop images is just about the most overrated requirement in VDI.



@Simon


"I really don't think that there is one right answer here,"


Spot on.


Local storage might be right for many, but for others, shared storage and the agility that this gives you operationally will be key.


I think that dismissing the ability to provide agility and DR for simple desktops is very short-sighted. The fact that we didn't do this for physical desktops doesn't make it the right approach. It might be expensive right now, but I'll eat my own earwax if in the very near future this doesn't become the de facto standard for the common desktop.


I've said this before: it really doesn't matter where your virtual desktops or storage live, the problem we have right now is a lack of mature management and backup tools that sit across the stack and meet the demands of the majority of VDI deployments. We are ending up with a mish-mash of cobbled-together solutions which are bespoke and hard to support. This can only be a barrier to VDI adoption.


The vendor that will ultimately clean up in the VDI war will be the one who provides a comprehensive dashboard of products to make the end-to-end management of the solution a seamless experience.


While Citrix may have the best solution for providing access to virtual desktops, their competitors, on paper, have all of the pieces to build a much more mature end-to-end solution. Based on previous acquisitions and the ability to tightly integrate new technologies into core product lines, I don't see any of the VDI vendors making this happen any time soon.


@edgeseeker


I think you are spot on with the statement that local disks won't scale to any reasonable level, but will decent read/write cache help here? Maybe a chance for a storage vendor to come up with a VDI-specific RAID controller which makes the local disk scaling problem go away? It's more likely that SSD technology will mature to a point where it becomes a cost-effective solution for even the smallest deployments, and then we'll see some interesting scalability possibilities.


However, if we take Terminal Services or XenApp as an example, we used to see many circumstances where a server would scale well to 50-75 users, but dropped to 10-20 users per server if scalability testing and application integration testing was not adequately performed. In this case we used to 'silo', separating out the applications based upon their resource utilisation and scalability. Maybe we'll end up with the same approach for desktops... some 'silos' where only 5-10 desktops per server are allowed, but others where we can scale much higher?


Looking back at the really interesting couple of threads on this topic, I can't help but think that we need to sit back and take stock of where we really are right now. Most of the storage problems we are seeing today could well be irrelevant in 12 months' time. The management issues will, however, remain for some time to come.



@EdgeSeeker


"Honestly, do you really think this two-drive configuration can handle 20 or 30 desktops? I bet you won't be able to do 10 desktops without the users raising a fit about performance."


Yeah I do. To some extent. It depends on a number of factors but obviously the most important is workload within the VM.


Do I think you will always get 20-30 VMs in that config? No.


But let's look at what you could see. You COULD see 10, or maybe even 5 or 6. Or you could run into one of these guys that says they are getting 150 or 200 VMs in a blade config using local disk. Now, I don't subscribe to 100+ VMs on a blade (for sure using local storage), and in most cases when you get into that high-end range people are just measuring server metrics and not looking at perceived performance/application response time at all...


Anyway, the 20-30 number was an upper limit. Kind of like, "What is that, a 1.0 liter motor in that car? You'd be lucky to get past 60-70 MPH in that." More than likely they won't get to 50.


As for DR.


You'd be surprised. I think there are a number of ways to handle DR with regards to application and desktop access, but the requirements placed on IT are often different than what we wish or want them to be.



@ Ron


With all due respect, you must be smoking something to think that 20-30 VMs can be supported on a two-drive configuration. I bet you've never run a real-life stress test. Otherwise, you wouldn't be saying this nonsense.



@edgeseeker


come on man..


I am starting to believe that you essentially like to poke people to be "edgy" vs. having a real discussion on the topic, following the numbers wherever they lead.


20-30 VMs on a blade with local disk: can it be done? Yes. What the user is doing is the key. As I noted, it depends on the workload within the VM, the type of applications, their disk usage, how AV is done, supporting services configurations or removal, the OS version, how the OS is configured, what is disabled in the OS, and whether the admin has configured something like "Windows Fundamentals for Legacy PCs" for the users or they are using Windows 7 with every service running.


I know you like to be edgy and poke at people, but to poke requires that you back up any poke with rock-solid documented data that I'm smoking something and 20-30 VMs COULD NEVER be supported on a blade with local disk in real life. So what is your workload like? To take my "high" number for local disk and make it personal just to be "edgy" is simply simple and doesn't help.


As far as this


"I bet you've never run a real-life stress test"


I have...


So if you would like to stop using a nom de plume... and compare numbers, no problem. If you would like to run a load test, define the tools, workload, etc., and be realistic with different scenarios and types of workers, NO PROBLEM. Let's do it. I am fine with doing load testing and showing my results and methodology. And I DON'T CARE what the numbers come out as!


Or you can continue to insult just to try to be seen as 'winning' an "argument" that you created by taking what I used as a possible high number and stating that I am saying any production workload will work in that config with those numbers...



@Ron/edgeseeker


I have seen the results of some basic scalability tests which showed an average of 10 IOPS per desktop, running a pretty average workload of Outlook, Word and helper apps.


So, ignoring the boot storm issue, where all 30 users start up and log in to their desktops simultaneously, 20-30 desktops is just about feasible on two 15K RPM spindles (~300 IOPS), excluding any hypervisor and underlying OS activity, and ignoring any effect that read/write cache might have.


Hell, if you can optimise your virtual desktop to consume ~1 IOPS, scaling to 300 users in this environment might be possible, with enough vCPUs and RAM!!


As with all shared infrastructure (and it's a good job so many of us here come from a TS/XenApp background and have experienced this pain), integration and scalability testing is well understood to be a core prerequisite, so it is unlikely that anyone will say categorically that X infrastructure will support Y users without the due diligence of testing.


Unfortunately, many people still try to get X infrastructure to support KY users, thinking that the lubricative nature of the KY user will allow them to slip more in than they really should!!



@help4ctx / edgeseeker


The funny thing here is that I think edgeseeker and I agree that there is not enough IO in blade configurations with local disk for anyone worried about VM:core ratios or VM-to-host ratios. Of course you COULD go to local SSD to get the IO, but it will cost $$.


We are essentially violently arguing about the number of VMs it won't support...



@Ron


Did you have your coffee this morning? I suggest you try decaf.


Go ahead and run 30-40 VMs on a two-disk configuration if that suits you. And good luck with it. All of those OS optimization guidelines you cited have a marginal effect on workload reduction. You're still living in the late '90s/early 2000s, when everyone was desperately trying to eke out extra horsepower from their Terminal Servers by turning this or that service off, and tweaking this or that registry setting. Good luck!


And listen dude, I think I've made many more substantive arguments on this blog than you have.


Nothing personal, so don't be going military on me.



@help4ctx


I don't know many people who go to work every day and just run Outlook. If you're ESPN and it's Super Bowl season, you build your system with enough capacity to account for peak usage times. Anyways, those are my views, and they're certainly backed up by empirical data.


Edgie



@edgeseeker


I agree, but the load testing I saw was for a real-world 'basic desktop' scenario, and it seems to be working in the real world. In fact, in this example, we are actually seeing substantially less day-to-day activity than expected due to the random nature of application utilisation.


'Empirical' means via observation/experience/experiment, and everyone's observations, experiences and experiments will be different.


I'm sure ESPN don't sit there 24x7x365 with enough capacity ramped up to meet Super Bowl levels of activity. They dynamically adjust their capabilities based upon trend analysis and peak flow measurements.


I'm certainly not going to engineer enough resources into a simple desktop on the off chance that one of my users might suddenly want to run 24 apps that they've never used before, or suddenly change their usage patterns in a way that blows my scalability calculations to hell. I will, however, engineer enough overhead, dynamic or otherwise, to account for a moderate surge in activity.


As Ron says, we actually agree over this... any system will work within the parameters it was designed for; gathering those parameters is the key to success.



@help4ctx


I'm sure we can beat this topic to death, but I question if it's worth doing so. Most companies considering VDI as an option don't fall into the "simple desktop" category. Yes, observations and experiments are very subjective, but unless users power up their VMs and sit still waiting for the next email to pop into their inbox, I don't see how the two-disk configuration is a viable one.


I do respect your opinion, and yes, we certainly do agree on the broad picture.


Edgie



Hello everybody,


I am a first-time poster (long-time reader), but this topic is very interesting.


Back on topic: working with local drives and having up to 75 users running on ONE server (HP DL380 or HP DL385) is absolutely possible. Actually, we have a customer (in Germany) which is using the above-mentioned method. Of course, you will need more than 2 disks, but with 5x 2.5" SFF 15K disks it is no problem.


This customer has a brand-new (from March) VDI environment based upon HP DL380/385 and is running 60% WinXP and 40% Win7 with a total of up to 450 different applications and 2,500 users (50 servers now). Of course, not every user has all 450 applications installed, but some users have as many as 180 apps installed and use these applications inside their VDI environment (ESX 4.x with View).


And yes, it is very helpful to "tweak" the OS: simply deactivating the NTFS timestamp function, the layout.ini, the automatic defragmentation, etc. can improve the speed and lower the IOPS demand per user more than you think.


Fact is, we have benchmarked (for this customer only) more than 50,000 desktops (and have benchmarks from over 1,000,000 desktops over the past 2 years), always up to 60 desktops per server with different configurations (vRAM, vCPU, some OS tweaks).


If you do it right, you don't need a SAN. For a new project (another customer) we will benchmark some 90,000 desktops, because this customer needs 100% Win7 running with up to 80 desktops per server and a total of up to 60 applications per user.


But this solution is not very common yet :-)


PS: please excuse my bad english,



Go military? What's that old Air Force saying? You know you're over the target when you start to see the flak.



@Ron


But you keep missing time and time again. Maybe the target is you. You're certainly an easy one.



Why is it that the only time I talk about shutting down comments is when EdgeSeeker gets an anonymous ego?


Knock it off, unless you're going to tell the rest of the world who you are. I don't care if you "can't because of your high profile," "company policy says we're not allowed to post," or if your mom monitors your internet and won't let you on after bedtime.


The petty, personal, anonymous attacks need to stop.


@Gabe -


Ron started it, not me.



Gabe’s of course right. @edgeseeker’s comments are what are popularly called Internet Trolling.


en.wikipedia.org/.../Troll_(Internet)


As it’s summer/vacation time, there’s not always the time to thwart the trolling attack that, as in this case, @Ron was unjustly left alone to defend against. Maybe I’m the wrong guy to say it, but it makes me sad. I also feel robbed of good conversation.


As for the topic, I didn’t intend to comment on it.  I’m just too scattered right now.



I give up!!! I think Brian is wrong and Ron is right!!!



Disagreeing with Ron isn't the problem, man. Your points are valid, and you've obviously been around long enough to know what you're talking about, but the little jabs from behind the anonymity curtain are what has to stop.


I've had this same conversation with other pseudonymous posters in the past. Who knows, it's probably the same person over and over again :)


Which jabs? Can you point them out?



We're using Parallels Virtuozzo Containers 4.5 with Quest vWorkspace 7.1 running on an Intel Modular Server (IMS) for the deployment of 180 virtual desktops (90 concurrent) for our small medical nonprofit company. The IMS chassis consists of (up to) 6 dual-CPU Intel Xeon 5500 compute modules (blades) with 24GB RAM and an internal SAN (not iSCSI, but a basic SAN) of 14 x 300GB 10K 2.5" SAS drives. With this solution, 30 virtual desktops per blade is easy to attain (we've tested up to 60 with more RAM and no dire impacts, but wanted to maintain a sane consolidation ratio) with users running Lotus Notes 8.5.1, iSeries Access, MS Office, IE8, and a few other apps.


Since we have some blades running dedicated (non-virtualized) OS/App loads, only 7 of the total 14 drives are assigned to a pool and the 3 blades we run for VDI purposes.  That's pretty close to 2 drives per blade with RAID overhead.


The combination of containers and memory-cache techniques 'could' have less impact on I/O than the current VMware/Citrix VDI solutions, but I have no concrete proof of this.  Also, our containers are (nearly) always running so I don't have any experience with a 'boot' storm.


...just to give you the perspective of a small business already using one flavor of VDI.


Rodd Ahrenstorff



@Rodd


The use of Containers is exactly why you're able to achieve this amazing consolidation ratio. That's why Containers are a more natural fit for VDI.



As a follow-on, what is the consensus regarding the ability of a RAID array controller to circumvent this IO bottleneck? I took the following dialog from a colleague, and I am interested in feedback on this issue. The context here is XenDesktop leveraging PvS.


"Forget about IOPS and think about throughput of data in MB.  Let’s assume your customer’s pooled desktop at peak logon generates 10 MB of write operations…if a logon takes one minute and during your peak logon you have 500 workstations logging on, then you will generate 5 GB of write data over the course of a minute.  If the write cache on the controller is dedicated to the LUNs hosting XenDesktop and it has 8 GB of write cache, then you will never overrun the cache and the write operations generated by the desktops will be of little concern."



(Assuming I have a server with enough spindles) I can knock out all of your points using Provisioning Server while still utilizing local storage, except the power reallocation, by taking one of the spare servers and putting active desktops on it, firing them up when needed. All the other points refer to doing desktop virtualization with a server virtualization mindset.


So I guess I agree with Brian.



@ Rick


I guess if you have a logon that generates 10MB of write... you could have a bandwidth issue.


Then again, we are assuming 500 users over 1 minute (30,000 users an hour, a pretty big environment), and then the question becomes where the desktops are located. If they are on local storage, then they are spread across numerous controllers. For your 500, they are probably on at least 50 or 75 different controllers (servers), but in reality probably closer to 100 controllers (servers). Then your 5 GB isn't so bad.


From a SAN perspective it really depends what it is front-ended with. EMC has some new stuff coming out that I was "told" is killer for this type of application (but they always say that crap). Often when people think SAN they think about the disk type and maybe the bandwidth in, but the caching mechanisms going in today really change that for "spikes" like this.


Anyway, for 500 users/min against a single SAN I would have to defer to a SAN guy that knows their gear and what it can handle; I don't know those numbers.



Good thing this comment thread is going on. May I break in with a question?


How’s the in-memory de-dup, rebasing, cross…bind… etc. thing going? I guess I'm asking how far we are able to optimize I/O at the HW level (controller), you know, in a standardized way, without any of the esoterics?



Ron,


When you say 10MB, do you mean 10 MB per second or 10 MB of data?



@Rick, presuming your write cache is in RAM, you could have a solid point, given that the IOPS consideration really only applies when we are dealing with spindles. If you're serving up out of RAM, the spindles shouldn't matter and MB is all you care about. Note though that 5 GB over one minute is about 0.66 Gbps, meaning you're at a little over 60% of a gigabit network link. So while you're fine at 10 MB per user logon, you're probably going to start choking the network at 15. Of course all of this assumes that the network has no retransmissions or bottlenecks, which is like assuming a frictionless surface anyway.
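
A quick check of that link math, assuming decimal units and a single uncontended 1 Gbps path with no protocol overhead:

```python
# Gigabit link utilisation for a 500-user logon burst at various MB per logon.
def link_utilisation(users, mb_per_logon, window_s=60, link_gbps=1.0):
    gbps = users * mb_per_logon * 8 / 1000 / window_s   # MB -> Gb, decimal units
    return gbps, gbps / link_gbps

for mb in (10, 15):
    gbps, util = link_utilisation(500, mb)
    print(f"{mb} MB/logon: {gbps:.2f} Gbps -> {util:.0%} of a GigE link")
```

Which lines up with the "fine at 10 MB, choking at 15" read above.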


Incidentally @Ron, this is why I like the idea of local disk. While I agree with most of your points, I do hear some interesting ideas out there about utilizing local RAM on the box directly, which I'm still thinking through, so I won't talk about it in detail just yet ;)



Talk about analysis paralysis and not understanding the practical reality at scale. Actually @edgeseeker is right in what he is saying. He is pushing back on a ridiculous argument being put forward that sounds like vendor speak as opposed to real-world quality user experience. What everybody fails to understand is that the COST of SAN-based storage goes through the roof to get the performance you need. Even if you use NetApp de-dupe, the cost of the cache you need to handle a desktop workload is very expensive unless you have enough buying power to get a massive discount. The storage vendors do not produce cheap VDI-class storage. That is a huge gap that we MUST wake up to. I'll predict that over time the hypervisors will do more things natively to optimize for desktop workloads. It just makes a ton of sense. The storage vendors have no incentive to do anything at the current volume, and hence why new solutions are emerging.


The other really stupid argument is backup of images. Which idiot wants to back up system32 files? Why the F would you do that? All your state can surely be stored on SAN/NAS and the desktop spun up anywhere you feel like it. In fact, just use Windows file servers for this; why make storage so complicated? Google doesn't... The VRC reports show the write/read ratio of a desktop workload to be write-heavy. So why keep pumping that crap (page files) over the network? Why not move the storage closer to the desktop workload, be smarter, and avoid arcane DR arguments? Also, don't forget that when you add in SAN/NAS scenarios and can't afford to spend the money on cache to deal with hotspots, your DR strategy is weak, containing single points of failure, much like an inline connection broker.


With respect to the number of users on a disk, it depends. If you want to provide a crappy, unpredictable user experience, then by all means gang-bang the users and make it cheap. RDS/XA are great for that, and if it were ever supported by MS, containers like Parallels would also be great. Nobody shares my PC with me today, so I find it amazing that people think I will want to share resources and my PC tomorrow. "F you, IT admins" is what users will tell you.


If anybody has actually tried to implement VDI at scale out of the box with the vendor solutions, you already know that the only way to do it at scale is 1-to-1 images. Use SAN for that model and let me know how it works out from a CapEx and an OpEx point of view. Sure, the Unidesks and Atlantis-type solutions of the world are trying to produce solutions to address this, but they are nowhere near enterprise-class solutions. In the case of Unidesk, they are an SMB-focused play. It will take them a long time to get to enterprise class, if they can ever scale. I want to roll out Windows 7 now, so Unidesk, Atlantis, etc. are for the most part irrelevant at scale, even in a local disk use case, due to their maturity. Of course Ron and party will fire back with their 3 customers; that's OK and expected, and I hope they continue to grow as they are offering a better tomorrow post-Windows 7 IMO.


So again, I agree with local disk as a practical reality. @edgeseeker is also correct in pushing back on BS logic that confuses the F out of people and produces a really poor user experience.



@appdetective... while I disagree with your confrontational and hostile style, I have to agree that eventually the hypervisor companies are going to wake up and smell the coffee... it would make things a hell of a lot simpler if you could just configure your hypervisor to use a VDI profile that allows more efficient use of the local disk... perhaps even a large local RAID cache designed for random access...



Instead of debating this, it would be smart to do the maths.


With VMware View you can do it on local disks. As long as there is space, the images will perform. With View Composer, aka linked clones, you cannot do it, as there are too many write IOPS, which will bring your performance down a lot.


Just take a server and do the maths instead of throwing a debate into this area.



@maximus I don't like to play Switzerland for the sake of it. If you want to read bad neutral opinions that never really get to the heart of the matter, there are plenty of so-called "analysts" out there who have a vested interest in only pushing so far with a vendor and only uncovering so much, to avoid getting shut out by the vendors themselves. It's a game, and if you want to read complete BS from bought-out analysts, there is always the Tolly Group...


The problem IMO is too much confusion debating nothing when the obvious is in front of you, and people just sit there in endless analysis and never move forward. Sure, do due diligence and get smart on a space, but endless debate is a disease in IT organizations which results in snail-pace progress and poor ROI for every dollar the business invests in technology. As a result, there are forward-thinking IT people who get $h1t done and move a business forward, and then there are those (the average IT idiot) who follow years later and wonder why their firms are #15 in their respective industry.



@controlvirtual Exactly!!! Just goes to show who is actually doing it in the real world vs. theory from vendors and neutral opinions for the sake of it. To get any shared storage to work right, the costs become huge very quickly. The tech to make it work right is not there at desktop price points today. HUGE startup opportunity.



Well HP and Citrix presented a great solution to reduce the cost of storage to $750 per user.  Ouch!


www.infostor.com/.../storage-highlights.html



@watson perhaps HP and Citrix should have looked at desktop pricing first :-)


www.tigerdirect.com/.../category_tlc.asp



@appdetective.


Try Greenbytes.


We do.


A 50K box that we run up to 10,000 Windows 7 images on. That's 5 dollars per image.


Just get yourself the correct technology.



@controvirtual. I looked at them as well as Whiptail. Interesting, and good to see a new idea; however I am not yet convinced of their enterprise quality and management tools, and more importantly I am trying to avoid lock-in to proprietary hardware, just like I avoid thin clients like the plague. I am more interested in commodity local disk with perhaps a distributed file system on top that is more common. None of the major vendors do this today. I used to have high hopes for LeftHand Networks; then HP showed up and screwed it up.



@Controvirtual  I tried Greenbytes when we were first looking at Atlantis.  It blew up every time I got more than 100 virtual machines loaded and ran it for 2 weeks?



@Watson,


I don't know what went wrong then. I love the Greenbytes 4000 very much. It runs thousands of images simultaneously without latency.


Maybe you had a wrong configuration or a beta box?


The Greenbytes storage can definitely run much more.



@edgeseeker


If you run perfmon on an XP workstation, the required IOPS are normally between 10-30 (daily average). NetApp tests using a figure of about 15-20 for a typical workstation. So I reckon 20-30 workstations is about right in a 2-disk configuration.


Choke points are reboots, virus scans, and logons: same as for a SAN without some fast RAM-based cache.
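
Turning those perfmon figures into spindle counts, a rough sketch; the write-heavy mix, RAID penalty, and per-spindle IOPS are assumptions for illustration:

```python
# Spindles needed once a RAID write penalty is applied to the front-end IOPS.
# Read/write split, RAID 1 penalty, and ~180 IOPS per 15K spindle are assumptions.
import math

def spindles_needed(desktops, iops_each, write_ratio=0.8,
                    raid_penalty=2, spindle_iops=180):
    frontend = desktops * iops_each
    backend = frontend * (1 - write_ratio) + frontend * write_ratio * raid_penalty
    return frontend, math.ceil(backend / spindle_iops)

for desktops, iops_each in ((20, 10), (30, 10), (30, 20)):
    fe, n = spindles_needed(desktops, iops_each)
    print(f"{desktops} desktops @ {iops_each} IOPS each: "
          f"{fe} front-end IOPS -> ~{n} spindles (RAID 1)")
```

Under these assumptions the two-disk case holds at the light end of the range but gets tight quickly once the write penalty is counted, which is roughly the disagreement running through this thread.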



@ukbrown - good luck with that one!



Guys,


Pardon my accent, but yes, you can:


www.citrix.com/tv


PS. Ron and Brian -- thank you, this is an excellent subject even for 2012 ;)


