Understanding how storage design has a big impact on your VDI (updated September 2011)

Note: This article has been updated a few times since it was originally published in December 2009, most recently in December 2010 and September 2011.

Virtual Desktop Infrastructure, or VDI, is hot. It’s cool, secure, centrally managed, flexible--it’s an IT manager’s dream.

VDI comes in two flavors: Server-Hosted VDI (a centralized, single-user remote vDesktop solution) and Client-Side VDI (a local, single-user vDesktop solution). Full coverage of Desktop Virtualization can be found here. The advantages of a VDI infrastructure are that virtual desktops are hardware independent and can be accessed from any common OS. It's also much easier to deploy virtual desktops and to give users the freedom they require. And because of the single-user OS, application compatibility is much less of an issue than it is with terminal servers.

However, when implementing a VDI infrastructure, certain points need to be addressed. First of all, the TCO/ROI calculation may not be as rosy as some people suggest. Secondly, the performance impact on applications--specifically multimedia and 3D applications--needs to be investigated. And finally, you have to deal with the licensing aspects, as these can be a very significant factor in a VDI infrastructure. While centralized desktop computing provides important advantages, all resources come together in the datacenter. That means that the CPU, memory, networking and disk resources all need to be facilitated from a single point--the virtual infrastructure.

The advantage of a central infrastructure is that when sized properly, it's more flexible in terms of resource consumption than decentralized computing. It's also more capable of handling a certain amount of peak loads, as these only occur once in a while on a small number of systems in an average datacenter. But what if the peak loads are sustained and the averages are so high that the cost of facilitating them is disproportionate to that of decentralized computing?
As it turns out, there is a hidden danger to VDI. There’s a killer named “IOPS”.

Intended Audience

This article is intended to warn people not to oversimplify storage design, especially for VDI projects. It does not try to be a fully accurate storage sizing guide. Some information is aggregated into a simple number or rule of thumb to allow people who are not in-depth storage experts to get the general message and still know what's going on.
When I talk about IOPS per disk, the table simply shows the number of IOPS that a disk will be able to handle. It is, however, more complicated than that. The SAN setup, disk latency and seek time all influence the net IOPS a disk can handle. But for the sake of argument, an aggregated number is used that is extracted from practice and several vendor storage sizing calculators. The numbers used here are correct for general setups, but some storage vendors will argue that they can handle higher IOPS per disk, which they probably can. But now you'll have a good starting point for the discussion.

The Client I/O

A Windows client that is running on local hardware has a local disk. This is usually an IDE or SATA disk rotating at 5,400 or 7,200 RPM. At that rate it can deliver about 40 to 50 IOPS.
When a Windows client starts, it loads both the basic OS and a number of services. Many of those services provide functionality that makes life easier for the user but that may or may not be needed on a physical system. When the client is a virtual one, a lot of those services are unnecessary or even counter-productive. Indexing services, hardware services (wireless LAN), prefetching and other services all produce many IOPS in trying to optimize loading speed, which works well on physical clients but loses all effectiveness on virtual clients.

The reason for this is that Windows tries to optimize disk IO by making reads and writes contiguous. That means that reading from a disk in a constant stream where the disk’s heads move about as little as possible is faster than when the head needs to move all over the disk to read blocks for random reads. In other words, random IOs are much slower than contiguous ones.

The amount of IOPS a client produces is greatly dependent on the services it’s running, but even more so on the applications a user is running. Even the way applications are provisioned to the user impacts the IOPS they require. For light users the amount of IOPS for a running system amounts to about three to four. Medium users show around eight to ten IOPS and heavy users use an average of 14 to 20 IOPS. The exact number of IOPS for an environment can be determined by running an assessment in your current environment with tools like Liquidware Labs or other capacity planning tools.

Now for the most surprising fact: those IOPS are mostly WRITES. A great many researchers have tested the IOPS in labs and in controlled environments using fixed test scripts. There, the read/write ratio turned out to be as high as 90/10 as a percentage. But in reality users run dozens or even hundreds of different applications, whether virtualized or installed. In practice, the R/W ratio turns out to be 50/50 percent at best! In most cases the ratio is more like 30/70, often even 20/80 and sometimes as bad as 10/90 percent.
But why is that important? Most vendors don’t even mention IOPS or differentiate between reads and writes in their reference designs.  

Boot and Logon Storms

When implementing a VDI infrastructure, it should be optimized for IOPS, as that is one of the main bottlenecks. Powering on a VM at the moment a user actually wants to use it would put unnecessary strain on the infrastructure. The user would have to wait relatively long before he can log on, and the IOPS that booting a machine creates can just as well be generated before working hours start. For those two reasons, it makes sense to power on the number of machines you will need on a given day before users actually start using them. On the other hand, 90 to 95 percent of the IOPS during boot are reads. If you have a large enough cache, or a cache dedicated to capturing OS reads, the strain on the storage would be minimal and hosts would suffer more from the impact on memory and CPU than on storage. All in all, when designed correctly, the impact of boot storms on a VDI infrastructure can be minimized or even avoided completely during production hours.

Logon storms are something else though. The impact on IOPS when a user logs on depends heavily on the way profiles and policies are set up, and also on how applications are delivered. The way to deal with this is to optimize not only the VM image but also the user environment management. When done correctly, the impact of logon storms can be reduced to a manageable factor. In practice, the read/write ratio during logon turns out to be 80/20 percent to even 95/5 percent. While the number of write IOPS during logon can be more than double that of average production use, by far most of the IOPS are reads, which are much more manageable than writes. Nevertheless, if all users start working at the same time, you need a substantially larger number of IOPS (and storage) to accommodate them than when the logons are spread over several hours. The only way to properly design for this is to know when users log in. For example, if a logon takes 30 seconds and the maximum share of simultaneous logons at a peak moment is 10 percent, with double the write IOPS and ten times the read IOPS, the storage would need to facilitate an additional 36 percent of IOPS compared to regular production use. But if 3 percent of the users log in simultaneously, the storage only needs to facilitate an additional 11 percent of IOPS.
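
To make the arithmetic behind these percentages explicit, here is a minimal sketch. It assumes a steady-state load of 10 IOPS per user at a 20/80 read/write ratio (the averages used later in this article); during a logon the writes roughly double and the reads grow tenfold, and only the users logging on at that moment add this load on top of normal production use.

```python
# Hedged sketch of the logon-storm estimate above. The per-user figures
# are assumptions taken from the averages used elsewhere in this article.
def logon_storm_overhead(steady_iops=10.0, read_share=0.2,
                         simultaneous_logons=0.10,
                         write_factor=2.0, read_factor=10.0):
    """Extra IOPS during a logon peak, as a fraction of steady-state load."""
    reads = steady_iops * read_share             # e.g. 2 read IOPS per user
    writes = steady_iops * (1 - read_share)      # e.g. 8 write IOPS per user
    logon_iops = writes * write_factor + reads * read_factor   # 16 + 20 = 36
    # Only the users logging on at this moment add the extra load.
    return simultaneous_logons * logon_iops / steady_iops

print(f"{logon_storm_overhead(simultaneous_logons=0.10):.0%}")  # ~36%
print(f"{logon_storm_overhead(simultaneous_logons=0.03):.0%}")  # ~11%
```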

Besides logging in, there's a third factor to take into account: the first run of applications. In the first couple of minutes after a user logs on, applications start for the first time. As it turns out, the read/write ratio during this stage is about 50/50 percent. When the first runs are completed, the reads drop by a factor of about five on average, but the number of writes stays the same! That means that a few minutes after a user logs in, the R/W ratio goes down from 50/50 percent to 20/80 percent, which is what people see in almost all production environments. It's important to note that the reads go down but the writes don't. And it's the writes that cause trouble.

The Storage I/O

When all IOs from a client need to come from shared storage (attached directly to the virtualization host or through a SAN) and many clients read and write simultaneously, the IOs are 100% random (at least from the storage point of view).

SCSI versus ATA

There are two main forms of disks: SCSI and ATA. Both have a parallel version (regular SCSI vs IDE or PATA) and a serial version (SAS vs SATA).
The main differences between the architecture of SCSI and ATA disks are rotation speed and protocol. To start with the protocol: the SCSI protocol is highly efficient with multiple devices on the same bus, and it also supports command queuing. ATA devices have to wait on each other, making them slower when grouped together.
The higher rotation speed means that when the head needs to move to a different location, it does not need to wait as long for the data to pass beneath it. So a SCSI disk can produce more IOPS than an ATA disk. The faster a disk rotates, the less time the head needs to wait before data passes beneath it and the sooner it can move to the next position, hence the more IOs it can handle per second.
To give some idea of the numbers involved: a 15,000 RPM disk can handle about 180 random IOPS, a 5,400 RPM disk about 50. These are gross figures, and the number of IOPS available to the hosts depends very much on the way the disks are configured together and on the overhead of the storage system. In an average SAN, the net IOPS from 15,000 RPM disks is 30 percent less than the gross IOPS.

RAID Levels

There are several ways to get disks to work together as a group. Some of these are designed for speed, others for redundancy or anything in between.

RAID5

The way a traditional RAID5 system works is that it writes the data across a set of hard disks, calculates the parity for that data and writes that parity to one of the hard disks in the set. This parity block is written to a different disk in the set for every further block of data.

To write to a RAID5 system, the affected blocks are first read, the changed data is merged in, the new parity is calculated and the blocks are then written back. On systems with large RAID5 sets this means a write IO is many times slower than a read IO. Some storage systems, like HP's EVA, have a fixed set of four blocks for which parity is calculated, no matter how many disks are in a group. This increases overhead on a RAID5 group, because every set of four disks needs a fifth one, but it does speed things up. Also, on most storage systems, write operations are written to cache. This means that writes are acknowledged back to the writing system with very low latency. The actual write-to-disk process takes place in the background. This makes incidental write operations very speedy, but large write streams will still need to go directly to disk.
With 15,000 RPM disks the number of read IOPS is somewhere in the 150-160 range, while write IOPS are closer to the 35-45 range.

RAID1

A RAID1 set is also called a mirror. Every block of data is written to two disks and read from either one. For a write IO to occur, the data doesn’t need to be read first because it does not change part of a parity set of blocks but rather just writes that single block of data. This means that writing to a RAID1 is much faster than to a RAID5.
With RAID1 the data is read from one of the two disks in a set and written to both. So for 15,000 RPM disks, the figures for a RAID1 set are still 150-160 IOPS for reads, but 70-80 for writes. 

RAID0

RAID0 is also called striping. Blocks of data are written in sequence to all disks in a RAID0 set, but only to one at a time. So if one disk in the set fails, all data from the set of disks is lost. But because there is no overhead in a RAID0 set, it is the fastest way of reading and writing data. In practice this can only be used for volatile data like temporary files and temporary caches, and perhaps for pagefiles.
If used, the number of IOPS a RAID0 set can provide with 15,000 RPM disks is 150-160 for reads and 140-150 for writes.
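
The read and write figures quoted for these RAID levels follow roughly from the classic write-penalty rule of thumb: one back-end IO per write for RAID0, two for RAID1 (both mirror members) and four for RAID5 (read data, read parity, write data, write parity). The sketch below reproduces the numbers from roughly 160 IOPS per 15,000 RPM disk; both the penalties and the gross per-disk figure are assumptions and will differ per array.

```python
# Rough per-disk IOPS per RAID level using the classic write-penalty rule
# of thumb. The 160 IOPS gross figure and the penalties are assumptions;
# real arrays (write cache, WAFL-style serialization, etc.) will differ.
WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID5": 4}

def per_disk_iops(raid_level, gross_iops=160):
    reads = gross_iops                               # reads carry no penalty
    writes = gross_iops / WRITE_PENALTY[raid_level]  # back-end IOs per front-end write
    return reads, writes

for level in ("RAID0", "RAID1", "RAID5"):
    r, w = per_disk_iops(level)
    print(f"{level}: ~{r:.0f} read IOPS, ~{w:.0f} write IOPS per 15k disk")
# Prints ~160/160, ~160/80 and ~160/40, close to the 150-160, 140-150,
# 70-80 and 35-45 ranges quoted above.
```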

RAID DP

Despite several people from NetApp reading this whitepaper before it was released, the RAID DP information was only partly correct. In http://media.netapp.com/documents/tr-3298.pdf they state that RAID DP is comparable to RAID 4 for performance. But there's more to it than that. The typical read/write ratio of VDI highlights another feature of NetApp arrays. The way their WAFL filesystem works allows random IOs to be consolidated and effectively written to disk sequentially. Therefore, writing to a WAFL with 100% random blocks is faster than reading from it randomly. That means that the write penalty for RAID DP is not comparable to RAID 1 at all. That's why the RAID DP chapter has been replaced by a section about serializing random IO.

Disk Alignment

Because we want to minimize the number of IOPS from the storage, we want every IO to be as efficient as possible. Disk alignment is an important factor in this.
Not every byte is read separately from the storage. From a storage perspective, the data is split into blocks of 32 kB, 64 kB or 128 kB, depending on the vendor. If the filesystem on top of those blocks is not perfectly aligned with them, an IO from the filesystem will result in two IOs from the storage system. If that filesystem is on a virtual disk and that virtual disk sits on a filesystem that is misaligned, a single IO from the client can result in three IOs from the storage. This means it is of the utmost importance that all levels of filesystems are aligned to the storage.

Unfortunately, the Windows XP and 2003 setup processes misalign their partitions by default: they create a signature on the first part of the disk and start the actual partition at the last few sectors of the first block, misaligning the partition completely. To set this up correctly, create a partition manually using 'diskpart' or Linux 'fdisk' and put the start of the partition at sector 128. A sector is 512 bytes, which puts the first sector of the partition precisely at the 64 kB marker. Once the partition is aligned, every IO from the partition results in a single IO from the storage.

The same goes for a VMFS. When created through the ESX Service Console it will, by default, be misaligned. Use fdisk and expert mode to align the VMFS partition, or create the partition through VMware vCenter, which performs the alignment automatically. Windows Vista and later versions try to align the disk properly. By default they align their partition at 1 MB, but it's always a good idea to check that this actually is the case.

The gain from aligning disks can be 3-5 percent for large files or streams up to 30-50 percent for small (random) IOs. And because a VDI IO is an almost completely random IO, the performance gain from aligning the disks properly can be substantial.

Prefetching and Defragging

The NTFS filesystem on a Windows client uses 4 kB blocks by default. Luckily, Windows tries to optimize disk requests to some extent by grouping block requests together if, from a file perspective, they are contiguous. That means it is important that files are defragmented. However, when a client is running applications, it turns out that files are for the most part written. If defragging is enabled during production hours the gain is practically zero, while the process itself adds to the IOs. Therefore it is best practice to disable defragging completely once the master image is complete.
The same goes for prefetching. Prefetching is a process that puts the most frequently read files in a special cache directory in Windows, so that reading these files becomes one contiguous stream, minimizing IO and maximizing throughput. But because the IOs from a large number of clients are totally random from a storage point of view, prefetching files no longer matters and the prefetching process only adds to the IOs once again. So prefetching should also be completely disabled.
If the storage is de-duplicating the disks, moving files around inside those disks will greatly disturb the effectiveness of de-duplication. That is yet another reason to disable features like prefetching and defragging. (A quick way to check whether a partition is aligned is by typing "wmic partition get BlockSize, StartingOffset, Name, Index" in a command shell. If the StartingOffset isn't a multiple of 65536 (64 kB) or 1048576 (1 MB), the partition is unaligned.)
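
To make the alignment check a little easier to repeat, here is a minimal sketch. It simply takes the StartingOffset values reported by the wmic command above and tests them against the 64 kB boundary (or the 1 MB boundary that Vista and later use by default); the example offsets are illustrative only.

```python
# Feed in the StartingOffset values from
# "wmic partition get BlockSize, StartingOffset, Name, Index"
# and report whether each partition is aligned to the given boundary.
def check_alignment(starting_offsets, boundary=64 * 1024):
    for offset in starting_offsets:
        status = "aligned" if offset % boundary == 0 else "MISALIGNED"
        print(f"offset {offset:>10}: {status}")

# 32256 = 63 sectors x 512 bytes, the classic XP/2003 default;
# 65536 = 64 kB; 1048576 = 1 MB, the Vista/Windows 7 default.
check_alignment([32256, 65536, 1048576])
```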

The Maths

So much for the theory. How do we use this knowledge to properly size the infrastructure?

Processor

On average, a VDI client can share a processor core with six to nine others. Of course, everything depends on what applications are being used, but let's take an average of seven VMs per core. With a dual-socket, quad-core CPU system that means we can house 7 x 2 x 4 = 56 clients. However, the Intel Nehalem architecture is very efficient with hyper-threading and allows 50-80 percent more clients. That means that when it comes to the CPU, we can host 150% x 56 = 84 VMs.
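
Written out as a small helper (the per-core density and the hyper-threading bonus are the assumptions stated above, not measurements):

```python
# VMs per host from the CPU side. Seven VMs per core and a 50 percent
# hyper-threading bonus are the assumptions used in this article.
def vms_per_host(sockets=2, cores_per_socket=4, vms_per_core=7, ht_bonus=0.5):
    return int(sockets * cores_per_socket * vms_per_core * (1 + ht_bonus))

print(vms_per_host())   # 2 x 4 x 7 x 1.5 = 84
```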

Memory

The amount of memory the host must have depends primarily on the applications the users require and the OS they use. On average, a Windows XP client needs 400-500 MB of RAM for basic operations and a standard set of applications. Add some caching and the memory usage should stay below 700 MB.
The Windows OS starts paging when 75 percent of its memory is allocated. It will always try to keep at least 25 percent free. But paging in virtual environments is a performance killer. So instead of giving it the physical-world recommendation of 1.5 to 2 times the amount of memory in swap space, we limit the pagefile to a fixed size of 200 to perhaps 500 MB. If that is not enough, add more RAM to the client rather than extending the pagefile.
This also means we aim for at least 25 percent free RAM with most applications running. Additionally, about half of the used memory contains the same blocks in all clients (Windows DLLs, identical applications, etc.). This is lower on Windows 7 clients because of ASLR (Address Space Layout Randomization). The amount of memory that can be shared between clients thus comes to 25% (empty space) + 75% / 2 = 62.5%.

So when running Windows XP on ESX servers, if 60 percent of memory per client is actually being used, 50 percent of which is shared between clients, we need 1 GB x 60% x 50% = 300 MB per client. Every VM also needs about 5 percent of its allocated memory as overhead from the host, so you need an additional 50 MB (5 percent of 1 GB) per client.
We have seen from the CPU calculation that we can host 84 clients, so a host would need 4 GB (for the host itself) + 350 MB x 84 = at least 34 GB of RAM.
However, if 75 percent of memory is used and only a third of that can be shared, every client needs 1 GB x 75% x 67% = 512 MB of dedicated host memory. So for 84 clients the host needs 4 GB + (512 + 50) MB x 84 = 52 GB of RAM.

Of course, if you run on a host that doesn't support transparent page sharing, the amount of memory needed is 4 GB + 84 x (1024 + 50) MB = 96 GB of RAM.
For Windows 7 clients the numbers are (2 GB + 100 MB) x 60% x 50% = 660 MB per client on average, 4 GB + 660 MB x 84 = 60 GB of minimum host memory, and 4 GB + 84 x (2 GB + 100 MB) = 188 GB per host if the host doesn't support memory over-commitment.
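
The memory arithmetic above condenses into a small helper. All percentages and per-client figures are the assumptions made in this section; substitute your own measurements where you have them.

```python
# Host memory estimate following the reasoning above. The usage and
# sharing percentages are assumptions from this section, not measurements.
def host_memory_gb(clients, allocated_mb, used_pct, shared_pct,
                   overhead_pct=0.05, host_reserve_gb=4):
    per_client = allocated_mb * used_pct * (1 - shared_pct)  # deduplicated working set
    per_client += allocated_mb * overhead_pct                # ~5% hypervisor overhead
    return host_reserve_gb + clients * per_client / 1024

# Windows XP, 84 clients, 60% of 1 GB used, half of that shared:
print(round(host_memory_gb(84, 1024, 0.60, 0.50)))   # ~33, roughly the "at least 34 GB" above
# Worst case: 75% used, only a third shared:
print(round(host_memory_gb(84, 1024, 0.75, 0.33)))   # ~50, in line with the ~52 GB above
```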

Disks

The number of IOPS a client produces is very much dependent on the users and their applications, but on average it amounts to eight to ten per client, at a read/write ratio of between 40/60 percent and 20/80 percent. For XP the average is closer to eight, for Windows 7 it is closer to ten, assuming the base image is optimized to do as little as possible by itself and all IOs come from the applications, not the OS.
When placing 84 clients on a host, the IOPS required would amount to 840, of which roughly 670 are writes and 170 are reads. To save on disk space, the disks are normally put in a RAID5 setup. But to deliver those numbers, we need 670 / 45 + 170 / 160 (see the 'RAID5' section earlier in this document) = 16 disks per host. Whether this is put in a central storage system or in locally attached storage, we still require 16 disks for 84 VMs. If we used RAID1, the number changes to 670 / 80 + 170 / 160 = 10 disks. That means, however, that using 144 GB disks, the net amount of storage drops from 16 x 144 GB x 0.8 (RAID5 overhead) = 1840 GB to 10 x 144 GB x 0.5 (RAID1 overhead) = 720 GB, about 2.5 times less net storage.
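
The spindle math generalizes easily; the sketch below uses the per-disk figures quoted in the RAID sections (about 160 read IOPS per 15,000 RPM disk, 45 write IOPS in RAID5 and 80 in RAID1), which are themselves rules of thumb.

```python
import math

# Spindles needed for a given IOPS load and read/write ratio, using the
# per-disk rules of thumb from the RAID sections above.
def disks_needed(total_iops, write_ratio, write_iops_per_disk, read_iops_per_disk=160):
    writes = total_iops * write_ratio
    reads = total_iops * (1 - write_ratio)
    return math.ceil(writes / write_iops_per_disk + reads / read_iops_per_disk)

print(disks_needed(840, 0.8, 45))   # RAID5, 84 clients: ~16 disks
print(disks_needed(840, 0.8, 80))   # RAID1, 84 clients: ~10 disks
print(disks_needed(650, 0.8, 45))   # RAID5, 65 clients: ~13 disks
```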

Practical Numbers

All these numbers assume that clients are well behaved and that most of the peaks are absorbed in the large averages. But in reality you may want to add some margin. To be on the safe side, a more commonly used number of clients per host is 65 (about three quarters of 84). That means that the minimum amount of memory for the average XP client solution would be 65 x 350 MB + 4 GB = 27 GB, or for Windows 7: 65 x 660 MB + 4 GB = 47 GB.
The number of IOPS needed in this case is 10 IOPS x 65 clients = 650 IOPS, where 80 percent (520) are writes and 20 percent (130) are reads. With RAID5 that means we need (520 / 45) + (130 / 160) = 13 disks for every 65 clients. Should you require 1,000 VDI desktops, you will need (1000 / 65) x 13 = 200 disks. On RAID1, that number decreases to 112, which is still quite substantial considering that each disk then serves only nine clients.
So, to be sure of the numbers you need to use, insist on running a pilot with the new environment in which a set of users actually uses it in production. You can only accurately size your infrastructure once you see the numbers for those users, the applications they use and the IOPS they produce. Too much depends on correct sizing, especially in the storage part of the equation!

Summary

To summarize the sizing parameters from the sections above: light users produce about three to four IOPS, medium users eight to ten and heavy users 14 to 20, at a read/write ratio of 50/50 percent at best and more commonly 20/80 to 30/70 percent; a host can handle roughly seven VMs per core, plus 50-80 percent more with hyper-threading; with transparent page sharing, a Windows XP client needs roughly 350 MB of host memory and a Windows 7 client roughly 660 MB.

The following table summarizes the per-disk IOPS for the different RAID levels with 15,000 RPM disks:

RAID level    Read IOPS per disk    Write IOPS per disk
RAID0         150-160               140-150
RAID1         150-160               70-80
RAID5         150-160               35-45

Alternatives

Cache

There are many solutions out there that claim to speed up the storage by multiple factors. NetApp has its Flash Cache (PAM), Atlantis Computing has vScaler, and that's just the tip of the iceberg. Vendors such as Citrix with its Provisioning Server and VMware with its View Composer and storage tiering, but also storage vendors with their cloning technology, aid storage by single-instancing the main OS disk, making it much easier to cache.
But in essence these are all read caches. Caching the 30 percent of IOPS that are reads, even with an effectiveness of 60 percent, will still only cache 30% x 60% = 18% of all IOs. Of course they can be a big help with boot and logon storms, but all write IOs still need to go to disk.
However, most storage systems also have 4 GB, 8 GB or more of cache built in. While the way it is utilized is completely different for each vendor and solution, most have a fixed percentage of the cache reserved for writes, and this write cache is generally much smaller than the read cache.
The fact is that when the number of writes remains below a certain level, most of them are handled by cache, which makes them fast; much faster than reads. This cache is, however, only a temporary solution for handling the occasional write IO. If write IOs are sustained and great in number, the cache needs to flush to disk constantly, making it practically ineffective. Since with VDI the larger part of the IOs are writes, we cannot assume the cache will fix the write IO problem, and we will always need the proper number of disks to handle the write IOs.
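
Putting the read-cache arithmetic above into a one-liner makes the limitation plain: even a well-hit read cache touches only a small slice of a write-heavy VDI load.

```python
# Fraction of all IOs a pure read cache can absorb, per the arithmetic above.
def read_cache_coverage(read_share=0.30, cache_hit_rate=0.60):
    return read_share * cache_hit_rate

print(f"{read_cache_coverage():.0%}")   # 18% of all IOs; the writes remain
```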

SSD

SSD disks are actually more like large memory sticks than like disks. The advantage is that they can handle an amazing number of IOPS, sometimes as high as 50,000 or 100,000. They have no moving parts, so accessing any block of data takes mere microseconds instead of milliseconds. Also, the power consumption of SSD disks is only a fraction of that of a spinning SCSI disk. An array of SSD disks consumes only a few hundred watts, while an array of traditional disks can consume many thousands of watts.

There are two types of flash memory: NOR and NAND. With NOR-based flash memory, every bit is separately addressable, but writes to it are very slow. NAND-based memory is much faster and, because it requires less space per memory cell, has a higher density than NOR memory. It's also much cheaper. The downside of NAND memory is that it can only be addressed in blocks of cells at once. For block-based devices this works perfectly, and because of its speed and density, NAND memory is used for SSD disks.


NAND memory traditionally stores one bit per cell. This is called a Single-Level Cell (SLC). Newer types can store multiple levels of electrical charge, allowing more than one bit of information per cell. Those are called Multi-Level Cells (MLC). MLC memory is most commonly used because it is much cheaper to make than SLC memory (currently about four times cheaper). The downside is that where SLC memory has a 100,000 write cycle endurance, MLC only allows 5,000 to 10,000 write cycles. Because of the longer lifespan of SLC memory, this type is used by most enterprise vendors in their SAN solutions, where cost is subordinate to reliability. Some solutions, however, use MLC memory. By caching and optimizing write IOs, the amount of writing to the actual cells can be greatly reduced, thus extending the expected lifetime of MLC memory considerably.


Currently SSD disks are four to ten times more expensive than Fibre Channel hard disks, and most storage vendors don't guarantee a lifespan of multiple years with an IO profile like that of VDI. But better SSD cells are being developed constantly. With a more even read/write ratio, a longer lifespan, larger disks and better pricing, we may see SSD disks in SANs become more common within a year.

Serializing Random Writes

When sizing storage for IOPS, all formulas assume 100% random IO which is especially true in VDI environments.

But when IOs are sequential, a spinning disk can handle far more IOPS than with random IO. The exact numbers for sequential IO vary from 300-400 IOPS to sometimes over 600-800 IOPS for a 15,000 RPM Fibre Channel disk, depending on the overhead and RAID levels involved.

If a storage system manages to cache random IO and write it to disk sequentially, it would boost the write performance of that system significantly. If the reads for that system are fully random, the impact on read performance would be small. There are several storage systems that achieve this.

NetApp, with its ONTAP OS and WAFL filesystem, serializes random IO by journaling all writes in NVRAM, coalescing them, and writing the data to the first available block nearest to the head of the spinning disk. This way, the effective write IOPS of a NetApp storage system is some three times higher than that of a RAID 1 system. The exact number of IOPS varies with how much free space is in the system, but that's something that can be taken into account when sizing a NetApp system. Sizing for VDI is determined by IOPS, which usually leaves plenty of capacity to work with. NetApp has several calculators that can help determine the exact number of disks needed when sizing a system for VDI. It's not uncommon for NetApp storage systems to need half the number of disks compared to more traditional storage arrays for the same number of IOPS.

Another system that uses serialization is Whiptail. They have SSD storage systems that cache writes until a full row of SSD cells can be written at once. That way, cells get written far less often than if every IO were written directly. This effectively eliminates the need for wear leveling and prolongs the life of the SSDs considerably. That's why they can use MLCs instead of SLCs, making their systems more cost effective. Also, the IOPS such a system can handle runs into the hundreds of thousands.
There are probably more systems out there that can serialize random IO, but this should give a pretty good idea of the effects of serialization.

BlockSize

Unfortunately, there is still more to it. After sizing and implementing several dozen VDI projects, we found that most fit well within the layout set out in this document. But some of them deviate horribly. After some investigation it turns out that there are applications out there (mostly the streaming kind) that use a block size that is totally out of scope for VDI projects.

VDI Storage Profile

Most VDI environments show a specific mixture of block sizes that we call the VDI Storage Profile. Measured over time, we counted the blocks per size entering the storage system. When we plot this in a 3D graph, it usually looks like the graph to the right.
These figures are taken from a live VDI environment using the method described in http://virtuall.eu/blog/hbr/ita-s-sample-time-with-vscsistats. It uses 60-second intervals to count the number of blocks of each size. It shows a healthy 75-80 percent share of 4 kB blocks and works out to an average block size of 10-15 kB.

The IOPS Multiplication

With certain applications, however, block sizes can be much bigger, as high as 80 kB or 100 kB on average throughout the day. If your storage is built to use 4 kB blocks internally, this can become a problem very quickly. Every 100 kB block would show up as a single IO from the client but become 25 blocks of 4 kB each on the storage array. So a client producing 5 IOPS at that block size would actually require 25 x 5 = 125 IOPS at the storage level. Luckily, these are usually the exceptions and they disappear into the large averages. But if such applications are in general use, sizing the storage can become a daunting task. Also, just imagine what swapping, which usually happens in 64 kB blocks, would add to this load.
It's therefore imperative to run a decent proof of concept, measure block sizes and know exactly how the intended storage solution handles blocks internally.
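
As an illustration of the multiplication described above, this sketch converts front-end client IOPS at a given average block size into back-end IOPS on an array that works with 4 kB blocks internally (the internal block size is an assumption; check what your storage actually uses).

```python
import math

# Back-end IOPS when the array splits large front-end blocks into its own
# internal block size (4 kB assumed here, as in the example above).
def backend_iops(frontend_iops, avg_block_kb, array_block_kb=4):
    blocks_per_io = math.ceil(avg_block_kb / array_block_kb)
    return frontend_iops * blocks_per_io

print(backend_iops(5, 100))   # 5 IOPS at 100 kB blocks -> 125 back-end IOPS
print(backend_iops(10, 12))   # a typical 10-15 kB VDI profile -> ~30 back-end IOPS
```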

In Conclusion

It should be obvious by now that calculating the amount of storage needed to properly host VDI is not to be taken lightly. The main bottleneck at the moment is IOPS. The read/write ratios that we see in practice in most reference cases are around 40/60 percent, sometimes even as skewed as 10/90 percent. The fact is that they all show more writes than reads. And because writes are more costly than reads on any storage system, the number of disks required increases accordingly, depending on the exact usage of the users and the applications.

Some questions remain:

  • What is the impact of application virtualization on the R/W IOPS?
  • What exactly is the underlying cause of the huge difference in read/write ratios between lab tests and actual production environments? 
  • What if all the write IOs only need to be written to a small part of the total dataset (such as temporary files and profile data)? Could all the data, or at least most of it, be captured in a large write cache?

These questions will be investigated as an increasing number of VDI projects are launched.

And as a final note, it is imperative that you run a pilot. Run the actual applications with actual users in the production environment beforehand so that you know how they behave and what the read/write ratio is. If you don’t size correctly, everybody will complain. All users, from IT staff to management and everybody in between, will complain and the VDI project… will FAIL

VDI and Storage = Deep Impact!

Comments or Feedback ?! Please let us know! (rsp@pqr.nl; www.twitter.com/rspruijt)

Credits

The credit for this information goes to my colleague Herco van Brug (!). The whitepaper can be downloaded from the Virtuall website as well: HERE. Find Herco on Twitter: www.twitter.com/brugh

Join the conversation


This is  a good overview on how to calculate IOPS requirements for hosted virtual desktops.  I've been working on a tool to do these calculations automatically for you.  The good news is that the numbers both of us have identified as critical to the calculations are pretty much in sync.  


My only major issue with the white paper is that you are just calculating average IOPS. Average IOPS takes into account a user's entire workday. I believe this skews the numbers much lower than what the storage system must support. For example, my personal IOPS calculations are broken down into:


1. Bootup rate


2. Logon rate


3. Working rate


4. Logoff rate


An average would take all of these into account and lower the overall value. But what happens if you have a logon storm? If you designed your storage subsystem based on average numbers, it will not be able to keep up with the IOPS requirements of a logon storm.


That being said, I think this article is a great way to get an understanding on the concepts required in order to calculate IOPS.  


Daniel - Lead Architect - Citrix


Twitter: @djfeller


Ruben - Great read! For those of us with an SBC background, doing things like turning off unnecessary services seems obvious, but to be honest I have not been doing things like turning off prefetching in my VDI test environment, though it makes perfect sense.


I have not had enough coffee to comment more thoroughly, but I will reread this afternoon for sure!


Thanks again!


Very nice piece of work Ruben.


I think though that your data on SSDs might be out of date. I've seen write cycles quoted that are an order of magnitude greater than the 100,000 write cycles per cell you quoted. Mtron quote their 32GB SSD as having an operational life of >140 years @ 50GB write/day.


It should also be pointed out that an SSD RAID array will significantly extend the operational life of individual SSDs. In some configurations this can extend the life of a system such that the write endurance exceeds the MTBF.


Rubin - Congrats on a fabulous post.  This is hands down the best researched and written article on the subject of VDI and IO ever (IMO).   You have done a great service to the industry by helping move the conversation from simplistic discussions of physical storage consumed to talking about the real sizing  challenge that adds to the swirl in  VDI Storage complexity and cost- IOPS.  


One point I'd like to make in the context of  read and write caching - Atlantis Computing's vScaler does both types  (not just read) - it maintains separate channels for reads and writes and in fact keeps different kinds of write IOs in different segments.  So virtual mem/paging  IO is kept in a different cache (bucket) from application IO, working set and end user IO.  This allows us to apply different cache and persistence policies (write thru versus write back for example)  that can impact and scale IO linearly.  This is in contrast to the simple cache approaches that do block level cache and count frequency or time stamps to decide what to keep in a single monolithic cache. Most cache techniques  fragment and hemorrhage as you add more workloads and as IO variations force the cache to flush to make space for hot data. This makes the caches good for read acceleration but they bring nothing to writes. We have a contrarian approach -  Our approach is to look at write IO very closely, understand its characteristic is , de-duplicate it and then cache it in an appropriate bucket for subsequent reads.  This has the effect of improving IOPS for write (and reads) dramatically. We can discard duplicate writes instantly.   a Single vScaler instance can deliver 10,000 IOPS (8KB 70% random) from pure software. That takes a lot of spindles to do esp enterprise class drives.


@Daniel Feller - Daniel - you are spot on - sizing for avg. IOPS is dangerous. Desktop applications and workloads are bursty in nature, and peak IOPS from many workloads can coincide and add up to as much as 5000 IOPS for simple things like boot storms, user logons and things like disk scans (A/V). If your storage array can only handle 1000 IOPS peak and its avg load is 650 IOPS, then a 5000 IOPS burst will take several seconds to execute, during which all array IO capacity is used to service the burst. During this time all other IO is suspended. This is common and happens all the time. Another important point is that VDI is all about shared storage and shared risk - on a physical PC the disk is dedicated to the end user and he/she usually has more than enough IOPS (50 per SATA versus 20 for a heavy duty user). More importantly, no one else's IO is in contention for IO elevator time because the HDD is yours and yours alone. A shared storage system virtualizes all the disks and the logical abstraction is shared between users - so there is no my disk and your disk (these are just logical). So now my IO is in contention with your IO for time on the elevator to get read and written from disk. Sizing for avg. IOPS is dangerous because any one user who consumes a disproportionate share of resources will kill the performance for everyone on the shared storage.


Disclosure - I work for Atlantis Computing. I designed large parts of vScaler.


Chetan Venkatesh


Atlantis Computing


Twitter: @chetan_


Ruben,


Excellent post with lots of great info.  I'm with Daniel and Chetan on the IOPS averaging, but aside from that, it's all very good info.  This must have taken a good deal of time to put together so my hats off to you and Herco.


Shawn


Good post, but it just goes to show way too much complexity for a freaking desktop. Why not just use local storage? It's so much simpler to manage and you get the IOPS you need. Also, I still don't get why anybody would run >1-2 VMs per core. What are you going to do at concurrency? I can't predict concurrency easily enough to maintain service levels.


Again, great article, pulling it all together. I'm going to read it again now to make sure I understand all the complexity that I don't need :-)


Great article for sure Ruben. With the layering one that was published here a couple weeks ago, it just shows one thing, and I will write a post about this tomorrow: Windows was NEVER meant and/or developed with virtualization in mind and what every single vendor out there (Atlantis, Citrix, etc) are trying to do is to glue together a solution to make something that was NOT written to work in a certain way, work. The bottom line is, there is no way, IMHO, that VDI will ever work and/or be stable/simple enough for mass adoption, UNLESS Microsoft and Intel (or AMD for that matter) redefine and redesign the Desktop OS and the Hardware (to match this new, virtual world). Once a new OS that understands layering, virtual I/O, etc comes out all these issues people are banging their heads to fix, will be fixed. Again, a new, rearchitected OS and HW is required to make this a reality. Until then we will have to deal with a myriad of pieces put together to make this work. Considering the current landscape on VDI land, would I trust such solution for a very large scale deployment, with 100,000+ seats? No, unless I am on crack. And when such solution, made of several pieces from several different vendors breaks or misbehaves, who am I gonna call? Ghostbusters? Citrix? Atlantis? Microsoft? Quest? Let the fingerpointing war begin.


Well done, great read!  


This just shows that you can't throw VDI into your datacenter and expect it to perform perfectly without a lot of work ahead of time.


@appdetective I agree that this shows it is very complex to move desktops in the datacenter BUT I truly feel there are use cases where it seems to be worth the expense/effort.


Ruben,


very nice work. Enjoyed reading it!


Eli


Very well done! Thank you for putting this together.


Great post, Ruben. This should be read by anyone who has to deal with VDI even remotely.


Some quick comments:


I think you switched numbers in the section "The Client I/O". You state that IOPS are mostly writes, but the numbers you give after that show that researchers found that, too, while you want to show that they got it wrong (if I understood correctly).


SATA has command queueing, too. It is called NCQ.


The read/write ratio turned out to be as high as 10/90 as a percentage.


This is completely true!!!!


Loved this post, however can I clarify your section on disk alignment. I have seen many conflicting reports around XP and W2k3 and their disk alignment. From what I can gather from all the posts I have read, the system drive for XP and W2k3 is aligned, so you do not need to change this; however, it is the additional 'non-system' drives that must be manually aligned.


Cheers


Jase


@Jason, neither XP nor 2003 aligns the disk, and therefore not the boot disk either.


Only Windows Vista/2008 and up align the disk by default, at 2048 sectors (1MB), to accommodate all storage types, models and vendors.


NetApp developed a tool to post-align Windows systems that were created with the default 63-sector alignment, but to my knowledge this is only available for VMware environments...


@Ruben, your information regarding SSD specs is outdated, but you make the point, and this is a very good overview of the VDI challenges IT guys will have to face if they go for a deployment.


Rgds,


Didier


Ruben, very good article on what we have been calling "the hidden cost of VDI". We have seen countless POC environments that made it through the testing phase with flying colors, only to fail under moderate load because the architect never paid appropriate attention to the storage layer. The sad truth is that once you add up all of the costs involved in properly sizing a traditional storage system for VDI, it can quickly become the largest overall expense.


The capital cost of traditional arrays capable of supporting a 1,000 user load can be cost prohibitive for many medium – large customers ($300,000+).  In our minds, this is a bitter pill to swallow, and I am glad that you were able to touch on the solution.


As you mentioned, SSDs can provide an incredible performance advantage over traditional arrays, allowing customers to shrink the footprint of their storage systems from RACKS to a few rack units. Unfortunately, as Ruben mentioned, SSDs have traditionally suffered from two major drawbacks: endurance (lifetime) and cost.


WhipTail technologies (disclaimer: I am the CTO) directly addresses both of these issues, delivering a system that can support a massive amount of sustained IO performance with a rated lifetime of over 7 years.  With a single storage appliance, WhipTail can deliver over 125,000 IOPS (read) and 65,000 IOPS (write) drastically slashing the storage costs for any VDI deployment.


James Candelaria


jc@Whiptailtech.com


Disk Alignment: "If the number isn’t a multiple of ... or 1048575 (1 MB) the partition is unaligned", 1024 *1024 = 1048576 (not ...5)


Thanks for the post, Brian, I learned a lot.


VDI Sizing tool available for download as well:


virtuall.nl/.../vdi-sizing-tool


Excellent OP and comments guys- thanks to all.


I like the title, "Deep Impact"  -- perhaps we could transform this into a movie script:  "VDI: I know what you did in the datacenter"


- I've missed the term 'thin provisioning' here. It makes a lot of difference especially in VDI (and especially when memory sharing is utilized) - as there is much more chance for the master image to be cached - even on the host itself.


- Playing with the amount of cache each guest actually holds is also worth trying - might free some memory as well - again, for the host to cache 'globally' for all guests.


- ASLR still makes sharing code pages possible, since they are still on 4K boundaries, not completely random location of pages. Data is harder to share, but is harder anyway.


- I'd be happy if you could mention another issue: reduce the IO the client is performing in the first place (just as the best way to saving gas is less driving!) - remove unneeded protections (system restore perhaps? Anti-virus, if you have a different (offline?) protection, unneeded devices and services that start up needlessly (and take memory and disk reads at least), make sure you don't SWAP in the guest often, etc. This is probably worth an article by itself.


Excellent read. Thanks OP. This kind of research sets BrianMadden apart from other SBC websites and user groups. Keep up the good work.


@pironet - According to Microsoft, you can not align the system drive in operating systems prior to W2k8.  


@Jason Conomos - if that is true, then MS is wrong! Simply connect the wannabe boot disk to another host and align it! Then disconnect it and connect it to the server you'd like to install Windows on. We do it all the time.


Hi


I don't understand the disk alignment 100%. What if we (as we do) only use NFS storage? Then, as I understand it, we don't need to do the disk alignment, or do we?


We have a NetApp storage system and our goal is to only use NFS to access the storage from the ESX hosts via 2 x 10G LAN in each ESX hosts.


So when we use the NetApp storage via NFS, it's the NetApp filesystem that is on the volumes/datastores, and on top of this the vmdk files. So why would a misaligned Windows Server 2003 that is installed inside a vmdk file on top of the NFS NetApp filesystem have any impact on the performance?


Hope someone can explain it to me?



I made a typo in the text and mixed the Read/Write ratio.. fixed that!


I've done a lot of calculations after this article and finally ended up with one simple question (that I didn't find on the web so far):


"If I need such a large number of discs to provide the recommended IOPS, why should I spend money and administrative effort on capacity optimization technologies like Provisioning Server, vComposer or deduplication?"


If I look at the capacity of the discs I need for the IOPS, I have more capacity than I need for fully provisioned clients... I can't save any money on purchasing discs because I need them for my IOPS.


I can only save money on discs if I'm able to optimize IOPS at the same time (SSDs; write/read caching, ...).


Am I right or did I make calculation errors? What's your point of view?


...and if I'm right, why should anyone use these optimization features today (more expensive licence for vComposer, more servers for PVS, losing personalisation after reboot/recompose, one more complex technology to learn and to administrate, ...)?


It seems that RAID 5+0 might be a good option since write speed is much improved. Could someone comment on 5+0?


Great article, though I think you've made some fairly large errors in your IOPS/Spindle calculations and assumptions, especially with the RAID-DP figures which are out by a factor of five.


I've posted a fairly long (6000 word) response on my own blog which addresses the areas which I think need some clarification along with a bunch of other stuff related to VDI workloads on NetApp controllers.


Once again, kudos to you for a great blog post.


Regards


Just some clarifications on RAID-DP:


It's dual parity, but they're not the same. The second parity is sort of a diagonal parity. You mentioned:


'When a parity disk fails, a new disk simply needs to replicate the data from the other parity disk.' That is obviously wrong then, since the parities are not the same. The parity disk will be rebuilt, just as any other failed disk would be.


On the other hand, with RAID-DP you do NOT incur the write penalties, that you see with RAID6 (as you've mentioned) since WAFL always writes to 'new' blocks.


Gandalf mentioned: 'But it’s good to know that with any dual parity system the write penalty is not 4, but 6 ! So for every random write that enters the storage system the original data block needs to be read as well as the 2 parity blocks; then the new data will overwrite the old block and because the parity is in fact an XOR of all data, you now know what the new parity has to be, so the new parity can now be written to the 2 parity blocks.'


-> Even on RAID6 the second parity will be different from the first, therefore it's a bit more tricky than described here. And RAID-DP, as mentioned above, doesn't overwrite the old blocks, so it works a lot faster, writing full stripes, not having to re-read the old blocks for calculating the parity.


By doing this, it's effectively serializing random writes, which speeds up SATA disks quite a bit.


You mentioned: 'So, with 15,000 RPM disks in a RAID-DP, the number of read IOPS per disk is some 150-160 but the number of write IOPS lies somewhere between 70-80 IOPS.' Since RAID-DP is only used on NetApp systems, and they all have NVRAM and acknowledge write requests as soon as the data is safely in there (not when it finally reaches the disks), the write figures should be way higher. Unless you really overburden a NetApp, it will always write faster than it reads...


Finally, the question came up: "If I need such a large number of discs to provide the recommended IOPS, why should I spent money and administrative effort on capacity optimization technologies like ... deduplication?"


 If you're using a dedupe-aware cache, like on a NetApp (with Flash Cache even up to 2TB/system...) you can reduce the number of spindles necessary quite a bit. Therefore on a NetApp VDI and DeDupe get along very nicely.


Sebastian Goetze


NCI (NetApp Certified Instructor, freelance...)


The article was updated in December 2010: the goal, RAID-DP and alternatives sections were changed, and a section on serializing random writes was added.


Disclosure: I work for a storage vendor (Nimble Storage).


Ruben,


great updates to an already helpful discussion (including comments). While there might be some minor quibbles about the exact numbers, the wide range of topics covered makes this a very helpful place to start.


For instance not many independent forums cover nuances such as the IOPS impact of serializing random writes. For anyone interested, our CTO recently touched on this topic in his blog comparing different file systems.


@Mads Soerensen


Within NFS data stores misalignment can still be an issue.


The NTFS file system that resides within the VMDK is not aware of the fact that the underlying HDD (VMDK file) is actually a file lying on another file system (WAFL)


Since WAFL uses 4k blocks, if the NTFS filesystem begins halfway through a 4k block due to a partition offset not divisible by 4k, every write (or read for that matter) from the NTFS layer will require two blocks to be accessed on the WAFL layer. On a NetApp this can have a significant performance impact due to the relatively small block size; storage systems which use larger block sizes will suffer less from misalignment. Using a variable block size filesystem like ZFS in your guest OS would give some pretty awful results as well. For a good overview of misalignment on NetApp filers, have a look at this post on the NetApp forum: communities.netapp.com/.../new-vmdk-misalignment-detection-tools-in-data-ontap-735-and-802

