Liquidware Labs' Flex-IO storage optimization has been released. Can it make it as a standalone product?

Since VMworld, we’ve been hearing bits and pieces about Liquidware Labs’ storage optimization solution for non-persistent desktops, Flex-IO. At VMworld I was shown a demo of an early version which, at the time, was going to be included as part of ProfileUnity. This was exciting to me because it represented a turning point in the market where storage optimization was essentially obtainable by anyone. If I’m in the market for both storage optimization and profile management, how could I not at least consider Liquidware Labs in both of those discussions? In fact, I even wrote an article about how storage optimization was suddenly becoming a commodity.

The thought process for Liquidware at the time was that the overall optimizations wouldn’t be as feature-rich as more dedicated solutions from the classic companies (Atlantis, GreenBytes, Infinio, and about a dozen others). For instance, it only works on non-persistent desktops and there’s no single instance storage, but they say they can do their caching with a smaller memory requirement compared to the competition. Despite any shortcomings, it’s better than nothing, and adding it to ProfileUnity would have helped them win key deals away from AppSense and RES.

When they tested it at customer sites, Liquidware found that Flex-IO adds in the neighborhood of 25,000 IOPS per host using their virtual appliance to cache in RAM. The performance they saw at beta sites led them to the conclusion that they had something more valuable than a ProfileUnity add-on, so Flex-IO is now a standalone product. The price is still low compared to the big names (it lists at $3,000 per host), but falls in the middle of the pack compared to the entire space.

It’s worth noting that calculating a direct price comparison between solutions is pretty tough because there are so many pricing models and technologies out there. Some license by the socket, others by the users, and all of them are using different density numbers. That’s before you even consider how effective the optimizations are. You can bet every one of the companies sat down with a spreadsheet and played some games to try to figure out what was easier to position. The bottom line is that it’s hard to believe one generalized statement over another. You’ll have to actually plug in your company’s requirements to get a good idea of how much each solution will cost, and that usually involves getting the product on site.
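To illustrate how muddy that math gets, here’s a minimal back-of-the-envelope sketch in Python. Apart from Flex-IO’s $3,000-per-host list price mentioned above, every number here (desktop density, per-socket and per-user prices) is a hypothetical placeholder you’d swap for your own figures.

```python
# Back-of-the-envelope normalization of different licensing models to cost per user.
# Only the $3,000/host Flex-IO list price comes from the article; the per-socket and
# per-user prices and the desktop density are hypothetical placeholders.

hosts = 10
sockets_per_host = 2
users_per_host = 100                      # assumed density; yours will differ
total_users = hosts * users_per_host

scenarios = {
    "per-host @ $3,000 (Flex-IO list)":   3000 * hosts,
    "per-socket @ $500 (hypothetical)":   500 * sockets_per_host * hosts,
    "per-user @ $40 (hypothetical)":      40 * total_users,
}

for name, total_cost in scenarios.items():
    print(f"{name}: ${total_cost:,} total, ${total_cost / total_users:.2f} per user")
```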

Flex-IO works with any broker, although it is currently tied to ESX for the host. The installation process involves standing up a virtual appliance, connecting to the web interface, and pointing it at vCenter. Using vCenter, it enumerates the data stores and lets you pick the one you want to optimize. From there, you can build your desktop environment on it. You can migrate users by assigning them to the new data store, so while it’s not instantly optimized like other solutions, it does seem fairly easy to implement.
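If you’re curious what that vCenter-driven enumeration step looks like under the hood, here’s a rough pyVmomi sketch of the kind of vSphere API call an appliance like this would make. It is not Flex-IO’s actual code, and the vCenter hostname and credentials are placeholders.

```python
# Hypothetical sketch: enumerate datastores through vCenter with pyVmomi (not Flex-IO code).
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; a real script would also handle SSL certificate checks.
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local", pwd="secret")
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        print(f"{s.name}: {s.capacity / 2**30:.0f} GB, type {s.type}")
    view.Destroy()
finally:
    Disconnect(si)
```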

Regarding the optimization itself, Flex-IO sets up an NFS data store on existing storage, compressing and caching the common read blocks. Nothing special happens with writes: they’re cached and written back to disk whenever possible. If you’re paying attention, that means that if power is lost, those uncommitted writes are lost too. Since Flex-IO only works for non-persistent desktops, that shouldn’t be a problem.
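To make that caching behavior concrete, here’s a toy Python model of a RAM cache that compresses cached read blocks and holds writes in a write-back queue until they can be flushed. It’s a conceptual sketch of the general technique, not Flex-IO’s implementation, and it shows exactly why anything still sitting in the dirty queue disappears on power loss.

```python
import zlib

class WriteBackCache:
    """Toy model: compressed read cache in RAM, writes held until flushed to disk."""

    def __init__(self, backing_store):
        self.backing = backing_store   # dict standing in for the real datastore on disk
        self.read_cache = {}           # block id -> compressed bytes held in RAM
        self.dirty = {}                # acknowledged writes not yet written back

    def read(self, block_id):
        if block_id in self.dirty:                      # newest data may exist only in RAM
            return self.dirty[block_id]
        if block_id in self.read_cache:                 # cache hit: decompress from RAM
            return zlib.decompress(self.read_cache[block_id])
        data = self.backing[block_id]                   # cache miss: go to the backing store
        self.read_cache[block_id] = zlib.compress(data)
        return data

    def write(self, block_id, data):
        self.read_cache.pop(block_id, None)             # old cached copy is now stale
        self.dirty[block_id] = data                     # acknowledged, but not durable yet

    def flush(self):
        for block_id, data in self.dirty.items():       # "written back whenever possible"
            self.backing[block_id] = data
        self.dirty.clear()

# Anything still in `dirty` when the host loses power never reaches `backing`, which is
# why this style of caching only makes sense for disposable, non-persistent desktops.
```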

In terms of scalability, the solution uses 2 vCPUs to support 200 users and requires “as little as” 16GB of memory for every 50 users (so for those 200 users, you’d need at least 64GB of RAM). For those just testing, there is a setting that lets you dedicate as little as 6GB of RAM for a small lab.
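Working those ratios through for a few densities (and assuming they scale linearly beyond the quoted 200-user configuration, which is my extrapolation, not Liquidware’s claim):

```python
import math

def flexio_appliance_sizing(users_per_host):
    """Rough sizing from the ratios above: 2 vCPUs per 200 users and 'as little as'
    16GB of RAM per 50 users. Linear scaling past 200 users is an assumption."""
    vcpus = 2 * math.ceil(users_per_host / 200)
    ram_gb = 16 * math.ceil(users_per_host / 50)
    return vcpus, ram_gb

for users in (50, 100, 150, 200):
    vcpus, ram_gb = flexio_appliance_sizing(users)
    print(f"{users} users/host -> {vcpus} vCPU(s), ~{ram_gb}GB RAM")
```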

As a proponent of persistent desktops, I’m not in love with the fact that this only supports non-persistent scenarios. That said, there are plenty of non-persistent desktop supporters out there. If you were going to make a lightweight solution that aimed to add some optimizations to an environment, building one for non-persistent desktops means you don’t have as much work to do, so it makes sense. At least, it would make sense to me if this were still a free feature of ProfileUnity. By turning this into a standalone product, Liquidware is inviting competition and a deeper inspection of what is going on under the covers. While I’m sure they’re prepared for that and confident in the outcome, I was looking forward to seeing what sort of disruption it would cause to have a UEM solution on the market that just so happened to include some sort of storage optimization.

Now customers will have to weigh this product against the many others that are out there, when before it would have just been there for people to use. What’s interesting to keep in mind during that evaluation is that despite Liquidware being known for assessment and UEM solutions, the team behind Flex-IO is made up of ex-Vizioncore people. Vizioncore was a data protection and monitoring company that was acquired by Quest Software, so they’re probably more comfortable creating storage optimization solutions than they are making UEM solutions. That means they’re not just a software company that thinks it can play in the storage space; they actually have a background to support it.

I’m anxious to see how all of the different storage optimization solutions shake out in 2014. There are so many that we’re sure to see some dustups between them as everyone tries to decide which method is better or which provides the best value. With this release, Liquidware is certainly in the fray.


7 comments


As always it's down to use cases, but a solution focusing on read-only savings seems very limited IMO.


The problem I've witnessed with solutions that focus only on read IOPS has been when users start to stream apps (App-V, Jukebox, etc.) or when some apps do a lot of writing.


For double the list cost I can use Hyper-V with the new de-dupe magic and a FusionIO card (OEM) and get more IOPS and service write IOPS too. I also won't have any ongoing software maintenance costs, additional memory to buy, or another solution to manage. Just saying ;)



Interesting, especially for the price/performance they claim.  


@Daniel Bolton - Gabe says they do both Read and Write above.



"Nothing special happens with writes".. Their cached we're possible which is fair enough since writes  are mainly unique but with a hardware solution (Fusion, etc.) all writes are optimal.


One of the beta partners (UK-based, but they'll remain nameless) was very impressed with the solution but said there were no visible benefits to writes in their tests.


But like I said, it depends on use cases... it's not that well suited where a lot of dynamic activity happens.


I think this will be a commodity in years to come and just a feature of the HV anyway.



I also wanted to let you all know we are offering two full NFR Flex-IO appliances for your home labs, etc. FREE. Fully functioning, perpetual. Who can't use 25,000+ free IOPS and 75% less disk latency? :)


Enjoy


T.Rex


Here: www.liquidwarelabs.com/download



I wouldn’t say Flex-IO doesn’t do much on the write side of the house; when we talk about 25,000 on average, it’s a split between 5,000 write and 20,000 read. So, based on these numbers, and let’s say 100 users a host, you’re still getting 50 write IOPS and 200 read IOPS per user per host. Because of what we do with writes, all provisioning is faster than without us. Anyone that doesn’t see good numbers/performance from testing, I would love to work with them to understand how they are doing their testing; in ALL cases we are seeing cookie-cutter results on production-class hardware. The only time the numbers are low, a.k.a. less than 15,000 IOPS, the person doing the testing is still amazed their non-production-class home-built box from 5+ years ago can even do those numbers.


The goal here is to help solve both read and write IOPS for non-persistent desktops, at an affordable cost compared to most products on the market, and lastly “time to value”. If you do the steps to baseline your IOPS today, then download, install, configure, deploy, and baseline your IOPS after, this can all be done in under an hour. We have clients reporting all the things they should: faster login times, faster application load times, etc. (see process here -> www.liquidwarelabs.com/.../Liquidware-Labs-Flex-IO-Admin-Guide.pdf)


If your choice is to use persistent desktops then Flex-IO is not for you (the rest of our products are fine with both persistent and non-persistent desktops), but almost all clients we talk to are using non-persistent desktops, or are moving to non-persistent, for 90%+ of the desktops in their VDI deployments.


Lastly, unlike other products on the market we have a frictionless approach to testing and validating your results. Point being, fill out the download form and you can download without needing to talk to anyone, do your own testing and let me know how it goes. Jason.mattox (@) liquidwarelabs.com


Happy 2014!!



Full disclosure — I’m with Infinio, one of the “classic companies” mentioned. I won’t turn this into a vendor pitch, but I will point out a few corrections worth noting:


> Infinio needs just 2 vCPUs & 8GB of RAM per ESXi host we accelerate. We aggregate those caches as a single unified deduplicated cache.


> Infinio costs just $499 per socket, making us pretty inexpensive if not less expensive


> Infinio can accelerate all workloads, not just VDI


I’m with @Daniel Bolton that in the long run the hypervisor and/or storage array will likely have this level of detail built in. No matter the solution, the problem is here today.


Flex-IO sounds like a great competitor to the other VDI-only solutions out there. They cover a case where having RW acceleration for a specialized workload could be worth the extra investment. Here’s where I have to disagree with @Daniel’s point above: read is a heavy-hitting part of any virtual workload. The inclusion of a write-through read cache acceleration layer between the hypervisor and the backend datastore offers a massive benefit to performance. That fact is strongly connected to why we have strong funding behind us.


I just wanted to clear that up and throw a personal opinion out there too. Thanks for the thoughtful article.



@Matt > I've never heard of Infinio (just had a quick look at the site) so can't really comment. Price point seems attractive though.


Just to be clear I think these solutions are technically fantastic and offer performance benefits.


I completely agree about the importance of read/write and, having seen Atlantis in action myself, know just how well a software acceleration solution can work for both. I've just heard from good/trusted sources who have tested FlexIO (and were very impressed) that they didn't see any obvious benefits at scale with writes when simulating complex workloads.


I guess my point is there are other ways, such as FusionIO, which might have a high up-front cost (OEM not so much) but come with no hidden costs, no hypervisor restrictions, and aren't yet another point solution to manage (no matter how simple). Most of the partners/solution providers I've spoken to still prefer the hardware-based approach.


Personally I would never rule anything out and it's all about use cases and aligning the best solution to the use case.


I applaud Liquidware on their trial mentality and NFR scheme!


