Riverbed releases Granite for VMware View. Brilliant or lame?

You may remember Riverbed's "Granite" product from last year which uses WAN optimization and disk smarts to project storage across the WAN. (So essentially you have a branch office server which thinks its storage is local, but it's actually in the central datacenter with a pair of Riverbed appliances in the middle.) At VMworld last August, Riverbed released a Tech Preview of Granite for VMware View, and last week they announced it's now officially available and VMware Ready. After talking with the folks at Riverbed a few weeks ago about Granite for View, I can't decide if this product is awesome or lame. So I'm curious what you think.

To review how Granite for View works, the idea is that if you have a branch office, rather than having all the users access their View desktops via PCoIP over the WAN, there are scenarios where you can get a better user experience if you host their VDI sessions on a View server that's local to the branch. This of course is nothing new, and it's a topic I covered in depth in my 2002 book about Citrix MetaFrame. (Here's a link to that part of the book if you want the details of the pros/cons of hosting your desktops centrally versus at the branch.)

Because I'm super lazy I'm just going to copy and paste the images from my book that explain this. This first image is the "typical" deployment, with the desktop session at the central location and the users at the branch office. In this case the remoting protocol goes across the WAN.

[Figure: MetaFrame XP server in the central datacenter, with the ICA session crossing a 128k WAN link to users at the branch office]

(To modernize this drawing, replace "MetaFrame XP" with "View," "ICA" with "PCoIP," and "128k WAN Link" with "10 Mbps WAN Link.")

Now let's look at putting the desktop server on the "wrong" end of the WAN, like this:

[Figure: the desktop server located at the branch office, on the same LAN as the users]

This second drawing is basically what Riverbed Granite for VMware View is doing. The big difference between Granite in 2013 and doing this manually in 2002 is that with Granite, you don't have to worry about the actual servers, VDI hosts, and disk images at the remote site. You just put a Granite appliance in place (which runs ESX and is managed by vCenter, visible to View, etc.). Then in your central View environment you simply create a desktop pool for that Granite box which you point towards the central storage at your main location. Then when you assign users to that pool, they log in to a local VDI desktop at the remote site and off they go. The actual primary storage stays at the central site, and the Riverbed Granite appliances handle disk block-level caching, compression, streaming, etc. You can literally have 10TB of disk images in your central location that all appear to be local to the remote site.
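If you want to picture what the appliance is doing under the hood, here's a deliberately simplified sketch of the read-local / write-back pattern. (This is my illustration in Python, not Riverbed's actual code, and every name in it is made up.)

    # Toy sketch of an edge block cache. Hypothetical names, not Riverbed's code.

    class CentralLUN:
        """Stands in for the iSCSI LUN back in the central datacenter."""
        def __init__(self):
            self.blocks = {}

        def read(self, block_id):
            return self.blocks.get(block_id, b"\x00" * 4096)

        def write(self, block_id, data):
            self.blocks[block_id] = data

    class EdgeBlockCache:
        """Branch-side appliance: serves hot blocks locally and streams
        writes back to the central LUN asynchronously."""
        def __init__(self, central):
            self.central = central
            self.cache = {}   # block_id -> data (the branch "working set")
            self.dirty = []   # write-back queue headed across the WAN

        def read(self, block_id):
            if block_id not in self.cache:  # cold block: one WAN round trip,
                self.cache[block_id] = self.central.read(block_id)
            return self.cache[block_id]     # ...then it's local from here on

        def write(self, block_id, data):
            self.cache[block_id] = data          # acknowledged at LAN speed
            self.dirty.append((block_id, data))  # flushed later (compressed, deduped)

        def flush(self):
            while self.dirty:
                self.central.write(*self.dirty.pop(0))

    lun = CentralLUN()
    edge = EdgeBlockCache(lun)
    edge.write(0, b"boot image block")  # local ack now, WAN flush later
    edge.flush()
    assert edge.read(0) == lun.read(0)

The key property is that the branch never holds the authoritative copy, just a cache of the working set, which is why 10TB of central images can appear to sit behind an appliance with far less local disk.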

Why isn't this all the way cool?

The only reason I wrote that this might be lame is the fact that when you create a desktop pool in View, you have to assign it to a specific Granite appliance. When I first heard about Granite last year, I thought, "Oh cool. So you can have 20 branch offices and all your desktops centrally, and your employees can roam anywhere, and if they happen to connect to a View desktop from a branch office with a Granite appliance then they'll get their desktop served locally instead of centrally." Unfortunately the ability to seamlessly roam is not a feature of Granite, since you have to assign a desktop pool to a specific Granite appliance (and each pool can only be assigned to one location).
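To put that limitation in concrete terms: the pool-to-appliance mapping is effectively a fixed one-to-one table, while the roaming scenario I'd hoped for would need a lookup keyed on wherever the user happens to be connecting from. A toy illustration (again mine, with made-up names):

    # Today: each View desktop pool is pinned to exactly one Granite appliance.
    POOL_TO_APPLIANCE = {
        "chicago-branch-pool": "granite-chicago",
        "denver-branch-pool":  "granite-denver",
    }

    # The roaming scenario would need a location-aware lookup instead,
    # resolving the serving appliance from the user's current site:
    SITE_TO_APPLIANCE = {
        "chicago": "granite-chicago",
        "denver":  "granite-denver",
    }

    def appliance_for(connecting_site):
        # Hypothetical: serve the desktop locally if the site has an
        # appliance, otherwise fall back to the central datacenter.
        return SITE_TO_APPLIANCE.get(connecting_site, "central-datacenter")

Granite today only gives you the first table.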

Okay, so Granite isn't about the roaming use case. Fine. (There probably aren't even that many users like that anyway.) The main advantages Granite has over just manually putting VMware View servers at the edge are:

  • You don't need additional hardware at the endpoint
  • You don't need to configure anything at the endpoint
  • You can project much more data from the datacenter than you can actually store at the endpoint 

(To be clear, these advantages are not about putting View servers on the edge versus the center; rather, they are specifically why putting View servers on the edge is better with Granite than doing it manually.)

But I question whether they're worth the added cost of Granite ($3k-$10k per box, depending on the hardware). For example, this Granite box's main job is to run the Granite VM and project all the central disk storage to the endpoint. Riverbed claims that buying Granite means you don't need any additional servers at the endpoint, but that's only true if your branch office is small enough that all the users could run their VDI sessions on the spare capacity of the Granite appliance. Well, if that's the case, then your branch office is really small. How much of a pain was it to manage before anyway? And why wouldn't you just buy a real server designed for VDI and create the desktop pools and user mappings there in the View Management Console? How's that harder?
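To make that concrete, here's a quick back-of-envelope comparison. Every number below is my own assumption for illustration (rough list prices, a guess at branch size), not anything from Riverbed:

    # Back-of-envelope: Granite appliance vs. a DIY branch View server.
    granite_appliance = 8000     # midpoint of the $3k-$10k range
    vdi_server = 5000            # assumed: a real box sized for branch VDI
    storage_per_tb = 300         # assumed: commodity local disk, per TB
    users = 25                   # a "small" branch
    images_tb = 1.5              # desktop images + data for those users

    granite_cost = granite_appliance                    # storage stays central
    diy_cost = vdi_server + images_tb * storage_per_tb  # storage lives locally

    print(f"Granite per seat: ${granite_cost / users:,.0f}")  # -> $320
    print(f"DIY per seat:     ${diy_cost / users:,.0f}")      # -> $218

You can obviously pick numbers that flip this either way; the point is just that the appliance has to clear a bar set by commodity servers and cheap disk.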

Second, Riverbed claims that Granite endpoints are simple to set up—basically just one step—and they're integrated with your existing View environment. But how hard is a remote View server to set up anyway? (And you can script the whole thing for big rollouts.) Honestly I've never been impressed with "easy setup" as a selling point, since most people only set up their environment once. Don't you want to buy the system that's best for the 25,000 hours it will be used over three years, versus the initial few hours it takes to set up?

Finally, Riverbed claims that with Granite, you can project much more data from the datacenter than is actually available on the Granite appliance. But if you have to hard-code users to specific Granite appliances (and therefore specific sites), who cares? I have to pay for storage somewhere, and if I'm hard-coding a user to a site then it would be just as easy to buy the storage for that site and save the money on the central backend. The total storage balances out, regardless of where it lives. (In fact I could argue that the storage in the central datacenter is more expensive than the storage at a branch office.)
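The arithmetic behind that is simple conservation: hard-coding users to sites fixes the total amount of storage you need, and only moves where you buy it. With assumed (and very debatable) prices:

    # Total storage is the same either way; only its location (and price) moves.
    branches = 20
    tb_per_branch = 0.5           # assumed working set per branch
    central_price_per_tb = 1000   # assumed: SAN-class datacenter storage
    edge_price_per_tb = 300       # assumed: commodity branch disk

    total_tb = branches * tb_per_branch                     # 10 TB either way
    print("All central:", total_tb * central_price_per_tb)  # 10000.0
    print("All edge:   ", total_tb * edge_price_per_tb)     # 3000.0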

What do you think?

What do you think of Riverbed Granite for VMware View? I really, really want to like it, and the folks who made it happen deserve an attaboy for their technology. But in terms of actual usefulness in the real world, I don't know if I can get too excited about it. Am I missing something?

6 comments

I'd be more curious to see who's actually using Granite for anything. Riverbed has been talking about Granite for years, the first use case being iSCSI over the WAN, and I've yet to meet anyone who is using Granite outside of a lab.



One scenario where I see Granite as a good fit would be remote sites that present special limitations and challenges, such as offshore platforms.



The Riverbed Granite product has been selling well since its launch in February last year, and already has many happy customers who have understood the benefits of the new technology and successfully realized the operational efficiencies only possible with Granite.


Per the comment from "calmo" above, one such customer is Alamos Gold: www.riverbed.com/.../Alamos_Gold.php


By centralizing distributed resources, Granite typically allows customers to improve data security & control, reduce IT administrative burdens, and improve business continuity, thus significantly reducing overall IT costs. The application of this technology together with VMware's Horizon View VDI platform is another compelling use case and should enable similar benefits for these environments.



Hi Brian,


(Disclaimer: I work for Riverbed).


Clearly you have a deep understanding of how Granite works, and I appreciate that, since it's frequently misunderstood.  And you have some valid points--it would be great if Granite applied to the roaming use case for VDI, but it doesn't.  However, I'd like to offer a counterpoint.  The Granite architecture reflects the philosophy that data should be consolidated and compute should be distributed, to the extent possible.  True, you could distribute your VDI servers in 2002, but most people chose not to do this because it meant you also had to distribute the data for those VDI servers, which meant you had potentially thousands of "islands of storage", each to be allocated, managed, and backed up individually.  Unless, of course, you didn't care about being able to back up and restore a virtual desktop, or a VDI server.  Granite allows you to distribute the compute function on a WAN, yet preserve local performance.  Second, your local servers/VDI hosts *may* run on the Steelhead appliance itself, for small offices, but you also have the option of Granite-only, and serving data to an external ESX or VDI server.  The customers who are using Granite today but not the on-box virtualization component are enthusiastic about it because they like the idea of dropping a diskless (or at least, storage-less) ESX server into an office, rather than managing a unique storage infrastructure in each office.



Hi Brian,


(full disclosure, I work for Dell Software on vWorkspace)


When I was running large View installations I used to complain loudly to VMware that their architecture tied me to a central SAN (or forced me to cobble something together with something like a LeftHand software SAN). With that in mind, I agree that this is the right idea, but I'm not really convinced that it's worth it to move the entire disk image to the edge using additional hardware.


I think a better idea is to use cheap storage on the edge (be it local storage or a cheap array of some sort) and then run user data/profiles over the WAN from a centralized SAN. This way you're running the 'expendable' disk images off storage close to the compute, and the 'important' user data is centralized and available to central backup strategies. I know that this doesn't fly for static/persistent desktops; for those you can still run them on edge storage, but you have to figure out backups.


Again, in the interest of full disclosure, this is how vWorkspace works via Hyper-Cache and local storage (it’s awesome and simple).  But even if you were to run View I think something like an Ilio diskless VDI appliance would be simpler to manage.



Adding additional (unnecessary) products to your architecture NEVER makes anything simpler and easier.


