AutoVirt: File Virtualization

AutoVirt is a small, three-year-old, venture-capital-backed company based in Nashua, New Hampshire, with what seems to be an interesting idea they call "File Virtualization".

I had a chance to meet the CTO of AutoVirt, Klavs Landberg, the other night and to learn about what they do.  They may not be the first with this idea, with EMC, Acopia, and Brocade either in or previously talking about this space, but the others may have failed by being too early.

File Virtualization is about breaking the relationship between remote file references and the remote files themselves.  By virtualizing the references to remote files and shares, you become free to manipulate where things are actually stored and become more creative about duplication, all without having to worry about breaking an application because you moved things. 

If you have worked with DFS and DFS-R (formerly FRS), you probably understand some of the limitations of what DFS provides. Companies that have lots of existing file shares (and who doesn't?) pretty much have to manually modify the application references to shares and files in order to use DFS.  AutoVirt doesn't require you to do this.  Instead, it automates the process of setting up the indirection by scanning the environment for shares and configuring what looks to me to be a DNS broker service.  When an application makes a remote file request, that broker redirects the request to the best location.  So now you can use the broker to move or replicate.  Being based on DNS, the broker also takes care of locality pretty easily.
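To make the indirection concrete, here is a toy sketch of the idea as I understand it. This is purely illustrative Python; the class, the names, and the mechanics are my assumptions, not AutoVirt's actual implementation:

```python
class ShareBroker:
    """Maps a stable, virtual share name to whichever server currently
    holds the data, much the way a DNS alias maps one name to another."""
    def __init__(self):
        self._locations = {}                 # virtual UNC name -> physical UNC path

    def register(self, virtual, physical):
        self._locations[virtual] = physical

    def resolve(self, virtual):
        """Answered on each file-open request: where does the data live now?"""
        return self._locations[virtual]

    def migrate(self, virtual, new_physical):
        """Move the data, then repoint the name; applications never notice."""
        self._locations[virtual] = new_physical


broker = ShareBroker()
broker.register(r"\\virt\finance", r"\\oldserver\finance$")
print(broker.resolve(r"\\virt\finance"))    # \\oldserver\finance$

broker.migrate(r"\\virt\finance", r"\\newserver\finance$")
print(broker.resolve(r"\\virt\finance"))    # \\newserver\finance$
```

The point of the sketch: applications only ever hold the virtual name, so moving the data is just one table update on the broker.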

I haven't worked with the product at this point, so there may be lurking issues with what they are doing, but Klavs (pronounced "Claus") had excellent answers to all the questions posed to him by me and others.  The company is focused on the enterprise, but I suspect there may be an even more important role for this stuff in the Cloud.  Check them out at  If you already have checked them out, let us know what you think.

Join the conversation

Hmmm, sounds like a software version of an ARX.  I wonder what the scale is like?  How many users can you direct through one of these before it becomes the bottleneck?



I guess my previous question should have been "How many concurrent file i/o requests before this thing becomes the bottleneck?"


Because it acts as a broker, it should scale quite high.  We're talking redirecting file open requests here.  

On the back end, it does have to monitor file changes to perform any replication - which is more likely to be a scale concern.  Replication is on a file basis, not a block basis, but Klavs indicated that most of the files you want this for are office files, meaning that when someone changes one character in a doc file the software rewrites the entire file anyway.
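A minimal sketch of what file-level (as opposed to block-level) replication means in practice. The polling-by-mtime approach and all names here are my own illustration, not AutoVirt's design; the key line is that the *entire* file is copied, however small the edit:

```python
import os
import shutil
import tempfile
from pathlib import Path


def replicate_if_changed(source: Path, replica: Path, last_mtime: float) -> float:
    """Copy the whole file to the replica if its mtime moved; return new mtime."""
    mtime = source.stat().st_mtime
    if mtime != last_mtime:
        shutil.copy2(source, replica)   # whole-file copy, however small the edit
    return mtime


# Tiny demonstration with temporary files:
tmp = Path(tempfile.mkdtemp())
src, dst = tmp / "report.doc", tmp / "report.doc.copy"
src.write_text("version 1")
mtime = replicate_if_changed(src, dst, last_mtime=0.0)

src.write_text("version 2, one small edit")
os.utime(src, (0, mtime + 10))          # force a visibly newer mtime
mtime = replicate_if_changed(src, dst, mtime)
```

With block-level replication only the changed block would cross the wire; here the one-character edit costs a full-file copy, which matches Klavs's point that Office applications rewrite the whole file on save anyway.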


Tim, with DFS links effectively being NTFS reparse points, and the built-in tool MKLINK (which works with UNC paths), what's the big differentiator?


Sorry, forgot to add: there's always the broker.... I'll have @appdetective break the brokers free.

Hah, I still do not quite understand the broker/brokerless thing. I really wish for an article on the topic. I mean, I'm not totally stupid; I see a lot of the points made against (and for) it in the specifics, but fail to grasp the whole picture. The philosophy behind it.

It'll get rather tedious for me to work through the various brokering mechanics, so I abstain. I feel like a left-out fool scratching my head.... Help me out, please!


Kimmo - I can't help you with broker(less)!  As to MkLink...

Let's say I go visit a typical server and look at what shares exist.  Maybe only a couple.  But who (what applications installed at what places) uses these links?  I have no friggin' idea.

So suppose I want to move that data to another server.  If I knew where all the apps were that referenced it, and knew how to make the apps look elsewhere, I could move it (ignoring synchronization issues).

Otherwise, yeah, I could move all of that data onto another server and place a link to it from its old location.  But then each use of the file would still go through this server.  What a mess.  Very soon I have such a complicated infrastructure that I don't dare take down a single old server because stuff might be passing through it.  Ugh!

At least I think that is the idea with this product.  It allows me to place a redirection to the share without knowing the applications.  Once in place, the server can go "poof" without harm.  Make sense?


Thanks for the info, Tim!

I'm not sure how to respond on the product as such, me quite obviously being very conservative about enterprise use cases while at the same time being a total geek. With regret, my rationalization is that this is an odd product for some even odder use cases from a totally unknown company. Here's where I'd rather bow out. Off.


This "DFS replacement" is an interesting product.  A little reading sheds some light on how this solution works without any agents or client-side changes.  On the client side, it appears that legacy DNS and then CIFS requests are handled by the AutoVirt nodes, which redirect to the new location.  There appears to be a feature-rich management framework that controls the migration, consolidation, replication, and archiving of your files via CIFS.  The licensing model appears to be perpetual with yearly maintenance.


We've been following this discussion over here at AutoVirt and Klavs has written an interesting response to some of the questions asked in this thread over at his blog (

In particular, Klavs answers these questions:

1) What’s the big differentiator to DFS?

2) What about reparse points and MKLINK? Why do you need a broker like AutoVirt instead of just using the built-in reparse points?

Thanks Tim!


@Brian Gladstein

Thanks for the link to your blog post. Appreciated


I've deployed AutoVirt, and I have to say, it is a slick product. They have an out-of-band approach to provide a global namespace while not getting in the datapath and impacting performance.

The best comparison I can make to another product is the now defunct Brocade StorageX/NetApp VFM product. It was also an out-of-band solution, but AutoVirt has added replication and archiving features in their recent releases.

ARX and RainFinity are in-band solutions, intercepting requests and rerouting them to the server and then back to the client, like a proxy file server. One difficulty with this approach is that you have to size the box doing this file virtualization to handle whatever traffic you MIGHT throw at it three years from now. Generally this means that the appliances have to be relatively beefy, and expensive, to handle the load.

AutoVirt, on the other hand, is able to run on just a few VMs: two for redundant namespace servers that perform redirections to the real file location, and one or more data movers for performing file migrations.
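The in-band versus out-of-band distinction can be sketched roughly like this. These are hypothetical toy classes to show where the data flows, not either vendor's API:

```python
class FileServer:
    """Stand-in for a CIFS file server; names here are illustrative."""
    def __init__(self, files):
        self.files = files                  # filename -> contents

    def read(self, filename):
        return self.files[filename]


class InBandProxy:
    """ARX/RainFinity style: every request, and its data, passes through
    the appliance, so it must be sized for peak file traffic."""
    def __init__(self, servers):
        self.servers = servers              # virtual share name -> FileServer

    def read(self, share, filename):
        return self.servers[share].read(filename)   # data traverses the proxy


class OutOfBandNamespace:
    """StorageX/AutoVirt style: the namespace server only answers
    'where does this share live?'; the client then reads directly."""
    def __init__(self, locations):
        self.locations = locations          # virtual share name -> FileServer

    def resolve(self, share):
        return self.locations[share]        # tiny redirect, no file data


server = FileServer({"budget.xls": b"spreadsheet bytes"})
proxy = InBandProxy({"finance": server})
namespace = OutOfBandNamespace({"finance": server})

via_proxy = proxy.read("finance", "budget.xls")                   # appliance in the data path
via_namespace = namespace.resolve("finance").read("budget.xls")   # direct read after redirect
```

Both paths return the same bytes; the difference is that the out-of-band namespace server only handles the small redirect, which is why it can run on a couple of modest VMs instead of a beefy appliance.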

The support for the product was also phenomenal; we had a VERY complex cluster to deal with in a migration, one that had been in operation for years. Support worked with us to successfully migrate the data to a new NetApp storage system.

Two thumbs up!

Hmmm, looks like the beginning of a blog post for


@Kimmo Jernstrom

Anytime Kimmo!