Natural Language User Interfaces are making their way into useful products

Elements of Natural Language User Interfaces are making it into enterprise products. We've seen this before, but this time it's for real.

As the desktop virtualization space matures, the main platforms have grown to include capabilities that used to require third-party software. This has been good for us, because it pushes the third-party vendors toward the fringes to find unique solutions to new problems rather than resting on their laurels. App management is one of those areas, and we've covered it a lot. At VMworld, though, I noticed that monitoring platforms like Lakeside SysTrack, Liquidware Labs Stratusphere, and even VMware's TrustPoint (based on Tanium, which Jack wrote about a few weeks ago) are beginning to use natural language queries to make it easier to extract relevant information from all the data they collect.

These Natural Language User Interfaces, or NLUIs, are nothing new. In one way or another we've seen them for years, but until recently they've been nothing more than gimmicks. They've been buggy, or limited to a very small set of phrases you could ask. Oftentimes, the phrasing wasn't natural at all. Instead of asking "What time do the Buckeyes play?" you would have to say "Find the time Ohio State plays football on Saturday." That's hardly natural, and it's definitely not conversational. Really, those systems were just running speech to text and throwing out everything but keywords, which is something we've been able to do for well over a decade.
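For a rough picture of what those older systems were doing under the hood, here's a minimal sketch of the keyword-stripping approach. The stop-word list and the example are made up for illustration, not pulled from any real product:

```python
# A rough sketch of the old "keyword soup" approach: strip the filler words out
# of the transcribed question and hand whatever is left to a generic search.
# The stop-word list here is a made-up placeholder.
STOP_WORDS = {"what", "do", "the", "find", "on", "a", "an", "is"}

def keyword_query(transcribed_text):
    words = [w.strip("?!.,").lower() for w in transcribed_text.split()]
    return [w for w in words if w and w not in STOP_WORDS]

print(keyword_query("What time do the Buckeyes play?"))
# -> ['time', 'buckeyes', 'play']  -- just leftover keywords, no sense of intent
```

Everything that made the question a question is gone; all you get is a pile of nouns to throw at a search index.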

Today things are different, and these new systems are much more intelligent. I spoke with Mike Schumacher of Lakeside Software at VMworld, and he explained how their platform works. With their system, you can speak to the computer or just type in the question. The two are handled identically, since speech is simply rendered to text before it's analyzed. From there, the text runs through Lakeside's own query analysis system, which continuously learns from the questions that are asked.
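To picture that front end, here's a minimal sketch with made-up speech_to_text() and analyze_query() stand-ins (my own placeholders, not Lakeside's actual API). The spoken path and the typed path converge on the exact same analysis step:

```python
# Hypothetical front end: spoken and typed questions end up in the same place,
# because speech is rendered to text before analysis. Both helpers are stubs.
def speech_to_text(audio_bytes):
    return "which desktops use the most gpu"      # stub for a real recognizer

def analyze_query(text):
    return "analyzing: " + text                   # stub for the query analyzer

def ask_spoken(audio_bytes):
    return analyze_query(speech_to_text(audio_bytes))  # speech -> text -> analysis

def ask_typed(text):
    return analyze_query(text)                    # typed input goes straight in

print(ask_spoken(b"...") == ask_typed("which desktops use the most gpu"))  # True
```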

Here is where it gets interesting. The question you ask is given a confidence score, much like what you saw when IBM Watson played Jeopardy.

If the confidence score is above a certain threshold, Lakeside's system displays the data it thinks you were asking for. But if the confidence score falls below that threshold, Lakeside sends the query off to the real Watson (a cloud service offered by IBM) to put its supercomputing superpowers to work. When Watson finishes, which is pretty much instantly, it reports back a response with a high confidence score, along with information that Lakeside can feed back into its own algorithms. That means every time Lakeside defers to Watson, the Lakeside platform gets smarter, too. The idea is that the more the system is used, the less it will rely on Watson. You could argue that Lakeside and Watson are still throwing out words that aren't needed, but there's an extra level of inference that actually gauges the intent of a question by the way it was asked, as opposed to just dumping all the nouns and adjectives into a one-size-fits-all search query.
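To make the hand-off concrete, here's a minimal sketch of confidence-threshold routing. The threshold value, the toy analyzer, and the stubbed-out cloud call are all my own placeholders, not Lakeside's or IBM's actual code; it just shows the general idea of answering locally when confidence is high, deferring to the cloud when it isn't, and learning from the deferral:

```python
# Hypothetical sketch of confidence-threshold routing: answer locally when the
# local analyzer is confident, otherwise defer to a cloud NLU service (Watson,
# in Lakeside's case) and fold the answer back in so the next similar question
# stays local. Every name and number here is illustrative, not a real product's.
CONFIDENCE_THRESHOLD = 0.8   # assumed cutoff, not Lakeside's real value

class LocalAnalyzer:
    """Toy intent matcher that remembers phrasings it has already seen."""
    def __init__(self):
        self.known = {"which desktops use the most gpu": "gpu_by_desktop_report"}

    def analyze(self, question):
        q = question.lower().strip("?! ")
        if q in self.known:
            return self.known[q], 0.95    # seen before: high confidence
        return "unknown", 0.30            # new phrasing: low confidence

    def learn(self, question, intent):
        self.known[question.lower().strip("?! ")] = intent

def cloud_interpret(question):
    """Stand-in for the Watson call; pretend it always nails the intent."""
    return "gpu_by_desktop_report"

def handle(question, local):
    intent, confidence = local.analyze(question)
    if confidence < CONFIDENCE_THRESHOLD:
        intent = cloud_interpret(question)    # defer to the cloud service
        local.learn(question, intent)         # feedback loop: get smarter locally
    return intent

local = LocalAnalyzer()
print(handle("Which of my users use the most GPU?", local))   # deferred to the cloud
print(handle("Which of my users use the most GPU?", local))   # now handled locally
```

The important bit is the learn() call: every deferral teaches the local model a new phrasing, so over time it leans on the cloud service less and less.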

Now, to be fair, what I saw at VMworld was a little rough around the edges. Also in the spirit of fairness, I did what any nerd would do when I saw it: I tried to break it. I succeeded a few times by asking what I thought were normal questions. For example, "Which of my users use the most GPU?" confused the system because, in the scope of the data available, desktops have GPUs, not users. The thing is, even a bad query, one that either didn't work or didn't produce the report I wanted to see, was still good data for the engineers working on the backend, which will result in a better product down the road. The next time they update their linguistics engine, you'll be able to ask that question.

It's cool to see products adding features like this. We've gotten addicted to collecting data, but we're not always capable of handling the crushing weight of it. Loads of metrics about everything under the sun are only helpful if you can access them and make sense of the information. As NLUIs advance and become more ingrained in our data collection and monitoring systems, our lives should get much, much easier. Siri started off as little more than a gimmick, and now even the grouchiest tech haters are using it. Just think about what we could do in our enterprises.

Join the conversation

1 comment


Now that I think about it, natural language queries are probably easier in IT admin apps than in general-purpose digital assistants. In these cases, the NLUI just has to deal with a narrowly defined domain, rather than, well, everything...

What will be interesting to see is whether this type of stuff is as disruptive to apps and EUC as mobile was. Just imagine, in a few years we could be talking about transforming apps to be NLUI-enabled, just like how we talk about mobile-enabling apps now.

The question is: how can we use NLUI to create apps that were never possible before? Is NLUI a faster horse, or can you make it a car? (Sorry to use that overused cliché.)
