Thomas Koetzing was recently talking about features he'd like to see in future versions of Citrix's Web Interface, and one thing he mentioned was an option in the administration GUI to configure the robots.txt file. I think this is a fantastic idea, as it would bring this important security issue to the forefront.
The problem is that most people who use Citrix's Web Interface or NFuse products forget that all websites are indexed by Google, MSN, and other search engines. This means that typing a few specific keywords into a web search box can instantly deliver the URLs of hundreds of Citrix Web Interface login pages. For example, a Google search on "MetaFrame Presentation Server Login" (with quotes) will take you directly to the Web Interface login pages of over 70 companies. "MetaFrame XP Login" will net you almost 300. Of course, you can also perform custom searches combining these words with selected Citrix-only web paths ("/metaframe/default/login.aspx," etc.) to find literally thousands of corporate Web Interface front doors. (This will work even if you've changed your default text.) An attacker could even focus on a specific company by including its name as a keyword in the search.
So what's the big deal? I’d be willing to bet that a fair number of these sites would be accessible via administrator/password, admin/12345, administrator/administrator, test/test, or similar user credentials.
It's also important to remember that your WI Admin and PNAgent Admin sites are generally accessible from the Internet as well, so an attacker could build a fairly elaborate phishing scheme, repointing your Web Interface server at their own farm to collect even more credentials.
This issue is something most Citrix administrators probably don't think about, which is why I'd like to bring it into the spotlight here. Fortunately, there's a really easy fix, and it should take less than a minute per Web Interface server.
To prevent this, you need to create a robots.txt file. As a matter of convention, any search engine spider that visits your site will first look for a file called robots.txt. This is a simple text file you can use to tell search engine indexers whether they're allowed to index your site and, if so, which pages they may visit.
If your Web Interface or NFuse server has its website to itself, you can disallow all indexing by all search engines by creating a file called robots.txt with the following two lines:
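User-agent: *
Disallow: /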
You can also build more elaborate robots.txt files that block only certain pages or only certain search engines. Check out this robots.txt tutorial for more information.
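For example, if you only want to keep spiders away from the Web Interface pages themselves while leaving the rest of your site indexable, a robots.txt along these lines should do it (assuming the default /Citrix/MetaFrame path):

User-agent: *
Disallow: /Citrix/MetaFrame/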
Once this file is created, put it in the root of your web server and you're all set. Important: You'll want to make sure that it's in the root and that it's accessible via http://yourwebinterfaceserver.yourcompany.com/robots.txt or whatever. Do not place it in the /Citrix/MetaFrame folder or else it won't work.
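On a default IIS installation that root is normally C:\Inetpub\wwwroot (or wherever you've pointed your site's home directory), so the file ends up at C:\Inetpub\wwwroot\robots.txt.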
Of course excluding your site from search engines doesn't count as "real" security and is not a replacement for having good passwords and/or using two-factor authentication. However, it does ensure that an attacker or prankster will have an easier time finding someone else, and that should make your life easier.