Understanding all data and system availability solutions in 30 minutes: IT made easy!

While storage solutions and virtualization can often become a goal in themselves, designing a datacenter availability solution is not a simple task: all aspects of the datacenter influence one another. To position the products needed for datacenter solutions and to create an understanding of the components, a solutions diagram was created. This diagram in no way pretends to be a complete, detailed overview of every possible solution, but tries to give a general overview of the methods involved.

The Data and System Availability diagram aims to give an overview of all of the components that make up the datacenter and the relations between them.

Advanced IT-Infrastructures made easy


In an advanced ICT infrastructure, it's all about Users, Applications, Data and System Availability.

Infrastructures are built to provide users with the applications they need to do their work. These applications produce data, and that data is provisioned by systems. To allow users to work anywhere at any time, the applications and the desktops they work on have to be delivered to them in a certain way. To give an overview of the possible ways to do that, a diagram called 'Application and Desktop Delivery solutions' was created. The components used in this diagram include Virtual Desktop Infrastructure, Terminal Server, Bladed Workstations, Application Virtualization and many more.

An advanced ICT infrastructure made easy: "Application and Desktop Delivery" and "Data and System Availability" are key!

Data and System Availability solutions overview

The dynamics of applications and desktops make them location, device and time independent. Data and systems, however, have completely different availability requirements. They are typically stored in a datacenter that is not dynamically provisioned, although we may see that change in the near future with the upcoming cloud computing initiatives.


(A high-quality version of the diagram can be downloaded here.)


Servers

It’s servers that provide users or applications with the services they need. Services can be anything: web services, file and print services, authentication, database services, etc. In a traditional datacenter, these services are mostly executed on physical servers.

These physical servers come with a lot of resources that most services don't need. They either have far too much storage, CPU power and memory, or too little. And when there are not enough resources available, adding more usually adds too much of a particular resource, over-dimensioning the server.

Physical servers with local storage also have a few disadvantages that limit their availability. If a physical server fails, the service is no longer available. A new server has to be set up, data restored and settings reconfigured: all in all a process that can take up to several days.

Storage

To cope with these availability problems, it makes sense to start with centralizing the storage. This makes it easier to allocate the right amount of storage to a service and makes it easier for the service to access it from another location, thus enhancing its availability.

Centralizing storage also has some disadvantages. All storage is now on one system that becomes a new single point of failure. If it fails, the whole infrastructure fails.

So this central storage has to be redundant in every aspect. It needs redundant connections, redundant switches, redundant power, redundant hard disks, redundant everything. This is what makes a Storage Area Network (SAN) more expensive than local storage.

Storage Area Network

Connectivity to the SAN can be divided into two main groups: Fibre Channel (FC) and Ethernet. While Fibre Channel provides the best performance, it's also the most expensive. A very valid question when designing a storage infrastructure is therefore: 'does the customer really need that high-end performance?'. The alternatives aren't that far behind anymore.

Ethernet based infrastructures are less expensive because connectivity takes place over regular Ethernet switches and regular Network Interface Cards (NICs).

Not too long ago, iSCSI was the main storage protocol used over Ethernet. It allows LUNs to be presented as full disks to a host. With the rise of virtualization technology, however, NAS is now a strong contender too. Whether it's NFS or CIFS, a host simply connects to a network share and stores its data on the file system that the storage provides. This flexibility has some disadvantages, though: hosts are no longer managing the storage, and proprietary file systems like VMFS don't work on it. On the other hand, a storage solution with a smart file system like NetApp's Write Anywhere File Layout (WAFL) makes it very easy, with the right toolset, to work with (consistent!) snapshots.

Thin Provisioning

With a SAN, storage per gigabyte is more expensive than local storage. The advantages of having data available independently of the servers make up for a lot of that cost, but it's still better to be conservative when allocating storage. Application developers and server administrators tend to ask for more storage than they actually need.

One solution to this problem is to give them the storage they ask for, but only actually store what they really use. This is called 'thin provisioning': a smart way to dynamically grow the LUN on the array as space is actually needed.
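As a rough illustration of the idea (not any particular array's implementation), the Python sketch below presents a large logical LUN to the host while only allocating backing blocks when they are first written; the block size and class names are assumptions made for this example.

# Minimal sketch of thin provisioning (illustrative only): the LUN advertises
# its full logical size, but backing blocks are allocated only on first write.

BLOCK_SIZE = 4096  # bytes per block; an assumption for this example

class ThinLun:
    def __init__(self, logical_size_gb):
        self.logical_size = logical_size_gb * 1024**3  # size promised to the host
        self.blocks = {}                               # block index -> data actually stored

    def write(self, offset, data):
        block = offset // BLOCK_SIZE
        self.blocks[block] = data                      # allocate on first write

    def allocated_bytes(self):
        return len(self.blocks) * BLOCK_SIZE           # what the array really consumes

lun = ThinLun(logical_size_gb=500)                     # host sees a 500 GB disk
lun.write(0, b"boot sector")
print(lun.logical_size, lun.allocated_bytes())         # 500 GB promised, 4 KB consumed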

Linked Clones

Another way to save storage is to use linked clones. The principle of this technique is that it provides one set of data to multiple virtual machines, while keeping track of the differences between them and storing those differences in a separate location. When this is done on the array, the performance impact is negligible.

A physical host server can also provide virtual machines with linked clone disks. This is a little slower and takes some CPU resources away from the VMs, but it doesn't require an intelligent storage array and is also a very good solution.
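A minimal sketch of the principle, assuming nothing about a specific hypervisor or array: every clone shares one read-only base image and keeps only its own changed blocks in a small delta map.

# Minimal sketch of a linked clone (illustrative only): every VM shares one
# read-only base image and stores only its own changed blocks in a delta map.

class BaseImage:
    def __init__(self, blocks):
        self.blocks = blocks                # block index -> data, never modified

class LinkedClone:
    def __init__(self, base):
        self.base = base
        self.delta = {}                     # only this VM's differences

    def read(self, block):
        # changed blocks come from the delta, everything else from the base
        return self.delta.get(block, self.base.blocks.get(block))

    def write(self, block, data):
        self.delta[block] = data            # copy-on-write: the base is never touched

base = BaseImage({0: b"golden image block"})
vm1, vm2 = LinkedClone(base), LinkedClone(base)
vm1.write(0, b"vm1 change")
print(vm1.read(0), vm2.read(0))             # vm1 sees its change, vm2 still sees the base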

Deduplication

At the moment, deduplication is mainly used in backup scenarios. That means data is first stored on a main storage system and then deduplicated at backup time on a separate system or on a different tier of the storage system. The reason it is not yet used on active data is mainly that deduplication is a very calculation-intensive process that, at the moment, simply isn't fast enough for modern storage demands.

The deduplication process works by first accepting all data. Then, either inline or in a background process, it compresses the data and checks, at block level, whether each block already exists. If it does, it simply points to the existing block; if not, the new block is stored. This can reduce the data size of multiple backups by 50% to as much as 90% compared to a traditional backup data set.
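The block-level part of that process can be sketched in a few lines of Python (compression is omitted for brevity, and the block size and hash choice are assumptions for the example): every block is hashed, stored only if that hash hasn't been seen before, and otherwise replaced by a pointer to the existing copy.

# Minimal sketch of block-level deduplication (illustrative only): each block
# is hashed and stored once; duplicates become references to the stored block.

import hashlib

BLOCK_SIZE = 4096  # an assumption for this example

def deduplicate(data):
    store = {}      # hash -> unique block actually kept
    pointers = []   # ordered list of hashes that reconstructs the original data
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block           # new block: keep it
        pointers.append(digest)             # existing block: just point to it
    return store, pointers

backup = b"A" * BLOCK_SIZE * 8 + b"B" * BLOCK_SIZE * 2   # highly repetitive backup data
store, pointers = deduplicate(backup)
print(len(pointers), "blocks referenced,", len(store), "blocks actually stored")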


Archiving

Because a central, high-performance storage system can be quite expensive, a lot of companies decide to move less frequently used data to less expensive, high-capacity storage. This is typically done by setting up the storage in multiple tiers.

This process can be handled entirely inside the storage system, which moves data at block level to a slower, and therefore less expensive, set of disks. When the data is accessed again, it is moved back to the fast storage tier. Clients and applications can access all the data, as it stays online at all times.

Another way to archive data is to have a data management solution decide what data to move. This is then done at the file, mail or database object level. The advantage of this approach is that it actually moves data out of the systems, possibly leaving a so-called 'stub' behind as a reference for clients and applications. This means that when the data is accessed again, it needs to be restored from another location, which can be a time-consuming process. On the other hand, it significantly reduces the active data size, which in turn reduces backup time by large factors.
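A minimal sketch of this file-level approach might look as follows; the paths, the age threshold and the stub format are all assumptions for the example, and a real data management solution would of course also handle mail and database objects, permissions and recall.

# Minimal sketch of file-level archiving (illustrative only): files that have
# not been accessed for a while are moved to a cheaper archive location and a
# small stub recording the new location is left behind.

import os
import shutil
import time

ARCHIVE_DIR = "/mnt/archive"        # hypothetical high-capacity, low-cost tier
AGE_LIMIT = 180 * 24 * 3600         # archive anything untouched for ~180 days

def archive_old_files(directory):
    now = time.time()
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if not os.path.isfile(path) or path.endswith(".stub"):
            continue                                    # skip directories and stubs
        if now - os.path.getatime(path) > AGE_LIMIT:
            target = os.path.join(ARCHIVE_DIR, name)
            shutil.move(path, target)                   # move the data off primary storage
            with open(path + ".stub", "w") as stub:     # leave a reference behind
                stub.write(target)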

Indexing service

When data is moved between different storage tiers or systems, clients, applications and backup systems can get confused about where the data is actually stored.
An indexing server keeps track of the location of all data in the storage system. It interfaces with the archiving solution and provides a transparent interface to clients and applications; when archived data is requested, the archiving solution moves it back to the appropriate tier on demand.
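Conceptually, the index is little more than a lookup table from object to location that triggers a recall when the object lives on the archive tier. The sketch below illustrates that; all class and method names are illustrative, not taken from any product.

# Minimal sketch of an indexing service (illustrative only): it records where
# every object lives and asks the archiving solution to recall archived data
# before handing a location back to the client.

class Index:
    def __init__(self):
        self.locations = {}                         # object id -> (tier, path)

    def record(self, object_id, tier, path):
        self.locations[object_id] = (tier, path)

    def locate(self, object_id, archiver):
        tier, path = self.locations[object_id]
        if tier == "archive":
            path = archiver.recall(object_id)       # hypothetical recall to primary storage
            self.record(object_id, "primary", path)
        return path                                 # clients always get a usable location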

Virtualization

Once the availability of data is improved, it’s time to do the same for the servers. Having data online is only half the solution. Without services to deliver it to the clients and applications, it is of no more use than a backup.

A physical solution to improve server availability is clustering. Clustered systems require shared storage or have their own copy of the data that is kept in sync by using application level replication.

Another solution to improving server availability is virtualization. Virtual machines are independent of the physical hardware and can very easily be moved from one host to another, whether this host is on the same site or a failover site. Higher server availability can be achieved by a virtualization solution that actively monitors all virtual machines and in case of a physical host failure, automatically restarts the virtual machine on another host. Depending on the management tools available, it’s also possible to load balance all virtual machines across the available physical hosts by implementing live migration options.
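The restart behaviour described above can be sketched as follows; the host and VM names are made up, and a real hypervisor cluster would do this through its own management APIs rather than plain Python objects.

# Minimal sketch of the restart behaviour (illustrative only): a monitor
# notices a failed host and restarts its virtual machines on the surviving
# host with the most free capacity.

class Host:
    def __init__(self, name, capacity):
        self.name, self.capacity = name, capacity
        self.vms, self.alive = [], True

    def free_capacity(self):
        return self.capacity - len(self.vms)

def failover(hosts):
    survivors = [h for h in hosts if h.alive]
    for failed in (h for h in hosts if not h.alive):
        for vm in failed.vms:
            target = max(survivors, key=lambda h: h.free_capacity())  # simple load balancing
            target.vms.append(vm)                                     # "restart" the VM elsewhere
        failed.vms = []

host_a, host_b = Host("host01", capacity=8), Host("host02", capacity=8)
host_a.vms = ["web01", "db01"]
host_a.alive = False                   # the monitor detects a failed host
failover([host_a, host_b])
print(host_b.vms)                      # ['web01', 'db01'] now run on the surviving host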

There are two main types of hypervisors for virtualization solutions: the thin hypervisor, also called a microkernelized hypervisor, and the thick hypervisor, also called a monolithic hypervisor. Thin hypervisors are used by virtualization solutions like XenServer and Hyper-V, while ESX uses a thick hypervisor.

The complete whitepaper can be downloaded here.

Credits also go to Herco van Brug (hbr@pqr.nl); we developed the Data and System Availability diagram and the whitepaper together.
