What are High Availability Servers?
High availability servers are a modern implementation of server hardware designed for extremely high fault tolerance, so that the server keeps performing even when a component fails. They are the latest buzzword in server technology, with major CPU designers hailing them as the next evolutionary step.
Traditionally, server hardware and software have been designed to optimize the functions of a single CPU controlling memory, storage, and input/output. This leads to never fully satisfactory multitasking that depends largely on operating system software juggling processes within the CPU. The method gives the illusion of multitasking, but is more accurately described as task switching than true multitasking.
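The task-switching illusion can be sketched in a few lines of Python (a toy model, not any real scheduler): one "CPU" runs a time slice of each job in turn, so the jobs appear to progress together even though only one ever runs at an instant.

```python
from collections import deque

def task(name, steps):
    """A job broken into small steps; yields control after each step."""
    for i in range(steps):
        yield f"{name}: step {i}"

def run(tasks):
    """A single 'CPU' switching between tasks round-robin style."""
    queue = deque(tasks)
    trace = []
    while queue:
        current = queue.popleft()
        try:
            trace.append(next(current))  # run one time slice
            queue.append(current)        # preempt and requeue
        except StopIteration:
            pass                         # task finished, drop it
    return trace

trace = run([task("A", 2), task("B", 2)])
# Steps from A and B interleave, though only one task runs at a time
```

Running this produces the interleaved trace `A: step 0, B: step 0, A: step 1, B: step 1`: the appearance of concurrency without any real parallel execution.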
Current designs of high availability servers, known as shared memory multiprocessors (SMP), pack increasing numbers of processors into a single server and make use of larger banks of shared memory, allowing tasks to be allocated scalably and processed faster. Ultimately, SMPs are designed for more reliable service and better uptime.
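As a rough analogy for the SMP model (a sketch only, with hypothetical workload names), the snippet below uses two worker threads as stand-in "processors" that both read one shared in-memory data bank and split a task between them:

```python
from concurrent.futures import ThreadPoolExecutor

# One bank of "shared memory" visible to every worker, as in an SMP design
shared_data = list(range(100))

def process(chunk):
    """Each 'processor' works on its own slice of the shared data."""
    return sum(shared_data[i] for i in chunk)

# Split the work between two 'processors'
chunks = [range(0, 50), range(50, 100)]
with ThreadPoolExecutor(max_workers=2) as pool:
    partials = list(pool.map(process, chunks))

total = sum(partials)  # same answer as one processor, work shared across two
```

The point of the analogy is that adding workers scales the task allocation without copying the data: all processors see the same memory.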
Advances in hot swap technology are actively used within high availability servers: it is now applied not just to USB accessories but also to hard drives, memory, video cards, and even some CPUs, allowing online spares to be provisioned. The ability of a server to keep functioning through intelligent allocation across its remaining working components is a major development, offering better than average uptime.
A failed component is intelligently taken out of service until IT staff have a chance to replace it. In combination with RAID storage and larger memory blocks, modern servers are evolving from single computers into mini networks and can competently serve in smaller storage area networks with fewer servers overall.
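The failover pattern described above can be sketched as follows (a minimal illustration with invented names, not any vendor's firmware logic): a failed part is removed from the active set and an online spare is promoted, so the server stays up while the bad part awaits replacement.

```python
class ComponentPool:
    """Toy model of hot-swap failover with online spares."""

    def __init__(self, active, spares):
        self.active = list(active)
        self.spares = list(spares)
        self.failed = []

    def report_failure(self, name):
        # The failed part is 'ignored' (taken out of service) until replaced;
        # an online spare is promoted so service continues uninterrupted.
        if name in self.active:
            self.active.remove(name)
            self.failed.append(name)
            if self.spares:
                self.active.append(self.spares.pop(0))

    def is_up(self):
        return len(self.active) > 0

pool = ComponentPool(active=["disk0", "disk1"], spares=["disk2"])
pool.report_failure("disk1")
# The pool stays up: disk2 takes over while disk1 awaits replacement
```

This is essentially the behaviour a RAID hot spare provides for storage, generalized to other hot-swappable components.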
Using high availability servers in this manner also allows for more robust server mirroring and clustering that is internally optimized with its own built-in redundancies, reducing the need for costly backup networks. This makes outsourced disaster recovery data centers an attractive option, despite network speed limits that make offsite backup less appealing.
As the technology improves, software vendors are anticipating huge demand for new generations of server management software that not only manages a single multiprocessor server, but is also able to manage data across smaller more powerful clustered networks.
Particularly important in high availability server clusters is proper configuration of server load balancing, a technique in which two or more servers process similar tasks while accessing the same database, allowing more users to be served concurrently. It may seem obvious that load balancing needs to be considered, yet many administrators fail to tune their networks for optimal balancing, leading to slower performance and potential glitches.
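To make the idea concrete, here is a minimal sketch of the simplest load balancing strategy, round-robin (the server names are hypothetical, and real balancers add health checks and weighting on top of this):

```python
import itertools

class RoundRobinBalancer:
    """Minimal sketch: spread requests across servers sharing one database."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def route(self, request):
        server = next(self._cycle)  # pick the next server in turn
        return server, request

balancer = RoundRobinBalancer(["app1", "app2"])
assignments = [balancer.route(f"req{i}")[0] for i in range(4)]
# Requests alternate: app1, app2, app1, app2
```

Because both servers read and write the same database, either one can handle any request, which is what lets the cluster absorb more concurrent users than a single server could.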
Maintaining high availability servers requires up-to-date, specialist knowledge of the tools and technologies used to keep operating platforms secure against attack and resilient against failure. Staff employed in this important role should be certified by one of the major vendors, having passed the relevant examinations.