How to converse in cloud: Cloud balancing
Thu, 9th Feb 2017

As with many cloud technologies, cloud balancing not only changes the nature of traditional load balancing, it changes the whole way applications are delivered to end users.

The Server Traffic Cop

To understand cloud balancing, it's important to understand load balancers. Anyone in IT has most likely been working with load balancers (also known as application delivery controllers) for years. Load balancers represent one of the best ways to ensure fast application response and consistent uptime.

Load balancers act like traffic cops, standing in front of a bank of web or application servers and routing each incoming request to the server or virtual machine best equipped to quickly and efficiently fulfill it, while taking care not to overload any single server.

Load balancers continually monitor server health, so if one server drops out due to maintenance or a hardware or software failure, they redistribute future requests among the remaining functioning servers.

Similarly, if an application server is added to the server farm, the load balancer will start routing some requests to the new server.
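To make the traffic-cop idea concrete, here is a minimal sketch of the routing behaviour described above: a generic least-connections algorithm with health checks. It is illustrative only, not any vendor's implementation, and the Server and LoadBalancer classes are assumptions made for this example.

```python
# Minimal sketch (illustrative, not a product implementation) of the routing
# logic described above: send each request to the healthy back-end server
# with the fewest active connections, and cope with servers leaving or joining.

class Server:
    def __init__(self, name):
        self.name = name
        self.healthy = True          # updated by periodic health checks
        self.active_connections = 0  # rough measure of current load

class LoadBalancer:
    def __init__(self, servers):
        self.servers = list(servers)

    def add_server(self, server):
        # A newly added server starts receiving a share of requests straight away.
        self.servers.append(server)

    def route(self):
        # Only consider servers that passed their last health check.
        healthy = [s for s in self.servers if s.healthy]
        if not healthy:
            raise RuntimeError("no healthy servers available")
        # "Least connections": the server best equipped to take the request now.
        target = min(healthy, key=lambda s: s.active_connections)
        target.active_connections += 1
        return target

# Usage: one server fails its health check; traffic shifts to the others.
pool = LoadBalancer([Server("web1"), Server("web2"), Server("web3")])
pool.servers[0].healthy = False
print(pool.route().name)  # "web2" - web1 is skipped because it failed its check
```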

More recently, application delivery controllers (ADCs) have added a bevy of optional capabilities, including SSL termination, access control, DDoS attack protection, application firewalls and more.

They've also evolved from mostly hardware solutions to a combination of hardware and standardized virtual application delivery controller (vADC) software that can be deployed globally and in the cloud.

These global, cloud-based load balancing environments require direct and secure interconnection to support such critical capabilities reliably, effectively and efficiently.

Going global

Global server load balancers (GSLBs) take the load balancing concept to the next level and really set the stage for cloud balancing. GSLBs present a single virtual IP address to the client while distributing web and application requests among globally dispersed data centers.

GSLBs also take into account the user's location, the health of network and data center resources, and any number of other configurable variables.

This is a great way to provide robust disaster recovery, business continuity and performance. Directing users to the closest data center geographically ensures low latency, which is increasingly important in a streaming, mobile world.
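As an illustration of the GSLB decision, the sketch below resolves a single virtual hostname to the closest healthy data center for a given user location. The data centers, coordinates and IP addresses are made up for the example, and a real GSLB weighs many more signals than geographic distance.

```python
# Hedged sketch of the GSLB behaviour described above: answer a request for
# one virtual address with the IP of the closest healthy data center.
# All sites, coordinates and IPs below are illustrative placeholders.
from math import radians, sin, cos, asin, sqrt

DATA_CENTERS = [
    {"name": "sydney",    "lat": -33.87, "lon": 151.21, "ip": "203.0.113.10", "healthy": True},
    {"name": "singapore", "lat": 1.35,   "lon": 103.82, "ip": "203.0.113.20", "healthy": True},
    {"name": "ashburn",   "lat": 39.04,  "lon": -77.49, "ip": "203.0.113.30", "healthy": False},
]

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two points on Earth.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def resolve(user_lat, user_lon):
    # Only healthy sites are candidates; pick the geographically closest
    # one to keep latency low, as described above.
    healthy = [dc for dc in DATA_CENTERS if dc["healthy"]]
    best = min(healthy, key=lambda dc: distance_km(user_lat, user_lon, dc["lat"], dc["lon"]))
    return best["ip"]

# A user in Melbourne is sent to Sydney; Ashburn is skipped because it is down.
print(resolve(-37.81, 144.96))  # 203.0.113.10
```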

Into the cloud

The emerging category of cloud balancing takes global load balancing further into the hybrid, multi-cloud world, where applications may reside in many different private and public clouds. As with global load balancers, cloud balancers distribute requests among relevant private and public cloud services.

With cloud balancing, a new set of decision variables may come into play, including the following potentially configurable cloud balancing policy items (a simple combination of several of them is sketched after the list):

  • Compliance
  • Time of day
  • Cost of delivery
  • Cloud service capacity
  • Service level agreements
  • Contractual obligations
  • Energy consumption
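
The sketch below shows one way such a policy might be expressed and evaluated, treating compliance, contractual data residency and SLAs as hard constraints and cost of delivery and spare capacity as soft preferences. The cloud names, fields and thresholds are illustrative assumptions, not features of any particular product; time of day and energy consumption could be folded in the same way.

```python
# Illustrative sketch of a policy-driven cloud balancing decision using a
# subset of the variables listed above. Clouds, regions and numbers are
# made-up assumptions for the example.
from dataclasses import dataclass

@dataclass
class CloudEndpoint:
    name: str
    region: str
    cost_per_request: float   # cost of delivery
    spare_capacity: float     # 0.0 - 1.0, cloud service capacity
    meets_sla: bool           # service level agreements
    compliant: bool           # compliance / contractual obligations

def choose_cloud(clouds, data_must_stay_in="AU"):
    # Hard constraints first: compliance, SLA and data residency rule a cloud in or out.
    eligible = [c for c in clouds
                if c.compliant and c.meets_sla and c.region == data_must_stay_in]
    if not eligible:
        raise RuntimeError("no cloud satisfies the policy")
    # Soft preferences next: cheapest delivery, breaking ties on spare capacity.
    return min(eligible, key=lambda c: (c.cost_per_request, -c.spare_capacity))

clouds = [
    CloudEndpoint("public-cloud-a", "AU", 0.0012, 0.40, True, True),
    CloudEndpoint("public-cloud-b", "US", 0.0009, 0.80, True, True),  # cheaper, but wrong region
    CloudEndpoint("private-cloud",  "AU", 0.0020, 0.65, True, True),
]
print(choose_cloud(clouds).name)  # public-cloud-a
```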

Ideally, cloud load balancing solutions should be well integrated, able to communicate with each other and easily managed.

This can happen by deploying a single integrated solution leveraging an Interconnection Oriented Architecture (IOA), a repeatable model for interconnecting people, locations, cloud and data.

For example, Equinix partners with F5 Networks to enable organizations to deploy F5 BIG-IP load balancing, global server load balancing and cloud balancing solutions through the Equinix Cloud Exchange and Performance Hub.

Fast, direct multi-cloud interconnections and F5 cloud balancing add up to a very powerful combination for ramping up application performance and security.

Article by Ryan Mallory, Equinix blog network