From service providers through to ambitious home network operators, load balancers fulfil a variety of purposes. However, they offer many more functions than the mere “distribution of loads” the name initially suggests. This article takes a look at those features, what they are used for and the types of load balancers in use today.
What are the benefits of distributing loads in the network?
Network load describes the number of simultaneous access attempts to a resource. A resource can be a server, for example, which is expected to respond to each request with the lowest possible latency. For this to work fast enough every time, no single resource may be overloaded with too many simultaneous incoming requests.
That is where the load balancer comes in. It steers the incoming network load in different directions. In other words, a load balancer enables horizontal load distribution in the conventional sense, so the network can be scaled without restricting any single user.
Horizontal load distribution means that enough resources can be provided simultaneously to satisfy maximum demand at peak times. This type of load distribution is especially useful when you yourself provide various services on site.
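Horizontal distribution in its simplest form is round-robin scheduling: each incoming request goes to the next backend in turn, so every resource receives an equal share. The following sketch illustrates the idea; the backend addresses are placeholders, not part of any real product.

```python
from itertools import cycle

# Hypothetical pool of backend servers; the addresses are placeholders.
BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def round_robin(backends):
    """Yield backends in turn, spreading requests evenly (horizontal distribution)."""
    pool = cycle(backends)
    while True:
        yield next(pool)

balancer = round_robin(BACKENDS)
# Six requests land on the three backends in rotation,
# so each backend handles exactly two of them.
targets = [next(balancer) for _ in range(6)]
```

Real load balancers refine this with health checks and weighting, but the even rotation shown here is the core of horizontal scaling.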
Modern load balancing requires flexibility
The simple approach of offering every request the same speed requires constantly provisioning the resources needed at peak times. If that peak only occurs once a year, however, this approach quickly becomes uneconomical – especially in the cloud.
That is why load balancers are also capable of “vertical” or, in combination, of “hybrid” traffic distribution. Vertical load distribution involves optimising the use of an available resource or of available resources for the requests received. A second resource is not utilised until demand for the first resource reaches full capacity.
However, vertical load balancing can cause speed problems if additional resources are provisioned too slowly or not at all. This often results in excessively long response times and, in effect, unusable services.
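The vertical “fill-first” behaviour described above can be sketched as follows. The capacity limit and the failure case are assumptions for illustration: a request goes to the first backend with spare capacity, and only spills over to the next once the previous one is saturated.

```python
# Vertical distribution sketch: fill the first backend to capacity
# before sending traffic to the next. The limit is illustrative.
CAPACITY = 100  # assumed max simultaneous requests per backend

def pick_backend(active_loads):
    """Return the index of the first backend with spare capacity.

    `active_loads` maps backend index -> current in-flight requests.
    """
    for idx, load in enumerate(active_loads):
        if load < CAPACITY:
            return idx
    # This is the failure mode the article warns about: if no further
    # resource is available, response times degrade sharply.
    raise RuntimeError("all backends saturated")

# Backend 0 is full, so the next request spills over to backend 1.
spillover_target = pick_backend([100, 3, 0])
```

The `RuntimeError` branch is exactly the risk noted above: without a further resource to spill into, the service becomes effectively unusable.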
Hybrid load balancing in the cloud
As with other cloud applications, a hybrid approach between horizontal and vertical load distribution also makes sense when it comes to load balancing. Depending on the request, traffic is allocated to resources in such a way that requests with a quick response time are pooled to one resource. More wide-ranging types of request can then access a dedicated resource and fully utilise its bandwidth. In this way, both types of request can achieve a satisfactory response time with efficient use of resources.
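A minimal way to picture this hybrid allocation is routing by request type: short, latency-sensitive requests share a pooled resource, while large transfers get a dedicated backend with full bandwidth. The size threshold and backend names below are assumptions for illustration, not part of any specific product.

```python
# Hybrid sketch: pool quick requests on one backend, give bulk
# transfers a dedicated one. Threshold and names are assumed.
SMALL_REQUEST_BYTES = 64 * 1024  # illustrative cut-off

def route(request_size, pooled="pool-backend", dedicated="bulk-backend"):
    """Send small requests to the shared pool, large ones to the bulk backend."""
    if request_size <= SMALL_REQUEST_BYTES:
        return pooled
    return dedicated
```

In practice the decision would use richer signals (URL path, content type, measured latency), but the principle is the same: both request classes get a satisfactory response time without idle capacity.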
Load balancers as a safety net
In addition to distributing loads, load balancers also enhance network security. As a load balancer only needs one single port to communicate outward with all the resources in the network, all other ports can remain closed. This reduces risks and attack vectors to a minimum.
Load balancer traffic can be further protected with encryption – e.g. using the HTTPS protocol via Kemp’s free load balancer. Additional encryption and authorisation can minimise any remaining attack vectors.
New hardware again?
Not so fast! Some load balancers require no hardware of their own, just a virtual machine to run on. Kemp, for example, runs in a completely virtual environment and can therefore be tested intensively in the customer’s own network. It is thus possible to verify free of charge whether a load balancer actually adds value before rolling it out across the entire network.
All in all…
… load balancers serve a wide range of purposes that are not all strictly related to load balancing itself. They help steer traffic to where it is supposed to go – be it between sites or between computers.
Their modern features and flexible implementation options – virtual or in the cloud – make load balancers a key tool for the IT connoisseur. Therefore, anyone who has not yet addressed the issue in their own network should do so now without delay!