Friday, 7 January 2011

Connection Pooling in Next Gen Load Balancing - Your Need to Know Basics

Connection pooling

What is connection pooling and what does it do?

In order to answer this question we need to look a little into why it's relevant (this is not a politician's answer, honest!).

This feature has been around for many years; it originated in the days of the mainframe.

Recently, since load balancer manufacturers have been pushing it as a feature, interest in connection pooling has increased.

As load balancing is what I know slightly more about, let's start there. First-generation load balancers used a direct server return (DSR) method, meaning that the user's request would first hit the virtual IP of the load balancer and then be routed on to a server. The server would respond directly back to the user, i.e. bypassing the load balancer. This was fine in the "old" days, and indeed it is still being used by some of the more simplistic layer 4 devices today. However, it has some major drawbacks.

1) How do you know how busy a server is?

i.e. you know how many requests you sent to it, but you don't know how quickly they were executed. Attempts to work around this major problem have produced some rather crude solutions:

The guess - "Weighted load balancing"

You take a look at the box and estimate how much better (or worse) it is in relation to the other servers: "I think my new Dell has twice the power of my old one" - you get the idea. The big problem, apart from the potentially massive inaccuracy of the initial guess, is the usage. For example, User A could run a massive report that takes ages, whereas User B could request a small image, creating very different load requirements.
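To make the crudeness concrete, here is a minimal Python sketch of weighted selection under those guessed weights; the server names and weights are invented purely for illustration.

import random

# Static weights guessed per server ("my new Dell is twice the power of my old one").
# The names and weights below are made up for this example.
SERVERS = {
    "web-old": 1,
    "web-new": 2,   # assumed, not measured, to be twice as capable
}

def pick_server():
    """Weighted random pick; the guess is fixed and never adapts to real load."""
    names = list(SERVERS)
    weights = [SERVERS[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# Over many requests web-new gets roughly two thirds of the traffic,
# regardless of whether it happens to be stuck running User A's massive report.
print(pick_server())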

Run a client application

The idea of this is that you install some software on the servers. The load balancer then makes a periodic request to this software to gather information such as the number of connections or CPU utilisation. One thing to note is that many people use CPU, but that does not always work: a server running with high CPU utilisation is not necessarily the slowest to serve your request.
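A rough Python sketch of that polling approach, assuming a hypothetical agent on each server that exposes its stats over HTTP as JSON; the URLs, port and field names are assumptions rather than any particular vendor's API.

import json
import time
from urllib.request import urlopen

# Hypothetical agent endpoints returning e.g. {"connections": 42, "cpu_percent": 57.0}.
AGENT_URLS = {
    "web1": "http://10.0.0.11:9100/stats",
    "web2": "http://10.0.0.12:9100/stats",
}

def poll_agents():
    """Ask each agent for its latest metrics; unreachable agents are marked None."""
    stats = {}
    for name, url in AGENT_URLS.items():
        try:
            with urlopen(url, timeout=2) as response:
                stats[name] = json.load(response)
        except OSError:
            stats[name] = None
    return stats

while True:
    snapshot = poll_agents()
    # As noted above, preferring the lowest CPU can mislead; the number of
    # open connections is often a better (if still imperfect) signal.
    print(snapshot)
    time.sleep(10)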

2) The other drawback is that you can't do anything with the response or with the connection itself. Every user will make a new connection to the web server.

So what's changed?

Most of the newer load balancers and ADCs (Application Delivery Controllers) now use a proxy-based approach. This means that both the requests and the responses pass through the load balancer.

The web servers only ever talk to the load balancer. Well, they may talk to other internal servers directly, but as far as the end-user conversation is concerned, all traffic goes through the load balancer.

This obviously addresses the issues highlighted above but also allows you to manage the way that connections are passed to the servers.

So managing the connections to the servers has the obvious advantage that we know exactly how busy they are and how quickly they are able to process requests. In addition, you can optimise the connections themselves, as you control both ends.
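As a rough illustration of the sort of bookkeeping this makes possible, here is a Python sketch that counts in-flight requests per server and tracks how quickly each one answers, then picks the least busy one. The servers argument and the send_to_backend callable are placeholders, not a real API.

import time
from collections import defaultdict

# Counters the proxy can keep because every request and response passes through it.
in_flight = defaultdict(int)
avg_response = defaultdict(float)

def pick_server(servers):
    """Least-connections choice, breaking ties on observed response time."""
    return min(servers, key=lambda s: (in_flight[s], avg_response[s]))

def proxy_request(servers, send_to_backend):
    server = pick_server(servers)
    in_flight[server] += 1
    started = time.monotonic()
    try:
        return send_to_backend(server)      # forward the request and wait for the reply
    finally:
        in_flight[server] -= 1
        elapsed = time.monotonic() - started
        # Exponentially weighted moving average of how fast this server really is.
        avg_response[server] = 0.8 * avg_response[server] + 0.2 * elapsed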

On one side you have the load balancer and on the other you have the web servers; as such, it seems pointless to initialise and destroy a TCP connection for every request.

Connection pooling allows the load balancer to initialise a number of connections to the servers and pool them for use across multiple end users. This reduces the effect of TCP slow start and also helps to reduce load at the network-processing level.
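A minimal sketch of the pooling idea in Python, using plain TCP sockets against a single backend; the address and pool size are illustrative, and a real ADC does this far more efficiently in native code.

import queue
import socket

class BackendPool:
    """Open a handful of TCP connections up front and hand them out to many
    client requests in turn, so the three-way handshake and TCP slow start
    are not repeated for every single request."""

    def __init__(self, host, port, size=8):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(socket.create_connection((host, port)))

    def acquire(self):
        return self._idle.get()      # blocks if every pooled connection is busy

    def release(self, conn):
        self._idle.put(conn)         # back into the pool instead of closing

# Usage: borrow a warm connection, forward the client's request over it, return it.
# pool = BackendPool("10.0.0.11", 80)
# conn = pool.acquire()
# try:
#     conn.sendall(b"GET / HTTP/1.1\r\nHost: example.internal\r\n\r\n")
# finally:
#     pool.release(conn)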

The other advantage of this proxy type approach is the simple abstraction between the client and server connections. A server sitting on a high speed LAN can have a very high speed conversation with the load balancer. The data can be sent and the connection left free to serve another request. The load balancer can then buffer the response and send it to the client over the much slower WAN.
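A rough sketch of that buffer-and-relay idea, assuming the response length is already known from the headers (real proxies also deal with chunked encoding, errors and responses far too large to hold in memory):

def relay(backend_conn, client_conn, pool, content_length):
    """Drain the backend quickly over the LAN, hand its connection straight
    back to the pool, then feed the slow client from the buffer."""
    chunks, remaining = [], content_length
    while remaining > 0:
        data = backend_conn.recv(min(65536, remaining))    # fast LAN read
        if not data:
            break
        chunks.append(data)
        remaining -= len(data)
    pool.release(backend_conn)       # the backend is free for the next request
    body = b"".join(chunks)

    sent = 0
    while sent < len(body):
        sent += client_conn.send(body[sent:])              # slow WAN write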

This approach can really help busy web servers, especially if client connections are filling up on the server due to slow links and/or the use of methods such as HTTP keep-alive.

Keep an eye out for my next blog....

Read more: http://www.articlesbase.com/internet-articles/connection-pooling-in-next-gen-load-balancing-your-need-to-know-basics-2961161.html
Under Creative Commons License: Attribution