Friday, 7 January 2011

Server Load Balancing – Take the Load Off

Load balancing is the technique of spreading work across computers, hard disks, processes, and other resources in order to reduce computing time and achieve optimal resource utilization. It is generally performed by load balancers, which go beyond the capacity of a single server by drawing on the combined capacity of a server farm. Load balancers also allow a service to continue through server maintenance or server failure. Reducing downtime is essential to preventing huge losses, which is why software that automatically detects server failure or lock-up is recommended.
But why is server load balancing necessary? If a single web server answers all the incoming HTTP requests for your website, it may be unable to handle high volumes of traffic when, and if, your website becomes popular. Pages will load slowly, and some users will wait quite a while before the web server processes their requests. The more connections and incoming traffic your website receives, the more likely you are to need a server upgrade, which may no longer be cost effective. This is what server load balancing is about: adding more servers and distributing the load among them in order to attain web server scalability. Whether it is an application server, a database server, or an HTTP server, load balancing applies to, and is recommended for, all types of servers.
Every server has a limit on the number of users it can serve. Once that limit has been reached, you have only two options: replace the server with a more powerful one, or add another server and share the load between them. A load balancer helps here by distributing connections among servers and dividing the work proportionally between them.
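The simplest way a load balancer divides work like this is round-robin rotation. A minimal Python sketch, where the server names are hypothetical stand-ins for real hosts:

```python
from itertools import cycle

# Hypothetical pool of three web servers behind one load balancer.
servers = ["web1", "web2", "web3"]
rotation = cycle(servers)

def assign(request):
    """Hand each new connection to the next server in rotation,
    so the work is cut proportionally between them."""
    return next(rotation)

# Six incoming requests are split evenly, two per server.
assignments = [assign(r) for r in range(6)]
```

With equal-capacity servers, round-robin alone already gives the proportional split described above; unequal servers need the weighted schemes discussed later.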
Many server deployments can get by with basic information about the services and the clients they serve as the basis for load balancing. Still, this will prove insufficient as your needs grow more complex. At that point you have to take a closer look at your options and make a more detailed decision about load balancing. This is where the Layer 7 load balancer comes into play.
There are times when simply sharing traffic across a group of servers is not enough. One reason is that all the content a request might need must reside on every server if each is equally likely to service that request. Your servers will run at maximum efficiency if you group them to handle different tasks at the same time. For instance, some can be optimized for streaming video, others can serve as massive storage systems, while others handle transactions. Ultimately, the main reason for using a load balancer is to increase the capacity of IP servers.
The ability to handle huge amounts of traffic is extremely important in a server load balancer. This is all the more true for sites with significant volumes of SSL transactions, which require built-in SSL termination and SSL acceleration cards. These cards are very important to the security of transactions on e-commerce web sites through the encryption keys they generate. But performing a secure transaction can overload a server severely, resulting in fewer transactions being processed per second, which ultimately leads to fewer sales. The value of SSL acceleration cards is obvious: they take over the process and so reduce the load on the server. Under these circumstances, a Layer 7 load balancer can perform correctly despite the large number of incoming HTTPS packets. The ideal product increases efficiency and reduces costs, as it provides performance and quality while requiring no separate appliance.
A Layer-7 load balancer parses requests and distributes them to servers according to the type of content each request carries. In doing so, it improves overall cluster performance, although its scalability is limited compared with a Layer-4 load balancer because of the high overhead of parsing requests.
An important feature of any efficient Layer-7 load balancer is its ability to manage traffic based on content. Traffic management is performed by comparing URL content with customized configuration settings; this determines which server can handle the request so that it can be routed appropriately.
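As an illustration of this content-based routing, here is a minimal Python sketch. The URL prefixes, pool names, and rules are hypothetical stand-ins for a real balancer's configuration settings:

```python
# Hypothetical Layer-7 rules: route by URL path prefix, mirroring the
# task-specialized server groups described above.
ROUTES = {
    "/video/": ["stream1", "stream2"],   # servers optimized for streaming
    "/files/": ["storage1"],             # massive storage systems
    "/checkout/": ["txn1", "txn2"],      # transaction handlers
}
DEFAULT_POOL = ["web1", "web2"]

def pick_pool(url_path):
    """Compare the request URL against the configured rules and
    return the server pool able to handle it."""
    for prefix, pool in ROUTES.items():
        if url_path.startswith(prefix):
            return pool
    return DEFAULT_POOL
```

A real Layer-7 device matches on far more than path prefixes (headers, cookies, methods), but the routing decision has this shape.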
For more resources about Layer 7 load balancers, or about server load balancing in general, please review this page: http://www.cainetworks.com
Read more: http://www.articlesbase.com/business-articles/server-load-balancing-take-the-load-off-162077.html#ixzz1ALFWEetG
Under Creative Commons License: Attribution


Load Balancing Servers!

Load balancing is the process through which inbound Internet Protocol (IP) traffic is spread across multiple servers. It improves server performance, makes optimal use of resources, and balances the request count across all the servers involved. It is meant primarily for busy networks, where it is difficult to predict the number of requests that will pass through a server.

The Process:

In this process, two or more web servers are employed to balance the load and keep incoming requests in equilibrium. If one server is overloaded with requests, traffic is automatically forwarded to another without any hassle. Distributing the work this way also brings overall service time down, since multiple servers handle the requests together. All you need is a load balancer that can identify which server currently has the availability to receive the traffic.

The process itself is straightforward. A web page request is sent to the load balancer, which forwards it to one of the available servers. That server returns its response to the load balancer, which delivers it to the end user.
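One common way a balancer identifies which server has availability is by tracking active connections and picking the least busy one. A rough Python sketch, with made-up connection counts standing in for a real balancer's live bookkeeping:

```python
# Hypothetical active-connection counts; a real balancer tracks these
# as requests start and finish.
active = {"web1": 12, "web2": 3, "web3": 7}

def least_busy():
    """Return the server currently holding the fewest connections."""
    return min(active, key=active.get)

def dispatch(request):
    """Forward a request to the least-busy server and record it."""
    server = least_busy()
    active[server] += 1
    return server
```

This "least connections" policy adapts to uneven request sizes in a way plain round-robin cannot.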

One of the greatest advantages of this approach is that service continues even when a server is down due to failure or maintenance. In such cases, you can still use the services offered by the load-balanced servers with little disruption.

Summary: To handle high volumes of incoming traffic and cope with web server scalability, more servers are added and the load is distributed among the group; this distribution of load among servers is known as load balancing. Load balancing spreads HTTP requests across servers by IP spraying.


Read more: http://www.articlesbase.com/information-technology-articles/load-balancing-servers-3284356.html#ixzz1ALF1f1Qw
Under Creative Commons License: Attribution


Connection Pooling in Next Gen Load Balancing - Your Need to Know Basics

Connection pooling

What is connection pooling and what does it do?

To answer that, we need to look a little into why it's relevant (this is not a politician's answer, honest!).

This feature, although it has been around for many years, originated in the days of the mainframe.

Recently, since load balancing manufacturers have been pushing it as a feature, interest in connection pooling has increased.

As load balancing is what I know slightly more about, let's start there. The first generation of load balancers used a direct server return method: the user request would first hit the virtual IP of the load balancer and then be routed on to a server. The server would respond directly back to the user, i.e. bypassing the load balancer. This was fine in the "old" days, and indeed it is still used by some of the more simplistic Layer 4 devices today. However, it has some major drawbacks.

1)      How do you know how busy a server is?

i.e. you know how many requests you sent to it, but you don't know how quickly they were executed. People have tried to work around this major problem with some rather crude solutions:

The guess - "Weighted load balancing"

You take a look at the box and estimate how much better (or worse) it is relative to the other servers. I think my new Dell has twice the power of my old one, etc. - you get the idea. The big problem, apart from the potentially massive inaccuracy of the initial guess, is usage. For example, User A could run a massive report that takes ages, whereas User B could request a small image, creating very different load requirements.
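The "guess" amounts to weighted random selection: each server is picked in proportion to its estimated power. A minimal Python illustration, where the weights are exactly the administrator's guess described above:

```python
import random

# Hypothetical static weights: the admin's guess that web2 is
# twice as powerful as web1.
weights = {"web1": 1, "web2": 2}

def weighted_pick():
    """Choose a server with probability proportional to its weight;
    web2 should receive roughly two-thirds of the traffic."""
    servers = list(weights)
    return random.choices(servers, weights=[weights[s] for s in servers])[0]
```

Note that the weights never change at runtime, which is precisely the flaw the article points out: a big report and a small image count the same.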

Run a client application

The idea here is that you install some software on the servers. The load balancer makes a periodic request to this software to gather information such as the number of connections or CPU utilisation. One thing to note is that many people use CPU, but that does not always work: a server running with high CPU is not always the slowest to serve your request.

2)      The other drawback is that you can't do anything with the response side of the connection. Every user makes a new connection to the webserver.

So what's changed?

Most of the newer load balancers and ADCs (Application Delivery Controllers) now use a proxy-based approach. This means that both the requests and the responses pass through the load balancer.

The web servers only ever talk to the load balancer. Well, they may talk to other internal servers directly, but as far as the end-user conversation is concerned, all traffic goes through the load balancer.

This obviously addresses the issues highlighted above but also allows you to manage the way that connections are passed to the servers.

So managing the connections to the servers has the obvious advantage that we know exactly how busy they are and how quickly they can process requests. In addition, you can optimise the connection itself, since you control both ends.

On one side you have the load balancer and on the other the web servers, so it seems pointless to initialise and destroy a TCP connection for every request.

Connection pooling allows the load balancer to initialise a number of connections to the servers and pool them for use by multiple end users. This reduces the effect of TCP slow start and also helps reduce load at the network processing level.
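The idea can be sketched in a few lines of Python. The ConnectionPool class and the connect stand-in below are illustrative only; a real balancer pools actual TCP sockets to the backends:

```python
import queue

class ConnectionPool:
    """Pre-open a fixed number of backend connections and reuse them,
    avoiding a TCP setup/teardown (and slow start) per user request."""

    def __init__(self, size, connect):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(connect())   # opened once, up front

    def acquire(self):
        return self._idle.get()         # reuse an existing connection

    def release(self, conn):
        self._idle.put(conn)            # return it to the pool, still open

# Stand-in for a real TCP connect; records each socket actually opened.
opened = []
pool = ConnectionPool(2, connect=lambda: opened.append(1) or len(opened))

for _ in range(5):                      # five user requests arrive...
    c = pool.acquire()
    pool.release(c)
# ...but only the two pooled connections are ever created.
```

Five requests, two connections: that gap is exactly the saved TCP handshakes and slow-start ramps the article describes.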

The other advantage of this proxy type approach is the simple abstraction between the client and server connections. A server sitting on a high speed LAN can have a very high speed conversation with the load balancer. The data can be sent and the connection left free to serve another request. The load balancer can then buffer the response and send it to the client over the much slower WAN.

This approach can really help busy web servers, especially if client connections are piling up on the server due to slow links and/or the use of methods such as HTTP keep-alive.

Keep an eye out for my next blog....

Read more: http://www.articlesbase.com/internet-articles/connection-pooling-in-next-gen-load-balancing-your-need-to-know-basics-2961161.html#ixzz1ALEoN3r7
Under Creative Commons License: Attribution


Load Balancing Router Benefits

With the ever-increasing need to stay constantly connected to the information highway, there is a need for appliances and solutions that distribute traffic efficiently. When that happens, traffic flow improves and the bottlenecks caused by a single connection are removed. If one connection fails, the remaining connections are utilized so that internet continuity is ensured. This is load balancing.

On a network, it does not take long for traffic to build up and cause congestion. Applications like VoIP can choke your bandwidth capacity, and when data packets queue up, traffic slows down and data can be lost. Load balancing applications group multiple connections together instead of relying on a single connection, which increases bandwidth tremendously. When traffic is shared among different internet connections, the chance of congestion is reduced and the end user is spared unnecessary delays. Load balancing also helps ensure that data packets are not delivered out of order, and it lets computers connect through various internet access technologies. Load balancing is achieved through load balancing routers.

There are two kinds of load balancing.

1. Outbound load balancing

2. Inbound load balancing

Outbound load balancing has several benefits. Some of its most obvious advantages include:

1. Improved bandwidth efficiency

2. Almost uninterrupted uptime

3. Ability to use different internet technologies

4. Flexible load balancing options

Inbound load balancing is designed for networks that provide services such as web hosting, email, or other networking applications that fulfill incoming requests from various computers.

For inbound load balancing to work, data requests from external users who need data from the network must be managed. To achieve this, the data is distributed over multiple internet connections instead of a single one. With a single connection, requests choke the network and congestion occurs.
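The distribute-and-failover behaviour described here can be sketched in Python. The link names and up/down flags are hypothetical stand-ins for a router's line monitoring:

```python
# Hypothetical WAN links with an up/down flag; a real router probes these.
links = {"dsl": True, "cable": True, "lte": True}

def usable_links():
    """Only links that are currently up carry traffic."""
    return [name for name, up in links.items() if up]

def spread(requests):
    """Distribute requests across all healthy links, round-robin style."""
    live = usable_links()
    return {req: live[i % len(live)] for i, req in enumerate(requests)}

plan = spread(range(6))       # six requests over three links
links["cable"] = False        # one connection fails...
failover = spread(range(6))   # ...and the remaining links absorb the load
```

The same few lines show both benefits at once: no single link is choked, and a failed link is simply dropped from the rotation.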

Why is load balancing important?

Well, for many internet users, reliable internet service is worth its weight in gold. By avoiding downtime they gain a competitive advantage, just as downtime can cause them huge financial losses. In this day and age, forward-thinking business people realize the importance of redundant internet access. By leveraging the power of load balancing, they can enjoy elevated bandwidth.

Read more: http://www.articlesbase.com/information-technology-articles/load-balancing-router-benefits-3627371.html#ixzz1AL6ujHhF
Under Creative Commons License: Attribution


Clustering Versus Load Balancing in Web Hosting

Clustering and load balancing are technical terms used to describe the backend functioning of hosting applications. They are terms that networking technicians will use when explaining the functionality and performance of a server. The terms are sometimes used interchangeably when in fact they actually refer to different types of applications. In order to compare the two, it is important to first understand each term and how it is used in the industry.
Defining Clustering
Clustering is comparatively the simpler of the two applications, and it refers to computer software rather than hardware. The process of clustering converts a single function of an application into a master controller, from which requests are then sent out to multiple functions. For example, on an e-commerce site, clustering may be used when a customer has items in a shopping cart: once the payment details are captured, the process needs to close the transaction and take the customer to a new page confirming that the transaction is complete. Clustering is done using standard industry algorithms. Because several processes need to happen almost simultaneously, clustering provides a way to improve the response time of a server or increase its capacity by adding further functions to the servers. Because of this feature, clustering is often confused with load balancing, which has a similar capability.
What Is Load Balancing?
Load balancing often refers to Application Delivery Controllers, more commonly known as ADCs. As mentioned, load balancing also increases the capability and capacity of hosting servers. The main difference is that, compared to clustering, load balancing uses more complex algorithms. Because of this, it has the faster response times of the two, as well as greater functionality and flexibility. Where clustering can only use traditional application variables, load balancing can draw information from other sources, such as network-based data. Load balancing is also more transparent than clustering.
The Pros and Cons of Clustering
A big advantage of clustering is that it doesn't require highly advanced technical knowledge to implement; someone with a basic level of networking knowledge can set it up. Clustering applications usually come as part of a server enterprise package, which makes them considerably cheaper. On the downside, clustering has significant limitations. You can generally only use clustering on homogeneous servers, and even then high availability is not guaranteed. Most clustering applications also use separate hardware for the cluster controller, so node agents are needed on managed application server instances.
Comparing ADC Load Balancing
In general, ADC load balancing applications are reputed to provide higher availability and better load balancing, and they can do this in less restrictive environments, without clustering's homogeneous-server requirement. Technicians like them because deployment requires no changes to existing servers or applications. Perhaps the biggest advantages are better server performance and improved server security, with the added value of optimizing the applications. The disadvantage of ADC load balancing is that it requires more advanced networking knowledge to set up and manage. If you do not have employees capable of managing the process, you may need to bring in outside expertise or train employees to become competent, which can be both time-consuming and expensive. ADC also requires additional infrastructure to be built into the server architecture, which complicates its management. The last con is that these solutions are generally more expensive than clustering.
How Do You Choose Which Option Is Best for You?
Deciding between clustering and ADC load balancing is a choice you should take your time over; it is a complex issue with many variables. First, consider the technical expertise at your disposal and whether your skilled staff are more proficient in clustering or in load balancing; sometimes you may not be able to justify the additional expense of acquiring the necessary skills. Cost is another important consideration, both for the initial setup and for ongoing maintenance. Perhaps most important, though, is how much server performance matters to you. When you look at the results you want, and what they could mean for customer satisfaction and retention, it may well be worth spending the money on more advanced technology.
Read more: http://www.articlesbase.com/web-hosting-articles/clustering-versus-load-balancing-in-web-hosting-3318957.html#ixzz1AL6kGAQp
Under Creative Commons License: Attribution


Know More About Load Balancing

Load balancing, by definition, is the process of spreading the work done by one computer system across a number of computer systems so that the work completes faster. Load balancing can be accomplished by several different methods and can involve many different types of computer components, both hardware and software. It is typically implemented using a cluster of servers that may or may not be in the same location. Some load balancers provide a mechanism for doing something special in the event that all backend servers are unavailable, such as forwarding to a backup load balancer or displaying a message regarding the outage. Load balancing can also be useful when dealing with redundant communications links.
Many different companies see the benefits of load balancing and implement it. Companies that conduct large numbers of business transactions over the internet are prime candidates, as load balancing ensures that all of their clients and customers can conduct their transactions quickly and accurately. Companies that network many computers for individual users also typically use load balancing to ensure that every computer works properly and has enough capacity to perform its intended functions. It also ensures the company can still do business if one server becomes corrupted or goes down for an extended period.
There are several different methods that are widely used for load balancing. One of the most popular methods of load balancing is Global Server Load Balancing. This technique distributes the incoming tasks to a group of servers in a particular geographic location. This technique is widely used by companies that have a global presence and have a need to satisfy customers or employees in many different geographical locations. Using Global Server Load Balancing ensures that the work load is distributed throughout the entire server system in an easy to manage manner and ensures that all geographical locations are obtaining the correct information from the correct set of servers.
Another commonly used technique is Persistence Load Balancing. It assigns each new client to a server in a round-robin fashion (for example, distributing page requests evenly across three Squid cache servers), and that client then stays on the same server for the rest of its relationship with the business. This ensures that no one server is overloaded with a particular type of client, such as those in a certain geographical area or those using a specific type of service, and that clients are distributed evenly throughout all of the business's servers. Server assignments are typically tracked by using the customer's IP address as the customer's unique identifier.
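Persistence keyed on the client's IP can be sketched by hashing the address into the server list, so the same client always maps to the same server without the balancer storing any per-client state. A minimal Python illustration with hypothetical server names:

```python
import hashlib

# Hypothetical server pool; persistence survives as long as this list
# is stable, since the mapping is derived purely from the client IP.
servers = ["srv-a", "srv-b", "srv-c"]

def sticky_server(client_ip):
    """Hash the client's IP so that the same client always lands on
    the same server (the customer's IP acts as the unique identifier)."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

Note the trade-off: adding or removing a server changes the modulus and reshuffles most assignments, which is why production systems often use consistent hashing instead.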
Read more: http://www.articlesbase.com/computers-articles/know-more-about-load-balancing-287509.html#ixzz1AL6YJwlJ
Under Creative Commons License: Attribution


Load Balancing: What is it and what can it do for my internet business?

Load balancing, simply, is the distribution of a workload across many nodes. In the webhosting industry, it is typically used to balance HTTP traffic over multiple servers acting together as a web front-end.

Load balancers allow users to intelligently distribute traffic arriving at a single IP across any number of Storm Servers. This means traffic can be shared across multiple servers to increase performance during times of high activity, increasing the reliability of your web application. You can scale up in anticipation of a traffic increase and back down as your traffic slows, paying for only what you need. Load balancing also allows you to build your application with redundancy in mind: if one of your server nodes fails, the traffic is distributed to your other nodes without any interruption of service.

Load Balancing is fault tolerant and redundant by utilizing multiple servers running Zeus Load Balancer 7.

When and Why to Use Load Balancing
Load balancing, by its very nature, solves more than one problem. You can use it to keep your site up through traffic spikes, or to grow with you as your resource needs increase.

The most common uses for Load Balancing:

  • Failover and Redundancy: If one node fails, the traffic is redistributed and your site stays up.

  • Load Distribution: As your site gains popularity, share the load across many nodes.

  • Preparation for Traffic Spikes: If you are anticipating an increase in traffic, you can clone your server and load balance between the copies before the traffic hits.
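The failover behaviour in the first bullet can be sketched as routing only to nodes that pass a health check; the node names and the health map below are hypothetical stand-ins for a balancer's monitoring:

```python
# Hypothetical node health map; a real balancer probes each node
# periodically and updates these flags.
healthy = {"node1": True, "node2": True, "node3": True}

def route(request_id):
    """Route requests only to nodes that pass their health check,
    redistributing traffic automatically when a node drops out."""
    live = [n for n, ok in healthy.items() if ok]
    if not live:
        raise RuntimeError("all backends down")
    return live[request_id % len(live)]

before = {route(i) for i in range(6)}
healthy["node2"] = False               # a node fails...
after = {route(i) for i in range(6)}   # ...traffic shifts, the site stays up
```

Scaling for a spike is the same mechanism in reverse: cloning a server just adds another True entry to the map.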


Read more: http://www.articlesbase.com/web-hosting-articles/load-balancing-what-is-it-and-what-can-it-do-for-my-internet-business-3220463.html#ixzz1AKqPpqqr
Under Creative Commons License: Attribution