Friday, 7 January 2011

Server Load Balancing – Take the Load Off

Load balancing is the technique of spreading work between computers, hard disks, processes, etc., with the purpose of decreasing computing time and achieving optimal resource utilization. It is generally performed by load balancers, whose aim is to exceed the capacity of a single server by pooling the capacity of a server farm. Load balancers also allow the service to continue during server maintenance or server failure. Reducing downtime is essential in preventing huge losses, which is why software that automatically detects server failure or lock-up is recommended.
But why is server load balancing necessary? If all incoming HTTP requests for your website are answered by a single web server, it may fail to handle high volumes of traffic when and if your website becomes popular. Pages will load slowly, and some users will wait quite some time before the web server processes their requests. The more connections and incoming traffic your website receives, the greater the chance you will need to upgrade the server, which may no longer be cost effective. This is what server load balancing is about: adding more servers and distributing load among them in order to attain web server scalability. Be it an application server, database server, or HTTP server, load balancing applies to, and is recommended for, all types of servers.
There is a limit for each server as far as the number of users it can serve is concerned. Once that limit has been reached, you have only two options: replace it with a newer one, or add one more server and share the load between them. A load balancer helps in that it distributes connections among servers and divides the work proportionally for each of them.
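As an illustration of that proportional distribution, here is a minimal round-robin sketch in Python (the server names are placeholders, not anything from the article):

```python
from itertools import cycle

# Hypothetical backend pool; a real deployment would hold addresses, not names.
servers = ["web1", "web2", "web3"]

def make_round_robin(pool):
    """Return a picker that hands out servers in strict rotation."""
    rotation = cycle(pool)
    return lambda: next(rotation)

pick = make_round_robin(servers)

# Six successive requests are spread evenly across the three servers.
assignments = [pick() for _ in range(6)]
```

Real load balancers layer health checks and weighting on top of this, but rotation is the core idea.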
Many server deployments can get by using basic information about the services and the clients they want to reach as the basis for load balancing. Still, this will prove insufficient as your needs grow and become more complex. At that point, you have to take a closer look at your options and make a more detailed decision about load balancing. This is where the Layer 7 load balancer comes into play.
There are times when sharing traffic over a group of servers simply is not enough. One reason is that all the content associated with a request has to be on every server if each is equally likely to service that request. Your servers will function with maximum efficiency if you group them to handle different tasks at the same time. For instance, some can be optimized for streaming video, others can serve as massive storage systems, while still others handle transactions. In fact, the main reason for using a load balancer is to increase the capacity of IP servers.
The ability to handle huge amounts of traffic is extremely important in a server load balancer. This is all the more true for sites with significant volumes of SSL transactions, which require built-in SSL termination and SSL acceleration cards. These cards are very important to the security of transactions on e-commerce web sites through the encryption keys they generate. But performing a secure transaction can overload a server severely, resulting in fewer transactions being processed per second, and ultimately fewer sales. The importance of SSL acceleration cards is obvious, as they take over that processing and reduce the load on the server. Under these circumstances, the Layer 7 load balancer can perform correctly despite the large number of incoming HTTPS packets. The ideal product will increase efficiency and reduce costs, as it provides performance and quality and requires no separate appliance.
A Layer 7 load balancer parses requests and distributes them to servers according to the type of content each request carries. By doing so, it improves overall cluster performance, although its scalability is rather limited compared to a Layer 4 load balancer, due to the high overhead of parsing requests.
An important feature of any efficient Layer 7 load balancer is its ability to manage traffic based on content. Traffic management is performed by comparing URL content with customized configuration settings. This process determines which server is able to handle the request so it can be routed appropriately.
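A rough sketch of that URL-based routing, with made-up path prefixes and pool names:

```python
# Hypothetical mapping of URL-path prefixes to server pools.
routes = {
    "/video/":    "streaming-pool",
    "/images/":   "storage-pool",
    "/checkout/": "transaction-pool",
}

def route_request(path, default="general-pool"):
    """Pick a pool by comparing the request URL against configured prefixes."""
    for prefix, pool in routes.items():
        if path.startswith(prefix):
            return pool
    return default
```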
For more resources about Layer 7 load balancers, or about server load balancing in general, please review this page: http://www.cainetworks.com
Read more: http://www.articlesbase.com/business-articles/server-load-balancing-take-the-load-off-162077.html#ixzz1ALFWEetG
Under Creative Commons License: Attribution


Load Balancing Servers!

Load balancing is the process through which inbound Internet Protocol (IP) traffic is spread across multiple servers. It improves server performance, makes optimal use of resources, and balances the request count across all the servers attached to the load balancing setup. It is meant primarily for busy networks, where it is quite difficult to guess the number of requests that will pass through a server.

The Process:

In this process, two or more web servers are employed to create equilibrium in receiving requests. If one server is overloaded with requests, they are automatically forwarded to another without any hassle. By employing load balancing, the overall service time is brought down, since multiple servers can easily handle the requests. All you need is a load balancer to identify which server currently has the capacity to receive the traffic.

The process itself is straightforward: a web page request is sent to the load balancer, which forwards it to one of the available servers. The server then responds through the load balancer, which delivers the result to the end user.

One of the greatest advantages of this option is that the service can continue even when a server is down for failure or maintenance. In such cases, you can still use the services offered by the load-balanced setup with ease.

Summary: To handle high volumes of incoming traffic and achieve web server scalability, more servers are added and the load is distributed among the group; this distribution of load among servers is known as load balancing. Load balancing spreads HTTP requests across servers by IP spraying.
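IP spraying can be sketched as a deterministic hash of the client address onto the server list (the addresses below are invented for illustration):

```python
import zlib

# Hypothetical server pool.
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def spray(client_ip, pool):
    """Map a client IP onto one server; the same IP always lands on the same box."""
    digest = zlib.crc32(client_ip.encode("ascii"))
    return pool[digest % len(pool)]
```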

Keywords: load balancing servers, servers, hosting, vps, dedicated servers


http://www4.be/nl/servers.htm

http://www4.be/nl/dedicatedservers.htm

http://www4.be/nl/vps.htm

Read more: http://www.articlesbase.com/information-technology-articles/load-balancing-servers-3284356.html#ixzz1ALF1f1Qw
Under Creative Commons License: Attribution


Connection Pooling in Next Gen Load Balancing - Your Need to Know Basics

Connection pooling

What is connection pooling and what does it do?

In order to answer this question we need to look a little into why it's relevant. (This is not a politician's answer, honest!)

This feature, although it has been around for many years, originated in the days of the mainframe.

Recently, since load balancing manufacturers have been pushing it as a feature, interest in connection pooling has increased.

As load balancing is what I know slightly more about, let's start there. The first generation of load balancers used a direct server return method, meaning the user request would first hit the virtual IP of the load balancer and then get routed on to a server. The server would respond directly back to the user, i.e. bypassing the load balancer. This was fine in the "old" days, and indeed it is still being used by some of the more simplistic Layer 4 devices today. However, it has some major drawbacks.

1)      How do you know how busy a server is?

i.e. You know how many requests you sent to it, but you don't know how quickly they were executed. Vendors have tried to work around this major problem with some rather crude solutions:

The guess - "Weighted load balancing"

You take a look at the box and estimate how much better (or worse) it is in relation to the other servers: I think my new Dell is twice the power of my old one, etc. - you get the idea. The big problem, apart from the potentially massive inaccuracy of the initial guess, is the usage pattern. For example, User A could run a massive report that takes ages, whereas User B could request a small image, thus creating very different load requirements.
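The "guess" can be sketched as a weighted rotation: expand each server by its estimated weight, then rotate. The weights below encode the illustrative "twice the power" guess from above; the server names are placeholders:

```python
from itertools import cycle

# Admin's static guess at relative capacity: the new box gets double weight.
weights = {"old-dell": 1, "new-dell": 2}

# Expand into a rotation in which "new-dell" appears twice per cycle.
expanded = [server for server, w in weights.items() for _ in range(w)]
rotation = cycle(expanded)

def pick():
    return next(rotation)
```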

Run a client application

The idea here is that you install some software on the servers. The load balancer makes a periodic request to this software to gather information such as the number of connections or CPU utilisation. One thing to note is that many people use CPU, but that does not always work: a server running at high CPU is not always going to be the slowest to serve your request.
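With agent-reported metrics in hand, a balancer can pick the least-busy server. A sketch with invented numbers, selecting on connection count rather than CPU for the reason just given:

```python
# Figures a monitoring agent might report back; values here are made up.
reported = {
    "web1": {"connections": 42, "cpu": 0.91},
    "web2": {"connections": 17, "cpu": 0.35},
    "web3": {"connections": 29, "cpu": 0.55},
}

def least_connections(metrics):
    """Choose the server currently holding the fewest open connections."""
    return min(metrics, key=lambda server: metrics[server]["connections"])
```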

2)      The other drawback is that you can't do anything with the response side of the connection. Every user makes a new connection to the web server.

So what's changed?

Most of the newer load balancers and ADCs (Application Delivery Controllers) now use a proxy-based approach. This means that both the requests and the responses pass through the load balancer.

The web servers only ever talk to the load balancer. Well, they may talk to other internal servers directly, but as far as the end-user conversation is concerned, all traffic goes through the load balancer.

This obviously addresses the issues highlighted above but also allows you to manage the way that connections are passed to the servers.

So managing the connections to the servers has the obvious advantage that we know exactly how busy they are and how quickly they are able to process requests. In addition, you can optimise the connection, as you have control of both ends.

On one side you have the load balancer and on the other you have the web servers; as such, it seems pointless to initialise and destroy a TCP connection for every request.

Connection pooling allows the load balancer to initialise a number of connections to the servers and pool them for use with multiple end users. This reduces the effect of TCP slow start and also helps to reduce load at the networking processing level.
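A minimal sketch of such a pool: pre-open a fixed number of connections and hand them out to requests, returning each one for reuse instead of tearing it down. The connection factory here is a stand-in string so the example stays runnable; a real balancer would open TCP sockets to the backends.

```python
import queue

class ConnectionPool:
    """Pre-open `size` backend connections and recycle them across requests."""

    def __init__(self, connect, size=4):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(connect())   # establish connections up front

    def acquire(self):
        return self._idle.get()         # blocks if every connection is busy

    def release(self, conn):
        self._idle.put(conn)            # return the connection for reuse

# Stand-in factory: each "connection" is just a labelled string.
counter = iter(range(1000))
pool = ConnectionPool(lambda: f"conn-{next(counter)}", size=2)

c = pool.acquire()   # serve a request over an already-open connection...
pool.release(c)      # ...then hand it back instead of closing it
```

Because the pool never closes a connection between requests, each backend sees a small, steady set of long-lived connections rather than one per end user.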

The other advantage of this proxy type approach is the simple abstraction between the client and server connections. A server sitting on a high speed LAN can have a very high speed conversation with the load balancer. The data can be sent and the connection left free to serve another request. The load balancer can then buffer the response and send it to the client over the much slower WAN.

This approach can really help busy web servers, especially if client connections are filling up on the server due to slow links and/or the use of methods such as HTTP keep-alive.

Keep an eye out for my next blog....

Read more: http://www.articlesbase.com/internet-articles/connection-pooling-in-next-gen-load-balancing-your-need-to-know-basics-2961161.html#ixzz1ALEoN3r7
Under Creative Commons License: Attribution


Load Balancing Router Benefits

With the increasing importance of remaining constantly connected to the information highway, there is a need for appliances and solutions that distribute traffic efficiently. When that happens, traffic flow improves and bottlenecks caused by a single connection are removed. If one connection fails, the remaining connections are utilized so that internet continuity is ensured. This is load balancing.

On a network, it does not take long for traffic to build up and lead to congestion. Applications like VoIP can choke your bandwidth capacity, and when data packets queue up, traffic can slow down and data can be lost. Through load balancing applications, multiple connections are grouped together instead of relying on a single connection, which increases bandwidth tremendously. When traffic is shared among different internet connections, the chances of congestion are reduced and the end user does not have to suffer unnecessary delays. Load balancing also helps ensure that data packets are not delivered out of order, and computers can connect through various internet access technologies. Load balancing is achieved through load balancing routers.

There are two kinds of load balancing.

1. Outbound load balancing

2. Inbound load balancing

Outbound load balancing has several benefits. Some of its most obvious advantages include:

1. Improved bandwidth efficiency

2. Almost uninterrupted uptime

3. Ability to use different internet technologies

4. Flexible load balancing options

Inbound load balancing is designed for networks that provide services like web hosting, email servers, or other networking applications that fulfill incoming requests from various computers.

For inbound load balancing to work, data requests from external users who require data from the network must be managed. To achieve this, data is distributed over multiple internet connections instead of a single one. If a single connection is used, requests choke the network and congestion occurs.

Why is load balancing important?

Well, for many internet users, reliable internet service is worth its weight in gold. By avoiding downtime they gain a competitive advantage, just as experiencing downtime can make them incur huge financial losses. In this day and age, forward-thinking business people realize the importance of redundant internet access. By leveraging the power of load balancing, they can enjoy elevated bandwidth.

Read more: http://www.articlesbase.com/information-technology-articles/load-balancing-router-benefits-3627371.html#ixzz1AL6ujHhF
Under Creative Commons License: Attribution


Clustering Versus Load Balancing in Web Hosting

Clustering and load balancing are technical terms used to describe the backend functioning of hosting applications. They are terms that networking technicians will use when explaining the functionality and performance of a server. The terms are sometimes used interchangeably when in fact they actually refer to different types of applications. In order to compare the two, it is important to first understand each term and how it is used in the industry.
Defining Clustering
Clustering is comparatively the simpler of the two applications. It also refers to computer software rather than hardware. Clustering converts one instance of an application server into a master controller, from which requests are then distributed to multiple instances. For example, on an e-commerce site, clustering may be used when a customer has items in a shopping cart: once the payment details are captured, the process needs to close the transaction and take the customer to a new page showing that the transaction is complete. The distribution is done using standard industry algorithms. Because several processes need to happen almost simultaneously, clustering provides a way to improve the response time of a server or increase its capacity by adding further instances. Because of this feature, clustering is often confused with load balancing, which has a similar capability.
What Is Load Balancing?
Load balancing often refers to Application Delivery Controllers, more commonly known as ADCs. As mentioned, load balancing also increases the capability and capacity of hosting servers. The main difference is that, compared to clustering, load balancing uses more complex algorithms. Because of this, it has the faster response time of the two as well as greater functionality and flexibility. Where clustering can only use traditional application variables, load balancing can draw information from other sources such as network-based data. Load balancing is also more transparent than clustering.
The Pros and Cons of Clustering
A big advantage of clustering is that it doesn't require highly advanced technical knowledge to implement; someone with a basic level of networking knowledge will be able to set it up. Clustering applications usually come as part of a server enterprise package and are therefore considerably cheaper. On the down side, clustering has major limitations. You can generally only use clustering on homogeneous servers, and even then high availability is not guaranteed. Most clustering applications also use separate hardware for the cluster controller, so node agents are needed on managed application server instances.
Comparing ADC Load Balancing
In general, ADC load balancing applications are reputed to provide higher availability and better load balancing, and they can do this in less restrictive, non-homogeneous environments. Technicians like them because deployment requires no changes to existing servers or applications. Perhaps the biggest advantages are better server performance and improved server security, with the added value of optimizing the applications. The disadvantage of ADC load balancing is that it requires more advanced networking knowledge to set up and manage. If you do not have employees capable of managing the process, you may need to bring in outside expertise or train employees to become competent, which can be both time-consuming and expensive. ADC also requires additional infrastructure to be built into the server architecture, which complicates its management. The last con is that these solutions are generally more expensive than clustering.
How Do You Choose Which Option Is Best for You?
The decision on whether to use clustering or ADC load balancing is one you should take your time making; it is a complex issue with many variables. First, consider the technical expertise at your disposal and whether your skilled staff are more proficient in clustering or load balancing. Sometimes you may not be able to justify the additional expense of acquiring the necessary technical skills. Cost is another important consideration; look at it in terms of setup as well as ongoing maintenance. Perhaps most important, though, is how much server performance matters to you. When you look at the results you want and what they could mean for customer satisfaction and retention, it may well be worth spending the money on more advanced technology.
Read more: http://www.articlesbase.com/web-hosting-articles/clustering-versus-load-balancing-in-web-hosting-3318957.html#ixzz1AL6kGAQp
Under Creative Commons License: Attribution


Know More About Load Balancing

Load balancing, by definition, is the process of spreading work across a number of computer systems to increase the speed at which that work is completed. There are several methods by which load balancing can be accomplished, and the technique can use many different computer components, including both hardware and software. Load balancing is typically implemented using a cluster of servers that may or may not be in the same location. Some load balancers provide a mechanism for doing something special in the event that all backend servers are unavailable, such as forwarding to a backup load balancer or displaying a message regarding the outage. Load balancing can also be useful when dealing with redundant communications links.
Many companies see the benefits of load balancing and implement it. Companies that conduct large numbers of business transactions over the internet are prime candidates, as load balancing ensures that all of their clients and customers can complete their transactions quickly and accurately. Companies that need to network a great many computers for individual users also typically use load balancing, to ensure that all computers work properly and have enough capacity to perform their intended functions. It also ensures that the company can still do business if one server becomes corrupted or goes down for an extended period of time.
There are several different methods that are widely used for load balancing. One of the most popular methods of load balancing is Global Server Load Balancing. This technique distributes the incoming tasks to a group of servers in a particular geographic location. This technique is widely used by companies that have a global presence and have a need to satisfy customers or employees in many different geographical locations. Using Global Server Load Balancing ensures that the work load is distributed throughout the entire server system in an easy to manage manner and ensures that all geographical locations are obtaining the correct information from the correct set of servers.
Another commonly used technique is Persistence Load Balancing. This technique assigns each new client to a server in a round-robin type of allocation (for example, distributing page requests evenly across three Squid cache servers). The client is then assigned to that specific server for the duration of their relationship with the business. This ensures that no one server is overloaded with a particular type of client, such as those in a certain geographical area or those using a specific type of service, and that clients are distributed evenly throughout all of the servers the business possesses. These server assignments are typically tracked by using the customer's IP address as a unique identifier.
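A sketch of that sticky assignment: first contact gets the next server in rotation, and the client's IP pins it there afterwards (server names invented):

```python
from itertools import cycle

rotation = cycle(["s1", "s2", "s3"])
sticky = {}  # client IP -> server it was first assigned to

def assign(client_ip):
    """Round-robin on first sight; every later request sticks to that server."""
    if client_ip not in sticky:
        sticky[client_ip] = next(rotation)
    return sticky[client_ip]
```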
Read more: http://www.articlesbase.com/computers-articles/know-more-about-load-balancing-287509.html#ixzz1AL6YJwlJ
Under Creative Commons License: Attribution


Load Balancing: What is it and what can it do for my internet business?

Load balancing, simply, is the distribution of a workload across many nodes. In the webhosting industry, it is typically used for balancing http traffic over multiple servers acting together as a web front-end.

Load balancers allow users to intelligently distribute traffic addressed to a single IP across any number of Storm Servers. This means traffic can be shared across multiple servers to increase performance during times of high activity, increasing the reliability of your web application. You can scale up in anticipation of a traffic increase and back down as your traffic slows, paying only for what you need. Load balancing also allows you to build your application with redundancy in mind: if one of your server nodes fails, the traffic is distributed to your other nodes without any interruption of service.

Load balancing here is fault tolerant and redundant, utilizing multiple servers running Zeus Load Balancer 7.

When and Why to Use Load Balancing
Load balancing, by its very nature, is the solution to more than one problem. You can use it to keep your site up through traffic spikes, or to grow with you as your resource needs increase.

The most common uses for Load Balancing:

  • Failover and Redundancy: If one node fails, the traffic is redistributed and your site stays up.

  • Load Distribution: As your site gains popularity, share the load across many nodes.

  • Preparation for Traffic Spikes: If you are anticipating an increase in traffic, you can clone your server and load balance between the copies before the traffic hits.


Read more: http://www.articlesbase.com/web-hosting-articles/load-balancing-what-is-it-and-what-can-it-do-for-my-internet-business-3220463.html#ixzz1AKqPpqqr
Under Creative Commons License: Attribution

Load Balancing Web Hosting Plan

One of the most effective ways of promoting people, products, and services is through the worldwide web. It cannot be denied that practically all sorts of products, services, and even businesses are anchored on the internet. As such, it is not surprising that many businesses have created and established web sites for their products.

However, creating a website is not as easy as 1-2-3. First and foremost, you need to consider the web host that will host your site. Since there are many different web hosting plans and sites, choosing may seem difficult. If you know the factors to consider in choosing a web host, though, deciding on one is not actually that hard.

In buying web hosting, three factors have to be considered: stability, resources, and price. More than anything else, the credibility and reputation of the web host are very important; after all, they influence the success or failure of any website. It is also very important to consider the plan you will sign up for: it should feature everything you need for your website. While quality should never be sacrificed for cost, it is also advisable to consider the price of the web hosting services.

The most practical arrangement for web hosting is a shared hosting account. A lot of blogs, forums, and corporate sites are hosted on shared accounts. The downside of a shared account, though, is the number of users competing for the same resources. If too many users compete for the same resources, the server becomes overloaded and eventually fails to function; this is a server meltdown. To avoid this problem in shared hosting, load balancing is introduced.

Load balancing is a technique used by web hosting companies to divide the server load among different machines, so that server meltdown is avoided. The total load handled by the web hosting company is shared by the number of machines in the network. This setup is definitely practical, since the load is distributed across several machines instead of just one. Some of the most common resources that can be shared include bandwidth, CPU, RAM, and disk space.

However, load balancing is not always required; it can be applied when special activities such as a product launch or an event are anticipated. Because the servers share the load, the loading time of a particular web site does not increase at all. Therefore, in choosing a web host, make sure it offers load balancing in its plan.

Read more: http://www.articlesbase.com/internet-articles/load-balancing-web-hosting-plan-3590213.html#ixzz1AKoWrZZC
Under Creative Commons License: Attribution


Load Balancing Dedicated Servers

Major websites like YouTube and Gmail use this technique to improve server performance and manage load. Under heavy usage, services, applications, or hardware can become exhausted; this technique was introduced to reduce that load.
It usually depends upon two main factors: network transfer speed and server response time. Servers of equally high configuration are required, especially for programs like CGI. More RAM may also be needed, mainly for running simultaneous HTTP daemon processes. The technique is also used to manage network traffic on the servers.
A network load balancing cluster routes requests for a single IP to the available servers in the cluster. Each machine runs its processes independently of the others, with resources duplicated on each server. The database is located on one server and is accessible by all servers.
Different servers can be configured for protocols such as TCP and UDP, or for applications such as HTTP, FTP, SSL, SSL Bridge, DNS, and SIP.
Applications such as SSL can increase load on the servers and burden resources, mostly the CPU. Load balancers manage SSL connections on behalf of web servers and improve their performance. They can also help prevent DDoS attacks by providing features such as SYN cookies and delayed binding.
They also allow compression of HTTP responses, reducing load again, and can multiplex HTTP requests from clients over different TCP connections. They can buffer responses to slow clients, increasing efficiency, and support HTTP caching, so cached content can be served without reaching the web servers.
Load balancing clusters also allow administrators to control and track traffic. To increase safety, they can hide HTTP error pages. Other main features include traffic prioritization and Global Server Load Balancing across dedicated servers.
They can also send different requests to different servers and manage traffic between multiple ISPs. Load balancers can authenticate clients before they access websites, and apply Layer 7 security policies to protect HTTP and HTTPS websites.
Spam detection from known spammers is yet another feature of load balancers.
Read more: http://www.articlesbase.com/web-hosting-articles/load-balancing-dedicated-servers-1902484.html#ixzz1AKoLBB9A
Under Creative Commons License: Attribution


Thursday, 6 January 2011

Clustering vs. Load Balancing

Before you can talk about differences between clustering and load balancing, and there are more than a few, you’ve got to get the definitions straight. Clustering is often understood to mean the capability of some software to provide load balancing services, and load balancing is often used as a synonym for a hardware- or third-party-software-based solution.
In practice, clustering is usually used with application servers like IBM WebSphere, BEA WebLogic and Oracle AS (10g). Also being used in that environment are load balancing features found in Application Delivery Controllers (ADC) like BIG-IP. (For simplicity, we will talk about clustering versus ADC approaches.)
Scalability, horizontally speaking
There are hardware load balancers, of course, but there we talk about pools or farms, the server groupings where application requests get distributed. It is in the software world that the term cluster is applied to that same group.
Clustering will typically convert one instance of an application server to a master controller, then process/distribute requests to multiple instances using such industry standard algorithms as round robin, weighted round robin or least connections. Clustering is similar to load balancing in that it has horizontal scalability, a nearly transparent way to add additional instances of application servers for increased capacity or response time performance. To ensure that an instance is actually available, clustering approaches typically use an ICMP ping check or, sometimes, HTTP or TCP connection checks.
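The TCP variant of such an availability check can be sketched in a few lines; host and port are whatever your instances actually listen on:

```python
import socket

def tcp_alive(host, port, timeout=1.0):
    """Crude health check: the instance counts as up if a TCP connect succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, ...
        return False
```

An ICMP ping check works the same way at a lower layer, but it only proves the host is up, not that the application is listening; the TCP check is the stronger of the two simple probes.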
Health and transparency
For load balancing, ADCs support the same industry algorithms, but have additional, complex number-crunching processes, and check such parameters as per-server CPU and memory utilization, fastest response times, etc. ADCs also support more robust health monitoring than the simple app server clustering solutions. This means they can verify content and do passive monitoring, dispensing with even the low impact of health checks on app server instances.
For applications that require the user to interact with the same server during a session, clustering uses server affinity to get the user there. This is most common during the execution of a process like order entry, where the session is used between pages (requests) to store data needed to close a transaction, like a shopping cart.
For the same situation, ADCs use persistence. Clustering solutions are usually somewhat limited as to the variables they can use, while ADCs can not only use traditional application variables but also get other information from the application or network-based data.
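One common way to implement persistence, hashing a session variable such as a cookie value or source IP so the same key always lands on the same server, can be sketched like this (server names are invented):

```python
import hashlib

def pick_server(servers, key):
    """Deterministic persistence: the same key (cookie value, source IP,
    or any other application variable) always maps to the same server."""
    digest = hashlib.sha256(key.encode()).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]

servers = ["app1", "app2", "app3"]
# The same session cookie maps to the same server on every request.
assert pick_server(servers, "session=abc123") == pick_server(servers, "session=abc123")
```

Note that this simple modulo scheme reshuffles most sessions when a server is added or removed; production devices use stickiness tables or consistent hashing to avoid that.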
More than a few clustering solutions need node agents deployed on each instance of an application server that is clustered by a controller. This may not be much of a burden to deploy and manage, since the agent is often already in place, but it still means more processes running on the servers and consuming memory and CPU resources. Of course, it also adds another possible failure point to the data path. Since ADCs need no server-side components, they remain completely transparent.
Making the choice
Some would ask: why do the extra work of building a distributed software system and cluster server setup when you can have multiple servers fulfilling specific roles, such as separate database servers, web servers, and mail servers, whenever necessary?
So, how do you choose? That depends on the reasons you are considering this kind of solution in the first place, and (perhaps) whether or not you have to make an additional purchase to achieve clustering capabilities for the particular application server you have. There is also the broader question of whether or not you need (or want) to provide support for multiple application server brands. Clustering, of course, is proprietary to the application server, but ADCs can provide services for any and all applications or web servers.
Clustering checklist
Pros:
*Typically available with application server’s enterprise package
*Doesn't require the highest level of networking know-how
*Usually less costly than redundant ADC deployments
Cons:
*High availability not assured with clustering solutions
*Best practices deploy the cluster controller on separate hardware
*Node agents required on managed app server instances
*Clustering is "proprietary" (you can cluster only homogeneous servers)
ADC checklist
Pros:
*Provides high availability and load balancing in heterogeneous environments
*Added value of application optimization, security and acceleration
*No changes required to applications or servers where they’re deployed
Cons:
*An additional piece of infrastructure in the architecture
*Generally more costly than clustering solutions
*Could require new skill set to deploy/manage
Recommendations
Get more insight into performance, configurations and case studies by reading some testing-based articles on ADCs, and testing-based reviews of server clustering. Look for case studies that mirror your own situation, as closely as possible, and talk to people who are doing what you are planning (or thinking about). Unlike government going into the car business or taking over health care, do not do something quickly just to be seen doing something. Take care with this decision.
Read more: http://www.articlesbase.com/web-hosting-articles/clustering-vs-load-balancing-1235211.html#ixzz1AKnlvz5W
Under Creative Commons License: Attribution

Load balancing in computer technology

Load balancing (performed by a load balancer) is a service that distributes the workload across a group of networked servers so that computing resources are used optimally.

In computing, extensive calculations or large volumes of requests are handled by distributing the load over several concurrent systems. This distribution can take very different forms. A simple form of load distribution takes place, for example, on computers with multiple processors: each process can run on its own processor. How processes are distributed across processors can have a major impact on overall system performance, since, for example, cache contents are local to each processor.

Another method is found in computer clusters. Here, several computers are joined into a composite that mostly behaves to the outside world as a single system. This is implemented with server load balancing. Possible methods include installing a dedicated computer that distributes the requests, or using DNS with round-robin.
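The DNS round-robin method can be modeled with a toy resolver that rotates its address list on each query, so successive clients start with a different server. The hostname and addresses here are invented:

```python
from collections import deque

class RoundRobinDNS:
    """Toy model of DNS round-robin: each lookup returns the address
    list rotated by one relative to the previous lookup."""

    def __init__(self, records):
        self.records = {name: deque(addrs) for name, addrs in records.items()}

    def resolve(self, name):
        addrs = self.records[name]
        result = list(addrs)
        addrs.rotate(-1)  # the next query sees a different first address
        return result

dns = RoundRobinDNS({"www.example.com": ["10.0.0.1", "10.0.0.2", "10.0.0.3"]})
print(dns.resolve("www.example.com")[0])  # 10.0.0.1
print(dns.resolve("www.example.com")[0])  # 10.0.0.2
```

Real DNS servers behave this way at the zone level; clients that cache responses weaken the effect, which is one reason dedicated load balancers exist.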

Load distribution also takes place in large server farms that, for example, answer HTTP requests. Upstream systems (front-end servers) distribute the individual requests to the back-end servers according to specified criteria. Information from the HTTP request is used to route all packets belonging to one user's session to the same server. This is also important when SSL is used to encrypt the communication, so that a new SSL handshake does not have to be performed for each request.

A good load-distribution implementation requires information about the current utilization of the target systems.
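A minimal sketch of utilization-aware selection, assuming each back-end reports a load figure between 0 and 1 (the names and numbers are made up; real systems gather these figures via monitoring agents):

```python
def pick_least_loaded(utilization):
    """Pick the back-end reporting the lowest utilization (0.0 to 1.0)."""
    return min(utilization, key=utilization.get)

# Hypothetical utilization figures reported by the back-ends:
load = {"backend1": 0.82, "backend2": 0.35, "backend3": 0.61}
print(pick_least_loaded(load))  # backend2
```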

In the broadest sense, load balancing is also understood as a mechanism for resilience: building a cluster and distributing queries across individual systems increases reliability, provided that the failure of a system is detected and the request is automatically transferred to another system.
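That failover behavior can be sketched as a retry loop; `send` and `is_alive` here are stand-ins for real transport and health-check functions, not any particular library's API:

```python
def send_with_failover(request, servers, send, is_alive):
    """Resilience via load balancing: if the chosen system has failed,
    the request is automatically transferred to another one."""
    last_error = None
    for server in servers:
        if not is_alive(server):
            continue  # skip servers already known to be down
        try:
            return send(server, request)
        except ConnectionError as exc:
            last_error = exc  # server died mid-request; try the next one
    raise RuntimeError("all servers failed") from last_error
```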

Read more: http://www.articlesbase.com/web-hosting-articles/load-balancing-in-computer-technology-3642290.html#ixzz1AKms1pml
Under Creative Commons License: Attribution
