Load sharing means taking some share of the incoming requests and handling it. For example, if there are 100 requests/sec and a single server can process only 20 requests/sec, then you need to either scale up that server or put parallel servers alongside it to take the remaining 80 requests.
So you could have 5 servers in all, each handling 20 requests, thereby sharing the load.
But there has to be some traffic policeman who diverts each request to a free server. This policeman is called the load balancer.
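The "traffic policeman" idea can be sketched in a few lines of Java. This is a minimal, hypothetical in-process round-robin dispatcher (class and method names are my own, not from any library): each call to pick() hands the next request to the next server in the pool, so 5 servers at 20 requests/sec each cover the 100 requests/sec.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a load balancer: rotates incoming
// requests across a fixed pool of servers.
class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    // Returns the server that should handle the next request.
    // AtomicInteger keeps the counter safe under concurrent requests.
    String pick() {
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }
}
```

With servers s1..s5 in the pool, successive calls to pick() return s1, s2, s3, s4, s5, s1, ... so each server sees one fifth of the traffic.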
DNS Round Robin: spreading incoming requests among a number of IP addresses registered under the same DNS name. Each subsequent request is sent to the next address in the list; when the end of the list is reached, the next request is sent to the first address again.
It is a LOAD SHARING technique rather than load balancing. Load balancing distributes connection load across multiple servers, giving preference to the servers with the least congestion. In round robin's case, distribution stays on a rigid rotating basis, one address per request, regardless of how busy each server actually is.
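To make the sharing-vs-balancing distinction concrete, here is a hypothetical least-connections picker in Java (names are my own for illustration): instead of rotating blindly, it tracks active connections per server and always chooses the least congested one, which is the behavior the post attributes to true load balancing.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of load *balancing*: picks the server with
// the fewest active connections, unlike round robin's rigid rotation.
class LeastConnectionsBalancer {
    private final Map<String, Integer> active = new HashMap<>();

    LeastConnectionsBalancer(String... servers) {
        for (String s : servers) {
            active.put(s, 0);
        }
    }

    // Choose the least congested server and count the new connection.
    String pick() {
        String best = null;
        for (Map.Entry<String, Integer> e : active.entrySet()) {
            if (best == null || e.getValue() < active.get(best)) {
                best = e.getKey();
            }
        }
        active.merge(best, 1, Integer::sum);
        return best;
    }

    // Call when a server finishes handling a request.
    void release(String server) {
        active.merge(server, -1, Integer::sum);
    }
}
```

If one server is stuck on slow requests, the others keep absorbing new traffic, which plain round robin (DNS-based or otherwise) cannot do.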