Round Robin System

The round-robin method gives all elements of a limited group access to a resource evenly and in a fixed sequence, usually from top to bottom; once the end of the list is reached, the process starts again at the top. Put simply, turns are taken "alternately, one after the other." Because the servers of the NTP pool are distributed all over the world, the system offers a high level of redundancy and reliability, which makes it a dependable resource for global timekeeping on the Internet.
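The rotation described above can be sketched in a few lines of Python. The server names here are purely illustrative placeholders, not real pool hostnames; `itertools.cycle` walks the list from top to bottom and starts over when the end is reached.

```python
from itertools import cycle

# Hypothetical server names, used purely for illustration.
servers = ["server-a", "server-b", "server-c"]

# cycle() yields the list elements in order and wraps around at the end,
# which is exactly the "alternately, one after the other" sequence.
rotation = cycle(servers)

# The first five turns: a, b, c, then back to the top of the list.
first_five = [next(rotation) for _ in range(5)]
print(first_five)
```

Running this shows the wrap-around after the third element: the fourth and fifth turns return to the top of the list.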

But what happens if the targeted server is unavailable or returns nonsensical times? In the NTP pool, a monitoring system removes unreachable or inaccurate servers from the DNS rotation, so clients are simply directed to the next working server in the list.

Round Robin Load Balancing

Round robin load balancing is a simple method of distributing client requests across a group of servers: each request is forwarded to the next server in turn, and after the last server the load balancer returns to the top of the list and repeats the process.

[Figure: A schematic representation of the round-robin method in a network system]

Round robin is the most commonly used load balancing algorithm because it is easy to design and implement. With this method, client requests are forwarded cyclically to the available servers. Round-robin load balancing works best when the servers have approximately the same computing power and storage capacity.

How does Round Robin load balancing work?

With round-robin network load balancing, the connection requests are distributed to the web servers in the order in which the requests are received. As a simplified example, let's assume that a company has a cluster of three servers: Server A, Server B and Server C.

  • The first request is sent to Server A
  • The second request is sent to Server B
  • The third request is sent to Server C

The load balancer forwards the requests to the servers in this order, and a fourth request would start over at Server A. This distributes the load evenly across the servers so the cluster can cope with high data traffic.
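The three-server example above can be sketched as a minimal load balancer in Python. This is an illustrative sketch, not a production implementation: the class name and the server labels are invented for the example, and a real load balancer would also handle health checks and concurrency.

```python
class RoundRobinBalancer:
    """Forwards each incoming request to the next server in a fixed list."""

    def __init__(self, servers):
        self.servers = servers
        self.index = 0

    def next_server(self):
        server = self.servers[self.index]
        # Advance, wrapping around to the top of the list after the last server.
        self.index = (self.index + 1) % len(self.servers)
        return server


# The example cluster from the text: three servers of similar capacity.
balancer = RoundRobinBalancer(["Server A", "Server B", "Server C"])

# Four requests: the fourth wraps back to Server A.
assignments = [balancer.next_server() for _ in range(4)]
print(assignments)
```

The modulo operation is what implements the "return to the top of the list" step of the algorithm.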
