What are Distributed Systems?

A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another [Source Google]. In simple terms, instead of having just one central server, you have multiple servers or nodes that share the traffic amongst themselves.

Well, here are some of the characteristics of a truly distributed system.

It should be:

1) Scalable: Our distributed system should be able to scale with increasing demand. Take the case of a payment system: it should work in the same fashion for 20,000 requests/month and for 20,000,000 requests/month. There are two ways to scale your system:

Horizontal Scaling: You scale horizontally by adding more servers to the existing pool of resources.

Vertical Scaling: You scale vertically by adding more power (CPU, RAM, storage, etc.) to an existing server.

2) Reliability: Our distributed system should be reliable, that is, it should keep on delivering even if some of its services fail. This can be achieved by replacing a faulty node with a new one, or by building in fault tolerance, for example with message brokers such as Kafka or RabbitMQ. A small sketch of the redundancy idea follows this list.

3) Highly Available: An ideal distributed system would always be available, but in practice that is impossible to achieve, so we aim instead for a highly available system.
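
To make the redundancy idea concrete, here is a minimal Python sketch. The server pool and the call function are hypothetical stand-ins for real remote calls; the point is that a request succeeds as long as at least one replica is up:

```python
import random

# Hypothetical pool of redundant servers; in a real system these
# would be network addresses behind a service registry.
SERVERS = ["server-a", "server-b", "server-c"]

def call(server: str, request: str) -> str:
    """Simulated remote call: each server randomly fails, standing
    in for crashes, timeouts, or network errors."""
    if random.random() < 0.3:
        raise ConnectionError(f"{server} did not respond")
    return f"{server} handled {request!r}"

def reliable_call(request: str) -> str:
    """Try every replica before giving up: the request succeeds
    as long as at least one server in the pool is healthy."""
    for server in random.sample(SERVERS, len(SERVERS)):
        try:
            return call(server, request)
        except ConnectionError:
            continue  # fail over to the next replica
    raise RuntimeError("all replicas are down")

print(reliable_call("charge $10"))
```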

How to measure the efficiency of a distributed network

Two standard measures of efficiency are response time (or latency), which denotes the delay to obtain the first item, and throughput (or bandwidth), which denotes the number of items delivered in a given time unit.
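
As a rough illustration, here is a small Python sketch that measures both numbers for a dummy request handler (the handler and its 5 ms of simulated work are made up for this example):

```python
import time

def handle_request() -> None:
    time.sleep(0.005)  # pretend each request takes ~5 ms of work

# Latency: the delay experienced by a single request.
start = time.perf_counter()
handle_request()
latency = time.perf_counter() - start
print(f"latency  ~ {latency * 1000:.1f} ms")

# Throughput: how many requests complete in a fixed time window.
window_seconds, completed = 1.0, 0
deadline = time.perf_counter() + window_seconds
while time.perf_counter() < deadline:
    handle_request()
    completed += 1
print(f"throughput ~ {completed} requests/second")
```

With a single synchronous worker, throughput is roughly 1/latency; real systems serve many requests concurrently, which is why throughput can grow well beyond that even when per-request latency stays flat.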

Load Balancers

One of the most important components of any distributed system is the load balancer. As the name suggests, it is responsible for distributing traffic/requests across different servers/nodes. It does this by observing the health of each server: if a server is not responding or has an elevated timeout, the load balancer stops sending traffic to it. The load balancer tries to distribute the load evenly across the servers.

A load balancer typically sits between the client and the servers, distributing the load among the servers using different load balancing algorithms. This prevents requests from piling up at any one server, improving overall availability and responsiveness.

Load Balancing Algorithms

Before forwarding a request to a server, a load balancer checks whether that server is responding appropriately to requests. At the most basic level, this is done via health-check APIs on the servers, as in the sketch below.
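
Here is a minimal sketch of such a check, assuming each server exposes a hypothetical /health endpoint that returns HTTP 200 when the server is fit to serve traffic:

```python
import urllib.request

def is_healthy(server_url: str, timeout: float = 2.0) -> bool:
    """Probe the server's (assumed) /health endpoint; any connection
    error, non-2xx status, or timeout marks the server unhealthy."""
    try:
        with urllib.request.urlopen(f"{server_url}/health",
                                    timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:  # covers refused connections, timeouts, HTTP errors
        return False

# A load balancer would run checks like this periodically and keep
# only the servers that pass in its active rotation.
servers = ["http://10.0.0.1:8080", "http://10.0.0.2:8080"]
in_rotation = [s for s in servers if is_healthy(s)]
```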

There is a variety of load balancing methods, which use different algorithms for different needs [Source educative.io]. A minimal sketch of a few of them follows this list:

  • Least Connection Method — This method directs traffic to the server with the fewest active connections. This approach is quite useful when there are a large number of persistent client connections which are unevenly distributed between the servers.
  • Least Response Time Method — This algorithm directs traffic to the server with the fewest active connections and the lowest average response time.
  • Least Bandwidth Method — This method selects the server that is currently serving the least amount of traffic measured in megabits per second (Mbps).
  • Round Robin Method — This method cycles through a list of servers and sends each new request to the next server. When it reaches the end of the list, it starts over at the beginning. It is most useful when the servers are of equal specification and there are not many persistent connections.
  • Weighted Round Robin Method — The weighted round-robin scheduling is designed to better handle servers with different processing capacities. Each server is assigned a weight (an integer value that indicates its processing capacity). Servers with higher weights receive new connections before those with lower weights, and they also get more connections overall.
  • IP Hash — Under this method, a hash of the IP address of the client is calculated to redirect the request to a server.
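
To make a few of these concrete, here is a minimal Python sketch of the Round Robin, Least Connection, and IP Hash selection rules. The Server class and its connection counts are invented for illustration; a real load balancer tracks this state from live traffic:

```python
import hashlib
import itertools
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    active_connections: int = 0  # a real balancer tracks this live

servers = [Server("s1"), Server("s2", 4), Server("s3", 1)]

# Round Robin: hand each new request to the next server in the cycle.
_rotation = itertools.cycle(servers)
def round_robin() -> Server:
    return next(_rotation)

# Least Connection: pick the server with the fewest active connections.
def least_connection() -> Server:
    return min(servers, key=lambda s: s.active_connections)

# IP Hash: the same client IP always maps to the same server.
def ip_hash(client_ip: str) -> Server:
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(round_robin().name)            # s1, then s2, s3, s1, ...
print(least_connection().name)       # s1 (0 active connections)
print(ip_hash("203.0.113.7").name)   # stable for this client IP
```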

As we all know, we always try to avoid a single point of failure. Sometimes the load balancer itself can become that single point of failure; we avoid this by running a second load balancer in conjunction with the first, where only one is active at a time.
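
Here is a toy sketch of that active-passive pairing, assuming a heartbeat mechanism; the timeout value and helper names are hypothetical:

```python
import time

HEARTBEAT_TIMEOUT = 3.0  # hypothetical: seconds of silence tolerated

last_heartbeat = time.monotonic()  # updated on every ping from the active LB

def heartbeat_age() -> float:
    """Seconds since the active load balancer last reported in."""
    return time.monotonic() - last_heartbeat

def standby_should_take_over() -> bool:
    # The passive load balancer promotes itself only once the active
    # one has been silent for longer than the timeout.
    return heartbeat_age() > HEARTBEAT_TIMEOUT
```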

CAP Theorem

The CAP theorem states that a distributed system can provide at most two out of three guarantees: consistency, availability, and partition tolerance. In other words, we need to pick two of the three for our distributed system:

Consistency: All the nodes see the same data at the same time. Consistency is achieved by updating several nodes before allowing reads.

Availability: Every request gets a response, either success or failure.

Partition Tolerance: Our distributed system continues to work despite message loss or partial failure. A partition-tolerant system can sustain any amount of failure that doesn't bring down the entire system.

Why can't we achieve all three simultaneously?

We cannot build a general data store that is continually available, sequentially consistent, and tolerant to any partition failures; we can only build a system that has any two of these three properties. To be consistent, all nodes must see the same set of updates in the same order. But if the network suffers a partition, updates in one partition might not make it to the other partitions before a client reads from the out-of-date partition after having read from the up-to-date one. The only thing that can be done to cope with this possibility is to stop serving requests from the out-of-date partition, but then the service is no longer 100% available. [Source Educative.io]
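
Here is a toy Python sketch of that trade-off with two replicas and a simulated partition: the CP choice refuses to answer from the stale replica, while the AP choice answers but may return old data. All names here are invented for illustration:

```python
class Replica:
    """A toy data store node holding a simple key-value map."""
    def __init__(self, name: str):
        self.name = name
        self.data = {}

a, b = Replica("A"), Replica("B")
partitioned = True            # replication between A and B is cut off

a.data["balance"] = 100       # the update lands on A only
if not partitioned:
    b.data["balance"] = 100   # replication would normally copy it to B

# CP choice: stay consistent, refuse answers from the stale side.
def read_cp(replica: Replica, key: str):
    if partitioned and key not in replica.data:
        raise RuntimeError(f"{replica.name} unavailable during partition")
    return replica.data[key]

# AP choice: stay available, but possibly serve stale data.
def read_ap(replica: Replica, key: str):
    return replica.data.get(key, "<stale/missing>")

print(read_ap(b, "balance"))  # answers, but with out-of-date data
```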
