Hi Alex,

This is the definition of load balancing from
https://www.nginx.com/resources/glossary/load-balancing/. nginx is
popularly used as a load balancer - while nginx is an L7 (OSI application
layer) load balancer, the term load balancer applies to other layers
too.
A load balancer <https://www.nginx.com/solutions/adc> acts as the “traffic
cop” sitting in front of your servers and routing client requests across
all servers capable of fulfilling those requests in a manner that maximizes
speed and capacity utilization and ensures that no one server is
overworked, which could degrade performance. If a single server goes down,
the load balancer redirects traffic to the remaining online servers. When a
new server is added to the server group, the load balancer automatically
starts to send requests to it.
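
To make the nginx part concrete, a minimal HTTP load-balancing config
looks roughly like this (the upstream name, hosts and ports are just
placeholders, not anything from a real setup):

    # hypothetical nginx.conf fragment: round-robin over three app instances
    http {
        upstream app_servers {
            # requests are spread across these in round-robin order by default
            server 10.0.0.1:8080;
            server 10.0.0.2:8080;
            server 10.0.0.3:8080;
        }

        server {
            listen 80;

            location / {
                # every client request is forwarded to one of the upstreams
                proxy_pass http://app_servers;
            }
        }
    }

nginx also supports other strategies (least_conn, ip_hash, weights), but
round robin is the default.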

It is common to design a web app with this
<https://dreampuf.github.io/GraphvizOnline/#digraph%20G%20%7B%0A%0A%20%20subgraph%20cluster_0%20%7B%0A%20%20%20%20style%3Dfilled%3B%0A%20%20%20%20color%3Dlightgrey%3B%0A%20%20%20%20node%20%5Bstyle%3Dfilled%2Ccolor%3Dwhite%5D%3B%0A%20%20%20%20w0%3B%0A%20%20%20%20w1%3B%0A%20%20%20%20w2%3B%0A%20%20%20%20%0A%20%20%20%20label%20%3D%20%22Web%2FApp%20servers%22%3B%0A%20%20%7D%0A%20%20%0A%20%20%0A%20%20lb%20%5Blabel%3D%22load%20balancer%22%5D%3B%0A%0A%20%20lb%20-%3E%20w0%3B%0A%20%20lb%20-%3E%20w1%3B%0A%20%20lb%20-%3E%20w2%3B%0A%20%20%0A%20%20%0A%20%20c0%20%5Blabel%3D%22client%200%22%5D%3B%0A%20%20c1%20%5Blabel%3D%22client%201%22%5D%3B%0A%20%20c2%20%5Blabel%3D%22client%202%22%5D%3B%0A%20%20%0A%20%20w0%20%5Blabel%3D%22app%200%22%5D%3B%0A%20%20w1%20%5Blabel%3D%22app%201%22%5D%3B%0A%20%20w2%20%5Blabel%3D%22app%202%22%5D%3B%0A%20%20%0A%20%20c0%20-%3E%20lb%3B%0A%20%20c1%20-%3E%20lb%3B%0A%20%20c2%20-%3E%20lb%3B%0A%20%20%0A%20%20db%20%5Blabel%3D%22Database%22%5D%0A%20%20w0%20-%3E%20db%3B%0A%20%20w1%20-%3E%20db%3B%0A%20%20w2%20-%3E%20db%3B%0A%20%20%0A%7D>
topology. (btw this online graphviz is amazing :) )

[image: image.png]
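
In case the link is unwieldy, this is the DOT source it encodes (clients
hit the load balancer, which fans requests out over three identical app
instances that share one database):

    digraph G {
      subgraph cluster_0 {
        style=filled;
        color=lightgrey;
        node [style=filled,color=white];
        w0;
        w1;
        w2;
        label = "Web/App servers";
      }

      lb [label="load balancer"];

      lb -> w0;
      lb -> w1;
      lb -> w2;

      c0 [label="client 0"];
      c1 [label="client 1"];
      c2 [label="client 2"];

      w0 [label="app 0"];
      w1 [label="app 1"];
      w2 [label="app 2"];

      c0 -> lb;
      c1 -> lb;
      c2 -> lb;

      db [label="Database"];
      w0 -> db;
      w1 -> db;
      w2 -> db;
    }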


This is not a static system. The argument is that the database can have
different read and write throughputs, and you can potentially serve a
large number of reading clients by doing this kind of load balancing (app
0, 1 and 2 are identical instances). I hope this clarifies.

Regards,
Kashyap

On Fri, Jun 7, 2019 at 6:59 AM Alexander Burger <a...@software-lab.de> wrote:

> Hi Kashyap,
>
> > If an OS instance can only run N picolisp processes. What would be the
> > strategy to serve more than N concurrent clients?
>
> As I said, I don't know about load balancers. But in any case it depends
> on the
> structure of the system. Is it a static system, which can be put on several
> machines in parallel, and you just need to distribute client requests? Or
> are
> there interdependent databases, which must be replicated and synchronized,
> introducing questions far beyond simple load balancing?
>
> ☺/ A!ex
>
> --
> UNSUBSCRIBE: mailto:picolisp@software-lab.de?subject=Unsubscribe
>