Hi Kashyap,

> This is the definition of loadbalancing from
> https://www.nginx.com/resources/glossary/load-balancing/

OK, that's what I thought.


> [image: image.png]

Thanks for the nice picture :)


> This is not a static system. The argument being that the database could
> have different read and write throughputs and you can potentially serve a
> large number of reading clients by doing this kind of load balancing. (app
> 0, 1 and 2 are identical instances). I hope this clarifies.

There are several ways to implement such a setup.

There is no problem putting app0, app1, app2 etc. on different machines, but I
think these are neither the bottleneck nor the load balancing itself.

The crucial point is how you structure and distribute the common database.

One way would be to put the DB on its own, separate machine, and have the apps
communicate with it via remote/2 etc. calls. If the apps modify that DB a lot,
then it is surely the bottleneck, and all the load balancing is useless.
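As a rough illustration of why that is (a Python sketch with hypothetical names, not PicoLisp's actual remote/2 mechanism): when every app writes through the same central store, all writes funnel through one contention point, no matter how many app instances you add.

```python
import threading

# Hypothetical sketch: one central store shared by all app instances.
# Every write passes through a single lock, so heavy writing serializes
# here regardless of how many apps the load balancer spreads work over.
class CentralDB:
    def __init__(self):
        self.data = {}
        self.lock = threading.Lock()   # the single contention point

    def put(self, key, val):
        with self.lock:                # all apps queue up here
            self.data[key] = val

    def get(self, key):
        return self.data.get(key)

db = CentralDB()

def app(n):
    # each "app" instance (app0, app1, app2) writes via the central DB
    for i in range(100):
        db.put((n, i), i)

threads = [threading.Thread(target=app, args=(n,)) for n in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(db.data))   # 300 entries, every one written through one lock
```

Reads could still be spread out, but the write path stays a single queue.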

Another way is to have the apps maintain parts of the data model in local,
private DBs, and synchronize only certain parts to the central DB, resulting in
lower stress on it. But whether this works depends on the data model.
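Sketched minimally (again in Python, with made-up names like SHARED_KEYS standing in for whatever the model designates as shared), each app writes locally and only the shared subset ever reaches the central DB:

```python
# Hypothetical sketch: private local stores, selective sync to a central DB.
SHARED_KEYS = {"orders", "inventory"}   # assumption: the shared part of the model

central = {}                            # the central DB, touched only on sync

class App:
    def __init__(self, name):
        self.name = name
        self.local = {}                 # private local DB

    def put(self, key, val):
        self.local[key] = val           # every write lands locally first

    def sync(self):
        # push only the whitelisted shared keys to the central DB
        for key in SHARED_KEYS & self.local.keys():
            central[key] = self.local[key]

a = App("app0")
a.put("scratch", 1)     # private working data, never synced
a.put("orders", [42])   # shared data, synced on demand
a.sync()
print(central)          # {'orders': [42]} — 'scratch' stays local
```

The central DB now only sees traffic proportional to the shared slice of the model, not to every write the apps make.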

Yet another way is to have each app hold a copy of the full DB, and mirror
everything to everybody else, but this requires rules about who is allowed to
modify which parts at any given moment. I currently use this in an application
with two central DBs and one remote DB per user, permanently mirrored between
the central DB and all user DBs on mobile devices. Naturally this involves a
big synchronization overhead, but it works quite well here, as there are only
about 40 mobile DBs.
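The ownership rule from that last variant can be sketched like this (a toy Python model, not the actual application): every node holds the full data set, a record can only be changed by its current owner, and each accepted change is mirrored to all copies.

```python
# Hypothetical sketch: full mirroring with per-record ownership rules.
class Node:
    def __init__(self, name, peers):
        self.name = name
        self.db = {}        # full copy of the data: key -> (value, owner)
        self.peers = peers  # all nodes in the cluster, including self

    def modify(self, key, val, owner):
        # ownership rule: only the current owner may change a record
        current = self.db.get(key, (None, owner))[1]
        if owner != current:
            raise PermissionError(f"{owner} does not own {key}")
        for node in self.peers:         # mirror the change to everybody
            node.db[key] = (val, owner)

peers = []
a, b = Node("a", peers), Node("b", peers)
peers.extend([a, b])

a.modify("doc", "v1", owner="a")   # accepted: "a" owns "doc"
print(b.db["doc"])                 # ('v1', 'a') — mirrored to node b
```

The mirroring loop is where the synchronization overhead lives: every accepted write fans out to every copy, which is why this only stays cheap with a modest number of replicas.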

☺/ A!ex

-- 
UNSUBSCRIBE: mailto:picolisp@software-lab.de?subject=Unsubscribe
