I guess for multiple images on the same server we need to spawn off images listening on different ports.
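A minimal sketch of that idea, assuming the Zinc HTTP server (ZnServer) that ships with Pharo: each image starts its own server on a distinct port. The port numbers and response texts here are arbitrary examples.

```smalltalk
"Evaluated in the first image:"
(ZnServer on: 8081)
    onRequestRespond: [ :request |
        ZnResponse ok: (ZnEntity text: 'hello from image 1') ];
    start.

"Evaluated in the second image:"
(ZnServer on: 8082)
    onRequestRespond: [ :request |
        ZnResponse ok: (ZnEntity text: 'hello from image 2') ];
    start.
```

A load balancer in front can then distribute requests across the ports.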
On Tue, Jun 26, 2018, 14:44 Andrei Stebakov <lisper...@gmail.com> wrote:

> What would be an example of a load balancer for Pharo images? Can we run
> multiple images on the same server, or for the sake of the balancing
> configuration can we only run one image per server?
>
> On Tue, Jun 26, 2018, 14:32 Andrei Stebakov <lisper...@gmail.com> wrote:
>
>> Thanks, guys! I really appreciate your input!
>>
>> On Tue, Jun 26, 2018, 11:16 Sven Van Caekenberghe <s...@stfx.eu> wrote:
>>
>>> > On 26 Jun 2018, at 15:52, Norbert Hartl <norb...@hartl.name> wrote:
>>> >
>>> >> On 26.06.2018 at 15:41, Sven Van Caekenberghe <s...@stfx.eu> wrote:
>>> >>
>>> >>> On 26 Jun 2018, at 15:24, Norbert Hartl <norb...@hartl.name> wrote:
>>> >>>
>>> >>>> On 26.06.2018 at 14:52, Andrei Stebakov <lisper...@gmail.com> wrote:
>>> >>>>
>>> >>>> Does anyone use Pharo for micro services? I heard about Seaside and
>>> >>>> Teapot, and was wondering if Pharo can handle multiple simultaneous
>>> >>>> requests and, if it can, where it reaches its limit.
>>> >>>
>>> >>> I use it extensively. I use the Zinc-REST package to offer services.
>>> >>> How much it can handle in parallel is hard to answer; for that you
>>> >>> need to say what you are about to do. A rule of thumb is not to
>>> >>> exceed 5 parallel tasks that are working at the same time. But many
>>> >>> tasks have wait times while accessing another HTTP service, a
>>> >>> database, a filesystem, etc., so you can easily go up to 10, I guess.
>>> >>>
>>> >>> But these numbers are more of a gut feeling than something scientific.
>>> >>>
>>> >>> Norbert
>>> >>
>>> >> A single ZnServer instance in a single image can handle thousands of
>>> >> requests per second (local network, very small payload, low
>>> >> concurrency).
>>> >> On a modern multi-core / multi-processor machine with lots of memory
>>> >> you can run 10s if not 100s of Pharo images under a load balancer,
>>> >> provided you either do not share state or use a high-performance
>>> >> state-sharing technology - this is the whole point of REST.
>>> >>
>>> >> Of course, larger payloads, more complex operations, real-world
>>> >> networking, etc. will slow you down. And it is very easy to make some
>>> >> architectural or implementation error somewhere that makes everything
>>> >> slow. As they say, YMMV.
>>> >
>>> > I meant it regarding what a single image can do. And it can do
>>> > thousands of requests only if there is no I/O involved, and I doubt a
>>> > service that does not do any additional I/O will be very useful to
>>> > build. Still, I would try not to have more than 5 req/s on a single
>>> > image before scaling up. The only number I can report is that 2
>>> > images serving 30 requests/s while using MongoDB are not noticeable
>>> > in system stats.
>>> >
>>> > Norbert
>>>
>>> That is what I meant: it is an upper limit for an empty REST call; the
>>> rest depends on the application and the situation. If your operation
>>> takes seconds to complete, the request rate will go way down.
>>>
>>> But with in-memory operations and/or caching, responses can be quite
>>> fast (sub 100 ms).
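To make the load-balancer part of the question concrete: a common setup is nginx (or HAProxy) round-robining across several Pharo images on the same box. A hypothetical nginx fragment, assuming three images listening on ports 8081-8083:

```nginx
upstream pharo_images {
    # One entry per Pharo image; ports are illustrative.
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
}

server {
    listen 80;
    location / {
        proxy_pass http://pharo_images;
        # Round-robin works best when the images are stateless (REST),
        # as discussed above; nginx also supports ip_hash for stickiness.
    }
}
```

This matches the "do not share state" advice: any image can serve any request, so adding capacity is just adding another `server` line and another running image.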