On 27.03.2015 at 17:11, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dl...@gmail.com> wrote:
On Friday, 27 March 2015 at 16:06:55 UTC, Dicebot wrote:
On Friday, 27 March 2015 at 15:28:31 UTC, Ola Fosheim Grøstad wrote:
No... E.g.:

On the same thread:
1. fiber A receives request and queries DB (async)
2. fiber B computes for 1 second
3. fiber A sends response.

Latency: 1 second even if all the other threads are free.
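The three-step scenario above can be sketched with Python's asyncio as a stand-in for a single-threaded fiber scheduler (timings scaled down from 1 second to 200 ms; coroutines play the role of fibers A and B):

```python
import asyncio
import time

COMPUTE_SECONDS = 0.2  # stand-in for the 1-second computation, scaled down

async def fiber_a(log):
    # 1. fiber A receives request and queries DB (async); the query is fast
    await asyncio.sleep(0.01)
    # 3. fiber A sends response -- record when that happens
    log.append(time.monotonic())

async def fiber_b():
    # 2. fiber B computes: blocking work monopolizes the scheduler thread
    time.sleep(COMPUTE_SECONDS)

async def main():
    log = []
    start = time.monotonic()
    await asyncio.gather(fiber_a(log), fiber_b())
    return log[0] - start

latency = asyncio.run(main())
print(f"fiber A latency: {latency:.2f}s")  # ~COMPUTE_SECONDS, not ~0.01s
```

Fiber A yields while its DB query is in flight, fiber B then occupies the thread for the whole computation, and A's response is only sent afterwards, even though every other thread (here: none) is free.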

This is a problem of having a blocking 1-second computation in the same
fiber pool as the request handlers -> broken application design. Hiding
that issue by moving fibers between threads just makes things worse.

Not a broken design. If I have to run multiple servers just to handle an
image upload or generate a PDF, then you are driving up the cost of the
project, and developers would be better off with a different platform.

You can create more complicated setups where multiple 200ms computations
cause the same latency while the CPU is 90% idle. This is simply not good
enough; if fibers carry this cost, then it is better to just use an
event-driven design.

So what happens if 10 requests come in at the same time? Does moving things around still help you? No.

BTW, why would an event-driven design be any better? You'd have exactly the same issue.
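The point that an event-driven design hits the same wall can be illustrated with the same scaled-down asyncio sketch: ten CPU-bound "requests" arrive at once on one single-threaded loop, and whether the handlers are written as fibers, coroutines, or callbacks, the last request waits behind the nine before it:

```python
import asyncio
import time

async def handle(i, done):
    # each "request handler" burns 20 ms of blocking CPU before responding
    time.sleep(0.02)
    done.append(time.monotonic())

async def main():
    done = []
    start = time.monotonic()
    # 10 requests arriving at the same time on one scheduler thread
    await asyncio.gather(*(handle(i, done) for i in range(10)))
    return done[-1] - start

last_latency = asyncio.run(main())
print(f"10th request served after {last_latency * 1000:.0f} ms")
```

The last response arrives only after all ten computations have run back to back; moving work between threads does not change this once every thread is busy, and a callback-based event loop queues the work in exactly the same way.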
