On 28.03.2015 at 10:17, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dl...@gmail.com> wrote:
> On Friday, 27 March 2015 at 16:48:26 UTC, Sönke Ludwig wrote:
>>> 1. No stack.

>> That reduces the memory footprint, but doesn't reduce latency.

> It removes hard-to-spot dependencies on thread-local storage.

You can access TLS from an event callback just as easily as from a fiber.
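
To make this concrete, here is a minimal D sketch (all names are invented for illustration): module-level variables are thread-local by default in D, and a fiber resumed on a thread sees exactly the same TLS slots as a plain callback running on that thread.

import core.thread : Fiber;
import std.stdio : writeln;

int requestCount; // module-level variables are thread-local in D

void onRequest() // a plain event callback (hypothetical name)
{
    ++requestCount;
    writeln("callback sees ", requestCount);
}

void main()
{
    onRequest(); // prints: callback sees 1

    auto f = new Fiber({
        ++requestCount; // same TLS slot as the callback used
        writeln("fiber sees ", requestCount);
    });
    f.call(); // prints: fiber sees 2
}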


>>> 2. Batching.

>> Can you elaborate?

> Using fibers, you deal with a single unit. Using events, you deal with
> a request broken down into "atomic parts". You take a group of events
> by timed priority and sort them by type. Then you process all events
> of type A, then all events of type B, etc. Better cache locality, more
> fine-grained control over scheduling, easier migration to other
> servers, etc.

And why can't you do the same with fibers and schedule the fibers accordingly? There is no difference between the two models, except that fibers provide additional persistent state in the form of a stack.
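A hedged sketch in D of the batching idea applied to fibers (the scheduler and all names are invented, not vibe.d's actual API): pending tasks are tagged with a type and sorted before each scheduling pass, so the cache-locality benefit is available regardless of whether the unit of work is a callback or a fiber.

import core.thread : Fiber;
import std.algorithm : sort;

enum EventType { dbRead, render, log }

struct Task
{
    EventType type;
    Fiber fiber; // could just as well be a `void delegate()` callback
}

void schedulingPass(Task[] queue)
{
    // Sort by type so all tasks of one kind run back to back,
    // which is the cache-locality effect described above.
    queue.sort!((a, b) => a.type < b.type);
    foreach (t; queue)
        if (t.fiber.state != Fiber.State.TERM)
            t.fiber.call(); // resume the fiber until it yields again
}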


> But the fundamental problem with fibers that are bound to a thread
> does not depend on long-running requests. You also get it with
> multiple requests under normal workloads; it is rather obvious:

> @time tick 0:
>
>   Thread 1…N-1:
>     100 ms workloads
>
>   Thread N:
>     Fiber A: async request from memcache (1 ms)
>     Fiber B: async request from memcache (1 ms)
>     ...
>     Fiber M: async request from memcache…
>
> @time tick 101:
>
>   Thread 1...N-1:
>     free
>
>   Thread N:
>     Fiber A: compute load 100 ms
>
> @time tick 201:
>
>   Thread N:
>     Fiber B: compute load 100 ms
>
> etc.

It's you who brought up the randomization argument. Tasks are assigned to a more or less random thread that is currently in its scheduling phase, so your constructed situation is simply *highly* unlikely.
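
For illustration, a minimal sketch of that assignment policy in D (invented names, not the actual vibe.d scheduler): each new task is appended to a uniformly chosen worker queue, so all of the short memcache-style tasks piling up behind one 100 ms task on a single thread would require an unlikely run of identical random choices.

import std.random : uniform;

struct Worker
{
    void delegate()[] queue; // pending tasks for one thread
}

void spawnTask(Worker[] workers, void delegate() task)
{
    // Pick a more or less random worker that is ready to schedule.
    auto idx = uniform(0, workers.length);
    workers[idx].queue ~= task;
}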


> Also keep in mind that in a real-world setting you deal with spikes,
> so the load balancer should fire up new instances long before your
> capacity is saturated. That means you need to balance loads over your
> threads if you want good average latency.

They *do* get balanced over threads, just like requests get balanced over instances by the load balancer, even though requests are not moved between instances. But IMO it doesn't make sense to take this argument further without some actual benchmarks. It's not at all as clear as you'd like what the effects on overall performance and on average/~maximum latency are in practice for different applications.


> Anything less makes fibers a toy language feature IMO.
