On 03.03.2014 22:58, Bienlein wrote:
On Monday, 3 March 2014 at 14:27:53 UTC, Sönke Ludwig wrote:

Just out of curiosity, what did you miss in vibe.d regarding fiber
based scheduling?

There is something else I forgot to mention. One scenario I'm thinking
of is listening on a large number of connections, say more than 100,000.
This results in blocking I/O on all of those connections. Fibers in D
are more like continuations that are distributed over several kernel
threads. The way Sean Kelly has implemented the FiberScheduler, a fiber
is invoked when it receives an item, such as data arriving on the
connection it serves, as in my scenario. At least that is how I
understood the implementation. So I can have something like 100,000
simultaneous connections as in Go, without having to use Go (the Go
language is too simple for my taste).

In vibe.d, there are basically two modes of fiber scheduling. The usual mode is purely driven by the event loop: once a task/fiber triggers a blocking operation, let's say a socket receive operation, it registers its handle for the corresponding event and calls an internal rawYield() function. Once the event fires, the fiber is resumed.
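From the application's point of view, this is what lets plain blocking-style code scale: each connection runs in its own fiber, and a "blocking" read suspends only that fiber. A minimal echo-server sketch, assuming vibe.d's vibe.core.net/vibe.core.core APIs (listenTCP, runEventLoop, TCPConnection's empty/leastSize/read/write); the details are illustrative, not a definitive implementation:

```d
import vibe.core.core;
import vibe.core.net;

void main()
{
    listenTCP(7000, (conn) {
        // Each connection is served by its own fiber. The read below
        // blocks only this fiber: under the hood it registers for the
        // read event, yields to the event loop, and is resumed once
        // data arrives.
        while (!conn.empty) {
            auto chunk = new ubyte[cast(size_t)conn.leastSize];
            conn.read(chunk);
            conn.write(chunk);
        }
    });
    runEventLoop();
}
```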

The other mode happens when yield() (in vibe.core.core) is explicitly called. In this case, tasks are inserted into a singly-linked list, which is processed in FIFO order in chunks, alternating with calls to processEvents(). This ensures fair scheduling and avoids starving event processing when tasks perform continuous computations with intermittent yield() calls.
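A sketch of where explicit yield() matters, assuming vibe.core.core's runTask/yield/runEventLoop (the loop body and chunk size are illustrative):

```d
import vibe.core.core;

void main()
{
    runTask({
        // A long-running computation that never blocks on I/O would
        // otherwise monopolize the thread. The intermittent yield()
        // reinserts this task into the FIFO list, so other tasks and
        // the event loop get their turn between chunks of work.
        foreach (i; 0 .. 10_000_000) {
            // ... one unit of computation ...
            if (i % 100_000 == 0)
                yield();
        }
    });
    runEventLoop();
}
```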

So AFAICS the first mode works just like Sean's fiber scheduler. And at least on 64-bit systems, nothing prevents handling huge numbers of connections simultaneously. 32-bit can also handle a lot of connections with small fiber stack sizes (setTaskStackSize), but using decently sized stacks will quickly eat up the available address space.
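To put rough numbers on the 32-bit limitation: the reserved address space is simply fibers × stack size, so 100,000 fibers exceed a 32-bit address space (at most 4 GiB) for anything but small stacks. The stack sizes below are illustrative, not vibe.d defaults:

```d
import std.stdio;

void main()
{
    // Address space reserved by 100,000 fiber stacks at various sizes.
    enum fibers = 100_000;
    foreach (stackKiB; [16, 64, 1024]) {
        auto totalGiB = cast(double)fibers * stackKiB / (1024 * 1024);
        writefln("%4d KiB stacks -> %.1f GiB of address space", stackKiB, totalGiB);
    }
    // 16 KiB stacks need ~1.5 GiB (tight but possible on 32-bit);
    // 64 KiB already need ~6.1 GiB, which only fits on 64-bit.
}
```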
