On Friday, 7 March 2014 at 12:23:09 UTC, Atila Neves wrote:

My point was that merely writing it in Go doesn't bring magical performance benefits because of its CSP, and that vibe.d's fibers would do just fine in a direct comparison. The data seem to support that.

Right. I was referring to a large number of threads apparently not being a problem in Go; it was not about execution speed. In that way, admittedly, I hijacked the thread a bit.

68K connections is nothing. I'll start getting interested when his benchmarks are 200K+. Event-based systems in C can handle millions of concurrent connections if implemented properly. I'd like to believe vibe.d can approach this as well.

That's good to hear. I read a blog post from a company that switched from C with libevent to Go. I searched for it for quite a while just now, but couldn't find it again. From what I remember, they claimed they could handle many more connections using Go.

One question - doesn't Vibe.d already use green threads?

Their web site is confusing to me: it says they are using fibers and at the same time that they are using libevent. On http://vibed.org/features they write: "Instead of using classic blocking I/O together with multi-threading for doing parallel network and file operations, all operations are using asynchronous operating system APIs. By default, >>libevent<< is used to access these APIs operating system independently."

Further up on the same page they write: "The approach of vibe.d is to use asynchronous I/O under the hood, but at the same time make it seem as if all operations were synchronous and blocking, just like ordinary I/O. What makes this possible is D's support for so called >>fibers<<".
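
To me the two statements only make sense together if the yielding to libevent happens inside the I/O calls, so that the per-connection code one writes looks ordinary and blocking. Something like the following sketch is what I imagine; I have not verified it against vibe.d, so take the exact names (listenTCP, empty, leastSize, runEventLoop) as my assumptions from their docs, and newer versions may want extra attributes on the callback:

import vibe.d;   // umbrella import: listenTCP, runEventLoop, ...

void main()
{
    // One fiber per connection; the body reads like plain blocking code.
    listenTCP(7000, (conn) {
        while (!conn.empty) {               // suspends the fiber until data arrives or the peer closes
            auto buf = new ubyte[](cast(size_t) conn.leastSize);
            conn.read(buf);                 // "blocks" only this fiber, never a kernel thread
            conn.write(buf);                // echo it back
        }
    });
    runEventLoop();                         // the libevent-backed loop driving all those fibers
}

The point would be that read() suspends only the fiber; the kernel thread goes back to the event loop in the meantime.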

It does. Bienlein has a very vague knowledge of topics he comments about.

I thought the vibe.d guys would take the occasion to shed some light on this, but no luck. What I don't understand is how fibers can listen for input coming in through the connections they hold on to. AFAICS, a fiber only becomes active when its call method is called. So who calls the call method when a connection becomes active? Is that again a kernel thread? And how does that kernel thread know something has arrived on a connection? It can't do a blocking wait, as the system would run out of kernel threads very quickly.
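
My best guess at the answer: the fiber never listens itself. Before yielding it records which socket it is waiting for, and one kernel thread sits in the event loop (libevent in vibe.d's case) waiting on all registered sockets at once; when one becomes readable, that thread is the one that calls call() on the parked fiber. A self-contained toy of the idea, using only core.thread.Fiber and std.socket; the names (parked, handler) are invented, this is not vibe.d's internals:

import core.thread : Fiber;
import std.socket;
import std.stdio : writeln;

void main()
{
    // A connected socket pair stands in for one client connection.
    auto pair = socketPair();
    auto client = pair[0];                 // the "remote peer"
    auto server = pair[1];                 // our end of the connection

    Fiber[Socket] parked;                  // which fiber waits on which socket

    // The per-connection fiber: it does not listen itself. It registers
    // interest in its socket, yields, and trusts the event loop to wake it.
    auto handler = new Fiber({
        parked[server] = Fiber.getThis();  // "register read interest"
        Fiber.yield();                     // suspend; no kernel thread is blocked here
        auto buf = new ubyte[128];
        auto n = server.receive(buf);      // resumed: data is guaranteed to be there
        writeln("handler fiber read: ", cast(string) buf[0 .. n]);
    });
    handler.call();                        // runs up to the first yield

    client.send("hello");                  // the peer sends something

    // The "event loop": ONE kernel thread waiting on all parked sockets at
    // once. In vibe.d this job is done by libevent; here a plain select().
    auto readSet = new SocketSet();
    foreach (sock, fib; parked)
        readSet.add(sock);
    Socket.select(readSet, null, null);    // the only blocking wait in the program

    foreach (sock, fib; parked)
        if (readSet.isSet(sock))
            fib.call();                    // this is who calls the fiber's call()
}

So there is exactly one blocking wait (the select/libevent dispatch), no matter how many connections are parked.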

I think what Go and Erlang do is use green threads (goroutines in Go) on the application side and a kernel thread pool within the runtime that does "work stealing" on the green threads. This is more or less (ish) what Doug Lea's Java Fork/Join framework does as well.

When a channel in Go runs empty, the scheduler detaches the kernel thread that served it and attaches it to a non-empty channel. In Go, all this is in the language and the runtime, where it can be done more efficiently than in a library. AFAIU, this is a main selling point of Go.
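
For illustration, here is the same parking idea expressed with D fibers on a single thread. Go's real scheduler multiplexes goroutines over several kernel threads and steals work between them; this toy uses one thread and hand-rolls the wakeup, and all the names are invented:

import core.thread : Fiber;
import std.stdio : writeln;

void main()
{
    int[] channel;                          // toy stand-in for a Go channel's buffer
    Fiber parkedReceiver;                   // a fiber blocked on the empty channel, if any

    auto receiver = new Fiber({
        while (channel.length == 0) {
            parkedReceiver = Fiber.getThis();   // "block": remember who to wake up
            Fiber.yield();                      // hand the kernel thread back to the scheduler
        }
        writeln("receiver got ", channel[0]);
    });

    auto sender = new Fiber({
        channel ~= 42;                      // make the channel non-empty
    });

    // Toy scheduler: one kernel thread multiplexing both "goroutines".
    receiver.call();                        // finds the channel empty, parks itself
    sender.call();                          // runs to completion, fills the channel
    if (parkedReceiver !is null)
        parkedReceiver.call();              // reattach the thread to the now-ready receiver
}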

Vert.x is claiming to be able to handle millions of active connections.

All right, as you can't have millions of threads on the JVM, they must do that through some asynchronous approach (I guess Java NIO). I have read that an asynchronous solution is not as fast as one with many blocking threads, as in Go or Erlang. I don't understand why; it was just claimed that this was the case.
