On 27.04.2012 11:06, Nick Sabalausky wrote:
"Sönke Ludwig"<slud...@outerproduct.org>  wrote in message
news:jndl9l$26eh$1...@digitalmars.com...

We still have a more comprehensive benchmark on the table, but it seemed to
get along happily with about 60MB of RAM usage during a C10k test. The
average request time went down to about 6s if I remember correctly. The
test was using a dual-core Atom server over a WiFi connection, so the
results may have been skewed a little.


I guess I don't have much of a frame of reference, but that certainly sounds
very impressive. Especially the RAM usage (you could probably kill lighttpd,
from what I've heard about it) and also the fact that it was all over WiFi.

Have you looked at long-term memory usage? I wonder if D's imprecise GC
could be a liability and would need to be worked around?

I'm starting to monitor it now. Directly after startup, the website is at 32 MB. The traffic has dropped a bit, but hopefully it's enough to see something if there is a hidden leak.

After the C10k test, the memory usage dropped back to normal if I remember right, so there should be nothing too bad in the core.
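
For reference, that kind of monitoring boils down to logging the GC heap numbers from a background task. A rough sketch, assuming a recent druntime (GC.stats()) and current vibe.d names (runTask, sleep, logInfo); the interval and message format are arbitrary:

import core.memory : GC;
import core.time : minutes;
import vibe.core.core : runTask, sleep;
import vibe.core.log : logInfo;

void startMemoryMonitor()
{
    // Run a background fiber that periodically logs GC heap statistics,
    // so a slow leak shows up as steadily growing "used" numbers.
    runTask({
        while (true) {
            auto stats = GC.stats();
            logInfo("GC heap: %s bytes used, %s bytes free",
                stats.usedSize, stats.freeSize);
            sleep(5.minutes);
        }
    });
}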


In terms of the HTTP/1.1 implementation, the most important missing piece is
multipart form uploads (will be added shortly). Otherwise the protocol
implementation supports most HTTP features. I'll put a more detailed
comparison on my TODO list.


Thanks, I'm anxious to take a look. I'm particularly curious how much of your
intended scope does/doesn't overlap with Nginx's domain and its (rather
Apache-competitive) suite of modules: http://wiki.nginx.org/Modules

From just going over the module index, the features that are already in or planned are (a sketch of how a couple of the "check" items fit together follows the list):

 - most of Core is there
 - Basic auth - check
 - Auto index - part of a separate library that is still in dev.
 - FastCGI client is planned, possibly also a server
 - Gzip responses (the static file server component does this)
 - Headers - check
 - Limit Requests/Zone is planned
 - Limit Conn is planned
 - Log - check
 - Proxy - check
 - SCGI - maybe instead of FastCGI
 - Split Clients - easy to do programmatically
 - Upstream - planned
 - SSL - check
 - WebDAV - planned
 - simplistic SMTP - check
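
To give an idea of how a couple of the checked items are meant to be combined (Basic auth, routing, the request log), here is a rough sketch. It uses the current API names (URLRouter, performBasicAuth, listenHTTP, runApplication) rather than the 2012-era lowercase ones, and the credentials, port and accessLogFile field are assumed placeholders, not the real configuration:

import vibe.core.core : runApplication;
import vibe.http.auth.basic_auth : performBasicAuth;
import vibe.http.router : URLRouter;
import vibe.http.server;

void admin(HTTPServerRequest req, HTTPServerResponse res) @safe
{
    // performBasicAuth replies with a 401 challenge unless the check passes
    auto user = performBasicAuth(req, res, "admin area",
        (u, p) => u == "admin" && p == "secret");
    res.writeBody("hello " ~ user, "text/plain");
}

void main()
{
    auto router = new URLRouter;
    router.get("/admin", &admin);

    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    settings.accessLogFile = "access.log"; // request log, field name assumed
    listenHTTP(settings, router);
    runApplication();
}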

Do you see this as eventually (or even currently) competing with Nginx in
general? Or is it more focused on being a web framework and on general async I/O?
(Actually that's another thing, how general is the async I/O in it? Is it
intended to be used for just...whatever general I/O uses...or is it more
intended as a means-to-an-end for web?)
The way it's designed, I suppose the features of many of Nginx's modules
could simply be implemented using vibe.d, for example directory listings.


In general it's more of an application framework or an extension library. But a stand-alone webserver would be an easy project - nothing that I have directly planned myself, though.
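
For illustration, such a stand-alone server would be little more than the file-server handler plugged into listenHTTP. A sketch, again with current API names assumed and ./public/ and the port as placeholders:

import vibe.core.core : runApplication;
import vibe.http.fileserver : serveStaticFiles;
import vibe.http.server;

void main()
{
    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    // serveStaticFiles returns a request handler that maps URLs onto ./public/
    listenHTTP(settings, serveStaticFiles("./public/"));
    runApplication();
}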

The I/O is supposed to be as general as possible. So far the focus has been on web applications, but we also plan a desktop application that is going to use it for all of its I/O. For this, I'm also looking into how to integrate the event loop with the OS message queue efficiently on all platforms.
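
As a small example of the non-HTTP side, a plain TCP echo service can be written against the same fiber-based I/O layer. A sketch with assumed current names (listenTCP, TCPConnection, runApplication); the port and buffer size are arbitrary:

import std.algorithm : min;
import vibe.core.core : runApplication;
import vibe.core.net : TCPConnection, listenTCP;

void main()
{
    // Each incoming connection gets its own fiber; blocking reads/writes
    // only suspend that fiber, not the whole event loop.
    listenTCP(7000, (TCPConnection conn) @safe nothrow {
        try {
            ubyte[1024] buf;
            while (!conn.empty) {
                auto n = cast(size_t) min(conn.leastSize, buf.length);
                conn.read(buf[0 .. n]);
                conn.write(buf[0 .. n]);
            }
        } catch (Exception e) {
            // peer vanished mid-transfer; just drop the connection
        }
    });
    runApplication();
}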

There are still some things missing in the libevent driver, but with the upcoming libuv driver everything that libuv provides should be available.
