Dmitry Olshansky wrote:

> 04-Aug-2013 23:38, Marek Janukowicz wrote:
>> I'm writing a network server with some specific requirements:
>> - 5-50 clients connected (almost) permanently (maybe a bit more, but
>> definitely not hundreds of them)
>> - possibly thousands of requests per seconds
>> - responses need to be returned within 5 seconds or the client will
>> disconnect and complain
>>
>> Currently I have a Master thread (which is basically the main thread)
>> which is handling connections/disconnections, socket operations, sends
>> parsed requests for processing to single Worker thread, sends responses
>> to clients. Interaction with Worker is done via message passing.
> 
> A typical approach would be to separate responsibilities even further and
> make a pool of threads for each stage.
> 
> You may want to have the Master thread handle only new connections,
> selecting over an "accept socket" (or a few, if there are multiple
> end-points). It would then distribute connected clients over the I/O
> worker threads.
> 
> A pool of I/O workers would then only send/receive data, passing parsed
> requests to the "real" workers and responses back. They would also
> handle disconnects and closing.

This is basically approach "2." I mentioned in my original post; I'm glad
you agree it makes sense :)
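To make sure we're talking about the same thing, here is roughly how I
picture that stage split - just a sketch, where the pool size, the port and
the round-robin distribution are my assumptions, and the socket handover
uses the shared void* hack discussed further down:

import std.concurrency;
import std.socket;

// I/O worker: receives accepted sockets from the Master and then owns
// all reads/writes for its share of the clients
void ioWorker()
{
    Socket[] clients;
    for (;;)
    {
        auto p = receiveOnly!(shared(void)*)();
        clients ~= cast(Socket) cast(void*) p;
        // ... the Socket.select/read loop over `clients` goes here
        // (how to wait on the mailbox and the sockets at the same time
        // is exactly the tricky part - see below) ...
    }
}

void main()
{
    enum poolSize = 4;                         // assumed; tune per system
    Tid[poolSize] pool;
    foreach (ref t; pool)
        t = spawn(&ioWorker);

    auto listener = new TcpSocket();
    listener.bind(new InternetAddress(4000));  // port made up
    listener.listen(10);

    size_t next;
    for (;;)
    {
        auto client = listener.accept();
        // hand the Socket over via the cast hack
        pool[next].send(cast(shared(void)*) cast(void*) client);
        next = (next + 1) % pool.length;
    }
}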

> The real workers could again be pooled to be more responsive (or e.g.
> just one per I/O thread).

There are more things specific to this particular application that would
play a role here. One is that such "real workers" would operate on a common
data structure, so I would have to introduce some synchronization. A single
worker thread was not my first approach, but after some woes with other
solutions I settled on it, because the problem really isn't in the
processing (where a single thread does just fine so far), but in the socket
read/write operations.

>> The problem with my approach is that I read as much data as possible from
>> each ready client in order. As there are many requests, this read phase
>> might take a few seconds, making the clients disconnect. Now I see 2
>> possible solutions:
>>
>> 1. Stay with the design I have, but change the workflow somewhat -
>> instead of reading all the data from clients, just read some requests,
>> then send the responses that are ready, and repeat; the downside is that
>> it's more complicated than the current design, might be slower (more loop
>> iterations with less work done in each iteration) and might require quite
>> a lot of tweaking when it comes to how many requests/responses to handle
>> each time, etc.
> 
> Or split the clients across a group of threads to reduce the maximum
> latency. See above; just determine the number of clients per thread your
> system can sustain in time. A better way would be to dynamically
> load-balance clients between threads, but that's far more complicated.

Yeah, both approaches seem to be somewhat more complicated and I'd like to
avoid that if possible. So one client per thread makes sense to me.

>> 2. Create a separate thread for each client connection. I think this
>> could result in a nice, clean setup, but I see some problems:
>> - I'm not sure how ~50 threads will do resource-wise (although they will
>> probably be mostly waiting on Socket.select)
> 
> 50 threads is not that big a problem. Around 100+ could be; 1000+ is a
> killer.

Thanks for those numbers, it's great to know at least the ranges here.

> The benefit with thread-per-client is that you don't even need
> Socket.select; just use blocking I/O and do the work for each parsed
> request in the same thread.

Not really. This is something that Go (the language I also originally
considered for the project) has solved in a much better way - you can
"select" on a number of "channels" and have both I/O and message passing
covered by those. In D I must react both to incoming network data and to
messages from the Worker, which means either the self-pipe trick (which
leads back to Socket.select) or some quirky stuff with timeouts on socket
reads and message receives (but that is basically a busy loop).
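For the record, the self-pipe variant I have in mind looks roughly like
this - only a sketch, using std.socket's socketPair() so the Worker can
wake up the select (all names are made up):

import std.socket;

// The I/O thread selects on its client plus the read end of a
// socketPair(); the Worker writes one byte to the other end after each
// std.concurrency message, so select() wakes up for both kinds of events.
void ioLoop(Socket client, Socket wakeRead)
{
    auto set = new SocketSet();
    for (;;)
    {
        set.reset();
        set.add(client);
        set.add(wakeRead);
        Socket.select(set, null, null);

        if (set.isSet(wakeRead))
        {
            ubyte[1] b;
            wakeRead.receive(b);      // drain the wakeup byte
            // ... receive() the pending Worker message here ...
        }
        if (set.isSet(client))
        {
            // ... read and parse requests ...
        }
    }
}

// setup: auto pair = socketPair();
// ioLoop gets pair[0]; the Worker keeps pair[1] and does
// pair[1].send([cast(ubyte) 0]) after each message it sends us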
 
>> - I can't initialize threads created via std.concurrency.spawn with a
>> Socket object ("Aliases to mutable thread-local data not allowed.")
> 
> This can be hacked with casts to shared void* and back. Not pretty but
> workable.

I'm using this trick elsewhere, but was a bit reluctant to try it here.
Btw. would it work if I passed a socket to 2 threads - a reader and a
writer? (By "work" I mean: not running into race conditions and other
scary concurrency stuff.)
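Just so it's clear what I mean, the reader/writer split would look
something like this - a sketch only; whether it is actually safe is
exactly my question:

import std.concurrency;
import std.socket;

void reader(shared(void)* p)
{
    auto sock = cast(Socket) cast(void*) p;
    ubyte[4096] buf;
    for (;;)
    {
        auto n = sock.receive(buf);   // blocking read
        if (n <= 0) break;            // 0 = closed, negative = error
        // ... parse the request and pass it to the Worker ...
    }
}

void writer(shared(void)* p)
{
    auto sock = cast(Socket) cast(void*) p;
    for (;;)
    {
        auto resp = receiveOnly!(immutable(ubyte)[])();
        sock.send(resp);              // blocking write
    }
}

// in the Master, after accept():
// auto p = cast(shared(void)*) cast(void*) client;
// spawn(&reader, p);
// spawn(&writer, p);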

Also, I'm really puzzled by the fact that this common idiom doesn't work in
some elegant way in D. I tried to Google a solution, but only found some
weird tricks. Can anyone really experienced in D tell me why there is no
nice solution for this (or correct me if I'm mistaken)?

>> - I already have problems with "interrupted system call" on Socket.select
>> due to GC kicking in; I'm restarting the call manually, but TBH it sucks
>> I have to do anything about that and would suck even more to do that with
>> 50 or so threads
> 
> I'm not sure if that problem will surface with blocking reads.

Unfortunately it will (it happens precisely with blocking calls).
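What I do now amounts to this (a sketch; as far as I can tell Phobos
reports the interruption as a -1 return from Socket.select):

import std.socket;

// retry select when it is interrupted, e.g. by the GC's suspend signals
int selectRetry(SocketSet reads, SocketSet writes, SocketSet errs)
{
    int n;
    do
    {
        n = Socket.select(reads, writes, errs);
    } while (n == -1);                // -1 means the call was interrupted
    return n;
}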

Thanks for your input, which shed some more light on things for me and also
allowed me to explain the whole thing a bit more.

-- 
Marek Janukowicz
