Domas Mituzas wrote:
> Hi!
> 
>> Going multithreaded is really easy for a socket listener.
> 
> Really? :) 

Sure. Make each thread call accept() on the same listening socket and let the
kernel hand each incoming connection to one of them. There you have the
listener done :)
Solaris used to need explicit locking around accept(), but that has been fixed
there, too.
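
Something along these lines (just a sketch with pthreads; the port, the thread
count and the missing error handling are all placeholders):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

static int listen_fd;

/* Every worker blocks in accept() on the same listening socket;
 * the kernel wakes exactly one of them per incoming connection. */
static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        int client = accept(listen_fd, NULL, NULL);
        if (client < 0)
            continue;
        /* ... handle the connection ... */
        close(client);
    }
    return NULL;
}

int main(void)
{
    struct sockaddr_in addr = { 0 };
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8420);      /* example port */

    listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof addr);
    listen(listen_fd, 128);

    pthread_t t[4];                           /* 4 threads, just an example */
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}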


>> However, not so
>> much in the LogProcessors. If they are shared across threads, you may
>> end up with all threads blocked in the fwrite and if they aren't shared,
>> the files may easily corrupt (depends on what you are exactly doing with
>> them).
> 
> I don't really understand what you're saying ;-) Do you mean lost data when you say 'corrupt'?

Given the following incoming events:
udp2log has problems
jeluf created a new wiki
domas fixed the server

This is what I call corrupted:
jeluf domas
udp2log has fixed the server
problems created a new wiki
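
That happens when a shared FILE* gets an event written in several pieces from
several threads at once. Making each event a single locked write avoids it,
roughly like this (a sketch; "out" stands for whatever stream the LogProcessor
writes to):

#include <stdio.h>
#include <string.h>

/* Build the whole line first, then emit it under the stdio stream lock
 * so another thread cannot interleave its own pieces. */
void write_event(FILE *out, const char *host, const char *msg)
{
    char line[1024];
    snprintf(line, sizeof line, "%s %s\n", host, msg);

    flockfile(out);
    fwrite(line, 1, strlen(line), out);
    funlockfile(out);
}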



>> Since the problem is that the socket buffer fills, it surprised me that
>> the server didn't increase SO_RCVBUF. That's not a solution but should
>> help (already set in /proc/sys/net/core/rmem_default ?).
> 
> It is a long-term CPU saturation issue - the mux process isn't fast enough to 
> handle 16 output streams. 
> Do note, there are quite a few events a second :)
> 
>> The real issue is: what are you attaching to your pipes that reads from
>> them so slowly?
>> Optimizing those scripts could be a simpler solution.
> 
> No, those scripts are not the bottleneck; there's plenty of CPU available, 
> and they are not blocking (well, not for too long - everything blocks for a 
> certain amount of time ;-)
> 
>> It wouldn't be hard to make the pipe writes non-blocking, properly blaming
>> the slow pipes that couldn't be written to.
> 
> There are no slow pipes. Bottleneck is udp2log step.
> 
> Domas
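
For reference, the two earlier suggestions combined would be roughly this
(a sketch only - "sock" and "pipe_fd" are placeholder names, and SO_RCVBUF is
capped by net.core.rmem_max unless that is raised as well):

#include <fcntl.h>
#include <sys/socket.h>

void tune_fds(int sock, int pipe_fd)
{
    /* Enlarge the socket receive buffer so short stalls in the mux
     * process do not immediately drop datagrams. */
    int rcvbuf = 8 * 1024 * 1024;
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof rcvbuf);

    /* Make writes to the pipe non-blocking, so a slow reader shows up
     * as EAGAIN instead of stalling the whole mux loop. */
    int flags = fcntl(pipe_fd, F_GETFL, 0);
    fcntl(pipe_fd, F_SETFL, flags | O_NONBLOCK);
}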

I don't get it. What is slow about it?

What it does is:
1) Get socket data
2) Split line into pieces
3) fwrite each line to 16 fds
4) Go to 1
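
In other words, roughly this (a sketch of the loop as I understand it, not the
actual udp2log source; the buffer size and NUM_OUTPUTS are guesses):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

#define NUM_OUTPUTS 16

void mux_loop(int sock, FILE *outputs[NUM_OUTPUTS])
{
    char buf[65536];

    for (;;) {
        /* 1) get socket data */
        ssize_t n = recv(sock, buf, sizeof buf - 1, 0);
        if (n <= 0)
            continue;
        buf[n] = '\0';

        /* 2) split the datagram into lines */
        char *save = NULL;
        for (char *line = strtok_r(buf, "\n", &save); line != NULL;
             line = strtok_r(NULL, "\n", &save)) {
            size_t len = strlen(line);

            /* 3) fwrite each line to all 16 output streams */
            for (int i = 0; i < NUM_OUTPUTS; i++) {
                fwrite(line, 1, len, outputs[i]);
                fputc('\n', outputs[i]);
            }
        }
        /* 4) go to 1 */
    }
}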

If there's plenty of CPU, the pipes don't fill and the fwrite doesn't
block...
Why isn't it coping with the load?
Too much time lost in context switches?

