> I'm trying to write a server that handles 10000 clients.  On 2.4.x,
> the RT signal queue stuff looks like the way to achieve that.
> Unfortunately, when the RT signal queue overflows, the consensus seems
> to be that you fall back to a big poll().   And even though the RT signal
> queue [almost] never overflows, it certainly can, and servers have to be
> able to handle it.

        Don't let that bother you. In the case where a significant fraction of
the descriptors you are polling on are ready, poll is very efficient. The
inefficiency comes when you have to wade through 10,000 uninteresting file
descriptors to find the one interesting one. If the poll set is rich in
ready descriptors, there is little advantage to signal queues over poll
itself.
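
        As a rough sketch of that fallback path (not from the original post;
it assumes the usual 2.4 setup of O_ASYNC + F_SETSIG so readiness arrives as
queued RT signals and plain SIGIO when the queue overflows, and handle_fd()
is a hypothetical per-connection handler): once SIGIO tells you the queue
overflowed, you recover with one big non-blocking poll() over everything:

	#include <poll.h>

	extern void handle_fd(int fd, short revents);	/* hypothetical handler */

	/* Called after SIGIO signals that the RT signal queue overflowed:
	 * scan every descriptor once and dispatch whatever is ready. */
	void recover_from_overflow(struct pollfd *pfds, nfds_t nfds)
	{
		nfds_t i;
		int n = poll(pfds, nfds, 0);	/* events are pending; don't block */

		for (i = 0; i < nfds && n > 0; i++) {
			if (pfds[i].revents) {
				handle_fd(pfds[i].fd, pfds[i].revents);
				n--;
			}
		}
	}

Since you only land here when lots of descriptors have activity, that single
sweep amortizes well, which is exactly the case where poll shines.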

        In fact, if you assume the percentage of ready file descriptors (as opposed
to the number of file descriptors) is constant, then poll is just as
scalable (theoretically) as any other method. Under either scheme, signal
queues or poll, twice as many file descriptors means twice as much work.
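
        To put rough numbers on it: with 10,000 descriptors and 1% of them
ready, one poll() scans 10,000 entries and returns 100 events, i.e. about 100
entries scanned per event. Double it to 20,000 descriptors at the same 1% and
you scan 20,000 entries for 200 events, still 100 per event. The per-event
cost tracks the ready fraction, not the total descriptor count.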

        DS
