On Thu, Nov 20, 2003 at 12:49:01PM -0500, Derek Atkins wrote:
> > The case of 20,000 simultaneous long lived TCP connections
> > is not unheard of, it's routine on IRC servers. It's all
> > down to the hashing quality.
>
> Most machines, out of the box, cannot handle this directly. Most
> machines are configured to limit the number of open file descriptors
> to about 1024, maybe 4096. The limit in the kernel may be 100k
> fds, but that doesn't mean a single process can handle that many.

No disagreement here, but I did not mean for a single process to
handle them all. (Actually, it is not unreasonable to require tune-ups
on a bigger server, either. That would make nice material for a new
edition of Campbell's book.)
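For anyone who wants to poke at the per-process ceiling Derek
mentions: it is the RLIMIT_NOFILE resource limit. A minimal sketch of
checking and raising it, assuming an ordinary POSIX system (this is
illustrative only, not anything from the AFS tree):

  #include <stdio.h>
  #include <sys/resource.h>

  int main(void)
  {
      struct rlimit rl;

      /* Query the current soft limit and the hard cap on open fds. */
      if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
          perror("getrlimit");
          return 1;
      }
      printf("soft: %ld, hard: %ld\n",
             (long)rl.rlim_cur, (long)rl.rlim_max);

      /* Raise the soft limit to the hard cap; pushing past the hard
       * cap takes privilege, i.e. exactly the kind of server tune-up
       * discussed above. */
      rl.rlim_cur = rl.rlim_max;
      if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
          perror("setrlimit");
          return 1;
      }
      return 0;
  }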
> Note also that this is getting better over time, but the libc
> interfaces are still rather limited. Show me a system with an fd_set
> that can handle 32k fds out of the box! Show me one that can handle
> 8k! The max that I've seen is 4k, and even that required recompiling
> certain core pieces of the OS, because 1k was the default.

Obviously, this is not something for select or poll to handle. There
is a reason why Solaris introduced /dev/poll, Linux introduced epoll,
and FreeBSD introduced kqueue (do I remember that one right?). You can
safely forget about fd_set size in this application; even if we made
it enormous, it would be of no use, because select scans the whole set
on every call and its performance would be abysmal.

I must note that I am not discussing this in complete seriousness,
because a) I am not familiar with the finer details that matter, and
b) I am not aiming to implement it. My gut feeling is that it sounds
like a great idea, but it takes some thinking. Let's see if someone
volunteers.
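To make the point concrete, here is a rough sketch of the event loop
those interfaces enable, using Linux epoll. It is purely illustrative
(the listening socket setup is not shown, and every name in it is made
up); kqueue and /dev/poll follow the same register-once,
collect-the-ready pattern:

  /* Watch a listening socket plus all accepted connections with one
   * epoll instance. Assumes listen_fd is a bound, listening,
   * nonblocking socket created elsewhere. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <sys/epoll.h>
  #include <sys/socket.h>

  #define MAX_EVENTS 64

  void event_loop(int listen_fd)
  {
      struct epoll_event ev, events[MAX_EVENTS];
      int epfd = epoll_create(1024);  /* size is only a hint */
      if (epfd < 0) { perror("epoll_create"); exit(1); }

      ev.events = EPOLLIN;
      ev.data.fd = listen_fd;
      if (epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev) < 0) {
          perror("epoll_ctl"); exit(1);
      }

      for (;;) {
          /* Unlike select/poll, the cost is per ready fd, not per
           * watched fd, so 20k mostly-idle connections stay cheap. */
          int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
          if (n < 0) { perror("epoll_wait"); exit(1); }

          for (int i = 0; i < n; i++) {
              int fd = events[i].data.fd;
              if (fd == listen_fd) {
                  int conn = accept(listen_fd, NULL, NULL);
                  if (conn < 0) continue;
                  ev.events = EPOLLIN;
                  ev.data.fd = conn;
                  epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &ev);
              } else {
                  char buf[4096];
                  ssize_t r = read(fd, buf, sizeof buf);
                  if (r <= 0) {  /* EOF or error: drop the connection */
                      epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                      close(fd);
                  } else {
                      /* hand the data to the protocol layer here */
                  }
              }
          }
      }
  }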
-- 
Pete

_______________________________________________
OpenAFS-devel mailing list
[EMAIL PROTECTED]
https://lists.openafs.org/mailman/listinfo/openafs-devel