Ian Holsman wrote:
> > Another area for potential improvement here is the use of file caching.
> > I tried using mod_cache's fd caching, but quickly ran into problems
> > with the Linux per-process fd limits because SPECWeb99 accesses so many
> > different files.
> hmm.. did you use a recent version of mod_mem_cache.. you should be able
> to set the maximum number of fd's to be under the limits (not sure if
> that would help.. depends on the caching)

I had a current mod_mem_cache at the time I tried it (2-3 months ago?).
The problem is that when I tell SPECWeb99 that I'm shooting for around
250 concurrent connections on my old ThinkPad, it wants to access 6,768
different files. My ulimit -n on Linux is 1024, and sysctl doesn't like
me trying to raise it. So without a lot of pain, I can cache fewer than
1/6 of the files using fd caching.

As I recall, I restricted the fd caching to the most popular range of
file sizes to keep it under the 1k limit, and then ran into system-wide
fd limits with prefork. Worker was a lot better with respect to
system-wide fd usage, of course.

The current record holder achieved 5,000 concurrent connections, which
means that the SPECWeb99 clients accessed about 119,700 different files,
so the problem gets harder as the server gets more powerful.

Greg
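For reference, restricting the fd cache to a band of file sizes and capping
the object count looks roughly like this with mod_mem_cache. This is a sketch,
not my actual config; directive names are those of the later 2.0 mod_mem_cache,
and the size bounds are made-up illustrative values:

```apache
# Cache open file descriptors rather than object contents.
CacheEnable fd /

# Keep the cached-object count safely under a 1024-fd ulimit,
# and only cache the most popular band of file sizes
# (bounds here are placeholders, in bytes).
MCacheMaxObjectCount 1000
MCacheMinObjectSize 1024
MCacheMaxObjectSize 10240
```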
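For anyone poking at the same limits: the per-process fd ceiling that bites
here can be inspected, and sometimes raised up to the hard limit, from inside
the process itself. A minimal sketch using Python's standard resource module
(illustrative only; the numbers you see depend on your system):

```python
import resource

# Inspect the per-process open-file limit (what "ulimit -n" reports).
# soft is the enforced cap; hard is the ceiling an unprivileged
# process may raise its soft limit to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# An unprivileged process can raise soft up to hard; raising hard
# itself (e.g. to fd-cache thousands of files) requires root or a
# change to the system-wide limits.
if soft < hard:
    try:
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    except (ValueError, OSError):
        pass  # some systems refuse; keep the original soft limit
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print(f"after raise: soft={soft} hard={hard}")
```

Raising the soft limit this way only helps up to the hard limit; beyond that
you are back to fighting the system-wide configuration, as above.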