Hi Steffen, Hi Brian,
The file descriptor limit is imposed by the operating system on every process; its original intention is to prevent excessive resource consumption by arbitrary user processes. The limit can be changed at runtime, and its default value on Linux has changed over the years.
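
For reference, here is a minimal sketch (plain C, using the standard getrlimit()/setrlimit() calls) of how a process can read its per-process limit and raise the soft limit up to the hard limit at runtime. It is not Squid code, just an illustration of the interface:

/* fdlimit.c - print the per-process FD limit and raise the
 * soft limit up to the hard limit. Illustrative only. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft limit: %llu, hard limit: %llu\n",
           (unsigned long long) rl.rlim_cur,
           (unsigned long long) rl.rlim_max);

    /* Raising the soft limit up to the hard limit needs no
     * privileges; raising the hard limit itself requires root
     * (or an entry in /etc/security/limits.conf). */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}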

Squid holds a file descriptor for:
- every client connection
- every server connection
- every file in the local cache

When the maximum number of FDs is reached, each of those resources becomes constrained. What kind of behaviour would you suggest? Abruptly closing client or server connections leads to network errors, while persistent client and server connections, although each keeps an FD busy, vastly lower the load caused by connection establishment.
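
To make the trade-off concrete, here is a hedged sketch (plain C, not Squid source) of what a proxy's accept loop sees once the per-process limit is exhausted: accept() fails with EMFILE (or ENFILE if the system-wide table is full), and the only options are to refuse the new client or to close something else to free a descriptor. The port number is arbitrary:

/* accept_sketch.c - illustrative only, not Squid source.
 * Shows the error a proxy sees when it runs out of FDs. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };

    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(3129);        /* arbitrary test port */

    if (listen_fd < 0 ||
        bind(listen_fd, (struct sockaddr *) &addr, sizeof(addr)) != 0 ||
        listen(listen_fd, 128) != 0) {
        perror("socket/bind/listen");
        return 1;
    }

    for (;;) {
        int client_fd = accept(listen_fd, NULL, NULL);
        if (client_fd < 0 && (errno == EMFILE || errno == ENFILE)) {
            /* Out of descriptors: either refuse this client
             * (a network error for the user) or close an existing
             * persistent connection / cache file to free an FD
             * (more connection-setup load later). */
            fprintf(stderr, "accept: out of file descriptors\n");
            continue;
        }
        if (client_fd >= 0)
            close(client_fd);           /* placeholder for real handling */
    }
}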

In practice there is no better solution than raising the maximum number of open FDs, short of changing how Unix works, but I'm still open to discussing any other option. :-)

Regards,

L


On 24/mar/09, at 12:25, Steffen Joeris wrote:
> I am running transparent squid in a setup with more than 1000 users. I
> reached the limit of file descriptors and that slowed down the internet
> for everyone. I've now increased the number of file descriptors in the
> default configuration, which seemed to solve the problem. However,
> shouldn't squid be programmed so that it doesn't cause a performance
> issue, when the limit is reached?
> I haven't looked at the current implementation, but it felt wrong, when
> the net performance was overall bad for all the users.

--
Luigi Gangitano -- <lu...@debian.org> -- <gangit...@lugroma3.org>
GPG: 1024D/924C0C26: 12F8 9C03 89D3 DB4A 9972  C24A F19B A618 924C 0C26




