On 2010/07/03 18:17, Joerg Sonnenberger wrote:
> On Sat, Jul 03, 2010 at 05:40:45PM +0200, Claudio Jeker wrote:
> > 35M? That is insane. Either they have machines with infinite memory, or
> > you can kill the boxes easily.

Some would also say that 16K is insane ;-)

> You don't need 35MB per client connection if interfaces like sendfile(2)
> are used. All the kernel has to guarantee in that case is copy-on-write
> for the file content as far as it has been sent already. Media
> distribution servers normally don't change files in place, so the only
> backpressure this creates is on the VFS cache. Let's assume the server
> is busy due to a new OpenBSD/Linux/Firefox/whatever release. A lot of
> clients will try to fetch a small number of distinct files. The memory
> the kernel has to commit is limited by the size of that active set, not
> by the number of clients.
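
To make that concrete, here is a rough sketch of the serving loop such a box
might run, using the Linux sendfile(2) signature (the BSD variants differ);
serve_file and client_fd are invented names, and the already-connected TCP
socket is assumed to exist:

    /*
     * Minimal sketch: push a file to a connected socket with sendfile(2).
     * Error handling is reduced to the bare minimum for illustration.
     */
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int serve_file(int client_fd, const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd == -1)
            return -1;

        struct stat st;
        if (fstat(fd, &st) == -1) {
            close(fd);
            return -1;
        }

        off_t off = 0;
        while (off < st.st_size) {
            /* The kernel moves data from the page cache straight to the
             * socket; no per-client copy of the file sits in userland,
             * so memory pressure scales with the active file set. */
            ssize_t n = sendfile(client_fd, fd, &off, st.st_size - off);
            if (n <= 0)
                break;
        }

        close(fd);
        return off == st.st_size ? 0 : -1;
    }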

There is some pretty serious hardware behind it...
http://mirror.aarnet.edu.au/indexabout.html
