On 6/11/07, Michael Puckett <[EMAIL PROTECTED]> wrote:
My squid application is doing large file transfers only. We have
(relatively) few clients doing (relatively) few transfers of very large
files. The server is configured to have 16 or 32GB of memory and is
serving 3 Gbit NICs to the clients downstream and 1 Gbit NIC upstream.
We wish to optimize the performance around these large file transfers
and desire to run large I/O buffers to the networks and the disk. Is
there a tunable buffer size parameter that I can set to increase the
network and disk buffer sizes?

I think for large files the case for a caching server like Squid is
weaker, because the time it takes to push the data through the network
will vastly outweigh the time it takes to seek the disk and access the
file. Especially with large files, you'll get > 1 Gbit of bandwidth
from your disk/storage array anyway. Still, if you want to cache such
objects and you have enough RAM, you might be able to get away with
just copying the files to a RAM disk and then using a regular HTTP (or
what have you) server that accesses the RAM disk directly.
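On Linux, the RAM-disk idea can be as simple as a tmpfs mount. A sketch
of an /etc/fstab entry (the mount point and the 8g size here are my own
illustrative choices, not something from your setup):

```shell
# Hypothetical /etc/fstab entry: tmpfs-backed directory to hold the hot
# large files; size=8g caps RAM usage and is only an example value.
tmpfs  /srv/ramcache  tmpfs  size=8g,mode=0755  0  0
```

After mounting, copy the large files into /srv/ramcache and point your
HTTP server's document root there; reads then come straight from RAM.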

There are also a number of network parameters (especially TCP tuning),
unrelated to file caching, that you'll want to look at to get optimal
throughput.
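For the TCP side, on Linux the usual starting point is raising the
socket buffer limits via sysctl. A sketch of an /etc/sysctl.conf
fragment (the 16MB maximums are illustrative starting values, not a
recommendation tuned to your hardware):

```shell
# Hypothetical /etc/sysctl.conf fragment for high-throughput TCP.
# Raise the maximum socket buffer sizes so autotuning can grow windows
# large enough to fill a multi-gigabit pipe (values are examples only).
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# min / default / max for TCP receive and send buffers, in bytes.
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```

Apply with `sysctl -p` and verify with a bulk-transfer benchmark; the
right maximums depend on your bandwidth-delay product.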

--
Evan Klitzke <[EMAIL PROTECTED]>
