On Apr 11, 2013, at 12:13 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:

> On 11/04/2013 12:23 a.m., Youssef Ghorbal wrote:
>> I was aware of that page.
>> As you said, it's often RPS so it's not relevant for me.
> 
> It is more relevant than you seem to think. Squid processing overheads are 
> tied tightly to the request parsing and ACL testing processes. These are 
> relatively fixed overheads for each request regardless of the request size.

Thank you for the explanation. It confirms what I was expecting.
What I meant is that in my case I have ONE big object (the infamous >1GB file) 
and one client, and I want to know what to expect from a throughput point of view. 
As kinkie said, in my scenario Squid is basically a network pipe; the overhead 
(ACL parsing, etc.) is paid only once.
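To make that concrete, what I picture for this phase is nothing more than a copy 
loop like the toy sketch below (Python, purely illustrative; this is obviously 
not Squid's real I/O code, and the 64 KiB buffer size is my own assumption):

    import socket

    BUF_SIZE = 64 * 1024   # assumed per-read buffer, for illustration only

    def relay(server_sock: socket.socket, client_sock: socket.socket) -> int:
        """Copy a response body from the origin-facing socket to the client socket."""
        total = 0
        while True:
            chunk = server_sock.recv(BUF_SIZE)   # server-side read
            if not chunk:                        # origin finished sending
                return total
            client_sock.sendall(chunk)           # client-side write
            total += len(chunk)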
 
Maybe the question here is how Squid handles its data buffers, or at what rate 
it performs server-side reads on one side and client-side writes on the other.
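The mental model I have (possibly wrong) is a bounded read-ahead: Squid reads 
from the server only while it is less than some window ahead of what the client 
has already accepted, so the slower side sets the pace. I gather read_ahead_gap 
is the squid.conf knob in that area, but correct me if I am off. The sketch below 
is only that toy model, not Squid internals, and the 16 KiB window is an 
assumption on my part:

    import socket

    READ_AHEAD = 16 * 1024   # assumed read-ahead window, not a verified Squid default
    BUF_SIZE = 4 * 1024

    def relay_bounded(server_sock: socket.socket, client_sock: socket.socket) -> None:
        pending = b""            # read from the server but not yet written to the client
        server_done = False
        while not server_done or pending:
            # Read from the server only while we are within the read-ahead window.
            if not server_done and len(pending) < READ_AHEAD:
                chunk = server_sock.recv(BUF_SIZE)
                if chunk:
                    pending += chunk
                else:
                    server_done = True
            # Push what we have towards the client; send() may write only part of it.
            if pending:
                written = client_sock.send(pending)
                pending = pending[written:]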

> IIRC the 50Mbps traffic was on average HTTP request traffic objects in a 
> medium sized ISP - we do not have any good details on what that average was 
> though. With very little extra overheads the same proxy on same hardware 
> might also reach 100Mbps. All that is required is the avg object size to go 
> from, for example, 16KB to 32KB which are both within the bounds of "average 
> HTTP traffic" sizes (11-63KB).
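
Just to check that I follow the arithmetic (rough numbers on my side, nothing 
measured):

    # Back-of-the-envelope check: at a fixed request rate, doubling the
    # average object size doubles the bandwidth. ~381 req/s is simply what
    # 50 Mbps works out to at 16 KB per object, not a measured figure.
    rps = 50e6 / (16 * 1024 * 8)                  # ~381 requests/s fill 50 Mbps with 16 KB objects
    mbps_at_32k = rps * (32 * 1024 * 8) / 1e6     # same request rate with 32 KB objects
    print(f"{rps:.0f} req/s -> {mbps_at_32k:.0f} Mbps at 32 KB")   # 381 req/s -> 100 Mbps

So the per-request overhead fixes the requests/second ceiling, and the object 
size mix turns that ceiling into Mbps.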

Youssef
