RE: [squid-users] squid 3.2.0.5 smp scaling issues

2011-06-11 Thread Jenny Lee

On Sat, Jun 11, 2011 at 9:40 PM, Jenny Lee  wrote:

I'd like to know how you are able to do >13000 requests/sec.
tcp_fin_timeout is 60 seconds default on all *NIXes and available ephemeral 
port range is 64K.
I can't do more than 1K requests/sec even with tcp_tw_reuse/tcp_tw_recycle with 
ab. I get commBind errors due to connections in TIME_WAIT.
Any tuning options suggested for RHEL6 x64?
Jenny

I would be concerned about using both of those at the same time: reuse and recycle. 
I've seen issues when testing my own Linux distros with both of these settings 
enabled. Right or wrong, that was my experience.
As for fin_timeout: if you have a good connection, there should be no reason a 
system takes 60 seconds to send out a FIN. Cut that in half, if not by two-thirds.
And what is your limitation at 1K requests/sec: load (if so, look at I/O) or 
network saturation? Maybe I missed an earlier thread, and I too would tilt my 
head at 13K requests/sec!
Tory
---
 
 
As I mentioned, my limitation is the ephemeral ports tied up in TIME_WAIT.  
The TIME_WAIT issue is a well-known factor when you are load testing.
 
When you are tuning, you apply options one at a time. tw_reuse/tw_recycle were 
not used together, and I had a 10-second fin_timeout, which made no difference.
 
Jenny

 
NB: I still don't know how to do indenting/quoting with this Hotmail... after 10 
years.
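For reference, the knobs under discussion live in /etc/sysctl.conf on RHEL6. A 
minimal sketch, assuming a stock RHEL6 kernel; the exact values here are 
illustrative, not recommendations:

```
# /etc/sysctl.conf fragment (illustrative values only)
# widen the ephemeral port range (RHEL6 default is 32768-61000)
net.ipv4.ip_local_port_range = 1024 65535
# allow reuse of TIME_WAIT sockets for new outbound connections
net.ipv4.tcp_tw_reuse = 1
# shorten the FIN-WAIT-2 timeout from the 60-second default
net.ipv4.tcp_fin_timeout = 15
```

Apply with `sysctl -p` and, as noted above, change one option at a time.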
  

Re: FW: [squid-users] WORKERS: Any compile option to enable? commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory

2011-06-11 Thread Amos Jeffries
Again, cc'ing squid-dev, where the people actually working on these 
features can see the report.


On 12/06/11 16:30, Jenny Lee wrote:


I also cannot shut down squid when workers are enabled.

squid -k shutdown gives "No Running Copy"

I have to run a "killall -9 squid"


One side effect of: --disable-kill-parent-hack

Though it may be due to other things. There was a patch proposed some 
weeks back.




Also what happens when I have 2 cores but start 7 workers?


They share cores just as separate instances do, potentially getting in each 
other's way grabbing CPU cycles. This may or may not be a problem.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] WORKERS: Any compile option to enable? commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory

2011-06-11 Thread Amos Jeffries

On 12/06/11 16:17, Jenny Lee wrote:


I can't get the workers to work. They start fine. However I get:

kid1| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory
kid2| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory
kid3| commBind: Cannot bind socket FD 9 to [::]: (2) No such file or directory

Is there a compile option to enable/disable workers that I am missing?


I can't seem to replicate that here. More details are needed about what 
FD 13 and FD 9 were being used for please.




The ports below work fine on mono squid.

---
workers 2

if ${process_number} = 1
http_port 1.1.1.1:3128
else
http_port 1.1.1.1:3129
endif
---

Thanks

Jenny


RHEL6 x64
Squid 3.2.0.7


3.2.0.8 is the current most stable beta.

Also, cc'ing squid-dev where the other dev can see this..




Compile:
--disable-carp \
--disable-wccp \
--disable-wccpv2 \
--disable-snmp \
--disable-htcp \
--disable-ident-lookups \
--disable-unlinkd \
--disable-translation \
--disable-auto-locale \
--disable-loadable-modules \
--disable-esi \
--disable-disk-io \
--disable-eui \
--disable-storeio \
--disable-auth-ntlm \
--disable-auth-negotiate \
--disable-auth-digest \
--disable-cache-digests \
--disable-ntlm-auth-helpers \
--disable-negotiate-auth-helpers \
--disable-digest-auth-helpers \


These options are now gone from 3.2:
  http://www.squid-cache.org/Versions/v3/3.2/RELEASENOTES.html#ss4.3



--disable-ipfw-transparent \
--disable-ipf-transparent \
--disable-pf-transparent \
--disable-linux-tproxy \
--disable-linux-netfilter \
--without-netfilter-conntrack \
--disable-url-rewrite-helpers \
--disable-win32-service \
--disable-zph-qos \
--disable-icap-client \
--disable-ecap \
--disable-useragent-log \
--disable-referer-log \


 ... so are these log ones.


--disable-eui \
--disable-poll \
--disable-select \
--disable-kqueue \
--disable-icmp \
--disable-gnuregex \
--disable-cpu-profiling \
--disable-kill-parent-hack \
--disable-follow-x-forwarded-for \
--disable-forw-via-db \
--without-valgrind-debug \
--without-ipv6-split-stack \


  *-ipv6-split-stack was removed in 3.1.6.


--without-po2html   


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


[squid-users] squid 3.2.0.5 smp scaling issues

2011-06-11 Thread Jenny Lee

I'd like to know how you are able to do >13000 requests/sec.
 
tcp_fin_timeout is 60 seconds default on all *NIXes and available ephemeral 
port range is 64K.
 
I can't do more than 1K requests/sec even with tcp_tw_reuse/tcp_tw_recycle with 
ab. I get commBind errors due to connections in TIME_WAIT.
 
Any tuning options suggested for RHEL6 x64?
 
Jenny
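The ~1K/sec ceiling is consistent with simple arithmetic: with at most 64K 
ephemeral ports and each closed connection holding its port in TIME_WAIT for 60 
seconds, new outbound connections are capped near 65535/60 ≈ 1092 per second. A 
quick sketch (65535 is the theoretical maximum; real usable ranges are smaller):

```python
# Back-of-envelope ceiling on new client connections per second:
# available ephemeral ports divided by the TIME_WAIT lifetime.
ports = 65535        # theoretical maximum; real port ranges are narrower
time_wait = 60       # seconds a closed connection pins its port
print(ports // time_wait)   # prints 1092
```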
 
 
 
 
---
test setup
box A running apache and ab
test against local IP address >13000 requests/sec
box B running squid, 8 2.3 GHz Opteron cores with 16G ram
non acl/cache-peer related lines in the config are (including typos from 
me manually entering this)
http_port 8000
icp_port 0
visible_hostname gromit1
cache_effective_user proxy
cache_effective_group proxy
appaend_domain .invalid.server.name
pid_filename /var/run/squid.pid
cache_dir null /tmp
client_db off
cache_access_log syslog squid
cache_log /var/log/squid/cache.log
cache_store_log none
coredump_dir none
no_cache deny all

results when requesting short html page 
squid 3.0.STABLE12 4200 requests/sec
squid 3.1.11 2100 requests/sec
squid 3.2.0.5 1 worker 1400 requests/sec
squid 3.2.0.5 2 workers 2100 requests/sec
squid 3.2.0.5 3 workers 2500 requests/sec
squid 3.2.0.5 4 workers 2900 requests/sec
squid 3.2.0.5 5 workers 2900 requests/sec
squid 3.2.0.5 6 workers 2500 requests/sec
squid 3.2.0.5 7 workers 2000 requests/sec
squid 3.2.0.5 8 workers 1900 requests/sec
in all these tests the squid process was using 100% of the cpu
I tried pulling a large file (100K instead of <50 bytes) on the theory that 
this may be bottlenecking on accepting the connections, and that with 
something that took more time to service each connection it could do 
better. However, what I found is that with 8 workers, all 8 were using <50% 
of the CPU at 1000 requests/sec
local machine would do 7000 requests/sec to itself
1 worker 500 requests/sec
2 workers 957 requests/sec
from there it remained about 1000 requests/sec with the cpu 
utilization slowly dropping off (but not dropping as fast as it should 
with the number of cores available)
so it looks like there is some significant bottleneck in version 3.2 that 
makes the SMP support fairly ineffective.

In reading the wiki page at wiki.squid-cache.org/Features/SmpScale I see 
you worrying about fairness between workers. If you have put in code to 
try to ensure fairness, you may want to remove it and see what happens to 
performance. What you are describing on that page in terms of fairness is 
what I would expect from a 'first-come-first-served' approach to multiple 
processes grabbing new connections. The worker that last ran is hot in the 
cache and so has an 'unfair' advantage in noticing and processing the new 
request; but as that worker gets busier, it will spend more time 
servicing requests, and the other processes will get more of a chance to 
grab the new connection. So it will appear unfair under light load, but 
become more fair under heavy load.
David Lang
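That 'first-come-first-served' picture can be sketched with a few pre-forked 
processes all blocking in accept() on one shared listening socket; the kernel 
simply wakes whichever worker is ready first, with no fairness code at all. The 
worker count and loopback port here are arbitrary demo choices:

```python
import os
import socket

listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # any free port
listener.listen(64)
port = listener.getsockname()[1]

pids = []
for _ in range(3):                   # three workers, like squid "kids"
    pid = os.fork()
    if pid == 0:
        conn, _ = listener.accept()  # every worker competes on the shared socket
        conn.sendall(str(os.getpid()).encode())
        conn.close()
        os._exit(0)
    pids.append(pid)

served = set()
for _ in range(3):                   # one request per worker
    c = socket.create_connection(("127.0.0.1", port))
    served.add(int(c.recv(32)))
    c.close()
for p in pids:
    os.waitpid(p, 0)
print(len(served))                   # prints 3: each worker grabbed one
```

Under light load the same hot worker would tend to win the race; under heavy 
load the busy workers are off servicing requests, so accepts spread out.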

FW: [squid-users] WORKERS: Any compile option to enable? commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory

2011-06-11 Thread Jenny Lee

I also cannot shut down squid when workers are enabled.
 
squid -k shutdown gives "No Running Copy"
 
I have to run a "killall -9 squid"
 
Also what happens when I have 2 cores but start 7 workers?
 
Jenny


> From: bodycar...@live.com
> To: squid-users@squid-cache.org
> Date: Sun, 12 Jun 2011 04:17:41 +
> Subject: [squid-users] WORKERS: Any compile option to enable? commBind: 
> Cannot bind socket FD 13 to [::]: (2) No such file or directory
>
>
> I can't get the workers work. They are started fine. However I get:
>
> kid1| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
> directory
> kid2| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or 
> directory
> kid3| commBind: Cannot bind socket FD 9 to [::]: (2) No such file or directory
>
> Is there a compile option to enable/disable workers that I am missing?
>
> The ports below work fine on mono squid.
>
> ---
> workers 2
>
> if ${process_number} = 1
> http_port 1.1.1.1:3128
> else
> http_port 1.1.1.1:3129
> endif
> ---
>
> Thanks
>
> Jenny
>
>
> RHEL6 x64
> Squid 3.2.0.7
>
>
> Compile:
> --disable-carp \
> --disable-wccp \
> --disable-wccpv2 \
> --disable-snmp \
> --disable-htcp \
> --disable-ident-lookups \
> --disable-unlinkd \
> --disable-translation \
> --disable-auto-locale \
> --disable-loadable-modules \
> --disable-esi \
> --disable-disk-io \
> --disable-eui \
> --disable-storeio \
> --disable-auth-ntlm \
> --disable-auth-negotiate \
> --disable-auth-digest \
> --disable-cache-digests \
> --disable-ntlm-auth-helpers \
> --disable-negotiate-auth-helpers \
> --disable-digest-auth-helpers \
> --disable-ipfw-transparent \
> --disable-ipf-transparent \
> --disable-pf-transparent \
> --disable-linux-tproxy \
> --disable-linux-netfilter \
> --without-netfilter-conntrack \
> --disable-url-rewrite-helpers \
> --disable-win32-service \
> --disable-zph-qos \
> --disable-icap-client \
> --disable-ecap \
> --disable-useragent-log \
> --disable-referer-log \
> --disable-eui \
> --disable-poll \
> --disable-select \
> --disable-kqueue \
> --disable-icmp \
> --disable-gnuregex \
> --disable-cpu-profiling \
> --disable-kill-parent-hack \
> --disable-follow-x-forwarded-for \
> --disable-forw-via-db \
> --without-valgrind-debug \
> --without-ipv6-split-stack \
> --without-po2html   

[squid-users] WORKERS: Any compile option to enable? commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory

2011-06-11 Thread Jenny Lee

I can't get the workers to work. They start fine. However I get:
 
kid1| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory
kid2| commBind: Cannot bind socket FD 13 to [::]: (2) No such file or directory
kid3| commBind: Cannot bind socket FD 9 to [::]: (2) No such file or directory
 
Is there a compile option to enable/disable workers that I am missing?
 
The ports below work fine on mono squid.
 
---
workers 2
 
if ${process_number} = 1
http_port 1.1.1.1:3128
else
http_port 1.1.1.1:3129
endif
---
 
Thanks
 
Jenny
 
 
RHEL6 x64
Squid 3.2.0.7
 
 
Compile:
--disable-carp \
--disable-wccp \
--disable-wccpv2 \
--disable-snmp \
--disable-htcp \
--disable-ident-lookups \
--disable-unlinkd \
--disable-translation \
--disable-auto-locale \
--disable-loadable-modules \
--disable-esi \
--disable-disk-io \
--disable-eui \
--disable-storeio \
--disable-auth-ntlm \
--disable-auth-negotiate \
--disable-auth-digest \
--disable-cache-digests \
--disable-ntlm-auth-helpers \
--disable-negotiate-auth-helpers \
--disable-digest-auth-helpers \
--disable-ipfw-transparent \
--disable-ipf-transparent \
--disable-pf-transparent \
--disable-linux-tproxy \
--disable-linux-netfilter \
--without-netfilter-conntrack \
--disable-url-rewrite-helpers \
--disable-win32-service \
--disable-zph-qos \
--disable-icap-client \
--disable-ecap \
--disable-useragent-log \
--disable-referer-log \
--disable-eui \
--disable-poll \
--disable-select \
--disable-kqueue \
--disable-icmp \
--disable-gnuregex \
--disable-cpu-profiling \
--disable-kill-parent-hack \
--disable-follow-x-forwarded-for \
--disable-forw-via-db \
--without-valgrind-debug \
--without-ipv6-split-stack \
--without-po2html 

Re: [squid-users] kid1| assertion failed: helper.cc:697: "hlp->childs.n_running > 0"

2011-06-11 Thread Amos Jeffries

On 12/06/11 15:30, Jenny Lee wrote:




Date: Sun, 12 Jun 2011 14:26:09 +1200
From: squ...@treenet.co.nz
To: squid-users@squid-cache.org
Subject: Re: [squid-users] kid1| assertion failed: helper.cc:697: 
"hlp->childs.n_running>  0"

On 12/06/11 14:16, Jenny Lee wrote:


Dear Squid Users,

I get this occasionally with NCSA auth, followed by a restart.

What does it mean?

Jenny

RHEL6 x64
Squid 3.2.0.7



A helper process died or shut down, but Squid's internal state indicates
there were none of that type of helper running.


Thanks Amos,

Is there a limit to the amount of requests helpers can service?


The bundled helpers don't use the concurrency protocol yet, so they can 
handle 2 simultaneous requests each: one being worked on, and one queued 
waiting.




I have:

auth_param basic children 20 startup=20 idle=2

When this happens, helpers are running and available.


Yes: 20 maximum, with a minimum of 20 loaded on startup. These can handle 40 
simultaneous client requests.


The error is about a helper closing/dying/shutting down when there are none 
running.


For example 20 helpers started, and 21 socket closed notices received by 
Squid.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] Re: squid 3.2.0.5 even slower than squid 3.1

2011-06-11 Thread david
Thanks, I'm out of the office this week (at Usenix), but I will see if this 
makes any difference when I get back.


David Lang

On Sun, 12 Jun 2011, Jenny Lee wrote:


Date: Sun, 12 Jun 2011 03:35:25 +
From: Jenny Lee 
To: squid-users@squid-cache.org, da...@lang.hm
Subject: Re: [squid-users] Re: squid 3.2.0.5 even slower than squid 3.1


Hello David,

We read your benchmarks with interest. Thank you for the work.

I have mentioned "--disable-ipv6" issue before and its solution. Attaching it 
for your perusal.

Jenny



one thing that I've found is that even with --disable-ipv6 squid will
still use IPv6 on a system that has it configured (next I'll try and see
if that's what's going wrong on the systems that don't have it configured,
but those systems don't have strace on them, so I'll have to build a
throw-away system instead of using one of my standard build test systems)
David Lang



To: squid-users@squid-cache.org
Date: Thu, 5 May 2011 07:58:40 +
Subject: [squid-users] Impressions about 3.2.0.7




4. "--disable-ipv6" does not work. We had to modify configure to include "#define 
USE_IPV6 0" to remove ipv6.


Re: [squid-users] Re: squid 3.2.0.5 even slower than squid 3.1

2011-06-11 Thread Jenny Lee

Hello David,
 
We read your benchmarks with interest. Thank you for the work.
 
I have mentioned "--disable-ipv6" issue before and its solution. Attaching it 
for your perusal.
 
Jenny
 
 

one thing that I've found is that even with --disable-ipv6 squid will 
still use IPv6 on a system that has it configured (next I'll try and see 
if that's what's going wrong on the systems that don't have it configured, 
but those systems don't have strace on them, so I'll have to build a 
throw-away system instead of using one of my standard build test systems)
David Lang


> To: squid-users@squid-cache.org
> Date: Thu, 5 May 2011 07:58:40 +
> Subject: [squid-users] Impressions about 3.2.0.7
>

> 4. "--disable-ipv6" does not work. We had to modify configure to include 
> "#define USE_IPV6 0" to remove ipv6.  
>  

RE: [squid-users] kid1| assertion failed: helper.cc:697: "hlp->childs.n_running > 0"

2011-06-11 Thread Jenny Lee


> Date: Sun, 12 Jun 2011 14:26:09 +1200
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] kid1| assertion failed: helper.cc:697: 
> "hlp->childs.n_running > 0"
>
> On 12/06/11 14:16, Jenny Lee wrote:
> >
> > Dear Squid Users,
> >
> > I get this occasionally with with NCSA auth followed by a restart.
> >
> > What does it mean?
> >
> > Jenny
> >
> > RHEL6 x64
> > Squid 3.2.0.7
>
>
> A helper process died or shutdown. But Squid internal state indicates
> there were none of that type of helper running.
 
Thanks Amos,
 
Is there a limit to the amount of requests helpers can service?
 
I have: 
 
auth_param basic children 20 startup=20 idle=2
 
When this happens, helpers are running and available.
 
Jenny 

Re: [squid-users] kid1| assertion failed: helper.cc:697: "hlp->childs.n_running > 0"

2011-06-11 Thread Amos Jeffries

On 12/06/11 14:16, Jenny Lee wrote:


Dear Squid Users,

I get this occasionally with NCSA auth, followed by a restart.

What does it mean?

Jenny

RHEL6 x64
Squid 3.2.0.7   



A helper process died or shut down, but Squid's internal state indicates 
there were none of that type of helper running.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


[squid-users] kid1| assertion failed: helper.cc:697: "hlp->childs.n_running > 0"

2011-06-11 Thread Jenny Lee

Dear Squid Users,
 
I get this occasionally with NCSA auth, followed by a restart.
 
What does it mean?
 
Jenny
 
RHEL6 x64
Squid 3.2.0.7 

Re: [squid-users] squid 3.1.

2011-06-11 Thread Amos Jeffries

On 12/06/11 06:28, Shenavandeh wrote:

Hi Amos,

Thanks for your time and nice comments on the config file

It is impossible for me to use "debug_options ALL,6" under such load.
The log file grows wildly and easily eats the whole hard disk!
Are there any other ways to capture the log?



There are two tricky alternatives.

One is the -l command line option to send the log to syslog. You need a 
syslog server that can handle the traffic though.


The other is using -X (which unfortunately generates a great deal more log 
output) and piping the results out to somewhere that can handle it.
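A sketch of those two options as command lines; the facility name and 
destination path are assumptions, not from the thread, so check `squid -h` for 
your build:

```
# 1) send the debug log to syslog; pick a facility your syslogd can absorb
squid -l local4

# 2) force full debugging (-X); -N keeps squid in the foreground so the
#    flood can be piped somewhere with room for it
squid -N -X 2>&1 | gzip > /var/log/squid/full-trace.gz
```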


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] redirect

2011-06-11 Thread Amos Jeffries

On 12/06/11 10:09, mbe wrote:

How do I redirect a request back to the internet, if the address (dst) is not
allowed by the ACL? No message or failure notice shall be shown.

matthias


You seem to misunderstand how both redirects and ACLs work.

 * ACLs grant or deny permission to use the proxy for a request. When 
denied, a response is sent to the *client*.


 * An HTTP redirect is just a special type of such response, one with a 
30x status instead of 40x.


Whether anything gets shown is up to the client's software agent. GET 
requests usually get repeated automatically, loading the new location 
without displaying any difference. Other request types are usually required 
to ask the user whether it's safe to continue, etc.

 The type of agent also matters.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


[squid-users] redirect

2011-06-11 Thread mbe
How do I redirect a request back to the internet, if the address (dst) is not 
allowed by the ACL? No message or failure notice shall be shown.

matthias


Re: [squid-users] squid 3.1.

2011-06-11 Thread Shenavandeh
Hi Amos,

Thanks for your time and nice comments on the config file

It is impossible for me to use "debug_options ALL,6" under such load.
The log file grows wildly and easily eats the whole hard disk!
Are there any other ways to capture the log?

Yours Faithfully,
---
Amir H Shenavandeh




On 11 June 2011 10:35, Amos Jeffries  wrote:
> On 11/06/11 05:39, Shenavandeh wrote:
>>
>> Hi,
>>
>> I have a squid installation that crashes twice a day under the load of
>> 7 Mbps bandwidth with the following message in squid.out with no
>> specific traces in cache.log
>>
>> Startup: Fri Jun 10 15:46:20
>> dying from an unhandled exception: !theConsumer
>> terminate called after throwing an instance of 'TextException'
>>   what():  !theConsumer
>> Startup: Fri Jun 10 19:55:29
>>
>> It is compiled using following options:
>>  sbin]# ./squid -v
>> Squid Cache: Version 3.1.12.1
>> configure options:  '--enable-linux-netfilter'
>> '--enable-storeio=ufs,aufs' '--enable-poll'
>> '--enable-x-accelerator-vary' '--enable-follow-x-forwarded-for'
>> '--enable-ssl' '--enable-snmp' '--enable-removal-policies'
>> '--enable-gnuregex' '--with-large-files' '--enable-async-io'
>> 'CFLAGS=-DNUMTHREADS=300' --with-squid=/root/squid-3.1.12.1
>> --enable-ltdl-convenience
>>
>> the platform is as follows:
>>
>> CPU :4 cores of  Intel(R) Xeon(R) CPU           E5504  @ 2.00GHz
>> RAM : 8GB
>> OS: CentOS 5.6 :
>> Kernel: Linux version 2.6.25 compiled with tproxy option.
>>
>> the Squid configuration:
>>
>> cache_mem 4000 MB
>>
>> dead_peer_timeout 30 seconds
>> hierarchy_stoplist cgi-bin ?
>> acl QUERY urlpath_regex cgi-bin \?
>> no_cache deny QUERY
>
> hierarchy_stoplist and the QUERY bits are outdated. It is worth removing
> these.
>
>>
>> maximum_object_size 50 MB
>> maximum_object_size_in_memory 500 KB
>> minimum_object_size 0 KB
>>
>> cache_replacement_policy heap LFUDA
>> memory_replacement_policy heap LRU
>>
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/32 ::1
>> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
>>
>>
>> acl localnet src 10.0.0.0/8    # RFC1918 possible internal network
>> acl localnet src 172.16.0.0/12    # RFC1918 possible internal network
>> acl localnet src 192.168.0.0/16    # RFC1918 possible internal network
>> acl localnet src fc00::/7       # RFC 4193 local private network range
>> acl localnet src fe80::/10      # RFC 4291 link-local (directly
>> plugged) machines
>>
>> acl SSL_ports port 443
>> acl Safe_ports port 80        # http
>> acl Safe_ports port 21        # ftp
>> acl Safe_ports port 443        # https
>> acl Safe_ports port 70        # gopher
>> acl Safe_ports port 210        # wais
>> acl Safe_ports port 1025-65535    # unregistered ports
>> acl Safe_ports port 280        # http-mgmt
>> acl Safe_ports port 488        # gss-http
>> acl Safe_ports port 591        # filemaker
>> acl Safe_ports port 777        # multiling http
>> acl CONNECT method CONNECT
>>
>> http_access allow manager localhost
>> http_access deny manager
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>> http_access allow to_localhost
>
> This is dangerous.
>  to_localhost is designed to match a class of attack signatures and prevent
> DoS. It is intended for use in a "deny" line.
>
>>
>>
>> http_access allow localhost
>> http_access allow localnet
>> http_access allow to_localhost
>
> A second "allow to_localhost" is useless. The first will stop processing
> when it gets tested and matches.
>
>>
>> http_access deny all
>>
>> http_port 3128 tproxy
>>
>> hierarchy_stoplist cgi-bin ?
>
> repeat directive, worth removing.
>
>>
>> cache_dir aufs /cache 24000 16 256
>>
>> coredump_dir cache
>>
>> # Add any of your own refresh_pattern entries above these.
>> refresh_pattern ^ftp:        1440    20%    10080
>> refresh_pattern ^gopher:    1440    0%    1440
>> refresh_pattern -i (/cgi-bin/|\?) 0    0%    0
>> refresh_pattern .        0    20%    4320
>>
>> cache_effective_user squid
>> cache_effective_group squid
>>
>> half_closed_clients off
>> buffered_logs on
>> client_db off
>> quick_abort_max 0 KB
>> quick_abort_min 0 KB
>> memory_pools off
>>
>> cache_swap_high 95%
>> cache_swap_low 90
>> logfile_rotate 10%
>
> logfile_rotate is not a percentage. It is a count of how many log files to keep.
> A new one is generated every time you run "squid -k rotate"
>
>>
>> visible_hostname        Cache
>
> Interesting FQDN.
>
> The admin@Cache contact address for example, does not resolve here.
>
>>
>> quick_abort_min 32 KB
>> quick_abort_max 32 KB
>> quick_abort_pct 95
>>
>> negative_ttl 3 minutes
>> positive_dns_ttl 6 hours
>>
>> pipeline_prefetch on
>>
>> acl snmpkey snmp_community public
>> snmp_port 3401
>> snmp_access allow snmpkey localhost
>> snmp_access deny all
>>
>> refresh_pattern -i
>> \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv|mpg|wma|ogg|wmv|asx|asf)$
>> 26 90% 260009 override-expire
>> refresh_pattern -i
>> \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff|pdf|uxx)$ 26
>> 90% 260009 override-exp

Re: [squid-users] Squid TProxy Problem

2011-06-11 Thread Ali Majdzadeh
Amos,
Sorry for the typo; here are the rules:

ip rule add fwmark 1 lookup 100
ip -f inet route add local 0.0.0.0/0 dev lo table 100
ip -f inet route add local 0.0.0.0/0 dev eth0 table 100

Warm Regards,
Ali Majdzadeh Kohbanani
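For context, the Squid TPROXY documentation pairs routing rules like those 
above with a mangle-table DIVERT chain. A sketch, assuming the fwmark value 1 
and http_port 3128 used elsewhere in this setup:

```
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
# packets for existing sockets go to DIVERT; new port-80 flows to TPROXY
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 \
  -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3128
```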

2011/6/11 Ali Majdzadeh :
> Dear Amos,
> Hi
> As the documentation suggests, I have used the following rules, but
> except the first one, others fail:
>
> ip rule add fwmark 1 lookup 100
> ip -f inet route add local 0.0.0.0/0 dev lo table 100
> ip -f inet route add local 0.0.0.0/0 dev eth0 table 10
>
> Any ideas?
>
> Warm Regards,
> Ali Majdzadeh Kohbanani
>
>
> 2011/6/8 Ali Majdzadeh 
>>
>> Amos,
>> Thanks for your reply. As you had depicted in the diagrams, I think
>> you meant that the clients and the Squid box are both connected to the
>> gateway through the switch, didn't you? If it is so, yes, they are
>> connected, but the default gateway for the clients is set to the IP
>> address of the Squid box.
>> So, you mean we should insert a special firewall rule in our gateway
>> in order to detect and bypass the Squid outward traffic by its MAC
>> address, is that true? Does this method still preserves the clients'
>> IP addresses?
>> Sorry for my elementary questions and thanks in advance for your helpful 
>> notes.
>>
>> Warm Regards,
>> Ali
>>
>> 2011/6/8 Ali Majdzadeh :
>> > Amos,
>> > Thanks for your reply. As you had depicted in the diagrams, I think
>> > you meant that the clients and the Squid box are both connected to the
>> > gateway through the switch, didn't you? If it is so, yes, they are
>> > connected, but the default gateway for the clients is set to the IP
>> > address of the Squid box.
>> > So, you mean we should insert a special firewall rule in our gateway
>> > in order to detect and bypass the Squid outward traffic by its MAC
>> > address, is that true? Does this method still preserves the clients'
>> > IP addresses?
>> > Sorry for my elementary questions and thanks in advance for your helpful 
>> > notes.
>> >
>> > Warm Regards,
>> > Ali
>> >
>> > 2011/6/8 Amos Jeffries :
>> >> On 08/06/11 22:53, Ali Majdzadeh wrote:
>> >>>
>> >>> Amos,
>> >>> Hi
>> >>> Thanks for your reply. The Squid box has only one NIC and it is
>> >>> connected to the internet via it's default gateway, I think I should
>> >>> have corrected our network diagram as follows:
>> >>> Internet<->  Gateway<->  Squid<->  Clients
>> >>> Does this configuration make any difference?
>> >>
>> >> That diagram is no different, but a 1-NIC squid box would be:
>> >>
>> >>  Internet<->Gateway<->Clients.
>> >>  \<->Squid
>> >>
>> >> or:
>> >>
>> >>  Internet<->Gateway<--switch-->Clients.
>> >>\<->Squid
>> >>
>> >>
>> >> That makes a difference.
>> >>
>> >> If you bump cache.log up to ALL,5 during a test connection. You may see
>> >> traffic arrive but then hang while connecting out.
>> >>
>> >>  If you do see that behaviour in cache.log, the problems is at the gateway
>> >> end. It MUST be able to detect and bypass the Squid outward traffic by MAC
>> >> address or tcp_outgoing_tos instead of IP address.
>> >>
>> >> Amos
>> >>
>> >>> Thanks again for your reply. I will try to reconfigure the whole
>> >>> solution from scratch to find out where I go wrong.
>> >>>
>> >>> Warm Regards,
>> >>> Ali Majdzadeh Kohbanani
>> >>>
>> >>> 2011/6/8 Amos Jeffries:
>> 
>>  On 08/06/11 01:15, Ali Majdzadeh wrote:
>> >
>> > Amos,
>> > The configuration is as follows:
>> > Internet<->Squid<->Clients
>> >
>> > Would you please clarify what you mean by declaring "routing packets
>> > to the squid box"?
>> 
>>  That the packets actually do get passed/routed through the squid box and
>>  not
>>  via some other possible route.
>> 
>> > Does the above configuration conform to the
>> > so-called declaration?
>> 
>>  If those are physical wires or even just logical routing table entries,
>>  yes
>>  it does.
>> 
>> > If it is so, what should be done to solve the
>> > issue?
>> 
>>  Your packet counter incrementing is a good sign that the routing layer is
>>  okay.
>> 
>> > Thanks again.
>> > By the way, we have compiled libcap from source and it is the latest
>> > version of the library.
>> 
>>  Okay. That should do :).
>> 
>> 
>> > 2011/6/6 Ali Majdzadeh
>> >>
>> >> Amos,
>> >> Sorry, the packet counter increments, I made a mistake, but still no
>> >> logs either in access.log nor in cache.log.
>> 
>> 
>>  Given that you have a recent libcap. That means we must suspect the
>>  kernel
>>  handling once TPROXY marks the packets.
>> 
>>  The "table 100" bit of the config has given a lot of people trouble.
>>  AFAIK
>>  "normally" you only have one such table entry and for TPROXY its 
>>  internal
>>  to
>>  the kernel with the "lo" interface. BUT, some people have had to
>>  configure
>>

Re: [squid-users] Squid TProxy Problem

2011-06-11 Thread Ali Majdzadeh
Dear Amos,
Hi
As the documentation suggests, I have used the following rules, but
except the first one, others fail:

ip rule add fwmark 1 lookup 100
ip -f inet route add local 0.0.0.0/0 dev lo table 100
ip -f inet route add local 0.0.0.0/0 dev eth0 table 10

Any ideas?

Warm Regards,
Ali Majdzadeh Kohbanani


2011/6/8 Ali Majdzadeh 
>
> Amos,
> Thanks for your reply. As you had depicted in the diagrams, I think
> you meant that the clients and the Squid box are both connected to the
> gateway through the switch, didn't you? If it is so, yes, they are
> connected, but the default gateway for the clients is set to the IP
> address of the Squid box.
> So, you mean we should insert a special firewall rule in our gateway
> in order to detect and bypass the Squid outward traffic by its MAC
> address, is that true? Does this method still preserves the clients'
> IP addresses?
> Sorry for my elementary questions and thanks in advance for your helpful 
> notes.
>
> Warm Regards,
> Ali
>
> 2011/6/8 Ali Majdzadeh :
> > Amos,
> > Thanks for your reply. As you had depicted in the diagrams, I think
> > you meant that the clients and the Squid box are both connected to the
> > gateway through the switch, didn't you? If it is so, yes, they are
> > connected, but the default gateway for the clients is set to the IP
> > address of the Squid box.
> > So, you mean we should insert a special firewall rule in our gateway
> > in order to detect and bypass the Squid outward traffic by its MAC
> > address, is that true? Does this method still preserves the clients'
> > IP addresses?
> > Sorry for my elementary questions and thanks in advance for your helpful 
> > notes.
> >
> > Warm Regards,
> > Ali
> >
> > 2011/6/8 Amos Jeffries :
> >> On 08/06/11 22:53, Ali Majdzadeh wrote:
> >>>
> >>> Amos,
> >>> Hi
> >>> Thanks for your reply. The Squid box has only one NIC and it is
> >>> connected to the internet via it's default gateway, I think I should
> >>> have corrected our network diagram as follows:
> >>> Internet<->  Gateway<->  Squid<->  Clients
> >>> Does this configuration make any difference?
> >>
> >> That diagram is no different, but a 1-NIC squid box would be:
> >>
> >>  Internet<->Gateway<->Clients.
> >>  \<->Squid
> >>
> >> or:
> >>
> >>  Internet<->Gateway<--switch-->Clients.
> >>\<->Squid
> >>
> >>
> >> That makes a difference.
> >>
> >> If you bump cache.log up to ALL,5 during a test connection. You may see
> >> traffic arrive but then hang while connecting out.
> >>
> >>  If you do see that behaviour in cache.log, the problems is at the gateway
> >> end. It MUST be able to detect and bypass the Squid outward traffic by MAC
> >> address or tcp_outgoing_tos instead of IP address.
> >>
> >> Amos
> >>
> >>> Thanks again for your reply. I will try to reconfigure the whole
> >>> solution from scratch to find out where I go wrong.
> >>>
> >>> Warm Regards,
> >>> Ali Majdzadeh Kohbanani
> >>>
> >>> 2011/6/8 Amos Jeffries:
> 
>  On 08/06/11 01:15, Ali Majdzadeh wrote:
> >
> > Amos,
> > The configuration is as follows:
> > Internet<->Squid<->Clients
> >
> > Would you please clarify what you mean by declaring "routing packets
> > to the squid box"?
> 
>  That the packets actually do get passed/routed through the squid box and
>  not
>  via some other possible route.
> 
> > Does the above configuration conform to the
> > so-called declaration?
> 
>  If those are physical wires or even just logical routing table entries,
>  yes
>  it does.
> 
> > If it is so, what should be done to solve the
> > issue?
> 
>  Your packet counter incrementing is a good sign that the routing layer is
>  okay.
> 
> > Thanks again.
> > By the way, we have compiled libcap from source and it is the latest
> > version of the library.
> 
>  Okay. That should do :).
> 
> 
> > 2011/6/6 Ali Majdzadeh
> >>
> >> Amos,
> >> Sorry, the packet counter increments, I made a mistake, but still no
> >> logs either in access.log nor in cache.log.
> 
> 
>  Given that you have a recent libcap. That means we must suspect the
>  kernel
>  handling once TPROXY marks the packets.
> 
>  The "table 100" bit of the config has given a lot of people trouble.
>  AFAIK
>  "normally" you only have one such table entry and for TPROXY its internal
>  to
>  the kernel with the "lo" interface. BUT, some people have had to
>  configure
>  other interfaces to get it working.
> 
>  Try to add a table 100 (or whatever you called it) entry for each NIC the
>  box has. If your kernel accepts them check access.log again.
> 
>  If your kernel denies multiple tables, erase the existing one and try
>  creating one for each NIC. Repeating until you find one that works.
> 
>  OR, if that still fails. We have to get help f