Re: [squid-users] squid 3.1.

2011-06-10 Thread Amos Jeffries

On 11/06/11 05:39, Shenavandeh wrote:

Hi,

I have a squid installation that crashes twice a day under the load of
7 Mbps bandwidth with the following message in squid.out with no
specific traces in cache.log

Startup: Fri Jun 10 15:46:20
dying from an unhandled exception: !theConsumer
terminate called after throwing an instance of 'TextException'
   what():  !theConsumer
Startup: Fri Jun 10 19:55:29

It is compiled using following options:
  sbin]# ./squid -v
Squid Cache: Version 3.1.12.1
configure options:  '--enable-linux-netfilter'
'--enable-storeio=ufs,aufs' '--enable-poll'
'--enable-x-accelerator-vary' '--enable-follow-x-forwarded-for'
'--enable-ssl' '--enable-snmp' '--enable-removal-policies'
'--enable-gnuregex' '--with-large-files' '--enable-async-io'
'CFLAGS=-DNUMTHREADS=300' --with-squid=/root/squid-3.1.12.1
--enable-ltdl-convenience

the platform is as follows:

CPU :4 cores of  Intel(R) Xeon(R) CPU   E5504  @ 2.00GHz
RAM : 8GB
OS: CentOS 5.6 :
Kernel: Linux version 2.6.25 compiled with tproxy option.

the Squid configuration:

cache_mem 4000 MB

dead_peer_timeout 30 seconds
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY


hierarchy_stoplist and the QUERY bits are outdated. It is worth removing 
these.
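
In current Squids the equivalent behaviour comes from the default refresh_pattern rules alone, so the removal is a pure deletion. A sketch (the pattern shown is the stock rule already present further down this config):

```
# Drop all three of these lines:
#   hierarchy_stoplist cgi-bin ?
#   acl QUERY urlpath_regex cgi-bin \?
#   no_cache deny QUERY
# Dynamic content is already handled by the default rule:
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
```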




maximum_object_size 50 MB
maximum_object_size_in_memory 500 KB
minimum_object_size 0 KB

cache_replacement_policy heap LFUDA
memory_replacement_policy heap LRU

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1


acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow to_localhost


This is dangerous.
to_localhost is designed to match a class of attack signatures and
prevent DoS. It is intended for use in a "deny" line.
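
A sketch of the intended placement: to_localhost sits in a deny rule ahead of the allows, ordering otherwise as in this config:

```
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access allow localhost
http_access allow localnet
http_access deny all
```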





http_access allow localhost
http_access allow localnet
http_access allow to_localhost


A second "allow to_localhost" is useless. The first will stop processing 
when it gets tested and matches.




http_access deny all

http_port 3128 tproxy

hierarchy_stoplist cgi-bin ?


Repeated directive; worth removing.



cache_dir aufs /cache 24000 16 256

coredump_dir cache

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

cache_effective_user squid
cache_effective_group squid

half_closed_clients off
buffered_logs on
client_db off
quick_abort_max 0 KB
quick_abort_min 0 KB
memory_pools off

cache_swap_high 95%
cache_swap_low 90
logfile_rotate 10%


logfile_rotate is not a percentage. It is a count of how many log files to 
keep. A new one is generated every time you run "squid -k rotate"
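
For example, with "logfile_rotate 10" a nightly cron job keeps access.log.0 through access.log.9; paths here are distribution-dependent:

```
# /etc/crontab sketch: rotate the logs just before midnight
59 23 * * * root /usr/sbin/squid -k rotate
```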




visible_hostname Cache


Interesting FQDN.

The admin@Cache contact address, for example, does not resolve here.



quick_abort_min 32 KB
quick_abort_max 32 KB
quick_abort_pct 95

negative_ttl 3 minutes
positive_dns_ttl 6 hours

pipeline_prefetch on

acl snmpkey snmp_community public
snmp_port 3401
snmp_access allow snmpkey localhost
snmp_access deny all

refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv|mpg|wma|ogg|wmv|asx|asf)$ 26 90% 260009 override-expire
refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff|pdf|uxx)$ 26 90% 260009 override-expire


These refresh_pattern lines are useless. The query and dot patterns 
above catch every URL in existence. Squid never gets past them to match these.
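
The fix follows from the comment already present in the config ("Add any of your own refresh_pattern entries above these"): refresh_pattern rules match top-down and the first match wins, so site-specific patterns must precede the catch-alls. A sketch with an abbreviated extension list:

```
# Site-specific rules first:
refresh_pattern -i \.(iso|avi|mp3|flv|wmv)$ 26 90% 260009 override-expire
# ... then the stock rules, ending with the catch-all:
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
```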





I would be most grateful if somebody helps me out.
Yours Faithfully,
---
Amir H Sh


A few seconds search in bugzilla shows this:
 http://bugs.squid-cache.org/show_bug.cgi?id=3117

Perhaps you can provide a trace (debug_options ALL,6) and help 
track down where it is coming from.
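
A sketch of capturing that trace (the cache.log location varies by build; full debug is very verbose, so enable it only while reproducing the crash):

```
# in squid.conf:
debug_options ALL,6
# then reload and watch for the crash:
#   squid -k reconfigure
#   tail -f /var/log/squid/cache.log
```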


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] wtf

2011-06-10 Thread Amos Jeffries

On 11/06/11 08:47, MrNicholsB wrote:

squid is caching but refuses to serve cached content to my peers. It's
a small home network, and I need to get an estimate for something that
will likely take about an hour or two worth of work to get my squid
working properly. It's quite frustrating to see squid's cache growing, with
1 GB memory consumption, downloading HTTP content like updates and game
patches, yet not being served that cached content. It's like a carrot
dangling on a stick, and I just apparently can't seem to get it.


Have you dropped hierarchy_stoplist from your config?
 Or are you using proxy-only in the peers configuration?

What does your complete config look like right now?

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


[squid-users] wtf

2011-06-10 Thread MrNicholsB

squid is caching but refuses to serve cached content to my peers. It's
a small home network, and I need to get an estimate for something that
will likely take about an hour or two worth of work to get my squid
working properly. It's quite frustrating to see squid's cache growing, with
1 GB memory consumption, downloading HTTP content like updates and game
patches, yet not being served that cached content. It's like a carrot
dangling on a stick, and I just apparently can't seem to get it.



[squid-users] squid 3.1.

2011-06-10 Thread Shenavandeh
Hi,

I have a squid installation that crashes twice a day under the load of
7 Mbps bandwidth with the following message in squid.out with no
specific traces in cache.log

Startup: Fri Jun 10 15:46:20
dying from an unhandled exception: !theConsumer
terminate called after throwing an instance of 'TextException'
  what():  !theConsumer
Startup: Fri Jun 10 19:55:29

It is compiled using following options:
 sbin]# ./squid -v
Squid Cache: Version 3.1.12.1
configure options:  '--enable-linux-netfilter'
'--enable-storeio=ufs,aufs' '--enable-poll'
'--enable-x-accelerator-vary' '--enable-follow-x-forwarded-for'
'--enable-ssl' '--enable-snmp' '--enable-removal-policies'
'--enable-gnuregex' '--with-large-files' '--enable-async-io'
'CFLAGS=-DNUMTHREADS=300' --with-squid=/root/squid-3.1.12.1
--enable-ltdl-convenience

the platform is as follows:

CPU :4 cores of  Intel(R) Xeon(R) CPU   E5504  @ 2.00GHz
RAM : 8GB
OS: CentOS 5.6 :
Kernel: Linux version 2.6.25 compiled with tproxy option.

the Squid configuration:

cache_mem 4000 MB

dead_peer_timeout 30 seconds
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY

maximum_object_size 50 MB
maximum_object_size_in_memory 500 KB
minimum_object_size 0 KB

cache_replacement_policy heap LFUDA
memory_replacement_policy heap LRU

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1


acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow to_localhost


http_access allow localhost
http_access allow localnet
http_access allow to_localhost

http_access deny all

http_port 3128 tproxy

hierarchy_stoplist cgi-bin ?

cache_dir aufs /cache 24000 16 256

coredump_dir cache

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

cache_effective_user squid
cache_effective_group squid

half_closed_clients off
buffered_logs on
client_db off
quick_abort_max 0 KB
quick_abort_min 0 KB
memory_pools off

cache_swap_high 95%
cache_swap_low 90
logfile_rotate 10%

visible_hostname Cache

quick_abort_min 32 KB
quick_abort_max 32 KB
quick_abort_pct 95

negative_ttl 3 minutes
positive_dns_ttl 6 hours

pipeline_prefetch on

acl snmpkey snmp_community public
snmp_port 3401
snmp_access allow snmpkey localhost
snmp_access deny all

refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv|mpg|wma|ogg|wmv|asx|asf)$ 26 90% 260009 override-expire
refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff|pdf|uxx)$ 26 90% 260009 override-expire


I would be most grateful if somebody helps me out.
Yours Faithfully,
---
Amir H Sh


Re: [squid-users] some thougs about pushing objects

2011-06-10 Thread Luis Daniel Lucio Quiroz
On Friday, June 3, 2011 at 23:52:59, Eliezer Croitoru wrote:
> well if you do want to push an object you can do it in a more elegant way:
> 
> "export http_proxy=http://localhost:3128  ; wgethttp://fqdn/object";
> 
> and use it on a big site using recursive download and on ram drive.
> also another tip is to use "--delete-after"
> this will pull the file into the squid and write it to the RAM... will take
> down some load from the disks... it will leave the needed files on squid
> and not on the disk as files.
> 
> i am using this method to preload and update specific files cache using
> crontab jobs.
> 
> Regards
> Eliezer
> 
> On 02/06/2011 17:20, Luis Daniel Lucio Quiroz wrote:
> > just thinking
> > 
> > if squid-purge tool can purge an specific object from cache by modifing
> > the db cache db,
> > 
> > is there a way to push an object like that with a tool?
> > 
> > 
> > LD

Yes,

the only problem is that the object won't enter immediately; it has to be
requested several times, according to policy.
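
Eliezer's preload trick above can be sketched as a small shell function; the proxy address and the tmpfs download directory are assumptions to adjust for your setup:

```shell
#!/bin/sh
# Sketch of the cache-preload trick: fetch a URL through the local proxy and
# discard the body, leaving a copy of the object in Squid's cache.
# localhost:3128 and /dev/shm are assumed values, not from the original post.
preload() {
    url="$1"
    http_proxy=http://localhost:3128 \
        wget -q --delete-after -P /dev/shm "$url"
}
# Example cron usage to keep a file fresh in cache:
#   preload http://example.com/updates/patch.bin
```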


Re: [squid-users] multiple http_port names

2011-06-10 Thread errno
On Friday, June 10, 2011 07:33:07 AM Amos Jeffries wrote:

> > ... behaved as intended:  when clients went through
> > the http_port listener 192.168.2.2:80, the tcp_outgoing_address
> > worked as expected, wherein http://www.whatismyip.com
> > displayed 192.168.2.2 rather than 192.168.1.1.
> 
> * whatismyip.com is a global website outside on The Internet
> * 192.168.* is a rfc1918 IP space
> * rfc1918  IP are not used on The Internet
> 
> Ergo the Squid outgoing IP address is not what is being displayed by
> that website.
> 

Sorry for making things difficult with my example - I completely
failed to make it understood that the IPs in the config examples were
dummy values.  All the actual IPs in question under the real config 
are indeed within the publicly routable, non-reserved address space.

So, to simplify, I found that using:

acl  myportname 
http_port  name=
tcp_outgoing_address  

... failed to cause the tcp_outgoing_address to work as 
expected/intended, while:

acl  myip 
http_port  name=
tcp_outgoing_address  

... worked as intended.

.. the only change being that I used the 'myip' acl rather
than the 'myportname' acl.



[squid-users] Re: Squid 3.2 CONNECT not working with NTLM - logs

2011-06-10 Thread Alex Crow

Hi,

I noticed a few people have downloaded my logs - has anyone found 
anything useful? It does appear to be a bug to me, but if it is a change 
in behaviour I'd be happy of any hints to fix my config.


All the best

Alex




Further to my original post, I've found that NTLM auth does not work 
with CONNECT requests from clients in 3.2. I use a similar config to 
that I'm using in production in 2.7.


I have disabled ssl-bump and dynamic SSL until I can resolve this.

I do *not* have to "add http_access allow CONNECT" before "http_access 
deny CONNECT !SSL_ports " in my production config, which seems correct 
as otherwise unauthorised users could use CONNECT.


Here is the config I am testing, which works fine for plain HTTP traffic:

auth_param ntlm program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp

auth_param ntlm children 30
auth_param basic program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-basic

auth_param basic children 30
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
external_acl_type nt_group ttl=20 children=40 %LOGIN 
/usr/lib/squid3/ext_wbinfo_group_acl

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
acl AuthorizedUsers proxy_auth REQUIRED
always_direct allow all
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access allow AuthorizedUsers
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow all AuthorizedUsers
http_access deny all
http_port 3128
hierarchy_stoplist cgi-bin ?
cache_mem 2048 MB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
cache_dir diskd /var/spool/squid3 36 64 512
debug_options ALL,3
coredump_dir /var/cache
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
dns_defnames on

However, CONNECT requests do not work: the client sits forever waiting 
for data, while Squid does not match against "AuthorizedUsers" and 
attempts to return an error page:


Logs are available here:

http://www.nanogherkin.com/nonssl_cache.log.gz
http://www.nanogherkin.com/ssl_cache.log.gz

Any help much appreciated.

Regards

Alex





Re: [squid-users] multiple http_port names

2011-06-10 Thread Amos Jeffries

On 10/06/11 21:20, errno wrote:

On Friday, June 10, 2011 12:07:49 AM Amos Jeffries wrote:

On 10/06/11 09:34, errno wrote:

I've got squid conf that looks a bit like the following snippet:

# ...
acl ip-192.168.1.2 myip 192.168.1.2

http_port 192.168.1.2:80 name=ip-192.168.1.2
http_port 192.168.1.2:8080 name=ip-192.168.1.2

tcp_outgoing_address 192.168.1.2 ip-192.168.1
# ...


Question:  do those http_port directives need to have
unique 'name=' entries?


unique.


I stand corrected on this. They MAY be set with matching names.




Or can they all share the
same name? Also - and perhaps more importantly,
is there any similar(ish) problems with the way I've
named the 'myip' acl the same as the http_port names?


myip is at the mercy of the interception lookups.

myportname only depends on what you put in squid.conf and which actual
listening port the traffic arrives on.



Well one thing that occurred is that I at first was using
myportname rather than myip for the acl in question -
but when doing so, all traffic appeared to be coming
from the server's primary ip addr (in this case, 192.168.1.1)
rather than what I intended as specified by tcp_outgoing_address -
in other words, the following (with a bit more config added for
context):

# ...
# 192.168.1.2
acl ip-192.168.1.2 myportname ip-192.168.1.2
http_port 192.168.1.2:80 name=ip-192.168.1.2
http_port 192.168.1.2:8080 name=ip-192.168.1.2
tcp_outgoing_address 192.168.1.2 ip-192.168.1.2

# 192.168.2.2
acl ip-192.168.2.2 myportname ip-192.168.2.2
http_port 192.168.2.2:80 name=ip-192.168.2.2
http_port 192.168.2.2:8080 name=ip-192.168.2.2
tcp_outgoing_address 192.168.2.2 ip-192.168.2.2
# ...


Using the above, tcp_outgoing_address did not work as
expected/intended: using a tool such as http://www.whatismyip.com/ ,
showed 192.168.1.1 in all cases, regardless of which
http_port/myportname the client originated from.


whatismyip.com uses far more than the TCP/IP address to determine what 
the IP is.


To accurately judge whether the packet IP is correct you need to check 
the packets as they flow. "tcpdump -s 0 -w file" and wireshark to read 
the resulting packet dump is the easiest.


Failing that a netstat dump of the TCP connections.

Failing that a cache.log ALL,5 trace has the details buried in it somewhere.




Switching from the above, to:

# ...
# 192.168.1.2
acl ip-192.168.1.2 myip 192.168.1.2
http_port 192.168.1.2:80 name=ip-192.168.1.2
http_port 192.168.1.2:8080 name=ip-192.168.1.2
tcp_outgoing_address 192.168.1.2 ip-192.168.1.2

# 192.168.2.2
acl ip-192.168.2.2 myip 192.168.2.2
http_port 192.168.2.2:80 name=ip-192.168.2.2
http_port 192.168.2.2:8080 name=ip-192.168.2.2
tcp_outgoing_address 192.168.2.2 ip-192.168.2.2
# ...

... behaved as intended:  when clients went through
the http_port listener 192.168.2.2:80, the tcp_outgoing_address
worked as expected, wherein http://www.whatismyip.com
displayed 192.168.2.2 rather than 192.168.1.1.


* whatismyip.com is a global website outside on The Internet
* 192.168.* is a rfc1918 IP space
* rfc1918  IP are not used on The Internet

Ergo the Squid outgoing IP address is not what is being displayed by 
that website.


There are 3 layers of protocols above IP where the address can be found. 
I think whatismyip uses only the HTTP-layer ones: the X-Forwarded-For 
and Via: headers added by Squid to provide UID and looping detection.


When the proxy is working properly whatismyip.com should display the 
nicely anonymous rfc1918 (192.168.1.250) or whatever the client browser 
has been assigned by your LAN.





Hope that makes sense; to rephrase/summarize:

* squid server's main/primary IP:  192.168.1.1

* one instance of squid running;

* the single instance listening on multiple:  http_ports:
192.168.1.2:80, 192.168.1.2:8080, 192.168.1.2:80 and
192.168.1.2:8080

results:

~ first example, using:
acl ip-192.168.1.2 myportname ip-192.168.1.2
and:
acl ip-192.168.2.2 myportname 192.168.2.2
... all cache traffic was detected as originating from server's
main/primary ip: 192.168.1.1 - and not from the specified
tcp_outgoing_address

~ BUT, second example, using:
acl ip-192.168.1.2 myip 192.168.1.2
and:
acl ip-192.168.2.2 myip 192.168.2.2
... all cache traffic was this time detected as originating
from the specified tcp_outgoing_address, as intended,
rather than from the squid server instances primary
ip addr (192.168.1.1).





# ...
#
# don't work:
#acl ip-192.168.1.2 myportname ip-192.168.1.2
#
# works as expected/intended:
acl ip-192.168.1.2 myip 192.168.1.2
#
http_port 192.168.1.2:80 name=ip-192.168.1.2
http_port 192.168.1.2:8080 name=ip-192.168.1.2

tcp_outgoing_address 192.168.1.2 ip-192.168.1.2
#...


I'd like to understand what's going on, but the docs
I've read are not supplying any real information on
the matter.


Squid stores the local-endpoint IP address as local_ip and the http_port 
name= as myportname next to each other in an object recording the 
request details.


Those details are then passed to th

Re: [squid-users] Squid and DNS

2011-06-10 Thread Amos Jeffries

On 10/06/11 21:10, ML Alasta wrote:

Hi

I have two Squid instances which make all DNS requests with the same source port.
This behaviour causes problems with my load balancer.
Is there a way to randomize the source port of all DNS requests?


Squid chooses a random DNS port at every startup, and should on every 
reconfigure as well, but I have not checked that.


Getting two to pick the same number out of 64K possibilities is a 
remarkable coincidence, unless you have patches causing it.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] A way/hack to force squid to die?

2011-06-10 Thread Amos Jeffries

On 10/06/11 20:41, Tom Tux wrote:

Mmm...

Now I was able to force squid to die (with gdb):

 From cache.log:
--- SNIP ---
(squid)(death+0x37)[0x564d37]
/lib64/libpthread.so.0(+0xf5d0)[0x7f99ebe705d0]
FATAL: Received Segment Violation...dying.
2011/06/10 10:37:28.408| storeDirWriteCleanLogs: Starting...
2011/06/10 10:37:28.409| assertion failed: comm.cc:301: "data ==
COMMIO_FD_READCB(fd)"
--- SNAP ---

But there is no mail sent to the mail address defined in the
"cache_mgr" directive. I also ran a tcpdump on port 25 to check whether
a mail is sent at all. But no packets on port 25, no mail.



Hmm,
 - check for a file /tmp/squid-XX
 - check that the command line:
     "$mail_program $cache_mgr
 - check that your mail system accepts email from
squid@$unique_hostname and squid@$visible_hostname


 ** Email spam filters are one of several reasons why the hostname must be a 
FQDN with working rDNS.


Amos



2011/6/10 Helmut Hullen:

Hello, Tom,

you wrote on 10.06.11:


 From squid.conf.documented:
-+- SNIP ---
#  TAG: cache_mgr
#   Email-address of local cache manager who will receive
#   mail if the cache dies.  The default is "webmaster."


"if the cache dies"

That's a different thing from a dying squid.

Best regards!
Helmut




--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] multiple http_port names

2011-06-10 Thread errno
On Friday, June 10, 2011 02:20:49 AM errno wrote:
> * the single instance listening on multiple : http_ports:
> 192.168.1.2:80, 192.168.1.2:8080, 192.168.1.2:80 and
> 192.168.1.2:8080
>

Damn, that should have said:

* the single instance listening on multiple : http_ports:
192.168.1.2:80, 192.168.1.2:8080, 192.168.2.2:80 and
192.168.2.2:8080


... e.g. 192.168.1.2 and 192.168.2.2



Re: [squid-users] multiple http_port names

2011-06-10 Thread errno
On Friday, June 10, 2011 12:07:49 AM Amos Jeffries wrote:
> On 10/06/11 09:34, errno wrote:
> > I've got squid conf that looks a bit like the following snippet:
> > 
> > # ...
> > acl ip-192.168.1.2 myip 192.168.1.2
> > 
> > http_port 192.168.1.2:80 name=ip-192.168.1.2
> > http_port 192.168.1.2:8080 name=ip-192.168.1.2
> > 
> > tcp_outgoing_address 192.168.1.2 ip-192.168.1
> > # ...
> > 
> > 
> > Question:  do those http_port directives need to have
> > unique 'name=' entries?
> 
> unique.
> 
> > Or can they all share the
> > same name? Also - and perhaps more importantly,
> > is there any similar(ish) problems with the way I've
> > named the 'myip' acl the same as the http_port names?
> 
> myip is at the mercy of the interception lookups.
> 
> myportname only depends on what you put in squid.conf and which actual
> listening port the traffic arrives on.
> 

Well one thing that occurred is that I at first was using
myportname rather than myip for the acl in question -
but when doing so, all traffic appeared to be coming
from the server's primary ip addr (in this case, 192.168.1.1)
rather than what I intended as specified by tcp_outgoing_address -
in other words, the following (with a bit more config added for
context):

# ...
# 192.168.1.2
acl ip-192.168.1.2 myportname ip-192.168.1.2
http_port 192.168.1.2:80 name=ip-192.168.1.2
http_port 192.168.1.2:8080 name=ip-192.168.1.2
tcp_outgoing_address 192.168.1.2 ip-192.168.1.2

# 192.168.2.2
acl ip-192.168.2.2 myportname ip-192.168.2.2
http_port 192.168.2.2:80 name=ip-192.168.2.2
http_port 192.168.2.2:8080 name=ip-192.168.2.2
tcp_outgoing_address 192.168.2.2 ip-192.168.2.2
# ...


Using the above, tcp_outgoing_address did not work as 
expected/intended: using a tool such as http://www.whatismyip.com/ ,
showed 192.168.1.1 in all cases, regardless of which
http_port/myportname the client originated from.

Switching from the above, to:

# ...
# 192.168.1.2
acl ip-192.168.1.2 myip 192.168.1.2
http_port 192.168.1.2:80 name=ip-192.168.1.2
http_port 192.168.1.2:8080 name=ip-192.168.1.2
tcp_outgoing_address 192.168.1.2 ip-192.168.1.2

# 192.168.2.2
acl ip-192.168.2.2 myip 192.168.2.2
http_port 192.168.2.2:80 name=ip-192.168.2.2
http_port 192.168.2.2:8080 name=ip-192.168.2.2
tcp_outgoing_address 192.168.2.2 ip-192.168.2.2
# ...

... behaved as intended:  when clients went through
the http_port listener 192.168.2.2:80, the tcp_outgoing_address
worked as expected, wherein http://www.whatismyip.com
displayed 192.168.2.2 rather than 192.168.1.1.

Hope that makes sense; to rephrase/summarize:

* squid server's main/primary IP:  192.168.1.1

* one instance of squid running; 

* the single instance listening on multiple : http_ports:
192.168.1.2:80, 192.168.1.2:8080, 192.168.1.2:80 and
192.168.1.2:8080

results:

~ first example, using: 
acl ip-192.168.1.2 myportname ip-192.168.1.2
and:
acl ip-192.168.2.2 myportname 192.168.2.2
... all cache traffic was detected as originating from server's 
main/primary ip: 192.168.1.1 - and not from the specified
tcp_outgoing_address 

~ BUT, second example, using:
acl ip-192.168.1.2 myip 192.168.1.2
and:
acl ip-192.168.2.2 myip 192.168.2.2
... all cache traffic was this time detected as originating
from the specified tcp_outgoing_address, as intended,
rather than from the squid server instances primary
ip addr (192.168.1.1).



So, something in the difference between:

# ...
acl ip-192.168.1.2 myportname ip-192.168.1.2
http_port 192.168.1.2:80 name=ip-192.168.1.2
http_port 192.168.1.2:8080 name=ip-192.168.1.2
tcp_outgoing_address 192.168.1.2 ip-192.168.1.2
#...

and:

# ...
#
# don't work:
#acl ip-192.168.1.2 myportname ip-192.168.1.2  
#
# works as expected/intended:
acl ip-192.168.1.2 myip 192.168.1.2
#
http_port 192.168.1.2:80 name=ip-192.168.1.2
http_port 192.168.1.2:8080 name=ip-192.168.1.2

tcp_outgoing_address 192.168.1.2 ip-192.168.1.2
#...


I'd like to understand what's going on, but the docs
I've read are not supplying any real information on
the matter.

( and as an additional piece of info; with the second
working-as-intended example, I did not need to set
server_persistent_connections to 'off', like the default
squid conf suggests:

#  TAG: tcp_outgoing_address
#  Allows you to map requests to different outgoing IP addresses
#  based on the username or source address of the user making
#  the request.
#
#  tcp_outgoing_address ipaddr [[!]aclname] ...
# [ ... ]
#  Note: The use of this directive using client dependent ACLs is
#  incompatible with the use of server side persistent connections. To
#  ensure correct results it is best to set server_persistent_connections
#  to off when using this directive in such configurations.


Basically, I have one instance of squid that is listening on multiple
ip:port http_port directives, and I want the tcp_outgoing_address
for each ip to properly reflect the ip that the cache request came in on.




[squid-users] Squid and DNS

2011-06-10 Thread ML Alasta
Hi

I have two Squid instances which make all DNS requests with the same source port.
This behaviour causes problems with my load balancer.
Is there a way to randomize the source port of all DNS requests?

Best Regard

Samuel


Re: [squid-users] A way/hack to force squid to die?

2011-06-10 Thread Tom Tux
Mmm...

Now I was able to force squid to die (with gdb):

>From cache.log:
--- SNIP ---
(squid)(death+0x37)[0x564d37]
/lib64/libpthread.so.0(+0xf5d0)[0x7f99ebe705d0]
FATAL: Received Segment Violation...dying.
2011/06/10 10:37:28.408| storeDirWriteCleanLogs: Starting...
2011/06/10 10:37:28.409| assertion failed: comm.cc:301: "data ==
COMMIO_FD_READCB(fd)"
--- SNAP ---

But there is no mail sent to the mail address defined in the
"cache_mgr" directive. I also ran a tcpdump on port 25 to check whether
a mail is sent at all. But no packets on port 25, no mail.





2011/6/10 Helmut Hullen :
> Hello, Tom,
>
> You wrote on 10.06.11:
>
>> From squid.conf.documented:
>> -+- SNIP ---
>> #  TAG: cache_mgr
>> #       Email-address of local cache manager who will receive
>> #       mail if the cache dies.  The default is "webmaster."
>
> "if the cache dies"
>
> That's a different thing from a dying squid.
>
> Best regards!
> Helmut
>


Re: [squid-users] A way/hack to force squid to die?

2011-06-10 Thread Helmut Hullen
Hello, Tom,

You wrote on 10.06.11:

> From squid.conf.documented:
> -+- SNIP ---
> #  TAG: cache_mgr
> #   Email-address of local cache manager who will receive
> #   mail if the cache dies.  The default is "webmaster."

"if the cache dies"

That's a different thing from a dying squid.

Best regards!
Helmut


Re: [squid-users] multiple http_port names

2011-06-10 Thread Amos Jeffries

On 10/06/11 09:34, errno wrote:


I've got squid conf that looks a bit like the following snippet:

# ...
acl ip-192.168.1.2 myip 192.168.1.2

http_port 192.168.1.2:80 name=ip-192.168.1.2
http_port 192.168.1.2:8080 name=ip-192.168.1.2

tcp_outgoing_address 192.168.1.2 ip-192.168.1
# ...


Question:  do those http_port directives need to have
unique 'name=' entries?


unique.


Or can they all share the
same name? Also - and perhaps more importantly,
is there any similar(ish) problems with the way I've
named the 'myip' acl the same as the http_port names?


myip is at the mercy of the interception lookups.

myportname only depends on what you put in squid.conf and which actual 
listening port the traffic arrives on.


The ACL values can of course list several port name= values if you want 
several ports grouped to be handled the same.
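
For instance, a sketch with hypothetical port names:

```
http_port 192.168.1.2:80   name=web80
http_port 192.168.1.2:8080 name=web8080
# one ACL matching traffic arriving on either named port:
acl local-ports myportname web80 web8080
```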




I tested the above and things _appear_ to be working
as expected, but... I'm still wondering whether there
might be subtle issues with the above config.


The only subtle problem I have been able to think of is that wildcard 
listening ports are neutral/agnostic of the packets arriving on them.
 The name is fixed with the actual receiving port on Squid, not the 
client visible IP.
 This is by design, but can be confusing to some who don't realize how 
wildcard ports work. Particularly when dealing with multiple IPs spread 
over multiple protocols.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] A way/hack to force squid to die?

2011-06-10 Thread Amos Jeffries

On 09/06/11 23:24, Tom Tux wrote:

Hi

I need to verify, if I get a mail when squid dies (I put a correct
mailaddress in cache_mgr-directive). Is there a way/hack to force
squid to die? I tried several things like revoking permissions on the
the cache-dir, moving cache.log away... ->  but squid is still alive.



The email address is used on the error pages, to display an *optional* 
"send an email to the admin" URL.


Doing the abort emails automatically requires squid to hit an assertion 
failure, which triggers abort(). It slows down the recovery period. So 
external tools that watch the log stream for "assertion failed:" are 
preferred (I think that is the text in all squids).


 If you can find a way to reliably hit one of the assertions remotely, 
that is a DoS vulnerability and *we want to know* ASAP.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] A way/hack to force squid to die?

2011-06-10 Thread Amos Jeffries

On 10/06/11 08:32, da...@lang.hm wrote:

On Thu, 9 Jun 2011, Helmut Hullen wrote:


Hallo, Tom,

Du meintest am 09.06.11:



I need to verify, if I get a mail when squid dies (I put a correct
mailaddress in cache_mgr-directive). Is there a way/hack to force
squid to die?


Yes - this daemon is started from a script, e.g. with the command

/etc/init.d/squid start

(the command depends on your distribution).

And stopping squid is nearly the same command.

Brute force (under a linux distribution):

killall squid

(if you are curious: repeat the command)


actually, you do need to repeat the command. Squid doesn't stop
immediately when it gets the standard kill signal (15); it does some sort
of graceful shutdown that can take a significant amount of time to
actually take place.

I ran into this when trying to upgrade squid: if I do "killall squid;
/etc/init.d/squid start" I frequently get a message from the start that
squid is still running, but then a little while later squid will stop
(and since the start command failed, an outage occurs).

I've also found that the number of killall cycles you need to go through
varies depending on the squid version and config, with versions other
than 3.2 with more than one worker I need to do two killall commands to
get squid to stop, with 3.2 and more than one worker I find I need three.

what I do is to do killall squid in a loop until the killall command
returns an error that there is no process to kill.


Whereas what you should do to _abort_ Squid is simply a:
  squid -k kill

I do hope you realize that killing Squid with "-k kill", the kill or 
killall commands causes you worse problems in the long term.


  The slow graceful shutdown is trading a small amount of time on 
shutdown for about 10x to 20x the same time when re-starting. Squid is 
forced to scan the entire cache disk object by object if the swap.state 
is not written out on shutdown.
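
A sketch of a restart that respects the graceful shutdown: ask Squid to stop, then wait for the old process to exit before starting the new one, avoiding the "still running" start failure described above. The PID-file path, init script, and timeout are assumptions.

```shell
#!/bin/sh
# Poll until the given PID is gone, up to a limit in seconds.
wait_for_exit() {
    pid="$1"; limit="${2:-60}"; t=0
    while kill -0 "$pid" 2>/dev/null; do
        sleep 1
        t=$((t + 1))
        if [ "$t" -ge "$limit" ]; then
            return 1    # gave up waiting
        fi
    done
    return 0            # process is gone
}
# Usage:
#   squid -k shutdown
#   wait_for_exit "$(cat /var/run/squid.pid)" 120 && /etc/init.d/squid start
```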


 It is also trading some delay period waiting for existing client 
connections to complete, preventing client browsers (or downstream 
caches!) receiving corrupted content. Or just as bad, allowing Squid 
time to close the open disk files, preventing local cache corruption.


NP: we are slowly auditing the things which can be done immediately 
without the wait (like dropping idle connections, helpers etc fast). 
Assistance welcome pointing them out for fixing.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2