Re: [squid-users] Why is squid caching local intranet domains??

2012-06-06 Thread Amos Jeffries

On 07.06.2012 04:10, bnichols wrote:

Well, the only issue I really have is that any host that is MANUALLY
configured for the Squid gets cache hits on the hosts in the
local domain, which really isn't a problem, considering none of my hosts
are manually configured, and it's all done via forwarding on the router.


So in essence, Squid is doing what I want it to do, caching all
traffic, and letting the local hosts go directly to local webservers on
the intranet.


Squid is not doing this second part. Your router or Squid box firewall 
is. Everything going through Squid gets logged.





I was just surprised and bewildered by the lack of log file generation
when trying to access a local webserver. I would have expected to see
logs with DIRECT in them rather than a lack of logs altogether.



There are two separate network configs participating in your setup:

 1) your router box diversion (policy routing or DNAT)
 2) your squid box diversion (DNAT or REDIRECT or TPROXY)

Take another look at the config on the *Squid* box.

I think that you will find, as Eliezer said earlier, that the packets 
destined to the Squid box web server are ACCEPT'ed without being sent 
into Squid, even if they come from outside the box.
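
A minimal sketch of the kind of rule ordering being described, on the Squid box itself (the interface name, webserver address, and ports here are assumptions for illustration, not taken from the thread):

```shell
# nat-table rules match top-down: an ACCEPT placed before the REDIRECT
# exempts that traffic from Squid entirely, so it is never logged there.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -d 10.10.1.1 -j ACCEPT              # local webserver: bypasses Squid
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j REDIRECT --to-ports 3128         # everything else: diverted into Squid
```

Swapping the two rules would send the local webserver's traffic through Squid as well, which is why rule order matters here.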


Amos



Re: [squid-users] FTP access for IPv6 clients

2012-06-06 Thread Amos Jeffries

On 07.06.2012 08:28, Nicolas C. wrote:

Hello,

I'm using Squid as an http/ftp proxy at a university; most of our
workstations and servers have IPv6 activated.

I recently upgraded my Squid proxies to version 3.1.6 (Debian
Squeeze) and the workstations are connecting to the proxy using IPv6
(or IPv4) with no problem.


3.1.6 has quite a few issues with IPv4/IPv6 behaviour in FTP. Please 
try upgrading to the 3.1.19 package in Debian Wheezy/Testing or 
Unstable.





A few computers need to access FTP servers on the Internet, and there
are some issues when accessing an IPv4 FTP server: the FTP client
(FileZilla) uses IPv6 to connect to the proxy, and the proxy then uses
FTP commands unknown to the FTP server (EPSV, for example); using the
"ftp_epsv off" option in Squid has no effect.

As a workaround, to force FTP clients to connect to Squid using IPv4,
I created a "proxy-ftp" entry in our DNS pointing to the IPv4 address
of the proxy. If FileZilla is configured to use "proxy-ftp", it's
working fine.

The problem is that sometimes the FTP server has IPv6 enabled and
then it does not work: the workstation uses IPv4 to reach Squid,
which uses IPv6 to reach the FTP server. The FTP client fails
immediately after a PASV command.


Squid is coded to try the IPv6+IPv4-compatible command (EPSV) first. If it 
gets as far as trying the IPv4-only PASV command, it will not go back to 
trying the EPSV command.
 ... "ftp_epsv off" makes Squid go straight to PASV and skip all 
the non-IPv4 access methods.
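
For orientation, here is roughly what the two passive-mode exchanges look like (the server replies below are illustrative examples, not captured traffic). EPSV reports only a data port, so it works over both IPv6 and IPv4, while PASV embeds a dotted-quad IPv4 address in its reply and therefore cannot describe an IPv6 endpoint:

```
# Address-family agnostic: the reply carries only a data port.
> EPSV
< 229 Entering Extended Passive Mode (|||6446|)

# IPv4-only: the reply embeds an IPv4 address plus port bytes,
# so it cannot work across an IPv6 data connection.
> PASV
< 227 Entering Passive Mode (192,0,2,10,25,46)
```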



The third option is to upgrade your FTP server to one which supports 
those extension commands (they are for optimising IPv4 as much as IPv6 
support). Then you won't have to hack protocol translation workarounds 
through Squid to access it from modern FTP clients.


Amos



[squid-users] FTP access for IPv6 clients

2012-06-06 Thread Nicolas C.

Hello,

I'm using Squid as an http/ftp proxy at a university; most of our 
workstations and servers have IPv6 activated.


I recently upgraded my Squid proxies to version 3.1.6 (Debian Squeeze) 
and the workstations are connecting to the proxy using IPv6 (or IPv4) 
with no problem.


A few computers need to access FTP servers on the Internet, and there are 
some issues when accessing an IPv4 FTP server: the FTP client 
(FileZilla) uses IPv6 to connect to the proxy, and the proxy then uses FTP 
commands unknown to the FTP server (EPSV, for example); using the 
"ftp_epsv off" option in Squid has no effect.


As a workaround, to force FTP clients to connect to Squid using IPv4, I 
created a "proxy-ftp" entry in our DNS pointing to the IPv4 address of 
the proxy. If FileZilla is configured to use "proxy-ftp", it's working fine.


The problem is that sometimes the FTP server has IPv6 enabled and then 
it does not work: the workstation uses IPv4 to reach Squid, which 
uses IPv6 to reach the FTP server. The FTP client fails immediately 
after a PASV command.


Is there a known solution to my issue? I have not made a network capture yet.

Regards,

Nicolas


Re: [squid-users] Why is squid caching local intranet domains??

2012-06-06 Thread Eliezer Croitoru

The Squid box is a gateway,
so if you access the local network you are not getting the data/web 
through the Squid box.

Makes sense.

Eliezer

On 06/06/2012 19:10, bnichols wrote:

Well, the only issue I really have is that any host that is MANUALLY
configured for the Squid gets cache hits on the hosts in the
local domain, which really isn't a problem, considering none of my hosts
are manually configured, and it's all done via forwarding on the router.

So in essence, Squid is doing what I want it to do, caching all
traffic, and letting the local hosts go directly to local webservers on
the intranet.

I was just surprised and bewildered by the lack of log file generation
when trying to access a local webserver. I would have expected to see
logs with DIRECT in them rather than a lack of logs altogether.


Of course I get log files just fine when accessing normal web sites,
and logging and Squid itself both function.

On Wed, 06 Jun 2012 18:51:02 +0300
Eliezer Croitoru  wrote:




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Caching issue with http_port when running in transparent mode

2012-06-06 Thread Hans Musil
Eliezer Croitoru wrote:

> On 05/06/2012 17:22, Hans Musil wrote:
> > Eliezer wrote:
> >
> >> one important thing to be aware of is that if you are using the same
> box
> >> as a gateway and squidbox it's better to use the "redirect" instead of
> >> DNAT.
> >>
> >> you can always try to use:
> >> http://nocat.net/downloads/NoCatSplash/
> >>
> >> or to write your own helper.
> >> it can be pretty simple to build such a helper, and you will just need
> >> to use some NAT chains\tables on iptables that will redirect any
> >> connection to the world into the webserver with a login page that
> >> connected to a script that will do some stuff in the iptables "allow"
> >> table.
> >>
> >> do you need to apply some username and password mechanism\auth or just
> >> splash screen to agree some rules\agreement ?
> >>
> >> Eliezer
> >>
> >
> > Thanks again, Eliezer. The hint for the REDIRECT target is a good point.
> >
> > NoCatSplash does not work for me as I need more control. Not only do
> > users need to log in, they also need to log out when done. Furthermore, I need
> > to trigger a traffic quota system from the login/logout script. Also, web
> > traffic needs to be logged. NoCatSplash seems not to be flexible enough.
> >
> > Hans
> >
> 
> well,
> nocatsplash can be updated a bit to fit a login/logout.
> i know that there was a tool for billing and quota,
> and it depends on the traffic quota basis you want:
> whether you want to supply internet traffic based only on web quota, or also 
> based on other network services' quota.
> 
> i implemented a traffic meter long ago using iptables and wrote it 
> to a mysql DB,
> and a quota limit based on user/IP can be added to it.
> 
> if you want the idea, then it's:
> create a web page for login/logout with CGI, based on password and user in the db.
> add to it a quota status if you want (preferably yes).
> 
> in the iptables rules you should create specific tables for the quota meter.
> so the iptables should:
> allow all users in the lan traffic to the gw machine's web site.
> have tables that count traffic for each ip, added by the web 
> scripts.
> have a helper that runs every 30 sec, dumps iptables stats and 
> resets the counter.
> it then parses the data from the file into the db by users.
> it then checks whether the quota is exceeded, and sets the proper iptables tables/rules 
> and db flags for that user.
> 
> i would run the helper every 30 seconds for grace time, but would run a 
> specific login/logout script/program that changes the proper flags 
> and counters in the db for the user/ip.
> 
> this is a tutorial specific to the iptables counter:
> http://www.catonmat.net/blog/traffic-accounting-with-iptables/
> i have seen the thing with the DB here:
> http://wiki.openvz.org/Traffic_accounting_with_iptables
> 
> you can use snmp to pull the data from the db using a script.
> 
> to get a specific table's data (like a custom one) you can use:
> iptables --line-number -xnvL FORWARD
> 
> iptables --line-number -xnvL FORWARD | gawk '{print $1 " " $3 " " $10}'
> this will give you the byte statistics for each IP.
> 
> just remember that if you are using a proxy server you will also need to 
> count the redirected\intercepted traffic in a intercept table.
> 
> i have found this nice thing to use snmp for monitoring:
> http://www.nativenet.ch/content/view/28/51
> 
> and also this:
> 
> http://forums.cacti.net/viewtopic.php?t=8091&highlight=iptables
> 
> 
> as for the exact way to measure clients' traffic quota, i'm sure there is 
> a more "way forward" approach than parsing the iptables stats.
> 
> but it's one of the best tools in linux world.
> 
> there is also the quota module of iptables, but i'm not sure it's for this 
> case.
> 
> anyway, quota and users is a big thing by itself.
> 
> i think it's doable if you customize the iptables structure/schema for 
> this specific use.
> every time you check the current counter you can zero it specifically.
> 
> 
> if you are up to the task of combining pseudo code for the whole 
> process with me, i will be happy to sit on it some time from next 
> week.
> 
> Eliezer
> 
> 
> 
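
The per-IP byte parsing from the quoted one-liner can be exercised without root by feeding it canned `iptables --line-number -xnvL FORWARD` output (the sample rows below are fabricated for illustration):

```shell
# Canned sample of `iptables --line-number -xnvL FORWARD` output, with
# one column-header row kept, so the parsing runs without privileges.
sample='num      pkts      bytes target     prot opt in     out     source               destination
1        1200    345678 ACCEPT     all  --  *      *       0.0.0.0/0            10.10.1.105
2         800    123456 ACCEPT     all  --  *      *       0.0.0.0/0            10.10.1.106'

# Field 1 is the rule number, field 3 the byte counter, field 10 the
# destination address -- the same fields the quoted gawk one-liner prints.
stats=$(printf '%s\n' "$sample" | awk 'NR > 1 { print $1, $3, $10 }')
printf '%s\n' "$stats"
```

Real output also begins with a `Chain FORWARD (policy ...)` line; skip it by raising the `NR` threshold or by matching only rows whose first field is numeric.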

Thank you, Eliezer, for this very detailed description. Some months ago, I 
already played around with quotas and traffic shaping, and I think I have 
found a reasonable way to manage these things.

As you have mentioned, iptables has a quota module, which is very helpful. 
Contrary to the traffic measuring tutorial you linked, my goal is not to 
measure arbitrary traffic, but to set a pre-defined quota. After this quota is 
exceeded, the traffic for this user will be throttled by traffic control rules. 
iptables' task is just to mark packets that exceed the quota. The rest is done 
by the traffic control tool tc.

Unfortunately, I have not yet dug out all my old stuff. Thus, I'm not yet 
able to send you working example code, just some key lines:

In iptables:

Two simple chains that mark a packet and return. Mark 10 is normal traffic, 11 is 
traffic that will be t
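
A hedged sketch of the scheme Hans outlines: the mark values 10 and 11 come from his message, while the chain names, client address, quota size, device and rates below are assumptions for illustration:

```shell
# Two simple mangle chains that mark a packet and return:
# mark 10 = normal traffic, mark 11 = over-quota (to be throttled).
iptables -t mangle -N mark_normal
iptables -t mangle -A mark_normal -j MARK --set-mark 10
iptables -t mangle -A mark_normal -j RETURN
iptables -t mangle -N mark_throttled
iptables -t mangle -A mark_throttled -j MARK --set-mark 11
iptables -t mangle -A mark_throttled -j RETURN

# The quota match sends the first N bytes through the normal chain;
# once the quota is spent, the fallthrough rule marks for throttling.
iptables -t mangle -A FORWARD -s 10.10.1.105 -m quota --quota 1000000000 -j mark_normal
iptables -t mangle -A FORWARD -s 10.10.1.105 -j mark_throttled

# tc then shapes by fwmark: full rate for mark 10, a trickle for mark 11.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit
tc class add dev eth0 parent 1: classid 1:11 htb rate 64kbit
tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10
tc filter add dev eth0 parent 1: protocol ip handle 11 fw flowid 1:11
```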

Re: [squid-users] Caching issue with http_port when running in transparent mode

2012-06-06 Thread Hans Musil
Amos Jeffries wrote:

> On 06.06.2012 07:04, Hans Musil wrote:
> >
> > Oops, another problem: Amos, your solution looks fine, but there is
> > one problem. My login/logout script needs to know the client's IP, but
> > it only sees my squid's IP. I know there is the format tag %i, but this
> > would require the non-stable version 3.2. Any better idea?
> 
> Your NAT rules need a bypass when going to your internal server for the 
> login page. So the client connects to the login page directly without 
> Squid being in the way.
> 
> Amos

Yes, this would be the easy way. However, what if I ever decided to make 
the squid non-transparent? Then all clients would need to know that the gateway 
server has to be excepted from being proxied. OK, possible, although not 
really elegant.
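
If the proxy ever did become explicit, the exception Hans worries about is usually distributed centrally as a proxy auto-config (PAC) file rather than configured on every client. A hypothetical sketch (the hostnames and addresses are invented for illustration):

```javascript
// Hypothetical proxy.pac: send requests for the gateway's login server
// direct, and everything else through Squid.
function FindProxyForURL(url, host) {
  if (host === "gateway.example.org" || host === "192.168.1.1") {
    return "DIRECT";
  }
  return "PROXY squid.example.org:3128";
}
```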

Anyway, thanks for all the help. With this, I'll be able to build up a working 
solution.

Hans
-- 


Re: [squid-users] Why is squid caching local intranet domains??

2012-06-06 Thread bnichols
Well, the only issue I really have is that any host that is MANUALLY
configured for the Squid gets cache hits on the hosts in the
local domain, which really isn't a problem, considering none of my hosts
are manually configured, and it's all done via forwarding on the router.

So in essence, squid is doing what I want it to do, caching all
traffic, and letting the local hosts go directly to local webservers on
the intranet.

I was just surprised and bewildered by the lack of log file generation
when trying to access a local webserver. I would have expected to see
logs with DIRECT in them rather than a lack of logs altogether.


Of course I get log files just fine when accessing normal web sites,
and logging and Squid itself both function.

On Wed, 06 Jun 2012 18:51:02 +0300
Eliezer Croitoru  wrote:

> you might have an accept rule before the redirect in iptables.
> 
> Eliezer
> On 06/06/2012 18:17, bnichols wrote:
> > One thing that I've noticed is that on machines being forwarded to my
> > Squid box via my router, all other sites show up in the access.log
> > and everything functions fine; however, when I try to access the
> > webserver residing on the squid box, there are no logs at all
> > generated for those requests. I would expect to see DIRECT there.
> >
> > Equally of note, when I manually enter the proxy config into the
> > browsers, I get access.log entries for the domain, along with cache
> > hits of course.
> >
> > I just find it interesting that there is no log generation when the
> > webserver is accessed from a machine on the lan being forwarded by
> > my router.
> >
> >
> > On Wed, 06 Jun 2012 18:05:49 +0300
> > Eliezer Croitoru  wrote:
> >
> >> there was a bug in some old versions of squid;
> >> you had better use the newest version.
> >>
> >> Eliezer
> >> On 06/06/2012 18:01, mrnicholsb wrote:
> >>> I'm scratching my head here; I've got an issue that's driving me
> >>> bonkers...
> >>>
> >>> 1338994323.846 0 10.10.1.105 TCP_IMS_HIT/304 278 GET
> >>> http://deviant.evil/ - NONE/- text/html
> >>>
> >>> Clearly this local site is being cached, what is frustrating is
> >>> that I have the following meta tag on the page
> >>>
> >>> 
> >>>
> >>> Yet squid is apparently ignoring that directive completely.
> >>>
> >>> OK, no problem, so we set our conf up to always go direct for the
> >>> localnet acl, right? No dice, still caching.
> >>>
> >>> Could one of you be so kind as to take a look at my conf and tell
> >>> me why?
> >>>
> >>>
> >>> ##
> >>>
> >>> #transparent because ddwrt is forwarding traffic to it
> >>> http_port 3128 transparent
> >>> #parent disabled due to location outside scope of firewall rules
> >>> #cache_peer 192.168.1.205 parent 3128 3129 default
> >>> # no-query no-digest
> >>> never_direct deny all
> >>>
> >>> refresh_pattern ^ftp: 1440 20% 10080
> >>> refresh_pattern ^gopher: 1440 0% 1440
> >>> refresh_pattern (/cgi-bin/|\?) 0 0% 0
> >>> refresh_pattern . 0 20% 4320
> >>>
> >>> dns_nameservers 10.10.1.1
> >>> hosts_file /etc/hosts
> >>> cache_swap_low 95
> >>> cache_swap_high 98
> >>> access_log /var/log/squid3/access.log
> >>> cache_mem 320 MB
> >>> memory_pools on
> >>> maximum_object_size_in_memory 512 KB
> >>> maximum_object_size 400 MB
> >>> log_icp_queries off
> >>> half_closed_clients on
> >>> cache_mgr mrnicho...@gmail.com
> >>> cache_dir ufs /mnt/secondary/var/spool/squid3 3 32 256
> >>> visible_hostname deviant.evil
> >>> shutdown_lifetime 1 second
> >>>
> >>> #icap_enable on
> >>> #icap_send_client_ip on
> >>> #icap_send_client_username on
> >>> #icap_client_username_encode off
> >>> #icap_client_username_header X-Authenticated-User
> >>> #icap_preview_enable on
> >>> #icap_preview_size 1024
> >>> #icap_service service_req reqmod_precache bypass=1
> >>> icap://127.0.0.1:1344/squidclamav
> >>> #adaptation_access service_req allow all
> >>> #icap_service service_resp respmod_precache bypass=1
> >>> icap://127.0.0.1:1344/squidclamav
> >>> #adaptation_access service_resp allow all
> >>>
> >>> acl manager proto cache_object
> >>> acl localhost src 127.0.0.1/32
> >>> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
> >>> acl localnet src 10.10.1.0/24
> >>> acl blacklist dstdomain "/mnt/secondary/squid3/squid-block.acl"
> >>>
> >>> acl SSL_ports port 443
> >>> acl Safe_ports port 80 # http
> >>> acl Safe_ports port 21 # ftp
> >>> acl Safe_ports port 443 # https
> >>> acl Safe_ports port 70 # gopher
> >>> acl Safe_ports port 210 # wais
> >>> acl Safe_ports port 1025-65535 # unregistered ports
> >>> acl Safe_ports port 280 # http-mgmt
> >>> acl Safe_ports port 488 # gss-http
> >>> acl Safe_ports port 591 # filemaker
> >>> acl Safe_ports port 777 # multiling http
> >>> acl CONNECT method CONNECT
> >>>
> >>> always_direct allow localnet
> >>>
> >>> #icp_access allow localnet
> >>> #icp_access deny all
> >>>
> >>> http_access deny blacklist
> >>> http_access allow manager localhost
> >>> http_access deny manager
> >>> http_access deny !Safe_ports

Re: [squid-users] Why is squid caching local intranet domains??

2012-06-06 Thread Eliezer Croitoru

you might have an accept rule before the redirect in iptables.

Eliezer
On 06/06/2012 18:17, bnichols wrote:

One thing that I've noticed is that on machines being forwarded to my
Squid box via my router, all other sites show up in the access.log and
everything functions fine; however, when I try to access the webserver
residing on the Squid box, there are no logs at all generated for those
requests. I would expect to see DIRECT there.

Equally of note, when I manually enter the proxy config into the
browsers, I get access.log entries for the domain, along with cache
hits of course.

I just find it interesting that there is no log generation when the
webserver is accessed from a machine on the lan being forwarded by my
router.


On Wed, 06 Jun 2012 18:05:49 +0300
Eliezer Croitoru  wrote:


there was a bug in some old versions of squid;
you had better use the newest version.

Eliezer
On 06/06/2012 18:01, mrnicholsb wrote:

I'm scratching my head here; I've got an issue that's driving me
bonkers...

1338994323.846 0 10.10.1.105 TCP_IMS_HIT/304 278 GET
http://deviant.evil/ - NONE/- text/html

Clearly this local site is being cached, what is frustrating is
that I have the following meta tag on the page



Yet squid is apparently ignoring that directive completely.

OK, no problem, so we set our conf up to always go direct for the
localnet acl, right? No dice, still caching.

Could one of you be so kind as to take a look at my conf and tell
me why?


##

#transparent because ddwrt is forwarding traffic to it
http_port 3128 transparent
#parent disabled due to location outside scope of firewall rules
#cache_peer 192.168.1.205 parent 3128 3129 default
# no-query no-digest
never_direct deny all

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

dns_nameservers 10.10.1.1
hosts_file /etc/hosts
cache_swap_low 95
cache_swap_high 98
access_log /var/log/squid3/access.log
cache_mem 320 MB
memory_pools on
maximum_object_size_in_memory 512 KB
maximum_object_size 400 MB
log_icp_queries off
half_closed_clients on
cache_mgr mrnicho...@gmail.com
cache_dir ufs /mnt/secondary/var/spool/squid3 3 32 256
visible_hostname deviant.evil
shutdown_lifetime 1 second

#icap_enable on
#icap_send_client_ip on
#icap_send_client_username on
#icap_client_username_encode off
#icap_client_username_header X-Authenticated-User
#icap_preview_enable on
#icap_preview_size 1024
#icap_service service_req reqmod_precache bypass=1
icap://127.0.0.1:1344/squidclamav
#adaptation_access service_req allow all
#icap_service service_resp respmod_precache bypass=1
icap://127.0.0.1:1344/squidclamav
#adaptation_access service_resp allow all

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl localnet src 10.10.1.0/24
acl blacklist dstdomain "/mnt/secondary/squid3/squid-block.acl"

acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

always_direct allow localnet

#icp_access allow localnet
#icp_access deny all

http_access deny blacklist
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access deny all


#Thanks heaps in advance. Squid 3.1.6-1.2 Debian Squeeze










--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Why is squid caching local intranet domains??

2012-06-06 Thread bnichols
One thing that I've noticed is that on machines being forwarded to my
Squid box via my router, all other sites show up in the access.log and
everything functions fine; however, when I try to access the webserver
residing on the Squid box, there are no logs at all generated for those
requests. I would expect to see DIRECT there.

Equally of note, when I manually enter the proxy config into the
browsers, I get access.log entries for the domain, along with cache
hits of course.

I just find it interesting that there is no log generation when the
webserver is accessed from a machine on the lan being forwarded by my
router.


On Wed, 06 Jun 2012 18:05:49 +0300
Eliezer Croitoru  wrote:

> there was a bug in some old versions of squid;
> you had better use the newest version.
> 
> Eliezer
> On 06/06/2012 18:01, mrnicholsb wrote:
> > I'm scratching my head here; I've got an issue that's driving me
> > bonkers...
> >
> > 1338994323.846 0 10.10.1.105 TCP_IMS_HIT/304 278 GET
> > http://deviant.evil/ - NONE/- text/html
> >
> > Clearly this local site is being cached, what is frustrating is
> > that I have the following meta tag on the page
> >
> > 
> >
> > Yet squid is apparently ignoring that directive completely.
> >
> > OK, no problem, so we set our conf up to always go direct for the
> > localnet acl, right? No dice, still caching.
> >
> > Could one of you be so kind as to take a look at my conf and tell
> > me why?
> >
> >
> > ##
> >
> > #transparent because ddwrt is forwarding traffic to it
> > http_port 3128 transparent
> > #parent disabled due to location outside scope of firewall rules
> > #cache_peer 192.168.1.205 parent 3128 3129 default
> > # no-query no-digest
> > never_direct deny all
> >
> > refresh_pattern ^ftp: 1440 20% 10080
> > refresh_pattern ^gopher: 1440 0% 1440
> > refresh_pattern (/cgi-bin/|\?) 0 0% 0
> > refresh_pattern . 0 20% 4320
> >
> > dns_nameservers 10.10.1.1
> > hosts_file /etc/hosts
> > cache_swap_low 95
> > cache_swap_high 98
> > access_log /var/log/squid3/access.log
> > cache_mem 320 MB
> > memory_pools on
> > maximum_object_size_in_memory 512 KB
> > maximum_object_size 400 MB
> > log_icp_queries off
> > half_closed_clients on
> > cache_mgr mrnicho...@gmail.com
> > cache_dir ufs /mnt/secondary/var/spool/squid3 3 32 256
> > visible_hostname deviant.evil
> > shutdown_lifetime 1 second
> >
> > #icap_enable on
> > #icap_send_client_ip on
> > #icap_send_client_username on
> > #icap_client_username_encode off
> > #icap_client_username_header X-Authenticated-User
> > #icap_preview_enable on
> > #icap_preview_size 1024
> > #icap_service service_req reqmod_precache bypass=1
> > icap://127.0.0.1:1344/squidclamav
> > #adaptation_access service_req allow all
> > #icap_service service_resp respmod_precache bypass=1
> > icap://127.0.0.1:1344/squidclamav
> > #adaptation_access service_resp allow all
> >
> > acl manager proto cache_object
> > acl localhost src 127.0.0.1/32
> > acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
> > acl localnet src 10.10.1.0/24
> > acl blacklist dstdomain "/mnt/secondary/squid3/squid-block.acl"
> >
> > acl SSL_ports port 443
> > acl Safe_ports port 80 # http
> > acl Safe_ports port 21 # ftp
> > acl Safe_ports port 443 # https
> > acl Safe_ports port 70 # gopher
> > acl Safe_ports port 210 # wais
> > acl Safe_ports port 1025-65535 # unregistered ports
> > acl Safe_ports port 280 # http-mgmt
> > acl Safe_ports port 488 # gss-http
> > acl Safe_ports port 591 # filemaker
> > acl Safe_ports port 777 # multiling http
> > acl CONNECT method CONNECT
> >
> > always_direct allow localnet
> >
> > #icp_access allow localnet
> > #icp_access deny all
> >
> > http_access deny blacklist
> > http_access allow manager localhost
> > http_access deny manager
> > http_access deny !Safe_ports
> > http_access deny CONNECT !SSL_ports
> > http_access allow localhost
> > http_access allow localnet
> > http_access deny all
> >
> >
> > #Thanks heaps in advance. Squid 3.1.6-1.2 Debian Squeeze
> >
> >
> 
> 



Re: [squid-users] Why is squid caching local intranet domains??

2012-06-06 Thread Eliezer Croitoru

there was a bug in some old versions of squid;
you had better use the newest version.

Eliezer
On 06/06/2012 18:01, mrnicholsb wrote:

I'm scratching my head here; I've got an issue that's driving me bonkers...

1338994323.846 0 10.10.1.105 TCP_IMS_HIT/304 278 GET
http://deviant.evil/ - NONE/- text/html

Clearly this local site is being cached, what is frustrating is that I
have the following meta tag on the page



Yet squid is apparently ignoring that directive completely.

OK, no problem, so we set our conf up to always go direct for the localnet
acl, right? No dice, still caching.

Could one of you be so kind as to take a look at my conf and tell me why?


##

#transparent because ddwrt is forwarding traffic to it
http_port 3128 transparent
#parent disabled due to location outside scope of firewall rules
#cache_peer 192.168.1.205 parent 3128 3129 default
# no-query no-digest
never_direct deny all

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

dns_nameservers 10.10.1.1
hosts_file /etc/hosts
cache_swap_low 95
cache_swap_high 98
access_log /var/log/squid3/access.log
cache_mem 320 MB
memory_pools on
maximum_object_size_in_memory 512 KB
maximum_object_size 400 MB
log_icp_queries off
half_closed_clients on
cache_mgr mrnicho...@gmail.com
cache_dir ufs /mnt/secondary/var/spool/squid3 3 32 256
visible_hostname deviant.evil
shutdown_lifetime 1 second

#icap_enable on
#icap_send_client_ip on
#icap_send_client_username on
#icap_client_username_encode off
#icap_client_username_header X-Authenticated-User
#icap_preview_enable on
#icap_preview_size 1024
#icap_service service_req reqmod_precache bypass=1
icap://127.0.0.1:1344/squidclamav
#adaptation_access service_req allow all
#icap_service service_resp respmod_precache bypass=1
icap://127.0.0.1:1344/squidclamav
#adaptation_access service_resp allow all

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl localnet src 10.10.1.0/24
acl blacklist dstdomain "/mnt/secondary/squid3/squid-block.acl"

acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

always_direct allow localnet

#icp_access allow localnet
#icp_access deny all

http_access deny blacklist
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access deny all


#Thanks heaps in advance. Squid 3.1.6-1.2 Debian Squeeze





--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


[squid-users] Why is squid caching local intranet domains??

2012-06-06 Thread mrnicholsb

I'm scratching my head here; I've got an issue that's driving me bonkers...

1338994323.846  0 10.10.1.105 TCP_IMS_HIT/304 278 GET 
http://deviant.evil/ - NONE/- text/html


Clearly this local site is being cached; what is frustrating is that I 
have the following meta tag on the page




Yet squid is apparently ignoring that directive completely.

OK, no problem, so we set our conf up to always go direct for the localnet acl, 
right? No dice, still caching.

Could one of you be so kind as to take a look at my conf and tell me why?


##

#transparent because ddwrt is forwarding traffic to it
http_port 3128 transparent
#parent disabled due to location outside scope of firewall rules
#cache_peer 192.168.1.205 parent 3128 3129  default
# no-query  no-digest
never_direct deny all

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

dns_nameservers 10.10.1.1
hosts_file /etc/hosts
cache_swap_low 95
cache_swap_high 98
access_log /var/log/squid3/access.log
cache_mem 320 MB
memory_pools on
maximum_object_size_in_memory 512 KB
maximum_object_size 400 MB
log_icp_queries off
half_closed_clients on
cache_mgr mrnicho...@gmail.com
cache_dir ufs /mnt/secondary/var/spool/squid3 3 32 256
visible_hostname deviant.evil
shutdown_lifetime 1 second

#icap_enable on
#icap_send_client_ip on
#icap_send_client_username on
#icap_client_username_encode off
#icap_client_username_header X-Authenticated-User
#icap_preview_enable on
#icap_preview_size 1024
#icap_service service_req reqmod_precache bypass=1 
icap://127.0.0.1:1344/squidclamav

#adaptation_access service_req allow all
#icap_service service_resp respmod_precache bypass=1 
icap://127.0.0.1:1344/squidclamav

#adaptation_access service_resp allow all

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl localnet src 10.10.1.0/24
acl blacklist dstdomain "/mnt/secondary/squid3/squid-block.acl"

acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

always_direct allow localnet

#icp_access allow  localnet
#icp_access deny all

http_access deny blacklist
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access deny all


#Thanks heaps in advance. Squid 3.1.6-1.2 Debian Squeeze




Re: [squid-users] Custom error message woes

2012-06-06 Thread Alan
On Wed, Jun 6, 2012 at 10:47 PM, Amos Jeffries  wrote:
> On 6/06/2012 8:15 p.m., Alan wrote:
>>
>> On Tue, May 29, 2012 at 7:39 PM, Amos Jeffries
>>  wrote:
>>
 2. The %o tag (message returned by external acl helper) is not
 url-unescaped, so the error message reads: bla+bla+bla.
>>>
>>>
>>> Uh-oh bug. Thank you.
>>
>> I have created a bug report as well as a possible solution here:
>>
>> http://bugs.squid-cache.org/show_bug.cgi?id=3557
>>
>> The bug report hasn't even been confirmed, but it would be great if
>> this could be incorporated in the next release.
>
>
> The patch seems not to have attached to the report.
>
> Amos

That is true, I just attached it, please check again.

Alan


Re: [squid-users] Custom error message woes

2012-06-06 Thread Amos Jeffries

On 6/06/2012 8:15 p.m., Alan wrote:
> On Tue, May 29, 2012 at 7:39 PM, Amos Jeffries  wrote:
>
>>> 2. The %o tag (message returned by external acl helper) is not
>>> url-unescaped, so the error message reads: bla+bla+bla.
>>
>> Uh-oh bug. Thank you.
>
> I have created a bug report as well as a possible solution here:
>
> http://bugs.squid-cache.org/show_bug.cgi?id=3557
>
> The bug report hasn't even been confirmed, but it would be great if
> this could be incorporated in the next release.


The patch seems not to have attached to the report.

Amos


[squid-users] assertion failed: cbdata.cc:130: "cookie == ((long)this ^ Cookie)"

2012-06-06 Thread John Hay
Hi,

When upgrading from 3.2.0.16 to 3.2.0.17 I ran into this:
2012/06/06 12:19:56 kid1| assertion failed: cbdata.cc:130: "cookie == 
((long)this ^ Cookie)"

I have also tried squid-3.2.0.17-20120527-r11561, but see the same problem.
I'll include my squid.conf, a piece of cache.log and the backtrace. This
happen somewhere from a few seconds after starting squid to a few minutes
later.

The machine is running FreeBSD 8.2-STABLE and up to now, I have not had
such a problem with squid. I have been running the 3.2 branch for a while
because I need the IPv6 -> IPv4 failover. The machine has 2 interfaces,
with one side having global IPv4 and IPv6 addresses and the other side
having some private addresses. It also does transparent IPv4 proxying
for those that did not configure a proxy.

Has anybody seen something like this? Can it be something in my config
that is now catching up with me? I have not changed that in a while.

John
-- 
John Hay -- j...@meraka.csir.co.za / j...@freebsd.org
acl local-servers dstdomain .csir.co.za
acl local-servers dstdomain .fmfi.org.za
acl to_ipv6 dst ipv6
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 146.64.0.0/16
acl localnet src 2001:4200:7000::/48
acl localnet src fd9c:6829:597c::/48
acl localdst dst 146.64.0.0/16
acl localdst dst 10.0.0.0/8
acl badsrc src 146.64.8.10
acl SSL_ports port 443
acl SSL_ports port 563
acl SSL_ports port 7002 # 
acl SSL_ports port 9001 # 
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 800 # 
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny badsrc
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access deny all
icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all
http_port 146.64.8.8:3128
http_port 127.0.0.1:3000 transparent
http_port 3128
icp_port 3130
connect_timeout 5 seconds
connect_retries 2
hierarchy_stoplist cgi-bin ?
cache_log /home/squid/logs/cache.log
access_log /home/squid/logs/access.log squid
refresh_pattern ^ftp:   1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
cache_mgr j...@meraka.org.za
cache_effective_user squid
cache_effective_group squid
visible_hostname crypton.cids.org.za
cache_mem 96 MB
dns_defnames on
coredump_dir /home/squid
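The refresh_pattern lines in the config above take min, percent, and max values (in minutes): an object younger than min is fresh, one older than max is stale, and in between the percent is applied to the object's age at the time it was last modified. A simplified sketch of that rule, leaving out Squid's many overrides:

```python
# Simplified freshness check for "refresh_pattern <regex> min percent% max".
# All times are in minutes; lm_delta is (stored time - Last-Modified),
# when the origin supplied a Last-Modified header.

def is_fresh(age, min_age, percent, max_age, lm_delta=None):
    if age <= min_age:
        return True                 # younger than min: always fresh
    if age > max_age:
        return False                # older than max: always stale
    if lm_delta is not None:
        return age <= lm_delta * percent / 100.0   # LM-factor rule
    return False                    # no validator info: treat as stale

# "refresh_pattern ^ftp: 1440 20% 10080": an FTP object last modified
# 10000 minutes before it was stored stays fresh for 2000 minutes.
print(is_fresh(1500, 1440, 20, 10080, lm_delta=10000))  # True
print(is_fresh(2500, 1440, 20, 10080, lm_delta=10000))  # False
```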
2012/06/06 11:38:48 kid1| Starting Squid Cache version 3.2.0.17-20120527-r11561 
for amd64-portbld-freebsd8.2...
2012/06/06 11:38:48 kid1| Process ID 34071
2012/06/06 11:38:48 kid1| Process Roles: worker
2012/06/06 11:38:48 kid1| With 11095 file descriptors available
2012/06/06 11:38:48 kid1| Initializing IP Cache...
2012/06/06 11:38:48 kid1| DNS Socket created at [::], FD 7
2012/06/06 11:38:48 kid1| DNS Socket created at 0.0.0.0, FD 8
2012/06/06 11:38:48 kid1| Adding domain meraka.csir.co.za from /etc/resolv.conf
2012/06/06 11:38:48 kid1| Adding domain cids.org.za from /etc/resolv.conf
2012/06/06 11:38:48 kid1| Adding nameserver 127.0.0.1 from /etc/resolv.conf
2012/06/06 11:38:48 kid1| Adding nameserver ::1 from /etc/resolv.conf
2012/06/06 11:38:48 kid1| Logfile: opening log /home/squid/logs/access.log
2012/06/06 11:38:48 kid1| WARNING: log parameters now start with a module name. 
Use 'stdio:/home/squid/logs/access.log'
2012/06/06 11:38:48 kid1| Store logging disabled
2012/06/06 11:38:48 kid1| Swap maxSize 0 + 98304 KB, estimated 7561 objects
2012/06/06 11:38:48 kid1| Target number of buckets: 378
2012/06/06 11:38:48 kid1| Using 8192 Store buckets
2012/06/06 11:38:48 kid1| Max Mem  size: 98304 KB
2012/06/06 11:38:48 kid1| Max Swap size: 0 KB
2012/06/06 11:38:48 kid1| Using Least Load store dir selection
2012/06/06 11:38:48 kid1| Set Current Directory to /home/squid
2012/06/06 11:38:48 kid1| Loaded Icons.
2012/06/06 11:38:48 kid1| HTCP Disabled.
2012/06/06 11:38:48 kid1| Accepting HTTP Socket connections at 
local=146.64.8.8:3128 remote=[::] FD 10 flags=9
2012/06/06 11:38:48 kid1| Accepting NAT intercepted HTTP Socket connections at 
local=127.0.0.1:3000 remote=[::] FD 11 flags=41
2012/06/06 11:38:48 kid1| Accepting HTTP Socket connections at local=[::]:3128 
remote=[::] FD 12 flags=9
2012/06/06 11:38:48 kid1| Accepting ICP messages on [::]:3130
2012/06/06 11:38:48 kid1| Sending ICP messages from [::]:3130
2012/06/06 11:38:49 kid1| storeLateRelease: released 0 objects
2012/06/06 11:39:16 kid1| assertion failed: cbdata.cc:130: "cookie == 
((long)this ^ Cookie)"


2012/06/06 12
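The failed assertion above is Squid's cbdata validity check: each callback-data allocation stores a cookie derived from its own address XORed with a process-wide secret, and a mismatch on a later check means the pointer handed back to a callback is stale or corrupt. An illustrative sketch of that pattern (not Squid source):

```python
# Pointer-XOR cookie validation, as in the assertion
# "cookie == ((long)this ^ Cookie)" from cbdata.cc.

import secrets

COOKIE = secrets.randbits(32)   # process-wide secret, fixed at startup

class CallbackData:
    def __init__(self, payload):
        self.payload = payload
        self.cookie = id(self) ^ COOKIE   # bind the cookie to this address

    def check(self):
        # A live, uncorrupted object always passes; a stale or
        # overwritten one trips the assertion, as in the crash above.
        assert self.cookie == id(self) ^ COOKIE, "stale cbdata pointer"

cbd = CallbackData("request state")
cbd.check()                 # passes for a live object
cbd.cookie ^= 1             # simulate memory corruption
try:
    cbd.check()
except AssertionError as e:
    print("caught:", e)     # caught: stale cbdata pointer
```

When the check fires inside Squid there is no recovery path, hence the abort and backtrace; the usual causes are a use-after-free or a double cbdataFree in one of the code paths exercised by the config.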

Re: [squid-users] Custom error message woes

2012-06-06 Thread Alan
On Tue, May 29, 2012 at 7:39 PM, Amos Jeffries  wrote:

>> 2. The %o tag (message returned by external acl helper) is not
>> url-unescaped, so the error message reads: bla+bla+bla.
>
>
> Uh-oh bug. Thank you.

I have created a bug report as well as a possible solution here:

http://bugs.squid-cache.org/show_bug.cgi?id=3557

The bug report hasn't even been confirmed, but it would be great if
this could be incorporated in the next release.

Best regards,

Alan