RE: [squid-users] Getting error msgs when trying to start squid

2009-04-21 Thread joost.deheer
> I have made a few changes to squid.conf based on what you
> told me, but the proxy still doesn't work.

Define "doesn't work". Clients get an error? Won't start? Something else?

If you get denies, you could add a deny_info for every ACL you have, to
see which ACL is stopping you:
- Create a file ERR_ACL_NAME (replace 'ACL_NAME' with the ACL name you use,
e.g. ERR_LOCALNET for the localnet ACL) in the errors directory (you can find
the exact path by grepping for error_directory in the default squid config).
Give it the single line "The ACL 'ACL_NAME' gave a deny".
- Add "deny_info ERR_ACL_NAME aclname" (e.g. deny_info ERR_LOCALNET localnet).
- Start the browser, and see which error page you get.
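For example, for the localnet ACL (a sketch only; assuming
/usr/share/squid/errors/English is your error_directory):

  echo "The ACL 'localnet' gave a deny" > /usr/share/squid/errors/English/ERR_LOCALNET

and in squid.conf:

  deny_info ERR_LOCALNET localnet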

If squid doesn't start, the error log is your friend. You could also start
the proxy with 'squid -N' to run squid as a console application instead of in
daemon mode. The errors should then appear on your screen.
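For example (adjust the binary path to your install; -d1 also copies debug
output to stderr):

  /usr/local/squid/sbin/squid -N -d1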

Joost

Re: [squid-users] Memory leak?

2009-04-21 Thread Amos Jeffries

Bin Liu wrote:

Thanks for your reply.

# /usr/local/squid/sbin/squid -v
Squid Cache: Version 2.7.STABLE6
configure options:  '--prefix=/usr/local/squid' '--with-pthreads'
'--with-aio' '--with-dl' '--with-large-files'
'--enable-storeio=ufs,aufs,diskd,coss,null'
'--enable-removal-policies=lru,heap' '--enable-htcp'
'--enable-kill-parent-hack' '--enable-snmp' '--enable-freebsd-tproxy'
'--disable-poll' '--disable-select' '--enable-kqueue'
'--disable-epoll' '--disable-ident-lookups' '--enable-stacktraces'
'--enable-cache-digests' '--enable-err-languages=English'


The squid process grows without bounds here. I've read the FAQ, and
tried lowering the cache_mem setting and decreasing the cache_dir size. That
server has 4GB of physical memory, and with the total cache_dir size set
to 60G, squid's resident size can still grow without bound and start
eating swap.


Note that cache_mem is not a bound on squid's memory usage. It is merely
the RAM equivalent of a cache_dir.




The OS is FreeBSD 7.1-RELEASE.


Thanks.

Do you have access to any memory-tracing software (valgrind or similar)?
Tracking actual memory usage on a live Squid can be done when it is built
against valgrind, together with certain cachemgr reports. I'll have to look
them up.
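As a rough sketch, a valgrind run could look something like this (the paths
are illustrative):

  valgrind --leak-check=full --log-file=/tmp/squid-valgrind.log \
      /usr/local/squid/sbin/squid -N -d1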



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] Squid BUG?

2009-04-21 Thread Amos Jeffries

Herbert Faleiros wrote:

On Tue, 21 Apr 2009 14:58:22 +1200 (NZST), "Amos Jeffries"
 wrote:
[cut]

As a side issue: does anyone know who the maintainer is for Slackware? I'm
trying to get in touch with them all.


Sorry, Squid was built from source here. The distro maintainer and more
info can be found at http://bluewhite64.com (the distro does not provide a
binary Squid package). I'm still waiting for an official 64-bit Slackware
port.



 does deleting the swap.state file(s) when squid is stopped fix things?


Apparently yes:

/dev/sdb1 276G  225G   37G  87% /var/cache/proxy/cache1
/dev/sdc1 276G  225G   53G  87% /var/cache/proxy/cache2
/dev/sdd1 276G  225G   37G  87% /var/cache/proxy/cache3
/dev/sde1 276G  225G   37G  87% /var/cache/proxy/cache4

It's running OK again.
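For reference, the procedure was essentially (paths as in the df output above):

  squid -k shutdown
  rm /var/cache/proxy/cache1/swap.state    # likewise for cache2..cache4
  squid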

Now, another strange log:

2009/04/21 00:26:25| commonUfsDirRebuildFromDirectory: Swap data buffer
length is not sane.

Should I decrease cache_dir sizes?


No. This seems to occur when the stored object is corrupted, incompletely
written, or the size of the object is apparently larger than the size of
the file.


At a blind guess, I'd say it's a 64-bit build reading a file stored by a
32-bit build.


The result is that squid immediately dumps the file out of cache. So if
it repeats for any given object, or for newly stored ones, it's a problem;
but once per existing object after a cache format upgrade may be
acceptable.





The stranger thing was the store rebuild reporting > 100%.

Yes, we have seen a similar thing long ago in testing. I'm trying to
remember and research what came of those. At present I'm thinking maybe it
had something to do with 32-bit/64-bit changes in the distro build vs what
the cache was built with.



Similar logs found here about memory usage (via mallinfo):

Total in use:  1845425 KB 173%

and sometimes negative values:

total space in arena:  -1922544 KB
Ordinary blocks:   -1922682 KB 49 blks

Total in use:  -1139886 KB 59%


Ah, these seem to come up regularly. It's a counter overflow in the
reporting. We try to fix these in the latest release as they are
discovered; there may be a patch already in later releases or HEAD. If
not, it's bug report time for that.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] Can I rewrite URL on browser?

2009-04-21 Thread Amos Jeffries

Oleg wrote:

Hi2All.

Can Squid redirect a user's request to another URL depending on the browser?
For example, if the user runs MSIE < 6.0, redirect them to a page with a
browser update on the IT site.
I have found only access rules for the browser string (User-Agent), but that
is not what I mean.



I'd use a custom deny_info redirect for this.

  acl msie6 browser ..  ...
  deny_info http://example.com/msie_update.html msie6
  http_access deny msie6
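For instance, filled in (the regex is only an illustration; a '.' stands in
for the space, since whitespace separates alternative values on an acl line):

  acl msie6 browser MSIE.[1-5]\.
  deny_info http://example.com/msie_update.html msie6
  http_access deny msie6

Because the deny_info target is a full URL, Squid answers matching requests
with a redirect to that page.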


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] Can I rewrite URL on browser?

2009-04-21 Thread Oleg

Yes, that's what I need! Thank you.

Amos Jeffries writes:

Oleg wrote:

Hi2All.

Can Squid redirect a user's request to another URL depending on the browser?
For example, if the user runs MSIE < 6.0, redirect them to a page with a
browser update on the IT site.
I have found only access rules for the browser string (User-Agent), but that
is not what I mean.



I'd use a custom deny_info redirect for this.

  acl msie6 browser ..  ...
  deny_info http://example.com/msie_update.html msie6
  http_access deny msie6


Amos


Re: [squid-users] Can I rewrite URL on browser?

2009-04-21 Thread Amos Jeffries

Oleg wrote:

Yes, that's what I need! Thank you.

Amos Jeffries writes:

Oleg wrote:

Hi2All.

Can Squid redirect a user's request to another URL depending on the browser?
For example, if the user runs MSIE < 6.0, redirect them to a page with a
browser update on the IT site.
I have found only access rules for the browser string (User-Agent), but that
is not what I mean.



I'd use a custom deny_info redirect for this.

  acl msie6 browser ..  ...
  deny_info http://example.com/msie_update.html msie6
  http_access deny msie6


Amos


PS. Putting my web developer's hat on:
   please, please bump them to IE 7 :)
   IE6 is a royal pain in the visuals. 7 is at least a bit better.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


[squid-users] caching cgi_bin in 3.0

2009-04-21 Thread Matus UHLAR - fantomas
Hello,

I'm upgrading to 3.0 (finally) and I see that the new refresh_pattern
default was added in the config file:

refresh_pattern (cgi-bin|\?)   0   0%  0

I hope this is just to always verify the dynamic content, and should not
have any impact on caching it, if it's cacheable. Correct?
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Support bacteria - they're the only culture some people have. 


[squid-users] allowedURL don't work

2009-04-21 Thread Phibee Network Operation Center



Hi

I have a new problem with my Squid server (NTLM AD).

My configuration:

auth_param ntlm program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp

auth_param ntlm children 15
auth_param ntlm keep_alive on
auth_param basic program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-basic

auth_param basic children 15
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
#external_acl_type AD_Group children=50 concurrency=50 %LOGIN 
/usr/lib/squid/wbinfo_group.pl
external_acl_type AD_Group children=50 concurrency=50 ttl=1800 
negative_ttl=900 %LOGIN /usr/lib/squid/wbinfo_group.pl


cache_peer 127.0.0.1 parent 8081 0 proxy-only no-query weight=100 connect-timeout=5 login=*:password


## ACL des droits d'accès
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl Lan src 10.0.0.0/8 # RFC1918 possible internal network
acl Lan src 172.16.0.0/12  # RFC1918 possible internal network
acl Lan src 192.168.0.0/16 # RFC1918 possible internal network


##
## ACL pour les sites web consultable sans authentification
##
acl URL_Authorises dstdomain "/etc/squid-ntlm/allowedURL"
http_access allow URL_Authorises
##

acl SSL_ports port 443 563 1 1494 2598
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 563 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

##
# ACL pour definir les groupes AD autorisés a ce connecter
##
acl AllowedADUsers external AD_Group "/etc/squid-ntlm/allowedntgroups"
acl Winbind proxy_auth REQUIRED
##


##
# ACL pour les Droits d'accès d'apres l'Active Directory
##
# Droits d'accès d'apres l'Active Directory
http_access allow AllowedADUsers
http_access deny !AllowedADUsers
http_access deny !Winbind
##

http_access deny all


##
# Parametre Systeme
##
http_port 8080
hierarchy_stoplist cgi-bin ?
cache_mem 16 MB
#cache_dir ufs /var/spool/squid-ntlm 5000 16 256
cache_dir null /dev/null
#logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
#logformat squidmime %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt [%>h] [%<h]
#logformat common %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st %Ss:%Sh
#logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh

access_log /var/log/squid-ntlm/access.log squid
cache_log /var/log/squid-ntlm/cache.log
cache_store_log /var/log/squid-ntlm/store.log
# emulate_httpd_log off
mime_table /etc/squid-ntlm/mime.conf
pid_filename /var/run/squid-ntlm.pid
# debug_options ALL,1
log_fqdn off
ftp_user pr...@gw.phibee.net
ftp_passive on
ftp_sanitycheck on
ftp_telnet_protocol on
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern (cgi-bin|\?)    0       0%      0
refresh_pattern .               0       20%     4320
icp_port 3130
error_directory /usr/share/squid/errors/French
icp_access allow Lan
icp_access deny all
htcp_access allow Lan
htcp_access deny all





Into my allowedURL, i have:

pagesjaunes.fr
estat.com
societe.com
quidonc.fr



When I try to access www.pagesjaunes.fr, it requests authentication ...
I want no authentication for these sites,
and no restriction on surfing them.

Does anyone see where my error is?
Is the correct syntax "pagesjaunes.fr" or ".pagesjaunes.fr" to match
*.pagesjaunes.fr?


thanks
jerome




Re: [squid-users] caching cgi_bin in 3.0

2009-04-21 Thread Chris Robertson

Matus UHLAR - fantomas wrote:

Hello,

I'm upgrading to 3.0 (finally) and I see that the new refresh_pattern
default was added in the config file:

refresh_pattern (cgi-bin|\?)   0   0%  0

I hope this is just to always verify the dynamic content, and should not
have any impact of caching it, if it's cacheable, correct?
  


Correct.  If the dynamic content gives a "Cache-Control: max-age" and/or
an "Expires" header that allows caching, the refresh pattern will not
prevent caching it.
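For example, a response like this from a query URL would still be cacheable
under that default (the headers are illustrative):

  HTTP/1.1 200 OK
  Cache-Control: max-age=3600
  Content-Type: text/html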


Chris


Re: [squid-users] allowedURL don't work

2009-04-21 Thread Chris Robertson

Phibee Network Operation Center wrote:

Hi

I have a new problem with my Squid server (NTLM AD).

My configuration:

auth_param ntlm program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp

auth_param ntlm children 15
auth_param ntlm keep_alive on
auth_param basic program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-basic

auth_param basic children 15
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
#external_acl_type AD_Group children=50 concurrency=50 %LOGIN 
/usr/lib/squid/wbinfo_group.pl
external_acl_type AD_Group children=50 concurrency=50 ttl=1800 
negative_ttl=900 %LOGIN /usr/lib/squid/wbinfo_group.pl


cache_peer 127.0.0.1 parent 8081 0 proxy-only no-query weight=100 connect-timeout=5 login=*:password


## ACL des droits d'accès
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl Lan src 10.0.0.0/8 # RFC1918 possible internal network
acl Lan src 172.16.0.0/12  # RFC1918 possible internal network
acl Lan src 192.168.0.0/16 # RFC1918 possible internal network


##
## ACL pour les sites web consultable sans authentification
##
acl URL_Authorises dstdomain "/etc/squid-ntlm/allowedURL"
http_access allow URL_Authorises


Are you sure you don't want to add additional restrictions to the
http_access allow (such as a limitation on the source IP, or something)?



##

acl SSL_ports port 443 563 1 1494 2598
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 563 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

##
# ACL pour definir les groupes AD autorisés a ce connecter
##
acl AllowedADUsers external AD_Group "/etc/squid-ntlm/allowedntgroups"
acl Winbind proxy_auth REQUIRED
##


##
# ACL pour les Droits d'accès d'apres l'Active Directory
##
# Droits d'accès d'apres l'Active Directory
http_access allow AllowedADUsers
http_access deny !AllowedADUsers
http_access deny !Winbind


These two deny lines are redundant, as everything is denied by the next 
line...



##

http_access deny all


##
# Parametre Systeme
##
http_port 8080
hierarchy_stoplist cgi-bin ?
cache_mem 16 MB
#cache_dir ufs /var/spool/squid-ntlm 5000 16 256
cache_dir null /dev/null
#logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
#logformat squidmime %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt [%>h] [%<h]
#logformat common %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st %Ss:%Sh
#logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh

access_log /var/log/squid-ntlm/access.log squid
cache_log /var/log/squid-ntlm/cache.log
cache_store_log /var/log/squid-ntlm/store.log
# emulate_httpd_log off
mime_table /etc/squid-ntlm/mime.conf
pid_filename /var/run/squid-ntlm.pid
# debug_options ALL,1
log_fqdn off
ftp_user pr...@gw.phibee.net
ftp_passive on
ftp_sanitycheck on
ftp_telnet_protocol on
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern (cgi-bin|\?)    0       0%      0
refresh_pattern .               0       20%     4320
icp_port 3130
error_directory /usr/share/squid/errors/French
icp_access allow Lan
icp_access deny all
htcp_access allow Lan
htcp_access deny all


Into my allowedURL, i have:

pagesjaunes.fr
estat.com
societe.com
quidonc.fr



When I try to access www.pagesjaunes.fr, it requests authentication ...
I want no authentication for these sites,
and no restriction on surfing them.

Does anyone see where my error is?
Is the correct syntax "pagesjaunes.fr" or ".pagesjaunes.fr" to match
*.pagesjaunes.fr?


The second option ".pagesjaunes.fr" will match http://pagesjaunes.fr, 
http://www.pagesjaunes.fr and any other hostname in front of pagesjaunes.fr.
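So, to match the whole domains, the file would look something like this (a
sketch based on the list posted above):

  .pagesjaunes.fr
  .estat.com
  .societe.com
  .quidonc.fr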



thanks
jerome


Chris


RE: [squid-users] allowedURL don't work

2009-04-21 Thread Dustin Hane
I'm trying to work with regexes and have a quick question in response to your
response. Wouldn't you also be able to do just a url_regex -I pagesjuanes and
allow that? That should theoretically work, yes?

If you are doing a url_allow, and you have the period in front of the domain,
that allows anything from the "tld".pagesjuanes.fr, correct?

---Paste
> When I try to access www.pagesjaunes.fr, it requests authentication ...
> I want no authentication for these sites,
> and no restriction on surfing them.
>
> Does anyone see where my error is?
> Is the correct syntax "pagesjaunes.fr" or ".pagesjaunes.fr" to match
> *.pagesjaunes.fr?

The second option ".pagesjaunes.fr" will match http://pagesjaunes.fr, 
http://www.pagesjaunes.fr and any other hostname in front of pagesjaunes.fr.

> thanks
> jerome

Chris
End Paste

-Original Message-
From: crobert...@gci.net [mailto:crobert...@gci.net] 
Sent: Tuesday, April 21, 2009 12:59 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] allowedURL don't work

Phibee Network Operation Center wrote:
> Hi
>
> I have a new problem with my Squid server (NTLM AD).
>
> My configuration:
>
> auth_param ntlm program /usr/bin/ntlm_auth 
> --helper-protocol=squid-2.5-ntlmssp
> auth_param ntlm children 15
> auth_param ntlm keep_alive on
> auth_param basic program /usr/bin/ntlm_auth 
> --helper-protocol=squid-2.5-basic
> auth_param basic children 15
> auth_param basic realm Squid proxy-caching web server
> auth_param basic credentialsttl 2 hours
> #external_acl_type AD_Group children=50 concurrency=50 %LOGIN 
> /usr/lib/squid/wbinfo_group.pl
> external_acl_type AD_Group children=50 concurrency=50 ttl=1800 
> negative_ttl=900 %LOGIN /usr/lib/squid/wbinfo_group.pl
>
> cache_peer 127.0.0.1 parent 8081 0 proxy-only no-query weight=100
> connect-timeout=5 login=*:password
>
> ## ACL des droits d'accès
> acl manager proto cache_object
> acl localhost src 127.0.0.1/32
> acl to_localhost dst 127.0.0.0/8
> acl Lan src 10.0.0.0/8 # RFC1918 possible internal network
> acl Lan src 172.16.0.0/12  # RFC1918 possible internal network
> acl Lan src 192.168.0.0/16 # RFC1918 possible internal network
>
>
> ##
> ## ACL pour les sites web consultable sans authentification
> ##
> acl URL_Authorises dstdomain "/etc/squid-ntlm/allowedURL"
> http_access allow URL_Authorises

Are  you sure you don't want to add additional restrictions to the 
http_access allow (such as a limitation on the source IP, or something)?

> ##
>
> acl SSL_ports port 443 563 1 1494 2598
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 563 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
>
> #http_access allow manager localhost
> http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
>
> ##
> # ACL pour definir les groupes AD autorisés a ce connecter
> ##
> acl AllowedADUsers external AD_Group "/etc/squid-ntlm/allowedntgroups"
> acl Winbind proxy_auth REQUIRED
> ##
>
>
> ##
> # ACL pour les Droits d'accès d'apres l'Active Directory
> ##
> # Droits d'accès d'apres l'Active Directory
> http_access allow AllowedADUsers
> http_access deny !AllowedADUsers
> http_access deny !Winbind

These two deny lines are redundant, as everything is denied by the next 
line...

> ##
>
> http_access deny all
>
>
> ##
> # Parametre Systeme
> ##
> http_port 8080
> hierarchy_stoplist cgi-bin ?
> cache_mem 16 MB
> #cache_dir ufs /var/spool/squid-ntlm 5000 16 256
> cache_dir null /dev/null
> #logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
> #logformat squidmime %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt [%>h] [%<h]
> #logformat common %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st %Ss:%Sh
> #logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
> access_log /var/log/squid-ntlm/access.log squid
> cache_log /var/log/squid-ntlm/cache.log

[squid-users] logging changes 2.6 -> 2.7

2009-04-21 Thread Ross J. Reedstrom
Hi all -
Recently upgraded a proxy-accelerator setup to using 2.7 (Debian
2.7.STABLE3-4.1, specifically) from 2.6 (2.6.20-1~bpo40+1). In this
setup, I'm using an external rewriter script to add virtual rooting bits
to the requested URL. (It's a zope system, using the VirtualHostMonster
rewriter, like so: 
Incoming request:
GET http://example.com/someimage.gif

Rewritten to:

GET 
http://example.com/VirtualHostBase/http/example.com:80/somepath/VirtualHostRoot/someimage.gif

These are then farmed out to multiple cache_peer origin servers.

The change I'm seeing is that the access.log using a custom format
line:

logformat custom %ts.%03tu %6tr %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh/%<A "%{X-Forwarded-For}>h"

The change is that in 2.6 %ru logged the requested URL as seen on the
wire. In 2.7, we get the rewritten URL.

Is this intentional? Is there a way around it? Since referer (sic) url
is not similarly rewritten, it gives log analysis software (that
attempts  to determine click-traces and page views) fits. I can
post-process my logs, but I'd rather fix them at generation time. I can
understand the need to have the rewritten version available: just not at
the cost of missing what was actually on the wire that Squid read.

Ross
-- 
Ross Reedstrom, Ph.D. reeds...@rice.edu
Systems Engineer & Admin, Research Scientist    phone: 713-348-6166
The Connexions Project  http://cnx.org          fax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE


[squid-users] Squid and TC - Traffic Shaping

2009-04-21 Thread Wilson Hernandez - MSD, S. A.

Hello.

I was writing a script to control traffic on our network. I created my
rules with tc and noticed that it wasn't working correctly.

I tried this traffic shaping on a Linux router that has squid doing
transparent caching.

When measuring the download speed on speedtest.net, the download speed is
70kbps when it is supposed to be over 300kbps. I found it strange, since
I've done traffic shaping in the past and it worked, but not on a box with
squid. I stopped the squid server and ran the test again, and it gave me
the speed I assigned to that machine. I assigned different bandwidths and
the test gave the correct speeds.

Has anybody used traffic shaping (tc in Linux) on a box with squid? Is
there a way to combine both and have them work side by side?

Thanks in advance for your help and input.



Re: [squid-users] allowedURL don't work

2009-04-21 Thread Chris Robertson

Dustin Hane wrote:

I'm trying to work with regex's and have a quick question in response to your 
response. Wouldn't you also be able to do just a url_regex -I pagesjuanes and 
allow that? That should theoretically work yes?
  


I think the -I needs to be lowercase, but otherwise that would work.  
It's just more resource intensive, and would allow 
"http://random.website/?fakequery=pagesjuanes&haha=true"; through.  
Handling regular expressions (url_regex, dstdom_regex) is far more 
complex than performing a string equality test (both for Squid and the 
maintainer).
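In other words, something like this (a sketch):

  acl pagesjuanes_re url_regex -i pagesjuanes
  http_access allow pagesjuanes_re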



If you are doing a url_allow, and you have the period in front of the domain,
that allows anything from the "tld".pagesjuanes.fr, correct?
  


Correct.  A dstdomain ACL with a leading dot will match the base domain
(pagesjuanes.fr in this case) AND "anything" dot base domain
(tld.pagesjuanes.fr, www.pagesjuanes.fr, search.pagesjuanes.fr).  If you
have a few host names you wish to block, while allowing the majority,
you can combine ACLs on an http_access line...


# Allow most, block a few
acl pagesjuanes dstdomain .pagesjuanes.fr
acl pagesjuanesExceptions dstdomain blocked.pagesjuanes.fr bad.pagesjuanes.fr
# Allow access to all pagesjuanes.fr domains, except blocked.pagesjuanes.fr
# and bad.pagesjuanes.fr
http_access allow pagesjuanes !pagesjuanesExceptions
# Block access to all pagesjuanes.fr domains not allowed above
http_access deny pagesjuanes

Chris


[squid-users] squid AND ssl

2009-04-21 Thread joe ryan
Hi,
I have a simple webserver that listens on port 80 for requests. I
would like to secure access to this webserver using squid and SSL. I
can access the simple website through http without any issue. When I
try to access it using https, I get a message in the cache file (see
attached).
The web page error shows up as: Connection to 192.168.0.1 Failed
The system returned:
(13) Permission denied

I am running Squid 2.7 stable and I used openssl to generate the cert and key.
I have attached my conf file and cache errors.
Can squid secure an insecure webserver the way I am trying to?
http_port 192.168.0.1:8080 
cache_mgr administra...@server2003
visible_hostname server2003
cache_dir ufs c:/squid/var/cache 512 16 256
acl Query urlpath_regex cgi-bin \?
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl PURGE method PURGE
acl to_localhost dst 127.0.0.1/8
acl SSL_ports port 441 443
https_port 192.168.0.1:443 accel cert=c:/squid/etc/ssl/mycert.pem 
key=c:/squid/etc/ssl/mykey.pem vhost
cache_peer 192.168.0.1  parent 443 0 no-query originserver default ssl 
sslflags=DONT_VERIFY_PEER
acl Safe_ports port 80 21 441 443 563 70 210 210 1025-65535 280 488 591 777
# acl CONNECT method CONNECT
acl all src 0.0.0.0/0.0.0.0
url_rewrite_host_header off
collapsed_forwarding on
acl webSrv dst 192.168.0.1
acl webPrt port 80
http_access allow webSrv webPrt
http_access allow all
always_direct allow all
acl localnetwork1 src 192.168.0.0/255.255.255.0
hierarchy_stoplist cgi-bin ?
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320
coredump_dir c:/squid/var/cache
cache_mem 64 MB
dns_testnames localhost
http_access allow manager localhost
# http_access deny manager
# http_access deny !Safe_ports
# http_access allow PURGE localhost
http_access allow localnetwork1
# http_access deny PURGE
access_log c:/squid/var/logs/access.log squid
# no_cache deny QUERY
http_reply_access allow all





Re: [squid-users] logging changes 2.6 -> 2.7

2009-04-21 Thread Mark Nottingham

That was fixed in STABLE4;
  
http://www.squid-cache.org/Versions/v2/2.7/squid-2.7.STABLE6-RELEASENOTES.html#s7

See also:
  http://www.squid-cache.org/bugs/show_bug.cgi?id=2406

Cheers,


On 22/04/2009, at 5:46 AM, Ross J. Reedstrom wrote:


Hi all -
Recently upgraded a proxy-accelerator setup to using 2.7 (Debian
2.7.STABLE3-4.1, specifically) from 2.6 (2.6.20-1~bpo40+1). In this
setup, I'm using an external rewriter script to add virtual rooting bits
to the requested URL. (It's a zope system, using the VirtualHostMonster
rewriter, like so:
Incoming request:
GET http://example.com/someimage.gif

Rewritten to:

GET 
http://example.com/VirtualHostBase/http/example.com:80/somepath/VirtualHostRoot/someimage.gif

These are then farmed out to multiple cache_peer origin servers.

The change I'm seeing is that the access.log using a custom format
line:

logformat custom %ts.%03tu %6tr %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh/%<A "%{X-Forwarded-For}>h"


The change is that in 2.6 %ru logged the requested URL as seen on the
wire. In 2.7, we get the rewritten URL.

Is this intentional? Is there a way around it? Since referer (sic) url
is not similarly rewritten, it gives log analysis software (that
attempts  to determine click-traces and page views) fits. I can
post-process my logs, but I'd rather fix them at generation time. I can
understand the need to have the rewritten version available: just not at
the cost of missing what was actually on the wire that Squid read.

Ross
--
Ross Reedstrom, Ph.D. reeds...@rice.edu
Systems Engineer & Admin, Research Scientist    phone: 713-348-6166
The Connexions Project  http://cnx.org          fax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE


--
Mark Nottingham   m...@yahoo-inc.com




[squid-users] caching behavior during COSS rebuild

2009-04-21 Thread Chris Woodfield
So I'm running with COSS under 2.7.STABLE6, and we've noticed (as I can see
others have; teh Googles tell me so) that the COSS rebuild (a) happens
every time squid is restarted, and (b) takes quite a while if the COSS
stripes are large. However, I've noticed that while the stripes are
being rebuilt, squid still listens for and handles requests - it just
SO_FAILs on every object that would normally get saved to a COSS
stripe. So much for that hit ratio.


SO - the questions are:

1. Is there *any* way to prevent the COSS rebuild if squid is  
terminated normally?
2. Is there a way to prevent squid from handling requests until the  
COSS stripe is fully rebuilt (this is obviously not good if you don't  
have redundant squids, but that's not a problem for us) ?


Thanks,

-C


Re: [squid-users] logging changes 2.6 -> 2.7

2009-04-21 Thread Ross J. Reedstrom
Ah thanks for the pointer, Mark. I'll take a look at backporting the
debian squeeze (testing) version back to lenny.

Ross
-- 
Ross Reedstrom, Ph.D. reeds...@rice.edu
Systems Engineer & Admin, Research Scientist    phone: 713-348-6166
The Connexions Project  http://cnx.org          fax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE


On Wed, Apr 22, 2009 at 11:50:09AM +1000, Mark Nottingham wrote:
> That was fixed in STABLE4;
>   
> http://www.squid-cache.org/Versions/v2/2.7/squid-2.7.STABLE6-RELEASENOTES.html#s7
> 
> See also:
>   http://www.squid-cache.org/bugs/show_bug.cgi?id=2406
> 
> Cheers,
> 
> 
> On 22/04/2009, at 5:46 AM, Ross J. Reedstrom wrote:
> 
> >Hi all -
> >Recently upgraded a proxy-accelerator setup to using 2.7 (Debian
> >2.7.STABLE3-4.1, specifically) from 2.6 (2.6.20-1~bpo40+1). In this
> >setup, I'm using an external rewriter script to add virtual rooting bits
> >to the requested URL. (It's a zope system, using the VirtualHostMonster
> >rewriter, like so:
> >Incoming request:
> >GET http://example.com/someimage.gif
> >
> >Rewritten to:
> >
> >GET 
> >http://example.com/VirtualHostBase/http/example.com:80/somepath/VirtualHostRoot/someimage.gif
> >
> >These are then farmed out to multiple cache_peer origin servers.
> >
> >The change I'm seeing is that the access.log using a custom format
> >line:
> >
> >logformat custom %ts.%03tu %6tr %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh/%<A "%{X-Forwarded-For}>h"
> >
> >The change is that in 2.6 %ru logged the requested URL as seen on the
> >wire. In 2.7, we get the rewritten URL.
> >
> >Is this intentional? Is there a way around it? Since referer (sic) url
> >is not similarly rewritten, it gives log analysis software (that
> >attempts  to determine click-traces and page views) fits. I can
> >post-process my logs, but I'd rather fix them at generation time. I can
> >understand the need to have the rewritten version available: just not at
> >the cost of missing what was actually on the wire that Squid read.
> >
> >Ross
> >-- 
> >Ross Reedstrom, Ph.D. reeds...@rice.edu
> >Systems Engineer & Admin, Research Scientist    phone: 713-348-6166
> >The Connexions Project  http://cnx.org          fax: 713-348-3665
> >Rice University MS-375, Houston, TX 77005
> >GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE
> 
> --
> Mark Nottingham   m...@yahoo-inc.com
> 


[squid-users] Authenticator processes after reconfigure.

2009-04-21 Thread Oleg

Hello.

Version: Squid 3.0.STABLE13 on Gentoo 2.6.22-vs2.2.0.7

`squid -k reconfigure` does not close old authenticator processes if they
had clients. So my 'NTLM Authenticator Statistics' looks like below.

Does anybody have the same symptom?

Oleg.


NTLM Authenticator Statistics:
program: /usr/bin/ntlm_auth
number running: 23 of 15
requests sent: 8896
replies received: 8896
queue length: 0
avg service time: 0 msec


#    FD   PID    # Requests   # Deferred Requests   Flags   Time    Offset   Request
1    12   23079  459          0                     RS      0.002   0        (none)
2    13   23080  89           0                     RS      0.000   0        (none)
3    14   23081  37           0                     RS      0.000   0        (none)
4    15   23082  36           0                     RS      0.002   0        (none)
5    16   23083  342          0                     RS      0.000   0        (none)
6    17   23084  1057         0                     RS      0.000   0        (none)
7    18   23085  97           0                     RS      0.000   0        (none)
10   21   23089  71           0                     RS      0.000   0        (none)
1    20   17695  653          0                             0.003   0        (none)
2    22   17696  114          0                             0.004   0        (none)
3    23   17697  22           0                             0.008   0        (none)
4    24   17698  4            0                             0.020   0        (none)
5    25   17699  0            0                             0.000   0        (none)
6    26   17700  0            0                             0.000   0        (none)
7    27   17701  0            0                             0.000   0        (none)
8    28   17702  0            0                             0.000   0        (none)
9    29   17703  0            0                             0.000   0        (none)
10   30   17713  0            0                             0.000   0        (none)
11   31   17714  0            0                             0.000   0        (none)
12   32   17715  0            0                             0.000   0        (none)
13   33   17716  0            0                             0.000   0        (none)
14   34   17717  0            0                             0.000   0        (none)
15   35   17718  0            0                             0.000   0        (none)

Flags key:

   B = BUSY
   C = CLOSING
   R = RESERVED or DEFERRED
   S = SHUTDOWN
   P = PLACEHOLDER



Re: [squid-users] CONNECT method support(for https) using squid3.1.0.6 + tproxy4

2009-04-21 Thread Mikio Kishi
Hi, Amos

> Ah, you need the follow_x_forwarded_for feature on Proxy(1).

That's right, I know about that, but I'd like to use "source address
spoofing"...

The following change alone would solve my problem:

replacing, in tunnelStart() in tunnel.cc,

>sock = comm_openex(SOCK_STREAM,
>   IPPROTO_TCP,
>   temp,
>   COMM_NONBLOCKING,
>   getOutgoingTOS(request),
>   url);

with

>if (request->flags.spoof_client_ip) {
>sock = comm_openex(SOCK_STREAM,
>   IPPROTO_TCP,
>   temp,
>   (COMM_NONBLOCKING|COMM_TRANSPARENT),
>   getOutgoingTOS(request),
>   url);
>} else {
>sock = comm_openex(SOCK_STREAM,
>   IPPROTO_TCP,
>   temp,
>   COMM_NONBLOCKING,
>   getOutgoingTOS(request),
>   url);
>}

I think it has no harmful side effects, and I would really appreciate it.
Would you make this modification?

Sincerely,

--
Mikio Kishi

On Sun, Apr 12, 2009 at 1:25 PM, Amos Jeffries  wrote:
> Mikio Kishi wrote:
>>
>> Hi, Amos
>>
>>> What exactly are you trying to achieve with this?
>>
>> I'm really sorry... It's a little bit difficult to explain...
>> The following is the more detail.
>>
>>  ---
>>     The Internet
>>        ---+
>>           |
>>  +-+-
>>         |
>>   +-+---+
>>   |  squid      | (1)
>>   |  (tcp/8080) |
>>   +-+---+
>>         |.2
>>  +-+ 10.0.0.0/24
>>           |.1
>>        +--+--+
>>        |  R  |
>>        +--+--+
>>           |.1
>>  ---+--+ 192.168.0.0/24
>>        |.2
>>   +++
>>   |  squid +    |
>>   |    tproxy   | (2)
>>   |  (tcp/8080) |
>>   +++
>>        |.2
>>  ---+--+ 192.168.1.0/24
>>           |.3
>>        +--+-+
>>        | client |
>>        ++
>>
>>  - The demand
>>   - The client must use proxy(2) using tcp/8080
>>     - by browser settings
>>       HTTP  -> proxy(2) (192.168.1.2:8080)
>>       HTTPS -> proxy(2) (192.168.1.2:8080)
>>     - proxy(2) don't have to be "transparent"
>>   - The proxy(2)'s parent proxy must be proxy(1)
>>     using cache_peer
>>   - Both proxy(1) and proxy(2) must record
>>     "client original source address" in access log for security action
>>         !!! It's most important !!!
>>
>> I think that I have to use tproxy(not transparent)
>> to achieve above demands... what do you think ?
>
> Ah, you need the follow_x_forwarded_for feature on Proxy(1).
>
> proxy(2) will always be trying to set X-Forwarded-For header indicating the
> client IP. Which gets passed to proxy(1).
>
> By enabling follow_x_forwarded_for and log_uses_indirect_client, proxy(1)
> should log the original client IP.
>
> http://www.squid-cache.org/Doc/config/follow_x_forwarded_for/
> http://www.squid-cache.org/Doc/config/log_uses_indirect_client/
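> A minimal sketch on proxy(1) (the ACL name and address are illustrative;
> use proxy(2)'s address as proxy(1) sees it):
>
>   acl downstream_proxy src 192.168.0.2
>   follow_x_forwarded_for allow downstream_proxy
>   log_uses_indirect_client on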
>
>
> Amos
>
>>
>> Sincerely,
>> --
>> Mikio Kishi
>>
>> On Thu, Apr 9, 2009 at 4:54 PM, Amos Jeffries 
>> wrote:
>>>
>>> Mikio Kishi wrote:

 Hi, Amos

> HTTPS encrypted traffic cannot be intercepted.

Yes, I know that, but in this case it's not "transparent".

>          (1)                     (2)
>
>           |                       |
>  +--+   |     ++    |    +-+
>  |WWW   +---+     |            |    ++ WWW     |
>  |Client|.2 |   .1| squid      |.1  |  .2|  Server |
>  +--+   +-+   + tproxy ++    |(tcp/443)|
>           |     | (tcp/8080) |    |    |(tcp/80) |
>           |     ++    |    +-+
>     192.168.0.0/24          10.0.0.0/24
>
>  (1) 192.168.0.2 -->  192.168.0.1:8080
>                                    ^
>  (2) 192.168.0.2 -->  10.0.0.2:443
>                                  ^^^

The only thing I'd like to do is "source address spoofing"
using tproxy.

 Does that make sense ?
>>>
>>> No. Squid is perfectly capable of making HTTPS links outbound without
>>> tproxy. The far end only knows that some client connected.
>>>
>>> HTTPS cannot be spoofed, its part of the security involved with the SSL
>>> layer.
>>>
>>> What exactly are you trying to achieve with this?
>>>
>>> Amos
>>> --
>>> Please be using
>>>  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
>>>  Current Beta Squid 3.1.0.6
>>>
>
>
> --
> Please be using
>  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
>  Current Beta Squid 3.1.0.6
>