Re: [squid-users] TProxy not faking source address.

2009-05-18 Thread Omid Kosari

I solved the problem. I installed

aptitude install libcap2 libcap2-dev

and then recompiled squid, and the tproxy problem was solved.
Thank you Amos for http://wiki.squid-cache.org/Features/Tproxy4 . Please
also edit the troubleshooting section to tell Ubuntu 9.04 (Jaunty) users to
install libcap2 and libcap2-dev before compiling squid.
AFAIK the simplest way to get TPROXY running is on Ubuntu 9.04 (Jaunty).
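For reference, the kernel-side plumbing that wiki page describes boils down
to something like this sketch (the mark value and routing table number are
the wiki's examples; 3129 is the tproxy port used later in this thread):

  iptables -t mangle -N DIVERT
  iptables -t mangle -A DIVERT -j MARK --set-mark 1
  iptables -t mangle -A DIVERT -j ACCEPT
  iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
  iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
    --tproxy-mark 0x1/0x1 --on-port 3129
  ip rule add fwmark 1 lookup 100
  ip route add local 0.0.0.0/0 dev lo table 100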


Amos Jeffries-2 wrote:
 

 Another thing that may be helpful:
 when I enable
 http_port 3128 intercept
 in squid.conf, the following message appears in cache.log:

 cache squid[14701]: IpIntercept.cc(132) NetfilterInterception:  NF
 getsockopt(SO_ORIGINAL_DST) failed on FD 24: (11) Resource temporarily
 unavailable

 
 I'm aware of that. 'intercept' is a NAT lookup and will throw up errors on
 any non-NAT input. 'tproxy' is a spoofed SOCKET lookup.
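 In squid.conf terms the two modes look like this (a sketch, using the port
 numbers from this thread):

   http_port 3128 intercept   # NAT-intercepted traffic; needs a SO_ORIGINAL_DST lookup
   http_port 3129 tproxy      # TPROXY-diverted traffic; spoofs the client address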
 
 I don't think any of the basic Ubuntu kernels have the TPROXY options set
 yet. That would account for your custom ones working but the general
 kernels not.
 
 Amos
 


 Omid Kosari wrote:

 I have Ubuntu 9.04 (Jaunty), but squid-client spoofing still does not
 work: it shows squid's IP in tproxy mode.
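 A quick way to check from the squid box whether spoofing happens (a sketch;
 'eth0' is assumed to be the interface facing the web servers):

   tcpdump -n -i eth0 'tcp dst port 80'

 With tproxy working, the outgoing SYNs should carry the client's address as
 source rather than squid's.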

 dmesg shows
 [   21.186636] ip_tables: (C) 2000-2006 Netfilter Core Team
 [   21.319881] NF_TPROXY: Transparent proxy support initialized, version
 4.1.0
 [   21.319884] NF_TPROXY: Copyright (c) 2006-2007 BalaBit IT Ltd.

 and squid.conf has

 http_port 3128
 http_port 3129 tproxy

 i have compiled squid with these settings
 ./configure --datadir=/usr/share/squid3 --sysconfdir=/etc/squid3
 --mandir=/usr/share/man --localstatedir=/var
 --with-logdir=/var/log/squid
 --prefix=/usr --enable-inline --enable-async-io=8
 --enable-storeio=ufs,aufs --enable-removal-policies=lru,heap
 --enable-delay-pools --enable-cache-digests --enable-underscores
 --enable-icap-client --enable-follow-x-forwarded-for
 --with-filedescriptors=65536 --with-default-user=proxy
 --enable-large-files --enable-linux-netfilter
 and squid is 3.1.0.7

 the debug_options ALL,1 89,6 output looks just as if we had no
 debug_options at all !!

 i had tproxy working with my custom kernels, but upgraded to Ubuntu 9.04 (Jaunty)
 to avoid custom compiling of the kernel and iptables, and now it does not work



 Amos Jeffries-2 wrote:

 rihad wrote:
 Looks like I'm the only one trying to use TProxy? Somebody else,
 please?
 To summarize: Squid does NOT spoof client's IP address when initiating
 connections on its own. Just as if there weren't a thing named
 TProxy.

 We have had a fair few trying it with complete success when it's the only
 thing used. This kind of thing seems to crop up with WCCP, for you and
 one other.

 I'm not sure yet what the problem is. Can you check your
 cache.log for messages about 'Stopping full transparency'; the rest of
 the message says why. I've updated the wiki troubleshooting section to
 list the messages that appear when tproxy is turned off automatically
 and what needs to be done to fix it.

 If you can't see any of those please can you set:
debug_options ALL,1 89,6

 to see what's going on?

 I know the squid-client link should be 100% spoofed.  I'm not fully
 certain the squid-server link is actually spoofed in all cases. Though
 one report indicates it may be, I have not been able to test it locally
 yet.


 Amos



 Original message follows (not to be confused with top-posting):

 Hello, I'm trying to get TProxy 4.1 to work as outlined here:
 http://wiki.squid-cache.org/Features/Tproxy4
 namely under Ubuntu 9.04 stable/testing mix with the following:
 linux-image-2.6.28-11-server 2.6.28-11.42
 iptables 1.4.3.2-2ubuntu1
 squid-3.1.0.7.tar.bz2 from original sources

 Squid has been built this way:
 $ /usr/local/squid/sbin/squid -v
 Squid Cache: Version 3.1.0.7
 configure options:  '--enable-linux-netfilter'
 --with-squid=/home/guessed/squid-3.1.0.7 --enable-ltdl-convenience
 (myself I only gave it --enable-linux-netfilter)

 squid.conf is pretty much whatever 'make install' created, with my
 changes given at the end, after the blank line:

 acl manager proto cache_object
 acl localhost src 127.0.0.1/32
 acl to_localhost dst 127.0.0.0/8
 acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
 acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
 acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localnet
 

Re: [squid-users] TProxy not faking source address.

2009-05-18 Thread rihad

Omid Kosari wrote:
I solved the problem. I installed 


aptitude install libcap2 libcap2-dev

and then recompiled squid, and the tproxy problem was solved.
Thank you Amos for http://wiki.squid-cache.org/Features/Tproxy4 . Please
also edit the troubleshooting section to tell Ubuntu 9.04 (Jaunty) users to
install libcap2 and libcap2-dev before compiling squid.
AFAIK the simplest way to get TPROXY running is on Ubuntu 9.04 (Jaunty).


Thanks a lot, it works! Ubuntu 9.04 was indeed missing libcap-dev 
(libcap2 was installed, though)

$ dpkg -l | fgrep libcap
ii  libcap-dev   1:2.16-5   development libraries and header files for l
ii  libcap2      1:2.16-5   support for getting/setting POSIX.1e capabil


after rebuilding Squid, it finally started working (even though I didn't 
see _any_ error regarding some capability missing)



P.S.: I think this page: http://wiki.squid-cache.org/Features/Tproxy4
should note up front that libcap is a required dependency for 
tproxy, and not only under the 'Stopping full transparency: Missing needed 
capability support.' header, because I never saw that header, so didn't read further.


Thanks a lot again.


[squid-users] Re: Any way to redirect pages on return code?

2009-05-18 Thread Platoali
 Platoali wrote:
 Hi,

 I'm looking for a way to make squid redirect my users to a specific
 page when a particular return code is encountered.

 for example, here is a part of my access log:
 1242562024.085 347 172.20.0.68 TCP_MISS/403 1082 GET http://www.b.com -
 DIRECT/195.189.143.133 -

 I want squid to see that the server is returning a 403 code and just
 redirect to my specific page.


 Does anyone know how can this be done?


I've found this, but it does not work at all:

acl filtering  http_status 403
http_access deny filtering
deny_info /var/www/L4.html filtering

but it does not redirect to my page, and I see these in my cache.log. Any 
suggestions:

2009/05/18 11:56:25| ACL::checklistMatches WARNING: 'filtering' ACL is used but 
there is no HTTP reply -- not matching.
2009/05/18 11:56:26| ACL::checklistMatches WARNING: 'filtering' ACL is used but 
there is no HTTP reply -- not matching.
2009/05/18 11:56:27| ACL::checklistMatches WARNING: 'filtering' ACL is used but 
there is no HTTP reply -- not matching.


But I can see clearly in the access log that 403 is returned from the web server.


Best regards
Platoali


Re: [squid-users] Re: Any way to redirect pages on return code?

2009-05-18 Thread Amos Jeffries

Platoali wrote:

 Platoali wrote:

Hi,

I'm looking for a way to make squid redirect my users to a specific
page when a particular return code is encountered.

for example, here is a part of my access log:
1242562024.085 347 172.20.0.68 TCP_MISS/403 1082 GET http://www.b.com -
DIRECT/195.189.143.133 -

I want squid to see that the server is returning a 403 code and just redirect
to my specific page.


Does anyone know how can this be done?



I've found this, but it does not work at all:

acl filtering  http_status 403
http_access deny filtering
deny_info /var/www/L4.html filtering

but it does not redirect to my page, and I see these in my cache.log. Any 
suggestions:


2009/05/18 11:56:25| ACL::checklistMatches WARNING: 'filtering' ACL is used but 
there is no HTTP reply -- not matching.
2009/05/18 11:56:26| ACL::checklistMatches WARNING: 'filtering' ACL is used but 
there is no HTTP reply -- not matching.
2009/05/18 11:56:27| ACL::checklistMatches WARNING: 'filtering' ACL is used but 
there is no HTTP reply -- not matching.



But I can see clearly in the access log that 403 is returned from the web server.


Best regards
Platoali


Try using it in http_reply_access where the _reply_ status is present.
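
A sketch of the adjusted config along those lines, keeping the original ACL
and deny_info page:

  acl filtering http_status 403
  http_reply_access deny filtering
  deny_info /var/www/L4.html filtering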

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
  Current Beta Squid 3.1.0.7


[squid-users] Squid suddenly crashes (Maybe a bug)

2009-05-18 Thread Omid Kosari

Simply squid crashes after this message in cache.log
assertion failed: comm.cc:2016: !fd_table[fd].closing()

Squid 3.1.0.7
Kernel 2.6.28-11 (Ubuntu 9.04 Jaunty)
CPU AMD Athlon(tm) 64 Processor 3000+ 
RAM 8GB

Any suggestion appreciated.
-- 
View this message in context: 
http://www.nabble.com/Squid-suddenly-crashes-%28Maybe-a-bug%29-tp23593693p23593693.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Re: Any way to redirect pages on return code?

2009-05-18 Thread Platoali
On Monday 28 Ordibehesht 1388 13:15:22 Amos Jeffries wrote:
 Platoali wrote:
   Platoali wrote:
  Hi,
 
  I'm looking for a way to make squid redirect my users to a specific
  page when a particular return code is encountered.
 
  for example, here is a part of my access log:
  1242562024.085 347 172.20.0.68 TCP_MISS/403 1082 GET http://www.b.com
   - DIRECT/195.189.143.133 -
 
  I want squid to see that the server is returning a 403 code and just
  redirect to my specific page.
 
 
  Does anyone know how can this be done?
 
  I've found this, but it does not work at all:
 
  acl filtering  http_status 403
  http_access deny filtering
  deny_info /var/www/L4.html filtering
 
  but it does not redirect to my page, and I see these in my cache.log. Any
  suggestions:
 
  2009/05/18 11:56:25| ACL::checklistMatches WARNING: 'filtering' ACL is
  used but there is no HTTP reply -- not matching.
  2009/05/18 11:56:26| ACL::checklistMatches WARNING: 'filtering' ACL is
  used but there is no HTTP reply -- not matching.
  2009/05/18 11:56:27| ACL::checklistMatches WARNING: 'filtering' ACL is
  used but there is no HTTP reply -- not matching.
 
 
  But I can see clearly in the access log that 403 is returned from the web
  server.
 
 
  Best regards
  Platoali

 Try using it in http_reply_access where the _reply_ status is present.


Thank you very much. It works very well that way.

 Amos



[squid-users] Authentication problem. Squid3+ntlm_auth+Firefox.

2009-05-18 Thread xor
Hello,
I have installed squid3 with authentication against a Windows 2003 domain, with 
the kerberos5 libraries and samba + winbind. OS: Debian Lenny 5.0.1.
The squid3, samba, krb and winbind packages are taken from the official 
repositories (http://ftp.ru.debian.org/debian/).

Proxy clients running WinXP with the IE6 or IE7 browser authenticate 
normally, without superfluous login/password prompts.

But those who use the Mozilla Firefox browser, when visiting sites (especially 
ones containing JavaScript scenaries), often get a prompt for a login, password 
and domain to authenticate to the proxy. If this prompt is rejected (Cancel is 
pressed), the client receives the standard 'cache access denied' page. But if 
they then press Refresh, the page loads without a login/password prompt, and 
everything works normally until the next authentication prompt appears.
This effect is observed with the Firefox browsers only.
Increasing or decreasing the auth_param ntlm children parameter didn't help.

Configs:

###squid.conf
auth_param ntlm program /usr/bin/ntlm_auth --debug-level=10 
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 50
auth_param ntlm keep_alive on
authenticate_cache_garbage_interval 1 minute
authenticate_ttl 2 minutes
authenticate_ip_ttl 2 minutes
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80 81 8080 8081 # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 5222
acl Safe_ports port 443 # https
acl PURGE method PURGE
acl CONNECT method CONNECT
acl bad_pat_servers_ip src /etc/squid3/acl/bad_pat_servers_ip
acl microsoft_activation dstdomain /etc/squid3/acl/microsoft_activation
acl ip_symantec_ftp src 192.168.2.11
acl ftp_symantec dstdomain ftp.symantec.com liveupdate.symantec.com 
liveupdate.symantecliveupdate.com
acl good_sites dstdomain /etc/squid3/acl/good_sites
acl bad_pattern url_regex /etc/squid3/acl/bad_pattern
acl bad_sites dstdomain /etc/squid3/acl/bad_sites
acl odvk url_regex /etc/squid3/acl/odvk
acl odnokl_sites dstdomain /etc/squid3/acl/odnokl_sites
acl odnokl_users proxy_auth /etc/squid3/acl/odnokl_users
acl ip_users src /etc/squid3/acl/ip_users
acl AuthUsers proxy_auth /etc/squid3/acl/users
http_access allow manager localhost
http_access deny manager
http_access allow PURGE localhost
http_access deny PURGE
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow microsoft_activation
http_access deny bad_pat_servers_ip
http_access allow ip_symantec_ftp ftp_symantec
http_access allow good_sites ip_users
http_access allow good_sites AuthUsers
http_access allow odnokl_sites odnokl_users
http_access deny bad_pattern
http_access deny bad_sites
http_access deny odvk
http_access allow ip_users
http_access allow AuthUsers
http_access allow localhost
http_access deny all
icp_access deny all
htcp_access deny all
http_port 192.168.60.60:3128
hierarchy_stoplist cgi-bin ?
cache_mem 256 MB
cache_dir ufs /var/spool/squid3 1024 16 256
access_log /var/log/squid3/access.log squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern (cgi-bin|\?)    0       0%      0
refresh_pattern .               0       20%     4320
icp_port 3130
forwarded_for off
coredump_dir /var/spool/squid3

###smb.conf
[global]
   workgroup = PATERSON
   realm = PATERSON.RU
   password server = SRV-MSK11 SRV-MSK12
   server string = %h server
   wins support = yes
   wins server = 192.168.2.11
   dns proxy = no
   interfaces = 192.168.60.60 eth0
   log file = /var/log/samba/log.%m
   log level = 3
   max log size = 1000
   syslog = 0
   panic action = /usr/share/samba/panic-action %d
   security = ads
   encrypt passwords = true
   passdb backend = tdbsam
   obey pam restrictions = yes
   invalid users = root
   unix password sync = yes
   passwd program = /usr/bin/passwd %u
   passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* 
%n\n *password\supdated\ssuccessfully* .
   socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
   case sensitive = No
   idmap uid = 1-2
   idmap gid = 1-2
   winbind enum groups = yes
   winbind enum users = yes
   winbind separator = +
   winbind use default domain = No
[homes]
   comment = Home Directories
   browseable = no
   read only = yes
   create mask = 0700
   directory mask = 0700
   valid users = %S
[printers]
   comment = All Printers
   browseable = no
   path = /var/spool/samba
   printable = yes
   guest ok = no
   read only = yes
   create mask = 0700
[print$]
   comment = Printer Drivers
   path = /var/lib/samba/printers
   browseable = yes
   read only = yes
   guest ok = no

Best regards, Ehenov Roman.

_________________________________
Author's photo album of Andrey Oborin and Mikhail Semenov
   http://www.oborin.ru/book/Home.html





Re: [squid-users] TProxy not faking source address.

2009-05-18 Thread Amos Jeffries

rihad wrote:

Omid Kosari wrote:

I solved the problem . I have installed
aptitude install libcap2 libcap2-dev

and then recompiled squid, and the tproxy problem was solved.
Thank you Amos for http://wiki.squid-cache.org/Features/Tproxy4 . Please
also edit the troubleshooting section to tell Ubuntu 9.04 (Jaunty) users to 
install
libcap2 and libcap2-dev before compiling squid. AFAIK the simplest way to 
get TPROXY running is on Ubuntu 9.04 (Jaunty).


Thanks a lot, it works! Ubuntu 9.04 was indeed missing libcap-dev 
(libcap2 was installed, though)

$ dpkg -l | fgrep libcap
ii  libcap-dev   1:2.16-5   development libraries and header files for l
ii  libcap2      1:2.16-5   support for getting/setting POSIX.1e capabil


after rebuilding Squid, it finally started working (even though I didn't 
see _any_ error regarding some capability missing)



P.S.: I think this page: http://wiki.squid-cache.org/Features/Tproxy4
should note up front that libcap is a required dependency for 
tproxy, and not only under the 'Stopping full transparency: Missing needed 
capability support.' header, because I never saw that header, so didn't read further.


Thanks a lot again.


Done. Thanks to both of you for helping identify and fix this.

The message doubtless showed up somewhere, but maybe before cache.log is 
opened. Things can get lost that early.
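
One way to catch such early messages is to run squid in the foreground with
debug output on stderr, e.g. this sketch:

  squid -N -d1

where -N keeps squid from daemonizing and -d1 copies level-1 debug to stderr.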


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
  Current Beta Squid 3.1.0.7


Re: [squid-users] Authentication problem. Squid3+ntlm_auth+Firefox.

2009-05-18 Thread Amos Jeffries

xor wrote:

Hello,
I have installed squid3 with authentication against a Windows 2003 domain, with 
the kerberos5 libraries and samba + winbind. OS: Debian Lenny 5.0.1.
The squid3, samba, krb and winbind packages are taken from the official 
repositories (http://ftp.ru.debian.org/debian/).

Proxy clients running WinXP with the IE6 or IE7 browser authenticate 
normally, without superfluous login/password prompts.

But those who use the Mozilla Firefox browser, when visiting sites (especially 
ones containing JavaScript scenaries), often get a prompt for a login, password 
and domain to authenticate to the proxy. If this prompt is rejected (Cancel is 
pressed), the client receives the standard 'cache access denied' page. But if 
they then press Refresh, the page loads without a login/password prompt, and 
everything works normally until the next authentication prompt appears.
This effect is observed with the Firefox browsers only.
Increasing or decreasing the auth_param ntlm children parameter didn't help.


Please define what you mean by 'containing JavaScript scenaries'? How is 
this relevant to the HTTP requests?


Check that firefox has not saved previous passwords for the user or 
another. This can cause issues as the known passwords are used first 
every time.


With debug_options ALL,1 29,6 28,6, cache.log gets a trace of the auth 
and ACL actions. Check that to see what is going on.
 You can expect to see some holdup while auth details are requested 
from the browser, whether or not the popup appears. From those checks you 
can see whether the authentication is actually needed at that point or not.



Some unrelated notes inline to the config...



Configs:

###squid.conf
auth_param ntlm program /usr/bin/ntlm_auth --debug-level=10 
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 50
auth_param ntlm keep_alive on
authenticate_cache_garbage_interval 1 minute
authenticate_ttl 2 minutes
authenticate_ip_ttl 2 minutes
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80 81 8080 8081 # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 5222
acl Safe_ports port 443 # https
acl PURGE method PURGE
acl CONNECT method CONNECT
acl bad_pat_servers_ip src /etc/squid3/acl/bad_pat_servers_ip


I find it rather confusing that you call this a 'servers_ip' and indeed 
a pattern list, yet use 'src', which tests the _client_ IP.


The name of the ACL sounds like you mean it to be a destination check of 
some sort.



acl microsoft_activation dstdomain /etc/squid3/acl/microsoft_activation
acl ip_symantec_ftp src 192.168.2.11
acl ftp_symantec dstdomain ftp.symantec.com liveupdate.symantec.com 
liveupdate.symantecliveupdate.com
acl good_sites dstdomain /etc/squid3/acl/good_sites
acl bad_pattern url_regex /etc/squid3/acl/bad_pattern
acl bad_sites dstdomain /etc/squid3/acl/bad_sites
acl odvk url_regex /etc/squid3/acl/odvk
acl odnokl_sites dstdomain /etc/squid3/acl/odnokl_sites
acl odnokl_users proxy_auth /etc/squid3/acl/odnokl_users
acl ip_users src /etc/squid3/acl/ip_users
acl AuthUsers proxy_auth /etc/squid3/acl/users
http_access allow manager localhost
http_access deny manager
http_access allow PURGE localhost
http_access deny PURGE
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow microsoft_activation
http_access deny bad_pat_servers_ip
http_access allow ip_symantec_ftp ftp_symantec
http_access allow good_sites ip_users
http_access allow good_sites AuthUsers
http_access allow odnokl_sites odnokl_users
http_access deny bad_pattern
http_access deny bad_sites
http_access deny odvk
http_access allow ip_users
http_access allow AuthUsers
http_access allow localhost
http_access deny all
htcp_access deny all
http_port 192.168.60.60:3128
hierarchy_stoplist cgi-bin ?
cache_mem 256 MB
cache_dir ufs /var/spool/squid3 1024 16 256
access_log /var/log/squid3/access.log squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern (cgi-bin|\?)    0       0%      0
refresh_pattern .               0       20%     4320



icp_access deny all
icp_port 3130


Combined with the 'icp_access deny all' I find this really weird.

The default action in Squid-3 is not to listen for ICP at all, and to 
deny all as well. I think you want to remove the icp_* configuration 
entirely.


Same for the htcp_access line further up.


forwarded_for off
coredump_dir /var/spool/squid3

###smb.conf
[global]
   workgroup = PATERSON
   realm = PATERSON.RU
   password server = SRV-MSK11 SRV-MSK12
   server string = %h server
   wins support = yes
   wins server = 192.168.2.11
   dns proxy = no
   interfaces = 192.168.60.60 eth0
   log file = /var/log/samba/log.%m
   log level = 3
   max log size = 1000
   syslog = 0
   panic action = /usr/share/samba/panic-action %d
   security = ads
   encrypt passwords = true
   passdb backend = tdbsam
   obey pam restrictions = yes
   

Re: [squid-users] Squid suddenly crashes (Maybe a bug)

2009-05-18 Thread Omid Kosari

Maybe useful: Squid is under high load.

Average HTTP requests per minute since start:   5211.3


Omid Kosari wrote:
 
 Simply squid crashes after this message in cache.log
 assertion failed: comm.cc:2016: !fd_table[fd].closing()
 
 Squid 3.1.0.7
 Kernel 2.6.28-11 (Ubuntu 9.04 Jaunty)
 CPU AMD Athlon(tm) 64 Processor 3000+ 
 RAM 8GB
 
 Any suggestion appreciated.
 

-- 
View this message in context: 
http://www.nabble.com/Squid-suddenly-crashes-%28Maybe-a-bug%29-tp23593693p23595216.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] How to strip/ignore header in squid?

2009-05-18 Thread Amos Jeffries

Robert Collins wrote:

On Wed, 2009-05-13 at 19:39 -0700, Kurt Buff wrote:


I came to that conclusion on my own, and did recompile with that
option ('make --enable-http-violations' then 'make install', and it
went without error) but it didn't help, as I'm getting the same error
message.

I'm sure I'm missing something, but need a clue...


Are you sure you're running a squid with that enabled? (squid -v).

and that said, the first of those headers is actually really useful, you
should get your firewall updated to support HTTP/1.1.

-Rob


Both of these are non-standard. I think you may be confusing the 
Unless-Modified-Since with the standard If-Unmodified-Since . They 
appear to be identical in operation.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
  Current Beta Squid 3.1.0.7


[squid-users] Transparent Squid Stalls For Up To Two Minutes

2009-05-18 Thread Doug Eubanks
I'm having an intermittent squid issue. It's plagued me with CentOS 5.x, Fedora 
6, and now Fedora 11 (all using the RPM build that came with the OS).

My DD-WRT router forwards all of my outgoing port 80 requests to my transparent 
proxy using IP tables. For some reason, squid will hang when opening a URL for 
up to two minutes. It doesn't always happen and sometimes restarting squid will 
correct the problem (for a while). The system is a pretty hefty 3 GHz P4 with 2 GB 
of RAM and a SATA II drive. That should be plenty for a small home network of 
about 10 clients.

When I test DNS lookups from the host, requests are returned within less than a 
second. I'm pretty sure that's not the problem.

Here is my squid.conf, any input would be greatly appreciated!

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow localnet
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
htcp_access allow localnet
htcp_access deny all
http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
cache_mem 32 MB
maximum_object_size_in_memory 128 KB
cache_replacement_policy heap LRU
cache_dir aufs /var/spool/squid 4096 8 16
max_open_disk_fds 0
minimum_object_size 0 KB
maximum_object_size 512 KB
access_log /var/log/squid/access.log squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern (cgi-bin|\?)    0       0%      0
refresh_pattern .               0       20%     4320
visible_hostname doug-linux.dougware.net
unique_hostname doug-linux.dougware.net
coredump_dir /var/spool/squid
cache_mgr ad...@dougware.net
dns_nameservers 10.0.0.254 10.0.0.253 69.197.163.239
store_avg_object_size 64 KB
memory_replacement_policy heap LRU
tcp_outgoing_address 10.0.0.254
udp_outgoing_address 10.0.0.254

Thanks
Doug Eubanks
ad...@dougware.net
919-201-8750


Re: [squid-users] Transparent Squid Stalls For Up To Two Minutes

2009-05-18 Thread Amos Jeffries

Doug Eubanks wrote:

I'm having an intermittent squid issue. It's plagued me with CentOS 5.x, Fedora 
6, and now Fedora 11 (all using the RPM build that came with the OS).

My DD-WRT router forwards all of my outgoing port 80 requests to my transparent 
proxy using IP tables. For some reason, squid will hang when opening a URL for 
up to two minutes. It doesn't always happen and sometimes restarting squid will 
correct the problem (for a while). The system is pretty hefty 3ghz P4 with 2G 
of RAM with a SATA II drive. That should be plenty for a small home network of 
about 10 clients.

When I test DNS lookups from the host, requests are returned within less than a 
second. I'm pretty sure that's not the problem.

Here is my squid.conf, any input would be greatly appreciated!

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow localnet
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
htcp_access allow localnet
htcp_access deny all
http_port 3128 transparent


Is the NAT / REDIRECT/DNAT happening on the Squid box?
It needs to.
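
On the squid box that typically means a rule like this sketch (the LAN-facing
interface name is an assumption):

  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j REDIRECT --to-ports 3128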


hierarchy_stoplist cgi-bin ?
cache_mem 32 MB
maximum_object_size_in_memory 128 KB
cache_replacement_policy heap LRU
cache_dir aufs /var/spool/squid 4096 8 16


4GB of objects under 512KB each (avg set at 64KB later), using only an 
8x16 inode array. You may have a FS overload problem.


Also, Squid 'pulses' cache garbage collection one directory at a time. 
Very large amounts of files in any one directory can slow things down a 
lot at random times.


It's generally better to increase the L1/L2 numbers from default as the 
cache gets bigger.
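
For a 4GB cache a layout along these lines (a sketch) is more comfortable:

  cache_dir aufs /var/spool/squid 4096 16 256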



max_open_disk_fds 0
minimum_object_size 0 KB
maximum_object_size 512 KB
access_log /var/log/squid/access.log squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern (cgi-bin|\?)    0       0%      0
refresh_pattern .               0       20%     4320
visible_hostname doug-linux.dougware.net
unique_hostname doug-linux.dougware.net
coredump_dir /var/spool/squid
cache_mgr ad...@dougware.net
dns_nameservers 10.0.0.254 10.0.0.253 69.197.163.239
store_avg_object_size 64 KB
memory_replacement_policy heap LRU
tcp_outgoing_address 10.0.0.254
udp_outgoing_address 10.0.0.254


Does 10.0.0.254 have port-53 access to ALL the DNS servers: 10.0.0.254, 
10.0.0.253, 69.197.163.239?


Are you excluding 10.0.0.254 from the interception at the DD-WRT?

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
  Current Beta Squid 3.1.0.7


Re: [squid-users] 3 ISPs: Routing problem

2009-05-18 Thread RSCL Mumbai
On Sun, May 17, 2009 at 11:37 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 RSCL Mumbai wrote:

 On Fri, May 15, 2009 at 10:38 AM, Amos Jeffries squ...@treenet.co.nz
 wrote:

 RSCL Mumbai wrote:

 On Thu, May 14, 2009 at 4:33 PM, Jeff Pang pa...@arcor.de wrote:

 RSCL Mumbai:

  What I would like to configure is to set up specific G/Ws for specific
  clients.

 192.168.1.100 to use G/w 192.168.1.1
 192.168.1.101 to use G/w 192.168.1.1
 192.168.1.102 to use G/w 192.168.1.2
 192.168.1.103 to use G/w 192.168.1.2
 192.168.1.104 to use G/w 192.168.1.2
 192.168.1.105 to use G/w 192.168.1.3
 192.168.1.106 to use G/w 192.168.1.3



 I just found out that squid is removing the marking on the packet:
 This is what I am doing:

 (1) I marked packets coming from 10.0.0.120 to port 80, with mark1
 (mark1 corresponds to isp1)
 (2) I added a route rule which says that all packets having mark 1
 will be routed through ISP 1

 But the packets are not routing via ISP1

  When I disable the squid redirection rule in IPTables (port 80 redirection
  to 3128 squid), the markings are maintained and packets route via
  ISP1.

 Now the big question is why is squid removing the marking ??

 Because the packets STOP at their destination software.
 Normally the destination is a web server. When you NAT (redirect) a
 packet
 to Squid it STOPS there and gets read by Squid instead of passing on to
 the
 web server.

 IF Squid needs to fetch the HTTP object requested from the network a
 brand
 new TCP connection will be created only from Squid to the web server.

 And how can this be prevented ??

 By not intercepting packets. As you already noticed.


 Squid offers alternatives, tcp_outgoing_address has already been
 mentioned.
 tcp_outgoing_tos is an alternative that allows you to mark packets
 leaving
 Squid.
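
 As a sketch of that tos approach, reusing the 'ip1' acl from below (the mark
 value, table number and gateway are placeholders):

   # squid.conf: tag traffic matched by ip1 with TOS 0x20
   tcp_outgoing_tos 0x20 ip1

   # on the squid box: turn that TOS into a routing mark, route it via ISP-1
   iptables -t mangle -A OUTPUT -p tcp -m tos --tos 0x20 -j MARK --set-mark 1
   ip rule add fwmark 1 table 101
   ip route add default via <isp1-gateway> table 101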

 I tried  tcp_outgoing_address  by adding the following to squid.conf

 acl ip1 myip 10.0.0.120
 acl ip2 myip 10.0.0.121
 acl ip3 myip 10.0.0.122
 tcp_outgoing_address 10.0.0.120 ip1
 tcp_outgoing_address 10.0.0.121 ip2
 tcp_outgoing_address 10.0.0.122 ip3

 Restarted squid, but no help.

 Pls help how I can get the route rules to work.

 Simple requirement:
 If packets comes from src=10.0.0.120, forward it via ISP-1
 If packets comes from src=10.0.0.121, forward it via ISP-2
 If packets comes from src=10.0.0.122, forward it via ISP-3
 And so forth.

 Thx in advance.
 Vai

 To prevent the first (default) one being used  you may need to do:

  tcp_outgoing_address 10.0.0.120 ip1 !ip2 !ip3
  tcp_outgoing_address 10.0.0.121 ip2 !ip1 !ip3
  tcp_outgoing_address 10.0.0.122 ip3 !ip1 !ip2


I do not have 5 real interfaces for 5 ISPs.
And I believe virtual interfaces will not work in this scenario.

Any other option pls ??

Thx  regards,
Vai


Re: [squid-users] Transparent Squid Stalls For Up To Two Minutes

2009-05-18 Thread Doug Eubanks
I appreciate your response. I don't believe it's a file system issue; I've 
tried troubleshooting that for several weeks.  Originally, I was using 16 256 
(the default) as the directory layout.  I've tried using ext4, reiser (my favorite 
filesystem) and now it's on btrfs.  I also have the filesystem mounted with 
noatime.  When I was using reiser, I had disabled tail packing as well.  As you 
can see, I'm using aufs, but I've also tried diskd.

The IP tables NAT/DNAT stuff happens at my router.  See this DD-WRT wiki 
article for how it's done 
(http://www.dd-wrt.com/wiki/index.php/Transparent_Proxy); I actually wrote the 
section on how multiple hosts can bypass the proxy. Either way, it's not a router 
issue.  If I set my browser to use the proxy directly, the delays still 
happen 99% of the time.

Originally, I was using dans with antivirus.  But the delays have gotten to be 
horrible.  I went back to a standard squid setup to try to resolve the problem. 
 At this point, I simply want to get squid working, because a lot of the sites 
we visit continuously may benefit from caching (news sites with lots of 
graphics, etc).  Once I get this problem resolved, I'll go back to using dans 
w/ antivirus.

10.0.0.254 (the squid host) is excluded from the IP tables rules on DD-WRT, 
along with my Xbox 360, my BluRay player, my HD-DVD player and my DirecTV 
receiver.

The three DNS servers specified in the squid.conf all resolve names properly 
and are open to the squid host.

Thanks
Doug Eubanks
ad...@dougware.net
919-201-8750

  _  
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
To: ad...@dougware.net
Cc: squid-users@squid-cache.org
Sent: Mon, 18 May 2009 14:55:39 +
Subject: Re: [squid-users] Transparent Squid Stalls For Up To Two Minutes

Doug Eubanks wrote:
 I'm having an intermittent squid issue. It's plagued me with CentOS 5.x, 
 Fedora 6, and now Fedora 11 (all using the RPM build that came with the OS).
 
 My DD-WRT router forwards all of my outgoing port 80 requests to my 
 transparent proxy using IP tables. For some reason, squid will hang when 
 opening a URL for up to two minutes. It doesn't always happen and sometimes 
 restarting squid will correct the problem (for a while). The system is pretty 
 hefty 3ghz P4 with 2G of RAM with a SATA II drive. That should be plenty for 
 a small home network of about 10 clients.
 
 When I test DNS lookups from the host, requests are returned within less than 
 a second. I'm pretty sure that's not the problem.
 
 Here is my squid.conf, any input would be greatly appreciated!
 
 acl manager proto cache_object
 acl localhost src 127.0.0.1/32
 acl to_localhost dst 127.0.0.0/8
 acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
 acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
 acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access deny manager
 http_access allow localnet
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localnet
 http_access allow localhost
 http_access deny all
 htcp_access allow localnet
 htcp_access deny all
 http_port 3128 transparent

Is the NAT / REDIRECT/DNAT happening on the Squid box?
It needs to.

 hierarchy_stoplist cgi-bin ?
 cache_mem 32 MB
 maximum_object_size_in_memory 128 KB
 cache_replacement_policy heap LRU
 cache_dir aufs /var/spool/squid 4096 8 16

4GB of objects under 512KB small (avg set at 64KB later),  using only an 
8x16 inode array. You may have a FS overload problem.

Also, Squid 'pulses' cache garbage collection one directory at a time. 
Very large amounts of files in any one directory can slow things down a 
lot at random times.

It's generally better to increase the L1/L2 numbers from default as the 
cache gets bigger.

 max_open_disk_fds 0
 minimum_object_size 0 KB
 maximum_object_size 512 KB
 access_log /var/log/squid/access.log squid
 refresh_pattern ^ftp:          1440    20%     10080
 refresh_pattern ^gopher:       1440    0%      1440
 refresh_pattern (cgi-bin|\?)   0       0%      0
 refresh_pattern .              0       20%     4320
 visible_hostname doug-linux.dougware.net
 unique_hostname doug-linux.dougware.net
 coredump_dir /var/spool/squid
 cache_mgr ad...@dougware.net
 dns_nameservers 10.0.0.254 10.0.0.253 69.197.163.239
 store_avg_object_size 64 KB
 memory_replacement_policy heap LRU
 tcp_outgoing_address 10.0.0.254
 udp_outgoing_address 10.0.0.254

Does 10.0.0.254 port 53 have 

[squid-users] Re: squid and ssl connect over ssl-proxy

2009-05-18 Thread Frank Patzig
Amos Jeffries wrote:

 
 What requests are you seeing with no data?
I see the HTTP data.

 HTTP or HTTPS or other? with
 what headers?


 
 How are Squid and the ssl-proxy linked together?

No.
  is squid a cache_peer of the ssl-proxy?

No.
  or, is the ssl-proxy a reverse-proxy for the web server?

I think so. This setup is not from me.
 
 what headers are going out and coming back?

I don't know.

 from each section of the
 linkages?
 
 Amos

Frank



[squid-users] httpReadReply: Excess data from GET - Corrupt download

2009-05-18 Thread Emanuel dos Reis Rodrigues

Hello ...


I have a problem with migration from squid 2.6 to 2.7 ...

I have one PHP application that makes reports in PDF from a php script 
using the fpdf lib ...


when accessing script.php, it returns a download of a pdf file ... 
(it is always the same file name, doc.pdf)


The behavior is strange ... sometimes the downloaded file is corrupt ... 
and sometimes it is OK ...


This always displays, whether my try is OK or not ...


httpReadReply: Excess data from GET   http://XXX..COM/anex3.php


I know that this message is because the data is longer than the header 
length information ... ok? many sites do this ...


I use debian 5.0 with squid 2.7; on Debian 4 with squid 2.6 it works without 
problems ...




regards,


Emanuel



[squid-users] New Squid3 Stable 13 Setup

2009-05-18 Thread bharathvn

Hi,

i am trying to set up proxy servers as shown below

Client ==Sibling == Parent== Internet

i get the error below when we browse any site via the parent server

The following error was encountered while trying to retrieve the URL: /

Invalid URL

Some aspect of the requested URL is incorrect.

Some possible problems are:

Missing or incorrect access protocol (should be http:// or similar)

Missing hostname

Illegal double-escape in the URL-Path

Illegal character in hostname; underscores are not allowed.

Your cache administrator is root.



Generated Sun, 17 May 2009 18:13:40 GMT by proxy1 (squid/3.0.STABLE13)

Parent Proxy config


http_port 8080
cache_peer proxy2 sibling 8080 0
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
cache_mem 100 MB
cache_swap_low 90
cache_swap_high 95
access_log /var/log/squid/access.log squid
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl US src b.b.b.b-b.b.b.254
acl server src c.c.c.1-c.c.c.254
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow US
http_access allow server
http_access allow all
http_reply_access allow all
icp_access deny all
cache_effective_user squid
cache_effective_group squid
icp_port 0
coredump_dir /var/spool/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern (cgi-bin|\?) 0 0% 0
refresh_pattern . 0 20% 4320


Sibling Proxy config

http_port 8080
cache_peer proxy1 parent 8080 0 default originserver
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
cache_mem 100 MB
cache_swap_low 90
cache_swap_high 95
access_log /var/log/squid/access.log squid
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl BLR src a.a.a.1-a.a.a.254
acl US src b.b.b.b-b.b.b.254
acl server src c.c.c.1-c.c.c.254
acl TAC src d.d.d.1-d.d.d.254
acl all src 0.0.0.0/255.0.0.0
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow BLR
http_access allow US
http_access allow server
http_access allow TAC
http_access deny all
http_reply_access allow all
icp_access allow all
cache_effective_user squid
cache_effective_group squid
icp_port 0
always_direct deny US
always_direct deny BLR
always_direct deny TAC
 prefer_direct on
coredump_dir /var/spool/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320

Pls help me on this.

Thanks,
Bharathvn
-- 
View this message in context: 
http://www.nabble.com/New-Squid3-Stable-13-Setup-tp23601156p23601156.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] 3 ISPs: Routing problem

2009-05-18 Thread jeff donovan


On May 18, 2009, at 11:17 AM, RSCL Mumbai wrote:

On Sun, May 17, 2009 at 11:37 AM, Amos Jeffries  
squ...@treenet.co.nz wrote:

RSCL Mumbai wrote:


On Fri, May 15, 2009 at 10:38 AM, Amos Jeffries squ...@treenet.co.nz 


wrote:


RSCL Mumbai wrote:


On Thu, May 14, 2009 at 4:33 PM, Jeff Pang pa...@arcor.de wrote:


RSCL Mumbai:

What I would like to configure is to set up specific G/Ws for
specific clients.

192.168.1.100 to use G/w 192.168.1.1
192.168.1.101 to use G/w 192.168.1.1
192.168.1.102 to use G/w 192.168.1.2
192.168.1.103 to use G/w 192.168.1.2
192.168.1.104 to use G/w 192.168.1.2
192.168.1.105 to use G/w 192.168.1.3
192.168.1.106 to use G/w 192.168.1.3




I just found out that squid is removing the marking on the packet:
This is what I am doing:

(1) I marked packets coming from 10.0.0.120 to port 80, with  
mark1

(mark1 corresponds to isp1)
(2) I added a route rule which says that all packets having mark 1
will be routed through ISP 1

But the packets are not routing via ISP1

When I disable the squid redirection rule in IPTables (port 80
redirection to 3128 squid), the markings are maintained and packets route via
ISP1.

Now the big question is why is squid removing the marking ??


Because the packets STOP at their destination software.
Normally the destination is a web server. When you NAT (redirect) a
packet
to Squid it STOPS there and gets read by Squid instead of passing  
on to

the
web server.

IF Squid needs to fetch the HTTP object requested from the  
network a

brand
new TCP connection will be created only from Squid to the web  
server.



And how can this be prevented ??


By not intercepting packets. As you already noticed.


Squid offers alternatives, tcp_outgoing_address has already been
mentioned.
tcp_outgoing_tos is an alternative that allows you to mark packets
leaving
Squid.


I tried  tcp_outgoing_address  by adding the following to  
squid.conf


acl ip1 myip 10.0.0.120
acl ip2 myip 10.0.0.121
acl ip3 myip 10.0.0.122
tcp_outgoing_address 10.0.0.120 ip1
tcp_outgoing_address 10.0.0.121 ip2
tcp_outgoing_address 10.0.0.122 ip3

Restarted squid, but no help.

Pls help how I can get the route rules to work.

Simple requirement:
If packets comes from src=10.0.0.120, forward it via ISP-1
If packets comes from src=10.0.0.121, forward it via ISP-2
If packets comes from src=10.0.0.122, forward it via ISP-3
And so forth.

Thx in advance.
Vai


To prevent the first (default) one being used  you may need to do:

 tcp_outgoing_address 10.0.0.120 ip1 !ip2 !ip3
 tcp_outgoing_address 10.0.0.121 ip2 !ip1 !ip3
 tcp_outgoing_address 10.0.0.122 ip3 !ip1 !ip2



I do not have 5 real interfaces for 5 ISPs.
And I believe virtual interfaces will not work in this scenario.

Any other option pls ??

Thx  regards,
Vai



hello Vai,
look to your routers to make this decision. You can hand out default 
gateway info to your clients or routers.

if you don't have 3 squid boxes [my recommendation] then
i would try 3 nics;
if that's not available then you need 3 vlans.
-j


[squid-users] Can't access non-anonymous FTP via Internet Explorer

2009-05-18 Thread James Zuelow
I have a Squid 2.7Stable3 install.  (Debian Lenny package, kernel, etc.)
Clients are configured with a proxy.pac, and we use ntlm authentication.  This 
is not a transparent proxy.

When users attempt to use IE7 or IE8 for non-anonymous FTP sessions through 
Squid, they receive an anonymous access denied reply.

When users attempt to use IE7 or IE8 without using Squid (a proxy.pac exception 
or explicitly setting the browser to not use a proxy) they are presented with 
the FTP login dialog that they expect.

When users attempt to use Firefox 3.x for non-anonymous FTP sessions, they 
receive the login dialog that they expect.

Is there a special trick to get IE7+ to use Squid for non-anonymous FTP?

James Zuelow        CBJ MIS        (907) 586-0236
Network Specialist

[squid-users] TCP_MISS/503 and icp

2009-05-18 Thread Sergio Belkin
Hi,

I have some hosts that use one squid-1 server that has a squid-2 parent:

I mean squid-1 has:

cache_peer parent.domain parent 8080 3130


But some sites are inaccessible, especially those sites whose URL contains a ?

for example:

 1242674301.146 104 10.128.255.189 TCP_MISS/503 1415 GET
http://ar.yahoo.com/? - DIRECT/209.191.93.55 text/html


and browser shows:

Error
The requested URL could not be retrieved

While trying to retrieve the URL http://ar.yahoo.com/?

The following error was encountered:

*Connection to 209.191.93.55

The system returned:

(111) Connection refused


Also, on squid-1, iptables is doing REDIRECT.

Please could you tell me what's wrong?

Thanks in advance!

-- 
--
Open Kairos http://www.openkairos.com
Watch More TV http://sebelk.blogspot.com
Sergio Belkin -


[squid-users] How to make squid caches windows + anti-virus updates?

2009-05-18 Thread Henrique M.

I would like to know what squid caches by default and where these files
are kept.

I thought that squid would cache everything, but I read that a few lines
must be added to squid.conf to force squid to cache windows updates, which I
already did. I just don't know if it's working because I don't know where the
files are. Can you guys tell me how I can set up squid to cache
anti-virus updates as well (for kaspersky)?
-- 
View this message in context: 
http://www.nabble.com/How-to-make-squid-caches-windows-%2B-anti-virus-updates--tp23608722p23608722.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] New Squid3 Stable 13 Setup

2009-05-18 Thread Amos Jeffries

 Hi,

 i am trying to setup proxy server as show below

 Client ==Sibling == Parent== Internet

Huh? Do you mean:
 Client ==Squid == Parent== Internet  ??

'Sibling' is a two-way mesh term, meaning two proxies at the same level:

Client == ProxyA = ...elsewhere
|||
Client == ProxyB = ...elsewhere

So ProxyA and ProxyB are siblings; both can re-route requests sideways if
their upstream link fails or if it's faster to go that way.

What your config does at present for both proxies is:

 http_port 8080

 - listen as a regular proxy on port 8080

 cache_peer proxy1 parent 8080 0 default originserver

 - fetch requests by default from parent web server (originserver) proxy1
port 8080.

 NP: Squid decodes the regular proxy requests and converts them into
webserver client requests (ie.  GET / HTTP/1.0 instead of GET
http://proxy/ HTTP/1.0) when sending to originserver peers.


I'm not sure what exactly you are after, but it's one of these two setups:

1) Squid proxy gateway with a parent upstream proxy gateway.
   (All requests from proxy1 routed through proxy2 parent)

proxy1:
  http_port 8080
  cache_peer proxy2 parent 8080 0 default
  prefer_direct off

proxy2:
  http_port 8080


2) two sibling proxies providing failover to the internet.
   (all requests go direct to the internet until that machine's external
link fails, then they go through the sibling)

proxy1:
  http_port 8080
  cache_peer proxy2 sibling 8080 0
  prefer_direct on

proxy2:
  http_port 8080
  cache_peer proxy1 sibling 8080 0
  prefer_direct on


Hope this helps. If not please provide some exact details of what request
flow you are aiming to achieve.

Amos


 i got error when we browse any site from parent server as mentioned below

 The following error was encountered while trying to retrieve the URL: /

 Invalid URL

 Some aspect of the requested URL is incorrect.

 Some possible problems are:

 Missing or incorrect access protocol (should be http:// or similar)

 Missing hostname

 Illegal double-escape in the URL-Path

 Illegal character in hostname; underscores are not allowed.

 Your cache administrator is root.

 

 Generated Sun, 17 May 2009 18:13:40 GMT by proxy1 (squid/3.0.STABLE13)

 Parent Proxy config


 http_port 8080
 cache_peer proxy2 sibling 8080 0
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY
 acl apache rep_header Server ^Apache
 cache_mem 100 MB
 cache_swap_low 90
 cache_swap_high 95
 access_log /var/log/squid/access.log squid
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443
 acl Safe_ports port 80 # http
 acl Safe_ports port 21 # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70 # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535 # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 acl US src b.b.b.b-b.b.b.254
 acl server src c.c.c.1-c.c.c.254
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost
 http_access allow US
 http_access allow server
 http_access allow all
 http_reply_access allow all
 icp_access deny all
 cache_effective_user squid
 cache_effective_group squid
 icp_port 0
 coredump_dir /var/spool/squid
 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440
 refresh_pattern (cgi-bin|\?) 0 0% 0
 refresh_pattern . 0 20% 4320


 Sibling Proxy config

 http_port 8080
 cache_peer proxy1 parent 8080 0 default originserver
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY
 acl apache rep_header Server ^Apache
 cache_mem 100 MB
 cache_swap_low 90
 cache_swap_high 95
 access_log /var/log/squid/access.log squid
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443
 acl Safe_ports port 80 # http
 acl Safe_ports port 21 # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70 # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535 # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 acl BLR src a.a.a.1-a.a.a.254
 acl US src b.b.b.b-b.b.b.254
 acl server src c.c.c.1-c.c.c.254
 acl TAC src d.d.d.1-d.d.d.254
 acl all src 0.0.0.0/255.0.0.0
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost
 http_access allow BLR
 http_access allow US
 http_access allow server
 http_access allow TAC
 http_access deny all
 http_reply_access allow 

Re: [squid-users] TCP_MISS/503 and icp

2009-05-18 Thread Amos Jeffries
 Hi,

 I have some hosts that use one squid-1 server that has a squid-2 parent:

 I mean squid-1 has:

 cache_peer parent.domain parent 8080 3130


 But some sites are unaccessible, in special those sites with url having an
 ?

 for example:

  1242674301.146 104 10.128.255.189 TCP_MISS/503 1415 GET
 http://ar.yahoo.com/? - DIRECT/209.191.93.55 text/html


You will get a better trace of these without stripping the query string.

http://www.squid-cache.org/Doc/config/strip_query_terms/
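
strip_query_terms is on by default for privacy; a sketch of relaxing it while
debugging (and turning it back on afterwards):

  strip_query_terms off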


 and browser shows:

 Error
 The requested URL could not be retrieved

 While trying to retrieve the URL http://ar.yahoo.com/?

 The following error was encountered:

 *Connection to 209.191.93.55

 The system returned:

 (111) Connection refused


 Also, On the squid-1 iptables are doing REDIRECT.

 Please could you tell me what's wrong?

By default dynamic pages cannot be trusted through peers. Squid up until
very recently added no-cache to peer requests (IIRC), which screws up the
bandwidth savings. So while it's safe enough to turn on caching of dynamic
pages, it's still a sticky issue if they pass through peers.

http://www.squid-cache.org/Doc/config/hierarchy_stoplist/

Your trace shows Squid-1 is not using squid-2 as a source; it's just
trying to go there DIRECTly. And the source is actively doing a TCP level
reset/denial.

Amos




Re: [squid-users] New Squid3 Stable 13 Setup

2009-05-18 Thread bharathvn

Hi Amos,

Thanks for responding to my message.

i am trying to achieve the setup mentioned below:

Site A has Proxy2 as its proxy, and another proxy, Proxy1, is located at
Site B in a different country, reached through a tunnel.

Site A has local internet; when it fails, all web requests need to be
forwarded to Proxy1 through Proxy2, i.e. without changing the clients' proxy
address.

A similar setup was running for 1 month; somehow it got messed up and I had
to reconfigure from scratch.






bharathvn wrote:
 
 Hi,
 
 i am trying to setup proxy server as show below
 
 Client ==Sibling == Parent== Internet
 
 i got error when we browse any site from parent server as mentioned below
 
 The following error was encountered while trying to retrieve the URL: /
 
 Invalid URL
 
 Some aspect of the requested URL is incorrect.
 
 Some possible problems are:
 
 Missing or incorrect access protocol (should be http:// or similar)
 
 Missing hostname
 
 Illegal double-escape in the URL-Path
 
 Illegal character in hostname; underscores are not allowed.
 
 Your cache administrator is root.
 
 
 
 Generated Sun, 17 May 2009 18:13:40 GMT by proxy1 (squid/3.0.STABLE13)
 
 Parent Proxy config
 
 
 http_port 8080
 cache_peer proxy2 sibling 8080 0
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY
 acl apache rep_header Server ^Apache
 cache_mem 100 MB
 cache_swap_low 90
 cache_swap_high 95
 access_log /var/log/squid/access.log squid
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443
 acl Safe_ports port 80 # http
 acl Safe_ports port 21 # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70 # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535 # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 acl US src b.b.b.b-b.b.b.254
 acl server src c.c.c.1-c.c.c.254
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost
 http_access allow US
 http_access allow server
 http_access allow all
 http_reply_access allow all
 icp_access deny all
 cache_effective_user squid
 cache_effective_group squid
 icp_port 0
 coredump_dir /var/spool/squid
 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440
 refresh_pattern (cgi-bin|\?) 0 0% 0
 refresh_pattern . 0 20% 4320
 
 
 Sibling Proxy config
 
 http_port 8080
 cache_peer proxy1 parent 8080 0 default originserver
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY
 acl apache rep_header Server ^Apache
 cache_mem 100 MB
 cache_swap_low 90
 cache_swap_high 95
 access_log /var/log/squid/access.log squid
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443
 acl Safe_ports port 80 # http
 acl Safe_ports port 21 # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70 # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535 # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 acl BLR src a.a.a.1-a.a.a.254
 acl US src b.b.b.b-b.b.b.254
 acl server src c.c.c.1-c.c.c.254
 acl TAC src d.d.d.1-d.d.d.254
 acl all src 0.0.0.0/255.0.0.0
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost
 http_access allow BLR
 http_access allow US
 http_access allow server
 http_access allow TAC
 http_access deny all
 http_reply_access allow all
 icp_access allow all
 cache_effective_user squid
 cache_effective_group squid
 icp_port 0
 always_direct deny US
 always_direct deny BLR
 always_direct deny TAC
  prefer_direct on
 coredump_dir /var/spool/squid
 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440
 refresh_pattern . 0 20% 4320
 
 Please help me with this.
 
 Thanks,
 Bharathvn
 

-- 
View this message in context: 
http://www.nabble.com/New-Squid3-Stable-13-Setup-tp23601156p23609041.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] How to make squid caches windows + anti-virus updates?

2009-05-18 Thread Amos Jeffries

 I would like to know what squid caches by default and where are these
 files
 kept?

 I thought that squid would cache everything, but I read that a few lines
 must be added to squid.conf to force squid to cache windows updates, which
 I

Everything it can cache, it does, yes. The WU requests are often Range
requests though: small pieces from the middle of a file. Unless squid
already has the whole file stored, it has to fetch each piece separately
from the web, and it cannot yet combine or store them.

The extra settings you found (quick_abort*, range_offset_limit, etc.) are
the usual hack to force squid to fetch the whole object on the first Range
request and re-use it for the subsequent ones WU makes.
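
For reference, a commonly used sketch of those settings (the values here
are illustrative, not from this thread):

  # fetch the whole object even when the client only asked for a Range
  range_offset_limit -1
  # keep downloading even if the client aborts early
  quick_abort_min -1 KB
  # let large update packages into the cache
  maximum_object_size 200 MB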

 already did. I just don't know if it's working because I don't know where
 the files are. Can you guys tell me how to set up squid to cache the
 anti-virus updates as well (for kaspersky)?

If the access.log shows any 304 or TCP_HIT/TCP_*_HIT results for WU
requests, then they are working.
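
A quick way to count such hits (the log path and the windowsupdate domain
match are assumptions):

  grep windowsupdate /var/log/squid/access.log | grep -cE 'TCP_[A-Z_]*HIT|/304'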

To do the same for kaspersky, if it is not happening already, check where
those requests are going and why they are not being cached yet.

Amos



Re: [squid-users] New Squid3 Stable 13 Setup

2009-05-18 Thread Amos Jeffries

 Hi Amos,

 Thanks for responding to my message.

 I am trying to achieve the setup described below.

 Site A has Proxy2; another proxy, Proxy1, sits at Site B in a different
 country, reached through a tunnel.

 Site A has its own internet connection; when that fails, all web requests
 need to be forwarded to Proxy1 through Proxy2, i.e. without changing the
 clients' proxy address.

 A similar setup ran for about a month, but somehow it got messed up and I
 had to reconfigure from scratch.


Ah, okay this is what you want for the peering then:

Proxy2:
 prefer_direct on
 cache_peer Proxy1 parent 8080 3130
 ...

Proxy1:
  only an ACL permitting Proxy2 to make requests as a client
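
 A minimal sketch of that ACL, with e.e.e.e standing in for Proxy2's real
 address:

  acl proxy2 src e.e.e.e
  http_access allow proxy2
  http_access deny all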


Note the absence of 'default originserver' on proxy2 and any mention of
peering on proxy1.

If you have any problems with that, they will be caused by other
configuration options I have overlooked.

Amos


 [original message and configs quoted in full upthread - snipped]






Re: [squid-users] New Squid3 Stable 13 Setup

2009-05-18 Thread bharathvn

Thanks Amos

It's working now, but I see a small issue when I run a query on a search
engine like Google.

I can hit any site directly, but when I do a search I get:

The system returned: (111) Connection refused

The remote host or network may be down. Please try the request again.



Thanks

Amos Jeffries-2 wrote:
 

 [...]

 
 Ah, okay this is what you want for the peering then:
 
 Proxy2:
  prefer_direct on
  cache_peer Proxy1 parent 8080 3130
  ...
 
 Proxy1:
   only an ACL permitting Proxy2 to make requests as a client
 
 
 Note the absence of 'default originserver' on proxy2 and any mention of
 peering on proxy1.
 
 If you have any problems with that, they will be caused by other
 configuration options I have overlooked.
 
 Amos
 

 [original message and configs quoted in full upthread - snipped]

[squid-users] Does Squid scale well?

2009-05-18 Thread rihad
Can someone please say how well Squid 3.1/tproxy scales? Would it have
problems servicing more than 10k simultaneous HTTP requests and pushing
as much as 300 Mbit/s of traffic? 500 Mbit/s? 1 Gbit/s?



Planned hardware setup:

Dell PowerEdge 6850 server, quad dual-core 3.4GHz CPUs, 8GB RAM

Hard Drives:

cache_dir will be split across
5x 73GB SAS 15K hard drives
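
Something like this in squid.conf (mount points and sizes are just a
sketch):

cache_dir aufs /cache1 60000 16 256
cache_dir aufs /cache2 60000 16 256
cache_dir aufs /cache3 60000 16 256
cache_dir aufs /cache4 60000 16 256
cache_dir aufs /cache5 60000 16 256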


All will run on Ubuntu 9.04/testing mix w/ WCCPv1.


Thanks in advance.


[squid-users] Upstream Squid to identify user

2009-05-18 Thread myocella
Greetings,

I have set up an upstream Squid proxy to receive proxy traffic from
other Squid servers.
I would like to log user access on the upstream proxy. The downstream
has this line:

cache_peer upstreamproxy.foo.com parent 8080 7 no-query login=*:foo

However, there is no username showing in the upstream Squid log.
What do I need to add into the Squid conf?

Currently it just allows access from downstream IPs. No auth_param is set up.


cheers,

myocella
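
One approach often suggested for this, though not confirmed in this thread:
run a trivial always-OK basic auth helper on the upstream, so the
credentials forwarded by login=*:foo are accepted and the username shows up
in access.log. A minimal sketch, with the helper path as a placeholder:

 auth_param basic program /usr/local/bin/accept_all_auth.sh
 auth_param basic children 5
 acl downstream_users proxy_auth REQUIRED
 http_access allow downstream_users

and accept_all_auth.sh:

 #!/bin/sh
 # Squid basic-auth helper protocol: one "user password" pair per line on
 # stdin; answer OK (accept) or ERR (reject) on stdout.
 while read credentials; do
   echo OK
 done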