Re: Re: [squid-users] acl rep_header SomeRule X-HEADER-ADDED-BY-ICAP

2010-01-06 Thread Trever L. Adams
On 01/-10/-28163 12:59 PM, Chris Robertson wrote:
 Considering the fact that icap_access relies on ACLs, my guess would
 be ICAP is adding the headers after the rep_header ACL is evaluated.

 Is this possible with ICAP + Squid, or is it a bug, or just not
 possible?
   

 Run two Squid instances.  One using ICAP to add the headers, the other
 blocking based on headers present.

 Chris

I am guessing then that there is no clean way of adding such
functionality. So, can you please tell me what configuration option I
would use to tell the ACL-acting Squid to talk to the upstream ICAP-acting
Squid?

Thank you,
Trever
-- 
Avert misunderstanding by calm, poise, and balance. -- Unknown





[squid-users] Squid3.1 TProxy weirdness

2010-01-06 Thread Felipe W Damasio
  Hi all,

  I'm new to this list, but checked the archives a lot before asking this.
  I'm trying to get squid-3.1 up and running with TProxy 4.1 on an ISP network.
  My setup is working correctly when only a few users are connected to
the users VLAN. The users can browse and TProxy works.
  But when I plug in the router with all the users (around 6),
squid doesn't respond anymore.
  I first suspected the problem was iptables/ebtables rules not
routing the packets to squid, but iptables -v -t mangle -L shows:

Chain PREROUTING (policy ACCEPT 144K packets, 50M bytes)
 pkts bytes target     prot opt in     out     source
destination
   85  6232 DIVERT     tcp  --  any    any     anywhere
anywhere            socket
 5568 1581K TPROXY     tcp  --  eth0   any     anywhere
anywhere            tcp dpt:http TPROXY redirect 0.0.0.0:3128 mark
0x1/0x1

  And about 2 seconds later:

Chain PREROUTING (policy ACCEPT 208K packets, 62M bytes)
 pkts bytes target     prot opt in     out     source
destination
   92  6692 DIVERT     tcp  --  any    any     anywhere
anywhere            socket
 7690 2210K TPROXY     tcp  --  eth0   any     anywhere
anywhere            tcp dpt:http TPROXY redirect 0.0.0.0:3128 mark
0x1/0x1

  So the requests are going through iptables, right?

  I added debug_options ALL,1 ALL,0 and 33,4, so I could see if
comm_accept returned OK or not. But cache.log doesn't show anything.
  Just so you guys know, eth0 is the client-facing interface and eth1
is the internet-facing interface.
  I'm using a 2.6.29.6 vanilla kernel, with these proc options:

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind
echo 1 > /proc/sys/net/ipv4/tcp_low_latency
echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/br0/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/all/forwarding
echo 1 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 1 > /proc/sys/net/ipv4/conf/eth0/send_redirects

  Also, I'm using these rules that I got on the squid wiki TProxy tutorial:

iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 80  -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3128

ebtables -t broute -A BROUTING -i eth0 -p ipv4 --ip-proto tcp
--ip-dport 80  -j redirect --redirect-target DROP
ebtables -t broute -A BROUTING -i eth1 -p ipv4 --ip-proto tcp
--ip-sport 80 -j redirect --redirect-target DROP

 cd /proc/sys/net/bridge/
 for i in *
 do
   echo 0 > $i
 done
 unset i

  Are there any tests I can do or any other info I can provide?

  The ebtables version is v2.0.9-1 (June 2009), and iptables is
v1.4.3.2.

  What kills me is that if I plug in a single user on the client
interface, everything works... also if I put a single user on the VLAN
of the client interface, everything works too... no idea why it doesn't
work when all users are plugged in.

  Thanks in advance!

Felipe Damasio


Re: [squid-users] Forward Cache not working

2010-01-06 Thread Guido Marino Lorenzutti

Does this happen too if you have a proxy that asks for an NTLM username and password?

Ming Fu fm...@borderware.com wrote:


On 01/05/2010 01:28 PM, Mike Makowski wrote:

I understand that authenticated requests are not cacheable unless
overridden by Cache-Control: public in the server response.

I am assuming this is true even though the wget header responses above don't
indicate any type of private or authenticated session. Is the fact that I am
simply including a username and password on the wget command line enough for
squid to assume this is not a cacheable session?

Yes, any request with an Authorization header. wget adds that
header when you supply a username and password.

Since I experimented with the squid caching options to no avail last night,
could you please suggest a config file line with full syntax that I can try?
Is it simply Cache-Control: public in the server response?

Ask the server to put Cache-Control: public on the response header,
if you have control of it.
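
If the origin server happens to run Apache with mod_headers enabled, that could look something like this sketch (the path and lifetime are only examples):

```
# Mark responses under this path as cacheable by shared caches like Squid,
# even though the requests carry an Authorization header.
<Location "/master/Updates">
    Header set Cache-Control "public, max-age=86400"
</Location>
```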

Thanks

Mike


-Original Message-
From: Mike Marchywka [mailto:marchy...@hotmail.com]
Sent: Tuesday, January 05, 2010 6:15 AM
To: mi...@btslink.com; crobert...@gci.net; squid-users@squid-cache.org
Subject: RE: [squid-users] Forward Cache not working






From:
To: crobert...@gci.net; squid-users@squid-cache.org
Date: Mon, 4 Jan 2010 22:12:56 -0600
Subject: RE: [squid-users] Forward Cache not working

I have attached a screenshot of WGET header output with the -S option.

LOL, can you just email the text in a plain-text email? If I didn't know
better I'd think someone put you up to this - you are often forced to with GUI
output from which concise ASCII information cannot be extracted.




I see nothing about private in the headers so I'm assuming this content
should be getting cached. Yet, each time I run wget and then view the
Squid access log, it shows TCP_MISS on every attempt. I'll try the Ignore
Private parameter in squid just to make sure that isn't the cause.



You can look at the IETF spec and grep it for each header key wget returned
(assuming you have an easy way to extract these from your jpg
image, that should be quite quick LOL). Text is interoperable; images
require you to buy some wget-to-ietf-GUI tool that converts the IETF spec
into the same font as your wget output and looks for blocks of
pixels that are the same (sorry to beat this to death, but it comes
up a lot and creates a lot of problems in other contexts).




Very puzzling.

Mike

-Original Message-
From: Chris Robertson [mailto:crobert...@gci.net]
Sent: Monday, January 04, 2010 6:48 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Forward Cache not working

Mike Makowski wrote:


Here is my basic config. Using defaults for everything else.

acl localnet src 172.16.0.0/12
http_access allow local_net
maximum_object_size 25 MB

Here is a log entry showing one connection from a LAN user through the
proxy. I am guessing that the TCP_MISS is significant. Perhaps the
original source is marked as Private as Chris suggested. Don't really
know how to even tell that though.


Add a -S to wget to output the server headers.

wget -S http://www.sortmonster.net/master/Updates/test.xyz -O test.new.gz \
  --header=Accept-Encoding:gzip --http-user=myuserid \
  --http-passwd=mypassword





Can squid be forced to cache regardless of
source settings?



Yes.


http://www.squid-cache.org/Versions/v3/3.0/cfgman/refresh_pattern.html


Keyword ignore-private.
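
A refresh_pattern line along these lines might then do it (a sketch; the pattern and timings are placeholders, and ignore-private tells Squid to violate HTTP, so use it with care):

```
# force-cache the update files despite private/authenticated markings
refresh_pattern -i sortmonster\.net/master/Updates 1440 20% 10080 ignore-private
# keep the catch-all pattern last
refresh_pattern . 0 20% 4320
```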



1262645523.217 305633 172.17.0.152 TCP_MISS/200 11674081 GET
http://www.sortmonster.net/master/Updates/test.xyz - DIRECT/74.205.4.93
application/x-sortmonster 1262645523.464 122

Mike


Chris



_
Hotmail: Trusted email with powerful SPAM protection.
http://clk.atdmt.com/GBL/go/177141665/direct/01/=










Re: [squid-users] Squid url_rewrite and cookie

2010-01-06 Thread Adrian Chadd
Please create an Issue and attach the patch. I'll see about including it!




adrian

2010/1/6 Rajesh Nair rajesh.nair...@gmail.com:
 Thanks for the response, Matt!

 Unfortunately the cooperating HTTP service solution would not work
 as I need to set the cookie for the same domain for which the request
 is coming and that happens only when the request comes to the squid
 proxy.

 I have resolved it by extending the squid-url_rewrite protocol to
 accept the cookie string too and modifying the squid code to send the
 cookie in the 302 redirect response.

 Let me know if anybody is interested in the patch!

 Thanks,
 Rajesh

 On Tue, Jan 5, 2010 at 9:41 AM, Matt W. Benjamin m...@linuxbox.com wrote:
 Hi,

 Yes, you cannot (could not) per se.  However, you can rewrite to a 
 cooperating HTTP service which sets a cookie.  And, if you had adjusted 
 Squid so as to pass cookie data to url_rewriter programs, you could also 
 inspect the cookie in it on future requests.

 Matt

 - Rajesh Nair rajesh.nair...@gmail.com wrote:


 Reading the docs, it looks like it is not possible to send any
 HTTP response header from the url_rewriter program; the url_rewriter
 can merely return the redirected URI.
 Is this correct?

 Thanks,
 Rajesh

 --

 Matt Benjamin

 The Linux Box
 206 South Fifth Ave. Suite 150
 Ann Arbor, MI  48104

 http://linuxbox.com

 tel. 734-761-4689
 fax. 734-769-8938
 cel. 734-216-5309
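
For background on the protocol being extended in this thread: a url_rewrite_program helper reads one request per line on stdin and answers one line per request. A minimal sketch, with the hostnames and the redirect rule invented for illustration:

```shell
#!/bin/sh
# Hedged sketch of a stock url_rewrite_program helper (hostnames and the
# redirect rule below are invented for illustration).
# Squid writes one request per line to the helper's stdin:
#   URL client_ip/fqdn user method [urlgroup]
# and reads one line back per request: a rewritten URL, a "302:URL"
# redirect, or the original URL to leave the request unchanged.

rewrite_one() {
  case "$1" in
    http://old.example.com/*)
      # answer with a 302 redirect to the new host
      echo "302:http://new.example.com/${1#http://old.example.com/}"
      ;;
    *)
      # echo the URL back unchanged
      echo "$1"
      ;;
  esac
}

# main loop: one answer per request line
while read -r url rest; do
  rewrite_one "$url"
done
```

The patch discussed above extends this exchange so the helper can also hand back a cookie string for Squid to emit in the 302 response; the sketch shows only the unextended protocol.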





[squid-users] Authentication on server side instead of client? is that possible?

2010-01-06 Thread Roland Roland

Hello,

hope this is possible to implement..

i've read squid.conf.default over and over again with no luck or simply 
no  understanding of what i'm looking for..


is there a way that squid can authenticate with a certain website 
instead of having every client on the network doing so?


in other words, i have a site that 20 users use daily, though having a 
shared password is not favorable..
so is there a way to do so on the server, and then all clients get 
served an already authenticated session to that site?

is that feasible?





[squid-users] R: [squid-users] NTLM v2

2010-01-06 Thread Guido Serassio
Hi,

On Windows, the native NTLM helper, when running on a domain member machine, 
will always negotiate the highest usable NTLM protocol version, so if both the 
authentication peers can use NTLMv2, NTLMv2 is automatically selected.

Please note that, if you want to USE NTLMv2, you need to have a Windows Domain 
and you must use domain accounts only. All modern Windows browsers are NTLMv2 
capable. 

Regards 

Guido

Guido Serassio
Acme Consulting S.r.l.
Microsoft Gold Certified Partner
Via Lucia Savarino, 1 - 10098 Rivoli (TO) - ITALY
Tel. : +39.011.9530135   Fax. : +39.011.9781115
Email: guido.seras...@acmeconsulting.it
WWW: http://www.acmeconsulting.it


 -Messaggio originale-
 Da: Ho, Oiling [mailto:oiling...@credit-suisse.com]
 Inviato: martedì 5 gennaio 2010 16.23
 A: squid-users@squid-cache.org
 Oggetto: [squid-users] NTLM v2
 
 Hi All,
 
 I have squid running on windows XP as a proxy server, I set up my
 computer to use NTLM V2 according to this link
 http://www.imss.caltech.edu/cms.php?op=wiki&wiki_op=view&id=396 and
 rebooted my machine, then I used apache http client to connect to squid,
 it should not work since apache does not support NTLM V2, but somehow I
 was able to connect. Does anyone know what is going on? How can I tell
 from squid if it is using NTLM V1 or NTLM V2?
 
 Thanks,
 Oiling
 
 ==
 =
  Please access the attached hyperlink for an important electronic
 communications disclaimer:
  http://www.credit-suisse.com/legal/en/disclaimer_email_ib.html
 
 ==
 =
 


RE: [squid-users] NTLM v2

2010-01-06 Thread Ho, Oiling
Hi,

Thanks for your reply. Is there a way we can configure squid to use only 
NTLMv2? Can we tell from one of the log files whether NTLMv2 is used instead of 
NTLMv1?

Instead of using a Windows browser to connect to squid, I am connecting to 
squid using an Apache HttpClient.

Thanks,
Oiling


-Original Message-
From: Guido Serassio [mailto:guido.seras...@acmeconsulting.it] 
Sent: Wednesday, January 06, 2010 11:44 AM
To: Ho, Oiling; squid-users@squid-cache.org
Subject: R: [squid-users] NTLM v2

Hi,

On Windows, the native NTLM helper, when running on a domain member machine, 
will always negotiate the highest usable NTLM protocol version, so if both the 
authentication peers can use NTLMv2, NTLMv2 is automatically selected.

Please note that, if you want to USE NTLMv2, you need to have a Windows Domain 
and you must use domain accounts only. All modern Windows browsers are NTLMv2 
capable. 

Regards 

Guido

Guido Serassio
Acme Consulting S.r.l.
Microsoft Gold Certified Partner
Via Lucia Savarino, 1 - 10098 Rivoli (TO) - ITALY
Tel. : +39.011.9530135   Fax. : +39.011.9781115
Email: guido.seras...@acmeconsulting.it
WWW: http://www.acmeconsulting.it


 -Messaggio originale-
 Da: Ho, Oiling [mailto:oiling...@credit-suisse.com]
 Inviato: martedì 5 gennaio 2010 16.23
 A: squid-users@squid-cache.org
 Oggetto: [squid-users] NTLM v2
 
 Hi All,
 
 I have squid running on windows XP as a proxy server, I set up my 
 computer to use NTLM V2 according to this link
 http://www.imss.caltech.edu/cms.php?op=wiki&wiki_op=view&id=396 and 
 rebooted my machine, then I used apache http client to connect to 
 squid, it should not work since apache does not support NTLM V2, but 
 somehow I was able to connect. Does anyone know what is going on? How 
 can I tell from squid if it is using NTLM V1 or NTLM V2?
 
 Thanks,
 Oiling
 



Re: [squid-users] Authentication on server side instead of client? is that possible?

2010-01-06 Thread Kinkie
On Wed, Jan 6, 2010 at 5:22 PM, Roland Roland r_o_l_a_...@hotmail.com wrote:
 Hello,

 hope this is possible to implement..

 i've read squid.conf.default over and over again with no luck or simply no
  understanding of what i'm looking for..

 is there a way that squid can authenticate with a certain website instead of
 having every client on the network doing so ?

 in other words, i have a site that 20 users use daily though having a shared
 password is not favorable..
 so is there a way to do so on the server  and then all clients gets served
 an allready authenticated session to that site?
 is that feasible?

yes. See http://www.squid-cache.org/Doc/config/cache_peer/ (in
particular the login= option to cache_peer)
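
A hedged sketch of the squid.conf side (hostname, port, and credentials are placeholders; login=PASS would instead relay each client's own credentials):

```
# send requests for the site over an already-authenticated parent connection
cache_peer origin.example.com parent 80 0 no-query originserver login=shareduser:sharedpass
```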

-- 
/kinkie


[squid-users] Re: Squid3.1 TProxy weirdness

2010-01-06 Thread Felipe W Damasio
  Hi again,

2010/1/6 Felipe W Damasio felip...@gmail.com:
   I'm new to this list, but checked the archives a lot before asking this.
   I'm trying to get squid-3.1 up and running with TProxy 4.1 on an ISP 
 network.
   My setup is working correctly when only a few users are connected to
 the users VLAN. The users can browse and TProxy works.
   But when I plug in the router with all the users (around 6),
 squid doesn't respond anymore.

  Just so you guys know, I'm compiling squid with:

./configure --enable-async-io --enable-icmp --enable-useragent-log
--enable-snmp --enable-cache-digests --enable-follow-x-forwarded-for
--enable-storeio=aufs --enable-removal-policies=heap,lru
--enable-epoll --enable-http-violations --with-maxfd=100
--enable-linux-netfilter

  Besides following exactly what the TProxy wiki told me, the only
other thing I had to do in order to get TProxy to work was these:

echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/br0/rp_filter

   But again, it works when a few clients are connected; when the CMTS
(cable modem termination system) kicks in, everything goes to hell. Oh,
and even the clients that were already working stop working. Nothing gets
through!

   I tried to log the iptables rules to see if it really sees the
traffic, and got a lot of:

Jan  6 11:24:58 hyper kernel: iptables IN=eth0 OUT=
MAC=00:ea:01:02:7b:a2:00:21:a0:ce:9d:24:08:00 SRC=189.58.247.199
DST=64.233.163.103 LEN=52 TOS=0x00 PREC=0x00 TTL=127 ID=13252 DF
PROTO=TCP SPT=1388 DPT=80 WINDOW=65535 RES=0x00 SYN URGP=0 MARK=0x1

Jan  6 11:24:58 hyper kernel: iptables IN=eth0 OUT=
MAC=00:ea:01:02:7b:a2:00:21:a0:ce:9d:24:08:00 SRC=189.58.246.108
DST=65.54.48.74 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=17259 DF PROTO=TCP
SPT=42895 DPT=80 WINDOW=216 RES=0x00 ACK FIN URGP=0 MARK=0x1

   This could/should be a squid problem, then, right?

   Or is there a proc entry somewhere that could be screwing with me?

   I can post the /proc entries if it would help you guys to help me :-)

  Thanks,

Felipe Damasio


[squid-users] Re: Squid3.1 TProxy weirdness

2010-01-06 Thread Felipe W Damasio
2010/1/6 Felipe W Damasio felip...@gmail.com:
   Or is there a proc entry somewhere that could be screwing with me?

   I can post the /proc entries if it would help you guys to help me :-)

 And here it is, the result of a print of /proc/sys/net/

for i in `find /proc/sys/net`; do if [ -f $i ]; then echo -n "$i = " >> /tmp/proc.txt; cat $i >> /tmp/proc.txt; fi; done

 File proc.txt attached.

 Thanks,

Felipe Damasio
/proc/sys/net/core/somaxconn = 128
/proc/sys/net/core/wmem_max = 8388608
/proc/sys/net/core/rmem_max = 8388608
/proc/sys/net/core/wmem_default = 120832
/proc/sys/net/core/rmem_default = 120832
/proc/sys/net/core/dev_weight = 64
/proc/sys/net/core/netdev_max_backlog = 4000
/proc/sys/net/core/message_cost = 5
/proc/sys/net/core/message_burst = 10
/proc/sys/net/core/optmem_max = 20480
/proc/sys/net/core/netdev_budget = 300
/proc/sys/net/core/warnings = 1
/proc/sys/net/ipv4/route/gc_thresh = 262144
/proc/sys/net/ipv4/route/max_size = 4194304
/proc/sys/net/ipv4/route/gc_min_interval = 0
/proc/sys/net/ipv4/route/gc_min_interval_ms = 500
/proc/sys/net/ipv4/route/gc_timeout = 300
/proc/sys/net/ipv4/route/gc_interval = 60
/proc/sys/net/ipv4/route/redirect_load = 20
/proc/sys/net/ipv4/route/redirect_number = 9
/proc/sys/net/ipv4/route/redirect_silence = 20480
/proc/sys/net/ipv4/route/error_cost = 1000
/proc/sys/net/ipv4/route/error_burst = 5000
/proc/sys/net/ipv4/route/gc_elasticity = 8
/proc/sys/net/ipv4/route/mtu_expires = 600
/proc/sys/net/ipv4/route/min_pmtu = 552
/proc/sys/net/ipv4/route/min_adv_mss = 256
/proc/sys/net/ipv4/route/secret_interval = 600
/proc/sys/net/ipv4/route/flush =
/proc/sys/net/ipv4/neigh/default/mcast_solicit = 3
/proc/sys/net/ipv4/neigh/default/ucast_solicit = 3
/proc/sys/net/ipv4/neigh/default/app_solicit = 0
/proc/sys/net/ipv4/neigh/default/retrans_time = 99
/proc/sys/net/ipv4/neigh/default/base_reachable_time = 30
/proc/sys/net/ipv4/neigh/default/delay_first_probe_time = 5
/proc/sys/net/ipv4/neigh/default/gc_stale_time = 60
/proc/sys/net/ipv4/neigh/default/unres_qlen = 3
/proc/sys/net/ipv4/neigh/default/proxy_qlen = 64
/proc/sys/net/ipv4/neigh/default/anycast_delay = 99
/proc/sys/net/ipv4/neigh/default/proxy_delay = 79
/proc/sys/net/ipv4/neigh/default/locktime = 99
/proc/sys/net/ipv4/neigh/default/retrans_time_ms = 1000
/proc/sys/net/ipv4/neigh/default/base_reachable_time_ms = 3
/proc/sys/net/ipv4/neigh/default/gc_interval = 30
/proc/sys/net/ipv4/neigh/default/gc_thresh1 = 128
/proc/sys/net/ipv4/neigh/default/gc_thresh2 = 512
/proc/sys/net/ipv4/neigh/default/gc_thresh3 = 1024
/proc/sys/net/ipv4/neigh/lo/mcast_solicit = 3
/proc/sys/net/ipv4/neigh/lo/ucast_solicit = 3
/proc/sys/net/ipv4/neigh/lo/app_solicit = 0
/proc/sys/net/ipv4/neigh/lo/retrans_time = 99
/proc/sys/net/ipv4/neigh/lo/base_reachable_time = 30
/proc/sys/net/ipv4/neigh/lo/delay_first_probe_time = 5
/proc/sys/net/ipv4/neigh/lo/gc_stale_time = 60
/proc/sys/net/ipv4/neigh/lo/unres_qlen = 3
/proc/sys/net/ipv4/neigh/lo/proxy_qlen = 64
/proc/sys/net/ipv4/neigh/lo/anycast_delay = 99
/proc/sys/net/ipv4/neigh/lo/proxy_delay = 79
/proc/sys/net/ipv4/neigh/lo/locktime = 99
/proc/sys/net/ipv4/neigh/lo/retrans_time_ms = 1000
/proc/sys/net/ipv4/neigh/lo/base_reachable_time_ms = 3
/proc/sys/net/ipv4/neigh/eth0/mcast_solicit = 3
/proc/sys/net/ipv4/neigh/eth0/ucast_solicit = 3
/proc/sys/net/ipv4/neigh/eth0/app_solicit = 0
/proc/sys/net/ipv4/neigh/eth0/retrans_time = 99
/proc/sys/net/ipv4/neigh/eth0/base_reachable_time = 30
/proc/sys/net/ipv4/neigh/eth0/delay_first_probe_time = 5
/proc/sys/net/ipv4/neigh/eth0/gc_stale_time = 60
/proc/sys/net/ipv4/neigh/eth0/unres_qlen = 3
/proc/sys/net/ipv4/neigh/eth0/proxy_qlen = 64
/proc/sys/net/ipv4/neigh/eth0/anycast_delay = 99
/proc/sys/net/ipv4/neigh/eth0/proxy_delay = 79
/proc/sys/net/ipv4/neigh/eth0/locktime = 99
/proc/sys/net/ipv4/neigh/eth0/retrans_time_ms = 1000
/proc/sys/net/ipv4/neigh/eth0/base_reachable_time_ms = 3
/proc/sys/net/ipv4/neigh/eth1/mcast_solicit = 3
/proc/sys/net/ipv4/neigh/eth1/ucast_solicit = 3
/proc/sys/net/ipv4/neigh/eth1/app_solicit = 0
/proc/sys/net/ipv4/neigh/eth1/retrans_time = 99
/proc/sys/net/ipv4/neigh/eth1/base_reachable_time = 30
/proc/sys/net/ipv4/neigh/eth1/delay_first_probe_time = 5
/proc/sys/net/ipv4/neigh/eth1/gc_stale_time = 60
/proc/sys/net/ipv4/neigh/eth1/unres_qlen = 3
/proc/sys/net/ipv4/neigh/eth1/proxy_qlen = 64
/proc/sys/net/ipv4/neigh/eth1/anycast_delay = 99
/proc/sys/net/ipv4/neigh/eth1/proxy_delay = 79
/proc/sys/net/ipv4/neigh/eth1/locktime = 99
/proc/sys/net/ipv4/neigh/eth1/retrans_time_ms = 1000
/proc/sys/net/ipv4/neigh/eth1/base_reachable_time_ms = 3
/proc/sys/net/ipv4/neigh/tunl0/mcast_solicit = 3
/proc/sys/net/ipv4/neigh/tunl0/ucast_solicit = 3
/proc/sys/net/ipv4/neigh/tunl0/app_solicit = 0
/proc/sys/net/ipv4/neigh/tunl0/retrans_time = 99
/proc/sys/net/ipv4/neigh/tunl0/base_reachable_time = 30
/proc/sys/net/ipv4/neigh/tunl0/delay_first_probe_time = 5
/proc/sys/net/ipv4/neigh/tunl0/gc_stale_time = 60

[squid-users] Amount of Bandwidth squid can handle

2010-01-06 Thread nima chavooshi
Hi
First of all, thanks for sharing your experience on this mailing list.
I intend to install squid as a forward cache in a few companies with
high HTTP traffic, roughly 60, 80, or 100 Mbit/s.
Can squid handle this amount of traffic? Of course, I do not have any
idea about selecting hardware yet.
Could you tell me the maximum bandwidth you have handled with squid? It
would be great if you could give me the spec of the hardware that runs
squid under high traffic.

Thanks in advance

--
N.Chavoshi


Re: [squid-users] acl rep_header SomeRule X-HEADER-ADDED-BY-ICAP

2010-01-06 Thread Chris Robertson

Trever L. Adams wrote:

On 01/-10/-28163 12:59 PM, Chris Robertson wrote:
  

Considering the fact that icap_access relies on ACLs, my guess would
be ICAP is adding the headers after the rep_header ACL is evaluated.



Is this possible with ICAP + Squid, or is it a bug, or just not
possible?
  
  

Run two Squid instances.  One using ICAP to add the headers, the other
blocking based on headers present.

Chris



I am guessing then that there is no clean way of adding such
functionality.


I'm (at best) a scripter, not a coder, so I can't answer that. I know 
in the 2.7 branch of Squid there is http_access2 
(http://www.squid-cache.org/Doc/config/http_access2/) which acts on the 
request after url_rewrite_program has run, so perhaps it would be possible 
to have an acl2 which would work after ICAP. To the best of my knowledge, 
nothing like this exists right now.



 So, can you please tell me what configuration option I
would use to tell the acl acting Squid to talk to the upstream ICAP
acting Squid?
  


http://www.squid-cache.org/Doc/config/cache_peer/
http://wiki.squid-cache.org/Features/CacheHierarchy
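
On the downstream (ACL-checking) Squid, that might look like this sketch (host and port are placeholders):

```
# route everything through the upstream ICAP-enabled Squid, never direct
cache_peer icap-squid.example.com parent 3128 0 no-query default
never_direct allow all
```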


Thank you,
Trever
  


Chris



[squid-users] coss storage scheme.

2010-01-06 Thread Jeronimo Garcia
Hi all.

I've been looking at a bunch of benchmarks about storage schemes for
squid, and COSS looks rather impressive, but reading Squid: The
Definitive Guide I got the impression that the code might be a bit
beta/unstable.

Now I didn't really check when this book was written, but I've been
coming across it for a bunch of years already, so I imagine it is not
very new.

Do any of you have real experience with COSS? I'm also very
interested in squid's start-up time when using COSS.

Many Thanks
-J


Re: [squid-users] coss storage scheme.

2010-01-06 Thread Chris Robertson

Jeronimo Garcia wrote:

Hi all.

I've been looking at a bunch of benchmarks about storage schemes for
squid and coss looks
rather impressive , but , reading squid the definitive guide i got to
know that the code might be a bit beta/unstable.
  


http://wiki.squid-cache.org/Features/CyclicObjectStorageSystem


Now i didn't really check when this book was written but I've been
coming across with it for a bunch of years already so i imagine is not
very new.

Does some of you have some real experiences with coss?


Squid 2.7STABLE6

My COSS dirs are about 45 GB each on bare partitions (cache_dir coss 
/dev/sdc1 46080 max-size=51200 max-stripe-waste=32768 block-size=4096).  
Each server passes about 100 GB of traffic per day (to my customers, 
~25% from cache), with a peak requests/second of around 150.  No issues 
with stability (currently I have over 50 days of uptime).



 I'm also very
interested on squid's start-up time when using coss.
  


Squid starts up just fine but thrashes the disks hard for about half an 
hour while it reads the COSS dirs to build the index.  There is no 
noticeable effect on performance.



Many Thanks
-J
  


Chris




Re: [squid-users] coss storage scheme.

2010-01-06 Thread Manjusha Maddala

On a related note,

if Squid is configured to use AUFS and COSS for its cache_dirs, does it
store the Vary internal marker objects only in the COSS file or
somewhere else too? For my setup, I notice the marker objects which
provide vary metadata for all the cached pages are stored in COSS. 

If there is a COSS cache_dir, Squid attempts to store all cached
objects smaller than the COSS max-size in the COSS file. As a
result, some of the internal marker objects get evicted (during
COSS recycling), thereby resulting in a low cache hit ratio for my setup.



On Wed, 2010-01-06 at 16:58 -0800, Chris Robertson wrote:
 Jeronimo Garcia wrote:
  Hi all.
 
  I've been looking at a bunch of benchmarks about storage schemes for
  squid and coss looks
  rather impressive , but , reading squid the definitive guide i got to
  know that the code might be a bit beta/unstable.

 
 http://wiki.squid-cache.org/Features/CyclicObjectStorageSystem
 
  Now i didn't really check when this book was written but I've been
  coming across with it for a bunch of years already so i imagine is not
  very new.
 
  Does some of you have some real experiences with coss?
 
 Squid 2.7STABLE6
  
  My COSS dirs are about 45 GB each on bare partitions (cache_dir coss 
 /dev/sdc1 46080 max-size=51200 max-stripe-waste=32768 block-size=4096).  
 Each server passes about 100 GB of traffic per day (to my customers, 
 ~25% from cache), with a peak requests/second of around 150.  No issues 
 with stability (currently I have over 50 days of uptime).
 
   I'm also very
  interested on squid's start-up time when using coss.

 
 Squid starts up just fine but thrashes the disks hard for about half an 
 hour while it reads the COSS dirs to build the index.  There is no 
 noticeable effect on performance.
 
  Many Thanks
  -J

 
 Chris
 


[squid-users] Block Proxy sharing

2010-01-06 Thread Niti Lohwithee
Dear All:

I'm using Squid 2.5.STABLE14 running on Linux ES 4. My
server uses NCSA for authentication.

I have faced a problem with proxy sharing.  Some users have set up
another proxy server -- CCProxy -- and pointed it at my proxy.  I cannot
prevent them from sharing my proxy this way.

Could anyone please give me some advice on how to block it?


Thanks
Niti: )
-- 
##
Mr niti lowhithee
email mr.n...@gmail.com
##


Re: [squid-users] Amount of Bandwidth squid can handle

2010-01-06 Thread Shawn Wright

We've been running Squid 2.6 for 5+ years with a 10Mb full duplex connection 
serving ~650 active users. It has handled peak loads of 60-90 req/sec without 
issue, which represents a fully utilized 10Mb link (managed with delay pools). 
Last month we upgraded to a full 1Gb (yes 100x speed increase!) on a trial 
basis. During a one week trial, we saw about 2-3x bandwidth use (or 20-30Mbps 
sustained average) with little effect on the proxy server load. During tests we 
were able to manage speedtest results of 250-300Mbps from a single Gb connected 
host to Speakeasy's Seattle test node, and saw no difference between going 
direct or via squid. We were also able to achieve a full 100Mbps speed result 
on each of 4 simultaneous hosts tested via squid (each was using a 100Mb NIC). So 
far, the only issue we have seen is a problem with our log files exceeding 2GB in 
less than 24 hours, which required a re-compile to add the '--with-large-files' 
option. 
Still far short of the 60-100Mb rates you mention (are these peak or 
sustained?), but our server appears to have plenty of breathing room left, and 
is modest by today's standards: 

Dell PE2850 with Dual Quad Xeons 
Ubuntu 6.06 32bit, 4Gb RAM 
6x 15K 72Gb SCSI drives, 4 for cache, 1 for logs, 1 for system, running XFS 
Squid 2.6stable20 
Single Gb NIC in use. 
Lots of ACLs (300,000 lines), delay pools, all clients authenticated via AD 

I expect we will need to do more tuning since opening up the bandwidth, but so 
far, things are going fine. Prior to this week's re-compile, the system was 
running 24x7 since April 08. :-) 

Hope this helps. 

-- 

Shawn Wright 
I.T. Manager, Shawnigan Lake School 
http://www.shawnigan.ca 


- Original Message - 
From: nima chavooshi nima0...@gmail.com 
To: squid-users@squid-cache.org 
Sent: Wednesday, January 6, 2010 11:28:23 AM GMT -08:00 US/Canada Pacific 
Subject: [squid-users] Amount of Bandwidth squid can handle 

Hi 
First of all thanks for sharing your experience on this mailing list. 
I intend to install squid as forward cache in few companies with high 
HTTP traffic almost 60 or 80 or 100Mb. 
Can squid handle this amount of traffic??of course I do not have any 
idea about selecting hardware yet. 
May you tell me maximum of bandwidth you could handle with squid?it's 
so good if you give me spec of your hardware that run squid on high 
traffic. 

Thanks in advance 

-- 
N.Chavoshi 


Re: [squid-users] Amount of Bandwidth squid can handle

2010-01-06 Thread George Herbert
To build on Shawn's comments -

I've handled peak loads in forward caching of several hundred
requests per second per Squid server, with 3.0-STABLE13 through 17 and
some older 2.6 servers, as part of a smartphone company's web interface.

Servers were 4 GB dual Xeon quad core, running FreeBSD something for
the 2.6 servers and CentOS 5.2 for the 3.0 servers we were moving
towards.  There were four disks in use - OS, Logs, Cache 1, and Cache
2, with no redundancy.

We operated in larger cache groups initially but pared back to pairs
and triplets due to operational management concerns, over time.  Total
cache hit rate was slightly over 50%.

Peak benchmarking performance was over 600 hits/sec/server with a
production log sample workload; we saw about a third to half of that
as actual operational peaks (and were trying to keep margins of 2.0
from benchmarked perf to max production load).  We did 100k and 1M
request benchmark runs with medium-sized IP pools making the queries
for testing, so it was pretty good load testing, though the test
harness was not optimal.


-george william herbert
george.herb...@gmail.com




On Wed, Jan 6, 2010 at 8:14 PM, Shawn Wright swri...@shawnigan.ca wrote:

 We've been running Squid 2.6 for 5+ years with a 10Mb full duplex connection 
 serving ~650 active users. It has handled peak loads of 60-90 req/sec without 
 issue, which represents a fully utilized 10Mb link (managed with delay 
 pools). Last month we upgraded to a full 1Gb (yes 100x speed increase!) on a 
 trial basis. During a one week trial, we saw about 2-3x bandwidth use (or 
 20-30Mbps sustained average) with little affect on the proxy server load. 
 During tests we were able to manage speedtest results of 250-300Mbps from a 
 single Gb connected host to Speakeasy's Seattle test node, and saw no 
 difference between going direct or via squid. We were also able to achieve a 
 full 100Mbps speed result on each of 4 simultaneous hosts tested via squid 
 (each was using 100Mb NIC). So far, the only issue we have seen is a problem 
 our log files exceeding 2Gb in less than 24 hours, which required a 
 re-compile to add the '--with-large-files' option.
 Still far short of the 60-100Mb rates you mention (are these peak or 
 sustained?), but our server appears to have plenty of breathing room left, 
 and is modest by today's standards:

 Dell PE2850 with Dual Quad Xeons
 Ubuntu 6.06 32bit, 4Gb RAM
 6x 15K 72Gb SCSI drives, 4 for cache, 1 for logs, one for system, running XFS
 Squid 2.6stable20
 Single Gb NIC in use.
 Lots of ACLs (300,000 lines), delay pools, all clients authenticated via AD

 I expect we will need to do more tuning since opening up the bandwidth, but 
 so far, things are going fine. Prior to this week's re-compile, the system 
 was running 24x7 since April 08. :-)

 Hope this helps.

 --

 Shawn Wright
 I.T. Manager, Shawnigan Lake School
 http://www.shawnigan.ca


 - Original Message -
 From: nima chavooshi nima0...@gmail.com
 To: squid-users@squid-cache.org
 Sent: Wednesday, January 6, 2010 11:28:23 AM GMT -08:00 US/Canada Pacific
 Subject: [squid-users] Amount of Bandwidth squid can handle

 Hi
 First of all thanks for sharing your experience on this mailing list.
 I intend to install squid as forward cache in few companies with high
 HTTP traffic almost 60 or 80 or 100Mb.
 Can squid handle this amount of traffic??of course I do not have any
 idea about selecting hardware yet.
 May you tell me maximum of bandwidth you could handle with squid?it's
 so good if you give me spec of your hardware that run squid on high
 traffic.

 Thanks in advance

 --
 N.Chavoshi




-- 
-george william herbert
george.herb...@gmail.com