Re: [squid-users] error 401 when going via squid ???

2008-11-13 Thread Gregory Machin
Yes I would assume that the issue is related to Integrated Microsoft
Windows Authentication (a.k.a. NTLM) or something M$ cooked up

Squid Cache: Version 2.6.STABLE4
configure options: '--build=i686-redhat-linux-gnu'
'--host=i686-redhat-linux-gnu' '--target=i386-redhat-linux-gnu'
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr'
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc'
'--includedir=/usr/include' '--libdir=/usr/lib'
'--libexecdir=/usr/libexec' '--sharedstatedir=/usr/com'
'--mandir=/usr/share/man' '--infodir=/usr/share/info'
'--exec_prefix=/usr' '--bindir=/usr/sbin'
'--libexecdir=/usr/lib/squid' '--localstatedir=/var'
'--datadir=/usr/share' '--sysconfdir=/etc/squid' '--enable-epoll'
'--enable-snmp' '--enable-removal-policies=heap,lru'
'--enable-storeio=aufs,coss,diskd,null,ufs' '--enable-ssl'
'--with-openssl=/usr/kerberos' '--enable-delay-pools'
'--enable-linux-netfilter' '--with-pthreads'
'--enable-ntlm-auth-helpers=SMB,fakeauth'
'--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group'
'--enable-auth=basic,digest,ntlm'
'--enable-digest-auth-helpers=password'
'--with-winbind-auth-challenge' '--enable-useragent-log'
'--enable-referer-log' '--disable-dependency-tracking'
'--enable-cachemgr-hostname=localhost' '--enable-underscores'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL'
'--enable-cache-digests' '--enable-ident-lookups' '--with-large-files'
'--enable-follow-x-forwarded-for' '--enable-wccpv2'
'--enable-fd-config' '--with-maxfd=16384' 'CFLAGS=-fPIE -Os -g -pipe
-fsigned-char' 'LDFLAGS=-pie' 'build_alias=i686-redhat-linux-gnu'
'host_alias=i686-redhat-linux-gnu'
'target_alias=i386-redhat-linux-gnu'

thanks


On Wed, Nov 12, 2008 at 8:09 PM, Kinkie [EMAIL PROTECTED] wrote:
 On Wed, Nov 12, 2008 at 3:32 PM, Gregory Machin [EMAIL PROTECTED] wrote:
 Hi

 Hello Greg,

 I have a client that when he tries to access agentdeal.marvel.com the
 web server (IIS) does not give a login prompt as it should and instead
 returns a 401 error.

 [...]

 I get the same problem with our proxy, and some other people have this
 problem when behind squid proxies.

 What version of Squid, and is IIS trying to offer Integrated
 Microsoft Windows Authentication (a.k.a. NTLM)?


 --
/kinkie



Re: [squid-users] large memory squid

2008-11-13 Thread Amos Jeffries

john Moylan wrote:

Hi,

I am about to take ownership of a new 2CPU, 4 core server with 32GB of
RAM - I intend to add the server to my squid reverse proxy farm. My
site is approximately 300GB including archives and I think 32GB of
memory alone will suffice as cache for small, hot objects without
necessitating any additional disk cache.

Are there any potential bottlenecks if I set the disk cache to
something like 500MB and cache_mem to  something like 22GB. I'm using
Centos 5's Squid 2.6.

I have a full set of monitoring scripts as per
http://www.squid-cache.org/~wessels/squid-rrd/ (thanks again) and of
course I will be able to benchmark this myself once I have the box -
but any tips in advance would be appreciated.



Should run sweet. Just make sure it's a 64-bit OS and Squid build or all 
that RAM goes to waste.
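
As a minimal squid.conf sketch of such a mostly-RAM setup (the sizes are taken from the question above, and the path and policies are assumptions, not tested recommendations):

```
# Mostly-RAM reverse-proxy cache -- requires a 64-bit Squid build
cache_mem 22 GB
maximum_object_size_in_memory 512 KB
memory_replacement_policy heap GDSF
# Token disk store for whatever spills out of memory
cache_dir aufs /var/spool/squid 500 16 256
```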


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


[squid-users] The requested URL was not found on this server - squid

2008-11-13 Thread Indunil Jayasooriya
Hi AlL,


I get the below error while browsing a website.

Its home page is:

http://pathiranatimber.mine.nu

 I get the homepage. (Sorry, I canNOT give the username and password.)

When I give the username and password, it goes to the following page:

http://pathiranatimber.mine.nu/home.cgi

Then, it gives the below error:

The requested URL was not found on this server

This is what the access log says:


1226568643.800   1468 192.1.54.62 TCP_MISS/200 4485 GET
http://pathiranatimber.mine.nu/ - DIRECT/124.43.227.181 text/html
1226568644.134    805 192.1.54.62 TCP_MISS/200 938 GET
http://pathiranatimber.mine.nu/css.css - DIRECT/124.43.227.181
text/plain
1226568645.053    891 192.1.54.62 TCP_MISS/200 385 GET
http://pathiranatimber.mine.nu/jpg/arrow03.gif - DIRECT/124.43.227.181
image/gif
1226568645.361   1198 192.1.54.62 TCP_MISS/200 2164 GET
http://pathiranatimber.mine.nu/jpg/login_7.jpg - DIRECT/124.43.227.181
image/jpeg
1226568645.517   1354 192.1.54.62 TCP_MISS/200 2250 GET
http://pathiranatimber.mine.nu/jpg/login_5.jpg - DIRECT/124.43.227.181
image/jpeg
1226568645.791   1628 192.1.54.62 TCP_MISS/200 4119 GET
http://pathiranatimber.mine.nu/jpg/login_3.jpg - DIRECT/124.43.227.181
image/jpeg
1226568646.129   1075 192.1.54.62 TCP_MISS/200 4102 GET
http://pathiranatimber.mine.nu/jpg/login_8.jpg - DIRECT/124.43.227.181
image/jpeg


1226568657.218    809 192.1.54.62 TCP_MISS/200 367 POST
http://pathiranatimber.mine.nu/home.cgi - DIRECT/124.43.227.181
text/html



But, if I bypass squid, it works fine. This is a streaming video site.
But, remember, there is NO firewall running. All ports are open.

ANY ADVICE?




-- 
Thank you
Indunil Jayasooriya


Re: [squid-users] The requested URL was not found on this server - squid

2008-11-13 Thread Amos Jeffries

Indunil Jayasooriya wrote:

Hi AlL,


I get the below error while browsing a website.

Its home page is:

http://pathiranatimber.mine.nu

 I get the homepage. (Sorry, I canNOT give the username and password.)

When I give the username and password, it goes to the following page:

http://pathiranatimber.mine.nu/home.cgi

Then, it gives the below error:

The requested URL was not found on this server

This is what the access log says:


1226568643.800   1468 192.1.54.62 TCP_MISS/200 4485 GET
http://pathiranatimber.mine.nu/ - DIRECT/124.43.227.181 text/html
1226568644.134    805 192.1.54.62 TCP_MISS/200 938 GET
http://pathiranatimber.mine.nu/css.css - DIRECT/124.43.227.181
text/plain
1226568645.053    891 192.1.54.62 TCP_MISS/200 385 GET
http://pathiranatimber.mine.nu/jpg/arrow03.gif - DIRECT/124.43.227.181
image/gif
1226568645.361   1198 192.1.54.62 TCP_MISS/200 2164 GET
http://pathiranatimber.mine.nu/jpg/login_7.jpg - DIRECT/124.43.227.181
image/jpeg
1226568645.517   1354 192.1.54.62 TCP_MISS/200 2250 GET
http://pathiranatimber.mine.nu/jpg/login_5.jpg - DIRECT/124.43.227.181
image/jpeg
1226568645.791   1628 192.1.54.62 TCP_MISS/200 4119 GET
http://pathiranatimber.mine.nu/jpg/login_3.jpg - DIRECT/124.43.227.181
image/jpeg
1226568646.129   1075 192.1.54.62 TCP_MISS/200 4102 GET
http://pathiranatimber.mine.nu/jpg/login_8.jpg - DIRECT/124.43.227.181
image/jpeg


1226568657.218    809 192.1.54.62 TCP_MISS/200 367 POST
http://pathiranatimber.mine.nu/home.cgi - DIRECT/124.43.227.181
text/html



Those are all successful requests going through.
NO failures mentioned.




But, if I bypass squid, it works fine. This is a streaming video site.
But, remember, there is NO firewall running. All ports are open.

ANY ADVICE?



Random guess for today would be:  a security system at their end needing 
a side connection from the same IP as authenticated ???


You'll need to figure out:
 what URI? sent from where? to where?
 how?? since it's apparently not through Squid
and see if you can imagine the answer based on those results.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


Re: [squid-users] Building a Squid Cache Brouter

2008-11-13 Thread Amos Jeffries

Dumpolid Exeplish wrote:

Hello All,

I am trying to build a transparent squid server on a Linux 2.6 kernel
using the bridging code (br-nf)

INTERNET_GW  == BRIDGE/SQUID  === Client Nat Router

in this setup, the Client NAT router has the entire LAN behind it, and
the Client NAT router will have its default gateway as the
INTERNET_GW's IP address.
The BRIDGE/SQUID box will have two Ethernet cards, one connecting to
the client Nat Router and the other connected to the INTERNET_GW.
The BRIDGE/SQUID box will have one IP address, on which Squid will be
listening for connections.

My aim is to transparently redirect HTTP traffic passing from the
Client NAT Router to the squid process configured on the router,
without altering the gateway of the Client NAT Router.

Here are some of the ebtables/iptables rules that I have tried out, but at
this point... I am not sure how to proceed.

ebtables -t broute -A BROUTING --in-if $BR_IN -p IPv4 --ip-protocol
tcp --ip-dport 80 -j redirect --redirect-target ACCEPT
ebtables -t broute -A BROUTING --in-if $BR_IN -p IPv4 --ip-protocol
tcp --ip-dport 21 -j redirect --redirect-target ACCEPT
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
modprobe ip_conntrack
modprobe ip_conntrack_ftp
modprobe ip_nat_ftp
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A INPUT -i br0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i $BR_IN -j ACCEPT
iptables -t nat -A PREROUTING -i br0 -p tcp --dport 80 -j REDIRECT
--to-ports $CACHE_PORT
iptables -t nat -A PREROUTING -i br0 -p tcp --dport 21 -j REDIRECT
--to-ports $CACHE_PORT
iptables -t nat -A PREROUTING -i $BR_IN -p tcp --dport 80 -j REDIRECT
--to-ports $CACHE_PORT
iptables -t nat -A PREROUTING -i $BR_IN -p tcp --dport 21 -j REDIRECT
--to-ports $CACHE_PORT

could anyone out there help me to explain how to progress? is this
even possible at all?


One note before you start. Port 21 - Squid is an FTP client only; it 
cannot accept FTP traffic.

For the port 80 traffic it's usable on all 2.6+ Squid.


Pick your combo of interception and transport methods:
 http://wiki.squid-cache.org/ConfigExamples/Intercept
though it looks like you already have the iptables REDIRECT config.

My own experience with this exact box layout, you should not have to use 
a bridge. A relay router is sufficient. Depends on your specs though.


If you do get this going as a bridge, would you mind submitting back the 
ebtables part and any variation in iptables config for the wiki, please? 
It sounds like that would be a useful one to add.


The core ideas are that:
 - the NAT _should_ happen on the squid box, or you lose client tracking 
ability.
 - NAT _must_ exclude the squid outbound traffic, or it causes fatal traffic 
loops.
 - routers should forward/tunnel traffic unaltered to the squid box for 
NAT.
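
Those three rules of thumb might look like this as iptables config (a sketch only -- the interface variable, address, and port are assumptions; adapt to your layout):

```
# Assumptions: Squid listens on $CACHE_PORT, the squid box's own IP is $SQUID_IP
SQUID_IP=192.168.1.1
CACHE_PORT=3128

# Rule 2: exclude Squid's own outbound requests, or they loop back fatally
iptables -t nat -A PREROUTING -s $SQUID_IP -p tcp --dport 80 -j ACCEPT

# Rule 1: NAT the remaining port-80 traffic to Squid on the squid box itself
iptables -t nat -A PREROUTING -i $BR_IN -p tcp --dport 80 \
         -j REDIRECT --to-ports $CACHE_PORT
```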


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


[squid-users] Not able to cache Streaming media

2008-11-13 Thread bijayant kumar
Hi,

I am using Squid-2.7.STABLE4 on a Gentoo box. I am trying to cache YouTube 
videos but am not able to do so. I have followed the links
http://www.squid-cache.org/mail-archive/squid-users/200804/0420.html
http://wiki.squid-cache.org/Features/StoreUrlRewrite.
According to these URLs I tuned my squid.conf as follows:

acl youtube dstdomain .youtube.com .googlevideo.com .video.google.com .video.google.com.au .rediff.com
acl youtubeip dst 74.125.15.0/24
acl youtubeip dst 208.65.153.253/32
acl youtubeip dst 209.85.173.118/32
acl youtubeip dst 64.15.0.0/16
cache allow all
acl store_rewrite_list dstdomain mt.google.com mt0.google.com mt1.google.com mt2.google.com
acl store_rewrite_list dstdomain mt3.google.com
acl store_rewrite_list dstdomain kh.google.com kh0.google.com kh1.google.com kh2.google.com
acl store_rewrite_list dstdomain kh3.google.com
acl store_rewrite_list dstdomain kh.google.com.au kh0.google.com.au kh1.google.com.au
acl store_rewrite_list dstdomain kh2.google.com.au kh3.google.com.au

acl store_rewrite_list dstdomain .youtube.com .rediff.com .googlevideo.com
storeurl_access allow store_rewrite_list
storeurl_access deny all
storeurl_rewrite_program /usr/local/bin/store_url_rewrite
quick_abort_min -1
##hierarchy_stoplist cgi-bin ?
maximum_object_size_in_memory 1024 KB
cache_dir ufs /var/cache/squid 2000 16 256
maximum_object_size 4194240 KB
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire ignore-private
refresh_pattern get_video\?video_id 10080 90% 99 ignore-no-cache override-expire ignore-private
refresh_pattern youtube\.com/get_video\? 10080 90% 99 ignore-no-cache override-expire ignore-private
refresh_pattern . 99 100% 99 override-expire override-lastmod ignore-reload ignore-no-cache ignore-private ignore-auth
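
As a rough sketch of what a helper at /usr/local/bin/store_url_rewrite does (in Python rather than the Perl of the linked examples; the canonical host name and the parameter handling here are assumptions, so follow the wiki recipe for the real thing):

```python
#!/usr/bin/env python3
"""Sketch of a Squid storeurl_rewrite helper: read one request per line
on stdin, print the "store URL" under which the object should be cached."""
import sys
from urllib.parse import urlsplit, parse_qs

def store_url(url):
    """Collapse per-server YouTube get_video URLs onto one cache key,
    ignoring volatile query parameters (signature, ip, expire, ...)."""
    parts = urlsplit(url)
    host = parts.netloc
    if "get_video" in parts.path and (host.endswith("youtube.com")
                                      or host.endswith("googlevideo.com")):
        vid = parse_qs(parts.query).get("video_id", [""])[0]
        if vid:
            # Hypothetical canonical key; every mirror maps to the same URL
            return ("http://video-srv.youtube.com.SQUIDINTERNAL/"
                    "get_video?video_id=" + vid)
    return url  # anything else is stored under its real URL

if __name__ == "__main__":
    # Squid feeds "URL ip/fqdn ident method" per line; reply with the store URL
    for line in sys.stdin:
        fields = line.split()
        if fields:
            sys.stdout.write(store_url(fields[0]) + "\n")
            sys.stdout.flush()
```

Squid launches this via the storeurl_rewrite_program directive shown above; the point is only that all the per-server mirror URLs rewrite to one stable key so later requests HIT.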

But when i am accessing any video of Youtube, I always get TCP_MISS/200 in 
access.log and in store.log
 1226578788.877 RELEASE -1  C51CDFBB595CB7105BB2874BE5C3DFB0  303 
1226578824-1  41629446 text/html -1/0 GET 
http://www.youtube.com/get_video?video_id=tpQjv14-8yEt=OEgsToPDskKT1_ZEE8QGepXgMQ1i-_eYel=detailpageps=
1226578813.447 SWAPOUT 00 0314 7CAD15F0E273355B5D18A2B6F8D1871F  200 
1226578825 1186657973 1226582425 video/flv 2553595/2553595 GET 
http://v16.cache.googlevideo.com/get_video?origin=sjc-v176.sjc.youtube.comvideo_id=tpQjv14-8yEip=59.92.192.176signature=CF0D875938C7073200F62243A6FF964C54749A5F.E443F5A221D07EF10EB8C29233A33ACC45F8DD50sver=2expire=1226600424key=yt4ipbits=2
There are lots of RELEASE entries and very few (almost negligible) SWAPOUTs in the 
logs.
I repeated this exercise many times, but I always get the same result: 
TCP_MISS/200 in access.log.

Please suggest whether I am missing something or need to add something more to 
squid.conf.

Bijayant Kumar




[squid-users] problem with reply_body_max_size and external ACL

2008-11-13 Thread Razvan Grigore
Hello,

I recently updated to squid3.0/STABLE10 and I'm trying to configure a
working solution integrated with MS Active directory.

Group checking is working fine, but reply_body_max_size is not working
with my external acl helper.

here's the relevant config part:

external_acl_type ad_group children=3 ttl=120 %LOGIN
/usr/lib/squid/wbinfo_group.pl

acl limitadownload external ad_group o-ro-cod-internet-limitadownload

acl intranet src 10.61.0.0/16

if i try:

reply_body_max_size 15 MB intranet
reply_body_max_size 500 KB all

It works as expected.

however, if i try:

reply_body_max_size 15 MB limitadownload all (even without all)
reply_body_max_size 500 KB all

it's not working at all; it gives me the 500 KB limit.

I should mention that wbinfo_group.pl gives me OK at the command prompt
when checking the group membership.

What should I do?


Re: [squid-users] problem with reply_body_max_size and external ACL

2008-11-13 Thread Amos Jeffries

Razvan Grigore wrote:

Hello,

I recently updated to squid3.0/STABLE10 and I'm trying to configure a
working solution integrated with MS Active directory.

Group checking is working fine, but reply_body_max_size is not working
with my external acl helper.

here's the relevant config part:

external_acl_type ad_group children=3 ttl=120 %LOGIN
/usr/lib/squid/wbinfo_group.pl

acl limitadownload external ad_group o-ro-cod-internet-limitadownload

acl intranet src 10.61.0.0/16

if i try:

reply_body_max_size 15 MB intranet
reply_body_max_size 500 KB all

It works as expected.

however, if i try:

reply_body_max_size 15 MB limitadownload all (even without all)
reply_body_max_size 500 KB all

it's not working at all; it gives me the 500 KB limit.

I should mention that wbinfo_group.pl gives me OK at the command prompt
when checking the group membership.

What should I do?


Report the bug.

Based on this and a few other occurrences I'm beginning to suspect that 
credential re-checks are missing on all reply controls.
After all, who would guess that users who authenticated to make a 
request would have to re-authenticate to get the reply?


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


Re: [squid-users] error 401 when going via squid ???

2008-11-13 Thread Kinkie
Could you try a more recent version of Squid?
I don't think that 2.6S4 supports proxying content when the server
only offers NTLM authentication.

On 11/13/08, Gregory Machin [EMAIL PROTECTED] wrote:
 Yes I would assume that the issue is related to Integrated Microsoft
 Windows Authentication (a.k.a. NTLM) or something M$ cooked up

 Squid Cache: Version 2.6.STABLE4
 configure options: '--build=i686-redhat-linux-gnu'
 '--host=i686-redhat-linux-gnu' '--target=i386-redhat-linux-gnu'
 '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr'
 '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc'
 '--includedir=/usr/include' '--libdir=/usr/lib'
 '--libexecdir=/usr/libexec' '--sharedstatedir=/usr/com'
 '--mandir=/usr/share/man' '--infodir=/usr/share/info'
 '--exec_prefix=/usr' '--bindir=/usr/sbin'
 '--libexecdir=/usr/lib/squid' '--localstatedir=/var'
 '--datadir=/usr/share' '--sysconfdir=/etc/squid' '--enable-epoll'
 '--enable-snmp' '--enable-removal-policies=heap,lru'
 '--enable-storeio=aufs,coss,diskd,null,ufs' '--enable-ssl'
 '--with-openssl=/usr/kerberos' '--enable-delay-pools'
 '--enable-linux-netfilter' '--with-pthreads'
 '--enable-ntlm-auth-helpers=SMB,fakeauth'
 '--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group'
 '--enable-auth=basic,digest,ntlm'
 '--enable-digest-auth-helpers=password'
 '--with-winbind-auth-challenge' '--enable-useragent-log'
 '--enable-referer-log' '--disable-dependency-tracking'
 '--enable-cachemgr-hostname=localhost' '--enable-underscores'
 '--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL'
 '--enable-cache-digests' '--enable-ident-lookups' '--with-large-files'
 '--enable-follow-x-forwarded-for' '--enable-wccpv2'
 '--enable-fd-config' '--with-maxfd=16384' 'CFLAGS=-fPIE -Os -g -pipe
 -fsigned-char' 'LDFLAGS=-pie' 'build_alias=i686-redhat-linux-gnu'
 'host_alias=i686-redhat-linux-gnu'
 'target_alias=i386-redhat-linux-gnu'

 thanks


 On Wed, Nov 12, 2008 at 8:09 PM, Kinkie [EMAIL PROTECTED] wrote:
 On Wed, Nov 12, 2008 at 3:32 PM, Gregory Machin [EMAIL PROTECTED]
 wrote:
 Hi

 Hello Greg,

 I have a client that when he tries to access agentdeal.marvel.com the
 web server (IIS) does not give a login prompt as it should and instead
 returns a 401 error.

 [...]

 I get the same problem with our proxy, and some other people have this
 problem when behind squid proxies.

 What version of Squid, and is IIS trying to offer Integrated
 Microsoft Windows Authentication (a.k.a. NTLM)?


 --
/kinkie




-- 
/kinkie


[squid-users] Building a Squid Cache Brouter

2008-11-13 Thread Dumpolid Exeplish
Hello All,

I am trying to build a transparent squid server on a Linux 2.6 kernel
using the bridging code (br-nf)

INTERNET_GW  == BRIDGE/SQUID  === Client Nat Router

in this setup, the Client NAT router has the entire LAN behind it, and
the Client NAT router will have its default gateway as the
INTERNET_GW's IP address.
The BRIDGE/SQUID box will have two Ethernet cards, one connecting to
the client Nat Router and the other connected to the INTERNET_GW.
The BRIDGE/SQUID box will have one IP address, on which Squid will be
listening for connections.

My aim is to transparently redirect HTTP traffic passing from the
Client NAT Router to the squid process configured on the router,
without altering the gateway of the Client NAT Router.

Here are some of the ebtables/iptables rules that I have tried out, but at
this point... I am not sure how to proceed.

ebtables -t broute -A BROUTING --in-if $BR_IN -p IPv4 --ip-protocol
tcp --ip-dport 80 -j redirect --redirect-target ACCEPT
ebtables -t broute -A BROUTING --in-if $BR_IN -p IPv4 --ip-protocol
tcp --ip-dport 21 -j redirect --redirect-target ACCEPT
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
modprobe ip_conntrack
modprobe ip_conntrack_ftp
modprobe ip_nat_ftp
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A INPUT -i br0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i $BR_IN -j ACCEPT
iptables -t nat -A PREROUTING -i br0 -p tcp --dport 80 -j REDIRECT
--to-ports $CACHE_PORT
iptables -t nat -A PREROUTING -i br0 -p tcp --dport 21 -j REDIRECT
--to-ports $CACHE_PORT
iptables -t nat -A PREROUTING -i $BR_IN -p tcp --dport 80 -j REDIRECT
--to-ports $CACHE_PORT
iptables -t nat -A PREROUTING -i $BR_IN -p tcp --dport 21 -j REDIRECT
--to-ports $CACHE_PORT

could anyone out there help me to explain how to progress? is this
even possible at all?


[squid-users] Squid radius encryption

2008-11-13 Thread Johnson, S
Ok, I think I've got my issue narrowed down to the encryption that is being
used to authenticate to my Microsoft IAS RADIUS server.  I'm getting an
invalid auth type error on the server.  Does anyone know what
type of encryption is used for this connection and/or how to
configure squid to talk to the IAS RADIUS server?

Thanks


Re: [squid-users] large memory squid

2008-11-13 Thread john Moylan
Should I still leave 30% of my RAM for the OS's cache etc?

J

2008/11/13 Amos Jeffries [EMAIL PROTECTED]:
 john Moylan wrote:

 Hi,

 I am about to take ownership of a new 2CPU, 4 core server with 32GB of
 RAM - I intend to add the server to my squid reverse proxy farm. My
 site is approximately 300GB including archives and I think 32GB of
 memory alone will suffice as cache for small, hot objects without
 necessitating any additional disk cache.

 Are there any potential bottlenecks if I set the disk cache to
 something like 500MB and cache_mem to  something like 22GB. I'm using
 Centos 5's Squid 2.6.

 I have a full set of monitoring scripts as per
 http://www.squid-cache.org/~wessels/squid-rrd/ (thanks again) and of
 course I will be able to benchmark this myself once I have the box -
 but any tips in advance would be appreciated.


 Should run sweet. Just make sure it's a 64-bit OS and Squid build or all that
 RAM goes to waste.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2



[squid-users] Squid Stops Responding Sporadically

2008-11-13 Thread Marcel Grandemange
Good day.


I'm wondering if anybody else has experienced this.
Since I've upgraded to squid 3.0 STABLE10 the proxy continuously stops
responding.
Firefox will say something along the lines of the proxy isn't set up to
accept connections.

I hit refresh and it loads the page perfectly, then on the next page it loads all the
pics halfway, and so on..


This has only been introduced in STABLE10 and isn't the link, as I tested
with a neighboring cache running STABLE9: no issues.


Also, for those interested: a while back I had issues with the performance of
squid...
Objects retrieved out of cache never went faster than 400K; it turned out that when
I changed my cache_dir from aufs to ufs this was resolved, and now objects come
down at full speed on the LAN.



FW: [squid-users] Squid Stops Responding Sporadically

2008-11-13 Thread Marcel Grandemange
Good day.


I'm wondering if anybody else has experienced this.
Since I've upgraded to squid 3.0 STABLE10 the proxy continuously stops
responding.
Firefox will say something along the lines of the proxy isn't set up to
accept connections.

I hit refresh and it loads the page perfectly, then on the next page it loads all the
pics halfway, and so on..


This has only been introduced in STABLE10 and isn't the link, as I tested
with a neighboring cache running STABLE9: no issues.

Under further investigation system log file presented following:

Nov 13 19:37:21 thavinci kernel: pid 66367 (squid), uid 100: exited on
signal 6 (core dumped)
Nov 13 19:37:21 thavinci squid[66118]: Squid Parent: child process 66367
exited due to signal 6
Nov 13 19:37:24 thavinci squid[66118]: Squid Parent: child process 66370
started

Also, for those interested: a while back I had issues with the performance
of
squid...
Objects retrieved out of cache never went faster than 400K; it turned out that when
I changed my cache_dir from aufs to ufs this was resolved, and now objects come
down at full speed on the LAN.




Re: [squid-users] Strange RST packet

2008-11-13 Thread Itzcak Pechtalt
What is the situation?

Do other clients work OK?

Did you check whether a Squid crash occurred (check cache.log)?

Is there a specific scenario leading to the RESET?

Squid doesn't send a RESET without a reason; check for the reason in the capture.

Itzcak

On Tue, Nov 11, 2008 at 8:50 PM, Luis Daniel Lucio Quiroz
[EMAIL PROTECTED] wrote:
 After debugging,

 I've found that squid is sending a RST packet to a Windows station (WinXP SP2
 or WinVista).

 Squid is not configured to send RSTs.  Is there any explanation for this?

 Regards,

 LD



[squid-users] NTLM auth and groupmembership

2008-11-13 Thread Johnson, S
Ok, I scrapped the RADIUS authentication and went back to NTLM.  Is it
possible to check for a group membership during/after authentication to
allow a user to use Squid?  For instance, I want to be able to take away
or grant access to the proxy based on an AD group membership.

Thanks
  Scott
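
[The usual approach is an external ACL that queries winbind, as in other posts in this digest. A hedged squid.conf sketch -- the group name is a placeholder, and paths vary by distribution:]

```
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 30

external_acl_type ad_group ttl=300 %LOGIN /usr/lib/squid/wbinfo_group.pl

acl AuthUsers proxy_auth REQUIRED
acl ProxyAllowed external ad_group Internet-Users
http_access allow AuthUsers ProxyAllowed
http_access deny all
```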


RE: [squid-users] NTLM auth popup boxes Solaris 8 tuning for upgrade into 2.7.4

2008-11-13 Thread vincent.blondel

hello all,

I currently have some Sun V210 boxes running Solaris 8, squid-2.6.12
and samba 3.0.20b. I will upgrade these proxies to 2.7.4/3.0.32 next
Monday, but before doing this I would like to ask for your advice and/or
experience with tuning these kinds of boxes.

The service is running well today, except that we regularly get authentication
popup boxes. This is really exasperating our users. I have already spent a lot
of time on the net in the hope of finding a clear explanation about it, but
I am still searching. I already configured starting 128 ntlm_auth
processes on each of my servers. This gives better results, but the problem
still remains. I also made some patching in the new package I will deploy
next week, overwriting some samba values .. below is my little patch:

--- samba-3.0.32.orig/source/include/local.h    2008-08-25 23:09:21.0 +0200
+++ samba-3.0.32/source/include/local.h    2008-10-09 13:09:59.784144000 +0200
@@ -222,7 +222,7 @@
 #define WINBIND_SERVER_MUTEX_WAIT_TIME (( ((NUM_CLI_AUTH_CONNECT_RETRIES) * ((CLI_AUTH_TIMEOUT)/1000)) + 5)*2)

 /* Max number of simultaneous winbindd socket connections. */
-#define WINBINDD_MAX_SIMULTANEOUS_CLIENTS 200
+#define WINBINDD_MAX_SIMULTANEOUS_CLIENTS 1024

 /* Buffer size to use when printing backtraces */
 #define BACKTRACE_STACK_SIZE 64

I currently do not use 'auth_param ntlm keep_alive on' because I do not
know whether it will cause side effects for the web browsers used in our
company (IE / Windows XP SP2).

I already use some parameters today, like these ones below:

set shmsys:shminfo_shmseg=16
set shmsys:shminfo_shmmni=32
set shmsys:shminfo_shmmax=2097152
set msgsys:msginfo_msgmni=40
set msgsys:msginfo_msgmax=2048
set msgsys:msginfo_msgmnb=8192
set msgsys:msginfo_msgssz=64
set msgsys:msginfo_msgtql=2048
set rlim_fd_max=8192

arp_cleanup_interval=6
ip_forward_directed_broadcasts=0
ip_forward_src_routed=0
ip6_forward_src_routed=0
ip_ignore_redirect=1
ip6_ignore_redirect=1
ip_ire_flush_interval=6
ip_ire_arp_interval=6
ip_respond_to_address_mask_broadcast=0
ip_respond_to_echo_broadcast=0
ip6_respond_to_echo_multicast=0
ip_respond_to_timestamp=0
ip_respond_to_timestamp_broadcast=0
ip_send_redirects=0
ip6_send_redirects=0
ip_strict_dst_multihoming=1
ip6_strict_dst_multihoming=1
ip_def_ttl=255
tcp_conn_req_max_q0=4096
tcp_conn_req_max_q=1024
tcp_rev_src_routes=0
tcp_extra_priv_ports_add=6112
udp_extra_priv_ports_add=
tcp_smallest_anon_port=32768
tcp_largest_anon_port=65535
udp_smallest_anon_port=32768
udp_largest_anon_port=65535
tcp_smallest_nonpriv_port=1024
udp_smallest_nonpriv_port=1024

After some investigation on my servers, I notice we often get lots of
connections in status CLOSE_WAIT and FIN_WAIT_2. I also get lots of
connections in status ESTABLISHED. If I take a look at the squid statistics,
these are some figures giving an idea of the load handled by our machines:

SUNW,Sun-Fire-V210
2048 Memory size
bge0 100-fdx (or) 1000-fdx
client_http.requests = 242/sec
server.http.requests = 163/sec
Number of clients accessing cache: 1486
cpu_usage = 45.065136%
/dev/dsk/c0t0d0s5  20655529 15015444 5433530  74%  /var/cache0
/dev/dsk/c0t1d0s5  20655529 14971972 5477002  74%  /var/cache1
1746418 Store Entries
(some) 1265 ESTABLISHED tcp connections (at high load)
(some) 132 CLOSE_WAIT (or)  FIN_WAIT_2 connections

so these servers are relatively heavily loaded, and this is the reason why
I think I can still tune some tcp/udp values in order to optimize and
reduce the cpu usage on my servers. I already found some ideas on the
net, like the values below, but this is not guaranteed:

ndd -set /dev/tcp tcp_time_wait_interval 6
ndd -set /dev/tcp tcp_fin_wait_2_flush_interval 67500
ndd -set /dev/tcp tcp_keepalive_interval 15000

many thanks for helping me, because we are really in trouble and I am sure we
can solve these little problems by setting/tuning some parameters.

I made some further investigations and found maybe some relevant issues:

* first of all, it seems the tcp queues are not large enough, with some
173201 dropped connections:

  # netstat -sP tcp | fgrep -i listendrop
tcpListenDrop   =173201 tcpListenDropQ0 = 0

* it seems we do not get any connection problems with our servers and L2
switches ... only 280 input errors in 583 days of uptime.

  # netstat -i
  Name  Mtu  Net/Dest  AddressIpkts Ierrs   Opkts
Oerrs Collis Queue
  lo0   8232 loopback  localhost  251726967 0   251726967
0 0  0
  bge0  1500 sbepskcv  sbepskcv   1607581016  280  1645158342
0 0  0
  bge1  1500 sbepskcv-bge1 sbepskcv-bge1  2920250 3355944
0 0  0

* it seems we can optimize the tcp time-to-live of connections a bit, because I see
hundreds of connections in status
  CLOSE_WAIT, FIN_WAIT_2 and TIME_WAIT

* this is a command I saw on the net, but to be honest I do not
understand the output of such a command:

  # netstat -k inode_cache
  inode_cache:
  size 157855 maxsize 128252 hits 573916370 misses 

FW: [squid-users] Squid Stops Responding Sporadically

2008-11-13 Thread Marcel Grandemange
Good day.


I'm wondering if anybody else has experienced this.
Since I've upgraded to squid 3.0 STABLE10 the proxy continuously stops
responding.
Firefox will say something along the lines of the proxy isn't set up to
accept connections.

I hit refresh and it loads the page perfectly, then on the next page it loads all the
pics halfway, and so on..


This has only been introduced in STABLE10 and isn't the link, as I tested
with a neighboring cache running STABLE9: no issues.

Under further investigation system log file presented following:

Nov 13 19:37:21 thavinci kernel: pid 66367 (squid), uid 100: exited on
signal 6 (core dumped)
Nov 13 19:37:21 thavinci squid[66118]: Squid Parent: child process 66367
exited due to signal 6
Nov 13 19:37:24 thavinci squid[66118]: Squid Parent: child process 66370
started

Upon even further investigation, cache.log revealed the following...
JUST before squid crashes, each time there is the following entry...

2008/11/14 00:03:55| assertion failed: client_side_reply.cc:1843: reqofs <=
HTTP_REQBUF_SZ || flags.headersSent


Also, for those interested: a while back I had issues with the performance
of
squid...
Objects retrieved out of cache never went faster than 400K; it turned out that when
I changed my cache_dir from aufs to ufs this was resolved, and now objects come
down at full speed on the LAN.

I am pretty desperate to get this problem solved as this is affecting us big
time.




Re: [squid-users] Squid Stops Responding Sporadically

2008-11-13 Thread Kinkie
Hi,
  could you please give us a few more details? Squid version (squid
-v), operating system, whether you got a binary package or rolled your
own, and a bit of info about the setup (forward proxy? Reverse? Transparent?)
It's hard to tell from what I've read so far, unless I missed something.

On 11/13/08, Marcel Grandemange [EMAIL PROTECTED] wrote:
Good day.


I'm wondering if anybody else has experienced this.
Since I've upgraded to squid 3.0 STABLE10 the proxy continuously stops
responding.
Firefox will say something along the lines of the proxy isn't set up to
accept connections.

I hit refresh and it loads the page perfectly, then on the next page it loads all the
pics halfway, and so on..


This was only introduced in STABLE10, and it isn't the link: I tested
with a neighbouring cache running STABLE9 and saw no issues.

On further investigation, the system log file presented the following:

Nov 13 19:37:21 thavinci kernel: pid 66367 (squid), uid 100: exited on
signal 6 (core dumped)
Nov 13 19:37:21 thavinci squid[66118]: Squid Parent: child process 66367
exited due to signal 6
Nov 13 19:37:24 thavinci squid[66118]: Squid Parent: child process 66370
started

 Even further investigation of cache.log revealed the following. JUST
 before squid crashes, each time there is this entry:

 2008/11/14 00:03:55| assertion failed: client_side_reply.cc:1843: reqofs <=
 HTTP_REQBUF_SZ || flags.headersSent


Also, for those interested: a while back I had issues with the
performance of squid. Objects retrieved out of the cache never went
faster than 400K; it turned out that when I changed my cache_dir from
aufs to ufs this was resolved, and now objects come down at full speed
on the LAN.

 I am pretty desperate to get this problem solved, as it is affecting us
 big time.





-- 
/kinkie


Re: FW: [squid-users] Squid Stops Responding Sporadically

2008-11-13 Thread Luis Daniel Lucio Quiroz
That sounds like a buggy ACL; I have seen something like this before.

You should set debug to level 3 and read the log carefully to discover
which ACL is doing this.
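For example, one way to raise logging for access-control evaluation without flooding the log (a hedged sketch; in Squid's debug sections, 28 covers access control and 33 the client side — adjust the levels to taste):

```
# squid.conf: keep general noise at level 1, raise ACL (28) and
# client-side (33) debugging to level 3
debug_options ALL,1 28,3 33,3
```

Watch cache.log while reproducing the problem, then drop back to `debug_options ALL,1` once the offending ACL is identified.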

LD
 Good day.
 
 
 I'm wondering if anybody else has experienced this.
 Since I've upgraded to Squid 3.0.STABLE10 the proxy continuously stops
 responding.
 Firefox will say something along the lines of "the proxy isn't set up to
 accept connections".

 I hit refresh and it loads the page perfectly, then on the next page it
 loads all the pics half-way, and so on.
 
 
 This was only introduced in STABLE10, and it isn't the link: I tested
 with a neighbouring cache running STABLE9 and saw no issues.

 On further investigation, the system log file presented the following:
 
 Nov 13 19:37:21 thavinci kernel: pid 66367 (squid), uid 100: exited on
 signal 6 (core dumped)
 Nov 13 19:37:21 thavinci squid[66118]: Squid Parent: child process 66367
 exited due to signal 6
 Nov 13 19:37:24 thavinci squid[66118]: Squid Parent: child process 66370
 started

 Even further investigation of cache.log revealed the following. JUST
 before squid crashes, each time there is this entry:

 2008/11/14 00:03:55| assertion failed: client_side_reply.cc:1843: reqofs
 <= HTTP_REQBUF_SZ || flags.headersSent

 Also, for those interested: a while back I had issues with the
 performance of squid. Objects retrieved out of the cache never went
 faster than 400K; it turned out that when I changed my cache_dir from
 aufs to ufs this was resolved, and now objects come down at full speed
 on the LAN.

 I am pretty desperate to get this problem solved, as it is affecting us
 big time.
On Thursday 13 November 2008 16:08:17 Marcel Grandemange wrote:





Re: [squid-users] Not able to cache Streaming media

2008-11-13 Thread johan firdianto
Dear Kumar,

YouTube has already changed their URL pattern.
Before it was get_video?video_id; now it has changed to
get_video.*video_id.
You should update that part of your refresh_pattern and your
store_url_rewrite helper.
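For example, one hedged way to cover both the old and the new URL form in squid.conf (the values here are illustrative, not a tested recipe; the store_url_rewrite helper must be updated to match as well):

```
# Match get_video URLs regardless of parameter order (old ?video_id=
# form and the newer form with other parameters before video_id):
refresh_pattern -i get_video.*video_id 10080 90% 99999 ignore-no-cache override-expire ignore-private
```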

Johan

On Thu, Nov 13, 2008 at 7:29 PM, bijayant kumar [EMAIL PROTECTED] wrote:
 Hi,

 I am using Squid-2.7.STABLE4 on a Gentoo box. I am trying to cache
 YouTube videos but am not able to do so. I have followed the links
 http://www.squid-cache.org/mail-archive/squid-users/200804/0420.html and
 http://wiki.squid-cache.org/Features/StoreUrlRewrite.
 According to these URLs I tuned my squid.conf as:

 acl youtube dstdomain .youtube.com .googlevideo.com .video.google.com 
 .video.google.com.au .rediff.com
 acl youtubeip dst 74.125.15.0/24
 acl youtubeip dst 208.65.153.253/32
 acl youtubeip dst 209.85.173.118/32
 acl youtubeip dst 64.15.0.0/16
 cache allow all
 acl store_rewrite_list dstdomain mt.google.com mt0.google.com mt1.google.com 
 mt2.google.com
 acl store_rewrite_list dstdomain mt3.google.com
 acl store_rewrite_list dstdomain kh.google.com kh0.google.com kh1.google.com 
 kh2.google.com
 acl store_rewrite_list dstdomain kh3.google.com
 acl store_rewrite_list dstdomain kh.google.com.au kh0.google.com.au 
 kh1.google.com.au
 acl store_rewrite_list dstdomain kh2.google.com.au kh3.google.com.au

 acl store_rewrite_list dstdomain .youtube.com .rediff.com .googlevideo.com
 storeurl_access allow store_rewrite_list
 storeurl_access deny all
 storeurl_rewrite_program /usr/local/bin/store_url_rewrite
 quick_abort_min -1
 ##hierarchy_stoplist cgi-bin ?
 maximum_object_size_in_memory 1024 KB
 cache_dir ufs /var/cache/squid 2000 16 256
 maximum_object_size 4194240 KB
 refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
 refresh_pattern -i  \.flv$  10080   90% 99  ignore-no-cache 
 override-expire ignore-private
 refresh_pattern get_video\?video_id 10080   90% 99  
 ignore-no-cache override-expire ignore-private
 refresh_pattern youtube\.com/get_video\?10080   90% 99  
 ignore-no-cache override-expire ignore-private
 refresh_pattern .   99  100%  99  override-expire override-lastmod ignore-reload ignore-no-cache ignore-private ignore-auth
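For reference, a storeurl_rewrite_program is just a filter that reads one request line per URL on stdin and prints the canonical "store" URL to use as the cache key. Below is a minimal sketch of such a helper (the `videos.youtube.SQUIDINTERNAL` pseudo-domain is the naming convention from the wiki example linked above, and the exact stdin line format is an assumption based on the Squid 2.7 rewrite-helper protocol — verify both against your version):

```python
#!/usr/bin/env python
import re
import sys

def canonical_store_url(url):
    """Collapse per-server YouTube get_video URLs onto one cache key,
    keyed only by video_id, so the same video always hits the same
    stored object regardless of which cache server delivered it."""
    m = re.search(r'get_video\?.*?video_id=([^&\s]+)', url)
    if m:
        # Hypothetical internal domain used only as a cache key:
        return ('http://videos.youtube.SQUIDINTERNAL/get_video?video_id='
                + m.group(1))
    return url  # anything else is stored under its original URL

def main():
    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        # First whitespace-separated token is the URL; the remaining
        # tokens (client address, method, ...) are ignored here.
        sys.stdout.write(canonical_store_url(parts[0]) + '\n')
        sys.stdout.flush()  # Squid expects one unbuffered reply per request

if __name__ == '__main__' and not sys.stdin.isatty():
    main()
```

Make the script executable and point `storeurl_rewrite_program` at it; if SWAPOUTs still don't appear, log the helper's input lines to a file to confirm the URLs really match the regex.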

 But when I am accessing any YouTube video, I always get TCP_MISS/200 in
 access.log, and in store.log:
  1226578788.877 RELEASE -1  C51CDFBB595CB7105BB2874BE5C3DFB0  303 
 1226578824-1  41629446 text/html -1/0 GET 
 http://www.youtube.com/get_video?video_id=tpQjv14-8yEt=OEgsToPDskKT1_ZEE8QGepXgMQ1i-_eYel=detailpageps=
 1226578813.447 SWAPOUT 00 0314 7CAD15F0E273355B5D18A2B6F8D1871F  200 
 1226578825 1186657973 1226582425 video/flv 2553595/2553595 GET 
 http://v16.cache.googlevideo.com/get_video?origin=sjc-v176.sjc.youtube.comvideo_id=tpQjv14-8yEip=59.92.192.176signature=CF0D875938C7073200F62243A6FF964C54749A5F.E443F5A221D07EF10EB8C29233A33ACC45F8DD50sver=2expire=1226600424key=yt4ipbits=2
 There are lots of RELEASE and very few, almost negligible, SWAPOUT
 entries in the logs.
 I repeated this exercise many times but always get the same result,
 that is, TCP_MISS/200 in access.log.

 Please suggest whether I am missing something, or whether I need to add
 something more to squid.conf.

 Bijayant Kumar





Re: [squid-users] Squid Stops Responding Sporadically

2008-11-13 Thread Amos Jeffries
 Hi,
   could you please give us a few more details? Squid version (squid
 -v), operating system, whether you got a binary package or rolled your
 own, a few info about the setup (forward proxy? Reverse? Transparent?)
 It's hard to tell from what I read so far, unless I missed something


He said 3.0.STABLE10. With aufs problems as well, so he's probably
running on some form of BSD.

The rest of the questions still need answering though. I'm particularly
interested in the configure options used to build, confirmation of the
OS, the rest of the backtrace from the core, and whether it's a vanilla
Squid or a patched one.

Amos

 On 11/13/08, Marcel Grandemange [EMAIL PROTECTED] wrote:
Good day.


I'm wondering if anybody else has experienced this.
Since I've upgraded to Squid 3.0.STABLE10 the proxy continuously stops
responding.
Firefox will say something along the lines of "the proxy isn't set up to
accept connections".

I hit refresh and it loads the page perfectly, then on the next page it
loads all the pics half-way, and so on.


This was only introduced in STABLE10, and it isn't the link: I tested
with a neighbouring cache running STABLE9 and saw no issues.

On further investigation, the system log file presented the following:

Nov 13 19:37:21 thavinci kernel: pid 66367 (squid), uid 100: exited on
signal 6 (core dumped)
Nov 13 19:37:21 thavinci squid[66118]: Squid Parent: child process 66367
exited due to signal 6
Nov 13 19:37:24 thavinci squid[66118]: Squid Parent: child process 66370
started

 Even further investigation of cache.log revealed the following. JUST
 before squid crashes, each time there is this entry:

 2008/11/14 00:03:55| assertion failed: client_side_reply.cc:1843:
 reqofs <= HTTP_REQBUF_SZ || flags.headersSent


Also, for those interested: a while back I had issues with the
performance of squid. Objects retrieved out of the cache never went
faster than 400K; it turned out that when I changed my cache_dir from
aufs to ufs this was resolved, and now objects come down at full speed
on the LAN.

 I am pretty desperate to get this problem solved, as it is affecting us
 big time.





 --
 /kinkie





RE: [squid-users] NTLM auth popup boxes Solaris 8 tuning for upgrade into 2.7.4

2008-11-13 Thread Amos Jeffries

hello all,

I currently have some Sun V210 boxes running Solaris 8, squid-2.6.12
and samba 3.0.20b. I will upgrade these proxies to 2.7.4/3.0.32 next
Monday, but before doing this I would like to ask for your advice
and/or experience with tuning this kind of box.

The service is running well today, except that we regularly get
authentication popup boxes. This is really exasperating our users. I
have already spent a lot of time on the net in the hope of finding a
clear explanation for it, but I am still searching. I already
configured starting 128 ntlm_auth processes on each of my servers. This
gives better results but the problem still remains. I also made some
patches in the new package I will deploy next week, overwriting some
samba values... below is my little patch.


Before digging deep into OS settings check your squid.conf auth, acl and
http_access settings.
Check the TTL settings in your auth config. If it's not long enough,
squid will re-auth between request and reply.

For the access controls, there are a number of ways they can trigger
authentication popups: %LOGIN passed to an external helper, a
proxy_auth REQUIRED acl, and an auth ACL being last on an http_access
line.
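As an illustration of that last point, one commonly shown shape (a sketch only; `localnet` is an assumed ACL defined elsewhere in squid.conf) issues the auth challenge once, up front, instead of leaving an auth ACL last on an allow line where it re-challenges on every denial:

```
acl authed proxy_auth REQUIRED
# The 407 challenge happens here, once, for clients without credentials:
http_access deny !authed
http_access allow authed localnet
http_access deny all
```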

Also, interception setups hacked with bad flags to (wrongly) permit
auth can appear to work, but cause popups on every object request and
also leak clients' credentials to all remote sites that use auth.

Amos



Re: [squid-users] NTLM auth and groupmembership

2008-11-13 Thread Amos Jeffries
 Ok, I scrapped the radius authentication and went back to NTLM.  Is it
 possible to check for a group membership during/after authentication to
 allow a user to use SQUID?  For instance, I want to be able to take away
 or grant access to the proxy based on an AD group membership.


Yes, it's done with an external ACL helper that checks group
membership.

I can't seem to find a good config example, but these sort of cover
what's needed:
http://wiki.squid-cache.org/KnowledgeBase/NoNTLMGroupAuth
http://wiki.squid-cache.org/ConfigExamples/WindowsAuthenticationNTLM?highlight=%28auth%29%7C%28group%29#head-b97c45f4010166071a17e433b4433cd642defc1f
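For instance, a minimal sketch using the wbinfo_group helper that ships with Squid (the helper path and the AD group name `InternetUsers` are assumptions for illustration; substitute your own):

```
# Look up the NTLM-authenticated user's group via winbind:
external_acl_type ad_group ttl=300 %LOGIN /usr/lib/squid/wbinfo_group.pl
acl proxy_users external ad_group InternetUsers
http_access allow proxy_users
http_access deny all
```

With this shape, removing a user from the AD group takes effect once the helper's ttl expires.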


Amos



Re: [squid-users] Not able to cache Streaming media

2008-11-13 Thread bijayant kumar
Thanks for the reply, Johan. I have modified my config according to your
suggestion, but no luck.

Bijayant Kumar


--- On Fri, 14/11/08, johan firdianto [EMAIL PROTECTED] wrote:

 From: johan firdianto [EMAIL PROTECTED]
 Subject: Re: [squid-users] Not able to cache Streaming media
 To: [EMAIL PROTECTED]
 Cc: squid users squid-users@squid-cache.org
 Date: Friday, 14 November, 2008, 7:07 AM
 Dear Kumar,
 
  YouTube has already changed their URL pattern.
  Before it was get_video?video_id; now it has changed to
  get_video.*video_id.
  You should update that part of your refresh_pattern and
  store_url_rewrite.
 
 Johan
 
 On Thu, Nov 13, 2008 at 7:29 PM, bijayant kumar
 [EMAIL PROTECTED] wrote:
  Hi,
 
  I am using Squid-2.7.STABLE4 on Gentoo Box. I am
 trying to cache the youtube's video but not able to do
 so. I have followed the links
 
 http://www.squid-cache.org/mail-archive/squid-users/200804/0420.html
  http://wiki.squid-cache.org/Features/StoreUrlRewrite.
  According to these urls I tuned my squid.conf as
 
  acl youtube dstdomain .youtube.com .googlevideo.com
 .video.google.com .video.google.com.au .rediff.com
  acl youtubeip dst 74.125.15.0/24
  acl youtubeip dst 208.65.153.253/32
  acl youtubeip dst 209.85.173.118/32
  acl youtubeip dst 64.15.0.0/16
  cache allow all
  acl store_rewrite_list dstdomain mt.google.com
 mt0.google.com mt1.google.com mt2.google.com
  acl store_rewrite_list dstdomain mt3.google.com
  acl store_rewrite_list dstdomain kh.google.com
 kh0.google.com kh1.google.com kh2.google.com
  acl store_rewrite_list dstdomain kh3.google.com
  acl store_rewrite_list dstdomain kh.google.com.au
 kh0.google.com.au kh1.google.com.au
  acl store_rewrite_list dstdomain kh2.google.com.au
 kh3.google.com.au
 
  acl store_rewrite_list dstdomain .youtube.com
 .rediff.com .googlevideo.com
  storeurl_access allow store_rewrite_list
  storeurl_access deny all
  storeurl_rewrite_program
 /usr/local/bin/store_url_rewrite
  quick_abort_min -1
  ##hierarchy_stoplist cgi-bin ?
  maximum_object_size_in_memory 1024 KB
  cache_dir ufs /var/cache/squid 2000 16 256
  maximum_object_size 4194240 KB
  refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
  refresh_pattern -i  \.flv$  10080   90%
 99  ignore-no-cache override-expire ignore-private
  refresh_pattern get_video\?video_id 10080  
 90% 99  ignore-no-cache override-expire
 ignore-private
  refresh_pattern youtube\.com/get_video\?  
  10080   90% 99  ignore-no-cache override-expire
 ignore-private
  refresh_pattern . 99 100% 99 override-expire override-lastmod
 ignore-reload ignore-no-cache ignore-private ignore-auth
 
  But when i am accessing any video of Youtube, I always
 get TCP_MISS/200 in access.log and in store.log
   1226578788.877 RELEASE -1 
 C51CDFBB595CB7105BB2874BE5C3DFB0  303 1226578824-1 
 41629446 text/html -1/0 GET
 http://www.youtube.com/get_video?video_id=tpQjv14-8yEt=OEgsToPDskKT1_ZEE8QGepXgMQ1i-_eYel=detailpageps=
  1226578813.447 SWAPOUT 00 0314
 7CAD15F0E273355B5D18A2B6F8D1871F  200 1226578825 1186657973
 1226582425 video/flv 2553595/2553595 GET
 http://v16.cache.googlevideo.com/get_video?origin=sjc-v176.sjc.youtube.comvideo_id=tpQjv14-8yEip=59.92.192.176signature=CF0D875938C7073200F62243A6FF964C54749A5F.E443F5A221D07EF10EB8C29233A33ACC45F8DD50sver=2expire=1226600424key=yt4ipbits=2
  There are lots of RELEASE and very few
 almost negligible SWAPOUT in the logs.
  I repeated this exercise many times but always getting
 the same result that is TCP_MISS/200 in access.log.
 
  Please suggest whether I am missing something, or whether I need to
 add something more to squid.conf.
 
  Bijayant Kumar
 
 
 




Re: [squid-users] About squid ICAP implementation

2008-11-13 Thread Mikio Kishi
Hi, Henrik

 Allow: 204 is sent if it's known the whole message can be buffered
 within the buffer limits (SQUID_TCP_SO_RCVBUF). It's not related to
 previews.

Why is there such a limitation (SQUID_TCP_SO_RCVBUF)?
I would hope that squid always sends Allow: 204 to ICAP servers as much
as possible...

--
Sincerely,
Mikio Kishi