Re: [squid-users] Vary object loop

2008-03-14 Thread Alex Rousskov
On Sat, 2008-03-15 at 11:20 +0900, Adrian Chadd wrote:
> On Fri, Mar 14, 2008, Alex Rousskov wrote:
> 
> > I am not sure at all, but based on a very quick look at the code, it
> > feels like the messages you are getting may not indicate any problems.
> > The attached patch disables these messages at debugging level 1.
> > 
> > If you receive a more knowledgeable answer, please disregard this
> > comment and the patch.
> 
> I think it actually is a bug in the Vary handling in Squid-3.
> The condition:
> 
> if (!has_vary || !entry->mem_obj->vary_headers) {
>     if (vary) {
>         /* Oops... something odd is going on here.. */
> 
> .. needs to be looked at.

But it is not the condition getting hit according to Aurimas' log, is
it?

Alex.




Re: [squid-users] Squid -k reconfigure causes FATAL

2008-03-14 Thread Emil Mikulic
On Fri, Mar 14, 2008 at 10:20:21PM +, "Stephen" wrote:
> Hi,
> 
> When my cache is busy, if I issue a SQUID -K RECONFIGURE then Squid very
> often crashes with:
> 
> FATAL: Too many queued url_rewriter requests (54 on 12)
> 
> This seems only to happen when the cache is busy. Once the FATAL has
> occurred, Squid needs to be restarted manually.

I've run into this too.  There is a heuristic in helper.c:

if (hlp->stats.queue_size > hlp->n_running * 2)
   fatalf("Too many queued %s requests (%d on %d)", hlp->id_name, ...

If *2 is not suitable for your environment, increase it, or take out the
fatalf() entirely (although then squid loses the ability to react to all
of your rewriters going out to lunch).

> Changing the number of url_rewriters does not seem to make any difference.
> [...]
> I am using SquidGuard 1.3 as the url_rewriter. All DBs are in binary
> format, so startup time is not long.

Sounds like the startup time is long enough that you're accumulating
enough queued requests for squid to kill itself.  Increasing the number
of url_rewriters might not help, as it'll take longer to fire up more
processes.
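For reference, the relevant squid.conf knobs look like this; the numbers are
purely illustrative, and url_rewrite_concurrency only helps if the rewriter
speaks the concurrent helper protocol, which stock squidGuard 1.3 may not:

url_rewrite_children 12
url_rewrite_concurrency 10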

> Also, issuing the reconfigure when the cache is not being used or is
> under light load is never a problem.

Right, because the queue doesn't build up as quickly and swamp the
rewriters.

--Emil


Re: [squid-users] Cache url's with "?" question marks

2008-03-14 Thread Adrian Chadd
Caching dynamic content doesn't work "like that".

Firstly, removing the QUERY ACL gives you the ability to cache dynamic
content that returns an explicit lifetime.

You need to look at all of those MISSes and see why Squid isn't caching
them. It's hard to tell from where I'm sitting.
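
As a rough squid.conf sketch of that approach (assuming the Tomcat replies
carry no explicit Expires/Cache-Control, which log_mime_hdrs on will show,
and that the default QUERY acl and its cache deny line are gone; the pattern
and times are purely illustrative):

refresh_pattern -i /storage/storage\?fileName=.*\.jpg$ 1440 50% 10080
refresh_pattern (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

The specific pattern has to come before the generic ones, since the first
matching refresh_pattern wins.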




Adrian


On Fri, Mar 14, 2008, Saul Waizer wrote:
> 
> Amos,
> 
> I've implemented the example you sent on Dynamic Content, but so far I
> regret to say that there has been no improvement in the hit ratio.
> 
> I added the following to my squid.conf
> 
> refresh_pattern (/cgi-bin/|\?) 0 0% 0
> refresh_pattern . 0 20% 4320
> acl mydomain dstdomain .mydomain.com
> cache allow mydomain
> 
> my stats look something like this:
> 
> 67.5103% TCP_MISS/200
> 6.07349% TCP_HIT/200
> 4.55681% TCP_MEM_HIT/200
> 1.59761% TCP_IMS_HIT/304
> 
> Any help is appreciated.
> 
> Thanks
> 
> 
> 
> Amos Jeffries wrote:
> > Adrian Chadd wrote:
> >> G'day,
> >>
> >> Just remove the QUERY ACL and the cache ACL line using "QUERY" in it.
> >> Then turn on header logging (log_mime_hdrs on) and see if the replies
> >> to the dynamically generated content are actually giving caching info.
> >>
> >>
> >>
> >> Adrian
> > 
> > http://wiki.squid-cache.org/ConfigExamples/DynamicContent
> > 
> > Amos
> > 
> >>
> >> On Fri, Feb 29, 2008, Saul Waizer wrote:
> > Hello List,
> > 
> > I am having problems trying to cache images/content that comes from a
> > URL containing a question mark in it ('?')
> > 
> > Background:
> > I am running squid Version 2.6.STABLE17 on FreeBSD 6.2 as a reverse
> > proxy to accelerate content hosted in America served in Europe.
> > 
> > The content comes from an application that uses TOMCAT so a URL
> > requesting dynamic content would look similar to this:
> > 
> > http://domain.com/storage/storage?fileName=/.domain.com-1/usr/14348/image/thumbnail/th_8837728e67eb9cce6fa074df7619cd0d193_1_.jpg
> > 
> > 
> > Such a request always results in a MISS, with a log entry similar
> > to this:
> > 
> > TCP_MISS/200 8728 GET http://domain.com/storage/storage? -
> > FIRST_UP_PARENT/server_1 image/jpg
> > 
> > I've added this to my config: acl QUERY urlpath_regex cgi-bin as you can
> > see below, but it makes no difference. I also tried adding this:
> > acl QUERY urlpath_regex cgi-bin \?  and for some reason ALL requests
> > result in a MISS.
> > 
> > Any help is greatly appreciated.
> > 
> > My squid config looks like this: (obviously real IPs were changed)
> > 
> > # STANDARD ACL'S ###
> > acl all src 0.0.0.0/0.0.0.0
> > acl manager proto cache_object
> > acl localhost src 127.0.0.1/255.255.255.255
> > acl to_localhost dst 127.0.0.0/8
> > # REVERSE CONFIG FOR SITE #
> > http_port 80 accel vhost
> > cache_peer 1.1.1.1 parent 80 0 no-query originserver name=server_1
> > acl sites_server_1 dstdomain domain.com
> > #  REVERSE ACL'S FOR OUR DOMAINS ##
> > acl  ourdomain0  dstdomain   www.domain.com
> > acl  ourdomain1  dstdomain   domain.com
> > http_access allow ourdomain0
> > http_access allow ourdomain1
> > http_access deny all
> > icp_access allow all
> >  HEADER CONTROL ###
> > visible_hostname cacheA.domain.com
> > cache_effective_user nobody
> > forwarded_for on
> > follow_x_forwarded_for allow all
> > header_access All allow all
> > ### SNMP CONTROL  ###
> > snmp_port 161
> > acl snmppublic snmp_community public1
> > snmp_access allow all
> > ## CACHE CONTROL 
> > access_log /usr/local/squid/var/logs/access.log squid
> > acl QUERY urlpath_regex cgi-bin
> > cache_mem 1280 MB
> > cache_swap_low 95
> > cache_swap_high 98
> > maximum_object_size 6144 KB
> > minimum_object_size 1 KB
> > maximum_object_size_in_memory 4096 KB
> > cache_dir ufs /storage/ram_dir1 128 16 256
> > cache_dir ufs /storage/cache_dir1 5120 16 256
> > cache_dir ufs /storage/cache_dir2 5120 16 256
> > cache_dir ufs /storage/cache_dir3 5120 16 256
> > 
> > Also, here is the result of a custom script I made to parse the
> > access.log; it sorts and displays the top 22 responses so I can
> > compare them with Cacti. I am trying to increase the hit ratio, but so
> > far it is extremely low.
> > 
> > 1  571121 69.3643% TCP_MISS/200
> > 2  98432 11.9549% TCP_HIT/200
> > 3  51590 6.26576% TCP_MEM_HIT/200
> > 4  47009 5.70938% TCP_MISS/304
> > 5  17757 2.15664% TCP_IMS_HIT/304
> > 6  11982 1.45525% TCP_REFRESH_HIT/200
> > 7  11801 1.43327% TCP_MISS/404
> > 8  6810 0.827095% TCP_MISS/500
> > 9  2508 0.304604% TCP_MISS/000
> >10  1323 0.160682% TCP_MISS/301
> >11  1151 0.139792% TCP_MISS/403
> >12  1051 0.127647% TCP_REFRESH_HIT/304
> >13  430 0.0522248% TCP_REFRESH_MISS/200
> >14  127 0.0154245% TCP_CLIENT_REFRESH_MISS/200
> >15  83 0.0100806% TCP_MISS/401
> >16  81 0.00983769% TCP_CLIENT_REFRES

Re: [squid-users] Vary object loop

2008-03-14 Thread Adrian Chadd
On Fri, Mar 14, 2008, Alex Rousskov wrote:

> I am not sure at all, but based on a very quick look at the code, it
> feels like the messages you are getting may not indicate any problems.
> The attached patch disables these messages at debugging level 1.
> 
> If you receive a more knowledgeable answer, please disregard this
> comment and the patch.

I think it actually is a bug in the Vary handling in Squid-3.
The condition:

if (!has_vary || !entry->mem_obj->vary_headers) {
    if (vary) {
        /* Oops... something odd is going on here.. */

.. needs to be looked at. In fact, I'd suggest adding some further debugging
to see what the Vary headers were and then we can at least attempt to
determine why the vary processing logic is busted.




Adrian


> 
> Thank you,
> 
> Alex.
> 

> Do not warn about Vary loops and mismatches. 
> 
> I have a feeling that a lot of Vary-handling code has too-high debugging
> levels, but it is not clear to me whether those loops are dangerous 
> enough to warrant level-1 debugging. This needs to be investigated 
> before committing this change.
> 
> Index: src/client_side.cc
> ===
> RCS file: /cvsroot/squid/squid3/src/client_side.cc,v
> retrieving revision 1.779
> diff -u -r1.779 client_side.cc
> --- src/client_side.cc  26 Feb 2008 21:49:34 -  1.779
> +++ src/client_side.cc  14 Mar 2008 21:11:52 -
> @@ -3274,7 +3274,7 @@
>  /* Oops.. we have already been here and still haven't
>   * found the requested variant. Bail out
>   */
> -debugs(33, 1, "varyEvaluateMatch: Oops. Not a Vary match on second attempt, '" <<
> +debugs(33, 2, "varyEvaluateMatch: Oops. Not a Vary match on second attempt, '" <<
>  entry->mem_obj->url << "' '" << vary << "'");
>  return VARY_CANCEL;
>  }
> Index: src/client_side_reply.cc
> ===
> RCS file: /cvsroot/squid/squid3/src/client_side_reply.cc,v
> retrieving revision 1.154
> diff -u -r1.154 client_side_reply.cc
> --- src/client_side_reply.cc  16 Feb 2008 17:42:27 -  1.154
> +++ src/client_side_reply.cc  14 Mar 2008 21:11:52 -
> @@ -534,7 +534,7 @@
>  
>  case VARY_CANCEL:
>  /* varyEvaluateMatch found a object loop. Process as miss */
> -debugs(88, 1, "clientProcessHit: Vary object loop!");
> +debugs(88, 2, "clientProcessHit: Vary object loop!");
>  processMiss();
>  return;
>  }


-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] squid 2.7 behaviour

2008-03-14 Thread Mark Nottingham

Ah, good; it's not just me...

I'm seeing it on replies with Vary: Accept-Encoding (not sure if  
they're actually encoded responses or not, will try to find out).



On 14/03/2008, at 5:58 PM, Adrian Chadd wrote:


Hm, I thought the vary id stuff was changed to not log at this level.

can you enable header logging in squid.conf and see what the replies
look like for these URLs?
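
For reference, that is the log_mime_hdrs directive; a minimal squid.conf
sketch (worth turning off again afterwards, since it makes access.log much
larger):

log_mime_hdrs on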



Adrian

On Thu, Mar 13, 2008, Pablo Garcia Melga wrote:
Hi, I just upgraded to the latest 2.7 snapshot from 2.6.9 and I'm getting a
lot of these errors in cache.log.
I'm using Squid as a reverse proxy with multiple backends.

2008/03/13 20:03:45| ctx: exit level  0
2008/03/13 20:03:45| ctx: enter level  0: 'http://listados.deremate.cl/mercedes+benz_dtZgallery_pnZ4'
2008/03/13 20:03:45| storeSetPublicKey: unable to determine vary_id for 'http://listados.deremate.cl/mercedes+benz_dtZgallery_pnZ4'
2008/03/13 20:03:45| ctx: exit level  0
2008/03/13 20:03:45| ctx: enter level  0: 'http://listados.deremate.cl/neumatico+195'
2008/03/13 20:03:45| storeSetPublicKey: unable to determine vary_id for 'http://listados.deremate.cl/neumatico+195'
2008/03/13 20:03:45| ctx: exit level  0
2008/03/13 20:03:45| ctx: enter level  0: 'http://listados.dereto.com.mx/computacion-impresoras_45336/_dtZgallery_pnZ7'
2008/03/13 20:03:45| storeSetPublicKey: unable to determine vary_id for 'http://listados.dereto.com.mx/computacion-impresoras_45336/_dtZgallery_pnZ7'
2008/03/13 20:03:46| ctx: exit level  0
2008/03/13 20:03:46| ctx: enter level  0: 'http://oferta.dereto.com.mx/ajaxg1/QAG2.asp?ido=18090159&itemCant=1&showbutton=1&ispreview=0'
2008/03/13 20:03:46| storeSetPublicKey: unable to determine vary_id for 'http://oferta.dereto.com.mx/ajaxg1/QAG2.asp?ido=18090159&itemCant=1&showbutton=1&ispreview=0'
2008/03/13 20:03:46| ctx: exit level  0
2008/03/13 20:03:46| ctx: enter level  0: 'http://listados.deremate.cl/accesorios-repuestos-para-autos-audio-car_43114/_pcZnew_ptZbuyitnow_dtZgallery'
2008/03/13 20:03:46| storeSetPublicKey: unable to determine vary_id for 'http://listados.deremate.cl/accesorios-repuestos-para-autos-audio-car_43114/_pcZnew_ptZbuyitnow_dtZgallery'
2008/03/13 20:03:46| ctx: exit level  0
2008/03/13 20:03:46| ctx: enter level  0: 'http://listados.deremate.cl/_uiZ6542852'
2008/03/13 20:03:46| storeSetPublicKey: unable to determine vary_id for 'http://listados.deremate.cl/_uiZ6542852'
2008/03/13 20:03:47| ctx: exit level  0
2008/03/13 20:03:47| ctx: enter level  0: 'http://listados.dereto.com.co/htc_pnZ4_srZpricedesc'
2008/03/13 20:03:47| storeSetPublicKey: unable to determine vary_id for 'http://listados.dereto.com.co/htc_pnZ4_srZpricedesc'
2008/03/13 20:03:47| ctx: exit level  0
2008/03/13 20:03:47| ctx: enter level  0: 'http://listados.dereto.com.co/accesorios-celulares-ringtones-software_38021/_dtZgallery_pnZ1_srZbiddesc'
2008/03/13 20:03:47| storeSetPublicKey: unable to determine vary_id for 'http://listados.dereto.com.co/accesorios-celulares-ringtones-software_38021/_dtZgallery_pnZ1_srZbiddesc'
2008/03/13 20:03:47| ctx: exit level  0
2008/03/13 20:03:47| ctx: enter level  0: 'http://listados.dereto.com.co/animales-mascotas-perros_50267/_prZ11+14_pcZnew_ptZbuyitnow_dtZgallery_srZviewdesc'
2008/03/13 20:03:47| storeSetPublicKey: unable to determine vary_id for 'http://listados.dereto.com.co/animales-mascotas-perros_50267/_prZ11+14_pcZnew_ptZbuyitnow_dtZgallery_srZviewdesc'
2008/03/13 20:03:47| ctx: exit level  0
2008/03/13 20:03:47| ctx: enter level  0: 'http://listados.dereto.com.mx/muebles-muebles-bibliotecas_56865/_smZdelivery_srZcloseasc'
2008/03/13 20:03:47| storeSetPublicKey: unable to determine vary_id for 'http://listados.dereto.com.mx/muebles-muebles-bibliotecas_56865/_smZdelivery_srZcloseasc'
2008/03/13 20:03:47| ctx: exit level  0
2008/03/13 20:03:47| ctx: enter level  0: 'http://listados.deremate.cl/musica-peliculas-entradas-para-recitales_51440/_lnZrm'
2008/03/13 20:03:47| storeSetPublicKey: unable to determine vary_id for 'http://listados.deremate.cl/musica-peliculas-entradas-para-recitales_51440/_lnZrm'


Any Ideas ?

Regards, Pablo


--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Squid -k reconfigure causes FATAL

2008-03-14 Thread BJ Tiemessen


I was going to send this issue to the list this week but have not got
around to it yet.  I noticed this last week.  We are using 2.6 STABLE 16
with a custom-written url rewriter (in perl).  We called squid -k
reconfigure whenever a new user was added to the system, but when a batch
user import was run the system crashed.  After doing a little digging I
noticed that, for a brief period of time (a couple of seconds or less),
squid launches an extra url_rewrite_children's worth of rewriters.

So on this system our url_rewrite_children is set to 7, and squid -k
reconfigure was called more than 70 times in a few seconds, which
resulted in more than 490 rewriters being launched, so the system ran out
of memory and crashed.  It turns out we did not need to be calling
reconfigure on our system, so we took that call out, but it still seems
odd that squid launches an extra set of redirectors.

So yes, your problem sounds like the same thing I ran into, and it seems
to be a problem with how squid shuts down and relaunches redirectors.

BJ

Stephen wrote:
| Hi,
|
| When my cache is busy, if I issue a SQUID -K RECONFIGURE then Squid very
| often crashes with:
|
| FATAL: Too many queued url_rewriter requests (54 on 12)
|
| This seems only to happen when the cache is busy. Once the FATAL has
| occurred, Squid needs to be restarted manually.
|
| Changing the number of url_rewriters does not seem to make any difference.
| Also, issuing the reconfigure when the cache is not being used or is under
| light load is never a problem.
|
| Is the problem with Squid, or the re-writer? I think it may be an
issue with
| how Squid handles incoming requests during the reconfigure (and the
shutdown
| and restart of its url_rewriter helpers). I am using SquidGuard 1.3 as the
| url_rewriter. All DBs are in binary format, so startup time is not long.
|
| I am using Squid 2.6 STABLE 18 with the select loop.
|
| Thanks for any suggestions or thoughts you may have,
|
| Stephen

--
BJ Tiemessen
eSoft Inc.
303-444-1600 x3357
[EMAIL PROTECTED]
www.eSoft.com


[squid-users] Squid -k reconfigure causes FATAL

2008-03-14 Thread Stephen
Hi,

When my cache is busy, if I issue a squid -k reconfigure then Squid very
often crashes with:

FATAL: Too many queued url_rewriter requests (54 on 12)

This seems only to happen when the cache is busy. Once the FATAL has
occurred, Squid needs to be restarted manually.

Changing the number of url_rewriters does not seem to make any difference.
Also, issuing the reconfigure when the cache is not being used or is under
light load is never a problem.

Is the problem with Squid, or the re-writer? I think it may be an issue with
how Squid handles incoming requests during the reconfigure (and the shutdown
and restart of its url_rewriter helpers). I am using SquidGuard 1.3 as the
url_rewriter. All DBs are in binary format, so startup time is not long.

I am using Squid 2.6 STABLE 18 with the select loop.

Thanks for any suggestions or thoughts you may have,

Stephen


Re: [squid-users] Squid 3.0 STABLE2 LDAP Authentication Failing

2008-03-14 Thread Alex Rousskov
On Fri, 2008-03-14 at 14:23 -0200, Matias Chris wrote:
> I just upgraded from 2.6Stable5 to 3.0Stable2. I was authenticating
> users using LDAP, and this stopped working since I did the upgrade.

Could this be the fixed regression bug #2206?
http://www.squid-cache.org/bugs/show_bug.cgi?id=2206

Alex.




Re: [squid-users] Vary object loop

2008-03-14 Thread Alex Rousskov

On Fri, 2008-03-14 at 14:58 +, Aurimas Mikalauskas wrote:
> The next question is about Vary header. I get absolutely amazing
> amount of these errors in cache.log:
> 
> 2008/03/14 10:46:54| clientProcessHit: Vary object loop!
> 2008/03/14 10:46:54| varyEvaluateMatch: Oops. Not a Vary match on
> second attempt, 'http://some.url' 'accept-encoding'
> 2008/03/14 10:46:55| clientProcessHit: Vary object loop!
> 2008/03/14 10:46:55| varyEvaluateMatch: Oops. Not a Vary match on
> second attempt, 'http://some.other.url'
> 'accept-encoding="gzip,%20deflate"'
> 
> A rough number:
> # grep -c 'Vary object loop' cache.log && wc -l squid_access.log
> 244816
> 1842602 squid_access.log
> 
> Any idea what kind of loop that is and how to avoid it?

I am not sure at all, but based on a very quick look at the code, it
feels like the messages you are getting may not indicate any problems.
The attached patch disables these messages at debugging level 1.

If you receive a more knowledgeable answer, please disregard this
comment and the patch.

Thank you,

Alex.

Do not warn about Vary loops and mismatches. 

I have a feeling that a lot of Vary-handling code has too-high debugging
levels, but it is not clear to me whether those loops are dangerous 
enough to warrant level-1 debugging. This needs to be investigated 
before committing this change.

Index: src/client_side.cc
===
RCS file: /cvsroot/squid/squid3/src/client_side.cc,v
retrieving revision 1.779
diff -u -r1.779 client_side.cc
--- src/client_side.cc	26 Feb 2008 21:49:34 -	1.779
+++ src/client_side.cc	14 Mar 2008 21:11:52 -
@@ -3274,7 +3274,7 @@
 /* Oops.. we have already been here and still haven't
  * found the requested variant. Bail out
  */
-debugs(33, 1, "varyEvaluateMatch: Oops. Not a Vary match on second attempt, '" <<
+debugs(33, 2, "varyEvaluateMatch: Oops. Not a Vary match on second attempt, '" <<
 entry->mem_obj->url << "' '" << vary << "'");
 return VARY_CANCEL;
 }
Index: src/client_side_reply.cc
===
RCS file: /cvsroot/squid/squid3/src/client_side_reply.cc,v
retrieving revision 1.154
diff -u -r1.154 client_side_reply.cc
--- src/client_side_reply.cc	16 Feb 2008 17:42:27 -	1.154
+++ src/client_side_reply.cc	14 Mar 2008 21:11:52 -
@@ -534,7 +534,7 @@
 
 case VARY_CANCEL:
 /* varyEvaluateMatch found a object loop. Process as miss */
-debugs(88, 1, "clientProcessHit: Vary object loop!");
+debugs(88, 2, "clientProcessHit: Vary object loop!");
 processMiss();
 return;
 }


Re: [squid-users] Cache url's with "?" question marks

2008-03-14 Thread Saul Waizer

Amos,

I've implemented the example you sent on Dynamic Content, but so far I
regret to say that there has been no improvement in the hit ratio.

I added the following to my squid.conf

refresh_pattern (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
acl mydomain dstdomain .mydomain.com
cache allow mydomain

my stats look something like this:

67.5103% TCP_MISS/200
6.07349% TCP_HIT/200
4.55681% TCP_MEM_HIT/200
1.59761% TCP_IMS_HIT/304

Any help is appreciated.

Thanks



Amos Jeffries wrote:
> Adrian Chadd wrote:
>> G'day,
>>
>> Just remove the QUERY ACL and the cache ACL line using "QUERY" in it.
>> Then turn on header logging (log_mime_hdrs on) and see if the replies
>> to the dynamically generated content are actually giving caching info.
>>
>>
>>
>> Adrian
> 
> http://wiki.squid-cache.org/ConfigExamples/DynamicContent
> 
> Amos
> 
>>
>> On Fri, Feb 29, 2008, Saul Waizer wrote:
> Hello List,
> 
> I am having problems trying to cache images/content that comes from a
> URL containing a question mark in it ('?')
> 
> Background:
> I am running squid Version 2.6.STABLE17 on FreeBSD 6.2 as a reverse
> proxy to accelerate content hosted in America served in Europe.
> 
> The content comes from an application that uses TOMCAT so a URL
> requesting dynamic content would look similar to this:
> 
> http://domain.com/storage/storage?fileName=/.domain.com-1/usr/14348/image/thumbnail/th_8837728e67eb9cce6fa074df7619cd0d193_1_.jpg
> 
> 
> Such a request always results in a MISS, with a log entry similar
> to this:
> 
> TCP_MISS/200 8728 GET http://domain.com/storage/storage? -
> FIRST_UP_PARENT/server_1 image/jpg
> 
> I've added this to my config: acl QUERY urlpath_regex cgi-bin as you can
> see below, but it makes no difference. I also tried adding this:
> acl QUERY urlpath_regex cgi-bin \?  and for some reason ALL requests
> result in a MISS.
> 
> Any help is greatly appreciated.
> 
> My squid config looks like this: (obviously real IPs were changed)
> 
> # STANDARD ACL'S ###
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> # REVERSE CONFIG FOR SITE #
> http_port 80 accel vhost
> cache_peer 1.1.1.1 parent 80 0 no-query originserver name=server_1
> acl sites_server_1 dstdomain domain.com
> #  REVERSE ACL'S FOR OUR DOMAINS ##
> acl  ourdomain0  dstdomain   www.domain.com
> acl  ourdomain1  dstdomain   domain.com
> http_access allow ourdomain0
> http_access allow ourdomain1
> http_access deny all
> icp_access allow all
>  HEADER CONTROL ###
> visible_hostname cacheA.domain.com
> cache_effective_user nobody
> forwarded_for on
> follow_x_forwarded_for allow all
> header_access All allow all
> ### SNMP CONTROL  ###
> snmp_port 161
> acl snmppublic snmp_community public1
> snmp_access allow all
> ## CACHE CONTROL 
> access_log /usr/local/squid/var/logs/access.log squid
> acl QUERY urlpath_regex cgi-bin
> cache_mem 1280 MB
> cache_swap_low 95
> cache_swap_high 98
> maximum_object_size 6144 KB
> minimum_object_size 1 KB
> maximum_object_size_in_memory 4096 KB
> cache_dir ufs /storage/ram_dir1 128 16 256
> cache_dir ufs /storage/cache_dir1 5120 16 256
> cache_dir ufs /storage/cache_dir2 5120 16 256
> cache_dir ufs /storage/cache_dir3 5120 16 256
> 
> Also, here is the result of a custom script I made to parse the
> access.log; it sorts and displays the top 22 responses so I can
> compare them with Cacti. I am trying to increase the hit ratio, but so
> far it is extremely low.
> 
> 1  571121 69.3643% TCP_MISS/200
> 2  98432 11.9549% TCP_HIT/200
> 3  51590 6.26576% TCP_MEM_HIT/200
> 4  47009 5.70938% TCP_MISS/304
> 5  17757 2.15664% TCP_IMS_HIT/304
> 6  11982 1.45525% TCP_REFRESH_HIT/200
> 7  11801 1.43327% TCP_MISS/404
> 8  6810 0.827095% TCP_MISS/500
> 9  2508 0.304604% TCP_MISS/000
>10  1323 0.160682% TCP_MISS/301
>11  1151 0.139792% TCP_MISS/403
>12  1051 0.127647% TCP_REFRESH_HIT/304
>13  430 0.0522248% TCP_REFRESH_MISS/200
>14  127 0.0154245% TCP_CLIENT_REFRESH_MISS/200
>15  83 0.0100806% TCP_MISS/401
>16  81 0.00983769% TCP_CLIENT_REFRESH_MISS/304
>17  35 0.00425085% TCP_MISS/503
>18  20 0.00242906% TCP_DENIED/400
>19  19 0.00230761% TCP_HIT/000
>20  19 0.00230761% TCP_DENIED/403
>21  14 0.00170034% TCP_SWAPFAIL_MISS/200
>22  1 0.000121453% TCP_SWAPFAIL_MISS/30
> 
> Thanks!
> 
> 
> 
> 
>>



RE: [squid-users] Reverse proxy IP not passing through

2008-03-14 Thread saul waizer
Micah,

I've had to deal with that situation a few times; the solution is quite
simple.

Recompile squid with this option if you haven't done so already:
--enable-follow-x-forwarded-for

Add these lines to your squid.conf:

forwarded_for on
follow_x_forwarded_for allow all

Basically, this forwards the client IP to the origin server in a
reverse proxy setup.

Now, the client IP will be passed through the headers to the origin server,
but you need to do some work on Apache to be able to fetch it.
Unfortunately Apache discussions are beyond the scope of this list; I
suggest you look into rewrite rules. I have the same setup working like a
charm with rewrites.
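
A minimal sketch of the Squid side (assuming a 2.6 build with
--enable-follow-x-forwarded-for as above; the ACL is illustrative and should
be tightened to the hosts you actually trust):

forwarded_for on
follow_x_forwarded_for allow localhost

forwarded_for controls whether Squid appends the client address to
X-Forwarded-For toward the origin; follow_x_forwarded_for only matters if
Squid itself should trust an X-Forwarded-For header it receives.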

Hope it helps
Saul W.

-Original Message-
From: news [mailto:[EMAIL PROTECTED] On Behalf Of Micah Anderson
Sent: Wednesday, March 12, 2008 5:49 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Reverse proxy IP not passing through


I upgraded my squid to 2.6 and re-did the configs, and everything is working
with the exception of one problem: the old version used to pass the
visitor's IP back to the webserver, but now it just passes the squid
host's IP. I need the requesting IP for some CGIs to work; at the
moment they think that my host is the only one hitting them :O

I used to accomplish this with httpd_accel_uses_host_header and I
understand that this has been replaced in the newer 2.6 versions, but as
you can see from my configuration below, I've made that change.

I've got apache running on port 81 of the same server, and if I hit the
webserver itself, it sees the IPs correctly; it's just when squid passes
them on. I'm using the 2.6.18 backport on Debian etch.

Here is my squid.conf, with some ips/domains munged to protect the
innocent, thanks for any ideas!

Micah

http_port 214.132.104.148:80 defaultsite=mydomain.com:80 vhost vport
cache_peer 214.132.104.148 parent 81 0 no-query originserver default
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
acl QUERY urlpath_regex download \?
acl QUERY urlpath_regex trackback \?
acl QUERY urlpath_regex email \?
acl QUERY urlpath_regex review \?
acl QUERY urlpath_regex proposals \?
acl QUERY urlpath_regex submit \?
acl QUERY urlpath_regex admin \?
acl QUERY urlpath_regex prerelease \?
acl POSTS method POST
no_cache deny POSTS
no_cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
cache_swap_low 92
cache_swap_high 96
cache_dir aufs /var/spool/squid 100 16 256
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh %{Host}>h
access_log /var/log/squid/access.log combined
hosts_file /etc/hosts
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320
read_timeout 10 minutes
request_timeout 20 seconds
pconn_timeout 10 seconds
redirect_children 20
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl green src 214.132.104.148/255.255.255.255
acl SSL_ports port 443 
acl SSL_ports port 563  
acl SSL_ports port 873
acl Safe_ports port 80  
acl Safe_ports port 21
acl Safe_ports port 443 
acl Safe_ports port 70
acl Safe_ports port 210 
acl Safe_ports port 1025-65535  
acl Safe_ports port 280   
acl Safe_ports port 488 
acl Safe_ports port 591   
acl Safe_ports port 777 
acl Safe_ports port 631 
acl Safe_ports port 873   
acl Safe_ports port 901 
acl purge method PURGE
acl CONNECT method CONNECT
acl IMAGES urlpath_regex .jpg$
acl IMAGES urlpath_regex .gif$
acl IMAGES urlpath_regex .swf$
acl IMAGES urlpath_regex .ico$
acl IMAGES urlpath_regex .png$
http_access allow purge green
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow all
http_access deny all
icp_access deny ALL
ident_lookup_access deny all
http_access allow green
http_access deny all
http_reply_access allow all
icp_access allow all
cache_effective_group proxy
delay_pools 1
delay_class 1 1
delay_access 1 allow all
delay_parameters 1 128000/128000  # 512 kbits == 64 kbytes per second, 1Mbit/sec=128kbytes
strip_query_terms off
coredump_dir /var/spool/squid




Re: [squid-users] LiveCD type install for transparent caching of YouTube, etc?

2008-03-14 Thread Kinkie
On Fri, Mar 7, 2008 at 9:07 PM, Paul Bryson <[EMAIL PROTECTED]> wrote:
> I have been looking for some sort of easy to install Squid transparent
>  caching proxy.  Something like KnoppMyth (http://mysettopbox.tv/) but
>  just for Squid.  Boot to a CD that has you partition/format your
>  hard drives, then installs the OS plus Squid with sane default settings.  If
>  there is a web interface, all the better.

Hi Paul!
A quick googling brought me to a VMWare virtual appliance at
http://www.vmware.com/appliances/directory/57.

I've recorded your suggestion on http://wiki.squid-cache.org/WishList.

Are you willing to help the squid project by starting this activity?

-- 
 /kinkie


[squid-users] squid (Software caused connection abort)

2008-03-14 Thread humberto
 
 
Hi everybody:

My squid server sometimes gives me connection problems; after the execution
of periodic daily, these lines appear in my cache logs:

Mar 12 03:01:11 ac-ciencia squid[662]: comm_accept: FD 16: (53) Software
caused connection abort   
Mar 12 03:01:11 ac-ciencia squid[662]: httpAccept: FD 16: accept failure:
(53) Software caused connection abort
Mar 12 03:01:11 ac-ciencia squid[662]: comm_accept: FD 16: (53) Software
caused connection abort
Mar 12 03:01:11 ac-ciencia squid[662]: httpAccept: FD 16: accept failure:
(53) Software caused connection abort
Mar 12 03:01:11 ac-ciencia squid[662]: comm_accept: FD 16: (53) Software
caused connection abort
Mar 12 03:01:11 ac-ciencia squid[662]: httpAccept: FD 16: accept failure:
(53) Software caused connection abort
Mar 12 03:01:11 ac-ciencia squid[662]: comm_accept: FD 16: (53) Software
caused connection abort
Mar 12 03:01:11 ac-ciencia squid[662]: httpAccept: FD 16: accept failure:
(53) Software caused connection abort
Mar 12 03:01:11 ac-ciencia squid[662]: comm_select: kevent failure: (9) Bad
file descriptor
Mar 12 03:01:11 ac-ciencia squid[662]: Select loop Error. Retry 1

Please help me 

"All that we are is the result of what we have thought."




RE: [squid-users] squid (Software caused connection abort)

2008-03-14 Thread humberto
 
Hi everybody:

My squid server sometimes gives me connection problems; after the execution
of periodic daily, these lines appear in my cache logs:

Mar 12 03:01:11 ac-ciencia squid[662]: comm_accept: FD 16: (53) Software
caused connection abort   
Mar 12 03:01:11 ac-ciencia squid[662]: httpAccept: FD 16: accept failure:
(53) Software caused connection abort
Mar 12 03:01:11 ac-ciencia squid[662]: comm_accept: FD 16: (53) Software
caused connection abort
Mar 12 03:01:11 ac-ciencia squid[662]: httpAccept: FD 16: accept failure:
(53) Software caused connection abort
Mar 12 03:01:11 ac-ciencia squid[662]: comm_accept: FD 16: (53) Software
caused connection abort
Mar 12 03:01:11 ac-ciencia squid[662]: httpAccept: FD 16: accept failure:
(53) Software caused connection abort
Mar 12 03:01:11 ac-ciencia squid[662]: comm_accept: FD 16: (53) Software
caused connection abort
Mar 12 03:01:11 ac-ciencia squid[662]: httpAccept: FD 16: accept failure:
(53) Software caused connection abort
Mar 12 03:01:11 ac-ciencia squid[662]: comm_select: kevent failure: (9) Bad
file descriptor
Mar 12 03:01:11 ac-ciencia squid[662]: Select loop Error. Retry 1

Please help me 

"All that we are is the result of what we have thought."

-Original Message-
From: Saul Waizer [mailto:[EMAIL PROTECTED]
Sent: Friday, March 14, 2008 13:15
To: squid-users@squid-cache.org
CC: [EMAIL PROTECTED]
Subject: Re: [squid-users] Need Help


Adnan, please reply to the mailing list too.

Look into X-Forwarded-For; you need to recompile squid with that option and
add the x-forwarded... lines to squid.conf.

Hope it helps
Saul W

Adnan Shahzad wrote:
> i am using 2.6 Stable version of Squid
> 
> M.Adnan Shahzad
> System Administrator
> Information Technology Services Centre Lahore University of Management 
> Sciences(LUMS) Opposite Sector U, DHA Lahore 54792, PAKISTAN
> Website: http://www.lums.edu.pk
> Ph: +92-42-5722670-79 Ext 4138
> 
> From: saul waizer [EMAIL PROTECTED]
> Sent: Thursday, March 13, 2008 11:10 PM
> To: 'Adnan Shahzad'
> Subject: RE: [squid-users] Need Help
> 
> Which version of squid do you have?
> 
> -Original Message-
> From: Adnan Shahzad [mailto:[EMAIL PROTECTED]
> Sent: Thursday, March 13, 2008 12:45 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] Need Help
> 
> Dear Sir,
> 
> I am working at a company in Pakistan. My network setup is:
> 
> Squid Machine ---> Packeteer (hardware for bandwidth management, without
> NATing) ---> F5 (aggregated internet connection, without NATing)
> ---> Router (NATing)
> 
> I want to configure Squid with DansGuardian for content filtering, but the
> problem I am facing is that Squid does NAT and doesn't forward the client
> IP. I want the client IP forwarded to the Packeteer while Squid does the
> caching, logging and content filtering job. I have studied lots of
> documents with no success, so please guide me and help me to resolve this
> problem.
> 
> looking forward to your positive response.
> 
> Regards
> 
> M.Adnan Shahzad
> System Administrator
> 
> 
> 
> 






[squid-users] HTML NTLM and 2.6 not behaving

2008-03-14 Thread NOCTECH noctech
Having a rather difficult to fathom problem with a user logging into
some external Outlook WebAccess webmail server.  I've read a bunch of
posts about the problems with NTLM and Squid <= 2.5, but this one is
stumping me.

A little bit about our setup -- using multiple squid and dg boxes and a
WCCP router to transparently cache/filter the web.

Most of our squid caches are 2.6, but we still have two remaining that
are version 2.5 that we're phasing out.  The odd thing is, the login
seems to work correctly with squid 2.5 and incorrectly with 2.6, which
is exactly backwards of what I expect.  When I proxy directly to one of
the squid 2.6 boxes, specifically:

Squid Cache: Version 2.6.STABLE18
configure options:  '--prefix=/usr' '--sysconfdir=/etc/squid'
'--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--localstatedir=/var'
'--libexecdir=/usr/sbin' '--datadir=/usr/share/squid'
'--mandir=/usr/share/man' '--with-maxfd=4096' '--disable-useragent-log'
'--enable-ssl' '--with-openssl' '--disable-ident-lookups'
'--enable-poll' '--enable-truncate' '--enable-gnuregex'
'--enable-async-io' '--with-pthreads' '--with-aio' '--with-dl'
'--enable-storeio=aufs,diskd,ufs,coss,null'
'--enable-removal-policies=heap,lru' '--enable-kill-parent-hack'
'--enable-forw-via-db' '--enable-linux-netfilter' '--enable-underscores'
'--enable-x-accelerator-vary'

I get a login box (in firefox) that reads:
Enter username and password for "" at http://mail.example.com

When I put in the credentials and click OK, the box just keeps coming
back.  When I click cancel, I get a different login box:
Enter username and password for "mail.example.com" at
http://mail.example.com

and the login works.

If I proxy directly to one of the 2.5 boxes:
Squid Cache: Version 2.5.STABLE4
configure options:  --disable-useragent-log --enable-ssl --with-openssl
--disable-ident-lookups --enable-poll --enable-truncate
--enable-gnuregex --enable-async-io --with-pthreads --with-aio --with-dl
--enable-storeio=aufs,diskd,ufs,coss,null
--enable-removal-policies=heap,lru --enable-kill-parent-hack
--enable-forw-via-db --enable-linux-netfilter --enable-underscores
--enable-x-accelerator-vary

It goes directly to the second login box.

Any thoughts?  Any information I can provide to be helpful?

Sean






Re: [squid-users] Need Help

2008-03-14 Thread Saul Waizer

Adnan, please reply to the mailing list too.

Look into X-Forwarded-For; you need to recompile squid with that option
and add the x-forwarded... lines to squid.conf.

Hope it helps
Saul W

Adnan Shahzad wrote:
> i am using 2.6 Stable version of Squid
> 
> M.Adnan Shahzad
> System Administrator
> Information Technology Services Centre
> Lahore University of Management Sciences(LUMS)
> Opposite Sector U, DHA
> Lahore 54792, PAKISTAN
> Website: http://www.lums.edu.pk
> Ph: +92-42-5722670-79 Ext 4138
> 
> From: saul waizer [EMAIL PROTECTED]
> Sent: Thursday, March 13, 2008 11:10 PM
> To: 'Adnan Shahzad'
> Subject: RE: [squid-users] Need Help
> 
> Which version of squid do you have?
> 
> -Original Message-
> From: Adnan Shahzad [mailto:[EMAIL PROTECTED]
> Sent: Thursday, March 13, 2008 12:45 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] Need Help
> 
> Dear Sir,
> 
> I am working at a company in Pakistan. My network setup is:
> 
> Squid Machine ---> Packeteer (hardware for bandwidth management, without
> NATing) ---> F5 (aggregated internet connection, without NATing)
> ---> Router (NATing)
> 
> I want to configure Squid with DansGuardian for content filtering, but the
> problem I am facing is that Squid does NAT and doesn't forward the client
> IP. I want the client IP forwarded to the Packeteer while Squid does the
> caching, logging and content filtering job. I have studied lots of
> documents with no success, so please guide me and help me to resolve this
> problem.
> 
> looking forward to your positive response.
> 
> Regards
> 
> M.Adnan Shahzad
> System Administrator
> 
> 
> 
> 


[squid-users] Squid 3.0 STABLE2 LDAP Authentication Failing

2008-03-14 Thread Matias Chris
Hi There,

This is my first message to the list. I have been working with Squid
for the last 3 months and until now I could do everything I wanted
without help.

Now I have a problem that so far I could not resolve by myself; I hope
someone here knows how to solve it.

I just upgraded from 2.6Stable5 to 3.0Stable2. I was authenticating
users using LDAP, and this stopped working when I did the upgrade.
If I take all the LDAP-related directives out of the config, Squid
runs OK. I tried executing squid_ldap_group manually and it works
fine as well.

The symptom is that the authentication popup never comes up, I just
receive a "Denied Access" message.

Here is what I have configured:
auth_param basic program /usr/local/squid/libexec/squid_ldap_auth -d
-v 3 -b "dc=[host],dc=[domain],dc=com" -D
"cn=squid,cn=users,dc=[host],dc=[domain],dc=com" -w [password] -f
sAMAccountName=%s -h Server_IP

auth_param basic children 5
auth_param basic realm X
auth_param basic credentialsttl 5 minutes

external_acl_type busca_el_grupo %LOGIN
/usr/local/squid/libexec/squid_ldap_group -v 3 -R -b
"dc=[host],dc=[domain],dc=com" -D
"cn=squid,cn=users,dc=[host],dc=[domain],dc=com" -w [password] -f
"(&(objectclass=person)(sAMAccountName=%v)(memberof=CN=%a,CN=Users,dc=[host],dc=[domain],dc=com))"
-h Server IP

acl Internet external busca_el_grupo [group]
acl ldap_auth proxy_auth REQUIRED

http_access allow Internet
http_access allow ldap_auth
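
For comparison, a commonly used ordering for this kind of setup forces the
authentication challenge before the group lookup, so that %LOGIN has
credentials to work with. A sketch only, reusing the ACL names above (it is
not a guaranteed fix; see also the bug #2206 regression mentioned in the
replies):

acl ldap_auth proxy_auth REQUIRED
acl Internet external busca_el_grupo [group]
http_access deny !ldap_auth
http_access allow Internet
http_access deny all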


Debug (ALL,5):
2008/03/14 08:25:16.238| ACLChecklist::preCheck: 0xd44368 checking
'http_access allow Internet'
2008/03/14 08:25:16.239| ACLList::matches: checking Internet
2008/03/14 08:25:16.239| ACL::checklistMatches: checking 'Internet'
2008/03/14 08:25:16.239| authenticateValidateUser: Auth_user_request was NULL!
2008/03/14 08:25:16.239| authenticateAuthenticate: broken auth or no
proxy_auth header. Requesting auth header.
2008/03/14 08:25:16.239| aclMatchAcl: returning 0 sending
authentication challenge.
2008/03/14 08:25:16.239| aclMatchExternal: busca_el_grupo user not
authenticated (0)
2008/03/14 08:25:16.239| ACL::ChecklistMatches: result for 'Internet' is 0
2008/03/14 08:25:16.239| ACLList::matches: result is false
2008/03/14 08:25:16.240| aclmatchAclList: 0xd44368 returning false
(AND list entry failed to match)
2008/03/14 08:25:16.241| ACLChecklist::markFinished: 0xd44368
checklist processing finished
2008/03/14 08:25:16.241| aclmatchAclList: async=1 nodeMatched=0
async_in_progress=0 lastACLResult() = 0 finished() = 1
2008/03/14 08:25:16.241| ACLChecklist::check: 0xd44368 match found,
calling back with 2
2008/03/14 08:25:16.241| ACLChecklist::checkCallback: 0xd44368 answer=2
2008/03/14 08:25:16.241| The request GET http://www.gmail.com/ is
DENIED, because it matched 'Internet'
2008/03/14 08:25:16.241| Access Denied: http://www.gmail.com/
2008/03/14 08:25:16.241| AclMatchedName = Internet
2008/03/14 08:25:16.241| Proxy Auth Message = 
2008/03/14 08:25:16.243| storeCreateEntry: 'http://www.gmail.com/'
2008/03/14 08:25:16.244| store.cc(366) new StoreEntry 0xbde8498
2008/03/14 08:25:16.244| MemObject.cc(76) new MemObject 0x9cf80ec
2008/03/14 08:25:16.246| storeKeyPrivate: GET http://www.gmail.com/
2008/03/14 08:25:16.246| StoreEntry::hashInsert: Inserting Entry
0xbde8498 key '4701868D6A5B27EE086C4E1DA47B76D2'
2008/03/14 08:25:16.247| StoreEntry::setReleaseFlag:
'4701868D6A5B27EE086C4E1DA47B76D2'
2008/03/14 08:25:16.247| Creating an error page for entry 0xb7de8498
with errorstate 0x9d97a98 page id 20

Any help will be much appreciated.

Thanks in advance!
Matias.


Re: [squid-users] squid 2.7 behaviour

2008-03-14 Thread Pablo GarcĂ­a
You mean log_mime_hdrs?

Regards, Pablo

On Fri, Mar 14, 2008 at 4:58 AM, Adrian Chadd <[EMAIL PROTECTED]> wrote:
> Hm, I thought the vary id stuff was changed to not log at this level.
>
>  can you enable header logging in squid.conf and see what the replies look 
> like
>  for these URLs?
>
>
>
>  Adrian
>
>
>
>  On Thu, Mar 13, 2008, Pablo Garcia Melga wrote:
>  > Hi, I just upgraded to 2.7 latest Snapshot from 2.6.9 and I'm getting a
>  > lot of this errors in cache.log
>  > I'm using SQUID as a reverse proxy with multiple backends
>  >
>  > 2008/03/13 20:03:45| ctx: exit level  0
>  > 2008/03/13 20:03:45| ctx: enter level  0:
>  > 'http://listados.deremate.cl/mercedes+benz_dtZgallery_pnZ4'
>  > 2008/03/13 20:03:45| storeSetPublicKey: unable to determine vary_id for
>  > 'http://listados.deremate.cl/mercedes+benz_dtZgallery_pnZ4'
>  > 2008/03/13 20:03:45| ctx: exit level  0
>  > 2008/03/13 20:03:45| ctx: enter level  0:
>  > 'http://listados.deremate.cl/neumatico+195'
>  > 2008/03/13 20:03:45| storeSetPublicKey: unable to determine vary_id for
>  > 'http://listados.deremate.cl/neumatico+195'
>  > 2008/03/13 20:03:45| ctx: exit level  0
>  > 2008/03/13 20:03:45| ctx: enter level  0:
>  > 
> 'http://listados.dereto.com.mx/computacion-impresoras_45336/_dtZgallery_pnZ7'
>  > 2008/03/13 20:03:45| storeSetPublicKey: unable to determine vary_id for
>  > 
> 'http://listados.dereto.com.mx/computacion-impresoras_45336/_dtZgallery_pnZ7'
>  > 2008/03/13 20:03:46| ctx: exit level  0
>  > 2008/03/13 20:03:46| ctx: enter level  0:
>  > 
> 'http://oferta.dereto.com.mx/ajaxg1/QAG2.asp?ido=18090159&itemCant=1&showbutton=1&ispreview=0'
>  > 2008/03/13 20:03:46| storeSetPublicKey: unable to determine vary_id for
>  > 
> 'http://oferta.dereto.com.mx/ajaxg1/QAG2.asp?ido=18090159&itemCant=1&showbutton=1&ispreview=0'
>  > 2008/03/13 20:03:46| ctx: exit level  0
>  > 2008/03/13 20:03:46| ctx: enter level  0:
>  > 
> 'http://listados.deremate.cl/accesorios-repuestos-para-autos-audio-car_43114/_pcZnew_ptZbuyitnow_dtZgallery'
>  > 2008/03/13 20:03:46| storeSetPublicKey: unable to determine vary_id for
>  > 
> 'http://listados.deremate.cl/accesorios-repuestos-para-autos-audio-car_43114/_pcZnew_ptZbuyitnow_dtZgallery'
>  > 2008/03/13 20:03:46| ctx: exit level  0
>  > 2008/03/13 20:03:46| ctx: enter level  0:
>  > 'http://listados.deremate.cl/_uiZ6542852'
>  > 2008/03/13 20:03:46| storeSetPublicKey: unable to determine vary_id for
>  > 'http://listados.deremate.cl/_uiZ6542852'
>  > 2008/03/13 20:03:47| ctx: exit level  0
>  > 2008/03/13 20:03:47| ctx: enter level  0:
>  > 'http://listados.dereto.com.co/htc_pnZ4_srZpricedesc'
>  > 2008/03/13 20:03:47| storeSetPublicKey: unable to determine vary_id for
>  > 'http://listados.dereto.com.co/htc_pnZ4_srZpricedesc'
>  > 2008/03/13 20:03:47| ctx: exit level  0
>  > 2008/03/13 20:03:47| ctx: enter level  0:
>  > 
> 'http://listados.dereto.com.co/accesorios-celulares-ringtones-software_38021/_dtZgallery_pnZ1_srZbiddesc'
>  > 2008/03/13 20:03:47| storeSetPublicKey: unable to determine vary_id for
>  > 
> 'http://listados.dereto.com.co/accesorios-celulares-ringtones-software_38021/_dtZgallery_pnZ1_srZbiddesc'
>  > 2008/03/13 20:03:47| ctx: exit level  0
>  > 2008/03/13 20:03:47| ctx: enter level  0:
>  > 
> 'http://listados.dereto.com.co/animales-mascotas-perros_50267/_prZ11+14_pcZnew_ptZbuyitnow_dtZgallery_srZviewdesc'
>  > 2008/03/13 20:03:47| storeSetPublicKey: unable to determine vary_id for
>  > 
> 'http://listados.dereto.com.co/animales-mascotas-perros_50267/_prZ11+14_pcZnew_ptZbuyitnow_dtZgallery_srZviewdesc'
>  > 2008/03/13 20:03:47| ctx: exit level  0
>  > 2008/03/13 20:03:47| ctx: enter level  0:
>  > 
> 'http://listados.dereto.com.mx/muebles-muebles-bibliotecas_56865/_smZdelivery_srZcloseasc'
>  > 2008/03/13 20:03:47| storeSetPublicKey: unable to determine vary_id for
>  > 
> 'http://listados.dereto.com.mx/muebles-muebles-bibliotecas_56865/_smZdelivery_srZcloseasc'
>  > 2008/03/13 20:03:47| ctx: exit level  0
>  > 2008/03/13 20:03:47| ctx: enter level  0:
>  > 
> 'http://listados.deremate.cl/musica-peliculas-entradas-para-recitales_51440/_lnZrm'
>  > 2008/03/13 20:03:47| storeSetPublicKey: unable to determine vary_id for
>  > 
> 'http://listados.deremate.cl/musica-peliculas-entradas-para-recitales_51440/_lnZrm'
>  >
>  >
>  > Any Ideas ?
>  >
>  > Regards, Pablo
>
>  --
>  - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid 
> Support -
>  - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
>


[squid-users] TCP_NEGATIVE_HIT/200, Vary object loop

2008-03-14 Thread Aurimas Mikalauskas
Hello,

I just started squid 3.0 (STABLE2) in production: three web servers,
each with squid in front, and all three squids set as siblings to each
other.

First off, I'm really surprised to get TCP_NEGATIVE_HIT/200. Lots of them:

# wc -l squid_access.log
1482641 squid_access.log
# grep -c 'TCP_NEGATIVE_HIT' squid_access.log
118348
# grep -c 'TCP_NEGATIVE_HIT/200' squid_access.log
118003

so roughly 8% of all requests are negative hits

Here are "storeCheckCachable() Stats":
no.not_entry_cachable   736
no.wrong_content_length 1
no.negative_cached  379766
no.too_big  0
no.too_small0
no.private_key  0
no.too_many_open_files  0
no.too_many_open_fds0
yes.default 144468

Interestingly enough, these are mostly images and js/css objects. Then I
thought it could be related to:

acl really_static urlpath_regex -i
\.(jpg|jpeg|gif|png|tiff|tif|svg|swf|ico|css|js)$
acl nocache_cookie req_header Cookie NOCACHE\=1
cache allow really_static
cache deny nocache_cookie

And indeed, if I come with Cookie: NOCACHE=1, for objects matching the
really_static acl I get one of:
TCP_NEGATIVE_HIT/200
CD_SIBLING_HIT/192.168.10.162
TCP_IMS_HIT/304

So all local hits are named TCP_NEGATIVE_HIT if they match cache
allow, though I'm not sure whether this is a feature or a bug.
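
One note on the acl ordering above: squid checks the cache rules in order
and stops at the first match, so with cache allow really_static listed
first, the NOCACHE cookie can never exclude a really_static object. If the
cookie is meant to win, the deny would have to come first; a sketch:

cache deny nocache_cookie
cache allow really_static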


The next question is about the Vary header. I get an absolutely amazing
amount of these errors in cache.log:

2008/03/14 10:46:54| clientProcessHit: Vary object loop!
2008/03/14 10:46:54| varyEvaluateMatch: Oops. Not a Vary match on
second attempt, 'http://some.url' 'accept-encoding'
2008/03/14 10:46:55| clientProcessHit: Vary object loop!
2008/03/14 10:46:55| varyEvaluateMatch: Oops. Not a Vary match on
second attempt, 'http://some.other.url'
'accept-encoding="gzip,%20deflate"'

A rough number:
# grep -c 'Vary object loop' cache.log && wc -l squid_access.log
244816
1842602 squid_access.log

Any idea what kind of loop that is and how to avoid it?

Thanks!

Aurimas


Re: [squid-users] Help needed for ftp access

2008-03-14 Thread Kinkie
You can set ftp_user in squid.conf, but it's a system-wide option.
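
A sketch of the directive (the value is illustrative; it is sent as the
password for Squid's anonymous FTP logins, default Squid@, and applies to
every FTP request the proxy makes):

ftp_user anonymous@example.com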

/kinkie

On Fri, Mar 14, 2008 at 10:53 AM, piyush joshi <[EMAIL PROTECTED]> wrote:
> Dear,
>   I don't want to provide any username or password in the URL; tell
>  me another solution.
>
>
>
>  On Fri, Mar 14, 2008 at 3:18 PM, Kinkie <[EMAIL PROTECTED]> wrote:
>  > On Fri, Mar 14, 2008 at 10:39 AM, piyush joshi <[EMAIL PROTECTED]> wrote:
>  > > Dear All,
>  > >I am using ftp server for my LAN but when i use proxy
>  > >  ( squid ) it doesn't connect to ftp server because anonymous access is
>  > >  not allowed. Where should i make changes so that squid not send
>  > >  password or username to connect to ftp server .
>  >
>  > Simply use ftp://[EMAIL PROTECTED]/path as the URL you are connecting to.
>  >
>  >
>  > --
>  >  /kinkie
>  >
>
>
>
>  --
>  Regards
>
>  Piyush Joshi
>  9415414376
>



-- 
 /kinkie


Re: [squid-users] Help needed for ftp access

2008-03-14 Thread Kinkie
On Fri, Mar 14, 2008 at 10:39 AM, piyush joshi <[EMAIL PROTECTED]> wrote:
> Dear All,
>I am using ftp server for my LAN but when i use proxy
>  ( squid ) it doesn't connect to ftp server because anonymous access is
>  not allowed. Where should i make changes so that squid not send
>  password or username to connect to ftp server .

Simply use ftp://[EMAIL PROTECTED]/path as the URL you are connecting to.


-- 
 /kinkie


[squid-users] Help needed for ftp access

2008-03-14 Thread piyush joshi
Dear All,
   I am using an ftp server for my LAN, but when I use the proxy
(squid) it doesn't connect to the ftp server because anonymous access is
not allowed. Where should I make changes so that squid does not send a
password or username to connect to the ftp server?

-- 
Regards

Piyush Joshi
9415414376