RE: [squid-users] min-fresh / max-stale not working?

2008-09-03 Thread Markus Karg
Sorry, it was a typo. The test was actually done with SQUID-2.7-STABLE4.
The HTTP/1.1 support is only experimental???
 
 -Original Message-
 From: Amos Jeffries [mailto:[EMAIL PROTECTED]
 Sent: Mittwoch, 3. September 2008 07:14
 To: Markus Karg
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] min-fresh / max-stale not working?
 
  Dear SQUID Community,
 
  it seems as if SQUID is not dealing correctly with min-fresh and
  max-stale:
 
  Currently we are evaluating the use of SQUID-2.6-STABLE4. It all seems
  to work pretty well, but min-fresh and max-stale are not working. Our
  client agent wants a guarantee that it gets data that stays fresh for
  a specific amount of time, so we provide min-fresh=3500 and
  max-stale=0. To verify SQUID's behaviour we have programmed an origin
  server that always responds with the same static headers and entity
  data, and a client that requests exactly that information, via SQUID
  as a proxy. The client sends a Cache-Control header with
  min-fresh=3500 and max-stale=0, and the server always sends data with
  max-age=3600. But the client gets from SQUID a 200 OK response with
  max-age=3600 and Age=502! The current age of 502 plus the desired
  min-fresh of 3500 is 4002, minus the max-stale of 0 is still 4002,
  which is much more than the max-age of 3600 -- so the request cannot
  be satisfied without a warning, since the response will not stay
  fresh long enough! So we expect to get at least a Warning header. But
  there is none! It looks like SQUID simply ignores the min-fresh=3500
  and max-stale=0 headers!
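 
  To illustrate, the exchange looks roughly like this (request line and
  host are hypothetical; the Cache-Control, Age and status values are
  the ones we observe):
 
    GET /static-resource HTTP/1.1
    Host: origin.example.com
    Cache-Control: min-fresh=3500, max-stale=0
 
    HTTP/1.1 200 OK
    Cache-Control: max-age=3600
    Age: 502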
 
  The HTTP/1.1 specification says:
  13.1.2 Warnings
  Whenever a cache returns a response that is neither first-hand nor
  fresh enough (in the sense of condition 2 in section 13.1.1), it MUST
  attach a warning to that effect, using a Warning general-header.
  It also says:
  13.1.1 Cache Correctness
  If a stored response is not fresh enough by the most restrictive
  freshness requirement of both the client and the origin server, in
  carefully considered circumstances the cache MAY still return the
  response with the appropriate Warning header.
 
  In the default case, this means it meets the least restrictive
  freshness requirement of the client, origin server, and cache (see
  section 14.9).
 
  So to me it looks as if SQUID is buggy, since it does not add the
  mandatory Warning header. Can that be true? Or do I have to enable
  some switch like HTTP/1.1-Compliance = YES?
 
 Squid 2.6 is HTTP/1.0 only.  For any HTTP/1.1 stuff you will need
 Squid 2.7 and its experimental support.
 
 As for the cache controls, someone more knowledgeable will hopefully
 speak up.
 
 Amos



[squid-users] compiling squid error on windows

2008-09-03 Thread Dooda Dave
Dear all,

I've downloaded Squid 3.0 STABLE8 and am trying to compile it on
Windows 2003. However, I hit an error when starting to run make. The
error is as below:

[EMAIL PROTECTED] /cygdrive/c/squid-3.0.STABLE8
$ make
make: *** No targets specified and no makefile found.  Stop.

I couldn't really find help on Google at all. I hope some of you may
have encountered the same problem.

Thanks in advance.

Regards,
Dooda


[squid-users] Parent Proxy

2008-09-03 Thread MikeBou

Hi all,

we are trying to get the squid service on a win2k3 server to authenticate
users, but pass all web requests to a proxy server running on RH which
filters everything through DansGuardian. If we set the browser config to
point to the linux box, filtering is successful. If we point the browser
to the win2k3 box, filtering is unsuccessful. Below are the squid
settings for both computers.

win2k3 server squid.conf, pointing to the linux box:

http_port 8080

# ***Name of Server
visible_hostname (server name)

cache_peer (IP address) parent 8080 0 default no-query
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
cache_mem 16 MB
cache_swap_low 95
cache_swap_high 98
cache_dir ufs c:/squid/cache 2500 64 512
cache_access_log c:/squid/log/access.log
cache_log c:/squid/log/cache.log
cache_store_log c:/squid/log/store.log
emulate_httpd_log on
pid_filename c:/squid/sbin/squid.pid
debug_options ALL,1
half_closed_clients off
client_persistent_connections off
server_persistent_connections off
(further configurations follow)

Linux box squid.conf:

http_port 3128
cache_peer (ip address of external offsite proxy) parent 8080 3130 default
no-query
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
cache_mem 16 MB
cache_swap_low 95
cache_swap_high 98
cache_dir ufs /var/spool/squid 2500 64 512
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
emulate_httpd_log on
pid_filename /var/run/squid.pid
debug_options ALL,1
half_closed_clients off
client_persistent_connections off
server_persistent_connections off

(further configurations follow)


Kindest regards

Mike B



Re: [squid-users] COSS squid2.7stable4 windowsxpsp2

2008-09-03 Thread F-D. Cami
On Wed, 3 Sep 2008 17:08:21 +1200 (NZST)
Amos Jeffries [EMAIL PROTECTED] wrote:

  I've tried FreeBSD with the same conf (except paths) and it works just fine.
 
  Does squid COSS for Windows really work like this?
 
 
 COSS for windows is not thoroughly tested apparently.

Running squid on Windows XP is not exactly a good idea either...

F


[squid-users] Max open files

2008-09-03 Thread John Doe
Hi,

what max open files should I set for an average squid reverse-proxy config?
By default it is 1024 on my linux distrib and I was wondering if it would be 
enough...
I did set FD=64000.

Thx,
JD


  



Re: [squid-users] NTLM Passthrough

2008-09-03 Thread Amos Jeffries

Mark Wheeler wrote:

Hi,

I'm trying to get my squid proxy to pass-through the NTLM authentication 
information to an upstream proxy.  I have correctly (I think?) configured squid 
using the following line:

cache_peer 10.44.16.72 parent 8080 7 no-query no-digest default login=PASS

However, after the client sends the final NTLM request (which includes the 
correct domain and username) squid sends back a RST and the conversation is 
terminated.

Any idea what I am doing incorrectly?

Many thanks,

Mark



FYI, for Squid-3(HEAD) we now have an experimental patch undergoing 
auditing for a feature to enable the missing NTLM bits.


  http://www.squid-cache.org/bugs/show_bug.cgi?id=1632

Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE8


[squid-users] Re: MSSQL Authentication

2008-09-03 Thread Satya
 Hi,
 
   Can anyone help me set up authentication from an MSSQL (Windows)
 server to the squid (Linux) server?

 Thanks in advance.


 With warm Regards
 Satya.


Re: [squid-users] min-fresh / max-stale not working?

2008-09-03 Thread Amos Jeffries

Markus Karg wrote:

Sorry, it was a typo. The test was actually done with SQUID-2.7-STABLE4.
The HTTP/1.1 support is only experimental???


Brand new in 2.7 and some bugs still being found.
It's also only on one side of Squid, the one which links to Servers 
IIRC, so the client-facing code is still HTTP/1.0-only.


Amos

 

-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED]
Sent: Mittwoch, 3. September 2008 07:14
To: Markus Karg
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] min-fresh / max-stale not working?


snip





--
Please use Squid 2.7.STABLE4 or 3.0.STABLE8


Re: [squid-users] compiling squid error on windows

2008-09-03 Thread Amos Jeffries

Dooda Dave wrote:

Dear all,

I've downloaded Squid 3.0 STABLE8 and am trying to compile it on
Windows 2003. However, I hit an error when starting to run make. The
error is as below:

[EMAIL PROTECTED] /cygdrive/c/squid-3.0.STABLE8
$ make
make: *** No targets specified and no makefile found.  Stop.

I couldn't really find help on Google at all. I hope some of you may
have encountered the same problem.

Thanks in advance.

Regards,
Dooda


3.0 has no official windows support. What is there is very, very 
experimental, though improving slowly. Guido is the only one with a 
proper MS devel install to test stuff, and he is still working on both 
squid versions. If you are able to help at all, thank you.


3.x windows issues had probably best go to squid-dev.

Anyway,

I'm not very sure of this, so make a backup copy of your squid code 
files before trying ... but you probably need to run ./configure to 
generate the makefiles for your system.
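
A minimal build sequence from a Cygwin shell in the source directory (a 
sketch; configure options are omitted and the prefix is hypothetical):

  $ ./configure --prefix=/cygdrive/c/squid
  $ make
  $ make install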


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE8


Re: [squid-users] Parent Proxy

2008-09-03 Thread Amos Jeffries

MikeBou wrote:

Hi all,

we are trying to get the squid service on a win2k3 server to authenticate
users, but pass all web requests to a proxy server running on RH which
filters everything through DansGuardian. If we set the browser config to
point to the linux box, filtering is successful. If we point the browser
to the win2k3 box, filtering is unsuccessful. Below are the squid
settings for both computers.

win2k3 server squid.conf, pointing to the linux box:

http_port 8080

# ***Name of Server
visible_hostname (server name)

cache_peer (IP address) parent 8080 0 default no-query


You say this box should be 'pointing' at the other box. I assume from 
the config you mean 'pointing' as in 'child of'.


In that case you need the linux box's port in the above line:

  cache_peer (IP address of linux box) parent 3128 0 default no-query


Also: is there a cache_peer_access allowing requests through the linux 
peer? And a never_direct forcing all requests through the linux box?
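
A sketch of what that could look like on the win2k3 box (the IP address 
is hypothetical; 'linuxbox' is just a peer name, and 'all' is the 
standard ACL from the default config):

  cache_peer 192.168.0.2 parent 3128 0 default no-query name=linuxbox
  cache_peer_access linuxbox allow all
  never_direct allow all

'never_direct allow all' stops Squid from ever fetching directly, so 
every request has to go through the filtering parent.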


snip


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE8


Re: [squid-users] Max open files

2008-09-03 Thread Amos Jeffries

John Doe wrote:

Hi,

what max open files should I set for an average squid reverse-proxy config?
By default it is 1024 on my linux distrib and I was wondering if it would be 
enough...
I did set FD=64000.



How many req/sec are you expecting? Multiply by 3, then multiply by the 
time in seconds you expect an 'average' connection to stay open (I'd 
expect 1-5 seconds on a fast system).


Does the number you come up with seem reasonable to use? If not, how 
close can you get?
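
For example (hypothetical figures): at 100 req/sec with a 2-second 
average connection, that is 100 x 3 x 2 = 600 descriptors, so the 
default 1024 would still leave some headroom.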


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE8


[squid-users] visible_hostname resolution problem

2008-09-03 Thread Gustavo Lazarte

Hello,
We are testing Squid 2.6 STABLE21 for the first time and we are getting the 
following error when we try to start the Squid service:

2008/09/03 11:02:20| parseConfigFile: line 3028 unrecognized: 'localhost'
FATAL: Could not determine fully qualified hostname.  Please set 
'visible_hostname' (same error with servername.company.com)

Squid Cache (Version 2.6.STABLE21): Terminated abnormally.

I added an entry in the hosts file and on the DNS server trying to resolve 
the localhost value and servername.company.com. Should I define the DNS 
server inside Squid?

Thanks



Re: [squid-users] compiling squid error on windows

2008-09-03 Thread Guido Serassio

Hi Amos,

At 15.46 03/09/2008, Amos Jeffries wrote:

Dooda Dave wrote:
 snip

3.0 has no official windows support. What is there is very, very
experimental, though improving slowly. Guido is the only one with a
proper MS devel install to test stuff, and he is still working on both
squid versions. If you are able to help at all, thank you.


Squid 3.0 STABLE8 should build on both MinGW+MSYS and Cygwin. I 
don't know why it doesn't work here  :-(



3.x windows issues had probably best go to squid-dev.

Anyway,

I'm not very sure of this, so make a backup copy of your squid code
files before trying ... but you probably need to run ./configure to
generate the makefiles for your system.


Sure, like any other platform.

Amos: there is some Windows information missing from the 3.0 release 
notes; you can find it in the 2.6 ones.


Regards

Guido


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE8



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



[squid-users] squidguard ssl redirect

2008-09-03 Thread martin perner
Hi,

I'm running squid 2.7.STABLE3 on a SLES10 box as a normal proxy.

For content-filtering we are using squidGuard, which redirects a user to
a special page if he hits a blocked page.

If the redirect goes to an http page, everything works as expected.

But if the redirect goes to an https page, the user gets an error page
saying that the connection failed and the system returned '(71) Protocol
error'. An error is printed in the cache.log (attached).

A deny_info to the https page works without any problem.

When I add 'sslproxy_flags DONT_VERIFY_PEER' to squid.conf the
error disappears.

The question now is: does the sslproxy_flags method open any holes in
the setup, or is there another way to solve this problem?

Thanks in advance



part of the cache.log (cut the detail about the certificate):

2008/09/03 17:50:05| SSL unknown certificate error 20 in (cert)
2008/09/03 17:50:05| fwdNegotiateSSL: Error negotiating SSL connection
on FD 48: error:14090086:SSL
routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed (1/-1/0)
2008/09/03 17:50:05| SSL unknown certificate error 20 in (cert)
2008/09/03 17:50:05| fwdNegotiateSSL: Error negotiating SSL connection
on FD 48: error:14090086:SSL
routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed (1/-1/0)
2008/09/03 17:50:05| SSL unknown certificate error 20 in (cert)
2008/09/03 17:50:05| fwdNegotiateSSL: Error negotiating SSL connection
on FD 48: error:14090086:SSL
routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed (1/-1/0)


Re: [squid-users] Max open files

2008-09-03 Thread John Doe
  what max open files should I set for an average squid reverse-proxy 
  config?
  By default it is 1024 on my linux distrib and I was wondering if it would 
  be 
 enough...
  I did set FD=64000.
  
 
 How many req/sec are you expecting? multiply by 3, then multiply by the 
 time in seconds you expect an 'average' connection to stay open (I'd 
 expect 1-5 seconds on a fast system).
 
 Does the number you come up with seem reasonable to use? if not how 
 close can you get.

Let's say 100 r/s with a 1s average request = 300, which seems reasonable.
But I might have, from time to time, a wave of people (with potentially slow 
connections) requesting big videos at the same time.
So those connections are going to stay open for many seconds, maybe even 
minutes...
So, to stay on the safe side, I guess I will raise it to 4096.
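
One way to raise the runtime limit on most Linux systems (a sketch, 
assuming Squid is launched from a bash script and the hard limit 
permits it; the binary path is hypothetical):

  ulimit -n 4096                # raise the open-file limit for this shell
  /usr/local/squid/sbin/squid   # start squid from the same shell

The limit has to be raised in the shell or init script that actually 
launches Squid, or it won't be inherited.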

Thx,
JD


  



Re: [squid-users] source-hash balancing...

2008-09-03 Thread John Doe
  So what would be the alternative method in my case (2 pools of 3 servers)?
  Would this work?
  
acl u1 dstdomain u1.example.com
acl u2 dstdomain u2.example.com
  
cache_peer_access u1pool1 allow u1
cache_peer_access u1pool2 allow u1
cache_peer_access u1pool3 allow u1
cache_peer_access u1pool1 deny u2
cache_peer_access u1pool2 deny u2
cache_peer_access u1pool3 deny u2
  
cache_peer_access u2pool1 allow u2
cache_peer_access u2pool2 allow u2
cache_peer_access u2pool3 allow u2
cache_peer_access u2pool1 deny u1
cache_peer_access u2pool2 deny u1
cache_peer_access u2pool3 deny u1
  
  Does it spread the requests or won't the first cache_peer_access always be 
  chosen...?
  
 
 Try something like this:
 
 cache_peer 192.168.1.1 parent 80 0 no-query front-end-https=auto originserver name=origin_1_1 sourcehash
 cache_peer 192.168.1.2 parent 8080 0 no-query front-end-https=auto originserver name=origin_1_2 sourcehash
 acl service_1 dstdomain site.com
 cache_peer_access origin_1_1 allow service_1
 cache_peer_access origin_1_2 allow service_1

Do I need to explicitly deny the other dstdomains, or can I just use a deny 
all (unless it would override the previous allow)?
For example, if I have 3 pools of 2 servers:

acl u1 dstdomain u1.example.com
acl u2 dstdomain u2.example.com
acl u3 dstdomain u3.example.com

cache_peer_access u1_1 allow u1
cache_peer_access u1_2 allow u1
cache_peer_access u1_1 deny all
cache_peer_access u1_2 deny all

cache_peer_access u2_1 allow u2
cache_peer_access u2_2 allow u2
cache_peer_access u2_1 deny all
cache_peer_access u2_2 deny all

etc...

Thx,
JD


  



RE: [squid-users] min-fresh / max-stale not working?

2008-09-03 Thread Markus Karg
Is there a plan for when HTTP/1.1 will be completely supported on all sides?
I mean, I can hardly believe it -- HTTP/1.1 was specified back in 1999. Why
wait so long?

Thanks
Markus

 -Original Message-
 From: Amos Jeffries [mailto:[EMAIL PROTECTED]
 Sent: Mittwoch, 3. September 2008 15:40
 To: Markus Karg
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] min-fresh / max-stale not working?
 
 Markus Karg wrote:
  Sorry, it was a typo. The test was actually done with SQUID-2.7-STABLE4.
  The HTTP/1.1 support is only experimental???
 
 Brand new in 2.7 and some bugs still being found.
 It's also only on one side of Squid, the one which links to Servers
 IIRC, so the client-facing code is still HTTP/1.0-only.
 
 Amos
 
 
  snip
 
 --
 Please use Squid 2.7.STABLE4 or 3.0.STABLE8


Re: [squid-users] min-fresh / max-stale not working?

2008-09-03 Thread Adrian Chadd
When someone contributes the work or funds development.



Adrian

2008/9/4 Markus Karg [EMAIL PROTECTED]:
 Is there a plan for when HTTP/1.1 will be completely supported on all sides?
 I mean, I can hardly believe it -- HTTP/1.1 was specified back in 1999. Why
 wait so long?

 Thanks
 Markus

 snip




Re: [squid-users] squidguard ssl redirect

2008-09-03 Thread Marcus Kool

Hi Martin,

Squid is a little awkward here:
the URL returned by squidGuard must use the same protocol as the original URL.
So for a URL with the HTTPS protocol, squidGuard must return a URL that also 
uses the HTTPS protocol.
This is really not nice, but the workaround is to use a 302 redirection:
   redirect 302:http://www.internal-server.com/blocked.html
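
In squidGuard.conf terms that looks something like this (a sketch; the 
category name 'blacklist' is hypothetical):

  acl {
    default {
      pass !blacklist all
      redirect 302:http://www.internal-server.com/blocked.html
    }
  }

The 302: prefix makes Squid send the client a real HTTP redirect instead 
of silently rewriting the request, so the browser itself fetches the 
block page over plain http.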

-Marcus


martin perner wrote:

snip




Re: [squid-users] visible_hostname resolution problem

2008-09-03 Thread Amos Jeffries

 Hello,
 We are testing Squid 2.6 STABLE21 for the first time and we are getting
 the following error when we try to start the Squid service:

 2008/09/03 11:02:20| parseConfigFile: line 3028 unrecognized: 'localhost'
 FATAL: Could not determine fully qualified hostname.  Please set
 'visible_hostname' (same error with servername.company.com)

 Squid Cache (Version 2.6.STABLE21): Terminated abnormally.

 I added an entry in the hosts file and on the DNS server trying to
 resolve the localhost value and servername.company.com. Should I define
 the DNS server inside Squid?

 Thanks


Your host name configuration is severely broken.

Check that /etc/resolv.conf is configured properly (it should contain
'nameserver' and 'search' or 'domain' lines).
Check that /etc/hostname is configured properly.

What you should do is give your machine a valid name, e.g. 'guber', which
can be combined with your company domain, e.g. guber.example.com, to
create a globally unique FQDN. That name is how you and other
administrators are expected to trace the machine back to report and/or
solve any problems. The FQDN plus its rDNS lookup are what Squid uses to
identify itself and its own official public IP address.
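
A sketch of the pieces involved (all names and addresses hypothetical):

  # /etc/hostname
  guber

  # /etc/resolv.conf
  search example.com
  nameserver 192.168.0.53

  # /etc/hosts
  127.0.0.1      localhost
  192.168.0.10   guber.example.com guber

With that in place Squid can work out its FQDN itself; alternatively,
setting 'visible_hostname guber.example.com' in squid.conf sidesteps
the lookup entirely.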

Amos




Re: [squid-users] compiling squid error on windows

2008-09-03 Thread Amos Jeffries
 Hi Amos,

 At 15.46 03/09/2008, Amos Jeffries wrote:
Dooda Dave wrote:
  snip

3.0 has no official windows support. What is there is very, very
experimental, though improving slowly. Guido is the only one with a
proper MS devel install to test stuff, and he is still working on both
squid versions. If you are able to help at all, thank you.

 Squid 3.0 STABLE8 should build on both MinGW+MSYS and Cygwin. I
 don't know why it doesn't work here  :-(

3.x windows issues had probably best go to squid-dev.

Anyway,

I'm not very sure of this, so make a backup copy of your squid code
files before trying ... but you probably need to run ./configure to
generate the makefiles for your system.

 Sure, like any other platform.

 Amos: there is some Windows information missing from the 3.0 release
 notes; you can find it in the 2.6 ones.

I'll fix that right now. You mean the whole section 4 (in 2.7) / section 6
(in 2.6)?

Amos




Re: [squid-users] source-hash balancing...

2008-09-03 Thread Amos Jeffries
  So what would be the alternative method in my case (2 pools of 3
 servers)?
  Would this work?
 
acl u1 dstdomain u1.example.com
acl u2 dstdomain u2.example.com
 
cache_peer_access u1pool1 allow u1
cache_peer_access u1pool2 allow u1
cache_peer_access u1pool3 allow u1
cache_peer_access u1pool1 deny u2
cache_peer_access u1pool2 deny u2
cache_peer_access u1pool3 deny u2
 
cache_peer_access u2pool1 allow u2
cache_peer_access u2pool2 allow u2
cache_peer_access u2pool3 allow u2
cache_peer_access u2pool1 deny u1
cache_peer_access u2pool2 deny u1
cache_peer_access u2pool3 deny u1
 
  Does it spread the requests or won't the first cache_peer_access
 always be
  chosen...?
 

 Try something like this:

 cache_peer 192.168.1.1 parent 80 0 no-query front-end-https=auto originserver name=origin_1_1 sourcehash
 cache_peer 192.168.1.2 parent 8080 0 no-query front-end-https=auto originserver name=origin_1_2 sourcehash
 acl service_1 dstdomain site.com
 cache_peer_access origin_1_1 allow service_1
 cache_peer_access origin_1_2 allow service_1

 Do I need to explicitly deny the other dstdomains, or can I just use a deny
 all (unless it would override the previous allow)? For example, if I have 3
 pools of 2 servers:

 acl u1 dstdomain u1.example.com
 acl u2 dstdomain u2.example.com
 acl u3 dstdomain u3.example.com

snip

The *_access lines are run top-down on a first-match-wins basis, per
peer. So an allow of whatever you want, followed by a deny all, for each
peer should be fine.
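
So the per-peer pattern from your example is exactly the shape to use
(peer and acl names taken from your config):

  cache_peer_access u1_1 allow u1
  cache_peer_access u1_1 deny all

The 'deny all' cannot override the allow above it, because evaluation
stops at the first matching line.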

Amos



[squid-users] Interception caching problems

2008-09-03 Thread Jason Cosby
I'm serving in Iraq, where bandwidth is low and DNS servers are thousands of 
miles away. squid is a great solution for my unit. 

I set up squid-3.0-STABLE8 behind SNAT to do interception caching with the 
standard:

iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128

and http_port 3128 transparent

but squid does not intercept the packets. Setting the proxy in the browsers 
(IE7 and Firefox 3) results in squid caching as expected. After many agonizing 
days of trying to determine why I was not getting hits when leaving the 
browsers un-configured, I finally had everyone set their proxy settings to the 
server and port 3128 (dhcpd takes care of pointing them at the right subnet and 
gateway). The issue I now face is that other apps don't run right, 
particularly for the Mac guys (they can't have separate settings in the browser 
and other network apps). I need to run this transparently if at all possible.  

Am I missing something with the newest browsers? tcpdump did report that IE7 
was sending packets to port 137. Is Firefox also sending to non-standard ports? 
I even tried DNAT'ing everything from eth1 to port 3128 as a test, but no hits. 
Do I have squid listen on all possible tcp ports used by both browsers? Is 
iptables 1.4.1 buggy (doubtful)? Do I re-route all possible tcp ports to 3128? 
If so, does anyone know what all of the ports used by these two browsers are? 
Are the browsers sending out 'don't intercept me' headers in the default setup 
and 'intercept me' headers when configured for a proxy? I'm at a loss. 

squid is doing a fantastic job of keeping a lot of traffic local, but I fear I 
will have to cease using it in order to keep everything else working if I can't 
solve this. IM and VC apps top the list down here since everyone tries to stay 
in touch with home, so I have to keep them working. 

Thanks so much for any help,
Jason



Re: [squid-users] Interception caching problems

2008-09-03 Thread Amos Jeffries
 I'm serving in Iraq, where bandwidth is low and DNS servers are thousands
 of miles away. squid is a great solution for my unit.

 snip


A couple of Qs.

 - is your squid built with --enable-linux-netfilter ?

 - is squid running on the NAT box?

 - are the requests just dying, or being served okay as TCP_MISS?

 - what does the rest of your config say?
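
(On the first question: interception on Linux needs netfilter support
compiled into Squid. A build sketch, with all other options omitted:

  ./configure --enable-linux-netfilter ...
  make && make install

--enable-linux-netfilter is what lets Squid look up the original
destination of REDIRECTed connections.)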


Explicit configuration is better anyway. Windows people are screwed
(way to go MS), but for the non-Windows users there is a global
environment variable in most OSes which applications usually use for
proxy settings:

  http_proxy=http://fubar.example.org:3128/

or a control panel somewhere in the OS for 'proxy settings' which sets it
properly for the whole machine, not just in the browser-only settings.

Amos




[squid-users] make install: Squid 3.0 Stable 8 on W2k3

2008-09-03 Thread Dooda Dave
Hi,

Last time I had a problem where I got stuck with make, but then I
figured out that I was missing some gcc compiler components. However,
when I get past that and run make install, the following errors occur:

make[3]: *** [WinSvc.o] Error 1
make[3]: Leaving directory `/cygdrive/c/squid-3.0.STABLE8/src'
make[2]: *** [install-recursive] Error 1
make[2]: Leaving directory `/cygdrive/c/squid-3.0.STABLE8/src'
make[1]: *** [install] Error 2
make[1]: Leaving directory `/cygdrive/c/squid-3.0.STABLE8/src'
make: *** [install-recursive] Error 1

-- 
Dooda


Re: [squid-users] Interception caching problems

2008-09-03 Thread Indunil Jayasooriya
Hi,


Please fill in the variables below with your own values:
$LAN = LAN IP range, e.g. 192.168.0.0/24
$INTERFAZ_INT = interface that connects to the Internet
$INTERFAZ_LAN = interface that connects to the LAN
$LAN_IP = LAN IP of the squid box, e.g. 192.168.0.1

I use the rules below for transparent interception on Linux.

#Enabling ip forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward

#For squid traffic to Accept
iptables -A INPUT -d $LAN_IP -p tcp -s $LAN --dport 3128 -j ACCEPT

iptables -A FORWARD -p udp -s $LAN --dport 53 -m state --state NEW -j ACCEPT
iptables -A FORWARD -p tcp -s $LAN -m multiport \
  --dports 20,21,22,25,43,53,80,443,110,143 -m state --state NEW -j ACCEPT

iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -p tcp -m multiport \
  --dports 20,21,22,25,43,53,80,443,110,143 -j ACCEPT

iptables -t nat -A POSTROUTING -p udp -o $INTERFAZ_INT -s $LAN \
  --dport 53 -j SNAT --to-source $INT_IP
iptables -t nat -A POSTROUTING -p tcp -o $INTERFAZ_INT -s $LAN \
  -m multiport --dports 20,21,22,25,43,53,80,443,110,143 \
  -j SNAT --to-source $INT_IP

#Redirecting traffic destined to port 80 to port 3128
iptables -t nat -A PREROUTING -p tcp -i $INTERFAZ_LAN --dport 80 \
  -j REDIRECT --to-port 3128


In addition to that, please check your client PCs: their gateway and DNS servers.


Re: [squid-users] make install: Squid 3.0 Stable 8 on W2k3

2008-09-03 Thread Amos Jeffries
 Hi,

 Last time I had a problem where I got stuck with make, but then I
 figured out that I was missing some gcc compiler components. However,
 when I get past that and run make install, the following errors occur:

 make[3]: *** [WinSvc.o] Error 1
 make[3]: Leaving directory `/cygdrive/c/squid-3.0.STABLE8/src'
 make[2]: *** [install-recursive] Error 1
 make[2]: Leaving directory `/cygdrive/c/squid-3.0.STABLE8/src'
 make[1]: *** [install] Error 2
 make[1]: Leaving directory `/cygdrive/c/squid-3.0.STABLE8/src'
 make: *** [install-recursive] Error 1

 --
 Dooda


There should be an all-important line or more above that sequence of abort
codes which tells us what the error actually is.

Amos



Re: [squid-users] Interception caching problems

2008-09-03 Thread Amos Jeffries
 Hi,


 Please fill in the variables below with your own values:
 $LAN = LAN IP range, e.g. 192.168.0.0/24
 $INTERFAZ_INT = interface that connects to the Internet
 $INTERFAZ_LAN = interface that connects to the LAN
 $LAN_IP = LAN IP of the squid box, e.g. 192.168.0.1

 I use the rules below for transparent interception on Linux.

Most of those rules do not apply to web traffic. They appear to be
standard rules for a gateway, done manually without a control tool.

See:
  http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect

The issues do not appear to be with the interception itself, just with
the follow-up or underlying traffic flows.

Amos