[squid-users] many parseHttpRequest: Unsupported method found

2008-07-03 Thread Siu-kin Lam
Dear all 

I am using version 2.7 STABLE3 as a proxy server. I have found many 
"parseHttpRequest: Unsupported method" entries in cache.log. 

 

Is it possible to fix it? 

Also, could it be changed to allow any unsupported method by default? 

 

Thanks 

SK
 


  


Re: [squid-users] Squid web interface

2008-07-03 Thread Angela Williams
On Thursday 03 July 2008, Patrick G. Victoriano wrote:
 Hi,

 Is there a software or program where you can set acl's on squid using a web
 browser? If there's any, please advise me.

Webmin does a reasonable job. You will still need to set the order of the 
acl's so that they are parsed in the order you want them to!

Cheers
Ang


-- 
Angela Williams Enterprise Outsourcing
Unix/Linux & Cisco spoken here! Bedfordview
[EMAIL PROTECTED]   Gauteng South Africa

Smile!! Jesus Loves You!!



RE: [squid-users] Squid web interface

2008-07-03 Thread Patrick G. Victoriano
Thanks Everyone.
I'll give it a shot.


Thanks a lot.

 
 
 
Regards,
 
 
(TRIK)


-Original Message-
From: Angela Williams [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 03, 2008 4:04 PM
To: squid-users@squid-cache.org
Cc: Patrick G. Victoriano
Subject: Re: [squid-users] Squid web interface

On Thursday 03 July 2008, Patrick G. Victoriano wrote:
 Hi,

 Is there a software or program where you can set acl's on squid using a web
 browser? If there's any, please advise me.

Webmin does a reasonable job. You will still need to set the order of the 
acl's so that they are parsed in the order you want them to!

Cheers
Ang


-- 
Angela Williams Enterprise Outsourcing
Unix/Linux & Cisco spoken here! Bedfordview
[EMAIL PROTECTED]   Gauteng South Africa

Smile!! Jesus Loves You!!





[squid-users] Slow clients in reverse proxy setup...

2008-07-03 Thread John Doe
Hi again,

I simulated a slow client (throttled at 512k/s) and squid kept the apache 
connection open the whole time, while it could have closed it after 1 second...
It was a 20MB file and maximum_object_size 32768 KB.
Accessed a second time, the object is cached correctly, no more apache access.
Are there parameters in the configuration to tell squid to go full throttle 
with the server, close the connection and then continue alone with the client?
For info, I have KeepAlive Off in my httpd.conf

Thx,
JD


  



Re: [squid-users] Squid performance... RAM or CPU?

2008-07-03 Thread Henrik Nordstrom
On ons, 2008-07-02 at 18:12 -0500, Carlos Alberto Bernat Orozco wrote:

 Why I'm making this question, because when I installed squid for 120
 users, the ram went to the sky

RAM usage is not very dependent on the number of users, more on how you
configure Squid.

There is a whole chapter in the FAQ covering memory usage:
http://wiki.squid-cache.org/SquidFaq/SquidMemory

Where the most important entry is
"How much memory do I need in my Squid server?"
http://wiki.squid-cache.org/SquidFaq/SquidMemory#head-09818ad4cb8a1dfea1f51688c41bdf4b79a69991

Regards
Henrik



signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Could squid change src ip to client ipaddress in the Transparent mode?

2008-07-03 Thread Henrik Nordstrom
On tor, 2008-07-03 at 12:02 +0900, S.KOBAYASHI wrote:
 I want squid or Linux to change the source address from squid's eth1 IP
 address to the http client's IP address while proxying in transparent mode.
 Eventually, nobody needs to change the firewall settings.

See tproxy.

Requires Linux with a patched kernel to support the feature.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] many parseHttpRequest: Unsupported method found

2008-07-03 Thread Amos Jeffries

Siu-kin Lam wrote:
Dear all 

I am using version 2.7 STABLE3 as a proxy server. I have found many "parseHttpRequest: Unsupported method" entries in cache.log. 

Is it possible to fix it? 


If you know what the methods are supposed to be. Squid 2.x has a 
squid.conf option extension_methods ... which allows you to name up to 
20 new ones which are allowed through.


http://www.squid-cache.org/Versions/v2/2.7/cfgman/extension_methods.html
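
For example, to let some of them through (a sketch; the method names here 
are the WebDAV/Subversion verbs commonly added, substitute whatever your 
cache.log actually reports):

extension_methods REPORT MERGE MKACTIVITY CHECKOUT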



Also, could it be changed to allow any unsupported method by default? 


The change is not easy and we have already done it in Squid 3.1 
(currently 3-HEAD) with no intention of back-porting.


Amos
--
Please use Squid 2.7.STABLE3 or 3.0.STABLE7


Re: [squid-users] Account expiry

2008-07-03 Thread Henrik Nordstrom
Please do not reply to an existing question when asking a new question.
Changing the subject is not sufficient, your message is still a response
to the original, and gets threaded as such in thread aware mail clients
and mail archives.

On tor, 2008-07-03 at 11:32 +0800, Patrick G. Victoriano wrote:

 I want to give a certain user access to the internet at a certain date. What 
 config should I enter in my conf to implement this
 setup?

Here are two options. Either place the restriction in whatever user
database you are using, only keeping the account enabled on those dates,
or build the restriction using acls in squid.conf.

Now, Squid acls as such are a little limited in this regard as they
don't support dates, only days of the week and time. But it's
relatively trivial to extend with an external acl evaluating the time..


Example external acl helper for date checks. Used like

external_acl_type datecheck /path/to/datecheck.sh

acl user_a_dates external datecheck startdate enddate

where startdate and enddate are given like YYYYMMDD, where YYYY is year,
MM is month and DD is day.

### cut here ###
#!/bin/sh
while read start end; do
today=`date +%Y%m%d`
if [ $start -le $today -a $end -ge $today ]; then
echo OK
else
echo ERR
fi
done
### END CUT ###

Depending on how the timezone is configured on your server you MAY need
to add a TZ variable in the beginning of the script defining your
timezone. But normally not needed.

TZ=yourtimezone
export TZ


Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] objects after expire time

2008-07-03 Thread Henrik Nordstrom
On tor, 2008-07-03 at 11:53 +0800, Ken W. wrote:
 My original server includes the expire headers in its response.
 When an object cached on squid get expired, for the succedent requests
 to this object, does squid revalidate it to original server every
 time?

No, just once to update the expiry, until it expires again.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Slow clients in reverse proxy setup...

2008-07-03 Thread Amos Jeffries

John Doe wrote:

Hi again,

I simulated a slow client (throttled at 512k/s) and squid kept the apache 
connection open the whole time, while he could have closed it after 1 second...
It was a 20MB file and maximum_object_size 32768 KB.
Accessed a second time, the object is cached correctly, no more apache access.
Are there parameters in the configuration to tell squid to go full throttle 
with the server, close the connection and then continue alone with the client?
For info, I have KeepAlive Off in my httpd.conf



That is Squid's default behavior when it gets a Connection: close header 
from the server in the response.


The relevant settings in squid are delay_pools, which are all about slowing 
down the client side of the send so clients can't hog too much bandwidth overall.



Amos
--
Please use Squid 2.7.STABLE3 or 3.0.STABLE7


Re: [squid-users] Squid web interface

2008-07-03 Thread Matus UHLAR - fantomas
Hello,

please, if you are writing a new post, send it as a new mail and not
as a reply/followup to an old mail. It makes people with threading clients
angry, and they can also miss your e-mail in such a case.
Thank you.

On 03.07.08 11:58, Patrick G. Victoriano wrote:
 From: Patrick G. Victoriano [EMAIL PROTECTED]
 To: squid-users@squid-cache.org
 Date: Thu, 3 Jul 2008 11:58:04 +0800
 In-Reply-To: [EMAIL PROTECTED]
 Subject: [squid-users] Squid web interface

 Is there a software or program where you can set acl's on squid using a web 
 browser?
 If there's any, please advise me.

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Atheism is a non-prophet organization. 


Re: [squid-users] many parseHttpRequest: Unsupported method found

2008-07-03 Thread Siu-kin Lam
Hi Amos 

Thanks for the reply. 
I found most of the unsupported methods in cache.log include control codes. I 
think they come from some BT clients. 

Thanks 
S K 
 
--- On Thu, 7/3/08, Amos Jeffries [EMAIL PROTECTED] wrote:

 From: Amos Jeffries [EMAIL PROTECTED]
 Subject: Re: [squid-users] many parseHttpRequest: Unsupported method found
 To: [EMAIL PROTECTED]
 Cc: squid-users@squid-cache.org
 Date: Thursday, July 3, 2008, 9:44 AM
 Siu-kin Lam wrote:
  Dear all 
  
  I am using version2.7 stable 3 as  a proxy server. I
 have found many parseHttpRequest: Unsupported
 method in cache.log. 
  
  Is it possible to fix it? 
 
 If you know what the methods are supposed to be. Squid 2.x
 has a 
 squid.conf option extension_methods ... which allows you to
 name up to 
 20 new ones which are allowed through.
 
 http://www.squid-cache.org/Versions/v2/2.7/cfgman/extension_methods.html
 
  
  Also, could it change to allow any Unsupported
 method by default ? 
 
 The change is not easy and we have already done it in Squid
 3.1 
 (currently 3-HEAD) with no intentions on back-porting.
 
 Amos
 -- 
 Please use Squid 2.7.STABLE3 or 3.0.STABLE7


  


Re: [squid-users] Recommend for hardware configurations

2008-07-03 Thread Henrik Nordstrom
On tor, 2008-07-03 at 12:04 +0800, Roy M. wrote:

 We are planning to replace this testing server with two or three
 cheaper 1U servers (sort of redundancy!)
 
 Intel Dual Core or Quad Core CPU x1 (no SMP)

Squid uses only one core, so rather Dual core than Quad...

 4GB DDR2 800 RAM
 500GB or 750GB SATA (Raid 0)

For Squid it's easier with JBOD than RAID0. Performance is the same.
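
For example, two independent cache directories instead of one striped 
volume (a sketch; the paths and the size/L1/L2 values are placeholders):

cache_dir aufs /cache1 100000 16 256
cache_dir aufs /cache2 100000 16 256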

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Access to IP websites blocked partially

2008-07-03 Thread Henrik Nordstrom
On tor, 2008-07-03 at 12:57 +0800, Josh wrote:
 Hey list,
 
 I have an issue with my squid proxy server.
 My setup is like that : client --- squid --- netcache --- internet
 
 When I enter the url http://17.149.160.10/ in my client's browser, I
 get stuck... the page cannot be displayed.
 Access.log gives me :
 1215060561.991   4986 10.51.128.79 TCP_MISS/000 0 GET
 http://17.149.160.10/ - NONE/- -

Usually this:
http://wiki.squid-cache.org/SquidFaq/SystemWeirdnesses#head-699d810035c099c8b4bff21e12bb365438a21027

Or this:
http://wiki.squid-cache.org/SquidFaq/SystemWeirdnesses#head-4920199b311ce7d20b9a0d85723fd5d0dfc9bc84


But probably something else as you are using a parent proxy..

Maybe this?
http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid#head-f7c4c667d4154ec5a9619044ef7d8ab94dfda39b

no, your squid.conf seems fine..

Or maybe the reverse lookup is delaying things.. Your corpnet acl will do a 
reverse lookup on the IP to see if it's within the corp.local domain.

As you get NONE/- in the hierarchy field Squid hasn't even contacted the 
netcache yet (almost 5 seconds, client aborted)

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Reverse Proxy, OWA RPCoHTTPS and NTLM

2008-07-03 Thread Henrik Nordstrom
On tor, 2008-07-03 at 07:28 +0200, Abdessamad BARAKAT wrote:
 Hi,
 
 I try to setup squid as ssl reverse proxy for publishing OWA services 
 (webmail, rpc/http and activesync), now the publish is made by a ISA 
 server and I want to replace this ISA Server.
 
 the flow:
 
 Internet = Firewall(NAT) = Squid Reverse Proxy on DMZ( https port 
 8443) = Firewall(8443 open) = Exchange Server (NLB IP on https port 443)

This will generally only work if the NAT port translates external port
443 to 8443 on the proxy. OWA will not work if the external requested
port differs from the port where OWA is running on the exchange server.


 I can get webmail working well, not yet tested activesync but the use of 
 RPC over HTTP doesn't work, I get a 401 error code when I try to logon 
 with outlook:

Have you told Squid to trust the web server with logon credentials? See
the cache_peer login= option..
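
For example (a sketch; the peer host, port and name are placeholders):

cache_peer exchange.example.com parent 443 0 no-query originserver ssl login=PASS name=owa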

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Slow clients in reverse proxy setup...

2008-07-03 Thread Henrik Nordstrom
On tor, 2008-07-03 at 02:38 -0700, John Doe wrote:

 Are there parameters in the configuration to tell squid to go full throttle 
 with the server, close the connection and then continue alone with the client?

http://www.squid-cache.org/Versions/v3/3.0/cfgman/read_ahead_gap.html
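
For example (a sketch; 16 MB is an arbitrary value, tune it to your object 
sizes and memory budget):

read_ahead_gap 16 MB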

 For info, I have KeepAlive Off in my httpd.conf

Why?

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


RE: [squid-users] LDAP Authentication with Umlauts

2008-07-03 Thread enrico . hoyme
Hi,

I also had problems with umlauts. We use our Lotus Domino Server as LDAP 
server, and since an update from version 6.5 to 8, our users are unable to 
authenticate via IE or Firefox if their password contains umlauts.
We are running squid on BSD and Linux, and on both systems you are able to 
authenticate using squid_ldap_auth on the command line.
I figured out that if you use the command line (set to utf-8), the utf-8 
code will be sent, while if you use IE or Firefox the ASCII code will 
be sent.
So I wrote a small workaround by adding a new function 
rfc1738_unescape_with_utf to squid_ldap_auth.c. The base content is the 
original function rfc1738_unescape, but I added a switch statement to 
change the character representation from ASCII to UTF-8 (see code for 
German special chars below).

void
rfc1738_unescape_with_utf(char *s)
{
char hexnum[3];
int i, j;   /* i is write, j is read */
unsigned int x;
for (i = j = 0; s[j]; i++, j++) {
s[i] = s[j];
if (s[i] != '%')
continue;
if (s[j + 1] == '%') {  /* %% case */
j++;
continue;
}
if (s[j + 1] && s[j + 2]) {
if (s[j + 1] == '0' && s[j + 2] == '0') {   /* %00 case */
j += 2;
continue;
}
hexnum[0] = s[j + 1];
hexnum[1] = s[j + 2];
hexnum[2] = '\0';
if (1 == sscanf(hexnum, "%x", &x)) {
switch(x) {
case 196 :
s[i] = (char) 195;
s[i + 1] = (char) 132;
i++;
break;
case 214 :
s[i] = (char) 195;
s[i + 1] = (char) 150;
i++;
break;
case 220 :
s[i] = (char) 195;
s[i + 1] = (char) 156;
i++;
break;
case 223 :
s[i] = (char) 195;
s[i + 1] = (char) 159;
i++;
break;
case 228 :
s[i] = (char) 195;
s[i + 1] = (char) 164;
i++;
break;
case 246 :
s[i] = (char) 195;
s[i + 1] = (char) 182;
i++;
break;
case 252 :
s[i] = (char) 195;
s[i + 1] = (char) 188;
i++;
break;
default :
s[i] = (char) (0x0ff & x);
}
j += 2;
}
}
}
s[i] = '\0';
}
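
The switch above covers only the German special characters, but any ISO-8859-1 
byte >= 0x80 maps to UTF-8 with the same two-byte formula, so the table could 
be replaced by a generic transform. A minimal sketch (an illustration, not 
part of the patched helper):

#include <stdio.h>

/* Write the UTF-8 encoding of ISO-8859-1 byte x into out[0..1].
 * Returns the number of bytes written (1 or 2). */
static int latin1_to_utf8(unsigned int x, char *out)
{
    if (x < 0x80) {                       /* ASCII passes through unchanged */
        out[0] = (char) x;
        return 1;
    }
    out[0] = (char) (0xC0 | (x >> 6));    /* leading byte: 0xC2 or 0xC3 */
    out[1] = (char) (0x80 | (x & 0x3F));  /* continuation byte: low six bits */
    return 2;
}

int main(void)
{
    char buf[2];
    int n = latin1_to_utf8(0xC4, buf);    /* 0xC4 is 'Ä' -> 0xC3 0x84 */
    printf("%d bytes: %02X %02X\n", n,
           (unsigned char) buf[0], (unsigned char) buf[1]);
    return 0;
}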

Regards

Enrico Hoyme


Re: [squid-users] Reverse Proxy, OWA RPCoHTTPS and NTLM

2008-07-03 Thread Henrik Nordstrom
On tor, 2008-07-03 at 12:35 +0200, Abdessamad BARAKAT wrote:

 I have tried login=PASS without success. If I understand  
 correctly, the credentials are sent to the backend server without any  
 modifications

Yes.

 Finally, If I set Basic authentication on the outlook client, it's  
 working

Which Squid version?

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Slow clients in reverse proxy setup...

2008-07-03 Thread John Doe
  I simulated a slow client (throttled at 512k/s) and squid kept the apache 
  connection open the whole time, while he could have closed it after 1 
  second...
  It was a 20MB file and maximum_object_size 32768 KB.
  Accessed a second time, the object is cached correctly, no more apache 
  access.
  Are there parameters in the configuration to tell squid to go full throttle 
  with the server, close the connection and then continue alone with the 
  client?
  For info, I have KeepAlive Off in my httpd.conf
 
 That is squid default behavior when it gets a Connection: close header 
 from server in response object.

My apache does send a Connection: close header:
  HTTP/1.1 200 OK
  Date: Thu, 03 Jul 2008 10:17:40 GMT
  Server: Apache/2.2.3 (CentOS)
  Last-Modified: Wed, 02 Jul 2008 15:48:40 GMT
  ETag: 68996-1388000-6d2d8600
  Accept-Ranges: bytes
  Content-Length: 2048
  Cache-Control: max-age=3600, s-maxage=300
  Expires: Thu, 03 Jul 2008 11:17:40 GMT
  Connection: close
  Content-Type: text/plain; charset=UTF-8

 The settings in squid are delay_pools, all about slowing down the client 
 side of the send so clients can't hog too much bandwidth overall.

I thought delay_pools were for limiting clients' bandwidth.
This is a reverse proxy setup: fast squid->apache and potentially slow 
squid->clients.
I do not want to slow down fast clients; I want squid to handle slow clients 
(that would hold onto apache for too long a time).

Right now, I have this:
  slowclient->squid
  squid->apache (a little bit transferred)
  slowclient->squid (a little bit transferred)
  squid->apache (a little bit transferred)
  slowclient->squid (a little bit transferred)
  . . .
  squid->apache (connection closed)
  slowclient->squid (connection closed)

I want something like:
  slowclient->squid
  squid->apache (all transferred and connection closed)
  slowclient->squid (a little bit transferred)
  slowclient->squid (a little bit transferred)
  slowclient->squid (a little bit transferred)
  . . .
  slowclient->squid (connection closed)

Are delay_pools for that too?

Thx,
JD


  



[squid-users] TCP connection to 127.0.0.1/80 failed

2008-07-03 Thread WestWind
Hi,
Some error occured when I am using http_load to test my squid server

2008/07/03 18:07:42| TCP connection to 127.0.0.1/80 failed
2008/07/03 18:07:42| TCP connection to 127.0.0.1/80 failed
2008/07/03 18:07:42| TCP connection to 127.0.0.1/80 failed
2008/07/03 18:07:42| TCP connection to 127.0.0.1/80 failed
2008/07/03 18:07:42| TCP connection to 127.0.0.1/80 failed
2008/07/03 18:07:42| TCP connection to 127.0.0.1/80 failed
2008/07/03 18:07:42| TCP connection to 127.0.0.1/80 failed
2008/07/03 18:07:42| TCP connection to 127.0.0.1/80 failed
2008/07/03 18:07:42| TCP connection to 127.0.0.1/80 failed
2008/07/03 18:07:42| TCP connection to 127.0.0.1/80 failed
2008/07/03 18:07:42| TCP connection to 127.0.0.1/80 failed
2008/07/03 18:07:42| Detected REVIVED Parent: squid_test_ip_6
2008/07/03 18:07:46| squidaio_queue_request: WARNING - Queue congestion
2008/07/03 18:07:54| squidaio_queue_request: WARNING - Queue congestion
2008/07/03 18:08:07| squidaio_queue_request: WARNING - Queue congestion
2008/07/03 18:08:29| squidaio_queue_request: WARNING - Queue congestion
2008/07/03 18:09:13| squidaio_queue_request: WARNING - Queue congestion
2008/07/03 18:10:42| squidaio_queue_request: WARNING - Queue congestion

I am sure the back-end server (127.0.0.1:80) is running and has no problem
when this happens.
Maybe http_load puts more pressure on squid than it can handle. But I
hope squid can handle the pressure by using the maximum CPU, memory or any
other resources of the computer.
How can I do that and avoid the error?


Thanks


Re: [squid-users] Slow clients in reverse proxy setup...

2008-07-03 Thread John Doe
  Are there parameters in the configuration to tell squid to go full throttle 
 with the server, close the connection and then continue alone with the client?
 http://www.squid-cache.org/Versions/v3/3.0/cfgman/read_ahead_gap.html

Thx, works great.
Can I set the same value as maximum_object_size, or should it be a little bit 
lower?

  For info, I have KeepAlive Off in my httpd.conf
 Why?

Just in case KeepAlive was a problem in that case...

Thx a lot,
JD


  



[squid-users] Help with integrarting squid with active directory

2008-07-03 Thread Tejpal Amin


HI,

I am trying to integrate squid 3.0 with my windows 2003 active directory
using squid_ldap_auth helper.
I have been unsuccessful in doing so; I request your help.

Tejpal Amin



Disclaimer  Privilege Notice: This e-Mail may contain proprietary, privileged 
and confidential information and is sent for the intended recipient(s) only. 
If, by an addressing or transmission error, this mail has been misdirected to 
you, you are requested to notify us immediately by return email message and 
delete this mail and its attachments. You are also hereby notified that any 
use, any form of reproduction, dissemination, copying, disclosure, 
modification, distribution and/or publication of this e-mail message, contents 
or its attachment(s) other than by its intended recipient(s) is strictly 
prohibited. Any opinions expressed in this email are those of the individual 
and may not necessarily represent those of Tata Capital Ltd. Before opening 
attachment(s), please scan for viruses.




Re: [squid-users] Pseudo-random 403 Forbidden...

2008-07-03 Thread John Doe
  Looks like a false positive.
   
   For info, if I remove the digests, everything works fine...
  
  Cache digests have a higher false-positive rate than ICP, but it can
  happen with ICP as well.

I checked again with no digests and there is an odd behavior. For example:

squid2 has the object
if queried, squid1 asks squid2 and squid3
if queried, squid3 asks only squid2 and not squid1

squid1 has the object
if queried, squid2 asks only squid3 and not squid1 => squid2 also caches the 
object.
if queried, squid3 asks only squid2 and not squid1 => squid3 also caches the 
object.

squid3 has the object
if queried, squid1 asks squid2 and squid3
if queried, squid2 asks only squid3 and not squid1

 In other words please file a bug report at http://bugs.squid-cache.org/

I filed Bug 2403.

Just wondering, are many people successfully using such a setup (more than 2 
squids as proxy-only siblings)?
Just to see if it is a minor bug or conf problem, or if it is uncharted 
territory.

Thx,
JD


  



Re: [squid-users] Squid3 Authentication digest ldap problema

2008-07-03 Thread Edward Ortega
Hi and thanks for all!

It almost works, but I have another problem; I get this in
/var/log/squid3/cache.log:

user filter 'uid=user1', searchbase 'dc=something,dc=com'
2008/07/03 08:50:42| helperHandleRead: unexpected read from
digestauthenticator #1, 16 bytes 'ERR No such user'
2008/07/03 08:50:42| helperHandleRead: unexpected read from
digestauthenticator #1, 1 bytes ' '

It seems like squid3 can't make a sub search below the
beginning of the tree, because the user is in:
uid=user1,ou=someOU,...,o=someDomain,dc=something,dc=com
   
Again Thanks!


Henrik Nordstrom wrote:
 On ons, 2008-07-02 at 14:52 -0430, Edward Ortega wrote:

   
 Ok, I store in the '*street*' attribute something like you said
 ( MD5(username + ":" + realm + ":" + password) ). Do I have to store the
 realm argument in another attribute for squid to understand the hash?

 #/usr/lib/squid3/digest_ldap_auth -v 3 -b 'dc=something,dc=com' -F
 '(&(objectclass=posixAccount)(uid=%s))' -H 'ldap://ldap' -A '*street*' 
 -l -d
 

 digest_ldap_auth expects an attribute with either

 a) plain-text password

or, when using the -e command line option,

 b) realm:hash

 If encrypted mode is used (realm:hash) then the attribute may be
 multi-valued with one value per supported realm.

 Regards
 Henrik
   


RE: [squid-users] httpReadReply: Request not yet fully sent POST http://xxx/yyy.php

2008-07-03 Thread Joe Tiedeman
 Hi Guys,

I've also begun experiencing this issue with a few sites that we host
internally, we have a Mediawiki and a Joomla CMS installation both of
which use Windows Integrated Authentication (Kerberos not NTLM) behind a
squid reverse proxy. The error seems to show up when doing a large POST
such as uploading an image to the wiki or updating a large article in
Joomla and is quite often followed by an error in Firefox saying that
the connection was reset.

It seems that IIS is sending the 401 response before squid and the
client have finished sending the initial request to it; after sniffing
the traffic with wireshark on the client, squid is forwarding the 401
response before the client has finished posting the data.

I'm really at a loss as to what we can do to either fix or work around
the issue. I can't stop using WIA as it's the basis for all our single
sign-on sites. If there's anything else that anyone can suggest, I would
really appreciate it!

If I can help by providing any more information, please let me know

Cheers

Joe


Joe Tiedeman
Support Analyst 
Higher Education Statistics Agency (HESA)
95 Promenade, Cheltenham, Gloucestershire GL50 1HZ
T 01242 211167  F 01242 211122  W www.hesa.ac.uk


-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Wednesday 13 June 2007 22:56
To: Sean Walberg
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] httpReadReply: Request not yet fully sent
POST http://xxx/yyy.php

On Wed, 2007-06-13 at 07:45 -0500, Sean Walberg wrote:

 httpReadReply: Request not yet fully sent POST http://xxx/yyy.php;
 
 -xxx varies, yyy.php is usually the same (most of our POSTs are to the

 same script anyway)

 Reading up on it a bit tells me that this means that the web server 
 has returned data before squid finished POSTing the form.

Yes.

 This is
 usually a PMTU problem in forward-cache scenarios though.  I wouldn't 
 expect PMTU discovery to be a problem on an Ethernet segment where all

 devices have the same MTU.

No. PMTU is not relevant here at all.

How the script behaves is relevant. If the script responds before
reading the complete request then the above message will be seen.

This may occur if

a) The script fails while reading the request or
b) The script doesn't really care what the POST data looks like,
ignoring it.
or
c) The web server responded with an error.

 My initial inclination is to get a packet capture, but these errors 
 are unpredictable so I might be sifting through a lot of data, and I'm

 not even sure what it would tell me.

The most important piece it will tell you is what the response from the
script actually looked like when this problem is seen. This will tell
you if the problem is the script / web server, or if the problem is
related to Squid.

Regards
Henrik

_

Higher Education Statistics Agency Ltd (HESA) is a company limited by
guarantee, registered in England at 95 Promenade Cheltenham GL50 1HZ.
Registered No. 2766993. The members are Universities UK and GuildHE.
Registered Charity No. 1039709. Certified to ISO 9001 and BS 7799. 
 
HESA Services Ltd (HSL) is a wholly owned subsidiary of HESA,
registered in England at the same address. Registered No. 3109219.
_

This outgoing email was virus scanned for HESA by MessageLabs.
_


Re: [squid-users] Recommend for hardware configurations

2008-07-03 Thread Roy M.
Hi,


On 7/3/08, Henrik Nordstrom [EMAIL PROTECTED] wrote:
 On tor, 2008-07-03 at 12:04 +0800, Roy M. wrote:

 Squid uses only one core, so rather Dual core than Quad...


But will it help if I am using an external redirector? (Currently I don't,
but maybe later.)



 For Squid it's easier with JBOD than RAID0. Performance is the same.


If I only have 2 slots of disks, should I use both disks as cache, or
use one for the system and the other one for cache? (E.g. reduce
read/write on the system disk => improve reliability.)


Thanks.


[squid-users] CONNECT errors with 2.7.STABLE2-2

2008-07-03 Thread Ralf Hildebrandt
A tool supposedly worked until May and now, due to the evil squid update
to 2.7.x, won't work anymore. Of course squid is to blame, as always.
Since we all know, the professional tool is written with great care,
adhering to the specs and RFC by knowledgeable people. Unlike squid. Of
course. Call me bitter.

From our logs:

1215083751.310  0 10.47.52.76 TCP_MISS/417 1811 CONNECT 
drm.viasyshc.com:443 - NONE/- text/html
1215083785.295  0 10.47.52.76 TCP_MISS/417 1811 CONNECT 
drm.viasyshc.com:443 - NONE/- text/html
1215083805.308  2 10.47.52.76 TCP_MISS/417 1811 CONNECT 
drm.viasyshc.com:443 - NONE/- text/html
1215083818.308  0 10.47.52.76 TCP_MISS/417 1811 CONNECT 
drm.viasyshc.com:443 - NONE/- text/html
1215083819.294  0 10.47.52.76 TCP_MISS/417 1811 CONNECT 
drm.viasyshc.com:443 - NONE/- text/html

Their app logs:

07/03/08[11:02:09] ERROR: HTTP Transport: POST HTTP error 417 bytes = 16. 
Retrying  Seq No 4, RetryCount 1(max 1)
07/03/08[11:02:09] DEBUG: HEADER INFO START
 Server: squid/2.7.STABLE2
 Date: Thu, 03 Jul 2008 09:02:10 GMT
 Content-Type: text/html
 Content-Length: 1416
 Expires: Thu, 03 Jul 2008 09:02:10 GMT
 X-Squid-Error: ERR_INVALID_REQ 0
 X-Cache: MISS from proxy-cbf-2.charite.de
 X-Cache-Lookup: NONE from proxy-cbf-2.charite.de:8080
 Via: 1.0 proxy-cbf-2.charite.de:8080 (squid/2.7.STABLE2)
 Connection: close
 HEADER INFO END==

How can I debug their crap^h^h^h^hprofessional error free software?

-- 
Ralf Hildebrandt (i.A. des IT-Zentrums) [EMAIL PROTECTED]
Charite - Universitätsmedizin Berlin        Tel.  +49 (0)30-450 570-155
Gemeinsame Einrichtung von FU- und HU-Berlin    Fax.  +49 (0)30-450 570-962
IT-Zentrum Standort CBF send no mail to [EMAIL PROTECTED]


RE: [squid-users] Squid 2.6 not caching downloaded files

2008-07-03 Thread Tony Da Silva
 
Hi Adrian.

I don't think this is the problem, because no files are being cached.
Previously Norton updates would be cached, but not even that is happening
anymore ...

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Adrian Chadd
Sent: Thursday, July 03, 2008 3:31 AM
To: Tony Da Silva
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 2.6 not caching downloaded files

Look at enabling the header logging option (mime_ something in
squid.conf) and see what headers the object is being sent with.

It may be sent with headers which deny caching..



Adrian





[squid-users] GET cache_object://localhost/info on a reverse proxy setup

2008-07-03 Thread David Obando

Dear all,

I'm using Squid as a reverse proxy in a Squid/Pound/Zope/Plone-setup. 
Squid is running on port 80.


I would like to access the cache manager with the munin plugins to 
monitor Squid. The plugins use an HTTP request:

GET cache_object://localhost/info HTTP/1.0

The standard port 3128 isn't active; when asking port 80 I get a 404 error 
from zope.


How can I access the cache manager in such a setup?

My squid.conf is:

hierarchy_stoplist cgi-bin ?
#acl QUERY urlpath_regex cgi-bin \?
#no_cache deny QUERY
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
refresh_pattern (/cgi-bin/|\?) 0 0% 0
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320

# Basic ACLs
acl all src 0.0.0.0/0.0.0.0
acl localhost src 127.0.0.1/32
acl ssl_ports port 443 563
acl safe_ports port 8080 80 443
#acl zope_servers src 127.0.0.2 127.0.0.1
acl manager proto cache_object
acl connect method connect

# deny requests to unknown ports
http_access deny !safe_ports
acl accelerated_protocols proto http https
acl accelerated_domains dstdomain lb.xxx.de
acl accelerated_domains dstdomain lb1.xxx.de
acl accelerated_domains dstdomain lb2.xxx.de
acl accelerated_domains dstdomain xxx.de
acl accelerated_domains dstdomain www.xxx.de
acl accelerated_ports myport 80 443
http_access allow accelerated_domains accelerated_ports 
accelerated_protocols


# Purge access - zope servers can purge but nobody else
acl purge method PURGE
#http_access allow zope_servers purge
http_access deny purge
# Reply access
http_reply_access allow all
# Cache manager setup - cache manager can only connect from localhost
# only allow cache manager access from localhost
http_access allow manager localhost
http_access deny manager
# deny connect to other than ssl ports
http_access deny connect !ssl_ports
# ICP access - anybody can access icp methods
icp_access allow localhost
# And finally deny all other access to this proxy
http_access deny all
coredump_dir /usr/local/squid/cache
http_port 80 defaultsite=www.xxx.de
#http_port 80 defaultsite=lb.xxx.de
#http_port 80
cache_peer 127.0.0.1 parent 8080 0 no-query originserver
#cache_peer 127.0.0.1 parent 8080 0 no-query originserver round-robin
#cache_peer 127.0.0.1 parent 8080 0 no-query
visible_hostname www.xxx.de
cache_mem 2000 MB
maximum_object_size 40960 KB
maximum_object_size_in_memory 100 KB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap LFUDA
cache_dir aufs /var/spool/squid 1 16 256
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}h" "%{User-Agent}h" %Ss:%Sh

access_log /var/log/squid/access.log combined
redirect_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
#redirect_program /etc/squid/redirector.pl
negative_ttl 0 minutes
positive_dns_ttl 60 minutes
negative_dns_ttl 1 minutes


Thanks for your support,
David

--
The day microsoft makes something that doesn't suck is the day they start 
making vacuum cleaners.
gpg --keyserver pgp.mit.edu --recv-keys 1920BD87
Key fingerprint = 3326 32CE 888B DFF1 DED3  B8D2 105F 29CB 1920 BD87



Re: [squid-users] CONNECT errors with 2.7.STABLE2-2

2008-07-03 Thread Adrian Chadd
Attaching the actual request thats being made would probably be a good
place to start :)



Adrian


2008/7/3 Ralf Hildebrandt [EMAIL PROTECTED]:
 A tool supposedly worked until May and now, due to the evil squid update
 to 2.7.x, won't work anymore. Of course squid is to blame, as always.
 Since we all know, the professional tool is written with great care,
 adhering to the specs and RFC by knowledgeable people. Unlike squid. Of
 course. Call me bitter.

 From our logs:

 1215083751.310  0 10.47.52.76 TCP_MISS/417 1811 CONNECT 
 drm.viasyshc.com:443 - NONE/- text/html
 1215083785.295  0 10.47.52.76 TCP_MISS/417 1811 CONNECT 
 drm.viasyshc.com:443 - NONE/- text/html
 1215083805.308  2 10.47.52.76 TCP_MISS/417 1811 CONNECT 
 drm.viasyshc.com:443 - NONE/- text/html
 1215083818.308  0 10.47.52.76 TCP_MISS/417 1811 CONNECT 
 drm.viasyshc.com:443 - NONE/- text/html
 1215083819.294  0 10.47.52.76 TCP_MISS/417 1811 CONNECT 
 drm.viasyshc.com:443 - NONE/- text/html

 Their app logs:

 07/03/08[11:02:09] ERROR: HTTP Transport: POST HTTP error 417 bytes = 16. 
 Retrying  Seq No 4, RetryCount 1(max 1)
 07/03/08[11:02:09] DEBUG: HEADER INFO 
 START
  Server: squid/2.7.STABLE2
  Date: Thu, 03 Jul 2008 09:02:10 GMT
  Content-Type: text/html
  Content-Length: 1416
  Expires: Thu, 03 Jul 2008 09:02:10 GMT
  X-Squid-Error: ERR_INVALID_REQ 0
  X-Cache: MISS from proxy-cbf-2.charite.de
  X-Cache-Lookup: NONE from proxy-cbf-2.charite.de:8080
  Via: 1.0 proxy-cbf-2.charite.de:8080 (squid/2.7.STABLE2)
  Connection: close
  HEADER INFO END==

 How can I debug their crap^h^h^h^hprofessional error free software?

 --
 Ralf Hildebrandt (i.A. des IT-Zentrums) [EMAIL PROTECTED]
 Charite - Universitätsmedizin Berlin        Tel.  +49 (0)30-450 570-155
 Gemeinsame Einrichtung von FU- und HU-Berlin    Fax.  +49 (0)30-450 570-962
 IT-Zentrum Standort CBF send no mail to [EMAIL PROTECTED]




Re: [squid-users] CONNECT errors with 2.7.STABLE2-2

2008-07-03 Thread Ralf Hildebrandt
* Adrian Chadd [EMAIL PROTECTED]:
 Attaching the actual request thats being made would probably be a good
 place to start :)

Yes, how do I log this?

-- 
Ralf Hildebrandt (i.A. des IT-Zentrums) [EMAIL PROTECTED]
Charite - Universitätsmedizin Berlin        Tel.  +49 (0)30-450 570-155
Gemeinsame Einrichtung von FU- und HU-Berlin    Fax.  +49 (0)30-450 570-962
IT-Zentrum Standort CBF send no mail to [EMAIL PROTECTED]


[squid-users] url_rewrite_program doesn't seem to work on squid 2.6 STABLE17

2008-07-03 Thread Martin Jacobson (Jake)
Hi,

I hope that someone on this group can give me some pointers.  I have a squid 
proxy setup running version 2.6 STABLE17 of squid.  I recently upgraded from a 
very old version of squid, 2.4 something.  The proxy sits in front of a search 
appliance and all search requests go through the proxy.  

One of my requirements is to have all search requests for cache:SOMEURL go to a 
URL rewrite program that compares the requested URL to a list of URLs that have 
been blacklisted.  These URLs are one per line in a text file.  Any line that 
starts with # or is blank is discarded by the url_rewrite_program.  This Perl 
program seemed to work fine in the old version but now it doesn't work at all.  

Here is the relevant portion of my Squid conf file:
---
http_port 80 defaultsite=linsquid1o.myhost.com accel

url_rewrite_program /webroot/squid/imo/redir.pl
url_rewrite_children 10


cache_peer searchapp3o.myhost.com parent 80 0 no-query originserver 
name=searchapp proxy-only
cache_peer linsquid1o.myhost.com parent 9000 0 no-query originserver 
name=searchproxy proxy-only
acl bin urlpath_regex ^/cgi-bin/
cache_peer_access searchproxy allow bin
cache_peer_access searchapp deny bin

Here is the Perl program
---
#!/usr/bin/perl

$| = 1;

my $CACHE_DENIED_URL = "http://www.mysite.com/mypage/pageDenied.intel";
my $PATTERNS_FILE = "/webroot/squid/blocked.txt";
my $UPDATE_FREQ_SECONDS = 60;

my $last_update = 0;
my $last_modified = 0;
my $match_function;

my ($url, $remote_host, $ident, $method, $urlgroup);
my $cache_url;

my @patterns;


while (<>) {
   chomp;
   ($url, $remote_host, $ident, $method, $urlgroup) = split;

   update_patterns();

   $cache_url = cache_url($url);
   if ($cache_url) {
      update_patterns();
      if ($match_function->($cache_url)) {
         $cache_url = url_encode($cache_url);
         print "302:$CACHE_DENIED_URL?URL=$cache_url\n";
         next;
      }
   }
   print "\n";
}

sub update_patterns {
   my $now = time();
   if ($now > $last_update + $UPDATE_FREQ_SECONDS) {
      my @a = stat($PATTERNS_FILE);
      my $mtime = $a[9];
      if ($mtime != $last_modified) {
         @patterns = get_patterns();
         $match_function = build_match_function(@patterns);
         $last_modified = $mtime;
      }
      $last_update = $now;   # remember the check, so we stat at most once per interval
   }
}


sub get_patterns {
   my @p = ();
   my $p = "";
   open PATTERNS, "<", $PATTERNS_FILE or die "Unable to open patterns file. $!";
   while (<PATTERNS>) {
      chomp;
      if (!/^\s*#/ && !/^\s*$/) {   # disregard comments and empty lines.
         $p = $_;
         $p =~ s#\/#\\/#g;
         $p =~ s/^\s+//g;
         $p =~ s/\s+$//g;
         if (is_valid_pattern($p)) {
            push(@p, $p);
         }
      }
   }
   close PATTERNS;
   return @p;
}

sub is_valid_pattern {
   my $pat = shift;
   return eval { "" =~ m|$pat|; 1 } || 0;
}


sub build_match_function {
   my @p = @_;
   my $expr = join(' || ', map { "\$_[0] =~ m/$p[$_]/io" } (0..$#p));
   my $mf = eval "sub { $expr }";
   die "Failed to build match function: $@" if $@;
   return $mf;
}

sub cache_url {
   my $url = $_[0];
   my ($script, $qs) = split(/\?/, $url);
   if ($qs) {
      my ($param, $name, $value);
      my @params = split(/&/, $qs);
      foreach $param (@params) {
         ($name, $value) = split(/=/, $param);
         $value =~ tr/+/ /;
         $value =~ s/%([\dA-Fa-f][\dA-Fa-f])/pack("C", hex($1))/eg;
         if ($value =~ /cache:([A-z0-9]{7,20}:)?([A-z]+:\/\/)?([^ ]+)/) {
            if ($2) {
               return $2 . $3;
            } else {
               # return "http://" . $3;
               return $3;
            }
         }
      }
   }
   return "";
}

sub url_encode {
   my $str = $_[0];
   $str =~ tr/ /+/;
   $str =~ s/([&\?=:\/#])/sprintf("%%%02x", ord($1))/eg;
   return $str;
}

Below is a sample of the blocked URLs file

#
# URL Patterns to be Blocked
#---
# This file contains URL patterns which should be blocked
# in requests to the Google cache.
#
# The URL patterns should be entered one per line.
# Blank lines and lines that begin with a hash mark (#)
# are ignored.
#
# Anything that will work inside a Perl regular expression
# should work.
#
# Examples:
# http://www.bad.host/bad_directory/
# ^ftp:
# bad_file.html$

# Enter URLs below this line



www.badsite.com/


So my question, is there a better way of doing this?
Does someone see anything wrong that is keeping this from working in 2.6?

Thanks,
Martin C. Jacobson (Jake)


Re: [squid-users] Reverse Proxy, OWA RPCoHTTPS and NTLM

2008-07-03 Thread Henrik Nordstrom
On tor, 2008-07-03 at 14:02 +0200, Abdessamad BARAKAT wrote:
 Le 3 juil. 08 à 12:46, Henrik Nordstrom a écrit :
 
  On tor, 2008-07-03 at 12:35 +0200, Abdessamad BARAKAT wrote:
 
  I have tried login=PASS without succes. If I have understand
  correctly, the credentials are sent to the backend server without any
  modifications
 
  Yes.
 
  Finally, If I set Basic authentication on the outlook client, it's
  working
 
  Which Squid version?
 
 3.0STABLE7

Then downgrade to 2.7. NTLM passthru is not supported in Squid-3 yet, but
is supported in Squid-2.6 and later Squid-2 versions..

We hope to have the needed workarounds for Microsoft's bending of the
HTTP protocol in place for Squid-3.1, but no guarantee.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Recommend for hardware configurations

2008-07-03 Thread Henrik Nordstrom
On tor, 2008-07-03 at 22:11 +0800, Roy M. wrote:
  Squid uses only one core, so rather Dual core than Quad...

 But will it help if I am using external redirector? (Currently I don't
 but maybe later)

url rewriters generally do not use a lot of CPU. 

 If I only have 2 slots of disks, should I use both disks as cache, or
 use one for system, and the other one for cache? (E.g. reduce
 read/write for system disk = improve realiablitiy)

Depends on the I/O load you expect. But yes, it's often a good idea to
have the cache separate from OS + logs..

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


[squid-users] how safe is server_http11?

2008-07-03 Thread Chris Woodfield
So we're looking to upgrade from 2.6 to 2.7, primarily to get the 
HTTP/1.1 header support. I realize that the full 1.1 spec is not completely 
implemented, but are there any real "Danger, Will Robinson!" 
implications?


Specifically, is there any functionality or access to content that  
would be actively broken because squid is advertising HTTP/1.1 but  
doesn't have the spec completely implemented?


Thanks,

-C




[squid-users] Squid and HTTP Host value

2008-07-03 Thread Julian Gilbert
I am trying to configure squid 2.5 and am looking for some assistance. When I 
make a client request to squid in the form:


GET http://66.102.9.147/
HOST www.google.co.uk

the squid proxy makes the following request to the web server:

GET /
HOST 66.102.9.147

How do I configure squid not to overwrite the host value? The request from 
squid should be sent as:


GET /
HOST www.google.co.uk

Many Thanks,

Julian Gilbert 





[squid-users] Re: Account Expiry

2008-07-03 Thread Patrick G. Victoriano
Hi,

I am very sorry, Henrik, Matus, and everyone who got angry/irritated at what 
I've done.
I did not know the implications of that, so please forgive me this time.
I assure everyone this will not happen again.


Thanks for the replies.


Henrik,

Thank you for the acl regarding the date. I will try this in my conf.
 


 
 
 
Regards,
 
 
TRIK





Re: [squid-users] Slow clients in reverse proxy setup...

2008-07-03 Thread Henrik Nordstrom
On tor, 2008-07-03 at 04:02 -0700, John Doe wrote:
   Are there parameters in the configuration to tell squid to go full 
   throttle 
  with the server, close the connection and then continue alone with the 
  client?
  http://www.squid-cache.org/Versions/v3/3.0/cfgman/read_ahead_gap.html
 
 Thx, works great.
 Can I set the same value as maximum_object_size, or should it be a little bit 
 lower?

It buffers in memory so don't be too aggressive about it..

   For info, I have KeepAlive Off in my httpd.conf
  Why?
 
 Just in case KeepAlive was a problem in that case...

It's not.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


RE: [squid-users] LDAP Authentication with Umlauts

2008-07-03 Thread Henrik Nordstrom
On tor, 2008-07-03 at 12:39 +0200, [EMAIL PROTECTED] wrote:
 Hi,
 
 I also had problems with umlauts. We use our Lotus Domino Server as LDAP 
 server and since an update from version 6.5 to 8, our users are unable to 
 authenticate via IE or Firefox if their password contains umlauts.

HTTP authentication uses ISO-8859-1, while LDAP uses UTF-8..

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] CONNECT errors with 2.7.STABLE2-2

2008-07-03 Thread Henrik Nordstrom
On tor, 2008-07-03 at 16:43 +0200, Ralf Hildebrandt wrote:
 A tool supposedly worked until May and now, due to the evil squid update
 to 2.7.x, won't work anymore. Of course squid is to blame, as always.
 Since we all know, the professional tool is written with great care,
 adhering to the specs and RFC by knowledgeable people. Unlike squid. Of
 course. Call me bitter.
 
 From our logs:
 
 1215083751.310  0 10.47.52.76 TCP_MISS/417 1811 CONNECT 
 drm.viasyshc.com:443 - NONE/- text/html

417 is Expectation Failed, and means the application sent an Expect:
header which cannot be fulfilled by Squid.

Most likely this is Expect: 100-continue, as it's the only expectation
defined in HTTP/1.1. It beats me why one would send that in a CONNECT
request, but it's not strictly disallowed.

As we kind of expected there would be applications out there that do not
know how to deal with Expectation Failed and retry their requests
without the expectation, we added a directive to tell Squid to ignore
this. Very RFC-ignorant, but...

ignore_expect_100 on

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] CONNECT errors with 2.7.STABLE2-2

2008-07-03 Thread Henrik Nordstrom
On tor, 2008-07-03 at 20:16 +0200, Ralf Hildebrandt wrote:
 * Adrian Chadd [EMAIL PROTECTED]:
  Attaching the actual request thats being made would probably be a good
  place to start :)
 
 Yes, how do I log this?

log_mime_hdrs on

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] GET cache_object://localhost/info on a reverse proxy setup

2008-07-03 Thread Henrik Nordstrom
On tor, 2008-07-03 at 17:01 +0200, David Obando wrote:
 Dear all,
 
 I'm using Squid as a reverse proxy in a Squid/Pound/Zope/Plone-setup. 
 Squid is running on port 80.
 
 I would like to access the cache manager with the munin plugins to 
 monitor Squid. The plugins use a HTTP request
 GET cache_object://localhost/info HTTP/1.0.
 Standard port 3128 isn't active, when asking port 80 I get a 404-error 
 from zope.
 
 How can I access the cache manager in such a setup?

Are you sending the query to Squid, or directly to Zope?

What I usually do in reverse proxy setups is to set up a normal 3128
listening port on loopback for cachemgr and squidclient to use.

http_port 127.0.0.1:3128
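
The munin plugins (or a quick manual test) can then query that port, for 
example with the squidclient tool shipped with Squid:

squidclient -h 127.0.0.1 -p 3128 mgr:info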

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] how safe is server_http11?

2008-07-03 Thread Henrik Nordstrom
On tor, 2008-07-03 at 16:55 -0400, Chris Woodfield wrote:
 So we're looking to upgrade from 2.6 to 2.7, primarily to get the HTTP/ 
 1.1 header support. I realize that the full 1.1 spec is not completely  
 implemented, but are there any real Danger, Will Robinson!  
 implications?

server_http11 is pretty safe to enable. Actually all of the http/1.1
stuff is quite safe to enable, but server_http11 more than the others.

 Specifically, is there any functionality or access to content that  
 would be actively broken because squid is advertising HTTP/1.1 but  
 doesn't have the spec completely implemented?

The main feature missing is forwarding of 1xx responses.
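
Enabling it is a single squid.conf directive:

server_http11 on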

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Squid and HTTP Host value

2008-07-03 Thread Henrik Nordstrom
On tor, 2008-07-03 at 22:29 +0100, Julian Gilbert wrote:
 I am trying to configure squid 2.5 and looking for some assistance. When I 
 make client request to squid in the form:
 
 GET http://66.102.9.147/
 HOST www.google.co.uk

That's a request for http://66.102.9.147/. The Host header in there MUST
be ignored.

 the squid proxy makes the following request to the web server:
 
 GET /
 HOST 66.102.9.147

Which is correct.

 How do I configure squid not to overwire the host value? The request from 
 squid should be sent as:
 
 GET /
 HOST www.google.co.uk

Make sure to request that from the start.

Not updating the Host header to match the request is a major security
hazard.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Re: Account Expiry

2008-07-03 Thread Henrik Nordstrom
On fre, 2008-07-04 at 06:40 +0800, Patrick G. Victoriano wrote:

 I am very sorry Henrik, Matus and to everyone who got angry/irritated for 
 what I’ve done.

not angry. Only slightly annoyed. Almost missed your question entirely
because of that, but you were lucky and I noticed the change in subject
within the thread..

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] many parseHttpRequest: Unsupported method found

2008-07-03 Thread Amos Jeffries

Siu-kin Lam wrote:
Hi Amos 

Thanks for the reply. 
I found most of the unsupported methods in cache.log include control codes. I think they come from some BT clients. 



Ah, in that case there is nothing that can be done in squid.
You need to educate users on proper use of proxies with BT, if you think 
it's worth the effort.


Amos

Thanks 
S K 
 
--- On Thu, 7/3/08, Amos Jeffries [EMAIL PROTECTED] wrote:



From: Amos Jeffries [EMAIL PROTECTED]
Subject: Re: [squid-users] many parseHttpRequest: Unsupported method found
To: [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Date: Thursday, July 3, 2008, 9:44 AM
Siu-kin Lam wrote:
Dear all 


I am using version2.7 stable 3 as  a proxy server. I

have found many parseHttpRequest: Unsupported
method in cache.log. 
Is it possible to fix it? 

If you know what the methods are supposed to be. Squid 2.x
has a 
squid.conf option extension_methods ... which allows you to
name up to 
20 new ones which are allowed through.


http://www.squid-cache.org/Versions/v2/2.7/cfgman/extension_methods.html


Also, could it change to allow any Unsupported
method by default ? 


The change is not easy and we have already done it in Squid
3.1 
(currently 3-HEAD) with no intentions on back-porting.


Amos
--
Please use Squid 2.7.STABLE3 or 3.0.STABLE7



  



--
Please use Squid 2.7.STABLE3 or 3.0.STABLE7


Re: [squid-users] Squid and HTTP Host value

2008-07-03 Thread Amos Jeffries

Julian Gilbert wrote:

I am trying to configure squid 2.5 and looking for some assistance.


The first assistance we can give is upgrade to 3.0 or 2.7.
2.5 is well and truly obsolete now.

When 
I make client request to squid in the form:


GET http://66.102.9.147/
HOST www.google.co.uk

the squid proxy makes the following request to the web server:

GET /
HOST 66.102.9.147

How do I configure squid not to overwrite the host value? The request 
from squid should be sent as:


GET /
HOST www.google.co.uk


The client asked for http://66.102.9.147/, nothing to do with google as 
far as HTTP is concerned. It's a security feature to prevent domain 
hijacking.


Amos
--
Please use Squid 2.7.STABLE3 or 3.0.STABLE7


Re: [squid-users] url_rewrite_program doesn't seem to work on squid 2.6 STABLE17

2008-07-03 Thread Amos Jeffries

Martin Jacobson (Jake) wrote:

Hi,

I hope that someone on this group can give me some pointers.  I have a squid proxy setup running version 2.6 stable 17 of squid.  I recently upgraded from a very old version of squid, 2.4 something.  The proxy sits in front of a search appliance and all search requests goes through the proxy.  

One of my requirements is to have all search requests for cache:SOMEURL go to a URL rewrite program that compares the requested URL to a list of URLs that have been blacklisted.  These URLs are one per line in a text file.  Any line that starts with # or is blank is discarded by the url_rewrite_program.  This Perl program seemed to work fine in the old version but now it doesn't work at all.  


Here is the relevant portion of my Squid conf file:
---
http_port 80 defaultsite=linsquid1o.myhost.com accel

url_rewrite_program /webroot/squid/imo/redir.pl
url_rewrite_children 10


cache_peer searchapp3o.myhost.com parent 80 0 no-query originserver 
name=searchapp proxy-only
cache_peer linsquid1o.myhost.com parent 9000 0 no-query originserver 
name=searchproxy proxy-only
acl bin urlpath_regex ^/cgi-bin/
cache_peer_access searchproxy allow bin
cache_peer_access searchapp deny bin

[... quoted Perl program and blocked-URLs sample snipped; see the original message above ...]

So my question, is there a better way of doing this?



You would be much better off defining this as an external_acl program 
and possibly using deny_info to do the 'redirect' when it blocks a request.
That way the ACL-lookup results can also be cached in squid, reducing the 
server load of doing url re-writes.
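
A sketch of that approach, reusing the names from the script above (the 
helper path check_blocked.pl and the acl name are placeholders; the helper 
would read one URL per line and answer OK for blacklisted URLs, ERR 
otherwise):

external_acl_type blocklist children=10 %URI /webroot/squid/imo/check_blocked.pl
acl blocked_cache external blocklist
deny_info http://www.mysite.com/mypage/pageDenied.intel blocked_cache
http_access deny blocked_cache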




Re: [squid-users] many parseHttpRequest: Unsupported method found

2008-07-03 Thread Siu-kin Lam
Hi Amos 
is this kind of connection harmful to the Squid daemon? I mean, besides Squid 
dropping this connection, would this kind of connection cause instability in 
the daemon? 

Also, how could I know the destination IP/host of this kind of connection? 
I can find the client (source) IP address in access.log only. 

Thanks 
Best Regards, 
S K 


--- On Thu, 7/3/08, Amos Jeffries [EMAIL PROTECTED] wrote:

 From: Amos Jeffries [EMAIL PROTECTED]
 Subject: Re: [squid-users] many parseHttpRequest: Unsupported method found
 To: [EMAIL PROTECTED]
 Cc: squid-users@squid-cache.org
 Date: Thursday, July 3, 2008, 11:35 PM
 Siu-kin Lam wrote:
  Hi Amos 
  
  Thanks for the reply. 
  I found most unsupported method in
 cache.log are control-code included. I think they are come
 from some BT clients. 
  
 
 Ah, in that case there is nothing that can be done in
 squid.
 You need to educate users on proper use of proxies with BT,
 if you think 
 its worth the effort.
 
 Amos
 
  Thanks 
  S K 
   
  --- On Thu, 7/3/08, Amos Jeffries
 [EMAIL PROTECTED] wrote:
  
  From: Amos Jeffries [EMAIL PROTECTED]
  Subject: Re: [squid-users] many
 parseHttpRequest: Unsupported method found
  To: [EMAIL PROTECTED]
  Cc: squid-users@squid-cache.org
  Date: Thursday, July 3, 2008, 9:44 AM
  Siu-kin Lam wrote:
  Dear all 
 
  I am using version2.7 stable 3 as  a proxy
 server. I
  have found many parseHttpRequest:
 Unsupported
  method in cache.log. 
  Is it possible to fix it? 
  If you know what the methods are supposed to be.
 Squid 2.x
  has a 
  squid.conf option extension_methods ... which
 allows you to
  name up to 
  20 new ones which are allowed through.
 
 
 http://www.squid-cache.org/Versions/v2/2.7/cfgman/extension_methods.html
 
  Also, could it change to allow any
 Unsupported
  method by default ? 
 
  The change is not easy and we have already done it
 in Squid
  3.1 
  (currently 3-HEAD) with no intentions on
 back-porting.
 
  Amos
  -- 
  Please use Squid 2.7.STABLE3 or 3.0.STABLE7
  
  

 
 
 -- 
 Please use Squid 2.7.STABLE3 or 3.0.STABLE7
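
For reference, the extension_methods directive mentioned above takes a
space-separated list of extra method names to accept; a minimal squid.conf
sketch (the method names here are just examples, common WebDAV/SVN verbs):

extension_methods REPORT MERGE MKACTIVITY CHECKOUT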


  


Re: [squid-users] many parseHttpRequest: Unsupported method found

2008-07-03 Thread Amos Jeffries
 Hi Amos
 is this kind of connection harmful to the Squid daemon? I mean, besides
 Squid dropping the connection, would this kind of connection cause
 instability in the daemon?

No. Squid drops the TCP connection as soon as the error is found and
reported.


 Also, how can I find the destination IP/host of this kind of connection?
 I can find only the client (source) IP address in access.log.

Squid never handled the request so never had the destination info.

Amos










Re: [squid-users] many parseHttpRequest: Unsupported method found

2008-07-03 Thread Siu-kin Lam
Hi Amos

I understand. Thanks for the information 

Best Regards, 
S K 



[squid-users] Delay Pools: Big values for maximum and resto

2008-07-03 Thread Sergio Belkin
Hi Squid community,

Does it impact performance if I set maximum and restore to very high
values instead of infinite (-1)? I do that in order to audit the traffic
level. If I set -1, squidclient is not clear about the usage... please
tell me if I'm wrong...

I am using squid 2.6.x on Centos 5.1

Thanks in advance

-- 
--
Open Kairos http://www.openkairos.com
Watch More TV http://sebelk.blogspot.com
Sergio Belkin -
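
For illustration, a class-1 delay pool with large finite buckets instead of
-1 might look like this (all numbers are made up; delay_parameters takes
restore-rate/maximum in bytes per second and bytes):

delay_pools 1
delay_class 1 1
delay_access 1 allow all
delay_parameters 1 125000000/125000000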


[squid-users] https site access problem!!!

2008-07-03 Thread Shiva Raman
Dear All

I have a Squid+ICAP installation running with the following squid.conf:

-
http_port 80

hierarchy_stoplist cgi-bin ?

acl QUERY urlpath_regex cgi-bin \?

no_cache deny QUERY

cache_mem 8 MB

cache_dir ufs /usr/local/squidICAP/var/cache 500 16 256

cache_access_log /usr/local/squidICAP/var/logs/access.log

cache_log /usr/local/squidICAP/var/logs/cache.log

cache_store_log /usr/local/squidICAP/var/logs/store.log

redirect_program /opt/Websense/bin/WsRedtor

redirect_children 30

auth_param basic children 5

auth_param basic realm Squid proxy-caching web server

auth_param basic credentialsttl 2 hours

auth_param basic casesensitive off

refresh_pattern ^ftp:           1440    20%     10080

refresh_pattern ^gopher:        1440    0%      1440

refresh_pattern .               0       20%     4320

acl squidICAP dstdomain "/usr/local/squidICAP/bad_domains"

header_access Accept-Encoding deny squidICAP

acl all src 0.0.0.0/0.0.0.0

acl manager proto cache_object

acl localhost src 127.0.0.1/255.255.255.255

acl to_localhost dst 127.0.0.0/8

acl SSL_ports port 443 563

acl Safe_Ports port 81  # non-standard port

acl Safe_ports port 80  # http

acl Safe_ports port 21  # ftp

acl Safe_ports port 443 563 # https, snews

acl Safe_ports port 70  # gopher

acl Safe_ports port 210 # wais

acl Safe_ports port 1025-65535  # unregistered ports

acl Safe_ports port 280 # http-mgmt

acl Safe_ports port 488 # gss-http

acl Safe_ports port 591 # filemaker

acl Safe_ports port 777 # multiling http

acl CONNECT method CONNECT

acl GET method GET

http_access allow all

http_access allow manager localhost

http_access deny manager

http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports

http_access deny all

http_reply_access allow all

icp_access allow all

cache_effective_user squid

visible_hostname squidproxy

coredump_dir /usr/local/squidICAP/var/cache

redirector_bypass off
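
A side note on the configuration above (not necessarily the cause of the
SSL failure): http_access rules are evaluated top-down and the first match
wins, so the early "http_access allow all" masks every rule below it,
including "deny CONNECT !SSL_ports". The usual ordering would be:

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow all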






I am not able to open all SSL websites through this Squid, but I am able to
access a few SSL sites through it using the lynx command-line browser.

Following is one of the sites tested: https://secure.icicidirect.com

I am not sure whether it is a Squid or a Linux SSL issue.

When I try to access the above webserver through the Squid proxy, it
is unable to open the website. When I try it with links, it shows only
SSL ERROR.

I tried to check the OpenSSL connectivity from the command prompt and
got the following error:

[EMAIL PROTECTED] openssl s_client -connect
secure.icicidirect.com:443 -showcerts

CONNECTED(0003)
write:errno=104
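
For what it is worth, errno 104 on Linux is ECONNRESET: the remote end
reset the TCP connection during the handshake. One way to narrow it down
is to force a particular protocol version (these s_client flags exist in
the OpenSSL releases of that era; output will vary):

openssl s_client -connect secure.icicidirect.com:443 -tls1
openssl s_client -connect secure.icicidirect.com:443 -ssl3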


Any suggestions / workarounds for this problem? Please let me know.

Regards

Shiva Raman