RE: [squid-users] ipcCreate error:

2010-04-14 Thread Henrik Nordström
ons 2010-04-14 klockan 04:47 + skrev GIGO .:
 Hi Henrik,
  
 Thank you, this problem is resolved by placing squid_kerb_auth in
 the libexec folder. Now I believe I also have to place any other
 helpers, like squid_ldap_group, in the same location to get them to work.

Yes. If you have SELinux enabled on the host, the security policy
for Squid restricts it to executing helpers in /usr/libexec/squid/ only.
Which is a good thing in terms of security.
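
For reference, once the helper binaries live under /usr/libexec/squid/, the
squid.conf lines might look like this (a sketch only; the children count,
LDAP host, base DN and filter are assumptions, not taken from this thread):

```
auth_param negotiate program /usr/libexec/squid/squid_kerb_auth
auth_param negotiate children 10

external_acl_type ldap_group %LOGIN /usr/libexec/squid/squid_ldap_group \
    -b "dc=example,dc=com" -f "(&(cn=%g)(memberUid=%u))" -h ldap.example.com
```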

Regards
Henrik




Re: [squid-users] Re: need help port 80

2010-04-14 Thread da...@lafourmi.de

good morning everybody,

thanks at first for your help. I think I am a dummy :)

I have written:

acl OnlyFox browser -i Firefox/
http_access deny !OnlyFox

and second test:

acl OnlyFox browser -i .*Firefox.*
http_access deny !OnlyFox

but with Internet Explorer I can still surf?

Can someone help me make sure that really only Firefox can access the
internet and no other tool...

Thanks and regards,
dave



Amos Jeffries schrieb:

On Tue, 13 Apr 2010 21:54:40 +0200, Heinz Diehl h...@fancy-poultry.org
wrote:

 On 13.04.2010, da...@lafourmi.de wrote:
  but i dont understand
  regexp  pattern match on user agent
  can you give me an example for dummies please ;)

 acl Nofox browser -i .*Firefox.*
 http_access deny Nofox

Ouch. Very computing intensive.
I don't know why you people insist on sticking .* before and after the
pattern. When Squid processes it, the regex effectively becomes:
  .*.*Firefox.*.*

Just this will do to catch the browser tag:
  acl firefox browser Firefox/

Amos
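
Putting Amos's pattern into a complete allow-only-Firefox sketch (the ACL
name is arbitrary, and the final deny-all is my addition):

```
# browser ACLs are unanchored regex matches against the User-Agent
# header, so a plain substring is enough -- no leading/trailing .*
acl OnlyFox browser Firefox/
http_access allow OnlyFox
http_access deny all
```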


  




[squid-users] Trouble writing external acl helper

2010-04-14 Thread marriedto51

I am almost certainly missing something very basic, but I haven't found out
what after searching here and elsewhere, so any help will be greatly
appreciated.

I'm using squid 3.1 on Fedora 12 (64-bit).

I want to write an external acl helper (for fun!) and started with a toy
example written in C which is only going to allow the URL
"http://www.google.com". It works as I expect when I run it at the command
line (lines are read one-by-one from standard input and a reply of OK or
ERR appears on standard output), but the output I get from squid says:

2010/04/14 08:40:23.731| helperOpenServers: Starting 5/5 'toy_helper'
processes
...
2010/04/14 08:40:31.197| WARNING: toy_helper #1 (FD 7) exited
2010/04/14 08:40:31.197| WARNING: toy_helper #3 (FD 11) exited
2010/04/14 08:40:31.198| WARNING: toy_helper #2 (FD 9) exited
2010/04/14 08:40:31.198| WARNING: toy_helper #4 (FD 13) exited
2010/04/14 08:40:31.198| Too few toy_helper processes are running
...
FATAL: The toy_helper helpers are crashing too rapidly, need help!

In the squid.conf file I've put:

external_acl_type toy_helper %PATH /tmp/squid-tests/toy_helper
acl toy external toy_helper

This squid.conf and the toy_helper executable are both in /tmp/squid-tests,
and everything there is world-readable.

Lastly, here is the source for toy_helper:

#include <stdio.h>
#include <string.h>
#define BUFSIZE 8192

int
main(int argc, char *argv[])
{
  char buf[BUFSIZE];

  /* make standard output and input unbuffered */
  setvbuf(stdout, NULL, _IONBF, 0);
  setvbuf(stdin, NULL, _IONBF, 0);

  /* main loop: read lines from stdin */
  while ( fgets(buf, sizeof(buf), stdin) )
  {
    if ( strcmp("http://www.google.com/\n", buf) == 0 )
      printf("OK\n");
    else
      printf("ERR\n");
  }

  return 0;
}

Thanks in advance for any clues,
John.
-- 
View this message in context: 
http://n4.nabble.com/Trouble-writing-external-acl-helper-tp1839464p1839464.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: need help port 80

2010-04-14 Thread John Doe
From: da...@lafourmi.de da...@lafourmi.de
 acl OnlyFox browser -i Firefox/
 http_access deny !OnlyFox
 but with internet explorer i can surf?

I think the following is easier to read:
  http_access allow OnlyFox
  http_access deny all
Can you list all your http_access lines, or the whole config?
I guess you checked that squid is indeed used...?

JD


  


[squid-users] Is it possible to deactivate partial download ?

2010-04-14 Thread Dieter Bloms
Hi,

we use the following setup:

client -> squid 2.7.STABLE8 -> http-virusscanner (avwebgate from Avira) ->
internet

Some clients, like the Adobe updater, request their updates as partial
(Range) downloads.
This causes trouble with our virusscanner.
So, is it possible to disable partial download requests entirely?
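
One knob worth testing here (a sketch, not a verified fix for avwebgate) is
range_offset_limit, which controls how Squid treats Range requests:

```
# With -1, Squid ignores the client's Range header and fetches the
# complete object from the origin, so the virus scanner always sees
# whole files; the client is still served its requested range.
range_offset_limit -1
```

Note that this can make Squid download an entire large file even when the
client only wanted a small piece of it.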

Thank you for a hint


-- 
Best regards

  Dieter Bloms

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.


Re: [squid-users] Trouble writing external acl helper

2010-04-14 Thread John Doe
From: marriedto51 johnmwil...@talktalk.net
 2010/04/14 08:40:31.197| WARNING: toy_helper #1 (FD 7) exited
 2010/04/14 08:40:31.197| WARNING: toy_helper #3 (FD 11) exited
 2010/04/14 08:40:31.198| WARNING: toy_helper #2 (FD 9) exited
 2010/04/14 08:40:31.198| WARNING: toy_helper #4 (FD 13) exited
 2010/04/14 08:40:31.198| Too few toy_helper processes are running

This works for me with squid 2.7 (I did not use setvbuf):

  fprintf(stderr, "helper: starting...\n");
  fflush(stderr);
  while (fgets(input, sizeof(input), stdin)) {
    if ((cp = strchr(input, '\n')) == NULL) {
      fprintf(stderr, "filter: input too big: %s\n", input);
    } else {
      *cp = '\0';
    }
    ...
    fflush(stderr);
    fflush(stdout);
  }
  fprintf(stderr, "helper: stopping...\n");


  


[squid-users] [SOLVED] Trouble writing external acl helper

2010-04-14 Thread marriedto51

Thank you! Following your example I can now get this to work.

I was perhaps misled by reading source code for something called
check_group.c into thinking the setvbuf() calls were needed.

John.
-- 
View this message in context: 
http://n4.nabble.com/Trouble-writing-external-acl-helper-tp1839464p1839566.html
Sent from the Squid - Users mailing list archive at Nabble.com.


RE: [squid-users] Squid 3.1 ICAP Issue with REQMOD 302

2010-04-14 Thread Niall O'Cuilinn
Hi Christos

Thanks for the reply.

Sorry that was my mistake, I removed some sensitive info from the location 
header URL but forgot to modify the null-body value.

It should have read null-body=100 (I removed 60 chars/bytes). You might be 
right and it might still be out by two. I will have a look.

Have you got Squid 3.1 working with ICAP? I am wondering if there are any known 
issues with ICAP support in v3.1?

Thanks
Niall

Christos Tsantilas wrote:
Niall O'Cuilinn wrote:
 Hi,
 
 I have recently moved from Squid 3.0 to Squid 3.1. I am trying to integrate 
 it with an ICAP server.
 
 I am having a problem where Squid 3.1  is rejecting some responses from the 
 ICAP server which Squid 3.0 accepted.
 
 The response in question is a REQMOD response where the ICAP server is 
 returning a HTTP 302 response rather than modifying the original HTTP 
 request.

Hi Niall,
  I believe the Encapsulated header in the ICAP server response is wrong.
The null-body=160 should be the size of the encapsulated HTTP headers; 
if I am not wrong, it should be null-body=102.

Regards,
Christos


 
 Here is the ICAP request and response:
 
 ICAP Request from Squid:
 
 REQMOD icap://10.1.1.25:1344/reqmod ICAP/1.0\r\n
 Host: 10.1.1.25:1344\r\n
 Date: Mon, 12 Apr 2010 14:25:39 GMT\r\n
 Encapsulated: req-hdr=0, null-body=398\r\n
 Allow: 204\r\n
 \r\n
 GET http://c.proxy.com/www.test.com/ HTTP/1.1\r\n
 Host: c.proxy.com\r\n
 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.2.3) 
 Gecko/20100401 Firefox/3.6.3\r\n
 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n
 Accept-Language: en-gb,en;q=0.5\r\n
 Accept-Encoding: gzip,deflate\r\n
 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n
 Pragma: no-cache\r\n
 Cache-Control: no-cache\r\n
 \r\n
 
 Response from ICAP Server:
 
 ICAP/1.0 200 OK\r\n
 Date: Mon, 12 Apr 2010 14:25:15 GMT\r\n
 Connection: keep-alive\r\n
 ISTag: ReqModService\r\n
 Encapsulated: res-hdr=0,null-body=160\r\n
 \r\n
 HTTP/1.x 302 Found\r\n
 content-type: text/html\r\n
 location: https://localhost:8443/mib/authentication\r\n
 \r\n
 \r\n
 
 Squid displays an ICAP error in the browser and states that an illegal 
 response was received from the ICAP server.
 
 Any ideas what might be wrong? Although the ICAP server worked correctly 
 with Squid 3.0 I am open to the possibility that the issue is with the ICAP 
 response and that the old Squid was simply more tolerant than v3.1.
 
 Thanks in advance,
 Niall
 
 Niall Ó Cuilinn 
 Product Development
 ChangingWorlds - A Unit of Amdocs Interactive
 t: +353 1 4401268 | niall.ocuil...@changingworlds.com 
 
 AMDOCS  CUSTOMER EXPERIENCE SYSTEMS INNOVATION
 
 
 This message and the information contained herein is proprietary and 
 confidential and subject to the Amdocs policy statement,
 you may review at http://www.amdocs.com/email_disclaimer.asp



[squid-users] How to squid 2.6 transparent

2010-04-14 Thread Netmail
Hi,
I can't find a clear guide to configuring Squid as a transparent proxy for
version 2.6.STABLE16.
Thanks for your support,
have a nice day




Re: [squid-users] Squid 3.1 ICAP Issue with REQMOD 302

2010-04-14 Thread Niall O'Cuilinn
Hi,

Just resending the correct request and response:

ICAP Request from Squid:

REQMOD icap://10.1.1.25:1344/reqmod ICAP/1.0\r\n
Host: 10.1.1.25:1344\r\n
Date: Mon, 12 Apr 2010 14:25:39 GMT\r\n
Encapsulated: req-hdr=0, null-body=398\r\n
Allow: 204\r\n
\r\n
GET http://c.proxy.com/www.test.com/ HTTP/1.1\r\n
Host: c.proxy.com\r\n
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.2.3) 
Gecko/20100401 Firefox/3.6.3\r\n
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n
Accept-Language: en-gb,en;q=0.5\r\n
Accept-Encoding: gzip,deflate\r\n
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n
Pragma: no-cache\r\n
Cache-Control: no-cache\r\n
\r\n

Response from ICAP Server:

ICAP/1.0 200 OK\r\n
Date: Mon, 12 Apr 2010 14:25:15 GMT\r\n
Connection: keep-alive\r\n
ISTag: ReqModService\r\n
Encapsulated: res-hdr=0,null-body=100\r\n
\r\n
HTTP/1.x 302 Found\r\n
content-type: text/html\r\n
location: https://localhost:8443/mib/authentication\r\n
\r\n
\r\n

Niall Ó Cuilinn 
Product Development
ChangingWorlds - A Unit of Amdocs Interactive
t: +353 1 4401268 | niall.ocuil...@changingworlds.com 

AMDOCS  CUSTOMER EXPERIENCE SYSTEMS INNOVATION





[squid-users] Reverse Proxy Cluster Issues

2010-04-14 Thread senad.cimic
Hi,

I am a first-time Squid user and was wondering if I could get some help. I
tried to find answers to these questions online, but unsuccessfully...

I have 2 Squid boxes set up as reverse proxies in a cluster (they're
using each other as siblings). On the backend I'm using a single Tomcat
server that both Squid boxes use to retrieve content. The Squid version I'm
using is 3.0. I'm running into a couple of issues:

Issue #1:
Whenever a Squid box receives a request for a URL that contains a query
string (e.g. http://site1:8080/RSSSource/rss/feed?max=1) it does not contact
the sibling cache for that resource, but retrieves it from the backend
server right away. What's odd is that it works (sometimes...) when the query
string is not present (e.g. http://site1:8080/RSSSource/rss/feed).

Issue #2:
Let's say squidA receives a request for some resource (e.g.
http://site1:8080/RSSSource/rss/feed). If squidA doesn't have it in its
cache, it will check whether it's available from squidB. However, if squidA
has an expired version of that resource, it doesn't contact squidB but
retrieves it directly from the backend server, which should not be the
case (it should check whether squidB has a valid copy available), correct?

Here are relevant squid.conf lines for one of the squids (everything
else is unchanged, config for the second squid is the same except for
sibling references):

##
http_port 80 accel vhost
icp_port 3130

acl sites_server1 dstdomain site1
acl siblings src address.of.squidA.com

cache_peer address.of.backend.server.com parent 8080 0 no-query
no-digest originserver name=server1
cache_peer address.of.squidA.com sibling 80 3130 name=sibling1 no-digest
allow-miss weight=5

cache_peer_access server1 allow sites_server1
cache_peer_access server1 allow siblings
cache_peer_access sibling1 allow sites_server1

http_access allow sites_server1
http_access allow siblings
http_access deny all

icp_access allow siblings
icp_access deny all

miss_access deny siblings

###

I tried using HTCP instead of ICP, but I got the same results... Does anyone
know a solution to these two problems? One thing I didn't mention is that
the Tomcat backend server includes conditional GET headers in its responses;
however, I don't think it matters...

Let me know if more info is needed. Thanks in advance!


[squid-users] squid_kerb_auth multiple GET request

2010-04-14 Thread Tiery DENYS
Hi,

I am using squid with the squid_kerb_auth plugin for authentication on a
kerberized network.
Squid listens on port 3128 and clients use this proxy.

The transparent authentication works pretty well, but if I look at the
network flow, I see that for each website request the client makes two
requests:
1) a normal GET request
   Squid replies that proxy authentication is required
2) a second GET request carrying the TGS (service ticket)

Is it possible for clients to automatically send the TGS in the first request?

Thanks in advance,

Tiery


[squid-users] squid.conf.documented instead of squid.conf?

2010-04-14 Thread Boniforti Flavio
Hello list.

I'm on Debian SID and wanted to update squid3 to the latest 3.1.1-2
version. What happened is that dpkg returned me following error:

Configurazione di squid3 (3.1.1-2)...
sed: errore di lettura su stdin: Is a directory
dpkg: errore nell'elaborare squid3 (--configure):
 il sottoprocesso vecchio script di post-installation ha restituito lo
stato di errore 4
Si sono verificati degli errori nell'elaborazione:
 squid3
E: Sub-process /usr/bin/dpkg returned an error code (1)

The *second* line is the one that made me investigate a little bit:
sed: error reading stdin: Is a directory... Thus I checked /etc/squid3
and got this:

drwxr-xr-x  2 root root 4096 14 apr 15:54 squid.conf

Entering that directory, I discovered:

-rw-r--r-- 1 root root 198563 12 apr 16:09 squid.conf.documented

My questions are:

A) where did my customized squid.conf disappear to?
B) is it normal that now the /etc/squid3/squid.conf is not anymore a
file, but a directory?
C) how can I extract the actual configuration from the running squid3?
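
Regarding question C, one possibility (assuming squidclient is installed and
cachemgr access is allowed from localhost) is to dump the live configuration
through the cache manager interface:

```
# Ask the running squid3 for its in-memory configuration:
squidclient mgr:config > /tmp/squid.conf.recovered
```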

Many thanks in advance.

Flavio Boniforti

PIRAMIDE INFORMATICA SAGL
Via Ballerini 21
6600 Locarno
Switzerland
Phone: +41 91 751 68 81
Fax: +41 91 751 69 14
URL: http://www.piramide.ch
E-mail: fla...@piramide.ch 


[squid-users] Squid HTTP Keytab SPN question

2010-04-14 Thread Nick Cairncross
Hi,

I'd like confirmation that something is possible, but first it's best to
detail what I want:

I want to use a separate computer account to authenticate my users against. I 
know that this requires an HTTP.keytab and a computer object in AD with an SPN. 
I would like to use msktutil for this.
If my proxy server is called SQUID1 and is already happily joined to the domain, 
then I need to create a new machine account which I will call AUTH1.

1) Do I need to create a DNS entry for AUTH1 (with the same IP as SQUID1)?
2) If so, do I need just an A record?
3) I have evidently got confused over the msktutil switches and values and so 
I'm specifying something wrong. What have I done? See below...

I used this command after a kinit myusername:
msktutil -c -b CN=COMPUTERS -s HTTP/squid1.[mydomain] iz -k 
/etc/squid/HTTP.keytab --computer-name auth1 --upn HTTP/squid1 --server dc1 
-verbose

This created the computer account auth1 in the computers ou, added 
HTTP/squid1.mydomain to SPN and HTTP/squid1.mydom...@mydomain to the UPN.
It also created the keytab HTTP.keytab. Klist reports:

   2 HTTP/squid1.[mydoma...@[mydomain]
   2 HTTP/squid1.[mydoma...@[mydomain]
   2 HTTP/squid1.[mydoma...@[mydomain]

However, cache.log shows this when I then fire up my IE:

2010/04/14 14:52:46| authenticateNegotiateHandleReply: Error validating user 
via Negotiate. Error returned 'BH gss_acquire_cred() failed: Unspecified GSS 
failure.  Minor code may provide more information. No principal in keytab 
matches desired name'

Thanks as always,
Nick




** Please consider the environment before printing this e-mail **

The information contained in this e-mail is of a confidential nature and is 
intended only for the addressee.  If you are not the intended addressee, any 
disclosure, copying or distribution by you is prohibited and may be unlawful.  
Disclosure to any party other than the addressee, whether inadvertent or 
otherwise, is not intended to waive privilege or confidentiality.  Internet 
communications are not secure and therefore Conde Nast does not accept legal 
responsibility for the contents of this message.  Any views or opinions 
expressed are those of the author.

Company Registration details:
The Conde Nast Publications Ltd
Vogue House
Hanover Square
London W1S 1JU

Registered in London No. 226900


Re: [squid-users] squid.conf.documented instead of squid.conf?

2010-04-14 Thread Silamael
On 04/14/2010 04:02 PM, Boniforti Flavio wrote:
 Hello list.
 
 I'm on Debian SID and wanted to update squid3 to the latest 3.1.1-2
 version. What happened is that dpkg returned me following error:
 
 Configurazione di squid3 (3.1.1-2)...
 sed: errore di lettura su stdin: Is a directory
 dpkg: errore nell'elaborare squid3 (--configure):
  il sottoprocesso vecchio script di post-installation ha restituito lo
 stato di errore 4
 Si sono verificati degli errori nell'elaborazione:
  squid3
 E: Sub-process /usr/bin/dpkg returned an error code (1)
 
 The *second* line is the one that made me investigate a little bit:
 sed: error reading stdin: Is a directory... Thus I checked /etc/squid3
 and got this:
 
 drwxr-xr-x  2 root root 4096 14 apr 15:54 squid.conf
 
 Entering that directory, I discovered:
 
 -rw-r--r-- 1 root root 198563 12 apr 16:09 squid.conf.documented
 
 My questions are:
 
 A) where did my customized squid.conf disappear to?
 B) is it normal that now the /etc/squid3/squid.conf is not anymore a
 file, but a directory?
 C) how can I extract the actual configuration from the running squid3?

I would say, instead of asking here in the squid mailing list, you
should better file a bug report against the debian package. Looks like
something is wrong there. I don't think this is a Squid problem.

-- Matthias


RE: [squid-users] Trouble writing external acl helper

2010-04-14 Thread Adnan Shahzad
Dear All,

I am adding my problem, in case you people can solve it with an external ACL
helper...

My clients are mostly hitting local PC names as HTTP requests, which keeps
file descriptors in use, and as a result the internet speed becomes slow,
even dead slow.

Is it a virus? And if I want to allow them all, how can I do that? The common
thing among them all is that there is no .com or .org or .net, so is it
possible to make an ACL that allows HTTP requests without a domain name
(.com, .net or .org etc.)?

Looking forward to your response
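
A sketch of such an ACL (dstdom_regex is a real ACL type; the localnet ACL
and the choice between allowing and denying are assumptions to adapt):

```
# Match destination hosts that contain no dot at all, i.e. plain
# machine names like "iqra-pc":
acl dotless_host dstdom_regex -i ^[^.]+$

# Either allow them (here gated on a hypothetical localnet ACL)...
http_access allow localnet dotless_host
# ...or deny them outright to stop the request storm:
# http_access deny dotless_host
```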



1271255037.123  0 10.90.0.103 TCP_DENIED/407 7629 OPTIONS http://iqra-pc/ - NONE/- text/html
1271255037.194  1 10.90.0.103 TCP_DENIED/407 7629 OPTIONS http://iqra-pc/ - NONE/- text/html
1271255037.264  0 10.90.0.103 TCP_DENIED/407 7629 OPTIONS http://iqra-pc/ - NONE/- text/html
[... the same TCP_DENIED/407 OPTIONS http://iqra-pc/ entry repeats many times per second ...]
1271255039.016  0 10.90.0.103 TCP_DENIED/407 7629 OPTIONS http://iqra-pc/ - NONE/- text/html
1271255039.090  0 10.90.0.103 TCP_DENIED/407 7629 OPTIONS http://iqra-pc/ - NONE/- text/html

-Original Message-
From: marriedto51 [mailto:johnmwil...@talktalk.net] 
Sent: Wednesday, April 14, 2010 1:38 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Trouble writing external acl helper


I am almost certainly missing something very basic, but I haven't found out
what after searching here and elsewhere, so any help will be greatly
appreciated.

I'm using squid 3.1 on Fedora 12 (64-bit).

I want to write an external acl helper (for fun!) and started with a toy
example written in C which is only going to allow the URL
http://www.google.com;. It works as I expect when I run it at the command
line (lines are read one-by-one from standard input and a reply of OK or
ERR appears on standard output), but the output I get from squid says:

2010/04/14 08:40:23.731| helperOpenServers: Starting 5/5 'toy_helper'
processes
...
2010/04/14 08:40:31.197| WARNING: toy_helper #1 (FD 7) exited
2010/04/14 08:40:31.197| WARNING: toy_helper #3 (FD 11) exited
2010/04/14 08:40:31.198| WARNING: toy_helper #2 (FD 9) exited
2010/04/14 08:40:31.198| WARNING: toy_helper #4 (FD 13) exited
2010/04/14 08:40:31.198| Too few toy_helper processes are running
...
FATAL: The toy_helper helpers are crashing too rapidly, need help!

In the squid.conf file I've put:

external_acl_type toy_helper %PATH /tmp/squid-tests/toy_helper
acl toy external toy_helper

This squid.conf and the toy_helper executable are both in /tmp/squid-tests,
and everything there is world-readable.

Lastly, here is the source for toy_helper:

   1 #include stdio.h
   2 #include string.h
  

AW: [squid-users] How to squid 2.6 transparent

2010-04-14 Thread Zeller, Jan
don't find a clear guide to configure squid transparent for the squid version 
2.6.STABLE16 .

Hi,

there are multiple ways to intercept/redirect traffic. Did you consult 
http://wiki.squid-cache.org/ConfigExamples/#Interception ?

regards,

Jan




Re: [squid-users] Reverse Proxy Cluster Issues

2010-04-14 Thread Ron Wheeler

On 14/04/2010 11:34 AM, senad.ci...@thomsonreuters.com wrote:

Hi Ron,

Thank you for the quick response. I'm still not clear on these
unfortunately:

Issue 1:
I believe squid should use query parameters as well when evaluating
cached objects. Let's say I request object from squidA
http://site1:8080/RSSSource/rss/feed?max=1. Let's say squidA doesn't
have it in cache, so it will get it from the backend server. If I
request the same object while it is fresh in cache, it will get it from
cache. Similarly, if I request object with different query parameter
http://site1:8080/RSSSource/rss/feed?max=2 it will recognize that it is
a request for a different object (which is not cached) and it will
retrieve it from the backend server. The issue in these scenarios is
that it never checks if its sibling has those objects.

It is impossible for Squid to know what Tomcat will do with ?max=1.
It needs to let Tomcat generate a new page.


Issue 2:
I'm using ICP to communicate between siblings. If squidA receives a
request for an object that is in its cache but expired, it should send an
ICP query to squidB, correct? Depending on whether it is available,
squidB will respond with either a UDP_HIT or UDP_MISS response, so there
shouldn't be any loops, correct?

Beyond my pay scale. Perhaps someone with more detailed knowledge of
peer caches will explain what has to be done to make this work, if it is
possible.




Thanks again,
Senad

-Original Message-
From: Ron Wheeler [mailto:rwhee...@artifact-software.com]
Sent: Wednesday, April 14, 2010 9:11 AM
To: Cimic, Senad (Legal)
Subject: Re: [squid-users] Reverse Proxy Cluster Issues

On 14/04/2010 9:13 AM, senad.ci...@thomsonreuters.com wrote:

 Hi,

 I am first time squid user and was wondering if could get some help. I
 tried to find answers to these questions on-line, but unsuccessfully...

 I have 2 squid boxes setup as reverse proxies in a cluster (they're
 using each other as siblings). On the backend I'm using single tomcat
 server that both squid boxes use to retrieve content. Squid version I'm
 using is 3.0. I'm running into couple issues:

 Issue #1:
 Whenever squid box receives request for url that contains querystring
 (e.g. - http://site1:8080/RSSSource/rss/feed?max=1) it does not contact
 sibling cache for that resource, but it retrieves it from the backend
 server right away. What's odd is that it works (sometimes...) when query
 string is not present (e.g. http://site1:8080/RSSSource/rss/feed).

 Issue #2:
 Let's say squidA receives request for some resource (e.g.
 http://site1:8080/RSSSource/rss/feed). If squidA doesn't have it in its
 cache, it will check if it's available from squidB. However, if squidA
 has expired version of that resource, it doesn't contact squidB but
 retrieves it directly from the backend server, which should not be the
 case (it should check if squidB had valid copy available), correct?

 Here are relevant squid.conf lines for one of the squids (everything
 else is unchanged, config for the second squid is the same except for
 sibling references):

 ##
 http_port 80 accel vhost
 icp_port 3130

 acl sites_server1 dstdomain site1
 acl siblings src address.of.squidA.com

 cache_peer address.of.backend.server.com parent 8080 0 no-query
 no-digest originserver name=server1
 cache_peer address.of.squidA.com sibling 80 3130 name=sibling1 no-digest
 allow-miss weight=5

 cache_peer_access server1 allow sites_server1
 cache_peer_access server1 allow siblings
 cache_peer_access sibling1 allow sites_server1

 http_access allow sites_server1
 http_access allow siblings
 http_access deny all

 icp_access allow siblings
 icp_access deny all

 miss_access deny siblings

 ###

 I tried using HTCP instead of ICP, but I got same results... Does anyone
 know solution to these 2 problems? One thing I didn't mention is that
 tomcat backend server is including conditional get headers in responses,
 however I don't think it matters...

 Let me know if more info needed. Thanks in advance!


Issue # 1 is the correct behaviour. Squid has no way of predicting what
you might want to do with those arguments.

Issue #2 looks like a good decision. What if both have expired versions?

They could get into a loop going back and forth forever unless someone
is really careful.

Ron


   




[squid-users] squidGuard processes stay running

2010-04-14 Thread Sam Przyswa

Hi,

I configured Squid3 properly on Debian SID, then I configured squidGuard
1.2.0, but when I add the link to squidGuard in squid3 I get 4 squidGuard
processes running, the 1-minute load average rises to 10 with a lot of disk
access, and squid stops working.

How can I test the squidGuard config to fix the problem?
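
One way to exercise the squidGuard configuration outside Squid (the config
path is an assumption) is to feed it a single line in Squid's redirector
input format and watch the debug output:

```
# Format: URL client-ip/fqdn ident method
echo "http://example.com/ 10.0.0.1/- - GET" | \
    squidGuard -c /etc/squid/squidGuard.conf -d
```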

Thanks for your help.

Sam.

--
Sam Przyswa - Chef de projet
Email: s...@arial-concept.com
Arial Concept - Intégrateur Internet
36, rue de Turin - 75008 - Paris - France
Tel: 01 40 54 86 04 - Fax: 01 40 54 83 01
Fax privé: 09 57 12 27 22
Skype ID: arial-concept
Web: http://www.arial-concept.com





Re: [squid-users] Squid 3.1 ICAP Issue with REQMOD 302

2010-04-14 Thread Niall O'Cuilinn
Hi

I had a look at the null-body values. They correctly match the length of the 
HTTP 302 response headers block. The extra two bytes are an extra line return: 
you can see that after the last header there are three '\r\n' line returns. I 
tried removing one of them, but the result was the same.

I also turned on more detailed debug logging and found this in the cache.log:

--
2010/04/14 17:03:05.494| HttpReply::sanityCheckStartLine: missing or invalid 
status number in 'HTTP/1.x 302 Found
content-type: text/html
location: 
https://localhost:8443/mib/authentication/checkCookie?backURL=http%3A%2F%2Fc.proxy.com%2Fwww.google.ie

'
-

I changed the ICAP Server to return 'HTTP/1.0' instead of 'HTTP/1.x' and now it 
is working.

This worked using 'HTTP/1.x' on Squid 3.0. The version I'm using now is Squid 3.1.1.

Thanks
Niall





Re: [squid-users] Reverse Proxy Cluster Issues

2010-04-14 Thread Amos Jeffries
On Wed, 14 Apr 2010 08:13:01 -0500, senad.ci...@thomsonreuters.com
wrote:
 Hi,
 
 I am first time squid user and was wondering if could get some help. I
 tried to find answers to these questions on-line, but unsuccessfully... 
 
 I have 2 squid boxes setup as reverse proxies in a cluster (they're
 using each other as siblings). On the backend I'm using single tomcat
 server that both squid boxes use to retrieve content. Squid version I'm
 using is 3.0. I'm running into couple issues:
 
 Issue #1:
 Whenever squid box receives request for url that contains querystring
 (e.g. - http://site1:8080/RSSSource/rss/feed?max=1) it does not contact
 sibling cache for that resource, but it retrieves it from the backend
 server right away. What's odd is that it works (sometimes...) when query
 string is not present (e.g. http://site1:8080/RSSSource/rss/feed). 
 
 Issue #2:
 Let's say squidA receives request for some resource (e.g.
 http://site1:8080/RSSSource/rss/feed). If squidA doesn't have it in its
 cache, it will check if it's available from squidB. However, if squidA
 has expired version of that resource, it doesn't contact squidB but
 retrieves it directly from the backend server, which should not be the
 case (it should check if squidB had valid copy available), correct? 
 
 Here are relevant squid.conf lines for one of the squids (everything
 else is unchanged, config for the second squid is the same except for
 sibling references):

Nope.

The relevant lines are hierarchy_stoplist (which prevents peers being asked
for query-string URLs) and the cache/no_cache controls (which prevent
QUERY ACL matches from being stored locally).

Both of which need to be removed from your config.
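
For reference, the stock squid.conf lines that typically cause this behaviour
look something like the following (exact ACL names vary between versions;
treat this as a sketch of what to look for and remove):

```
# Old default lines that stop query-string URLs from being cached
# or fetched from peers -- remove or comment these out:
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
hierarchy_stoplist cgi-bin ?
```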

Amos


Re: [squid-users] Squid 3.1 ICAP Issue with REQMOD 302

2010-04-14 Thread Amos Jeffries
On Wed, 14 Apr 2010 18:10:04 +0100, Niall O'Cuilinn
nocuil...@amdocs.com wrote:
 Hi
 
 I had a look at the null-body values. They correctly match the length of
 the HTTP 302 response headers block. The extra two bytes are an extra line
 return. You can see that after the last header there are three '\r\n' line
 returns. I tried removing one of them but the result was the same.
 
 I also turned on more detailed debug logging and found this in the
 cache.log:
 
 --
 2010/04/14 17:03:05.494| HttpReply::sanityCheckStartLine: missing or
 invalid status number in 'HTTP/1.x 302 Found
 content-type: text/html
 location:

https://localhost:8443/mib/authentication/checkCookie?backURL=http%3A%2F%2Fc.proxy.com%2Fwww.google.ie
 
 '
 -
 
 I changed the ICAP Server to return 'HTTP/1.0' instead of 'HTTP/1.x' and
 now it is working.
 
 This worked using 'HTTP/1.x' on Squid 3.0. The version I'm using is
 Squid 3.1.1.
 
 Thanks
 Niall

Looks like your previous version of 3.0 was vulnerable to CVE-2009-2622.
Squid-3.1.1 is fixed.
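
The parser failure is easy to reproduce outside Squid. Here is a small
standalone sketch (illustrative only, not Squid's actual code) of a strict
status-line check, showing why 'HTTP/1.x' is rejected while 'HTTP/1.0' passes:

```python
import re

# Strict HTTP status-line check: the version components must be numeric
# digits, so "HTTP/1.x" is rejected while "HTTP/1.0" is accepted.
STATUS_LINE = re.compile(r"^HTTP/(\d)\.(\d)\s+(\d{3})\s*(.*)$")

def parse_status_line(line):
    """Return (major, minor, code, reason), or None if the line is invalid."""
    m = STATUS_LINE.match(line)
    if m is None:
        return None
    return (int(m.group(1)), int(m.group(2)), int(m.group(3)), m.group(4))

print(parse_status_line("HTTP/1.x 302 Found"))  # -> None (invalid version)
print(parse_status_line("HTTP/1.0 302 Found"))  # -> (1, 0, 302, 'Found')
```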

Amos


Re: [squid-users] squid.conf.documented instead of squid.conf ?

2010-04-14 Thread Amos Jeffries
On Wed, 14 Apr 2010 16:02:06 +0200, Boniforti Flavio
fla...@piramide.ch
wrote:
 Hello list.
 
 I'm on Debian SID and wanted to update squid3 to the latest 3.1.1-2
 version. What happened is that dpkg returned me following error:
 
 Setting up squid3 (3.1.1-2)...
 sed: error reading stdin: Is a directory
 dpkg: error processing squid3 (--configure):
  old post-installation script subprocess returned error exit status 4
 Errors were encountered while processing:
  squid3
 E: Sub-process /usr/bin/dpkg returned an error code (1)
 
 The *second* line is the one that made me investigate a little bit:
 sed: error reading stdin: Is a directory... Thus I checked /etc/squid3
 and got this:
 
 drwxr-xr-x  2 root root 4096 14 apr 15:54 squid.conf
 
 Entering that directory, I discovered:
 
 -rw-r--r-- 1 root root 198563 12 apr 16:09 squid.conf.documented
 
 My questions are:
 
 A) where did my customized squid.conf disappear to?

unknown.

 B) is it normal that now the /etc/squid3/squid.conf is not anymore a
 file, but a directory?

No. It's a new bug in the Debian squid3-3.1.1-2 package.
Hopefully Luigi can fix it again.

 C) how can I extract the actual configuration from the running squid3?

squidclient mgr:con...@password
 (catch-22: usually requires the password as configured in
cachemgr_passwd in squid.conf)
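
As a sketch (host, port and password below are placeholders for your own
setup; adjust them to match your squid.conf):

```
squidclient -h 127.0.0.1 -p 3128 "mgr:config@YourCacheMgrPassword"
```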

Amos


Re: [squid-users] squidGuard processes stay running

2010-04-14 Thread Amos Jeffries
On Wed, 14 Apr 2010 18:59:42 +0200, Sam Przyswa s...@arial-concept.com
wrote:
 Hi,
 
 I configured Squid3 on Debian SID, then configured squidGuard
 1.2.0, but when I add the link to squidGuard in squid3 I get 4 squidGuard
 processes running, the 1-minute load average rises to 10 with lots of disk
 access, and squid stops working.
 
 How can I test the squidGuard config to fix the problem?

 1) Read cache.log to see if any error messages are produced by
squidGuard.

 2) Identify the default low-privileged user your Squid runs as (the basic
default is "nobody"; your OS may differ). Run squidGuard as that user and
see what happens.
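
Step 2 can be done with something like the following. The user "proxy" and
the config path are assumptions; check cache_effective_user in your
squid.conf and your actual squidGuard config location. squidGuard reads
redirector request lines on stdin, so we feed it one test line in debug
mode:

```
su -s /bin/sh proxy -c \
    "echo 'http://example.com/ 127.0.0.1/- - GET' \
     | squidGuard -c /etc/squid/squidGuard.conf -d"
```

Any permission or database errors should then show up on stderr.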

NP: 3.1 uses a fair bit more RAM than earlier Squid versions, since it
defaults to caching objects in memory. If that has caused swapping you will
see a large rise in CPU and disk IO.
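
If memory turns out to be the problem, the relevant squid.conf knobs look
like this (the values are illustrative only; tune them to your RAM):

```
cache_mem 64 MB
maximum_object_size_in_memory 512 KB
```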

Amos


Re: [squid-users] Squid HTTP Keytab SPN question

2010-04-14 Thread Khaled Blah
Hi Nick,

what I don't get in your question is this: if squid is already joined
to your domain as squid1, why create another machine account auth1?
Maybe I missed out on something.

Your msktutil parameters look fine though.
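
One way to narrow down the "No principal in keytab matches desired name"
error is to compare what is actually stored in the keytab against the
principal the helper is looking for. A sketch (the paths and principal name
are assumptions based on Nick's mail):

```
# List the principals actually stored in the keytab:
klist -k /etc/squid/HTTP.keytab

# Make sure Squid's helper uses that keytab and a matching principal,
# e.g. in squid.conf (the -s value must match a keytab entry exactly):
#   auth_param negotiate program /usr/libexec/squid/squid_kerb_auth \
#       -s HTTP/squid1.mydomain
# and export KRB5_KTNAME=/etc/squid/HTTP.keytab in Squid's environment.
```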

Regards,
Khaled

2010/4/14 Nick Cairncross nick.cairncr...@condenast.co.uk:
 Hi,

 I'd like confirmation that something is possible, but first it's best to
 detail what I want:
 
 I want to use a separate computer account to authenticate my users against. I 
 know that this requires an HTTP.keytab and a computer object in AD with an 
 SPN. I would like to use msktutil for this.
 If my proxy server is called SQUID1 and is already happily joined to the 
 domain then I need to create a new machine account which I will call AUTH1.

 1) Do I need to create a DNS entry for AUTH1 (with the same IP as SQUID1)?
 2) If so, do I need just an A record?
 3) I have evidently got confused over the msktutil switches and values and so 
 I'm specifying something wrong. What have I done? See below...

 I used this command after a kinit myusername:
 msktutil -c -b CN=COMPUTERS -s HTTP/squid1.[mydomain] iz -k 
 /etc/squid/HTTP.keytab --computer-name auth1 --upn HTTP/squid1 --server dc1 
 -verbose

 This created the computer account auth1 in the computers ou, added 
 HTTP/squid1.mydomain to SPN and HTTP/squid1.mydom...@mydomain to the UPN.
 It also created the keytab HTTP.keytab. Klist reports:

   2 HTTP/squid1.[mydoma...@[mydomain]
   2 HTTP/squid1.[mydoma...@[mydomain]
   2 HTTP/squid1.[mydoma...@[mydomain]

 However cache.log shows this when I then fire up my IE:

 2010/04/14 14:52:46| authenticateNegotiateHandleReply: Error validating user 
 via Negotiate. Error returned 'BH gss_acquire_cred() failed: Unspecified GSS 
 failure.  Minor code may provide more information. No principal in keytab 
 matches desired name'

 Thanks as always,
 Nick







RE: [squid-users] Intermittent connections patch

2010-04-14 Thread HC Barfield

Is nobody interested? Is there at least someone who would apply the patch
(Squid for Windows) so I can test it?
 
  
_
If It Exists, You'll Find it on SEEK. Australia's #1 job site
http://clk.atdmt.com/NMN/go/157639755/direct/01/