Re: [squid-users] storeAufsOpenDone

2008-02-19 Thread Kinkie
On Feb 18, 2008 5:59 PM, pokeman [EMAIL PROTECTED] wrote:

 I just switched my cache_dir from diskd to aufs and received this error:

 storeAufsOpenDone: (1) Operation not permitted
 2008/02/18 21:55:29|/ncache8/00/01/01FD


Are file permissions OK in your cache dirs?
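
A quick sanity check of the cache_dir permissions can be sketched like this (hedged: the path and mode below are stand-ins, not taken from the original post; substitute the real cache_dir and squid's cache_effective_user):

```shell
# Stand-in cache_dir path; substitute your real one (e.g. /ncache8).
CACHE_DIR=/tmp/demo-cache
mkdir -p "$CACHE_DIR/00/01"
chmod 555 "$CACHE_DIR/00/01"   # simulate a directory squid cannot write to

# Print mode, owner and name for each level; the cache_effective_user
# from squid.conf needs rwx on every one of these directories.
stat -c '%a %U %n' "$CACHE_DIR" "$CACHE_DIR/00" "$CACHE_DIR/00/01"
```

Any directory along that chain not writable by the squid user would produce exactly the "(1) Operation not permitted" symptom above.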

-- 
/kinkie


[squid-users] strtocFile WARNING: empty ACL

2008-02-19 Thread adrian.wells

SQUID 2.5 stable12

Have just built a SuSE linux box and installed squid, as I have done many 
times, copied the conf and ACL text files from a running proxy and get the 
following list of errors for each ...


2008/02/19 10:27:42| strtokFile: /etc/squid/etc/girls.txt
2008/02/19 10:27:42| aclParseAclLine: WARNING: empty ACL: 
auid/etc/girls.txt


If I copy and paste the address from the error into a file browser, I can see 
the file!


This works perfectly on the previous machine running the same version of 
SuSE & Squid!

I have re-installed squid.
Can anyone please offer a solution?

Kind regards
Adrian

girls.txt is just a list of MAC addresses
e.g.
00:1B:77:8A:D5:CF # Wir User Name
00:1B:24:7E:CB:B1 # LAN User Name
etc.

SNIP squid conf
# Groups follow...
#

acl girls arp /etc/squid/etc/girls.txt
acl temp arp /etc/squid/etc/temp.txt

acl boys arp /etc/squid/etc/boys.txt
acl staff arp /etc/squid/etc/staff.txt
#
# Times follow...
#
acl 24Hr time M T W H F A S 00:00-23:59
/SNIP




Re: [squid-users] strtocFile WARNING: empty ACL

2008-02-19 Thread adrian.wells

adrian.wells wrote:

SQUID 2.5 stable12


Please try a later version of squid.

Will do once sorted ;-)



Have just built a SuSE linux box and installed squid, as I have done many 
times, copied the conf and ACL text files from a running proxy and get 
the following list of errors for each ...


2008/02/19 10:27:42| strtokFile: /etc/squid/etc/girls.txt
2008/02/19 10:27:42| aclParseAclLine: WARNING: empty ACL: 
auid/etc/girls.txt


I'm thinking at first glance that the 'auid' prefix might be the problem.
It's usually a sign of non-ASCII characters cut-and-pasted into the 
squid.conf text.
At this stage I've only removed a few unrequired ACL's, not edited the text 
in any way.
While waiting for a reply, I've started rebuilding the box from scratch as 
it only takes a few minutes.

I will check the text for anomalies in the meantime - thanks ;-)
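
One way to do that check for the non-ASCII characters mentioned above is a quick grep (a sketch; the sample file here is invented for illustration):

```shell
# Build a sample ACL file containing a stray non-breaking space
# (bytes 0xC2 0xA0) of the kind cut-and-paste often introduces.
printf '00:1B:77:8A:D5:CF # user one\n00:1B:24:7E:CB:B1\302\240# user two\n' \
    > /tmp/acl-check.txt

# Report any line holding bytes outside printable ASCII.
LC_ALL=C grep -n '[^ -~]' /tmp/acl-check.txt
```

Running the same grep over the real girls.txt and squid.conf would show whether stray bytes are the cause.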

P.S. I've lost the original thread, but would like to thank you for helping 
with a previous problem: you suggested that the ISP might be blocking ports, 
and they were! 3128! I have tried 3129 and it works fine. Are there any 
recommended ports to use with squid other than 80 or 8080?


Regards

Adrian




If I copy and paste the address from the error into a file browser, I can 
see the file!


This works perfectly on the previous machine running the same version of 
SuSE & Squid!

I have re-installed squid.
Can anyone please offer a solution?

Kind regards
Adrian

girls.txt is just a list of MAC addresses
e.g.
00:1B:77:8A:D5:CF # Wir User Name
00:1B:24:7E:CB:B1 # LAN User Name
etc.

SNIP squid conf
# Groups follow...
#

acl girls arp /etc/squid/etc/girls.txt
acl temp arp /etc/squid/etc/temp.txt

acl boys arp /etc/squid/etc/boys.txt
acl staff arp /etc/squid/etc/staff.txt
#
# Times follow...
#
acl 24Hr time M T W H F A S 00:00-23:59
/SNIP




Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.





Re: [squid-users] DNS-based reverse proxy peer selection, 2.5 vs 2.6

2008-02-19 Thread Amos Jeffries

Sven Edge wrote:
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 


Is there a way to do this in 2.6?
Yes, with a little trickery in DNS. You need to use DNS views so that the
public sees squid as being the A/AAAA record for the domain and squid itself
does not. If squid ever finds itself as a source server for any of the
domains it's accelerating you get a forwarding loop.


That's more or less how we do things currently with 2.5.


Yes. It's the same old behaviour without the advanced per-ACL routing 
cache_peer allow.





Other than that catch; configure squid as a normal 2.6 accelerator with
vhost and defaultsite on the http_port line, omit any cache_peer_*
settings, and set as an open-proxy for the domains you are providing.
An external acl helper may be needed to accurately limit the open-proxy
behaviour to just the hosted domains.


I think that's what I'm trying, and it fails to find a web server to
talk to.


Hmm, you can do a spot-check on squid with
   squidclient mgr:ipcache | grep www.scran.ac.uk

or track this in cache.log with debug_options 14,5
to find out what it's looking up and what it's trying to use for the 
source server when going direct.




Using just http_port 80 accel vhost defaultsite=www.scran.ac.uk, and
requesting http://www.scran.ac.uk/ from outside our network gives the
following:

debug 3:
2008/02/19 10:14:30| fwdStart: 'http://www.scran.ac.uk/'
2008/02/19 10:14:30| storeLockObject: key
'B13D9EB5D8D657257342FBE9C74C77D8' count=3
2008/02/19 10:14:30| peerSelect: http://www.scran.ac.uk/
2008/02/19 10:14:30| storeLockObject: key
'B13D9EB5D8D657257342FBE9C74C77D8' count=4
2008/02/19 10:14:30| cbdataLock: 0xdc6968
2008/02/19 10:14:30| peerSelectFoo: 'GET www.scran.ac.uk'
2008/02/19 10:14:30| peerSelectFoo: direct = DIRECT_NO


Looks like possibly a never_direct or a deny ACL getting in the way.


2008/02/19 10:14:30| peerSelectIcpPing: http://www.scran.ac.uk/
2008/02/19 10:14:30| neighborsCount: 0
2008/02/19 10:14:30| peerSelectIcpPing: counted 0 neighbors
2008/02/19 10:14:30| peerGetSomeParent: GET www.scran.ac.uk
2008/02/19 10:14:30| getDefaultParent: returning NULL
2008/02/19 10:14:30| peerSourceHashSelectParent: Calculating hash for
rem.ote.ad.dr
2008/02/19 10:14:30| getRoundRobinParent: returning NULL
2008/02/19 10:14:30| getFirstUpParent: returning NULL
2008/02/19 10:14:30| getAnyParent: returning NULL
2008/02/19 10:14:30| getDefaultParent: returning NULL
2008/02/19 10:14:30| peerSelectCallback: http://www.scran.ac.uk/
2008/02/19 10:14:30| Failed to select source for
'http://www.scran.ac.uk/'
2008/02/19 10:14:30|   always_direct = 0
2008/02/19 10:14:30|never_direct = 0
2008/02/19 10:14:30|timedout = 0
2008/02/19 10:14:30| cbdataValid: 0xdc6968
2008/02/19 10:14:30| fwdStartComplete: http://www.scran.ac.uk/
2008/02/19 10:14:30| fwdStartFail: http://www.scran.ac.uk/
2008/02/19 10:14:30| fwdFail: ERR_CANNOT_FORWARD Service Unavailable
http://www.scran.ac.uk/
...


Separately, I also tried setting a cache_peer to the shared hostname of
the web servers, but that just meant squid used the first IP address it
got for that hostname for all requests, as opposed to load balancing
between all the ip addresses, and completely ignored that some requests'
hostnames didn't resolve to the ip address it was using.

If it's relevant, in both cases, according to cachemgr the IP cache does
contain all of the correct values for www.scran.ac.uk.

Setting prefer_direct on doesn't do anything.

Ooh. Using always_direct works, including switching origin servers
immediately after IPcache gets to 0 TTL, although am I right in thinking
that precludes having multiple squids as siblings?


Yes, that would work. And you are correct: that will prevent siblings 
from being tried.
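
For the archives, the working approach can be written as a squid.conf fragment along these lines (a sketch only; the acl name is invented):

```
# Accelerated sites squid should always fetch directly from the origin,
# using its own (internal-view) DNS answers for the hostname.
acl accel_sites dstdomain www.scran.ac.uk
always_direct allow accel_sites
```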


Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


RE: [squid-users] DNS-based reverse proxy peer selection, 2.5 vs 2.6

2008-02-19 Thread Sven Edge
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 

 Is there a way to do this in 2.6?

Yes, with a little trickery in DNS. You need to use DNS views so that the
public sees squid as being the A/AAAA record for the domain and squid itself
does not. If squid ever finds itself as a source server for any of the
domains it's accelerating you get a forwarding loop.

That's more or less how we do things currently with 2.5.

Other than that catch; configure squid as a normal 2.6 accelerator with
vhost and defaultsite on the http_port line, omit any cache_peer_*
settings, and set as an open-proxy for the domains you are providing.
An external acl helper may be needed to accurately limit the open-proxy
behaviour to just the hosted domains.

I think that's what I'm trying, and it fails to find a web server to
talk to.

Using just http_port 80 accel vhost defaultsite=www.scran.ac.uk, and
requesting http://www.scran.ac.uk/ from outside our network gives the
following:

debug 3:
2008/02/19 10:14:30| fwdStart: 'http://www.scran.ac.uk/'
2008/02/19 10:14:30| storeLockObject: key
'B13D9EB5D8D657257342FBE9C74C77D8' count=3
2008/02/19 10:14:30| peerSelect: http://www.scran.ac.uk/
2008/02/19 10:14:30| storeLockObject: key
'B13D9EB5D8D657257342FBE9C74C77D8' count=4
2008/02/19 10:14:30| cbdataLock: 0xdc6968
2008/02/19 10:14:30| peerSelectFoo: 'GET www.scran.ac.uk'
2008/02/19 10:14:30| peerSelectFoo: direct = DIRECT_NO
2008/02/19 10:14:30| peerSelectIcpPing: http://www.scran.ac.uk/
2008/02/19 10:14:30| neighborsCount: 0
2008/02/19 10:14:30| peerSelectIcpPing: counted 0 neighbors
2008/02/19 10:14:30| peerGetSomeParent: GET www.scran.ac.uk
2008/02/19 10:14:30| getDefaultParent: returning NULL
2008/02/19 10:14:30| peerSourceHashSelectParent: Calculating hash for
rem.ote.ad.dr
2008/02/19 10:14:30| getRoundRobinParent: returning NULL
2008/02/19 10:14:30| getFirstUpParent: returning NULL
2008/02/19 10:14:30| getAnyParent: returning NULL
2008/02/19 10:14:30| getDefaultParent: returning NULL
2008/02/19 10:14:30| peerSelectCallback: http://www.scran.ac.uk/
2008/02/19 10:14:30| Failed to select source for
'http://www.scran.ac.uk/'
2008/02/19 10:14:30|   always_direct = 0
2008/02/19 10:14:30|never_direct = 0
2008/02/19 10:14:30|timedout = 0
2008/02/19 10:14:30| cbdataValid: 0xdc6968
2008/02/19 10:14:30| fwdStartComplete: http://www.scran.ac.uk/
2008/02/19 10:14:30| fwdStartFail: http://www.scran.ac.uk/
2008/02/19 10:14:30| fwdFail: ERR_CANNOT_FORWARD Service Unavailable
http://www.scran.ac.uk/
...


Separately, I also tried setting a cache_peer to the shared hostname of
the web servers, but that just meant squid used the first IP address it
got for that hostname for all requests, as opposed to load balancing
between all the ip addresses, and completely ignored that some requests'
hostnames didn't resolve to the ip address it was using.

If it's relevant, in both cases, according to cachemgr the IP cache does
contain all of the correct values for www.scran.ac.uk.

Setting prefer_direct on doesn't do anything.

Ooh. Using always_direct works, including switching origin servers
immediately after IPcache gets to 0 TTL, although am I right in thinking
that precludes having multiple squids as siblings?

Sven


Re: [squid-users] strtocFile WARNING: empty ACL

2008-02-19 Thread Amos Jeffries

adrian.wells wrote:

SQUID 2.5 stable12


Please try a later version of squid.



Have just built a SuSE linux box and installed squid, as I have done 
many times, copied the conf and ACL text files from a running proxy and 
get the following list of errors for each ...


2008/02/19 10:27:42| strtokFile: /etc/squid/etc/girls.txt
2008/02/19 10:27:42| aclParseAclLine: WARNING: empty ACL: 
auid/etc/girls.txt


I'm thinking at first glance that the 'auid' prefix might be the problem.
It's usually a sign of non-ASCII characters cut-and-pasted into the 
squid.conf text.




If I copy and paste the address from the error into a file browser, I can 
see the file!


This works perfectly on the previous machine running the same version of 
SuSE & Squid!

I have re-installed squid.
Can anyone please offer a solution?

Kind regards
Adrian

girls.txt is just a list of MAC addresses
e.g.
00:1B:77:8A:D5:CF # Wir User Name
00:1B:24:7E:CB:B1 # LAN User Name
etc.

SNIP squid conf
# Groups follow...
#

acl girls arp /etc/squid/etc/girls.txt
acl temp arp /etc/squid/etc/temp.txt

acl boys arp /etc/squid/etc/boys.txt
acl staff arp /etc/squid/etc/staff.txt
#
# Times follow...
#
acl 24Hr time M T W H F A S 00:00-23:59
/SNIP




Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


[squid-users] Reverse Proxy Redirector Help

2008-02-19 Thread Joe Tiedeman
Hi All,

I'm using Squid 2.6 Stable 18 on RHEL4 in a reverse proxy configuration
and haven't been able to find any redirectors out there which do exactly
what I would like, so I'm in the process of writing my own very simple
one in php (code below - it's by NO means complete, I'm still in the
really early development stages)

It seems to be working fine so far, just outputting to my logfile
everything which squid passes to it, however every minute (to the
second) each of the spawned redirectors puts this into the cache.log and
I can't for the life of me figure out why!

helperHandleRead: unexpected read from url_rewriter #2, 1 bytes '

It also seems that every now and again squid sends a blank request to
the redirectors, as they put blank lines into the temporary logfile.

It could be that I'm doing something very stupid (highly likely!) but I
was wondering if anyone could help.

Many Thanks in advance!

Joe

<?php

$stdin = fopen('php://stdin', 'r');

while (1) {

    #$stdin = fopen('php://stdin', 'r');

    $line = trim(fgets($stdin));
    $filename = '/tmp/phpredirector.log';
    $somecontent = $line . "\n";

    // Let's make sure the file exists and is writable first.
    if (is_writable($filename)) {

        // In our example we're opening $filename in append mode.
        // The file pointer is at the bottom of the file hence
        // that's where $somecontent will go when we fwrite() it.
        if (!$handle = fopen($filename, 'a')) {
            echo "couldn't open the file";
        }

        // Write $somecontent to our opened file.
        if (fwrite($handle, $somecontent) === FALSE) {
            echo "couldn't write to the file";
        }

        fclose($handle);
        echo PHP_EOL;

    } else {
        exit;
    }
}

?>

_

Higher Education Statistics Agency Ltd (HESA) is a company limited by
guarantee, registered in England at 95 Promenade Cheltenham GL50 1HZ.
Registered No. 2766993. The members are Universities UK and GuildHE.
Registered Charity No. 1039709. Certified to ISO 9001 and BS 7799. 
 
HESA Services Ltd (HSL) is a wholly owned subsidiary of HESA,
registered in England at the same address. Registered No. 3109219.
_

This outgoing email was virus scanned for HESA by MessageLabs.
_


Re: [squid-users] Reverse Proxy Redirector Help

2008-02-19 Thread Marcus Kool

Joe,
you are not allowed to use echo statements that write to stdout, because
Squid expects exactly ONE reply line per line that the script reads.
In case of an error Squid gets a second line from the script and issues an
'I do not expect this' error.

The exit is also not very nice: Squid will complain about
a redirector that died and immediately start a new one.

BTW: ufdbGuard is a free redirector that can log all requests.
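
To illustrate those rules, here is a minimal logging rewriter sketched in shell (hedged: the log path is invented, and the blank-line reply relies on the usual redirector convention that an empty line means "leave the URL unchanged"):

```shell
# Reads "URL client/fqdn ident method" lines, logs them to a file,
# and answers each with exactly one blank line (= no rewrite).
rewrite_loop() {
    while read -r url rest; do
        printf '%s %s\n' "$url" "$rest" >> /tmp/shredirector.log  # log request
        printf '\n'   # the ONE reply line squid expects; nothing else to stdout
    done
}

# Demonstration with two fake request lines instead of a live squid:
printf 'http://a.example/ 1.2.3.4/- - GET\nhttp://b.example/ 1.2.3.4/- - GET\n' \
    | rewrite_loop | wc -l
```

Unlike the PHP draft, errors never reach stdout, and the loop simply ends at EOF instead of calling exit mid-stream.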

Marcus


Joe Tiedeman wrote:

Hi All,

I'm using Squid 2.6 Stable 18 on RHEL4 in a reverse proxy configuration
and haven't been able to find any redirectors out there which do exactly
what I would like, so I'm in the process of writing my own very simple
one in php (code below - it's by NO means complete, I'm still in the
really early development stages)

It seems to be working fine so far, just outputting to my logfile
everything which squid passes to it, however every minute (to the
second) each of the spawned redirectors puts this into the cache.log and
I can't for the life of me figure out why!

helperHandleRead: unexpected read from url_rewriter #2, 1 bytes '

It also seems that every now and again squid sends a blank request to
the redirectors, as they put blank lines into the temporary logfile.

It could be that I'm doing something very stupid (highly likely!) but I
was wondering if anyone could help.

Many Thanks in advance!

Joe

<?php

$stdin = fopen('php://stdin', 'r');

while (1) {

    #$stdin = fopen('php://stdin', 'r');

    $line = trim(fgets($stdin));
    $filename = '/tmp/phpredirector.log';
    $somecontent = $line . "\n";

    // Let's make sure the file exists and is writable first.
    if (is_writable($filename)) {

        // In our example we're opening $filename in append mode.
        // The file pointer is at the bottom of the file hence
        // that's where $somecontent will go when we fwrite() it.
        if (!$handle = fopen($filename, 'a')) {
            echo "couldn't open the file";
        }

        // Write $somecontent to our opened file.
        if (fwrite($handle, $somecontent) === FALSE) {
            echo "couldn't write to the file";
        }

        fclose($handle);
        echo PHP_EOL;

    } else {
        exit;
    }
}

?>

_

Higher Education Statistics Agency Ltd (HESA) is a company limited by
guarantee, registered in England at 95 Promenade Cheltenham GL50 1HZ.
Registered No. 2766993. The members are Universities UK and GuildHE.
Registered Charity No. 1039709. Certified to ISO 9001 and BS 7799. 
 
HESA Services Ltd (HSL) is a wholly owned subsidiary of HESA,

registered in England at the same address. Registered No. 3109219.
_

This outgoing email was virus scanned for HESA by MessageLabs.
_




Re: [squid-users] NTLM authentication testing

2008-02-19 Thread Richard Wall
On 2/18/08, Adrian Chadd [EMAIL PROTECTED] wrote:
 That's basically right - Squid doesn't handle the NTLM itself, it just
  passes the blob right through. The helper framework can handle hundreds
  of requests a second without too much thought; I'd like to spend some
  time figuring out what Samba is doing that's so slow. I thought that winbind
  was actually handling the NTLM challenge/response stuff itself and caching
  data rather than passing it upstream to the DC for every request.
  I haven't yet looked at it, so I can't say for certain that is correct.

I've done some pretty unscientific tests using curl against our Squid box.
 * CPU: Intel(R) Celeron(R) CPU 2.53GHz
 * MemTotal: 2075628 kB
 * Squid2.6 STABLE17 (using epoll)
 * NTLM auth_param ntlm children 100

I've been running multiple curl instances on four clients as follows:
{{{

for i in {1..100}; do
    while true; do
        curl -x 192.168.1.97:800 \
             --proxy-ntlm \
             --proxy-user DOMAINNAME\\username:password \
             --include \
             --silent \
             --header "Pragma:" \
             http://www.mydomain.com/index.html > /dev/null
    done &
    sleep 1
done

}}}

According to cachemgr this is generating a load of ~250req/sec.

client_http.requests = 252.175917/sec
client_http.hits = 126.159625/sec
client_http.errors = 0.00/sec
client_http.kbytes_in = 90.109732/sec
client_http.kbytes_out = 2735.581866/sec
client_http.all_median_svc_time = 0.851301 seconds
client_http.miss_median_svc_time = 0.000911 seconds
client_http.nm_median_svc_time = 0.00 seconds
client_http.nh_median_svc_time = 0.00 seconds
client_http.hit_median_svc_time = 0.806511 seconds

First problem is that you have to reinterpret the hit ratios Squid
reports when using NTLM auth. Only half of these are hits; the other
half are TCP_DENIED/407 responses that form part of the NTLM auth negotiation.

Second problem is that the majority of requests seem to result in auth
requests to the DC. There is an article describing Win2003 performance
counters showing "Number of auth requests / sec", but those counters
don't seem to exist on my copy.
 * http://support.microsoft.com/kb/928576

Instead I used the difference over one minute in the total number of
security events (as shown in the title bar of the Windows event
viewer):
 * ~127 successful auth events per second
...which is about the same as the client_http.hits reported by squid.

I have the following setting defined in smb.conf:
 * winbind cache time = 10
...which clearly isn't being respected.

 * Does anyone else see this behaviour or have you managed to get auth
requests cached by winbindd?
 * Can winbindd even cache auth requests, or is it only
concerned with caching other domain data?

If anyone has answers, I'd really appreciate to hear from you. I'll
continue to experiment and will post my findings.

-RichardW.


Re: [squid-users] NTLM authentication testing

2008-02-19 Thread Adrian Chadd
G'day,

Thanks for this stuff.

Could you possibly try hitting it hard enough to cause Squid to back up
on pending authentications? It'd be good to replicate a fail situation;
we can then take that to the samba guys and ask wtf?



Adrian

On Tue, Feb 19, 2008, Richard Wall wrote:
 On 2/18/08, Adrian Chadd [EMAIL PROTECTED] wrote:
  That's basically right - Squid doesn't handle the NTLM itself, it just
   passes the blob right through. The helper framework can handle hundreds
   of requests a second without too much thought; I'd like to spend some
   time figuring out what Samba is doing that's so slow. I thought that winbind
   was actually handling the NTLM challenge/response stuff itself and caching
   data rather than passing it upstream to the DC for every request.
   I haven't yet looked at it, so I can't say for certain that is correct.
 
 I've done some pretty unscientific tests using curl against our Squid box.
  * CPU: Intel(R) Celeron(R) CPU 2.53GHz
  * MemTotal: 2075628 kB
  * Squid2.6 STABLE17 (using epoll)
  * NTLM auth_param ntlm children 100
 
 I've been running multiple curl instances on four clients as follows:
 {{{
 
 for i in {1..100}; do
 while true; do
     curl -x 192.168.1.97:800 \
          --proxy-ntlm \
          --proxy-user DOMAINNAME\\username:password \
          --include \
          --silent \
          --header "Pragma:" \
          http://www.mydomain.com/index.html > /dev/null
 done &
 sleep 1
 done
 
 }}}
 
 According to cachemgr this is generating a load of ~250req/sec.
 
 client_http.requests = 252.175917/sec
 client_http.hits = 126.159625/sec
 client_http.errors = 0.00/sec
 client_http.kbytes_in = 90.109732/sec
 client_http.kbytes_out = 2735.581866/sec
 client_http.all_median_svc_time = 0.851301 seconds
 client_http.miss_median_svc_time = 0.000911 seconds
 client_http.nm_median_svc_time = 0.00 seconds
 client_http.nh_median_svc_time = 0.00 seconds
 client_http.hit_median_svc_time = 0.806511 seconds
 
 First problem is that you have to reinterpret the Squid reported hit
 ratios when using NTLM auth. Only half of these are hits, the other
 half being TCP_DENIED/407 that form part of the NTLM auth negotiation.
 
 Second problem is that the majority of requests seem to result in auth
 requests to the DC. There is an article describing Win2003 performance
 counters showing Number of auth requests / sec, but those counters
 don't seem to exist on my copy.
  * http://support.microsoft.com/kb/928576
 
 Instead I used the difference over one minute in the total number of
 security events (as shown in the title bar of the Windows event
 viewer):
  * ~127 successful auth events per second
 ...which is about the same as the client_http.hits reported by squid.
 
 I have the following setting defined in smb.conf:
  * winbind cache time = 10
 ...which clearly isn't being respected.
 
  * Does anyone else see this behaviour or have you managed to get auth
 requests cached by winbindd?
  * Can winbindd even cache auth requests, or is it only
 concerned with caching other domain data?
 
 If anyone has answers, I'd really appreciate to hear from you. I'll
 continue to experiment and will post my findings.
 
 -RichardW.

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] NTLM authentication testing

2008-02-19 Thread Adrian Chadd

Oh, and Richard gets a T-shirt, for taking the initiative. :)




Adrian



RE: [squid-users] DNS-based reverse proxy peer selection, 2.5 vs 2.6

2008-02-19 Thread Sven Edge
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sven Edge wrote:
 From: Amos Jeffries [mailto:[EMAIL PROTECTED] 

 I think that's what I'm trying, and it fails to find a web server to
 talk to.

Hmm, you can do a spot-check on squid with
squidclient mgr:ipcache | grep www.scran.ac.uk

It's definitely doing the lookup and caching the right destination IP
addresses.

 2008/02/19 10:14:30| fwdStart: 'http://www.scran.ac.uk/'
 2008/02/19 10:14:30| storeLockObject: key
 'B13D9EB5D8D657257342FBE9C74C77D8' count=3
 2008/02/19 10:14:30| peerSelect: http://www.scran.ac.uk/
 2008/02/19 10:14:30| storeLockObject: key
 'B13D9EB5D8D657257342FBE9C74C77D8' count=4
 2008/02/19 10:14:30| cbdataLock: 0xdc6968
 2008/02/19 10:14:30| peerSelectFoo: 'GET www.scran.ac.uk'
 2008/02/19 10:14:30| peerSelectFoo: direct = DIRECT_NO

Looks like possibly a never_direct or a deny ACL getting in the way.

Turning up debug_options, it happily gets past the ACL stuff, including
a destination check on a correctly-resolved web server ip address, spits
out a The request GET ... is ALLOWED, because... message, and then
does peerSelect stuff and dies.

Poking around the source for the squid-2.6.STABLE17 release currently in
Fedora, there appears to be another source of DIRECT_NO besides a
never_direct, in peer_select.c.
http://www.squid-cache.org/cgi-bin/cvsweb.cgi/squid/src/peer_select.c
I've got version 1.131, where there's an if
(request->flags.accelerated) that can cause a DIRECT_NO, but the most
recent version 1.134 has changed that. I'm not sure what the code is testing
for in either version, but from the commit comment it sounds like up to
now 2.6 was deliberately blocking direct access when in accelerator
mode. 

Maybe it's just a case of waiting for the next release?

Sven


Re: [squid-users] Error What is this ?

2008-02-19 Thread Matus UHLAR - fantomas
  anyone know this problems :
 
  1202922662.095  0 192.168.50.200 TCP_DENIED/400 1374 NONE
  error:unsupported-request-method - NONE/- text/html

On 14.02.08 17:23, Amos Jeffries wrote:
 Unsupported request method? yes we know about it.
 
 Some application is making a request via HTTP that squid has not been
 programmed or configured to accept. HTTP is usually GET, POST, PUT,
 CONNECT, and a pile of others from the RFC that squid can handle. There is
 also an extension set (or possibly more than one) that new programs may
 use, defined elsewhere.

another possibility is that someone is trying to use squid as an intercepting
proxy for other protocols and has redirected a non-HTTP port to squid...
-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Spam = (S)tupid (P)eople's (A)dvertising (M)ethod


Re: [squid-users] NTLM authentication testing

2008-02-19 Thread Richard Wall
On 2/19/08, Adrian Chadd [EMAIL PROTECTED] wrote:
 G'day,
  Thanks for this stuff.
  Could you possibly try hitting it hard enough to cause Squid to back up
  on pending authentications? It'd be good to replicate a fail situation;
  we can then take that to the samba guys and ask wtf?

Adrian,

Yep I've seen that and it's easy to reproduce by lowering the number
of authenticators. So when I start squid configured with:
auth_param ntlm children 50

# /usr/local/squid/sbin/squid -d100 -X -N  -D -f /RamDisk/squid.conf
2008/02/19 14:29:09| WARNING: All ntlmauthenticator processes are busy.
2008/02/19 14:29:09| WARNING: up to 50 pending requests queued
2008/02/19 14:29:11| storeDirWriteCleanLogs: Starting...
2008/02/19 14:29:11| WARNING: Closing open FD   64
2008/02/19 14:29:11| commSetEvents: epoll_ctl(EPOLL_CTL_DEL): failed
on fd=64: (1) Operation not permitted
2008/02/19 14:29:11|   Finished.  Wrote 93 entries.
2008/02/19 14:29:11|   Took 0.0 seconds (140060.2 entries/sec).
FATAL: Too many queued ntlmauthenticator requests (251 on 50)
Aborted

# echo $?
134

It exits immediately with return code 134.

-RichardW.


Re: [squid-users] NTLM authentication testing

2008-02-19 Thread Guido Serassio

Hi,

At 14:40 19/02/2008, Richard Wall wrote:


First problem is that you have to reinterpret the Squid reported hit
ratios when using NTLM auth. Only half of these are hits, the other
half being TCP_DENIED/407 that form part of the NTLM auth negotiation.


This is caused by the NTLM over HTTP authentication sequence, look 
here for details:

http://davenport.sourceforge.net/ntlm.html


Second problem is that the majority of requests seem to result in auth
requests to the DC. There is an article describing Win2003 performance
counters showing Number of auth requests / sec, but those counters
don't seem to exist on my copy.
 * http://support.microsoft.com/kb/928576


Correct, you should request the hotfix to Microsoft.



Instead I used the difference over one minute in the total number of
security events (as shown in the title bar of the Windows event
viewer):
 * ~127 successful auth events per second
...which is about the same as the client_http.hits reported by squid.

I have the following setting defined in smb.conf:
 * winbind cache time = 10
...which clearly isn't being respected.

 * Does anyone else see this behaviour or have you managed to get auth
requests cached by winbindd?
 * Can winbindd even cache auth requests, or is it only
concerned with caching other domain data?


What Samba version are you using?
I remember that in Samba 3.0.25 there were big changes in winbindd 
regarding off-line logon support, but I don't know if this could help.


Another question: what type of NTLM authentication is supported by curl?
LAN Manager/NTLMv1 or full NTLMv2? (See the previous link for details.)
There are big differences in security level and performance impact 
between them, and currently all browsers automatically use 
the NTLMv2 type.


Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



RE: [squid-users] Random image generator w/ reverse-proxy

2008-02-19 Thread Keith M. Richard
Amos,

I have a slightly older version of squid and it is setup as an
accelerator. Let me give you the layout. 

Domain name: www.my-company.org
Domain IP: 204.public address
DMZ IP Addr: 172.220.201.135 (squid server)
Internal IP: 192.1.0.59 (Web Server)
SQUID: Loads with -D (skip the initial DNS tests), and the hosts file has an
entry for 192.1.0.59 as www.my-company.org.

Here is a dump from my cache.log from the last restart of squid:
2008/02/18 16:32:29| Starting Squid Cache version 2.6.STABLE6 for
i686-redhat-linux-gnu...
2008/02/18 16:32:29| Process ID 23575
2008/02/18 16:32:29| With 1024 file descriptors available
2008/02/18 16:32:29| Using epoll for the IO loop
2008/02/18 16:32:29| DNS Socket created at 0.0.0.0, port 32938, FD 5
2008/02/18 16:32:29| Adding domain groupbenefits.org from
/etc/resolv.conf
2008/02/18 16:32:29| Adding nameserver 204.xxx.xxx.xxx from
/etc/resolv.conf
2008/02/18 16:32:29| Adding nameserver 204.xxx.xxx.xxx from
/etc/resolv.conf
2008/02/18 16:32:29| User-Agent logging is disabled.
2008/02/18 16:32:29| Referer logging is disabled.
2008/02/18 16:32:29| Unlinkd pipe opened on FD 10
2008/02/18 16:32:29| Swap maxSize 1024 KB, estimated 787692 objects
2008/02/18 16:32:29| Target number of buckets: 39384
2008/02/18 16:32:29| Using 65536 Store buckets
2008/02/18 16:32:29| Max Mem  size: 8192 KB
2008/02/18 16:32:29| Max Swap size: 1024 KB
2008/02/18 16:32:29| Local cache digest enabled; rebuild/rewrite every
3600/3600 sec
2008/02/18 16:32:29| Rebuilding storage in /var/cache/squid (CLEAN)
2008/02/18 16:32:29| Using Least Load store dir selection
2008/02/18 16:32:29| Current Directory is /
2008/02/18 16:32:29| Loaded Icons.
2008/02/18 16:32:29| Accepting accelerated HTTP connections at 0.0.0.0,
port 80, FD 12.
2008/02/18 16:32:29| Accepting accelerated HTTP connections at 0.0.0.0,
port , FD 13.
2008/02/18 16:32:29| Accepting HTTPS connections at 0.0.0.0, port 443,
FD 14.
2008/02/18 16:32:29| Accepting ICP messages at 0.0.0.0, port 3130, FD
15.
2008/02/18 16:32:29| WCCP Disabled.
2008/02/18 16:32:29| Configuring Parent 192.1.0.59/443/0
2008/02/18 16:32:29| Configuring Parent 192.1.0.59//0
2008/02/18 16:32:29| Ready to serve requests.

All I really want to do is set up an HTTP accelerator for this internal
website. I have read everything I can find about this and I guess I do
not understand the options. I do know that the options in squid.conf
change rapidly and I am not running the newest version; I am running the
version that shipped with my Red Hat server. I have downloaded the
newest version and am planning an upgrade very soon, but I need to
get this going first.

Thanks,
Keith
 -Original Message-
 From: Amos Jeffries [mailto:[EMAIL PROTECTED]
 Sent: Monday, February 18, 2008 5:13 PM
 To: Keith M. Richard
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] Random image generator w/ reverse-proxy
 
  All,
 
  I have a web page on my site that has a randomly
generated
  image (Alpha numeric picture) to allow users to register. I am using
 squid
  as an accelerator in my DMZ to this internal web server. Right now
the
  image is coded as an unsecured (http) link/servlet on port ,
which
 is
  just a random port. This is embedded in a HTTPS page. If I don't use
 squid
  it works, but through squid it fails to display the image.
  I have checked the firewall and it is properly
configured.
  When I check the firewalls log, it shows the request to  from
the
  outside, but those same requests are never passed through squid for
some
  reason. I have also run Wireshark on the squid server to capture the
  traffic as users made requests and I see the TCP [SYN] from the
client
 to
  the squid servers IP address, but then the squid sends a TCP [RST,
ACK].
  When I watch the same request being made from the squid server
running
  FireFox to the internal web server it makes the handshake. I cannot
 figure
  out why the reset is happening.
 
 You have a forwarding loop in the config below.
 
  I modified the logformat so that I can get some readable
data
 and
  this is what I get from the output:
 
  18/Feb/2008:13:03:12 -0600 xxx.xxx.xxx.xxx:51651 192.168.0.135:
  TCP_MISS/404 697 GET
  http://www.my-company.org/randomimages/servlet/org.groupbenefits.por
  tal.RandomImageGenServlet? FIRST_UP_PARENT/192.1.0.59 text/html
 
  **
  # Basic config
  acl all src 0.0.0.0/0.0.0.0
  acl manager proto http cache_object
  acl localhost src 127.0.0.1/255.255.255.255
  acl to_localhost dst 127.0.0.0/8
  acl SSL_ports port 443
  acl Safe_ports port 80 # http
  acl Safe_ports port 21 # ftp
  acl Safe_ports port 443 # https
  acl Safe_ports port 70 # gopher
  acl Safe_ports port 210 # wais
  acl Safe_ports port 8080 # safe
  acl Safe_ports port  # safe
 
 Check #1. access to port  is possible. Great.
 
  acl Safe_ports port 1025-65535 # unregistered ports
  acl 
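As a reference point for untangling the loop, a minimal Squid 2.6 accelerator setup for a single internal origin might look like the sketch below. This is only an illustration, not the poster's actual config: the certificate paths are placeholders, and the hostname and origin address are taken from the log excerpt above.

```
# Listen as an accelerator for one site (hostname from the log above)
http_port 80 accel defaultsite=www.my-company.org vhost
https_port 443 accel defaultsite=www.my-company.org cert=/etc/squid/cert.pem key=/etc/squid/key.pem

# Send everything to the single origin server, never back to Squid itself
cache_peer 192.1.0.59 parent 80 0 no-query originserver name=origin
cache_peer_access origin allow all
```

The key point is that every request must resolve to the origin peer; if Squid's own public name resolves back to Squid, requests loop until they fail.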

Re: [squid-users] NTLM authentication testing

2008-02-19 Thread Richard Wall
On 2/19/08, Guido Serassio [EMAIL PROTECTED] wrote:
  At 14:40 19/02/2008, Richard Wall wrote:
  First problem is that you have to reinterpret the Squid reported hit
  ratios when using NTLM auth. Only half of these are hits, the other
  half being TCP_DENIED/407 that form part of the NTLM auth negotiation.
 This is caused by the NTLM over HTTP authentication sequence, look
  here for details:
  http://davenport.sourceforge.net/ntlm.html

Guido,

Yep, I've looked at it, but have not completely absorbed it yet :)

  Second problem is that the majority of requests seem to result in auth
  requests to the DC. There is an article describing Win2003 performance
  counters showing Number of auth requests / sec, but those counters
  don't seem to exist on my copy.
* http://support.microsoft.com/kb/928576
 Correct, you should request the hotfix to Microsoft.

Thanks will search it out.

 What Samba version are you using?
  I remember that in Samba 3.0.25 there was big changes into winbindd
  regarding off-line logon support, but I don't know if this could help.

# /usr/upgrade/samba/sbin/winbindd --version
Version 3.0.24

So I guess I'll try compiling the latest version. Thanks for the tip.

  Another question, what type of NTLM authentication is supported by curl ?
  Lan manager/NTLMv1 or full NTLMv2 ? (See the previous link for details)

I'm not sure, but in full debug mode, curl will show the various
headers it exchanges with the server.
It seems to correspond to:
 * http://devel.squid-cache.org/ntlm/client_proxy_protocol.html

...but of course we're starting at point 4 which means that in real
life, there'd be even more squid requests I guess.

Anyway, here's the output from curl. Does this give enough information
to work out which type is being used?

{{{

* About to connect() to proxy 10.0.0.12 port 800 (#0)
*   Trying 10.0.0.12... connected
* Connected to 10.0.0.12 (10.0.0.12) port 800 (#0)
* Proxy auth using NTLM with user 'COVENTRYOFFICE\stafftest'
 GET http://www.squid-cache.org/Images/img4.jpg HTTP/1.1
 Proxy-Authorization: NTLM TlRMTVNTUAABBoIIAAA=
 User-Agent: curl/7.16.4 (i486-pc-linux-gnu) libcurl/7.16.4 OpenSSL/0.9.8e 
 zlib/1.2.3.3 libidn/1.0
 Host: www.squid-cache.org
 Accept: */*
 Proxy-Connection: Keep-Alive

* HTTP 1.0, assume close after body
 HTTP/1.0 407 Proxy Authentication Required
 Server: squid/2.6.STABLE17
 Date: Tue, 19 Feb 2008 15:03:05 GMT
 Content-Type: text/html
 Content-Length: 1371
 Expires: Tue, 19 Feb 2008 15:03:05 GMT
 X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
 Proxy-Authenticate: NTLM
TlRMTVNTUAACDgAOADAGgokAN+ZK+JnmUOEAAIoAigA+Q09WRU5UUllPRkZJQ0UCABwAQwBPAFYARQBOAFQAUgBZAE8ARgBGAEkAQwBFAAEAEABBAFAALQBUAEUAUwBUADIABAAcAGMAYQBjAGgAZQAuAGUAMgBiAG4ALgBvAHIAZwADAC4AYQBwAC0AdABlAHMAdAAyAC4AYwBhAGMAaABlAC4AZQAyAGIAbgAuAG8AcgBnAAA=
 X-Cache: MISS from ntlmsquidbox.test
 X-Cache-Lookup: NONE from ntlmsquidbox.test:800
 Via: 1.0 ntlmsquidbox.test:800 (squid/2.6.STABLE17)
* HTTP/1.0 proxy connection set to keep alive!
 Proxy-Connection: keep-alive

* Ignoring the response-body
{ [data not shown]
* Connection #0 to host 10.0.0.12 left intact
* Issue another request to this URL:
'http://www.squid-cache.org/Images/img4.jpg'
* Re-using existing connection! (#0) with host 10.0.0.12
* Connected to 10.0.0.12 (10.0.0.12) port 800 (#0)
* Proxy auth using NTLM with user 'COVENTRYOFFICE\stafftest'
 GET http://www.squid-cache.org/Images/img4.jpg HTTP/1.1
 Proxy-Authorization: NTLM 
 TlRMTVNTUAADGAAYAEAYABgAWA4ADgBwCQAJAH4IAAgAhwAABoKJAFb2ATKsj8TWAA6YY1ymLs5AgU5/lxbNCYtJnhdC67O5c0NPVkVOVFJZT0ZGSUNFc3RhZmZ0ZXN0cG9seXNydjE=
 User-Agent: curl/7.16.4 (i486-pc-linux-gnu) libcurl/7.16.4 OpenSSL/0.9.8e 
 zlib/1.2.3.3 libidn/1.0
 Host: www.squid-cache.org
 Accept: */*
 Proxy-Connection: Keep-Alive

* HTTP 1.0, assume close after body
 HTTP/1.0 200 OK
 Date: Tue, 19 Feb 2008 15:00:26 GMT
 Server: Apache/2.2.6 (FreeBSD) mod_ssl/2.2.6 OpenSSL/0.9.7e-p1 DAV/2
PHP/5.2.5 with Suhosin-Patch
 Last-Modified: Mon, 22 Jan 2007 10:51:58 GMT
 ETag: 6daaa8-7083-d9b9ef80
 Accept-Ranges: bytes
 Content-Length: 28803
 Content-Type: image/jpeg
 Age: 159
 X-Cache: HIT from ntlmsquidbox.test
HTTP/1.0 407 Proxy Authentication Required
Server: squid/2.6.STABLE17
Date: Tue, 19 Feb 2008 15:03:05 GMT
Content-Type: text/html
Content-Length: 1371
Expires: Tue, 19 Feb 2008 15:03:05 GMT
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
Proxy-Authenticate: NTLM
TlRMTVNTUAACDgAOADAGgokAN+ZK+JnmUOEAAIoAigA+Q09WRU5UUllPRkZJQ0UCABwAQwBPAFYARQBOAFQAUgBZAE8ARgBGAEkAQwBFAAEAEABBAFAALQBUAEUAUwBUADIABAAcAGMAYQBjAGgAZQAuAGUAMgBiAG4ALgBvAHIAZwADAC4AYQBwAC0AdABlAHMAdAAyAC4AYwBhAGMAaABlAC4AZQAyAGIAbgAuAG8AcgBnAAA=
X-Cache: MISS from ntlmsquidbox.test
X-Cache-Lookup: NONE from ntlmsquidbox.test:800
Via: 1.0 ntlmsquidbox.test:800 (squid/2.6.STABLE17)
Proxy-Connection: keep-alive

HTTP/1.0 200 OK
Date: Tue, 19 Feb 2008 15:00:26 

Re: [squid-users] HTTPS proxy

2008-02-19 Thread Matus UHLAR - fantomas
On 17.02.08 18:10, Sam Przyswa wrote:
 We use Squid and SquidGuard to control webmails access, that work fine,
 but for those who use HTTPS protocole Squid/SquidGuard doesn't operate.
 Is it a way to control HTTPS as well HTTP trafic ?

No. The HTTPS traffic consists of CONNECT requests, where the proxy has no
idea what URLs are being retrieved or what requests (GET/POST/...) pass
through it - that is the 's' = secure in https.
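For illustration, all a proxy ever sees of an HTTPS request is the tunnel setup below (hypothetical hostname); everything after it is opaque TLS bytes:

```
CONNECT webmail.example.com:443 HTTP/1.1
Host: webmail.example.com:443

(encrypted traffic in both directions from here on)
```

So filtering can only act on the host:port in the CONNECT line, not on the URLs or methods inside the tunnel.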

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Spam is for losers who can't get business any other way.


Re: [squid-users] Digest Authentication in Squid through LDAP in Windows 2003 DC

2008-02-19 Thread Luis Claudio Botelho - Chefe de Tecnologia e Redes

Hi Amos Jeffries,
Thank you for your cooperation..

So I used one of the links you sent me. I configured the tests in shell
scripts, and they work.
But when I put it into squid.conf, I can't authenticate. I tried, but it
keeps asking me for a user and password in the web browser.


These are my lines in squid.conf:
==
auth_param digest realm squid-valencia
auth_param digest children 5
auth_param digest program /usr/lib/squid/digest_ldap_auth -b 
ou=Funcionarios,ou=Usuarios,dc=feinet,dc=fei,dc=edu,dc=br -u cn -A 
l -D 
cn=Proxy_User,ou=Funcionarios,ou=Usuarios,dc=feinet,dc=fei,dc=edu,dc=br -w 
123456 -e -v 3 -h 172.16.0.13 -d

==
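When the helper works from shell scripts but not from Squid, it can help to drive it by hand with exactly the squid.conf arguments. As far as I recall, digest helpers read a `"user":"realm"` line on stdin and print the H(A1) hash (or ERR); this is a hedged sketch, and the username testuser is a placeholder:

```
/usr/lib/squid/digest_ldap_auth -b "ou=Funcionarios,ou=Usuarios,dc=feinet,dc=fei,dc=edu,dc=br" \
    -u cn -A l \
    -D "cn=Proxy_User,ou=Funcionarios,ou=Usuarios,dc=feinet,dc=fei,dc=edu,dc=br" \
    -w 123456 -e -v 3 -h 172.16.0.13 -d
"testuser":"squid-valencia"
```

If this prints ERR while the shell-script tests pass, a likely culprit is the realm: the string after the colon must match the `auth_param digest realm` value exactly.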

I think that it's right, and I don't know if my problem is now in another
line:


==
external_acl_type ldap_group %LOGIN /usr/lib/squid/squid_ldap_group -R -b 
dc=feinet,dc=fei,dc=edu,dc=br -D 
cn=proxy_user,ou=Funcionarios,ou=Usuarios,dc=feinet,dc=fei,dc=edu,dc=br -w 
123456 -f 
((objectclass=person)(memberof=cn=%a,ou=Funcionarios,ou=Usuarios,dc=feinet,dc=fei,dc=edu,dc=br)) 
-h 172.16.0.13

==

This external_acl_type works fine with basic, and I'm not sure that it's the 
right way to use external_acl_type with digest authentication.


If you could help me once again, it would be very nice.

Thank you again!

Regards,

Luis - FEI - Brazil



- Original Message - 
From: Amos Jeffries [EMAIL PROTECTED]
To: Luis Claudio Botelho - Chefe de Tecnologia e Redes 
[EMAIL PROTECTED]

Cc: squid-users@squid-cache.org
Sent: Monday, February 18, 2008 8:26 PM
Subject: Re: [squid-users] Digest Authentication in Squid through LDAP in 
Windows 2003 DC




Hi,

Please, I need some help about Digest Authentication.
We made a new server in our enterprise, using Fedora 7 (64 bits).
We have Squid 3, installed, and we need to authenticate our users in one
of
the DC's (Windows 2003 Server DC).
The problem:
We started configuring Squid with basic authentication; it worked fine,
but
we got the user's password through Ethereal Software. This is a problem
here, because we have a lot of students and teachers that we need to
guarantee security to them and against them.
So we tried digest authentication, and our problem started. Our tests
failed, and we didn't find any documentation about how to implement
digest_ldap_auth to check the username and password.
We don't know if our idea about digest authentication is right or wrong.
We
imagine that we can simply authenticate against the Windows 2003 Server DC
(as basic authentication does), without storing the user's password on the
Linux
server. Is that possible? If yes, where can I find instructions about how
to
use it?
If you can help us about this, and even if our idea about digest
authentication between Squid and Windows 2003 Server is wrong, it would 
be

very nice.
I would like to thank you for your time, and sorry for any inconvenience.

Regards,



There is a help how-to in the wiki
http://wiki.squid-cache.org/KnowledgeBase/Using_the_digest_LDAP_authetication_helper

There are also some other auth mechanisms that may be useful to you:

http://wiki.squid-cache.org/NegotiateAuthentication

http://wiki.squid-cache.org/ConfigExamples/WindowsAuthenticationNTLM

Amos







[squid-users] Reverse Proxy, getting 503 errors when POST

2008-02-19 Thread Nick Duda
I've set up a very basic reverse proxy (maybe the error is because of my
config). It appears to be working properly as a reverse proxy; however,
I cannot perform an HTTP POST. I get a TCP_MISS/503 error in my
access.log.

Basically, any request to the Public IP of my proxy on 80/443 (secured
by ACL's for certain IP's) I want to proxy the request to an internal
(behind firewall) webserver. 

Client sets host file like:
x.x.x.x www.some.domain.com

Client surfs to www.some.domain.com; the IP is the public IP of squid,
and they get served the internal (behind firewall) site. This works,
but when they attempt any POST it is not allowed:
squid.conf (this is how it's built for a test; I will clean it up and
lock it down more afterwards)

http_port 80 transparent
cache_peer x.x.x.x parent 80 0 no-query originserver
cache deny all
http_access allow all

x.x.x.x = RFC1918 of internal web server
I had to use transparent because when I used accel with defaultdomain=,
cookies would only work for that domain. I have 5 different host entries
that need to be used to all hit the same internal ip/web server.

What else is required, or why can't I POST? I'm new
to this reverse configuration, so be kind lol

Nick
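For multiple hostnames pointing at one origin, the usual Squid 2.6 form is an accelerator port with vhost, which preserves each requested Host header (so cookies keep working per site) rather than transparent interception. A sketch, with x.x.x.x standing for the internal server as above:

```
# Accept any of the five public hostnames and keep the Host header intact
http_port 80 accel vhost

# Single internal origin behind the firewall
cache_peer x.x.x.x parent 80 0 no-query originserver name=origin
cache_peer_access origin allow all

cache deny all
http_access allow all     # tighten with ACLs once POST works
```

Whether this fixes the 503 on POST depends on the setup, but transparent mode combined with originserver is a common source of exactly this kind of failure.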


Re: [squid-users] HTTPS proxy

2008-02-19 Thread Marcus Kool



Matus UHLAR - fantomas wrote:

On 17.02.08 18:10, Sam Przyswa wrote:

We use Squid and SquidGuard to control webmails access, that work fine,
but for those who use HTTPS protocole Squid/SquidGuard doesn't operate.
Is it a way to control HTTPS as well HTTP trafic ?


no. the HTTPS traffic consists of CONNECT requests where the procy has no
idea what URLs are being retrieved and what requests (GET/POST/...) pass
through it - that is the 's'=secure in the https.


False. If https traffic goes via Squid, the URL can go to a redirector and
filter based on either
a) domain name
b) connect to the site and verify valid certificate

ufdbGuard does this and successfully blocks SSH tunnels over HTTPS.
Everybody should use ufdbGuard and have one less security threat. It is free!

Marcus



Re: [squid-users] NTLM authentication testing

2008-02-19 Thread Guido Serassio

Hi,

At 16:36 19/02/2008, Richard Wall wrote:


Guido,

Yep, I've looked at it, but have not completely absorbed it yet :)


But you should; it's probably the best NTLM explanation on the net ... :-)

  Another question, what type of NTLM authentication is supported by curl ?

  Lan manager/NTLMv1 or full NTLMv2 ? (See the previous link for details)

I'm not sure, but in full debug mode, curl will show the various
headers it exchanges with the server.
It seems to correspond to:
 * http://devel.squid-cache.org/ntlm/client_proxy_protocol.html

...but of course we're starting at point 4 which means that in real
life, there'd be even more squid requests I guess.


Likely NTLMv1; NTLMv2 requires client and server mutual
authentication provided by Domain Controllers.




Doesn't the --helper-protocol=squid-2.5-ntlmssp in squid.conf
determine that NLTMv2 will be used? Looking at the man page for
ntlm_auth suggests that lanman auth would require different
parameters:

 * http://us1.samba.org/samba/docs/man/manpages-3/ntlm_auth.1.html


No, this ALLOWS support for the NTLM NEGOTIATE packet needed for
NTLMv2, but the NTLM version is always negotiated between winbindd
and the browser.



This may seem like a stupid question, and my vague understanding of
kerberos may be way off, but aren't there better alternatives to NTLM
proxy auth if you're authenticating only against Active Directory
servers?

Doesn't Kerberos provide a time-limited token to the authenticated
windows domain client that can be passed to other machines in the
domain as proof that the client is authenticated, and which can be
used to look up what services the client has access to?

In a perfect world, shouldn't Internet Explorer just pass this token
along with all requests to other machines in the same domain?


Negotiate is the future: it's Kerberos based and the packet
exchange is shorter than NTLM's (but packets are larger). The only
drawback is that Samba 3 doesn't support it.


Another limitation is that you need at least Internet Explorer 7 or Firefox 1.5.

It's very easy to use when running Squid on Windows with native helpers,
or you can try the new squid_kerb_auth helper:

http://www.squid-cache.org/mail-archive/squid-users/200801/0257.html


My aims are:
 * to have a proxy that is only available to authenticated windows 
domain users.

 * that Internet Explorer should not prompt the user for their
username and password if they have already logged onto the domain.
 * that squid should be able to record usernames alongside requests 
in its logs.

 * That dans guardian should be able to identify the username of the client.

Is there some way I can get all this without paying the penalty of NTLM auth?


Sure, negotiate.

Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



RE: [squid-users] Red Hat 5 - Squid 2.6 Stable 13 WCCP V2 and GRE

2008-02-19 Thread domboy

Hello. I am having the exact same problem as described by Kevin; I got to
the end of this thread and have tried all the things suggested. In addition,
I've tried it on openSuse 10.3 (kernel 2.6.18.8-0.7-bigsmp), Debian
4 (kernel 2.6.18-4) and finally Novell Suse Linux Enterprise Server 10 SP1
(kernel 2.6.16.54-0.2.3-bigsmp), all with the same issue. Is there anything
else to try?
Also, in the last post, it was recommended to assign a real IP address to
the gre tunnel... most of the examples I've read assign the IP address eth0
uses to the gre tunnel as well. Is this correct?
Any help would be greatly appreciated.

Dom


Van Der Hart, Kevin wrote:
 
 I changed gre1 to use a real IP. This fixed the problem and all is
 working.
 
 Thanks to Henrik for all of the assistance.
 
 Kevin 
 
 -Original Message-
 From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
 Sent: Friday, June 15, 2007 3:27 AM
 To: Van Der Hart, Kevin
 Cc: squid-users@squid-cache.org
 Subject: RE: [squid-users] Red Hat 5 - Squid 2.6 Stable 13 WCCP V2 and
 GRE
 
 tor 2007-06-14 klockan 20:33 -0500 skrev Van Der Hart, Kevin:
 It works if I configure my client to use the proxy and it works if I
 point my default route to the proxy machine when I am on the same
 subnet. The firewall is completely disabled. gre1 has IP of 127.0.0.2.
 
 
 I would recommend to use a real IP on the GRE if you use REDIRECT.
 
 To verify that traffic is coming in on the gre use
 
   tcpdump -n -i gre1
 
 Regards
 Henrik
 
 

-- 
View this message in context: 
http://www.nabble.com/Red-Hat-5---Squid-2.6-Stable-13-WCCP-V2-and-GRE-tp11040402p15562411.html
Sent from the Squid - Users mailing list archive at Nabble.com.
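Following the advice in this thread to give the GRE tunnel a real address, a typical Linux-side setup looks roughly like the sketch below. The router and Squid addresses (192.0.2.1 and 192.0.2.10) are placeholders; substitute your own, and note that details vary by WCCP version and distro:

```
modprobe ip_gre
ip tunnel add wccp0 mode gre remote 192.0.2.1 local 192.0.2.10 dev eth0
ip addr add 192.0.2.10/32 dev wccp0    # a real IP, not 127.0.0.2
ip link set wccp0 up
tcpdump -n -i wccp0                    # verify redirected traffic is arriving
```

If tcpdump on the tunnel shows nothing, the problem is on the router/WCCP side rather than in Squid or iptables.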



Re: [squid-users] DNS-based reverse proxy peer selection, 2.5 vs 2.6

2008-02-19 Thread Amos Jeffries

Sven Edge wrote:
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sven Edge wrote:
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 


I think that's what I'm trying, and it fails to find a web server to
talk to.

Hmm, you can do a spot-check on squid with
   squidclient mgr:ipcache | grep www.scran.ac.uk


It's definitely doing the lookup and caching the right destination IP
addresses.


2008/02/19 10:14:30| fwdStart: 'http://www.scran.ac.uk/'
2008/02/19 10:14:30| storeLockObject: key
'B13D9EB5D8D657257342FBE9C74C77D8' count=3
2008/02/19 10:14:30| peerSelect: http://www.scran.ac.uk/
2008/02/19 10:14:30| storeLockObject: key
'B13D9EB5D8D657257342FBE9C74C77D8' count=4
2008/02/19 10:14:30| cbdataLock: 0xdc6968
2008/02/19 10:14:30| peerSelectFoo: 'GET www.scran.ac.uk'
2008/02/19 10:14:30| peerSelectFoo: direct = DIRECT_NO

Looks like possibly a never_direct or a deny ACL getting in the way.


Turning up debug_options, it happily gets past the ACL stuff, including
a destination check on a correctly-resolved web server IP address, spits
out a "The request GET ... is ALLOWED, because..." message, and then
does peerSelect stuff and dies.

Poking around the source for the squid-2.6.STABLE17 release currently in
Fedora, there appears to be another source of DIRECT_NO besides
never_direct, in peer_select.c.
http://www.squid-cache.org/cgi-bin/cvsweb.cgi/squid/src/peer_select.c
I've got version 1.131, where there's an if
(request->flags.accelerated) test that can cause a DIRECT_NO, but the most
recent version 1.134 has changed that. Not sure what the code's testing
for in either version, but from the commit comment it sounds like up to
now 2.6 was deliberately blocking direct access when in accelerator
mode.


Maybe it's just a case of waiting for the next release?


Aha, sounds like that yes. Fortunately Stable 18 is out already so if 
the change was included there you could use that one.
Otherwise the 2.6 daily snapshot should be stable enough to use, just 
with a little testing required to be sure of it.


Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


[squid-users] LISTENing question

2008-02-19 Thread George
Hi,

I have CentOS 5 and squid-2.6.STABLE16-2

I have 3 IPs on the server

1.1.1.1
1.1.1.2
1.1.1.3

If I put: http_port 33
then squid listens on all IPs. If I connect to the proxy via 1.1.1.3,
then my outside IP will be the main one: 1.1.1.1

If I put:
http_port 1.1.1.1:33
http_port 1.1.1.3:33

then the same happens.

What do I need to do to make the outside IP the one I am
connecting to?

Please help.

Thanks


Re: [squid-users] LISTENing question

2008-02-19 Thread Amos Jeffries
 Hi,

 I have CentOS 5 and squid-2.6.STABLE16-2

 I have 3 IPs on the server

 1.1.1.1
 1.1.1.2
 1.1.1.3

 If I put: http_port 33
 the squid LISTENs on all IPs. If I connect to the proxy via 1.1.1.3
 then my outside IP will be the main one: 1.1.1.1

 If I put:
 http_port 1.1.1.1:33
 http_port 1.1.1.3:33

 then the same happens.

 What do I need to do to be able to get the outside IP the one I am
 connecting to?

tcp_outgoing_address + some my* ACLs.
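With the example addresses above, that could be sketched as follows, using the myip ACL type (which matches the local address the client connected to); the ACL names are arbitrary:

```
acl via_ip1 myip 1.1.1.1
acl via_ip3 myip 1.1.1.3

tcp_outgoing_address 1.1.1.1 via_ip1
tcp_outgoing_address 1.1.1.3 via_ip3
```

Each request then leaves on the same address it arrived on.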

Before you go ahead and do that though, ask yourself the question:
do I _really_ need this?
Adding that config will increase your IP-address lock-in and may break some
unforeseen things.

Amos




[squid-users] Problem changing log files path

2008-02-19 Thread Marco

Hi to all,
I'm using OpenSuSE 10.2 with Squid 2.6 and I need to change the path
where cache.log, store.log and access.log are stored.
Currently my system uses the default directory (/var/log/squid); I would
like to move them to /cache/squid_log (a sub-directory of the /cache
squid directory).
I have modified the file /etc/squid/squid.conf with the lines below,

cache_log /cache/squid_log/store.log
access_log /cache/squid_log/access.log squid
cache_store_log /cache/squid_log/store.log

and then restarted squid, but when I try to browse the internet,
in the /cache/squid_log directory I don't see
cache.log, store.log and access.log but strange files (index.html,
etc.)... like URLs.

Where is my mistake?
Thanks to all



[squid-users] Proxy authentication against 2 domains?

2008-02-19 Thread Daniel Teixeira
Hi
I have a squid proxy that is being used as the main proxy for 1 of our
Active Directory domains. I use winbind so the users are able to validate
themselves. 
Now I need that some specific users from another domain (with a trust
relationship)  can also use this proxy as their internet access. 
Access is granted for the users on a specific AD group, and squid allows
that group to browse the internet.

Can I configure winbind to validate users from another domain? What's my
best option?

Thank you very much,

Daniel



Re: [squid-users] Proxy authentication against 2 domains?

2008-02-19 Thread Kinkie
On Feb 19, 2008 11:45 PM, Daniel Teixeira [EMAIL PROTECTED] wrote:
 Hi
 I have a squid proxy that is being used as the main proxy for 1 of our
 Active Directory domains. I use winbind so the users are able to validate
 themselves.
 Now I need that some specific users from another domain (with a trust
 relationship)  can also use this proxy as their internet access.
 Access is granted for the users on a specific AD group, and squid allows
 that group to browse the internet.

 Can I configure winbind to validate users from another domain? What's my
 best option?

You do not need any special setup. Since the domains are in a trust
relationship, it should just work.

-- 
/kinkie


[squid-users] Squid 2.6 hangs after several hours of operation

2008-02-19 Thread badaboom003-asdf
Hi,

I've been running squid 2.6 on my production server in
front of apache for quite a while now with no
problems. Recently, I made several changes to the
server (for example, my search app uses quite a bit
more memory).

Now, squid hangs up after several hours of operation.
As soon as I do an /etc/init.d/squid restart,
everything's ok for a few more hours until it hangs
again and rejects all requests.

I've tried stopping squid, removing swap.state, and
restarting. That didn't fix it. I've also tried
looking in the access.log and cache.log, but couldn't
find any warnings or errors.

Could someone point me in the right direction of how
to go about troubleshooting this problem?

Thanks Much,

Chris



Re: [squid-users] Squid 2.6 hangs after several hours of operation

2008-02-19 Thread Adrian Chadd
* Which version of Squid-2.6 ?
* Graph statistics via SNMP, look for any trends (increased RAM/filedescriptor
  counts, growing CPU usage, etc.)
* Run it inside GDB (with the appropriate .gdbinit script) and then,
  when it hangs, ctrl-c into the debugger and see what's going on



Adrian

On Tue, Feb 19, 2008, [EMAIL PROTECTED] wrote:
 Hi,
 
 I've been running squid 2.6 on my production server in
 front of apache for quite a while now with no
 problems. Recently, I made several changes to the
 server (for example, my search app uses quite a bit
 more memory).
 
 Now, squid hangs up after several hours of operation.
 As soon as I do an /etc/init.d/squid restart,
 everything's ok for a few more hours until it hangs
 again and rejects all requests.
 
 I've tried stopping squid, removing swap.state, and
 restarting. That didn't fix it. I've also tried
 looking in the access.log and cache.log. I couldn't
 find any warnings or errors?
 
 Could someone point me in the right direction of how
 to go about troubleshooting this problem?
 
 Thanks Much,
 
 Chris

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


[squid-users] Squid currently not working.

2008-02-19 Thread Steve Billig
Not too long ago, maybe two months, I was having problems
installing squid. I got it installed back then and it was working
fine. I then stopped using it, and since that was the only thing I
used the server for, I stopped using the server too. I want to start
using the proxy again, so I booted it back up (I changed a few ports
since then, but put most of them back). Currently, in a terminal, if I
type service squid status, it says that squid is stopped. If I type
the command service squid start, it says it failed. For the heck of
it I tried to stop it: failed. Restart: failed. I had problems with this
before, but it had actually worked back then. I checked my config
file (at least what I think is my config file) and it does have the
http_port forwarding to my server correctly, as far as I can tell.

So now I am currently in the jam of finding out why it is currently
not working correctly.

Any help is greatly appreciated.

Thanks.

-- 
-Steve


RE: [squid-users] Squid currently not working.

2008-02-19 Thread Amos Jeffries

 So now I am currently in the jam of finding out why it is currently
 not working correctly.

 cache_log seems like a good place to start looking.

 What OS is this?



Also, does your currently logged-in user have the root privileges needed
to start squid and access all its required files, etc.?

Amos



[squid-users] What is the ICAP chain exactly?

2008-02-19 Thread S.KOBAYASHI
Hello guys,

I've read some documents, but I still don't understand how an ICAP chain
works and how to set it up in squid.conf.
I wonder if squid can control the sequence of connections to several ICAP
services.
It's not only processing ICAP services in a row, but also controlling the
sequence of ICAP services.
Do my thoughts make sense? Is it possible?

Thanks a lot,
Seiji Kobayashi





RE: [squid-users] Squid currently not working.

2008-02-19 Thread Adam Carter
 
 So now I am currently in the jam of finding out why it is currently
 not working correctly.

cache_log seems like a good place to start looking.

What OS is this?



[squid-users] about delay pools

2008-02-19 Thread dleyva11037
Hi, I have 2 channels, one with 128kb and the other with 256kb. Using
delay pools, I want all requests for my country's domains to use the
128kb channel and all other requests to use the other channel.
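A hedged sketch of this with two class-1 delay pools: delay_parameters takes bytes per second, so 128 kbit/s is roughly 16000 bytes/s and 256 kbit/s roughly 32000 bytes/s. The .example.cc domain is a placeholder for the local country domain:

```
acl local_sites dstdomain .example.cc

delay_pools 2
delay_class 1 1
delay_class 2 1

delay_parameters 1 16000/16000      # ~128 kbit/s pool
delay_parameters 2 32000/32000      # ~256 kbit/s pool

delay_access 1 allow local_sites
delay_access 1 deny all
delay_access 2 deny local_sites
delay_access 2 allow all
```

Note that delay pools only shape bandwidth inside Squid; they do not choose which physical link the traffic uses. Steering traffic onto a specific channel would need policy routing on the OS side.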





[squid-users] squid access.log transparent debian 4

2008-02-19 Thread mostafa faridi
I installed squid 2.6 on Debian 4 and I think everything is good, but when
I type

tail -f /var/log/squid/access.log

it does not show me anything, and I think squid does not write a log.
These are my iptables rules:

# Generated by iptables-save v1.3.6 on Tue Feb 19 18:08:20 2008
*nat
:PREROUTING ACCEPT [117:16009]
:POSTROUTING ACCEPT [2:111]
:OUTPUT ACCEPT [23:3703]
-A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT
# Completed on Tue Feb 19 18:08:20 2008
# Generated by iptables-save v1.3.6 on Tue Feb 19 18:08:20 2008
*filter
:INPUT ACCEPT [1186:117352]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [550:173868]
-A FORWARD -i eth1 -j ACCEPT
COMMIT
# Completed on Tue Feb 19 18:08:20 2008

Where is my mistake?
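If the REDIRECT rule is matching but access.log stays empty, the Squid side is worth checking too: in Squid 2.6 the listening port must be marked transparent for intercepted traffic, and the log path must exist and be writable by the squid user. A sketch:

```
# squid.conf
http_port 3128 transparent
access_log /var/log/squid/access.log squid
```

It also helps to run iptables -t nat -L -v -n and confirm the packet counters on the REDIRECT rule are actually increasing when a client browses.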


Re: [squid-users] What is the ICAP chain exactly?

2008-02-19 Thread Alex Rousskov

On Wed, 2008-02-20 at 12:43 +0900, S.KOBAYASHI wrote:

 I'v read some documents but I don't understand yet about ICAP chain how to
 work and how to set it up in squid.conf.
 I wonder if the squid can control sequence of connecting to some ICAP
 service.
 It's not only processing ICAP services in a row, but also controls sequence
 of ICAP services.
 Do my thoughts make sense? Or Is it possible?

For a given HTTP message, Squid 3.0 can select zero or one ICAP service.

Future Squid versions will probably be able to select a list of ICAP
services for a given HTTP message. The services would then be applied
one after another, in a chain- or pipe-like manner, where the output of
the previous service becomes the input of the next one. Developers are
looking for those who need that kind of functionality.

Sorry, I do not know what you mean by "not only processing ICAP services
in a row, but also controls sequence of ICAP services". An example might
help.

Thank you,

Alex.
P.S. Squid2 with some ICAP patches can support ICAP service chaining,
but poorly.
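For reference, attaching the single ICAP service Squid 3.0 supports per message looks roughly like this in squid.conf; the service/class names and the ICAP URI are placeholders for a local ICAP server:

```
icap_enable on
icap_service svc_req reqmod_precache 0 icap://127.0.0.1:1344/reqmod
icap_class class_req svc_req
icap_access class_req allow all
```

ACLs on icap_access decide which messages go to the service; ordering of multiple services (chaining) is what is not yet available.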




[squid-users] original server down

2008-02-19 Thread J. Peng
hello,

How do I handle the case where the original (origin) server is down? I use
squid for reverse-proxying, with some lines like:
cache_peer parentcache.example.com   parent  80 0
cache_peer childcache2.example.com   sibling  80 0
cache_peer childcache3.example.com   sibling  80 0

If the original server is down, does squid go and query the other caches
above? Will squid get a cache-MISS object from its sibling, or can it only
get a cache-MISS object from the parent?

thanks!
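As a rule of thumb, siblings are only consulted for objects they already hold (hits); misses are fetched from a parent or the origin. So origin failover is usually done with a second parent rather than siblings. A hedged sketch, where the second hostname is a placeholder for a backup origin:

```
cache_peer parentcache.example.com  parent 80 0 no-query originserver round-robin
cache_peer parentcache2.example.com parent 80 0 no-query originserver round-robin
```

Squid marks a parent dead after repeated connection failures and sends traffic to the remaining one until the dead peer revives.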


[squid-users] delay_parameters: What is difference between aggregate, network and individual bucket?

2008-02-19 Thread Yong Bong Fong

Dear friends,

 I am just confused about the usage of the aggregate, network and individual
buckets.
If I'm not mistaken, the aggregate bucket is like a public bucket that all
users share, and an individual bucket is one specific
to each user?


Say I set a delay_parameters line as follows:
delay_parameters 2 32000/32000 8000/8000 600/8000
then how does it allocate the bucket limits to each user?

thanks
Regards
Yong
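A hedged reading of that line, assuming pool 2 was declared as class 3 (delay_class 2 3), which is the class that takes three restore/max pairs:

```
# delay_parameters <pool> <aggregate> <network> <individual>
delay_parameters 2 32000/32000 8000/8000 600/8000
#  aggregate : all traffic through the pool shares 32000 bytes/s
#              (with a 32000-byte burst bucket)
#  network   : each /24 client network shares 8000 bytes/s
#  individual: each client IP refills at 600 bytes/s after an
#              8000-byte burst is used up
```

So a single user is capped at 600 bytes/s sustained, but also competes with the other hosts on its /24 for the 8000 bytes/s network bucket and with everyone for the 32000 bytes/s aggregate.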