Re: [squid-users] Weirdness caching objects with a vary header on a accelerator

2007-05-01 Thread Bastian Blank
On Wed, May 02, 2007 at 12:34:51AM +0200, Henrik Nordstrom wrote:
> Tue 2007-05-01 at 11:43 +0200, Bastian Blank wrote:
> > On Mon, Apr 30, 2007 at 10:37:24AM +0200, Bastian Blank wrote:
> > > The response is properly written according to the store.log:
> > > | 1177918188.354 SWAPOUT 00  5DD09DA912DD58C2EFBDAC8382385625  
> > > 200 1177918188-1 1178018188 x-squid-internal/vary - 1/201 GET 
> > > http://jura13.jura.uni-tuebingen.de/
> > > | 1177918188.354 SWAPOUT 00 0001 6401BF3ABD2BDF388518448979017161  
> > > 200 1177918188 1171557682 1177921788 text/html 18413/18413 GET 
> > > http://jura13.jura.uni-tuebingen.de/
> > 
> > The key 5DD09DA912DD58C2EFBDAC8382385625 is built using the Vary header
> > and is therefore unusable to find this object again.
> 
> This problem is seen if you use urlgroups. See the discussion in bug #1947.
> 
> It's not the vary details that are lost, it's the urlgroup, when writing
> out the x-squid-internal object.

I removed the usage of urlgroups from the store key as a workaround and it
works. Thank you.

Bastian

-- 
Too much of anything, even love, isn't necessarily a good thing.
-- Kirk, "The Trouble with Tribbles", stardate 4525.6


Re: [squid-users] Make install error

2007-05-01 Thread squid squid

Hi,

Both "man statfs" and "man statvfs" return "No manual entry", and I am trying
to compile on Solaris 2.5.1. Kindly advise what to look out for in config.log
to determine the problem. Thank you.




From: Henrik Nordstrom <[EMAIL PROTECTED]>
To: squid squid <[EMAIL PROTECTED]>
CC: squid-users@squid-cache.org
Subject: Re: [squid-users] Make install error
Date: Mon, 30 Apr 2007 12:04:03 +0200

Mon 2007-04-30 at 17:15 +0800, squid squid wrote:

> then mv -f ".deps/store_dir.Tpo" ".deps/store_dir.Po"; else rm -f
> ".deps/store_dir.Tpo"; exit 1; fi
> store_dir.c: In function `storeDirGetBlkSize':
> store_dir.c:529: error: too few arguments to function `statfs'
> store_dir.c: In function `storeDirGetUFSStats':
> store_dir.c:568: error: too few arguments to function `statfs'

Odd... didn't know there was more than one statfs() function...

> Kindly advise how can the above error be resolved.

Someone with access to your operating system needs to read the man pages for
statfs and statvfs, and figure out how to use them correctly in your OS.

try

  man statfs

and cross-check that with the usage in the Squid sources.

also try

  man statvfs

If that returns something, then another path is to investigate why
configure didn't pick up the statvfs function. In that case, check
config.log to hopefully see why the test for statvfs failed.
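As a sketch of that last step, you can grep config.log for the statvfs probe. The real config.log lives in the Squid build directory; the two-line excerpt below is simulated so the grep usage is concrete.

```shell
# Simulate a minimal config.log excerpt (in practice, grep the real
# file in the Squid build directory):
cat > /tmp/config.log.excerpt <<'EOF'
configure:1234: checking for statvfs
configure:1240: result: no
EOF
# Find the statvfs probe and its result:
grep -n 'statvfs' /tmp/config.log.excerpt
```

A "result: no" next to the probe usually means the configure test program failed to compile or link on that platform; the lines just above it in the real config.log show the compiler error.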

Regards
Henrik










Re: [squid-users] Transparent proxy testing from the proxy server

2007-05-01 Thread Henrik Nordstrom
Tue 2007-05-01 at 21:45 -0400, Leah Kubik wrote:
> Hi,
> 
> I'm trying to set up Squid as a transparent proxy on a CentOS 4.x system.
> Unfortunately, this means the system is stuck with the default system RPMs
> (version 2.5.STABLE6) (unless someone is making an RPM for CentOS for 4.6,
> but I could not find one.)
> 
> When I configure the server to redirect its own requests to the squid proxy
> in the firewall (to test, as I don't have access to the LAN clients behind it)
> I get a failed ACL:
> 
> 1178066297.760  0 127.0.0.1 TCP_DENIED/403 1339 GET http://google.com/ - 
> NONE/- text/html

Did you allow localhost to use the proxy?

Anything in cache.log?

Have you configured transparent interception properly in squid.conf?
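For reference, Squid 2.5 enables interception through the httpd_accel_* directives; a minimal sketch (values are the usual ones for port-80 interception, adjust to your setup):

```
http_port 3128
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
```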

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Users spamming squid logs

2007-05-01 Thread Henrik Nordstrom
Tue 2007-05-01 at 16:26 -0800, Chris Robertson wrote:

> Any URL blocked explicitly with a Squid ACL will be logged as a 403.

Unless you also have an acl that prevents the request from being logged. See
the access_log directive (or alternatively access_log_access).
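In Squid 2.6, for example, ACLs can be attached to the access_log line; a sketch with a hypothetical acl name and URL pattern:

```
# Hypothetical: skip logging of requests matching the "noisy" acl
acl noisy url_regex -i javaupdate
access_log /usr/local/squid/var/logs/access.log squid !noisy
```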

Regards
Henrik




[squid-users] Transparent proxy testing from the proxy server

2007-05-01 Thread Leah Kubik
Hi,

I'm trying to set up Squid as a transparent proxy on a CentOS 4.x system.
Unfortunately, this means the system is stuck with the default system RPMs
(version 2.5.STABLE6) (unless someone is making an RPM for CentOS for 4.6,
but I could not find one.)

When I configure the server to redirect its own requests to the squid proxy
in the firewall (to test, as I don't have access to the LAN clients behind it)
I get a failed ACL:

1178066297.760  0 127.0.0.1 TCP_DENIED/403 1339 GET http://google.com/ - 
NONE/- text/html
1178066297.761  3 127.0.0.1 TCP_MISS/403 1378 GET http://google.com/ - 
DIRECT/64.233.167.99 text/html

I am wondering if anyone might have an example configuration from a CentOS 4.x 
system for a transparent squid proxy that works that I could try, or if 
anyone would be willing to take a look at my configuration and suggest what 
might be wrong.

The configuration I am using is:

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
hosts_file /etc/hosts
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl lan src 64.233.167.99 192.168.1.0/24
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl purge method PURGE
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow all
http_access deny all
http_reply_access allow all
icp_access allow all
coredump_dir /var/spool/squid

Thanks for any help,
Leah
-- 
Leah Kubik : d416-585-9971x692 : d416-703-5977 : m416-559-6511
Frauerpower! Co. : www.frauerpower.com : Toronto, ON Canada
MSN: [EMAIL PROTECTED] | AIM: frauerpower | Yahoo: h3inous
F9B6 FEFE 080B 8299 D7EA  1270 005C EC73 47C9 B7A6


Re: [squid-users] google

2007-05-01 Thread dhottinger

Quoting Chris Robertson <[EMAIL PROTECTED]>:


[EMAIL PROTECTED] wrote:

Quoting Adrian Chadd <[EMAIL PROTECTED]>:


On Tue, May 01, 2007, [EMAIL PROTECTED] wrote:

I suddenly (last Friday) started having issues when accessing google.com.
My access.log file shows all tcp_miss for google.  Is anyone else
experiencing slow google access?  I did get an email from google that
they were updating their applications (we use google calendars).


I've not heard about it. Do you have mime logging turned on so we can
see the headers w/ the request/reply?



Adrian



1178030214.203538 10.40.15.123 TCP_MISS/200 5959 GET   
http://tbn0.google.com/images? - DIRECT/72.14.211.104 image/jpeg ALL
OW "Visual Search Engine, Search Engines" [Accept:   
*/*\r\nAccept-Language: en\r\nAccept-Encoding: gzip,   
deflate\r\nCookie: PR
EF=ID=2377a880502b45e8:TM=1158762666:LM=1158762666:S=4J3pIHAN5lUxv6nf\r\nReferer:   
http://images.google.com/images?q=newspaper
+comics+political+cartoons+on+the+war+in+Iraq&gbv=2&svnum=10&hl=en&start=40&sa=N&ndsp=20\r\nUser-Agent: Mozilla/5.0   
(Macintos
h; U; Intel Mac OS X; en) AppleWebKit/418.9 (KHTML, like Gecko)   
Safari/419.3\r\nConnection: keep-alive\r\nHost: tbn0.google.c
om\r\n] [HTTP/1.0 200 OK\r\nContent-Type: image/jpeg\r\nServer:   
btfe\r\nContent-Length: 5775\r\nDate: Tue, 01 May 2007 14:36:

53 GMT\r\nConnection: Keep-Alive\r\n\r]


Assuming you have not removed the default cache/no_cache line from your
squid.conf, anything with a question mark in the URL will not be cached.

From 2.6STABLE12's squid.conf.default:

#We recommend you to use the following two lines.
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY

2.5 is the same, except the directive is called no_cache.

In addition, the objects have no freshness information (Expires or
Last-Modified), so even without the explicit requirement within Squid
not to cache GET queries, the listed objects are not cacheable.

Chris
I didn't remove any of the defaults. I am using 2.5 and the acl QUERY
statements are there.  Not sure what you are trying to tell me.


thanks,
ddh


--
Dwayne Hottinger
Network Administrator
Harrisonburg City Public Schools



Re: [squid-users] google

2007-05-01 Thread Chris Robertson

[EMAIL PROTECTED] wrote:

Quoting Adrian Chadd <[EMAIL PROTECTED]>:


On Tue, May 01, 2007, [EMAIL PROTECTED] wrote:

I suddenly (last Friday) started having issues when accessing google.com.
 My access.log file shows all tcp_miss for google.  Is anyone else
experiencing slow google access?  I did get an email from google that
they were updating their applications (we use google calendars).


I've not heard about it. Do you have mime logging turned on so we can
see the headers w/ the request/reply?



Adrian



1178030214.203538 10.40.15.123 TCP_MISS/200 5959 GET 
http://tbn0.google.com/images? - DIRECT/72.14.211.104 image/jpeg ALL
OW "Visual Search Engine, Search Engines" [Accept: 
*/*\r\nAccept-Language: en\r\nAccept-Encoding: gzip, 
deflate\r\nCookie: PR
EF=ID=2377a880502b45e8:TM=1158762666:LM=1158762666:S=4J3pIHAN5lUxv6nf\r\nReferer: 
http://images.google.com/images?q=newspaper
+comics+political+cartoons+on+the+war+in+Iraq&gbv=2&svnum=10&hl=en&start=40&sa=N&ndsp=20\r\nUser-Agent: 
Mozilla/5.0 (Macintos
h; U; Intel Mac OS X; en) AppleWebKit/418.9 (KHTML, like Gecko) 
Safari/419.3\r\nConnection: keep-alive\r\nHost: tbn0.google.c
om\r\n] [HTTP/1.0 200 OK\r\nContent-Type: image/jpeg\r\nServer: 
btfe\r\nContent-Length: 5775\r\nDate: Tue, 01 May 2007 14:36:
53 GMT\r\nConnection: Keep-Alive\r\n\r] 


Assuming you have not removed the default cache/no_cache line from your 
squid.conf, anything with a question mark in the URL will not be cached.


From 2.6STABLE12's squid.conf.default:

#We recommend you to use the following two lines.
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY

2.5 is the same, except the directive is called no_cache.

In addition, the objects have no freshness information (Expires or
Last-Modified), so even without the explicit requirement within Squid
not to cache GET queries, the listed objects are not cacheable.


Chris


Re: [squid-users] Users spamming squid logs

2007-05-01 Thread Chris Robertson

Daniel Appleby wrote:

Did it continue to spam the logs after you blocked it off with an acl?

-Daniel



Any URL blocked explicitly with a Squid ACL will be logged as a 403.

To not log these requests as being blocked...
1) allow them without authentication
2) block them before they hit Squid
or
3) modify the source code

One other thing you could try would be to set an explicit ACL matching 
that URL, and set a deny_info that redirects requests to something else 
(a local repository with the file, a non-existent server, a server that 
returns a 404, etc.).  Any of those methods might slow the Java update 
service down (or satisfy it).
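A sketch of that approach, with a hypothetical acl for the update URL (the pattern and the redirect target are placeholders, not real hosts):

```
# Hypothetical: match the noisy update URL and redirect the deny
acl javaupdate url_regex -i ^http://javaupdate\.example\.com/
http_access deny javaupdate
deny_info http://intranet.example.com/empty.xml javaupdate
```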


Chris


Re: [squid-users] cache_peer - multiple ones

2007-05-01 Thread Henrik Nordstrom
Tue 2007-05-01 at 23:41 +0100, Gareth Edmondson wrote:

> Thanks for the advice here. I read about this name= option earlier in 
> the archives - but I got the impression from previous posters that it 
> was in version 3 of squid and not the stable version that ships with 
> Debian Etch. The stable version is 2.6.5-6.

It's in 2.6 and later.

> cache_peer_access sslproxy allow CONNECT
> cache_peer_access sslproxy deny all
> cache_peer_access  deny CONNECT
> cache_peer_access  allow all
> 
> I'm not sure they are in the right order.

Looks fine.

The order of cache_peer_access lines is important, but only per peer. The
order of the peers is not important.

> >> Everything seems to be working. However when we try and connect to the 
> >> 443 website it challenges us again for the AD username and password. 
> >> Upon entering this the browser challenges us again and again and again - 
> >> simply not letting us through.

One more thing, have you added trust between Squid and the peer for
forwarding of proxy authentication? See the login option to cache_peer.
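A sketch of what that looks like on the peer line (peer name from the thread; port, icp-port and other options are illustrative — login=PASS forwards the user's own proxy credentials to the parent):

```
cache_peer sslproxy parent 8080 7 name=sslproxy login=PASS
```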

Regards
Henrik





Re: [squid-users] cache_peer - multiple ones

2007-05-01 Thread Gareth Edmondson

Hi Henrik, answers inline...

Henrik Nordstrom wrote:

Tue 2007-05-01 at 22:49 +0100, Gareth Edmondson wrote:


  
Now this threw up an error along the lines of having two cache_peer 
names the same. So we edited the hosts file in DNS setting a name to 
resolve to the same IP address. The line now reads:


cache_peer sslproxy 443 parent 7 



There is a name= option to cache_peer to solve this without having to
fake host names..
  
Thanks for the advice here. I read about this name= option earlier in 
the archives - but I got the impression from previous posters that it 
was in version 3 of squid and not the stable version that ships with 
Debian Etch. The stable version is 2.6.5-6.


A quick look at debian.org reveals that 3.0.PRE5-5 is there. I have not 
tried this because we have been advised to stick with the stable branch.
We thought this would work - but it didn't, so we edited the 
cache_peer_access line to say 'cache_peer_access sslproxy allow CONNECT'.



You also need to deny CONNECT from the other..
  

Okay - I think we may have done this. The lines looked something like this

cache_peer_access sslproxy allow CONNECT
cache_peer_access sslproxy deny all
cache_peer_access  deny CONNECT
cache_peer_access  allow all

I'm not sure they are in the right order.

Everything seems to be working. However when we try and connect to the 
443 website it challenges us again for the AD username and password. 
Upon entering this the browser challenges us again and again and again - 
simply not letting us through.



What does your access.log say?
  

I shall take a look in work tomorrow.

Cheers

Gareth


Re: [squid-users] Weirdness caching objects with a vary header on a accelerator

2007-05-01 Thread Henrik Nordstrom
Tue 2007-05-01 at 11:43 +0200, Bastian Blank wrote:
> On Mon, Apr 30, 2007 at 10:37:24AM +0200, Bastian Blank wrote:
> > The response is properly written according to the store.log:
> > | 1177918188.354 SWAPOUT 00  5DD09DA912DD58C2EFBDAC8382385625  200 
> > 1177918188-1 1178018188 x-squid-internal/vary - 1/201 GET 
> > http://jura13.jura.uni-tuebingen.de/
> > | 1177918188.354 SWAPOUT 00 0001 6401BF3ABD2BDF388518448979017161  200 
> > 1177918188 1171557682 1177921788 text/html 18413/18413 GET 
> > http://jura13.jura.uni-tuebingen.de/
> 
> The key 5DD09DA912DD58C2EFBDAC8382385625 is built using the Vary header
> and is therefore unusable to find this object again.

This problem is seen if you use urlgroups. See the discussion in bug #1947.

It's not the vary details that are lost, it's the urlgroup, when writing
out the x-squid-internal object.

Regards
Henrik




Re: [squid-users] cache_peer - multiple ones

2007-05-01 Thread Henrik Nordstrom
Tue 2007-05-01 at 22:49 +0100, Gareth Edmondson wrote:


> Now this threw up an error along the lines of having two cache_peer 
> names the same. So we edited the hosts file in DNS setting a name to 
> resolve to the same IP address. The line now reads:
> 
> cache_peer sslproxy 443 parent 7 

There is a name= option to cache_peer to solve this without having to
fake host names..
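A sketch of two parents on the same host disambiguated with name= (the host name, ports and icp-port here are illustrative, combined with the cache_peer_access pattern discussed elsewhere in this thread):

```
cache_peer upstream.example.com parent 8080 7 name=web
cache_peer upstream.example.com parent 8080 7 name=sslproxy
cache_peer_access sslproxy allow CONNECT
cache_peer_access sslproxy deny all
cache_peer_access web deny CONNECT
cache_peer_access web allow all
```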

> We thought this would work - but it didn't, so we edited the 
> cache_peer_access line to say 'cache_peer_access sslproxy allow CONNECT'.

You also need to deny CONNECT from the other..

> Everything seems to be working. However when we try and connect to the 
> 443 website it challenges us again for the AD username and password. 
> Upon entering this the browser challenges us again and again and again - 
> simply not letting us through.

What does your access.log say?

Regards
Henrik




[squid-users] cache_peer - multiple ones

2007-05-01 Thread Gareth Edmondson

Hi,

After searching the archives, I've decided to ask here.

We have setup a Debian Etch box which uses squid to access an upstream 
proxy run by the education authority. They have given us a username and 
password and it all works on port 8080 (after challenging us for our 
Active Directory username and password). We have the line:


cache_peer  8080 parent 7 stuff) - I do not have access to it here.


Our web browser then points to the Debian box as a proxy on 
10.180.8.4:8080 - web browsing is fine. The problem arises when we want 
to access 443/https websites. The LEA require that we connect again 
through 8080 but use Squid to point to 443 - so I have added another line.


cache_peer  443 parent 7 

Now this threw up an error along the lines of having two cache_peer 
names the same. So we edited the hosts file in DNS setting a name to 
resolve to the same IP address. The line now reads:


cache_peer sslproxy 443 parent 7 

We thought this would work - but it didn't, so we edited the 
cache_peer_access line to say 'cache_peer_access sslproxy allow CONNECT'.


Everything seems to be working. However when we try and connect to the 
443 website it challenges us again for the AD username and password. 
Upon entering this the browser challenges us again and again and again - 
simply not letting us through.


I wonder if anyone has any ideas why this would be. If I have not 
explained it properly please do let me know and I will provide more 
information.


Many thanks in advance,

Gareth Edmondson



[squid-users] NTLM configuration

2007-05-01 Thread Ganesh Balasubramanian
Guys,
Please excuse my ignorance on this one, as I have been working on this proxy
server configuration only for the past couple of weeks. I have installed a
Squid 2.6.x version on my Win2k machine. The same machine also runs the DNS
server and the ADS server. My network has only Windows machines.

Now I want to configure my proxy server to work with NTLM auth. Our internal
application can be configured to use NTLM mode, where it picks up the
logged-in Windows user for authentication. But when I have the proxy
configured on that client machine, it blocks the user auth information. So I
guess we need to add the relevant details to the squid conf file to make it
work with NTLM.

Can you help me out with the basic configuration that has to be added to the
.conf file for NTLM auth to work on Windows?
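For the common Squid + Samba combination, the squid.conf pieces look roughly like the sketch below. The helper path is the usual Unix location for Samba's ntlm_auth and is an assumption; on a native Windows build of Squid the helper and its path will differ.

```
# NTLM auth via Samba's ntlm_auth helper (path is an assumption):
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
# Require authentication for all requests:
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all
```

The machine running the helper must also be joined to the AD domain (via Samba's net join or equivalent) for ntlm_auth to validate credentials.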
 
 
Thanks for all your help on this
 
 
--Ganesh.b




Re: [squid-users] Cache Manager CGI Interface on IIS - I got issues I don't understand

2007-05-01 Thread Andreas Woll

Squid is just listening to the http_port 3128.
All other ports are disabled.

Andreas

At 11:34 01.05.2007, Henrik Nordstrom wrote:

Tue 2007-05-01 at 09:02 -0400, Andreas Woll wrote:
> Hi all,
>
> I didn't find this subject in the newsgroup, so I am asking again for a
> little help.

> Now that the SquidNT service is running, I have tried to access it
> through the CGI interface.
>
> But I always get:
>
> connect: (10061) WSAECONNREFUSED, connection refused.

Then you didn't specify a correct address:port for cachemgr to connect
to. Needs to point to a http_port where Squid is listening for requests.


> In cachemgr.conf is just "localhost".

And what do you use for http_port in squid.conf?

Regards
Henrik




Re[2]: [squid-users] squid_ldap_group troubles

2007-05-01 Thread Sergey A. Kobzar
Hello Henrik,

You was right - host name must follow after all options. I didn't know
this. Now all looks working.

Thanks for help. ;)


Tuesday, May 1, 2007, 6:32:44 PM, you wrote:

> Tue 2007-05-01 at 14:09 +0300, Sergey A. Kobzar wrote:

>> external_acl_type ldap_group %LOGIN 
>> /usr/local/libexec/squid/squid_ldap_group \
>>   -b "ou=Groups,dc=home" -f "(&(memberUid=%u)(cn=%g))" -v 3 localhost \
>>   -D "cn=Guest,ou=DSA,dc=home" -w xxx

> All options need to go before the host name, or they'll get misread as
> host names..

>> May  1 14:00:28 pixel slapd[744]: conn=256 fd=21 ACCEPT from 
>> IP=127.0.0.1:50849 (IP=127.0.0.1:389)
>> May  1 14:00:28 pixel slapd[744]: conn=256 op=0 SRCH 
>> base="ou=Groups,dc=home" scope=2 deref=0 
>> filter="(&(memberUid=sak)(cn=squid-unlim))"

> This search was anonymous. Probably because of the above.

>> May  1 14:00:28 pixel slapd[744]: conn=256 op=0 SRCH attr=1.1
>> May  1 14:00:28 pixel slapd[744]: conn=256 op=0 SEARCH RESULT tag=101 err=0 
>> nentries=0 text=

> And no results were returned by your LDAP..


> Regards
> Henrik


-- 
Best regards,
 Sergey  mailto:[EMAIL PROTECTED]



Re: [squid-users] Squid / Heartbeat / IPtables

2007-05-01 Thread Henrik Nordstrom
Tue 2007-05-01 at 08:38 -0500, Paul Fiero wrote:

> the heartbeat be lost.  Given this configuration we have squid
> configured as a transparent proxy with the following pertinent
> settings as I found them in a couple of different documents on
> transparent proxy:
> http_port 192.168.1.6:3128
> httpd_accel_host virtual
> httpd_accel_port 80
> httpd_accel_with_proxy on
> httpd_accel_uses_host_header on

I would strongly recommend you to upgrade to Squid-2.6.
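For reference, 2.6 replaces the four httpd_accel_* directives with a single option on http_port; a sketch using the address from the config above:

```
http_port 192.168.1.6:3128 transparent
```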

> At this point I also ensured that ipv4 ip_forward is set to 1, then I
> set up an iptables rule to redirect traffic to the correct port:
> iptables -t nat -A PREROUTING -p tcp -i eth1 --dport 80 -j REDIRECT
> --to-port 3128

Ok.

> When I had Squid configured this way and did not have it being run via
> the clustering services all worked fine with policy-based routes and
> all.  It was a sight to behold. 

Fine.

> Then as soon as we reconfigured
> everything for use in the cluster traffic has stopped flowing.  It
> appears to be getting to at least the port on the switch where the
> squid servers are plugged in so I know that the PBR is working.

Hmm.. no idea really. These things are pretty basic.

But check the ARP cache on the router. Maybe it has got the wrong MAC
for the virtual IP you route to. Or maybe you have an IP conflict on
that IP with more than one machine claiming to have the IP active (try
arping for it from the supposedly active server, should get no
responses)

Regards
Henrik




Re: [squid-users] Cache Manager CGI Interface on IIS - I got issues I don't understand

2007-05-01 Thread Henrik Nordstrom
Tue 2007-05-01 at 09:02 -0400, Andreas Woll wrote:
> Hi all,
> 
> I didn't find this subject in the newsgroup, so I am asking again for a little help.
> Now that the SquidNT service is running, I have tried to access it through the
> CGI interface.
> 
> But I always get:
> 
> connect: (10061) WSAECONNREFUSED, connection refused.

Then you didn't specify a correct address:port for cachemgr to connect
to. Needs to point to a http_port where Squid is listening for requests.
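A minimal cachemgr.conf sketch matching a Squid listening on port 3128 (the port number is an assumption about this setup):

```
# cachemgr.conf: one host:port per line, pointing at Squid's http_port
localhost:3128
```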


> In cachemgr.conf is just "localhost".

And what do you use for http_port in squid.conf?

Regards
Henrik




Re: [squid-users] squid_ldap_group troubles

2007-05-01 Thread Henrik Nordstrom
Tue 2007-05-01 at 14:09 +0300, Sergey A. Kobzar wrote:

> external_acl_type ldap_group %LOGIN /usr/local/libexec/squid/squid_ldap_group 
> \
>   -b "ou=Groups,dc=home" -f "(&(memberUid=%u)(cn=%g))" -v 3 localhost \
>   -D "cn=Guest,ou=DSA,dc=home" -w xxx

All options need to go before the host name, or they'll get misread as
host names..
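The posted line with the options reordered so the host comes last (values as in the thread; the -w password was elided there and stays elided here):

```
external_acl_type ldap_group %LOGIN /usr/local/libexec/squid/squid_ldap_group \
  -b "ou=Groups,dc=home" -f "(&(memberUid=%u)(cn=%g))" \
  -D "cn=Guest,ou=DSA,dc=home" -w xxx -v 3 localhost
```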

> May  1 14:00:28 pixel slapd[744]: conn=256 fd=21 ACCEPT from 
> IP=127.0.0.1:50849 (IP=127.0.0.1:389)
> May  1 14:00:28 pixel slapd[744]: conn=256 op=0 SRCH base="ou=Groups,dc=home" 
> scope=2 deref=0 filter="(&(memberUid=sak)(cn=squid-unlim))"

This search was anonymous. Probably because of the above.

> May  1 14:00:28 pixel slapd[744]: conn=256 op=0 SRCH attr=1.1
> May  1 14:00:28 pixel slapd[744]: conn=256 op=0 SEARCH RESULT tag=101 err=0 
> nentries=0 text=

And no results were returned by your LDAP..


Regards
Henrik





Re: [squid-users] Time stamp in access.log

2007-05-01 Thread Andreas Woll

Have a look here:
http://software.ccschmidt.de/downloads/utc.zip

It's a unix time stamp converter for Windows.


At 11:14 01.05.2007, jeff donovan wrote:

greetings

how can I get an accurate time stamp in my access.log? Right now it
looks like this:

1178025553.639175 192.207.19.129 TCP_MISS/200 11249 GET http:blah
blah

How can I decode that stamp? Or can I change it to something human :)

-jeff




Re: [squid-users] Time stamp in access.log

2007-05-01 Thread Slacker
jeff donovan, on 05/01/2007 08:14 PM [GMT+500], wrote:
> greetings
>
> how can I get an accurate time stamp in my access.log? Right now it
> looks like this:
>
> 1178025553.639175 192.207.19.129 TCP_MISS/200 11249 GET http:blah
> blah
>
> How can I decode that stamp? Or can I change it to something human :)
>
> -jeff
It's already covered in the Squid FAQ, see:

http://wiki.squid-cache.org/SquidFaq/SquidLogs#head-de34519356ecd6791303987f0ee79b043199374b

and also look into logformat directive in squid.conf
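To decode one of these timestamps by hand, the epoch part can be fed straight to GNU date (the -d @N form is a GNU extension; on BSD systems use "date -r N" instead):

```shell
# Decode the epoch timestamp from the access.log line above:
date -u -d @1178025553
```

Alternatively, "emulate_httpd_log on" in squid.conf switches access.log to the human-readable httpd common log format.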

Thanks.




[squid-users] Time stamp in access.log

2007-05-01 Thread jeff donovan

greetings

how can I get an accurate time stamp in my access.log? Right now it
looks like this:


1178025553.639175 192.207.19.129 TCP_MISS/200 11249 GET http:blah  
blah


How can I decode that stamp? Or can I change it to something human :)

-jeff 


Re: [squid-users] google

2007-05-01 Thread dhottinger

Quoting Adrian Chadd <[EMAIL PROTECTED]>:


On Tue, May 01, 2007, [EMAIL PROTECTED] wrote:

I suddenly (last Friday) started having issues when accessing google.com.
 My access.log file shows all tcp_miss for google.  Is anyone else
experiencing slow google access?  I did get an email from google that
they were updating their applications (we use google calendars).


I've not heard about it. Do you have mime logging turned on so we can
see the headers w/ the request/reply?



Adrian



1178030214.203538 10.40.15.123 TCP_MISS/200 5959 GET  
http://tbn0.google.com/images? - DIRECT/72.14.211.104 image/jpeg ALL
OW "Visual Search Engine, Search Engines" [Accept:  
*/*\r\nAccept-Language: en\r\nAccept-Encoding: gzip,  
deflate\r\nCookie: PR
EF=ID=2377a880502b45e8:TM=1158762666:LM=1158762666:S=4J3pIHAN5lUxv6nf\r\nReferer:  
http://images.google.com/images?q=newspaper
+comics+political+cartoons+on+the+war+in+Iraq&gbv=2&svnum=10&hl=en&start=40&sa=N&ndsp=20\r\nUser-Agent: Mozilla/5.0  
(Macintos
h; U; Intel Mac OS X; en) AppleWebKit/418.9 (KHTML, like Gecko)  
Safari/419.3\r\nConnection: keep-alive\r\nHost: tbn0.google.c
om\r\n] [HTTP/1.0 200 OK\r\nContent-Type: image/jpeg\r\nServer:  
btfe\r\nContent-Length: 5775\r\nDate: Tue, 01 May 2007 14:36:

53 GMT\r\nConnection: Keep-Alive\r\n\r]
1178030214.205538 10.40.15.123 TCP_MISS/200 5535 GET  
http://tbn0.google.com/images? - DIRECT/72.14.211.99 image/jpeg ALLO
W "Visual Search Engine, Search Engines" [Accept:  
*/*\r\nAccept-Language: en\r\nAccept-Encoding: gzip,  
deflate\r\nCookie: PRE
F=ID=2377a880502b45e8:TM=1158762666:LM=1158762666:S=4J3pIHAN5lUxv6nf\r\nReferer:  
http://images.google.com/images?q=newspaper+
comics+political+cartoons+on+the+war+in+Iraq&gbv=2&svnum=10&hl=en&start=40&sa=N&ndsp=20\r\nUser-Agent: Mozilla/5.0  
(Macintosh
; U; Intel Mac OS X; en) AppleWebKit/418.9 (KHTML, like Gecko)  
Safari/419.3\r\nConnection: keep-alive\r\nHost: tbn0.google.co
m\r\n] [HTTP/1.0 200 OK\r\nContent-Type: image/jpeg\r\nServer:  
btfe\r\nContent-Length: 5351\r\nDate: Tue, 01 May 2007 14:36:5

3 GMT\r\nConnection: Keep-Alive\r\n\r]
1178030214.361352 10.40.15.123 TCP_MISS/200 3822 GET  
http://tbn0.google.com/images? - DIRECT/72.14.211.104 image/jpeg ALL
OW "Visual Search Engine, Search Engines" [Accept:  
*/*\r\nAccept-Language: en\r\nAccept-Encoding: gzip,  
deflate\r\nCookie: PR
EF=ID=2377a880502b45e8:TM=1158762666:LM=1158762666:S=4J3pIHAN5lUxv6nf\r\nReferer:  
http://images.google.com/images?q=newspaper
+comics+political+cartoons+on+the+war+in+Iraq&gbv=2&svnum=10&hl=en&start=40&sa=N&ndsp=20\r\nUser-Agent: Mozilla/5.0  
(Macintos
h; U; Intel Mac OS X; en) AppleWebKit/418.9 (KHTML, like Gecko)  
Safari/419.3\r\nConnection: keep-alive\r\nHost: tbn0.google.c
om\r\n] [HTTP/1.0 200 OK\r\nContent-Type: image/jpeg\r\nServer:  
btfe\r\nContent-Length: 3638\r\nDate: Tue, 01 May 2007 14:36:

54 GMT\r\nConnection: Keep-Alive\r\n\r]
1178030214.393366 10.40.15.123 TCP_MISS/200 4142 GET  
http://tbn0.google.com/images? - DIRECT/72.14.211.99 image/jpeg ALLO
W "Visual Search Engine, Search Engines" [Accept:  
*/*\r\nAccept-Language: en\r\nAccept-Encoding: gzip,  
deflate\r\nCookie: PRE
F=ID=2377a880502b45e8:TM=1158762666:LM=1158762666:S=4J3pIHAN5lUxv6nf\r\nReferer:  
http://images.google.com/images?q=newspaper+
comics+political+cartoons+on+the+war+in+Iraq&gbv=2&svnum=10&hl=en&start=40&sa=N&ndsp=20\r\nUser-Agent: Mozilla/5.0  
(Macintosh
; U; Intel Mac OS X; en) AppleWebKit/418.9 (KHTML, like Gecko)  
Safari/419.3\r\nIf-Modified-Since: Tue, 01 May 2007 14:21:53 G
MT\r\nConnection: keep-alive\r\nHost: tbn0.google.com\r\n] [HTTP/1.0  
200 OK\r\nContent-Type: image/jpeg\r\nServer: btfe\r\nCo
ntent-Length: 3958\r\nDate: Tue, 01 May 2007 14:36:54  
GMT\r\nConnection: Keep-Alive\r\n\r]



--
Dwayne Hottinger
Network Administrator
Harrisonburg City Public Schools



Re: [squid-users] google

2007-05-01 Thread Adrian Chadd
On Tue, May 01, 2007, [EMAIL PROTECTED] wrote:
> I suddenly (last Friday) started having issues when accessing google.com.
>  My access.log file shows all tcp_miss for google.  Is anyone else  
> experiencing slow google access?  I did get an email from google that  
> they were updating their applications (we use google calendars).

I've not heard about it. Do you have mime logging turned on so we can
see the headers w/ the request/reply?



Adrian



[squid-users] Squid / Heartbeat / IPtables

2007-05-01 Thread Paul Fiero

Greetings all, again,
I am back with yet more questions, though hopefully, this time, I
have better information for you.

We have moved past issues with trying to decide how to do our
failover with squid on our new router infrastructure.  We will be
using policy-based routing (PBR) pointing at a cluster of squid nodes.
At this point it's going to be configured for high-availability and
not for load-balancing, yet.

In any case here is my situation now.  :o)
I have my two Squid servers configured with heartbeat so that we
have one active node and one passive node waiting for failover should
the heartbeat be lost.  Given this configuration we have squid
configured as a transparent proxy with the following pertinent
settings as I found them in a couple of different documents on
transparent proxy:
http_port 192.168.1.6:3128
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on

At this point I also ensured that ipv4 ip_forward is set to 1, then I
set up an iptables rule to redirect traffic to the correct port:
iptables -t nat -A PREROUTING -p tcp -i eth1 --dport 80 -j REDIRECT
--to-port 3128

When I had Squid configured this way and did not have it being run via
the clustering services all worked fine with policy-based routes and
all.  It was a sight to behold.  Then as soon as we reconfigured
everything for use in the cluster traffic has stopped flowing.  It
appears to be getting to at least the port on the switch where the
squid servers are plugged in so I know that the PBR is working.

Somewhere/somehow I'm pretty sure the issue has to do with the way
heartbeat runs the NICs on the Squid server.

So the question: given the above information regarding the squid
configuration, ip_forwarding, and iptables, can anyone point me to a
source of information for fixing the problem, or give me the data I
need?

Thanks all, in advance, for at least being patient with me.  I don't post
much because our Squid system has been running pretty much flawlessly
since I built it out several years ago.  It's just that times are
changing and I've got to accommodate those changes.

If you need to reply, please do so either here privately at
paulfierogmailcom or on the list; either one.

--
May have been the losing side...not convinced it was the wrong one.

Keep Flyin'

PFiero


[squid-users] Cache Manager CGI Interface on IIS - I got issues I don't understand

2007-05-01 Thread Andreas Woll

Hi all,

I didn't find this subject in the newsgroup, so I am asking again for a
little help.  Now that the SquidNT service is running, I tried to access
it through the CGI interface.


But I always get:

connect: (10061) WSAECONNREFUSED, connection refused.

IIS is bound to all IPs of the machine including loopback.

ACLs in squid.conf:
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8

# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

In cachemgr.conf is just "localhost".

The password tag in squid.conf is set like this:
cachemgr_passwd 2secure4you all

The cache_mgr tag is set to webmaster.

What did I do wrong?

Andreas



Re: [squid-users] Squid authenticating to 2 Separate Active Directory Domains

2007-05-01 Thread Guido Serassio

Hi,

At 18.07 30/04/2007, Ric Lonsdale wrote:

Hi,

I want to implement Squid, using Red Hat Enterprise 4.0, with authentication
via NTLM, using Samba, to 2 separate Windows 2003 Active Directory domains.

These domains do not trust each other.

Is it possible to setup Samba so that it queries one domain first, then if
the user does not exist on that domain, it then queries the other domain?


This cannot be done using Samba.
It's a Windows domain membership problem: your Samba machine, like an
ordinary Windows machine, can be a member of only one domain.



If you think my question should be directed to Samba developers please let
me know, but I know a lot of you have experience of Squid with AD setups.


I think that the Samba guys cannot change the Windows architecture  :-)

Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



[squid-users] squid_ldap_group troubles

2007-05-01 Thread Sergey A. Kobzar
Hello guys,

I'd like to use LDAP groups to set up access rights for users.

Current configuration:

===

auth_param basic program /usr/local/libexec/squid/squid_ldap_auth \
  -b "ou=Users,dc=home" -v 3 localhost
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

external_acl_type ldap_group %LOGIN /usr/local/libexec/squid/squid_ldap_group \
  -b "ou=Groups,dc=home" -f "(&(memberUid=%u)(cn=%g))" -v 3 localhost \
  -D "cn=Guest,ou=DSA,dc=home" -w xxx

[skipped]

acl CONNECT method CONNECT
acl ldap_unlim external ldap_group squid-unlim

[skipped]

http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access allow ldap_unlim
http_access deny all

===

LDAP group:

$ ldapsearch -LLL -s sub -b "ou=Groups,dc=home" -D "cn=Guest,ou=DSA,dc=home" -w 
xxx "(&(memberUid=sak)(cn=squid-unlim))"
dn: cn=squid-unlim,ou=Groups,dc=home
objectClass: top
objectClass: posixGroup
cn: squid-unlim
gidNumber: 2001
memberUid: sak


squid_ldap_group appears to be working:

# /usr/local/libexec/squid/squid_ldap_group -h 127.0.0.1 -b "ou=Groups,dc=home" 
-f "(&(memberUid=%u)(cn=%g))" -D "cn=Guest,ou=DSA,dc=home" -w xxx -v 3 -d
sak squid-unlim
Connected OK
group filter '(&(memberUid=sak)(cn=squid-unlim))', searchbase 
'ou=Groups,dc=home'
OK

but when I try to access an Internet site, I get:

The following error was encountered:

Access Denied. 
Access control configuration prevents your request from being allowed
at this time. Please contact your service provider if you feel this is
incorrect.

In slapd.log:

May  1 14:00:28 pixel slapd[744]: conn=255 fd=21 ACCEPT from IP=127.0.0.1:51366 
(IP=127.0.0.1:389)
May  1 14:00:28 pixel slapd[744]: conn=255 op=0 BIND 
dn="uid=sak,ou=Users,dc=home" method=128
May  1 14:00:28 pixel slapd[744]: conn=255 op=0 BIND 
dn="uid=sak,ou=Users,dc=home" mech=SIMPLE ssf=0
May  1 14:00:28 pixel slapd[744]: conn=255 op=0 RESULT tag=97 err=0 text=
May  1 14:00:28 pixel slapd[744]: conn=255 op=1 UNBIND
May  1 14:00:28 pixel slapd[744]: conn=255 fd=21 closed
May  1 14:00:28 pixel slapd[744]: conn=256 fd=21 ACCEPT from IP=127.0.0.1:50849 
(IP=127.0.0.1:389)
May  1 14:00:28 pixel slapd[744]: conn=256 op=0 SRCH base="ou=Groups,dc=home" 
scope=2 deref=0 filter="(&(memberUid=sak)(cn=squid-unlim))"
May  1 14:00:28 pixel slapd[744]: conn=256 op=0 SRCH attr=1.1
May  1 14:00:28 pixel slapd[744]: conn=256 op=0 SEARCH RESULT tag=101 err=0 
nentries=0 text=
May  1 14:00:28 pixel slapd[744]: conn=256 op=1 UNBIND
May  1 14:00:28 pixel slapd[744]: conn=256 fd=21 closed

# squid -v
Squid Cache: Version 2.6.STABLE12

Where am I going wrong?


Thanks for any help.


-- 
Best regards,
 Sergey  mailto:[EMAIL PROTECTED]



[squid-users] google

2007-05-01 Thread dhottinger
I suddenly (last Friday) started having issues when accessing google.com.
My access.log file shows all tcp_miss for google.  Is anyone else
experiencing slow google access?  I did get an email from google that
they were updating their applications (we use google calendars).


thanks,

ddh


--
Dwayne Hottinger
Network Administrator
Harrisonburg City Public Schools



Re: [squid-users] Weirdness caching objects with a vary header on a accelerator

2007-05-01 Thread Bastian Blank
On Mon, Apr 30, 2007 at 10:37:24AM +0200, Bastian Blank wrote:
> The response is properly written according to the store.log:
> | 1177918188.354 SWAPOUT 00  5DD09DA912DD58C2EFBDAC8382385625  200 
> 1177918188-1 1178018188 x-squid-internal/vary - 1/201 GET 
> http://jura13.jura.uni-tuebingen.de/
> | 1177918188.354 SWAPOUT 00 0001 6401BF3ABD2BDF388518448979017161  200 
> 1177918188 1171557682 1177921788 text/html 18413/18413 GET 
> http://jura13.jura.uni-tuebingen.de/

The key 5DD09DA912DD58C2EFBDAC8382385625 is built using the vary header
and is therefore unusable to find this object again.

I will try to provide a patch.

Bastian

-- 
You can't evaluate a man by logic alone.
-- McCoy, "I, Mudd", stardate 4513.3


Re: [squid-users] DigiChat Java Applet does not connect to DigiChat

2007-05-01 Thread Henrik Nordstrom
tis 2007-05-01 klockan 01:47 -0700 skrev omero omero:
> Hello,
>
> DigiChat Java Applet does not connect to DigiChat
> servers
> I am using squid 2.6 on Windows xp sp2

What does access.log say when they try to connect?

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Re: users are so angry {NCSA authentication} ask for password with every new page

2007-05-01 Thread Jamie Learmonth
phpdevster wrote:
> can you try the configuration and tell me if that is normal?
>
> extra2.info  88
>
> user : squid
> pass : 1122
>
I do not get prompted for a password on every new page. Perhaps you have
resolved it.

Jamie


[squid-users] DigiChat Java Applet does not connect to DigiChat

2007-05-01 Thread omero omero
Hello,

DigiChat Java Applet does not connect to DigiChat
servers.
I am using squid 2.6 on Windows XP SP2.

I have tried to set the proxy to be transparent using
the following:

http_port 10.143.219.1:8542 transparent
always_direct allow all

It did not work.

Is there any other configuration to be added? Please help.

Thank you.





Re: [squid-users] Squid breaks Apache mod_rewrite for canonical hostnames

2007-05-01 Thread Henrik Nordstrom
mån 2007-04-30 klockan 19:34 -0700 skrev [EMAIL PROTECTED]:
> I'm running an Ubuntu LAMP server with standard
> configuration. I have the following lines in my
> .htaccess file for mod_rewrite (domain name changed):
> 
> Options Includes ExecCGI +Indexes +FollowSymLinks
> RewriteEngine On
> RewriteCond %{HTTP_HOST} ^mysite\.com [NC]
> RewriteRule ^(.*) http://www.mysite.com/$1 [R=301,L]
> 
> It works just fine.

This configuration is sensitive to the domain name, redirecting the user
to www if no www was specified.
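
The behaviour of those rewrite rules can be modelled with a small sketch (mysite.com is the placeholder domain from the post; the function is an illustrative model of the redirect logic, not Apache's implementation):

```python
def canonical_redirect(host: str, path: str):
    """Return a (status, location) redirect, or None if no rewrite applies."""
    # Mirrors: RewriteCond %{HTTP_HOST} ^mysite\.com [NC]
    # [NC] makes the match case-insensitive; the regex is anchored only
    # at the start, so startswith() is a faithful approximation.
    if host.lower().startswith("mysite.com"):
        # Mirrors: RewriteRule ^(.*) http://www.mysite.com/$1 [R=301,L]
        return 301, "http://www.mysite.com" + path
    return None

print(canonical_redirect("mysite.com", "/index.html"))
print(canonical_redirect("www.mysite.com", "/index.html"))
```

Requests for the bare domain get a 301 to the www form; requests already on www pass through untouched, which is exactly why Henrik's point about the accelerator configuration matters.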

> http_port xxx.xxx.xxx.xxx:80
> defaultsite=www.mysite.com

And this one is not: it supports only the single www domain.  To support
domain-based virtual hosting you need the vhost option in addition.
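
A minimal sketch of the accelerator line with virtual-host support added, assuming Squid 2.6 http_port syntax (the IP placeholder is kept from the original post):

```
http_port xxx.xxx.xxx.xxx:80 accel defaultsite=www.mysite.com vhost
```

With vhost, Squid reconstructs the URL from the request's Host header, so the bare-domain request reaches Apache with its original hostname and the RewriteCond can fire.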

Regards
Henrik


signature.asc
Description: This is a digitally signed message part