Re: [squid-users] squid with dansguardian

2013-10-08 Thread Dave Burkholder

There's no acl to deny in

http_access deny myLan


Try something like

http_access deny myLan all


Or you could try:

acl fb dstdomain .facebook.com
http_access deny myLan fb

In your squid logs are you seeing the LAN IP address or 127.0.0.1 for 
every request? If the latter then you need the follow_x_forwarded_for 
that Amos mentioned.


-Dave



On 10/8/2013 2:13 AM, Stefano Malini wrote:

Yes Dave,
in squid.conf I set
acl myLan src 192.168.1.0/24
and
http_access deny myLan

to test whether squid blocks me, but I can still browse. I don't understand why.

My iptables rule:

target prot opt source   destination
REDIRECT   tcp  --  anywhere anywhere tcp dpt:http redir ports 8080

Dansguardian network config.

# the port that DansGuardian listens to.
filterport = 8080

# the ip of the proxy (default is the loopback - i.e. this server)
proxyip = 127.0.0.1

# the port DansGuardian connects to proxy on
proxyport = 3128

Squid

acl myLan src 192.168.1.0/24
and
http_access deny myLan

http_port 3128

Dansguardian is running, because it does stop me from browsing some blocked
sites! I'll have to try again this evening.



Amos thanks, I'll try this evening; I didn't know that directive.

2013/10/8 Amos Jeffries :

On 8/10/2013 12:58 p.m., Dave Burkholder wrote:

No squid is not bypassed.  The order flow is:

Browser -> Dansguardian -> Squid -> Internet

If you're wanting to limit access via squid ACLs, that's another aspect
altogether.

acl myLan src 10.0.4.0/24

http_access deny myLan all


Do you have something like that in squid.conf?


Don't forget the follow_x_forwarded_for to determine what the client on the
other side of DG is.

   follow_x_forwarded_for allow localhost
   follow_x_forwarded_for deny all


Amos




Re: [squid-users] squid with dansguardian

2013-10-07 Thread Dave Burkholder

No squid is not bypassed.  The order flow is:

Browser -> Dansguardian -> Squid -> Internet

If you're wanting to limit access via squid ACLs, that's another aspect 
altogether.


acl myLan src 10.0.4.0/24

http_access deny myLan all


Do you have something like that in squid.conf?

On 10/7/2013 5:00 PM, Stefano Malini wrote:

I'm sorry Dave, but this way the squid proxy doesn't affect browsing.

Trying to deny access to my whole network (deny myLan) in
squid.conf, it doesn't stop me and I can browse as I want!

At the moment every http request (dport 80) is redirected --to-port
8080 (dansguardian). Is squid bypassed?

2013/10/7 Stefano Malini :

Thank you Dave! It's running.
Eliezer, your answer showed me the usefulness of the cache_peer directive!

2013/10/7 Dave Burkholder :

If you want filtering, iptables should redirect to port 8080 instead of
3128.

Also, squid's 3128 should not be in transparent mode.

If you make those two changes, you should be operational.
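Putting those two changes together, the setup would look something like the following (a sketch only; the interface name eth0 is an assumption, the ports are the ones from this thread):

```shell
# Redirect client port-80 traffic to DansGuardian (8080), NOT to Squid
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  -j REDIRECT --to-ports 8080

# squid.conf: a plain (non-transparent) proxy port, since DansGuardian
# connects to Squid as an ordinary proxy client on 127.0.0.1:3128
# http_port 3128
```

The flow is then browser -> iptables -> DansGuardian:8080 -> Squid:3128 -> Internet.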

-Dave


On 10/6/2013 12:59 PM, Stefano Malini wrote:

Dear all,
this is my first message because I'm having some difficulties
configuring a proxy server on a Raspberry Pi with the Raspbian distro.

I installed Squid 3.1.20 and Dansguardian.

Squid listens on port 3128 in transparent mode, and it runs. I set
iptables to redirect http requests to port 3128.
The http requests show up in squid's cache/access log files, so it
works.

I also installed Dansguardian and configured it to listen on port 8080,
but it seems that Squid doesn't communicate with Dansguardian.
In the dansguardian.conf file I set the proxy ip to 127.0.0.1, port 3128.

I think it should be easy to solve, but so far it's still unsolved.

Do you have any ideas?

Thanks






Re: [squid-users] office 365 not accessible via squid proxy

2013-08-20 Thread Dave Burkholder
I missed these later emails when I replied this morning that I was 
getting the same problem. I checked, and thankfully the following line 
fixes my problem as well.


"forward_max_tries 25"



On 08/20/2013 05:31 AM, Gaurav Saxena wrote:

Thx that resolved the issue.

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: 20 August 2013 14:44
To: squid-users@squid-cache.org
Subject: Re: [squid-users] office 365 not accessible via squid proxy

On 20/08/2013 8:45 p.m., Gaurav Saxena wrote:

This is the log when I access that url -

There you go:


1376988310.614  0 182.71.29.59 TCP_MISS/503 0 CONNECT
outlook.office365.com:443 - HIER_NONE/- -

outlook.office365.com domain resolves to a set of 25 IP addresses, most of
which will reject connections depending on the part of the planet you are
in.
Squid by default tries the first 10 connection paths (ie the first 10 of
those 12 IPv6 addresses) before giving up.

You can avoid this failure by setting "forward_max_tries 25".



1376988310.980  0 182.71.29.59 TCP_DENIED/403 3645 CONNECT
xsi.outlook.com:10106 - HIER_NONE/- text/html

This is denied because port 10106 is not a known SSL_port. Add it to your
SSL_port ACL definition if you want these connections to go through (you may
or may not).
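In squid.conf terms, Amos's suggestion would be something like this (a sketch; SSL_ports and the CONNECT deny rule are the stock squid.conf names, shown here only for context):

```
# Allow CONNECT tunnels to port 10106 in addition to the defaults
acl SSL_ports port 10106

# ...which matters because of the stock rule:
# http_access deny CONNECT !SSL_ports
```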

Amos






Re: [squid-users] office 365 not accessible via squid proxy

2013-08-20 Thread Dave Burkholder

On 08/20/2013 03:57 AM, Gaurav Saxena wrote:

Hi,
I am not able to access o365 outlook/calendar/people using this url -
https://outlook.office365.com/owa/syntonicwireless.com when I am accessing
internet via squid proxy. Without the proxy this url works.
I can, though, access o365 outlook/calendar/people via proxy using these urls
- http://mail.office365.com and http://www.outlook.com/syntonicwireless.com.
Can anyone help me on  this?

Thx
Gaurav


I've encountered the same problem with Outlook & Office365 on Squid 
3.3.8. I don't know whether it's related to Outlook's certificate 
management. It doesn't seem to be, because I couldn't get Outlook to connect 
whether or not I was using SSLBUMP.


Re: [squid-users] Workstation IP on SSLBUMP

2013-07-22 Thread Dave Burkholder

Ok, yes, that looks like my problem. Thanks for the reply.

What sort of timeline is there for fixing it in the stable release?

On 7/22/2013 9:52 PM, Amos Jeffries wrote:


On 23/07/2013 5:08 a.m., Dave wrote:

Hello everyone,

I'm running squid 3.3.8 and I just got sslbump working. I noticed in 
the squid logs, however, that for https connections, the IP is 
127.0.0.1 instead of the LAN IP.


Is that the price of sslbump or did I do something wrong? Relevant 
config lines included below...


From your config I think you are hitting
http://bugs.squid-cache.org/show_bug.cgi?id=3792

Amos




[squid-users] Workstation IP on SSLBUMP

2013-07-22 Thread Dave

Hello everyone,

I'm running squid 3.3.8 and I just got sslbump working. I noticed in the 
squid logs, however, that for https connections, the IP is 127.0.0.1 
instead of the LAN IP.


Is that the price of sslbump or did I do something wrong? Relevant 
config lines included below...


Thanks,

Dave

http_port 3128 ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/ssl_cert/thinkwell.pem


always_direct allow all
ssl_bump server-first all
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER
sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
sslcrtd_children 5


follow_x_forwarded_for allow localhost


[squid-users] request_header_access & request_header_replace

2013-07-19 Thread Dave Burkholder

Hi everyone,

I'm running an ICAP client that can't read sdch content encodings so I 
want to strip those headers out. (I realize this is mostly an edge case 
anyway, but never mind that for now).


I have these rules to strip out the Accept-Encoding header if it 
contains sdch.


acl sdch req_header Accept-Encoding [-i] sdch

request_header_access Accept-Encoding deny sdch


What is the relationship between request_header_access and 
request_header_replace?


If I want to strip sdch and allow gzip, would the following ruleset 
accomplish this?


acl sdch req_header Accept-Encoding [-i] sdch

request_header_access Accept-Encoding deny sdch

request_header_replace Accept-Encoding Accept-Encoding: gzip


Does the request_header_replace only fire if request_header_access 
actually matched?
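(For reference, the squid.conf documentation describes request_header_replace as supplying a fixed replacement value for headers that request_header_access has denied, so under that reading the pair would be written as:)

```
acl sdch req_header Accept-Encoding -i sdch

# Drop Accept-Encoding only when it contains sdch...
request_header_access Accept-Encoding deny sdch
# ...and send this fixed value in place of any denied header
request_header_replace Accept-Encoding gzip
```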


Thanks,

Dave


RE: [squid-users] Send FileZilla FTP traffic through ICAP server

2013-04-25 Thread Dave Burkholder
Thanks so much for your replies here, Alex. 

>> If you must use FileZilla,
The FTP client software, FileZilla / Cyberduck / etc, isn't the issue. The 
issue is sending traffic to an ICAP server.

>> Our FTP gateway project adds that functionality to Squid.

I'm very glad to hear about this project; I'd missed reading about it. This 
looks like just what I need.

You said it's not yet ready for production use. Does the May 2013 ETA mean ETA 
of beta-quality code or ETA of production-ready code?

Thanks!

Dave


-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
Sent: Thursday, April 25, 2013 10:17 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Send FileZilla FTP traffic through ICAP server


On 04/25/2013 08:08 AM, Alex Rousskov wrote:

> Dave, it looks like FileZilla did not receive FTP server Hello from 
> Squid. I suggest that you take packet captures before and after Squid, 
> to see whether Squid itself has received FTP server Hello from the FTP 
> server. If Squid connected to the FTP server but received nothing, 
> then the problem is on the FTP server side. Otherwise, the problem may 
> be with Squid.


I forgot to mention that even if you succeed with making CONNECT work, it will 
not help you with ICAP inspections because Squid will only send CONNECT request 
to your ICAP server and not the FTP traffic that happens inside the HTTP 
CONNECT tunnel.

If you must use FileZilla, and FileZilla does not support sending HTTP requests 
with ftp://urls to HTTP proxies (instead of using CONNECT tunnels with raw FTP 
inside), then you must use an FTP proxy that supports ICAP, not an HTTP proxy.

Our FTP gateway project adds that functionality to Squid. It is not ready for 
production use, but simple FTP transactions are supported and code is 
available: http://wiki.squid-cache.org/Features/FtpGateway


HTH,

Alex.




RE: [squid-users] Re: Send FileZilla FTP traffic through ICAP server

2013-04-25 Thread Dave Burkholder
Syaifuddin wrote
> is that youtube use ftp?? if not why post this question on thread youtube?

No, this has nothing to do with the youtube thread. I thought I was sending
a blank email to the squid-users@squid-cache.org list and starting an
entirely new thread. Don't know why it didn't; sorry for my mistake, folks.



--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Youtube-Changes-tp4659599
p4659633.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] Send FileZilla FTP traffic through ICAP server

2013-04-24 Thread Dave
Good evening everyone,

Using Squid 3.3.3 on Centos 6.4. I need to be able to send FTP client
traffic through an ICAP server for Data Loss Prevention (DLP) purposes.

I have the following ACLs defined in squid.conf

***
acl ftp proto FTP
acl ftp_port port 20 21

http_access allow ftp_port connect
http_access allow ftp
***

However, when I attempt to connect to my FTP server via FileZilla, I get the
following squid log:

***
366851550.677396 192.168.137.1 NONE/200 0 CONNECT
ftp.thinkwelldesigns.com:21 - HIER_DIRECT/208.106.209.235 -
***

For its part, FileZilla reports:
***
Status: Connecting to ftp.thinkwelldesigns.com through proxy
Status: Connecting to 192.168.137.128:3128...
Status: Connection with proxy established, performing handshake...
Response:   Proxy reply: HTTP/1.1 200 Connection established
Status: Connection established, waiting for welcome message...
Error:  Connection timed out
Error:  Could not connect to server
***


It seems I'm almost there, but not quite. Any help for me?

Thanks,

Dave




RE: [squid-users] Windows Updates on 3.2.6

2013-01-31 Thread Dave Burkholder
Well, I'm not too familiar with tcpdump. It was tcpdump -i any host 10.0.2.150 
-w tcpdump.txt

I'll be glad to remove the range_offset_limit -1 line. I'd added it because the 
Squid wiki page @ http://wiki.squid-cache.org/SquidFaq/WindowsUpdate specifies 
it when wanting to cache updates. I wasn't after caching -- only after 
"working" so I thought I'd try it either way.



-Original Message-
From: Eliezer Croitoru [mailto:elie...@ngtech.co.il] 
Sent: Thursday, January 31, 2013 4:38 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Windows Updates on 3.2.6

On 1/31/2013 6:11 PM, Dave Burkholder wrote:
> Here are links to squid access.log
>
> www.thinkwelldesigns.com/access_log.txt
OK, that looks pretty normal to me from squid's point of view.
I see the same lines, where windows tries to access things that don't exist.

>
> And tcpdump for 10.0.2.150
>
> www.thinkwelldesigns.com/tcpdump.zip
In what format is it? I tried to read it with wireshark and it seems 
corrupted or something.
I think I see what the problem is from squid.conf.

range_offset_limit -1

Remove it,
and try to keep the proxy as simple as possible.

The above can cause windows to fail to fetch objects, and when that fails it 
tries to use SSL, which I don't know whether it can or cannot use.

Eliezer

>
> Thanks,
>
> Dave
>
> -Original Message-
> From: Dave Burkholder
> Sent: Thursday, January 31, 2013 10:29 AM
> To: Eliezer Croitoru; squid-users@squid-cache.org
> Subject: RE: [squid-users] Windows Updates on 3.2.6
>
> Hello Eliezer,
>
> Thank you for your reply. My exact problem is that Windows Updates do not 
> install or even download at all.
>
> The squid RPMs were built by my partner in 2 architectures: Centos 5 i386 and 
> Centos 6 x86_64. Same nonfunctioning behavior in both.
>
> I didn't realize you had a squid repo; I'd be glad to try your builds if 
> they're compatible. Where is your repo hosted?
>
>
> I had included the conf file in my first email, but a link would be better:
>
> www.thinkwelldesigns.com/squid_conf.txt
>
>
> ###
> squid -v: (Centos 6 x86_64)
> ---
> Squid Cache: Version 3.2.6
> configure options:  '--host=x86_64-unknown-linux-gnu' 
> '--build=x86_64-unknown-linux-gnu' '--program-prefix=' '--prefix=/usr' 
> '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' 
> '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' 
> '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' 
> '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr' 
> '--libexecdir=/usr/lib64/squid' '--localstatedir=/var' 
> '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
> '--with-logdir=$(localstatedir)/log/squid' 
> '--with-pidfile=$(localstatedir)/run/squid.pid' 
> '--disable-dependency-tracking' '--enable-arp-acl' 
> '--enable-follow-x-forwarded-for' '--enable-auth' 
> '--enable-auth-basic=DB,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam'
>  '--enable-auth-ntlm=smb_lm,fake' '--enable-auth-digest=file,LDAP,eDirectory' 
> '--enable-auth-negotiate=kerberos' '--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group' 
> '--enable-cache-digests' '--enable-cachemgr-hostname=localhost' 
> '--enable-delay-pools' '--enable-epoll' '--enable-http-violations' 
> '--enable-icap-client' '--enable-ident-lookups' '--enable-linux-netfilter' 
> '--enable-referer-log' '--enable-removal-policies=heap,lru' '--enable-snmp' 
> '--enable-ssl' '--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs' 
> '--enable-useragent-log' '--enable-wccpv2' '--enable-esi' '--enable-ecap' 
> '--with-aio' '--with-default-user=squid' '--with-filedescriptors=16384' 
> '--with-dl' '--with-openssl' '--with-pthreads' 
> 'build_alias=x86_64-unknown-linux-gnu' 'host_alias=x86_64-unknown-linux-gnu' 
> 'CFLAGS=-O2 -g -fpie' 'CXXFLAGS=-O2 -g -fpie' 
> 'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig'
>
> ##

RE: [squid-users] Re: Windows Updates on 3.2.6

2013-01-31 Thread Dave Burkholder
I was able to install Eliezer's 3.2.6 RPM on one of my boxes and at this point 
I find no improvement. I'm coming to the conclusion that the problem might lie 
with Microsoft and this Squid upgrade of mine might be a coincidence.
 
For example, as Volodymyr noted, this link is down: 
http://download.windowsupdate.com/v9/1/windowsupdate/redir/muv4wuredir.cab 
but this one works so I wonder if their CDNs are out of sync.
 
http://www.update.microsoft.com/v9/1/windowsupdate/redir/muv4wuredir.cab
 
Anyway, @Eliezer, I found your repo but there weren't any 3.3 RPMs there. I was 
wondering if you happened to have a spec file for 3.3?
 
@Volodymyr, >> I created lame service that successfully caches WindowsUpdate 
traffic and I have no problems using latest squid.
 
Tell me more about your lame service creation.


-Original Message-
From: Volodymyr Kostyrko [mailto:c.kw...@gmail.com] 
Sent: Thursday, January 31, 2013 11:30 AM
Cc: squid-users@squid-cache.org
Subject: [squid-users] Re: Windows Updates on 3.2.6

31.01.2013 18:11, Dave Burkholder:
> Here are links to squid access.log
>
> www.thinkwelldesigns.com/access_log.txt

I see nothing bad in here.

http://download.windowsupdate.com/v9/1/windowsupdate/redir/muv4wuredir.cab 
link is void for some month now. It was used only for older versions of 
windows.

Did you happen to see this on Windows XP? Try installing Microsoft 
update then.

OnTopic: I created lame service that successfully caches WindowsUpdate 
traffic and I have no problems using latest squid.

-- 
Sphinx of black quartz, judge my vow.



RE: [squid-users] Windows Updates on 3.2.6

2013-01-31 Thread Dave Burkholder
Here are links to squid access.log

www.thinkwelldesigns.com/access_log.txt 

And tcpdump for 10.0.2.150

www.thinkwelldesigns.com/tcpdump.zip 

Thanks,

Dave

-Original Message-
From: Dave Burkholder 
Sent: Thursday, January 31, 2013 10:29 AM
To: Eliezer Croitoru; squid-users@squid-cache.org
Subject: RE: [squid-users] Windows Updates on 3.2.6

Hello Eliezer,

Thank you for your reply. My exact problem is that Windows Updates do not 
install or even download at all.

The squid RPMs were built by my partner in 2 architectures: Centos 5 i386 and 
Centos 6 x86_64. Same nonfunctioning behavior in both. 

I didn't realize you had a squid repo; I'd be glad to try your builds if 
they're compatible. Where is your repo hosted?


I had included the conf file in my first email, but a link would be better:

www.thinkwelldesigns.com/squid_conf.txt 


###
squid -v: (Centos 6 x86_64)
---
Squid Cache: Version 3.2.6
configure options:  '--host=x86_64-unknown-linux-gnu' 
'--build=x86_64-unknown-linux-gnu' '--program-prefix=' '--prefix=/usr' 
'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' 
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' 
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' 
'--mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr' 
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var' 
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' '--disable-dependency-tracking' 
'--enable-arp-acl' '--enable-follow-x-forwarded-for' '--enable-auth' 
'--enable-auth-basic=DB,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam'
 '--enable-auth-ntlm=smb_lm,fake' '--enable-auth-digest=file,LDAP,eDirectory' 
'--enable-auth-negotiate=kerberos' '--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group' 
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost' 
'--enable-delay-pools' '--enable-epoll' '--enable-http-violations' 
'--enable-icap-client' '--enable-ident-lookups' '--enable-linux-netfilter' 
'--enable-referer-log' '--enable-removal-policies=heap,lru' '--enable-snmp' 
'--enable-ssl' '--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs' 
'--enable-useragent-log' '--enable-wccpv2' '--enable-esi' '--enable-ecap' 
'--with-aio' '--with-default-user=squid' '--with-filedescriptors=16384' 
'--with-dl' '--with-openssl' '--with-pthreads' 
'build_alias=x86_64-unknown-linux-gnu' 'host_alias=x86_64-unknown-linux-gnu' 
'CFLAGS=-O2 -g -fpie' 'CXXFLAGS=-O2 -g -fpie' 
'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig'

###
squid -v: (Centos 5 i386)
---
Squid Cache: Version 3.2.6
configure options:  '--host=i686-redhat-linux-gnu' 
'--build=i686-redhat-linux-gnu' '--target=i386-redhat-linux' 
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' 
'--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' 
'--includedir=/usr/include' '--libdir=/usr/lib' '--libexecdir=/usr/libexec' 
'--sharedstatedir=/usr/com' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--exec_prefix=/usr' '--libexecdir=/usr/lib/squid' 
'--localstatedir=/var' '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' '--disable-dependency-tracking' 
'--enable-arp-acl' '--enable-follow-x-forwarded-for' '--enable-auth' 
'--enable-auth-basic=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL,DB,POP3,squid_radius_auth'
 '--enable-auth-ntlm=smb_lm,no_check,fakeauth' 
'--enable-auth-digest=password,ldap,eDirectory' '--enable-auth-negotiate=squid_kerb_auth' 
'--enable-external-acl-hel

RE: [squid-users] Windows Updates on 3.2.6

2013-01-31 Thread Dave Burkholder
Hello Eliezer,

Thank you for your reply. My exact problem is that Windows Updates do not 
install or even download at all.

The squid RPMs were built by my partner in 2 architectures: Centos 5 i386 and 
Centos 6 x86_64. Same nonfunctioning behavior in both. 

I didn't realize you had a squid repo; I'd be glad to try your builds if 
they're compatible. Where is your repo hosted?


I had included the conf file in my first email, but a link would be better:

www.thinkwelldesigns.com/squid_conf.txt 


###
squid -v: (Centos 6 x86_64)
---
Squid Cache: Version 3.2.6
configure options:  '--host=x86_64-unknown-linux-gnu' 
'--build=x86_64-unknown-linux-gnu' '--program-prefix=' '--prefix=/usr' 
'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' 
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' 
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' 
'--mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr' 
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var' 
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' '--disable-dependency-tracking' 
'--enable-arp-acl' '--enable-follow-x-forwarded-for' '--enable-auth' 
'--enable-auth-basic=DB,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam'
 '--enable-auth-ntlm=smb_lm,fake' '--enable-auth-digest=file,LDAP,eDirectory' 
'--enable-auth-negotiate=kerberos' '--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group' 
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost' 
'--enable-delay-pools' '--enable-epoll' '--enable-http-violations' 
'--enable-icap-client' '--enable-ident-lookups' '--enable-linux-netfilter' 
'--enable-referer-log' '--enable-removal-policies=heap,lru' '--enable-snmp' 
'--enable-ssl' '--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs' 
'--enable-useragent-log' '--enable-wccpv2' '--enable-esi' '--enable-ecap' 
'--with-aio' '--with-default-user=squid' '--with-filedescriptors=16384' 
'--with-dl' '--with-openssl' '--with-pthreads' 
'build_alias=x86_64-unknown-linux-gnu' 'host_alias=x86_64-unknown-linux-gnu' 
'CFLAGS=-O2 -g -fpie' 'CXXFLAGS=-O2 -g -fpie' 
'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig'

###
squid -v: (Centos 5 i386)
---
Squid Cache: Version 3.2.6
configure options:  '--host=i686-redhat-linux-gnu' 
'--build=i686-redhat-linux-gnu' '--target=i386-redhat-linux' 
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' 
'--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' 
'--includedir=/usr/include' '--libdir=/usr/lib' '--libexecdir=/usr/libexec' 
'--sharedstatedir=/usr/com' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--exec_prefix=/usr' '--libexecdir=/usr/lib/squid' 
'--localstatedir=/var' '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' '--disable-dependency-tracking' 
'--enable-arp-acl' '--enable-follow-x-forwarded-for' '--enable-auth' 
'--enable-auth-basic=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL,DB,POP3,squid_radius_auth'
 '--enable-auth-ntlm=smb_lm,no_check,fakeauth' 
'--enable-auth-digest=password,ldap,eDirectory' '--enable-auth-negotiate=squid_kerb_auth' 
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
 '--enable-cache-digests' '--enable-cachemgr-hostname=localhost' 
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client' 
'--enable-ident-lookups' '--with-large-files' '--enable-linux-netfilter' 
'--enable

RE: [squid-users] Windows Updates on 3.2.6

2013-01-31 Thread Dave Burkholder
Are there any comments here? I've tried adding the following options from 
http://wiki.squid-cache.org/SquidFaq/WindowsUpdate (even though I don't 
especially want to cache updates)

range_offset_limit -1
maximum_object_size 200 MB
quick_abort_min -1

No joy. I've tried transparent & standard proxy modes. Not using authentication 
anywhere. I've now tested on 4 LANs behind Squid 3.2.6 on CentOS 5 & 6 machines 
and WU isn't working on any of them.

On one machine I downgraded to 3.2.0.18 and was able to get WU to work. Was 
there a regression since 3.2.0.18?

Thanks,

Dave


-Original Message-
From: Dave Burkholder 
Sent: Wednesday, January 30, 2013 9:09 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Windows Updates on 3.2.6

Hello everyone,

I've upgraded a number of machines from 3.1.12 to squid 3.2.6. Since then, 
Windows Updates haven't completed and I'm totally scratching my head.


Has anyone else experienced this problem? (I'm including my config file below.) 
Or have some ACLs or defaults changed in 3.2.x that might be triggering this?



Thanks,

Dave

 

#
# Recommended minimum configuration:
#
# webconfig: acl_start
acl webconfig_lan src 192.168.0.0/16 10.0.0.0/8
acl webconfig_to_lan dst 192.168.0.0/16 10.0.0.0/8
# webconfig: acl_end

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed

acl SSL_ports port 443
acl SSL_ports port 81 83 1 # Webconfig / Webmail / Webmin
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 81 83 1 # Webconfig / Webmail / Webmin
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
http_access allow webconfig_to_lan

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

# Example rule allowing access from your local networks.
# from where browsing should be allowed
http_access allow localhost

# And finally deny all other access to this proxy
http_access allow webconfig_lan
http_access deny all

# Squid normally listens to port 3128
http_port 3128

# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /var/spool/squid 2048 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

follow_x_forwarded_for allow localhost

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
redirect_program /usr/sbin/adzapper
maximum_object_size 51200 KB




[squid-users] Windows Updates on 3.2.6

2013-01-30 Thread Dave Burkholder
Hello everyone,

I've upgraded a number of machines from 3.1.12 to squid 3.2.6. Since then, 
Windows Updates haven't completed and I'm totally scratching my head.


Has anyone else experienced this problem? (I'm including my config file below.) 
Or have some ACLs or defaults changed in 3.2.x that might be triggering this?



Thanks,

Dave

 

#
# Recommended minimum configuration:
#
# webconfig: acl_start
acl webconfig_lan src 192.168.0.0/16 10.0.0.0/8
acl webconfig_to_lan dst 192.168.0.0/16 10.0.0.0/8
# webconfig: acl_end

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed

acl SSL_ports port 443
acl SSL_ports port 81 83 1 # Webconfig / Webmail / Webmin
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 81 83 1 # Webconfig / Webmail / Webmin
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
http_access allow webconfig_to_lan

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

# Example rule allowing access from your local networks.
# from where browsing should be allowed
http_access allow localhost

# And finally deny all other access to this proxy
http_access allow webconfig_lan
http_access deny all

# Squid normally listens to port 3128
http_port 3128

# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /var/spool/squid 2048 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

follow_x_forwarded_for allow localhost

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
redirect_program /usr/sbin/adzapper
maximum_object_size 51200 KB



[squid-users] Squid 3.1 and TPROXY 4 Problems

2012-05-05 Thread Dave Blakey
Hi all,
 I'm busy working on a tproxy setup with the latest squid on Ubuntu
12.04; tproxy is enabled, squid is compiled with tproxy support etc.
The difference with this setup is that traffic is being sent to the
host using route-map on a cisco as opposed to WCCP but it seems that
should work. Unfortunately it seems there is very little documentation
about the latest tproxy+squid3.1 setup method - but this is what I
have --

# IP
ip -f inet rule add fwmark 1 lookup 100
ip -f inet route add local default dev eth0 table 100

# Sysctl
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 2 > /proc/sys/net/ipv4/conf/default/rp_filter
echo 2 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter

# IP Tables
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3129


In squid.conf the relevant line for http_port 3129 tproxy is set etc.
With this setup I get hits on the iptables rules, and see a request in
the access log but it fails to fill it, it also looks very strange --

1336146295.076  56266 x.x.x.x TCP_MISS/000 0 GET
http://www.google.com/url? - DIRECT/www.google.com -
1336146337.969  42875 x.x.x.x TCP_MISS/000 0 GET
http://www.google.com/url? - DIRECT/www.google.com -

As you can see it's a TCP_MISS/000 and the DIRECT/www.google.com in my
experience should have an IP not a hostname? Additionally the sizes
seem very weird. The client just hangs.

Should this setup be working or is there some obvious error?

Thank you in advance
Dave
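The symptoms above (iptables counters incrementing but TCP_MISS/000, zero-byte replies) usually come down to the return path of the spoofed connections. A few sanity checks, reusing the mark, table, and interface names from the post; this is a diagnostic sketch run as root, not a confirmed fix:

```shell
# Confirm the policy-routing plumbing is actually in place.
ip rule show                            # expect: "from all fwmark 0x1 lookup 100"
ip route show table 100                 # expect: "local default dev eth0 ..."
sysctl net.ipv4.conf.eth0.rp_filter     # must be 0 on the tproxy-facing interface
iptables -t mangle -L PREROUTING -v -n  # TPROXY rule packet counters should grow
ss -tlnp | grep 3129                    # Squid must be listening on the tproxy port
```

If the rule or table is missing, the marked reply packets never reach Squid's spoofed sockets, which matches the "client just hangs" behaviour described.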



[squid-users] Internet redirection

2011-09-21 Thread Dave Young
Hi all,
I've tried to post this a couple of times but it wouldn't go through for
whatever reason...
Anyway, when users try to download anything bigger than a jpg they get a
message:
Redirecting...Generated "date and time" by servername.domain
(squid/2.6.STABLE22)
It redirects to our intranet which is currently down so it just times out.
Is there a way to stop the redirection?!
Thanks,
Dave


 

  Dave Young
  Brahma Lodge Primary School
  Ph. 8258 5505
  Skype: daveyoung86
 


[squid-users] squid redirecting attempted downloads

2011-08-21 Thread Dave Young
Hey,
First of all I know zero about squid, I'm not even sure this is a squid
related problem.
We are having an issue where users try to download a file (an email
attachment, setup file, etc.) and are redirected to a page on our intranet
that says something about file downloads not being allowed.
The person I took over from here says that it may be something configured
in the squid.conf file.
I found the file but have no idea how to disable or modify this setting.
Could somebody please tell me how to go about this or if it is even
related to the squid proxy?!
Thanks,
Dave



 ____

  Dave Young
  Brahma Lodge Primary School
  Ph. 8258 5505
  Skype: daveyoung86
 



Re: [squid-users] Re: can squid load data into cache faster than sending it out?

2011-05-12 Thread Dave Dykstra
On Thu, May 12, 2011 at 01:37:13PM +1200, Amos Jeffries wrote:
> On 12/05/11 08:18, Dave Dykstra wrote:
...
> >>  So its a choice of being partially vulnerable to "slow loris" style
> >>attacks (timeouts etc prevent full vulnerability) or packet
> >>amplification on a massive scale.
> >
> >Just to make sure I understand you, in both cases you're talking about
> >attacks, not normal operation, right?  And are you saying that it is
> >easier to mitigate the trickle-feed attack than the packet-amplification
> >attack, so trickle-feed is less bad?  I'm not so worried about attacks
> >as normal operation.
> >
> 
> Both are real traffic types, the attack form is just artificially
> induced to make it worse. Like ping-flooding in the 90's it happens
> normally, but not often. All it takes is a large number of slow
> clients requesting non-identical URLs.
> 
> IIRC it was noticed worse by cellphone networks with very large
> numbers of very slow GSM clients.
>  A client connects sends request, Squid reads back N bytes from
> server and sends N-M to the client. Repeat until all FD available in
> Squid are consumed. During which time M bytes of packets are
> overflowing the server link for each 2 FD used. If the total of all
> M is greater than the server link size...
> 
> Under the current design the worst case is Server running out of FD
> first and reject new connections. Or TCP protections dropping
> connections and Squid aborting the clients early. The overflow
> factor is 32K or 64K linear with the number of FD and cant happen
> naturally where the client does read the data just slowly.

With my application the server has a limit on the number of parallel
connections it has to its backend database, so there is no danger of
overflowing the bandwidth between the reverse-proxy squid and its server
(also, they're on the same machine so the "network" is intra-machine).
If there are many clients that suddenly make large requests they are put
into a queue on the server until they get their turn, and meanwhile the
server sends keepalive messages every 5 seconds so the clients don't
timeout.  With my preferred behavior, the squid would read that data
from the server as fast as possible, and then it wouldn't make any
difference to the squid-to-server link if the clients had low bandwidth
or high bandwidth.

I'll submit a feature request for an option to bugzilla.

Thanks a lot for your explanations, Amos.

- Dave


Re: [squid-users] Re: can squid load data into cache faster than sending it out?

2011-05-11 Thread Dave Dykstra
On Wed, May 11, 2011 at 09:05:08PM +1200, Amos Jeffries wrote:
> On 11/05/11 04:34, Dave Dykstra wrote:
> >On Sat, May 07, 2011 at 02:32:22PM +1200, Amos Jeffries wrote:
> >>On 07/05/11 08:54, Dave Dykstra wrote:
> >>>Ah, but as explained here
> >>> http://www.squid-cache.org/mail-archive/squid-users/200903/0509.html
> >>>this does risk using up a lot of memory because squid keeps all of the
> >>>read-ahead data in memory.  I don't see a reason why it couldn't instead
> >>>write it all out to the disk cache as normal and then read it back from
> >>>there as needed.  Is there some way to do that currently?  If not,
> >>
> >>Squid should be writing to the cache in parallel to the data
> >>arrival, the only bit required in memory being the bit queued for
> >>sending to the client.  Which gets bigger, and bigger... up to the
> >>read_ahead_gap limit.
> >
> >Amos,
> >
> >Yes, it makes sense that it's writing to the disk cache in parallel, but
> >what I'm asking for is a way to get squid to keep reading from the
> >origin server as fast as it can without reserving all that memory.  I'm
> >asking for an option to not block the reading from the origin server&
> >writing to the cache when the read_ahead_gap is full, and instead read
> >data back from the cache to write it out when the client is ready for
> >more.  Most likely the data will still be in the filesystem cache so it
> >will be fast.
> 
> That will have to be a configuration option. We had a LOT of
> complaints when we accidentally made several 3.0 act that way.

That's interesting.  I'm curious about what people didn't like about it,
do you remember details?


...
> >>>perhaps I'll just submit a ticket as a feature request.  I *think* that
> >>>under normal circumstances in my application squid won't run out of
> >>>memory, but I'll see after running it in production for a while.
> >
> >So far I haven't seen a problem but I can imagine ways that it could
> >cause too much growth so I'm worried that one day it will.
> 
> Yes, both approaches lead to problems.  The trickle-feed approach
> used now leads to resource holding on the Server. Not doing it leads
> to bandwidth overload as Squid downloads N objects for N clients and
> only has to send back one packet to each client.
>  So its a choice of being partially vulnerable to "slow loris" style
> attacks (timeouts etc prevent full vulnerability) or packet
> amplification on a massive scale.

Just to make sure I understand you, in both cases you're talking about
attacks, not normal operation, right?  And are you saying that it is
easier to mitigate the trickle-feed attack than the packet-amplification
attack, so trickle-feed is less bad?  I'm not so worried about attacks
as normal operation.

Thanks,

- Dave


Re: [squid-users] Re: can squid load data into cache faster than sending it out?

2011-05-10 Thread Dave Dykstra
On Sat, May 07, 2011 at 02:32:22PM +1200, Amos Jeffries wrote:
> On 07/05/11 08:54, Dave Dykstra wrote:
> >Ah, but as explained here
> > http://www.squid-cache.org/mail-archive/squid-users/200903/0509.html
> >this does risk using up a lot of memory because squid keeps all of the
> >read-ahead data in memory.  I don't see a reason why it couldn't instead
> >write it all out to the disk cache as normal and then read it back from
> >there as needed.  Is there some way to do that currently?  If not,
> 
> Squid should be writing to the cache in parallel to the data
> arrival, the only bit required in memory being the bit queued for
> sending to the client.  Which gets bigger, and bigger... up to the
> read_ahead_gap limit.

Amos,

Yes, it makes sense that it's writing to the disk cache in parallel, but
what I'm asking for is a way to get squid to keep reading from the
origin server as fast as it can without reserving all that memory.  I'm
asking for an option to not block the reading from the origin server &
writing to the cache when the read_ahead_gap is full, and instead read
data back from the cache to write it out when the client is ready for
more.  Most likely the data will still be in the filesystem cache so it
will be fast.

> IIRC it is supposed to be taken out of the cache_mem space
> available, but I've not seen anything to confirm that.

I'm sure that's not the case, because I have been able to force the
memory usage to grow by more than the cache_mem setting by doing a
number of wgets of largeish requests in parallel using --limit-rate to
cause them to take a long time.  Besides, it doesn't make sense that it
would do that when the read_ahead_gap is far greater than
maximum_object_size_in_memory.

> >perhaps I'll just submit a ticket as a feature request.  I *think* that
> >under normal circumstances in my application squid won't run out of
> >memory, but I'll see after running it in production for a while.

So far I haven't seen a problem but I can imagine ways that it could
cause too much growth so I'm worried that one day it will.

- Dave


Re: [squid-users] Re: can squid load data into cache faster than sending it out?

2011-05-06 Thread Dave Dykstra
Ah, but as explained here
http://www.squid-cache.org/mail-archive/squid-users/200903/0509.html
this does risk using up a lot of memory because squid keeps all of the
read-ahead data in memory.  I don't see a reason why it couldn't instead
write it all out to the disk cache as normal and then read it back from
there as needed.  Is there some way to do that currently?  If not,
perhaps I'll just submit a ticket as a feature request.  I *think* that
under normal circumstances in my application squid won't run out of
memory, but I'll see after running it in production for a while.

- Dave

On Wed, May 04, 2011 at 02:52:12PM -0500, Dave Dykstra wrote:
> I found the answer: set "read_ahead_gap" to a buffer larger than the
> largest data chunk I transfer.
> 
> - Dave
> 
> On Wed, May 04, 2011 at 09:11:59AM -0500, Dave Dykstra wrote:
> > I have a reverse proxy squid on the same machine as my origin server.
> > Sometimes queries from squid are sent around the world and can be very
> > slow, for example today there is one client taking 40 minutes to
> > transfer 46MB.  When the data is being transferred from the origin
> > server, the connection between squid and the origin server is tied up
> > for the entire 40 minutes, leaving it unavailable for other work
> > (there's only a small number of connections allowed by the origin server
> > to its upstream database).  My question is, can squid be configured to
> > take in the data from the origin server as fast as it can and cache it,
> > and then send out the data to the client as bandwidth allows?  I would
> > want it to stream to the client during this process too, but not block
> > the transfer from origin server to squid if the client is slow.
> > 
> > I'm using squid-2.7STABLE9, and possibly relevant non-default squid.conf
> > options I'm using are:
> > http_port 8000 accel defaultsite=127.0.0.1:8080
> > cache_peer 127.0.0.1 parent 8080 0 no-query originserver
> > collapsed_forwarding on
> > 
> > - Dave


[squid-users] Re: can squid load data into cache faster than sending it out?

2011-05-04 Thread Dave Dykstra
I found the answer: set "read_ahead_gap" to a buffer larger than the
largest data chunk I transfer.
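For reference, that is a one-line change in squid.conf. The value below is illustrative only: size it above the largest object you expect, and note (as discussed elsewhere in this thread) that the read-ahead buffer is held in memory per request, so large values multiply across concurrent slow clients.

```
# squid.conf fragment: let Squid read up to 64 MB ahead of a slow client.
read_ahead_gap 64 MB
```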

- Dave

On Wed, May 04, 2011 at 09:11:59AM -0500, Dave Dykstra wrote:
> I have a reverse proxy squid on the same machine as my origin server.
> Sometimes queries from squid are sent around the world and can be very
> slow, for example today there is one client taking 40 minutes to
> transfer 46MB.  When the data is being transferred from the origin
> server, the connection between squid and the origin server is tied up
> for the entire 40 minutes, leaving it unavailable for other work
> (there's only a small number of connections allowed by the origin server
> to its upstream database).  My question is, can squid be configured to
> take in the data from the origin server as fast as it can and cache it,
> and then send out the data to the client as bandwidth allows?  I would
> want it to stream to the client during this process too, but not block
> the transfer from origin server to squid if the client is slow.
> 
> I'm using squid-2.7STABLE9, and possibly relevant non-default squid.conf
> options I'm using are:
> http_port 8000 accel defaultsite=127.0.0.1:8080
> cache_peer 127.0.0.1 parent 8080 0 no-query originserver
> collapsed_forwarding on
> 
> - Dave


[squid-users] can squid load data into cache faster than sending it out?

2011-05-04 Thread Dave Dykstra
I have a reverse proxy squid on the same machine as my origin server.
Sometimes queries from squid are sent around the world and can be very
slow, for example today there is one client taking 40 minutes to
transfer 46MB.  When the data is being transferred from the origin
server, the connection between squid and the origin server is tied up
for the entire 40 minutes, leaving it unavailable for other work
(there's only a small number of connections allowed by the origin server
to its upstream database).  My question is, can squid be configured to
take in the data from the origin server as fast as it can and cache it,
and then send out the data to the client as bandwidth allows?  I would
want it to stream to the client during this process too, but not block
the transfer from origin server to squid if the client is slow.

I'm using squid-2.7STABLE9, and possibly relevant non-default squid.conf
options I'm using are:
http_port 8000 accel defaultsite=127.0.0.1:8080
cache_peer 127.0.0.1 parent 8080 0 no-query originserver
collapsed_forwarding on

- Dave


[squid-users] Problem with Backslash

2010-06-17 Thread Dave Forster
Hi Guys,

We have an application that updates and grabs a manifest xml file from a
web server.  When it makes the request, it has been hard coded to use
\MANIFEST.XML at the end of the address.

The application that calls this URL does not run through Internet
Explorer so it won't correct the backslash.  When this URL is passed to
Squid it comes back with a 404 error.  This is because the URL should
actually reference /MANIFEST.XML.  The application is a government
application and "can't be changed"!

Is there a way to make Squid parse all backslashes in URLs as
forwardslashes???

Any help would be appreciated!
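One approach worth trying (my assumption; the thread itself contains no answer) is a `url_rewrite_program` helper that normalises the backslashes before Squid looks the URL up. A minimal sketch in Python, assuming the classic one-request-per-line helper protocol where the URL is the first whitespace-separated field:

```python
#!/usr/bin/env python3
"""Hypothetical Squid url_rewrite_program: backslash -> forward slash."""
import sys

def normalise(url: str) -> str:
    # Replace every backslash in the URL with a forward slash.
    return url.replace("\\", "/")

def main() -> None:
    # Squid writes one request per line; the URL is the first field.
    # Replies must be flushed per line or Squid will appear to hang.
    for line in sys.stdin:
        fields = line.split()
        reply = normalise(fields[0]) if fields else ""
        sys.stdout.write(reply + "\n")
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```

Wired up with something like `url_rewrite_program /usr/local/bin/fixslash.py` in squid.conf (path hypothetical). Whether the Squid release in use accepts `\MANIFEST.XML` far enough into parsing to hand it to a rewriter is worth verifying first.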

 
Cheers,
 
Dave Forster
Infrastructure Engineer

58 Ord Street
West Perth WA 6005
PO Box 1752 West Perth WA 6872
Phone: (08) 9463 1313
Fax: (08) 9486 1357
Mobile: 0429 579 418


RE: [squid-users] Google SSL searches

2010-05-28 Thread Dave Burkholder
That is EXACTLY what I was looking for. I very much appreciate your prompt 
answer.

Thank you!

Dave


-Original Message-
From: Henrik Nordström [mailto:hen...@henriknordstrom.net] 
Sent: Thursday, May 27, 2010 6:58 PM
To: Dave Burkholder
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Google SSL searches

tor 2010-05-27 klockan 15:35 -0400 skrev Dave Burkholder:

> Is there some way to specify via a Squid ACL that requests via port 443 to
> google.com are blocked, but requests to google.com via port 80 are allowed?

acl https port 443
acl google dstdomain google.com
http_access deny https google

Regards
Henrik






[squid-users] Google SSL searches

2010-05-27 Thread Dave Burkholder
I’m using Squid in standard proxy mode with Dansguardian content filtering.
So the recent news that Google is doing SSL encryption on their search
results wasn’t good news to me. 
http://www.osnews.com/story/23358/Google_Launches_Encrypted_Search_Beta 

I want to limit searches to clear text only so that Dansguardian can do its
content filtering magic, and my first thought was to do this:
acl sslgoogle url_regex https://www.google.com 
http_access deny sslgoogle

But the url_regex doesn’t work as the URL seems to be encrypted already. 

The ACL below blocks the whole site, including Gmail, Docs, Apps, etc.
acl sslgoogle dstdomain .google.com 
http_access deny sslgoogle

Is there some way to specify via a Squid ACL that requests via port 443 to
google.com are blocked, but requests to google.com via port 80 are allowed?

Thanks,

Dave






Re: [squid-users] Slightly OT: Configuring a router for Squid.

2010-05-05 Thread Dave Coventry
Thanks for the help, Jose.

On 5 May 2010 18:46, Jose Ildefonso Camargo Tolosa
 wrote:
> Ok, so, you could, in theory, add an internal DNS zone, right?
> (because it doesn't currently exist).  Now, an off-topic question:
> do you have a "domain" on your network, or just have a "workgroup"
> (I'm assuming you have Windows computers for your staff).

Yes, I'm sure I can set up the DNS on the Debian box.

I'm not sure what a Domain is, but, yes, I have a windows 'Workgroup'.
All computers (except mine) are windows machines. There is a chance
that the Guest computers might have Linux (or Mac), but I would
imagine that the bulk would be Windows.

> Ok, guests=clients ie, persons not part of the company, right?

Correct.

> Yeah, all the bosses like their gadgets
 :)


Re: [squid-users] Slightly OT: Configuring a router for Squid.

2010-05-03 Thread Dave Coventry
On 4 May 2010 05:21, Jose Ildefonso Camargo Tolosa
 wrote:
>
> Some questions:
>
> 1. How is your network currently configured: static IPs, dhcp, if
> dhcp, is the dlink router your dhcp server?

Yes. The DLink allocates IP addresses on the network. The Squid box is
set to .5 static IP

> 2. What is the goal of the proxy server?: access control
> (restrictions, authentication), cache, other.

All of the above. We have clients who want to access the net through
their laptops, so configuring the clients' machines is not really
desirable, and obviously we are not interested in their browsing
habits. However, we want to place some restrictions on staff. This is
not an absolute requirement, though if the staff are abusing bandwidth
we'd like to know about it.

> 3. Who provides the DNS service? is the dlink router? is another server?

No, it'll be the ISP who provide the DNS.

> 4. How is the wireless part of the router being used? office
> computers, some laptops, some of the "boss's" gadgets, other.

Yes, the DLink has 4 wired ports one of which goes to the Squid Box
and the others to local machines. Other staff desktops and laptops
connect wirelessly and guests connect with laptops.

The boss does like his gadgets, though...

> Depending on these answers, there are one or more options for you.

That would be nice.

;)

~Dave


Re: [squid-users] Slightly OT: Configuring a router for Squid.

2010-05-03 Thread Dave Coventry
Thanks to everybody for the assistance.

2010/5/4 Jorge Armando Medina :
> Im afraid this cannot be achieved with simple static routes, you need to
> setup a interceptor proxy so outgoing http traffic is intercepted by
> your router and then transparent redirec it to your squid box.

Yes, I rather thought I was on the wrong track for this. I couldn't
see any other option for rerouting the LAN traffic through the Proxy
though.

> If you alrewady have a debian box with squid I recommend to setup a
> firewall on it with two interfaces and use it as your default gateway,
> this way you can use transparent proxy.

The modem/router is wireless, too, so I guess we'll need to turn off
the wireless and buy another WAP.

> For more information read the wiki page:
> http://wiki.squid-cache.org/SquidFaq/InterceptionProxy

Thanks. I'll check it out.

~ Dave


[squid-users] Slightly OT: Configuring a router for Squid.

2010-05-03 Thread Dave Coventry
I need to add a proxy server to our office network.

The router/modem is a DLink G604T and I want all requests for Internet
access to be re rerouted to a Debian box with Squid Installed.

How do I set this up?

I notice that the Router has an advanced option called 'Routing' which
defines the Routing table.

Options are:

Destination:
Netmask:
Gateway:
Connection:

I take it that the Destination is the Proxy Server (192.168.1.5), the
netmask will be 255.255.255.0

I'm not sure what the Gateway will be, and I presume I accept the
default for connection, which is Pvc0.

Or am I going in the wrong direction entirely?


Re: [squid-users] Questions about referer url cache

2010-03-17 Thread dave jones
Please ignore this message, I fixed the problem.
Sorry for the noise!

Regards,
Dave.

2010/3/17 dave jones :
> 2010/3/16 Henrik Nordström wrote:
>> tis 2010-03-16 klockan 10:08 +0800 skrev dave jones:
>>
>>> Thanks. I use url_rewrite_program /etc/squid/redirect_test.php,
>>> but it seems my program doesn't work...
>>
>> In most languages you need to turn off output buffering, and this is
>> something everyone trips over when writing their first url
>> rewriter/redirector.
>>
>> try adding a flush() call after the result echo or setting the
>> implicit_flush php config variable.
>
> Thank you for your help. I think my redirect program has problems.
> When I go to offline mode, the browser still finds us.rd.foo.com,
> but it should be us.news.foo.com, no?
>
>> Regards
>> Henrik
>
> Regards,
> Dave.
>


Re: [squid-users] Questions about referer url cache

2010-03-17 Thread dave jones
2010/3/16 Henrik Nordström wrote:
> tis 2010-03-16 klockan 10:08 +0800 skrev dave jones:
>
>> Thanks. I use url_rewrite_program /etc/squid/redirect_test.php,
>> but it seems my program doesn't work...
>
> In most languages you need to turn off output buffering, and this is
> something everyone trips over when writing their first url
> rewriter/redirector.
>
> try adding a flush() call after the result echo or setting the
> implicit_flush php config variable.

Thank you for your help. I think my redirect program has problems.
When I go to offline mode, the browser still finds us.rd.foo.com,
but it should be us.news.foo.com, no?

> Regards
> Henrik

Regards,
Dave.


Re: [squid-users] Questions about referer url cache

2010-03-15 Thread dave jones
On Sat, Mar 13, 2010 at 2:35 AM, Henrik Nordstrom  wrote:
> fre 2010-03-12 klockan 23:36 +0800 skrev dave jones:
>> My question is I want to offline browse the index.html of foo.com,
>> but there are many "http://us.rd.foo.com/referurl/news/index/realtime/*"
>> in index.html, would anyone tell me how do I solve that referer url to direct
>> the correct one, like
>> "http://us.news.foo.com/article/url/d/a/100312/11/21ycr.html".
>> Thank you very much.
>
> See the url_rewrite_program option in squid.conf.

Thanks. I use url_rewrite_program /etc/squid/redirect_test.php,
but it seems my program doesn't work...

#!/usr/local/bin/php


Re: [squid-users] Questions about portal sites such as yahoo cache with squid

2010-03-15 Thread dave jones
On Mon, Mar 15, 2010 at 6:20 PM, Amos Jeffries  wrote:
> dave jones wrote:
>>
>> Hi,
>>
>> Does anyone using squid to cache yahoo portal site successfully?
>> If so, would you tell me how to do? Thanks.
>>
>> Best regards,
>> Dave.
>
> Yahoo! use Squid as part of their deployment.
>  I imagine they already have the correct HTTP protocol details to make the
> content cacheable or have good reasons for leaving it as non-cacheable.
>
> If you want to investigate this yourself use www.redbot.org (Yahoo!
> sponsored) to see how cacheable the portal URLs are.

Ah, the result is:

HTTP/1.1 200 OK
Date: Mon, 15 Mar 2010 10:39:12 GMT
P3P: policyref="http://info.yahoo.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM
DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi
PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE
LOC GOV"
Cache-Control: private
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
Content-Encoding: gzip

Seems like the content is not cacheable?

> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
>  Current Beta Squid 3.1.0.18

Regards,
Dave.


[squid-users] Questions about portal sites such as yahoo cache with squid

2010-03-14 Thread dave jones
Hi,

Does anyone using squid to cache yahoo portal site successfully?
If so, would you tell me how to do? Thanks.

Best regards,
Dave.


[squid-users] Questions about referer url cache

2010-03-12 Thread dave jones
Hi,

I use squid to cache the website, for instance foo.com. The access log is:

1268405798.728  78535 127.0.0.1 TCP_MISS/504 277 GET http://us.rd.foo.com/referurl/news/index/realtime/*http://us.news.foo.com/article/url/d/a/100312/11/21ycr.html - DIRECT/us.rd.foo.com text/html

If I type url http://us.news.foo.com/article/url/d/a/100312/11/21ycr.html,
the TCP_HIT.

My question is I want to offline browse the index.html of foo.com,
but there are many "http://us.rd.foo.com/referurl/news/index/realtime/*"
in index.html, would anyone tell me how do I solve that referer url to direct
the correct one, like
"http://us.news.foo.com/article/url/d/a/100312/11/21ycr.html".
Thank you very much.
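If the goal is an offline-usable copy, one option (my assumption, not something stated in the thread) is a `url_rewrite_program` that unwraps the redirect: in the logged URL the real target appears after the `*`. A sketch, assuming that wrapper format and the one-request-per-line helper protocol:

```python
#!/usr/bin/env python3
"""Hypothetical rewriter: collapse us.rd.foo.com referrer-redirect URLs."""
import sys

REDIRECT_PREFIX = "http://us.rd.foo.com/referurl/"

def unwrap(url: str) -> str:
    # The target URL follows the '*' in the redirect wrapper.
    if url.startswith(REDIRECT_PREFIX) and "*" in url:
        return url.split("*", 1)[1]
    return url

if __name__ == "__main__":
    for line in sys.stdin:
        fields = line.split()
        reply = unwrap(fields[0]) if fields else ""
        print(reply, flush=True)  # flush per line; Squid expects unbuffered replies
```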

Best regards,
Dave.


Re: [squid-users] Client browser perpetually 'Loading'

2010-02-04 Thread Dave Coventry
OK, Amos, this is what I've thought of:

I'll install squid on the server.

The 2 users can access the Apache front page ("It Works!") but not the
Drupal directories as defined by the Apache httpd.conf aliases.

If I set up the proxy to allow access from the 2 users (who will be on
Dynamic IPs and accessing from the WAN), will that allow access to
http://localhost/drupal/ ?

I don't think I would need the proxy to cache anything, but I would
like it to be as lightweight as possible.

Does that sound like it would work?


Re: [squid-users] Client browser perpetually 'Loading'

2010-02-04 Thread Dave Coventry
On 4 February 2010 14:10, Amos Jeffries  wrote:
> Not really. I only know how to diagnose from the box which is having the
> issue. Which means the ISP proxy in your case.

OK, I suppose I'll have to do some research.

> As far as the end points go you may have to dig really deep, like tcpdump
> deep, to see exactly whats going where and what is coming back. From the
> client end initially.

> Contacting the ISP admin and enlisting their help might be of some
> advantage.

These guys are currently suffering a lot of complaints about poor
service levels and lack of support, so I'm not sure that they will be
that helpful. http://mybroadband.co.za/news/Broadband/11359.html

>> I wouldn't mind marching around to the ISP offices and throwing a
>> tantrum, but I'd prefer to be reasonably sure of my facts before I did
>> so.
>
> It's just an idea I have, so yelling may be counter productive. But being in
> the middle they would certainly have the best position to look into the 
> problem.

Well, I wouldn't really do any yelling, which is probably not the way
to get cooperation in the first place.

~ Dave Coventry


Re: [squid-users] Client browser perpetually 'Loading'

2010-02-04 Thread Dave Coventry
Hi Amos,

I appreciate you looking into this; thanks very much.

Trying to refresh the cache by using Ctrl+Reload, Shift+Ctrl+Reload,
or Ctrl+F5 in Windows does not work.

On 4 February 2010 00:47, Amos Jeffries  wrote:
> It's one pile of horse amongst several. Any of which might be real hay.
>
> I'm more inclined to think some intermediary is having persistent TCP
> connection problems. Caching the content will at worst result in old dead
> content being served back to the clients. But still served back.
> Restarting the server indicates some active connection is hung.

Could you suggest a way to diagnose persistent TCP connection problems
or suggest a forum where I might be able to table the question?

I wouldn't mind marching around to the ISP offices and throwing a
tantrum, but I'd prefer to be reasonably sure of my facts before I did
so.

~ Dave Coventry


[squid-users] Client browser perpetually 'Loading'

2010-02-03 Thread Dave Coventry
Hi,

2 of my users access my server through the same ISP who (I presume) uses Squid.

Sometimes the browser connects with no problem, but both of these
users have sporadic occurrences of the browser stalling. (It just says
'Loading' but never loads)

My Server runs from our office but the National Telcoms Provider does
not believe in static IPs, (we are in South Africa) therefore we have
to run a Dyndns account to keep track of our IP.

My theory is that the Clients' ISP is caching our content and causing the stall.

Rebooting the Server's ADSL modem/router allows both of these users to
re-access the server and my theory is that the change of IP address
after the reboot bypasses the clients' ISP's cache.

What I would like to know is whether there is any way for the clients
to clear the cache: not the browser cache, but the Squid cache on the
ISP's machine.
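
If it is caching, a couple of client-side requests are worth trying
first (hedged examples; the proxy host and URL are placeholders, and
most ISPs deny the PURGE method to outside clients):

```shell
# Ask the proxy to revalidate rather than serve from its cache:
curl -x proxy.isp.example:3128 -H 'Cache-Control: no-cache' http://www.yoursite.example/

# Squid's PURGE method evicts an object, but only if the ISP's
# squid.conf explicitly allows it (acl purge method PURGE):
curl -x proxy.isp.example:3128 -X PURGE http://www.yoursite.example/
```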

Alternatively, if you feel that my theory is a pile of horse, could
you suggest what else might be happening?

Thank you,

Dave Coventry


Re: [squid-users] Squid proxy is very slow for web browsing in "near default" config

2010-01-12 Thread Dave T
On Mon, Jan 11, 2010 at 6:50 PM, Amos Jeffries  wrote:
> Dave T wrote:
> NP: you probably want icp_access to be limited to local LAN same as
> http_access is above.
> Amos
> --

Thanks for the detailed feedback. I'm not sure how I should apply your
suggestions because my Squid proxy server is not on my LAN. It is
hosted at Linode.com. I will be accessing it from an Android phone. I
do not know what IP address the phone may have and I suspect it will
be a NAT-style address (not a publicly addressable IP).

Shall I follow the rest of your instructions, just leaving out the
part about LAN addresses, or does this create larger issues?


Re: [squid-users] Squid proxy is very slow for web browsing in "near default" config

2010-01-11 Thread Dave T
On Mon, Jan 11, 2010 at 6:50 PM, Amos Jeffries  wrote:
>
> Dave T wrote:
>>
>> Thank you. Comments inline.
>>
>> On Sun, Jan 10, 2010 at 5:49 PM, Amos Jeffries  wrote:
>>>
>>> Dave T wrote:
>>>>
>>>> I just set up squid for the first time. It is on a Ubuntu box hosted
>>>> on Linode.com. I have zero experience with proxy servers. I used this
>>>> guide:
>>>> http://news.softpedia.com/news/Seting-Up-a-HTTP-Proxy-Server-with-Authentication-and-Filtering-52467.shtml
>>>
>>> Eeek! That tutorial is advising people to create open proxies for global 
>>> public access (allow all).
>>
>> I think that is just for initial testing. The tutorial actually
>> changes that in the second step.
>>
>>>
>>>> (I also looked at a few other guides such as this one:
>>>> http://ubuntuforums.org/showthread.php?t=320733. However, I wanted to
>>>> most barebones config to start with and the link I used was the
>>>> simplest I found.)
>>>
>>> The simplest and safest documentation is in:
>>>  /usr/share/doc/squid-common/QUICKSTART
>>> or
>>>  /usr/share/doc/squid3-common/QUICKSTART
>>>
>>> ... which outlines the minimal config changes to go from a clean install of 
>>> your particular version to a working proxy.
>>
>> Thanks. Amazing that I looked everywhere else but on my local HDD. :)
>>>
>>>> So now that I have it set up, I'm testing it with FoxyProxy. It is not
>>>> working well. Many web pages do not load completely. Some load very
>>>> slowly. A few load fast (but even then, some images are often
>>>> missing). Many times I have to try an address several times before a
>>>> page will even start to load.
>>>>
>>>> I am using iptables. When I turn the firewall off, I have slightly
>>>> fewer problems, but nothing significantly changes. I don't want to
>>>> leave the firewall off, so I took a few ideas from here:
>>>> http://www.cyberciti.biz/tips/linux-setup-transparent-proxy-squid-howto.html
>>>> But the changes I put in actually made the performance a little worse
>>>> than before. And like I said, even with the firewall off, the problems
>>>> I described remain.
>>>>
>>>> What should I look at next to begin to understand my problem? Thanks.
>>>
>>> Coming here was a good start.
>>>
>>> We are going to need to know the version of Squid you are using, there are
>>> a dozen or more available on Ubuntu.
>>>
>> I assume this will give more than enough info:
>>
>> $ dpkg -s squid
>
> 
>>
>> Version: 2.6.18-1ubuntu3
>
> 
>>
>> Linux Linode01 2.6.18.8-linode19 #1 SMP Mon Aug 17 22:19:18 UTC 2009
>> i686 GNU/Linux
>>
>
> Excellent.
>
> A little old, there are some recent config alterations we recommend. I'm 
> adding the ones 2.6 can use inline with your config below.
>
>>
>>> Also, we are going to have to see what squid.conf you have ended up working 
>>> with. Minus the documentation comments and empty lines please.
>>
>> Here is what I am using for TESTING only. I was getting TCP_DENIED/407
>> errors in the log, so I made an attempt to test it with no auth
>> required at all. (Not sure if I achieved that with this config or not,
>> but the problems didn't go away.)
>>
>> acl all src 0.0.0.0/0.0.0.0
>
> all src all
>
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/255.255.255.255
>
> acl localhost src 127.0.0.1
>
>> acl to_localhost dst 127.0.0.0/8
>
> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
>
>> acl purge method PURGE
>> acl CONNECT method CONNECT
>
> NP: For non-testing use you will need to re-add the Safe_ports and SSL_ports 
> security controls here.
> They are the safety nets that prevent people, particularly infected clients, 
> from opening tunnels via the proxy and sending spam or worse.
>
>> http_access allow all
>
> replace the above http_access line with:
>
>  # alter to match your LAN range(s) currently allowed to use the proxy.
>  acl localnet src 192.168.0.0/16
>  http_access allow localnet
>  http_access deny all
>
>> icp_access allow all
>
> NP: you probably want icp_access to be limited to local LAN same as 
> http_access is above.
>

Thanks for the detailed feedback. I am about halfway through reading
it and I'm not sure if your suggestions will apply because my Squid
proxy server is not on my LAN. It is hosted at Linode.com. I will be
accessing it from an Android phone. I do not know what IP address the
phone may have and I suspect it will be a NAT-style address (not a
publicly addressable IP).

Shall I follow the rest of your instructions, just leaving out the
part about LAN addresses, or does this create larger issues?


Re: [squid-users] Squid proxy is very slow for web browsing in "near default" config

2010-01-11 Thread Dave T
Thank you. Comments inline.

On Sun, Jan 10, 2010 at 5:49 PM, Amos Jeffries  wrote:
>
> Dave T wrote:
>>
>> I just set up squid for the first time. It is on a Ubuntu box hosted
>> on Linode.com. I have zero experience with proxy servers. I used this
>> guide:
>> http://news.softpedia.com/news/Seting-Up-a-HTTP-Proxy-Server-with-Authentication-and-Filtering-52467.shtml
>
> Eeek! That tutorial is advising people to create open proxies for global 
> public access (allow all).

I think that is just for initial testing. The tutorial actually
changes that in the second step.

>
>
>>
>> (I also looked at a few other guides such as this one:
>> http://ubuntuforums.org/showthread.php?t=320733. However, I wanted the
>> most barebones config to start with and the link I used was the
>> simplest I found.)
>
> The simplest and safest documentation is in:
>  /usr/share/doc/squid-common/QUICKSTART
> or
>  /usr/share/doc/squid3-common/QUICKSTART
>
> ... which outlines the minimal config changes to go from a clean install of 
> your particular version to a working proxy.

Thanks. Amazing that I looked everywhere else but on my local HDD. :)
>
>
>>
>> So now that I have it set up, I'm testing it with FoxyProxy. It is not
>> working well. Many web pages do not load completely. Some load very
>> slowly. A few load fast (but even then, some images are often
>> missing). Many times I have to try an address several times before a
>> page will even start to load.
>>
>> I am using iptables. When I turn the firewall off, I have slightly
>> fewer problems, but nothing significantly changes. I don't want to
>> leave the firewall off, so I took a few ideas from here:
>> http://www.cyberciti.biz/tips/linux-setup-transparent-proxy-squid-howto.html
>> But the changes I put in actually made the performance a little worse
>> than before. And like I said, even with the firewall off, the problems
>> I described remain.
>>
>> What should I look at next to begin to understand my problem? Thanks.
>
> Coming here was a good start.
>
> We are going to need to know the version of Squid you are using, there are a
> dozen or more available on Ubuntu.
>
I assume this will give more than enough info:

$ dpkg -s squid
Package: squid
Status: install ok installed
Priority: optional
Section: web
Installed-Size: 1584
Maintainer: Ubuntu Core Developers 
Architecture: i386
Version: 2.6.18-1ubuntu3
Replaces: squid-novm
Depends: adduser, libc6 (>= 2.4), libdb4.6, libldap-2.4-2 (>= 2.4.7),
libpam0g (>= 0.99.7.1), logrotate (>= 3.5.4-1), lsb-base, netbase,
squid-common (>= 2.6.18-1ubuntu3), ssl-cert (>= 1.0-11ubuntu1)
Pre-Depends: debconf (>= 1.2.9) | debconf-2.0
Suggests: logcheck-database, resolvconf (>= 0.40), smbclient,
squid-cgi, squidclient, winbind
Conflicts: sarg (<< 1.1.1-2), squid-novm
Conffiles:
 /etc/init.d/squid 19cb626e40f26e79596786ca3dbf991e
 /etc/logrotate.d/squid 04a97ec018c01cd54851de772812067f
 /etc/resolvconf/update-libc.d/squid c066626f87865da468a7e74dc5d9aeb0
Description: Internet object cache (WWW proxy cache)
 This package provides the Squid Internet Object Cache developed by
 the National Laboratory for Applied Networking Research (NLANR) and
 Internet volunteers.
Homepage: http://www.squid-cache.org/
Original-Maintainer: Luigi Gangitano 

Linux Linode01 2.6.18.8-linode19 #1 SMP Mon Aug 17 22:19:18 UTC 2009
i686 GNU/Linux


>
> Also, we are going to have to see what squid.conf you have ended up working 
> with. Minus the documentation comments and empty lines please.

Here is what I am using for TESTING only. I was getting TCP_DENIED/407
errors in the log, so I made an attempt to test it with no auth
required at all. (Not sure if I achieved that with this config or not,
but the problems didn't go away.)

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl purge method PURGE
acl CONNECT method CONNECT
http_access allow all
icp_access allow all
http_port 3128
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp:   1440    20% 10080
refresh_pattern ^gopher:    1440    0%  1440
refresh_pattern .   0   20% 4320
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
extension_methods REPORT MERGE MKACTIVITY CHECKOUT
hosts_file /etc/hosts
coredump_dir /var/spool/squid

>
>
>>
>> BTW, is there a recent preconfigured squid virtual appliance that I
>> could host on Amazon EC2 (or similar) that would be suitable for my
>> own personal proxy server?
>
> Not that I'm aware of. There have been several attempts in the last years to 
> get a current Squid appliance made. But none of those people have reported 
> back even to advertise their wares.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE21
>  Current Beta Squid 3.1.0.15


[squid-users] Squid proxy is very slow for web browsing in "near default" config

2010-01-10 Thread Dave T
I just set up squid for the first time. It is on a Ubuntu box hosted
on Linode.com. I have zero experience with proxy servers. I used this
guide:
http://news.softpedia.com/news/Seting-Up-a-HTTP-Proxy-Server-with-Authentication-and-Filtering-52467.shtml

(I also looked at a few other guides such as this one:
http://ubuntuforums.org/showthread.php?t=320733. However, I wanted the
most barebones config to start with and the link I used was the
simplest I found.)

So now that I have it set up, I'm testing it with FoxyProxy. It is not
working well. Many web pages do not load completely. Some load very
slowly. A few load fast (but even then, some images are often
missing). Many times I have to try an address several times before a
page will even start to load.

I am using iptables. When I turn the firewall off, I have slightly
fewer problems, but nothing significantly changes. I don't want to
leave the firewall off, so I took a few ideas from here:
http://www.cyberciti.biz/tips/linux-setup-transparent-proxy-squid-howto.html
But the changes I put in actually made the performance a little worse
than before. And like I said, even with the firewall off, the problems
I described remain.

What should I look at next to begin to understand my problem? Thanks.

BTW, is there a recent preconfigured squid virtual appliance that I
could host on Amazon EC2 (or similar) that would be suitable for my
own personal proxy server?

I found some links, but the only image seems to be from 2006.

Request: Proxy Firewall appliance (2009)
http://www.turnkeylinux.org/forum/general/20091109/request-proxy-firewall-appliance

wiki:
http://eu.squid-cache.org/Features/SquidAppliance
http://wiki.squid-cache.org/WishList

Quick mailing list discussion (2006)
http://www.squid-cache.org/mail-archive/squid-users/200803/0334.html

Proxy Standalone Appliance? (2008)
http://forums.whirlpool.net.au/forum-replies-archive.cfm/917508.html


Re: [squid-users] When is squid-2.7STABLE7 expected?

2009-09-04 Thread Dave Dykstra
On Thu, Sep 03, 2009 at 09:48:43PM +0200, Henrik Nordstrom wrote:
> tor 2009-09-03 klockan 09:46 -0500 skrev Dave Dykstra:
> > When is the next squid-2.7 stable release expected?  I am very eager
> > for the fix in http://www.squid-cache.org/bugs/show_bug.cgi?id=2451
> > (regarding 304 Not Modified responses).
> 
> When I find some spare or paid time to finish it up.

Ok, and about how much needs to be done to finish it up?

Also, do you have any reservations about using the current 2.7 HEAD in
production?  That is, do you consider there to be instabilities that
have been introduced since STABLE6?  It doesn't look very
straightforward for me to backport just this fix to STABLE6 because the
calling parameters to httpReplyBodySize have changed since then.

Thanks,

- Dave


[squid-users] When is squid-2.7STABLE7 expected?

2009-09-03 Thread Dave Dykstra
When is the next squid-2.7 stable release expected?  I am very eager
for the fix in http://www.squid-cache.org/bugs/show_bug.cgi?id=2451
(regarding 304 Not Modified responses).

- Dave Dykstra


RE: [squid-users] Laptops/Mobile Phones using Squid on the road

2009-08-31 Thread Dave Burkholder
Here is a link to my Squid config. 
http://www.thinkwelldesigns.com/backups/squid.zip 

The ACL rules I use are in the section below:

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

The only way I have been able to make this work reliably is to add more ACL 
rules such as:

acl elmerlaptop src ##.##.##.##
http_access allow elmerlaptop

But as I mentioned before, the public IP of the laptop changes.




-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, August 26, 2009 7:41 PM
To: Dave Burkholder
Subject: RE: [squid-users] Laptops/Mobile Phones using Squid on the road

On Wed, 26 Aug 2009 15:27:32 -0400, "Dave Burkholder"
 wrote:
> Authentication was created for exactly this purpose.
> 
> With explicitly set proxy settings in the browsers, there is no reason 
> why you can't allow them to login to the proxy when they are on the 
> road. Or even at HQ.
> 
> Note that by entering the proxy settings in the browsers you are no 
> longer using "transparent mode".
> 
> Assuming by "transparent" you actually mean "NAT intercepting" you 
> should of course have Squid listening on one port for the intercepted 
> requests (authentication not possible) and another for the configured 
> browsers (authentication possible).
> 
> Amos


Sorry if you get this twice, my mail app died...

I think the DG rules and access controls may have to be adjusted to let
through external people who are logged in.

If not that I think I'm going to have to see your acl and http_access lines
config to see if there is any obvious reason for the denial.

Amos




RE: [squid-users] Laptops/Mobile Phones using Squid on the road

2009-08-26 Thread Dave Burkholder
>Authentication was created for exactly this purpose.

>With explicitly set proxy settings in the browsers, there is no reason why
you can't allow >them to login to the proxy when they are on the road. Or
even at HQ.

>Note that by entering the proxy settings in the browsers you are no longer
using >"transparent mode".

>Assuming by "transparent" you actually mean "NAT intercepting" you should
of course have >Squid listening on one port for the intercepted requests
(authentication not possible) and >another for the configured browsers
(authentication possible).

>Amos
>--
>Please be using
>   Current Stable Squid 2.7.STABLE6 or 3.0.STABLE18
>   Current Beta Squid 3.1.0.13


I'm back on this after a few days... Thanks for your reply, Amos. What you
said about authentication makes a lot of sense. So I disabled transparent
mode and required authentication, and I get the exact same problem (using
ClarkConnect 5.0). Squid throws an error, but does NOT give me a login
dialog box as it would inside the LAN!

A few more details about what I'm trying to do...

1. Using Dansguardian content filter on Clarkconnect 5.0
2. When I'm OUTSIDE the CC lan using proxy settings in the browser, I get
the Dansguardian error page if I try to go to a banned site.
3. When I try to visit a site that is acceptable, I get the Squid error.
4. Then if I put in the ACL rule to allow my WAN IP, everything works.

Any ideas? I'd really like to just use proxy settings without VPN involved.

Thank you in advance.
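
For reference, the two-port arrangement Amos describes might look like
this in squid.conf (a hypothetical sketch: directive names vary between
Squid 2.x and 3.x, and the auth helper path is an assumption):

```
# Port for NAT-intercepted LAN traffic (authentication not possible here):
http_port 3128 transparent

# Port for explicitly configured roaming browsers (authentication possible):
http_port 3129

auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
acl roaming myport 3129
acl authed proxy_auth REQUIRED

http_access allow roaming authed
```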




[squid-users] Changing HTTP BASIC 'Realm' to force user logout / reauthentication

2009-07-21 Thread David (Dave) Donnan

Hello squid users. Is anyone able to help me, please?

I mistakenly thought I was clever and could force users to log out of 
squid by changing the realm and immediately restarting the server.

I even thought I could do this with a small cron job, say, 4 times a day.

Background:

  http://httpd.apache.org/docs/1.3/howto/auth.html

  so that if other resources are requested *from the same realm*, the
  same username and password can be returned to authenticate

Re-creation:

1. HTTP authenticate
2. delta squid.conf, specifically, auth_param basic realm *Change Realm *
3. service squid restart
4. F5 refresh

However, I surf seamlessly without the HTTP BASIC prompt.

Should this not work?

Cdlt, Dave



Re: [squid-users] delay pool: limit download on file types

2009-05-28 Thread Dooda Dave
Hi,

This is really something! Thanks so much. I can really see it in real time.

Best,
Dooda

On Wed, May 27, 2009 at 2:01 AM, Chris Robertson  wrote:
> Dooda Dave wrote:
>>
>> Chris,
>>
>> Thanks. I think it works; I tested by looking at the download screen.
>> But is there any better way I can monitor it?
>>
>
> http://samm.kiev.ua/sqstat/
>
>> Best
>> Dooda
>>
>
> Chris
>


[squid-users] delay pool: limit download on file types

2009-05-22 Thread Dooda Dave
Hi,

Can anyone guide me on how to configure Squid to use a delay pool to
limit download speed on particular file types, while pages still load
at full speed?

I'd appreciate any help.

Best,
Dooda
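
One common approach is to match the file extensions with a urlpath_regex
ACL and apply a delay pool only to those requests. A sketch follows; the
extension list and rate are assumptions to tune (Squid 2.x syntax, and
Squid must be built with --enable-delay-pools):

```
# ACL matching typical large-download extensions (adjust to taste):
acl bigfiles urlpath_regex -i \.(iso|zip|rar|exe|mp3|avi|mpg)$

delay_pools 1
delay_class 1 1                  # class 1: a single aggregate bucket
delay_parameters 1 32000/32000   # ~32 KB/s refill rate / bucket size
delay_access 1 allow bigfiles
delay_access 1 deny all          # everything else stays at full speed
```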


Re: [squid-users] Forward SSH on internal machine through Squid to external server

2009-05-21 Thread Dave Dykstra
On Thu, May 21, 2009 at 01:57:37PM +1200, Amos Jeffries wrote:
> > I would like to forward an scp session from one internal machine through
> > the Squid proxy and connect to an external machine. I have found many
> > documents that write about running squid over SSH but not the other way
> > around.  I searched on the Squid-Cache wiki for SSH but could not find
> > anything.
> 
> Squid provides the CONNECT HTTP method for this type of thing.
> 
> Setting the system http_proxy environment variable may make scp use that
> proxy as a gateway. If not you are probably out of luck. scp is intended
> to be very simple and easy to use for end-to-end encrypted links. Adding
> squid to the equation breaks that.
...
> Check the proxy capabilities of your programs (ssh, scp, whatever) they
> need to be capable of transport over HTTP-proxy. If they do configure it
> and set whatever ports they need to CONNECT to, to both the Safe_ports and
> SSL_ports ACL.
> If they don't support transport over HTTP-proxy thats the end of it.

No, it's not the end.  I have successfully tunnelled ssh over another
program that handles http-proxy:
http://www.nocrew.org/software/httptunnel.html

That program doesn't even require CONNECT, it goes over regular http and
it periodically (or when the connection drops) starts new http
connections without interrupting the tunnel.

- Dave
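
A typical httptunnel arrangement looks roughly like this (a hedged
sketch: host names, ports, and the proxy address are placeholders, and
exact flags may differ between httptunnel versions):

```shell
# On the outside server (wraps its SSH port in plain HTTP on port 8888):
hts --forward-port localhost:22 8888

# On the inside client (connects out through the HTTP proxy):
htc --proxy proxy.example.com:3128 --forward-port 2222 outside.example.com:8888

# Then ssh through the local end of the tunnel:
ssh -p 2222 user@localhost
```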


Re: [squid-users] delay_pools

2009-05-21 Thread Dooda Dave
Hi,

Saw Nyoman's. It's a quick setup, but I haven't tried it yet.

Is there any way I can limit just some file types when people
download, so pages still load at full speed?

I'd really appreciate any help.

Dooda

On Sat, Mar 28, 2009 at 12:18 AM, nyoman karna  wrote:
>
> Dear Maksim,
> first of all you'll need to attach your squid.conf
> without that we can only guess.
>
> but this is a simple example for delay pool i used,
> it create 2 pools, 1 for faculty (b...@32kbps) and
> 1 for students (b...@128kbps):
>
> acl faculty src 172.16.1.0/255.255.255.0
> acl students src 172.16.0.0/255.255.224.0
>
> delay_pools 2
> delay_class 1 2
> delay_class 2 2
> delay_access 1 allow faculty
> delay_access 1 deny all
> delay_access 2 allow students
> delay_access 2 deny all
>
> delay_parameters 1 256000/256000 4000/4000
> delay_parameters 2 256000/256000 16000/16000
>
> --
>  Nyoman Bogi Aditya Karna
>          IM Telkom
>  http://www.imtelkom.ac.id
> --
>
> --- On Fri, 3/27/09, Maksim Filenko  
> wrote:
>
>> From: Maksim Filenko 
>> Subject: [squid-users] delay_pools
>> To: squid-users@squid-cache.org
>> Date: Friday, March 27, 2009, 10:35 AM
>> Hi everyone!
>>
>> I've stuck with shaping issues.
>>
>> squid.exe -v
>>
>>         Squid Cache: Version
>> 2.7.STABLE4
>>         configure options:
>> --enable-win32-service --enable-storeio='ufs
>>         aufs null coss'
>> --enable-default-hostsfile=none
>>         --enable-removal-policies='heap
>> lru' --enable-snmp --enable-htcp
>>         --disable-wccp --disable-wccpv2
>> --enable-useragent-log
>>         --enable-referer-log
>> --enable-cache-digests --enable-auth='basic
>>         ntlm digest negotiate'
>> --enable-basic-auth-helpers='LDAP NCSA
>>         mswin_sspi'
>> --enable-negotiate-auth-helpers=mswin_sspi
>>
>> --enable-ntlm-auth-helpers='mswin_sspi fakeauth'
>>
>> --enable-external-acl-helpers='mswin_lm_group ldap_group'
>>         --enable-large-cache-files
>>
>> --enable-digest-auth-helpers='password LDAP eDirectory'
>>         --enable-forw-via-db
>> --enable-follow-x-forwarded-for
>>         --enable-delay-pools
>> --enable-arp-acl --prefix=c:/squid
>>
>>         Compiled as Windows System
>> Service.
>>
>> Here's what I've got in log:
>>
>>         2009/03/27 15:00:20|
>> Reconfiguring Squid Cache (version
>>         2.6.STABLE19)...
>>         2009/03/27 15:00:20| FD 11
>> Closing HTTP connection
>>         2009/03/27 15:00:20| FD 16
>> Closing SNMP socket
>>         2009/03/27 15:00:20| FD 14
>> Closing ICP connection
>>         2009/03/27 15:00:20| FD 15
>> Closing HTCP socket
>>         2009/03/27 15:00:20| FD 17
>> Closing SNMP socket
>>         2009/03/27 15:00:20| Cache dir
>> 'c:/squid/var/cache' size remains
>>         unchanged at 102400 KB
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2972 unrecognized:
>>         'delay_pools 5'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2974 unrecognized:
>>         'delay_class 1 1'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2975 unrecognized:
>>         'delay_class 2 1'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2976 unrecognized:
>>         'delay_class 3 2'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2977 unrecognized:
>>         'delay_class 4 1'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2978 unrecognized:
>>         'delay_class 5 1'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2980 unrecognized:
>>         'delay_access 1 allow media'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2981 unrecognized:
>>         'delay_access 1 deny all'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2982 unrecognized:
>>         'delay_access 2 allow
>> leechers'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2983 unrecognized:
>>         'delay_access 2 deny all'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2984 unrecognized:
>>         'delay_access 3 allow limited'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2985 unrecognized:
>>         'delay_access 3 deny all'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2986 unrecognized:
>>         'delay_access 4 allow
>> office_net'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2987 unrecognized:
>>         'delay_access 4 deny all'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2988 unrecognized:
>>         'delay_access 5 allow
>> unlim_ip'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2989 unrecognized:
>>         'delay_access 5 deny all'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2991 unrecognized:
>>         'delay_parameters 1
>> 16000/16000'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2992 unrecognized:
>>         'delay_parameters 2
>> 16000/16000'
>>         2009/03/27 15:00:20|
>> parseConfigFile: line 2993 unrecognized:
>>         'delay_parameters 3 32000/32000
>> 80

[squid-users] squid particular ACL authentication

2009-03-02 Thread Dooda Dave
Hi,

I was wondering if I can create two or more ACLs: one for a network
range that can bypass Squid authentication, while the other ACL must
be authenticated. Would that be possible?

I have squid3 on Ubuntu 8.10. I've been reading some online documents
about Squid ACLs but can't really figure out how to achieve that.

any help would be much appreciated.

-- 
Best Regards,
Dooda
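
The usual pattern relies on http_access rules being evaluated top to
bottom with the first match winning. A sketch, where the address range
and helper path are assumptions (squid3 syntax):

```
auth_param basic program /usr/lib/squid3/ncsa_auth /etc/squid3/passwd
auth_param basic realm proxy

acl trusted_lan src 192.168.10.0/24   # range that may skip authentication
acl authed proxy_auth REQUIRED

http_access allow trusted_lan         # matched first: no login prompt
http_access allow authed              # everyone else must authenticate
http_access deny all
```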


Re: [squid-users] How get negative cache along with origin server error?

2008-10-07 Thread Dave Dykstra
On Tue, Oct 07, 2008 at 08:38:12PM +0200, Henrik Nordstrom wrote:
> On tis, 2008-10-07 at 11:49 -0500, Dave Dykstra wrote:
> 
> > Ah, I never would have guessed that I needed to try 10 times before
> > negative_ttl would take effect for a dead host.  That wouldn't be
> > bad at all.
> 
> You don't. Squid does that for you automatically. 

I meant, in my testing I needed to try 10 times to see if negative_ttl
caching was working.  Or are you saying that squid tries to contact the
origin server 10 times on the first request before it even returns the
first 504?  I thought you meant it kept track of the number of client
attempts and should start caching it after 10 failures.

> > time I still saw the request get sent from the child squid to the parent
> > squid and return a 504 error.  This is unexpected to me; is it to you,
> > Henrik?  I would have thought the 504 error would get cached for three
> > minutes after the tenth try.
> 
> Agreed.

Ok, then I will file a bugzilla report.

Meanwhile, I believe I have a workaround, as I discussed in another post
on this thread
http://www.squid-cache.org/mail-archive/squid-users/200810/0171.html

Thanks,

- Dave
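
For readers following along, the two squid.conf directives under
discussion are set like this (a minimal fragment; defaults differ
between Squid versions):

```
negative_ttl 3 minutes   # how long negative (error) responses may be cached
max_stale 0              # never fall back to stale content when revalidation fails
```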


Re: [squid-users] How get negative cache along with origin server error?

2008-10-07 Thread Dave Dykstra
Henrik,

Thanks so much for your very informative reply!

On Thu, Oct 02, 2008 at 12:31:03PM +0200, Henrik Nordstrom wrote:
> By default Squid tries to use a parent 10 times before declaring it
> dead.

Ah, I never would have guessed that I needed to try 10 times before
negative_ttl would take effect for a dead host.  That wouldn't be
bad at all.

I just tried this now by having two squids, one a cache_peer parent of
the other.  I requested a URL while the origin server was up in order to
load the cache, with a CC max-age of 180.  Both squids have max_stale 0
and negative_ttl of 3 minutes.  Next, I put the origin server name as an
alias for localhost in /etc/hosts on both machines the squids were on,
so they both see connection refused when they try to connect to the
origin server.  I also restarted nscd and did squid -k reconfigure to
make sure the new host name was seen by squid.  After the (small) object
in the cache expired, I retried the request 20 times in a row.  Every
time I still saw the request get sent from the child squid to the parent
squid and return a 504 error.  This is unexpected to me; is it to you,
Henrik?  I would have thought the 504 error would get cached for three
minutes after the tenth try.

> Each time Squid retries a request it falls back on the next possible
> path for forwarding the request. What that is depends on your
> configuration. In normal forwarding without never_direct there usually
> never is more than at most two selected active paths: Selected peer if
> any + going direct. In accelerator mode or with never_direct more peers
> is selected as candidates (one sibling, and all possible parents).
> 
> These retries happens on
> 
> * 504 Gateway Timeout  (including local connection failure)
> * 502 Bad gateway
> 
> or if retry_on_error is enabled also on
> 
> * 401 Unauthorized
> * 500 Server Error
> * 501 Not Implemented
> * 503 Service not available
> 
> Please note that there is a slight name confusion relating to max-stale.
> Cache-Control: max-stale is not the same as the squid.conf directive. 
> 
> Cache-Control: max-stale=N is a permissive request directive, saying
> that responses up to the given staleness is accepted as fresh without
> needing a cache validation. It's not defined for responses.
> 
> The squid.conf setting is a restrictive directive, placing an upper
> limit on how stale content may be returned if cache validations fail.
> 
> The Cache-Control: stale-if-error response header is equivalent the
> squid.conf max-stale setting, and overrides squid.conf.

That's very good to know.  I didn't see that in the HTTP 1.1 spec, but
I see that Mark Nottingham submitted a draft protocol extension with
this feature.

> The default for stale-if-error if not specified (and squid.conf
> max-stale) is infinite.
> 
> Warning headers is not yet implemented by Squid. This is on the todo.

Sounds good.

- Dave


Re: [squid-users] How get negative cache along with origin server error?

2008-10-07 Thread Dave Dykstra
Mark,

Thanks for that suggestion.  I had independently come to the same idea,
after posting my message, but haven't yet had a chance to try it out.  I
currently have hierarchies of cache_peer parents but stop the hierarchies
just before the last step to the origin servers because they were
selected by the host & port number in the URLs.  The origin servers have
their own squids configured in accelerator mode so I think I will just
extend the hierarchies all the way to them and let the squids (the ones
which were formerly the top of the hierarchies) take care of detecting
when an origin server goes down (using the cache_peer monitorurl
option).  I did a little experiment and found out that it doesn't matter
what the host and port number are in a URL if the top of a cache_peer
parent hierarchy is an accelerator mode squid, so I don't think I'll
even have to change the application.

- Dave
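
The layout described above might be sketched like this (a hypothetical
squid.conf fragment; host names and the monitor URLs are placeholders,
and monitorurl is a Squid 2.x cache_peer option):

```
# Two origin accelerators as parents; monitorurl lets Squid detect a dead one:
cache_peer origin1.example.org parent 80 0 no-query monitorurl=http://origin1.example.org/ping
cache_peer origin2.example.org parent 80 0 no-query monitorurl=http://origin2.example.org/ping
never_direct allow all   # always forward via a parent
```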


On Fri, Oct 03, 2008 at 11:21:19AM +1000, Mark Nottingham wrote:
> Have you considered setting squid up to know about both origins, so it  
> can fail over automatically?
> 
> 
> On 26/09/2008, at 5:04 AM, Dave Dykstra wrote:
> 
> >I am running squid on over a thousand computers that are filtering data
> >coming out of one of the particle collision detectors on the Large
> >Hadron Collider.  There are two origin servers, and the application
> >layer is designed to try the second server if the local squid returns a
> >5xx HTTP code (server error).  I just recently found that before squid
> >2.7 this could never happen because squid would just return stale data
> >if the origin server was down (more precisely, I've been testing with
> >the server up but the listener process down so it gets 'connection
> >refused').  In squid 2.7STABLE4, if squid.conf has 'max_stale 0' or if
> >the origin server sends 'Cache-Control: must-revalidate' then squid will
> >send a 504 Gateway Timeout error.  Unfortunately, this timeout error
> >does not get cached, and it gets sent upstream every time no matter what
> >negative_ttl is set to.  These squids are configured in a hierarchy
> >where each feeds 4 others so loading gets spread out, but the fact that
> >the error is not cached at all means that if the primary origin server
> >is down, the squids near the top of the hierarchy will get hammered with
> >hundreds of requests for the server that's down before every request
> >that succeeds from the second server.
> >
> >Any suggestions?  Is the fact that negative_ttl doesn't work with
> >max_stale a bug, a missing feature, or an unfortunate interpretation of
> >the HTTP 1.1 spec?
> >
> >By the way, I had hoped that 'Cache-Control: max-stale=0' would work the
> >same as squid.conf's 'max_stale 0' but I never see an error come back
> >when the origin server is down; it returns stale data instead.  I wonder
> >if that's intentional, a bug, or a missing feature.  I also note that
> >the HTTP 1.1 spec says that there MUST be a Warning 110 (Response is
> >stale) header attached if stale data is returned and I'm not seeing
> >those.
> >
> >- Dave


Re: [squid-users] How get negative cache along with origin server error?

2008-10-07 Thread Dave Dykstra
On Sat, Oct 04, 2008 at 12:55:15PM -0400, Chris Nighswonger wrote:
> On Tue, Sep 30, 2008 at 6:13 PM, Dave Dykstra <[EMAIL PROTECTED]> wrote:
> >> On Thu, Sep 25, 2008 at 02:04:09PM -0500, Dave Dykstra wrote:
> >> > I am running squid on over a thousand computers that are filtering data
> >> > coming out of one of the particle collision detectors on the Large
> >> > Hadron Collider.
> 
> A bit off-topic here, but I'm wondering if these squids are being used
> in CERN's new computing grid? I noticed Fermi was helping out with
> this. 
> (http://devicedaily.com/misc/cern-launches-the-biggest-computing-grid-in-the-world.html)

The particular squids I was talking about are not considered to be part
of the grid, they're part of the "High-Level Trigger" filter farm that
is installed at the location of the CMS detector.  There are other
squids that are considered to be part of the grid, however, at each of
the locations around the world where CMS collision data is being
analyzed.  I own the piece of the software involved in moving detector
alignment & calibration data from CERN out to all the processors at all
the collaboration sites, which is needed to be able to understand the
collision data.  This data is on the order of 100MB but needs to get
sent to all the analysis jobs (and some of it changes every day or so),
unlike the collision data which is much larger but gets sent separately
to individual processors.  The software I own converts the data from a
database to http where it is cached in squids and then converts the data
from http to objects in memory.  The home page is frontier.cern.ch.

That article is misleading, by the way; the very nature of a computing
grid is that it doesn't belong to a single organization, so it's not
"CERN's new computing grid."  It is a collaboration of many
organizations; many different organizations provide the computing
resources, and many different organizations provide the software that
controls the grid and the software that runs on the grid.

- Dave


[squid-users] Multiple squids serving one port (was Re: [squid-users] Why single thread?)

2008-10-06 Thread Dave Dykstra
On Mon, Oct 06, 2008 at 01:07:49PM -0700, Gordon Mohr wrote:
> I can't find mention of this '-I' option elsewhere. (It's not in my 
> 2.6.STABLE14-based man page.)
> 
> Is there a writeup on this option anywhere?
> 
> Did it only appear in later versions?

Right, sorry, it appeared in 2.7:
http://www.squid-cache.org/cgi-bin/cvsweb.cgi/squid/doc/squid.8.in

> Is there a long-name for the option that would be easier to search for?

No.

> I would be interested in seeing your scripts if there are other wrinkles 
> to using Squid in this manner. We're currently using squid for 
> load-balancing on dedicated dual-core machines, so one core is staying 
> completely idle...

I'm including a perl script called 'multisquid' below that uses -I and
assumes that there are '.squid-N.conf' configure scripts where "N" is a
number 0, 1, etc.  I'm also including a bash script 'init-squid' that
generates those from a squid.conf, based on how many subdirectories of
the form 0, 1, etc. (up to 4) exist under the cache_dir.  It makes
squid 0 a cache_peer parent of the others so it's the only one that
makes upstream connections, but they all can serve clients.

- Dave

> Dave Dykstra wrote:
> >Meanwhile the '-I' option to squid makes it possible to run multiple
> >squids serving the same port on the same machine, so you can make use of
> >more CPUs.  I've got scripts surrounding squid startups to take
> >advantage of that.  Let me know if you're interested in having them.
> >Currently I run a couple machines using 2 squids each on 2 bonded
> >gigabit interfaces in order to get over 200 Mbytes/second throughput.
> >
> >- Dave


-- multisquid ---
#!/usr/bin/perl -w
#
# run multiple squids.
#  If the command line options are for starting up and listening on a
#  socket, first open a socket for them to share with squid -I.
#  If either one results in an error exit code, return the first error code.
# Written by Dave Dykstra, July 2007
#
use strict;
use Socket;
use IO::Handle;
use Fcntl;

if ($#ARGV < 2) {
  print STDERR "Usage: multisquid squidprefix numsquids http_port [squid_args ...]\n";
  exit 2;
}

my $prefix = shift(@ARGV);
my $numsquids = shift(@ARGV);
my $port = shift(@ARGV);
my $proto = getprotobyname('tcp');

if (!(($#ARGV >= 0) && (($ARGV[0] eq "-k") || ($ARGV[0] eq "-z")))) {
  #open the socket for both squids to listen on if not doing an
  # operation that doesn't use the socket (that is, -k or -z)
  close STDIN;
  my $fd;
  socket($fd, PF_INET, SOCK_STREAM, $proto) || die "socket: $!";
  setsockopt($fd, SOL_SOCKET, SO_REUSEADDR, 1)|| die "setsockopt: $!";
  bind($fd, sockaddr_in($port, INADDR_ANY)) || die "bind of port $port: $!";
}

my $childn;
for ($childn = 0; $childn < $numsquids; $childn++) {
  if (fork() == 0) {
    exec "$prefix/sbin/squid -f $prefix/etc/.squid-$childn.conf -I @ARGV" || die "exec: $!";
  }
  # get them to start at different times so they're identifiable by squidclient
  sleep 2;
}

my $exitcode = 0;
while(wait() > 0) {
  if (($? != 0) && ($exitcode == 0)) {
# Take the first non-zero exit code and ignore the other one.
# exit expects a byte, but the exit code from wait() has signal
#  numbers in low byte and exit code in high byte.  Combine them.
$exitcode = ($? >> 8) | ($? & 255);
  }
}

exit $exitcode;
-- init-squid ---
#!/bin/bash
# This script will work with one squid or up to 4 squids on the same http port.
# The number of squids is determined by the existence of cache directories
# as follows.  The main path to the cache directories is determined by the
# cache_dir option in squid.conf.  To run multiple squids, create directories
# of the form
#   `dirname $cache_dir`/$N/`basename $cache_dir`
# where N goes from 0 to the number of squids minus 1.  Also create 
# cache_log directories of the same form.  Note that the cache_log option
# in squid.conf is a file, not a directory, so the $N is up one level:
#   cache_log_file=`basename $cache_log`
#   cache_log_dir=`dirname $cache_log`
#   cache_log_dir=`dirname $cache_log_dir`/$N/`basename $cache_log_dir`
# The access_log should be in the same directory as the cache_log, and
# the pid_filename also needs to be in similarly named directories (the
# same directories as the cache_log is a good choice).

. /etc/init.d/functions

RETVAL=0

INSTALL_DIR=_your_base_install_dir_with_squid_and_utils_subdirectories_
#size at which rotateiflarge will rotate access.log
LARGE_ACCESS_LOG=10

CONF_FILE=$INSTALL_DIR/squid/etc/squid.conf

CACHE_DIR=`awk '$1 == "cache_dir" {x=$
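
As an aside on the comment block at the top of init-squid: the path scheme it
describes can be spelled out with dirname/basename. A small illustration,
using made-up paths (/var/cache/squid/cache and /var/log/squid/cache.log are
placeholders, not values from the actual config):

```shell
# Placeholder paths, not from the real squid.conf:
cache_dir=/var/cache/squid/cache
cache_log=/var/log/squid/cache.log

for N in 0 1; do
  # cache_dir for squid N: $N is inserted one level above the basename
  echo "$(dirname "$cache_dir")/$N/$(basename "$cache_dir")"
  # cache_log for squid N: $N goes one level above the log *directory*,
  # since cache_log names a file rather than a directory
  log_file=$(basename "$cache_log")
  log_dir=$(dirname "$cache_log")
  echo "$(dirname "$log_dir")/$N/$(basename "$log_dir")/$log_file"
done
```

For N=0 this prints /var/cache/squid/0/cache and /var/log/0/squid/cache.log,
matching the `dirname`/`basename` recipe in the script's comments.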

Re: [squid-users] Why single thread?

2008-10-06 Thread Dave Dykstra
Marcin,

In my case all of the data being sent out was small enough and
repetitive enough to be in the Linux filesystem cache.  That's where I
found the best throughput.  I think the typical size of the data items
were about 8-30MBytes.  It was a regular Linux ext3 filesystem.  The
machine happens to have been a dual dual-core 64-bit 2Ghz Opteron,
although I saw some Intel machines with similar performance per CPU but
on those I had only one gigabit network interface and one squid.

- Dave

On Mon, Oct 06, 2008 at 08:09:17PM +0200, Marcin Mazurek wrote:
> Dave Dykstra ([EMAIL PROTECTED]) napisa?(a):
> 
> > Meanwhile the '-I' option to squid makes it possible to run multiple
> > squids serving the same port on the same machine, so you can make use of
> > more CPUs.  I've got scripts surrounding squid startups to take
> > advantage of that.  Let me know if you're interested in having them.
> > Currently I run a couple machines using 2 squids each on 2 bonded
> > gigabit interfaces in order to get over 200 Mbytes/second throughput.
> > 
> 
> 
> What kind of storage do You use for such a IO performance, and what file
> system type on it, if that's not a secret:)
> 
> br
> 
> -- 
> Marcin Mazurek
> 


Re: [squid-users] Why single thread?

2008-10-06 Thread Dave Dykstra
Meanwhile the '-I' option to squid makes it possible to run multiple
squids serving the same port on the same machine, so you can make use of
more CPUs.  I've got scripts surrounding squid startups to take
advantage of that.  Let me know if you're interested in having them.
Currently I run a couple machines using 2 squids each on 2 bonded
gigabit interfaces in order to get over 200 Mbytes/second throughput.

- Dave

On Fri, Oct 03, 2008 at 12:01:26PM +1300, Amos Jeffries wrote:
> Roy M. wrote:
> >Hello,
> >
> >Why squid is running as a single thread program, wouldn't it perform
> >better if allow run as multithreaded as SMP or Quad core CPU are
> >popular now?
> >
> >Thanks.
> 
> Simply 'allowing' squid to run as multi-threaded is a very big change.
> We are doing what we can to work towards it. A year's worth of work is 
> now behind us, with at least another ahead before it's really possible.
> 
> Amos
> -- 
> Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] How get negative cache along with origin server error?

2008-09-30 Thread Dave Dykstra
I found out a little bit more by looking in the source code and the
generated headers and setting a few breakpoints.  The squid closest to
the origin server that is down (the one at the top of the cache_peer
parent hierarchy) never attempts to store the negative result.  Worse,
it sets an Expires: header that is equal to the current time.  Squids
further down the hierarchy do call storeNegativeCache() but they see
an expiration time that is already past so it isn't of any use.

Those things make it seem like squid is far from being able to
effectively handle failing over from one origin server to another
at the application level.

- Dave

On Tue, Sep 30, 2008 at 10:32:43AM -0500, Dave Dykstra wrote:
> Do any of the squid experts have any answers for this?
> 
> - Dave
> 
> On Thu, Sep 25, 2008 at 02:04:09PM -0500, Dave Dykstra wrote:
> > I am running squid on over a thousand computers that are filtering data
> > coming out of one of the particle collision detectors on the Large
> > Hadron Collider.  There are two origin servers, and the application
> > layer is designed to try the second server if the local squid returns a
> > 5xx HTTP code (server error).  I just recently found that before squid
> > 2.7 this could never happen because squid would just return stale data
> > if the origin server was down (more precisely, I've been testing with
> > the server up but the listener process down so it gets 'connection
> > refused').  In squid 2.7STABLE4, if squid.conf has 'max_stale 0' or if
> > the origin server sends 'Cache-Control: must-revalidate' then squid will
> > send a 504 Gateway Timeout error.  Unfortunately, this timeout error
> > does not get cached, and it gets sent upstream every time no matter what
> > negative_ttl is set to.  These squids are configured in a hierarchy
> > where each feeds 4 others so loading gets spread out, but the fact that
> > the error is not cached at all means that if the primary origin server
> > is down, the squids near the top of the hierarchy will get hammered with
> > hundreds of requests for the server that's down before every request
> > that succeeds from the second server.
> > 
> > Any suggestions?  Is the fact that negative_ttl doesn't work with
> > max_stale a bug, a missing feature, or an unfortunate interpretation of
> > the HTTP 1.1 spec?
> > 
> > By the way, I had hoped that 'Cache-Control: max-stale=0' would work the
> > same as squid.conf's 'max_stale 0' but I never see an error come back
> > when the origin server is down; it returns stale data instead.  I wonder
> > if that's intentional, a bug, or a missing feature.  I also note that
> > the HTTP 1.1 spec says that there MUST be a Warning 110 (Response is
> > stale) header attached if stale data is returned and I'm not seeing
> > those.
> > 
> > - Dave


Re: [squid-users] How get negative cache along with origin server error?

2008-09-30 Thread Dave Dykstra
Do any of the squid experts have any answers for this?

- Dave

On Thu, Sep 25, 2008 at 02:04:09PM -0500, Dave Dykstra wrote:
> I am running squid on over a thousand computers that are filtering data
> coming out of one of the particle collision detectors on the Large
> Hadron Collider.  There are two origin servers, and the application
> layer is designed to try the second server if the local squid returns a
> 5xx HTTP code (server error).  I just recently found that before squid
> 2.7 this could never happen because squid would just return stale data
> if the origin server was down (more precisely, I've been testing with
> the server up but the listener process down so it gets 'connection
> refused').  In squid 2.7STABLE4, if squid.conf has 'max_stale 0' or if
> the origin server sends 'Cache-Control: must-revalidate' then squid will
> send a 504 Gateway Timeout error.  Unfortunately, this timeout error
> does not get cached, and it gets sent upstream every time no matter what
> negative_ttl is set to.  These squids are configured in a hierarchy
> where each feeds 4 others so loading gets spread out, but the fact that
> the error is not cached at all means that if the primary origin server
> is down, the squids near the top of the hierarchy will get hammered with
> hundreds of requests for the server that's down before every request
> that succeeds from the second server.
> 
> Any suggestions?  Is the fact that negative_ttl doesn't work with
> max_stale a bug, a missing feature, or an unfortunate interpretation of
> the HTTP 1.1 spec?
> 
> By the way, I had hoped that 'Cache-Control: max-stale=0' would work the
> same as squid.conf's 'max_stale 0' but I never see an error come back
> when the origin server is down; it returns stale data instead.  I wonder
> if that's intentional, a bug, or a missing feature.  I also note that
> the HTTP 1.1 spec says that there MUST be a Warning 110 (Response is
> stale) header attached if stale data is returned and I'm not seeing
> those.
> 
> - Dave


Re: [squid-users] Cache Peers and Load Balancing

2008-09-30 Thread Dave Dykstra
On Mon, Sep 29, 2008 at 03:41:33PM -0500, Dean Weimer wrote:
> I am looking at implementing a new proxy configuration, using multiple peers 
> and load balancing, I have been looking through the past archives but I 
> haven't found the answers to some questions I have.
...
> Now the other question is whether or not I should configure the 3 parent 
> servers as siblings?
> Would doing so break the source hash?

Dean,

I can't answer most of your questions, I'm not familiar with source
hash, but I will point out something from my experience with cache_peer
siblings: they can't be used in combination with collapsed_forwarding
because it causes a deadlock.  In my application I can have many clients
requesting the same thing at the same time, so I've determined that
collapsed_forwarding is more important than making squids siblings.
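
A minimal sketch of that trade-off (the peer name is a placeholder):

```
# Collapse concurrent cache misses for the same URL into a single upstream
# request (squid 2.6/2.7 directive); with this enabled, use only parent
# peers, since sibling peers can deadlock as described above.
collapsed_forwarding on
cache_peer upstream.example.org parent 3128 3130 no-digest
```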

- Dave


[squid-users] RE: [Bulk] Re: [squid-users] Allowing web page access

2008-09-25 Thread Dave Beach
Works like a charm, thanks.

-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: September 23, 2008 11:46 PM
To: Dave Beach
Cc: squid-users@squid-cache.org
Subject: [Bulk] Re: [squid-users] Allowing web page access

> Hi  - I currently use squid (3.0 stable 1) as a web proxy for my internal
> home network. Among other things, it denies website access from my
> daughter's computer to anything other than specific domains I allow, using
> acl dstdomain statements. This works great.
>
> In addition to continuing to do that, I now want to allow her to access
> specific webpages as well. Her teacher maintains a class website on
> angelfire.com, and I don't want to add that entire domain - just the
> teacher's page, and all pages subordinate to it.
>
> I've been poring through the documents, and I can't seem to easily figure
> out how to do that. Please, take pity and drop hints.
>


 "url_regex" or "urlpath_regex" in
http://www.squid-cache.org/Versions/v3/3.0/cgman/acl.html

ie, put this (with changes) above the lines which deny her access.

  acl angelfire dstdomain .angelfire.com
  acl teacher_site urlpath_regex /folder/.*
  http_access allow daughterpc angelfire teacher_site


I'd also advise you to upgrade the squid to something at least 3.0.STABLE7.

Amos



[squid-users] How get negative cache along with origin server error?

2008-09-25 Thread Dave Dykstra
I am running squid on over a thousand computers that are filtering data
coming out of one of the particle collision detectors on the Large
Hadron Collider.  There are two origin servers, and the application
layer is designed to try the second server if the local squid returns a
5xx HTTP code (server error).  I just recently found that before squid
2.7 this could never happen because squid would just return stale data
if the origin server was down (more precisely, I've been testing with
the server up but the listener process down so it gets 'connection
refused').  In squid 2.7STABLE4, if squid.conf has 'max_stale 0' or if
the origin server sends 'Cache-Control: must-revalidate' then squid will
send a 504 Gateway Timeout error.  Unfortunately, this timeout error
does not get cached, and it gets sent upstream every time no matter what
negative_ttl is set to.  These squids are configured in a hierarchy
where each feeds 4 others so loading gets spread out, but the fact that
the error is not cached at all means that if the primary origin server
is down, the squids near the top of the hierarchy will get hammered with
hundreds of requests for the server that's down before every request
that succeeds from the second server.

Any suggestions?  Is the fact that negative_ttl doesn't work with
max_stale a bug, a missing feature, or an unfortunate interpretation of
the HTTP 1.1 spec?

By the way, I had hoped that 'Cache-Control: max-stale=0' would work the
same as squid.conf's 'max_stale 0' but I never see an error come back
when the origin server is down; it returns stale data instead.  I wonder
if that's intentional, a bug, or a missing feature.  I also note that
the HTTP 1.1 spec says that there MUST be a Warning 110 (Response is
stale) header attached if stale data is returned and I'm not seeing
those.

- Dave


Re: [squid-users] round robin question

2008-09-25 Thread Dave Dykstra
On Thu, Sep 25, 2008 at 08:51:00AM -0400, jeff donovan wrote:
> 
> On Sep 24, 2008, at 11:38 AM, Kinkie wrote:
> 
> >On Wed, Sep 24, 2008 at 5:16 PM, jeff donovan  
> ><[EMAIL PROTECTED]> wrote:
> >>greetings
> >>
> >>How could I go about load balancing two or more transparent proxy  
> >>squid servers ?
> >>No caching invloved. This is strictly for access.
> >>
> >>i thought about dns round robin, but that didn't make sense since i  
> >>am forwarding all connections to a single interface.
> >>
> >>any insight would be helpful
> >
> >
> >So both instances are running on the same (bridging?) system?
> >Can you give some more details?
> 
> I have 12 subnets coming off router 1 and 12 coming off router 2 each  
> pass through a transparent squid.
> 
> I want to for better or worse " Mux " these two and add a 3rd box.
> 
> combine the 24 subnets ---> ( 3 squids ) --->

What is the reason for combining them?  What do you mean by a single
"interface"?  What data rates are you talking about at each point?

I've seen squid load balancing done 3 ways: round-robin dns, a "blade"
in a Cisco switch, and on 2 systems running LinuxVirtualServer in
combination with linux-ha.

- Dave


[squid-users] Allowing web page access

2008-09-23 Thread Dave Beach
Hi  - I currently use squid (3.0 stable 1) as a web proxy for my internal
home network. Among other things, it denies website access from my
daughter's computer to anything other than specific domains I allow, using
acl dstdomain statements. This works great.

In addition to continuing to do that, I now want to allow her to access
specific webpages as well. Her teacher maintains a class website on
angelfire.com, and I don't want to add that entire domain - just the
teacher's page, and all pages subordinate to it.

I've been poring through the documents, and I can't seem to easily figure
out how to do that. Please, take pity and drop hints.



[squid-users] cant find "Server" from webmin menu

2008-09-06 Thread Dooda Dave
hi all,

i have my squid 2.7 stable 4 running on win xp. it works fine as a
standard proxy without any authentication at all.

my default squid location is c:/squid/

i installed webmin 1.430 on the same box. its service starts perfectly
and i can log in normally from a browser; however, i couldn't find any menu
item named "Server" to control the Squid Proxy. And what would be the
configuration to tell webmin to look for it under c:/squid/ ?

dunno if anyone is using webmin for win32 here?

your help would be much appreciated.

-- 
Best Regards,
Dooda


[squid-users] make install: Squid 3.0 Stable 8 on W2k3

2008-09-03 Thread Dooda Dave
Hi,

Last time, i had a problem where i got stuck with "make," but then i
figured out that i had missed some gcc compiler components. However, when
i get past that and run "make install," the following errors occur.

make[3]: *** [WinSvc.o] Error 1
make[3]: Leaving directory `/cygdrive/c/squid-3.0.STABLE8/src'
make[2]: *** [install-recursive] Error 1
make[2]: Leaving directory `/cygdrive/c/squid-3.0.STABLE8/src'
make[1]: *** [install] Error 2
make[1]: Leaving directory `/cygdrive/c/squid-3.0.STABLE8/src'
make: *** [install-recursive] Error 1

-- 
Dooda


[squid-users] compiling squid error on windows

2008-09-03 Thread Dooda Dave
Dear all,

I've downloaded squid 3.0 stable 8 and am trying to compile it on
Windows 2003. However, i hit an error when starting to run "make." The
error is as below:

[EMAIL PROTECTED] /cygdrive/c/squid-3.0.STABLE8
$ make
make: *** No targets specified and no makefile found.  Stop.

I couldn't really get help from google at all. Hope some of you may
have encountered the same problem.

Thanks in advance.

Regards,
Dooda


Re: [squid-users] assertion failed

2008-07-10 Thread Dave

I get this one too; sorry, I do not have a core dump of it though.

Henrik Nordstrom wrote:

On fre, 2008-07-11 at 00:27 +0545, Pritam wrote:
  

Hi,

My squid box was restarted with the following message in cache.log.

httpReadReply: Excess data from "HEAD http://dl_dir.qq.com/qqfile/ims/qqdoctor/tsfscan.dat"

assertion failed: forward.c:109: "!EBIT_TEST(e->flags, ENTRY_FWD_HDR_WAIT)"
Starting Squid Cache version 2.7.STABLE3 for i686-pc-linux-gnu...
Process ID 22147
With 8192 file descriptors available...

I googled the issue and couldn't actually get the clearer idea behind.



You found a bug.

Please file a bug report. If you can, please include a stack trace of that
error.


http://wiki.squid-cache.org/SquidFaq/TroubleShooting#head-7067fc0034ce967e67911becaabb8c95a34d576d

Regards
Henrik

  


Re: [squid-users] 2.6.STABLE19 and 2.6.STABLE20 missing from mirrors

2008-05-01 Thread Dave Holland
On Thu, May 01, 2008 at 01:42:58AM +1000, Joshua Root wrote:
> I notice that ftp://ftp.squid-cache.org/pub/squid-2/STABLE/ seems to
> have stopped being updated after the 2.6.STABLE18 release. Consequently,
> none of the mirrors have 2.6.STABLE19 or 2.6.STABLE20.

I don't know if it's related, but the *.asc signature files for STABLE20
are missing too. The links to them from
http://www.squid-cache.org/Versions/v2/2.6/ are also broken. Please can
they be replaced?

thanks,
Dave
-- 
** Dave Holland ** Systems Support -- Infrastructure Management **
** 01223 496923 ** The Sanger Institute, Hinxton, Cambridge, UK **
"The plural of anecdote is not data."


-- 
 The Wellcome Trust Sanger Institute is operated by Genome Research 
 Limited, a charity registered in England with number 1021457 and a 
 company registered in England with number 2742969, whose registered 
 office is 215 Euston Road, London, NW1 2BE. 


Re: [squid-users] control bandwidth usage on internal LAN?

2008-04-16 Thread Dave Augustus
On Sunday 09 March 2008 9:49:49 pm Chuck Kollars wrote:
> How can I prioritize traffic on my _internal_ LAN (or
> to use different words the _other_ side of Squid)?
>
OK

> The first request for a very large file uses some
> amount of drop bandwidth which I can control with
> things like delay_pools. But the second request is
> answered out of the cache, at disk speed, and
> saturates my LAN. I'd much rather respond smartly to
> the 20 other users by making the large file go to the
> back of the line. How can I turn down the priority of
> large file responses from the cache?
>

What you need is QOS. Check out Zeroshell.net

We installed this 2 months ago and never looked back. We use it in transparent 
bridge mode.

Dave


Re: [squid-users] Confusing redirection behaviour

2008-03-28 Thread Dave Coventry
Hi Amos,

On Fri, Mar 28, 2008 at 4:26 PM, Amos Jeffries wrote:
>  Because the ACL to which you have attached the deny_info is only doing
>  an allow. You need to use it to actually deny before the deny_info will
>  work.
>
>  Try:
>http_access deny !lan

Okay, I'll give it a go. Is that instead of "http_access allow lan"?
It's a shame I didn't get this until after I'd left work; I'll have to
wait until Monday, I guess.
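
For reference, Amos's ordering written out with the ACL names used earlier in
this thread (a sketch, not a tested config):

```
external_acl_type ipauth ttl=5 negative_ttl=5 %SRC /usr/local/squid/libexec/checkip
acl lan external ipauth
# deny_info only fires when a deny rule actually matches on the named ACL,
# so the deny must come before (or instead of) the allow:
deny_info http://192.168.60.254/login.html lan
http_access deny !lan
```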

>  > url_rewrite_program to redirect to my login page. The interesting
>  > thing here is that if I redirect as follows:
>  > --
>  > print "302:http://192.168.60.254/cgi-bin/auth.cgi\n";
>  > --
>  > Then I get an error message which says "Error the requested URL could
>  > not be retrieved." as the root has been removed from the path.
>  > (see http://davec.uklinux.net/Squid3.0-HEAD.jpg )
>  >
>  > But, if I put a slash in front of the redirection URL:
>  > --
>  > print "302:/http://192.168.60.254\n";
>  > --
>  > then Squid attempts to redirect me to the originally requested URL
>  > with /http://192.168.60.254/ appended.
>  > (see http://davec.uklinux.net/Redirectionerror.jpg )

Do you have any idea what's happening as regards the
"url_rewrite_program" directive?


Re: [squid-users] Confusing redirection behaviour

2008-03-28 Thread Dave Coventry
Hi Henrik,

On Fri, Mar 28, 2008 at 2:29 AM, Henrik Nordstrom wrote:
>  http://www.squid-cache.org/Versions/v2/2.6/cfgman/deny_info.html

Thank you. I'm not sure I understand it, though. Do you need to set
%s? How do you use it?

>
>  > I've tried:
>  >
>  > deny_info "302:http://192.168.60.254/login.html" lan
>
>  Should be
>
>  deny_info http://192.168.60.254/login.html lan

Yes, I've also tried that.

This is what I've currently got:
--
external_acl_type ipauth ttl=5 negative_ttl=5 %SRC
/usr/local/squid/libexec/checkip
acl lan external ipauth
http_access allow lan

deny_info http://192.168.60.254/login.html lan
--

However, redirection does not take place and I'm served the Standard
error page in ERR_ACCESS_DENIED.

In an attempt to find a kludge for what I'm trying to do, I used
url_rewrite_program to redirect to my login page. The interesting
thing here is that if I redirect as follows:
--
print "302:http://192.168.60.254/cgi-bin/auth.cgi\n";
--
Then I get an error message which says "Error the requested URL could
not be retrieved." as the root has been removed from the path.
(see http://davec.uklinux.net/Squid3.0-HEAD.jpg )

But, if I put a slash in front of the redirection URL:
--
print "302:/http://192.168.60.254\n";
--
then Squid attempts to redirect me to the originally requested URL
with /http://192.168.60.254/ appended.
(see http://davec.uklinux.net/Redirectionerror.jpg )

Regards and thanks for your time :)

Dave


Re: [squid-users] Confusing redirection behaviour

2008-03-27 Thread Dave Coventry
Hi, I've done as Chris suggests and used
<form action="http://192.168.60.254/cgi-bin/auth.cgi" name=login>

with the following result:
http://tinypic.com/view.php?pic=2cz6b87&s=3

As you can see the root "http://192.168.60.254" has been removed and
squid is reporting an error because "/cgi-bin/auth.cgi" cannot be
found. Could this be an iptables issue?

(I have given up using deny_info and have simply replaced
"share/errors/English/ERR_ACCESS_DENIED" with my login page.)


Re: [squid-users] Confusing redirection behaviour

2008-03-26 Thread Dave Coventry
Thanks, Chris, Henrik. (Apologies to Henrik; I thought I was replying
to the list, forgot that the default is to reply off-list.)

I think it was my firewall which was causing a lot of the odd
behavior, I hope I have that sorted now...

Chris, regarding the 302 redirection and the use of %s, where can I
find information on this?

I've tried:

 deny_info "302:http://192.168.60.254/login.html" lan

but the Access denied page which is served is just the
"/usr/local/squid/share/errors/English/ERR_ACCESS_DENIED".


[squid-users] Confusing redirection behaviour

2008-03-17 Thread Dave Coventry
In an attempt to generate a login page I was previously using
"external_acl_type" to define a helper program to define my acl, and
then using "deny_info" to define a logon page for my users.

This failed because the redirected page did not appear to use its own
URL as its root and instead substituted the requested URL.

This meant that I was unable to call a CGI from my logon form because
the form's CGI was appended to the originally requested (and denied)
URL. So, if the user requested "toyota.co.za", and was (correctly)
sent to my login "192.168.60.254/login.html", the CGI called from the
login page's form was "toyota.co.za/cgi-bin/myperlscript.cgi".

Amos suggested that, instead of hosting the cgi script on the server,
I placed it on the 'net, but I'm afraid this wouldn't suit my purpose.

In desperation I'm looking at "url_rewrite_program", but it also
appears to have redirection issues.

If I use the Perl script below, I would expect the requested URL to be
replaced by http://localhost/login.html, whatever the user requested.

However 2 results occur. If the requested URL is a simple tld, like
http://www.toyota.co.za, then the user is redirected to the Apache
default page which simply proclaims (misleadingly!) that "It Works!".
This in spite of the fact that the default page has been removed and
replaced.

If the URL takes the form
"http://www.google.co.za/firefox?client=firefox-a&rls=org.mozilla:en-GB:official"
then the user is presented with a 404 which says "firefox not found".
/var/log/apache/error.log confirms that "/var/www/firefox" is not
found.

This behaviour persists if I replace http://localhost with
http://192.168.60.254 or with http://news.bbc.co.uk, or whatever.

#!/usr/bin/perl
$|=1;
while (<>) {
@X = split;
$url = $X[0];
$ip = $X[1];
if (1 == 1){
print "302:http://localhost/login.html\n";
}
else{
print "$url\n";
}
}


Re: [squid-users] Squid/Samba authenication with wrong username

2008-03-13 Thread Dave Augustus
On Thursday 13 March 2008 10:50:50 am J Beris wrote:
> Hi Shane,
>
> My krb5.conf
>
> [libdefaults]
>   Default_realm = OURDOMAIN
>
> [realms]
>   OURDOMAIN = {
>   kdc = 1.2.3.4
>   kdc = 1.2.3.5
>   kdc = host.domain
>   kdc = host1.domain
>   }
>
> [logging]
>   kdc = FILE:/path/to/log/krb5kdc.log
>   admin_server = FILE:/path/to/log/kadmind.log
>   default = FILE:/path/to/log/krb5lib.log
>
> That's all.
>
> > And I receive the following errors (quite lengthy, sorry) when running
> > the NTLM_AUTH command, as shown:
> >
> > [EMAIL PROTECTED] Shane]# /usr/lib/squid/ntlm_auth --username=shane
> >
> > /usr/lib/squid/ntlm_auth: invalid option -- -

Use the ntlm_auth provided by samba instead.

Dave


Re: [squid-users] Squid/Samba authenication with wrong username

2008-03-12 Thread Dave Augustus
On Wednesday 12 March 2008 4:31:16 pm Leach, Shane - MIS Laptop wrote:
> I am not sure that I am clear.  It is working already for the most part,
> just not exactly as I want it to.
>
> Take this example:
>
> If I use command "wbinfo -u" I will receive the user "Shane" as one
> account listed... But, in Windows XP, I am signed in under
> "DOMAIN\Shane" so the authentication does not recognize me.  If I type
> in "Shane" in logon screen for Squid, I am able to use just fine... The
> access log is updated as I browse the web.  But, if I attempt to logon
> with "DOMAIN\Shane" I am rejected.
>
> I want Squid to recognize the "DOMAIN\Shane" as the username so my users
> do not have to logon.
>
> It would seem that if I can append "DOMAIN\" to the username that is
> passed, things would be fine... But I am not sure.
>
> Thank you for the assistance.
>
> Shane
>

Set the default domain in smb.conf and krb5.conf

Dave
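
A minimal sketch of what Dave is pointing at (the workgroup and realm names here are placeholders, not from the thread; `winbind use default domain` is the Samba option that makes bare usernames and DOMAIN\-prefixed ones resolve to the same account):

```ini
# smb.conf (sketch)
[global]
    workgroup = DOMAIN
    security = ads
    winbind use default domain = yes

# krb5.conf (sketch)
[libdefaults]
    default_realm = DOMAIN.EXAMPLE.COM
```

With `winbind use default domain = yes`, `wbinfo -u` reports plain usernames and winbind accepts them without the domain prefix, which is the mismatch described earlier in this thread.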


Re: [squid-users] Squid/Samba authenication with wrong username

2008-03-12 Thread Dave Augustus
On Wednesday 12 March 2008 1:50:20 pm Leach, Shane - MIS Laptop wrote:
> Dave,
>
> Perhaps my terminology was incorrect.  I am wanting Squid to log/filter
> web traffic.  I want permissions to be based on A/D security groups.
> From what I read, using NTLM or Samba, I could do this... The proxy
> works fine, although it is requiring a login when a user opens an IE
> session and I don't want the user to be prompted for username and
> password.  Instead, I'd like Windows to pass the credentials
> automatically.
>
> Like I noted, though, it would appear it is passing only the username
> and not the domain\username... It occurred to me that this could have
> been the reason for the login every time someone opens IE.
>
> Any suggestions or ideas?
>
Google for squid ntlm samba and you should find several resources about 
setting up what you want. I would get it working without groups first and 
then add them later, one thing at a time.
:)

Dave


Re: [squid-users] Squid/Samba authenication with wrong username

2008-03-12 Thread Dave Augustus
On Wednesday 12 March 2008 12:17:16 pm Leach, Shane - MIS Laptop wrote:
> I currently have Samba 3.028 and Squid 7:2.6Stable set up to
> authenticate Active Directory users to the proxy server.  I want the
> proxy to be transparent though and it is not.
>

Shane,

Transparent Proxy and Authentication are mutually exclusive - either users 
authenticate or they don't.

What are you trying to accomplish?

Dave


Re: [squid-users] Redirection on error.

2008-03-12 Thread Dave Coventry
Hi,

I was hoping to replace the ERR_ACCESS_DENIED page with a logon page
which could authenticate the user against a password. It doesn't need
to be very secure.

The problem is that the logon page cannot call the required CGI
scripts from /usr/local/squid/share/errors/English/

Attempting to place the logon page in "/var/www/apache2-default/"
using "deny_info /var/www/apache2-default/login.html ipauthACL"
generates this error:


2008/03/12 13:33:33| errorTryLoadText:
'/usr/local/squid/share/errors/English//var/www/apache2-default/login.html':
(2) No such file or directory

Using "deny_info http://localhost/login.html ipauthACL" or "deny_info
http://192.168.60.254/login.html ipauthACL" appears to work, but
subsequent calls to (say) "cgi-bin/auth.pl" are appended onto the
original URL. For example, if the user requests "www.toyota.co.za",
"www.toyota.co.za/cgi-bin/auth.pl" is returned.

Is there any way of modifying this behavior?


[squid-users] Possible Error

2008-03-11 Thread Dave Coventry
Hi,

I am still unable to get Squid to process my acl_external_type script
to run as expected.

I'm getting an error in my cache.log 'ipcacheAddEntryFromHosts: Bad IP
address 'localhost.localdomain'' (see log listing below)

Is it possible that this is causing my script's anomalies?

Kind Regards,

Dave Coventry

2008/03/11 13:00:33| Starting Squid Cache version 3.0.STABLE2-20080307
for i686-pc-linux-gnu...
2008/03/11 13:00:33| Process ID 4635
2008/03/11 13:00:33| With 1024 file descriptors available
2008/03/11 13:00:33| ipcacheAddEntryFromHosts: Bad IP address
'localhost.localdomain'
2008/03/11 13:00:33| DNS Socket created at 0.0.0.0, port 32772, FD 7
2008/03/11 13:00:33| Adding nameserver 192.168.10.213 from /etc/resolv.conf
2008/03/11 13:00:33| helperOpenServers: Starting 5 'checkip' processes
2008/03/11 13:00:34| Unlinkd pipe opened on FD 17
2008/03/11 13:00:34| Swap maxSize 102400 KB, estimated 7876 objects
2008/03/11 13:00:34| Target number of buckets: 393
2008/03/11 13:00:34| Using 8192 Store buckets
2008/03/11 13:00:34| Max Mem  size: 8192 KB
2008/03/11 13:00:34| Max Swap size: 102400 KB
2008/03/11 13:00:34| Version 1 of swap file with LFS support detected...
2008/03/11 13:00:34| Rebuilding storage in /usr/local/squid/var/cache (DIRTY)
2008/03/11 13:00:34| Using Least Load store dir selection
2008/03/11 13:00:34| Set Current Directory to /usr/local/squid/var/cache
2008/03/11 13:00:34| Loaded Icons.
2008/03/11 13:00:34| Accepting transparently proxied HTTP connections
at 0.0.0.0, port 3128, FD 19.
2008/03/11 13:00:34| Accepting ICP messages at 0.0.0.0, port 3130, FD 20.
2008/03/11 13:00:34| HTCP Disabled.
2008/03/11 13:00:34| Ready to serve requests.
2008/03/11 13:00:34| Done reading /usr/local/squid/var/cache swaplog
(201 entries)
2008/03/11 13:00:34| Finished rebuilding storage from disk.
2008/03/11 13:00:34|   201 Entries scanned
2008/03/11 13:00:34| 0 Invalid entries.
2008/03/11 13:00:34| 0 With invalid flags.
2008/03/11 13:00:34|   201 Objects loaded.
2008/03/11 13:00:34| 0 Objects expired.
2008/03/11 13:00:34| 0 Objects cancelled.
2008/03/11 13:00:34| 0 Duplicate URLs purged.
2008/03/11 13:00:34| 0 Swapfile clashes avoided.
2008/03/11 13:00:34|   Took 0.14 seconds (1415.77 objects/sec).
2008/03/11 13:00:34| Beginning Validation Procedure
2008/03/11 13:00:34|   Completed Validation Procedure
2008/03/11 13:00:34|   Validated 427 Entries
2008/03/11 13:00:34|   store_swap_size = 2252
2008/03/11 13:00:35| storeLateRelease: released 0 objects


[squid-users] popups on linux box- ntlm? ldap helper?

2008-03-07 Thread Dave Augustus
Hello all,

I get auth popups on firefox 2 on my workstation (centos 4.6).  My workstation 
is also a domain machine using samba. Now I am aware of the ntlm problem with 
winbind- that is, occasional popups. Is this to be expected for my machine as 
well or should I use an ldap helper for non-domain machines?

( parenthetically- Is there a way I can use my local winbind pipe for proxy 
authentication?)

Running squid-2.6.STABLE6-5.el5_1.2 on Centos 5. Here is the squid.conf for 
auth section:

# Active Directory configuration
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 30
# Basic authentication
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Proxy Server
auth_param basic credentialsttl 2 hours

acl authenticated_users proxy_auth REQUIRED


Thanks!
Dave


Re: RS: [squid-users] winbindd: Exceeding 200 client connections, no idle connection found

2008-03-06 Thread Dave Augustus
On Tuesday 04 March 2008 5:08:54 am Francisco Martinez Espadas wrote:
> 2.6stable18

I have a Centos5.0 box now- where did you get squid 2.6 stable18 from? I don't 
see it in the upgrade path?

Thanks!
Dave 


Re: [squid-users] Authentication Hack

2008-03-06 Thread Dave Coventry
As far as I can tell the helper script never gets executed.

If there are errors in the script, Squid will respond to those errors,
but it doesn't seem to actually run the script.


Re: [squid-users] Authentication Hack

2008-03-05 Thread Dave Coventry
Thanks Adrian,

On Wed, Mar 5, 2008 at 1:31 PM, Adrian Chadd wrote:
> Uhm, try:
>
>  #!/usr/bin/perl -w
>
>  use strict; # (because you should!)
Point taken.

>
>  $| = 1;
>
>  while (<>) {
> chomp;
> my ($ip) = $_;
> # XXX should verify IP is an IP and not something nasty!
> ...
>  }

I'll try it!

>  The question then is how to query a cgi from a helper. I'd try the LWP stuff
>  in Perl to talk to a cgi-bin ; what you've doen there is try to read a file,
>  not call a cgi-bin. :)

My understanding is that Squid checks the helper to carry out a check
against the IP, User, etc according to the FORMAT parameter to test
that they belong to an acl, based on OK or ERR. My script was just a
simple test against the existence of a file generated by a cgi script
called by the ACCESS_DENIED error page replacement.

If you can see a way to short-cut this, please tell me more!

As far as I can see, though, Squid is looking for either OK or ERR and
ignores anything else.


Re: [squid-users] Authentication Hack

2008-03-05 Thread Dave Coventry
Thanks, Mick.

On Wed, Mar 5, 2008 at 12:08 PM, Michael Graham wrote:
>  (Sorry Dave I keep hitting reply and not reply to list)
Yes, I keep doing that :)

>  External helps are not supposed to exit once they have completed a
>  request.  Your perl script should read from stdin then write OK/ERR then
>  wait for more input.
Ah!

So this should work?

 1:#!/usr/bin/perl
 2:while(1){
 3:  $| = 1;
 4:  $ip=<STDIN>;
 5:  $ip=chomp($ip);
 6:  $ipfile='/var/www/apache2-default/cgi-bin/ips/'.$ip;
 7:  #print $ipfile;
 8:  if (-e $ipfile){print "OK";}
 9:  else {print "ERR : ".$ip;}
10:}

I'll try it right now.
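
For comparison, here is a sketch of the same helper in Python (the directory path is taken from the thread). The protocol points that matter: the helper must loop forever, every reply must end with a newline, and output must be flushed after each reply. A helper that exits after one request, or replies without the trailing newline, is what produces the "crashing too rapidly" FATAL shown elsewhere in this thread.

```python
import os
import sys

# Directory of per-IP marker files, as described in the thread.
IP_DIR = "/var/www/apache2-default/cgi-bin/ips"

def check(line, ip_dir=IP_DIR):
    """Decide OK/ERR for one request line from Squid (the %SRC IP)."""
    ip = line.strip()
    if ip and os.path.exists(os.path.join(ip_dir, ip)):
        return "OK"
    return "ERR"

def main():
    # An external ACL helper must stay running: read one line,
    # write one newline-terminated reply, flush, wait for more.
    for line in sys.stdin:
        sys.stdout.write(check(line) + "\n")
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```

This is only a sketch of the helper protocol, not a drop-in replacement for the Perl script under discussion.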


Re: [squid-users] Authentication Hack

2008-03-05 Thread Dave Coventry
On Wed, Mar 5, 2008 at 11:20 AM, Michael Graham wrote:
>  deny_info http://myhost/login.cgi?url=%s ipauthACL
>
>  then the login page will be your cgi script and as an added bonus you'll
>  get url set as the original url that caused the deny.  Then you can
>  redirect to it after a successful login.
>
>

Okay, thanks. I'll try that.

However, it appears that, when the screen goes blank (ie, when I'm
expecting my login page to appear), it's actually crashing Squid.

I've revised my helper script slightly (the STDIN apparently has a
newline which confused the script):

1:#!/usr/bin/perl
2:$| = 1;
3:$ip=<STDIN>;
4:$ip=chomp($ip);
5:$ipfile='/var/www/apache2-default/cgi-bin/ips/'.$ip;
6:#print $ipfile;
7:if (-e $ipfile){print "OK";}
8:else {print "ERR : ".$ip;}

This appears in the cache.log:

2008/03/05 11:33:44| WARNING: ipauth #1 (FD 7) exited
2008/03/05 11:33:44| WARNING: ipauth #2 (FD 8) exited
2008/03/05 11:33:44| WARNING: ipauth #3 (FD 9) exited
2008/03/05 11:33:44| Too few ipauth processes are running
2008/03/05 11:33:44| storeDirWriteCleanLogs: Starting...
2008/03/05 11:33:44|   Finished.  Wrote 195 entries.
2008/03/05 11:33:44|   Took 0.0 seconds (874439.5 entries/sec).
FATAL: The ipauth helpers are crashing too rapidly, need help!

Squid Cache (Version 2.6.STABLE18): Terminated abnormally.

Squid then seems to restart without a problem. (Which is why I thought
the redirection behaviour was to blame.)

Damned if I can see what is going wrong

Thanks again for your assistance.


Re: [squid-users] Authentication Hack

2008-03-04 Thread Dave Coventry
I believe that this is the thing that is defeating me at the moment.

I cannot get my Error page Form to call my CGI script:

http://www.mail-archive.com/squid-users@squid-cache.org/msg53327.html


Re: [squid-users] Authentication Hack

2008-03-04 Thread Dave Coventry
My Bad.

I had used $_ instead of <STDIN> in my Perl program.

It still doesn't work, though: I get a blank page instead of my logon
page. The Apache access.log and errors.log don't appear to have any
entries.

I'll investigate further...


Re: [squid-users] Authentication Hack

2008-03-04 Thread Dave Coventry
On Mon, Mar 3, 2008 at 7:24 PM, Michael Graham wrote:
>  I think I missed a line out, try:
>
>
>  external_acl_type ipauth %SRC /usr/local/squid/libexec/checkip
>  acl ipauthACL external ipauth # <-- This creates the ACL
>  http_access allow ipauthACL
>
Hi Michael,

Thank you for your patience.

Of course! The acl hadn't been declared! Still, I wasn't really aware
of the external argument for acl :?

However, Squid returns this in my /usr/local/squid/var/logs/cache.log:

2008/03/04 10:07:24| Ready to serve requests.
2008/03/04 10:07:24| WARNING: ipauth #1 (FD 7) exited
2008/03/04 10:07:24| WARNING: ipauth #2 (FD 8) exited
2008/03/04 10:07:24| WARNING: ipauth #3 (FD 9) exited
2008/03/04 10:07:24| Too few ipauth processes are running
FATAL: The ipauth helpers are crashing too rapidly, need help!

My Perl script is pretty simple, it just checks for the existence of a
file with the name of the user's IP. If the file exists, the user has
authenticated, if not he needs to log in.

#!/usr/bin/perl -w

$| = 1;
if (-e '/var/www/apache2-default/cgi-bin/ips/'.$_){print "OK";}
else {print "ERR";}

(I'm assuming that squid places the user's IP onto the STDIN and I
don't have to pass the IP address from the squid.conf file).


Re: [squid-users] Redirection on error.

2008-03-01 Thread Dave Coventry
Thanks for your help.

On Sat, Mar 1, 2008 at 11:42 AM, Amos Jeffries  wrote:
>  I'm not sure what you mean by this?
>  The error response and page as a whole _replaces_ the
>  original URL and  page requested _as a whole_.

Well, if I compose an HTML page to replace ERR_ACCESS_DENIED, and the
page has an IMG tag which refers to "images/logo.jpg", then apache
assumes that the location of the logo.jpg file is on the server to
which I was attempting to connect before my access was denied.

So if I was attempting to view http://www.cricinfo.com, apache assumes
that the location of the file "logo.jpg" is at
http://www.cricinfo.com/images/logo.jpg and returns a "404"

If the IMG tag is changed to "http://localhost/images/logo.jpg" the
result is the same.

If, however, the IMG tag is changed to
"http://192.168.60.254/images/logo.jpg" the result is slightly
different: the /var/log/apache2/access.log file reveals that apache
believes a dummy file has been requested and returns 200.

127.0.0.1 - - [01/Mar/2008:11:52:32 +0200] "GET / HTTP/1.0" 200 738
"-" "Apache/2.2.4 (Ubuntu) PHP/5.2.3-1ubuntu6 (internal dummy
connection)"

It may be that Apache is at fault here, and I will research this.

But my gut feel is that Squid is spoofing the location of the
ERR_ACCESS_DENIED file as being on the server of the requested URL.

This is not a big deal as far as the "images/logo.jpg" is concerned,
but it drives a coach and horses through my idea to call a perl cgi
script from the ERR_ACCESS_DENIED page.
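
One way to read this: Squid serves the error page in the body of the response for the denied URL, so the browser resolves every relative link against that URL. Only fully-qualified links (scheme and host included) avoid it. A sketch of how the ERR_ACCESS_DENIED markup would need to look (the host address and cgi name are taken from this thread, the field names are assumed):

```html
<!-- Links must be absolute: the browser believes this page lives at the denied URL -->
<img src="http://192.168.60.254/images/logo.jpg">
<form action="http://192.168.60.254/cgi-bin/auth.pl" method="post">
  <input type="text" name="user">
  <input type="password" name="pass">
  <input type="submit" value="Log in">
</form>
```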


[squid-users] Authentication Hack

2008-02-29 Thread Dave Coventry
I understand that transparent proxy cannot ask the browser for
Authentication because the browser is not aware of the existence of
the proxy.

I can't believe that there is not a work-around for this...

I have several laptops on my network which are used on other networks,
so I need the connection through the proxy to be "automagic" to the
extent that I don't need to ask my CEO to reconfigure his browser
everytime he comes into the office. But I also need to be able to
track web usage.

I have thought up a hack involving the following:
I can set up a file containing an ip address on each line /etc/squid/iplist.

Then I set up the squid.conf to have the following line:

acl authorisedip src "/etc/squid/iplist"

I changed the ERR_ACCESS_DENIED file to contain a form which calls a
perl program (catchip.pl) passing it a username and password which, if
correct, appends the user's ip to the /etc/squid/iplist file.
(removing the IP when the user closes his browser would be trickier).

However, this all falls down because it appears that the file is only
parsed on startup, which sort of subverts its usefulness.

I can't believe that this avenue has not been fully explored. Can
anyone comment on this hack?

Is there a simpler method of getting this done?
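
The file trick fails because `acl authorisedip src "/etc/squid/iplist"` is read only at startup or reconfigure. The per-request alternative Squid offers is an external ACL helper, which later messages in this archive end up using. A sketch of how it replaces the file (the helper path appears elsewhere in the thread; the `ttl` value is an assumption and controls how long a helper verdict is cached):

```
external_acl_type ipauth ttl=60 %SRC /usr/local/squid/libexec/checkip
acl ipauthACL external ipauth
http_access allow ipauthACL
deny_info http://192.168.60.254/login.html ipauthACL
http_access deny all
```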


Re: [squid-users] The requested URL could not be retrieved: invalid url

2008-02-28 Thread Dave Coventry
On Thu, Feb 28, 2008 at 11:08 PM, Matus UHLAR - fantomas wrote:
>
>  I guess you are trying to use squid as intercepting proxy but didn't tell it
>  so. Look at "transparent" option for http_port directive

Hi Matus,

I have "http_port 3128 transparent" in my squid.conf and this is the
only occurance of the http_port directive.

It works fine except when I need to access the intranet apache server
(which is on the same machine).

