Re: [squid-users] squid 3.3.8 failed to start because of hard-coded acl with ::1

2014-02-10 Thread Craig R. Skinner

Bugged: http://bugs.squid-cache.org/show_bug.cgi?id=4024

This known bug has resulted in a FreeBSD binary build patch.


On 2014-01-21 Tue 12:42 PM |, Craig R. Skinner wrote:
> ping
> 
> On 2014-01-08 Wed 11:13 AM |, Craig R. Skinner wrote:
> > On 2014-01-01 Wed 20:55 PM |, Amos Jeffries wrote:
> > > > 
> > > > I included a link to a bug verified by the FreeBSD ports team.
> > > > 
> > > >>
> > > >> The line you have mentioned:
> > > >> http://bazaar.launchpad.net/~squid/squid/3-trunk/view/head:/src/cf.data.pre#L847
> > > >> Assumes that the machine is ipv6 enabled by default.
> > 
> > The FreeBSD patch removes that assumption.
> > 
> > > > 
> > > > It's very easy to test. No kernel or squid recompile needed.
> > > > 
> > > > By setting the DNS resolver to use IPv4 only, squid can't start/parse
> > > >
> > > > (i.e. it is a DNS resolution issue):
> > > >>>
> > > >>> $ fgrep family /etc/resolv.conf
> > > >>> family inet4
> > > >>>
> > > 
> > > Exactly.
> > > 
> > > > 
> > > > Re-enabling IPv6 DNS resolution lets squid run again:
> > > > 
> > > >>>
> > > >>> $ fgrep family /etc/resolv.conf
> > > >>> #family inet4
> > > >>>
> > > > 
> > > 
> > > Possibly the resolv.conf configuration directive could be done earlier
> > > in the configuration sequence, the ACL made non-fatal when an invalid
> > > value is passed for interpretation as an IP address, and Squid updated
> > > to support that family directive from resolv.conf.
> > > 
> > > Amos
> > 
> > That seems sensible.
> > 
> 
> -- 
> Craig Skinner | http://twitter.com/Craig_Skinner | http://linkd.in/yGqkv7


Re: [squid-users] squid 3.3.8 failed to start because of hard-coded acl with ::1

2014-01-21 Thread Craig R. Skinner
ping

On 2014-01-08 Wed 11:13 AM |, Craig R. Skinner wrote:
> On 2014-01-01 Wed 20:55 PM |, Amos Jeffries wrote:
> > > 
> > > I included a link to a bug verified by the FreeBSD ports team.
> > > 
> > >>
> > >> The line you have mentioned:
> > >> http://bazaar.launchpad.net/~squid/squid/3-trunk/view/head:/src/cf.data.pre#L847
> > >> Assumes that the machine is ipv6 enabled by default.
> 
> The FreeBSD patch removes that assumption.
> 
> > > 
> > > It's very easy to test. No kernel or squid recompile needed.
> > > 
> > > By setting the DNS resolver to use IPv4 only, squid can't start/parse
> > >
> > > (i.e. it is a DNS resolution issue):
> > >>>
> > >>> $ fgrep family /etc/resolv.conf
> > >>> family inet4
> > >>>
> > 
> > Exactly.
> > 
> > > 
> > > Re-enabling IPv6 DNS resolution lets squid run again:
> > > 
> > >>>
> > >>> $ fgrep family /etc/resolv.conf
> > >>> #family inet4
> > >>>
> > > 
> > 
> > Possibly the resolv.conf configuration directive could be done earlier
> > in the configuration sequence, the ACL made non-fatal when an invalid
> > value is passed for interpretation as an IP address, and Squid updated
> > to support that family directive from resolv.conf.
> > 
> > Amos
> 
> That seems sensible.
> 

-- 
Craig Skinner | http://twitter.com/Craig_Skinner | http://linkd.in/yGqkv7


Re: [squid-users] squid 3.3.8 failed to start because of hard-coded acl with ::1

2014-01-08 Thread Craig R. Skinner
On 2014-01-01 Wed 20:55 PM |, Amos Jeffries wrote:
> > 
> > I included a link to a bug verified by the FreeBSD ports team.
> > 
> >>
> >> The line you have mentioned:
> >> http://bazaar.launchpad.net/~squid/squid/3-trunk/view/head:/src/cf.data.pre#L847
> >> Assumes that the machine is ipv6 enabled by default.

The FreeBSD patch removes that assumption.

> > 
> > It's very easy to test. No kernel or squid recompile needed.
> > 
> > By setting the DNS resolver to use IPv4 only, squid can't start/parse
> >
> > (i.e. it is a DNS resolution issue):
> >>>
> >>> $ fgrep family /etc/resolv.conf
> >>> family inet4
> >>>
> 
> Exactly.
> 
> > 
> > Re-enabling IPv6 DNS resolution lets squid run again:
> > 
> >>>
> >>> $ fgrep family /etc/resolv.conf
> >>> #family inet4
> >>>
> > 
> 
> Possibly the resolv.conf configuration directive could be done earlier
> in the configuration sequence, the ACL made non-fatal when an invalid
> value is passed for interpretation as an IP address, and Squid updated
> to support that family directive from resolv.conf.
> 
> Amos

That seems sensible.



Re: [squid-users] squid 3.3.8 failed to start because of hard-coded acl with ::1

2013-12-31 Thread Craig R. Skinner
On 2013-12-31 Tue 23:07 PM |, Eliezer Croitoru wrote:
> Hey Craig,
> 
> I want to verify the issue.
> Do these FreeBSD machines operate only on the ipv4 level?

As I wrote Eliezer, I use OpenBSD which is dual stack.

I included a link to a bug verified by the FreeBSD ports team.

> 
> The line you have mentioned:
> http://bazaar.launchpad.net/~squid/squid/3-trunk/view/head:/src/cf.data.pre#L847
> Assumes that the machine is ipv6 enabled by default.

It's very easy to test. No kernel or squid recompile needed.

By setting the DNS resolver to use IPv4 only, squid can't start/parse
(i.e. it is a DNS resolution issue):

> >
> >$ fgrep family /etc/resolv.conf
> >family inet4
> >

Re-enabling IPv6 DNS resolution lets squid run again:

> >
> >$ fgrep family /etc/resolv.conf
> >#family inet4
> >

Maybe squid could first check at run time whether IPv6 DNS resolution is
available before requiring IPv6 default ACLs?
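
(For illustration only, a rough pre-start probe along those lines, assuming
getent(1) is present and that the libc resolver honours the family line in
resolv.conf; the alternate config path is hypothetical.)

#!/bin/sh
# Sketch: can the resolver handle an IPv6 literal at all?
if getent hosts ::1 >/dev/null 2>&1; then
        exec /usr/local/sbin/squid                        # stock defaults are fine
else
        # IPv6 resolution unavailable; use a config whose ACLs avoid ::1
        exec /usr/local/sbin/squid -f /etc/squid/squid-ipv4.conf
fi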

FreeBSD solved it by removing IPv6 items from the hardcoded default ACLs.

Admins can still use IPv6 in /etc/squid/squid.conf, but it is their choice.

Thanks,
-- 
Craig Skinner | http://www.bbc.co.uk/programmes/b03mtrg9/clips


[squid-users] squid 3.3.8 failed to start because of hard-coded acl with ::1

2013-12-31 Thread Craig R. Skinner
#-=-=-=-=-= FYI -=-=-=-=-=-

This is probably a bug, but I can't create a bugzilla account as there
is no DNS PTR record for east.squid-cache.org, which I've raised with
postmaster@, hostmaster@ & r...@packet-pushers.com

#-=-=-=-=-= FYI -=-=-=-=-=-


When using only IPv4, Squid 3.3.8 fails to start, citing bungled config.

FreeBSD uses a patch, see below.



$ uname -srp
OpenBSD 5.4 i386

$ pkg_info -I squid
squid-3.3.8 WWW and FTP proxy cache and accelerator

$ fgrep family /etc/resolv.conf
family inet4

$ grep ^acl /etc/squid/squid.conf
acl localnet src 192.168.169.0/24   # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

$ /usr/local/sbin/squid -k parse
2013/12/31 11:28:35| Startup: Initializing Authentication Schemes ...
2013/12/31 11:28:35| Startup: Initialized Authentication Scheme 'basic'
2013/12/31 11:28:35| Startup: Initialized Authentication Scheme 'digest'
2013/12/31 11:28:35| Startup: Initialized Authentication Scheme 'negotiate'
2013/12/31 11:28:35| Startup: Initialized Authentication Scheme 'ntlm'
2013/12/31 11:28:35| Startup: Initialized Authentication.
2013/12/31 11:28:35| aclIpParseIpData: Bad host/IP: '::1' in '::1', flags=0 : 
(-5) no address associated with name
FATAL: Bungled Default Configuration line 11: acl localhost src 127.0.0.1/32 ::1
Squid Cache (Version 3.3.8): Terminated abnormally.
CPU Usage: 0.094 seconds = 0.055 user + 0.039 sys
Maximum Resident Size: 5836 KB
Page faults with physical i/o: 0


$ fgrep family /etc/resolv.conf
#family inet4


$ /usr/local/sbin/squid -k parse
2013/12/31 12:11:05| Startup: Initializing Authentication Schemes ...
2013/12/31 12:11:05| Startup: Initialized Authentication Scheme 'basic'
2013/12/31 12:11:05| Startup: Initialized Authentication Scheme 'digest'
2013/12/31 12:11:05| Startup: Initialized Authentication Scheme 'negotiate'
2013/12/31 12:11:05| Startup: Initialized Authentication Scheme 'ntlm'
2013/12/31 12:11:05| Startup: Initialized Authentication.
2013/12/31 12:11:05| Processing Configuration File:
/etc/squid/squid.conf (depth 0)
2013/12/31 12:11:05| Processing: acl localnet src 192.168.169.0/24 # RFC1918 
possible internal network
...
...
..
.
[OK]


Bugged by FreeBSD ports team:
http://www.freebsd.org/cgi/query-pr.cgi?pr=176951
Their patch on same page:
http://www.freebsd.org/cgi/query-pr.cgi?pr=176951&getpatch=1


Maybe about line 846/7 of src/cf.data.pre (revision 13199)
http://bazaar.launchpad.net/~squid/squid/3-trunk/view/head:/src/cf.data.pre
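
(To illustrate the idea only, not the actual FreeBSD patch text: the built-in
default shown in the FATAL message above would lose its IPv6 literal, and any
other IPv6 literals in the hardcoded defaults would presumably get the same
treatment.)

# stock built-in default, as quoted in the FATAL line:
acl localhost src 127.0.0.1/32 ::1
# IPv4-only variant of the kind such a patch substitutes:
acl localhost src 127.0.0.1/32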


Cheers,
-- 
Craig Skinner | http://twitter.com/Craig_Skinner | http://linkd.in/yGqkv7


[squid-users] Allow External Site.

2010-08-03 Thread Craig
Hi All-
 
I have a user who is trying to get to the following site:
https://gcsdskyward.org:444/scripts/wsisa.dll/WService=wsFam/fwemnu01.w 
 
I have Squid 2.7.  I am not trying to deny access to any web site; I am using 
squid to track web site usage.  With this in mind I have done very little 
modification to the squid.conf file.  What did I accidentally change, or what do 
I need to change to allow the above link to work?
 
I have attempted to put in an ACL; below is just one of many attempts.
#acl Geneseo Schools
acl gs dstdomain 
https://gcsdskyward.org:444/scripts/wsisa.dll/WService=wsFam/fwemnu01.w 
http_access allow gs
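
(Editorial aside, a sketch only: dstdomain matches bare host names, not full
URLs, and the more likely blocker is that port 444 is in neither the stock
Safe_ports nor SSL_ports lists, so the default deny rules reject the CONNECT
tunnel. Something along these lines may be closer:)

acl Safe_ports port 444     # the site serves https on the non-standard port 444
acl SSL_ports port 444      # permit CONNECT tunnels to that port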
 
Thanks
Craig
United Way of the Quad Cities Area




[squid-users] AUTH_ON_ACCELERATION in Squid 3

2008-09-04 Thread Craig Kelley
Hello Squid users;

I've been using AUTH_ON_ACCELERATION to help control access to squid
servers that cache data from a primary Apache server.  This works
great in 2.5, but I've been playing around with Squid 3, and was
wondering how to do the same thing with it.  I've basically used this
setup to get it to function (without requiring authentication):

http_port 1234 defaultsite=10.0.0.25
cache_peer 10.0.0.25 parent 80 0 no-query originserver

Then I set up basic NCSA auth for testing:

auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/testing
auth_param basic children 5
auth_param basic realm Testing Squid Auth
auth_param basic credentialsttl 2 hours

But Squid just happily serves and caches data from 10.0.0.25 without
requiring authentication.

Is this possible anymore?
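
(A hedged aside: Squid 3 has no AUTH_ON_ACCELERATION build option, and whether
a 3.x accelerator will challenge origin-style requests cleanly is a separate
question, but the access-control side of requiring a login generally looks
like this; the acl name is illustrative.)

acl authed proxy_auth REQUIRED
http_access allow authed
http_access deny all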

Thanks!

  -Craig

-- 
http://inconnu.islug.org/~ink finger [EMAIL PROTECTED] for PGP block


[squid-users] help with squid_session redirection

2007-12-28 Thread Craig


I'm working on setting up squid_session to point users to an acceptable use
policy before they are allowed to surf and I just want to get a sanity check
on my config.

According to the man pages (http://linuxreviews.org/man/squid_session/) and
several posts (i.e.
http://www.mail-archive.com/squid-users@squid-cache.org/msg45599.html) found
in this archive...

I should have the following lines in the TAG acl section of squid.conf:

external_acl_type session ttl=300 negative_ttl=0 children=1
concurrency=200 %LOGIN /usr/lib/squid/squid_session
acl session external session 

(note: /usr/lib/squid/ is where squid_session was put when squid was
installed)

Then in the TAG http_access section, I should have the following:

http_access deny !session 

And finally in the TAG deny_info section, I place the following line:

deny_info http://your.server/bannerpage?url=%s session

making sure that ?url=%s follows whatever url I put there for my AUP page.



However, the above settings did not force the test web client (configured to
use the proxy) to view the url for the http://your.server/bannerpage page
(currently a static web page to check functionality), so I changed the first
line to be:

external_acl_type session ttl=300 negative_ttl=0 children=1
concurrency=200 %LOGIN /usr/local/squid/libexec/squid_session -a

(note the -a at the end)
However, that just made the web client load the requested page really slowly
without loading the URL for the AUP.

I don't have a database set up; I was just going to let memory hold the
session details.

Why isn't it redirecting to the AUP?  Any suggestions?  Am I missing
something obvious?  

Thanks.

Craig L. Bowser
Information Assurance Manager
---
To lead a symphony You must occasionally turn your back on the crowd. -
Anonymous 



[squid-users] How to create an accept terms of use page

2007-12-26 Thread Craig


Hi, I'm wondering if I can use squid to emulate the setup many hotels/wifi
hotspots have where a user must go through an acceptance page before their
computer is allowed to access the Internet.  That acceptance page can either
be a simple 'yes' button or the user must fill in a code of some sort.  

I've been searching the web for several hours and I have only found one
reference to such a configuration; a brief exchange on this list in 2000
where the person asked if it could be done and someone else said it was not
possible.  

Since that was seven years ago, I was wondering if a solution had been found
either using squid only or squid as part of a larger system of devices.  Is
there a link to somewhere I've missed?  I didn't see anything from a quick
scan of the documentation either.
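
(Editorial note: the squid_session external ACL helper shipped with newer
Squid releases is aimed at exactly this splash-page pattern; see the
squid_session thread above. A rough sketch of the moving parts, with
illustrative paths and URL:)

external_acl_type session ttl=300 negative_ttl=0 %LOGIN /usr/lib/squid/squid_session
acl session external session
http_access deny !session
deny_info http://your.server/aup-page?url=%s session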

We're having issues with a remote site doing things they shouldn't.

Thanks.


Craig L. Bowser
Information Assurance Manager
---
After all is said and done, more is said than done.  



[squid-users] common squid hostnames & RFC 2219

2007-10-20 Thread Craig Skinner
What are the most common host names that users on this list use for
their squid boxes?

I'm asking in light of RFC 2219 while cobbling up a fairly generic
WPAD proxy.pac file.


http://www.faqs.org/rfcs/rfc2219.html

3. Special cases

   Special Cases:
   -----------------------------------------------------
   Alias     Service
   -----------------------------------------------------
   archie    archie [ARCHIE]
   finger    Finger [RFC-1288]
   ftp       File Transfer Protocol [RFC-959]
   gopher    Internet Gopher Protocol [RFC-1436]
   ldap      Lightweight Directory Access Protocol [RFC-1777]
   mail      SMTP mail [RFC-821]
   news      Usenet News via NNTP [RFC-977]
   ntp       Network Time Protocol [RFC-1305]
   ph        CCSO nameserver [PH]
   pop       Post Office Protocol [RFC-1939]
   rwhois    Referral WHOIS [RFC-1714]
   wais      Wide Area Information Server [RFC-1625]
   whois     NICNAME/WHOIS [RFC-954]
   www       World-Wide Web HTTP [RFC-1945]
   -----------------------------------------------------


So do folk commonly use these host names for squid, or something else?:

squid.example.org
proxy.exam..
webcache.
cache.
www-proxy.
webproxy.
gateway.


What is the "prefered" host name for the service?
-- 
Craig Skinner | http://www.kepax.co.uk | [EMAIL PROTECTED]


Re: [squid-users] ACL help: blocking non-html objects from particular domains

2007-10-16 Thread Craig Skinner
On Wed, Oct 17, 2007 at 01:12:41AM +1300, Amos Jeffries wrote:
> Doh!. I'm just going to go aside and kick myself a bit.
> 
>   reP_mime_types is a REPLY acl.
> 
> it should be used with http_reply_access  :-P

Beautie mate! Stupid of me!

acl our_networks src 127.0.0.1/32
http_access allow our_networks
acl suspect-domains dstdom_regex "/etc/squid/suspect-domains.acl"
http_access allow suspect-domains
http_access deny all
acl ok-mime-types rep_mime_type -i text/html
http_reply_access allow ok-mime-types
http_reply_access deny all

Nice one.


Re: [squid-users] Squid error

2007-10-15 Thread Craig Skinner
On Mon, Oct 15, 2007 at 05:34:03AM -0700, tosin oyenusi wrote:
> Dear All,
> 
> Please assist.Users behind my squid proxy(version
> 2.5)cannot do a successful ping to addresses on the
> internet e.g www.yahoo.com etc. It comes with this
> error: "ping request could not find host
> www.yahoo.com, please check the name and try again".
> when i use the IP it replies with "request timed out".
> The port I use on squid is 2880 and I do not have
> firewall configured. The irony is that though this
> addresses cannot be pinged, but can be surfed. Please
> help out. i need to be able to ping on clients behind
> my squid server.
> 

Squid is not an ICMP proxy; it is an HTTP proxy.

You need to configure your packet filter.


Re: [squid-users] ACL help: blocking non-html objects from particular domains

2007-10-15 Thread Craig Skinner
On Mon, Oct 15, 2007 at 12:04:41AM +1300, Amos Jeffries wrote:
> It should work. What does cache.log / access.log say when (3) is used?
> 

Thanks for the help, I'll work on dstdomains next, logs below:


###


acl our_networks src 127.0.0.1/32
http_access allow our_networks
acl suspect-domains dstdom_regex "/etc/squid/suspect-domains.acl"
acl ok-mime-types rep_mime_type -i ^text/html$
http_access allow suspect-domains ok-mime-types
http_access deny all

The request GET http://www.example.com/ is DENIED, because it matched 'all'
TCP_DENIED/403 1375 GET http://www.example.com/ - NONE/- text/html


###


acl our_networks src 127.0.0.1/32
http_access allow our_networks
acl suspect-domains dstdom_regex "/etc/squid/suspect-domains.acl"
acl ok-mime-types rep_mime_type -i ^text/html$
http_access deny suspect-domains !ok-mime-types
http_access allow suspect-domains
http_access deny all

The request GET http://www.example.com/ is DENIED, because it matched 
'ok-mime-types'
TCP_DENIED/403 1375 GET http://www.example.com/ - NONE/- text/html





[squid-users] ACL help: blocking non-html objects from particular domains

2007-10-13 Thread Craig Skinner
I'm attempting to use ACLs to block non-HTML objects from particular
domains. i.e: users should be able to see the html, but not the images.

Tried various forms and always end up with all or nothing:

acl suspect-domains dstdom_regex "/etc/squid/suspect-domains.acl"
acl ok-mime-types rep_mime_type -i ^text/html$
acl ok-mime-types rep_mime_type -i text/html

# 1
#http_access allow ok-mime-types
#http_access allow suspect-domains

# 2
#http_access allow suspect-domains ok-mime-types

# 3
#http_access deny suspect-domains !ok-mime-types
#http_access allow suspect-domains

http_access deny all


What am I missing here?
-- 
Craig Skinner | http://www.kepax.co.uk | [EMAIL PROTECTED]


[squid-users] 2.5 -> 2.6 accel migration

2007-09-18 Thread Craig Skinner
I have a general purpose box that acts as a caching firewall for a small
LAN, and it also reverse proxies (httpd accel) for apache on the
localhost to the web.


I do not use transparent mode; users load a proxy.pac file.

In 2.5 my config was:

acl accel_host dst 127.0.0.1/32 an.ip.address/32
acl accel_port port 80
http_access deny to_localhost
acl our_networks src 192.168.6.0/24 a.network.address/29 127.0.0.1/32
http_access allow our_networks
http_access deny !accel_port
acl local-servers dstdomain .example.org
http_access allow local-servers
httpd_accel_host 127.0.0.1
httpd_accel_port 80
httpd_accel_single_host on
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
forwarded_for off




In 2.6, I can get outbound caching working for the LAN with:

allow_underscore off
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
acl accel_host dst 127.0.0.1/32 an.ip.address/32
acl accel_port port 80
http_access deny to_localhost
acl our_networks src 192.168.6.0/24 a.network.address/29 127.0.0.1/32
http_access allow our_networks
http_access deny !accel_port
acl local-servers dstdomain .example.org
http_access allow local-servers
forwarded_for off


And I can get inbound requests from the Internet working with the above
plus the two lines below, but that kills local outbound access, as all
requests are sent to apache:


http_port 3128 vhost (packet filter redirect)
cache_peer 127.0.0.1 parent 80 0 no-query originserver


I've followed various suggestions on 
http://wiki.squid-cache.org/SquidFaq/ReverseProxy but these seem to be 
for use with squid hosts that only work in one direction.
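
(For what it's worth, a rough and untested 2.6-style sketch of keeping the two
roles apart: label the origin-server peer and bind it to the local-servers acl
so only your own domains are routed to apache, while 3128 stays a plain
forward-proxy port. The name localApache is illustrative.)

http_port branch.birch:80 vhost
cache_peer 127.0.0.1 parent 80 0 no-query originserver name=localApache
cache_peer_access localApache allow local-servers
cache_peer_access localApache deny all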



Any ideas?

Ta,
--
Craig


Re: [squid-users] new website: final beta

2007-05-10 Thread Craig Skinner
On Thu, May 10, 2007 at 11:46:42AM +1200, [EMAIL PROTECTED] wrote:
> > I don't have the web skills that you do, but I found the easiest way to
> > make php's cache-able was to lynx dump the php to a .html, and have
> > apache serve index.html in preference to index.phtml. Naturally, all
> > links to pages must be to the .html and not the .php:
> >
> 
> Whereas I have a completely alternate experience with cachability.
> PHP has the ability to easily prepend headers that specify cachability and
> duration.
> Alternatively apache can do that itself with VirtualHost or .htaccess
> configs.
> 

Oh OK, I never even thought of using mod_expires entries in per-directory
.htaccess files. Good point.
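
(For example, a minimal per-directory sketch, assuming mod_expires is loaded
and AllowOverride permits it; the lifetime is arbitrary.)

# .htaccess
ExpiresActive On
ExpiresByType text/html "access plus 1 hour"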

I did play about with PHP headers, but found it awkward when using
common header templates and wanting only some pages to be dynamic.

Thanks for the tip.


Re: [squid-users] new website: final beta

2007-05-09 Thread Craig Skinner
On Wed, May 09, 2007 at 02:14:33PM +0200, Ralf Hildebrandt wrote:
> > 
> > Nice work Adrian!
> 
> Definitely.
> 

Struth Bruce! Nice one mate!

Sort of quoting one of Yahweh's olde proverbs:
"...squidmaster, cache thy self"

Will the final site be cache-able?

I don't have the web skills that you do, but I found the easiest way to
make php's cache-able was to lynx dump the php to a .html, and have
apache serve index.html in preference to index.phtml. Naturally, all
links to pages must be to the .html and not the .php:




$ cat /usr/local/site/bin/php2html
#!/bin/ksh

# the packet filter prevents local access to the public interfaces
export http_proxy='http://localhost:80/'

host=$1
host ${host} > /dev/null
[ $? -ne 0 ] && { host ${host}; exit 1; }

shift

for source in $*
do
#echo ${source} | grep '.phtml$' || continue
html=$(echo ${source} | sed 's~.phtml$~.html~')
lynx -source ${host}/${source} > ${html}~
mv ${html}~ ${html}
done



I then build a file of phps to redirect into the above via find, grep &
friends, to exclude items that need to be dynamic, such as contact forms
and the like. I suppose you could be more intelligent and only process
the phps that are newer than the (non-existing?) corresponding static
page:


$ head php2html.list
404.phtml   yes, apache is set to 404 on the static .html
index.phtml
faq/*.phtml
pricing/index.phtml
pricing/resellers.phtml
..
..

-- 
Craig Skinner | http://www.kepax.co.uk | [EMAIL PROTECTED]


Re: [squid-users] cookie blocking?

2007-04-21 Thread Craig Skinner
On Thu, Apr 19, 2007 at 05:00:56PM +0200, Matus UHLAR - fantomas wrote:
> > 
> > acl cookieBlockSite dstdom_regex msn\.
> > acl cookieBlockSite dstdom_regex aol\.
> > ..
> > ..
> 
> are you sure they have to be regexes? that's quite inefficient...
> 

Hadn't thought of that. I found regex after I did some more research,
following on from what Chris suggested.

What is preferable here?

Explicitly listing google.co.uk, google.de, google.com.au, ...

Or a regex on google?

Same for msn & others.
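
(For comparison, the dstdomain form avoids the regex cost by listing the
variants explicitly; the domains below are only examples.)

acl cookieBlockSite dstdomain .google.com .google.co.uk .google.de .msn.com
header_access Set-Cookie deny cookieBlockSite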
-- 
Craig Skinner | http://www.kepax.co.uk | [EMAIL PROTECTED]


Re: [squid-users] cookie blocking?

2007-04-17 Thread Craig Skinner
On Tue, Apr 17, 2007 at 01:58:01PM -0800, Chris Robertson wrote:
> 
> acl cookieBlockSite dstdomain .certain.site
> header_access Set-Cookie deny cookieBlockSite

Brilliant! I set up something like this:

acl cookieBlockSite dstdom_regex msn\.
acl cookieBlockSite dstdom_regex aol\.
..
..
..

Or is it preferable to dump the above into an external file?

acl cookieBlockSite dstdom_regex "/etc/squid/cookie-blocked.acl"

-- 
Craig Skinner | http://www.kepax.co.uk | [EMAIL PROTECTED]


[squid-users] cookie blocking?

2007-04-17 Thread Craig Skinner
Hi there,

Can squid be used to block cookies from a certain site?

I want to be able to proxy content from a site, but want to filter out
cookies, without running around and configuring all of the clients.

Is this something squid can do? I searched the archives, but I think my
terminology was incorrect.

If not, would squidguard help? Or is a proxy.pac file the way to go?

TIA,
-- 
Craig Skinner | http://www.kepax.co.uk | [EMAIL PROTECTED]


Re: [squid-users] ERR_INVALID_REQ - Invalid Request

2007-03-02 Thread Craig Van Tassle
[EMAIL PROTECTED]:~$ squid -v
Squid Cache: Version 2.6.STABLE1
configure options: '--prefix=/usr' '--exec_prefix=/usr' '--bindir=/usr/sbin'
'--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid'
'--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid'
'--enable-async-io' '--with-pthreads' '--enable-storeio=ufs,aufs,diskd,null'
'--enable-linux-netfilter' '--enable-linux-proxy' '--enable-arp-acl'
'--enable-epoll' '--enable-removal-policies=lru,heap' '--enable-snmp'
'--enable-delay-pools' '--enable-htcp' '--enable-cache-digests'
'--enable-underscores' '--enable-referer-log' '--enable-useragent-log'
'--enable-auth=basic,digest,ntlm' '--enable-carp' '--with-large-files'
'i386-debian-linux' 'build_alias=i386-debian-linux'
'host_alias=i386-debian-linux' 'target_alias=i386-debian-linux'

That is what I get when I try it with squid -v on my Ubuntu box
Angela Burrell wrote:
> Hi Adrian,
> 
> Thank you for your reply.
> 
> I have Ubuntu Edgy and I installed squid with apt-get. Is there a way to
> tell what options were used to configure it?
> 
> Thanks!
> 
> Angela
> 
> -Original Message-
> From: Adrian Chadd [mailto:[EMAIL PROTECTED]
> Sent: March 1, 2007 5:10 PM
> To: Angela Burrell
> Cc: squid users
> Subject: Re: [squid-users] ERR_INVALID_REQ - Invalid Request
> 
> 
> On Thu, Mar 01, 2007, Angela Burrell wrote:
> 
> Transparent redirection:
> 
>> This is the line in my firewall that redirects the HTTP requests from port
>> 80 to port 3328:
>> iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j
>> REDIRECT --to-port 3328
>>
>> When I comment out this line, clients on the LAN can get through to the
>> Internet. When the above line is implemented, we get the following error
> in
>> all browsers, to all hosts. ERR_INVALID_REQ
>>
>> The following error was encountered:
>> Invalid Request
>> Some aspect of the HTTP Request is invalid. Possible problems:
>> Missing or unknown request method
>> Missing URL
>> Missing HTTP Identifier (HTTP/1.0)
>> Request is too large
>> Content-Length missing for POST or PUT requests
>> Illegal character in hostname; underscores are not allowed
>> Your cache administrator is webmaster.
>>
>>
>>
>>
>> Generated Wed, 28 Feb 2007 22:49:09 GMT by squid (squid/2.6.STABLE1)
>>
>> Here is my squid.conf file, hoping it will help.
>> 
>> http_port 3328
> 
> You need to add 'transparent' to this line, ie:
> 
> http_port 3328 transparent
> 
> And make sure you've compiled squid with --enable-linux-netfilter .
> 
> (And you also should upgrade, there's quite a few nasty bugs between
> squid-2.6.STABLE1 and
> Squid-2.6.STABLE9.)
> 
> 
> 
> 
> Adrian
> 
> 
> 
> 


-- 
Craig Van Tassle
Network Support
E-Mail: [EMAIL PROTECTED]
Cell: 815-276-3075
8200 Ridgefield Road
Crystal Lake, IL 60012
Chemtool, INC



Re: [squid-users] storeUfsCreate: Failed to create I:\Proxy/cache/00/00/00000000 ((13) Permission denied)

2007-02-28 Thread Craig Van Tassle
> I:\Proxy/cache/00/00/0006 ((13) Permission denied)
> 2007/02/28 21:17:03| storeUfsCreate: Failed to create
> I:\Proxy/cache/00/00/0007 ((13) Permission denied)
> 2007/02/28 21:17:04| storeUfsCreate: Failed to create
> I:\Proxy/cache/00/00/0008 ((13) Permission denied)
> 2007/02/28 21:17:05| storeUfsCreate: Failed to create
> I:\Proxy/cache/00/00/0009 ((13) Permission denied)
> 2007/02/28 21:17:06| storeUfsCreate: Failed to create
> I:\Proxy/cache/00/00/000A ((13) Permission denied)
> 2007/02/28 21:17:07| storeUfsCreate: Failed to create
> I:\Proxy/cache/00/00/000B ((13) Permission denied)
> 2007/02/28 21:17:09| storeUfsCreate: Failed to create
> I:\Proxy/cache/00/00/000C ((13) Permission denied)
> 2007/02/28 21:17:10| storeUfsCreate: Failed to create
> I:\Proxy/cache/00/00/000D ((13) Permission denied)
> 2007/02/28 21:17:12| storeUfsCreate: Failed to create
> I:\Proxy/cache/00/00/000E ((13) Permission denied)
> 2007/02/28 21:17:12| storeUfsCreate: Failed to create
> I:\Proxy/cache/00/00/000F ((13) Permission denied)
> 2007/02/28 21:17:15| storeUfsCreate: Failed to create
> I:\Proxy/cache/00/00/0010 ((13) Permission denied)
> 2007/02/28 21:17:16| storeUfsCreate: Failed to create
> I:\Proxy/cache/00/00/0011 ((13) Permission denied)
> 2007/02/28 21:17:17| storeUfsCreate: Failed to create
> I:\Proxy/cache/00/00/0012 ((13) Permission denied)
> 2007/02/28 21:17:18| storeUfsCreate: Failed to create
> I:\Proxy/cache/00/00/0013 ((13) Permission denied)
> 2007/02/28 21:17:20| storeUfsCreate: Failed to create
> I:\Proxy/cache/00/00/0014 ((13) Permission denied)
> 2007/02/28 21:17:22| storeUfsCreate: Failed to create
> I:\Proxy/cache/00/00/0015 ((13) Permission denied)
> 2007/02/28 21:17:22| storeUfsCreate: Failed to create
> I:\Proxy/cache/00/00/0016 ((13) Permission denied)
> 
> What could be the problem?
> 
> Please help.
> 
> Regards
> 
> 
> 
> 


-- 
Craig Van Tassle
Network Support
E-Mail: [EMAIL PROTECTED]
Cell: 815-276-3075
8200 Ridgefield Road
Crystal Lake, IL 60012
Chemtool, INC



Re: [squid-users] Re: Having problems with ntlm_auth in my squid.conf file

2007-02-19 Thread Craig Van Tassle
Ray,

In my squid.conf I have this for ntlm auth and it works perfectly

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 80
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Work Proxy Server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off


Try starting squid in the foreground with debugging turned on. That helped me
find a lot of errors I had in my squid.conf.
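
(For reference, one common way to do that; adjust the path and debug level to
taste.)

# check the config syntax first, then run squid in the foreground
# (-N = no daemon) with debug output (-d 1) going to the terminal
squid -k parse
squid -N -d 1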


Ray Dermody wrote:
> Hi,
>>
>> Im trying to get transparent authentication working to my active 
>> directory
>> box as specified here (
>> http://samba.org/samba/docs/man/Samba-Guide/DomApps.html ).  My
>> kerberos and
>> smb config files work fine as klist -e, wbinfo -u and wbinfo  -g returns
>> proper results. However when I add
>>
>>   auth_param ntlm  program /usr/bin/ntlm_auth 
>> --helper-protocol=squid-2.5-ntlmssp
>>   auth_param ntlm children 5
>>auth_param ntlm max_challenge_reuses 0
>>   auth_param ntlm  max_challenge_lifetime 2 minutes
>>   auth_param basic program  /usr/bin/ntlm_auth
>> --helper-protocol=squid-2.5-basic
>>   auth_param basic children 5
>>auth_param basic realm Squid proxy-caching web server
>>   auth_param basic  credentialsttl 2 hours
>>   acl AuthorizedUsers proxy_auth REQUIRED
>>http_access allow all AuthorizedUsers
>>
>> to my previously untouched/default  squid.conf file. However when I
>> start squid after this change I get errors in  my
>> /var/log/squid/squid.out file
>>
>> squid: ERROR: Could not send signal 0  to process 6193: (3) No such
>> process
>> squid: ERROR: Could not send signal 0 to  process 6379: (3) No such
>> process
>> squid: ERROR: Could not send signal 0 to  process 7114: (3) No such
>> process
>>
>> When I do a "service squid start" it  keeps adding a new PID and a
>> "service squid stop" adds a new error to the  squid.out file above.
>> However when I uncomment all the auth_param stuff above  I can shutdown
>>   and restart squid prefectly. Also when I run
>> /usr/bin/ntlm_auth  --helper-protocol=squid-2.5-ntlmssp
>> --username=dermodyr manually I can  authenticate perfectly. Ownership
>> on ntlm_auth is
>>
>> -rwxrwxrwx 1 root  squid 1170036 Feb  7 22:54 /usr/bin/ntlm_auth
>>
>> Im 95% sure that my problem  is with my squid.conf file (
>> http://software.itcarlow.ie/misc/squid.conf)
>> Have i  put these new entries into the wrong section of my config file?
>> BTW, Im  running Fedora Core 6, squid-2.6.STABLE9-1.fc6, samba 3.0.24
>> and  Kerberos5.
>> Thanks to all
> 
> 
> 
> 



Re: [squid-users] Torrify Bypass

2007-02-16 Thread Craig Van Tassle
First, it's best not to reply to an existing message and just change the subject
to start a new thread.

On to your question: that is best handled at the network level. Block all
outgoing web traffic except what goes through your proxy.  That will force everyone
to use the proxy to get out to the web.
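
(As an illustration only, assuming a Linux gateway with the proxy on a
separate box at 192.168.0.10 -- purely hypothetical addresses and ports.)

# let the proxy host out on the web ports...
iptables -A FORWARD -s 192.168.0.10 -p tcp -m multiport --dports 80,443 -j ACCEPT
# ...and drop direct web traffic from everyone else
iptables -A FORWARD -p tcp -m multiport --dports 80,443 -j DROP
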
Monica Madrid Romero wrote:
> I am using Squid along with Tsunami as a content filter. We have realized
> that programs such as torrify (torrify.com) completely bypass our filter as
> well as our access.log. Has anyone had any experience with this issue? If
> you have I would really appreciate you sharing.
>  
> Thanks,
>  
> Monica
> 
> 
> 
> 


-- 
Craig Van Tassle
Network Support
E-Mail: [EMAIL PROTECTED]
Cell: 815-276-3075
8200 Ridgefield Road
Crystal Lake, IL 60012
Chemtool, INC



Re: [squid-users] bungled reverse proxy config: open proxy [SOLVED]

2007-02-05 Thread Craig Skinner
On Mon, Feb 05, 2007 at 11:52:50PM +0100, Henrik Nordstrom wrote:
> 
> Upgrade to 2.6 and there is considerably less risk of doing so..

Ta, I'm using the OS's pre-built binary package at the moment.

> 
> > http_access allow all
> 
> Your problem is here... you should only allow access to your site(s).
> See the dstdomain acl.
> 


$ egrep '^acl|^http_' /etc/squid/squid.conf
http_port localhost:3128
http_port twig.birch:3128
http_port branch.birch:80
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl accel_host dst 192.168.186.20/255.255.255.255
acl accel_port port 80
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
acl local-servers dstdomain .kepax.co.uk
http_access allow local-servers
http_access deny all
http_reply_access allow all



1170718788.672    344 212.20.230.11 TCP_MEM_HIT/200 2245 GET 
http://www.kepax.co.uk - NONE/- text/html
1170718805.802    124 212.20.230.11 TCP_DENIED/403 1368 GET 
http://www.squid-cache.org - NONE/- text/html


Thank you for your help.
-- 
Craig Skinner | http://www.kepax.co.uk | [EMAIL PROTECTED]


[squid-users] bungled reverse proxy config: open proxy

2007-02-05 Thread Craig Skinner
Hi there,

Being the Squid reverse newbie that I am, I have configured an open
reverse proxy :-(


From an offsite shell account:

$ telnet my-server
Trying 8
Connected to .
Escape character is '^]'.
GET http://www.squid-cache.org HTTP/1.0

HTTP/1.0 200 OK


and in access.log:


1170713839.523   1345 212.20.230.11 TCP_MISS/200 6368 GET 
http://www.squid-cache.org - DIRECT/12.160.37.9 text/html
1170713895.037    126 212.20.230.11 TCP_MEM_HIT/200 6376 GET 
http://www.squid-cache.org - NONE/- text/html


Well, at least I got it working as a reverse proxy in front of a single
apache host with a few virtual domain websites..


I followed the reverse white paper at
http://www.visolve.com/squid/whitepapers/reverseproxy.php

Config is:

$ fgrep -v \# /etc/squid/squid.conf | grep -v ^$
http_port localhost:3128
http_port twig.birch:3128
http_port branch.birch:80
cache_dir ufs /var/squid/cache 400 16 256
ftp_list_width 80
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl CONNECT method CONNECT
acl accel_host dst 192.168.186.20/255.255.255.255
acl accel_port port 80
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access allow all
http_reply_access allow all
httpd_accel_host 192.168.186.20
httpd_accel_single_host on
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
strip_query_terms off
coredump_dir /var/squid/cache
extension_methods REPORT MERGE MKACTIVITY CHECKOUT PROPFIND



I think I need to get the http_access items tightened up (according to
the white paper), what links do I need to refer to? Thanks.

I've shut down squid until I make it secure.
-- 
Craig Skinner | http://www.kepax.co.uk | [EMAIL PROTECTED]


[squid-users] ACL issues

2007-01-30 Thread Craig Van Tassle
I have been getting a lot of incorrect denials with my Squid system. I
double-checked the ACLs that I am using, and sites like belkin.com are not in
any of my ACLs; however, they are still getting blocked. How would I go about
finding out which ACL is blocking access to these sites?
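
(One way to see which rule fires, sketched on the assumption that debug
section 28 -- access control -- behaves the same in this 2.x build: raise its
debug level in squid.conf and watch cache.log while reproducing a blocked
request.)

# squid.conf: verbose ACL matching detail in cache.log
debug_options ALL,1 28,3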

Thanks
Craig



Re: [squid-users] High CPU usage problem on Squid 2.6 STABLE9

2007-01-29 Thread Craig Van Tassle
Agung T. Apriyanto wrote:
> any large acl usage perhaps ?
Or any large regex ACLs. I have found that on my proxy, with ntlm_auth and
about 2 MB of regex ACLs, I chew up about 1 GB of RAM and sit at about 20-30%
CPU usage. I tried a LARGE ACL, but that chewed up all the RAM on my server and
sent the CPU to 100% usage for about 2 hours.



Re: [squid-users] SQUID and AD

2007-01-19 Thread Craig Van Tassle
The best place to do that is at your border firewall. Then you can block all
outgoing traffic except what goes through the proxy. But if they are using
a domain account to get past the proxy, there is not much you can really do except
disable local accounts.


Andrei Antonelli wrote:
> I m using squid with AD + samba and winbind, i got integrate ! When im
> going to enter in domain, the box asking the user and password to use
> internet isn't appear. ok !! until here everything ok. When im going to
> enter local machine, and try to access the internet the box asking the
> user and password appear, and if the user put the user and pass domain
> the internet works. ! I would like if the user enter at local machine
> and try access the internet, the internet can't works. Only if who
> logged at domain internet can works.
> 
> Someone can help me !
> 
> tks
> 
> Andrei
> 
> 
> 
> 




[squid-users] Squid 2.6 Stable6 install problem

2007-01-16 Thread Craig Bigler
I'm new to this type of work, and I'm going through a squid install just
to learn the process.

I'm working with the Squid 2.6 Stable6 daily release from 20070111.

I'm running HP-UX 11.11 as the OS.

I've verified the dependencies for Squid, and I believe I have all of
them installed properly.

I run the ./configure command (no other parameters), and it seems to
complete successfully.

When I execute the make command, I receive the following sequence
of error messages:

multicast.c: In function 'mcastJoinGroups':
multicast.c:55: error: storage size of 'mr' isn't known
multicast.c:69: error: invalid application of 'sizeof' to incomplete
type 'struct ip_mreq' 
gmake[2]: *** [multicast.o] Error 1
gmake[2]: Leaving directory `/tmp/squid-2.6.STABLE6-20070111/src'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/tmp/squid-2.6.STABLE6-20070111/src'
gmake: *** [all] Error 2
*** Error exit code 1

Stop.

I found one match on the above message sequence on the web, but no one
ever answered the post in that particular forum.  I was hoping someone
would be able to provide some guidance for a "newbie" like me.

Thanks.


Re: [squid-users] NTLM auth keeps asking for password. (solved)

2007-01-04 Thread Craig Van Tassle
Chris Robertson wrote:
> Craig Van Tassle wrote:
>> *snip*
>> By "on line" do you mean the FAQ
>> (http://wiki.squid-cache.org/SquidFaq/ProxyAuthentication#head-1d6e24e071a1a5e65f112d9a96cdf1320684a8f2)?
>>
>> If so, did you test the helper as the cache_effective_user? When
>> prompted for authentication, were you prompted for the Windows domain,
>> or did you include it?
>> */snip*
>>
>> I had tried with out the domain and and with the domain.
>> After the first attempt it included the domain.
>>
>> *snip*
>>   
>
> That leaves a couple of questions unanswered...
>
> Did you follow the steps in the FAQ?
> Did you test the helper as the cache_effective_user?
>
I did follow the steps in the FAQ.
I tried it with cache_effective_user set and not set.
I also added the proper user and changed the permissions of the
winbindd_privileged directory.
I tried everything else under the sun. Then it turned out I didn't have
Samba working properly, and also the acl auth proxy_auth REQUIRED was
misspelled in the config file. That got me to look REALLY closely at
the config file, and I fixed it.

> That's a bummer.  Does that mean you missed Adrian's response to your
> first request
> (http://www.squid-cache.org/mail-archive/squid-users/200612/0475.html)?
It turns out that I found your first reply via Google, I also found the
Ubuntu FAQ before as well.
I think I need to go and shoot my email host now, considering this is
one of who knows how many emails that are missing?
Thanks for the fix!
>
> Chris




Re: [squid-users] NTLM auth keeps asking for password.

2007-01-03 Thread Craig Van Tassle
*snip*
By "on line" do you mean the FAQ
(http://wiki.squid-cache.org/SquidFaq/ProxyAuthentication#head-1d6e24e071a1a5e65f112d9a96cdf1320684a8f2)?
If so, did you test the helper as the cache_effective_user? When
prompted for authentication, were you prompted for the Windows domain,
or did you include it?
*/snip*

I had tried without the domain and with the domain.
After the first attempt it included the domain.

*snip*

http_access allow internal_src
  

This would allow internal_src computers to surf without authenticating.
Perhaps what you are trying to do.
*/snip*

That was done so the users could get access to the net while I was fixing
this. I actually switched the

#http_access deny !auth
always_direct allow internal_dst
  

Seeing as you don't have any cache_peers assigned, this is not going to
do what you expect.

Sorry for the late reply. My email host decided to lose a bunch of mail for me.

Thanks





[squid-users] Blocking Streaming Media

2006-12-28 Thread Craig Van Tassle
I'm trying to block streaming media from my users, and I'm not able to. I have
this for my ACLs. I read online that req_mime_type and rep_mime_type are both
needed.

I have these ACL's
acl media req_mime_type .*application/x-comet-log$
acl media req_mime_type .*application/x-mms-framed$
acl MIMETYPES rep_mime_type .*application/x-comet-log$
acl MIMETYPES rep_mime_type .*application/x-mms-framed$

and this is in my http_access and http_reply_access

http_access deny media
http_reply_access deny MIMETYPES


However, streaming media keeps punching through the ACLs regardless
of what I do.

Any help is appreciated.
Craig



[squid-users] NTLM auth keeps asking for password.

2006-12-27 Thread Craig Van Tassle
Hello list.

I have been trying to get NTLM authentication working with squid and winbind
under ubuntu 6.10. I can get user names and accounts with winbind; I can even try
using a domain user to log in, and I see this in my logs.
Dec 27 13:00:06 proxy pam_winbind[6734]: user 'domainuser' granted access

The proxy works well if I have no authentication; however, if I try to put
authentication in place, I get asked for the user name and password 3 times, then
I get kicked out to a cache access denied page saying I can't access anything
until I authenticate to the proxy. According to what I have found online, my
setup should be correct. Any help would be appreciated.


access_log /var/log/squid/access.log squid
http_port 3128
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
cache_mem 4 MB
cache_swap_low 85
cache_swap_high 90
cache_dir ufs /var/spool/squid 100 16 256
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320

#Authenticate users agaist a dc
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 10
auth_param basic realm Chemtool Proxy Server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
#authenticate_cache_garbage_interval 10 seconds
# Credentials past their TTL are removed from memory
#authenticate_ttl 0 seconds


#Recommended minimum configuration:
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563 # https, snews
acl SSL_ports port 873 # rsync
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl Safe_ports port 555 # Sysaid
acl purge method PURGE
acl CONNECT method CONNECT
acl internal_src src x.x.x.x/x
acl auth proxy_auth REUQIRED
acl internal_dst dst x.x.x.x/x

acl porn dstdomain "/etc/squid/blacklists/porn/domains"
acl virus dstdomain "/etc/squid/blacklists/virusinfected/domains"
acl radio dstdomain "/etc/squid/blacklists/radio/domains"
acl phish dstdomain "/etc/squid/blacklists/phishing/domains"
acl games dstdomain "/etc/squid/blacklists/onlinegames/domains"


http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny porn
http_access deny virus
http_access deny radio
http_access deny phish
http_access allow internal_src
#http_access deny !auth
always_direct allow internal_dst
#http_access deny all
#http_reply_access allow all
miss_access  allow all
icp_access deny all
coredump_dir /var/spool/squid




Re: [squid-users] Errors while installing Squid Version 2.5.STABLE12

2006-12-27 Thread Craig Van Tassle
Sorry, I forgot to CC the list on my last reply.
If you are not sure you are going to be using LDAP, then I would say remove LDAP
from your build and be done with it.

--enable-basic-auth-helpers=NCSA,SMB

Agrawal, Devendra (Indust, PTL) wrote:
> Thanks for the response.
> 
> Can I ignore these errors (as I am not sure if I will be using LDAP) and
> go ahead with squid configuration? 
> 
> Devendra
> 
> 
> 
> -Original Message-
> From: Craig Van Tassle [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, December 27, 2006 4:53 PM
> To: Agrawal, Devendra (Indust, PTL)
> Subject: Re: [squid-users] Errors while installing Squid Version
> 2.5.STABLE12
> 
> It appears that you do not have the OPENldap headers installed, you may
> want to install the correct ldap development suite for your system.
> 
> Agrawal, Devendra (Indust, PTL) wrote:
>> Hi,
>>
>> I am trying to install Squid Version 2.5.STABLE12. 
>>
>> ./configure  --prefix=/opt/local/squid --localstatedir=/var 
>> --enable-poll --enable-snmp --enable-remova l-policies=heap,lru 
>> --enable-storeio=aufs,coss,diskd,null,ufs
>> --enable-async-io --with-aufs-threads=48 --enable-delay-pools 
>> --enable-linux-netfilter --with-pthreads 
>> --enable-basic-auth-helpers=LDAP,NCSA,SMB
>> ,MSNT,winbind --enable-ntlm-auth-helpers=SMB,winbind,fakeauth
>> --enable-external-acl-helpers=ip_user,lda
>> p_group,unix_group,wbinfo_group,winbind_group --enable-auth=basic,ntlm
> 
>> --enable-useragent-log --enable- referer-log --enable-gnuregex
>>
>> I observed following errors while running "make". Are they normal and 
>> can be ignored?
>>
>> # make all
>>
>> squid_ldap_auth.c:88:18: lber.h: No such file or directory
>> squid_ldap_auth.c:89:18: ldap.h: No such file or directory
>> squid_ldap_auth.c:102: `LDAP_SCOPE_SUBTREE' undeclared here (not in a
>> function)
>> squid_ldap_auth.c:106: `LDAP_DEREF_NEVER' undeclared here (not in a
>> function)
>> squid_ldap_auth.c:112: `LDAP_NO_LIMIT' undeclared here (not in a
>> function)
>> squid_ldap_auth.c:119: syntax error before '*' token
>> squid_ldap_auth.c:173: syntax error before '*' token
>> squid_ldap_auth.c: In function `squid_ldap_errno':
>> squid_ldap_auth.c:175: `ld' undeclared (first use in this function)
>> squid_ldap_auth.c:175: (Each undeclared identifier is reported only 
>> once
>> squid_ldap_auth.c:175: for each function it appears in.)
>> squid_ldap_auth.c: At top level:
>> squid_ldap_auth.c:178: syntax error before '*' token
>> squid_ldap_auth.c: In function `squid_ldap_set_aliasderef':
>> squid_ldap_auth.c:180: `ld' undeclared (first use in this function)
>> squid_ldap_auth.c:180: `deref' undeclared (first use in this function)
>> squid_ldap_auth.c: At top level:
>> squid_ldap_auth.c:183: syntax error before '*' token
>> squid_ldap_auth.c: In function `squid_ldap_set_referrals':
>> squid_ldap_auth.c:185: `referrals' undeclared (first use in this
>> function)
>> squid_ldap_auth.c:186: `ld' undeclared (first use in this function)
>> squid_ldap_auth.c:186: `LDAP_OPT_REFERRALS' undeclared (first use in 
>> this function)
>> squid_ldap_auth.c: At top level:
>> squid_ldap_auth.c:191: syntax error before '*' token
>> squid_ldap_auth.c: In function `squid_ldap_set_timelimit':
>> squid_ldap_auth.c:193: `ld' undeclared (first use in this function)
>> squid_ldap_auth.c: At top level:
>> squid_ldap_auth.c:196: syntax error before '*' token
>> squid_ldap_auth.c:214: syntax error before '*' token
>> squid_ldap_auth.c:216: warning: return type defaults to `int'
>> squid_ldap_auth.c: In function `open_ldap_connection':
>> squid_ldap_auth.c:217: `LDAP' undeclared (first use in this function)
>> squid_ldap_auth.c:217: `ld' undeclared (first use in this function)
>> squid_ldap_auth.c:243: warning: implicit declaration of function 
>> `ldap_init'
>> squid_ldap_auth.c: In function `main':
>> squid_ldap_auth.c:313: `LDAP' undeclared (first use in this function)
>> squid_ldap_auth.c:313: `ld' undeclared (first use in this function)
>> squid_ldap_auth.c:315: `LDAP_PORT' undeclared (first use in this
>> function)
>> squid_ldap_auth.c:375: `LDAP_SCOPE_BASE' undeclared (first use in this
>> function)
>> squid_ldap_auth.c:377: `LDAP_SCOPE_ONELEVEL' undeclared (first use in 
>> this function)
>> squid_ldap_auth.c:379: `LDAP_SCOPE_SUBTREE' undeclared (f

[squid-users] NTLM auth with ubuntu

2006-12-27 Thread Craig Van Tassle
Hello list.

I have been trying to get NTLM authentication working with squid and winbind
under ubuntu 6.10. I can get user names and accounts with winbind; I can even try
using a domain user to log in, and I see this in my logs.
Dec 27 13:00:06 proxy pam_winbind[6734]: user 'domainuser' granted access

The proxy works well if I have no authentication; however, if I try to put
authentication in place, I get asked for the user name and password 3 times, then
I get kicked out to a cache access denied page saying I can't access anything
until I authenticate to the proxy. According to what I have found online, my
setup should be correct. Any help would be appreciated.


access_log /var/log/squid/access.log squid
http_port 3128
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
cache_mem 4 MB
cache_swap_low 85
cache_swap_high 90
cache_dir ufs /var/spool/squid 100 16 256
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320

#Authenticate users agaist a dc
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 10
auth_param basic realm Chemtool Proxy Server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
#authenticate_cache_garbage_interval 10 seconds
# Credentials past their TTL are removed from memory
#authenticate_ttl 0 seconds


#Recommended minimum configuration:
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563 # https, snews
acl SSL_ports port 873 # rsync
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl Safe_ports port 555 # Sysaid
acl purge method PURGE
acl CONNECT method CONNECT
acl internal_src src x.x.x.x/x
acl auth proxy_auth REUQIRED
acl internal_dst dst x.x.x.x/x

acl porn dstdomain "/etc/squid/blacklists/porn/domains"
acl virus dstdomain "/etc/squid/blacklists/virusinfected/domains"
acl radio dstdomain "/etc/squid/blacklists/radio/domains"
acl phish dstdomain "/etc/squid/blacklists/phishing/domains"
acl games dstdomain "/etc/squid/blacklists/onlinegames/domains"


http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny porn
http_access deny virus
http_access deny radio
http_access deny phish
http_access allow internal_src
#http_access deny !auth
always_direct allow internal_dst
#http_access deny all
#http_reply_access allow all
miss_access  allow all
icp_access deny all
coredump_dir /var/spool/squid



[squid-users] Choose cache peer based on Content-Length HTTP header

2006-12-12 Thread Craig Appleton

I have two gateways on a LAN both running squid. One link is a fast
high cost link, one is a slow low cost link.

I would like HTTP requests under 1MB (Web Browsing) to go via the
cache at the fast link and requests over 1MB (Bulk downloads) to go
via the cache at the slow link. Is this possible? If not is there a
patch that would do this?

Re implimentation maybe sends a HEAD request via the fast link to
determine the content-length header then proceeds to choose link for
GET request based on that.
Something like that.

Thanks in advance.

--
Craig Appleton
Email: [EMAIL PROTECTED]
Phone: +64 27 472 9531
MSN Messenger: [EMAIL PROTECTED]


Re: [squid-users] the dreaded 'zero sized reply' on RHEL3

2006-06-23 Thread Craig Home

Neil

Sorry, I don't always catch all replies as I've been asking for a few weeks 
and getting loads of unwanted messages from the list instead of any direct 
replies.


I have sent numerous emails to the [EMAIL PROTECTED] 
over the last four weeks and followed all the relevant instructions on the 
following link, and still I get the messages.


http://www.squid-cache.org/mailing-lists.html

Regards

Craig



Craig,

which bit of 'Read the SMTP headers' didn't you understand?

The following appears in the header of each and every message which is
sent to the mailing list:

List-Post: <mailto:squid-users@squid-cache.org>
List-Help: <mailto:[EMAIL PROTECTED]>
List-Unsubscribe: <mailto:[EMAIL PROTECTED]>
List-Subscribe: <mailto:[EMAIL PROTECTED]>

If you then can't unsubscribe explain exactly what you've tried and what
message / error you receive.  Otherwise you won't get assistance.


Neil.

Craig Home wrote:
>
> Please help me unsubscribe from this list.
>
> Thanks
>
> Craig
>
>> On Wed 2006-06-21 at 17:11 +0200, [EMAIL PROTECTED] wrote:
>>
>> > Well, not sure what you mean with 'support for this patched binary
>> > distribution is provided by redhat' but 2.5.stable3 is the latest one that
>> > they offer for RHEL3 (via their up2date tool).
>>
>> That support for the RedHat binary distribution of Squid is provided by
>> RedHat.
>>
>> > Can you tell me where I can find an officially supported squid for RHEL3
>> > that is more current?
>>
>> The officially supported Squid version in this forum is the current
>> STABLE source code release, i.e. currently 2.5.STABLE14. And yes RHEL3
>> is a supported platform.
>>
>> But we won't hurt you for running a binary distribution. Just that we
>> can not help you much with problems which seem to be specific to the
>> binary distribution you are running, and we also expect you to verify
>> that any problem you may have exists in the current version of Squid as
>> well before looking into the exact details.
>>
>> Translated to your current question this means that the level of the
>> original question you sent is fine. So is also questions related to how
>> to configure Squid etc. However, as the problem could not be repeated by
>> clicking on the link you provided it's now back on your table to verify
>> if you see the problem using the current version of Squid (not the
>> RedHat version). Or alternatively if you do not want to try the
>> squid-cache.org source distribution send the question to your RedHat
>> support contact.
>>
>> Regards
>> Henrik
>
>
>> << signature.asc >>
>


--
Neil Hillard[EMAIL PROTECTED]
Westland Helicopters Ltd.   http://www.whl.co.uk/

Disclaimer: This message does not necessarily reflect the
views of Westland Helicopters Ltd.





Re: [squid-users] the dreaded 'zero sized reply' on RHEL3

2006-06-22 Thread Craig Home


Please help me unsubscribe from this list.

Thanks

Craig


On Wednesday 2006-06-21 at 17:11 +0200, [EMAIL PROTECTED] wrote:

> Well, not sure what you mean with 'support for this patched binary
> distribution is provided by redhat' but 2.5.stable3 is the latest one that
> they offer for RHEL3 (via their up2date tool).

That support for the RedHat binary distribution of Squid is provided by
RedHat.

> Can you tell me where I can find an officially supported squid for RHEL3
> that is more current?

The officially supported Squid version in this forum is the current
STABLE source code release, i.e. currently 2.5.STABLE14. And yes RHEL3
is a supported platform.

But we won't hurt you for running a binary distribution. Just that we
can not help you much with problems which seem to be specific to the
binary distribution you are running, and we also expect you to verify
that any problem you may have exists in the current version of Squid as
well before looking into the exact details.

Translated to your current question this means that the level of the
original question you sent is fine. So is also questions related to how
to configure Squid etc. However, as the problem could not be repeated by
clicking on the link you provided it's now back on your table to verify
if you see the problem using the current version of Squid (not the
RedHat version). Or alternatively if you do not want to try the
squid-cache.org source distribution send the question to your RedHat
support contact.

Regards
Henrik




<< signature.asc >>





Re: [squid-users] SYN flooding

2006-06-20 Thread Craig Home
Please help me unsubscribe from this list. I have tried asking for help now 
5 times.


many thanks

Craig


[EMAIL PROTECTED] wrote:
I checked my Squid and I have the exact values you mention for
tcp_syncookies and tcp_max_syn_backlog:

$ echo "1" >/proc/sys/net/ipv4/tcp_syncookies
$ echo "1024" >/proc/sys/net/ipv4/tcp_max_syn_backlog

I will check how I can implement it with iptables, or if you have a link
please forward it to me.

Thanks again,

Wennie


This can be useful:
http://www.netfilter.org/documentation/HOWTO//netfilter-extensions-HOWTO-3.html#ss3.5
But from here on, this is more of an iptables question.

Thanks
Emilio C.



Quoting Emilio Casbas <[EMAIL PROTECTED]>:

[EMAIL PROTECTED] wrote:


Hi all,

I can see a message in my log files: "possible SYN flooding on port 8080.
Sending cookies." It is not in access.log or cache.log, but I've seen it in
the message.log.

Is this a big problem? How can I prevent this?

Thanks,

Wennie






You can enable SYN cookies (to help prevent SYN-flood attacks):
$ echo "1" >/proc/sys/net/ipv4/tcp_syncookies

or

reduce the number of possible SYN floods:
$ echo "1024" >/proc/sys/net/ipv4/tcp_max_syn_backlog

You may need an iptables script; see the 'limit' module in iptables.

Thanks
Emilio C.









Re: [squid-users] Out of file descriptor

2006-06-19 Thread Craig Home
Can anyone tell me how to unsubscribe from this list, I've tried the usual 
emails with unsubscribe in the header and am still getting these messages I 
don't want


Please help

Regards

Craig


Have a look @ http://www.squid-cache.org/Doc/FAQ/FAQ-11.html
Yos
Ronny

*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
If I have seen further it is by standing on the shoulders of giants.
--Isaac Newton 
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*






Wennie V. Lagmay wrote:
I'm having a problem - "your cache is running out of file descriptors" - I'm
using squid 2.5-STABLE13 on Fedora Core 4 64-bit. Can anybody help me with
this?


thanks,

Wennie






Re: [squid-users] transparent proxy with different user authentication

2005-11-28 Thread Craig Herring
DNS: add this
proxy   A   x.x.x.x
wpad    CNAME   proxy.yourdomain.org
TXT "\"service: wpad:!http://wpad.yourdomain.org:80/proxy.pac\"";
wpad.tcp SRV 0 0 80 wpad


DHCP: add this to your global scope for all dhcp machines
option 252 WPAD "http://proxy.yourdomain.org/wpad.dat";

The web scripts can be a variety of different things. A google search of
"wpad.dat" can show some very interesting things. I kept mine simple and
works well. Desktops' browsers I have configured using the proxy script
entry, notebook users configured to Auto Detect Proxy Settings.
root of web server add these 3 files with the same content:
proxy.pac
wpad.dat
wspad.dat (for some reason Firefox uses this one)
--cut-
t1 = "PROXY proxy.yourdomain.org:8080";
local = "DIRECT";
function FindProxyForURL(url, host)
{
if (isPlainHostName(host) ||
shExpMatch(url, "http:*:86/*") ||
shExpMatch(host, "*.yourdomain.org") ||
shExpMatch(host, "localhost") ||
shExpMatch(host, "127.*") ||
shExpMatch(host, "10.*") ||
shExpMatch(host, "192.168.*") ||
shExpMatch(host, "169.254.*") ||
shExpMatch(host, "172.16.*"))
return local;
else
return t1;
}
--snip--

The proxy server authenticates via winbind to a Win2K domain. Win2K
servers handle the DNS and DHCP and a SLES9 box handles squid and apache
(for web scripts). All browsers except for Safari seem to work well on
OSX,WIN,Linux. This should do it for you, let me know if you have any
trouble.

if anyone else has more insight, feel free :-)
Craig Herring


On Mon, 2005-11-28 at 14:25 +0100, CsY wrote:
> hello!
> 
> Please send me a details :)
> i using ubuntu linux 5.10 
> thanks in advance
> 
> Craig Herring írta: 
> > I read somewhere on the squid-cache.org site that you cannot run a
> > transparent proxy and have user authentication at the same time.
> > However, we dealt with publishing proxy settings using DNS, DHCP, and
> > auto proxy scripts. It works well. If you like I can send details...
> > 
> > Craig Herring
> > 
> > On Mon, 2005-11-28 at 12:04 +0100, CsY wrote:
> >   
> > > Hello!
> > > 
> > > Anybody can help me?
> > > I need set up a transparent proxy with user auth. and different user 
> > > rights.
> > > Eg: manager accesses all except porn, drug sites
> > > simple user accesses news portal, sites which are needed for work
> > > 
> > > Anybody create same server? can help me?
> > > thanks
> > > 
> > > 
> > 
> >  _ NOD32 1.1306 (20051128) Information _
> > 
> > This message was checked by the NOD32 antivirus system.
> > http://www.nod32.hu
> > 
> > 
> > 
> >   


Re: [squid-users] transparent proxy with different user authentication

2005-11-28 Thread Craig Herring
I read somewhere on the squid-cache.org site that you cannot run a
transparent proxy and have user authentication at the same time.
However, we dealt with publishing proxy settings using DNS, DHCP, and
auto proxy scripts. It works well. If you like I can send details...

Craig Herring

On Mon, 2005-11-28 at 12:04 +0100, CsY wrote:
> Hello!
> 
> Anybody can help me?
> I need set up a transparent proxy with user auth. and different user rights.
> Eg: manager accesses all except porn, drug sites
> simple user accesses news portal, sites which are needed for work
> 
> Anybody create same server? can help me?
> thanks
> 


RE: [squid-users] Can Winbind 3.x authenticators be stopped from asking for credentials?

2005-06-13 Thread Craig Box
> On another machine using Winbind 2.x I have a similar configuration
> with the old helpers, and it does fail the way I want.  It was using
> 'external_acl_type NT_global_group %LOGIN /usr/lib/squid/wb_group -c'
> however, instead of 'proxy_auth'.  Can I make the browsers work how I
> want with the new method?

To answer my own question, in case anyone is concerned:

What I wanted to do was similar to what I had last time:

external_acl_type NT_global_group %LOGIN /usr/lib/squid/wbinfo_group.pl
acl FullUsers external NT_global_group "/etc/squid/fullusers"

Changing the order of the http_access lines to:

http_access allow localhost
http_access allow fullusers
http_access allow localnet allowedsites
http_access deny all

means all usernames are logged as appropriate.

This setup lets you do NTLM authentication without bothering users with
password dialogs (it obviously only works on browsers that support
NTLM) and therefore 'silently' denies users without the correct access.

Craig


[squid-users] Can Winbind 3.x authenticators be stopped from asking for credentials?

2005-06-08 Thread Craig Box
Hi everyone,

I have Squid configured with Winbind 3.x to do NTLM authentication to
only allow a limited subset of sites to people who are not in an
"Internet access" group.

Everything works OK - users in the group can access everything, users not in
the group can access only the sites in the allowedsites list - except that
when a limited user tries to access a site they don't have access
to, both IE and Firefox pop up a dialog asking for credentials, instead
of failing them with an "Access denied" message.

On another machine using Winbind 2.x I have a similar configuration with
the old helpers, and it does fail the way I want.  It was using
'external_acl_type NT_global_group %LOGIN /usr/lib/squid/wb_group -c'
however, instead of 'proxy_auth'.  Can I make the browsers work how I
want with the new method?

Relevant config sections:

auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
--require-membership-of="DOMAIN\\Internet"
auth_param ntlm children 5
auth_param ntlm max_challenge_reuses 0
auth_param ntlm max_challenge_lifetime 2 minutes

acl allowedsites dstdomain  "/etc/squid/allowedsites"
acl fullusers   proxy_auth  REQUIRED

http_access allow localhost
http_access allow allowedsites
http_access allow fullusers
http_access deny all

Thanks,
Craig


Re: [squid-users] delay pools to slow large images/files

2004-12-28 Thread Craig Main
On Tue, 28 Dec 2004 14:23:02 +0100 (CET), Henrik Nordstrom
<[EMAIL PROTECTED]> wrote:
> 
> 
> On Tue, 28 Dec 2004, Craig Main wrote:
> 
> > Hi all,
> >
> > I have a 64k leased line connection that is shared between 4
> > terminals, I need to limit the bandwidth for large files and images to
> > 5k, but let everything else through at maximum rate.
> 
> Then use the pool size to give your users some bandwidth credit. This way
> they only get limited by the pool refill rate once they have been eating
> the available bandwidth for some time.
> 

Thanks Henrik,

I am new to delay pools, and don't understand exactly what you are
saying, would you mind explaining in a little more detail.

Many thanks
Craig


[squid-users] delay pools to slow large images/files

2004-12-28 Thread Craig Main
Hi all,

I have a 64k leased line connection that is shared between 4
terminals, I need to limit the bandwidth for large files and images to
5k, but let everything else through at maximum rate.

I have the following:

delay_pools 1
delay_class 1 2
delay_access allow large_files
delay_parameters -1/-1 625/625

Is this correct?

What do I need for the acl large_files to make it match files larger than 300K?

TIA
Craig
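
To make Henrik's "bandwidth credit" suggestion from the reply above concrete, here is
a minimal sketch - not from the thread, and the numbers are assumptions, reading "5k"
as 5 kbit/s (625 bytes/s) and "300K" as a 300,000-byte bucket. Note that delay_access
and delay_parameters both take the pool number as their first argument:

# every matching request drains a shared 300 KB bucket that refills at
# 625 bytes/s, so objects smaller than roughly 300 KB finish at full line
# speed while larger transfers drop to about 5 kbit/s
acl throttled src 0.0.0.0/0.0.0.0
delay_pools 1
delay_class 1 1
delay_access 1 allow throttled
delay_parameters 1 625/300000

Because the size of a reply is not known when the request is checked, an acl cannot
literally match "files larger than 300K"; the bucket above achieves much the same
effect, and a class 2 pool would give each terminal its own bucket instead of one
shared one.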


[squid-users] Bandwidth Management

2004-12-13 Thread Craig Main
Hi all,

I have an internet cafe connected on a not so fast leased line (64k).

I definitely need to use a caching proxy. I currently use squid, and
it works fine. However if one of the terminals has a 'power surfer',
they tend to use all of the bandwidth leaving not much for the other
terminals.

I have tried squid's delay pools, but they don't really do what I want.
What I really need is to split all the bandwidth between the terminals
that are drawing traffic fairly.

I have set up tc qdiscs and classes on the interface between the proxy
and the terminals, sharing the bandwidth fairly and using sfq qdiscs for
when the limit is reached. The problem with this scenario is that
squid still pulls the info from the net unfairly, so only traffic from
squid to the terminals is managed.

I was hoping that there might be some way of using delay pools that
handles bandwidth management the way I need to.

Can anyone recommend a way to setup squid to do what I need?

Many Thanks
Craig
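
For reference, the usual delay-pools approach to this is a class 2 pool: one
aggregate bucket sized to the line plus a per-host bucket for each terminal. A
sketch with placeholder addresses and illustrative numbers (8000 bytes/s is roughly
the 64 kbit/s line):

# the aggregate bucket covers the whole 64 kbit/s line; each terminal also
# gets its own bucket so one heavy downloader cannot take everything
acl cafe src 192.168.1.0/255.255.255.0
delay_pools 1
delay_class 1 2
delay_access 1 allow cafe
delay_access 1 deny all
delay_parameters 1 8000/8000 2500/20000

This caps every host at the same per-host rate rather than re-dividing idle
bandwidth dynamically, so it only approximates the fair sharing that tc/sfq gives
on the inside interface.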


RE: [squid-users] squid_ldap_auth Windows 2003

2004-02-27 Thread Craig Scott
But as ldapsearch works every time, along with the other ldap tools and
facilities we employ, does this not point towards a fault in the
squid_ldap_auth module?

Furthermore, as I mentioned, squid_ldap_auth was working fine with
Windows 2000 Active Directory. The 2000 to 2003 Active Directory upgrade
process modifies the directory schema and introduces new security
settings - might these be affecting the LDAP queries performed by
squid_ldap_auth?

Craig Scott
IT Development Officer
South Tyneside College
Tel: (0191) 4273670

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: 27 February 2004 12:28
To: Craig Scott
Cc: 'Henrik Nordstrom'; [EMAIL PROTECTED]
Subject: RE: [squid-users] squid_ldap_auth Windows 2003 

On Fri, 27 Feb 2004, Craig Scott wrote:

> That is correct I am not using the persistent connections (-P), out of
> curiosity I tried using the -P switch this morning but it has made no
> difference.

Then the operation of your AD is very odd indeed, refusing every second
attempt to access the LDAP directory.

Regards
Henrik




RE: [squid-users] squid_ldap_auth Windows 2003

2004-02-27 Thread Craig Scott
That is correct I am not using the persistent connections (-P), out of
curiosity I tried using the -P switch this morning but it has made no
difference.
 
Craig Scott
IT Development Officer
South Tyneside College
Tel: (0191) 4273670

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: 26 February 2004 19:05
To: Craig Scott
Cc: [EMAIL PROTECTED]
Subject: Re: [squid-users] squid_ldap_auth Windows 2003 

On Thu, 26 Feb 2004, Craig Scott wrote:

> As squid_ldap_auth eventually returns an OK and ldapsearch works with
> the same query I do not believe this problem to be related to security
> permissions. 
> 
> Any ideas on the cause of this and how it can be resolved?

Not sure. The symptoms displayed could make sense if you were using 
persistent LDAP connections, but from what I can tell you are not (this is
specified by the -P option to squid_ldap_auth).

Regards
Henrik




[squid-users] squid_ldap_auth Windows 2003

2004-02-26 Thread Craig Scott
I have been successfully using Squid 2.5.STABLE4 with squid_ldap_auth
authenticating against Windows 2000 Active Directory without any problems
for a number of months. Following the upgrade of the domain to Windows
2003 Server, squid_ldap_auth appears to now only function intermittently.

For example.

$ ./squid_ldap_auth -b "DC=MAN,DC=STC,DC=AC,DC=UK" -D "CN=squiduser,CN=Users,DC=MAN,DC=STC,DC=AC,DC=UK" -w "password" -h 172.24.0.100 -u sAMAccountName -f sAMAccountName=%s
cscott password
OK
cscott password
squid_ldap_auth: WARNING, LDAP search error 'Operations error'
OK
cscott password
squid_ldap_auth: WARNING, LDAP search error 'Operations error'
OK

As squid_ldap_auth eventually returns an OK and ldapsearch works with
the same query I do not believe this problem to be related to security
permissions. 

Any ideas on the cause of this and how it can be resolved?
 

Thanks in advance

Craig Scott
IT Development Officer
South Tyneside College
Tel: (0191) 4273670
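
For reference, the same helper invocation is normally wired into squid.conf along
these lines - the helper path, children count and realm are illustrative, while the
LDAP arguments are copied from the command line above:

# LDAP arguments as tested above; path, children and realm are assumptions
auth_param basic program /usr/lib/squid/squid_ldap_auth -b "DC=MAN,DC=STC,DC=AC,DC=UK" -D "CN=squiduser,CN=Users,DC=MAN,DC=STC,DC=AC,DC=UK" -w "password" -h 172.24.0.100 -u sAMAccountName -f sAMAccountName=%s
auth_param basic children 5
auth_param basic realm Internet proxy
acl ldap_users proxy_auth REQUIRED
http_access allow ldap_users
http_access deny all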





[squid-users] Off-Topic: Squidalyser dot100K.gif file missing

2004-01-16 Thread Craig Sharp
I realize that this is somewhat off topic but I need the dot100k.gif
file for Squidalyser.

If you have it please send it to me.

Thanks,

Craig


[squid-users] WPAD and Squid

2004-01-12 Thread Craig Sharp
This may be a bit off topic but I am having a problem with wpad and my
squid server.

I have the following script setup on the webserver and point to it in
dns.

function FindProxyForURL(url, host)
{
    // go direct for plain hostnames, the internal domain and the
    // Windows Update hosts; everything else goes through the proxy
    if (isPlainHostName(host) ||
        dnsDomainIs(host, ".internaldomain.com") ||
        dnsDomainIs(host, "windowsupdate.microsoft.com") ||
        dnsDomainIs(host, "wustat.windows.com"))
        return "DIRECT";
    else
        return "PROXY proxyserver:3128; DIRECT";
}

"internaldomain and proxyserver are correct in the implemented script
and are just changed for posting"

The script works fine.  Our IE implementation is set to auto detect
proxy.  Some of my users pick up the wpad.dat script and use the proxy
properly.  Some of the users do not pick up the script so they do not use
the proxy.  It seems that even though the users have the auto detect set
on their browsers, they are not auto detecting.  I would say about 2 out
of 10 people are hitting the proxy.

Any ideas would be appreciated.

TIA,

Craig


[squid-users] LDAP Authentication storage issue

2004-01-08 Thread Craig Sharp
Hi,
 
I am using LDAP to authenticate to Novell E-Dir with Squid for Internet access.  It is 
working perfectly, however our management and users do not like the fact that when the 
browser is closed down and reopened, they have to authenticate again. They are whining 
because they do not want to have to type in their name when they open the browser 
several times a day.
 
I need a way to store the authentication so that they will remain authenticated and 
not be challenged by the Squid server when they open a new browser for a period of 4 
hours.  Yes I know that this is defeating the purpose of security and authentication, 
but this is my direction.
 
TIA,
 
Craig
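
One relevant knob, though it is only a partial answer: Squid caches a successfully
verified username/password pair for the time set by auth_param basic credentialsttl,
so within that window it does not go back to the LDAP server. A sketch using the
4-hour figure from the message:

# keep verified Basic credentials cached in Squid for 4 hours before
# re-checking them against the LDAP/eDirectory server
auth_param basic credentialsttl 4 hours

This cannot stop the browser itself from prompting again after it has been closed,
because with Basic authentication the credentials live in the browser session, not
in Squid.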



[squid-users] Squid_ldap_group

2003-04-01 Thread Craig Home


Hi,

I have been trying to use squid_ldap_match with Active Directory with not
much success; I have built everything but just can't seem to get the
parameters correct.
I am also unsure whether I just have to use the match, or also do an
ldap_auth on the user beforehand - if you can clarify whether this is
required - thanks.
Ok, some background details:

Our LDAP AD server is on 193.116.22.122 and responds to ldap anonymous
searches on the usual ldap port 389.
I am trying to match up a group which is situated in:

cn=INTERNETUSERS,cn=Users

The Base dn = dc=top,dc=sy,dc=turvy

Ok,

So I am trying to match the group with squid_ldap_match with Squid 2.5
stable2 compiled from source with openldap on Redhat 7.3 (fully patched)
(Standalone)

squid_ldap_match -b "dc=top,dc=sy,dc=turvy" -f "(%(cn=%u)(cn=%g))" -h
193.116.22.122 -p 389
in the squid.conf file

external_acl_type ldap_group %LOGIN /path/to/squid_ldap_match
-b "dc=top,dc=sy,dc=turvy" -f "(%(cn=%u)(cn=%g))" -h 193.116.22.122 -p 389
acl firstrule external ldap_group INTERNETUSERS

I am particularly interested in any debug options you can specify to further
check whether I have the filter options correct. How would I test these
filters out in relation to Active Directory, as I don't know whether the %u
or %g are returning the correct values?
Can I capture what is sent to STDIN so I can look at the returned results?

Any help appreciated as there is not much documentation in using this with
Active directory
Many thanks

Craig
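
For comparison, the group-lookup helper normally distributed with Squid is
squid_ldap_group, and a commonly cited Active Directory setup looks roughly like the
following. The helper path, bind account and password are assumptions, and the %v/%g
substitution tokens in the filter should be checked against the helper's built-in
help, since the macros have varied between versions:

# external helper checks whether %LOGIN is a member of the named AD group
# (bind DN, password, path and %v/%g macros are assumptions - verify locally)
external_acl_type ldap_group %LOGIN /usr/local/squid/libexec/squid_ldap_group -b "dc=top,dc=sy,dc=turvy" -D "cn=squidbind,cn=Users,dc=top,dc=sy,dc=turvy" -w "secret" -f "(&(sAMAccountName=%v)(memberOf=cn=%g,cn=Users,dc=top,dc=sy,dc=turvy))" -h 193.116.22.122 -p 389
acl internetusers external ldap_group INTERNETUSERS
http_access allow internetusers
http_access deny all

The helper can also be tested by hand: run it with the same arguments and type a line
of the form "username INTERNETUSERS" on stdin; it answers OK or ERR per line, which
is one way to see exactly what Squid would send and receive.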





_
Get Hotmail on your mobile phone http://www.msn.co.uk/mobile


Re: [squid-users] HTTP Headers

2003-03-06 Thread Craig Kelley
On Thu, 2003-03-06 at 00:54, Henrik Nordstrom wrote:
> On Thursday 06 March 2003 00.24, Craig Kelley wrote:
> 
> > Just for the archives; I solved the problem by using this on the
> > source HTTPD server:
> >
> > 
> > Options FollowSymLinks
> > AllowOverride None
> > Header set Cache-control public
> > AuthType Basic
> > AuthName ByPassword
> > AuthUserFile /path/to/htpasswd/file/goes/here
> > 
> >   Require valid-user
> > 
> > 
> >
> > Many thanks Henrik for the HTTP header hint.
> 
> I assume you know that cache hits will not require authentication in 
> such setup? And this does not only apply to your cache but any cache 
> on the Internet who have cached the page.
> 
> Having auth requirement on such URLs on the server is somewhat odd, 
> but if you require authentication for URLs higher up in the directory 
> structure then you will need to mark them as public as browsers will 
> still think authentication is required to fetch these objects and 
> thereby make caches also think it is...
> 
> If you really want both authentication and caching in your accelerator 
> then set up authentication in Squid.

Yes, that is a good point.  In our situation we are setting up firewall
rules such that the only machines that can speak with the central apache
server are the squid transparent proxies, so it works for us.  The squid
machines are also behind private firewalls, with controlled access to
the clients (in between is FreeS/WAN).  This gives us a top-down
distributed filesystem with top-down authentication too.

The auth requirement is just meant to be there to keep the casual
observer from snooping around (which will be discovered via log files
and such).  Thanks again for your help;

  -Craig




Re: [squid-users] HTTP Headers

2003-03-05 Thread Craig Kelley
On Wed, 2003-03-05 at 15:50, Henrik Nordstrom wrote:
> On Wednesday 05 March 2003 23.18, Craig Kelley wrote:
> 
> >  2) If I turn on basic HTTP authentication in Apache with something
> > like this, however:
> >
> > AuthType Basic
> > AuthName ByPassword
> > AuthUserFile /var/www/secure/users
> > 
> >   Require valid-user
> > 
> >
> > Then squid will always re-fetch the file regardless; a cache miss
> > every time.  Is this an Apache problem?
> 
> This is normal. A shared cache cannot cache content which are 
> protected by authentication unless the response includes 
> "Cache-control: public" which tells caches that even if the request 
> included authentication the response does not actually require 
> authentication and may be cached by shared caches.

Just for the archives; I solved the problem by using this on the source
HTTPD server:


Options FollowSymLinks
AllowOverride None
Header set Cache-control public
AuthType Basic
AuthName ByPassword
AuthUserFile /path/to/htpasswd/file/goes/here

  Require valid-user



Many thanks Henrik for the HTTP header hint.

  -Craig (who now can throw CODA away)




Re: [squid-users] HTTP Headers

2003-03-05 Thread Craig Kelley
Thanks for the reply Henrik; this is strange though:

 1) If I disable all authentication, everything works flawlessly.  I can
retrieve a file and then subsequent requests hit the cache.  If I `touch`
it on the remote machine, then it re-fetches the file, just as it
should.

 2) If I turn on basic HTTP authentication in Apache with something like
this, however:

AuthType Basic
AuthName ByPassword
AuthUserFile /var/www/secure/users

  Require valid-user


Then squid will always re-fetch the file regardless; a cache miss every
time.  Is this an Apache problem?  I would rather just have squid do the
basic authentication, but from what I understand it cannot reliably do
this when in httpd_accel mode (?).

On Tue, 2003-03-04 at 16:32, Henrik Nordstrom wrote:
> If your server sends Last-Modified headers then Squid will use this and
> issue an If-Modified-Since request to validate the freshness of the
> cached copy when refresh_pattern says the file is stale.
> 
> While refresh_pattern says that the file is fresh it is given directly 
> from the cache without validating the freshness with the backend 
> server.
> 
> Regards
> Henrik
> 
> On Tuesday 04 March 2003 20.39, Craig Kelley wrote:
> > Hello everyone;
> >
> > We have squid running in a transparent mode (httpd_accel) to
> > another server so that we can reduce the load over an expensive
> > link. Everything works great, but it seems that squid is not
> > checking the `last modified' http headers from the source server
> > (?)  Is there a way to have squid check the http header from the
> > remote machine to determine whether or not a file needs to be
> > re-fetched, or does it simply go `stale' in the squid cache and
> > require re-fetching the whole file again?
> >
> > We're currently using these directives:
> >
> > httpd_accel_host 63.227.133.121
> > httpd_accel_port 80
> > httpd_accel_single_host on
> > httpd_accel_with_proxy off
> > httpd_accel_uses_host_header on
> >
> > Thanks,
> >
> >   Craig Kelley
> 
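
To tie the quoted explanation to a concrete directive: refresh_pattern is what
decides when a cached object becomes stale and therefore when the If-Modified-Since
revalidation happens. An illustrative line (the numbers are not from the thread):

# min 0 minutes, 20% of the object's age, max 4320 minutes (3 days);
# once an object is older than this allows, Squid revalidates it with
# If-Modified-Since instead of serving it straight from the cache
refresh_pattern . 0 20% 4320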




[squid-users] HTTP Headers

2003-03-04 Thread Craig Kelley
Hello everyone;

We have squid running in a transparent mode (httpd_accel) to another
server so that we can reduce the load over an expensive link. 
Everything works great, but it seems that squid is not checking the
`last modified' http headers from the source server (?)  Is there a way
to have squid check the http header from the remote machine to determine
whether or not a file needs to be re-fetched, or does it simply go
`stale' in the squid cache and require re-fetching the whole file again?

We're currently using these directives:

httpd_accel_host 63.227.133.121
httpd_accel_port 80
httpd_accel_single_host on
httpd_accel_with_proxy off
httpd_accel_uses_host_header on

Thanks,

  Craig Kelley