[squid-users] Re: logrotate/squid -k rotate relationship

2006-10-03 Thread Joost de Heer
I use the following scripts for rotation:

-- rotate.sh ---
#!/bin/sh

SQUID_HOME=/opt/squid
SQUID_VERSION=2.5.13
SQUID_LOG_DIR=${SQUID_HOME}/shared/cache/logging
SQUID_LOG_BACKUP=${SQUID_HOME}/shared/logging

SLEEP_PROG=/bin/sleep
DATE_PROG=/bin/date

#Rotate squid logs
${SQUID_HOME}/${SQUID_VERSION}/sbin/squid -k rotate

#Wait 2 minutes, to be certain Squid rotation is finished

${SLEEP_PROG} 120

#access.log.0 and cache.log.0 are now the most recent logfiles. Rename them to
#access-DATE-HOUR.log and cache-DATE-HOUR.log, and move them to the archive directory

LAST_HOUR=`${DATE_PROG} --date '1 hour ago' +%F-%H%M`
LAST_DATE=`${DATE_PROG} --date '1 hour ago' +%F`

mkdir -p ${SQUID_LOG_BACKUP}/old-logs/${LAST_DATE}
mv ${SQUID_LOG_DIR}/access.log.0 \
   ${SQUID_LOG_BACKUP}/old-logs/${LAST_DATE}/access-${LAST_HOUR}.log
mv ${SQUID_LOG_DIR}/cache.log.0 \
   ${SQUID_LOG_BACKUP}/old-logs/${LAST_DATE}/cache-${LAST_HOUR}.log
-- rotate.sh ---

This is run at 00:00, 08:00 and 18:00.

And at 04:30, there's a script that bzips the logs:

-- bzip.sh --
#!/bin/sh

SQUID_HOME=/opt/squid
DATE_PROG=/bin/date
BZIP_PROG=/usr/bin/bzip2

SQUID_LOGS=${SQUID_HOME}/shared/logging/old-logs
LAST_DATE=`${DATE_PROG} --date '1 day ago' +%F`

## bzip2 the logfiles

for f in ${SQUID_LOGS}/${LAST_DATE}/*.log; do
    ${BZIP_PROG} "$f"
done
-- bzip.sh --

Crontab looks like:

0 0,8,20 * * * /opt/squid/support/rotate.sh > /dev/null 2>&1
30 4 * * * /opt/squid/support/bzip-logs.sh > /dev/null 2>&1

Joost



Re: [squid-users] Re: logrotate/squid -k rotate relationship

2006-10-03 Thread Joost de Heer

Henrik Nordstrom wrote:

On Tue 2006-10-03 at 14:53 +0200, Joost de Heer wrote:

I use the following scripts for rotation:


The script will work a bit better if you:


1. Set logfile_rotate 0 in squid.conf.

2. Have the script rename the log files before it issues squid -k
rotate. Just don't move them to another partition until Squid has
completed the log rotation.
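
A minimal sketch of that ordering (not from the original thread), assuming
logfile_rotate 0 in squid.conf and reusing the variables from rotate.sh above:

TS=`${DATE_PROG} +%F-%H%M`
mv ${SQUID_LOG_DIR}/access.log ${SQUID_LOG_DIR}/access-${TS}.log
mv ${SQUID_LOG_DIR}/cache.log ${SQUID_LOG_DIR}/cache-${TS}.log
${SQUID_HOME}/${SQUID_VERSION}/sbin/squid -k rotate   # squid reopens its log files
${SLEEP_PROG} 120                                     # give squid time to finish
mkdir -p ${SQUID_LOG_BACKUP}/old-logs
mv ${SQUID_LOG_DIR}/access-${TS}.log ${SQUID_LOG_DIR}/cache-${TS}.log \
   ${SQUID_LOG_BACKUP}/old-logs/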


What's the gain if the backup is on a different disk?

Joost


[squid-users] Squid + ICAP?

2006-09-26 Thread Joost de Heer
Hello,

Is Squid+ICAP still being developed? The only reference to ICAP in the source
tree of Squid 2.6STABLE3 is in SPONSORS, pointing to
http://devel.squid-cache.org/icap/ , but the latest entry on that page is
from the end of 2003.

Joost



[squid-users] Re: Squid 2.6 + COSS comparison

2006-09-19 Thread Joost de Heer
Adrian Chadd wrote:
 Hi everyone,

 The COSS code in Squid-2.6 has come quite far from its original design by
 Eric Stern. Steven Wilton has put an enormous amount of effort into the
 COSS design to fix the remaining bugs and dramatically improve its
 performance.

 I've assembled a quick webpage showing the drop in CPU usage and the
 negligible effect on hit-rate. Steven Wilton provided the statistics
 from two Squid caches he administers.

 You can find it here - http://www.squid-cache.org/~adrian/coss/.
 Steven is running a recent snapshot of squid-2.6. The latest -STABLE
 release of Squid-2.6 doesn't incorporate all of the COSS bugfixes
 (and there's at least one really nasty bug!) so if you're interested
 in trying COSS out please grab the latest Squid-2.6 snapshot from
 the website.

The example proxy given has a request rate of about 100 req/s max, if I
understand the graphs correctly. How does COSS hold up when the request rate
is significantly higher? I run a proxy that currently seems to peak around
420 req/s (and has an average rate of about 300 req/s during office
hours), and am currently using aufs. Peak throughput is about 25-30 Mbps.
Anything that can improve the proxy's performance further is welcome, since
I have the feeling that the proxy is currently hitting its upper limits.

Joost



[squid-users] Re: Illegal hostname

2006-09-14 Thread Joost de Heer
 2006/09/13 07:50:21| urlParse: Illegal hostname
 '.update.toolbar.yahoo.com'

A hostname may not start with a ., so Squid rightfully says it's illegal.

 The web access is very slow :(

Which is unrelated to people providing invalid hostnames in requests.

Joost



[squid-users] Re: blocking external users on a bridge when firewall is disabled

2006-09-14 Thread Joost de Heer
William Bohannan wrote:
 Hi, I currently have been running squid for a while now and it works
 fantastic.  Only one problem: when I disable my firewall I notice that squid
 goes overtime on caching and external users start using it.  Is there a way
 to make squid only accept connections from my internal interface?

Bind Squid only to the internal interface:

http_port internal.interface.ipaddress:port

And deny access from non-internal clients:

acl my_lan src network/mask
http_access allow my_lan
http_access deny all
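
For example, with hypothetical addresses (an internal interface of
192.168.0.1 and a LAN of 192.168.0.0/24), that would look like:

http_port 192.168.0.1:3128
acl my_lan src 192.168.0.0/255.255.255.0
http_access allow my_lan
http_access deny all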

Joost



[squid-users] Re: (110) Connection timed out, but Privoxy can?

2006-09-14 Thread Joost de Heer
 If I perform a search at
 http://www.linuxquestions.org/questions/search.php using Squid, the error
 returned is (110) Connection timed out.  The Privoxy on the same box,
 and an IPCop Squid on a different box, perform the search without fault.
 After clicking on Search at linuxquestions nothing is logged in
 /var/log/squid.

Have you performed a tcpdump to see what traffic is generated?

Joost



Re: [squid-users] squid can not automatically run when system boot

2006-09-05 Thread Joost de Heer
Adrian Chadd wrote:
 On Tue, Sep 05, 2006, wangzicai wrote:
 Thanks Adrian Chadd.
 The squid is not the version shipped with the system;
 I installed it myself.
 But I do not know how to create the relevant symlinks.
 Could you tell me how to do it?

 I -think- the magic command under Redhat is 'chkconfig'.
 You'll need to find a SYSV init script that starts squid from
 somewhere and put it into /etc/init.d/ first.

 Could anyone more redhat-cluey point out where an init script for
 squid is? I always spend the two minutes writing it when I need one :)

I use the following script. Place it in /etc/init.d, chmod 755, then 'chkconfig
--add squid' and 'chkconfig --level 2345 squid on'. Provided as-is: if
this script reformats your drive, it's your own fault for not properly checking
what it does, or what the abovementioned commands do.

#!/bin/bash
#
# chkconfig: - 85 15
# description: Squid
# processname: squid
# pidfile: /opt/squid/2.5.13/var/logs/squid.pid
# config: /opt/squid/2.5.13/etc/squid.conf

. /etc/rc.d/init.d/functions

if [ -f /etc/sysconfig/squid ]; then
. /etc/sysconfig/squid
fi

INITLOG_ARGS=""

SQUID_HOME=/opt/squid/2.5.13
squid=${SQUID_HOME}/sbin/squid
prog=squid
RETVAL=0
OPTIONS=-D

ulimit -Sn 8192

start()
{
echo -n $"Starting $prog: "
daemon $squid $OPTIONS
RETVAL=$?
echo
#   [ $RETVAL = 0 ] && touch /var/lock/subsys/squid
return $RETVAL
}

stop()
{
echo -n $"Stopping $prog: "
$squid -k shutdown
RETVAL=$?
echo
#   [ $RETVAL = 0 ] && rm -f /var/lock/subsys/squid /opt/squid-master/var/logs/squid.pid
}

reload()
{
echo -n $"Reloading $prog: "
$squid -k reconfigure
RETVAL=$?
echo
}

case "$1" in
start)
start
;;
stop)
stop
;;
status)
status $squid
RETVAL=$?
;;
restart)
stop
start
;;
condrestart)
if [ -f /opt/squid-master/var/logs/squid.pid ]; then
stop
start
fi
;;
reload)
reload
;;
*)
echo $"Usage: $prog {start|stop|restart|condrestart|reload|status}"
exit 1
esac

exit $RETVAL




[squid-users] Re: disk cache not used

2006-09-05 Thread Joost de Heer
Mark Gibson wrote:
 I've got 2 cache_dirs set up, and squid doesn't seem to want to use
 them.  Squid is Releasing objects before it should, which leads me to
 believe that it thinks it has no more space to store objects.  This
 setup worked fine while it was just me testing, but isn't working with
 lots of users hitting it.

 Any insights or further questions to lead to insights would be
 appreciated.

Post the config, stripped of all things irrelevant.

Joost



Re: [squid-users] Workaround For CGI Scripts

2006-09-05 Thread Joost de Heer
 Somewhere in the documentation I copied the following:

 Squid is written only as a high-performance proxy server, so there is
 no
 way for it to function as a web server, since Squid has no support for
 reading files from a local disk, running CGI scripts and so forth.
 There
 is, however, a workaround.

Where in the documentation is this? Perhaps a few paragraphs later, the
workaround is given.

Joost




Re: [squid-users] inject object into cache

2006-07-28 Thread Joost de Heer
Pranav Desai wrote:
 Hello,

 Is it possible to inject a specific object into the cache store and
 associate it with a particular URL ?

 E.g. a gif on the disk needs to be included in the cache store as say
 http://www.google.com/logo.gif.
 So, when someone accesses http://www.google.com/logo.gif, they will
 get the gif that was on the disk.

 Or maybe it can be included in a special user request (like PURGE),
 where you would give the entire object in the user request.

You could use a redirector
(http://wiki.squid-cache.org/SquidFaq/SquidRedirectors).

Joost



[squid-users] Performance problems

2006-07-17 Thread Joost de Heer
Hello,

For a while, we've been having performance problems on one of our proxies.
So far it looks like the machine is responding horridly when memory is
freed.

Here's some sample output from vmstat:

20060717-12  2  3  0  18332 190544 101322011 0 3  
 0 0  2  2  1  1
20060717-120100  2  0  0  20744 191040 101430811 0 3  
 0 0  2  2  1  1
20060717-120200  2  0  0  20620 191576 101344411 0 3  
 0 0  2  2  1  1
20060717-120300  2  0  0  20828 192012 101281611 0 3  
 0 0  2  2  1  1
20060717-120400  1  0  0  29832 192392 100286811 0 3  
 0 0  2  2  1  1
20060717-120500  1  0  0  58108 192524 97176011 0 3   
0 0  2  2  1  1
20060717-120600  2  0  0  69172 192784 96505611 0 3   
0 0  2  2  1  1
20060717-120700  2  0  0  45644 193200 98818411 0 3   
0 0  2  2  1  1
20060717-120800  2  0  0  24668 193604 100877611 0 3  
 0 0  2  2  1  1
20060717-120900  2  0  0  21576 194048 101143611 0 3  
 0 0  2  2  1  1
20060717-121001  4  0  0  18056 194400 101090411 0 3  
 0 0  2  2  1  1
20060717-121100  2  0  0  18652 194904 101350411 0 3  
 0 0  2  2  1  1

Between 12:04 and 12:07, the machine was responding very poorly.

Output of 'free':

             total       used       free     shared    buffers     cached
Mem:       2055448    2034576      20872          0     202756    1002868
-/+ buffers/cache:     828952    1226496
Swap:      8388600          0    8388600

Specs of the machine:

Dual processor Intel(R) Xeon(TM) CPU 3.20GHz, 2 GB memory, machine has 3
HD's: 2x72.8GB mirror for OS/software/logs and 1x72.8G single disk for
cache.

OS: Linux kslh086 2.4.21-37.ELsmp #1 SMP Wed Sep 7 13:28:55 EDT 2005 i686
i686 i386 GNU/Linux (upgrade to 2.6 is unfortunately not possible)

Disks are all ext3.

Relevant squid.conf lines:

cache_replacement_policy heap GDSF
memory_replacement_policy heap GDSF
memory_pools off
cache_swap_low 90
cache_swap_high 95
maximum_object_size 64 KB
maximum_object_size_in_memory 8 KB

Apart from squid, Apache httpd 2.2.2 and BIND 9.3.2 (as a caching DNS
server) are running on this machine.

Relevant Apache config:

MinSpareServers 5
MaxSpareServers 5
StartServers5
MaxRequestsPerChild 0
MaxClients  5

I've already minimised the cache (it's only 1G large now) to see if the
problem was too much disk access, but no luck.

Normal cache usage is about 300-350 req/s, throughput is about 2.5MB/s,
and usually there are about 2000-2500 fd's open (proxy is configured to
run with 8192 available fd's, we can't lower this as the peak usage seems
to be about 6000 fds)

Anyone has any ideas what might cause this? Directions to search for?

Joost



[squid-users] Re: Performance problems

2006-07-17 Thread Joost de Heer
Forgot one additional piece of information: Squid version used is 2.5.13.
But we've been having these problems with 2.5.7, 2.5.10 and 2.5.12 too.

Joost de Heer wrote:
 Hello,

 For a while, we've been having performance problems on one of our proxies.
 So far it looks like the machine is responding horridly when memory is
 freed.

 Here's some sample output from vmstat:

 20060717-12  2  3  0  18332 190544 101322011 0 3
  0 0  2  2  1  1
 20060717-120100  2  0  0  20744 191040 101430811 0 3
  0 0  2  2  1  1
 20060717-120200  2  0  0  20620 191576 101344411 0 3
  0 0  2  2  1  1
 20060717-120300  2  0  0  20828 192012 101281611 0 3
  0 0  2  2  1  1
 20060717-120400  1  0  0  29832 192392 100286811 0 3
  0 0  2  2  1  1
 20060717-120500  1  0  0  58108 192524 97176011 0 3
 0 0  2  2  1  1
 20060717-120600  2  0  0  69172 192784 96505611 0 3
 0 0  2  2  1  1
 20060717-120700  2  0  0  45644 193200 98818411 0 3
 0 0  2  2  1  1
 20060717-120800  2  0  0  24668 193604 100877611 0 3
  0 0  2  2  1  1
 20060717-120900  2  0  0  21576 194048 101143611 0 3
  0 0  2  2  1  1
 20060717-121001  4  0  0  18056 194400 101090411 0 3
  0 0  2  2  1  1
 20060717-121100  2  0  0  18652 194904 101350411 0 3
  0 0  2  2  1  1

 Between 12:04 and 12:07, the machine was responding very poorly.

 Output of 'free':

              total       used       free     shared    buffers     cached
 Mem:       2055448    2034576      20872          0     202756    1002868
 -/+ buffers/cache:     828952    1226496
 Swap:      8388600          0    8388600

 Specs of the machine:

 Dual processor Intel(R) Xeon(TM) CPU 3.20GHz, 2 GB memory, machine has 3
 HD's: 2x72.8GB mirror for OS/software/logs and 1x72.8G single disk for
 cache.

 OS: Linux kslh086 2.4.21-37.ELsmp #1 SMP Wed Sep 7 13:28:55 EDT 2005 i686
 i686 i386 GNU/Linux (upgrade to 2.6 is unfortunately not possible)

 Disks are all ext3.

 Relevant squid.conf lines:

 cache_replacement_policy heap GDSF
 memory_replacement_policy heap GDSF
 memory_pools off
 cache_swap_low 90
 cache_swap_high 95
 maximum_object_size 64 KB
 maximum_object_size_in_memory 8 KB

 Apart from squid, Apache httpd 2.2.2 and BIND 9.3.2 (as a caching DNS
 server) are running on this machine.

 Relevant Apache config:

 MinSpareServers 5
 MaxSpareServers 5
 StartServers5
 MaxRequestsPerChild 0
 MaxClients  5

 I've already minimised the cache (it's only 1G large now) to see if the
 problem was too much disk access, but no luck.

 Normal cache usage is about 300-350 req/s, throughput is about 2.5MB/s,
 and usually there are about 2000-2500 fd's open (proxy is configured to
 run with 8192 available fd's, we can't lower this as the peak usage seems
 to be about 6000 fds)

 Anyone has any ideas what might cause this? Directions to search for?

 Joost






[squid-users] Re: Squid won't debug

2006-07-06 Thread Joost de Heer
 ERROR
 The requested URL could not be retrieved

 While trying to retrieve the URL: http://localhost:81/

 The following error was encountered:

 * Access Denied.

 My squid.conf:

I doubt it is your complete squid.conf, as an ACL is used that's not present:

 http_access deny !Safe_ports

This ACL is probably also the cause of the error: 81 isn't usually in a
list of 'safe ports'.

The 'No running copy' error might come from this:

 pid_filename /var/run/squid.pid

Does the user that squid runs as have write access to this file? When you
startup the server, is an error printed in /var/log/squid/cache.log?

Joost



[squid-users] Re: WARNING: Cannot run '/user/bin/ntlm_auth' process.

2006-07-04 Thread Joost de Heer
Nathaniel Staples wrote:
 Hi all!

 auth_param ntlm program /usr/local/bin/ntlm_auth
 --helper-protocol=squid-2.5-ntlmssp
 auth_param ntlm children 5
 auth_param ntlm max_challenge_reuses 0
 auth_param ntlm max_challenge_lifetime 2 minutes

 auth_param basic program /usr/local/bin/ntlm_auth
 --helper-protocol=squid-2.5-basic
 auth_param basic children 5
 auth_param basic realm Squid proxy-caching web server
 auth_param basic credentialsttl 2 hours

 This line
 was then followed by 5 WARNING: Cannot run '/user/bin/ntlm_auth'
 process.

Is this the exact message? Because there's a path mismatch between your
config and the actual message.

Are you sure you're editing the correct squid.conf file?

Joost



[squid-users] Re: no auth configured on squid but prompting for NTLM credentials

2006-06-16 Thread Joost de Heer
 The problem is that when they try to access HTTPS sites they don't get an
 LDAP prompt from the NetCache.  They receive an authentication prompt from
 the Squid requesting their NTLM credentials.  Which of course is an issue
 because they are not members of nor do they have accounts in the domain.
 The prompt they recieve is of the format below.

[snip config]
 never_direct allow all

Try adding 'always_direct deny all' to this.
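
The relevant pair of lines would then read (sketch based on the snippet above):

never_direct allow all
always_direct deny all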

Joost



[squid-users] Re: weird mem usage

2006-06-15 Thread Joost de Heer
Mike Leong wrote:
 Hi,

 Squid is flushing the mem cached objs once it hits a certain
 threshhold.  See my attachment for the graph.

 system: 4GB of ram

 cache mem set to 2GB
 has about ~12,944,329 objects in each cache, and is increasing daily

 any ideas why squid is behaving like this?

What are the values of the following parameters?

cache_swap_low
cache_swap_high

Joost



[squid-users] Re: X-Forwarded-For Header and Rewriter

2006-06-09 Thread Joost de Heer
[EMAIL PROTECTED] wrote:
 Hi,

 I took a look at the follow_xff patch, but will the ip-address information
 I get in an url rewriter (squid as reverse proxy with redirect script) be
 the one of the client or the one of the other cache-proxy that send its
 request to squid?
 due to the documentation, it seems as if I only use the patch to have a
 new acl directive follow_x_forwarded_for.

 but I would like to start a script to check the original client ip, e.g.
 inside of the redirect script, and then to redirect him to a special url.

 is this possible?

If the request includes the 'X-Forwarded-For' header, the patch will
use the value in that header for acls; if it doesn't (i.e. direct client
access, or a proxy/loadbalancer that's configured not to include this
header), the actual connecting IP address will be used.

You can't find the original IP address if it's not included in the request
headers.
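
As a sketch (directive names as I remember them from the 2.6-era follow_xff
code; check the patch's own documentation before relying on them), trusting
the header only when it comes from a hypothetical upstream proxy at 10.0.0.10:

acl frontend src 10.0.0.10
follow_x_forwarded_for allow frontend
follow_x_forwarded_for deny all
acl_uses_indirect_client on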

Joost



[squid-users] Re: Problem using Outlook Express 6.0 with Squid

2006-06-07 Thread Joost de Heer
Stefano Del Furia wrote:
 Hi all,
 we have installed Squid 2.5 for Windows and all works fine, but we have a
 problem using outlook express 6.0.
 When we try to retrieve the e-mail from a pop3 account we got always an
 error 10060 while if we bypass the proxy all works fine.
 Is there some configuration's trick that we must use for getting Outlook
 express to works ???
 Thanks in advance
 Stefano

- Squid is a HTTP proxy, not a POP3 proxy.
- What does your error log say?

Joost



[squid-users] Re: ftp behaving badly

2006-06-07 Thread Joost de Heer
Hement Gopal wrote:
 Hi all

 Platform : Squid 2.5 Stable 13 on Redhat 9

 I'm having trouble ftp'ing out via squid. If I enter
 ftp://ftp.domain.com in my browser URL and point my browser to my proxy
 server, it does not work. Ftp port in my squid.conf is open.

But you probably have CONNECT closed to TCP highports. What does the error
log say?

This communication is intended for the addressee only. [blahblahblah]

Please remove this from mails to a public mailinglist.

Joost



[squid-users] Re: COSS testers!

2006-05-16 Thread Joost de Heer
Adrian Chadd wrote:
 Is there much interest in me getting COSS to the point where its stable
 and useable? I have no actual idea how COSS will actually perform in
 the real world as I don't actually know of anyone who has used it.

I used it about a year ago, and it crashed quite often. Also, after
restarting, Squid crashed almost immediately, leaving the COSS cache
corrupted, and it had to be rebuilt completely. It's bug 1296 in Bugzilla.

I switched to aufs shortly after the bug report. Unfortunately, I can't go
back to an experimental storage scheme; my customer wouldn't like to be a
test case...

Joost



Re: [squid-users] NTLM web authentication

2006-05-11 Thread Joost de Heer
Mark Elsen wrote:
 Hi,

 A squid proxy running on FC4 was setup to support multiple remote
 locations in our organization. However, it was found that the password
 prompt did not show up when user tried to access some restricted URL on
 the Windows server, which was other than the current login id/password.

 Does Squid proxy supports NTLM challenge-response type web
 authentication on Windows NT4/2000 servers? If it does, where can I get
 the setup/config. documentation to make it work?



 http://www.squid-cache.org/Doc/FAQ/FAQ-23.html#ss23.5

I think he meant that the website used NTLM authentication, not that the
proxy used it.

Joost



Re: [squid-users] Squid Proxy and IIS

2006-05-11 Thread Joost de Heer
Mark Elsen wrote:
 We are running Squid proxy and everybody can connect to the internet
 without
 problems. Recently we connect a site with intranet running IIS to our
 network, using our Squid Proxy we cannot connect to the said intranet,
 even
 the login prompt is not appearing, you can only see an error:

  If the IIS uses NTLM, then you're out of luck; NTLM can't be proxied
  over HTTP.

But shouldn't that result in a 403, rather than a 401?

Are there IP restrictions or NTFS restrictions on the webserver?

Joost



[squid-users] Re: Exchange OWA required extension_methods

2006-05-10 Thread Joost de Heer

 old-mail01# tail -n10 access.log cache.log store.log
 == access.log ==
 1147170439.085  4 172.16.11.175 TCP_MISS/403 1802 RPC_IN_DATA
 http://webmail.giessen.nl/rpc/rpcproxy.dll? -
 FIRST_UP_PARENT/webmail.giessen.nl text/html
 1147170439.096  3 172.16.11.175 TCP_MISS/403 1802 RPC_OUT_DATA
 http://webmail.giessen.nl/rpc/rpcproxy.dll? -
 FIRST_UP_PARENT/webmail.giessen.nl text/html

Are you using NTLM (Windows Integrated Authentication) on the backend
server? This authentication scheme can't be proxied very well, as far as I
know.

Joost



[squid-users] Re: ACL Website Banning doesn't work

2006-05-10 Thread Joost de Heer
 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl SSL_ports port 443 563
 acl Safe_ports port 80 21 443 563 70 210 1025-65535
 acl Safe_ports port 280
 acl Safe_ports port 488
 acl Safe_ports port 591
 acl Safe_ports port 777
 #acl Safe_ports port 8080
 acl CONNECT method CONNECT

 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow password

What's the 'password' ACL? If it's matched here, users are granted access,
and all following rules are ignored.

 acl lan  src 192.168.0.0/255.255.255.0
 acl lan1 src 192.168.1.0/255.255.255.0
 acl lan2 src 192.168.2.0/255.255.255.0
 acl lan3 src 192.168.3.0/255.255.255.0

 acl restricted_sites url_regex -i myspace.com
 acl restricted_sites url_regex -i schoolies.com
 acl restricted_sites url_regex -i
 killjeeseday.freewebpage.org/lol.html
 acl restricted_sites url_regex -i earth.google.com
 acl restircted_sites url_regex -i
 kh.google.com/download/earth/index.html
 acl restricted_sites url_regex -i 211.27.149.18/webbook
 acl restricted_sites url_regex -i maps.google.com
 acl restricted_sites url_regex -i runescape.com
 acl restricted_sites url_regex -i runehq.com

 acl user_passwords proxy_auth REQUIRED

 http_access deny  !restricted_sites lan
 http_access deny  !restricted_sites lan1
 http_access deny  !restricted_sites lan2
 http_access deny  !restricted_sites lan3

So move the 'http_access allow password' (http_access allow
user_passwords?) to here.

 http_access deny all

Joost



[squid-users] Re: Denying user access based on proxy_auth

2006-05-03 Thread Joost de Heer
 I have an acl that looks like this:

 acl denied_users proxy_auth_regex -i '/etc/squid2/denied_users'

 where the denied_users file has a list of users who are not allowed access
 in the form of: john.smith

 Now for the first time I have a problem in the way this works.  For
 instance, I have a user account of smith.  It's a generic account that is
 used to ensure that certain applications run on Windows 2000/XP.  I simply
 want to prevent Web access as it's anonymous to some extent.  So I add the
 name smith to my denied_users file.  Now not only is smith denied
 access, but also john.smith.

Put the username as '^smith$' in the config.

IMO it would be easier to use NT group membership (those who may browse
are members of a certain group, and you check membership of that group in the
acl).
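
For example, with the denied_users file containing only the anchored pattern

^smith$

and the acl from above:

acl denied_users proxy_auth_regex -i '/etc/squid2/denied_users'
http_access deny denied_users

'smith' is denied, but 'john.smith' no longer matches the regex.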

Joost



[squid-users] Re: bandwidth

2006-05-02 Thread Joost de Heer
Di Giambelardini Gabriele wrote:
 Hi to all,
 this is my first email here...
 I have a problem: sometimes my internet line is really slow...
 I'd like to know which of my clients uses all the internet line.
 I tried sarg, but it doesn't work well for my case.
 Does somebody know some software that shows me, in real time, who uses all the
 line?

Check the active requests with cachemgr.cgi or with squidclient
mgr:active_requests
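
For example (3128 is the default port; adjust host and port for your setup):

squidclient -h 127.0.0.1 -p 3128 mgr:active_requests

This lists the requests currently in progress, including the client address
for each, which should point you at the heavy users.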

Joost



Re: [squid-users] Question on IP based access

2006-05-01 Thread Joost de Heer
 To the best of my knowledge, this is only available in Squid 3 or via
 the patch on devel.squid-cache.org
 http://devel.squid-cache.org/follow_xff/index.html).

Apart from src/structs.h, this patches fine. But I get several warnings when
running bootstrap.sh (and the first time it actually fails; the second
time I run it, it finishes OK though):

WARNING: Cannot find automake version 1.5
Trying automake (GNU automake) 1.9.2
WARNING: Cannot find autoconf version 2.13
Trying autoconf (GNU Autoconf) 2.59
acinclude.m4:10: warning: underquoted definition of AC_CHECK_SIZEOF_SYSTYPE
  run info '(automake)Extending aclocal'
  or see http://sources.redhat.com/automake/automake.html#Extending-aclocal
acinclude.m4:49: warning: underquoted definition of AC_CHECK_SYSTYPE
configure.in:1553: warning: AC_CHECK_TYPE: assuming `u_short' is not a type
autoconf/types.m4:234: AC_CHECK_TYPE is expanded from...
configure.in:1553: the top level
autoheader: WARNING: Using auxiliary files such as `acconfig.h',
`config.h.bot'
autoheader: WARNING: and `config.h.top', to define templates for
`config.h.in'
autoheader: WARNING: is deprecated and discouraged.
autoheader:
autoheader: WARNING: Using the third argument of `AC_DEFINE' and
autoheader: WARNING: `AC_DEFINE_UNQUOTED' allows to define a template without
autoheader: WARNING: `acconfig.h':
autoheader:
autoheader: WARNING:   AC_DEFINE([NEED_FUNC_MAIN], 1,
autoheader: [Define if a function `main' is needed.])
autoheader:
autoheader: WARNING: More sophisticated templates can also be produced,
see the
autoheader: WARNING: documentation.
configure.in:1553: warning: AC_CHECK_TYPE: assuming `u_short' is not a type
autoconf/types.m4:234: AC_CHECK_TYPE is expanded from...
configure.in:1553: the top level
configure.in:1553: warning: AC_CHECK_TYPE: assuming `u_short' is not a type
autoconf/types.m4:234: AC_CHECK_TYPE is expanded from...
configure.in:1553: the top level
configure.in:1553: warning: AC_CHECK_TYPE: assuming `u_short' is not a type
autoconf/types.m4:234: AC_CHECK_TYPE is expanded from...
configure.in:1553: the top level
Autotool bootstrapping complete.

Can I ignore these warnings?



Re: [squid-users] Question on IP based access

2006-04-30 Thread Joost de Heer
 To the best of my knowledge, this is only available in Squid 3 or via
 the patch on devel.squid-cache.org
 http://devel.squid-cache.org/follow_xff/index.html).

Thanks for this link. There's a diff for 2.5 on that page, but it's
ancient (2003). Does it still apply cleanly to 2.5S13? If needed I can
patch manually, but I'd rather use patch...

Joost



[squid-users] Question on IP based access

2006-04-28 Thread Joost de Heer
Hello,

I have a proxy which uses IP based access (if you come from IP address X,
you're allowed to use the proxy; if from IP address Y, you're denied).
Now, a second proxy is being installed and an F5 loadbalancer is placed in
front of them, which causes all connections to the proxies to be made from
the loadbalancer.

The loadbalancer has the option to turn on the X-Forwarded-For header, so
we can check that. But is it possible to check this header directly, or do
I need an external authentication program to do this?

Joost



RE: [squid-users] proxy.pac

2006-04-18 Thread Joost de Heer
[EMAIL PROTECTED] wrote:
 Not true at all.  The web browser tries to access the configuration
 script.  If it doesn't get to it, the request is submitted directly.
 We wouldn't have been  able to use the functionality otherwise.

I think it uses the cached proxy.pac.

All our pac's include something like 'if (!isResolvable("some.internal.host"))
return "DIRECT";' to check whether they're in the internal or in an external
network.

Joost



[squid-users] Re: Squid + NTLM and TCP_DENIED for each request

2006-04-18 Thread Joost de Heer
Ngo, Toan wrote:
 Hi,

 I'm using Squid 2.5 stable 11.  I noticed log entries with TCP_DENIED
 when I go visit a website.  The connection gets through but there are
 several TCP_DENIED entries before the login is accepted.  I am on a
 domain so NTLM authentication is transparent but is there anyway to get
 the browser or squid to authenticate the first try?  It always seems to
 authenticate on the third attempt.  So for every http request that is
 logged, there are 2 TCP_DENIED entries prior to it.

That's NTLM handshaking; it's normal behaviour. Blame MS for creating a
crappy implementation of the authentication mechanism.

Joost



[squid-users] Re: Problem with Squid

2006-04-01 Thread Joost de Heer
 From a workstation running windowsXP a user can't download some type of
 files (doc,pdf,pps). But from a windows98 workstation that is at the same
 http_access level in squid.conf the user doesn't have any problem.

Are you using the same browser version on WXP and W98?

This might have something to do with the 'Use HTTP/1.1 through proxy
connections' option in IE, but since I hardly ever use IE, I'm not 100%
sure.

Joost



[squid-users] Strange denies

2006-03-27 Thread Joost de Heer
Hello,

I have the following ACLs:

acl block_domain dstdomain /opt/squid-master/etc/block.txt
http_access deny block_domain

block.txt has the following content:

# Block domain
.gator.com
.webads.nl
.doubleclick.net

The http_access rule is the first rule in the access rules, so there's no
previous rule that could grant access.

When I search for 'gator.com' in the access log, I see the following:

1143528173.138 1235254 10.36.74.44 TCP_DENIED/403 1318 POST
http://gbs.gator.com/gbs/gbs.dll? - NONE/- text/html
1143528173.138 1160166 10.36.74.44 TCP_DENIED/403 1318 POST
http://gbs.gator.com/gbs/gbs.dll? - NONE/- text/html
1143528173.138 783930 10.36.74.44 TCP_DENIED/403 1318 POST
http://gbs.gator.com/gbs/gbs.dll? - NONE/- text/html
1143528173.138 708776 10.36.74.44 TCP_DENIED/403 1318 POST
http://gbs.gator.com/gbs/gbs.dll? - NONE/- text/html
1143528173.138 859393 10.36.74.44 TCP_DENIED/403 1318 POST
http://gbs.gator.com/gbs/gbs.dll? - NONE/- text/html
1143528173.138 1256918 10.36.74.44 TCP_DENIED/403 1306 POST
http://rs.gator.com/rs.dll? - NONE/- text/html
1143528173.138 968166 10.36.74.44 TCP_DENIED/403 1318 POST
http://gbs.gator.com/gbs/gbs.dll? - NONE/- text/html
1143528253.117  1 10.41.0.83 TCP_DENIED/403 1394 GET
http://gatorcme.gator.com/gatorcme/autoupdate/installprecisiontime.exe? -
NONE/- text/html
1143528253.143  1 10.41.0.83 TCP_DENIED/403 1380 GET
http://gatorcme.gator.com/gatorcme/autoupdate/precisiontime.ini? - NONE/-
text/html

Why are the deny times for some of the requests so high? All these long
denies come from one client, by the way; is there something misconfigured
at that client? Since this is a WAN proxy, I have no idea what that client
is (a local browser or a site proxy), and I have no influence on its
behaviour.

Joost



[squid-users] Re: Recommendations for log analyzer

2006-02-20 Thread Joost de Heer
Chris Mason wrote:
 I'm using Squid to control staff access to the net and I'd like to find
 a reasonable log analyzer package to monitor the efficiency and to
 report usage. I've explored the links on http://www.squid-cache.org/ but
 most of what I found isn't very polished. Any suggestions?

As long as you don't state what you want to see in the output, any
suggestion is useless.

Joost



[squid-users] Re: blocking all but one site from a machine

2006-02-20 Thread Joost de Heer
Dave wrote:
 Hello,
 I've got a unique situation. I've got squid acting as a transparent
 proxy. I want to block all outgoing http requests from a single machine
 with
 the exception of a single site, let that through. In other words if
 machine
 x goes to any other site other than the one i've designated they get an
 access denied msg. Is this doable?

- Please start a new thread for new questions, don't reply to an existing
thread, as it mucks up archiving.

- Use something like:
acl SOURCE src IPaddress-of-client
acl DEST dst IPaddress-of-destination
http_access allow SOURCE DEST
http_access deny SOURCE

Joost



[squid-users] Re: transparent proxy without client DNS setting

2006-02-20 Thread Joost de Heer
 In my attempt to configure a transparent squid using PF, ( squid is
 running on the
 openbsd gateway ) I have found out that the client is trying to
 connect to the
 internet using the DNS server configured in the client, which does not
 work, because
 the DNS server specified in the client is only internal.

Because the client doesn't know it should use a proxy, it does a DNS
resolve attempt.

So either allow DNS resolving, or force a proxy in the browser.

See an earlier mail from Mark Elsen for more downsides of interception
proxying.

Joost



[squid-users] Delay pool question

2006-02-14 Thread Joost de Heer
Hello,

I have configured a delay pool as follows:

delay_pools 1
delay_class 1 3
delay_access 1 allow all
delay_parameters 1 24/24 -1/-1 3/12

mgr:delay gives the following output for the individual buckets:

Individual:
Max: 12
Rate: 3
Current [Network 0]: 1:12
Current [Network 15]: 60:-3
Current [Network 208]: 144:0
Current [Network 3]: 154:12
Current [Network 248]: 20:12 130:12 140:12
22:12
Current [Network 16]: 3:12 4:12
Current [Network 119]: 122:12

As far as I understand delay pools, this looks OK, but what does '-3' mean
for 15.60?

Joost



Re: [squid-users] Automatic restart squid when response time is too large

2006-02-11 Thread Joost de Heer

  - Memory shortage. Montior with vmstat/sar.


2GB memory, Squid takes up 700MB. It's the only thing running on the machine.


  - I/O bottleneck. Monitor with vmstat/iostat/sar.


Cache hits are fast as usual, it's just cache misses and cache near misses 
that go kablooey. If there was an IO bottleneck, wouldn't cache hits be 
affected too?


  - Overload. Monitor CPU usage with vmstat/iostat/sar, and 
filedescriptor usage in Squid using snmp polling. Also keep an eye on 
cache.log.


No fd shortage whatsoever, happens when there's only 2500 fd's in use (with a 
maximum of 8192, I occasionally get spikes to about 5000, that's why it can't 
be much lower than 8192)


I decided not to restart automatically (which could introduce all kinds of
risks), but to send a mail when the median response time over the last 5 minutes
is larger than 0.35 seconds, or the fd count is larger than 6000.
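
Roughly, such a check could look like this (a sketch only; the mgr:info label
strings below are from memory, so verify them against your own 'squidclient
mgr:info' output, and the port and mail address are placeholders):

#!/bin/sh
INFO=`squidclient -p 8080 mgr:info`
# 5-minute median service time for all HTTP requests (first number on the line)
MEDIAN=`echo "$INFO" | awk '/HTTP Requests \(All\):/ { print $4; exit }'`
# number of file descriptors currently in use
FDS=`echo "$INFO" | awk '/file desc currently in use/ { print $NF; exit }'`
if [ `echo "$MEDIAN > 0.35" | bc` -eq 1 ] || [ "$FDS" -gt 6000 ]; then
    echo "median=$MEDIAN fds=$FDS" | mail -s "proxy response time warning" proxy-admin@example.com
fi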


Joost
--
Du hast mein Herz zerrissen, meine Seele geraubt
Das es so enden würde hätt` ich nie geglaubt  [Aus der Ruinen -]
Ohne Rücksicht auf Verluste, hast Du meine Welt zerstört  [L'Âme Immortelle]
Eine Welt, die vor kurzem nur uns beiden hat gehört


Re: [squid-users] Logging contents of all POST requests.

2006-02-07 Thread Joost de Heer
Mark Elsen wrote:

 I was wondering if there was a way to log the content of all POST
 requests that are passed through squid?  I've looked through the
 archives, documentation, and FAQ.  Any pointers would be appreciated...


 In squid.conf :

strip_query_terms off

Query terms are for GET, not for POST.

Joost



[squid-users] Question about 'default' option for cache_peer

2006-02-02 Thread Joost de Heer
Hello,

How does the 'default' keyword for cache_peer work exactly?

- Is the 'default' always tried first, and then for that cache_peer the
cache_peer_access rules are applied?
- Or is a list of allowed cache_peers generated from the
cache_peer_access rules, and then the default tried first?
- Or something else?

Reason for asking:

I want separate parent proxies for http and https. For both protocols I
want to enable a cold standby parent, which should only be accessed when
the default parent is considered 'dead' by Squid. Can I mark both the
primary http parent and the primary https parent as default, and use
cache_peer_access to deny http to the https parents, and to deny https to
the http parents?

Joost



Re: [squid-users] Question about 'default' option for cache_peer

2006-02-02 Thread Joost de Heer
Kinkie wrote:
 On Thu, 2006-02-02 at 09:21 +0100, Joost de Heer wrote:
 Hello,

 How does the 'default' keyword for cache_peer work exactly?

 Does this answer your question?
 http://squidwiki.kinkie.it/SquidFaq/TroubleShooting#head-36aedae8f2cc4943850c22bdbff2e781c76ce2f6

   Kinkie

What I want to do (and I don't find this answered in the FAQ):

never_direct allow all

cache_peer IP1 parent 8080 0 no-query default
cache_peer IP2 parent 8080 0 no-query
cache_peer IP3 parent 8080 0 no-query default
cache_peer IP4 parent 8080 0 no-query

acl http proto http
acl https method CONNECT
acl all src 0.0.0.0/0.0.0.0

cache_peer_access allow IP1 http
cache_peer_access deny IP1 all
cache_peer_access allow IP2 http
cache_peer_access deny IP2 all
cache_peer_access allow IP3 https
cache_peer_access deny IP3 all
cache_peer_access allow IP4 https
cache_peer_access deny IP4 all

I.e. IP1 is default server for http traffic, and IP2 should only be used
when IP1 isn't available, and IP3 is default for https, and IP4 should
only be used if IP3 isn't available.

Joost



[squid-users] File Descriptor limit in Windows binary

2006-01-30 Thread Joost de Heer
Hello,

The current Windows binary provided by Guido Serassio has a 2048 file
descriptor limit. I'd like to increase this to 4096. Is this an OS
limit or can it be changed? And if it can be changed, could anyone
provide me with information on how to do this?

Joost



[squid-users] Automatic restart squid when response time is too large

2006-01-17 Thread Joost de Heer

Hello,

One of our proxies has a problem which causes the response time to explode. 
We've been unable to find a cause for this behaviour, but I want to implement 
a workaround: when the median response time grows over 1 second (normal 
behaviour is a median response time of about 100/110ms) I want to 
automatically restart the proxy.


Does anyone have something like this already implemented, or do I need to
write my own script? I'd hate to reinvent the wheel, since I'm
not really a script builder...


Joost
--
Du hast mein Herz zerrissen, meine Seele geraubt
Das es so enden würde hätt` ich nie geglaubt  [Aus der Ruinen -]
Ohne Rücksicht auf Verluste, hast Du meine Welt zerstört  [L'Âme Immortelle]
Eine Welt, die vor kurzem nur uns beiden hat gehört


Re: [squid-users] Automatic restart squid when response time is too large

2006-01-17 Thread Joost de Heer

Mark Elsen wrote:

Hello,

One of our proxies has a problem which causes the response time to explode.
We've been unable to find a cause for this behaviour, but I want to implement
a workaround: when the median response time grows over 1 second (normal
behaviour is a median response time of about 100/110ms) I want to
automatically restart the proxy.




   - Wouldn't that explode the issue you are trying to tackle?


No. As I said, the median time is normally about 110ms, and the highest (apart from
the strange irregular peaks) I've seen is about 150ms. The proxy averages 200-250
requests/sec, peaking at 300 at the busiest times. A median time of 1000ms or
higher indicates a problem, and so far, restarting the proxy has solved it.


The provider of the upstream proxy does monitoring of the parent and of Squid 
from an internal client, and the parent proxy (a loadbalanced Finjan cluster) 
has no problems at all when Squid is acting weird.



   - Better is to solve the issue;


We're replacing the proxy with a loadbalanced cluster in a few months, so 
restarting when the problem occurs works for me.



   1) - SQUID version.


Currently 2.5.10.


   2) - Post access log entry, for long taking URL-s


Don't have them at hand, but -all- files take extremely long, even 2-byte
files. The only files served at normal speed are files delivered from cache.
And the error log shows nothing, apart from the usual 'illegal hostname with a _'.



   3) - Does SQUID at all times have sufficient memory ?


Yes, machine has 2G memory, Squid uses about 600MB, and it's the only service 
running on this machine.


Joost
--
Du hast mein Herz zerrissen, meine Seele geraubt
Das es so enden würde hätt` ich nie geglaubt  [Aus der Ruinen -]
Ohne Rücksicht auf Verluste, hast Du meine Welt zerstört  [L'Âme Immortelle]
Eine Welt, die vor kurzem nur uns beiden hat gehört


Re: [squid-users] SquidNT ignoring dns_nameservers?

2006-01-03 Thread Joost de Heer
 Squid on Windows doesn't ignore the dns_nameservers squid.conf
 directive, unless you are using an old external DNS build.

I downloaded 2.5STABLE12-NT Standard from acmeconsulting.it.

 The specified name server could not be able to resolve the host names
 specified in the dns_testnames squid.conf directive.

When I manually test the server with the names from the dns_testnames directive,
using 'nslookup nu.nl 192.168.1.1', the names are resolved.

And now the really puzzling part:

I just ran an ethereal trace (no filter, capture everything) and there's
no DNS traffic whatsoever while running 'sbin\squid -d1 -f
etc/squid.conf'! When running with -Dd1, squid starts normally and in the
output I see that 192.168.1.1 is added as DNS server.

I have the following in my squid.conf:

dns_nameservers 192.168.1.1
dns_testnames nu.nl

Joost



[squid-users] SquidNT ignoring dns_nameservers?

2006-01-02 Thread Joost de Heer
Hello,

I have installed SquidNT on Windows 2003, and configured a nameserver with
dns_nameservers (due to domain restrictions, I can't change the nameserver
in the TCP properties, and I want to use the local DNS server, not a
server on the other side of the country). This nameserver works (I can use
'nslookup dns.name dns.server.ip' and everything resolves fine), but when
I start Squid, a DNS error is logged (DNS tests failed), and Squid fails
to start.

I've temporarily solved the problem by adding -D to the CommandLine
registry key, but I'm wondering if anyone has an idea as to why SquidNT
seems to ignore the dns_nameservers line.

Joost



[squid-users] Re: useragent list somewhere?

2005-11-23 Thread Joost de Heer
Boniforti Flavio wrote:
 Hello everybody.
 I'm actually playing around with my useragent logs, and would like to
 know if there's a place on the 'net where I could seek information about
 the useragent strings I find in my logfiles.
 Or, if anybody would be interested, I would donate part of my
 sparetime to create and maintain a list of useragents with their
 description.

Personally, I think such a list is useless, since 'User-Agent' is a header
that can be faked.

Joost



[squid-users] Re: https Webmin using port 12000 doesn't work anymore with Squid

2005-11-23 Thread Joost de Heer
 Since  I  have  installed  Squid on my Debian 3.1, I cannot use Webmin
 anymore.
 I get the error :
 1132704539.351  0 192.168.1.10 TCP_DENIED/403 1414 CONNECT
 192.168.1.1:12000 - NONE/- text/html
 1132704539.473121 192.168.1.10 TCP_DENIED/403 1414 CONNECT
 192.168.1.1:12000 - NONE/- text/html

 acl SSL_ports port 443 563 # https, snews
 acl SSL_ports port 873 # rsync
 http_access deny CONNECT !SSL_ports

Voila, the reason: port 12000 isn't in your SSL_ports acl, so the CONNECT is denied.

Joost



Re: [squid-users] Error tcp_negative on web server in DMZ

2005-11-23 Thread Joost de Heer
 ..but on internal client of my LAN when I try in the web browser (IE):
 http://www.mysite.com
 ..the dns resolutions is ok and the ip address of my webserver is:
 10.0.1.2
 ..and I visualize only Fedora Core Test Page.

Is 'www.mysite.com' a vhost which is bound to a specific IP address?

Joost



[squid-users] SARG question

2005-11-21 Thread Joost de Heer
Not a squid question per se, but I figure several people here use SARG.

Has anyone got SARG working with the -p argument? I get a failure in
creating a temporary file:

sarg: (log) Cannot open temporary file: /tmp/sarg/TCP.MISS/504...unsort -
No such file or directory

When checking the /tmp/sarg directory, I found out that the TCP.MISS
directory isn't created, but unfortunately my C knowledge isn't good
enough to see what goes wrong exactly.

I'm using SARG 2.0.9 on Linux and Cygwin.

Joost



Re: [squid-users] SARG question

2005-11-21 Thread Joost de Heer
Colin Farley wrote:
 I've had problems with the latest versions of SARG, I have only tested or
 BSD boxes but I would suggest trying 2.0.6.  Unfotunately I don't use the
 -p switch so I can't say for sure if this is your problem.

I somewhat fixed this by patching fixip() in util.c, adding a check to see
if the ip string is only digits or ., and returning 0.0.0.0 if this isn't
the case:

    for (i=0; i<strlen(ip); i++) {
        if (ip[i]!='.' && (ip[i]<'0' || ip[i]>'9')) {
            sprintf(ip, "0.0.0.0");
            return;
        }
    }

But that's a workaround of course.

The line that seems to have triggered this behaviour (I added recs2 to the
output string in log.c):

1132208375.045 179621 10.224.26.198 TCP_MISS/504 1314 GET
http://61.139.33.188:8083/images/mini_con19.gif maggie NONE/- text/html

Looks to me like this line is in the proper format...

Joost



Re: [squid-users] How do I stop access.log from logging gifs and jpegs

2005-11-05 Thread Joost de Heer
 Is it possible to stop squid from logging gifs and jpegs in the
 access.log
 file?

 http://devel.squid-cache.org/old_projects.html#customlog

Don't you mean http://devel.squid-cache.org/old_projects.html#log_access ?

Joost



Re: [squid-users] Performance question

2005-10-28 Thread Joost de Heer
Venkatesh K said:
 Checkout whether you are running out of file descriptors.

Max descriptors is 8192, the largest peak I've ever seen was around 5000,
and around the time of the problems, mgr:info shows around 2500
descriptors.

 If you are using iptables, checkout if you are maxing on
 net.ipv4.ip_conntrack_max. You should be able to see these messages on
 console.

No IPtables, machine is firewalled with a Checkpoint firewall.

Joost



[Fwd: Re: [squid-users] Performance question]

2005-10-28 Thread Joost de Heer
 What do the squid/kernel logs say?

Squid log: nothing. Kernel log: the only strange things I see are messages
about 'shrinking window', but no timestamp is given for those...

 Doesn't squid wipe the cache at that time?

No, it just accepts connections really slow for some time.

 isn't there some kind of flood onto your server?
 (what does netstat say at such time?)

Less connections, as far as I can see.

Joost



[squid-users] Performance question

2005-10-27 Thread Joost de Heer
Hello,

Usually my proxy (Squid 2.5 STABLE10 on RH Linux) performs quite well, but
occasionally I see a drop in the number of successful requests (it's
usually around 200/s, but drops to as low as 100/s). I've written a small
performance test which uses curl to measure time_total, time_connect and
speed_download. The script runs on the same machine. Most of the time, the
connect time is close to 0 ms, but occasionally I see a spike; the largest
I've seen is around 6 seconds(!) connect time. time_total - time_connect
is reasonably constant (somewhere between 600 and 800 ms).
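
Such a measurement can be done with a curl one-liner along these lines (not
the actual test script; the proxy address and test URL are placeholders):

curl -x 127.0.0.1:8080 -s -o /dev/null \
     -w '%{time_connect} %{time_total} %{speed_download}\n' \
     http://www.example.com/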

Does anyone have an idea where I could start to find a cause of this
strange behaviour? Kernel settings? TCP settings? Squid settings?

Joost



Re: [squid-users] Performance question

2005-10-27 Thread Joost de Heer
Mark Elsen said:
 Hello,

 Usually my proxy (Squid 2.5 STABLE10 on RH Linux) performs quite well,
 but
 occasionally I see a drop in the number of successful requests (it's
 usually around 200/s, but drops to as low as 100/s).

   Couldn't that just be related to user activity (browsing) dropping?

No, because several proxy users are complaining at those moments that they
lose performance too.

Joost



[squid-users] Re: support scheme ntlm

2005-10-26 Thread Joost de Heer
 Now I want to configure authentication ntlm, but I obtain the error:
 'unrecognised ntlm auth scheme parameter 'mac-challenge-lifetime'

max-challenge-lifetime

Joost



[squid-users] strftime() style logfiles?

2005-10-25 Thread Joost de Heer
How hard would it be to implement strftime() style logfile names (i.e.
things like cache_access_log /var/log/squid/access-%F.log)? That'd avoid
having to rotate the logs every time.

Joost



Re: [squid-users] any new documentation about squid?in PDF?

2005-10-25 Thread Joost de Heer
Ben said:
 hi Kumara
 check it
 http://squid.visolve.com/squid/configuration_manual_24.htm

Which is antique and doesn't cover everything in 2.5 (for instance
auth_param isn't in there)

Joost



Re: [squid-users] Squid won't start with 2 cache_dirs configured

2005-10-25 Thread Joost de Heer
 I temporarily set permissions on both /cache and /cache/squid to 777. I
 still get the same error. Besides overly strict permissions, is there
 anything else that would cause Squid to give Permission denied? Or is
 there anything else i should try?

- What filesystem is on /cache/squid?
- How is it mounted? (show the relevant line from /proc/mounts)

Joost



[squid-users] Re: Blocking big uploads

2005-10-13 Thread Joost de Heer
 1) does some situation exist where large HTTP outbound transfers are
 done without any Content-Length header? This would make it possible for
 users to work around my acl;

chunked responses (Transfer-encoding: chunked) don't contain Content-Length.

 2) what happens with HTTPS? Is it subject to the same rules as HTTP, or
 would it pass unfiltered, as it uses the CONNECT method?

Since headers can't be read, it won't get blocked by a header acl.

 Is Squid able to block big FTP uploads, or FTP uploads in general?
 I couldn't find any way to do it, yet... Is there some safe way to block
 STOR commands?

Don't allow active ftp to the outside, only passive, and allow CONNECT
only to 443 (and possibly some other ports if you need https to them).
Any ftp session trying to use your Squid box will try to use CONNECT to a
high port, which won't work. So you'll only have ftp-over-http, and that
doesn't allow ftp puts.
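
In squid.conf terms that amounts to the usual CONNECT restriction (sketch):

acl SSL_ports port 443
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports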

 SMTP
 
 This is really not in topic with the list, but nevertheless, if anyone
 has any suggestions... I'm currently setting up Postfix to filter SMTP
 connections, I just need to configure authentication-based policies.

Most of that is quite well explained in the postfix manual.

Joost



Re: [squid-users] HTTPD reverse proxy

2005-10-12 Thread Joost de Heer
 There's no reason for squid to forward request as https, unless the
 network
 between squid and server is untrusted. But in such case, there's usually
 no
 need for using squid.

I disagree. For one customer, we provide reverse proxy functionality
(although it's not Squid). The customer is divided into smaller divisions,
some of which don't trust the rest. So they want the internal traffic to
go via https too.

Because the backend network is a private WAN, we do need the reverse proxy
on the DMZ to publish the site.

Joost



RE: [squid-users] HTTPD reverse proxy

2005-10-12 Thread Joost de Heer
M Harrata said:


 Joost,
 If it's not a security secret, can you describe your alternative solution
 ?

Entrust GetAccess Proxy Server

Joost



Re: [squid-users] HTTPD reverse proxy

2005-10-12 Thread Joost de Heer
 I'm not sure (I doubt) if apache's mod_proxy supports ssl client
 connections.

It does ('SSLProxyEngine on', if memory serves me right)

Joost



Re: [squid-users] ntlm_auth Windows Update

2005-10-11 Thread Joost de Heer
 Hi Stefano,

 thank you for fast answering - you solved the problem :-)

Actually no, he didn't solve the problem, he masked it. The real
problem is that MS has done a poor job on the current WU implementation,
forcing it to go through proxies unauthenticated. A -real- solution would
be MS fixing this bug. But that'll probably happen when pigs learn to
fly...

Joost



Re: [squid-users] Which the best OS for Squid?

2005-10-11 Thread Joost de Heer
[EMAIL PROTECTED] said:
 What if the squid cache is stored on the / partition?

That's a bad idea. Your cache could potentially fill up the root partition.

 Wouldn't that be a hideous mistake to set / to 'noatime' ?

Wouldn't it be a hideous mistake to put the cache on the same partition as
/?

Joost



RE: [squid-users] HTTPD reverse proxy

2005-10-11 Thread Joost de Heer
 When I start squid, it tells me :
 FATAL: ipcache_init : DNS name lookup tests failed.

The DNS server you configured (either in squid.conf or in
/etc/resolv.conf) isn't working, or the dns_testnames you defined can't be
resolved by the DNS server you configured.

Joost



[squid-users] Re: bad squid - Daniel Navarro - xstrdup

2005-10-07 Thread Joost de Heer
 So I change to Fedora 4 with stable 12 and has hangs
 again but now same message. just receiving this
 continously

 Oct  6 09:39:50 ngproxy kernel:
 audit(1128605990.095:2): avc:  denied  { name_co
 nnect } for  pid=2195 comm=squid dest=8001
 scontext=system_u:system_r:squid_t
 tcontext=system_u:object_r:port_t tclass=tcp_socket

Sounds like SELinux is forbidding things.

Joost



Re: [squid-users] slower connections using squid (squid is slowing down all connections)

2005-10-06 Thread Joost de Heer
 Free Memory = 24M !! I have installed on my serverv 1G Reg Ecc
 available
 (plus 2Gb swap). Swap still remain unused and from phisycal memory it
 remain
 18-24M free.

Two words: memory cache. Reading from memory is much much faster than
reading from disk, so disk reads are cached in memory as much as possible.

 Mem:   1034680k total,  1014068k used,20612k free,68500k buffers
 Swap:  2008084k total,  144k used,  2007940k free,   491916k cached

Here you see that almost half of your memory is dedicated to caching. So
there's nothing to worry about wrt memory.

Perhaps there's a tuning problem in your tcp stack. Have you changed
anything in /proc/sys/net/ipv4?

Joost



[squid-users] Re: problem about squid exhaust all memory

2005-09-27 Thread Joost de Heer
  Squid uses more and more memory continuously while it's running, and it
 will restart when all physical memory is exhausted, so my squid restarts
 many times a day. It's boring; how can I solve the problem?

How much memory does your machine have? You have a 5G cache and 256M
memory cache, perhaps this is too much for your machine.

Joost



[squid-users] Re: Squid Clustered configuration

2005-09-21 Thread Joost de Heer
 On firewall logs i see that squid requests has the original ip source
 (ie eth0) and not the virtual one (eth0:1).

tcp_outgoing_address is your friend.
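
For example (the address is a placeholder for the virtual eth0:1 address):

tcp_outgoing_address 10.0.0.2

If I remember correctly, 2.5 also accepts a list of acls after the address,
so different outgoing addresses can be selected per acl if needed.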

Joost



[squid-users] Re: Tr : Compilation problem with Squid

2005-09-20 Thread Joost de Heer
 We have Suse Linux 9.2 distribution installed.

 Now, when we launch the command ./configure, to compile squid, we get this
 message :

 configure: error: no acceptable cc found in $PATH

You don't have a compiler installed on your machine. Install the gcc package.

Joost



[squid-users] change for squid_rad_auth

2005-09-19 Thread Joost de Heer
Hello,

I recently had to work with squid_rad_auth 1.07 on a Linux machine, which
needed to talk to a radius server on Solaris 8. I couldn't get the thing
to work properly, and after a lot of searching I found out that there is a
difference between the ports defined for radius in /etc/services on Linux
and Solaris: on Linux, the default radius port is 1812, on Solaris it's
1645. On Linux, port 1645 is called 'datametrics'.

So in order to avoid problems like this, I propose the following (trivial)
change to squid_rad_auth.c:

diff squid_radius_auth-1.07/squid_rad_auth.c \
     squid_radius_auth-1.07-joost/squid_rad_auth.c
76a77
> static char svc_name[MAXLINE] = "radius";
166a168,169
>     if (!memcmp(line, "service", 7))
>         sscanf(line, "service %s", svc_name);
343d345
< const char *svc_name = "radius";
362c364
<     svc_name = optarg;
---
>     strcpy(svc_name, optarg);

This adds an option 'service' in the squid_rad_auth.conf file.

Joost



[squid-users] Re: How to: Block certain domains

2005-09-19 Thread Joost de Heer
James Moe said:

 Hello,
 ~  Disclaimer: Yes, I RTFM. Yes, I scanned the archives; because there is
 no search, I probably missed a similar question. Yes, I have lurked here
 for a couple of weeks.

 ~  v2.5.stable5
 ~  Can squid be configured to deny access to certain domains? Like
 *.doubleclick.net or *.falkag.net? The acl waste-of-time dstdomain
 unwanted + http_access deny waste-of-time looked promising but had
 no effect; the hosts were accessed anyway.

 ~  Here is what I tried:
 acl adclick1 dstdomain .doubleclick.net
 acl adclick2 dstdomain .valueclick.net
 acl adclick3 dstdomain .falkag.net
 http_access deny adclick1 adclick2 adclick3

ACLs are 'OR' lists (any value may match), while the acls on an http_access
rule are 'AND'ed together. Your http_access rule will never be true, because
the destination domain is never .doubleclick.net AND .valueclick.net AND
.falkag.net at the same time.

So what you want is
acl adclick dstdomain .doubleclick.net .valueclick.net .falkag.net
http_access deny adclick

This will deny access if dstdomain is .doubleclick.net OR .valueclick.net
OR .falkag.net.

If your list of ads-to-block is very long, you can also use
acl adclick dstdomain /path/to/textfile
where /path/to/textfile is a list of domains (one per line). You can add
comments in this file by starting the line with #.
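For example, /path/to/textfile could look like this:

# ad networks to block
.doubleclick.net
.valueclick.net
.falkag.net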

 How does squid block/deny/etc. specified domains?

With a dstdomain acl.

 Is a reload all that is necessary after changing squid.conf? Or is a
 full restart required?

A reload is enough.

Joost



[squid-users] Re: purge using squidclient

2005-09-17 Thread Joost de Heer
Joe Acquisto said:
 Docs say I can purge objects using squidclient after setting up acl and
 http_allow.

 Cannot find squidclient file on the server.

In 2.4, it's called 'client' (located in the bin directory).
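Once the PURGE acl is in place, usage is something like this (untested; the
path, port and URL are placeholders):

/opt/squid/bin/client -p 3128 -m PURGE http://www.example.com/some/object.gif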

Joost



RE: [squid-users] dead squid, Still looking,

2005-08-30 Thread Joost de Heer
John R. Van Lanen, Network Operations - TCCSA said:
 Done that too. Allowed squid -z to rebuild and it will still die within
 hours.

Did you delete everything before running 'squid -z', including the
cache_swap.log?

Another possible cause is a hardware problem:

- Check all cables
- badblocks test on all disks
- memtest

Joost



[squid-users] Re: Problems with maxconn

2005-08-30 Thread Joost de Heer
Xavier Cabrera said:
 Thanks for your answer...

 I want to limit the number of connections to a webpage (www.foo.com) per
 IP address; for example, I want a user to connect to a webpage only 5 times
 per day.

maxconn won't help you with that. You'll have to write an external acl,
which would look up things in e.g. a database.
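An untested sketch of how that could look in 2.5 (names, paths and the
5-per-day limit are placeholders; Squid also caches helper answers for ttl
seconds and may run several helper children, so the count is only
approximate):

external_acl_type daily_limit ttl=60 %SRC /usr/local/squid/libexec/check_limit.sh
acl foo_site dstdomain www.foo.com
acl overquota external daily_limit
http_access deny foo_site overquota

-- check_limit.sh --
#!/bin/sh
# Reads one source IP per line; answers OK (under quota) or ERR (over quota).
while read ip; do
    COUNTFILE=/var/tmp/squid_daily_counts.`date +%F`
    hits=`grep -cx "${ip}" ${COUNTFILE} 2>/dev/null`
    if [ "${hits:-0}" -ge 5 ]; then
        echo ERR
    else
        echo ${ip} >> ${COUNTFILE}
        echo OK
    fi
done
-- check_limit.sh --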

Joost



[squid-users] Re: Problems with maxconn

2005-08-29 Thread Joost de Heer
Xavier Cabrera said:
 Hello, I try to use the maxconn acl, but there is no block for the
 configured IP. Can anyone help me with this issue?

Define what you want to do. maxconn only limits the number of concurrent
sessions, not the number of browsers.

Joost



[squid-users] Re: Which deny rule was used?

2005-08-25 Thread Joost de Heer
Ken Ara said:
 I have seen this question asked before but I have been
 unable to find the answer.

 Using squid-2.5.STABLE9 as reverse proxy, I try to
 defend my server against assorted nasties using lots
 of 'src' and 'browser' acls.

 But in access.log, when a 403 is reported, there seems
 to be no way to detect which rule has been invoked to
 deny access.

Link a unique error document to the acl, using deny_info.
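For example (untested; the browser regex is a placeholder, and ERR_BAD_BROWSER
is a page you create yourself in Squid's errors directory):

acl badbrowser browser Nastybot
deny_info ERR_BAD_BROWSER badbrowser
http_access deny badbrowser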

Joost



[squid-users] Changing the port on which Squid starts during compilation

2005-08-25 Thread Joost de Heer
Hello,

I want the port on which Squid listens to be 8080. By default it's port 3128,
but I can't seem to get Squid compiled with 8080 as the default port.

What I did:
- Set the environment variable CACHE_HTTP_PORT to 8080
- ./configure --with-lots-of-options
- Checked include/autoconf.h, in it I see '#define CACHE_HTTP_PORT 8080'
- Compiled squid and started it, and it listens on 3128??

So what else do I need to change to get the default start port to 8080? I
know I can set it with http_port in the configuration, but things like
squidclient still need the -p argument then, and I wanted to avoid that.

Joost



[squid-users] Re: Changing the port on which Squid starts during compilation

2005-08-25 Thread Joost de Heer
I forgot to mention:

- OS: Linux RHES (Taroon update 4)
- Squid version: 2.5STABLE10

Joost



Re: [squid-users] Changing the port on which Squid starts during compilation

2005-08-25 Thread Joost de Heer
 So what else do I need to change to get the default start port to 8080?
 I
 know I can set it with http_port in the configuration, but things like
 squidclient still need the -p argument then, and I wanted to avoid that.

 The relevant configuration directive is http_port.
 Please check your squid.conf file.

I know that, as I wrote before. But I want it to be the absolute default
(the port that is used if no http_port is in the config file). Things like
squidclient still require a -p option if you set a non-default port as
http_port.

Joost



[squid-users] Re: configuring Squid to authenticate AND to log users' access to forbidden sites.

2005-08-22 Thread Joost de Heer
MARLON BORBA said:
 Squid ubergeeks,

 I am configuring a Squid (2.5-STABLE9 on Fedora Core 4) to authenticate
 users against an LDAP directory. Having succeeded in that configuration, my
 next challenge is to implement access control AND logging of users'
 accesses to forbidden sites.

 I created two url_regex lists: semacesso.txt for porn and other banned
 sites, and liberado.txt, which contains regexes for sites that are not porn
 or any other crap but could be blocked because they contain a substring
 appearing to be a porn site (e.g. esSEX.ac.uk).

 I have two problems to solve:

 1)  My Squid.conf relevant lines below:

 [...]
 acl autenticados proxy_auth REQUIRED
 [...]
 acl liberado dstdom_regex /etc/squid/liberado.txt
 acl semacesso dstdom_regex /etc/squid/semacesso.txt
 [...]
 http_access allow autenticados

 http_access allow liberado
 http_access deny semacesso
 [...]
 # And finally deny all other access to this proxy
 http_access allow localhost
 http_access deny all
 [...]

 In this configuration it allows an authenticated user to access any site,
 even the forbidden ones. OTOH, if I put the 'liberado' and 'semacesso' lines
 ABOVE the authentication line, the user cannot access forbidden sites and
 Squid logs that into cache.log, but WITHOUT the user's login.

Untested:
http_access allow localhost
http_access deny semacesso autenticados
http_access allow autenticados
http_access deny all

- Allow localhost to do anything
- If someone goes to a site in 'semacesso', (s)he'll get a password prompt
and if valid credentials are given, access is denied
- If someone goes to another site, (s)he'll get a password prompt and if
valid credentials are given, access is allowed
- And deny the rest

If someone presses Escape at the password prompt when going to a
'semacesso' site, no username is logged of course, but a 407 (proxy
authentication required) is logged.

 2) Is there a better way to permit access to non-pornographic sites (eg
 esSEX.ac.uk) but block pornographic ones (eg SEX.com)?

A content scanning proxy. Unfortunately I don't have any experience with
this (the squids I manage either don't have content scanning, or they talk
to a parent proxy which does scan, but which I don't manage).

Joost



Re: [squid-users] How to limit upload for a particular source ip/user?

2005-08-22 Thread Joost de Heer
 acl my_net src 10.0.0.1/255.255.255.0
 acl USERA src 10.0.0.1/255.255.255.255
 acl UPLIMIT req_header Content-Length [5-9][0-9]{5,}

And what if the size is, say, 1000000? That starts with a 1, so it won't match
that regex.

acl UPLIMIT req_header Content-Length [5-9][0-9]{5} [0-9]{7,}

(either 6 digits starting with 5-9, or 7 or more digits)

Joost



[squid-users] Re: can't start squid 10 latest compile

2005-08-22 Thread Joost de Heer

[EMAIL PROTECTED] wrote:

Hello,
Yes squid have all right on directories it need to.
and here is the output of the squid -Nd1
bash-3.00# ../sbin/squid -Nd1
2005/08/22 07:53:14| Starting Squid Cache version 2.5.STABLE10 for
sparc-sun-solaris2.8...
2005/08/22 07:53:14| Process ID 21606
2005/08/22 07:53:14| With 1024 file descriptors available
2005/08/22 07:53:14| Performing DNS Tests...
2005/08/22 07:53:14| Successful DNS name lookup tests...
Illegal Instruction


Did you link with -lnsl? IIRC, Solaris is quite picky with DNS libraries.
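You can check with ldd, and if the libraries are missing, force them in at
configure time (untested; substitute your real configure options):

ldd ../sbin/squid | egrep 'nsl|socket|resolv'

LIBS="-lnsl -lsocket -lresolv" ./configure --with-your-usual-options
make clean all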

Joost


Re: [squid-users] beat it to death....

2005-08-19 Thread Joost de Heer
Corey Tyndall said:
 I have tried all of these and none of these provide a fix.  I was hoping
 someone had another solution.  Also, I am not getting the Zero Size reply
 with the new version, just getting a blank screen, and then I can hit
 refresh in the browser and everything is fine.

New version != latest version. Upgrade to STABLE10, and see if the problem
still exists.

Joost



Re: [squid-users] Windows update hangs

2005-08-19 Thread Joost de Heer
 Squid's access log shows this:

 1124403238.616590 10.x.x.x TCP_DENIED/403 310 HEAD
 http://www.download.windowsupdate.com/msdownload/update/v3-19990518/cabpool/ndp1.1sp1-kb867460-x86_74a5b25d65a70b8ecd6a9c301a0aea10d8483a23.exe
 - DIRECT/206.24.192.222 text/html

 Does anyone know what can be wrong?

Turn on acl debugging (see FAQ) to see which acl causes this deny.
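For example, debug section 28 is the access control code, so in squid.conf
(followed by a squid -k reconfigure):

debug_options ALL,1 28,9

Level 9 is very noisy, so turn it back down once you've found the acl.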

Joost



[squid-users] Re: can't start squid 10 latest compile

2005-08-19 Thread Joost de Heer
[EMAIL PROTECTED] said:
 Hi,
 I try to use the latest Squid compile (squid-2.5.STABLE10-20050816), but
 the system is not starting.

- Did you run 'squid -z' before starting?
- Does the user you start squid with have write access to all directories
it tries to write to?
- Start with 'squid -Nd1', and see if it returns to the prompt or not.

Joost



[squid-users] Question on squidaio_counters

2005-08-17 Thread Joost de Heer
I took a look at the squidaio_counters page today, and saw something strange:

In the Squid book, I read, on page 244: 'The cancel counter is normally
equal to the close counter'. However, when I look at the statistics of my
cache I see the following:

open    6011915
close   463
cancel  6011881

This is Squid 2.5STABLE10 on Linux kernel 2.4.19, using aufs as cache
storage scheme.

Is something wrong with Squid, or is the comment in the book no longer
valid for the current version of Squid?

Joost



Re: [squid-users] Squid and ACL with two internet connections

2005-08-17 Thread Joost de Heer
 Thankyou so much Chris for the reply but the squid.conf says

  tcp_outgoing_address
 #   Allows you to map requests to different outgoing IP addresses
 #   based on the username or sourceaddress of the user making
 #   the request.
 #
 #   tcp_outgoing_address ipaddr [[!]aclname] ...
 #
 #   Example where requests from 10.0.0.0/24 will be forwareded
 #   with source address 10.1.0.1, 10.0.2.0/24 forwarded with
 #   source address 10.1.0.2 and the rest will be forwarded with
 #   source address 10.1.0.3.
 #
 #   acl normal_service_net src 10.0.0.0/255.255.255.0
 #   acl good_service_net src 10.0.1.0/255.255.255.0
 #   tcp_outgoing_address 10.0.0.1 normal_service_net
 #   tcp_outgoing_address 10.0.0.2 good_service_net
 #   tcp_outgoing_address 10.0.0.3
 #
 #   Processing proceeds in the order specified, and stops at first fully
 #   matching line.

 In my case the source address of the user making the request is the same.

Then base your acl on something that does differ between users. The IP acl
is just an example.
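For instance, with proxy authentication in place you could key it on the
username (untested; addresses and usernames are placeholders):

acl user_a proxy_auth alice
acl user_b proxy_auth bob
tcp_outgoing_address 10.1.0.1 user_a
tcp_outgoing_address 10.1.0.2 user_b
tcp_outgoing_address 10.1.0.3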

Joost



Re: [squid-users] Windows update hangs

2005-08-16 Thread Joost de Heer

 That's the wrong options if you are doing the authentication locally.
 always_direct is for bypassing parent proxies.
 I think what you want to do is to use http_access allow WIN1 (and so
  on) as separate ACLs to your main one that allows your clients to
 access. Of course, as always, I could be way off.

 That didn't work. It still prompts for user and password :(

The order of http_access rules is important. If you have your
authentication http_access rule before the 'allow WIN1', the
authentication popup will show.
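Untested example of the ordering (the proxy_auth acl name is a placeholder for
whatever yours is called):

acl WIN1 dstdomain .update.microsoft.com
acl password proxy_auth REQUIRED

http_access allow WIN1
http_access allow password
http_access deny all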

Joost



Re: [squid-users] blank user

2005-08-16 Thread Joost de Heer
Kashif Ali Bukhari said:
 It's very hard. There is an easy way to solve this problem:
 you can bypass authentication for the Windows Update site.

Or have Microsoft repair their broken software. But I guess that by that
time, pigs will have learned to fly.

Joost



Re: [squid-users] Windows update hangs

2005-08-16 Thread Joost de Heer
Lasse Mørk said:
 OK. I've put it at the end of squid.conf :(
 Then I tried to move it up a little.

 Now it looks like this:
 --snip--

 acl WIN1 dstdomain http://*.update.microsoft.com

acl WIN1 dstdomain .update.microsoft.com

Joost



[squid-users] Re: Cache Size Problem

2005-08-04 Thread Joost de Heer
maryam dalvi said:
 Dear all,
 I'm using squid-2.5STABLE9 (running on FC3) as a
 cache server.
 I've set up five partitions (reiserfs formatted) to store
 cache data; each partition's size is set to 4GB.
 The problem is that when the cache data fills 50% of the cache
 partitions, the speed of the cache drops and sometimes it
 seems to crash. (The bandwidth is 20Mbps)

- I see you use ufs. Try using aufs or diskd.
- Did you mount the partitions with noatime? Updating atime kills the
performance.
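Untested sketch (device, mount point and sizes are placeholders; aufs needs a
Squid built with --enable-storeio including aufs):

# squid.conf
cache_dir aufs /cache1 4000 16 256

# /etc/fstab
/dev/sdb1  /cache1  reiserfs  defaults,noatime  0 0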

Joost



[squid-users] Re: url failures

2005-08-02 Thread Joost de Heer
Joe Acquisto said:
 I cannot connect to some sites that have valid URLs. Also, I have instituted
 some ACLs, which don't appear to work.

What URLs? ACLs? What error do you get? Without information, no one can
help you...

 To attempt to determine why, I set squid.conf to debug_options ALL,1
 38,2 and did an rcsquid restart, then tried to access the site.  I did
 not see additional debug info in the log.  Tried rcsquid stop then rcsquid
 start.  Still no joy.  Changed to debug_options ALL,1 38,2 28,9 and did
 restart.  Still no additional info.

Perhaps the browser has the site in a proxy exclusion list, so the request
never reaches Squid. But without more information, I can't really give
anything useful.

 SUSe 8, v2.4STABLE7-93

Quite an antique version. Is there a special reason you're not using 2.5?

Joost



[squid-users] Re: HTTP Headers caching

2005-08-02 Thread Joost de Heer
 Cache-Control: public, max-age=0

Cache-Control: no-cache

Joost



[squid-users] Re: acl issues

2005-08-02 Thread Joost de Heer
Joe Acquisto said:
 Still chasing getting PC restrictions to work.

 I just don't get it.  I have ACLs defined, and I can see it checking
 them in the cache.log.  However, it seems it is tripping up on the IP
 check.  It always seems to be checking 127.0.0.1 instead of the actual
 connection's IP.

 Below is an example from the log:

 2005/08/01 14:54:56| aclCheck: checking 'http_access allow JOESPC LETIN1'
 2005/08/01 14:54:56| aclMatchAclList: checking JOESPC
 2005/08/01 14:54:56| aclMatchAcl: checking 'acl JOESPC src 192.168.0.16'
 2005/08/01 14:54:56| aclMatchIp: '127.0.0.1' NOT found
 2005/08/01 14:54:56| aclMatchAclList: returning 0

Are you testing from the machine the proxy is on? What IP address did you
use for proxy configuration in the browser?

Joost



[squid-users] Re: FW: [Full-disclosure] INFOHACKING and illusion brazilian b0ys own age

2005-07-26 Thread Joost de Heer
 The main website of the squid proxy  (www.squid.org) was compromised
 and defaced by llusion brazilian b0ys and me (INFOHACKING.com) .

Which website? The main site of Squid is www.squid-cache.org and not
www.squid.org...

Joost



[squid-users] Re: slow squid

2005-07-12 Thread Joost de Heer
 I have installed Squid 2.5 STABLE10 on Fedora Core 3, on a
 Pentium 4 PC with 512MB RAM and a SCSI hard disk.

 my cache dir conf is:
 cache_dir ufs /usr/local/squid/var/cache 3 14 256

- Which OS filesystem are you using? Do you have the noatime mount option
turned on?
- How many inodes do you have free?
- Is it still slow with a different storage scheme (aufs, diskd)?
- What's the memory usage? Is the machine swapping heavily? A 30GB cache
with 512MB of memory seems a bit much.
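A few quick commands to check the points above (the cache path is the one from
your cache_dir line):

df -i /usr/local/squid/var/cache    # free inodes on the cache filesystem
free -m                             # overall memory and swap usage
vmstat 5 5                          # watch the si/so columns for swapping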

Joost


