Re: [squid-users] squid busy even when its not working...(?) bug?

2008-10-06 Thread Amos Jeffries

Linda W wrote:

With no processes attaching to squid -- no activity -- no open
network connections -- only squid listening for connections --
why is squid waking up to busy-wait so often?

It's the most active process -- even when it is supposedly doing nothing?

I'm running it on suse10.3, squid-beta-3.0-351 so maybe it is something
that has been fixed?


Wakeups-from-idle per second : 102.2    interval: 10.0s

Top causes for wakeups:
 58.4% ( 62.6) squid : schedule_timeout (process_timeout)


Ah wakeups. Looks like one of the inefficient listening methods is being 
used. Squid _polls_ its listening sockets in one of several ways. Some 
of these can cause a lot of wakeups.
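
If the build supports it, picking one of the efficient event loops at 
compile time avoids most of those wakeups. A minimal sketch, assuming a 
source build (the flags below are the ones documented for Squid 2.6/3.0; 
check ./configure --help on your tree):

  ./configure --enable-epoll    # Linux; --enable-kqueue is the *BSD equivalent
  make && make install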


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] auth_param basic children

2008-10-06 Thread Amos Jeffries

Andrew Struiksma wrote:

I have setup a reverse proxy which prompts for a password if the client is not 
on our LAN. I am not sure as to the proper setting of auth_param basic 
children. I set it to 2 since we will have around 75 users hitting the site 
from our LAN but probably fewer than 10 simultaneous users from the outside. 
I'm just not sure if I'm correctly understanding how often the helper is 
actually used by Squid.

Is auth_param basic children only important when a user is actually prompted 
for a password? Or, is the authentication used every time a client requests 
pages from Squid? Does it matter if the client is on our LAN or not?




When Squid needs to authenticate a user their details are passed to the 
auth helper. It then waits (doing other stuff meanwhile) for the helper 
to send back its result.


There are two things which affect performance.

 A) children - number of helpers squid can send data to.

 B) helper concurrency - number of requests squid is allowed to queue 
up for a single helper.


Squid can only handle up to A x B requests which need authenticating at 
any given time. More requests than that will get an error message.


It's a trade-off between how fast your helper can work (i.e. how long things 
might wait in the queue) and how many helpers you can run in 
parallel before the server CPU cost is noticeable.


NP: Some helpers though have a max concurrency of 1.
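
To make the A x B arithmetic concrete, a sketch against the config quoted 
below (assuming plain basic-auth helpers, i.e. concurrency B = 1):

  # 2 helpers x 1 request each = at most 2 authentications in flight;
  # a 3rd simultaneous credential check would get the error message
  auth_param basic children 2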

Amos


Thanks!

Andrew

---squid.conf---
http_port my_ip:80 defaultsite=webserver.company.com
https_port my_ip:443 cert=/etc/apache2/ssl/webserver.company.com.cert 
key=/etc/apache2/ssl/webserver.company.com.key defaultsite=webserver.company.com

#redirects all http traffic to https
acl port80 myport 80
deny_info https://webserver.company.com port80
http_access deny port80

#reverse proxy
cache_peer webserver.company.com parent 443 0 no-query originserver ssl 
sslflags=DONT_VERIFY_PEER name=myAccel
acl our_sites dstdomain webserver.company.com
acl all src 0.0.0.0/0.0.0.0

auth_param basic program /usr/lib/squid/ldap_auth -R -b "dc=company,dc=com" -D 
"cn=squid_user,cn=Users,dc=company,dc=com" -w "password" -f sAMAccountName=%s -h 
192.168.1.2
auth_param basic children 2
auth_param basic realm Our web site
auth_param basic credentialsttl 2 hours

#these networks can access webserver without authenticating
acl trusted_nets src 192.168.1.0/24

acl ldap_users proxy_auth REQUIRED

http_access allow trusted_nets our_sites
http_access allow ldap_users our_sites

cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all

never_direct allow our_sites
--




--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] how to integrate virus detection with squid proxy

2008-10-06 Thread Amos Jeffries

simon benedict wrote:

Dear All,

I have the following setup, which has been working fine for a long time:

Redhat 9
squid-2.4.STABLE7-4


I also have Shoreline firewall on the same squid server.

Now I would appreciate it if someone could advise and help me with:

1) Real-time virus scanning for the proxy server, including scanning of HTTP 
traffic and downloaded files

2) Real-time content scanning for HTTP traffic

3) Pop-up filtering

4) Scanning of active content like JavaScript, Java or ActiveX, and blocking 
scripts that access the hard disks


I would really appreciate it if someone could advise me of software which I 
can download and integrate with squid.


Thanks for your help.



Most AV software has the capability to act as an ICAP server.
Squid-3 contains an ICAP client to pass HTTP requests to such a server 
for filtering as they go past.

  http://wiki.squid-cache.org/Features/ICAP
  http://www.squid-cache.org/Versions/v3/3.0
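
As a rough sketch of the Squid-3.0 side of that wiring (the service name 
and ICAP URL here are assumptions; check your scanner's documentation):

  icap_enable on
  icap_service av_scan respmod_precache 0 icap://127.0.0.1:1344/respmod
  icap_class av_class av_scan
  icap_access av_class allow all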

Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


[squid-users] acl website block

2008-10-06 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
I have been using

acl website dstdomain "/etc/website"

to block some websites.

But how do I make an exception for some PCs (clients)?
E.g.:

192.168.1.100-200 <-- my clients' IPs
I want only 192.168.1.100 and 192.168.1.110 to have no blocked
sites (free access),
192.168.1.101 - 192.168.1.109 to be able to browse only yahoo.com,
192.168.1.111 - 192.168.1.120 to be able to browse only google.com,
192.168.1.121 - 192.168.1.130 to be able to browse only yahoo.com and google.com,
and the rest to have no access at all.
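
One common way to lay that out in squid.conf, as an untested sketch (the 
.130 upper bound is assumed from the pattern above; the first matching 
http_access line wins):

  acl free_pcs   src 192.168.1.100 192.168.1.110
  acl yahoo_pcs  src 192.168.1.101-192.168.1.109
  acl google_pcs src 192.168.1.111-192.168.1.120
  acl both_pcs   src 192.168.1.121-192.168.1.130
  acl yahoo_site  dstdomain .yahoo.com
  acl google_site dstdomain .google.com

  http_access allow free_pcs
  http_access allow yahoo_pcs yahoo_site
  http_access allow google_pcs google_site
  http_access allow both_pcs yahoo_site
  http_access allow both_pcs google_site
  http_access deny all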

-- 
-=-=-=-=


[squid-users] squid busy even when its not working...(?) bug?

2008-10-06 Thread Linda W

With no processes attaching to squid -- no activity -- no open
network connections -- only squid listening for connections --
why is squid waking up to busy-wait so often?

It's the most active process -- even when it is supposedly doing nothing?

I'm running it on suse10.3, squid-beta-3.0-351 so maybe it is something
that has been fixed?



Wakeups-from-idle per second : 102.2    interval: 10.0s

Top causes for wakeups:
 58.4% ( 62.6) squid : schedule_timeout (process_timeout)
  8.4% (  9.0)   xfsaild : schedule_timeout (process_timeout)
  7.0% (  7.5)   xfsbufd : schedule_timeout (process_timeout)
  5.4% (  5.8)   <interrupt> : uhci_hcd:usb1, eth0
  3.7% (  4.0)  <kernel core> : usb_hcd_poll_rh_status (rh_timer_func)
  3.1% (  3.3)   <kernel IPI> : Rescheduling interrupts
  1.9% (  2.0)  <kernel core> : clocksource_register (clocksource_watchdog)




Re: [squid-users] Transparent proxy from different networks

2008-10-06 Thread Amos Jeffries
> Hi all:
>
> I have a Squid running on 192.168.1.1 listening on 3128 TCP port. Users
> from 192.168.1.0/24 can browse the Internet without problems thanks to a
> REDIRECT rule in my shorewall config.
>
> But users from differents networks (192.168.2.0/24, 192.168.3.0/24,
> etc.) can't browse the Internet. Those networks are connected to
> 192.168.1.0/24 via a VPN connection.
>
> My redirect rule in iptables syntax is like this:
>
> iptables -t nat -A PREROUTING -s 0.0.0.0/24 -i eth2 -p tcp --dport 80 -j
> REDIRECT --to-ports
>
> Is there a restriction that prevents transparent proxying for networks
> other than 192.168.1.0/24? Do I have to configure squid to listen on
> each range of network addresses?

Your current rule is restricting the REDIRECT to a specific interface and
a 0.0.0.0 source; I'm not sure how that 0.0.0.0 bit works.
Does the VPN traffic come in from a virtual interface?

There should also be a SNAT or MASQUERADE rule creating symmetry for the
proper routing of replies.

The entire ruleset you need is listed in the wiki:
  http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect
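
As an untested sketch of the shape those rules take (interface names, the 
address range, and port 3128 are assumptions; the wiki page is authoritative):

  # intercept port-80 traffic arriving from the VPN interface too
  iptables -t nat -A PREROUTING -i tun0 -p tcp --dport 80 \
           -j REDIRECT --to-ports 3128
  # and give replies a symmetric return path
  iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE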

Amos



Re: [squid-users] Cache_dir more than 10GB

2008-10-06 Thread Rafael Gomes
>
> Yes, but using data=writeback is not a tuning, but risking. Using that on
> squid cache dir may require cleaning cache_dir after each crash, otherwise
> you risk providing invalid data
> --

What does the "data=writeback" option really do?
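
For context: with ext3, data=writeback journals metadata only, so after a 
crash a recently written file can exist with stale or garbage contents, 
which is why the cache would need cleaning. A sketch of where it is set 
(device and mount point are assumptions):

  /dev/sdb1  /var/spool/squid  ext3  noatime,nodiratime,data=writeback  0 0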

Thanks!

-- 
Rafael Gomes
Consultor em TI
Embaixador Fedora
LPIC-1
(71) 8709-1289


Re: [squid-users] Raid 0 vs Two cache_dir

2008-10-06 Thread Rafael Gomes
Thanks for answers,

Your answers to this question helped me really understand it.

I will write a post about this on my blog.

Thanks!

On Mon, Oct 6, 2008 at 10:45 AM, Amos Jeffries <[EMAIL PROTECTED]> wrote:
> Henrik Nordstrom wrote:
>>
>> On mån, 2008-10-06 at 11:08 +0200, Matus UHLAR - fantomas wrote:
>>>
>>> On 05.10.08 12:31, Rafael Gomes wrote:

 I have two Scsi discs. I can set a unique cache_dir and make a Raid 0,
 so i will  improve the write or I can set two cache_dir one per disc.

 What is better?

 Are There any documents about information? Like comparation and other
 things like this.
>>>
>>> even tried reading the FAQ? Doesn't
>>> http://wiki.squid-cache.org/SquidFaq/RAID say it all?
>>
>>
>> Doesn't mention RAID0, does it?
>>
>> I have now added a RAID0 section.
>>
>
> I had a section on each RAID type. Then the RAID fanboys went and created
> that version with no mention of the fatal problems seen with some RAIDs.
>
> Amos
> --
> Please use Squid 2.7.STABLE4 or 3.0.STABLE9
>



-- 
Rafael Gomes
Consultor em TI
Embaixador Fedora
LPIC-1
(71) 8709-1289


Re: [squid-users] Cache_dir more than 10GB

2008-10-06 Thread Henrik Nordstrom
On mån, 2008-10-06 at 19:07 +0200, Itzcak Pechtalt wrote:
> On Mon, Oct 6, 2008 at 1:05 PM, Henrik Nordstrom
> <[EMAIL PROTECTED]> wrote:
> 
> > But it is important you keep the number of objects per cache_dir well
> > below 2^24. Preferably not more than 2^23.
> 
> Is there any way to limit number of objects in cache_dir ?

Only by size and estimating the number of objects based on that.
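
As a back-of-envelope sketch (the ~13 KB mean object size is an assumption; 
measure your own via cachemgr):

  #  2^23 objects x ~13 KB/object ~= 104 GB per cache_dir
  cache_dir aufs /cache1 100000 16 256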

Regards
Henrik




Re: [squid-users] Is Squid the right tool for the job?

2008-10-06 Thread Chris Robertson

gms5002 wrote:

Hello,
  


SNIP


In a nutshell I need something that can handle 'Take all
traffic from xxx.xxx.xxx.xxx, encrypt it, and send it to
yyy.yyy.yyy.yyy:.' Thanks so much for your help!
  


Stunnel (http://www.stunnel.org/) is a far better choice.  Squid is an 
HTTP proxy, and not designed to transport arbitrary TCP traffic.
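
A minimal stunnel sketch of that nutshell (the ports are assumptions, since 
the original message elides them):

  ; stunnel.conf on the xxx.xxx.xxx.xxx side
  client = yes
  [forward]
  accept  = xxx.xxx.xxx.xxx:8080
  connect = yyy.yyy.yyy.yyy:443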


Chris



Re: [squid-users] Multiple Squid nodes sharing a single, common cache directory

2008-10-06 Thread Kinkie
On Mon, Oct 6, 2008 at 6:07 PM, Christian Tzolov
<[EMAIL PROTECTED]> wrote:
> Hello all,
>
> Is it possible to configure several Squid servers to share a single,
> common cache directory?

No.
What is possible is to have multiple squids, each with its own cache,
coordinate with each other using icp and cache digests.
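
A sketch of that coordination, repeated in each squid.conf with the other 
nodes listed as siblings (hostnames are placeholders; cache digests need a 
build with --enable-cache-digests):

  icp_port 3130
  cache_peer squid2.example.com sibling 3128 3130 proxy-only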


-- 
/kinkie


[squid-users] auth_param basic children

2008-10-06 Thread Andrew Struiksma
I have setup a reverse proxy which prompts for a password if the client is not 
on our LAN. I am not sure as to the proper setting of auth_param basic 
children. I set it to 2 since we will have around 75 users hitting the site 
from our LAN but probably fewer than 10 simultaneous users from the outside. 
I'm just not sure if I'm correctly understanding how often the helper is 
actually used by Squid.

Is auth_param basic children only important when a user is actually prompted 
for a password? Or, is the authentication used every time a client requests 
pages from Squid? Does it matter if the client is on our LAN or not?

Thanks!

Andrew

---squid.conf---
http_port my_ip:80 defaultsite=webserver.company.com
https_port my_ip:443 cert=/etc/apache2/ssl/webserver.company.com.cert 
key=/etc/apache2/ssl/webserver.company.com.key defaultsite=webserver.company.com

#redirects all http traffic to https
acl port80 myport 80
deny_info https://webserver.company.com port80
http_access deny port80

#reverse proxy
cache_peer webserver.company.com parent 443 0 no-query originserver ssl 
sslflags=DONT_VERIFY_PEER name=myAccel
acl our_sites dstdomain webserver.company.com
acl all src 0.0.0.0/0.0.0.0

auth_param basic program /usr/lib/squid/ldap_auth -R -b "dc=company,dc=com" -D 
"cn=squid_user,cn=Users,dc=company,dc=com" -w "password" -f sAMAccountName=%s 
-h 192.168.1.2
auth_param basic children 2
auth_param basic realm Our web site
auth_param basic credentialsttl 2 hours

#these networks can access webserver without authenticating
acl trusted_nets src 192.168.1.0/24

acl ldap_users proxy_auth REQUIRED

http_access allow trusted_nets our_sites
http_access allow ldap_users our_sites

cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all

never_direct allow our_sites
--



[squid-users] Multiple squids serving one port (was Re: [squid-users] Why single thread?)

2008-10-06 Thread Dave Dykstra
On Mon, Oct 06, 2008 at 01:07:49PM -0700, Gordon Mohr wrote:
> I can't find mention of this '-I' option elsewhere. (It's not in my 
> 2.6.STABLE14-based man page.)
> 
> Is there a writeup on this option anywhere?
> 
> Did it only appear in later versions?

Right, sorry, it appeared in 2.7:
http://www.squid-cache.org/cgi-bin/cvsweb.cgi/squid/doc/squid.8.in

> Is there a long-name for the option that would be easier to search for?

No.

> I would be interested in seeing your scripts if there are other wrinkles 
> to using Squid in this manner. We're currently using squid for 
> load-balancing on dedicated dual-core machines, so one core is staying 
> completely idle...

I'm including a perl script called 'multisquid' below that uses -I and
assumes that there are '.squid-N.conf' config files where "N" is a
number 0, 1, etc.  I'm also including a bash script 'init-squid' that
generates those from a squid.conf based on how many subdirectories of
the form 0, 1, etc. (up to 4) exist under the cache_dir.  It makes
squid 0 a cache_peer parent of the others so it's the only one that
makes upstream connections, but they all can serve clients.

- Dave

> Dave Dykstra wrote:
> >Meanwhile the '-I' option to squid makes it possible to run multiple
> >squids serving the same port on the same machine, so you can make use of
> >more CPUs.  I've got scripts surrounding squid startups to take
> >advantage of that.  Let me know if you're interested in having them.
> >Currently I run a couple machines using 2 squids each on 2 bonded
> >gigabit interfaces in order to get over 200 Mbytes/second throughput.
> >
> >- Dave


-- multisquid ---
#!/usr/bin/perl -w
#
# run multiple squids.
#  If the command line options are for starting up and listening on a
#  socket, first open a socket for them to share with squid -I.
#  If either one results in an error exit code, return the first error code.
# Written by Dave Dykstra, July 2007
#
use strict;
use Socket;
use IO::Handle;
use Fcntl;

if ($#ARGV < 2) {
  print STDERR "Usage: multisquid squidprefix numsquids http_port [squid_args 
...]\n";
  exit 2;
}

my $prefix = shift(@ARGV);
my $numsquids = shift(@ARGV);
my $port = shift(@ARGV);
my $proto = getprotobyname('tcp');

if (!(($#ARGV >= 0) && (($ARGV[0] eq "-k") || ($ARGV[0] eq "-z")))) {
  #open the socket for both squids to listen on if not doing an
  # operation that doesn't use the socket (that is, -k or -z)
  close STDIN;
  my $fd;
  socket($fd, PF_INET, SOCK_STREAM, $proto) || die "socket: $!";
  setsockopt($fd, SOL_SOCKET, SO_REUSEADDR, 1)|| die "setsockopt: $!";
  bind($fd, sockaddr_in($port, INADDR_ANY)) || die "bind of port $port: $!";
}

my $childn;
for ($childn = 0; $childn < $numsquids; $childn++) {
  if (fork() == 0) {
exec "$prefix/sbin/squid -f $prefix/etc/.squid-$childn.conf -I @ARGV" || 
die "exec: $!";
  }
  # get them to start at different times so they're identifiable by squidclient
  sleep 2;
}

my $exitcode = 0;
while(wait() > 0) {
  if (($? != 0) && ($exitcode == 0)) {
# Take the first non-zero exit code and ignore the other one.
# exit expects a byte, but the exit code from wait() has signal
#  numbers in low byte and exit code in high byte.  Combine them.
$exitcode = ($? >> 8) | ($? & 255);
  }
}

exit $exitcode;
-- init-squid ---
#!/bin/bash
# This script will work with one squid or up to 4 squids on the same http port.
# The number of squids is determined by the existence of cache directories
# as follows.  The main path to the cache directories is determined by the
# cache_dir option in squid.conf.  To run multiple squids, create directories
# of the form
#   `dirname $cache_dir`/$N/`basename $cache_dir`
# where N goes from 0 to the number of squids minus 1.  Also create 
# cache_log directories of the same form.  Note that the cache_log option
# in squid.conf is a file, not a directory, so the $N is up one level:
#   cache_log_file=`basename $cache_log`
#   cache_log_dir=`dirname $cache_log`
#   cache_log_dir=`dirname $cache_log_dir`/$N/`basename $cache_log_dir`
# The access_log should be in the same directory as the cache_log, and
# the pid_filename also needs to be in similarly named directories (the
# same directories as the cache_log is a good choice).

. /etc/init.d/functions

RETVAL=0

INSTALL_DIR=_your_base_install_dir_with_squid_and_utils_subdirectories_
#size at which rotateiflarge will rotate access.log
LARGE_ACCESS_LOG=10

CONF_FILE=$INSTALL_DIR/squid/etc/squid.conf

CACHE_DIR=`awk '$1 == "cache_dir" {x=$3} END{print x}' $CONF_FILE`
CACHE_LOG=`awk '$1 == "cache_log" {x=$2} END{print x}' $CONF_FILE`
ACCESS_LOG=`awk '$1 == "access_log" {x=$2} END{print x}' $CONF_FILE`

squid_dirs()
{
 # if $NUMSQUIDS is 1, echo the parameter, otherwise echo the parameter
 #   N times with $N before the basename of the parameter, where N is
 #   from 0 to $NUMSQU

Re: [squid-users] Why single thread?

2008-10-06 Thread Marcin Mazurek
> 
> In my case all of the data being sent out was small enough and
> repetitive enough to be in the Linux filesystem cache.  That's where I
> found the best throughput.  I think the typical size of the data items
> were about 8-30MBytes.  It was a regular Linux ext3 filesystem.  The
> machine happens to have been a dual dual-core 64-bit 2Ghz Opteron,
> although I saw some Intel machines with similar performance per CPU but
> on those I had only one gigabit network interface and one squid.
> 

I see, you've got really big files to serve, so it's easier to get those
throughput numbers. I wonder if it's possible to get such numbers with
smaller files, 1kB - 50kB for example.

cheers

-- 
Marcin Mazurek



Re: [squid-users] Slow for one user, fast for everyone else

2008-10-06 Thread RM
On Mon, Oct 6, 2008 at 4:08 AM, RM <[EMAIL PROTECTED]> wrote:
> On Mon, Oct 6, 2008 at 1:45 AM, Pieter De Wit <[EMAIL PROTECTED]> wrote:
>> Hi JL,
>>
>> Does your server use DNS in its logging? Perhaps it's reverse DNS?
>>
>> If he downloads a big file, does the speed pick up ?
>>
>> Cheers,
>>
>> Pieter
>>
>> JL wrote:
>>>
>>> I have a server setup which provides an anonymous proxy service to
>>> individuals across the world. I have one specific user that is
>>> experiencing very slow speeds. Other users performing the very same
>>> activities do not experience the slow speeds, myself included. I asked
>>> the slow user to do traceroutes and it appeared there were no network
>>> routing issues but for some reason it is VERY slow for him to the
>>> point of being unusable. The slow user can perform the same exact
>>> activities perfectly fine using another proxy service but with my
>>> proxy it is too slow.
>>>
>>> Any help is appreciated.
>>>
>>
>>
>
> Thanks Pieter for the reply.
>
> I am not sure what you mean by DNS in its logging. I am assuming you
> mean that in the logs hostnames as opposed to IP addresses are logged.
> If so, that is not the case, only IP addresses are logged in the Squid
> logs. I realize you are probably also referring to reverse DNS for
> the user but just in case you mean reverse DNS for the server, I do
> have reverse DNS setup for the server IP's.
>
> I will have to ask to see if big downloads speed up for the user.
>
> Any other help is appreciated.
>

One thing I forgot to ask is: if he downloads a big file and the speed
picks up, what does this say and how do I fix the problem?

Any other suggestions are appreciated as well.


Res: [squid-users] BUG 740

2008-10-06 Thread NBBR


- Mensagem original 
De: Amos Jeffries <[EMAIL PROTECTED]>
Para: NBBR <[EMAIL PROTECTED]>
Cc: squid-users@squid-cache.org
Enviadas: Segunda-feira, 6 de Outubro de 2008 10:50:51
Assunto: Re: [squid-users] BUG 740

NBBR wrote:
> I'm having problems getting squid (3.0) to send the content-type to my perl
> script using external acl's. Is this the problem described in BUG 740?
> 
> If it is, will it be resolved in squid 3.0?
> 

>3.0 is already restricted to only serious bugs. You can patch your own 
>though if you want:
>  http://www.squid-cache.org/Versions/v3/3.1/changesets/b9216.patch
>  http://www.squid-cache.org/Versions/v3/3.1/changesets/b9223.patch
>  http://www.squid-cache.org/Versions/v3/3.1/changesets/b9226.patch
>
>3.1 has just been branched, which means test releases will come very 
>soon which you can use.

>Amos
>-- 
>Please use Squid 2.7.STABLE4 or 3.0.STABLE9

I downloaded Squid 3.0.STABLE9 and applied the patches, but when I go to 
compile it I get the following error:

external_acl.cc: In function ‘void parse_externalAclHelper(external_acl**)’:
external_acl.cc:353: erro: ‘DBG_IMPORTANT’ was not declared in this scope
make[3]: ** [external_acl.o] Erro 1

Do I need to apply another patch?



 
Andre Fernando A. Oliveira





Re: [squid-users] Why single thread?

2008-10-06 Thread Gordon Mohr
I can't find mention of this '-I' option elsewhere. (It's not in my 
2.6.STABLE14-based man page.)


Is there a writeup on this option anywhere?

Did it only appear in later versions?

Is there a long-name for the option that would be easier to search for?

I would be interested in seeing your scripts if there are other wrinkles 
to using Squid in this manner. We're currently using squid for 
load-balancing on dedicated dual-core machines, so one core is staying 
completely idle...


Thanks,

- Gordon @ IA

Dave Dykstra wrote:

Meanwhile the '-I' option to squid makes it possible to run multiple
squids serving the same port on the same machine, so you can make use of
more CPUs.  I've got scripts surrounding squid startups to take
advantage of that.  Let me know if you're interested in having them.
Currently I run a couple machines using 2 squids each on 2 bonded
gigabit interfaces in order to get over 200 Mbytes/second throughput.

- Dave

On Fri, Oct 03, 2008 at 12:01:26PM +1300, Amos Jeffries wrote:

Roy M. wrote:

Hello,

Why squid is running as a single thread program, wouldn't it perform
better if allow run as multithreaded as SMP or Quad core CPU are
popular now?

Thanks.

Simply 'allowing' squid to run as multi-threaded is a very big change.
We are doing what we can to work towards it. A year's worth of work is 
now behind us, with at least another ahead before it's really possible.


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Why single thread?

2008-10-06 Thread Dave Dykstra
Marcin,

In my case all of the data being sent out was small enough and
repetitive enough to be in the Linux filesystem cache.  That's where I
found the best throughput.  I think the typical size of the data items
were about 8-30MBytes.  It was a regular Linux ext3 filesystem.  The
machine happens to have been a dual dual-core 64-bit 2Ghz Opteron,
although I saw some Intel machines with similar performance per CPU but
on those I had only one gigabit network interface and one squid.

- Dave

On Mon, Oct 06, 2008 at 08:09:17PM +0200, Marcin Mazurek wrote:
> Dave Dykstra ([EMAIL PROTECTED]) wrote:
> 
> > Meanwhile the '-I' option to squid makes it possible to run multiple
> > squids serving the same port on the same machine, so you can make use of
> > more CPUs.  I've got scripts surrounding squid startups to take
> > advantage of that.  Let me know if you're interested in having them.
> > Currently I run a couple machines using 2 squids each on 2 bonded
> > gigabit interfaces in order to get over 200 Mbytes/second throughput.
> > 
> 
> 
> What kind of storage do you use for such IO performance, and what
> filesystem type is on it, if that's not a secret? :)
> 
> br
> 
> -- 
> Marcin Mazurek
> 


Re: [squid-users] Why single thread?

2008-10-06 Thread Marcin Mazurek
Dave Dykstra ([EMAIL PROTECTED]) wrote:

> Meanwhile the '-I' option to squid makes it possible to run multiple
> squids serving the same port on the same machine, so you can make use of
> more CPUs.  I've got scripts surrounding squid startups to take
> advantage of that.  Let me know if you're interested in having them.
> Currently I run a couple machines using 2 squids each on 2 bonded
> gigabit interfaces in order to get over 200 Mbytes/second throughput.
> 


What kind of storage do you use for such IO performance, and what
filesystem type is on it, if that's not a secret? :)

br

-- 
Marcin Mazurek



[squid-users] how to integrate virus detection with squid proxy

2008-10-06 Thread simon benedict
Dear All,

I have the following setup, which has been working fine for a long time:

Redhat 9
squid-2.4.STABLE7-4

I also have Shoreline firewall on the same squid server.

Now I would appreciate it if someone could advise and help me with:

1) Real-time virus scanning for the proxy server, including scanning of HTTP 
traffic and downloaded files

2) Real-time content scanning for HTTP traffic

3) Pop-up filtering

4) Scanning of active content like JavaScript, Java or ActiveX, and blocking 
scripts that access the hard disks


I would really appreciate it if someone could advise me of software which I 
can download and integrate with squid.


Thanks for your help.

regards

simon



  


Re: [squid-users] Unexpected MISSes; patching Accept-Encoding via header_access/header_replace?

2008-10-06 Thread Itzcak Pechtalt
On Mon, Sep 29, 2008 at 4:19 AM, Gordon Mohr <[EMAIL PROTECTED]> wrote:
> Using 2.6.14-1ubuntu2 in an reverse/accelerator setup.
>
> URLs I hope to be cached aren't, even after adjusting passed headers.
>
> For example, I request an URL with FireFox, get the expected MISS. Then
> request same URL with IE, get unexpected MISS when I'd like a HIT. Then
> request same URL with Chrome, get MISS instead of HIT. Finally, request with
> Safari, finally get a HIT.
>
> I gather that the key variable is the differing Accept-Encoding headers:
>
> Firefox: gzip,deflate
> IE: gzip, deflate
> Chrome: gzip,deflate,bzip2
> Safari: gzip, deflate (same as IE, hence the HIT)
>
> My theory was that stripping the varied header values and replacing them
> with the lowest-common-denominator (and the only variant ever returned by
> the parent server) could help. So I added the following to my squid
> configuration:
>
> header_access Accept-Encoding deny all
> header_replace Accept-Encoding gzip
>
> However, this has not changed the HIT/MISS pattern at all.
>
> Any other ideas for letting all these browsers share the same cached
> version?

If the HTTP response includes "Vary: User-Agent" then Squid will give a
HIT only for the same User-Agent; check whether that applies in your case.

>
> (Bonus question: My inner-server's 404 responses include a 24-hour Expires
> header. Will these be cached by squid for the declared period or the shorter
> negative-ttl? The info at
> 
> is unclear which wins.)
>
> - Gordon @ IA
>
>

Error caching is called negative caching, and depends on the "negative_ttl"
parameter in squid.conf, which defaults to 5 minutes, so the 404 will be
cached for only 5 minutes by default.
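
A sketch of the knob in squid.conf (the value is illustrative):

  # keep negatively-cached errors such as 404s for an hour instead
  negative_ttl 60 minutes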

Itzcak


Re: [squid-users] Cache_dir more than 10GB

2008-10-06 Thread Itzcak Pechtalt
On Mon, Oct 6, 2008 at 1:05 PM, Henrik Nordstrom
<[EMAIL PROTECTED]> wrote:

> But it is important you keep the number of objects per cache_dir well
> below 2^24. Preferably not more than 2^23.

Is there any way to limit number of objects in cache_dir ?

Thanks

Itzcak


Re: [squid-users] Why single thread?

2008-10-06 Thread Dave Dykstra
Meanwhile the '-I' option to squid makes it possible to run multiple
squids serving the same port on the same machine, so you can make use of
more CPUs.  I've got scripts surrounding squid startups to take
advantage of that.  Let me know if you're interested in having them.
Currently I run a couple machines using 2 squids each on 2 bonded
gigabit interfaces in order to get over 200 Mbytes/second throughput.

- Dave

On Fri, Oct 03, 2008 at 12:01:26PM +1300, Amos Jeffries wrote:
> Roy M. wrote:
> >Hello,
> >
> >Why squid is running as a single thread program, wouldn't it perform
> >better if allow run as multithreaded as SMP or Quad core CPU are
> >popular now?
> >
> >Thanks.
> 
> Simply 'allowing' squid to run as multi-threaded is a very big change.
> We are doing what we can to work towards it. A year's worth of work is 
> now behind us, with at least another ahead before it's really possible.
> 
> Amos
> -- 
> Please use Squid 2.7.STABLE4 or 3.0.STABLE9


[squid-users] Multiple Squid nodes sharing a single, common cache directory

2008-10-06 Thread Christian Tzolov
Hello all, 

Is it possible to configure several Squid servers to share a single,
common cache directory? 

Cheers,
 Chris



[squid-users] IE "Operation aborted error" with Squid

2008-10-06 Thread Jean Sirota
Hello all,
 
Running Squid 2.6.STABLE21, and get the lovely "Internet Explorer cannot open 
the Internet site - Operation aborted" error when connecting to 
www.aircanada.com 
 
Get this error when connecting through Squid in IE6 and IE7 but not with IE8 
(beta). Firefox is not affected by this of course. The error only occurs when 
connecting via squid.
 
Anyone care to try the Air Canada website to see if you can reproduce the 
error? Or have you gotten the error with other websites?
 
What could be causing this?
 
 
Thanks!




[squid-users] Transparent proxy from different networks

2008-10-06 Thread Jason Voorhees
Hi all:

I have a Squid running on 192.168.1.1 listening on 3128 TCP port. Users
from 192.168.1.0/24 can browse the Internet without problems thanks to a
REDIRECT rule in my shorewall config.

But users from differents networks (192.168.2.0/24, 192.168.3.0/24,
etc.) can't browse the Internet. Those networks are connected to
192.168.1.0/24 via a VPN connection.

My redirect rule in iptables syntax is like this:

iptables -t nat -A PREROUTING -s 0.0.0.0/24 -i eth2 -p tcp --dport 80 -j
REDIRECT --to-ports

Is there a restriction that prevents transparent proxying for networks
other than 192.168.1.0/24? Do I have to configure squid to listen on
each range of network addresses?

Thanks



Re: [squid-users] Reverse Proxy and Googlebot

2008-10-06 Thread Simon Waters
On Monday 06 October 2008 14:12:40 Amos Jeffries wrote:
> Simon Waters wrote:
> >
> > Would you expect Squid to cache the first 3MB if the HTTP 1.1 request
> > stopped early?
>
> Not separately from the rest of the file. You currently still need the
> quick_abort and related settings tuned to always fetch a whole object
> for squid to cache it.
>
> Hmm, that might actually fix the issue for you come to think of it. If
> not it can be unset after an experiment.

I've set:

quick_abort_min -1 KB

I see no great risk in using this on the servers in question, unless a 
customer set out to sabotage our service, and they probably have worse ways 
to do that.

Certainly I'd far rather Google was abusing just the reverse proxy than both 
proxy and server.


RE: [squid-users] Squid on VMWare ESX

2008-10-06 Thread Dean Weimer
I have two installations on ESX 3.5 Update 2 currently in testing, one running 
on Solaris and the other on Ubuntu, both the 3.0 branch.  They are running with 
no disk cache however, and pointing at parent proxies.  I was concerned about 
how our iSCSI SAN would handle the cache, as it is recommended not to run on 
RAID.  Though I am planning to test that as well, I just have to get a few 
other projects finished first.

I have run into no problems with either installation so far.  They seem to 
handle the live migration moves between servers with only a slight slowdown in 
operation during the move.  The load when I go into production will be around 
500 users as well, though I have only had about 25 users pointed at the test 
installations.

I have considered doing a FreeBSD install, as that's what I have been using for 
a long time on physical hardware, but since it is not officially supported by 
VMWare I have been hesitant to try it for fear that it might hurt the ESX 
server's performance.

I hope this helps you some. I still have a decent amount of testing to do 
before I would be willing to say it works and performs great, but so far it's 
been good.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co.

-Original Message-
From: Altrock, Jens [mailto:[EMAIL PROTECTED] 
Sent: Monday, October 06, 2008 6:20 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Squid on VMWare ESX

Are there any concerns/problems using Squid on VMware ESX server 3.5? We
have about 500 users, so there shouldn't be that much load on that
machine. Maybe someone has tested it and could just report how it works.

Regards

Jens


Re: [squid-users] BUG 740

2008-10-06 Thread Amos Jeffries

NBBR wrote:

I'm having problems getting squid (3.0) to send the content-type to my perl
script using external acl's. Is this the problem described in BUG 740?

If it is, will it be resolved in squid 3.0?



3.0 is already restricted to only serious bugs. You can patch your own 
though if you want:

 http://www.squid-cache.org/Versions/v3/3.1/changesets/b9216.patch
 http://www.squid-cache.org/Versions/v3/3.1/changesets/b9223.patch
 http://www.squid-cache.org/Versions/v3/3.1/changesets/b9226.patch

3.1 has just been branched, which means test releases will come very 
soon which you can use.


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Raid 0 vs Two cache_dir

2008-10-06 Thread Amos Jeffries

Henrik Nordstrom wrote:

On mån, 2008-10-06 at 11:08 +0200, Matus UHLAR - fantomas wrote:

On 05.10.08 12:31, Rafael Gomes wrote:

I have two Scsi discs. I can set a unique cache_dir and make a Raid 0,
so i will  improve the write or I can set two cache_dir one per disc.

What is better?

Are There any documents about information? Like comparation and other
things like this.

even tried reading the FAQ? Doesn't
http://wiki.squid-cache.org/SquidFaq/RAID say it all?



Doesn't mention RAID0, does it?

I have now added a RAID0 section.



I had a section on each RAID type. Then the RAID fanboys went and 
created that version with no mention of the fatal problems seen with 
some RAIDs.


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] External ACL helper

2008-10-06 Thread Amos Jeffries

Francois Goudal wrote:

Hi,

I'm trying to make a setup with several squid proxies :

All my clients are making their requests to the main proxy, I will call 
it proxy_1 here.


Then I have 2 other proxies : proxy_2 and proxy_3 that are never queried 
directly by the clients, they are supposed to be used as cache_peer by 
proxy_1.


I want proxy_1 to forward the requests to either proxy_2 or proxy_3 
depending on a specific condition based on the source IP address.


So I want to use an external acl helper script to determine if the 
client matches the condition or not.


I have written a dummy test helper script in /root/test.sh :

#!/bin/sh

while read line; do
  echo $line >> /tmp/log_helper
  echo OK
done


And my squid.conf is basically:

external_acl_type testacl %SRC /root/test.sh
acl test1 dstdom_regex google
acl test2 external testacl
cache_peer proxy_2 parent 3128 0 proxy-only
cache_peer proxy_3 parent 3128 0 proxy-only
cache_peer_access proxy_2 allow test1
cache_peer_access proxy_3 allow test2
never_direct allow all


When I start squid with this setup, I can see in the process tree that 
it starts 10 instances of test.sh


If I make a http://www.google.com query to this proxy, then the acl 
test1 is matched and the query is directed to proxy_2 and it succeeds.
But if I make a http://www.yahoo.com query to this proxy, then it 
shouldn't match the test1 acl, and then try the test2 acl, which would 
mean providing the client's IP address to the helper script, which would 
reply OK, and then the query should be directed to proxy_3.

But as a matter of fact, this query fails with a 503 Service Unavailable.

I don't understand why squid is not writing anything to the helper 
script, to try to match the test2 acl.


I would appreciate some help to figure this out, I'm out of ideas :-/

Best regards.



a) You may need to echo a newline explicitly:
  echo "OK\n"

b) Does the helper have write permissions to create or append to the log 
file when it's run as the squid user?


c) what does cache.log say around the time of the test request?


Hint:  Once this is working, consider the concurrency, ttl, and 
negative_ttl options for extra performance.
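
A sketch of those options applied to the line quoted above (values are 
illustrative; note that with concurrency=N squid prefixes each request with 
a channel ID, which the helper must echo back in its reply):

  external_acl_type testacl ttl=300 negative_ttl=60 concurrency=10 %SRC /root/test.sh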



Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Reverse Proxy and Googlebot

2008-10-06 Thread Amos Jeffries

Simon Waters wrote:

On Monday 06 October 2008 11:55:41 Amos Jeffries wrote:

Simon Waters wrote:

Seeing issues with Googlebots retrying on large PDF files.

Apache logs a 200 for the HTTP 1.0 requests.

Squid logs an HTTP 1.1 request that looks to have stopped early (3MB out
of 13MB).

This pattern is repeated with slight variation in the amount of data
served to the Googlebots, and after about 14 attempts it gives up and
goes away.

Anyone else seeing same?

Not seeing this, but  do you have correct Expires: and Cache-Control
headers on those .pdf? and is GoogleBot not obeying them?


Yes Etags and Expires headers - I don't think this is Squid specific since I 
saw similar from Googlebots before there was a reverse proxy involved.


I agree. I just thought it might be their way of self-detecting 
unchanged content if the headers were missing. But it seems not.




Does have a "Vary: Host" header, I know how it got there but I'm not 100% sure 
what if any effect it has on caching, I'm hoping everything is ignoring it.


A different copy gets cached for each difference in the headers listed 
in Vary:. ETag should override that by indicating that two variants are 
identical.


Again may be relevant in general, but shouldn't be relevant to this request 
(since it is all from the same host).


http://groups.google.com/group/Google_Webmaster_Help-Indexing/browse_thread/thread/f8ecc41ac9e5bc11

I just thought because there is a Squid reverse proxy in front of the server I 
had more information on what was going wrong, and that others here might have 
seen something similar.


It looks like the Googlebot is timing out, and retrying. Quite why it is not 
getting the cache is unclear at this point, but since I can't control the 
Googlebot I can't reproduce with more logging. It also doesn't seem to back 
off any when it fails, which I think is the real issue here. Google showed 
some interest last time, but never got back to me.


I got TCP_MISS:FIRST_UP_PARENT logged on squid for all these requests.
Today when I checked the headers using wget I see 
TCP_REFRESH_HIT:FIRST_UP_PARENT, and TCP_HIT:NONE, so Squid seems to be doing 
something sensible with the file usually, just Googlebots it dislikes.


Would you expect Squid to cache the first 3MB if the HTTP 1.1 request stopped 
early?


Not separately from the rest of the file. You currently still need the 
quick_abort and related settings tuned to always fetch a whole object 
for squid to cache it.


Hmm, that might actually fix the issue for you come to think of it. If 
not it can be unset after an experiment.


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


[squid-users] squid 2.7

2008-10-06 Thread Aguiar Magalhaes
Hi list,

I've installed squid 2.7 on a FreeBSD 7 machine. I need to configure it 
for a network with about 400 client machines.

The server has 4 GB RAM and a 60 GB partition for the cache, and only the pf 
firewall is configured and running on this server, and has been for a long time.

I already have a directive in the pf file to redirect all the network traffic 
to squid when it uses ports 80 and 443.

How do I configure the main parameters (cache, memory usage, and others if 
necessary) in squid.conf for our network and server?
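
As an illustrative starting sketch only (every value below is an assumption 
to be tuned for the workload, not a recommendation):

  cache_mem 512 MB                                  # hot-object memory cache
  cache_dir ufs /usr/local/squid/cache 49152 16 256 # ~48 GB of the 60 GB partition
  maximum_object_size 64 MB
  cache_swap_low 90
  cache_swap_high 95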

Help me, thanks,

Aguiar Magalhaes






RE: [squid-users] Squid with webwasher using NTLM authentication

2008-10-06 Thread NGUYEN DANG LUAN, Eric
I'm using squid 2.7 stable 4.

Here's the line i use:
cache_peer comp parent 3128 3130 default login=PASS

Then I run squid using this command:
/usr/local/squid/sbin/squid -N -d 1 -D &

But it doesn't seem to work.

Eric NGUYEN DANG LUAN


-----Original Message-----
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Monday, 6 October 2008 12:30
To: NGUYEN DANG LUAN, Eric
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Squid with webwasher using NTLM authentication

On mån, 2008-10-06 at 10:59 +0200, NGUYEN DANG LUAN, Eric wrote:
> I've tried almost all options for cache_peer but it doesn't seem to work. Is 
> it a bug in squid?

Did you try login=PASS using squid-2.7?

Regards
Henrik


[squid-users] Your IP (192.168.8.5) does not have access to the IWSS server.

2008-10-06 Thread jmaan

Dear ALL SQUID Users,


I have a problem/query to ask you all. I have a proxy server for the student
folk on my network.

It was going fine until the students reported to me that they were able
to log onto the proxy server to surf a site, but then immediately their
browsers throw an error message and they are not able to surf any longer,
as it denies access to them saying their proxy server 192.168.8.5 does not
have access to the IWSS server.

Please give some pointers on how to resolve this.


Is it something due to a line in squid.conf, or the IWSS server, etc.?

The error message displayed on the browsers of the students' systems is
shown below for your reference:

Here it is within double quotes:


"
IWSS Security Event (nit-imss)
Your IP (192.168.8.5) does not have access to the IWSS server.
Contact your network administrator

"


Do you have any idea why this error message is showing up?
I tried my best, but there was no way I could fix the problem. Of course there
was one port which got disabled; it was made OK, but still the problem could
not be fixed.

Please let me know if you have any idea.

I have also checked the IWSS system (the IBM machine for the IWSS
server) but still could not find the problem over there. However, on
restart of this system I got the message that one or two servers had
failed to start. But yes, IWSS is of course running.

Any pointers to solve this problem would be appreciated, as the
student folk are not getting net access from their hostels until it
is fixed.

Thanks,

jmaan










[squid-users] BUG 740

2008-10-06 Thread NBBR
I'm having problems getting squid (3.0) to send the content-type to my perl
script using external acl's. Is this the problem described in BUG 740?

If it is, will it be resolved in squid 3.0?

I'm trying to do MIME-type validation using external acl's.



 
Andre Fernando A. Oliveira





[squid-users] Squid on VMWare ESX

2008-10-06 Thread Altrock, Jens
Are there any concerns/problems using Squid on VMware ESX server 3.5? We
have about 500 users, so there shouldn't be that much load on that
machine. Maybe someone has tested it and could just report how it works.

Regards

Jens


Re: [squid-users] Raid 0 vs Two cache_dir

2008-10-06 Thread Henrik Nordstrom
On mån, 2008-10-06 at 11:08 +0200, Matus UHLAR - fantomas wrote:
> On 05.10.08 12:31, Rafael Gomes wrote:
> > I have two Scsi discs. I can set a unique cache_dir and make a Raid 0,
> > so i will  improve the write or I can set two cache_dir one per disc.
> > 
> > What is better?
> > 
> > Are There any documents about information? Like comparation and other
> > things like this.
> 
> even tried reading the FAQ? Doesn't
> http://wiki.squid-cache.org/SquidFaq/RAID say it all?


Doesn't mention RAID0, does it?

I have now added a RAID0 section.

Regards
Henrik




Re: [squid-users] Reverse Proxy and Googlebot

2008-10-06 Thread Simon Waters
On Monday 06 October 2008 11:55:41 Amos Jeffries wrote:
> Simon Waters wrote:
> > Seeing issues with Googlebots retrying on large PDF files.
> >
> > Apache logs a 200 for the HTTP 1.0 requests.
> >
> > Squid logs an HTTP 1.1 request that looks to have stopped early (3MB out
> > of 13MB).
> >
> > This pattern is repeated with slight variation in the amount of data
> > served to the Googlebots, and after about 14 attempts it gives up and
> > goes away.
> >
> > Anyone else seeing same?
>
> Not seeing this, but  do you have correct Expires: and Cache-Control
> headers on those .pdf? and is GoogleBot not obeying them?

Yes Etags and Expires headers - I don't think this is Squid specific since I 
saw similar from Googlebots before there was a reverse proxy involved.

Does have a "Vary: Host" header, I know how it got there but I'm not 100% sure 
what if any effect it has on caching, I'm hoping everything is ignoring it. 
Again may be relevant in general, but shouldn't be relevant to this request 
(since it is all from the same host).

http://groups.google.com/group/Google_Webmaster_Help-Indexing/browse_thread/thread/f8ecc41ac9e5bc11

I just thought because there is a Squid reverse proxy in front of the server I 
had more information on what was going wrong, and that others here might have 
seen something similar.

It looks like the Googlebot is timing out, and retrying. Quite why it is not 
getting the cache is unclear at this point, but since I can't control the 
Googlebot I can't reproduce with more logging. It also doesn't seem to back 
off any when it fails, which I think is the real issue here. Google showed 
some interest last time, but never got back to me.

I got TCP_MISS:FIRST_UP_PARENT logged on squid for all these requests.
Today when I checked the headers using wget I see 
TCP_REFRESH_HIT:FIRST_UP_PARENT, and TCP_HIT:NONE, so Squid seems to be doing 
something sensible with the file usually, just Googlebots it dislikes.

Would you expect Squid to cache the first 3MB if the HTTP 1.1 request stopped 
early?

66.249.67.185 - - [01/Oct/2008:08:37:13 -0700] "GET http://somewhere.pdf 
HTTP/1.1" 200 3596968 TCP_MISS:FIRST_UP_PARENT
66.249.67.185 - - [01/Oct/2008:08:47:34 -0700] "GET http://somewhere.pdf 
HTTP/1.1" 200 3342120 TCP_MISS:FIRST_UP_PARENT
66.249.67.185 - - [01/Oct/2008:08:53:47 -0700] "GET http://somewhere.pdf 
HTTP/1.1" 200 4106664 TCP_MISS:FIRST_UP_PARENT
66.249.67.185 - - [01/Oct/2008:08:59:51 -0700] "GET http://somewhere.pdf 
HTTP/1.1" 200 3973448 TCP_MISS:FIRST_UP_PARENT
66.249.67.185 - - [01/Oct/2008:09:06:12 -0700] "GET http://somewhere.pdf 
HTTP/1.1" 200 3762040 TCP_MISS:FIRST_UP_PARENT
66.249.67.185 - - [01/Oct/2008:09:12:35 -0700] "GET http://somewhere.pdf 
HTTP/1.1" 200 3843128 TCP_MISS:FIRST_UP_PARENT
66.249.67.185 - - [01/Oct/2008:09:18:46 -0700] "GET http://somewhere.pdf 
HTTP/1.1" 200 3206008 TCP_MISS:FIRST_UP_PARENT
66.249.67.185 - - [01/Oct/2008:09:25:00 -0700] "GET http://somewhere.pdf 
HTTP/1.1" 200 2958400 TCP_MISS:FIRST_UP_PARENT
66.249.67.185 - - [01/Oct/2008:09:31:25 -0700] "GET http://somewhere.pdf 
HTTP/1.1" 200 3659232 TCP_MISS:FIRST_UP_PARENT
66.249.67.185 - - [01/Oct/2008:09:37:59 -0700] "GET http://somewhere.pdf 
HTTP/1.1" 200 3643304 TCP_MISS:FIRST_UP_PARENT
66.249.67.185 - - [01/Oct/2008:09:44:35 -0700] "GET http://somewhere.pdf 
HTTP/1.1" 200 3950280 TCP_MISS:FIRST_UP_PARENT
66.249.67.185 - - [01/Oct/2008:09:50:44 -0700] "GET http://somewhere.pdf 
HTTP/1.1" 200 2182272 TCP_MISS:FIRST_UP_PARENT
66.249.67.185 - - [01/Oct/2008:09:57:16 -0700] "GET http://somewhere.pdf 
HTTP/1.1" 200 4154448 TCP_MISS:FIRST_UP_PARENT






Re: [squid-users] Cache_dir more than 10GB

2008-10-06 Thread Henrik Nordstrom
On mån, 2008-10-06 at 08:49 +0200, Francois Cami wrote:

> I would not run an ext3 filesystem with data=writeback . noatime and
> nodiratime provide a welcome boost by eliminating unneeded writes,
> however writeback is not {powerfailure, system crash}-safe. If you
> value your time (especially the time spent putting systems back up
> after a system failure), you should use data=ordered.

I don't value the cache contents much, and in fact in some installations
I automatically mkfs the cache partitions on a power failure or system
crash instead of running a filesystem check. This allows aggressive
tuning for performance.
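
A sketch of that boot-time logic (device, mount point, and the 
clean-shutdown marker file are all assumptions):

  # in an init script, before starting squid
  if [ ! -f /var/run/squid-clean-shutdown ]; then
      mkfs -q -t ext3 /dev/sdb1                        # rebuild instead of fsck
      mount -o noatime,data=writeback /dev/sdb1 /cache
      squid -z                                         # recreate the cache_dir layout
  fi
  rm -f /var/run/squid-clean-shutdown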

Regards
Henrik




Re: [squid-users] Slow for one user, fast for everyone else

2008-10-06 Thread RM
On Mon, Oct 6, 2008 at 1:45 AM, Pieter De Wit <[EMAIL PROTECTED]> wrote:
> Hi JL,
>
> Does your server use DNS in its logging? Perhaps it's reverse DNS?
>
> If he downloads a big file, does the speed pick up ?
>
> Cheers,
>
> Pieter
>
> JL wrote:
>>
>> I have a server setup which provides an anonymous proxy service to
>> individuals across the world. I have one specific user that is
>> experiencing very slow speeds. Other users performing the very same
>> activities do not experience the slow speeds, myself included. I asked
>> the slow user to do traceroutes and it appeared there were no network
>> routing issues but for some reason it is VERY slow for him to the
>> point of being unusable. The slow user can perform the same exact
>> activities perfectly fine using another proxy service but with my
>> proxy it is too slow.
>>
>> Any help is appreciated.
>>
>
>

Thanks Pieter for the reply.

I am not sure what you mean by DNS in its logging. I am assuming you
mean that in the logs hostnames as opposed to IP addresses are logged.
If so, that is not the case, only IP addresses are logged in the Squid
logs. I realize you are probably also referring to reverse DNS for
the user but just in case you mean reverse DNS for the server, I do
have reverse DNS setup for the server IP's.

I will have to ask to see if big downloads speed up for the user.

Any other help is appreciated.


[squid-users] External ACL helper

2008-10-06 Thread Francois Goudal

Hi,

I'm trying to make a setup with several squid proxies :

All my clients are making their requests to the main proxy, I will call 
it proxy_1 here.


Then I have 2 other proxies : proxy_2 and proxy_3 that are never queried 
directly by the clients, they are supposed to be used as cache_peer by 
proxy_1.


I want proxy_1 to forward the requests to either proxy_2 or proxy_3 
depending on a specific condition based on the source IP address.


So I want to use an external acl helper script to determine if the 
client matches the condition or not.


I have written a dummy test helper script in /root/test.sh :

#!/bin/sh

while read line; do
  echo $line >> /tmp/log_helper
  echo OK
done


And my squid.conf is basically:

external_acl_type testacl %SRC /root/test.sh
acl test1 dstdom_regex google
acl test2 external testacl
cache_peer proxy_2 parent 3128 0 proxy-only
cache_peer proxy_3 parent 3128 0 proxy-only
cache_peer_access proxy_2 allow test1
cache_peer_access proxy_3 allow test2
never_direct allow all


When I start squid with this setup, I can see in the process tree that 
it starts 10 instances of test.sh


If I make a http://www.google.com query to this proxy, then the acl 
test1 is matched and the query is directed to proxy_2 and it succeeds.
But if I make a http://www.yahoo.com query to this proxy, then it 
shouldn't match the test1 acl, and then try the test2 acl, which would 
mean providing the client's IP address to the helper script, which would 
reply OK, and then the query should be directed to proxy_3.

But as a matter of fact, this query fails with a 503 Service Unavailable.

I don't understand why squid is not writing anything to the helper 
script, to try to match the test2 acl.


I would appreciate some help to figure this out, I'm out of ideas :-/

Best regards.

--
Francois Goudal
Satcom1
Denmark - France - Sweden - Canada
Phone: +33170031923 (NEW)
Fax: +33170031922 (NEW)
Mob: +33626432204
e-mail: [EMAIL PROTECTED]
www.satcom1.com
Inmarsat: ISP 8422, PSA 3123
*Satcom1 hopes to see you at NBAA  2008, October 6th to 8th, Booth #1038*


Re: [squid-users] Cache_dir more than 10GB

2008-10-06 Thread Henrik Nordstrom
On sön, 2008-10-05 at 16:38 +0200, Itzcak Pechtalt wrote:
> When Squid reaches several million objects per cache dir, it starts
> to consume a lot of CPU, because every insertion and deletion of an object
> takes a long time.

Mine don't.

> On my Squid 80-100GB had the CPU consumption effect.

That's a fairly small cache.

The biggest cache I have been running was in the 1.5TB range, split over
a number of cache_dir, about 130GB each I think.

But it is important you keep the number of objects per cache_dir well
below 2^24. Preferably not more than 2^23.


What I think is that you got bitten by something other than cache size.

Regards
Henrik




Re: [squid-users] How to change TimeZone in SQUID configuration?

2008-10-06 Thread Amos Jeffries

lmenaria wrote:
Hello everyone, 


I have installed SQUID 2.7.4 and it's working fine. Now I need to change the
timezone from GMT to local, because my application is not working with GMT
time. So how can I set local time in the SQUID configuration?

Please let me know.


The default squid log format uses epoch time. There is no time zone involved.

For other options see:
http://www.squid-cache.org/Versions/v2/2.7/cfgman/logformat.html

note the %tl tag.
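
For example, a minimal sketch based on the default "squid" logformat,
with the epoch timestamp swapped for the %tl local-time tag (the format
name "localtimes" is illustrative):

logformat localtimes %tl %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
access_log /var/log/squid/access.log localtimes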

Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Is it possible to monitor Delay pools with MRTG?

2008-10-06 Thread Henrik Nordstrom
On Sun, 2008-10-05 at 09:04 +0200, Sommariva Graziano wrote:

> Is it possible to monitor Delay Pools with MRTG?

There is no SNMP MIB definition for the delay pools counters.

But in theory it's possible to collect the data using cachemgr and feed
it into mrtg or rrdtool... though it's probably about as easy to extend
the Squid MIB with delay pools data.
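
As a starting point for the cachemgr route (the "delay" page is present
when delay pools are compiled in; its exact layout varies by version, so
adapt any parsing to what your Squid actually prints):

  squidclient -h 127.0.0.1 -p 3128 mgr:delay

An MRTG external target would wrap that in a script printing the two
counter values MRTG expects, plus an uptime and a name line.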

Regards
Henrik





Re: [squid-users] Reverse Proxy and Googlebot

2008-10-06 Thread Amos Jeffries

Simon Waters wrote:

Seeing issues with Googlebots retrying on large PDF files.

Apache logs a 200 for the HTTP 1.0 requests.

Squid logs an HTTP 1.1 request that looks to have stopped early (3MB out of 
13MB).


This pattern is repeated with slight variation in the amount of data served to 
the Googlebots, and after about 14 attempts it gives up and goes away.


Anyone else seeing same?



Not seeing this, but do you have correct Expires: and Cache-Control 
headers on those .pdf files? And is Googlebot not obeying them?
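
One quick way to check what headers are actually served for such a file
(host and URL here are illustrative):

  squidclient -h your.proxy.example -p 80 -m HEAD http://webserver.example.com/big.pdf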


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Re: Adding Another HardDisk and the read_only option

2008-10-06 Thread Amos Jeffries

Rafael Gomes wrote:

Well, would any of the algorithms help in this case?

If I add another disc, must I choose the same size?


No, it doesn't have to be. At worst the large one gets more content and 
faster turnover, and the small one is used as a backup when the large 
one fills up.


Amos



On Sat, Oct 4, 2008 at 7:49 PM, RW <[EMAIL PROTECTED]> wrote:

On Thu, 2 Oct 2008 12:09:09 +0300
"Mr. Issa\(*\)" <[EMAIL PROTECTED]> wrote:


Hello mates,

Well I have added two separate hard disks (200GB hd and 500GB hd) and
added them in the cache_dir, but I noticed that squid uses the hard disk
with MORE disk space and leaves the second one empty... why?

There are two store_dir_select_algorithm settings: round-robin and the
default least-load. The latter is supposed to select the cache with the
least load, but on relatively lightly-loaded systems it picks the one
with the most free space, so you expect the 500GB drive to fill up to
300GB or so before the smaller one starts getting any.

My memory is a little vague here, but my recollection is that neither
algorithm works optimally when the caches are of different sizes.
I think round-robin puts equal amounts in all caches, without
weighting them; least-load always favours the larger cache
because the low-water mark gives it more free space.

If you want to tweak it once all the caches have filled, you can play
around with the min and max object size for each cache to balance them
out.
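
As an illustration only (paths, sizes and the 256KB threshold are made
up), capping object size on the smaller disk steers large objects to the
bigger one:

# small disk: only takes objects up to 256KB
cache_dir aufs /cache_small 180000 16 256 max-size=262144
# large disk: no cap, so big objects can only land here
cache_dir aufs /cache_big 450000 16 256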



--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


RE: [squid-users] Squid with webwasher using NTLM authentication

2008-10-06 Thread Henrik Nordstrom
On Mon, 2008-10-06 at 10:59 +0200, NGUYEN DANG LUAN, Eric wrote:
> I've tried almost all options for cache_peer but it doesn't seem to work. Is 
> it a bug in Squid?

Did you try login=PASS using squid-2.7?
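
That is, something like this (adapting the cache_peer line from Eric's
earlier message):

  cache_peer comp parent 3128 3130 no-query default login=PASS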

Regards
Henrik




[squid-users] Reverse Proxy and Googlebot

2008-10-06 Thread Simon Waters
Seeing issues with Googlebots retrying on large PDF files.

Apache logs a 200 for the HTTP 1.0 requests.

Squid logs an HTTP 1.1 request that looks to have stopped early (3MB out of 
13MB).

This pattern is repeated with slight variation in the amount of data served to 
the Googlebots, and after about 14 attempts it gives up and goes away.

Anyone else seeing same?



Re: [squid-users] Raid 0 vs Two cache_dir

2008-10-06 Thread Matus UHLAR - fantomas
On 05.10.08 12:31, Rafael Gomes wrote:
> I have two SCSI discs. I can set up a single cache_dir on a RAID 0,
> which will improve writes, or I can set up two cache_dirs, one per disc.
> 
> What is better?
> 
> Are there any documents about this? Like comparisons and other
> things like this.

Have you even tried reading the FAQ? Doesn't
http://wiki.squid-cache.org/SquidFaq/RAID say it all?

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Boost your system's speed by 500% - DEL C:\WINDOWS\*.*


Re: [squid-users] Cache_dir more than 10GB

2008-10-06 Thread Matus UHLAR - fantomas
> On Mon, Oct 6, 2008 at 8:49 AM, Francois Cami <[EMAIL PROTECTED]> wrote:
> > I would not run an ext3 filesystem with data=writeback. noatime and
> > nodiratime provide a welcome boost by eliminating unneeded writes,
> > however writeback is not {powerfailure, system crash}-safe. If you
> > value your time (especially the time spent putting systems back up
> > after a system failure), you should use data=ordered.

On 06.10.08 09:19, Kinkie wrote:
> That's of course a possibility.
> It all boils down to the usual performance vs. resiliency tradeoff,
> and it depends on each sysadmin's unique operational situation.

Yes, but using data=writeback is not tuning, it's risk-taking. Using it
on a Squid cache dir may require cleaning the cache_dir after each crash;
otherwise you risk serving invalid data.
-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Micro$oft random number generator: 0, 0, 0, 4.33e+67, 0, 0, 0...


RE: [squid-users] Squid with webwasher using NTLM authentication

2008-10-06 Thread NGUYEN DANG LUAN, Eric
I've tried almost all options for cache_peer but it doesn't seem to work. Is it 
a bug in Squid?

Eric NGUYEN DANG LUAN


-Original Message-
From: NGUYEN DANG LUAN, Eric [mailto:[EMAIL PROTECTED] 
Sent: Monday, 6 October 2008 09:29
To: Henrik Nordstrom
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Squid with webwasher using NTLM authentication


>> When a user is connected directly to webwasher it works. He is authenticated 
>> correctly (I can see that thanks to the logs).
>> But once I implement a Squid cache server, it doesn't work. My user can't be 
>> authenticated.

>Have you told Squid to trust the webwasher proxy with proxy login credentials? 
>See cache_peer directive.

I'm currently using this line:
cache_peer comp parent 3128 3130 no-query default
For the moment there are no login credentials. I'm gonna check this.

Regards,
NGUYEN DANG LUAN Eric

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 4 October 2008 16:14
To: NGUYEN DANG LUAN, Eric
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid with webwasher using NTLM authentication

On Fri, 2008-10-03 at 10:17 +0200, NGUYEN DANG LUAN, Eric wrote:

> I'm using squid as a cache server working with webwasher (proxy + 
> authentication + webpage filter). Here's the context:
>
>   User's computer <---> Squid <---> Webwasher <---> Internet
>                                        |
>                                        | Authentication
>                                        | (using NTLM)
>                                        |
>                                    NTLM Agent
> 
> When a user is connected directly to webwasher it works. He is authenticated 
> correctly (I can see that thanks to the logs).
> But once I implement a Squid cache server, it doesn't work. My user can't be 
> authenticated.

Have you told Squid to trust the webwasher proxy with proxy login credentials? 
See cache_peer directive.

> Does anyone have an idea? I'm using squid 2.6 running on a Red Hat Linux 5 
> server.

Maybe you need to upgrade to 2.7, but it depends on which exact 2.6 release you 
are using... see below.

> Right now I'm trying squid 3 but it doesn't seem to work either.

squid-3.0 does not support forwarding of NTLM authentication as it does not yet 
implement the required workarounds to Microsoft HTTP protocol violations needed 
to support NTLM forwarding.

Regards
Henrik


Re: [squid-users] Problem with Trillian

2008-10-06 Thread Matus UHLAR - fantomas
On 03.10.08 07:40, fbaiao wrote:
> My squid is blocking Trillian and I'm not finding the reason.

Squid is an HTTP proxy. Unless you really have a reason, don't use it for
Trillian and other protocols. Connect Trillian directly to the destination
ports.
-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
10 GOTO 10 : REM (C) Bill Gates 1998, All Rights Reserved!


Re: [squid-users] Slow for one user, fast for everyone else

2008-10-06 Thread Pieter De Wit

Hi JL,

Does your server use DNS in its logging? Perhaps it's reverse DNS?

If he downloads a big file, does the speed pick up?

Cheers,

Pieter

JL wrote:

I have a server setup which provides an anonymous proxy service to
individuals across the world. I have one specific user that is
experiencing very slow speeds. Other users performing the very same
activities do not experience the slow speeds, myself included. I asked
the slow user to do traceroutes and it appeared there were no network
routing issues but for some reason it is VERY slow for him to the
point of being unusable. The slow user can perform the same exact
activities perfectly fine using another proxy service but with my
proxy it is too slow.

Any help is appreciated.
  




[squid-users] Slow for one user, fast for everyone else

2008-10-06 Thread JL
I have a server setup which provides an anonymous proxy service to
individuals across the world. I have one specific user that is
experiencing very slow speeds. Other users performing the very same
activities do not experience the slow speeds, myself included. I asked
the slow user to do traceroutes and it appeared there were no network
routing issues but for some reason it is VERY slow for him to the
point of being unusable. The slow user can perform the same exact
activities perfectly fine using another proxy service but with my
proxy it is too slow.

Any help is appreciated.


Re: [squid-users] Can someone help me block samba users at a particular time.

2008-10-06 Thread Avinash Rao
I went through the documentation. I need help in installing the
auth_proxy module. I have not installed squid from Synaptic Manager; I
did it manually, so the helpers directory is missing on my system and
I am not able to find the squid authenticators.
Is there any way I can get these?

On Mon, Oct 6, 2008 at 10:35 AM, Avinash Rao <[EMAIL PROTECTED]> wrote:
>
> I went through the documentation. I need help in installing the auth_proxy 
> module. I didn't install squid from Synaptic Manager; I did it manually, so 
> the helpers directory is missing on my system and I am not able to find the 
> squid authenticators.
>
> Is there any way I can get these?
>
>
>
>
>
>
> On Thu, Oct 2, 2008 at 10:24 AM, Avinash Rao <[EMAIL PROTECTED]> wrote:
>>
>> Thanks, and I will check it today.
>>
>> On Thu, Oct 2, 2008 at 9:09 AM, Amos Jeffries <[EMAIL PROTECTED]> wrote:
>> >
>> > > Amos,
>> > >
>> > > Thank you for the information. I will go through the doc, test it and get
>> > > back if necessary.
>> > > If i wrote my requirement right in my last email, the samba users can get
>> > > access to internet only between 18:00 - 20:00 Hrs everyday.
>> >
>> > Ah, sorry, you wrote it right. I read it wrong.
>> >
>> > The http_access line should be:
>> >  http_access deny !deadHours sambaUsers
>> >
>> > and the name makes better sense being okayHours instead of deadHours.
>> >
>> > Amos
>> >
>> > >
>> > > Thanks again
>> > > Avinash
>> > >
>> > > On Thu, Oct 2, 2008 at 7:42 AM, Amos Jeffries <[EMAIL PROTECTED]>
>> > > wrote:
>> > >
>> > >> > Hi all,
>> > >> >
>> > >> > I have configured the latest version of squid on Ubuntu Studio 8.0 -
>> > >> > AMD 64bit. I have also configured samba.
>> > >> > I am in need of blocking the samba users from accessing the internet
>> > >> > anytime except 18:00 - 20:00 Hrs every day. How do I do this?
>> > >> > The samba is configured as a PDC with WinXP clients.
>> > >> >
>> > >>
>> > >> Standard samba config.
>> > >>  http://wiki.squid-cache.org/SquidFaq/ProxyAuthentication
>> > >>
>> > >> Then this at the appropriate place of your config:
>> > >>
>> > >> acl sambaUsers proxy_auth REQUIRED
>> > >> acl deadHours time 18:00-20:00
>> > >> http_access deny deadHours sambaUsers
>> > >>
>> > >>
>> > >> Amos
>> > >>
>> > >>
>> > >
>> >
>> >
>
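
Putting Amos's correction together, the relevant squid.conf fragment
ends up as (a sketch using the ACL names from this thread):

acl sambaUsers proxy_auth REQUIRED
acl okayHours time 18:00-20:00
http_access deny !okayHours sambaUsers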


RE: [squid-users] Squid with webwasher using NTLM authentication

2008-10-06 Thread NGUYEN DANG LUAN, Eric

>> When a user is connected directly to webwasher it works. He is authenticated 
>> correctly (I can see that thanks to the logs).
>> But once I implement a Squid cache server, it doesn't work. My user can't be 
>> authenticated.

>Have you told Squid to trust the webwasher proxy with proxy login credentials? 
>See cache_peer directive.

I'm currently using this line:
cache_peer comp parent 3128 3130 no-query default
For the moment there are no login credentials. I'm gonna check this.

Regards,
NGUYEN DANG LUAN Eric

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 4 October 2008 16:14
To: NGUYEN DANG LUAN, Eric
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid with webwasher using NTLM authentication

On Fri, 2008-10-03 at 10:17 +0200, NGUYEN DANG LUAN, Eric wrote:

> I'm using squid as a cache server working with webwasher (proxy + 
> authentication + webpage filter). Here's the context:
>
>   User's computer <---> Squid <---> Webwasher <---> Internet
>                                        |
>                                        | Authentication
>                                        | (using NTLM)
>                                        |
>                                    NTLM Agent
> 
> When a user is connected directly to webwasher it works. He is authenticated 
> correctly (I can see that thanks to the logs).
> But once I implement a Squid cache server, it doesn't work. My user can't be 
> authenticated.

Have you told Squid to trust the webwasher proxy with proxy login credentials? 
See cache_peer directive.

> Does anyone have an idea? I'm using squid 2.6 running on a Red Hat Linux 5 
> server.

Maybe you need to upgrade to 2.7, but it depends on which exact 2.6 release you 
are using... see below.

> Right now I'm trying squid 3 but it doesn't seem to work either.

squid-3.0 does not support forwarding of NTLM authentication as it does not yet 
implement the required workarounds to Microsoft HTTP protocol violations needed 
to support NTLM forwarding.

Regards
Henrik


Re: [squid-users] Cache_dir more than 10GB

2008-10-06 Thread Kinkie
On Mon, Oct 6, 2008 at 8:49 AM, Francois Cami <[EMAIL PROTECTED]> wrote:
> On Mon, Oct 6, 2008 at 8:28 AM, Kinkie <[EMAIL PROTECTED]> wrote:
>> On Mon, Oct 6, 2008 at 5:01 AM, Rafael Gomes <[EMAIL PROTECTED]> wrote:
>>> With ReiserFS or XFS, must we set these options too?
>>>
>>> options: noatime, nodiratime, data=writeback
>>
>> data=writeback is ext3-specific; the others should be available, and if
>> so they should be specified for maximum performance.
>> In the case of reiserfs you should also specify notail.
>> There are some hints at http://wiki.squid-cache.org/BestOsForSquid
>
> I would not run an ext3 filesystem with data=writeback. noatime and
> nodiratime provide a welcome boost by eliminating unneeded writes,
> however writeback is not {powerfailure, system crash}-safe. If you
> value your time (especially the time spent putting systems back up
> after a system failure), you should use data=ordered.

That's of course a possibility.
It all boils down to the usual performance vs. resiliency tradeoff,
and it depends on each sysadmin's unique operational situation.
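
For example, an /etc/fstab line along those lines (device and mount
point are made up):

/dev/sdb1  /var/spool/squid  ext3  rw,noatime,nodiratime,data=ordered  0 0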



-- 
/kinkie