[squid-users] Regarding wccp and tproxy

2010-02-10 Thread senthil

Hi

We have configured squid with wccp and tproxy.

It has been working fine until now, but suddenly some IPs are not able to browse.

The requests can be seen in the access log, but the clients are not able to browse.

Meanwhile, https://www.google.com works but http://www.google.com does not.


Please do me a favour.

Thanks and regards

senthilkumar


RE: [squid-users] RE: libsmb/ntlmssp.c:ntlmssp_update(334)

2010-02-10 Thread Dawie Pretorius
Hello Amos

So will the problem be solved by adding squid_kerb_auth to squid.conf, so
that when the client asks for Kerberos auth it can be supplied?



Dawie Pretorius

Dawie Pretorius wrote:
> Is it possible that someone can get back to me on this issue, thanks
> 
> Dawie Pretorius
> 
> Hello
> 
> Getting this error sometimes in my cache.log:
> 
> [2010/01/28 14:23:58, 1] libsmb/ntlmssp.c:ntlmssp_update(334)
>   got NTLMSSP command 3, expected 1
> 2010/01/28 14:25:51| AuthConfig::CreateAuthUser: Unsupported or 
> unconfigured/inactive proxy-auth scheme, ''
> 
> Gentoo squid-3.0.STABLE19 

Hi Dawie,

The "3" and "1" have been explained as the difference between NTLM vs 
Kerberos.

As far as I can tell "3" means Kerberos is being used. "1" is NTLM.
The ntlm_auth helper only checks NTLM and the squid_kerb_auth helper is 
needed instead for Kerberos.

Looks to me like the client is broken and using Kerberos when offered 
NTLM as the only available auth option.
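For reference, a minimal sketch of what offering Negotiate/Kerberos alongside NTLM might look like in squid.conf. The helper path and service principal below are placeholders for illustration, not values from this thread:

```
# Hypothetical example: adjust path and principal to your installation
auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.example.com@EXAMPLE.COM
auth_param negotiate children 10
auth_param negotiate keep_alive on
```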

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
   Current Beta Squid 3.1.0.16



[squid-users] regarding squid with tproxy wccp

2010-02-10 Thread senthil

HI

I have installed TPROXY 4.

I have done all the prerequisites, such as compiling the kernel and
installing iptables 1.4, etc.

When I create the bridge, I am not able to browse from a private IP.

My network :

Internet ---> (eth1)squid machine(eth0)-->test client

eth1 - public IP

eth0 and test client IP - private, in the 172 range

But I am able to browse from a public IP, and the access is seen in Squid's log.

I followed the document at http://wiki.squid-cache.org/Features/Tproxy4



My iptable and ebtable rules:

iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1

iptables -t mangle -A DIVERT -j ACCEPT

iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT

iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3128

ebtables -t broute -A BROUTING -i eth0 -p ipv4 --ip-proto tcp
--ip-dport 80 -j redirect --redirect-target DROP

ebtables -t broute -A BROUTING -i eth1 -p ipv4 --ip-proto tcp
--ip-sport 80 -j redirect --redirect-target DROP


ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0  dev lo table 100

echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 1 > /proc/sys/net/ipv4/ip_forward

set net.ipv4.forwarding = 1
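As a hedged sketch of the kernel settings the TPROXY wiki page relies on, written out explicitly (interface names follow the poster's setup; whether every member interface needs this is an assumption worth checking):

```
# rp_filter may also need relaxing on the bridge member interfaces,
# not just lo, or spoofed-source replies can be dropped silently
echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter

# enable routing between the interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward
```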
Please help me. Thanks in advance.

Regards
senthilkumar


Re: [squid-users] Ongoing Running out of filedescriptors

2010-02-10 Thread Landy Landy
> As I said -
> 
> The running configure / make / compile environment has to
> be set to
> 64k file descriptors.  The build environment's max
> file descriptors
> are an overriding limit on the actual usable FDs, no matter
> what you
> set the configure maxfd value to.  If ulimit -n = 1024
> at configure
> time, that's what you're stuck at.
> 
> # ulimit -HSn 32768 (or 64k) ; ./configure (options...) ;
> make
>

I did just that. I created a script with ulimit -HSn 32768 at the beginning and 
reconfigured and installed. Now it looks like this:

2010/02/10 20:00:17| Starting Squid Cache version 3.0.STABLE21 for i686-pc-linux-gnu...
2010/02/10 20:00:17| Process ID 15870
2010/02/10 20:00:17| With 32768 file descriptors available

Let's test for a week or so and see if it doesn't fail.

Thanks for the help.


  


Re: [squid-users] killall -HUP squid

2010-02-10 Thread Jeff Peng
On Thu, Feb 11, 2010 at 4:02 AM, Riccardo Castellani
 wrote:
> so now I cannot use the "kill -HUP" command?

from squid -h:

-k reconfigure|rotate|shutdown|interrupt|kill|debug|check|parse

Although "kill -HUP pid" may work, you really should use "squid -k reconfigure".
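A quick sketch of the usual workflow (nothing here beyond what squid -h already lists):

```
squid -k parse        # optional: check squid.conf for errors first
squid -k reconfigure  # re-read squid.conf without stopping the proxy
squid -k rotate       # rotate logs
squid -k shutdown     # graceful stop
```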

-- 
Jeff Peng
Email: jeffp...@netzero.net
Skype: compuperson


Re: [squid-users] Ongoing Running out of filedescriptors

2010-02-10 Thread Kinkie
>> After restarting, I had 1024 descriptors, no matter that I compiled with
>> 64k FDs.
>
> As I said -
>
> The running configure / make / compile environment has to be set to
> 64k file descriptors.  The build environment's max file descriptors
> are an overriding limit on the actual usable FDs, no matter what you
> set the configure maxfd value to.  If ulimit -n = 1024 at configure
> time, that's what you're stuck at.
>
> # ulimit -HSn 32768 (or 64k) ; ./configure (options...) ; make

Remember to also set the ulimit when launching squid. Otherwise the OS
is enforcing the limit and there's nothing squid can do.
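The point about the launch environment can be seen from any process: a child inherits the launcher's RLIMIT_NOFILE, so whatever shell or init script starts squid sets the ceiling. A small generic sketch (Python used purely for illustration, not part of Squid):

```python
import resource

# RLIMIT_NOFILE is what "ulimit -n" reports: the soft limit applies to
# this process and is inherited by any children it spawns (e.g. squid
# started from this shell/script); the hard limit is the most the soft
# limit can be raised to without privileges.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit = {soft}, hard limit = {hard}")
```

If the script that launches squid reports soft = 1024 here, squid will be stuck at 1024 descriptors no matter what it was compiled with.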

-- 
/kinkie


Re: [squid-users] NTLM Authentication and Connection Pinning problem

2010-02-10 Thread Amos Jeffries
On Wed, 10 Feb 2010 12:53:16 -0600, Jeff Foster  wrote:
> There appears to be a problem with the connection pinning in both
> versions squid-2.7.stable7 and
>  squid-3.1.0.7. I have some network captures that show the client
> (IE6) creating multiple TCP
> connections to the squid proxy and the proxy creating multiple TCP
> connections to an IIS server.
> The initial couple of requests are OK, but after that the mapping from
> input TCP connection to output TCP connection is broken: requests are
> switching outbound TCP connections, and this is breaking the NTLM
> authentication handshake.
> 
> I can supply my squid configuration files if needed. I do have NTLM
> authentication enabled
> in both configurations.
> 
> I have tcpdump traces for both versions available.
> 
> In the 3.1 dump summary, note that the client packet 207 is the server
> packet 210.
> The server should be on port 37159 and it is on port 37161.
> 
> Can a developer look at this?

There are quite a few pinning issues resolved since 3.1.0.7 (beta) was
released.
Try 3.1.0.16 beta. 

Amos


Re: [squid-users] Is there ICAP evidence log in any log files?

2010-02-10 Thread Amos Jeffries
On Wed, 10 Feb 2010 10:51:16 -0600, Luis Daniel Lucio Quiroz
 wrote:
> On Tuesday, 9 February 2010 22:57:58, Amos Jeffries wrote:
>> Henrik Nordström wrote:
>> > On Tue, 2010-02-09 at 15:18 -0600, Luis Daniel Lucio Quiroz wrote:
>> >> On Wednesday, 30 July 2008 22:24:35, Henrik Nordstrom wrote:
>> >>> On Thu, 2008-07-31 at 11:26 +0900, S.KOBAYASHI wrote:
>>  Hello developer,
>>  
>>  I'm looking for evidence of accessing the ICAP server. Is there a log
>>  of it in any log files such as access.log or cache.log?
>> >>> 
>> >>> The ICAP server should have logs of its own.
>> >>> 
>> >>> There is no information in the Squid logs on which ICAP servers were
>> >>> used for the request/response.
>> >>> 
>> >>> Regards
>> >>> Henrik
>> >> 
>> >> I wonder if using squidclient mgr:xxXX we could see some info about
>> >> ICAP, where?
>> > 
>> > Seems not.
>> > 
>> > You can however increase the debug level of section 93 to have ICAP
>> > spew
>> > out lots of information in cache.log.
>> > 
>> > debug_options ALL,1 93,5
>> > 
>> > should do the trick I think.
>> > 
>> > Regards
>> > Henrik
>> 
>> Squid-3.1 and later provide some little ICAP service logging in
>> access.log.
>> http://www.squid-cache.org/Doc/config/logformat/
>> 
>> 
>> Amos
> Do you think we could backport that access.log logging capability? I may
> do that, but just tell me which file to backport.
> 
> TIA
> 
> LD

It was quite a major change for only a small benefit. It won't be done
upstream, sorry.

3.1 is out as stable in less than 60 days, though, if everything goes to
plan. (One blocker bug to go. Major, but solo.)

Amos



Re: [squid-users] squid + dansguardian + auth

2010-02-10 Thread Amos Jeffries
On Wed, 10 Feb 2010 14:05:14 + (WET), Bruno Ricardo Santos
 wrote:
> X-Copyrighted-Material
> 

Oh, lucky you did not add your "nobody is allowed to read this" disclaimer
as well. I can finally answer this request without getting myself into
trouble publicly... ;)

> 
> Hi all!
> 
> I'm having some trouble configuring squid with auth + dansguardian
> content filter.
> 
> It's all configured, but when I try to browse, I get an error:
> 
> Dansguardian 400
> URL malformed
> 
> Does authentication (and the dansguardian filter) only work with a
> transparent proxy, or do I have some configuration wrong?

Auth does NOT work against transparent proxies.
Is your Squid doing "transparent" NAT interception or TPROXY?

> 
> If I configure the browser to access the squid port directly,
> everything works perfectly...

Yes. Good. Auth works in regular proxy configuration.

> 
> The problem, as I see it, is about the IP dansguardian passes to squid.
> After a request, dansguardian gives squid the local machine IP.

Yes. IMHO the documented config with DG between the client and Squid is
not as good as DG between squid and the Internet.

Try reversing the order of the two, so that Squid is being contacted by
the visitors, and DG does its filtering before Squid stores the replies.
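One hedged way to express that ordering in squid.conf is to chain Squid to DG as a parent proxy. The host and port below are placeholders for illustration, not values from the thread:

```
# Hypothetical: DansGuardian listening on the same box, port 8080
cache_peer 127.0.0.1 parent 8080 0 no-query no-digest default
never_direct allow all
```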

> 
> If I change some options in dansguardian, such as originalip, I get the
> error above!

Which is produced by some error in DG. Nothing to do with Squid.

> 
> I've tried messing around with the following options:
> 
> forwardedfor
> 
> usexforwardedfor
> 
> and in squid 
> 
> follow_x_forwarded_for
> 
> but i had no luck

Auth is not directly related to the connecting IP unless you have turned
on ACLs to limit the number of connections per IP. Doing so would block
most of your users going through DG.

> 
> Any idea ?

Auth happens as a challenge reply to the requests which are not already
authenticated.

Whether they will work through DG depends on what type of authentication
you are doing.

Amos



Re: [squid-users] cache manager access from web

2010-02-10 Thread Chris Robertson

J. Webster wrote:

Doesn't the fact that the manager needs a password in previous config lines 
mean that they can't access it?
  


Fair enough, if you are content with that.


the ncsa_users is only for http access?
  


The cachemgr interface is accessed via HTTP.  It uses a specific request 
method (identified by the ACLs as manager), but it is a subset of HTTP.


Changing the access rules like...

http_access allow manager localhost
http_access allow manager cacheadmin
http_access deny manager
http_access allow ncsa_users

...prevents those who are allowed to utilize your cache from even 
attempting access to your cachemgr interface (unless they are surfing 
from localhost, or the IP identified by the cacheadmin ACL).  The 
default squid.conf has some further denies (such as preventing CONNECT 
requests to non-SSL ports) that are also missing from this configuration 
snippet, so this is not the only avenue for abuse.


Chris



Re: [squid-users] high load issues

2010-02-10 Thread Amos Jeffries
On Wed, 10 Feb 2010 11:36:40 -0500, Justin Lintz  wrote:
> Squid ver: squid-2.6.STABLE21-3
> The server is a xen virtual with 6GB of ram available to it.
> 
> relevant lines in Squid.conf:
> 
> hierarchy_stoplist cgi-bin ?
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apache
> cache_mem 4096 MB
> maximum_object_size 8192 KB
> maximum_object_size_in_memory 4096 KB
> cache_swap_low 95
> cache_swap_high 96
> cache_dir aufs /www/apps/squid/var/cache 4096 16 256
> logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh %tr

NP: the 'combined' format is hard coded by default. No need to redefine
it. If you have altered something then change the format name as well to
avoid confusion and any potential bad stuff.

> access_log /www/logs/squid/access.log combined
>  cache_log /www/logs/squid/cache.log
>  cache_store_log /www/logs/squid/store.log
> debug_options ALL,1 33,2
> refresh_pattern ^ftp:     1440  20%  10080
> refresh_pattern ^gopher:  1440  0%   1440
> refresh_pattern .         0     20%  4320
> negative_ttl 0
> collapsed_forwarding on
> refresh_stale_hit 5 seconds
> half_closed_clients off
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
> acl PURGE method PURGE
> http_access allow manager localhost
> http_access deny manager
> http_access deny PURGE

NP: to block PURGE requests leave it unconfigured. The squid default is
not to turn on the PURGE code at all unless there is mention of it in the
config file.

> http_access allow localhost
> http_access allow all

Why?

> http_reply_access allow all
> icp_access allow all
> httpd_suppress_version_string on
> cachemgr_passwd none config
> error_directory /www/apps/squid/errors
> coredump_dir /var/spool/squid
> minimum_expiry_time 15 seconds
> max_filedesc 8192
> 
> Symptoms:
> - High load avg on box ranging from 6-10 during traffic hours
> - CPU iowait time during times will be between 20-50%
> - SO_FAIL status codes seen in store.log
>  - MaintainSwapSpace is continually running under a second. This
> appears to be normal though looking at our dev and stage squid setups
> which have no load.
>  - From squidaio_counts, seeing the Queue spike upwards to 200 or
> more.  I saw a mention in the O'Reilly book that if this number is greater
> than 5x the number of IO threads, then squid is overworked.
> - Cache_dir storage size is constantly at the cache_swap_low value
> (94%).  Does this mean squid is continually garbage collecting and
> possibly causing the high IO?  Originally we had the number at 90, but
> after reading some threads, adjusted the number to 94 for the low and
> 95 for the high hoping to reduce IO with smaller amount of data being
> garbage collected.  This change didn't have any impact
> - Saw a couple of warnings in cache.log saying
> "squidaio_queue_request: WARNING - Disk I/O overloading"
> - High number of create.select_fail events in store_io screen in the
> cache manager.  Seeing this number at 12% of the total IO calls.
> 
> From reading around the list of people with similar issues,  I see one
> suggestion we will implement next will be configuring a second
> cache_dir to increase the number of threads available for IO.
> 
> I wanted to know if you had any other suggestions for tweaks that
> could be made that would hopefully alleviate the load on the box.
> 

So what is the request/second load on Squid?
Is RAID involved?


You only have 4GB of storage. That's just a little bit above trivial for
Squid.

With 4GB of RAM cache and 4GB of disk cache, I'd raise the maximum object
size a bit, or at least remove the maximum in-memory object size limit. It's
forcibly pushing half the objects to disk when there is just as much space
in RAM to hold them.
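As a hedged illustration of that suggestion (the numbers below are examples for this 4GB/4GB layout, not recommendations from the thread):

```
# keep cache_mem as-is, but stop forcing mid-sized objects to disk
cache_mem 4096 MB
maximum_object_size 16384 KB            # raised from 8192 KB
maximum_object_size_in_memory 8192 KB   # raised so objects up to the old
                                        # on-disk max can stay in RAM
```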

Amos



Re: [squid-users] killall -HUP squid

2010-02-10 Thread Riccardo Castellani

so now I cannot use the "kill -HUP" command?
- Original Message - 
From: "Luis Daniel Lucio Quiroz" 

To: 
Sent: Wednesday, February 10, 2010 8:11 PM
Subject: Re: [squid-users] killall -HUP squid


On Wednesday, 10 February 2010 13:05:44, Riccardo Castellani wrote:
> What command can I use to permit Squid 2.7.STABLE3-4.1 to reload the
> squid.conf file?!
> 
> Previously I used killall -HUP squid; with squid -k shutdown I can only
> shut the process down, but I want to reload the Squid config file
> automatically with a single command.
squid -k reconfigure 



Re: [squid-users] Ongoing Running out of filedescriptors

2010-02-10 Thread George Herbert
On Wed, Feb 10, 2010 at 8:50 AM, Luis Daniel Lucio Quiroz
 wrote:
> On Tuesday, 9 February 2010 19:34:13, Amos Jeffries wrote:
>> On Tue, 9 Feb 2010 17:39:37 -0600, Luis Daniel Lucio Quiroz
>>
>>  wrote:
>> > On Tuesday, 9 February 2010 17:29:23, Landy Landy wrote:
>> >> I don't know what to do with my current squid, I even upgraded to
>> >> 3.0.STABLE21 but the problem persists every three days:
>> >>
>> >> /usr/local/squid/sbin/squid -v
>> >> Squid Cache: Version 3.0.STABLE21
>> >> configure options:  '--prefix=/usr/local/squid'
>>
>> '--sysconfdir=/etc/squid'
>>
>> >> '--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp'
>> >> '--enable-default-err-language=Spanish' '--enable-linux-netfilter'
>> >> '--disable-ident-lookups' '--localstatedir=/var/log/squid3.1'
>> >> '--enable-stacktraces' '--with-default-user=proxy' '--with-large-files'
>> >> '--enable-icap-client' '--enable-async-io' '--enable-storeio=aufs'
>> >> '--enable-removal-policies=heap,lru' '--with-maxfd=32768'
>> >>
>> >> I built with --with-maxfd=32768 option but, when squid is started it
>>
>> says
>>
>> >> is working with only 1024 filedescriptor.
>> >>
>> >> I even added the following to the squid.conf:
>> >>
>> >> max_open_disk_fds 0
>> >>
>> >> But it hasn't resolved anything. I'm using squid on Debian Lenny. I
>>
>> don't
>>
>> >> know what to do. Here's part of cache.log:
>> 
>>
>> > You got a bug! That behaviour happens when a coredump occurs in squid;
>> > please file a ticket with gdb output, and raise debug to maximum if you
>> > can.
>>
>> WTF are you talking about Luis? None of the above problems have anything
>> to do with crashing Squid.
>>
>> They are in order:
>>
>> "WARNING! Your cache is running out of filedescriptors"
>>  * either the system limits being set too low during run-time operation.
>>  * or the system limits were too small during the configure and build
>> process.
>>    -> Squid may drop new client connections to maintain lower than desired
>> traffic levels.
>>
>>   NP: patching the kernel headers to artificially trick squid into
>> believing the kernel supports more by default than it does is not a good
>> solution. The ulimit utility exists for that purpose instead.
>> 
>>
>>
>> "Unsupported method attempted by 172.16.100.83"
>>  * The machine at 172.16.100.83 is pushing non-HTTP data into Squid.
>>   -> Squid will drop these connections.
>>
>> "clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: (2) No such file
>> or directory"
>>  * NAT interception is failing to locate the NAT table entries for some
>> client connection.
>>  * usually due to configuring the same port with "transparent" option and
>> regular traffic.
>>  -> for now Squid will treat these connections as if the directly
>> connecting box was the real client. This WILL change in some near future
>> release.
>>
>>
>> As you can see in none of those handling operations does squid crash or
>> core dump.
>>
>>
>> Amos
>
>
> Amos, that is exactly the behaviour I had with a bug. Don't you remember
> the DIGEST bug that makes squid restart internally? HNO helped me, but the
> fact is that this is a symptom of a coredump internal restart, because he
> complains his squid is already compiled with more than 1024.
>
> After restarting, I had 1024 descriptors, no matter that I compiled with
> 64k FDs.

As I said -

The running configure / make / compile environment has to be set to
64k file descriptors.  The build environment's max file descriptors
are an overriding limit on the actual usable FDs, no matter what you
set the configure maxfd value to.  If ulimit -n = 1024 at configure
time, that's what you're stuck at.

# ulimit -HSn 32768 (or 64k) ; ./configure (options...) ; make



-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] high load issues

2010-02-10 Thread Mike Rambo

Justin Lintz wrote:

don't top-post.

we have several heavily loaded squids, and we realized that sometimes internet
surfing is slow; we've discovered that it is because of IO (as you see in your
top command, more than 1% IO wait), so we purge our cache often so it does not
reach the cache_swap_high percentage



The iowait time is more than 1%, at times between 20-50%.  We've tried
purging the cache a few times but that only appears to give temporary
relief to the issue.  I was looking to tune our configuration more
before ruling out the need for more caching servers or looking into
faster disks for just the cache.


Here is the Novell article re: 'virtualization advisability' that I 
mentioned in my previous post.


http://www.zdnetasia.com/news/software/0,39044164,62060848,00.htm


--
Mike Rambo


NOTE: In order to control energy costs the light at the end
of the tunnel has been shut off until further notice...


Re: [squid-users] killall -HUP squid

2010-02-10 Thread Luis Daniel Lucio Quiroz
On Wednesday, 10 February 2010 13:05:44, Riccardo Castellani wrote:
> What command can I use to permit Squid 2.7.STABLE3-4.1 to reload the
> squid.conf file?!
> 
> Previously I used killall -HUP squid; with squid -k shutdown I can only
> shut the process down, but I want to reload the Squid config file
> automatically with a single command.
squid -k reconfigure


Re: [squid-users] squid + dansguardian + auth

2010-02-10 Thread Jose Lopes
Hi!

Which version of squid are you using?

Regards
Jose

Jose Ildefonso Camargo Tolosa wrote:
> Hi!
>
> On Wed, Feb 10, 2010 at 9:35 AM, Bruno Ricardo Santos
>  wrote:
>   
>> X-Copyrighted-Material
>>
>>
>> Hi all!
>>
>> I'm having some trouble configuring squid with auth + dansguardian content 
>> filter.
>>
>> It's all configured, but when I try to browse, I get an error:
>>
>> Dansguardian 400
>> URL malformed
>>
>> Does authentication (and the dansguardian filter) only work with a
>> transparent proxy, or do I have some configuration wrong?
>>
>> If I configure the browser to access the squid port directly, everything
>> works perfectly...
>> 
>
> Ok.
>
>   
>> The problem, as I see it, is about the IP dansguardian passes to squid.
>> After a request, dansguardian gives squid the local machine IP.
>> 
>
> Did you enable the auth helpers on dansguardian?  Also, if squid
> works correctly, the problem is on dansguardian and is thus
> off-topic for this list; you should write there.  Nevertheless, we
> have no problem helping anyway.
>
>   
>> If I change some options in dansguardian, such as originalip, I get the
>> error above!
>>
>> I've tried messing around with the following options:
>>
>> forwardedfor
>>
>> usexforwardedfor
>> 
> On Dansguardian: No, and Yes, but this is another issue.
>
>   
>> and in squid
>>
>> follow_x_forwarded_for
>> 
>
> Yeah, via ACL, only accept these from the dansguardian box (localhost,
> most likely).
>
> I hope this helps,
>
> Ildefonso Camargo
>   


Re: [squid-users] high load issues

2010-02-10 Thread Mike Rambo

Justin Lintz wrote:

don't top-post.

we have several heavily loaded squids, and we realized that sometimes internet
surfing is slow; we've discovered that it is because of IO (as you see in your
top command, more than 1% IO wait), so we purge our cache often so it does not
reach the cache_swap_high percentage



The iowait time is more than 1%, at times between 20-50%.  We've tried
purging the cache a few times but that only appears to give temporary
relief to the issue.  I was looking to tune our configuration more
before ruling out the need for more caching servers or looking into
faster disks for just the cache.


FWIW...

You mentioned at the outset that this was a virtualized installation. I 
read a piece by Novell the other day that mentioned that IO was one of 
the factors that made virtualization inadvisable under certain 
conditions. As Squid is known to make heavy use of disk IO and you are 
having iowait problems it make the virtualized installation suspect to 
me (having read that piece). Perhaps others more knowledgeable can 
comment otherwise but absent that I would at least explore that avenue 
were it me.



--
Mike Rambo


NOTE: In order to control energy costs the light at the end
of the tunnel has been shut off until further notice...


Re: [squid-users] high load issues

2010-02-10 Thread Luis Daniel Lucio Quiroz
On Wednesday, 10 February 2010 12:49:47, you wrote:
> > don't top-post.
> > 
> > we have several heavily loaded squids, and we realized that sometimes
> > internet surfing is slow; we've discovered that it is because of IO (as
> > you see in your top command, more than 1% IO wait), so we purge our cache
> > often so it does not reach the cache_swap_high percentage
> 
> The iowait time is more than 1%, at times between 20-50%.  We've tried
> purging the cache a few times but that only appears to give temporary
> relief to the issue.  I was looking to tune our configuration more
> before ruling out the need for more caching servers or looking into
> faster disks for just the cache.

You may also change your filesystem (reiser4 or ext4),
make a bigger cache,
or use another removal policy.

If you have enough RAM, increase your memory cache and your
maximum_object_size_in_memory so your disk cache usage goes down.


[squid-users] killall -HUP squid

2010-02-10 Thread Riccardo Castellani
What command can I use to permit Squid 2.7.STABLE3-4.1 to reload the
squid.conf file?!


Previously I used killall -HUP squid; with squid -k shutdown I can only
shut the process down, but I want to reload the Squid config file
automatically with a single command.





Re: [squid-users] squid + dansguardian + auth

2010-02-10 Thread Jose Ildefonso Camargo Tolosa
Hi!

On Wed, Feb 10, 2010 at 9:35 AM, Bruno Ricardo Santos
 wrote:
> X-Copyrighted-Material
>
>
> Hi all!
>
> I'm having some trouble configuring squid with auth + dansguardian content 
> filter.
>
> It's all configured, but when I try to browse, I get an error:
>
> Dansguardian 400
> URL malformed
>
> Does authentication (and the dansguardian filter) only work with a
> transparent proxy, or do I have some configuration wrong?
>
> If I configure the browser to access the squid port directly, everything
> works perfectly...

Ok.

>
> The problem, as I see it, is about the IP dansguardian passes to squid.
> After a request, dansguardian gives squid the local machine IP.

Did you enable the auth helpers on dansguardian?  Also, if squid
works correctly, the problem is on dansguardian and is thus
off-topic for this list; you should write there.  Nevertheless, we
have no problem helping anyway.

>
> If I change some options in dansguardian, such as originalip, I get the
> error above!
>
> I've tried messing around with the following options:
>
> forwardedfor
>
> usexforwardedfor
On Dansguardian: No, and Yes, but this is another issue.

>
> and in squid
>
> follow_x_forwarded_for

Yeah, via ACL, only accept these from the dansguardian box (localhost,
most likely).
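A minimal sketch of that ACL arrangement in squid.conf, assuming DG runs on the same host (the ACL name is made up for illustration):

```
acl dg_host src 127.0.0.1
follow_x_forwarded_for allow dg_host
follow_x_forwarded_for deny all
```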

I hope this helps,

Ildefonso Camargo


[squid-users] NTLM Authentication and Connection Pinning problem

2010-02-10 Thread Jeff Foster
There appears to be a problem with the connection pinning in both
versions squid-2.7.stable7 and
 squid-3.1.0.7. I have some network captures that show the client
(IE6) creating multiple TCP
connections to the squid proxy and the proxy creating multiple TCP
connections to an IIS server.
The initial couple of requests are OK, but after that the mapping from
input TCP connection to output TCP connection is broken: requests are
switching outbound TCP connections, and this is breaking the NTLM
authentication handshake.

I can supply my squid configuration files if needed. I do have NTLM
authentication enabled
in both configurations.

I have tcpdump traces for both versions available.

In the 3.1 dump summary, note that the client packet 207 is the server
packet 210.
The server should be on port 37159 and it is on port 37161.

Can a developer look at this?

Jeff Foster
jfo...@gmail.com

Client
No.  Time      Src   Info
  7 0.001648  1916   GET http://simon/efms/ HTTP/1.0
 16 0.559067  1916   GET http://simon/efms/ HTTP/1.0, NTLMSSP_NEGOTIATE
 21 0.752159  1916   GET http://simon/efms/ HTTP/1.0, NTLMSSP_AUTH, User: WG
 42 1.576078  1917   GET http://simon/efms/ HTTP/1.0
 65 1.961280  1917   GET http://simon/efms/ HTTP/1.0, NTLMSSP_NEGOTIATE
 70 2.151384  1917   GET http://simon/efms/ HTTP/1.0, NTLMSSP_AUTH, User: WG
 85 2.991803  1918   GET http://simon/EFMS/efms.js HTTP/1.0
144 3.370616  1918   GET http://simon/EFMS/efms.js HTTP/1.0, NTLMSSP_NEGOTIA
157 3.560971  1918   GET http://simon/EFMS/efms.js HTTP/1.0, NTLMSSP_AUTH, U
163 3.780493  1918   GET http://simon/EFMS/efms.css HTTP/1.0
171 3.781469  1919   GET http://simon/Styles/perry_fix_font.css HTTP/1.0
174 3.781643  1920   GET http://simon/Styles/forms.css HTTP/1.0
179 3.782358  1921   GET http://simon/styles/dashboard.css HTTP/1.0
195 3.969630  1918   GET http://simon/javascript/std.js HTTP/1.0
207 4.161036  1919   GET http://simon/EFMS/efms.css HTTP/1.0, NTLMSSP_NEGOTI
212 4.162125  1920   GET http://simon/Styles/perry_fix_font.css HTTP/1.0, NT
215 4.163060  1921   GET http://simon/styles/dashboard.css HTTP/1.0, NTLMSSP
217 4.163214  1918   GET http://simon/javascript/std.js HTTP/1.0, NTLMSSP_NE
225 4.359340  1919   GET http://simon/EFMS/efms.css HTTP/1.0, NTLMSSP_AUTH,
227 4.359685  1920   GET http://simon/Styles/perry_fix_font.css HTTP/1.0, NT
235 4.361623  1921   GET http://simon/styles/dashboard.css HTTP/1.0, NTLMSSP
237 4.362001  1918   GET http://simon/javascript/std.js HTTP/1.0, NTLMSSP_AU
243 4.577293  1919   GET http://simon/Styles/forms.css HTTP/1.0, NTLMSSP_NEG
257 4.768473  1920   GET http://simon/Styles/forms.css HTTP/1.0, NTLMSSP_AUT


Squid Server

No.  Time      Src   Info
 12 0.369931  37156  GET /efms/ HTTP/1.0
 18 0.559496  37156  GET /efms/ HTTP/1.0, NTLMSSP_NEGOTIATE
 23 0.752534  37156  GET /efms/ HTTP/1.0, NTLMSSP_AUTH, User: WGC\jfoste
 61 1.758489  37157  GET /efms/ HTTP/1.0
 67 1.961708  37157  GET /efms/ HTTP/1.0, NTLMSSP_NEGOTIATE
 72 2.152100  37157  GET /efms/ HTTP/1.0, NTLMSSP_AUTH, User: WGC\jfoste
113 3.180079  37158  GET /EFMS/efms.js HTTP/1.0
146 3.371116  37158  GET /EFMS/efms.js HTTP/1.0, NTLMSSP_NEGOTIATE
159 3.561335  37158  GET /EFMS/efms.js HTTP/1.0, NTLMSSP_AUTH, User: WGC\jfo
168 3.781256  37158  GET /EFMS/efms.css HTTP/1.0
190 3.967221  37159  GET /Styles/perry_fix_font.css HTTP/1.0
191 3.967513  37160  GET /Styles/forms.css HTTP/1.0
192 3.967791  37161  GET /styles/dashboard.css HTTP/1.0
197 3.970336  37158  GET /javascript/std.js HTTP/1.0
210 4.161855  37161  GET /EFMS/efms.css HTTP/1.0, NTLMSSP_NEGOTIATE
214 4.162567  37160  GET /Styles/perry_fix_font.css HTTP/1.0, NTLMSSP_NEGOTI
219 4.163678  37159  GET /styles/dashboard.css HTTP/1.0, NTLMSSP_NEGOTIATE
220 4.163806  37158  GET /javascript/std.js HTTP/1.0, NTLMSSP_NEGOTIATE
231 4.360942  37160  GET /Styles/perry_fix_font.css HTTP/1.0, NTLMSSP_AUTH,
232 4.361087  37161  GET /EFMS/efms.css HTTP/1.0, NTLMSSP_AUTH, User: WGC\jf
239 4.362346  37159  GET /styles/dashboard.css HTTP/1.0, NTLMSSP_AUTH, User:
240 4.362591  37158  GET /javascript/std.js HTTP/1.0, NTLMSSP_AUTH, User: WG
245 4.577641  37161  GET /Styles/forms.css HTTP/1.0, NTLMSSP_NEGOTIATE
259 4.768829  37160  GET /Styles/forms.css HTTP/1.0, NTLMSSP_AUTH, User: WGC


Re: [squid-users] high load issues

2010-02-10 Thread Justin Lintz
>
> Don't top-post.
>
> We have several heavy-load squids, and we realized that sometimes internet
> surfing is slow; we've discovered that it is because of IO (as you can see in
> your top command, more than 1% of IO waiting), so we purge our cache to keep
> it from reaching the cache_swap_high percentage very often.
>

The iowait time is more than 1%, at times between 20-50%.  We've tried
purging the cache a few times but that only appears to give temporary
relief to the issue.  I was looking to tune our configuration more
before ruling out the need for more caching servers or looking into
faster disks for just the cache.


Re: [squid-users] high load issues

2010-02-10 Thread Luis Daniel Lucio Quiroz
On Wednesday, 10 February 2010 at 11:41:29, Justin Lintz wrote:
> We're seeing the symptoms across 4 servers on different hardware.
> What would be the reason for adjusting the cache_swap_high to 96?
> Thanks
> 
> - Justin Lintz
> 
> 
> 
> On Wed, Feb 10, 2010 at 11:45 AM, Luis Daniel Lucio Quiroz
> 
>  wrote:
> >> On Wednesday, 10 February 2010 at 10:36:40, Justin Lintz wrote:
> >> Squid ver: squid-2.6.STABLE21-3
> >> The server is a xen virtual with 6GB of ram available to it.
> >> 
> >> relevant lines in Squid.conf:
> >> 
> >> hierarchy_stoplist cgi-bin ?
> >> acl apache rep_header Server ^Apache
> >> broken_vary_encoding allow apache
> >> cache_mem 4096 MB
> >> maximum_object_size 8192 KB
> >> maximum_object_size_in_memory 4096 KB
> >> cache_swap_low 95
> >> cache_swap_high 96
> >> cache_dir aufs /www/apps/squid/var/cache 4096 16 256
> >> logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh %tr
> >> access_log /www/logs/squid/access.log combined
> >>  cache_log /www/logs/squid/cache.log
> >>  cache_store_log /www/logs/squid/store.log
> >> debug_options ALL,1 33,2
> >> refresh_pattern ^ftp:           1440    20%     10080
> >> refresh_pattern ^gopher:        1440    0%      1440
> >> refresh_pattern .               0       20%     4320
> >> negative_ttl 0
> >> collapsed_forwarding on
> >> refresh_stale_hit 5 seconds
> >> half_closed_clients off
> >> acl all src 0.0.0.0/0.0.0.0
> >> acl manager proto cache_object
> >> acl localhost src 127.0.0.1/255.255.255.255
> >> acl to_localhost dst 127.0.0.0/8
> >> acl SSL_ports port 443
> >> acl Safe_ports port 80  # http
> >> acl Safe_ports port 21  # ftp
> >> acl Safe_ports port 443 # https
> >> acl Safe_ports port 70  # gopher
> >> acl Safe_ports port 210 # wais
> >> acl Safe_ports port 1025-65535  # unregistered ports
> >> acl Safe_ports port 280 # http-mgmt
> >> acl Safe_ports port 488 # gss-http
> >> acl Safe_ports port 591 # filemaker
> >> acl Safe_ports port 777 # multiling http
> >> acl CONNECT method CONNECT
> >> acl PURGE method PURGE
> >> http_access allow manager localhost
> >> http_access deny manager
> >> http_access deny PURGE
> >> http_access allow localhost
> >> http_access allow all
> >> http_reply_access allow all
> >> icp_access allow all
> >> httpd_suppress_version_string on
> >> cachemgr_passwd none config
> >> error_directory /www/apps/squid/errors
> >> coredump_dir /var/spool/squid
> >> minimum_expiry_time 15 seconds
> >> max_filedesc 8192
> >> 
> >> Symptoms:
> >> - High load avg on box ranging from 6-10 during traffic hours
> >> - CPU iowait time during times will be between 20-50%
> >> - SO_FAIL status codes seen in store.log
> >>  - MaintainSwapSpace is continually running under a second. This
> >> appears to be normal though looking at our dev and stage squid setups
> >> which have no load.
> >>  - From squidaio_counts, seeing the Queue spike upwards to 200 or
> >> more.  I saw a mention in the O'Reilly book this number if greater
> >> than 5x # of IO threads, then squid is overworked.
> >> - Cache_dir storage size is constantly at the cache_swap_low value
> >> (94%).  Does this mean squid is continually garbage collecting and
> >> possibly causing the high IO?  Originally we had the number at 90, but
> >> after reading some threads, adjusted the number to 94 for the low and
> >> 95 for the high hoping to reduce IO with smaller amount of data being
> >> garbage collected.  This change didn't have any impact
> >> - Saw a couple of warnings in cache.log saying
> >> "squidaio_queue_request: WARNING - Disk I/O overloading"
> >> - High number of create.select_fail events in store_io screen in the
> >> cache manager.  Seeing this number at 12% of the total IO calls.
> >> 
> >> From reading around the list of people with similar issues,  I see one
> >> suggestion we will implement next will be configuring a second
> >> cache_dir to increase the number of threads available for IO.
> >> 
> >> I wanted to know if you had any other suggestions for tweaks that
> >> could be made that would hopefully alleviate the load on the box.
> >> 
> >> A couple of other tweaks we have currently implemented are putting the
> >> noatime option on the partition where the cache is stored and using
> >> tcmalloc inplace of gnu malloc.
> >> 
> >> I saw a recommendation of changing the store_dir_select_algorithm to
> >> round-robin but from reading this
> >> http://www.squid-cache.org/mail-archive/squid-users/200011/0794.html
> >> it sounded like the change would increase the response times.
> >> 
> >> 
> >> 
> >> 
> >> - Justin Lintz
> > 
> > Change your
> > cache_swap_high 96
> > 
> > to something higher, 98 could be.
> > look for hardware errors

Don't top-post.

We have several heavy-load squids, and we realized that sometimes internet
surfing is slow; we've discovered that it is because of IO (as you can see in
your top command, more than 1% of IO waiting), so we purge our cache to keep it
from reaching the cache_swap_high percentage very often.

Re: [squid-users] high load issues

2010-02-10 Thread Justin Lintz
We're seeing the symptoms across 4 servers on different hardware.
What would be the reason for adjusting the cache_swap_high to 96?
Thanks

- Justin Lintz



On Wed, Feb 10, 2010 at 11:45 AM, Luis Daniel Lucio Quiroz
 wrote:
> On Wednesday, 10 February 2010 at 10:36:40, Justin Lintz wrote:
>> Squid ver: squid-2.6.STABLE21-3
>> The server is a xen virtual with 6GB of ram available to it.
>>
>> relevant lines in Squid.conf:
>>
>> hierarchy_stoplist cgi-bin ?
>> acl apache rep_header Server ^Apache
>> broken_vary_encoding allow apache
>> cache_mem 4096 MB
>> maximum_object_size 8192 KB
>> maximum_object_size_in_memory 4096 KB
>> cache_swap_low 95
>> cache_swap_high 96
>> cache_dir aufs /www/apps/squid/var/cache 4096 16 256
>> logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh %tr
>> access_log /www/logs/squid/access.log combined
>>  cache_log /www/logs/squid/cache.log
>>  cache_store_log /www/logs/squid/store.log
>> debug_options ALL,1 33,2
>> refresh_pattern ^ftp:           1440    20%     10080
>> refresh_pattern ^gopher:        1440    0%      1440
>> refresh_pattern .               0       20%     4320
>> negative_ttl 0
>> collapsed_forwarding on
>> refresh_stale_hit 5 seconds
>> half_closed_clients off
>> acl all src 0.0.0.0/0.0.0.0
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/255.255.255.255
>> acl to_localhost dst 127.0.0.0/8
>> acl SSL_ports port 443
>> acl Safe_ports port 80          # http
>> acl Safe_ports port 21          # ftp
>> acl Safe_ports port 443         # https
>> acl Safe_ports port 70          # gopher
>> acl Safe_ports port 210         # wais
>> acl Safe_ports port 1025-65535  # unregistered ports
>> acl Safe_ports port 280         # http-mgmt
>> acl Safe_ports port 488         # gss-http
>> acl Safe_ports port 591         # filemaker
>> acl Safe_ports port 777         # multiling http
>> acl CONNECT method CONNECT
>> acl PURGE method PURGE
>> http_access allow manager localhost
>> http_access deny manager
>> http_access deny PURGE
>> http_access allow localhost
>> http_access allow all
>> http_reply_access allow all
>> icp_access allow all
>> httpd_suppress_version_string on
>> cachemgr_passwd none config
>> error_directory /www/apps/squid/errors
>> coredump_dir /var/spool/squid
>> minimum_expiry_time 15 seconds
>> max_filedesc 8192
>>
>> Symptoms:
>> - High load avg on box ranging from 6-10 during traffic hours
>> - CPU iowait time during times will be between 20-50%
>> - SO_FAIL status codes seen in store.log
>>  - MaintainSwapSpace is continually running under a second. This
>> appears to be normal though looking at our dev and stage squid setups
>> which have no load.
>>  - From squidaio_counts, seeing the Queue spike upwards to 200 or
>> more.  I saw a mention in the O'Reilly book this number if greater
>> than 5x # of IO threads, then squid is overworked.
>> - Cache_dir storage size is constantly at the cache_swap_low value
>> (94%).  Does this mean squid is continually garbage collecting and
>> possibly causing the high IO?  Originally we had the number at 90, but
>> after reading some threads, adjusted the number to 94 for the low and
>> 95 for the high hoping to reduce IO with smaller amount of data being
>> garbage collected.  This change didn't have any impact
>> - Saw a couple of warnings in cache.log saying
>> "squidaio_queue_request: WARNING - Disk I/O overloading"
>> - High number of create.select_fail events in store_io screen in the
>> cache manager.  Seeing this number at 12% of the total IO calls.
>>
>> From reading around the list of people with similar issues,  I see one
>> suggestion we will implement next will be configuring a second
>> cache_dir to increase the number of threads available for IO.
>>
>> I wanted to know if you had any other suggestions for tweaks that
>> could be made that would hopefully alleviate the load on the box.
>>
>> A couple of other tweaks we have currently implemented are putting the
>> noatime option on the partition where the cache is stored and using
>> tcmalloc inplace of gnu malloc.
>>
>> I saw a recommendation of changing the store_dir_select_algorithm to
>> round-robin but from reading this
>> http://www.squid-cache.org/mail-archive/squid-users/200011/0794.html
>> it sounded like the change would increase the response times.
>>
>>
>>
>>
>> - Justin Lintz
> Change your
> cache_swap_high 96
>
> to something higher, 98 could be.
> look for hardware errors
>


Re: [squid-users] Is there ICAP evidence log in any log files?

2010-02-10 Thread Luis Daniel Lucio Quiroz
On Tuesday, 9 February 2010 at 22:57:58, Amos Jeffries wrote:
> Henrik Nordström wrote:
> > On Tue, 2010-02-09 at 15:18 -0600, Luis Daniel Lucio Quiroz wrote:
> >> On Wednesday, 30 July 2008 at 22:24:35, Henrik Nordstrom wrote:
> >>> On tor, 2008-07-31 at 11:26 +0900, S.KOBAYASHI wrote:
>  Hello developer,
>  
>  I'm looking for the evidence for accessing ICAP server. Is there its
>  log in any log files such as access.log, cache.log?
> >>> 
> >>> The ICAP server should have logs of it's own.
> >>> 
> >>> There is no information in the Squid logs on which iCAP servers were
> >>> used for the request/response.
> >>> 
> >>> Regards
> >>> Henrik
> >> 
> >> I wonder if, using squidclient mgr:xxXX, we could see some info about
> >> ICAP. Where?
> > 
> > Seems not.
> > 
> > You can however increase the debug level of section 93 to have ICAP spew
> > out lots of information in cache.log.
> > 
> > debug_options ALL,1 93,5
> > 
> > should do the trick I think.
> > 
> > Regards
> > Henrik
> 
> Squid-3.1 and later provide some little ICAP service logging in access.log.
> http://www.squid-cache.org/Doc/config/logformat/
> 
> 
> Amos
Do you think we could backport that access.log logging capability? I may
do that, but just tell me which file to backport.

TIA

LD
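For reference, a hedged squid.conf sketch of the two options discussed in this thread: the cache.log debug level Henrik suggested, and the dedicated ICAP log that 3.1+ releases provide (the built-in icap_squid logformat name is taken from the 3.1 documentation; the log path is an assumption — verify both against your release):

```
# Henrik's suggestion: verbose ICAP (section 93) tracing in cache.log
debug_options ALL,1 93,5

# Squid 3.1+: a dedicated per-transaction ICAP log using the built-in
# icap_squid logformat
icap_log /var/log/squid/icap.log icap_squid
```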


Re: [squid-users] Ongoing Running out of filedescriptors

2010-02-10 Thread Luis Daniel Lucio Quiroz
On Tuesday, 9 February 2010 at 19:34:13, Amos Jeffries wrote:
> On Tue, 9 Feb 2010 17:39:37 -0600, Luis Daniel Lucio Quiroz
> 
>  wrote:
> > On Tuesday, 9 February 2010 at 17:29:23, Landy Landy wrote:
> >> I don't know what to do with my current squid, I even upgraded to
> >> 3.0.STABLE21 but, the problem persist every three days:
> >> 
> >> /usr/local/squid/sbin/squid -v
> >> Squid Cache: Version 3.0.STABLE21
> >> configure options:  '--prefix=/usr/local/squid'
> 
> '--sysconfdir=/etc/squid'
> 
> >> '--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp'
> >> '--enable-default-err-language=Spanish' '--enable-linux-netfilter'
> >> '--disable-ident-lookups' '--localstatedir=/var/log/squid3.1'
> >> '--enable-stacktraces' '--with-default-user=proxy' '--with-large-files'
> >> '--enable-icap-client' '--enable-async-io' '--enable-storeio=aufs'
> >> '--enable-removal-policies=heap,lru' '--with-maxfd=32768'
> >> 
> >> I built with --with-maxfd=32768 option but, when squid is started it
> 
> says
> 
> >> is working with only 1024 filedescriptor.
> >> 
> >> I even added the following to the squid.conf:
> >> 
> >> max_open_disk_fds 0
> >> 
> >> But it hasn't resolve anything. I'm using squid on Debian Lenny. I
> 
> don't
> 
> >> know what to do. Here's part of cache.log:
> 
> 
> > You got a bug! That behavior happens when a coredump occurs in squid;
> > please file a ticket with gdb output, and raise debug to maximum if you can.
> 
> WTF are you talking about Luis? None of the above problems have anything
> to do with crashing Squid.
> 
> They are in order:
> 
> "WARNING! Your cache is running out of filedescriptors"
>  * either the system limits being set too low during run-time operation.
>  * or the system limits were too small during the configure and build
> process.
>-> Squid may drop new client connections to maintain lower than desired
> traffic levels.
> 
>   NP: patching the kernel headers to artificially trick squid into
> believing the kernel supports more by default than it does is not a good
> solution. The ulimit utility exists for that purpose instead.
> 
> 
> 
> "Unsupported method attempted by 172.16.100.83"
>  * The machine at 172.16.100.83 is pushing non-HTTP data into Squid.
>   -> Squid will drop these connections.
> 
> "clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: (2) No such file
> or directory"
>  * NAT interception is failing to locate the NAT table entries for some
> client connection.
>  * usually due to configuring the same port with "transparent" option and
> regular traffic.
>  -> for now Squid will treat these connections as if the directly
> connecting box was the real client. This WILL change in some near future
> release.
> 
> 
> As you can see in none of those handling operations does squid crash or
> core dump.
> 
> 
> Amos


Amos, that is exactly the behavior I had with a bug. Don't you remember the
DIGEST bug that made squid restart internally? HNO helped me, but the fact is
that this is a symptom of an internal restart after a coredump, because he
complains his squid is already compiled with more than 1024.

After restarting, I had only 1024 descriptors, no matter that I had compiled
with 64k FDs.
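Amos's point about ulimit can be checked from the shell before starting Squid. A minimal sketch, assuming a build configured with --with-maxfd=32768:

```shell
# The soft file-descriptor limit Squid inherits at startup
ulimit -Sn
# Try to raise it to match the build-time maximum; this fails harmlessly
# if the hard limit or privileges do not allow it
ulimit -n 32768 2>/dev/null || echo "could not raise fd limit"
ulimit -Sn
```

If the raised value does not stick, the hard limit (ulimit -Hn, or limits.conf on Linux) needs raising first.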


Re: [squid-users] high load issues

2010-02-10 Thread Luis Daniel Lucio Quiroz
On Wednesday, 10 February 2010 at 10:36:40, Justin Lintz wrote:
> Squid ver: squid-2.6.STABLE21-3
> The server is a xen virtual with 6GB of ram available to it.
> 
> relevant lines in Squid.conf:
> 
> hierarchy_stoplist cgi-bin ?
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apache
> cache_mem 4096 MB
> maximum_object_size 8192 KB
> maximum_object_size_in_memory 4096 KB
> cache_swap_low 95
> cache_swap_high 96
> cache_dir aufs /www/apps/squid/var/cache 4096 16 256
> logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh %tr
> access_log /www/logs/squid/access.log combined
>  cache_log /www/logs/squid/cache.log
>  cache_store_log /www/logs/squid/store.log
> debug_options ALL,1 33,2
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern .               0       20%     4320
> negative_ttl 0
> collapsed_forwarding on
> refresh_stale_hit 5 seconds
> half_closed_clients off
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
> acl PURGE method PURGE
> http_access allow manager localhost
> http_access deny manager
> http_access deny PURGE
> http_access allow localhost
> http_access allow all
> http_reply_access allow all
> icp_access allow all
> httpd_suppress_version_string on
> cachemgr_passwd none config
> error_directory /www/apps/squid/errors
> coredump_dir /var/spool/squid
> minimum_expiry_time 15 seconds
> max_filedesc 8192
> 
> Symptoms:
> - High load avg on box ranging from 6-10 during traffic hours
> - CPU iowait time during times will be between 20-50%
> - SO_FAIL status codes seen in store.log
>  - MaintainSwapSpace is continually running under a second. This
> appears to be normal though looking at our dev and stage squid setups
> which have no load.
>  - From squidaio_counts, seeing the Queue spike upwards to 200 or
> more.  I saw a mention in the O'Reilly book this number if greater
> than 5x # of IO threads, then squid is overworked.
> - Cache_dir storage size is constantly at the cache_swap_low value
> (94%).  Does this mean squid is continually garbage collecting and
> possibly causing the high IO?  Originally we had the number at 90, but
> after reading some threads, adjusted the number to 94 for the low and
> 95 for the high hoping to reduce IO with smaller amount of data being
> garbage collected.  This change didn't have any impact
> - Saw a couple of warnings in cache.log saying
> "squidaio_queue_request: WARNING - Disk I/O overloading"
> - High number of create.select_fail events in store_io screen in the
> cache manager.  Seeing this number at 12% of the total IO calls.
> 
> From reading around the list of people with similar issues,  I see one
> suggestion we will implement next will be configuring a second
> cache_dir to increase the number of threads available for IO.
> 
> I wanted to know if you had any other suggestions for tweaks that
> could be made that would hopefully alleviate the load on the box.
> 
> A couple of other tweaks we have currently implemented are putting the
> noatime option on the partition where the cache is stored and using
> tcmalloc inplace of gnu malloc.
> 
> I saw a recommendation of changing the store_dir_select_algorithm to
> round-robin but from reading this
> http://www.squid-cache.org/mail-archive/squid-users/200011/0794.html
> it sounded like the change would increase the response times.
> 
> 
> 
> 
> - Justin Lintz
Change your
cache_swap_high 96 

to something higher, 98 could be.
look for hardware errors 
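Combining the suggestions in this thread — a second cache_dir, which gives the AUFS store a second async-I/O thread pool, plus the wider swap watermarks suggested above so each garbage-collection pass has more headroom — a hedged squid.conf sketch (directory paths and sizes are assumptions carried over from the posted config):

```
# Two aufs cache_dirs: each gets its own async-I/O thread pool,
# spreading disk load (ideally put them on separate spindles)
cache_dir aufs /www/apps/squid/var/cache1 4096 16 256
cache_dir aufs /www/apps/squid/var/cache2 4096 16 256
# Wider low/high watermarks so replacement runs less often per pass
cache_swap_low 95
cache_swap_high 98
```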


[squid-users] high load issues

2010-02-10 Thread Justin Lintz
Squid ver: squid-2.6.STABLE21-3
The server is a xen virtual with 6GB of ram available to it.

relevant lines in Squid.conf:

hierarchy_stoplist cgi-bin ?
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
cache_mem 4096 MB
maximum_object_size 8192 KB
maximum_object_size_in_memory 4096 KB
cache_swap_low 95
cache_swap_high 96
cache_dir aufs /www/apps/squid/var/cache 4096 16 256
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh %tr
access_log /www/logs/squid/access.log combined
 cache_log /www/logs/squid/cache.log
 cache_store_log /www/logs/squid/store.log
debug_options ALL,1 33,2
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
negative_ttl 0
collapsed_forwarding on
refresh_stale_hit 5 seconds
half_closed_clients off
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl PURGE method PURGE
http_access allow manager localhost
http_access deny manager
http_access deny PURGE
http_access allow localhost
http_access allow all
http_reply_access allow all
icp_access allow all
httpd_suppress_version_string on
cachemgr_passwd none config
error_directory /www/apps/squid/errors
coredump_dir /var/spool/squid
minimum_expiry_time 15 seconds
max_filedesc 8192

Symptoms:
- High load avg on box ranging from 6-10 during traffic hours
- CPU iowait time during times will be between 20-50%
- SO_FAIL status codes seen in store.log
 - MaintainSwapSpace is continually running under a second. This
appears to be normal though looking at our dev and stage squid setups
which have no load.
 - From squidaio_counts, seeing the Queue spike upwards to 200 or
more.  I saw a mention in the O'Reilly book this number if greater
than 5x # of IO threads, then squid is overworked.
- Cache_dir storage size is constantly at the cache_swap_low value
(94%).  Does this mean squid is continually garbage collecting and
possibly causing the high IO?  Originally we had the number at 90, but
after reading some threads, adjusted the number to 94 for the low and
95 for the high hoping to reduce IO with smaller amount of data being
garbage collected.  This change didn't have any impact
- Saw a couple of warnings in cache.log saying
"squidaio_queue_request: WARNING - Disk I/O overloading"
- High number of create.select_fail events in store_io screen in the
cache manager.  Seeing this number at 12% of the total IO calls.

From reading around the list of people with similar issues, I see one
suggestion we will implement next will be configuring a second
cache_dir to increase the number of threads available for IO.

I wanted to know if you had any other suggestions for tweaks that
could be made that would hopefully alleviate the load on the box.

A couple of other tweaks we have currently implemented are putting the
noatime option on the partition where the cache is stored and using
tcmalloc in place of gnu malloc.

I saw a recommendation of changing the store_dir_select_algorithm to
round-robin but from reading this
http://www.squid-cache.org/mail-archive/squid-users/200011/0794.html
it sounded like the change would increase the response times.




- Justin Lintz


[squid-users] squid + dansguardian + auth

2010-02-10 Thread Bruno Ricardo Santos


Hi all!

I'm having some trouble configuring squid with auth + the dansguardian content
filter.

It's all configured, but when I try to browse, I get an error:

Dansguardian 400
URL malformed

Does authentication (and the dansguardian filter) only work with a transparent
proxy, or do I have some configuration wrong?

If I configure the browser to access the squid port directly, everything
works perfectly...

The problem, as I see it, is the IP dansguardian passes to squid. After a
request, dansguardian gives squid the local machine's IP.

If I change some options in dansguardian, such as originalip, I get the error
above!

I've tried messing around with the following options:

forwardedfor

usexforwardedfor

and in squid

follow_x_forwarded_for

but I had no luck.

Any idea?

Cheers,

Bruno Santos
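A hedged sketch of the squid.conf side of the X-Forwarded-For options mentioned above; it assumes DansGuardian runs on the same host and that Squid was built with --enable-follow-x-forwarded-for:

```
# Trust X-Forwarded-For only from the local DansGuardian instance, so
# Squid sees the real client IP instead of 127.0.0.1
follow_x_forwarded_for allow localhost
follow_x_forwarded_for deny all
```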



Re: [squid-users] Multiple domains in dstdomain bug?

2010-02-10 Thread Kinkie
On Wed, Feb 10, 2010 at 2:42 PM, Michael Tennes  wrote:
> Hi,
>
> For the life of me I can't figure out why the following two lines in my
> squid.conf file cause squid (Squid Cache: Version 3.0.STABLE16) to crash
> with a BUS ERROR, but if I split the acl into two (last four lines) it works
> fine. What am I missing? From what I've read, acl entries are ORed, and
> examples show multiple domains in dstdomain acl types.

What OS are you running on? The last time I saw a bus error was on HPUX.. :)
IIRC a bus error is a variant of a dangling pointer; it'd be nice to
attach a debugger and see where it is happening. Can you do that?


-- 
/kinkie
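Attaching a debugger, as suggested, can be sketched like this (a hypothetical sequence; the process name and gdb availability are assumptions for a typical Linux install):

```shell
# Grab a backtrace from the running squid worker; for a crash that kills
# the process, run "squid -N" in the foreground under gdb instead.
SQUID_PID=$(pgrep -o squid || true)
if [ -n "$SQUID_PID" ]; then
  gdb -p "$SQUID_PID" -batch -ex "bt"
else
  echo "squid is not running"
fi
```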


[squid-users] Multiple domains in dstdomain bug?

2010-02-10 Thread Michael Tennes
Hi,

For the life of me I can't figure out why the following two lines in my
squid.conf file cause squid (Squid Cache: Version 3.0.STABLE16) to crash with
a BUS ERROR, but if I split the acl into two (last four lines) it works fine.
What am I missing? From what I've read, acl entries are ORed, and examples
show multiple domains in dstdomain acl types.

acl ym dstdomain .messenger.yahoo.com .psq.yahoo.com
http_access deny ym

acl ym1 dstdomain .messenger.yahoo.com
acl ym2 dstdomain .psq.yahoo.com
http_access deny ym1
http_access deny ym2

I have searched for answers, so I apologize in advance if this question 
demonstrates my ignorance of configuring squid.



Re: [squid-users] reverse proxying for sharepoint ??

2010-02-10 Thread Amos Jeffries

Kinkie wrote:

From memory, 3.1 is almost there..



Just topping 65% compliant in the latest 3.1 release. :)

Alex has almost finished updating the HTTP/1.1 checklist. The current one
should be up in a few days. The older one in the wiki was not too far off
in the estimated "Guess" column.

 http://wiki.squid-cache.org/Features/HTTP11




On 2/10/10, Jakob Curdes  wrote:

- sharepoint seems to rely on http 1.1
- sharepoint uses absolute URLs which would have to be rewritten (but
newer
versions seem to have options to remedy that)


http://technet.microsoft.com/en-us/library/cc287848.aspx
seems to have some recipes, not specific to Squid but I expect them to
be pretty easy to translate.


Yes, with the exception of HTTP 1.1. What is the status of the
various Squid variants with respect to HTTP/1.1?

Sure I will put my findings in the wiki...

jc



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16


Re: [squid-users] RE: libsmb/ntlmssp.c:ntlmssp_update(334)

2010-02-10 Thread Amos Jeffries

Dawie Pretorius wrote:

Is it possible that someone can get back to me on this issue, thanks

Dawie Pretorius

Hello

Getting this error sometimes in my cache.log:

[2010/01/28 14:23:58, 1] libsmb/ntlmssp.c:ntlmssp_update(334)
  got NTLMSSP command 3, expected 1
2010/01/28 14:25:51| AuthConfig::CreateAuthUser: Unsupported or 
unconfigured/inactive proxy-auth scheme, ''

Gentoo squid-3.0.STABLE19 


Hi Dawie,

The "3" and "1" have been explained as the difference between NTLM vs 
Kerberos.


As far as I can tell "3" means Kerberos is being used. "1" is NTLM.
The ntlm_auth helper only checks NTLM and the squid_kerb_auth helper is 
needed instead for Kerberos.


Looks to me like the client is broken and using Kerberos when offered 
NTLM as the only available auth option.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16
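A hedged squid.conf sketch of offering both schemes, so clients that pick Kerberos get a helper that understands it; the helper paths and service principal are assumptions for a typical install:

```
# Offer Negotiate/Kerberos first; clients that support it will use this
auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.example.com
auth_param negotiate children 10
auth_param negotiate keep_alive on

# Keep NTLM for clients that cannot do Kerberos
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
```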


Re: [squid-users] reverse proxying for sharepoint ??

2010-02-10 Thread Kinkie
From memory, 3.1 is almost there..


On 2/10/10, Jakob Curdes  wrote:
>
>>>
>>> - sharepoint seems to rely on http 1.1
>>> - sharepoint uses absolute URLs which would have to be rewritten (but
>>> newer
>>> versions seem to have options to remedy that)
>>>
>>
>> http://technet.microsoft.com/en-us/library/cc287848.aspx
>> seems to have some recipes, not specific to Squid but I expect them to
>> be pretty easy to translate.
>>
> Yes, with the exception of HTTP 1.1. what is the status of the
> various SQUID variants with respect to HTTP 1.1 ?
>
> Sure I will put my findings in the wiki...
>
> jc
>
>


-- 
/kinkie


Re: [squid-users] reverse proxying for sharepoint ??

2010-02-10 Thread Jakob Curdes



- sharepoint seems to rely on http 1.1
- sharepoint uses absolute URLs which would have to be rewritten (but newer
versions seem to have options to remedy that)



http://technet.microsoft.com/en-us/library/cc287848.aspx
seems to have some recipes, not specific to Squid but I expect them to
be pretty easy to translate.
  
Yes, with the exception of HTTP 1.1. What is the status of the
various Squid variants with respect to HTTP/1.1?


Sure I will put my findings in the wiki...

jc



Re: [squid-users] squid 3.1 and error_directory

2010-02-10 Thread Amos Jeffries

Amos Jeffries wrote:

Eugene M. Zheganin wrote:

Hi.

Recently I decided to look on 3.1 branch on my test proxy. Everything 
seems to work fine, but I'm stuck with the problem with the error 
messages.
Whatever I do with the error_directory/error_default_language settings 
(leaving 'em commented out, or setting 'em to something) in my browser 
I see corrupted symbols. These are neither latin, nor cyrillic. They 
do look like it is UTF-8 treated like Cp1251, for example. Changing 
encoding of the page in browser doesn't help.

And the charset in  tag of such page is always "us-ascii" (why ?).


Um, thank you. I've seen something like this before. Will get on and 
check the fix.


The symbols you are seeing are probably UTF-8 treated as us-ascii. I've
seen it as an artifact of 'tidy html', which is used by default by the
translation toolkit we build the error pages with. I just have to check
that this is true and update the sources so they no longer leave the
generated files slightly mangled.




How can I make pages be displayed at least in English? I thought that
this could be achieved by setting error_default_language to en, but I
was wrong again.


I thought I was familiar with the squid error directory and with creating my
own templates for the 2.x/3.0 branches, but I'm definitely not with 3.1.


They are almost the same. The base templates are in templates/ERR_*, for
copying; you add your own ones in templates/* too.

That is the big difference: your local templates always go in templates/*
or in a custom directory (with error_default_language pointing at it).


Amos


Sorry this took so long. It's now fixed and winding its way down to the 
next releases.
Please grab the langpack bundle after the next set of snapshots. It 
should contain corrected language files by this time tomorrow.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16


Re: [squid-users] Allowing links inside websites in whitelist

2010-02-10 Thread Amos Jeffries

CASALI COMPUTERS - Michele Brodoloni wrote:

Hello,
I'm using Squid Version 2.6.STABLE21 with the squid_ldap_group auth helper
for authenticating groups of users.

My problem is that some groups need to access certain sites only, but these
sites contain links to other external content outside the whitelist, causing
squid to pop up the annoying login box repeatedly. Is there a way to make
squid follow (or deny) those links without annoying the user?
I simply would like auth to be requested just once: if the user is not
allowed, just deny it without requesting authentication again.



What do you mean "again"? Getting auth popups means they are not
authenticated at all yet. If they were already authenticated, something
must have gone badly wrong.


Your config confirms that. Anybody visiting the whitelist gets through 
without authenticating at all.
The instant they go anywhere else they are checked for authentication 
and tested against the blacklist.



The only way to let people browse the web without auth popups is to 
remove auth completely, or to whitelist every site they need to visit. 
There seems to be something broken if the login box is popping up 
repeatedly.


You might try auto-blacklisting anything not whitelisted which is 
referred to from the whitelist sites.


Something like this just after the whitelist itself will prevent _any_ 
non-whitelisted link from a whitelisted page without involving auth:


  acl whiteRef referer_regex "/etc/squid/whitelist"
  http_access deny whiteRef

Be careful though. If you make that an auto-allow you enable anybody to 
access the proxy by sending you an easily forged header.
You will also need to do something to let people click on actual wanted 
links on those whitelisted pages.
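
Putting the pieces together, a sketch of the placement (an assumption: 
this reuses the dstdom_regex whitelist file as referer patterns, which 
only works if its regexes also match the full referring URLs):

```
# Sketch: deny non-whitelisted links referred from whitelisted pages
# before any auth-based rules run, so no login box is triggered.
acl whitelist dstdom_regex "/etc/squid/whitelist"
http_access allow retelocale whitelist

acl whiteRef referer_regex "/etc/squid/whitelist"
http_access deny whiteRef

# auth-based http_access rules continue below as before
```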




Here's my configuration (squid.conf) snippet:

#
auth_param basic program /usr/lib64/squid/squid_ldap_auth -b "dc=server,dc=local" -f 
"uid=%s" -h 127.0.0.1
auth_param basic children 10
auth_param basic realm "Server Proxy Server"
auth_param basic credentialsttl 8 hours

external_acl_type ldap_group %LOGIN /usr/lib64/squid/squid_ldap_group -b 
"ou=Groups,dc=server,dc=local" -f 
"(&(memberUid=%u)(cn=%g)(objectClass=posixGroup))" -h 127.0.0.1 -d

acl utenti_tutti external ldap_group grp-proxy
acl utenti_tg24  external ldap_group grp-tg24

acl retelocale src 192.0.0.0/255.255.255.0




acl whitelist dstdom_regex "/etc/squid/whitelist"
http_access allow retelocale whitelist

acl autenticati proxy_auth REQUIRED

acl blacklist dstdom_regex "/etc/squid/blacklist"
http_access deny  utenti_tutti blacklist
http_access allow utenti_tutti

acl tg24 url_regex "/etc/squid/whitelist_tg24"
http_access allow utenti_tg24 tg24
http_access deny utenti_tg24
#

Thank you very much 



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16


Re: [squid-users] Multiple domains in dstdomain bug?

2010-02-10 Thread Amos Jeffries

Michael Tennes wrote:

Hi,

For the life of me I can't figure out why the following two lines in my 
squid.conf file cause squid (Squid Cache: Version 3.0.STABLE16) to crash with 
a BUS ERROR, but if I split the acl into two (the last four lines) it works 
fine. What am I missing? From what I've read, acl entries are ORed. Examples 
show multiple domains in dstdomain acl types.

acl ym dstdomain .messenger.yahoo.com .psq.yahoo.com
http_access deny ym

acl ym1 dstdomain .messenger.yahoo.com
acl ym2 dstdomain .psq.yahoo.com
http_access deny ym1
http_access deny ym2

I have searched for answers, so I apologize in advance if this question 
demonstrates my ignorance of configuring squid.


Your understanding seems right to me. That first ACL should work without 
problems.

Can you try a newer Squid release and see if it still happens for you?

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16


RE: [squid-users] cache manager access from web

2010-02-10 Thread J. Webster

As a side note
 
>> http_access allow ncsa_users
>> http_access allow manager localhost
>> http_access allow manager cacheadmin
>> http_access deny manager
 
cache_manager access (any access, really) is already allowed to 
ncsa_users, no matter if they are accessing from localhost, 
88.xxx.xxx.xx9 or any other IP.  You might want to have a gander at the 
FAQ section on ACLs (http://wiki.squid-cache.org/SquidFaq/SquidAcl).

Doesn't the fact that the manager needs a password in the earlier config 
lines mean that they can't access it?
Is the ncsa_users ACL only for HTTP access?
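
A sketch of the reordering implied by that FAQ advice, using the ACL 
names from the config below (untested; the manager rules must come 
before the general allow for the restriction to take effect):

```
# Sketch: evaluate manager rules first, so cachemgr access is limited
# to localhost and the admin IP instead of any authenticated user.
http_access allow manager localhost
http_access allow manager cacheadmin
http_access deny manager
http_access allow ncsa_users
```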



> Date: Tue, 9 Feb 2010 16:14:31 -0900
> From: crobert...@gci.net
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] cache manager access from web
>
> Amos Jeffries wrote:
>> J. Webster wrote:
>>> I have followed the tutorial here:
>>> http://wiki.squid-cache.org/SquidFaq/CacheManager
>>> and set up acls to access the cache manager cgi on my server. I have
>>> to access this externally for the moment as that is the only access
>>> to the server that I have (SSH or web). The cache manager login
>>> appears when I access: http://myexternalipaddress/cgi-bin/cachemgr.cgi
>>> I have set the cache manager login and password in the squid.conf
>>> # TAG: cache_mgr
>>> # Email-address of local cache manager who will receive
>>> # mail if the cache dies. The default is "root".
>>> #
>>> #Default:
>>> # cache_mgr root
>>> cache_mgr a...@aaa.com
>>> cachemgr_passwd aaa all
>>> #Recommended minimum configuration:
>>> acl all src 0.0.0.0/0.0.0.0
>>> acl manager proto cache_object
>>> acl localhost src 127.0.0.1/255.255.255.255
>>> acl cacheadmin src 88.xxx.xxx.xx9/255.255.255.255 #external IP address?
>>
>> You don't need the /255.255.255.255 bit. Just a single IP address will
>> do.
>>
>>> acl to_localhost dst 127.0.0.0/8
>>> # Only allow cachemgr access from localhost
>
> As a side note
>
>>> http_access allow ncsa_users
>>> http_access allow manager localhost
>>> http_access allow manager cacheadmin
>>> http_access deny manager
>
> cache_manager access (any access, really) is already allowed to
> ncsa_users, no matter if they are accessing from localhost,
> 88.xxx.xxx.xx9 or any other IP. You might want to have a gander at the
> FAQ section on ACLs (http://wiki.squid-cache.org/SquidFaq/SquidAcl).
>
>>>
>>> However, whenever I enter the password and select localhost port 8080
>>> from the cgi script I get:
>>> The following error was encountered:
>>> Cache Access Denied.
>>> Sorry, you are not currently allowed to request:
>>> cache_object://localhost/
>>> from this cache until you have authenticated yourself.
>>
>> Looks like the CGI script does its own internal access to Squid to
>> fetch the page data. But does not have the right login details to pass
>> your "http_access allow ncsa_auth" security config.
>>
>> Amos
>
> Chris
>
  

[squid-users] Any work around for bug 2805 (Digest LDAP auth failed)

2010-02-10 Thread sankar m
Dear All,

I have one doubt, please clarify. I already mentioned that squid is
dereferencing some other memory address for a specific request (digest
authentication). When I ran it in command line, it works perfectly
(same userid).

I'm seeing access.log lines which have a different userid for the
particular request. Will you be able to release me from this bug-filled
life?

http://bugs.squid-cache.org/show_bug.cgi?id=2805

I'm desperately looking forward to hearing from you.

Thanks a lot in advance.

Regards,
Sankar.M


Re: [squid-users] reverse proxying for sharepoint ??

2010-02-10 Thread Kinkie
On Tue, Feb 9, 2010 at 11:02 PM, Jakob Curdes  wrote:
> Can anybody comment on protecting a sharepoint server with squid as reverse
> proxy?
> I worked my way through some stories, also on the squid list, and it seems
> that there are two possible problems:
>
> - sharepoint seems to rely on http 1.1
> - sharepoint uses absolute URLs which would have to be rewritten (but newer
> versions seem to have options to remedy that)

http://technet.microsoft.com/en-us/library/cc287848.aspx
seems to have some recipes, not specific to Squid but I expect them to
be pretty easy to translate.
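
For anyone translating those recipes, a minimal Squid reverse-proxy 
sketch (the hostname and origin IP are placeholders, not from the 
thread; the HTTP/1.1 and absolute-URL caveats above still apply):

```
# Hypothetical accel setup; sharepoint.example.com and 192.168.1.10
# are placeholders for the published site and the SharePoint origin.
http_port 80 accel defaultsite=sharepoint.example.com
cache_peer 192.168.1.10 parent 80 0 no-query originserver name=sp
acl sp_site dstdomain sharepoint.example.com
cache_peer_access sp allow sp_site
http_access allow sp_site
```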

When you manage to get it working, please share back to the list the
details, so that we may enhance our own knowledge base :)

Thanks!


-- 
/kinkie


[squid-users] Allowing links inside websites in whitelist

2010-02-10 Thread CASALI COMPUTERS - Michele Brodoloni
Hello,
I'm using Squid Version 2.6.STABLE21 with the squid_ldap_group auth helper for 
authenticating groups of users.

My problem is that some groups need to access certain sites only, but these 
sites contain links to external content outside the whitelist, causing 
squid to pop up the annoying login box repeatedly. Is there a way to make 
squid follow (or deny) those links without annoying the user?
I would simply like auth to be requested just once: if the user is not 
allowed, deny the request without asking for authentication again.

Here's my configuration (squid.conf) snippet:

#
auth_param basic program /usr/lib64/squid/squid_ldap_auth -b 
"dc=server,dc=local" -f "uid=%s" -h 127.0.0.1
auth_param basic children 10
auth_param basic realm "Server Proxy Server"
auth_param basic credentialsttl 8 hours

external_acl_type ldap_group %LOGIN /usr/lib64/squid/squid_ldap_group -b 
"ou=Groups,dc=server,dc=local" -f 
"(&(memberUid=%u)(cn=%g)(objectClass=posixGroup))" -h 127.0.0.1 -d

acl utenti_tutti external ldap_group grp-proxy
acl utenti_tg24  external ldap_group grp-tg24

acl retelocale src 192.0.0.0/255.255.255.0
acl whitelist dstdom_regex "/etc/squid/whitelist"
http_access allow retelocale whitelist

acl autenticati proxy_auth REQUIRED

acl blacklist dstdom_regex "/etc/squid/blacklist"
http_access deny  utenti_tutti blacklist
http_access allow utenti_tutti

acl tg24 url_regex "/etc/squid/whitelist_tg24"
http_access allow utenti_tg24 tg24
http_access deny utenti_tg24
#

Thank you very much 





[squid-users] Multiple domains in dstdomain bug?

2010-02-10 Thread Michael Tennes
Hi,

For the life of me I can't figure out why the following two lines in my 
squid.conf file cause squid (Squid Cache: Version 3.0.STABLE16) to crash with 
a BUS ERROR, but if I split the acl into two (the last four lines) it works 
fine. What am I missing? From what I've read, acl entries are ORed. Examples 
show multiple domains in dstdomain acl types.

acl ym dstdomain .messenger.yahoo.com .psq.yahoo.com
http_access deny ym

acl ym1 dstdomain .messenger.yahoo.com
acl ym2 dstdomain .psq.yahoo.com
http_access deny ym1
http_access deny ym2

I have searched for answers, so I apologize in advance if this question 
demonstrates my ignorance of configuring squid.
