Re: [squid-users] Dual Level Authentication

2011-03-08 Thread Go Wow
Thanks for the reply. I think I will have to consider PAM.



Regards

On 8 March 2011 11:06, Amos Jeffries  wrote:
> On 08/03/11 18:42, Go Wow wrote:
>>
>> Hi All,
>>
>>  I have implemented AD authentication with squid3. I would like to
>> add another level of authentication that is local to the unix box,
>> something like ncsa. When AD authentication fails, it should
>> switch to the other authentication, and if that also fails, deny the
>> request.
>>
>> In squid, when I define
>>
>> auth_param basic program /usr/lib/ncsa_auth /etc/squid3/passwd
>> auth_param basic program /usr/lib/squid_ldap_auth ...
>>
>> only the bottom line takes effect (its helper programs are started) and
>> the top line is ignored. If I swap the two lines, then again the
>> bottom program is started and the top one is ignored.
>
> Yes. You can only define each authentication type once.
>
> Squid just hands every Basic auth header it gets over to a helper to get a
> yes/no answer for use in ACLs. It is up to that helper and the backend
> authentication system it uses to do anything like failover, checking
> multiple sources, etc.
>
>>
>> Can someone guide me how to create the dual level authen.
>>
>
>
> * Use two different types of authentication, ordered by your preference.
> Then hope that the browser agrees with that preference because all you are
> doing is offering auth types. The client browser chooses which one is used.
>
> * Use an authentication backend which supports checking credentials against
> multiple sources, e.g. PAM or similar.
>
> * Write your own wrapper script that receives data from Squid, tests both
> data sources, and passes the overall result back to Squid.
>
>
>> I read the multiple services authentication FAQ on
>> http://wiki.squid-cache.org/ConfigExamples/Authenticate/MultipleSources
>> but couldn't understand fully. I understood myacl.pl is used for
>> authentication but how I do define username and password for users
>> using this method?
>
> This example is about enforcing strict controls over which background
> authentication mechanism is used for any given client IP.
>
> You *could* use it; however, for trying both systems with failover it is
> simpler and more efficient to write an authenticator that does it. That
> example is only needed because the IP is not sent to basic auth in some
> squid versions.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.11
>  Beta testers wanted for 3.2.0.5
>
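For the wrapper-script route Amos mentions, a Basic auth helper just reads "username password" lines on stdin and answers OK or ERR. Below is a minimal sketch in Python (not from the thread: the passwd path, the plaintext "user:password" file format, and the check_ldap stub are illustrative assumptions you would replace with your real AD/LDAP check):

```python
#!/usr/bin/env python
# Sketch of a wrapper Basic-auth helper for Squid: try AD/LDAP first, then a
# local password file, and only answer ERR if both reject the credentials.
import sys

def check_ldap(user, password):
    """Placeholder: wire this to your AD/LDAP backend (e.g. via python-ldap)."""
    return False

def check_file(user, password, path="/etc/squid3/passwd"):
    """Check a flat 'user:password' file (plaintext here for brevity)."""
    try:
        with open(path) as f:
            for line in f:
                u, _, p = line.rstrip("\n").partition(":")
                if u == user and p == password:
                    return True
    except IOError:
        pass
    return False

def authenticate(user, password, checkers):
    """Return 'OK' if any backend accepts the credentials, else 'ERR'."""
    for check in checkers:
        if check(user, password):
            return "OK"
    return "ERR"

def main(stdin=sys.stdin, stdout=sys.stdout):
    """Squid's Basic helper protocol: one 'username password' line in,
    one OK/ERR line out.  Call main() when deployed under Squid."""
    checkers = [check_ldap, check_file]
    for line in stdin:
        parts = line.split()
        reply = authenticate(parts[0], parts[1], checkers) if len(parts) == 2 else "ERR"
        stdout.write(reply + "\n")
        stdout.flush()   # Squid waits for each answer; never buffer
```

It would then be wired in with a single `auth_param basic program` line pointing at this script, instead of the two conflicting lines from the original post.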


Re: [squid-users] Squid 3.1.8 and zero sized reply

2011-03-08 Thread Francesco
Hello Amos,

and thank you for the very very interesting reply!

Even though that website is not very standard, is there a way/patch to
let Squid browse it normally?
The problem with these kinds of sites is that users complain that, at
home, the same websites work fine!

Thank you again,
Francesco

>  On Mon, 7 Mar 2011 13:43:52 +0100 (CET), Francesco wrote:
>> Hello,
>>
>> i am experiencing some problems of zero sized reply even though i
>> have
>> upgraded to 3.1.8 version.
>>
>> For example, this is an example site:
>> http://itinerari.mondodelgusto.it/eventi
>> Trying this site without proxy it works.
>>
>> I have tried this workaround i found on the list:
>> acl broken dstdomain .mondodelgusto.it
>> request_header_access Accept-Encoding deny broken
>>
>> but it does not work...
>>
>> any ideas?
>>
>> Thank you!
>>
>> Francesco
>
>  Confirmed. The website is attempting to do browser and client IP
>  sniffing. But the scripts seem to crash when processing the client IP
>  passed on by a proxy.
>
>  This will happen with any proxy using the X-Forwarded-For header. It at
>  least produces a page when there is no such header, or when the header
>  contains the common "unknown". But as soon as anything other than
>  "unknown" is present it aborts the transaction.
>
>
>  Since it was browser sniffing I tried a few UA strings too. It seems
>  not to like anything strange in there either. Dying with a long hang
>  then "Your browser sent a request that this server could not
>  understand.".
>   The "Vary: User-Agent" claim that each UA type gets a unique reply
>  is bogus. The only change between page loads is an inlined advert, which
>  changes even if the same UA loads a page twice.
>
>
>  The "Vary: Host" claim that pages differ by domain name is worse
>  than useless. That is a basic assumption of HTTP being re-stated in a
>  way that merely slows down middleware processing the site.
>
>  Amos
>
>




[squid-users] positive_dns_ttl

2011-03-08 Thread jiluspo
Does Squid honor the DNS TTL if the record's true TTL happens to be lower
than positive_dns_ttl?






[squid-users] Proxying all traffic on all ports

2011-03-08 Thread Dotan Cohen
Hi all, new list member.

A student asked me if he can proxy all his internet activity through a
specific server in order to hide his IP address. He has found some
hostility directed towards him in some online activities, including
online gaming, and feels he would be better off appearing German
online. He has a full root server in Germany and wants to proxy all
his traffic, on all ports, through it.

I figured that I could install Squid on the German server and then
configure Iptables on his local machine (Ubuntu Linux) to proxy via
that server. I figured that this would be a common scenario however I
have been unable to google a solution. I have also checked the Squid
FAQ and Config Examples pages, but surprisingly did not find anything
about this. Is this possible with Squid? Should I search for a
different solution?

Thanks in advance for any advice.

-- 
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com


Re: [squid-users] Squid 3.1.8 and zero sized reply

2011-03-08 Thread Amos Jeffries

On 08/03/11 21:14, Francesco wrote:

Hello Amos,

and thank you for the very very interesting reply!

Even though that website is not very standard, is there a way/patch to
let Squid browse it normally?
The problem with these kinds of sites is that users complain that, at
home, the same websites work fine!


The directive "forwarded_for off" will make it supply pages again. 
Though the results may not be what the webmaster intended.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5
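In squid.conf that is a single global line; there is no per-site form of the directive. Note that with "off", Squid of that era sends a placeholder value rather than omitting the header entirely, which is exactly the "unknown" case this site tolerates:

```
# squid.conf - global; there is no per-site form of this directive
forwarded_for off   # sends "X-Forwarded-For: unknown" instead of the client IP
```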


[squid-users] WARNING: Ignoring unknown protocol 'cache_object' in the ACL named 'manager'

2011-03-08 Thread Ralf Hildebrandt
Todays bzr checkout is logging:

/usr/sbin/squid -k parse
2011/03/08 11:24:16| Initializing Authentication Schemes ...
2011/03/08 11:24:16| Initialized Authentication Scheme 'basic'
2011/03/08 11:24:16| Initialized Authentication Scheme 'digest'
2011/03/08 11:24:16| Initialized Authentication Scheme 'negotiate'
2011/03/08 11:24:16| Initialized Authentication Scheme 'ntlm'
2011/03/08 11:24:16| Initializing Authentication Schemes Complete.
2011/03/08 11:24:16| Processing Configuration File: /etc/squid3/squid.conf 
(depth 0)
2011/03/08 11:24:16| WARNING: Ignoring unknown protocol 'cache_object' in the 
ACL named 'manager'
2011/03/08 11:24:16| WARNING: Ignoring unknown protocol 'cache_object' in the 
ACL named 'manager'
2011/03/08 11:24:16| WARNING: Ignoring unknown protocol 'cache_object' in the 
ACL named 'manager'
2011/03/08 11:24:16| WARNING: Ignoring unknown protocol 'cache_object' in the 
ACL named 'manager'
2011/03/08 11:24:16| WARNING: Ignoring unknown protocol 'cache_object' in the 
ACL named 'manager'
2011/03/08 11:24:16| WARNING: Ignoring unknown protocol 'cache_object' in the 
ACL named 'manager'
2011/03/08 11:24:16| Processing Configuration File: 
/etc/squid3/squid-icap.conf.3.2 (depth 1)

From the config:

#Recommended minimum configuration:
acl manager proto cache_object

Can I omit that line?

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



Re: [squid-users] Proxying all traffic on all ports

2011-03-08 Thread Essad Korkic
Hi Dotan,

I'm not sure if this is possible with Squid, but it seems to me you could 
just create a VPN server on the server in Germany. Create a tunnel from the 
client PC to the server and point the client to use the tunnel for all of 
its traffic. 

I think that this will be the best solution for him. 

Greetings
Essad
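If it helps with the keyword hunt: one common way to realise Essad's suggestion is OpenVPN. A minimal sketch follows; the directive names are real OpenVPN options, but the hostname and subnet are placeholders, and the TLS certificate/key directives every real setup needs are omitted:

```
# server.conf - on the German server (certificate/key directives omitted)
dev tun
proto udp
port 1194
server 10.8.0.0 255.255.255.0    # VPN subnet handed out to clients
push "redirect-gateway def1"     # route ALL client traffic via the tunnel

# client.conf - on the home machine
client
dev tun
proto udp
remote vpn.example.de 1194       # placeholder hostname
```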



On 8 mrt. 2011, at 11:16, Dotan Cohen  wrote:

> Hi all, new list member.
> 
> A student asked me if he can proxy all his internet activity through a
> specific server in order to hide his IP address. He has found some
> hostility directed towards him in some online activities, including
> online gaming, and feels he would be better off appearing German
> online. He has a full root server in Germany and wants to proxy all
> his traffic, on all ports, through it.
> 
> I figured that I could install Squid on the German server and then
> configure Iptables on his local machine (Ubuntu Linux) to proxy via
> that server. I figured that this would be a common scenario however I
> have been unable to google a solution. I have also checked the Squid
> FAQ and Config Examples pages, but surprisingly did not find anything
> about this. Is this possible with Squid? Should I search for a
> different solution?
> 
> Thanks in advance for any advice.
> 
> -- 
> Dotan Cohen
> 
> http://gibberish.co.il
> http://what-is-what.com


Re: [squid-users] Proxying all traffic on all ports

2011-03-08 Thread Dotan Cohen
On Tue, Mar 8, 2011 at 12:29, Essad Korkic  wrote:
> Hi Dotan,
>
> I'm not sure if this is possible with Squid, but it seems to me you could 
> just create a VPN server on the server in Germany. Create a tunnel from the 
> client PC to the server and point the client to use the tunnel for all of 
> its traffic.
>
> I think that this will be the best solution for him.
>

Thank you, Essad. I'll start googling. Just give me a little push so
I'll know which keywords to search for: what software would be
involved here? What would I install on the server, and would Iptables
be enough on the home computer or would he need something there as
well?

Thanks!

-- 
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com


[squid-users] How can Squid behave gracefully when there is no Memory?

2011-03-08 Thread Saurabh Agarwal
Hi All

What if there is no memory left in the system and memory allocation for Squid 
fails? I think the default behavior is to crash. Can this be handled gracefully?

Regards,
Saurabh


Re: [squid-users] Proxying all traffic on all ports

2011-03-08 Thread Amos Jeffries

On 08/03/11 23:16, Dotan Cohen wrote:

Hi all, new list member.

A student asked me if he can proxy all his internet activity through a
specific server in order to hide his IP address. He has found some
hostility directed towards him in some online activities, including
online gaming, and feels he would be better off appearing German
online. He has a full root server in Germany and wants to proxy all
his traffic, on all ports, through it.


Either this is an irrelevant sob story, or he has a case for getting some 
people banned from certain activities. That would be worth looking into 
if I were you. You do want to protect the innocent while abandoning the 
trolls to their chosen fate.




I figured that I could install Squid on the German server and then
configure Iptables on his local machine (Ubuntu Linux) to proxy via
that server. I figured that this would be a common scenario however I
have been unable to google a solution. I have also checked the Squid
FAQ and Config Examples pages, but surprisingly did not find anything
about this. Is this possible with Squid? Should I search for a
different solution?


Squid is an HTTP proxy. Just HTTP.  None of the rest of the routing or 
tunneling setup involves Squid at all. That would be why you can't find 
references detailing it.


Apart from the iptables part you are thinking of, the other pieces are a 
"tunnel interface" and a "router". Squid would be a bit of candy on top, 
but it is not necessary.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


Re: [squid-users] positive_dns_ttl

2011-03-08 Thread Amos Jeffries

On 08/03/11 21:19, jiluspo wrote:

Does Squid honor the DNS TTL if the record's true TTL happens to be lower
than positive_dns_ttl?



Yes. Exactly that.

BUT, only for DNS responses which contain an answer
(ie positive results).

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5
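For reference, the two squid.conf directives involved; the values shown are the usual defaults of that era, so treat them as illustrative and check your own build:

```
# squid.conf - DNS result caching
positive_dns_ttl 6 hours    # upper bound only; shorter record TTLs are honoured
negative_dns_ttl 1 minute   # lifetime for failed lookups (no answer)
```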


Re: [squid-users] How can Squid behave gracefully when there is no Memory?

2011-03-08 Thread Amos Jeffries

On 09/03/11 00:11, Saurabh Agarwal wrote:

Hi All

What if there is no memory left in the system and memory allocation for Squid 
fails? I think the default behavior is to crash. Can this be handled gracefully?

Regards,
Saurabh


Correct. When the machine is not able to allocate memory Squid will crash.

There are some solutions possible, but none are being worked on yet. Would 
you like to assist with coding a fix? Then please join the squid-dev 
mailing list and we can get started :)


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


Re: [squid-users] Squid 3.1.8 and zero sized reply

2011-03-08 Thread Francesco
Hello Amos!

>The directive "forwarded_for off" will make it supply pages again.
>Though the results may not be what the webmaster intended.

Nice reply, it works!
Do you see any contraindication to putting forwarded_for off by default?
Can it be applied only to a certain website, or is it a general directive?

Thank you again!
Francesco


> On 08/03/11 21:14, Francesco wrote:
>> Hello Amos,
>>
>> and thank you for the very very interesting reply!
>>
>> Even though that website is not very standard, is there a way/patch to
>> let Squid browse it normally?
>> The problem with these kinds of sites is that users complain that, at
>> home, the same websites work fine!
>
> The directive "forwarded_for off" will make it supply pages again.
> Though the results may not be what the webmaster intended.
>
> Amos
> --
> Please be using
>Current Stable Squid 2.7.STABLE9 or 3.1.11
>Beta testers wanted for 3.2.0.5
>




Re: [squid-users] WARNING: Ignoring unknown protocol 'cache_object' in the ACL named 'manager'

2011-03-08 Thread Amos Jeffries

On 08/03/11 23:29, Ralf Hildebrandt wrote:

Todays bzr checkout is logging:

/usr/sbin/squid -k parse
2011/03/08 11:24:16| Initializing Authentication Schemes ...
2011/03/08 11:24:16| Initialized Authentication Scheme 'basic'
2011/03/08 11:24:16| Initialized Authentication Scheme 'digest'
2011/03/08 11:24:16| Initialized Authentication Scheme 'negotiate'
2011/03/08 11:24:16| Initialized Authentication Scheme 'ntlm'
2011/03/08 11:24:16| Initializing Authentication Schemes Complete.
2011/03/08 11:24:16| Processing Configuration File: /etc/squid3/squid.conf 
(depth 0)
2011/03/08 11:24:16| WARNING: Ignoring unknown protocol 'cache_object' in the 
ACL named 'manager'
2011/03/08 11:24:16| WARNING: Ignoring unknown protocol 'cache_object' in the 
ACL named 'manager'
2011/03/08 11:24:16| WARNING: Ignoring unknown protocol 'cache_object' in the 
ACL named 'manager'
2011/03/08 11:24:16| WARNING: Ignoring unknown protocol 'cache_object' in the 
ACL named 'manager'
2011/03/08 11:24:16| WARNING: Ignoring unknown protocol 'cache_object' in the 
ACL named 'manager'
2011/03/08 11:24:16| WARNING: Ignoring unknown protocol 'cache_object' in the 
ACL named 'manager'
2011/03/08 11:24:16| Processing Configuration File: 
/etc/squid3/squid-icap.conf.3.2 (depth 1)

 From the config:

#Recommended minimum configuration:
acl manager proto cache_object

Can I omit that line?



Ouch, sorry. Thanks for mentioning it.
 I thought I had caught all of those a few days ago with
http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-11254.patch. 
Can you check that your bzr is up to revno 11268 and that the acl and anyp 
libraries are okay?


The line is still needed for controlling who can get cachemgr reports 
(when it works). The broken Squid just ignores the ACL, so there is no harm 
in leaving it.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5
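For context, the recommended-minimum block that line belongs to, approximately as shipped in the squid.conf of that era:

```
# squid.conf - recommended minimum cachemgr access control
acl manager proto cache_object
acl localhost src 127.0.0.1/32
http_access allow manager localhost
http_access deny manager
```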


Re: [squid-users] help with squid redirectors

2011-03-08 Thread Osmany
On Tue, 2011-03-08 at 12:21 +1300, Amos Jeffries wrote:
> On Tue, 08 Mar 2011 11:58:57 +1300, Amos Jeffries wrote:
> > On Mon, 07 Mar 2011 16:59:07 -0500, Osmany wrote:
> >> Greetings everyone,
> >>
> >> So I'm having trouble with my squid proxy-cache server. I recently 
> >> added
> >> a redirect program because I had to make users go to my kaspersky 
> >> admin
> >> kit and my WSUS services to get their updates and it works fine but 
> >> I
> >> get constantly a warning and squid just collapses after a few 
> >> minutes of
> >> run time. This is what I get in my cache.log:
> >>
> >> 2011/03/07 15:54:17| WARNING: All url_rewriter processes are busy.
> >> 2011/03/07 15:54:17| WARNING: up to 465 pending requests queued
> >> 2011/03/07 15:54:17| storeDirWriteCleanLogs: Starting...
> >> 2011/03/07 15:54:17| WARNING: Closing open FD 1455
> >> 2011/03/07 15:54:17| commSetEvents: epoll_ctl(EPOLL_CTL_DEL): failed 
> >> on
> >> fd=1455: (1) Operation not permitted
> >> 2011/03/07 15:54:17| 65536 entries written so far.
> >> 2011/03/07 15:54:17|131072 entries written so far.
> >> 2011/03/07 15:54:17| WARNING: Closing open FD 1456
> >> 2011/03/07 15:54:17| commSetEvents: epoll_ctl(EPOLL_CTL_DEL): failed 
> >> on
> >> fd=1456: (1) Operation not permitted
> >> 2011/03/07 15:54:17|   Finished.  Wrote 139965 entries.
> >> 2011/03/07 15:54:17|   Took 0.1 seconds (1288729.1 entries/sec).
> >> FATAL: Too many queued url_rewriter requests (465 on 228)
> >> Squid Cache (Version 2.7.STABLE7): Terminated abnormally.
> >>
> >> This is what I have in the squid.conf
> >>
> >> #  TAG: url_rewrite_program
> >> url_rewrite_program /etc/squid/redirect
> >>
> >> #  TAG: url_rewrite_children
> >> url_rewrite_children 100
> >>
> >> #  TAG: url_rewrite_concurrency
> >> url_rewrite_concurrency 50
> >>
> >> #  TAG: url_rewrite_access
> >> url_rewrite_access allow redirect
> >>
> >> And this is what I have in my redirector script
> >>
> >> #!/usr/bin/perl
> >> BEGIN {$|=1}
> >> while (<>) {
> >>  @X = split;
> >>  $url = $X[0];
> >>  if ($url =~ /^http:\/\/dnl(.*)kaspersky(.*)com(.*)/) {
> >>   print 
> >> "301:ftp:\/\/dnl-kaspersky\.quimefa\.cu\:2122\/Updates";
> >>  }
> >>  elsif ($url =~ /^http:\/\/(.*)windowsupdate(.*)/) {
> >>   print "301:http:\/\/windowsupdate\.quimefa\.cu\:8530";
> >>  }
> >> }
> >>
> >> Can you please help me to solve this?
> >
> > Your script does not support concurrency. When that is configured in
> > squid there will be 2 space-delimited fields to handle.
> > First one being the ID of the request channel, not the URL.
> >
> 
>  Oops, I missed a few other things too:
>   * 'else' case is needed to print the no-change result back to Squid
>   * newlines need to be printed in perl
> 
> 
>$url = $X[1];
>if ($url =~ /^http:\/\/dnl(.*)kaspersky(.*)com(.*)/) {
> print $X[0]." 
>  301:ftp:\/\/dnl-kaspersky\.quimefa\.cu\:2122\/Updates\n";
>}
>elsif ($url =~ /^http:\/\/(.*)windowsupdate(.*)/) {
> print $X[0]." 301:http:\/\/windowsupdate\.quimefa\.cu\:8530\n";
>}
>else {
> print $X[0]."\n";
>}
> 
>  Amos

So this is what I have now but it doesn't work. I've tried it manually:

#!/usr/bin/perl
BEGIN {$|=1}
while (<>) {
 @X = split;
 $url = $X[1];
   if ($url =~ /^http:\/\/dnl(.*)kaspersky(.*)com(.*)/) {
print $X[0]." 
 301:ftp:\/\/dnl-kaspersky\.quimefa\.cu\:2122\/Updates\n";
   }
   elsif ($url =~ /^http:\/\/(.*)windowsupdate(.*)/) {
print $X[0]." 301:http:\/\/windowsupdate\.quimefa\.cu\:8530\n";
   }
   else {
print $X[0]."\n";
   }
}

It just keeps returning the same URL that I enter. Help, please?



Re: [squid-users] Resetting cache using HTCP clr

2011-03-08 Thread Amos Jeffries

On 05/03/11 13:46, Andy Nagai wrote:

Our sites are template based, so a lot of pages share the same template.
Problem with cache is a broken page will show up until the next refresh. We
need the ability to reset the cache site wide so broken pages will not show
after a fix.

Amos mentioned using the HTCP clr protocol to reset the cache. How exactly
is this done? We don't have access to the squid server itself.

Andy



Hi Andy,
  Unfortunately we don't have any easy tools for the admin to do it 
directly. It just needs something to construct the relevant HTCP CLR UDP 
packet, though.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


Re: [squid-users] Squid 3.1.8 and zero sized reply

2011-03-08 Thread Amos Jeffries

On 09/03/11 00:53, Francesco wrote:

Hello Amos!


The directive "forwarded_for off" will make it supply pages again.
Though the results may not be what the webmaster intended.


Nice reply, it works!
Do you see any contraindication to putting forwarded_for off by default?


Some like it, some don't. I can't really answer that one.


Can it be applied only to a certain website, or is it a general directive?


It's general.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


[Fwd: Re: [squid-users] help with squid redirectors]

2011-03-08 Thread Osmany
 Forwarded Message 
From: Osmany 
Reply-to: osm...@oc.quimefa.cu
To: squid-users 
Subject: Re: [squid-users] help with squid redirectors
Date: Tue, 08 Mar 2011 07:20:11 -0500


Sorry for the two messages but I also observed that in the cache.log I have 
this:

helperHandleRead: unexpected reply on channel 301 from url_rewriter #1 ' 
301:ftp://dnl-kaspersky.quimefa.cu:2122/Updates'
2011/03/08 07:15:20| helperHandleRead: unexpected reply on channel 301 from 
url_rewriter #2 ' 301:ftp://dnl-kaspersky.quimefa.cu:2122/Updates'
2011/03/08 07:15:23| helperHandleRead: unexpected reply on channel 301 from 
url_rewriter #1 ' 301:ftp://dnl-kaspersky.quimefa.cu:2122/Updates'
2011/03/08 07:15:29| helperHandleRead: unexpected reply on channel 301 from 
url_rewriter #1 ' 301:ftp://dnl-kaspersky.quimefa.cu:2122/Updates'
2011/03/08 07:15:29| helperHandleRead: unexpected reply on channel 301 from 
url_rewriter #1 ' 301:ftp://dnl-kaspersky.quimefa.cu:2122/Updates'
2011/03/08 07:15:29| helperHandleRead: unexpected reply on channel 301 from 
url_rewriter #1 ' 301:ftp://dnl-kaspersky.quimefa.cu:2122/Updates'
2011/03/08 07:15:31| helperHandleRead: unexpected reply on channel 301 from 
url_rewriter #2 ' 301:ftp://dnl-kaspersky.quimefa.cu:2122/Updates'
2011/03/08 07:15:32| helperHandleRead: unexpected reply on channel 301 from

Re: [squid-users] WARNING: Ignoring unknown protocol 'cache_object' in the ACL named 'manager'

2011-03-08 Thread Ralf Hildebrandt
* Amos Jeffries :

> >#Recommended minimum configuration:
> >acl manager proto cache_object
> >
> >Can I omit that line?
> >
> 
> Ouch, sorry. Thanks for mentioning.
>  I thought I got all those a few days ago with
> http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-11254.patch.
> Can you check your bzr is up to revno 11268 

# bzr update
Doing on-the-fly conversion from RepositoryFormatKnitPack1() to
RepositoryFormatKnitPack6().
   
   This may take some time. Upgrade the repositories to the same
format for better performance.
Tree is up to date at revision 11268 of branch 
http://bzr.squid-cache.org/bzr/squid3/trunk  

> and the acl and anyp libraries are okay?
How do I check that?

> The line is still needed for controlling who can get cachemgr reports
> (when it works). The broken Squid just ignores the ACL, so there is no
> harm in leaving it.

good!

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



Re: [squid-users] help with squid redirectors

2011-03-08 Thread Amos Jeffries

On 09/03/11 01:20, Osmany wrote:

So this is what I have now but it doesn't work. I've tried it manually:

#!/usr/bin/perl
BEGIN {$|=1}
while (<>) {
  @X = split;
  $url = $X[1];
if ($url =~ /^http:\/\/dnl(.*)kaspersky(.*)com(.*)/) {
 print $X[0]."
  301:ftp:\/\/dnl-kaspersky\.quimefa\.cu\:2122\/Updates\n";
}
elsif ($url =~ /^http:\/\/(.*)windowsupdate(.*)/) {
 print $X[0]." 301:http:\/\/windowsupdate\.quimefa\.cu\:8530\n";
}
else {
 print $X[0]."\n";
}
}

It just keeps returning the same URL that I enter. Help, please?



Did you add the concurrency channel ID before the URL on each manually 
entered line?

eg  $id $url $garbage

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5
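To test such a helper by hand with url_rewrite_concurrency enabled, each input line must start with the channel ID, and the reply (ID plus result) must come back as one single line; the "unexpected reply on channel 301" errors earlier in the thread are most likely what Squid reports when the reply string wraps onto a second line. Here is a sketch of the same rewrite logic in Python (the patterns and target URLs are taken from the thread; the no-change reply of a bare channel ID follows Amos's example; call main() when running it under Squid):

```python
# Concurrency-aware url_rewrite helper sketch: channel ID is field 0, URL is
# field 1, and the whole reply (ID + result) must go out as ONE line, or
# Squid misparses part of the result as a channel number.
import re
import sys

RULES = [
    (re.compile(r"^http://dnl.*kaspersky.*com"),
     "301:ftp://dnl-kaspersky.quimefa.cu:2122/Updates"),
    (re.compile(r"^http://.*windowsupdate"),
     "301:http://windowsupdate.quimefa.cu:8530"),
]

def rewrite(line):
    """Map one input line 'ID URL extras...' to the reply line Squid expects."""
    fields = line.split()
    if len(fields) < 2:
        return line.strip()          # malformed input: echo it back unchanged
    chan, url = fields[0], fields[1]
    for pattern, target in RULES:
        if pattern.match(url):
            return "%s %s" % (chan, target)
    return chan                      # no-change result: just the channel ID

def main(stdin=sys.stdin, stdout=sys.stdout):
    for line in stdin:
        stdout.write(rewrite(line) + "\n")
        stdout.flush()               # one unbuffered reply per request
```

Fed a manual line such as `0 http://dnl-01.kaspersky-labs.com/index`, it replies `0 301:ftp://dnl-kaspersky.quimefa.cu:2122/Updates` on a single line, which is the shape Squid expects.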


Re: [squid-users] help with squid redirectors

2011-03-08 Thread Osmany
On Wed, 2011-03-09 at 01:33 +1300, Amos Jeffries wrote:
> On 09/03/11 01:20, Osmany wrote:
> > On Tue, 2011-03-08 at 12:21 +1300, Amos Jeffries wrote:
> >> On Tue, 08 Mar 2011 11:58:57 +1300, Amos Jeffries wrote:
> >>> On Mon, 07 Mar 2011 16:59:07 -0500, Osmany wrote:
>  Greetings everyone,
> 
>  So I'm having trouble with my squid proxy-cache server. I recently
>  added
>  a redirect program because I had to make users go to my kaspersky
>  admin
>  kit and my WSUS services to get their updates and it works fine but
>  I
>  get constantly a warning and squid just collapses after a few
>  minutes of
>  run time. This is what I get in my cache.log:
> 
>  2011/03/07 15:54:17| WARNING: All url_rewriter processes are busy.
>  2011/03/07 15:54:17| WARNING: up to 465 pending requests queued
>  2011/03/07 15:54:17| storeDirWriteCleanLogs: Starting...
>  2011/03/07 15:54:17| WARNING: Closing open FD 1455
>  2011/03/07 15:54:17| commSetEvents: epoll_ctl(EPOLL_CTL_DEL): failed
>  on
>  fd=1455: (1) Operation not permitted
>  2011/03/07 15:54:17| 65536 entries written so far.
>  2011/03/07 15:54:17|131072 entries written so far.
>  2011/03/07 15:54:17| WARNING: Closing open FD 1456
>  2011/03/07 15:54:17| commSetEvents: epoll_ctl(EPOLL_CTL_DEL): failed
>  on
>  fd=1456: (1) Operation not permitted
>  2011/03/07 15:54:17|   Finished.  Wrote 139965 entries.
>  2011/03/07 15:54:17|   Took 0.1 seconds (1288729.1 entries/sec).
>  FATAL: Too many queued url_rewriter requests (465 on 228)
>  Squid Cache (Version 2.7.STABLE7): Terminated abnormally.
> 
>  This is what I have in the squid.conf
> 
>  #  TAG: url_rewrite_program
>  url_rewrite_program /etc/squid/redirect
> 
>  #  TAG: url_rewrite_children
>  url_rewrite_children 100
> 
>  #  TAG: url_rewrite_concurrency
>  url_rewrite_concurrency 50
> 
>  #  TAG: url_rewrite_access
>  url_rewrite_access allow redirect
> 
>  And this is what I have in my redirector script
> 
>  #!/usr/bin/perl
>  BEGIN {$|=1}
>  while (<>) {
>    @X = split;
>    $url = $X[0];
>    if ($url =~ /^http:\/\/dnl(.*)kaspersky(.*)com(.*)/) {
> print
>  "301:ftp:\/\/dnl-kaspersky\.quimefa\.cu\:2122\/Updates";
>    }
>    elsif ($url =~ /^http:\/\/(.*)windowsupdate(.*)/) {
> print "301:http:\/\/windowsupdate\.quimefa\.cu\:8530";
>    }
>  }
> 
>  Can you please help me to solve this?
> >>>
> >>> Your script does not support concurrency. When that is configured in
> >>> squid there will be 2 space-delimited fields to handle.
> >>> First one being the ID of the request channel, not the URL.
> >>>
> >>
> >>   Oops, I missed a few other things too:
> >>* 'else' case is needed to print the no-change result back to Squid
> >>* newlines need to be printed in perl
> >>
> >>
> >> $url = $X[1];
> >> if ($url =~ /^http:\/\/dnl(.*)kaspersky(.*)com(.*)/) {
> >>  print $X[0]." 301:ftp:\/\/dnl-kaspersky\.quimefa\.cu\:2122\/Updates\n";
> >> }
> >> elsif ($url =~ /^http:\/\/(.*)windowsupdate(.*)/) {
> >>  print $X[0]." 301:http:\/\/windowsupdate\.quimefa\.cu\:8530\n";
> >> }
> >> else {
> >>  print $X[0]."\n";
> >> }
> >>
> >>   Amos
> >
> > So this is what I have now but it doesn't work. I've tried it manually:
> >
> > #!/usr/bin/perl
> > BEGIN {$|=1}
> > while (<>) {
> >   @X = split;
> >   $url = $X[1];
> > if ($url =~ /^http:\/\/dnl(.*)kaspersky(.*)com(.*)/) {
> >  print $X[0]." 301:ftp:\/\/dnl-kaspersky\.quimefa\.cu\:2122\/Updates\n";
> > }
> > elsif ($url =~ /^http:\/\/(.*)windowsupdate(.*)/) {
> >  print $X[0]." 301:http:\/\/windowsupdate\.quimefa\.cu\:8530\n";
> > }
> > else {
> >  print $X[0]."\n";
> > }
> > }
> >
> > it just keeps on returning the same url that I enter. help please?
> >
> 
> Did you add the concurrency channel ID before the URL on each manually 
> entered line?
> eg  $id $url $garbage
> 
> Amos

Yes, you are right. I was missing the concurrency channel ID in my manual
test. In fact, when I watch my access.log I see it is actually working,
and when I look at the cache.log I see that it is working concurrently
across all the available channels.




[Fwd: Re: [squid-users] help with squid redirectors]

2011-03-08 Thread Osmany
 Forwarded Message 
From: Osmany 
Reply-to: osm...@oc.quimefa.cu
To: squid-users 
Subject: Re: [squid-users] help with squid redirectors
Date: Tue, 08 Mar 2011 07:41:47 -0500

On Wed, 2011-03-09 at 01:33 +1300, Amos Jeffries wrote:
> On 09/03/11 01:20, Osmany wrote:
> > On Tue, 2011-03-08 at 12:21 +1300, Amos Jeffries wrote:
> >> On Tue, 08 Mar 2011 11:58:57 +1300, Amos Jeffries wrote:
> >>> On Mon, 07 Mar 2011 16:59:07 -0500, Osmany wrote:
>  Greetings everyone,
> 
>  So I'm having trouble with my squid proxy-cache server. I recently
>  added
>  a redirect program because I had to make users go to my kaspersky
>  admin
>  kit and my WSUS services to get their updates and it works fine but
>  I
>  get constantly a warning and squid just collapses after a few
>  minutes of
>  run time. This is what I get in my cache.log:
> 
>  2011/03/07 15:54:17| WARNING: All url_rewriter processes are busy.
>  2011/03/07 15:54:17| WARNING: up to 465 pending requests queued
>  2011/03/07 15:54:17| storeDirWriteCleanLogs: Starting...
>  2011/03/07 15:54:17| WARNING: Closing open FD 1455
>  2011/03/07 15:54:17| commSetEvents: epoll_ctl(EPOLL_CTL_DEL): failed
>  on
>  fd=1455: (1) Operation not permitted
>  2011/03/07 15:54:17| 65536 entries written so far.
>  2011/03/07 15:54:17|131072 entries written so far.
>  2011/03/07 15:54:17| WARNING: Closing open FD 1456
>  2011/03/07 15:54:17| commSetEvents: epoll_ctl(EPOLL_CTL_DEL): failed
>  on
>  fd=1456: (1) Operation not permitted
>  2011/03/07 15:54:17|   Finished.  Wrote 139965 entries.
>  2011/03/07 15:54:17|   Took 0.1 seconds (1288729.1 entries/sec).
>  FATAL: Too many queued url_rewriter requests (465 on 228)
>  Squid Cache (Version 2.7.STABLE7): Terminated abnormally.
> 
>  This is what I have in the squid.conf
> 
>  #  TAG: url_rewrite_program
>  url_rewrite_program /etc/squid/redirect
> 
>  #  TAG: url_rewrite_children
>  url_rewrite_children 100
> 
>  #  TAG: url_rewrite_concurrency
>  url_rewrite_concurrency 50
> 
>  #  TAG: url_rewrite_access
>  url_rewrite_access allow redirect
> 
>  And this is what I have in my redirector script
> 
>  #!/usr/bin/perl
>  BEGIN {$|=1}
>  while (<>) {
>    @X = split;
>    $url = $X[0];
>    if ($url =~ /^http:\/\/dnl(.*)kaspersky(.*)com(.*)/) {
> print
>  "301:ftp:\/\/dnl-kaspersky\.quimefa\.cu\:2122\/Updates";
>    }
>    elsif ($url =~ /^http:\/\/(.*)windowsupdate(.*)/) {
> print "301:http:\/\/windowsupdate\.quimefa\.cu\:8530";
>    }
>  }
> 
>  Can you please help me to solve this?
> >>>
> >>> Your script does not support concurrency. When that is configured in
> >>> squid there will be 2 space-delimited fields to handle.
> >>> First one being the ID of the request channel, not the URL.
> >>>
> >>
> >>   Oops, I missed a few other things too:
> >>* 'else' case is needed to print the no-change result back to Squid
> >>* newlines need to be printed in perl
> >>
> >>
> >> $url = $X[1];
> >> if ($url =~ /^http:\/\/dnl(.*)kaspersky(.*)com(.*)/) {
> >>  print $X[0]." 301:ftp:\/\/dnl-kaspersky\.quimefa\.cu\:2122\/Updates\n";
> >> }
> >> elsif ($url =~ /^http:\/\/(.*)windowsupdate(.*)/) {
> >>  print $X[0]." 301:http:\/\/windowsupdate\.quimefa\.cu\:8530\n";
> >> }
> >> else {
> >>  print $X[0]."\n";
> >> }
> >>
> >>   Amos
> >
> > So this is what I have now but it doesn't work. I've tried it manually:
> >
> > #!/usr/bin/perl
> > BEGIN {$|=1}
> > while (<>) {
> >   @X = split;
> >   $url = $X[1];
> > if ($url =~ /^http:\/\/dnl(.*)kaspersky(.*)com(.*)/) {
> >  print $X[0]." 301:ftp:\/\/dnl-kaspersky\.quimefa\.cu\:2122\/Updates\n";
> > }
> > elsif ($url =~ /^http:\/\/(.*)windowsupdate(.*)/) {
> >  print $X[0]." 301:http:\/\/windowsupdate\.quimefa\.cu\:8530\n";
> > }
> > else {
> >  print $X[0]."\n";
> > }
> > }
> >
> > it just keeps on returning the same url that I enter. help please?
> >
> 
> Did you add the concurrency channel ID before the URL on each manually 
> entered line?
> eg  $id $url $garbage
> 
> Amos

Yes, you are right. I was missing the concurrency channel ID in my manual
test. In fact, when I watch my access.log I see it is actually working,
and when I look at the cache.log I see that it is working concurrently
across all the available channels.

Now actually I found out that my script only works for one of the two
conditions I have. the one that works: windowsupdate, I can see that it
is actually redirecting because I see it in the access.log file like
this:

TCP_MISS/301 330 HEAD
http://download.microsoft.com/windowsupdate/v5/redir/wuredir.cab?

But the other condition, the kaspersky one, I am pretty sure it is not
working, because in the access.log this is what I see:
TCP_MISS/200 946 GET
http://dnl-17.geo.kaspersky.com/diffs/bases/av/kdb/i386/base048c.kdc.h

[squid-users] FreeBSD port for squid 3.2.0.5?

2011-03-08 Thread Guy Helmer
Is anyone working on a FreeBSD port for squid 3.2.0.5?

Thanks,
Guy



Re: [squid-users] connection-auth

2011-03-08 Thread Vernon A. Fort

 On 3/7/2011 7:28 PM, Amos Jeffries wrote:

On Mon, 07 Mar 2011 17:14:40 -0600, Vernon A. Fort wrote:

What do you mean by "external groups"?  people accessing from out on 
the Internet?


NP: NTLM does not work reliably across the wide Internet due to its 
design as a LAN protocol. Kerberos is only slightly better over WAN.



The key authentication difference between XP and Win7 is NTLM. In Win7 
it has been outright removed from some services (the Server ones) and 
downgraded in all others (client services) to require manual 
configuration turning back on.
 The recommended path is to add Kerberos alongside NTLM until you can 
turn off NTLM entirely. If you absolutely can't start the transition to 
Kerberos, then that manual configuration of Windows Vista or later 
boxes is required to downgrade their security.


Amos

Our setup is simple - just configure the proxy setting in the browser 
and start browsing - no auth to squid itself.  The site we are trying to 
connect to is an internet based windows sharepoint server which requires 
authentication:


Cannot connect using version(s) 3.1.[8,9], regardless of the 
combinations of connection-auth and pipeline_prefetch.  I have also 
tried the registry hacks for win7 without success.


I downgraded to version 2.7.9 using the default squid.conf (no 
adjustments whatsoever) and CAN successfully connect (authenticate) from 
both win7 and xp using IE/Firefox/Chrome.  I am by no means an expert 
but have experienced greater difficulty using the 3.* versions when 
connecting to windows based servers which require authentication.  My 
observations so far doing NOTHING to the windows boxes is:


Successful connections using version 2.7.9 - default squid.conf.
Unsuccessful connections using 3.1.7 or higher - regardless of the 
connection-auth setting, with or without registry hacks.


Vernon


[squid-users] Re: Squid DG Sandwich... Squid3 (auth) ->DansGuardian->Squid3(proxy)

2011-03-08 Thread bwright
That's a good idea... I can try that... I got a project thrown my way, so I'm
not sure I'll have time this week, but this week or next I will try it out and
see what I get.

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-DG-Sandwich-Squid3-auth-DansGuardian-Squid3-proxy-tp3311884p3341563.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid DG Sandwich... Squid3 (auth) -> DansGuardian->Squid3(proxy)

2011-03-08 Thread bwright
I'll check it out... at first glance I'm not sure it does everything I want...
it needs to tie in with Active Directory (users/groups) and do better than
URL filtering (DG uses PICS categories and is a content filter, not just
URL)...  but I'll look at it more.

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-DG-Sandwich-Squid3-auth-DansGuardian-Squid3-proxy-tp3311884p3341574.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Squid 3.1 SSL bump and transparent mode

2011-03-08 Thread Francesco
Hello,

by activating SSL bump in Squid 3.1, is it possible to transparently proxy
HTTPS requests?

I have read some documentation and posts, but it is not clear to me whether
it is possible (with a browser warning) or not...

Any workaround/ideas?

Thank you!
Francesco



[squid-users] Re: Squid DG Sandwich... Squid3 (auth) -> DansGuardian->Squid3(proxy)

2011-03-08 Thread bwright
It does do true content filtering... I was going to start looking into the
details (like whether it can get users/groups from A.D. or squid),

but then I noticed it costs $$$ (and I would prefer to avoid spending $$$ on
the project).

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-DG-Sandwich-Squid3-auth-DansGuardian-Squid3-proxy-tp3311884p3341811.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] xcalloc Fatal (squid 3.1.9)

2011-03-08 Thread Víctor José Hernández Gómez

Hi all,

I have found a message such as:

FATAL: xcalloc: Unable to allocate 1 blocks of 536870912 bytes!
Looks like a strange big block.

My squid version is 3.1.9. Any suggestions?

Regards,
--
Víctor Hernández
Centro de Informatica y Comunicaciones


[squid-users] Performance between 2.x/3.1

2011-03-08 Thread Baird, Josh
Are there any docs that reference performance differences between 2.6/7
and 3.1?  I'm running several 2.6 clusters (forward proxy) with all
caching disabled doing 20-30mbps per node.  The nodes are not far from
idle in terms of CPU and memory.  They are currently running
RHEL5/x86_64.  Should I expect to see similar performance on 3.1, or
even better?

Thanks,

Josh


Re: [squid-users] Proxying all traffic on all ports

2011-03-08 Thread Dotan Cohen
On Tue, Mar 8, 2011 at 13:26, Amos Jeffries  wrote:
> Either this is irrelevant sob story, or he has a case for getting some
> people banned from certain activities. That would be worth looking into
> IIWY. You do want to be protecting the innocent while abandoning the trolls
> to their chosen fate.
>

No, I've experienced it too. We're Israeli, some people see that as
reason enough to be mean. I completely understand the situation.
Actually, with a name such as Amos, you might be familiar with it!


>> I figured that I could install Squid on the German server and then
>> configure Iptables on his local machine (Ubuntu Linux) to proxy via
>> that server. I figured that this would be a common scenario however I
>> have been unable to google a solution. I have also checked the Squid
>> FAQ and Config Examples pages, but surprisingly did not find anything
>> about this. Is this possible with Squid? Should I search for a
>> different solution?
>
> Squid is an HTTP proxy. Just HTTP.  None of the rest of the routing or
> tunneling setup involves Squid at all. That would be why you can't find
> references detailing it.
>

Ah, I see, thanks. Actually, the Squid homepage mentions FTP "and
more" but I understand from that that Squid only proxies protocols
that it understands, not general traffic.


> Apart from iptables which you are thinking of the other parts are "tunnel
> interface" and "router". Squid would be a bit of candy on top, but not
> necessary.
>

Perfect, thanks Amos. I did not know that routers might have this
ability built in. I'm off to google!

Have a great evening.

-- 
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com


[squid-users] Re: Squid DG Sandwich... Squid3 (auth) -> DansGuardian->Squid3(proxy)

2011-03-08 Thread sichent

On 3/8/2011 6:10 PM, bwright wrote:

It does do true content filtering... I was going to start looking into the
details (like if it can get users/groups from A.D. or squid)

but then I noticed it cost $$$ (and I would prefer avoid spending $$$ on the
project).

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-DG-Sandwich-Squid3-auth-DansGuardian-Squid3-proxy-tp3311884p3341811.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Not sure, but you can try qlproxy from QuintoLabs; I am sure it does not 
cost anything :)





Re: [squid-users] Squid 3.1 SSL bump and transparent mode

2011-03-08 Thread Amos Jeffries

On Tue, 8 Mar 2011 17:10:43 +0100 (CET), Francesco wrote:

Hello,

by activating SSL bump in Squid 3.1, is it possible to transparently proxy
https requests?



No. It is not.


I have read some documentation and posts, but it is not clear to me whether
it is possible (with a browser warning) or not...

Any workaround/ideas?


WPAD "transparent configuration" for browsers. Both DNS and DHCP 
methods are recommended for best browser coverage.

http://wiki.squid-cache.org/SquidFaq/ConfiguringBrowsers#Fully_Automatically_Configuring_Browsers_for_WPAD

Amos
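As a sketch of what the WPAD methods above would serve to browsers, a minimal PAC file might look like the following. This is written in JavaScript because PAC files are JavaScript by definition; the proxy hostname and port are placeholders, not values from this thread:

```javascript
// Minimal wpad.dat / proxy.pac sketch. "squid.example.com:3128" is an
// assumed placeholder for the local Squid instance.
function FindProxyForURL(url, host) {
    // Plain (dotless) intranet hostnames bypass the proxy.
    if (host.indexOf(".") === -1) {
        return "DIRECT";
    }
    // Everything else goes through Squid, falling back to direct.
    return "PROXY squid.example.com:3128; DIRECT";
}
```

Serving this file at the URL that the DNS (wpad hostname) and DHCP (option 252) methods advertise lets browsers pick up the proxy automatically.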



Re: [squid-users] xcalloc Fatal (squid 3.1.9)

2011-03-08 Thread Amos Jeffries

On Tue, 08 Mar 2011 18:38:42 +0100, Víctor José Hernández Gómez wrote:

Hi all,

I have found a message such as:

FATAL: xcalloc: Unable to allocate 1 blocks of 536870912 bytes!
Looks like a strange big block.

My squid version is 3.1.9. Any suggestions?


Probably bug http://bugs.squid-cache.org/show_bug.cgi?id=3113

Please try an upgrade to 3.1.11 to resolve that and a few other smaller 
leaks.


Amos


Re: [squid-users] Performance between 2.x/3.1

2011-03-08 Thread Amos Jeffries

On Tue, 8 Mar 2011 12:36:49 -0600, Baird, Josh wrote:
Are there any docs that reference performance differences between 2.6/7
and 3.1?  I'm running several 2.6 clusters (forward proxy) with all
caching disabled doing 20-30mbps per node.  The nodes are not far from
idle in terms of CPU and memory.  They are currently running
RHEL5/x86_64.  Should I expect to see similar performance on 3.1, or
even better?


Squid had a lot of very speed-specific changes in 2.7 that never got 
ported to the 3.x series. It hit a performance high that we are still 
trying to match in 3.x.


I believe 3.1 is faster than releases in the 2.6 series, but have no 
solid benchmarking to back that up. The last wide comparison we had was 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Updated-Benchmark-results-with-CPU-usage-for-3-1-and-2-7-td1048161.html


3.1 is slower than 2.7 in most situations.
We have had one report that the very latest 3.1 releases are equivalent 
under a specific type of traffic, but that was at the edge of stability 
and the test machine crashed too often. 3.1 definitely takes more CPU 
and memory.


3.2 surpasses everything else on the RPS metric (total parallel 
requests capacity), though we know it has currently traded handling 
speed in order to get there. Work is underway to gain back on the speed 
regressions since 3.1, with faint dreams of beating 2.7 as well.



It's currently a matter of choice when upgrading from 2.6 or earlier.
 If you need speed above everything else, the latest 2.7 is the way to go.
 If you need standards compliance and new features over speed, then the 
latest 3.x gives that.


Amos



Re: [Fwd: Re: [squid-users] help with squid redirectors]

2011-03-08 Thread Amos Jeffries

On Tue, 08 Mar 2011 07:52:50 -0500, Osmany wrote:

 Forwarded Message 
From: Osmany 
Reply-to: osm...@oc.quimefa.cu
To: squid-users 
Subject: Re: [squid-users] help with squid redirectors
Date: Tue, 08 Mar 2011 07:41:47 -0500

On Wed, 2011-03-09 at 01:33 +1300, Amos Jeffries wrote:

On 09/03/11 01:20, Osmany wrote:
> On Tue, 2011-03-08 at 12:21 +1300, Amos Jeffries wrote:
>> On Tue, 08 Mar 2011 11:58:57 +1300, Amos Jeffries wrote:
>>> On Mon, 07 Mar 2011 16:59:07 -0500, Osmany wrote:
 Greetings everyone,

 So I'm having trouble with my squid proxy-cache server. I 
recently

 added
 a redirect program because I had to make users go to my 
kaspersky

 admin
 kit and my WSUS services to get their updates and it works fine 
but

 I
 get constantly a warning and squid just collapses after a few
 minutes of
 run time. This is what I get in my cache.log:

 2011/03/07 15:54:17| WARNING: All url_rewriter processes are 
busy.

 2011/03/07 15:54:17| WARNING: up to 465 pending requests queued
 2011/03/07 15:54:17| storeDirWriteCleanLogs: Starting...
 2011/03/07 15:54:17| WARNING: Closing open FD 1455
 2011/03/07 15:54:17| commSetEvents: epoll_ctl(EPOLL_CTL_DEL): 
failed

 on
 fd=1455: (1) Operation not permitted
 2011/03/07 15:54:17| 65536 entries written so far.
 2011/03/07 15:54:17|131072 entries written so far.
 2011/03/07 15:54:17| WARNING: Closing open FD 1456
 2011/03/07 15:54:17| commSetEvents: epoll_ctl(EPOLL_CTL_DEL): 
failed

 on
 fd=1456: (1) Operation not permitted
 2011/03/07 15:54:17|   Finished.  Wrote 139965 entries.
 2011/03/07 15:54:17|   Took 0.1 seconds (1288729.1 
entries/sec).

 FATAL: Too many queued url_rewriter requests (465 on 228)
 Squid Cache (Version 2.7.STABLE7): Terminated abnormally.

 This is what I have in the squid.conf

 #  TAG: url_rewrite_program
 url_rewrite_program /etc/squid/redirect

 #  TAG: url_rewrite_children
 url_rewrite_children 100

 #  TAG: url_rewrite_concurrency
 url_rewrite_concurrency 50

 #  TAG: url_rewrite_access
 url_rewrite_access allow redirect

 And this is what I have in my redirector script

 #!/usr/bin/perl
 BEGIN {$|=1}
 while (<>) {
   @X = split;
   $url = $X[0];
   if ($url =~ /^http:\/\/dnl(.*)kaspersky(.*)com(.*)/) {
print
 "301:ftp:\/\/dnl-kaspersky\.quimefa\.cu\:2122\/Updates";
   }
   elsif ($url =~ /^http:\/\/(.*)windowsupdate(.*)/) {
print 
"301:http:\/\/windowsupdate\.quimefa\.cu\:8530";

   }
 }

 Can you please help me to solve this?
>>>
>>> Your script does not support concurrency. When that is 
configured in

>>> squid there will be 2 space-delimited fields to handle.
>>> First one being the ID of the request channel, not the URL.
>>>
>>
>>   Oops, I missed a few other things too:
>>* 'else' case is needed to print the no-change result back to 
Squid

>>* newlines need to be printed in perl
>>
>>
>> $url = $X[1];
>> if ($url =~ /^http:\/\/dnl(.*)kaspersky(.*)com(.*)/) {
>>  print $X[0]." 301:ftp:\/\/dnl-kaspersky\.quimefa\.cu\:2122\/Updates\n";
>> }
>> elsif ($url =~ /^http:\/\/(.*)windowsupdate(.*)/) {
>>  print $X[0]." 301:http:\/\/windowsupdate\.quimefa\.cu\:8530\n";

>> }
>> else {
>>  print $X[0]."\n";
>> }
>>
>>   Amos
>
> So this is what I have now but it doesn't work. I've tried it 
manually:

>
> #!/usr/bin/perl
> BEGIN {$|=1}
> while (<>) {
>   @X = split;
>   $url = $X[1];
> if ($url =~ /^http:\/\/dnl(.*)kaspersky(.*)com(.*)/) {
>  print $X[0]." 301:ftp:\/\/dnl-kaspersky\.quimefa\.cu\:2122\/Updates\n";
> }
> elsif ($url =~ /^http:\/\/(.*)windowsupdate(.*)/) {
>  print $X[0]." 301:http:\/\/windowsupdate\.quimefa\.cu\:8530\n";

> }
> else {
>  print $X[0]."\n";
> }
> }
>
> it just keeps on returning the same url that I enter. help please?
>

Did you add the concurrency channel ID before the URL on each 
manually

entered line?
eg  $id $url $garbage

Amos


Yes you are right. I was missing the concurrency channel ID on my 
manual
test in fact when I supervise my access.log I see it is actually 
working
and when I look at the cache.log I see that it working concurrently 
with

all the available channels.

Now actually I found out that my script only works for one of the two
conditions I have. the one that works: windowsupdate, I can see that 
it

is actually redirecting because I see it in the access.log file like
this:

TCP_MISS/301 330 HEAD
http://download.microsoft.com/windowsupdate/v5/redir/wuredir.cab?

But the other condition, the kaspersky one I am pretty sure it is not
working because in the access.log this is what I see:
TCP_MISS/200 946 GET

http://dnl-17.geo.kaspersky.com/diffs/bases/av/kdb/i386/base048c.kdc.h
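For what it is worth, the kaspersky pattern itself does appear to match the URL shown in that access.log line; a quick check (sketched in Python rather than Perl, so the regex translation is an assumption) would be:

```python
import re

# The kaspersky pattern from the redirector script, translated from Perl
# to Python syntax, applied to the URL from the access.log line above.
pattern = re.compile(r"^http://dnl(.*)kaspersky(.*)com(.*)")
url = "http://dnl-17.geo.kaspersky.com/diffs/bases/av/kdb/i386/base048c.kdc.h"
matched = bool(pattern.match(url))
print(matched)
```

If the pattern matches, the missed redirect is more likely a question of which requests actually reach the rewriter (for example, the url_rewrite_access rule) than of the regex itself.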

[squid-users] Squid with AD Authendication problem (windows 2003)- please help

2011-03-08 Thread Sharik M
 I have configured Squid with AD authentication and it's working fine, but I 
am getting lots of authentication-failed errors.
 
 
squid-2.5.STABLE14-1.4E
samba-3.0.10-1.4E.11
 
 
 
Windows 2003 Domain Audit log failure.
 
 
Pre-authentication failed:
    User Name: proxy$
    User ID: DOMAIN\proxy$
    Service Name: krbtgt/DOMAIN.HOME
    Pre-Authentication Type: 0x0
    Failure Code: 0x19
    Client Address: 10.1.5.12
 
 
For more information, see Help and Support Center at 
http://go.microsoft.com/fwlink/events.asp.
 
 
 
 
 
 
 
/etc/samba/smb.conf
 
 
[global]
    workgroup = DOMAIN
    netbios name = PROXY
    realm = DOMAIN.HOME
    server string = Linux Samba Server
    security = ads
    encrypt passwords = Yes
    password server = 10.1.5.11
    log file = /var/log/samba/%m.log
    max log size = 0
    socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
    preferred master = False
    local master = No
    domain master = False
    dns proxy = No
    wins server = 10.1.5.11
   # winbind separator = /
    winbind enum users = yes
    winbind enum groups = yes
    winbind use default domain = yes
    idmap uid = 1-2
    idmap gid = 1-2
    client schannel = no
 
log file = /var/log/samba/%m.log
max log size = 50
[homes]
   comment = Home Directories
   browseable = no
   writable = yes
[printers]
   comment = All Printers
   path = /var/spool/samba
   browseable = no
   guest ok = no
   writable = no
   printable = yes
 
 
/etc/krb5.conf
 
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log
 
[libdefaults]
 #ticket_lifetime = 24000
 default_realm = DOMAIN.HOME
 dns_lookup_realm = false
 dns_lookup_kdc = false
 
[realms]
 DOMAIN.HOME = {
  kdc = 10.1.5.11
  admin_server = 10.1.5.11
  default_domain = DOMAIN.HOME
 }
 
[domain_realm]
 .DOMAIN.home = DOMAIN.HOME
 DOMAIN.home = DOMAIN.HOME
 
[kdc]
 profile = /var/kerberos/krb5kdc/kdc.conf
 
[appdefaults]
 pam = {
   debug = false
   ticket_lifetime = 36000
   renew_lifetime = 36000
   forwardable = true
   krb4_convert = false
 }
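For reference, the failure code 0x19 in those audit events is Kerberos error 25, KDC_ERR_PREAUTH_REQUIRED: the KDC asking the client to retry with pre-authentication data, which on its own is negotiation noise rather than a real failure. A small decoding sketch (the table below lists only a few common codes):

```python
# Decode the Kerberos failure code shown in the Windows audit event.
# 0x19 == 25 == KDC_ERR_PREAUTH_REQUIRED, which the KDC returns as a
# normal step of negotiation before the client supplies pre-auth data.
KRB_ERRORS = {
    0x06: "KDC_ERR_C_PRINCIPAL_UNKNOWN",
    0x17: "KDC_ERR_KEY_EXPIRED",
    0x18: "KDC_ERR_PREAUTH_FAILED",
    0x19: "KDC_ERR_PREAUTH_REQUIRED",
}

def decode_failure(code: int) -> str:
    """Return the Kerberos error name for a Windows audit failure code."""
    return KRB_ERRORS.get(code, f"unknown (0x{code:x})")

print(decode_failure(0x19))
```

A genuine wrong-password or clock-skew problem would show a different code (for example 0x18, KDC_ERR_PREAUTH_FAILED), so the code value is worth checking before treating these events as errors.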





[squid-users] possible to deactivate pre-authentication on the Linux (or Windows) - please help

2011-03-08 Thread Sharik M
Dear Friends,


Is it possible to deactivate pre-authentication on the Linux (or

Windows) side to avoid these messages?

I ask because I am getting a lot of errors in the Windows 2003 domain.

Hi, 

When validating users on my Linux system against an Active Directory, 
the Windows event log is filled with messages like these (Windows 
Event ID 675): 

Pre-authentication failed: 
User Name: linux$ 
User ID: KK\linux$ 
Service Name: krbtgt/KK.LOCAL 
Pre-Authentication Type: 0x0 
Failure Code: 0x19 
Client Address: 1.2.3.4 


(1.2.3.4 is the IP address of the Linux machine, LINUX the hostname of 
the Linux machine). 

The message above comes at every request from the Linux machine (every 5 
minutes on this installation). If I am validating a user, the same 
message is shown for the user like this (user name validated=test): 

Pre-authentication failed: 
User Name: test$ 
User ID: KK\test$ 
Service Name: krbtgt/KK.LOCAL 
Pre-Authentication Type: 0x0 
Failure Code: 0x19 
Client Address: 1.2.3.4 

Messages logged on behalf of a user may be disabled by deactivating 
pre-authentication for each user. But I cannot find any place in 
Active Directory to disable it for the machine account. 

What is missing? 

Is it possible to deactivate pre-authentication on the Linux (or 
Windows) side to avoid these messages?