Re: [squid-users] squid consuming too much processor/cpu

2010-03-22 Thread Marcello Romani

Muhammad Sharfuddin ha scritto:

On Mon, 2010-03-22 at 19:27 +1300, Amos Jeffries wrote:

Thanks list for help.

restarting squid is not a solution, I noticed only after 20 minutes
after restarting, squid started consuming/eating CPU again.

On Wed, 2010-03-17 at 19:54 +1100, Ivan . wrote:

you might want to check out this thread
http://www.mail-archive.com/squid-users@squid-cache.org/msg56216.html

I haven't installed any package either, i.e. not checked.

On Wed, 2010-03-17 at 05:27 -0700, George Herbert wrote:

or install the Google malloc library and recompile Squid to
use it instead of default gcc malloc.

On Wed, 2010-03-17 at 15:01 +0200, Henrik K wrote:

If the system regex is the issue, wouldn't it be better/simpler to just
compile with PCRE? (LDFLAGS="-lpcreposix -lpcre"). It doesn't leak, and as a bonus
it makes your REs faster.

Nor have I re-compiled Squid, as I have to use the binary/RPM version of Squid
that shipped with the distro I am using.

Issue resolved by removing the ACL that blocked almost 60K URLs/domains.

Commenting out the following worked:
##acl porn_deny url_regex "/etc/squid/domains.deny"
##http_access deny porn_deny

So how can I deny illegal content/websites?


If those were actually domain names...

they are both URLs and domains


  * use "dstdomain" type instead of regex.

ok nice suggestion


Optimize order of ACLs so do most rejections as soon as possible with 
fastest match types.


I think it's optimized, as the rule (squeezing CPU) is the first rule in
squid.conf


That's the exact opposite of "optimizing", as the CPU-consuming rule is
_always_ executed.
First rules should be non-CPU-consuming (i.e. non-regex) and should
block most of the traffic, leaving the CPU-consuming ones at the bottom,
rarely executed.


If you don't mind sharing your squid.conf access lines we can work 
through optimizing with you.

I posted squid.conf when I started this thread/topic, but I have no issue
posting it again ;)


I think he meant the list of blocked sites / URLs.



squid.conf:
acl myFTP port   20  21
acl ftp_ipes src "/etc/squid/ftp_ipes.txt"
http_access allow ftp_ipes myFTP
http_access deny myFTP

# this is the acl eating CPU #
acl porn_deny url_regex "/etc/squid/domains.deny"
http_access deny porn_deny
###

acl vip src "/etc/squid/vip_ipes.txt"
http_access allow vip

acl entweb url_regex "/etc/squid/entwebsites.txt"
http_access deny entweb

acl mynet src "/etc/squid/allowed_ipes.txt"
http_access allow mynet


Amos





--
Marcello Romani


[squid-users] Blocking Instant Messaging

2010-03-22 Thread a bv
Hi,

I have a Squid running and I would like to block/control the instant
messaging traffic at Squid (especially MSN/Windows Live Messenger).

So how can i do this effectively?

Regards


Re: [squid-users] squid consuming too much processor/cpu

2010-03-22 Thread Muhammad Sharfuddin
On Mon, 2010-03-22 at 08:47 +0100, Marcello Romani wrote:
> Muhammad Sharfuddin ha scritto:
> > On Mon, 2010-03-22 at 19:27 +1300, Amos Jeffries wrote:
> >>> Thanks list for help.
> >>>
> >>> restarting squid is not a solution, I noticed only after 20 minutes
> >>> after restarting, squid started consuming/eating CPU again.
> >>>
> >>> On Wed, 2010-03-17 at 19:54 +1100, Ivan . wrote:
>  you might want to check out this thread
>  http://www.mail-archive.com/squid-users@squid-cache.org/msg56216.html
> >>> Neither I installed any package.. i.e not checked
> >>>
> >>> On Wed, 2010-03-17 at 05:27 -0700, George Herbert wrote:
>  or install the Google malloc library and recompile Squid to
>  use it instead of default gcc malloc.
> >>> On Wed, 2010-03-17 at 15:01 +0200, Henrik K wrote:
>  If the system regex is issue, wouldn't it be better/simpler to just
>  compile
>  with PCRE? (LDFLAGS="-lpcreposix -lpcre"). It doesn't leak and as a bonus
>  makes your REs faster.
> >>> Nor I re-compiled Squid, as I have to use binary/rpm version of squid
> >>> that shipped with the Distro I am using
> >>>
> >>> issue resolved via removing acl that blocked almost 60K urls/domains
> >>>
> >>> commenting following worked
> >>> ##acl porn_deny url_regex "/etc/squid/domains.deny"
> >>> ##http_access deny porn_deny
> >>>
> >>> so how can I deny illegal contents/website ?
> >>>
> >> If those were actually domain names...
> > they are both urls and domain
> > 
> >>   * use "dstdomain" type instead of regex.
> > ok nice suggestion
> > 
> > 
> >> Optimize order of ACLs so do most rejections as soon as possible with 
> >> fastest match types.
>  >>
> > I think its optimized, as the rule(squeezing cpu) is the first rule in
> > squid.conf
> 
> That's the exact opposite of "optimizing" as the cpu-consuming rule is 
> _always_ executed.
> First rules should be non-cpu consuming (i.e. non-regexp) and should 
> block most of the traffic, leaving the cpu-consuming ones at the bottom, 
> rarely executed.
> 
> >> If you don't mind sharing your squid.conf access lines we can work 
> >> through optimizing with you.
> > I posted squid.conf when I start this thread/topic, but I have no issue
> > posting it again ;)
> 
> I think he meant the list of blocked sites / url
It's 112K after compression; am I allowed to post/attach such a big
file?
> .
> 
> > 
> > squid.conf:
> > acl myFTP port   20  21
> > acl ftp_ipes src "/etc/squid/ftp_ipes.txt"
> > http_access allow ftp_ipes myFTP
> > http_access deny myFTP
> > 
> >  this is the acl eating CPU #
> > acl porn_deny url_regex "/etc/squid/domains.deny"
> > http_access deny porn_deny
> > ###
> > 
> > acl vip src "/etc/squid/vip_ipes.txt"
> > http_access allow vip
> > 
> > acl entweb url_regex "/etc/squid/entwebsites.txt"
> > http_access deny entweb
> > 
> > acl mynet src "/etc/squid/allowed_ipes.txt"
> > http_access allow mynet
> > 
> >> Amos
> > 
> 
> 

-- 
Regards
Muhammad Sharfuddin | NDS Technologies Pvt Ltd | +92-333-2144823

Novice: name a single major diff b/w Redhat and SUSE
GURU:   One is Red and the other one is Green




Re: [squid-users] squid consuming too much processor/cpu

2010-03-22 Thread Amos Jeffries

Muhammad Sharfuddin wrote:

On Mon, 2010-03-22 at 08:47 +0100, Marcello Romani wrote:

Muhammad Sharfuddin ha scritto:

On Mon, 2010-03-22 at 19:27 +1300, Amos Jeffries wrote:

Thanks list for help.

restarting squid is not a solution, I noticed only after 20 minutes
after restarting, squid started consuming/eating CPU again.

On Wed, 2010-03-17 at 19:54 +1100, Ivan . wrote:

you might want to check out this thread
http://www.mail-archive.com/squid-users@squid-cache.org/msg56216.html

Neither I installed any package.. i.e not checked

On Wed, 2010-03-17 at 05:27 -0700, George Herbert wrote:

or install the Google malloc library and recompile Squid to
use it instead of default gcc malloc.

On Wed, 2010-03-17 at 15:01 +0200, Henrik K wrote:

If the system regex is issue, wouldn't it be better/simpler to just
compile
with PCRE? (LDFLAGS="-lpcreposix -lpcre"). It doesn't leak and as a bonus
makes your REs faster.

Nor I re-compiled Squid, as I have to use binary/rpm version of squid
that shipped with the Distro I am using

issue resolved via removing acl that blocked almost 60K urls/domains

commenting following worked
##acl porn_deny url_regex "/etc/squid/domains.deny"
##http_access deny porn_deny

so how can I deny illegal contents/website ?


If those were actually domain names...

they are both urls and domain


  * use "dstdomain" type instead of regex.

ok nice suggestion


Optimize order of ACLs so do most rejections as soon as possible with 
fastest match types.


I think its optimized, as the rule(squeezing cpu) is the first rule in
squid.conf
That's the exact opposite of "optimizing" as the cpu-consuming rule is 
_always_ executed.
First rules should be non-cpu consuming (i.e. non-regexp) and should 
block most of the traffic, leaving the cpu-consuming ones at the bottom, 
rarely executed.


If you don't mind sharing your squid.conf access lines we can work 
through optimizing with you.

I posted squid.conf when I start this thread/topic, but I have no issue
posting it again ;)

I think he meant the list of blocked sites / url

its 112K after compression, am I allowed to post/attach such a big
file ?


The mailing list will drop all attachments.




squid.conf:
acl myFTP port   20  21
acl ftp_ipes src "/etc/squid/ftp_ipes.txt"
http_access allow ftp_ipes myFTP


The most optimal form of that line is:

  acl myFTP proto FTP
  http_access allow myFTP ftp_ipes

NP: Checking the protocol is faster than checking a whole list of IPs or 
list of ports.



http_access deny myFTP



Since you only have two network IP ranges that might possibly be allowed 
after the regex checks, it's a good idea to start the entire process by 
blocking the vast range of IPs which are never going to be allowed:


 acl vip src "/etc/squid/vip_ipes.txt"
 acl mynet src "/etc/squid/allowed_ipes.txt"
 http_access deny !vip !mynet
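
Combining those two changes with the original rules, the whole access list
might be reordered like this (a sketch only, using the ACL names and file
paths from the thread; the semantics of the original ordering are preserved):

```
# Fast, cheap checks first: protocol and source-IP lookups.
acl myFTP proto FTP
acl ftp_ipes src "/etc/squid/ftp_ipes.txt"
acl vip src "/etc/squid/vip_ipes.txt"
acl mynet src "/etc/squid/allowed_ipes.txt"

http_access allow myFTP ftp_ipes
http_access deny myFTP

# Reject unknown clients before any regex is ever evaluated.
http_access deny !vip !mynet

# Expensive regex checks now only run for known clients.
acl porn_deny url_regex "/etc/squid/domains.deny"
http_access deny porn_deny
http_access allow vip

acl entweb url_regex "/etc/squid/entwebsites.txt"
http_access deny entweb
http_access allow mynet
```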



 this is the acl eating CPU #
acl porn_deny url_regex "/etc/squid/domains.deny"
http_access deny porn_deny
###

acl vip src "/etc/squid/vip_ipes.txt"
http_access allow vip

acl entweb url_regex "/etc/squid/entwebsites.txt"
http_access deny entweb


Doing the same process to entwebsites.txt that was done to domains.deny 
file will stop this one becoming a second CPU waste.




acl mynet src "/etc/squid/allowed_ipes.txt"
http_access allow mynet




This is the basic process for reducing a large list of regex down to an 
optimal set of ACL tests:



What you can do to start with is separate all the domain-only lines from 
the real regex patterns:


  grep -E "^([\^]?[htpf]://)?[a-z0-9\.]+(/?\$?)$" 
/etc/squid/domains.deny >dstdomain.deny


   grep -v -E "^([\^]?[htpf]://)?[a-z0-9\.]+(/?\$?)$" 
/etc/squid/domains.deny  >url_regex.deny


... check the output of those two files. Don't trust my 2-second pattern 
creation.


You will also need to strip any "^", "$", "http://" and "/" bits off the 
dstdomain patterns.
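
As an illustration of that stripping step (my own sketch, not from the
thread — check it against your real list before trusting it):

```shell
# Turn candidate lines like "^http://www.example.com/$" into plain
# domain names suitable for a dstdomain ACL.
# The sample input stands in for the dstdomain.deny produced above.
printf '^http://www.example.com/$\nbadsite.example.net\n' > dstdomain.deny
sed -E -e 's,^\^,,' -e 's,\$$,,' -e 's,^(ht|f)tps?://,,' -e 's,/.*$,,' \
    dstdomain.deny > dstdomain.clean
cat dstdomain.clean
# www.example.com
# badsite.example.net
```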


When that's done, see if there are any domains you can wildcard in the 
dstdomain list. Loading the result into squid.conf may produce WARNING 
lines about other duplicates that can also be removed. I'll call the ACL 
using this file "stopDomains" in the following example.



For the other file with ones where URL still needs a full pattern match, 
... split that to create another three files:
  1) dstdomains where the domain is part of the pattern. I'll call this 
"regexDomains" in the following example.
  2) the full URL regex patterns with domains in (1). I'll call this 
"regexUrls" in the example below.
  3) regex patterns where domain name does not matter to the match. 
I'll  call that "regexPaths".



When that's done, change your config so that your CPU-expensive lines:

  acl porn_deny url_regex "/etc/squid/domains.deny"
  http_access deny porn_deny

change into these:

# A
  acl stopDomains dstdomain "/etc/squid/dstdomain.deny"
  http_access deny stopDomains

#B
  acl regexDomains dstdomain "/etc/squid/dstdomain.regexDomains"
  acl regexUrls  url_

[squid-users] blocking users from x amount of connections with acl maxconn

2010-03-22 Thread Dayo Adewunmi

Hi

I'm trying to set a limit of no more than three connections per user 
with maxconn

acl lan-ebola dst 192.168.24.18
acl limitebola maxconn 3
http_access deny limitebola lan-ebola
http_access allow !limitebola

It's not limiting anything, though.

Best regards

Dayo


[squid-users] Re: blocking users from x amount of connections with acl maxconn

2010-03-22 Thread Dayo Adewunmi

Problem Existed Between Keyboard And Chair.
It's working now.

Dayo

 Original Message 
Subject:blocking users from x amount of connections with acl maxconn
Date:   Mon, 22 Mar 2010 12:00:53 +0100
From:   Dayo Adewunmi 
Reply-To:   contactd...@gmail.com
To: squid-users@squid-cache.org 



Hi

I'm trying to set a limit of no more than three connections per user 
with maxconn

acl lan-ebola dst 192.168.24.18
acl limitebola maxconn 3
http_access deny limitebola lan-ebola
http_access allow !limitebola

It's not limiting anything, though.

Best regards

Dayo




[squid-users] Mail client with squid

2010-03-22 Thread Impact Services
Hi all,

How do I enable mail clients like Outlook Express on client machines
while filtering internet access through Squid? I went through a lot of
forums and it looks like it is possible through iptables. I tried
that but am not getting the correct settings.

Also I want to enable one particular MAC address if possible, else an IP
address, to access the internet without routing through Squid, but all other
clients on the network to access through Squid. Is it possible?

Any help would be highly appreciated.

--
Regards
Gorav


RE: [squid-users] Ignore requests from certain hosts in access_log

2010-03-22 Thread Baird, Josh
Wow, I still can't seem to get this working!  I can't figure out what I am 
doing wrong:

# Put the load balancers in an ACL so we can ignore requests (health checks) 
from them
acl loadbalancers src 172.26.100.136/255.255.255.255
acl loadbalancers src 172.26.100.137/255.255.255.255

# We want to append the X-Forwarded-For header
follow_x_forwarded_for allow loadbalancers
log_uses_indirect_client on
acl_uses_indirect_client on

# Define Logging (do not log loadbalancer health checks)
access_log /var/log/squid/access.log squid
log_access deny loadbalancers
coredump_dir /var/spool/squid
pid_filename /var/run/squid.pid
httpd_suppress_version_string on
shutdown_lifetime 5 seconds
# We don't cache, so there is no need to waste disk I/O on cache logging
cache_store_log none

These changes aren't suppressing any logs:

Health checks still show up:

1269265701.388  0 172.26.100.136 TCP_DENIED/400 2570 GET 
error:invalid-request - NONE/- text/html
1269265703.009  0 172.26.100.137 TCP_DENIED/400 2570 GET 
error:invalid-request - NONE/- text/html
1269265706.389  0 172.26.100.136 TCP_DENIED/400 2570 GET 
error:invalid-request - NONE/- text/html
1269265708.010  0 172.26.100.137 TCP_DENIED/400 2570 GET 
error:invalid-request - NONE/- text/html

.. as well as normal traffic using X-Forwarded-For.

What am I doing wrong here?

Thanks,

Josh

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, March 19, 2010 7:29 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Ignore requests from certain hosts in access_log

Baird, Josh wrote:
> And, you still see the non-healthcheck, normal traffic logged using the 
> X-Forwarded-For information?

Yes.

> 
> Here is my entire config, maybe this will help:

> 
> # We want to append the X-Forwarded-For header for Websense
> follow_x_forwarded_for allow loadbalancers
> log_uses_indirect_client on
> acl_uses_indirect_client on
> 
> # Define Logging (do not log loadbalancer health checks)
> access_log /var/log/squid/access.log squid
> log_access deny !loadbalancers

Gah. Stupid me not reading that right earlier.

Means: deny all requests that are NOT loadbalancers.

You are wanting:
   log_access deny loadbalancers

So sorry.


Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
   Current Beta Squid 3.1.0.18


Re: [squid-users] Mail client with squid

2010-03-22 Thread John Doe
From: Impact Services 
> I want to enable one particular mac address, if possible, 
> else ip address to access internet without routing 
> through squid but all other clients on the network 
> to access through squid. Is it possible?

I guess you could use the iptables MAC module to filter it...
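
For example, with a typical interception setup the exemption could be a NAT
rule inserted ahead of the redirect (a sketch only — the interface, the MAC
address and the Squid port are my assumptions, and a MAC match only works on
the router's directly connected LAN segment):

```
# Exempt one client (by MAC) from the HTTP redirect to Squid.
# Must come before the REDIRECT rule so it matches first.
iptables -t nat -I PREROUTING -i eth0 -p tcp --dport 80 \
  -m mac --mac-source 00:11:22:33:44:55 -j ACCEPT

# The usual interception rule, for reference:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  -j REDIRECT --to-port 3128
```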

JD


  


Re: [squid-users] Mail client with squid

2010-03-22 Thread Nyamul Hassan
Hi,
Your mail suggests that you are attempting to use email clients. This
is a Squid mailing list, and Squid is an HTTP proxy, with some support
for FTP and HTTPS.
If you meant something else, please be more specific.
Regards
HASSAN


On Mon, Mar 22, 2010 at 7:37 PM, Impact Services
 wrote:
>
> Hi all,
>
> How do I enable mail clients like outlook express on client machines
> while filtering internet access through squid? I went through lot of
> forums and looks like it is possible through iptables. I tried with
> that but am not getting the correct settings.
>
> Also I want to enable one particular mac address, if possible, else ip
> address to access internet without routing through squid but all other
> clients on the network to access through squid. Is it possible?
>
> Any help would be highly appreciated.
>
> --
> Regards
> Gorav
>


Re: [squid-users] squid consuming too much processor/cpu

2010-03-22 Thread Marcus Kool

Or use an alternative: ufdbGuard.

ufdbGuard is a URL filter for Squid that has a much easier
configuration file than the Squid ACLs and additional
configuration files.
ufdbGuard is also multithreaded and very fast.

And a tip: if you are really serious about blocking
anything, you should also block 'proxy sites' (i.e. sites
used to circumvent URL filters).

-Marcus


Amos Jeffries wrote:

Muhammad Sharfuddin wrote:

On Mon, 2010-03-22 at 08:47 +0100, Marcello Romani wrote:

Muhammad Sharfuddin ha scritto:

On Mon, 2010-03-22 at 19:27 +1300, Amos Jeffries wrote:

Thanks list for help.

restarting squid is not a solution, I noticed only after 20 minutes
after restarting, squid started consuming/eating CPU again.

On Wed, 2010-03-17 at 19:54 +1100, Ivan . wrote:

you might want to check out this thread
http://www.mail-archive.com/squid-users@squid-cache.org/msg56216.html 


Neither I installed any package.. i.e not checked

On Wed, 2010-03-17 at 05:27 -0700, George Herbert wrote:

or install the Google malloc library and recompile Squid to
use it instead of default gcc malloc.

On Wed, 2010-03-17 at 15:01 +0200, Henrik K wrote:

If the system regex is issue, wouldn't it be better/simpler to just
compile
with PCRE? (LDFLAGS="-lpcreposix -lpcre"). It doesn't leak and as 
a bonus

makes your REs faster.

Nor I re-compiled Squid, as I have to use binary/rpm version of squid
that shipped with the Distro I am using

issue resolved via removing acl that blocked almost 60K urls/domains

commenting following worked
##acl porn_deny url_regex "/etc/squid/domains.deny"
##http_access deny porn_deny

so how can I deny illegal contents/website ?


If those were actually domain names...

they are both urls and domain


  * use "dstdomain" type instead of regex.

ok nice suggestion


Optimize order of ACLs so do most rejections as soon as possible 
with fastest match types.


I think its optimized, as the rule(squeezing cpu) is the first rule in
squid.conf
That's the exact opposite of "optimizing" as the cpu-consuming rule 
is _always_ executed.
First rules should be non-cpu consuming (i.e. non-regexp) and should 
block most of the traffic, leaving the cpu-consuming ones at the 
bottom, rarely executed.


If you don't mind sharing your squid.conf access lines we can work 
through optimizing with you.

I posted squid.conf when I start this thread/topic, but I have no issue
posting it again ;)

I think he meant the list of blocked sites / url

its 112K after compression, am I allowed to post/attach such a big
file ?


The mailing list will drop all attachments.




squid.conf:
acl myFTP port   20  21
acl ftp_ipes src "/etc/squid/ftp_ipes.txt"
http_access allow ftp_ipes myFTP


The most optimal form of that line is:

  acl myFTP proto FTP
  http_access allow myFTP ftp_ipes

NP: Checking the protocol is faster than checking a whole list of IPs or 
list of ports.



http_access deny myFTP



Since you only have two network IP ranges that might be possibly allowed 
after the regex checks it's a good idea to start the entire process by 
blocking the vast range of IPs which are never going to be allowed:


 acl vip src "/etc/squid/vip_ipes.txt"
 acl mynet src "/etc/squid/allowed_ipes.txt"
 http_access deny !vip !mynet



# this is the acl eating CPU #
acl porn_deny url_regex "/etc/squid/domains.deny"
http_access deny porn_deny
###

acl vip src "/etc/squid/vip_ipes.txt"
http_access allow vip

acl entweb url_regex "/etc/squid/entwebsites.txt"
http_access deny entweb


Doing the same process to entwebsites.txt that was done to domains.deny 
file will stop this one becoming a second CPU waste.




acl mynet src "/etc/squid/allowed_ipes.txt"
http_access allow mynet




This is the basic process for reducing a large list of regex down to an 
optimal set of ACL tests



What you can do to start with is separate all the domain-only lines from 
the real regex patterns:


  grep -E "^([\^]?[htpf]://)?[a-z0-9\.]+(/?\$?)$" 
/etc/squid/domains.deny >dstdomain.deny


   grep -v -E "^([\^]?[htpf]://)?[a-z0-9\.]+(/?\$?)$" 
/etc/squid/domains.deny  >url_regex.deny


... check the output of those two files. Don't trust my 2-second pattern 
creation.


You will also need to strip any "^", "$", "http://" and "/" bits off the 
dstdomain patterns.


When thats done see if there are any domains you can wildcard in the 
dstdomain list. Loading the result into squid.conf may produce WARNING 
lines about other duplicates that can also be removed. I'll call the ACL 
using this file "stopDomains" in the following example.



For the other file with ones where URL still needs a full pattern match, 
... split that to create another three files:
  1) dstdomains where the domain is part of the pattern. I'll call this 
"regexDomains" in the following example.
  2) the full URL regex patterns with domains in (1). I'll call this 
"regexUrls" in the example below.
  3) regex patterns where domain name does not matter to th

Re: [squid-users] Blocking Instant Messaging

2010-03-22 Thread Nick Cairncross
Hi abv,

You can block on user agent for some IM clients such as MSN. Ensure you have 
user-agent logging turned on and an entry in your conf file. I found this 
useful for finding out the agent that some IM clients use.
For testing I use a file containing the agent, but the result is the same:

useragent_log /var/log/squid/useragent.log squid

acl MSNMessenger browser "/etc/squid/ACL/USER-AGENT_BLOCKED.txt"
http_access deny MSNMessenger

USER-AGENT_BLOCKED.txt contains the agents you want to block, e.g. Windows Live
Messenger.

You can go further and allow certain IPs to have MSN using a !acl.
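
For instance, the exemption can be folded into the deny line itself (a
sketch; the exemption file name is my invention, not from the thread):

```
acl MSNMessenger browser "/etc/squid/ACL/USER-AGENT_BLOCKED.txt"
acl IMallowed src "/etc/squid/im_allowed_ips.txt"
# Deny the blocked user agents, except for exempted client IPs.
http_access deny MSNMessenger !IMallowed
```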

tail -f /var/log/squid/useragent.log to see what's going on.
===
Skype requires a direct IP acl rule:
acl StopDirectIP url_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+
http_access deny StopDirectIP

Again you could exclude certain IPs using a !acl

Cheers,

Nick



On 22/03/2010 07:56, "a bv"  wrote:

Hi,

I have a squid running and i would like to block/control  the instant
messaging traffic at squid (especially MSN/Windows Live Messenger) .

So how can i do this effectively?

Regards


** Please consider the environment before printing this e-mail **

The information contained in this e-mail is of a confidential nature and is 
intended only for the addressee.  If you are not the intended addressee, any 
disclosure, copying or distribution by you is prohibited and may be unlawful.  
Disclosure to any party other than the addressee, whether inadvertent or 
otherwise, is not intended to waive privilege or confidentiality.  Internet 
communications are not secure and therefore Conde Nast does not accept legal 
responsibility for the contents of this message.  Any views or opinions 
expressed are those of the author.

Company Registration details:
The Conde Nast Publications Ltd
Vogue House
Hanover Square
London W1S 1JU

Registered in London No. 226900


[squid-users] Sending on Group names after Kerb LDAP look-up

2010-03-22 Thread Nick Cairncross
Hi All,

Things seem to be going well with my Squid project so far; a combined 
Mac/Windows AD environment using Kerberos authentication with fallback to 
NTLM. I (hopefully) seem to be getting the hang of it!
I've been trying out the Kerberos LDAP look up tool and have a couple of 
questions (I think the answers will be no..):

- Is it possible to wrap up the matched group name(s) in the header as it gets 
sent onwards to my peer?
I used to use the authentication agent that came from our A/V provider. This 
tool ran as a service and linked into our ISA. Once a user authenticated their 
group membership was forwarded along with their username to my peer (Scansafe). 
The problem is that it only does NTLM auth. It added the group 
(WINNT://[group]) into the header and then a rule base at the peer site could 
be set up based on group. Since I am using Kerberos I wondered whether it's 
possible to send the results of the Kerb LDAP auth? I already see the user on 
the peer as the Kerberos login. It would be great if I could include the group 
or groups...

This is what I use currently: cache_peer proxy44.scansafe.net parent 8080 7 
no-query no-digest no-netdb-exchange login=*
(From http://www.hutsby.net/2008/03/apple-mac-osx-squid-and-scansafe.html)

- Are there plans to integrate the lookup tool in future versions of Squid? 
I've enjoyed learning about compiling but.. just wondering..

Thanks again in advance,

Nick






[squid-users] FileDescriptor Issues

2010-03-22 Thread a...@gmail

Hi All,

I have tried everything; so far I have definitely increased my file
descriptors on my Ubuntu OS from 1024 to 46622.
But when I start Squid 3.0.STABLE25 it doesn't seem to detect the real
descriptor limit.


I have checked /etc/sysctl.conf, and I have checked the system to make sure
the correct size is set. When I run "more /proc/sys/fs/file-max" I get 46622,
but Squid 3.0 seems to only detect 1024. Is there anything that I am not
doing, please?

I don't know what else to do
Thank you
Regards
Adam 



Re: [squid-users] Mail client with squid

2010-03-22 Thread Impact Services
Hi,

Thanks for your reply. I know Squid is a proxy server. Maybe I didn't
explain the problem correctly.

I am able to manage my internet traffic through Squid - filtering,
logging etc. The problem is that I want to enable the client computers
to use Outlook Express as well, but since internet access is routed
through Squid, and Squid doesn't handle POP and SMTP traffic, Outlook
Express on the client computers is unable to send or receive emails. What
is the workaround for client computers to access the internet through
Squid but at the same time enable Outlook Express to operate as well?

Regards
Gorav

On Mon, Mar 22, 2010 at 7:46 PM, Nyamul Hassan  wrote:
> Hi,
> Your mail suggests that you are attempting to use email clients.  This
> is a Squid mailing list, and Squid is a HTTP proxy, with some support
> FTP and HTTPS.
> If you meant something else, please be more specific.
> Regards
> HASSAN
>
>
> On Mon, Mar 22, 2010 at 7:37 PM, Impact Services
>  wrote:
>>
>> Hi all,
>>
>> How do I enable mail clients like outlook express on client machines
>> while filtering internet access through squid? I went through lot of
>> forums and looks like it is possible through iptables. I tried with
>> that but am not getting the correct settings.
>>
>> Also I want to enable one particular mac address, if possible, else ip
>> address to access internet without routing through squid but all other
>> clients on the network to access through squid. Is it possible?
>>
>> Any help would be highly appreciated.
>>
>> --
>> Regards
>> Gorav
>>
>



-- 
Regards
Gorav
Impact Services
Manapakkam, Chennai
Mob. 98401 64646


Re: [squid-users] Mail client with squid

2010-03-22 Thread Al - Image Hosting Services

Hi,

You need a pop3/smtp proxy. Maybe someone else can recommend one.
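
If the Squid box is also the LAN's gateway, one common approach (a sketch
under that assumption — the interface name is made up) is to let the mail
ports bypass Squid entirely at the NAT layer, since Squid only ever needs to
see port 80:

```
# Forward and masquerade mail traffic directly; only HTTP (port 80)
# gets redirected to Squid by a separate rule.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth1 -p tcp \
  -m multiport --dports 25,110,143,465,995 -j MASQUERADE
```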

Best Regards,
Al




On Mon, 22 Mar 2010, Impact Services wrote:


Date: Mon, 22 Mar 2010 23:21:45 +0530
From: Impact Services 
To: Nyamul Hassan 
Cc: Squid Users 
Subject: Re: [squid-users] Mail client with squid

Hi,

Thanks for your reply. I know squid is a proxy server. May be I didnt
explain the problem correctly.

I am able to manage my internet traffic through squid - filtering,
logging etc. The problem is that I want to enable the client computers
to use outlook express as well but since internet access is routed
through squid and squid doesnt handle pop and smtp traffic, outlook
express on client computers are unable to send or receive emails. What
is the workaround for client computers to access internet through
squid but at the same time enable outlook express to operate as well?

Regards
Gorav

On Mon, Mar 22, 2010 at 7:46 PM, Nyamul Hassan  wrote:

Hi,
Your mail suggests that you are attempting to use email clients.  This
is a Squid mailing list, and Squid is a HTTP proxy, with some support
FTP and HTTPS.
If you meant something else, please be more specific.
Regards
HASSAN


On Mon, Mar 22, 2010 at 7:37 PM, Impact Services
 wrote:


Hi all,

How do I enable mail clients like outlook express on client machines
while filtering internet access through squid? I went through lot of
forums and looks like it is possible through iptables. I tried with
that but am not getting the correct settings.

Also I want to enable one particular mac address, if possible, else ip
address to access internet without routing through squid but all other
clients on the network to access through squid. Is it possible?

Any help would be highly appreciated.

--
Regards
Gorav







--
Regards
Gorav
Impact Services
Manapakkam, Chennai
Mob. 98401 64646


Re: [squid-users] FileDescriptor Issues

2010-03-22 Thread Al - Image Hosting Services

Hi,

Did you try using ulimit?

Best Regards,
Al


On Mon, 22 Mar 2010, a...@gmail wrote:


Date: Mon, 22 Mar 2010 17:42:47 -
From: "a...@gmail" 
To: squid-users@squid-cache.org
Subject: [squid-users] FileDescriptor Issues

Hi All,

I have tried everything so far I definitely have increased my file 
descriptors on my Ubuntu OS

from 1024 to 46622
But when I start Squid 3.0 STABLE25 I doesn't seem to detect the real 
descriptor's size


I have checked /etc/sysctl.conf, and I have checked the system to make sure
the correct size is set. When I run "more /proc/sys/fs/file-max" I get 46622,
but Squid 3.0 seems to only detect 1024. Is there anything that I am not
doing, please?

I don't know what else to do
Thank you
Regards
Adam


Re: [squid-users] FileDescriptor Issues

2010-03-22 Thread a...@gmail

Hi Al,
Yes I did, thanks for the suggestion.
I am trying to figure out why Squid is refusing to acknowledge the
available limit on the system.
Unless of course it's a bug on either side, I mean on Squid's side or on
Ubuntu's side.
But I have checked some Ubuntu forums, and people used the same methods I
used; it seems very strange that when I start Squid I get 1024 instead of
46622 or whatever number I put.
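
One distinction worth checking: /proc/sys/fs/file-max is the system-wide
ceiling, while the value Squid reports at startup is the per-process ulimit
inherited from whatever shell or init script launches it. A quick sketch
(46622 is the figure from the thread):

```shell
# System-wide ceiling (what fs.file-max / sysctl.conf controls):
cat /proc/sys/fs/file-max 2>/dev/null
# Per-process limit this shell would hand to Squid -- the number
# Squid actually reports at startup:
ulimit -n
# Raise soft and hard limits before starting Squid (needs root):
ulimit -HSn 46622 2>/dev/null || echo "could not raise limit (need root?)"
```

Note also that a distro Squid 3.0 binary may be capped at build time (the
--with-maxfd configure option, if I remember it correctly), in which case no
ulimit will raise it.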


Regards
Adam
- Original Message - 
From: "Al - Image Hosting Services" 

To: "a...@gmail" 
Cc: 
Sent: Monday, March 22, 2010 6:13 PM
Subject: Re: [squid-users] FileDescriptor Issues



Hi,

Did you try using ulimit?

Best Regards,
Al


On Mon, 22 Mar 2010, a...@gmail wrote:


Date: Mon, 22 Mar 2010 17:42:47 -
From: "a...@gmail" 
To: squid-users@squid-cache.org
Subject: [squid-users] FileDescriptor Issues

Hi All,

I have tried everything so far I definitely have increased my file 
descriptors on my Ubuntu OS

from 1024 to 46622
But when I start Squid 3.0 STABLE25 I doesn't seem to detect the real 
descriptor's size


I have checked /etc/sysctl.conf, and I have checked the system to make sure
the correct size is set. When I run "more /proc/sys/fs/file-max" I get 46622,
but Squid 3.0 seems to only detect 1024. Is there anything that I am not
doing, please?

I don't know what else to do
Thank you
Regards
Adam 




[squid-users] Time-based ACLs for long connections

2010-03-22 Thread Jason Healy
I work at a school where we would like to limit bandwidth during certain times 
of day (study times), but relax those restrictions at other times of day.  
We're looking into delay pools to shape the traffic, and time-based ACLs to 
assign the connections to different pools.  I'm pretty sure my guess about how 
time-based ACLs work is correct, but I wanted to confirm before I set this all 
up and have a major "duh" moment.

Assignment to a delay pool using a time-based ACL would only occur at the start 
of the connection, correct?  In other words, if I have a ACLs like:

  acl play time MTWHF 15:00-19:59
  acl work time MTWHF 20:00-22:59
  # plus some assignment to delay pools using "play" and "work" ACLs...

Then a user who starts a long download at 19:59 will enjoy the benefits of the 
"play" ACL for the duration of the download, while one who starts at 22:59 will 
be penalized with the "work" ACL until their download completes.  There is no 
re-evaluation of the ACL while the download is in progress that would notice 
that the ACL time boundary has been crossed.
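For reference, the setup being described can be sketched in squid.conf roughly as below (class-1 pools; the rates are placeholder values, only the structure is the point):

```
# Sketch only: two class-1 delay pools selected by the time ACLs above.
acl play time MTWHF 15:00-19:59
acl work time MTWHF 20:00-22:59

delay_pools 2
delay_class 1 1
delay_class 2 1

# Pool 1: generous aggregate rate during "play" hours
# (delay_parameters <pool> <restore-bytes/sec>/<max-bucket-bytes>)
delay_parameters 1 512000/512000
delay_access 1 allow play
delay_access 1 deny all

# Pool 2: tight rate during study hours
delay_parameters 2 32000/32000
delay_access 2 allow work
delay_access 2 deny all
```

Pick restore/max values to suit the actual link; the numbers above are illustrative only.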

Just want to check before I start architecting our solution...

Thanks,

Jason

--
Jason Healy|jhe...@logn.net|   http://www.logn.net/






[squid-users] TPROXY and DansGuardian

2010-03-22 Thread Jason Healy
We've used a few different Squid setups over the years, from a vanilla setup to 
a transparent interception proxy, to a fully transparent tproxy.

We're now using DansGuardian to keep tabs on our users (we don't block; we just 
monitor).  This is good, but unfortunately it doesn't appear to be compatible 
with tproxy (DG only understands interception or regular proxying).

Does anyone know of a way to use DG as an interception proxy, but configure 
Squid to use the "real" client IP address in its outgoing requests?  I have no 
idea if this is possible since it would be quite a mess of different proxy 
schemes (DG would be interception-based using routing, Squid would use 
X-Forwarded-For to get the real IP, and then tproxy to make the request using 
the client address).

Alternately, does anyone know of a good web monitoring product that works in a 
"sniffer" mode so I don't need to insert it inline?  I basically would like to 
use tproxy, but also need to log users who are going to naughty sites...

Thanks,

Jason

--
Jason Healy|jhe...@logn.net|   http://www.logn.net/






[squid-users] reverse proxy question

2010-03-22 Thread Al - Image Hosting Services

Hi,

I have a reverse proxy setup. It has worked well except now the apache 
server is getting overloaded. I would like to change my load balancing so 
that I send all the dynamic content to one server like php to the apache 
server and all the static content like .gif, .jpg, .html to another 
webserver. Is there a way to do this and where is it documented? Also, 
could someone recommend a light weight server for static content?
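One common way to split traffic like this in Squid is two cache_peer lines plus cache_peer_access gated by a urlpath_regex ACL; a sketch with hypothetical hostnames:

```
# Hypothetical origin servers: "dynamic" = the existing Apache,
# "static" = a lightweight server for images/html.
cache_peer apache.example.com parent 80 0 no-query originserver name=dynamic
cache_peer static.example.com parent 80 0 no-query originserver name=static

acl static_files urlpath_regex -i \.(gif|jpe?g|png|css|js|html?)$

# Route static extensions to the static box, everything else to Apache.
cache_peer_access static allow static_files
cache_peer_access static deny all
cache_peer_access dynamic deny static_files
cache_peer_access dynamic allow all
```

For the static box, lighttpd, nginx or thttpd are frequently suggested lightweight choices.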


Thanks,
Al


Re: [squid-users] FileDescriptor Issues

2010-03-22 Thread a...@gmail

Hello All,
I have solved the problem; I managed to increase the file descriptors
from 1024.
This is what I have done on Ubuntu Hardy; it should work on most Ubuntu and 
Debian systems.


I first needed to see the max that my system can support.

Run this command first:
cat /proc/sys/fs/file-max

It will display the maximum that your system can currently handle.

To increase that number, you first need to run this command (let's assume X 
is a number, e.g. 46900):

echo X > /proc/sys/fs/file-max    (where X is the number you want to set)


You then need to add this line into the /etc/sysctl.conf file:
fs.file-max = X    (that same number again)

After you've done this.

Check again with this command:

sysctl -p

The current file descriptor counts are stored in:

/proc/sys/fs/file-nr   (just cat this file to see the counts)

To modify the limit descriptors per session
We need to add this to our limits.conf

emacs or vi /etc/security/limits.conf
and add

*   soft   nofile   X
*   hard   nofile   X
Note: you can use either or both of the above two lines.
You can also use a specific user instead of the wildcard "*" at the 
beginning of each line; the wildcard applies the limit to all users on your 
system.


save it and then you can check with ulimit -n
if you still get 1024 you probably need to reboot your system altogether, on 
mine it didn't show until I rebooted anyway.


I hope this will help someone somewhere at some point

Regards
Adam



- Original Message - 
From: "Al - Image Hosting Services" 

To: "a...@gmail" 
Cc: 
Sent: Monday, March 22, 2010 6:13 PM
Subject: Re: [squid-users] FileDescriptor Issues



Hi,

Did you try using ulimit?

Best Regards,
Al


On Mon, 22 Mar 2010, a...@gmail wrote:


Date: Mon, 22 Mar 2010 17:42:47 -
From: "a...@gmail" 
To: squid-users@squid-cache.org
Subject: [squid-users] FileDescriptor Issues

Hi All,

I have tried everything; so far I have definitely increased my file 
descriptors on my Ubuntu OS from 1024 to 46622.
But when I start Squid 3.0 STABLE25 it doesn't seem to detect the real 
descriptor limit.


I have checked /etc/sysctl.conf, and I have checked the system to make 
sure that the correct size is set.
When I run "more /proc/sys/fs/file-max" I get 46622, but Squid 3.0 seems 
to only detect 1024. Is there anything that I am not doing, please?

I don't know what else to do
Thank you
Regards
Adam 




Re: [squid-users] reverse proxy question

2010-03-22 Thread Nyamul Hassan
Did you try using lighttpd?
Regards
HASSAN



On Tue, Mar 23, 2010 at 1:33 AM, Al - Image Hosting Services
 wrote:
>
> Hi,
>
> I have a reverse proxy setup. It has worked well except now the apache server 
> is getting overloaded. I would like to change my load balancing so that I 
> send all the dynamic content to one server like php to the apache server and 
> all the static content like .gif, .jpg, .html to another webserver. Is there 
> a way to do this and where is it documented? Also, could someone recommend a 
> light weight server for static content?
>
> Thanks,
> Al
>


[squid-users] Speed up squid

2010-03-22 Thread Kevin Blackwell
I currently have squid installed.

I need to get some more speed out of it though. Was wondering if the
community had any thoughts.

I've expressed to the powers that be that throwing a proxy in the way
by nature will create a bottleneck to some degree. I mean it does have
to fetch the file, do its thing, then let the proxy user's browser
have it, not to mention that the URL needs to be checked, files scanned
for viruses, etc...

Also open to a hardware solution if anyone has any suggestions and
feel it would be faster.

Squid requirements.

We really only need squid to act as a web filter. I've been told that
other smaller and faster programs can do this, but after looking into
them, they seem to be lacking the one specific thing I need it to do:

NTLM authentication

That's the #1 most important requirement.

After that I need SPEED.

I basically just need the proxy to

1. Log NT domain users
2. Filter websites based on a blacklist (squidguard seems to be doing
OK for our needs currently, open to other possibilities )
3. Virus Filter ( Currently using havp, but open to faster/better suggestions )
4. SPEED ( just to reiterate )

Anyone have any pointers to set squid up with those requirements?

Many thanks in advance.

-- 
Kevin Blackwell


Re: [squid-users] Mail client with squid

2010-03-22 Thread Amos Jeffries
On Mon, 22 Mar 2010 23:21:45 +0530, Impact Services
 wrote:
> Hi,
> 
> Thanks for your reply. I know squid is a proxy server. May be I didnt
> explain the problem correctly.
> 
> I am able to manage my internet traffic through squid - filtering,
> logging etc. The problem is that I want to enable the client computers
> to use outlook express as well but since internet access is routed
> through squid and squid doesnt handle pop and smtp traffic, outlook
> express on client computers are unable to send or receive emails. What
> is the workaround for client computers to access internet through
> squid but at the same time enable outlook express to operate as well?
> 

By being more careful with your iptables rules.

Non-80 ports should be handled normally. Either by only routing port 80
traffic through the Squid box by your main router or by having the Squid
box configured as a proper router itself.
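In the interception case that usually comes down to NAT rules that redirect only TCP/80 to Squid and leave the mail ports alone; a sketch (the interface name and Squid port are assumptions):

```shell
# Hypothetical rules on the Squid box acting as the router
# (eth0 = LAN-facing interface, 3128 = Squid's http_port):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  -j REDIRECT --to-ports 3128
# POP3/SMTP/IMAP etc. match nothing above, so the kernel simply
# routes them out as normal traffic.
```

With that in place Outlook Express talks POP/SMTP directly while web traffic still passes through Squid.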


FWIW: OE will use the internet explorer options and Thunderbird has "HTTP
proxy" settings configurable. This will only pass the attachments and
images displayed in HTML emails through the proxy. Not the actual email
texts.

Amos


RE: [squid-users] Disable user accounts

2010-03-22 Thread David Parks
So, if I understand correctly, squid has no way for me to force a user
account to be expired or cleared prematurely. Setting the nonce_max_duration
low wouldn't block a user with a constant stream of traffic, say watching a
video for example.

If the above statements are correct, then do you have any thoughts on how
challenging a change like this would be at the code level? For example,
having a command similar to "squid -k reconfigure" (e.g. "squid -r
user_to_expire") in which case squid would simply expire the given
credentials, thus "tricking" squid into re-authenticating on demand?

If user credentials are simply a table in memory this seems conceptually
simple to accomplish. Though I'm a java developer and haven't touched C/++
in many years, so I'm not sure this is worth considering unless you think
it's as simple as it seems like it could be.

Thanks!
Dave

p.s. my purpose in following this line of questioning is to monitor log
files for per user traffic, and after a user exceeds their data transfer
quota, I need to block further access. I don't want to slow access for users
within their quota.
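For the quota use case specifically, one workable pattern today (a suggestion on my part, not a built-in Squid feature) is an external_acl_type helper: the log-monitoring app maintains a blocklist file of over-quota users, and Squid asks the helper about each request's %LOGIN. A minimal sketch; the blocklist path and file format are hypothetical, but the one-line OK/ERR exchange is Squid's standard external ACL helper protocol:

```python
import sys

def verdict(user, blocked):
    # One external-ACL answer per input line:
    # "OK"  -> the ACL matches (user is over quota)
    # "ERR" -> no match (user is within quota)
    return "OK" if user.strip() in blocked else "ERR"

def load_blocked(path):
    # Blocklist file (hypothetical path), one username per line,
    # rewritten by the log-watching app when a user crosses the quota.
    try:
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}
    except OSError:
        return set()  # fail open if the list is missing

def main(path="/etc/squid/over_quota.txt"):
    # Squid writes one %LOGIN per line and expects one answer per line;
    # re-reading the file on each request keeps the helper stateless.
    for line in sys.stdin:
        print(verdict(line, load_blocked(path)), flush=True)

# When installed as a real helper, run the loop:
#   if __name__ == "__main__":
#       main()
```

Wired up roughly as: "external_acl_type quota_check ttl=60 %LOGIN /usr/local/bin/quota_check.py", then "acl over_quota external quota_check" and "http_access deny over_quota" placed before the allow rules; keep the ttl short so revocation takes effect quickly. This blocks new requests only, which matches the quota use case without slowing users who are under their limit.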




-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, March 22, 2010 12:35 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Disable user accounts

David Parks wrote:
> I will be monitoring squid usage logs and need to disable user 
> accounts from an external app (block them from making use of the proxy 
> after they are authenticated).
> 
> I'm not quite following the FAQ on this
> (http://wiki.squid-cache.org/Features/Authentication?action=show&redir
> ect=SquidFaq/ProxyAuthentication#How_do_I_ask_for_authentication_of_an
> _already_authenticated_user.3F) because I don't have any criteria on 
> which the ACL might force a re-negotiation (or I just don't understand 
> the proposed solution).

Re-challenge is automatic whenever a new request needs to be authed and the
currently known credentials are unknown or too old to be used.

> 
> I'm also not clear if ("nonce_garbage_interval") and
> ("nonce_max_duration") are actually forcing a password check against 
> the authentication module, or if they are just dealing with the 
> nuances of the digest authentication protocol. I have them set to

garbage collection only removes things known to be dead already. The garbage
interval determines how often the memory caches are cleaned out above and
beyond the regular as-used cleanings.

  nonce_max_duration determines how long the nonces may be used for. 
It's closer to what you are wanting, but I'm not sure if there are any nasty
side effects of setting it too low.

> their defaults, but after making a change to the password file that 
> digest_pw_auth helper uses, I do not get challenged for the updated 
> password. Could it just be that digest_pw_auth didn't re-read the 
> password file after I made the change?

Yes.

> 
> Thanks! David
> 
> 
> p.s. thanks for all of the responses to this point, I haven't replied 
> as such with a "thanks", but the help on this user group is fantastic 
> and is really appreciated, particularly Amos, you're a god-send!

Welcome.

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
   Current Beta Squid 3.1.0.18




[squid-users] rules ebtables

2010-03-22 Thread Ariel
Hi list, a query: I've finished putting together a bridge + squid with
tproxy, but I have a doubt about the ebtables rules, as the wiki says one
thing but it behaves differently for me.

It only works if I put --redirect-target ACCEPT at the end of the rule
instead of DROP.


Is this right?



${EBTABLES} -t broute -F
${EBTABLES} -t broute -A BROUTING -i $EXTDEV -p ipv4 --ip-protocol tcp
--ip-source-port 80 -j redirect --redirect-target ACCEPT
${EBTABLES} -t broute -A BROUTING -i $INTDEV -p ipv4 --ip-protocol tcp
--ip-destination-port 80 -j redirect --redirect-target ACCEPT


RE: [squid-users] Disable user accounts

2010-03-22 Thread Amos Jeffries
On Mon, 22 Mar 2010 16:26:26 -0600, "David Parks" 
wrote:
> So, if I understand correctly, squid has no way for me to force a user
> account to be expired or cleared prematurely. Setting the
> nonce_max_duration
> low wouldn't block a user with a constant stream of traffic, say
watching a
> video for example.

Even obsolete auth details won't block an existing stream.
The key word there is "prematurely".

> 
> If the above statements are correct, then do you have any thoughts on
how

They are not quite.

> challenging a change like this would be at the code level? For example,
> having a command similar to "squid -k reconfigure" (e.g. "squid -r
> user_to_expire") in which case squid would simply expire the given
> credentials, thus "tricking" squid into re-authenticating on demand?

-k reconfigure and -k restart will break client connections in current
Squid.

> 
> If user credentials are simply a table in memory this seems conceptually
> simple to accomplish. Though I'm a java developer and haven't touched
C/++
> in many years, so I'm not sure this is worth considering unless you
think
> it's as simple as it seems like it could be.

The user credentials are tagged data associated with each request. They
exist for as long as the request is ongoing.  Some are also attached to
specific TCP connections and live as long as the connection or until new
auth data is received inside the connection.

I say your statements above are "not quite" right because of this:
http://wiki.squid-cache.org/Features/Authentication#How_do_I_ask_for_authentication_of_an_already_authenticated_user.3F

> 
> Thanks!
> Dave
> 
> p.s. my purpose in following this line of questioning is to monitor log
> files for per user traffic, and after a user exceeds their data transfer
> quota, I need to block further access. I don't want to slow access for
> users
> within their quota.
> 

Real quota control is something that has long been wanted in Squid and the
groundwork has almost finished being laid into 3.2 but nobody yet has the
time to actually implement the feature.
http://wiki.squid-cache.org/Features/Quota

Amos


Re: [squid-users] rules ebtables

2010-03-22 Thread Amos Jeffries
On Mon, 22 Mar 2010 19:37:36 -0300, Ariel 
wrote:
> Hi list, a query: I've finished putting together a bridge + squid with
> tproxy, but I have a doubt about the ebtables rules, as the wiki says one
> thing but it behaves differently for me.
> 
> It only works if I put --redirect-target ACCEPT at the end of the rule
> instead of DROP.
> 
> 
> Is this right?

No. The meaning of "ACCEPT" and "DROP" have different semantics in
ebtables.

DROP means to stop bridging and perform normal routing rules on the
packets.

ACCEPT means the kernel will bridge all packets without passing them to
Squid or the local host routing.
The client will fetch the pages itself, and the client IP shows externally
because it really is the client going out.

Amos


Re: [squid-users] Time-based ACLs for long connections

2010-03-22 Thread Amos Jeffries
On Mon, 22 Mar 2010 14:57:10 -0400, Jason Healy  wrote:
> I work at a school where we would like to limit bandwidth during certain
> times of day (study times), but relax those restrictions at other times
of
> day.  We're looking into delay pools to shape the traffic, and
time-based
> ACLs to assign the connections to different pools.  I'm pretty sure my
> guess about how time-based ACLs work is correct, but I wanted to confirm
> before I set this all up and have a major "duh" moment.
> 
> Assignment to a delay pool using a time-based ACL would only occur at
the
> start of the connection, correct?  In other words, if I have a ACLs
like:
> 
>   acl play time MTWHF 15:00-19:59
>   acl work time MTWHF 20:00-22:59
>   # plus some assignment to delay pools using "play" and "work" ACLs...
> 
> Then a user who starts a long download at 19:59 will enjoy the benefits
of
> the "play" ACL for the duration of the download, while one who starts at
> 22:59 will be penalized with the "work" ACL until their download
completes.
> There is no re-evaluation of the ACL while the download is in progress
> that would notice that the ACL time boundary has been crossed.
> 
> Just want to check before I start architecting our solution...
> 

Correct.

Amos


Re: [squid-users] Speed up squid

2010-03-22 Thread Amos Jeffries
On Mon, 22 Mar 2010 15:16:07 -0500, Kevin Blackwell 
wrote:
> I currently have squid installed.
> 
> I need to get some more speed out of it though. Was wondering if the
> community had any thoughts.
> 
> I've expressed to the powers that be that throwing a proxy in the way
> by nature will create a bottleneck to some degree. I mean it does have
> to fetch the file, do it's thing, then let the proxy user's browser
> have it. not to mention the that the url needs to be checked, scan
> files for viruses, etc...
> 
> Also open to a hardware solution if anyone has any suggestions and
> feel it would be faster.
> 
> Squid requirements.
> 
> We really only need squid to act as a web filter. Ive been told that
> other smaller and faster programs can do this. But after looking into
> them, they seem to be lacking on the one specific I need it to do.
> 
> NTLM authentication
> 
> That's the #1 most important requirement.
> 
> After that I need SPEED.
> 
> I basically just need the proxy to
> 
> 1. Log NT domain users

Note: NTLM is a deprecated security method. It's being replaced by
Kerberos which is faster and more secure.
One of the things to improve performance is looking at doing that
migration where possible.

> 2. Filter websites based on a blacklist (squidguard seems to be doing
> OK for our needs currently, open to other possibilities )
> 3. Virus Filter ( Currently using havp, but open to faster/better
> suggestions )
> 4. SPEED ( just to reiterate )
> 
> Anyone have any pointers to set squid up with those requirements?

* The biggest bottleneck is the body data filtering software.

All Squid does is some processing of the HTTP headers portion and
pass-thru the rest. The smaller the objects get the more processing Squid
has to do relative to data pipe throughput.

* Followed by ACL and storage IO speeds.

Amos



[squid-users] FileDescriptor Issues

2010-03-22 Thread a...@gmail

I have solved the system-side problem; I managed to increase the file
descriptor limit, and my system now reads 65535.
But Squid still says only 1024 file descriptors are available.

What can I do to fix this please, I have rebooted the system and Squid 
several times

I am running out of ideas

Any help would be appreciated
Regards
Adam



Re: [squid-users] FileDescriptor Issues

2010-03-22 Thread Ivan .
Have you set the descriptor size in the squid start up script?

see here
http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/

cheers
Ivan

On Tue, Mar 23, 2010 at 12:45 PM, a...@gmail  wrote:
>
> I have solved the problem, I managed to increase the filedescriptor
> My system now reads 65535
> But Squid still says only 1024 fileDescriptors available
>
> What can I do to fix this please, I have rebooted the system and Squid 
> several times
> I am running out of ideas
>
> Any help would be appreciated
> Regards
> Adam
>


Re: [squid-users] FileDescriptor Issues

2010-03-22 Thread a...@gmail

Thanks Ivan for your suggestion
But in my case it's slightly different
I have no squid in

/etc/default/squid


/etc/init.d/
Mine is located in /usr/local/squid/sbin/squid, unless I try this:

/usr/local/squid/sbin/squid
 SQUID_MAXFD=4096

And then restart it, but I am not sure. I am using Ubuntu Hardy, and I think 
this tip is for the Squid that is packaged with Ubuntu and not the compiled 
Squid. Thanks for your suggestion, I appreciate it.
Regards
Adam

From: "Ivan ." 
To: "a...@gmail" 

Cc: 
Sent: Tuesday, March 23, 2010 1:50 AM
Subject: Re: [squid-users] FileDescriptor Issues



Have you set the descriptor size in the squid start up script?

see here
http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/

cheers
Ivan

On Tue, Mar 23, 2010 at 12:45 PM, a...@gmail  
wrote:


I have solved the problem, I managed to increase the filedescriptor
My system now reads 65535
But Squid still says only 1024 fileDescriptors available

What can I do to fix this please, I have rebooted the system and Squid 
several times

I am running out of ideas

Any help would be appreciated
Regards
Adam





Re: [squid-users] FileDescriptor Issues

2010-03-22 Thread a...@gmail

Sorry I haven't set it in the Start up script
But I will try it right away
Regards
Adam

- Original Message - 
From: "Ivan ." 

To: "a...@gmail" 
Cc: 
Sent: Tuesday, March 23, 2010 1:50 AM
Subject: Re: [squid-users] FileDescriptor Issues



Have you set the descriptor size in the squid start up script?

see here
http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/

cheers
Ivan

On Tue, Mar 23, 2010 at 12:45 PM, a...@gmail  
wrote:


I have solved the problem, I managed to increase the filedescriptor
My system now reads 65535
But Squid still says only 1024 fileDescriptors available

What can I do to fix this please, I have rebooted the system and Squid 
several times

I am running out of ideas

Any help would be appreciated
Regards
Adam





[squid-users] Listing or Deauthenticating Users

2010-03-22 Thread Frank Parker
I couldn't find this in the archives.  Using Basic or Digest auth...
Is it possible to list the current authenticated user sessions?  Is it
possible to deauthenticate a user and thereby force an authentication
prompt on the next request from his/her browser?


Re: [squid-users] FileDescriptor Issues

2010-03-22 Thread Amos Jeffries
On Tue, 23 Mar 2010 02:19:40 -, "a...@gmail" 
wrote:
> Thanks Ivan for your suggestion
> But in my case it's slightly different
> I have no squid in
> 
> /etc/default/squid
> 
> 
> /etc/init.d/
> Mine is located in /usr/local/squid/sbin/squid, unless I try this:
> /usr/local/squid/sbin/squid
>   SQUID_MAXFD=4096
> 

/etc/default/squid is a configuration file for configuring the system
init.d/squid script.
 It does not exist normally; you create it only when overrides are needed.

.../sbin/squid is supposed to be the binary application which gets run.

> And then restart it, but I am not sure. I am using Ubuntu Hardy, and I
> think this tip is for the Squid that is packaged with Ubuntu and not the
> compiled Squid.

Bash shells reset the descriptor limit back down towards 1024 each time a
new one is spawned. It must _always_ be raised to the wanted limit before
running Squid, whether you do it manually on the command line each time, in
the init.d script, or in some other custom starter script.


My Ubuntu systems show default OS limits of just over 24K FD available.

Building Squid with:
  ulimit -HSn 65535 && ./configure --with-filedescriptors=65535 ...
  make install

starting:  squid -f /etc/squid.conf
squid shows 1024

starting: ulimit -Hsn 64000 && squid -f /etc/squid.conf
squid shows 64000

Amos


Re: [squid-users] Listing or Deauthenticating Users

2010-03-22 Thread Amos Jeffries
On Mon, 22 Mar 2010 19:24:47 -0700, Frank Parker
 wrote:
> I couldn't find this in the archives.  Using Basic or Digest auth...
> Is it possible to list the current authenticated user sessions?  Is it
> possible to deauthenticate a user and thereby force an authentication
> prompt on the next request from his/her browser?


Listing active requests:

  squidclient mgr:active_requests

also available through the cachemgr.cgi interface.

NP: you need the "manager" ACLs setup to permit your request into Squid.

De-authenticate. Not on existing connections once they have been accepted.

Amos


Re: [squid-users] FileDescriptor Issues

2010-03-22 Thread a...@gmail

Thanks Amos for this tip I will try that and keep you posted
Regards
Adam

- Original Message - 
From: "Amos Jeffries" 

To: 
Sent: Tuesday, March 23, 2010 2:54 AM
Subject: Re: [squid-users] FileDescriptor Issues



On Tue, 23 Mar 2010 02:19:40 -, "a...@gmail" 
wrote:

Thanks Ivan for your suggestion
But in my case it's slightly different
I have no squid in

/etc/default/squid


/etc/init.d/
Mine is located in /usr/local/squid/sbin/squid, unless I try this:
/usr/local/squid/sbin/squid
  SQUID_MAXFD=4096



/etc/default/squid is a configuration file for configuring the system
init.d/squid script.
It does not exist normally, you create it only when overrides are needed.

.../sbin/squid is supposed to be the binary application which gets run.


And then restart it, but I am not sure. I am using Ubuntu Hardy, and I
think this tip is for the Squid that is packaged with Ubuntu and not the
compiled Squid.


Bash shells reset the descriptor limit back down towards 1024 each time a
new one is spawned. It must _always_ be raised to the wanted limit before
running Squid, whether you do it manually on the command line each time, in
the init.d script, or in some other custom starter script.


My Ubuntu systems show default OS limits of just over 24K FD available.

Building Squid with:
 ulimit -HSn 65535 && ./configure --with-filedescriptors=65535 ...
 make install

starting:  squid -f /etc/squid.conf
squid shows 1024

starting: ulimit -Hsn 64000 && squid -f /etc/squid.conf
squid shows 64000

Amos 




Re: [squid-users] Blocking Instant Messaging

2010-03-22 Thread Amos Jeffries
> On 22/03/2010 07:56, "a bv" wrote:
> 
> Hi,
> 
> I have a squid running and i would like to block/control  the instant
> messaging trafffic at squid (especially MSN/Windows Live Messenger) .
> 
> So how can i do this effectively?
> 
> Regards
> 

http://wiki.squid-cache.org/ConfigExamples#Instant_Messaging_.2BAC8_Chat_Program_filtering

Amos


[squid-users] clients -- SSL SQUID -- NON SSL webserver

2010-03-22 Thread Guido Marino Lorenzutti
Hi people: I'm trying to give my clients access to my non-SSL 
webservers through my reverse proxies, adding SSL support on them.


Like the subject tries to explain:

WAN CLIENTS --- SSL SQUID (443) --- NON SSL webserver (80).

This is the relevant part of the squid.conf:

https_port 22.22.22.22:443 cert=/etc/squid/crazycert.domain.com.crt  
key=/etc/squid/crazycert.domain.com.key  
defaultsite=crazycert.domain.com vhost  
sslflags=VERIFY_CRL_ALL,VERIFY_CRL cafile=/etc/squid/ca.crt  
clientca=/etc/squid/ca.crt


cache_peer crazycert.domain.com parent 80 0 no-query proxy-only  
originserver login=PASS


I'm using a self-signed certificate, and Squid should not allow the 
connection if the client does not have a valid key.


When I try to connect I get this error:

2010/03/23 00:39:47| SSL unknown certificate error 3 in  
/C=AR/ST=Buenos Aires/L=Ciudad Aut\xF3noma de Buenos Aires/O=Consejo  
de la Magistratura de la C.A.B.A./OU=Direcci\xF3n de Inform\xE1tica y  
Tecnolog\xEDa/CN=Guido Marino  
Lorenzutti/emailaddress=glorenzu...@jusbaires.gov.ar


2010/03/23 00:39:47| clientNegotiateSSL: Error negotiating SSL  
connection on FD 12: error:140890B2:SSL  
routines:SSL3_GET_CLIENT_CERTIFICATE:no certificate returned (1/-1)


Any ideas?
I don't think the problem is in the certificates, because I'm using them on 
an Apache working as a reverse proxy. But I would prefer having Squid 
for everything.


Tnxs in advance.



Re: [squid-users] clients -- SSL SQUID -- NON SSL webserver

2010-03-22 Thread Luis Daniel Lucio Quiroz
Le Lundi 22 Mars 2010 21:47:05, Guido Marino Lorenzutti a écrit :
> Hi people: Im trying to give my clients access to my non ssl
> webservers thru my reverse proxies adding ssl support on them.
> 
> Like the subject tries to explain:
> 
> WAN CLIENTS --- SSL SQUID (443) --- NON SSL webserver (80).
> 
> This is the relevant part of the squid.conf:
> 
> https_port 22.22.22.22:443 cert=/etc/squid/crazycert.domain.com.crt
> key=/etc/squid/crazycert.domain.com.key
> defaultsite=crazycert.domain.com vhost
> sslflags=VERIFY_CRL_ALL,VERIFY_CRL cafile=/etc/squid/ca.crt
> clientca=/etc/squid/ca.crt
> 
> cache_peer crazycert.domain.com parent 80 0 no-query proxy-only
> originserver login=PASS
> 
> Im using a self signed certificate and the squid should not allow the
> connection if the client does not have a valid key.
> 
> When I try to connect I get this error:
> 
> 2010/03/23 00:39:47| SSL unknown certificate error 3 in
> /C=AR/ST=Buenos Aires/L=Ciudad Aut\xF3noma de Buenos Aires/O=Consejo
> de la Magistratura de la C.A.B.A./OU=Direcci\xF3n de Inform\xE1tica y
> Tecnolog\xEDa/CN=Guido Marino
> Lorenzutti/emailaddress=glorenzu...@jusbaires.gov.ar
> 
> 2010/03/23 00:39:47| clientNegotiateSSL: Error negotiating SSL
> connection on FD 12: error:140890B2:SSL
> routines:SSL3_GET_CLIENT_CERTIFICATE:no certificate returned (1/-1)
> 
> Any ideas?
> I don't think the problem is in the certificates, coz im using them on
> an apache working like reverse proxy. But I would prefer having squid
> for everything.
> 
> Tnxs in advance.

You can't.
Look for the Apache fake-ssl mod to do that.


Re: [squid-users] clients -- SSL SQUID -- NON SSL webserver

2010-03-22 Thread Amos Jeffries

Luis Daniel Lucio Quiroz wrote:

Le Lundi 22 Mars 2010 21:47:05, Guido Marino Lorenzutti a écrit :

Hi people: Im trying to give my clients access to my non ssl
webservers thru my reverse proxies adding ssl support on them.

Like the subject tries to explain:

WAN CLIENTS --- SSL SQUID (443) --- NON SSL webserver (80).

This is the relevant part of the squid.conf:

https_port 22.22.22.22:443 cert=/etc/squid/crazycert.domain.com.crt
key=/etc/squid/crazycert.domain.com.key
defaultsite=crazycert.domain.com vhost
sslflags=VERIFY_CRL_ALL,VERIFY_CRL cafile=/etc/squid/ca.crt
clientca=/etc/squid/ca.crt


"cafile=" option overrides the "clientca=" option and contains a single 
CA to be checked.


Set clientca= to the file containing the officially accepted global CA 
certificates. The type used for multiple certificates is a .PEM file if 
I understand it correctly.


If you have issued the clients with certificates signed by your own 
custom CA, then add that to the list as well.


I will assume that you know how to do that since you are requiring it.



cache_peer crazycert.domain.com parent 80 0 no-query proxy-only
originserver login=PASS

Im using a self signed certificate and the squid should not allow the
connection if the client does not have a valid key.

When I try to connect I get this error:

2010/03/23 00:39:47| SSL unknown certificate error 3 in
/C=AR/ST=Buenos Aires/L=Ciudad Aut\xF3noma de Buenos Aires/O=Consejo
de la Magistratura de la C.A.B.A./OU=Direcci\xF3n de Inform\xE1tica y
Tecnolog\xEDa/CN=Guido Marino
Lorenzutti/emailaddress=glorenzu...@jusbaires.gov.ar

2010/03/23 00:39:47| clientNegotiateSSL: Error negotiating SSL
connection on FD 12: error:140890B2:SSL
routines:SSL3_GET_CLIENT_CERTIFICATE:no certificate returned (1/-1)

Any ideas?
I don't think the problem is in the certificates, coz im using them on
an apache working like reverse proxy. But I would prefer having squid
for everything.

Tnxs in advance.


You cant
look for apache fake-ssl mod  to do that


@Luis: What do you mean?

For reverse proxy environments it is possible and easily done AFAIK.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] clients -- SSL SQUID -- NON SSL webserver

2010-03-22 Thread Luis Daniel Lucio Quiroz
On Monday, 22 March 2010 at 23:30:27, Amos Jeffries wrote:
> Luis Daniel Lucio Quiroz wrote:
> > You cant
> > look for apache fake-ssl mod  to do that
> 
> @Luis: What do you mean?
> 
> For reverse proxy environments it is possible and easily done AFAIK.
> 
> Amos
Oh, I did try that scenario a while ago and I couldn't get it to work.