[squid-users] squid and squidguard

2008-08-26 Thread İsmail ÖZATAY

Hi ,
I am using 2.6.STABLE6 on CentOS 5.2 + squidGuard 1.3 & p1,p2,p3 +
Berkeley DB 2.7. Everything seems OK, but when I use
redirect_program in squid.conf my internal network connects directly,
bypassing squidGuard. I searched around but could not fix it. Can anybody
help me? Here is my config;


squidGuard.conf
-
logdir /usr/local/squidGuard/log
dbhome /usr/local/squidGuard/db

src int_net {
   ip 192.168.0.0/24
}
dest porn {
   domainlist BL/porn/domains
   urllist    BL/porn/urls
}
acl {
   int_net {
   pass !porn all
   }
   default {
   pass none
   redirect http://www.google.com.tr
   }
}



squid.conf
---
http_port 0.0.0.0:3128
acl all src 0.0.0.0/0.0.0.0
redirect_program /usr/local/bin/squidGuard -c 
/usr/local/squidGuard/squidGuard.conf

acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports





[squid-users] squid and squidGuard

2010-03-03 Thread Jaap Cammeraat
Hi,


I'm using squid-3.0.STABLE20
And running squidGuard 1.4


When I do a test in my shell I get the answer I want:


sh-3.2# echo "http://playboy.com 127.0.0.1/ - - GET" | 
/usr/local/squidGuard/bin/squidGuard -c /usr/local/squidGuard/squidGuard.conf -d
2010-03-03 12:26:10 [77887] New setting: dbhome: /usr/local/squidGuard/db
2010-03-03 12:26:10 [77887] New setting: logdir: /usr/local/squidGuard/log
2010-03-03 12:26:10 [77887] init domainlist 
/usr/local/squidGuard/db/porn/domains
2010-03-03 12:26:10 [77887] loading dbfile 
/usr/local/squidGuard/db/porn/domains.db
2010-03-03 12:26:10 [77887] init urllist /usr/local/squidGuard/db/porn/urls
2010-03-03 12:26:10 [77887] loading dbfile /usr/local/squidGuard/db/porn/urls.db
2010-03-03 12:26:10 [77887] squidGuard 1.4 started (1267615570.064)
2010-03-03 12:26:10 [77887] squidGuard ready for requests (1267615570.065)
2010-03-03 12:26:10 [77887] source not found
2010-03-03 12:26:10 [77887] no ACL matching source, using default
http://www.google.nl 127.0.0.1/- - -
2010-03-03 12:26:10 [77887] squidGuard stopped (1267615570.065)
sh-3.2#
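For reference: squidGuard reads one request per line on stdin, in Squid's
redirector format "URL client_ip/fqdn ident method". A sketch of a test that
exercises a configured src block rather than the default rule, assuming
192.168.0.10 falls inside one of your src ranges:

echo "http://playboy.com 192.168.0.10/- - GET" | \
  /usr/local/squidGuard/bin/squidGuard -c /usr/local/squidGuard/squidGuard.conf -d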


When I use the following lines in my squid.conf it doesn't work:


url_rewrite_program /usr/local/squidGuard/bin/squidGuard -c 
/usr/local/squidGuard/squidGuard.conf
url_rewrite_children 8



Any thoughts?

Regards,
Jaap Cammeraat



[squid-users] Squid and squidguard

2010-08-12 Thread Mamadou Touré
Hi,
when configuring squid for squidguard, we have:

redirect_program /usr/bin/squidGuard
redirect_children 10

what does redirect_children mean?

and what value should it have for a squid which manages about 100 clients?

regards.


[squid-users] Squid and Squidguard.

2013-06-12 Thread Beto Moreno
Hi.

Guys, I have little experience with squid and now need to learn how to use
squidguard.

My doubts are:

1) You have squid running with your ACLs, groups, users and rules; once you
set up squidguard, what is the order?
squid rules then squidguard rules, or
squidguard rules then squid rules?

2) Squidguard is a URL redirector, so will the squid ACL stuff continue working?

3) Can the squid ACL tool be replaced with squidguard, or are they totally different?

Sorry to ask this, I'm a little confused here, thanks for your time!!!


[squid-users] Squid and Squidguard

2005-04-04 Thread SXB6300 Mailing
Hi,

I'm using squidguard with squid to filter the websites our users are
accessing. But I'm wondering one thing: does squid cache the error
pages returned by squidguard?
Let me explain: when a user tries to access a porn site, squidguard returns a
"forbidden page", but as this page is seen as if returned from the porn
site, does squid cache this page?
If so, this would mean that the error page is cached many times, once per
blocked site...

Thx for the answer

Pierre-E



Re: [squid-users] squid and squidguard

2008-08-26 Thread Joop Beris
On Tuesday 26 August 2008 02:34:22 pm İsmail ÖZATAY wrote:
> Hi ,
> I am using 2.6.STABLE6 on CentOS 5.2 + squidGuard 1.3 & p1,p2,p3 +
> Berkeley DB 2.7. Everything seems OK, but when I use
> redirect_program in squid.conf my internal network connects directly,
> bypassing squidGuard. I searched around but could not fix it. Can anybody
> help me? Here is my config;



Hi Ismail,

Have a look at your squidGuard.log. Usually squidguard is very verbose when
something is not working.
Usually the problem lies in wrong permissions on the squidguard db files. Make
sure the user squid is running under can read the db files.
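A quick sketch of that check, assuming Squid runs as user "squid" and the
lists live under /usr/local/squidGuard/db (adjust both to your setup):

# are the db files readable by the squid user?
sudo -u squid head -1 /usr/local/squidGuard/db/BL/porn/domains
# if not, hand the tree to that user:
chown -R squid /usr/local/squidGuard/db
# and watch squidGuard's own log while testing:
tail -f /usr/local/squidGuard/log/squidGuard.log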

HTH,

Joop

 



Re: [squid-users] squid and squidguard

2008-08-26 Thread Marcus Kool

Hi Ismail,

I would add a redirect statement to the int_net acl rule.
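Something like this, as a sketch based on the config you posted (reusing the
same target URL you already have on the default rule):

acl {
   int_net {
   pass !porn all
   redirect http://www.google.com.tr
   }
   default {
   pass none
   redirect http://www.google.com.tr
   }
}

Without a redirect on the matching acl, squidGuard may have nothing to tell
Squid when a request is blocked, which would match the pass-through you are
seeing.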

observation: blocking porn without blocking proxies is the same as blocking 
nothing.
You might want to try ufdbGuard: it is faster than squidguard, and has
additional features for enforcing Google SafeSearch and verifying
HTTPS traffic (certificates and optionally blocking HTTPS to IP addresses 
instead of FQDNs).

-Marcus


İsmail ÖZATAY wrote:

Hi ,
I am using 2.6.STABLE6 on CentOS 5.2 + squidGuard 1.3 & p1,p2,p3 +
Berkeley DB 2.7. Everything seems OK, but when I use
redirect_program in squid.conf my internal network connects directly,
bypassing squidGuard. I searched around but could not fix it. Can anybody
help me? Here is my config;


[...]







Re: [squid-users] squid and squidguard

2008-08-26 Thread İsmail ÖZATAY

Marcus Kool wrote:

Hi Ismail,

I would add a redirect statement to the int_net acl rule.

observation: blocking porn without blocking proxies is the same as 
blocking nothing.

You might want to try ufdbGuard: it is faster than squidguard, and has
additional features for enforcing Google SafeSearch and verifying
HTTPS traffic (certificates and optionally blocking HTTPS to IP 
addresses instead of FQDNs).


-Marcus


İsmail ÖZATAY wrote:

[...]









Hi Marcus, I will try ufdbGuard.

Regards

ismail


Re: [squid-users] squid and squidguard

2008-08-26 Thread İsmail ÖZATAY

Marcus Kool wrote:

Hi Ismail,

I would add a redirect statement to the int_net acl rule.

observation: blocking porn without blocking proxies is the same as 
blocking nothing.

You might want to try ufdbGuard: it is faster than squidguard, and has
additional features for enforcing Google SafeSearch and verifying
HTTPS traffic (certificates and optionally blocking HTTPS to IP 
addresses instead of FQDNs).


-Marcus


İsmail ÖZATAY wrote:

[...]








Also I saw that this is a commercial product. Do you know any free
software like this?





Re: [squid-users] squid and squidguard

2008-08-26 Thread Indunil Jayasooriya
>> Also I saw that this is a commercial product. Do you know any free
>> software like this?

 What about this?
Please try

 http://www.shallalist.de/



-- 
Thank you
Indunil Jayasooriya


Re: [squid-users] squid and squidguard

2008-08-27 Thread Marcus Kool

Ismail,

ufdbGuard is free.
It can be used with a free URL database and
with a commercial database.

-Marcus


İsmail ÖZATAY wrote:

Marcus Kool wrote:

[...]








Also I saw that this is a commercial product. Do you know any free
software like this?







Re: [squid-users] squid and squidGuard

2010-03-03 Thread Henrik Nordstrom
Wed 2010-03-03 at 13:09 +0100, Jaap Cammeraat wrote:
> Hi,
> 
> 
> I'm using squid-3.0.STABLE20
> And running squidGuard 1.4
> 
> 
> When I do a test in my shell I get the answer I want:
> 
> 
> sh-3.2# echo "http://playboy.com 127.0.0.1/ - - GET" | 
> /usr/local/squidGuard/bin/squidGuard -c /usr/local/squidGuard/squidGuard.conf 
> -d

Don't run SquidGuard as root.. you need to test as your
cache_effective_user (the user Squid and any configured helpers run as
after starting up).

It's very likely you have a permission issue where the running user can
not access the SquidGuard data..
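For example, a sketch assuming cache_effective_user is "squid" (check your
squid.conf):

su -s /bin/sh squid -c 'echo "http://playboy.com 127.0.0.1/ - - GET" | \
  /usr/local/squidGuard/bin/squidGuard -c /usr/local/squidGuard/squidGuard.conf -d'

If this fails while the same test as root works, it is a permissions problem
on the db or log files.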

Regards
Henrik



RE: [squid-users] Squid and squidguard

2010-08-12 Thread Joseph L. Casale
>what does redirect_children mean?

First hit on Google explains it well :)
It's in the config manual:

Tag Name        redirect_children
Usage           redirect_children number

Description
This tag is used to set the number of redirect processes to spawn
Default         redirect_children 5

Example
redirect_children 10

Caution
If you start too few, Squid will have to wait for them to process a backlog of
URLs, slowing it down. If you start too many, they will use RAM and other system
resources.


Re: [squid-users] Squid and squidguard

2010-08-13 Thread donovan jeffrey j

On Aug 12, 2010, at 12:10 PM, Mamadou Touré wrote:

> Hi,
> when configuring squid for squidguard, we have:
> 
> redirect_program /usr/bin/squidGuard
> redirect_children 10
> 
> what does redirect_children mean?
> 
> and what value should it have for a squid which manages about 100 clients?
> 
> regards.
> 

it means how many squidguard instances squid should spawn.

/usr/local/bin/squidguard
/usr/local/bin/squidguard
/usr/local/bin/squidguard
/usr/local/bin/squidguard
/usr/local/bin/squidguard


watch your processes, i.e. top or netstat, and watch how many are being used. Then
you can adjust accordingly. 10 is usually just fine.
I have a case where I have thousands of connections so I run 100 redirectors.
Your squid logs will also tell you if you're running out.
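For example, a sketch (paths and log locations vary by install):

# how many squidGuard helpers are actually running?
ps ax | grep -c '[s]quidGuard'
# has squid complained about running out of them?
grep -i 'redirector processes are busy' /var/log/squid/cache.log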

-j

Re: [squid-users] Squid and Squidguard.

2013-06-12 Thread Bruno Santos
Hi !

I have squid and squidguard working with no problem.

The squid ACLs keep working (I have machine and user ACLs denying the
machines and users access to the internet),
and the ACLs related to web browsing (denied pages) are in squidguard.

You can also do this in squid, or vice versa.


Cheers,

Bruno Santos

- Original Message -
From: "Beto Moreno" 
To: squid-users@squid-cache.org
Sent: Wednesday, June 12, 2013 4:23:30 PM
Subject: [squid-users] Squid and Squidguard.

[...]

--




Use Open Source Software
"Human knowledge belongs to the world"
Bruno Santos
bvsan...@ulscb.min-saude.pt
http://www.twitter.com/feiticeir0
Tel: +351 962 753 053
Divisão de Informática
informat...@ulscb.min-saude.pt
Tel: +351 272 000 155
Fax: +351 272 000 257
Unidade Local de Saúde de Castelo Branco, E.P.E.
ge...@ulscb.min-saude.pt
Tel: +351 272 000 272
Fax: +351 272 000 257







Re: [squid-users] Squid and Squidguard.

2013-06-12 Thread Amos Jeffries

On 13/06/2013 3:23 a.m., Beto Moreno wrote:

Hi.

Guys, I have little experience with squid and now need to learn how to use
squidguard.

My doubts are:

1) You have squid running with your ACLs, groups, users and rules; once you
set up squidguard, what is the order?
squid rules then squidguard rules, or
squidguard rules then squid rules?


Squidguard is a separate program.

* Squid ACLs determine whether a transaction is processed, and how that 
processing is performed.
* Squidguard ACLs determine whether or not Squidguard tells Squid to 
alter the URL mid-transaction. Nothing more.


All ACLs in both are run. Squid's main http_access, adaptation systems and
url_rewrite_access ACLs are run before squidguard. The
url_rewrite_access ACLs determine whether squidguard is used *at all*.
squidguard is contacted and does its thing. Then the remainder of the
Squid ones are run on the new URL, depending on whether they need to.
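For example, a hypothetical squid.conf fragment that skips the rewriter
entirely for one LAN host (the address is made up):

acl rewrite_exempt src 192.168.0.10
url_rewrite_access deny rewrite_exempt
url_rewrite_access allow all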



2) Squidguard is a URL redirector, so will the squid ACL stuff continue working?


Yes.


3) Can the squid ACL tool be replaced with squidguard, or are they totally different?


Totally different. Although some people use URL-rewriting and
redirection to act like a proxy denial service, what actually happens
there is a *successful* response whose content says "failure".
It is worth avoiding that confusion and complexity whenever possible.


Amos


Re: [squid-users] Squid and Squidguard.

2013-06-13 Thread Beto Moreno
Guys, thanks for sharing your knowledge, you cleared my mind :-)

On Wed, Jun 12, 2013 at 8:40 PM, Amos Jeffries  wrote:
> [...]


RE: [squid-users] Squid and Squidguard

2005-04-04 Thread Elsen Marc

 
> Hi,
> 
> I'm using squidguard with squid to filter the websites our users are
> accessing. But I'm wondering one thing: does squid cache the error
> pages returned by squidguard?
> Let me explain: when a user tries to access a porn site, squidguard
> returns a
> "forbidden page", but as this page is seen as if returned from the porn
> site, does squid cache this page?

  - Squid doesn't cache page(s), only (web) objects.
  - The squidguard redirector returns a URL for squid to
fetch as a substitute for the blocked site, if so configured.
This URL is then fetched by squid, and its own freshness info will
determine whether squid caches it and related objects or not.
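A related note, as a sketch: newer Squid versions also accept a "302:"
prefix on the URL a redirector returns, e.g. in squidGuard.conf (the
hostname is an assumed example):

redirect 302:http://proxy.example.local/blocked.html

which makes squid send a real HTTP redirect to the browser instead of
fetching the block page under the blocked site's URL.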

> If so, this would mean that the error page is cached many times, once per
> blocked site...
> 

  Can't understand that argument. But remember also, squid does not
know about 'pages'
  
  M.


RE: [squid-users] Squid and Squidguard

2005-04-04 Thread SXB6300 Mailing
Thx for the info.
What I was trying to explain was:
For example if we take xxxsite1.com and xxxsite2.com, when a user tries to access
these 2 sites, squidguard will return an error.php with a specific msg and, for
example, a gif file. What I was wondering was: will this gif be stored two
times in the cache? (as an object of xxxsite1 and as one of xxxsite2)

Btw, I received the new hardware and configured it using the hints you gave me
(reiserfs, aufs, ...) as part of the tweaking. The improvement
in performance is significant (this morning we reached 92 req/s with only 50% CPU
utilisation on one CPU).
Thx!!!

Pierre-E

-Original Message-
From: Elsen Marc [mailto:[EMAIL PROTECTED]]
Sent: Monday 4 April 2005 15:16
To: SXB6300 Mailing; squid-users@squid-cache.org
Subject: RE: [squid-users] Squid and Squidguard


 
> [...]




RE: [squid-users] Squid and Squidguard

2005-04-04 Thread Elsen Marc

 
> 
> Thx for the info.
> What I was trying to explain was:
> For example if we take xxxsite1.com and xxxsite2.com, when a
> user tries to access these 2 sites, squidguard will return an
> error.php with a specific msg and for example a gif file.
> What I was wondering was: will this gif be stored two times
> in the cache? (as an object of xxxsite1 and as one of xxxsite2)
> 

 The squidguard redirector 'only' returns a different
 url for a blocked site, which is then fetched by squid, due
 to the nature of its redirector interface and how it works.

 All of this 'next step' is completely independent of
 'xxxsite1.com' and 'xxxsite2.com': the substitute url is fetched and
 cached once, under its own name.
 So the question is kind of irrelevant.

 M.


[squid-users] Squid and SquidGuard restarting. Why?

2006-07-12 Thread Brian Gregory
We have a Linux box running Suse 10.0 set up as a router and web proxy 
with filtering sharing our DSL connection between 7 Windows XP 
computers. It's running squid and squidGuard with a very large blacklist 
of forbidden URLs and phrases.


Because we basically have no money the Suse box is an old 400MHz Pentium 
II PC with only 256MB of RAM and this isn't likely to change in the near 
future, except that I might be able to get some more RAM if necessary.


Squid is set up to run 5 squidGuard processes. When we boot Suse it 
takes 15-20 minutes with lots of disk thrashing for the 5 squidGuards to 
read in the blacklists and build their tables. During this time the web 
proxy is non functional so we usually leave the Suse box running 24/7 to 
avoid having to wait for it.


Much of the time it works fine, but every now and then, for no obvious
reason, squid decides it needs to start more squidGuard processes, which
effectively cuts off all web access. I'm not sure exactly what happens;
maybe sometimes it just kills the existing squidGuards and starts new
ones, but it sometimes seems to end up running 10 squidGuards and thrashing
the disk hard for ages, leaving the users with no web access.


When it's all running properly, free -m seems to indicate that there is
enough memory:


                    total   used   free  shared  buffers  cached
Mem:                  250    246      3       0       51     126
-/+ buffers/cache:     68    181
Swap:                 400      2    397



Does anyone know what's going on and how to stop it happening?

--

Brian Gregory.
[EMAIL PROTECTED]

Computer Room Volunteer.
Therapy Centre.
Prospect Park Hospital.


[squid-users] Squid and SquidGuard Blacklist just Warning

2008-11-07 Thread Jarosch, Ralph
Hi everyone,

I'm searching for a solution that only warns users if they enter a website
which is on the blacklist.
Does anyone have an idea how I can realize this?

It would look like:

User enters www.denied.com -> squid redirects to squidGuard -> squidGuard
redirects to a local website with a button "I want to enter this site" -> if the
user clicks this button he can enter the website

Thanks a lot 
Ralph Jarosch
ZIB 
Zentraler IT-Betrieb Niedersächsische Justiz

- Technisches Betriebszentrum -
Ralph Jarosch
Schlossplatz 2
29221 Celle
Tel.: +49 (5141) 206-145
Mobil:   +49 (162) 9069470
E-Mail:    [EMAIL PROTECTED]
Intranet: http://intra.zib.niedersachsen.de

 



Re: [squid-users] Squid and SquidGuard restarting. Why?

2006-07-12 Thread Dwayne Hottinger
Quoting Brian Gregory <[EMAIL PROTECTED]>:

> [...]

How big are your access.log files?  There is a 2GB limit on Squid.  I would
definitely think about adding more memory to the box though.  You should be
able to pick up PC 100 memory fairly cheap.
--
Dwayne Hottinger
Network Administrator
Harrisonburg City Public Schools


Re: [squid-users] Squid and SquidGuard restarting. Why?

2006-07-13 Thread Brian Gregory

Dwayne Hottinger wrote:

Quoting Brian Gregory <[EMAIL PROTECTED]>:


[...]


How big are your access.log files?  There is a 2GB limit on Squid.  I would
definitely think about adding more memory to the box though.  You should be
able to pick up PC 100 memory fairly cheap.
--
Dwayne Hottinger
Network Administrator
Harrisonburg City Public Schools



Part of the problem may be log file rotation, which appears to be set to
restart squid at the moment.

However this does not explain why I sometimes find it running 10
squidGuard processes when my squid.conf specifies 5.
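If rotation is the trigger, a sketch of the usual alternative: have cron
rotate the logs in place instead of restarting squid,

/usr/sbin/squid -k rotate

with logfile_rotate N in squid.conf controlling how many old copies are kept.
The helpers still get restarted on rotate, but squid itself keeps running.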


--

Brian Gregory.
[EMAIL PROTECTED]

Computer Room Volunteer.
Therapy Centre.
Prospect Park Hospital.


Re: [squid-users] Squid and SquidGuard restarting. Why?

2006-07-18 Thread Henrik Nordstrom
Wed 2006-07-12 at 15:22 +0100, Brian Gregory wrote:

> Squid is set up to run 5 squidGuard processes. When we boot Suse it 
> takes 15-20 minutes with lots of disk thrashing for the 5 squidGuards to 
> read in the blacklists and build their tables.

This will be much faster if you let squidGuard build its lookup db.

> Much of the time it works fine but every now and then for no obvious 
> reason, squid decides it needs to start more squidGuard processes which 
> effectively cuts off all web access.

helper processes are restarted

  when "squid -k rotate" is run
  when "squid -k reconfigure" is run
  when more than 50% of the helpers have crashed
  if Squid crashes or is restarted

>  I'm not sure exactly what happens, 

See cache.log for information on why the helpers were restarted.

Regards
Henrik




[squid-users] Squid and Squidguard using high disk IO

2013-11-09 Thread Kaya Saman

Hi,

I'm wondering if anyone has any ideas on this one.

Basically I have created a standard Squid proxy using Squid 3.3.8 built 
from OpenBSD ports - OS version is OpenBSD 5.4 Current.


Additionally from ports as well I have installed squidGuard 1.4p6.


The configuration seems ok as everything is working; the acls set up in
squidGuard are redirecting to the proper "blocked" page when unwanted
information is embedded in a site: e.g. ads, porn.


Here is the rule list:

dest ads {
domainlist blacklists/ads/domains
urllist    blacklists/ads/urls
}

dest adv {
domainlist blacklists/adv/domains
urllist    blacklists/adv/urls
}

dest spyware {
domainlist blacklists/spyware/domains
urllist    blacklists/spyware/urls
}

dest porn {
domainlist blacklists/porn/domains
urllist    blacklists/porn/urls
expressionlist blacklists/porn/expressions
# Logged info is anonymized to protect users' privacy
log anonymous  dest/porn.log
}

acl {
lan {
# The built-in 'in-addr' destination group matches any IP address.
pass !ads !adv !porn all
}
default {
# Default deny to reject unknown clients
pass none
redirect  http://127.0.0.1/blocked.html

}
}

I removed the "spyware" option from the 'lan' acl as I'm currently trying
to debug


squidGuard is called by Squid using these lines in the squid.conf:

# Path to the redirector program
url_rewrite_program   /usr/local/bin/squidGuard

# Number of redirector processes to spawn
url_rewrite_children  500

# To prevent loops, don't send requests from localhost to the redirector
url_rewrite_access    deny localhost


The issue I'm currently seeing is that the disk IO is being hammered???

The 'lan' clients are therefore unable to access the web through the proxy.

Running 'top' and 'ps' I can see that squidGuard has spawned many 
processes which seems to be causing the high IO usage.


The system's hardware is quite powerful, with 8GB RAM and a Xeon E5 CPU
@3.6GHz; it is currently being tested with 3 lan machines.



What can I do to improve performance with this?


Is this line too high: url_rewrite_children  500

or have I simply misconfigured something?


I additionally have 'c-icap' running with squidclamav coupled to clamd 
in case that is of importance - not using the squidGuard line in the 
squidclamav.conf file!!!


Basically how can I get the IO usage down and get the system to work again?

- the logs don't indicate anything outside of 'starting squidGuard 
process' many times.



Regards,


Kaya



Re: [squid-users] Squid and SquidGuard restarting. Why?

2006-07-13 Thread Brian Gregory

[EMAIL PROTECTED] wrote:

Please read the documentation for squidguard.

In short: You should build the squidguard database containing your blacklists
once. After that squidguard should start within a few seconds.
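A sketch of that one-time build, assuming the lists live under
/var/lib/squidGuard/db and Squid runs as user "squid":

/usr/sbin/squidGuard -C all
chown -R squid /var/lib/squidGuard/db
/usr/sbin/squid -k reconfigure

The -C pass writes domains.db/urls.db files next to the text lists, and the
squidGuard helpers then load those instead of re-parsing the text.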

Mit freundlichem Gruß/Yours sincerely
Werner Rost
GMT-FIR - Netzwerk
 
 ZF Boge Elastmetall GmbH

 Friesdorfer Str. 175
 53175 Bonn
 Deutschland/Germany 
 Telefon/Phone +49 228 3825 - 420

 Telefax/Fax +49 228 3825 - 398
 [EMAIL PROTECTED]






Ok, I found some documentation that says the -C listfile parameter builds
the database, but there doesn't seem to be any info on how to use
a pre-built database. Maybe all will become clear if I experiment a bit.




Re: [squid-users] Squid and SquidGuard restarting. Why?

2006-07-13 Thread Werner.Rost
Define the location of the pre-built database in the configuration file of
squidguard.

Example:

destination porn {
domainlist     porn/domains
urllist        porn/urls
expressionlist porn/expressions
log            porn.log
}


Mit freundlichem Gruß/Yours sincerely
Werner Rost
GMT-FIR - Netzwerk
 
 ZF Boge Elastmetall GmbH
 Friesdorfer Str. 175
 53175 Bonn
 Deutschland/Germany 
 Telefon/Phone +49 228 3825 - 420
 Telefax/Fax +49 228 3825 - 398
 [EMAIL PROTECTED]



-Original Message-
From: Brian Gregory [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 13 July 2006 13:11
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid and SquidGuard restarting. Why?


[EMAIL PROTECTED] wrote:
> [...]



RE: [squid-users] Squid and Squidguard using high disk IO

2013-11-09 Thread Rafael Akchurin
Hello Kaya,

May I recommend trying qlproxy together with your Squid?
Qlproxy is an ICAP web filter which may in your particular case do better than
SquidGuard. At least you may give it a try to compare whether the disk io goes down.

Best regards,
Raf

-Original Message-
From: Kaya Saman [mailto:kayasa...@gmail.com] 
Sent: Saturday, November 09, 2013 4:58 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Squid and Squidguard using high disk IO

[...]



Re: [squid-users] Squid and Squidguard using high disk IO

2013-11-09 Thread Eliezer Croitoru

Hey,

Notes inside.

On 11/09/2013 05:58 PM, Kaya Saman wrote:


What can I do to improve performance with this?


Is this line too high: url_rewrite_children  500

YES!!


or have I simply misconfigured something?




I additionally have 'c-icap' running with squidclamav coupled to clamd
in case that is of importance - not using the squidGuard line in the
squidclamav.conf file!!!

Basically how can I get the IO usage down and get the system to work again?

For how many users exactly?
Just a note: I am not in favor of any OS by default, but I would
feel better using Linux.




- the logs don't indicate anything outside of 'starting squidGuard
process' many times.
The basic assumption behind using 500 child processes is that you have at
least 100 CPUs.

SquidGuard was designed for performance, which means lots of URLs per second.
It can be tested just to make the point clear.
For example, at a rate of 1500k requests per second you should not need
more than 40-50 children.
In practice it works at a somewhat different speed, since there is a speed
limit on STDIN and STDOUT which slows down squid/squidguard
communication, blocking the whole squid instance (in a way).


If you need basic url filtering you can use ICAP, which has the option to
run as a standalone service outside of squid's settings and machine.


In the past I have written a small ICAP service for request
manipulation and filtering.
I never finished it to a level I was happy with, but the basic code
can be seen here:

https://github.com/elico/echelon

I know for a fact that the ICAP interface adds concurrency by the "nature"
of it using TCP.
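A hypothetical squid.conf fragment for wiring up such a service (the port
and the service path are assumptions, not anything echelon-specific):

icap_enable on
icap_service urlfilter reqmod_precache bypass=0 icap://127.0.0.1:1344/request
adaptation_access urlfilter allow all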


This is not the place to ask about concurrency in squidguard, which would
allow the use of far fewer processes (children) for more requests.


To find the right number of children, start with 40, see if it fits you,
and then see what the bottleneck is in the whole setup.
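For example, a sketch (the startup=/idle= options exist on Squid 3.2 and
later):

url_rewrite_children 40 startup=10 idle=5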


Eliezer


Re: [squid-users] Squid and Squidguard using high disk IO

2013-11-09 Thread Kaya Saman

On 11/09/2013 05:04 PM, Rafael Akchurin wrote:

Hello Kaya,

May I recommend trying qlproxy together with your Squid?
Qlproxy is an ICAP web filter which may in your particular case do better than
SquidGuard. At least you may give it a try to compare whether the disk io goes down.

Best regards,
Raf


I'll take a look at it - thanks!

I was also thinking about using Adzapper but I'll do more reading and 
figure out which is the best one for my setup.



Is this line too high: url_rewrite_children 500
YES!! 


Oops, the guide I was working from suggested that.


Basically how can I get the IO usage down and get the system to work 
again?

For how many users exactly?
Just a note: I am not in favor of any OS by default, but I would
feel better using Linux.


At the moment I'm just testing with one user! Using sqtop I can see that 
there are 30+ connections being passed to Squid.


But overall this runs on my main router; hence I can't use Linux due to 
the fact that the router is running OpenBSD and needs some special stuff 
from the OS.



To find the right number of children, start with 40, see if it fits you,
and then see what the bottleneck is in the whole setup.


Eliezer 


I tried 5 and it was a bit better, but not by much. I just cranked it
up to 40 now.


I also disabled DNS lookups in squidclamav.conf, which seems to have
helped a bit, though I'm still experiencing issues. :-(



As mentioned above I am thinking of running Adzapper and then chaining
squidGuard onto that; though it might just be squidclamav that's causing
this???


The issue seems to get resolved after stopping Squid and then killing the
remaining squidguard processes, so it's really confusing as to where to
look for the "bottleneck".


Regards,


Kaya





-Original Message-
From: Kaya Saman [mailto:kayasa...@gmail.com]
Sent: Saturday, November 09, 2013 4:58 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Squid and Squidguard using high disk IO

[...]





Re: [squid-users] Squid and Squidguard using high disk IO

2013-11-09 Thread Kaya Saman

Just found this in the Squid cache log:

2013/11/09 19:28:25 kid1| /var/squid/cache/04/7A: (24) Too many open files
2013/11/09 19:31:31 kid1| WARNING: All 20/20 redirector processes are busy.
2013/11/09 19:31:31 kid1| WARNING: 20 pending requests queued
2013/11/09 19:31:31 kid1| WARNING: Consider increasing the number of 
redirector processes in your config file.



The cache size is 2GB though that shouldn't affect performance as 
far as I understand.


On 11/09/2013 05:23 PM, Eliezer Croitoru wrote:

[...]




Re: [squid-users] Squid and Squidguard using high disk IO

2013-11-09 Thread Loïc BLOT
Hello Kaya,
first, don't forget to look at the sysctl kern.maxfiles value.
Also raise the daemon FD limits in login.conf for squid. Don't forget each
connection is a FD (1 connection for the client, 1 for the transaction
to the remote site, some for the caching).

Also, to improve squidguard performance, I stored all the blacklist DBs
on a memory fs (mfs); this massively improves squidguard performance.
I have written an article on improving squid performance on OpenBSD:
http://www.unix-experience.fr/2013/monter-un-proxy-cache-performant-avec-squid-et-openbsd/
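A sketch of the OpenBSD side (sizes and paths are assumptions; see sysctl(8),
login.conf(5) and mount_mfs(8)):

# raise the global descriptor limit; persist it in /etc/sysctl.conf
sysctl kern.maxfiles=20480
# mount a memory filesystem and copy the blacklist DBs onto it,
# then point dbhome in squidGuard.conf at the mfs mount
mount_mfs -s 512m swap /var/squidguard-mfs
cp -Rp /var/db/squidguard/* /var/squidguard-mfs/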



-- 
Best regards,
Loïc BLOT, 
UNIX systems, security and network engineer
http://www.unix-experience.fr



On Saturday 09 November 2013 at 19:39, Kaya Saman wrote:
> [...]




Re: [squid-users] Squid and Squidguard using high disk IO

2013-11-09 Thread Marcus Kool
On Sat, Nov 09, 2013 at 11:16:12PM +0100, Loïc BLOT wrote:
> Hello Kaya,
> first, don't forget to look at the sysctl kern.maxfiles value.
> Also raise the daemon FD limits in login.conf for squid. Don't forget each
> connection is a FD (1 connection for the client, 1 for the transaction
> to the remote site, some for the caching).
> 
> Also, to improve squidguard performance, I stored all the blacklist DBs
> on a memory fs (mfs); this massively improves squidguard performance.

If the disk I/O is really the bottleneck, consider ufdbGuard.
ufdbGuard loads the URL database in memory and easily does
25,000 URL lookups/sec, much more than you will ever need.

Marcus

> [...]




Re: [squid-users] Squid and Squidguard using high disk IO

2013-11-09 Thread Kaya Saman

Thanks so much for all the advice and responses :-)

I decided to try Dansguardian.

Currently I have a working model set up; it needs a bit of tuning
and tweaking, but the good news is that I am using the SquidGuard blacklists,
so all is pretty much good!!

I have been testing; performance is phenomenal, though sometimes when Squid
can't connect to a site properly in order to populate the cache etc...
the pages might need a bit of refreshing. However, I consider those
just teething problems.

So yeah, NET <- NAT <-  <- Dansguardian <- PF
is how things look now :-)



Regards,


Kaya


On 11/09/2013 10:37 PM, Marcus Kool wrote:

[...]


Re: [squid-users] Squid and SquidGuard restarting. Why?

2006-07-13 Thread Brian Gregory

[EMAIL PROTECTED] wrote:

Define the location of the pre-built database in the configuration file of
squidguard.

[...]


I think I've got it working now; it certainly starts up much quicker
even when I configure 10 squidGuard processes.

I have set up the following to run from a weekly cron job as root to
download new blacklists and rebuild the database just once a week (watch
out for the line wraps):



# This is Brian's blacklist update script

cd ~

rm -f bl.tar.gz

wget -O bl.tar.gz http://ftp.tdcnorge.no/pub/www/proxy/squidGuard/contrib/blacklists.tar.gz

tar --ungzip --extract --exclude=*.diff --directory=/var/lib/squidGuard/db --verbose -f bl.tar.gz

rm -f bl.tar.gz

wget -O bl.tar.gz ftp://ftp.univ-tlse1.fr/pub/reseau/cache/squidguard_contrib/blacklists.tar.gz

tar --ungzip --extract --exclude=*.diff --directory=/var/lib/squidGuard/db --verbose -f bl.tar.gz

rm -f bl.tar.gz

chown -R squid:nogroup /var/lib/squidGuard/db

/usr/sbin/squidGuard -C all

chown -R squid:nogroup /var/lib/squidGuard/db

/usr/sbin/squid -k reconfigure

#Script Ends
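For reference, a sketch of the matching crontab entry, assuming the script is
saved as /usr/local/sbin/update-blacklists.sh (a made-up path):

0 4 * * 0 /usr/local/sbin/update-blacklists.sh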

The squid.conf file seems to be okay exactly as it was. The squidGuard 
processes seem to know to use the databases rather than the text files.



Does this look reasonable?

--

Brian Gregory.
[EMAIL PROTECTED]

Computer Room Volunteer.
Therapy Centre.
Prospect Park Hospital.