Re: [squid-users] 3 ISPs: Routing problem

2009-05-18 Thread jeff donovan


On May 18, 2009, at 11:17 AM, RSCL Mumbai wrote:

On Sun, May 17, 2009 at 11:37 AM, Amos Jeffries squ...@treenet.co.nz wrote:

RSCL Mumbai wrote:


On Fri, May 15, 2009 at 10:38 AM, Amos Jeffries squ...@treenet.co.nz wrote:


RSCL Mumbai wrote:


On Thu, May 14, 2009 at 4:33 PM, Jeff Pang pa...@arcor.de wrote:


RSCL Mumbai:

What I would like to configure is specific gateways (G/ws) for
specific clients:

192.168.1.100 to use G/w 192.168.1.1
192.168.1.101 to use G/w 192.168.1.1
192.168.1.102 to use G/w 192.168.1.2
192.168.1.103 to use G/w 192.168.1.2
192.168.1.104 to use G/w 192.168.1.2
192.168.1.105 to use G/w 192.168.1.3
192.168.1.106 to use G/w 192.168.1.3




I just found out that squid is removing the marking on the packets.
This is what I am doing:

(1) I mark packets coming from 10.0.0.120 to port 80 with mark 1
(mark 1 corresponds to ISP 1)
(2) I added a route rule which says that all packets having mark 1
will be routed through ISP 1

But the packets are not routed via ISP 1.

When I disable the squid redirection rule in iptables (port 80
redirection to 3128, squid), the markings are maintained and packets
route via ISP 1.

Now the big question is: why is squid removing the marking?


Because the packets STOP at their destination software.
Normally the destination is a web server. When you NAT (redirect) a
packet to Squid, it STOPS there and gets read by Squid instead of
passing on to the web server.

If Squid needs to fetch the requested HTTP object from the network, a
brand new TCP connection will be created, only from Squid to the web
server.



And how can this be prevented ??


By not intercepting packets. As you already noticed.


Squid offers alternatives: tcp_outgoing_address has already been
mentioned. tcp_outgoing_tos is an alternative that allows you to mark
packets leaving Squid.
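These two directives can be combined with policy routing so the
per-client split survives interception. A minimal sketch, assuming a
Linux Squid box with iproute2; the client addresses, TOS value, mark,
routing table name "isp1", and gateway below are illustrative, not
from this thread:

# squid.conf: tag traffic from the first client group with TOS 0x20
acl grp1 src 192.168.1.100 192.168.1.101
tcp_outgoing_tos 0x20 grp1

# shell, on the Squid box: turn that TOS back into a routing mark on
# Squid's own outgoing connections, then route the mark via ISP 1
iptables -t mangle -A OUTPUT -p tcp -m tos --tos 0x20 -j MARK --set-mark 1
ip rule add fwmark 1 table isp1
ip route add default via 192.168.1.1 table isp1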


I tried tcp_outgoing_address by adding the following to squid.conf:


acl ip1 myip 10.0.0.120
acl ip2 myip 10.0.0.121
acl ip3 myip 10.0.0.122
tcp_outgoing_address 10.0.0.120 ip1
tcp_outgoing_address 10.0.0.121 ip2
tcp_outgoing_address 10.0.0.122 ip3

Restarted squid, but no help.

Please help me get the route rules to work.

Simple requirement:
If packets come from src=10.0.0.120, forward them via ISP-1
If packets come from src=10.0.0.121, forward them via ISP-2
If packets come from src=10.0.0.122, forward them via ISP-3
And so forth.

Thx in advance.
Vai


To prevent the first (default) one being used, you may need to do:

 tcp_outgoing_address 10.0.0.120 ip1 !ip2 !ip3
 tcp_outgoing_address 10.0.0.121 ip2 !ip1 !ip3
 tcp_outgoing_address 10.0.0.122 ip3 !ip1 !ip2



I do not have 5 real interfaces for 5 ISPs.
And I believe virtual interfaces will not work in this scenario.

Any other option pls ??

Thx & regards,
Vai



hello Vai,
look to your routers to make this decision. You can hand out default
gateway info to your clients or routers.

If you don't have 3 squid boxes (my recommendation), then i would try
3 NICs; if that's not available, then you need 3 VLANs.
-j


Re: [squid-users] connecting to gmail via imap over squid

2009-05-08 Thread jeff donovan


On May 7, 2009, at 10:24 AM, indyrowe wrote:



Of course there are reasons to proxy IMAP.

I was using gmail via a squid proxy. I was port forwarding https
traffic on a desktop to my squid proxy, because gmail was blocked by
security.

That was working well, but now google has changed something, because
it doesn't work any more. However all my other sites do, including
hotmail via https.

Not sure if something like this is what you are trying to set up or
something else.


make sure you have ports 993 (IMAPS) and 995 (POP3S) open.
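If the clients reach those services through Squid at all, it is via
the CONNECT method, so the ports also have to be permitted in
squid.conf. A minimal sketch, assuming the stock SSL_ports/Safe_ports
ACL layout:

acl SSL_ports port 993   # IMAPS
acl SSL_ports port 995   # POP3S
acl Safe_ports port 993
acl Safe_ports port 995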


[squid-users] squid ldap auth osx

2009-04-22 Thread jeff donovan

Greetings

working on creating a simple web access cache with authentication. I
want to use my current LDAP directory to get login info.


running squid 3.0 stable 13

so close: the client's browser pops up and asks for credentials. the
username and pass are given, and the browser prompts again, never
giving access.

the access logs tell me nothing:
TCP_DENIED/407 2522 GET http://livepage.apple.com/ joeusername NONE/- text/html




auth_param basic program /usr/local/squid/libexec/squid_ldap_auth -b dc=host,dc=my,dc=domain,dc=com host.my.domain.com

auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl ldapauth proxy_auth REQUIRED
acl localnet src 10.135.0.0/16  # noc
#
#
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow ldapauth
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access allow localnet
http_access deny all
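An endless 407 loop like this usually means the helper itself is
rejecting the credentials, and the helper can be tested outside
Squid: run it by hand, type a login and a password separated by a
space, and it answers OK or ERR. A sketch, reusing the base DN and
host from the post; the -f search filter is an assumption that the
directory keys users on the uid attribute:

/usr/local/squid/libexec/squid_ldap_auth \
    -b "dc=host,dc=my,dc=domain,dc=com" \
    -f "uid=%s" \
    host.my.domain.com
# then type:  joeusername secretpass
# OK = credentials accepted, ERR = rejected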



Re: [squid-users] Cannot access site - TCP_MISS/504

2009-03-31 Thread jeff donovan


On Mar 31, 2009, at 11:42 AM, Clemente Aguiar wrote:


Setup: Squid-3.0-STABLE13 transparent proxy using WCCPv2

When I try to access the site the URL http://sandbox.tagle.it I get a
time out message:

---
The following error was encountered while trying to retrieve the URL:
http://sandbox.tagle.it/

   Connection to 72.27.230.81 failed.


The system returned: (110) Connection timed out

The remote host or network may be down. Please try the request again.
---

and I get the following entry in the access.log:

---
1238510098.006 179565 89.109.64.202 TCP_MISS/504 2413 GET
http://sandbox.tagle.it/ - DIRECT/72.27.230.81 text/html
---

If I bypass the squid box altogether it works, i.e. I get asked for
authentication.


What could be the problem?


turn squid off and tell me if the server squid is running on can
access http://sandbox.tagle.it
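For example (assuming curl or telnet is installed on the Squid
server), something like:

# from a shell on the squid box, bypassing squid itself:
curl -v http://sandbox.tagle.it/
# or test the raw TCP path to the address from the log entry:
telnet 72.27.230.81 80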


Re: [squid-users] squid using more than one IP

2009-02-11 Thread jeff donovan


On Feb 11, 2009, at 3:42 AM, Amos Jeffries wrote:


jeff donovan wrote:

Greetings
currently i have a transparent squid. i was wondering if there is a
way to have squid use more than 1 IP address, or at least cycle
through a pool of addresses when requesting a site.
I have a small block of IPs allocated, behind a firewall. can i have
a single squid round-robin somehow, or is that an OS interface
setting?

any ideas ?
-jeff


An OS interface setting.

Squid only has the capability to limit the OS's choices.

Amos


thanks for the reply. thats what i was thinking, but you never know;
there may be some hidden-feature magic bullet I'm unaware of.


thanks again.
-j
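For later readers: one OS-level way to do this on a Linux box is
source NAT over an address range, where netfilter picks an address
from the range per connection. A sketch with illustrative addresses
and interface name, not a tested recipe for this setup:

iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 80 \
    -j SNAT --to-source 203.0.113.10-203.0.113.14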


[squid-users] squid using more than one IP

2009-02-10 Thread jeff donovan

Greetings

currently i have a transparent squid. i was wondering if there is a
way to have squid use more than 1 IP address, or at least cycle
through a pool of addresses when requesting a site.
I have a small block of IPs allocated, behind a firewall. can i have
a single squid round-robin somehow, or is that an OS interface
setting?


any ideas ?

-jeff


[squid-users] Squid and Google Sorry Message

2009-01-16 Thread jeff donovan

Greetings

I am running two squid boxes as content filters for a number of
schools. Google has recently changed, and we are now getting a
"Sorry, you look like a botnet" page:

We're sorry...
... but your query looks similar to automated requests from a
computer virus or spyware application. To protect our users, we can't
process your request right now. We'll restore your access as quickly
as possible, so try again soon. In the meantime, if you suspect that
your computer or network has been infected, you might want to run a
virus checker or spyware remover to make sure that your systems are
free of viruses and other spurious software.


from what I can tell: too many searches from one IP address. I have
been running this way for years.

is there anything I can do to have squid give out more than one IP
address? or is there some tweak that I can perform to make this error
go away?



thanks

-jeff


Re: [squid-users] round robin question

2008-09-25 Thread jeff donovan


On Sep 24, 2008, at 11:38 AM, Kinkie wrote:

On Wed, Sep 24, 2008 at 5:16 PM, jeff donovan [EMAIL PROTECTED] wrote:

greetings

How could I go about load balancing two or more transparent proxy
squid servers? No caching involved. This is strictly for access.

i thought about dns round robin, but that didn't make sense since i
am forwarding all connections to a single interface.

any insight would be helpful


So both instances are running on the same (bridging?) system?
Can you give some more details?


I have 12 subnets coming off router 1 and 12 coming off router 2;
each set passes through a transparent squid.

I want to, for better or worse, "mux" these two and add a 3rd box:

combine the 24 subnets --- ( 3 squids ) ---

-j


[squid-users] round robin question

2008-09-24 Thread jeff donovan

greetings

How could I go about load balancing two or more transparent proxy
squid servers?

No caching involved. This is strictly for access.

i thought about dns round robin, but that didn't make sense since i
am forwarding all connections to a single interface.


any insight would be helpful

-j



Re: [squid-users] stop anonymous browsing

2008-04-11 Thread jeff donovan


On Apr 10, 2008, at 11:51 PM, ekul taylor wrote:


In my squid installation I use an iptables-based firewall to stop all
traffic from the end-user subnets from flowing to the internet.
Servers are able to communicate to update things like NTP and DNS,
but clients get their NTP and DNS from internal sources only. Only
the squid server is allowed to communicate with the internet, and
since it has authentication (as has been suggested by others), no one
who doesn't have a username and password can browse the internet. It
has the added bonus of limiting the internet traffic to things that
are truly necessary, since applications can't phone home (especially
nice for things like trojans) and things like DNS queries are cached.
Since only squid can communicate with the internet, changing proxy
servers or trying to tunnel out has no effect, since the traffic is
simply denied.

Luke Taylor


Hi Luke,
sorry for jumping threads.

i have the same setup you have, however not the authentication. how
does the authentication stop a client from accessing
easyunblocker.com, or the various dns name changes that happen every
day?

currently i am running squidGuard to handle blocks, with regex and
blacklists. regex works pretty well but has holes.

keeping current seems to be the biggest pain in the butt.
-j





On Thu, Apr 10, 2008 at 2:42 AM, Anil Saini [EMAIL PROTECTED] wrote:



how do I stop anonymous browsing?

we have a huge collection of web proxies that bypass the ACL blocked
list. Is there any solution to block them all without making a list
of them?









Re: [squid-users] upgrade from 2.5 to 2.6 to add NTLM

2008-02-07 Thread jeff donovan


On Feb 7, 2008, at 12:53 PM, Leonardo Rodrigues Magalhães wrote:




Dave Holland wrote:


On Thu, Feb 07, 2008 at 10:29:14AM -0500, jeff donovan wrote:

Are the NTLM auth modules that come with squid used just for
accessing the squid cache?


As I understand it: yes.



I have done some tests with squid 2.6/3.0 recently, and it seems that
sites with NTLM auth DO work fine through squid 2.6/3.0. I'm still
running squid 2.5 on production boxes, and sites with NTLM auth do
NOT work through squid 2.5.

I'm preparing some upgrades here to allow sites with NTLM auth to
work properly, as my tests confirmed.

Of course, i'm also thinking about the possibility of skipping 2.6
and going straight to squid 3.0 stable 1!


okay, that sounds promising. You're saying (i'm repeating this so my
fuzzy brain is clear) that you can access a web site that uses NTLM
to protect restricted web content? right now 2.5 does not work.

Re: [squid-users] upgrade from 2.5 to 2.6 to add NTLM

2008-02-07 Thread jeff donovan


On Feb 7, 2008, at 1:30 PM, Leonardo Rodrigues Magalhães wrote:




jeff donovan wrote:


okay, that sounds promising. You're saying (i'm repeating this so my
fuzzy brain is clear) that you can access a web site that uses NTLM
to protect restricted web content? right now 2.5 does not work.


Yes ... i can confirm that based on my tests here. sites with NTLM
auth do NOT work through squid 2.5, but seem to work fine through 2.6
and 3.0, according to my tests here. My production boxes are still
2.5 and, as i have VERY FEW problems with NTLM sites, i haven't
upgraded them yet.

I'm not talking about user authentication through NTLM; 2.5 can do
that well. I'm talking about SITE NTLM authentication passing through
squid 2.6 and 3.0.


thank you very much leonardo.

now :) would you be willing to share your config? is there anything
special that you had to do on your test box?


-j

[squid-users] upgrade from 2.5 to 2.6 to add NTLM

2008-02-07 Thread jeff donovan

Greetings

i have been running into several issues with my Squid proxy (running
transparent):

Squid Cache: Version 2.5.STABLE7
configure options:  --host=PPC --enable-async-io --enable-snmp --enable-underscores

The problem is accessing Windows IIS 6.0 web servers that use NTLM
authentication. the authentication basically fails, refreshing the
page. Some sites drop to basic auth and the users can continue, but
others require the full verification. If I bypass squid, the users
can authenticate.

I have been reading the release notes and some docs for Squid 2.6.

Are the NTLM auth modules that come with squid used just for
accessing the squid cache? or can these modules help my users'
connections to remote IIS servers?

will 2.6 help in my case?

TIA

-jeff


Re: [squid-users] upgrade from 2.5 to 2.6 to add NTLM

2008-02-07 Thread jeff donovan


On Feb 7, 2008, at 11:57 AM, Dave Holland wrote:


On Thu, Feb 07, 2008 at 10:29:14AM -0500, jeff donovan wrote:
Are the NTLM auth modules that come with squid used just for
accessing the squid cache?


As I understand it: yes.

See:
http://wiki.squid-cache.org/SquidFaq/CompleteFaq#head-663844d925e559109734bd02d6dd049a861197e0

which says:
Windows NT Challenge/Response authentication requires implicit
end-to-end state and will not work through a proxy server.

I ran into this last week, and asked the IIS admin to switch to basic
authentication + SSL instead -- which does work through Squid.

Dave


okay, thats what i thought, but i was hoping there was a light at the
end of the tunnel.


-jeff


Re: [squid-users] upgrade from 2.5 to 2.6 to add NTLM

2008-02-07 Thread jeff donovan

Thank you all who replied,

I'll post more after i recompile.

-jeff


Re: [squid-users] Proxy and Internal Addresses

2008-01-31 Thread jeff donovan

I use a transparent proxy and it only shows my external squid address.
-j
On Jan 31, 2008, at 10:46 AM, Juan Pablo Calomino wrote:


Hello,

I've been searching about this:
if you're browsing the internet through a proxy and you enter (e.g.)
this URL: www.cual-es-mi-ip.net, it will show you your public IP and
your internal IP address.

I can't find a way to hide the fact that i'm going through a proxy.
Do you know a way to fix this?

Thank you very much,
Juan Pablo.
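On the header side, Squid can at least stop advertising the client
address and itself. A sketch, using the squid 2.5/2.6 directive
names:

forwarded_for off            # don't put the client IP in X-Forwarded-For
header_access Via deny all   # drop the Via header Squid normally adds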








Re: [squid-users] Google Images and Blacklists

2008-01-10 Thread jeff donovan


On Jan 9, 2008, at 4:57 PM, Cailen Pratt wrote:


Hi guys,

I have posted this a while ago and haven't had much luck with
responses.

I'm wondering if there is any way to filter images.google.com.au
using my blacklist. I have an extensive blacklist which works great;
however if I go to Google Images, I can search images that belong to
domains in my blacklist. I don't want to block images.google.com.au,
because I would like users to still have access to this
functionality. I'm running Squid Version 2.6.STABLE5.

My squid.conf looks like this:

acl blacklist dstdomain /etc/squid/domains/blacklist
http_access deny all Blacklist

I have tried using url_regex, which does work, however not with the
size of my blacklist: my blacklist is 16.3MB in size, and therefore
this cannot be used.

Is there any other way anyone can think of to get around this?

If not, is it possible to stop users from changing the Google
preference under "SafeSearch Filtering" to "Do not filter my search
results"?

Thanks in advance.


use either SquidGuard or DansGuardian: you redirect each request and
check it against a blacklist, pass or fail.

-j


[squid-users] Transparent IPFW bypass for one host

2008-01-04 Thread jeff donovan

greetings

I'm having a syntax brain fart. I have a transparent proxy and i need
one host to bypass the redirect to squid.

what is the correct syntax for IPFW?

here is what i have:

ipfw add 2 fwd 127.0.0.1,3128 tcp from any to any in recv en1

i need to add a rule that allows host 192.168.1.1 not to have its
port 80 traffic redirected to squid.
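A minimal sketch of the usual answer: an allow rule numbered before
the fwd rule accepts matching packets and stops rule processing, so
that host's web traffic never reaches the redirect (rule numbers and
interface follow the post):

ipfw add 1 allow tcp from 192.168.1.1 to any 80 in recv en1
ipfw add 2 fwd 127.0.0.1,3128 tcp from any to any in recv en1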


TIA




Re: [squid-users] Domain URL blacklists

2007-11-01 Thread jeff donovan


On Nov 1, 2007, at 10:23 AM, Paul Cocker wrote:

My bad; in fact from further analysis it seems that the domain files
are the mysite.com listings and the URLs are things like
mysite.com/something/?somethingelse.htm. Does the latter have any
relevance or use within Squid?

Paul Cocker
IT Systems Administrator

-Original Message-
From: Paul Cocker [mailto:[EMAIL PROTECTED]
Sent: 01 November 2007 13:23
To: squid-users@squid-cache.org
Subject: [squid-users] Domain  URL blacklists

I am using elements of Shalla's blacklists to block content. However,
they ship in two files, domains and URLs, the former being IP
addresses and the latter URLs. Since our squid proxy is running on
Windows, I would need to experiment with cygwin to get SquidGuard
running, and that isn't something I have time for at the moment, so I
am trying to plug in what I can without crippling performance (and
what is the likely performance impact?).

Do I call both files via acl {aclname} dstdomain {filepath}, or
should IP lists be called using a different command?

Paul Cocker
IT Systems Administrator



Hi Paul, are you using DansGuardian or SquidGuard? or are you trying
to do this with just squid?


-jeff
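For reference, plain Squid can load both lists from disk. A sketch,
with hypothetical file paths; note that the urls file has to be
treated as regex patterns, which is far more expensive than dstdomain
on a large list:

acl bl_domains dstdomain "/etc/squid/blacklists/domains"
acl bl_urls url_regex "/etc/squid/blacklists/urls"
http_access deny bl_domains
http_access deny bl_urls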


[squid-users] How can i block this type of script

2007-08-31 Thread jeff donovan

greetings

i am using squidguard for content filtering.

How can i block this type of script?

http://www.softworldpro.com/demos/proxy/

it's easy to block the url, but when the script is executed there is
nothing in the url that i can key on.

here is the regex I am using:

#Block CGIProxy, Poxy, PHProxy and other Web-based proxies
(cecid.php|nph-webpr|nph-pro|/dmirror|cgiproxy|phpwebproxy|nph-proxy.cgi|__new_url)




[squid-users] Time stamp in access.log

2007-05-01 Thread jeff donovan

greetings

how can i get an accurate time stamp in my access.log? right now it
looks like this:

1178025553.639    175 192.207.19.129 TCP_MISS/200 11249 GET http:blah blah

how can i decode that stamp? or can i change it to something human :)

-jeff 
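The stamp is seconds since the Unix epoch. A common way to read it,
sketched, plus the squid.conf alternative; neither is from the thread
itself:

# rewrite the leading epoch seconds as local time while paging the log
perl -pe 's/^(\d+)\.\d+/localtime($1)/e' access.log | less
# or have squid 2.5 write Apache-style logs with human-readable dates:
# emulate_httpd_log on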


Re: [squid-users] Blacklist for squirm

2005-03-08 Thread Jeff Donovan
On Mar 7, 2005, at 11:32 PM, Awie wrote:

Nevermind - I was able to download Berkeley DB v2.7.7 from SleepyCat
and squidGuard compiles now.
Bryan

Bryan,
squidguard 1.2.0 works better with Berkeley DB v3.2.9; you may be
able to use 2.7.7 by loading blacklists into memory for each
redirector, which takes forever. Using 3.2.9 will allow you much
better performance, using pre-built databases for the blacklists.

-j
Jeff,
You said that DB 3.2.9 is better than 2.7.7 (and it should be). How
about using the latest version, Berkeley DB v4.3.27?

Thx & rgds,
Awie
I'm not sure. i was troubleshooting a problem a while back when i was
running 2.7.7: SquidGuard 1.2 wouldn't read the pre-built databases.
then i found an obscure web site that listed 3.2.9:
http://www.maynidea.com/squidguard/step-by-step.html

So once i installed 3.2.9 and the 2 patches it worked better than
ever. i have not tried 4.x.x.

--j
---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan


[squid-users] Store dir info

2005-03-08 Thread Jeff Donovan
greetings
looking at my store directory makes me think I should add another
cache directory or increase the size.

your thoughts?
Store Directory Statistics:
Store Entries  : 4031124
Maximum Swap Size  : 67107840 KB
Current Store Swap Size: 60519440 KB
Current Capacity   : 90% used, 10% free
Store Directory #0 (ufs): /Volumes/cache1/cache
FS Block Size 4096 Bytes
First level subdirectories: 16
Second level subdirectories: 256
Maximum Size: 67107840 KB
Current Size: 60519440 KB
Percent Used: 90.18%
Filemap bits in use: 4027925 of 4194304 (96%)
Filesystem Space in use: 62736716/244986264 KB (26%)
Filesystem Inodes in use: 15684177/61246564 (26%)
Flags: SELECTED
Removal policy: lru
LRU reference age: 21.84 days
-j
---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan
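The statistics above correspond to a cache_dir line like the first
one below (67107840 KB = 65535 MB, 16 first-level and 256
second-level directories). Growing capacity means raising the MB
figure or adding a second cache_dir, which Squid then balances
across; /Volumes/cache2 is a hypothetical second volume:

# layout implied by the report:
cache_dir ufs /Volumes/cache1/cache 65535 16 256
# a possible second store on another volume:
cache_dir ufs /Volumes/cache2/cache 65535 16 256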


Re: [squid-users] Blacklist for squirm

2005-03-07 Thread Jeff Donovan
On Mar 7, 2005, at 12:21 PM, Bryan Miles wrote:

Nevermind - I was able to download Berkeley DB v2.7.7 from SleepyCat
and squidGuard compiles now.

Bryan

Bryan,
squidguard 1.2.0 works better with Berkeley DB v3.2.9; you may be
able to use 2.7.7 by loading blacklists into memory for each
redirector, which takes forever. Using 3.2.9 will allow you much
better performance, using pre-built databases for the blacklists.

-j
---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan


Re: [squid-users] How to serve directory index files...?

2005-02-17 Thread Jeff Donovan
On Feb 17, 2005, at 3:46 PM, Peter Yohe wrote:
Hello,
When Squid is in offline mode, how does it know what the default
document in a site or directory is, if a client does not provide the
name of the file?

If the client has not requested information, why would squid need to
know the default document (assuming default.html) of any site or
directory? No request = Squid does nothing.

what are you trying to do with squid? post your squid.conf and we may
better answer your questions.
Thanks,
Peter Yohe
The WiderNet Project
---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan


Re: [squid-users] no filtering with DB files

2005-02-04 Thread Jeff Donovan
On Feb 4, 2005, at 3:14 AM, Elsen Marc wrote:

Greetings
squidguard -C porn/domain
squidguard -C porn/urls
builds domain.db
builds urls.db
squidguard.conf
logdir /var/log/squidguard
dbhome /usr/local/blacklists
dest porn {
 domainlist porn/domains
 urllist porn/urls
 redirect   http://10.0.1.3/index.html
 logfileblocked.log
}
dest pornexp {
 expressionlist adult/expressions
 redirect   http://10.0.1.3/exp.html
 logfileexpblocked.log
}
acl {
 default {
pass !porn !pornexp all
 }
}
squid.conf
(snip)
redirect_program /usr/local/bin/squidGuard -c
/usr/local/squid/etc/squidguard.conf
redirect_children 32
start squid; everything is fine, no errors.
No filtering except my expressions list: URLs and domains do not get
filtered.
Take them out and load the lists into memory from text files and it
works fine; I can block a million sites, except it takes forever to
load 32 children.
am i doing something wrong?

Check squidGuard.log on 'squid -k reconfigure' (e.g.).
Check whether squidguard finds and loads the db file(s).
You can also create them with:
% squidGuard -C all
Check whether they are created in the dir where squidguard expects
them.
here is what i used:
/usr/local/bin/squidguard -c /usr/local/squid/squidguard.conf -C porn/domains

and in the blacklists dir
-rw-r--r--   1 squid  squid   8909215 31 Jan 20:10 domains
-rw-r--r--   1 squid  squid  24142848  3 Feb 11:25 domains.db
-rw-r--r--   1 squid  squid   788 31 Aug 01:46 expressions
-rw-r--r--   1 squid  squid   5237030 31 Aug 01:46 urls
-rw-r--r--   1 squid  squid  11108352  3 Feb 11:28 urls.db
the expressions work but not the db's. I'm going to try using
Berkeley DB 3.2.9. any suggestions?

tnx
---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan


Re: [squid-users] no filtering with DB files

2005-02-04 Thread Jeff Donovan
On Feb 4, 2005, at 12:07 PM, Kyle Davenport wrote:
I can confirm it doesn't work for me either. Normally I use an
in-memory db, so I don't know why pre-built wouldn't work. This was
in my log file:

2005-02-04 10:51:28 [22067] init domainlist /var/lib/squidGuard/domains
2005-02-04 10:51:28 [22067] loading dbfile /var/lib/squidGuard/domains.db
2005-02-04 10:51:28 [22067] domainlist empty, removed from memory ?
(thats it. It doesn't read the 2.7.7 db format.)
2005-02-04 10:51:28 [22067] init urllist /var/lib/squidGuard/urls
2005-02-04 10:51:28 [22067] squidGuard 1.2.0 started (1107535888.635)
2005-02-04 10:51:28 [22067] squidGuard ready for requests (1107535888.640)

domains.db is not empty and apparently contains the contents of
domains.

I've been using squidGuard-1.2.0-4 from src.rpm; the dependency
listed was "db3x". What's the x? I have db3-d.1.17. Docs say db2 is
required, and I have 2.7.7-3 of that.

Kyle
I got it.
You need to use Berkeley DB 3.2.9. i had to recompile squidguard and
it works great.
the INSTALL file for squidguard is a little off :)

reference this url;
http://www.maynidea.com/squidguard/step-by-step.html
Thanks Rick
--jeff
---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan


[squid-users] no filtering with DB files

2005-02-03 Thread Jeff Donovan
Greetings
squidguard -C porn/domain
squidguard -C porn/urls
builds domain.db
builds urls.db
squidguard.conf
logdir /var/log/squidguard
dbhome /usr/local/blacklists
dest porn {
domainlist  porn/domains
urllist  porn/urls
redirecthttp://10.0.1.3/index.html
logfile blocked.log
}
dest pornexp {
expressionlist  adult/expressions
redirecthttp://10.0.1.3/exp.html
logfile expblocked.log
}
acl {
default {
pass !porn !pornexp all
}
}
squid.conf
(snip)
redirect_program /usr/local/bin/squidGuard -c 
/usr/local/squid/etc/squidguard.conf
redirect_children 32

start squid; everything is fine, no errors.
No filtering except my expressions list: URLs and domains do not get
filtered.
Take them out and load the lists into memory from text files and it
works fine; I can block a million sites, except it takes forever to
load 32 children.

am i doing something wrong?
--jeff
---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan


[squid-users] rotate log hang OSX

2005-01-31 Thread Jeff Donovan
greetings
I have been running Squid 2.5Stable7 w/SquidGuard on an OSX server 
10.3.7 as a transparent proxy.
System works very well and is very stable, until i execute the squid
-k rotate command. the logs rotate, then i get a full system freeze:
HTTP packets stop passing, all other packets continue to pass.

I have read about linux problems when compiled with async-io. not
sure if the darwin build has the same problem, but i did compile with
Async-IO.
i have tried clearing my redirect statement, then rotating; it hangs.
restart system, restart squid ('squid -sCd1'), then rotate; it
actually rotates the log, then hangs.

tried manual rotation of the logs via cron, varying which step comes
first; it still hangs. I suspect i may have to recompile.

any suggestions?
thanks
---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan


Re: [squid-users] request for pages fails the first time::solved::

2004-11-05 Thread Jeff Donovan
problem is with my internal DNS server not squid.
On Nov 3, 2004, at 5:33 PM, marc elsen wrote:
greetings
I have been having a bit of a problem with accessing pages quickly:
a request for a web site fails the first time it is pulled; the
second time it comes in right away. This is consistent everywhere on
my network except outside squid and my firewall; outside access is
fine.

I'm not sure if squid is being slow or if it's DNS.
I am running squid 2.5 stable 6 with squidguard, as a transparent
proxy.

client req pt 80--- en0 {{ squid }} en1 --- [ firewall ] isp

the squid box is working its little butt off, but i need to find out
why we have to "double pump" to get a web site.
any ideas on where to look?
i have no idea what to look for in the cache manager.

any insight would be helpful
What is in access.log for the first failed request?
Anything else and/or more info in cache.log?
M.

---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan


[squid-users] request for pages fails the first time

2004-11-03 Thread Jeff Donovan
greetings
I have been having a bit of a problem with accessing pages quickly:
a request for a web site fails the first time it is pulled; the
second time it comes in right away. This is consistent everywhere on
my network except outside squid and my firewall; outside access is
fine.

I'm not sure if squid is being slow or if it's DNS.
I am running squid 2.5 stable 6 with squidguard, as a transparent
proxy.
client req pt 80--- en0 {{ squid }} en1 --- [ firewall ] isp
the squid box is working its little butt off, but i need to find out
why we have to "double pump" to get a web site.
any ideas on where to look?
i have no idea what to look for in the cache manager.

any insight would be helpful
--j
---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan


[squid-users] refresh my cache

2004-09-30 Thread Jeff Donovan
greetings
I am having an issue where some browsers can access a site and others
can't. All browsers can access the site outside squid.

how can i force squid to refresh its cache? or just refresh the
contents for this one url?

TIA
--j
---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan
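One way to evict a single URL is the PURGE method, which must be
explicitly enabled. A sketch, assuming the purge is issued from the
cache box itself and that the stock localhost acl exists:

# squid.conf
acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE

# then, per URL:
squidclient -m PURGE http://www.example.com/stale-page.html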


Re: [squid-users] squid - k rotate problem

2004-08-25 Thread Jeff Donovan
On Aug 25, 2004, at 1:40 AM, Elsen Marc wrote:

greetings
i have squid running as a transparent proxy with squidguard as a
content filter. I recently increased the filter list size and my
machine is choking when i rotate the logs.
i have a cron job execute squid -k rotate at night. the CPU pegs out
at 100% and it takes forever for squidguard to process the
blacklists.
without intervention the machine will choke itself to death and
restart.
with intervention i have been able to flush my IPFW redirect, load
squid, wait 20 minutes until the CPU is done processing the
blacklist, then add the redirect statements back in. the whole
process takes about 30 minutes start to finish.
is there a cleaner way to rotate? the 30 min down time seems too
long.
running squid-2.5 Stable6 & squidguard 1.2

Make sure to use the pre-formatted db files when using SquidGuard;
otherwise squidGuard has to create this db format in memory when the
squidguard processes are restarted, which also happens as a result of
'squid -k rotate'. This can take considerable overhead and delay.
Create db files at all times, and whenever blacklists are updated,
using:

% squidGuard -C all
OMG... thank you for removing my head from my backside. I totally
forgot about that process.
Thanks

-j
---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan


[squid-users] squid - k rotate problem

2004-08-24 Thread Jeff Donovan
greetings
i have squid running as a transparent proxy with squidguard as a
content filter. I recently increased the filter list size and my
machine is choking when i rotate the logs.

i have a cron job execute squid -k rotate at night. the CPU pegs out
at 100% and it takes forever for squidguard to process the
blacklists.
without intervention the machine will choke itself to death and
restart.

with intervention i have been able to flush my IPFW redirect, load
squid, wait 20 minutes until the CPU is done processing the
blacklist, then add the redirect statements back in. the whole
process takes about 30 minutes start to finish.

is there a cleaner way to rotate? the 30 min down time seems too
long.
running squid-2.5 Stable6 & squidguard 1.2
---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan


Re: [squid-users] Slow browsing speed

2004-08-10 Thread Jeff Donovan
Greetings
I am going to assume OSX 10.3.4 on the G5.
Are you using this as a transparent proxy?
I am interested in your setup. I have been using G4/G5's for my
content filter for almost 2 years, using squid and squidguard.
I just recently moved to a dual G5 (like you) and found some
differences between 10.2.x and 10.3 ipfw that needed some fine
tuning.

Did you rebuild squid 2.5 stable 6 on 10.3.4?
--jeff
On Aug 10, 2004, at 8:57 AM, Derrick Seymour wrote:
I recently moved squid and dansguardian to a new server. Everything
was perfect before; the speed was good and the filtering was perfect.
I moved it to my new server and now the speed went way downhill.

Here are some specs:

Previous system:
Apple eMac: 128MB RAM, 700MHz G4 processor, 10/100 NIC
Squid 2 & Dansguardian 2.6.1

My new server:
Apple Xserve G5: 1GB RAM, dual 2.0GHz G5 processors, 10/100/1000 NIC
(but running at 100)
Squid 2 & Dansguardian 2.7.7

I can't understand why it would be running slower; I checked my
config and it looks to be the same as before.

I used DGComplete as my install.
Any suggestions would be great.
Thanks

Derrick Seymour
Administrative Services
Northeastern Regional Information Center
Capital Region BOCES


---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan


[squid-users] load balancing

2004-08-02 Thread Jeff Donovan
greetings
I have a new squid server i want to put in parallel with my existing
system. i read most of the docs and FAQs, but still have not come up
with a good understanding for my scenario.

I'm running a transparent cache with squidGuard. My problem seems to
be: how do I split my traffic? or how do i create a failover if one
squid server becomes too busy?
here is my topology:

--[ L3 def route]---[ squid1]--
                 ---[squid2]--

right now my problem is that I have only one default route option
coming from my layer 3 device; therefore I cannot split my subnets
and force half to one interface and the other half to another.

I do have multiple cards in each squid box.
DNS round robin won't work in this setup because i am forcing all
unknown traffic to squid1 (correct me if I'm wrong).

any advice?
---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan


Re: [squid-users] load balancing

2004-08-02 Thread Jeff Donovan
On Aug 2, 2004, at 9:25 AM, Henrik Nordstrom wrote:
On Mon, 2 Aug 2004, Jeff Donovan wrote:
I'm running a transparent cache with squid guard. My problem seems to
be how do I split my traffic?

This you do in your router, in the case of transparently intercepting
proxies. If you are not using a TCP interception device capable of
tracking individual connections, then this is most easily done by
splitting the destination IP address space among the caches. CARP
does so automatically for you.

or how do i create a failover if one squid server becomes too busy?

Best done by an external load balancer. Linux Virtual Server is a
good free one; high-end routers & switches usually also have
reasonable load balancing functions built in.

would this be placed on a linux server in front of the two caches, or
does it run on the same device as squid?

Unfortunately my layer 3 switch does not allow for multiple default
routes or even a policy route. Otherwise i would have just routed my
traffic to the new interface.
-j
---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan
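For the archive: the CARP arrangement Henrik mentions is configured
on a front-end Squid by listing the caches as parents, and the URL
space is then hashed across them. A sketch with hypothetical
hostnames; in Squid 2.6 and later the per-peer option is 'carp',
while 2.5 builds used --enable-carp and carp-load-factor= instead:

cache_peer squid1.example.com parent 3128 0 carp
cache_peer squid2.example.com parent 3128 0 carp
never_direct allow all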



Re: [squid-users] porn filter

2004-08-02 Thread Jeff Donovan
go to http://www.squidguard.org
it's easy
--j
On Aug 2, 2004, at 2:13 PM, [EMAIL PROTECTED] wrote:
Hi, i wish to set up a porn filter with squid, and i wish to know if
anyone can help.

Thanks a lot.


---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan


[squid-users] cache drive size question

2004-07-26 Thread Jeff Donovan
what is the max hard drive size squid can use?
I've been using 16 gigs of a 120 gig drive with great success. But i
was wondering if I could get better performance by utilizing the rest
of the drive. Would partitioning the drive into multiple pieces at
squid's max size limit be the way to go?

any insight would be helpful
TIA
--j

---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan


Re: [squid-users] Running out of file descriptors OSX

2004-05-26 Thread Jeff Donovan
On May 25, 2004, at 7:56 AM, Jeff Donovan wrote:
greetings
this is one of the problems: there is no setting (that i can find)
for kern.maxfilesperproc. here are the numbers that come back:
kern.maxvnodes = 33584
kern.maxproc = 2048
kern.maxfiles = 12288
kern.argmax = 65536

On an OSX machine there is no sysctl.conf (not that i can find),
nor can i locate /usr/include/bits.
this may seem to be an Apple lists question; I thought someone might
have known offhand.

--jeff
On May 24, 2004, at 2:50 PM, Arthur W. Neilson III wrote:
On FreeBSD the number of systemwide file descriptors and
number of file descriptors per process is controlled
via the following sysctls, respectively:
kern.maxfiles
kern.maxfilesperproc
You can set them in /etc/sysctl.conf then reboot
kern.maxfiles=65534
kern.maxfilesperproc=8192
or use the sysctl command to set them dynamically:
sysctl kern.maxfiles=65534
sysctl kern.maxfilesperproc=8192

Greetings
FYI: on MacOSX 10.2.x kern.maxfilesperproc is not changeable. it can
only be edited in OSX 10.3.x.

thanks for the help.
-jeff
---
jeff donovan
basd network operations
(610) 807 5571 x4
AIM  xtdonovan
fwd# 248217


Re: [squid-users] squid dies

2004-05-26 Thread Jeff Donovan
On May 26, 2004, at 1:32 PM, Mike Rambo wrote:
Since upgrading to squid 2.5 a month or so ago, we have been having
to restart squid about once a week when it dies suddenly. Here are
the last dozen lines or so from cache.log. Is the final line the
cause of death in this case? What does it mean? What other info do
you need?

2004/05/26 11:51:52| fqdncacheParse: No PTR record
2004/05/26 11:51:52| fqdncacheParse: No PTR record
2004/05/26 11:52:00| Request header is too large (11680 bytes)
2004/05/26 11:52:00| Config 'request_header_max_size'= 10240 bytes.
2004/05/26 11:52:01| Request header is too large (11680 bytes)
2004/05/26 11:52:01| Config 'request_header_max_size'= 10240 bytes.
2004/05/26 11:52:01| Request header is too large (11680 bytes)
2004/05/26 11:52:01| Config 'request_header_max_size'= 10240 bytes.
2004/05/26 11:53:04| NETDB state saved; 0 entries, 12 msec
2004/05/26 12:07:49| commConnectDnsHandle: Bad dns_error_message
2004/05/26 12:31:48| WARNING: Closing client 10.6.24.111 connection due
to lifetime timeout
2004/05/26 12:31:48|
http://d.centralmedia.ws/d.aspx?ver=4.5.26host=5-A237-5
2004/05/26 12:34:03| urlParse: Illegal character in hostname ''
2004/05/26 12:43:15| assertion failed: errorpage.c:292: mem->inmem_hi == 0
# squid -v
Squid Cache: Version 2.5.STABLE5
configure options:  --prefix=/usr --exec-prefix=/usr
--sysconfdir=/etc/squid --libexecdir=/usr/libexec/squid
--sharedstatedir=/var/squid/com --localstatedir=/var/squid
--libdir=/usr/lib/squid --enable-gnuregex
--enable-storeio=ufs,aufs,diskd --with-pthreads
--enable-removal-policies=lru,heap --enable-icmp --enable-delay-pools
--enable-useragent-log --enable-referer-log --enable-xmalloc-statistics
--enable-kill-parent-hack --enable-snmp
--enable-cachemgr-hostname=squid.lpsd.local --enable-htcp --enable-ssl
--enable-cache-digests --enable-linux-netfilter 
--enable-auth=basic,ntlm
--enable-basic-auth-helpers=getpwnam,LDAP,MSNT,NCSA,PAM,SMB,winbind
--enable-ntlm-auth-helpers=fakeauth,no_check,SMB,winbind
--enable-ntlm-fail-open --enable-x-accelerator-vary --enable-carp

Mike, does squid actually stop serving requests?
or does the request for this certain url clog the whole system?
-j
---
jeff donovan
basd network operations
(610) 807 5571 x41
AIM  xtdonovan


Re: [squid-users] Running out of file descriptors OSX

2004-05-25 Thread Jeff Donovan
greetings
this is one of the problems: there is no setting (that i can find)
for kern.maxfilesperproc. here are the numbers that come back:
kern.maxvnodes = 33584
kern.maxproc = 2048
kern.maxfiles = 12288
kern.argmax = 65536

On an OSX machine there is no sysctl.conf (not that i can find),
nor can i locate /usr/include/bits.
this may seem to be an Apple lists question; I thought someone might
have known offhand.

--jeff
On May 24, 2004, at 2:50 PM, Arthur W. Neilson III wrote:
On FreeBSD the number of systemwide file descriptors and
number of file descriptors per process is controlled
via the following sysctls, respectively:
kern.maxfiles
kern.maxfilesperproc
You can set them in /etc/sysctl.conf then reboot
kern.maxfiles=65534
kern.maxfilesperproc=8192
or use the sysctl command to set them dynamically:
sysctl kern.maxfiles=65534
sysctl kern.maxfilesperproc=8192
--On Monday, May 24, 2004 8:40 PM +0430 [EMAIL PROTECTED] said:
| You can increase it with ulimit -HSn 'NEW VALUE', and after that
| recompile squid. Whenever you want to run squid, you should first
| execute that command again. you can also edit
| /usr/include/bits/types.h (on some linux branches, like Redhat 9,
| you should edit /usr/include/bits/typesizes.h), change the line
| #define __FD_SETSIZE to a new value like 8192, and then recompile
| squid. All the above tricks are about linux; i am not sure about
| freebsd, but you can test it.
|
| Help.
| It seems that i am pushing this machine to it's limit.
| i have been running Squid for over a year with great success. I am
| running on an Apple 1.2ghz Xserve, 2g ram and 2 180g hd's
|
| the system is throwing errors about running out of file descriptors
| and queueing redirector processes. Eventually, a couple times a day,
| Squid will drop all connections and rebuild the cache.
|
| I read the docs on increasing the file descriptors. On loading, it
| says i have 1024. i don't know where to look on a Darwin system for
| the file descriptors.
| I used the FreeBSD sysctl -A and that output did not display
| kern.maxfilesperproc.
|
| I would assume that there would be a number matching 1024 somewhere
| to edit.
|
| I have included my log files, and the output of the sysctl -A. Sorry
| that they are so long, but i need to tweak this machine to keep it
| from reloading.
| If anyone can give me some insight on how to correct this problem I
| would be very thankful.
|
|
|
| --jeff
|
|  { file / log info }
---
jeff donovan
basd network operations
(610) 807 5571 x4
AIM  xtdonovan
fwd# 248217


[squid-users] Running out of file descriptors OSX

2004-05-24 Thread Jeff Donovan
2004/05/24 10:17:03| WARNING: 36 pending requests queued
2004/05/24 10:17:03| Consider increasing the number of redirector 
processes in your config file.
865/12288 files
  33582 vnodes
swapmode is not (yet) available under Mach
[squidx:~] root# 2004/05/24 10:17:20| Store rebuilding is  0.4% complete
2004/05/24 10:17:36| Store rebuilding is  0.8% complete
2004/05/24 10:17:51| Store rebuilding is  1.1% complete
2004/05/24 10:18:06| Store rebuilding is  1.5% complete
2004/05/24 10:18:24| Store rebuilding is  1.9% complete
2004/05/24 10:18:43| Store rebuilding is  2.3% complete
2004/05/24 10:19:01| Store rebuilding is  2.7% complete
2004/05/24 10:19:17| Store rebuilding is  3.1% complete
2004/05/24 10:19:37| Store rebuilding is  3.4% complete
2004/05/24 10:19:39| parseHttpRequest: Unsupported method '-6288
GNUTELLA'
2004/05/24 10:19:39| clientReadRequest: FD 483 Invalid Request
2004/05/24 10:20:00| Store rebuilding is  3.8% complete
2004/05/24 10:20:26| Store rebuilding is  4.2% complete
2004/05/24 10:20:46| Store rebuilding is  4.6% complete
2004/05/24 10:21:13| Store rebuilding is  5.0% complete
2004/05/24 10:21:30| Store rebuilding is  5.7% complete
2004/05/24 10:21:46| Store rebuilding is  8.0% complete
2004/05/24 10:22:01| Store rebuilding is 11.5% complete
2004/05/24 10:22:16| Store rebuilding is 19.1% complete
2004/05/24 10:22:31| Store rebuilding is 33.7% complete
2004/05/24 10:22:46| Store rebuilding is 50.1% complete
2004/05/24 10:23:01| Store rebuilding is 66.6% complete
2004/05/24 10:23:16| Store rebuilding is 86.5% complete
2004/05/24 10:23:31| Store rebuilding is 98.4% complete
2004/05/24 10:23:33| Done reading /Volumes/squidcache/cache swaplog 
(1070008 entries)
2004/05/24 10:23:33| Finished rebuilding storage from disk.
2004/05/24 10:23:33|   1070008 Entries scanned
2004/05/24 10:23:33| 0 Invalid entries.
2004/05/24 10:23:33| 0 With invalid flags.
2004/05/24 10:23:33|   1061902 Objects loaded.
2004/05/24 10:23:33| 0 Objects expired.
2004/05/24 10:23:33| 0 Objects cancelled.
2004/05/24 10:23:33|  2813 Duplicate URLs purged.
2004/05/24 10:23:33|  5293 Swapfile clashes avoided.
2004/05/24 10:23:33|   Took 394.2 seconds (2693.9 objects/sec).
2004/05/24 10:23:33| Beginning Validation Procedure
2004/05/24 10:23:33|262144 Entries Validated so far.
2004/05/24 10:23:35|524288 Entries Validated so far.
2004/05/24 10:23:38|786432 Entries Validated so far.
2004/05/24 10:23:38|   1048576 Entries Validated so far.
2004/05/24 10:23:38|   Completed Validation Procedure
2004/05/24 10:23:38|   Validated 1061890 Entries
2004/05/24 10:23:38|   store_swap_size = 14825468k
2004/05/24 10:23:51| storeLateRelease: released 415 objects

---
jeff donovan
basd network operations
(610) 807 5571 x4
AIM  xtdonovan
fwd# 248217


Re: [squid-users] Running out of file descriptors OSX

2004-05-24 Thread Jeff Donovan
 storage in /Volumes/squidcache/cache
(CLEAN)
2004/05/24 10:16:59| Using Least Load store dir selection
2004/05/24 10:16:59| WARNING: Can't find current directory, getcwd:
(13) Permission denied
2004/05/24 10:16:59| Loaded Icons.
2004/05/24 10:16:59| Accepting HTTP connections at 0.0.0.0, port 3128,
FD 50.
2004/05/24 10:16:59| Accepting ICP messages at 0.0.0.0, port 3130, FD
51.
2004/05/24 10:16:59| Accepting SNMP messages on port 3401, FD 52.
2004/05/24 10:16:59| WCCP Disabled.
2004/05/24 10:16:59| Ready to serve requests.
2004/05/24 10:17:03| WARNING: All redirector processes are busy.
2004/05/24 10:17:03| WARNING: 36 pending requests queued
2004/05/24 10:17:03| Consider increasing the number of redirector
processes in your config file.
865/12288 files
   33582 vnodes
swapmode is not (yet) available under Mach
[squidx:~] root# 2004/05/24 10:17:20| Store rebuilding is  0.4% 
complete
2004/05/24 10:17:36| Store rebuilding is  0.8% complete
2004/05/24 10:17:51| Store rebuilding is  1.1% complete
2004/05/24 10:18:06| Store rebuilding is  1.5% complete
2004/05/24 10:18:24| Store rebuilding is  1.9% complete
2004/05/24 10:18:43| Store rebuilding is  2.3% complete
2004/05/24 10:19:01| Store rebuilding is  2.7% complete
2004/05/24 10:19:17| Store rebuilding is  3.1% complete
2004/05/24 10:19:37| Store rebuilding is  3.4% complete
2004/05/24 10:19:39| parseHttpRequest: Unsupported method '-6288
GNUTELLA'
2004/05/24 10:19:39| clientReadRequest: FD 483 Invalid Request
2004/05/24 10:20:00| Store rebuilding is  3.8% complete
2004/05/24 10:20:26| Store rebuilding is  4.2% complete
2004/05/24 10:20:46| Store rebuilding is  4.6% complete
2004/05/24 10:21:13| Store rebuilding is  5.0% complete
2004/05/24 10:21:30| Store rebuilding is  5.7% complete
2004/05/24 10:21:46| Store rebuilding is  8.0% complete
2004/05/24 10:22:01| Store rebuilding is 11.5% complete
2004/05/24 10:22:16| Store rebuilding is 19.1% complete
2004/05/24 10:22:31| Store rebuilding is 33.7% complete
2004/05/24 10:22:46| Store rebuilding is 50.1% complete
2004/05/24 10:23:01| Store rebuilding is 66.6% complete
2004/05/24 10:23:16| Store rebuilding is 86.5% complete
2004/05/24 10:23:31| Store rebuilding is 98.4% complete
2004/05/24 10:23:33| Done reading /Volumes/squidcache/cache swaplog
(1070008 entries)
2004/05/24 10:23:33| Finished rebuilding storage from disk.
2004/05/24 10:23:33|   1070008 Entries scanned
2004/05/24 10:23:33| 0 Invalid entries.
2004/05/24 10:23:33| 0 With invalid flags.
2004/05/24 10:23:33|   1061902 Objects loaded.
2004/05/24 10:23:33| 0 Objects expired.
2004/05/24 10:23:33| 0 Objects cancelled.
2004/05/24 10:23:33|  2813 Duplicate URLs purged.
2004/05/24 10:23:33|  5293 Swapfile clashes avoided.
2004/05/24 10:23:33|   Took 394.2 seconds (2693.9 objects/sec).
2004/05/24 10:23:33| Beginning Validation Procedure
2004/05/24 10:23:33|262144 Entries Validated so far.
2004/05/24 10:23:35|524288 Entries Validated so far.
2004/05/24 10:23:38|786432 Entries Validated so far.
2004/05/24 10:23:38|   1048576 Entries Validated so far.
2004/05/24 10:23:38|   Completed Validation Procedure
2004/05/24 10:23:38|   Validated 1061890 Entries
2004/05/24 10:23:38|   store_swap_size = 14825468k
2004/05/24 10:23:51| storeLateRelease: released 415 objects

---
jeff donovan
basd network operations
(610) 807 5571 x4
AIM  xtdonovan
fwd# 248217





[squid-users] Updating Blacklists

2003-09-03 Thread Jeff Donovan
greetings
does anyone know a grep statement that i could use to tell the
differences between two blacklists?
I want to update my lists but only append new data.

urls, tips, hints, flames welcome

tnx
-j
Jeff Donovan
BASD Network Operations
AIM xT donovan
(610)807-5571 x4
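A sketch of two standard approaches, assuming one entry per line in
files named old and new (both names are placeholders):

# lines present in new but absent from old (fixed strings, whole lines):
grep -Fxv -f old new > added-entries
# equivalent with sorted input (bash process substitution):
comm -13 <(sort old) <(sort new) > added-entries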


Re: [squid-users] your cache is Running out of filedescriptors

2003-03-24 Thread Jeff Donovan
ok, let me get this straight:

send a HUP signal to squid
and restart with
./squid ulimit -HSn 2048
(doesn't look right to me)

./squid -h
doesn't show much in line with what you are saying. Do i need to
recompile squid, maybe?

--jeff

On Monday, March 24, 2003, at 11:16 AM, MASOOD AHMAD wrote:

It seems that your OS supports up to 12288 file descriptors,
but you have not started squid with more than 1024 file descriptors.
so you will have to kill the squid process and then restart it with a
command like:
ulimit -HSn 2048 (or more than that)

and then start squid.

Best Regards,
Masood Ahmad Shah
System Administrator
Fibre Net
Cell #   923004277367
--- Jeff Donovan [EMAIL PROTECTED] wrote:
Silly me, i found the relevant part in FAQ 11.4:

FreeBSD (by Torsten Sturm)

How do I check my maximum filedescriptors?
Do sysctl -a and look for the value of kern.maxfilesperproc.

How do I increase them?
sysctl -w kern.maxfiles=
sysctl -w kern.maxfilesperproc=
Warning: You probably want maxfiles > maxfilesperproc if you're going
to be pushing the limit.

What is the upper limit?
I don't think there is a formal upper limit inside the kernel. All
the data structures are dynamically allocated. In practice there
might be unintended metaphenomena (kernel spending too much time
searching tables, for example).

Here is my kernel output: i would assume i could increase maxproc and
maxfiles.


[squidx:~] root# sysctl -a | more
kern.ostype = Darwin
kern.osrelease = 6.4
kern.osrevision = 199506
kern.version = Darwin Kernel Version 6.4:
Wed Jan 29 18:50:42 PST 2003;
root:xnu/xnu-344.26.obj~1/RELEASE_PPC
kern.maxvnodes = 33584
kern.maxproc = 2048
kern.maxfiles = 12288
any suggestions on how much to increase this by?

kern.argmax = 65536
kern.securelevel = 1
kern.hostname = squidx
kern.hostid = 3223847169
kern.clockrate: hz = 100, tick = 1, profhz =
100, stathz = 100
kern.posix1version = 198808
kern.ngroups = 16
kern.job_control = 1
kern.saved_ids = 0
kern.boottime = Sat Mar 22 19:52:28 2003
{snip}--not relative

On Monday, March 24, 2003, at 10:19 AM, Marc Elsen
wrote:


Jeff Donovan wrote:
parseHttpRequest: requestheader contains NULL characters
clientReadRequest: FD {somenumber} Invalid request
WARNING! Your cache is running out of filedescriptors

Unless someone is launching some kind of denial of service attack
against your squid, those 2 lines are normally unrelated to the
out-of-file-descriptors problem.
Check access.log to see which kind of requests are being processed
by squid during the time of these error(s).

However, you may need to increase the available number of file
descriptors. I do not know how to do this on OSX, however.

M.






[squid-users] regular maintenance

2003-03-20 Thread Jeff Donovan
greetings

I have had a transparent Squid + SquidGuard server running great for
about a month now.
I have a few questions about regular maintenance.
At night I run a cron job that does squid -k rotate. this works great
for my logs, so that they do not get too fat. Are there other things
I should be doing to keep this machine healthy?
What are some of the health checks you guys look for?

TIA

--jeff



Re: [squid-users] All redirector processes are busy

2003-02-24 Thread Jeff Donovan
greetings,
thanks for the reply.
On Friday, February 21, 2003, at 01:12 PM, [EMAIL PROTECTED] wrote:

You are running on a newer SMP Apple machine, right?

okay, you got me. what's an SMP apple machine?

Have you considered just bumping up the number of processes to the
compiled max (32?)

No I haven't considered it.
I would like to compare benchmarks and/or system setup with anyone
else who is running squid and SquidGuard on an OSX box.

--jeff



[squid-users] Re: anyone know why this is blocked?

2003-02-14 Thread Jeff Donovan
Rick, you are my hero!
is there any way to find out which expressions in the expressionlist
are the culprit?

thanks for the tips. The dual log is awesome.


--jeff

On Thursday, February 13, 2003, at 05:11 PM, Rick Matthews wrote:

Jeff Donovan wrote:


i have a transparent proxy running squid 2.5 and squidguard.
everything is working fine.
however, when I was surfing around i came to:
http://www.netbsd.org

now, that domain loads fine. but when i click on "Documentation/FAQ"
I get redirected to my Denied file.
I grepped my blacklists for the domain, url, and ip, and nothing came
back. Then I manually searched (what a bugger).

It's not blocked here.

As Darren has already mentioned, there are a few things that you can
do when you are setting up squidGuard that will greatly simplify your
research efforts:

- Use squidGuard.cgi (from the /samples folder) for redirects.  That
will give you a redirect page that resembles this:
http://home1.gte.net/res0pj61/squidguard/redirect-sample.gif

- If you can't (or would prefer not to) run cgi, you can still
redirect to a different page from each group.  For example, you might
redirect the porn group to http://home1.gte.net/res0pj61/403prn.html
and the drugs group to http://home1.gte.net/res0pj61/403drgs.html.

- For clarity and ease of use, add a redirect statement to every
destination block.  They could all point to the same location, or
they might all be different.  For starters, I'd recommend pointing
everything but the ads group to the squidGuard.cgi page.  The ads
group should be redirected to a transparent 1x1.gif (or png).

- For clarity and ease of use, add a log statement to every
destination block.  For starters, I'd recommend logging everything
but the ads group to blocked.log.  The ads group should be
logged to ads.log.  This will log the important information
about every block, to greatly simply research.

- If you use the logic presented in the first 2 tips above, you do
not need a redirect statement in any acl sections where the
pass statement ends with all.  You do need a redirect statement
in the acl sections where the pass statement ends with none.

- If you are using an allowed destination group, remember that any
domains entered there have a free pass, even if the domain or
subdomains are listed in blocked destination groups.  The allowed
group should be listed first in your acl, pass allowed !porn 
It is not necessary to have a redirect and log statement in your
allowed group.

- Be extremely careful with expressionlists!  As an example,
remember that your porn expressionlist will define a combination
that, if it appears in a url, will cause it to be classified as a
porn url.  Therefore, that combination should never appear in a
non-porn url.  (Repeat the previous two sentences for each group
that contains an expressionlist, replacing porn with the name
of the destination group.)  I only use 2 expressionlists, both in
areas where the terminology is fairly unique - porn and ads.

- My expressionlists are not in the same destination groups with
domains and urls.  I have a porn group and a pornexp group, the latter
containing only the porn expressionlist.  I also have ads and adsexp
groups.  This is extremely helpful in debugging and correcting
false blocks.  Knowing the destination group that caused the block
immediately tells you whether you have a database or expressionlist
problem.

- Separating the database files from the expressionlists also allows
you to gauge the effectiveness of your expressionlist.  Put the
database before the expressionlist in your pass statement
(pass !porn !pornexp...).  You can then examine your blocked.log
file knowing that if a url was blocked by pornexp, it was not in
the porn databases and would have been approved except for the
expressionlist.

- More information on isolating expressionlist blocks for easier
problem identification:

Here's a small change that you can make to your squidGuard.conf file
so that you will immediately know if you've been blocked by the porn
database or by the porn expressionlist.

Instead of setting up your porn destination group like this:

--- not this way ---
dest porn {
	domainlist		porn/domains
	urllist		porn/urls
	expressionlist	porn/expressions
	redirect		http://yourserver.com/whatever...
	logfile		blocked.log
}
---  end  ---

Break out the expressionlist and set it up like this:

-- Recommended --
dest porn {
	domainlist		porn/domains
	urllist		porn/urls
	redirect		http://yourserver.com/whatever...
	logfile		blocked.log
}

dest pornexp {
	expressionlist	expressions
	redirect		http://yourserver.com/whatever...
	logfile		blocked.log
}
---  end  ---

Then replace [!porn] with [!porn !pornexp] in your acl and you'll
have exactly the same coverage as before, but now your redirect
page and blocked log will show:

Target group = porn
or
Target group = pornexp

I hope these help!

Rick













[squid-users] anyone have a good expressions list

2003-02-14 Thread Jeff Donovan
greetings

I'm looking for a good expressions list, something that only targets
porn sites. I had been using the default expressions list that comes
with the blacklists, but it seems to block out many sites that are
not adult related.

I'm pretty much REGEX illiterate.

--jeff



Re: [squid-users] Can't download files through squid:;FALSE ALARM::

2003-02-13 Thread Jeff Donovan
sorry for the hasty post. it was a client cache issue.

btw 2.5-stable, OSX 10.2.3 server

--jeff


On Thursday, February 13, 2003, at 08:32 AM, Marc Elsen wrote:




Jeff Donovan wrote:


greetings
i placed my squid + squidguard transparent proxy online today. i have
only one problem.

It seems that users cannot download files from the web.
any insight would be helpful.

--jeff


Which squid version are you using?
On which platform/OS/version?

What errors are the users getting (exact browser error)?

What's in access.log for a particular case?

Any other suspicious errors in cache.log?

M.


--

'Time is a consequence of Matter, thus
General Relativity is a direct consequence of QM.'
(M.E. Mar 2002)