RE: [squid-users] filtering based on google search

2009-11-08 Thread michael hiatt

I would like to be shown how to block OR allow (I'm not fussed either way; I 
believe I can transpose the answer to what I want to do) based upon a google 
search query submitted by the user. The key here is that I want to be able to 
create an ACL for the google search term itself, not just the google web-site.
 
So, going with the blacklist-whitelist example (the more complicated one), how 
would I write a pattern that matches and allows "pirates of penzance" but 
denies other occurrences of "pirate"?
 
I have read through the FAQ but I don't believe this exact scenario is covered 
in depth.
 
Also, to show that I have tried: I have come up with a url_regex pattern in my 
file like so:
q=pirates
 
It would be much better, though, if I could make this a bit more semantic by 
including the google domain in the pattern and being able to match the spaces 
in the search term.
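
Something like this is the shape I am after (untested sketch; the ACL names 
are mine, and I gather spaces in a google query arrive URL-encoded as "+" or 
"%20"):

  acl search_ok url_regex -i ^http://www\.google\.[a-z.]+/search\?.*q=pirates(\+|%20)of(\+|%20)penzance
  acl search_bad url_regex -i q=[^&]*pirate
  # first match wins, so the specific allow must sit above the broad deny
  http_access allow search_ok
  http_access deny search_bad

(Combined with whatever src ACLs are normally in place, of course.)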
 
 
Any further help would be great.
 
Regards,
Michael

 
 
> Date: Mon, 9 Nov 2009 19:18:48 +1300
> From: squ...@treenet.co.nz
> CC: squid-users@squid-cache.org
> Subject: Re: [squid-users] filtering based on google search
> 
> michael hiatt wrote:
>> Hi,
>> Just wondering if there is a way of getting squid to block or allow based on 
>> google search results.
>>
> 
> That sentence makes no sense to me whatsoever. Can you explain it a bit? 
> What are you intending to get out of it?
> 
> 
>> I have tried setting two 
>> url_regex -i "file/path/goes/here"
>> 
>> one for allowed and one for blocked.
>> 
>> If I set http://www.google.com to be allowed then unwanted words can be
>> searched and their results displayed. Clicking on said results displays
>> the error/blocked page.
>> 
>> If I remove http://www.google.com then I can't search on some words that I 
>> want.
>> 
>> Example:
>> I would like to search on "pirates of penzance" but cannot because "pirate" 
>> is a keyword in my block list.
>> 
>> Is there a better way around this? I don't want to (and can't) install
>> other software like squidGuard or DansGuardian. I'm hoping to do
>> this in squid alone.
> 
> You describe a perfectly working URL keyword filter.
> 
> - whitelisting "google.com" ... allows *ALL* of google.com.
> - blacklisting *pirate* ... blocks *ALL* mentions of "pirate" in URL 
> (including google lookup URLs, result URLs, etc)
> 
> 
> Your choices are:
> * accept the price of keyword filtering URLs.
> * stop using the filter.
> * complicate your config further with a set of 
> whitelisted-blacklisted keywords based on other things (like your 
> google.com example).
> 
> see FAQ on managing ACLs...
> http://wiki.squid-cache.org/SquidFaq/SquidAcl
> 
> 
> Amos
> -- 
> Please be using
> Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
> Current Beta Squid 3.1.0.14






Re: [squid-users] filtering based on google search

2009-11-08 Thread Amos Jeffries

michael hiatt wrote:

Hi,
Just wondering if there is a way of getting squid to block or allow based on 
google search results.



That sentence makes no sense to me whatsoever. Can you explain it a bit? 
What are you intending to get out of it?



I have tried setting two 
url_regex -i "file/path/goes/here"
 
one for allowed and one for blocked.
 
If I set http://www.google.com to be allowed then unwanted words can be
searched and their results displayed. Clicking on said results displays
the error/blocked page.
 
If I remove http://www.google.com then I can't search on some words that I want.
 
Example:

I would like to search on "pirates of penzance" but cannot because "pirate" is 
a keyword in my block list.
 
Is there a better way around this? I don't want to (and can't) install
other software like squidGuard or DansGuardian. I'm hoping to do
this in squid alone.


You describe a perfectly working URL keyword filter.

 - whitelisting "google.com" ... allows *ALL* of google.com.
 - blacklisting *pirate* ... blocks *ALL* mentions of "pirate" in URL 
(including google lookup URLs, result URLs, etc)



Your choices are:
  * accept the price of keyword filtering URLs.
  * stop using the filter.
  * complicate your config further with a set of 
whitelisted-blacklisted keywords based on other things (like your 
google.com example) - see the sketch below.
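
A rough sketch of that third option (file paths and ACL names here are 
placeholders):

  acl searchterms_ok url_regex -i "/etc/squid/search-whitelist"
  acl searchterms_bad url_regex -i "/etc/squid/search-blacklist"
  http_access allow searchterms_ok
  http_access deny searchterms_bad

First match wins: whole phrases listed in the whitelist file (e.g. 
"q=pirates(\+|%20)of(\+|%20)penzance") get allowed before the broad keywords 
in the blacklist file (e.g. "pirate") can deny them.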


see FAQ on managing ACLs...
  http://wiki.squid-cache.org/SquidFaq/SquidAcl


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.14


[squid-users] filtering based on google search

2009-11-08 Thread michael hiatt

Hi,
Just wondering if there is a way of getting squid to block or allow based on 
google search results.
 
I have tried setting two 
url_regex -i "file/path/goes/here"
 
one for allowed and one for blocked.
 
If I set http://www.google.com to be allowed then unwanted words can be
searched and their results displayed. Clicking on said results displays
the error/blocked page.
 
If I remove http://www.google.com then I can't search on some words that I want.
 
Example:
I would like to search on "pirates of penzance" but cannot because "pirate" is 
a keyword in my block list.
 
Is there a better way around this? I don't want to (and can't) install
other software like squidGuard or DansGuardian. I'm hoping to do
this in squid alone.
 
 
Thanks in advance for the help.
 
Regards,
Michael 
  

Re: [squid-users] Squid 3.1 + mrtg

2009-11-08 Thread Amos Jeffries

Babu Chaliyath wrote:

Converting IPv4 address fields to IPv6+IPv4 shared trees...

The client info table had cacheClientAddressType added as .1,
cacheClientAddress shuffled to .2
 ... which bumped all cacheClient* from .N to .N+1

The peering table had cachePeerIndex added as .1 and cacheClientAddressType
added as .2
 ... which bumped all cachePeer* from .N to .N+2

Amos


Now that's all going over my head as far as the mrtg setup for Squid
3.1 is concerned. Can you tell me where and what changes I need to
make to get it working?
Sorry, but I couldn't get much out of these.

Regards
Babs


Um, I think the best way to go forward is for us to fix this ASAP.
Are you able to test patches if I do the code?
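
In the meantime: if your mrtg targets poll by object name rather than by raw 
OID, the renumbering may not bite you, since the names get resolved through 
the MIB file shipped with your squid. Roughly (the mib.txt path and squid's 
default SNMP port 3401 are assumptions):

  LoadMIBs: /usr/local/squid/share/mib.txt
  Target[squid-hits]: cacheHttpHits&cacheHttpErrors:public@localhost:3401

The cacheClient* and cachePeer* tables are the ones affected by the shifts 
described above.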

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.14


Re: [squid-users] Time-based oddity that I can't quite nail down...

2009-11-08 Thread Kurt Buff
On Sun, Nov 8, 2009 at 17:08, Amos Jeffries  wrote:
> On Sun, 8 Nov 2009 16:55:22 -0800, Kurt Buff  wrote:
>> All,
>>
>> During the normal workday at my company, the squid proxy is
>> reasonably responsive, and seems to work well.
>>
>> However, after roughly 5pm each day, through the night and all during
>> the weekend, web browsing is very slow, with pages taking a very long
>> time (30+ seconds, to sometimes minutes) to load.
>>
>> Does anyone have some suggestions on where I might start looking at
>> this problem? I haven't found anything in the logs that I can detect
>> as relevant. Stopping and starting squid makes no difference.
>
> What version of Squid is this?
>
> Stuff I can think of right now:
>  IIRC there were some oddities possible when Squid had zero or very few
> traffic events happening (older Squid versions rely on an IO event to
> trigger any other processing).
>
>  I've also seen some mistakes with time ACLs opening the proxy to full
> general use outside work hours. The lack of Squid CPU load in your snapshot
> makes this unlikely but might be worth checking anyway just in case.
>
>  Could also be upstream network load. If this is hanging off a popular ISP
> with a lot of high-bandwidth users the whole network can slow down as
> people at home ramp up their use. Though I would expect to see some
> intermittent problems from end of school hours (~4pm?) in that case.
>
> Amos


Sorry, didn't get this to the list the first time:

It's squid-3.0.19 running on top of FreeBSD 7.0-Stable #0

We have a DS3 with a soft cap of 5 Mbit/s (if we use more than the soft
cap over the course of a month we pay extra, but there are no hard
limits on it - I've seen bursts up to 30 Mbit/s over short periods),
through a business ISP (NTT), so I don't suspect an ISP network load
issue.

I have no ACLs that are time-dependent.

This is just baffling to me.

Thanks for looking at it, and if you have any more thoughts, I'd love
to hear them.

Kurt


Re: [squid-users] Time-based oddity that I can't quite nail down...

2009-11-08 Thread Amos Jeffries
On Sun, 8 Nov 2009 16:55:22 -0800, Kurt Buff  wrote:
> All,
> 
> During the normal workday at my company, the squid proxy is
> reasonably responsive, and seems to work well.
> 
> However, after roughly 5pm each day, through the night and all during
> the weekend, web browsing is very slow, with pages taking a very long
> time (30+ seconds, to sometimes minutes) to load.
> 
> Does anyone have some suggestions on where I might start looking at
> this problem? I haven't found anything in the logs that I can detect
> as relevant. Stopping and starting squid makes no difference.

What version of Squid is this?

Stuff I can think of right now:
  IIRC there were some oddities possible when Squid had zero or very few
traffic events happening (older Squid versions rely on an IO event to
trigger any other processing).

 I've also seen some mistakes with time ACLs opening the proxy to full
general use outside work hours. The lack of Squid CPU load in your snapshot
makes this unlikely but might be worth checking anyway just in case.

 Could also be upstream network load. If this is hanging off a popular ISP
with a lot of high-bandwidth users the whole network can slow down as
people at home ramp up their use. Though I would expect to see some
intermittent problems from end of school hours (~4pm?) in that case.
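
For example, a hypothetical rule set with exactly that mistake (all names 
made up):

  acl localnet src 192.168.0.0/16
  acl workhours time MTWHF 08:00-18:00
  # meant to relax filtering after hours, but there is no src test,
  # so anyone at all can use the proxy outside work hours:
  http_access allow !workhours
  http_access allow localnet workhours
  http_access deny all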

Amos



[squid-users] Time-based oddity that I can't quite nail down...

2009-11-08 Thread Kurt Buff
All,

During the normal workday at my company, the squid proxy is
reasonably responsive, and seems to work well.

However, after roughly 5pm each day, through the night and all during
the weekend, web browsing is very slow, with pages taking a very long
time (30+ seconds, to sometimes minutes) to load.

Does anyone have some suggestions on where I might start looking at
this problem? I haven't found anything in the logs that I can detect
as relevant. Stopping and starting squid makes no difference.

This is a snapshot of top from just a few moments ago, and I'm
experiencing the problem at the moment (I'm VPNed in from home,
looking at some issues):

last pid: 33578;  load averages:  0.31,  0.35,  0.18
up 489+09:46:35 16:49:12
45 processes:  1 running, 44 sleeping
CPU states:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 296M Active, 723M Inact, 167M Wired, 70M Cache, 112M Buf, 743M Free
Swap: 1024M Total, 72K Used, 1024M Free

  PID USERNAME  THR PRI NICE   SIZERES STATE  C   TIME   WCPU COMMAND
33576 root1  440  3488K  1584K CPU1   1   0:00  0.10% top
42017 root1  440  3156K   984K select 1 246:01  0.00% syslogd
 1524 root1  440  4628K  1584K select 0  17:29  0.00% ntpd
 1065 root1   80  3184K   940K nanslp 0   3:51  0.00% cron
25039 root1   40  3292K  1072K kqread 1   3:22  0.00% master
25041 postfix 1   40  3292K  1212K kqread 0   0:43  0.00% qmgr
 1814 squid   1   40  3104K   928K accept 1   0:01  0.00% frox
33389 kbuff   1  440  8384K  3052K select 0   0:00  0.00% sshd
33386 root1   40  8384K  3032K sbwait 1   0:00  0.00% sshd
 1058 root1  440  5616K  1884K select 1   0:00  0.00% sshd
68062 root1  440  3472K  1604K select 1   0:00  0.00% bsnmpd
33396 root1  200  4452K  2156K pause  0   0:00  0.00% csh
33252 postfix 1   40  3292K  1260K kqread 0   0:00  0.00% pickup
33578 squid   1  440  4328K  1668K select 1   0:00  0.00% pinger
33577 squid   1  -80  4292K  1396K piperd 0   0:00  0.00% unlinkd
33392 kbuff   1   80  3592K  1388K wait   1   0:00  0.00% su
33391 kbuff   1   80  3456K  1300K wait   1   0:00  0.00% sh
 1107 root1   50  3156K   816K ttyin  0   0:00  0.00% getty
  866 root1  620  1888K   420K select 1   0:00  0.00% devd
 1112 root1   50  3156K   816K ttyin  0   0:00  0.00% getty
 1110 root1   50  3156K   816K ttyin  0   0:00  0.00% getty
 1113 root1   50  3156K   816K ttyin  0   0:00  0.00% getty
  root1   50  3156K   816K ttyin  1   0:00  0.00% getty
 1114 root1   50  3156K   816K ttyin  1   0:00  0.00% getty
 1109 root1   50  3156K   816K ttyin  0   0:00  0.00% getty
11008 root1   50  3156K   796K ttyin  1   0:00  0.00% getty
33573 squid   1   80  7464K  4048K wait   0   0:00  0.00% squid
  136 root1  200  1356K   648K pause  0   0:00  0.00% adjkerntz
33575 squid  17 1130   358M   268M ucond  0   0:00  0.00% squid

Thanks,

Kurt


Re: [squid-users] Squid 3.1 + mrtg

2009-11-08 Thread Henrik Nordstrom
On Tue, 2009-11-03 at 17:25 +1300, Amos Jeffries wrote:

> > MIB numbering should never change. Old numbers may cease to exist when
> > their data sources go away and new numbers appear as new info gets
> > published, but existing numbering should not change...
> 
> Converting IPv4 address fields to IPv6+IPv4 shared trees...
> 
> The client info table had cacheClientAddressType added as .1, 
> cacheClientAddress shuffled to .2
>   ... which bumped all cacheClient* from .N to .N+1
> 
> The peering table had cachePeerIndex added as .1 and 
> cacheClientAddressType added as .2
>... which bumped all cachePeer* from .N to .N+2


Ugh.. that needs to be redone. The new fields need to be added after the
existing ones.

It is not permissible to renumber existing MIB entries like this, or to
reuse an old MIB entry for another purpose.
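
To spell it out (cacheClientAddress/cacheClientAddressType are from your 
description; cacheClientHttpRequests is made up for illustration):

  -- what already-deployed agents expect:
  cacheClientEntry.1 = cacheClientAddress
  cacheClientEntry.2 = cacheClientHttpRequests

  -- what 3.1 currently does (renumbers, breaking those agents):
  cacheClientEntry.1 = cacheClientAddressType
  cacheClientEntry.2 = cacheClientAddress
  cacheClientEntry.3 = cacheClientHttpRequests

  -- what it needs to be (append only):
  cacheClientEntry.1 = cacheClientAddress
  cacheClientEntry.2 = cacheClientHttpRequests
  cacheClientEntry.3 = cacheClientAddressType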

I'll file a bug on that so it's not forgotten.

Regards
Henrik



Re: [squid-users] Squid 3.1 + mrtg

2009-11-08 Thread Babu Chaliyath
>
> Converting IPv4 address fields to IPv6+IPv4 shared trees...
>
> The client info table had cacheClientAddressType added as .1,
> cacheClientAddress shuffled to .2
>  ... which bumped all cacheClient* from .N to .N+1
>
> The peering table had cachePeerIndex added as .1 and cacheClientAddressType
> added as .2
>  ... which bumped all cachePeer* from .N to .N+2
>
> Amos

Now that's all going over my head as far as the mrtg setup for Squid
3.1 is concerned. Can you tell me where and what changes I need to
make to get it working?
Sorry, but I couldn't get much out of these.

Regards
Babs


RE: [squid-users] Compression in HTTPS traffic

2009-11-08 Thread squid squid



Hi,

Thank you for the reply.

If this is the case, does this mean that there will be compression if users 
have enabled the option "Use HTTP 1.1 through proxy connections", which can 
be found under IE's Tools, Internet Options, Advanced tab?

Regards.


> Date: Sat, 7 Nov 2009 17:44:38 -0200
> From: leolis...@solutti.com.br
> To: squid...@hotmail.com
> CC: squid-users@squid-cache.org
> Subject: Re: [squid-users] Compression in HTTPS traffic
>
> squid squid wrote:
>> Hi,
>>
>> Currently I am running Squid Version 2.7 Stable 4 on a Linux ES3 box
>> with 2.5GB RAM.
>>
>> Basically there is no caching configured on the squid apps and it is
>> being used like a middle man between client and web/apps servers which
>> have both http and https transactions.
>>
>> Would like to know: does squid support compression for https
>> transactions (i.e. CONNECT xxx.xxx.com:443)?
>>
>> If it does, is the compression on by default or is there some setting
>> or configuration needed?
>>
>> I am asking because httpwatch indicated no compression when I accessed
>> an https website through the squid proxy. However, if I accessed the
>> https website directly, httpwatch did show compression.
>>
>
> I think this has nothing to do with HTTPS. It's really related to the
> fact that Squid is NOT fully HTTP/1.1 compliant. Squid can RECEIVE
> HTTP/1.1 requests, but it only sends HTTP/1.0 requests. And in
> HTTP/1.0, compression is not supported.
>
> So no request made through squid will have any compression at all, HTTP
> and HTTPS alike. And that's because squid is an HTTP/1.0 proxy.
>
>
> --
>
> Atenciosamente / Sincerely,
> Leonardo Rodrigues
> Solutti Tecnologia
> http://www.solutti.com.br
>
> Minha armadilha de SPAM, NÃO mandem email
> gertru...@solutti.com.br
> My SPAMTRAP, do not email it
>
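
A quick way to double-check this on the wire, for anyone following along 
(the proxy name is a placeholder; --compressed makes curl offer 
"Accept-Encoding: gzip"):

  curl -v --compressed -x http://proxy.example.com:3128 http://www.example.com/

If the reply through the proxy lacks a "Content-Encoding: gzip" header while 
a direct fetch has one, that matches what httpwatch showed.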