Re: [squid-users] squidguard not redirecting

2013-05-18 Thread Amos Jeffries

On 18/05/2013 6:23 p.m., Helmut Hullen wrote:

Hello, Amos,

You wrote on 18.05.13:


SG has numerous problems which caused it not to do what it's
supposed to, including that "emergency" mode thing. Here are some
things to consider:
1) a BIG blacklist is overhyped - when I had a good look at our
requirements, there was only a small percentage of those websites
we actually wanted to block, the rest were either squatting
websites or non-existent, or not relevant. Squid could blacklist
(eg ACL DENY) those websites natively with a minimum of fuss.

Maybe - it does a good job even with these unnecessary entries.

If the list is that badly out of date, it will also be *missing* a
great number of entries.


Yes - maybe. But updating the list is a really simple job.


2) SG has not been updated for 4 or 5 years; if that's your latest
version, you are still out of date.

I can't see a big need for updating. Software really doesn't need
changes ("updates") every month or so.

For regular software, yes. But security software which sets itself up
to enumerate badness/goodness as a control method needs constant
updates.

Maybe - but "squidguard" does a really simple job: it looks into a list
of disallowed domains and URLs and then decides whether to allow or to
deny. That job doesn't need "constant updates".


Unfortunately it does so by forcing all the complications into Squid.

In order for SG to do that "really simple job", Squid is required to:
* manage a group of sub-processes, including all error handling when 
they fail or hang.
* generate and process requests and responses in a protocol to 
communicate with those sub-processes
* schedule client request handling around the delay from external 
processing, including recovery on SG errors
* clone the HTTP request and perform a sub-request when a redirected-to 
URL is presented by SG.


Much better to have Squid doing the simple ACL task and drop all of the 
above complications.
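
A minimal sketch of that native approach (the file path is hypothetical;
a dstdomain list file holds one destination domain per line, a leading
dot matching subdomains as well):

  acl blocked_sites dstdomain "/etc/squid/blacklist.domains"
  http_access deny blocked_sites

Squid loads dstdomain lists into an efficient in-memory structure at
startup, so even very large domain lists are matched quickly per request.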


Not to mention that Markus fed back a lot of the ufdbGuard improvements 
into Squid-3.2 and we now have ACLs which operate reasonably fast over 
big lists of regex. Not that using big lists of regex is a great idea 
anyway.






More to the point, you will not find much help now, or anyone to
fix it even if you could prove it's a bug.

"That depends!" - I know many colleagues who use "squidguard" since
years; the program doesn't need much help.

During which time a lot of things have progressed. Squid has gained a
lot of ACL types, better regex handling, better memory management, and
an external ACL helpers interface (which most installations of SG
should really be using).
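
For context, a hedged sketch of that external ACL interface (helper path
hypothetical): the helper receives each URL and answers OK/ERR, and the
result is used directly in http_access instead of via URL rewriting:

  external_acl_type urlcheck ttl=60 %URI /usr/local/bin/check_url.pl
  acl urlcheck_ok external urlcheck
  http_access deny !urlcheck_ok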



Which brings me back to my question of what SG was being used for. If
it is something which current Squid is capable of doing without SG,
then you may be able to gain better traffic performance simply by
removing SG from the software chain. As csn233 found, it may be worth
it.

The squidguard job is working with a really big blacklist, and with
some specialized ACLs.


Which, apart from the list files, is all based on information
sent to it by Squid.



I know "squid" can do this job too - and I maintain a schoolserver which
uses many of these possibilities of "squid". But then some other people
has to maintain the blacklist. That's no job for the administrator in
the school.


You are the first to mention that change of job.

The proposal was to:
 * make Squid load the blacklist
 * remove SG from the software chain
 * watch response time improve ?

Nowhere in that sequence does it require any change of who is creating 
the list.


At most the administrator may need to run a tool to convert from some 
strange format to one Squid can load. (FWIW: both squidblacklists.org 
and Shalla provide lists which have already been converted to 
Squid-compatible formats).
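
As a hypothetical sketch of such a conversion: a squidGuard-style
"domains" file holds bare hostnames, one per line, so prefixing each
entry with a dot yields a Squid dstdomain list that also matches
subdomains (all paths below are made up):

  sed 's/^/./' /var/lib/squidguard/db/porn/domains > /etc/squid/porn.domains

  acl porn_domains dstdomain "/etc/squid/porn.domains"
  http_access deny porn_domains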




"better traffic performance" may be a criteria, but (p.e.) blocking porn
URLs is (in schools) a criteria too.
Teachers have to look at "legal protection for children and young
persons" too.


I'm just talking about shifting the checks to the place where they can 
be tested most effectively. Not removing them.


Squid already has the information about user login, IP address, MAC 
address, and URL. No doubt Squid is already doing allow/deny access 
based on the login and IP address users are trying to get access with. 
Making Squid load the blocklist and use it in the http_access controls 
is relatively simple.
 So what is left for SG to do? In most cases you will find the answer 
is "nothing".



Note that we have not even got near discussing the content of those 
"regex" lists. I've seen many SquidGuard installations where the 
rationale for holding onto SG was that squid "can't handle this many 
regex". Listing 5 million domain names in a file with some 1% having a 
"/something" path tacked on the end does not make it a regex list.
 ** split the file into domains and domain+path entries. Suddenly you 
have a small file of url_regex, a small file of dstdom_regex and a long 
list of dstdomain ... which Squid can handle.
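
As a hypothetical sketch of that split (input and output file names are
made up), entries containing a path go to a url_regex list and bare
hostnames to a dstdomain list:

  grep '/' mixed.blacklist > blocked.urls
  grep -v '/' mixed.blacklist | sed 's/^/./' > blocked.domains

  acl blocked_urls url_regex "/etc/squid/blocked.urls"
  acl blocked_domains dstdomain "/etc/squid/blocked.domains"

The url_regex entries are still regular expressions, so dots in them may
need escaping; the dstdomain list needs no escaping at all, which is why
the long tail of plain hostnames belongs there.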

Re: [squid-users] squidguard not redirecting

2013-05-18 Thread Helmut Hullen
Hello, Amos,

You wrote on 18.05.13:

[...]

>> The squidguard job is working with a really big blacklist, and with
>> some specialized ACLs.

> Which, apart from the list files, is all based on information
> sent to it by Squid.

>> I know "squid" can do this job too - and I maintain a schoolserver
>> which uses many of these possibilities of "squid". But then some
>> other people has to maintain the blacklist. That's no job for the
>> administrator in the school.

> You are the first to mention that change of job.

> The proposal was to:
>   * make Squid load the blacklist
>   * remove SG from the software chain
>   * watch response time improve ?

> Nowhere in that sequence does it require any change of who is
> creating the list.

But that's one of the major problems for a user of any blacklist: who  
maintains it?

That's not Squid's job, of course.

> At most the administrator may need to run a tool to convert from some
> strange format to one Squid can load. (FWIW: both squidblacklists.org
> and Shalla provide lists which have already been converted to
> Squid-compatible formats).

Hmmm - sounds interesting.

[...]

> Note that we have not even got near discussing the content of those
> "regex" lists. I've seen many SquidGuard installations where the
> rationale for holding onto SG was that squid "can't handle this many
> regex".

And at least for a purpose such as a school server, that's a valid  
objection ...
A teacher has to teach pupils, not build regular expressions for a  
machine.

> Listing 5 million domain names in a file with some 1% having
> a "/something" path tacked on the end does not make it a regex list.
>   ** split the file into domains and domain+path entries. Suddenly you
> have a small file of url_regex, a small file of dstdom_regex and a
> long list of dstdomain ... which Squid can handle.

Yes - I know.
But that sounds more like a theory than a downloadable solution.

And again: who maintains this solution?

Best regards!
Helmut


Re: [squid-users] squidguard not redirecting

2013-05-18 Thread Marcus Kool



On 05/17/2013 11:40 PM, csn233 wrote:

You can use ufdbGuard free.


So it's the filter DB component that's not free. Thanks for clarifying.




No. ufdbGuard is free software, the same as squidguard.
ufdbGuard works with free databases or your own URL blacklist, just like 
squidguard.

ufdbGuard has additional features:
- also works with a commercial grade URL database
- enforces safesearch
- enforces safer HTTPS
- blocks HTTPS tunnels
- detects popular chat protocols over HTTPS
- is 3x faster than squidguard
- and a lot more

Marcus


Re: [squid-users] squidguard not redirecting

2013-05-18 Thread csn233
>> So it's the filter DB component that's not free. Thanks for clarifying.
>>
>>
>
> No. ufdbGuard is free software, the same as squidguard.

I was referring to URLfilterDB, which is the paid component by the looks
of it: "The license is both for use of URLfilterDB and subscription to
regular content updates for URLFilterDB."


Re: [squid-users] squidguard not redirecting

2013-05-18 Thread csn233
>>  a BIG blacklist is overhyped
>
> Nonsense, porn blacklists are big by nature; have you tried
> squid-porn.acl lately?
>
> Squidblacklist.org is the new kid on the blacklist block, and our porn
> blacklist is fantastic.

If you actually read what I said, which was "there was only a small
percentage of those websites we
actually wanted to block, the rest were either squatting websites or
non-existent, or not relevant."

The key word is relevance: I don't care how big it is if the vast
majority of sites in there don't exist (or no longer exist), don't
consume enough bandwidth to be worth our time, or our users never
visit them. Or we end up blocking sites we don't want to block. Or
sites that we wanted to block are not in the list.

Size alone is not useful if what's in there is mostly not relevant. If
you are blocking a lot of sites which don't need to be blocked, you
are actually wasting server resources. I don't know about other
people, but I prefer quality over quantity.


[squid-users] Re: using squid from home and office

2013-05-18 Thread juhan
Hi Amos,

Lol, that vulnerability was exactly what I needed. (Since we do not have
any intranet to be worried about.) I just wanted to prevent my kids from
spending too much time on sites like facebook, twitter etc. The problem
with browser configuration is that if they see squid is blocking access
after certain hours, they would change the configuration back to no
proxy. (Kids are smart.) So I'd rather have control of the home router
and do address resolving at the squid box. You mean this is impossible?
(Without downgrading to a vulnerable version.) Can't we maybe use some
other software which does the job and sends requests to squid? I am not
a very techie guy, so sorry if my idea is silly.

Regards





Re: [squid-users] Squid 3.1.8 restarting issue mem_hdr

2013-05-18 Thread Daniele Antolini
Hm, ok, I'll consider upgrading! Thanks

Sent from my iPhone

On 17 May 2013, at 18:52, csn233 wrote:

>> Somebody please can help me?
> 
> As Amos said, 3.1 has a great many bugs.
> 
> If you don't feel like trying the latest releases, you should try
> 3.1.22, at least.
> 
> I certainly would not stay on 3.1.8.


Re: [squid-users] Delay Pools with Digest and External Auth

2013-05-18 Thread Nils Hügelmann
Thanks, I've got it working using a modification of your recommendations.

I'll summarize my solution in case others have a similar problem (a
config sketch follows the list):

- Class 5 Delay Pools used (limit by Tag)
- External Auth helper program assigns username as EXT_TAG
- When Digest is used, there is a dummy helper that just assigns
username as EXT_TAG
- Dummy helper is activated using "http_access allow proxyauth
digest_tagger"

- Classification in multiple delay pools is done via other external_auth
ACLs
- These external_auths are activated (to circumvent slow/fast acl
issues) using "http_access allow EXTACLNAME !all"
- These external_auths need to interpret both the external_auth header
and the digest callback to get the username
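
A minimal sketch of how those pieces fit together, reusing the ACL names
and rates from the thread below (the digest dummy helper is per Amos'
suggestion; helper paths are hypothetical):

  # dummy helper that tags digest users with their username
  external_acl_type digest_tagger_helper %LOGIN /usr/local/bin/tag_user.sh
  acl proxyauth proxy_auth REQUIRED
  acl digest_tagger external digest_tagger_helper
  http_access allow proxyauth digest_tagger

  # two class 5 (tag-based) pools; each distinct tag, i.e. username,
  # gets its own bucket
  delay_pools 2
  delay_class 1 5
  delay_class 2 5

  # premiumcheck_passed is a slow external ACL; the "!all" rule can never
  # allow, but it forces evaluation during http_access so the cached
  # result is available to the fast delay_access check
  http_access allow premiumcheck_passed !all

  delay_parameters 1 2097152/2097152
  delay_access 1 allow premiumcheck_passed

  delay_parameters 2 76800/76800
  delay_access 2 deny premiumcheck_passed
  delay_access 2 allow all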

Best Regards

Nils
On 13.05.2013 02:32, Amos Jeffries wrote:
> On 12/05/2013 8:03 a.m., Nils Hügelmann wrote:
>> Hi,
>>
>> I want to use both Digest Auth and External Auth ("simpleheaderauth")
>> for authentication, and need to assign different delay pools to single
>> users based on another external_acl (premiumcheck).
>>
>> So I have (stripped down for readability)
>>
>> -
>> external_acl_type simpleheaderauth %>{Proxy-Authorization} simpleauth
>> external_acl_type premiumcheck %>{Proxy-Authorization} premium
>> auth_param digest program digestauth
>>
>> acl proxyauth proxy_auth REQUIRED
>> acl simpleheaderauth_passed external simpleheaderauth
>> acl premiumcheck_passed external premiumcheck
>>
>> # activate additional external acls
>> http_access allow premiumcheck_passed !all
>> http_access allow freethrottled_passed !all
>>
>> http_access allow simpleheaderauth_passed
>> http_access allow proxyauth
>> http_access deny !proxyauth
>>
>> http_access deny all
>> -
>>
>> Which works fine with regard to access control; one can either login via
>> "simpleheaderauth" (external_acl) or via "digestauth" (auth_param).
>>
>> I want to have 2 bandwidth limit levels.
>>
>> Situation from here is as follows:
>>
>> When using simpleheaderauth:
>>   - EXT_USER is available (username passed from simpleheaderauth
>> external_acl)
>>   - Tag is available (tag passed from simpleheaderauth external_acl)
>>   - premiumcheck_passed is properly set
>>
>> When using digestauth:
>>   - LOGIN is available (username passed from auth_param)
>>   - Tag is not available
>>   - premiumcheck_passed is not usable
>>
>> Delay pools need to work per individual user, so only class 5 pools (
>> tagrate ) or class 4 pools ( aggregate, network, individual, user )
>> would be possible.
>>
>> As simpleheaderauth has no user defined, and digestauth has no tag, my
>> first attempt for delay_pools was to create 2 sets of pools with 2
>> classes each:
>>
>> -
>> delay_class 1 5
>> delay_class 2 5
>> delay_class 3 4
>> delay_class 4 4
>>
>> # 1st set for simpleheaderauth
>> delay_parameters 2 2097152/2097152
>> delay_access 2 allow simpleheaderauth_passed premiumcheck_passed
>>
>> delay_parameters 1 76800/76800
>> delay_access 1 deny premiumcheck_passed
>> delay_access 1 allow simpleheaderauth_passed
>>
>> # 2nd set for digestauth
>> delay_parameters 4 -1/-1 -1/-1 -1/-1 2097152/2097152
>> delay_access 4 allow premiumcheck_passed
>>
>> delay_parameters 3 -1/-1 -1/-1 -1/-1 76800/76800
>> delay_access 3 deny premiumcheck_passed
>> delay_access 3 allow all
>> -
>>
>> 1. Can one somehow simplify this by making Tag available for digest, or
>> making class 4 username available for external_acl?
>
> I have work lined up on the TODO list for implementing tag on auth
> interfaces in the next Squid versions.
> If you are able to assist with sponsoring that I can divert some time
> back towards it.
>
> However, ...
>
> Alternative #1:
>  * make your simple and premium helper lookups produce tags indicating
> those levels.
>  * create a dummy external ACL helper lookup test which always
> responds "OK tag=digest-auth". Call it only after proxyauth ACL has
> succeeded doing digest.
>
> eg:
>   external_acl_type digestauth %LOGIN basic_fake_auth
>   acl digest_tagger external digestauth
>
>   http_access allow proxyauth digest_tagger
>
> You can then use "tag" type ACLs for delay_access.
>
>
>> 2. The problem with my attempt is that premiumcheck_passed is not
>> evaluated when using digestauth. Every digestauth user is assigned to
>> pool 3, while simpleheaderauth users are properly assigned based on
>> premiumcheck_passed. How can I solve this?
>
> You have isolated the problem pretty accurately. Its root cause is
> the mismatch between delay_access being a "fast" ACL check and the
> tests you are using being "slow" group ACLs.
>
> Amos



Re: [squid-users] Squid 3.3.4 icap request issue

2013-05-18 Thread Guy Helmer

On May 17, 2013, at 7:25 PM, Alex Rousskov  
wrote:

> On 05/15/2013 09:12 AM, Guy Helmer wrote:
> 
>> I'm seeing something odd with icap REQMOD requests for HTTP POST
>> requests in Squid 3.3.4: the encapsulated body appears to not be
>> terminated by \r\n0\r\n. This seems to occur consistently with bumped
>> SSL requests to graph.facebook.com:
> 
> The ICAP-encapsulated HTTP request body is not terminated because Squid
> has not received the entire HTTP request body yet. Your ICAP server
> timeout is too aggressive for this particular transaction.

Thanks for the analysis. Since the issue was not in the adaptation layer, 
I did further testing, which indicates that building with --enable-kqueue 
causes squid not to read the remainder of the body in these bumped HTTPS 
transactions.
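
For reference, a complete ICAP-encapsulated request body ends with the
zero-size chunk of HTTP/1.1 chunked coding. A rough sketch (hostnames
and byte offsets illustrative; every line is CRLF-terminated):

  REQMOD icap://icap.example.net/reqmod ICAP/1.0
  Host: icap.example.net
  Encapsulated: req-hdr=0, req-body=53

  POST /resource HTTP/1.1
  Host: graph.facebook.com

  b
  hello=world
  0

The trailing "...\r\n0\r\n\r\n" sequence (end of the last data chunk,
then the zero-size chunk) is what marks the body complete; a capture
that stops before it means the sender had not finished writing the body.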

> Here are the logged steps, FYI:
> […]
> 
>> 45:06.920 kid1| ModXact.cc(647) parseMore: 
>> ICAP/1.0 400 Bad Request
>> ...
> 
> 15 seconds later, the ICAP server gives up and returns a [bogus] error
> to Squid.

Thanks for catching that. There was a bug in my server that kept it from 
returning a 408 response.

Any thoughts on appropriate timeouts for the ICAP protocol? I have not seen 
recommendations for timeouts in RFC 3507 or the ICAP Errata.

Best regards,
Guy