[squid-users] Intermittent TCP_DENIED after authentication

2010-09-17 Thread David Parks
I'm trying to debug a problem in dev:

 - After performing digest authentication (using a custom authentication
helper), pages will load as expected.
 - But when I hit large pages which load many resources (for example
yahoo.com or latimes.com), sometimes they load fine, but if I hit them a
few times I'll get TCP_DENIED/407 errors and have to re-authenticate.
 - My authentication helper is not called after the initial authentication
request.

I'm trying to track down why the requests are denied when, by my
rationale, they should continue to succeed, and why it only seems to
happen when requests are put through in rapid succession. There's only
one user on the system, and I only notice it on pages with many
resources; I've never seen it happen on google.com, for example.
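One candidate explanation worth ruling out is digest nonce exhaustion: each nonce a digest helper hands out is valid for a limited number of uses, so a page that fires off many requests at once can burn through it and trigger a fresh 407 challenge. A sketch of the relevant squid.conf knobs (the helper path is a placeholder, not the actual setup; values shown are the usual defaults):

```
# Hypothetical digest setup; substitute your custom helper's real path.
auth_param digest program /usr/local/squid/libexec/my_digest_helper
auth_param digest children 5
auth_param digest realm Proxy

# Each nonce may be used at most this many times before squid issues a
# fresh challenge; a burst of many requests can hit this limit quickly.
auth_param digest nonce_max_count 50
auth_param digest nonce_max_duration 30 minutes
auth_param digest nonce_garbage_interval 5 minutes
```

If raising nonce_max_count makes the intermittent 407s go away, that would point at nonce turnover rather than the helper itself.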

Any thoughts are most appreciated.

David




Re: [squid-users] Re: Native Kerberos (squid_kerb_auth) with LDAP-Fallback (squid_ldap_auth)

2010-09-17 Thread John Doe

On 09/17/2010 03:28 PM, Amos Jeffries wrote:

Squid does not currently offer any way to selectively pick the auth
methods to advertise. There are a few possible designs and someone was
working on it a while back.

Offering a specific authentication method for a defined network would be 
a nice feature, don't you think? ;-)



Stripping away auth methods which have failed is not possible, due to
problems such as: how do you deal with a user who typo'd their password,
or who recently changed their password but whose browser still sends the
old one first?
Ok, you are of course right, it sounds complicated. But isn't there a 
basic-fallback mechanism for Kerberos/NTLM? Does that only work if there 
is a technical error with either Kerberos or NTLM?

Or is it a client thing which has to pick the basic mechanism?


The workaround that comes to mind is to run a "shell" squid instance for
each client, or at least for each primary auth type, which only does auth
and then funnels requests through to some parent proxy for handling.
We are currently running 4 separate squid instances (each on its own IP 
address, all sharing common ACL files, each with its own independent 
cache) across two real servers (because Squid 3.1.x is not SMP capable). 
We could dedicate two of them to LDAP-only with their own VIP address 
(the load balancer takes care of that) and the other two per server to 
NTLM.
I am not happy with that setup, but there are not many other 
possibilities. I have no idea how the instances will share resources; I 
would prefer 4 instances sharing all requests instead of 2 handling LDAP 
and 2 handling NTLM requests. It could lead to performance issues.


Anyway, thanks for your response, Squid is a great piece of software!

regards
Peter


[squid-users] Performance tips for squid 3 (config file included)?

2010-09-17 Thread Andrei
I'm a newbie. To get Squid started, all I managed to do was create the
config below. It works, but it feels like it could be a little faster;
I have about 300 users.
Are there any other options that you would recommend adding to this
config file? This is my config file for Squid 3.0 on Debian (P4, 40GB
IDE disk).

refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
refresh_pattern . 0 40% 40320
cache_dir ufs /var/spool/squid3 7000 16 256
visible_hostname proxy.ourdomain.com
http_port 176.16.0.9:3128 transparent
acl localnet src 176.16.0.0/255.255.248.0
http_access allow localnet
debug_options ALL,1
access_log /var/log/squid3/access.log squid
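A few commonly suggested additions might look like the sketch below (values are illustrative starting points, not tuned for this particular P4/IDE box, so treat them as assumptions to adjust):

```
# Memory cache and object size limits; tune to the machine's RAM.
cache_mem 256 MB
maximum_object_size 64 MB
maximum_object_size_in_memory 512 KB

# Don't cache dynamic pages. Order matters: this must appear BEFORE the
# catch-all "refresh_pattern ." line, since the first match wins.
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

# Basic hygiene: restrict destination ports, and finish the http_access
# list with a deny-all AFTER the "http_access allow localnet" line.
acl Safe_ports port 80 21 443
http_access deny !Safe_ports
http_access deny all
```

The deny-all at the end is the usual safeguard against the proxy being open to the world; without it, squid falls through to its built-in defaults.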


[squid-users] Squid auth methods that work without direct app support? Wrappers or "helper" apps for clients to auth to Squid?

2010-09-17 Thread Bucci, David G
Hi, all -- we have a situation where we would benefit from (or are at least 
exploring) turning on authentication in Squid.  But we have several apps that 
use HTTP (REST, basically) for their communication and don't have built-in 
support for Basic auth, Kerberos, etc.

So, a basic question.  Is anyone aware of any approaches to leveraging proxy 
authentication with custom-coded applications in such situations?  Are there 
any auth methods that can be configured to work from Windows clients 
"automagically", via built-in support at the network stack level, invisibly or 
independent of the custom application issuing the HTTP calls that are being 
proxied?  Or, alternatively, are there "wrapper" approaches that can be used to 
enable proxy authentication for the apps?
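One wrapper-style approach that may fit (sketched from memory of stunnel 4.x's CONNECT support; the option names should be verified against your stunnel version, and all hostnames/credentials below are invented) is to point the unmodified app at a local stunnel, and let stunnel supply the proxy credentials when it tunnels through Squid:

```
; stunnel.conf fragment (hypothetical names and addresses)
[wrapped-app]
client = yes
; the unmodified app talks to this local port instead of the real server
accept = 127.0.0.1:8080
; stunnel connects to the Squid proxy...
connect = squid.example.com:3128
; ...and issues an authenticated CONNECT to the real destination
protocol = connect
protocolHost = real-server.example.com:443
protocolUsername = appuser
protocolPassword = secret
```

Note this only covers Basic-style credentials on a CONNECT tunnel, so it suits the "poor man's VPN" shape more than per-request proxy auth.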

The client and server environments are both Windows, btw.  And we have 
flexibility to run Squid on the client as well as the servers, if that makes 
more approaches possible. (This indirectly relates to the threads a month ago 
about using Squid on both a client and server to create a poor man's SSL VPN; 
we ended up not doing that because of the instability of the SSL support in 
the Squid install from Acme. Instead we leveraged Squid only on the server, 
and are sending proxy calls through Stunnel.)

This might sound like an arcane situation (or maybe not, not sure) - but we're 
forced to secure 3rd-party applications whose code we aren't allowed to touch.

Tia!


 
David G. Bucci 

Chuck Norris can kick through all 6 degrees of separation,
hitting anyone, anywhere, in the face, at any time.
-- ChuckNorrisFacts.com



[squid-users] Re: Caching huge files in chunks?

2010-09-17 Thread Alex Rousskov

On 09/16/2010 06:21 PM, Guy Bashkansky wrote:

Here is the problem description, what solution might Squid or other
cache tools provide?

Some websites serve huge files, usually movies or binary distributions.
Typically a client issues byte range requests, which are not cacheable
as separate objects in Squid.
Waiting for the whole file to be brought into the cache takes way too
long, and is not granular enough for optimizations.

A possible solution would be if Squid (or other tool/plugin) knew how
to download huge files *in chunks*.
Then the tool would cache these chunks and transform them into
arbitrary ranges when serving client requests.
There are some possible optimizations, like predictive chunk caching
and cold chunks eviction.
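For illustration, the uncacheable pattern described above looks like this on the wire (URL and byte offsets invented):

```
GET /big/movie.bin HTTP/1.1
Host: example.com
Range: bytes=1048576-2097151

HTTP/1.1 206 Partial Content
Content-Range: bytes 1048576-2097151/734003200
Content-Length: 1048576
```

Squid can pass such a 206 through to the client, but it does not store the partial body as a reusable cache object.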

Does anybody know how to put together such a solution based on any existing tools?


Caching of partial responses is allowed by HTTP/1.1 but is not yet 
supported by Squid. It is a complex feature which can, indeed, be a 
useful optimization in some environments. For more information, please see


http://wiki.squid-cache.org/Features/PartialResponsesCaching


http://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F

Thank you,

Alex.



Re: [squid-users] Re: Native Kerberos (squid_kerb_auth) with LDAP-Fallback (squid_ldap_auth)

2010-09-17 Thread Chad Naugle
Perhaps you could install a separate squid at their sites which, in turn, 
routes through yours, depending on the "inter-networking" topology between 
the sites?

-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.
 


>>> Amos Jeffries  9/17/2010 9:28 AM >>>
On 18/09/10 00:14, guest01 wrote:
> Hi,
>
> I am stuck with a similar problem; has there been any solution to
> this? (BTW, I am running Squid 3.1.8 on RHEL 5.5)
>
> We are trying to achieve the following:
> CompanyA (us): own Active Directory domain and we are hosting the
> squid web server (central forward proxy for internet access with ICAP
> capabilities)
> CompanyB: completely independent Active Directory Domain
> (CompanyC: might use our squid soon)
> (CompanyD: might use our squid soon)
>
> We have one shared squid server which should authenticate CompanyA
> with NTLM (or kerberos) and CompanyB with LDAP (they insist on LDAP, I
> don't know why, but I suppose without a domain trust I could
> authenticate only one company with NTLM or kerberos and would have
> troubles, right?)
> NTLM is the preferred authentication method: if a client of CompanyA
> wants to look up something on the Internet, he will be authenticated
> with NTLM.
> If CompanyB wants to look up something, the browser submits NTLM data
> (valid for their domain, not ours) which is not valid for our domain;
> in theory, the browser should then try Basic authentication (e.g.
> LDAP), but that does not happen. It keeps trying NTLM (Firefox as well
> as IE8 on Windows 7). For further info, see [1],[2].
>
> Unfortunately, I don't have many options:
> - disable NTLM authentication in IE8 for CompanyB; IE then only
> tries LDAP, which works
> - authenticate CompanyA by IP and disable NTLM authentication (= our
> current setup)
>
> Of course it would be possible to authenticate everybody by LDAP (we
> are using an OpenLDAP metadirectory which talks to the ADs), but that
> is only Basic auth, and a very bad idea.
>
> Does anybody have any additional ideas? How do you guys handle
> authentication for multiple independent customers?
>
> In my opinion this is a client problem; unfortunately IE and even FF
> are too dumb. From a functional perspective it should be standard to
> fall back to the weaker (Basic/LDAP) authentication if the stronger
> (NTLM) one does not work (from a security perspective, I am glad that
> this does not seem to work ;-)). Is there any option for squid to
> track authentication and only offer Basic authentication if NTLM
> failed [3]? Or anything similar?
>
> I would appreciate any response!
> best regards
> Peter

Squid does not currently offer any way to selectively pick the auth 
methods to advertise. There are a few possible designs and someone was 
working on it a while back.

Stripping away auth methods which have failed is not possible, due to 
problems such as: how do you deal with a user who typo'd their password, 
or who recently changed their password but whose browser still sends the 
old one first?


The workaround that comes to mind is to run a "shell" squid instance for 
each client, or at least for each primary auth type, which only does auth 
and then funnels requests through to some parent proxy for handling.


Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.8
   Beta testers wanted for 3.2.0.2




Re: [squid-users] Re: Native Kerberos (squid_kerb_auth) with LDAP-Fallback (squid_ldap_auth)

2010-09-17 Thread Amos Jeffries

On 18/09/10 00:14, guest01 wrote:

Hi,

I am stuck with a similar problem; has there been any solution to
this? (BTW, I am running Squid 3.1.8 on RHEL 5.5)

We are trying to achieve the following:
CompanyA (us): own Active Directory domain and we are hosting the
squid web server (central forward proxy for internet access with ICAP
capabilities)
CompanyB: completely independent Active Directory Domain
(CompanyC: might use our squid soon)
(CompanyD: might use our squid soon)

We have one shared squid server which should authenticate CompanyA
with NTLM (or kerberos) and CompanyB with LDAP (they insist on LDAP, I
don't know why, but I suppose without a domain trust I could
authenticate only one company with NTLM or kerberos and would have
troubles, right?)
NTLM is the preferred authentication method: if a client of CompanyA
wants to look up something on the Internet, he will be authenticated
with NTLM.
If CompanyB wants to look up something, the browser submits NTLM data
(valid for their domain, not ours) which is not valid for our domain;
in theory, the browser should then try Basic authentication (e.g. LDAP),
but that does not happen. It keeps trying NTLM (Firefox as well
as IE8 on Windows 7). For further info, see [1],[2].

Unfortunately, I don't have many options:
- disable NTLM authentication in IE8 for CompanyB; IE then only tries
LDAP, which works
- authenticate CompanyA by IP and disable NTLM authentication (= our
current setup)

Of course it would be possible to authenticate everybody by LDAP (we
are using an OpenLDAP metadirectory which talks to the ADs), but that
is only Basic auth, and a very bad idea.

Does anybody have any additional ideas? How do you guys handle
authentication for multiple independent customers?

In my opinion this is a client problem; unfortunately IE and even FF
are too dumb. From a functional perspective it should be standard to
fall back to the weaker (Basic/LDAP) authentication if the stronger
(NTLM) one does not work (from a security perspective, I am glad that
this does not seem to work ;-)). Is there any option for squid to
track authentication and only offer Basic authentication if NTLM
failed [3]? Or anything similar?

I would appreciate any response!
best regards
Peter


Squid does not currently offer any way to selectively pick the auth 
methods to advertise. There are a few possible designs and someone was 
working on it a while back.


Stripping away auth methods which have failed is not possible, due to 
problems such as: how do you deal with a user who typo'd their password, 
or who recently changed their password but whose browser still sends the 
old one first?



The workaround that comes to mind is to run a "shell" squid instance for 
each client, or at least for each primary auth type, which only does auth 
and then funnels requests through to some parent proxy for handling.
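As a rough sketch of that workaround (all hostnames, addresses, and helper paths below are invented for illustration), the front "shell" instance would carry only the auth config and pass everything to the real proxy:

```
# Front instance, e.g. bound to a CompanyB-facing IP: does LDAP/Basic
# auth only, no caching of its own.
auth_param basic program /usr/lib/squid/squid_ldap_auth -b "dc=example" -h ldap.example.com
auth_param basic children 5
auth_param basic realm Proxy

acl ldap_users proxy_auth REQUIRED
http_access allow ldap_users
http_access deny all

# Funnel everything to the parent that does the caching/ICAP work.
cache_peer parent-proxy.example.com parent 8080 0 no-query default
never_direct allow all
```

A second front instance on another IP would carry only the NTLM/Negotiate config, so each client population only ever sees the auth scheme meant for it.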



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] Re: Native Kerberos (squid_kerb_auth) with LDAP-Fallback (squid_ldap_auth)

2010-09-17 Thread guest01
Hi,

I am stuck with a similar problem; has there been any solution to
this? (BTW, I am running Squid 3.1.8 on RHEL 5.5)

We are trying to achieve the following:
CompanyA (us): own Active Directory domain and we are hosting the
squid web server (central forward proxy for internet access with ICAP
capabilities)
CompanyB: completely independent Active Directory Domain
(CompanyC: might use our squid soon)
(CompanyD: might use our squid soon)

We have one shared squid server which should authenticate CompanyA
with NTLM (or kerberos) and CompanyB with LDAP (they insist on LDAP, I
don't know why, but I suppose without a domain trust I could
authenticate only one company with NTLM or kerberos and would have
troubles, right?)
NTLM is the preferred authentication method: if a client of CompanyA
wants to look up something on the Internet, he will be authenticated
with NTLM.
If CompanyB wants to look up something, the browser submits NTLM data
(valid for their domain, not ours) which is not valid for our domain;
in theory, the browser should then try Basic authentication (e.g. LDAP),
but that does not happen. It keeps trying NTLM (Firefox as well
as IE8 on Windows 7). For further info, see [1],[2].

Unfortunately, I don't have many options:
- disable NTLM authentication in IE8 for CompanyB; IE then only tries
LDAP, which works
- authenticate CompanyA by IP and disable NTLM authentication (= our
current setup)

Of course it would be possible to authenticate everybody by LDAP (we
are using an OpenLDAP metadirectory which talks to the ADs), but that
is only Basic auth, and a very bad idea.

Does anybody have any additional ideas? How do you guys handle
authentication for multiple independent customers?

In my opinion this is a client problem; unfortunately IE and even FF
are too dumb. From a functional perspective it should be standard to
fall back to the weaker (Basic/LDAP) authentication if the stronger
(NTLM) one does not work (from a security perspective, I am glad that
this does not seem to work ;-)). Is there any option for squid to
track authentication and only offer Basic authentication if NTLM
failed [3]? Or anything similar?

I would appreciate any response!
best regards
Peter

additional infos:
[1] http://img830.imageshack.us/img830/3920/squidntlmnotworking.png

[2] squid config:
#NTLM
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
auth_param ntlm keep_alive on

# LDAP authentication
auth_param basic children 5
auth_param basic realm Proxy
auth_param basic credentialsttl 120 minute
auth_param basic program /opt/squid/libexec/squid_ldap_auth -b
"dc=squid-proxy" -D "uid=user" -w passwd -h server -f "(uid=%s)"

[3] Tcpdump shows me the headers with the following info (squid offers
NTLM and Basic):
GET http://fxfeeds.mozilla.com/en-US/firefox/headlines.xml HTTP/1.1
Host: fxfeeds.mozilla.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1)
Gecko/20090624 Firefox/3.5
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive
X-Moz: livebookmarks

HTTP/1.0 407 Proxy Authentication Required
Server: squid/3.1.8
Mime-Version: 1.0
Date: Fri, 17 Sep 2010 10:09:12 GMT
Content-Type: text/html
Content-Length: 1482
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
Vary: Accept-Language
Content-Language: en-us
Proxy-Authenticate: NTLM
Proxy-Authenticate: Basic realm="Proxy"
X-Cache: MISS from xlsqip02_1
Via: 1.0 xlsqip02_1 (squid/3.1.8)
Connection: keep-alive


On Fri, Aug 13, 2010 at 4:01 PM, Tom Tux  wrote:
> Hi
>
> I ran squid with the named debug options. The cache.log output seems
> a little complicated, so the only way I can see is to keep a
> commented-out native LDAP authentication configuration which I can
> enable if the kerberos mechanism fails.
>
> Or does somebody have such a config (kerberos with squid_kerb_ldap to
> get AD groups AND squid_ldap_auth with a memberOf filter) running?
>
> Thanks a lot.
> Regards,
> Tom
>
> 2010/8/11 Amos Jeffries :
>> Tom Tux wrote:
>>>
>>> Hi Amos
>>>
>>> Thanks a lot for this explanation. Both configurations seperately -
>>> native kerberos and native ldap - are working fine. But in
>>> combination, there is still one problem.
>>>
>>> Here is my actual configuration (combined two mechanism) again:
>>>
>>> auth_param negotiate program /usr/local/squid/libexec/squid_kerb_auth -i
>>> auth_param negotiate children 50
>>> auth_param negotiate keep_alive on
>>> external_acl_type SQUID_KERB_LDAP ttl=3600 negative_ttl=3600 %LOGIN
>>> /usr/local/squid_kerb_ldap/bin/squid_kerb_ldap -d -g "InternetUsers"
>>> acl INTERNET_ACCESS external SQUID_KERB_LDAP
>>>
>>> external_acl_type SQUID_DENY_KERB_LDAP ttl=3600 negative_ttl=3600
>>> %LOGIN /usr/local/squid_kerb_ldap/bin/squid_kerb_ldap -d -g
>>> "DenyInternetUsers"
>>> acl DENY_INTERNET_ACCESS external SQUI

[squid-users] Re: can't increase Filedescriptor

2010-09-17 Thread flm

Hello,
I found a solution to my problem.

I just added the following instruction to /etc/sysconfig/squid:
ulimit -SHn 4096

Rebooted my server, and now:
File descriptor usage for squid:
Maximum number of file descriptors:   4096

But I still don't understand why I had to add this, because I already
increased the FD limit on another Squid without it.

Note: Squid runs under user squid (chroot)
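For comparison, these are the two places the limit is usually raised on RHEL-style systems (the SQUID_MAXFD variable is an assumption based on Red Hat's packaged init script, and max_filedescriptors requires a Squid built with setrlimit support):

```
# /etc/sysconfig/squid -- raise the shell limit before squid starts.
# The packaged init script reads SQUID_MAXFD; a plain ulimit also works.
SQUID_MAXFD=4096
ulimit -SHn 4096

# squid.conf -- ask squid itself to use up to this many descriptors
# (capped by whatever the OS/ulimit allows at startup).
max_filedescriptors 4096
```

If the other Squid "just worked", its init script or package may have been setting the ulimit already, which would explain the difference.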

-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/can-t-increase-Filedescriptor-tp2540496p2543775.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Automatic redirection on igoogle.fr

2010-09-17 Thread Amos Jeffries

On 17/09/10 21:49, Babelo Gmvsdm wrote:


The problem is back. Is this what you wanted, Amos?

HTTP/1.1 302 Moved Temporarily
Date: Fri, 17 Sep 2010 09:44:04 GMT
Server: Apache mod_fcgid/2.3.5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 
FrontPage/5.0.2.2635
X-Powered-By: PHP/5.2.14
Location: http://newwave.orge.pl/?q=
Content-Length: 0
Content-Type: text/html
X-Cache: MISS from Web-Filter
X-Cache-Lookup: MISS from Web-Filter:3128
Via: 1.1 Web-Filter (squid)
Proxy-Connection: keep-alive

Cheers

Herc.


Part of it. Google certainly does not run Apache, so the redirect is 
coming from an infected source.


The "MISS from Web-Filter" indicates that the 302 redirect is thankfully 
not being stored by the squid calling itself "Web-Filter". This is why 
clearing the cache did not resolve it.


A double-check for myself:  "Web-Filter" is your squid?

The request the client makes to get that response back will give clues 
about where the infection is and how squid is getting it.



You can protect the clients while investigating by adding this to your 
squid.conf at or near the top of the http_access lines:

  acl newwave dstdomain newwave.orge.pl
  http_access deny newwave


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] Alerting when cache Peer is used.

2010-09-17 Thread Amos Jeffries

On 17/09/10 23:14, GIGO . wrote:


I have configured my proxy servers in two regions for backup internet path of 
each other by declaring the following directives.

Directives on Proxy A:

cache_peer A parent 8080 0 proxy-only
prefer_direct on
nonhierarchical_direct off
cache_peer_access A allow all


Directives on Proxy B:

cache_peer B parent 8080 0 proxy-only
prefer_direct on
nonhierarchical_direct off
cache_peer_access B allow all


Is there a way to generate an email alert to the admins whenever a cache 
peer is used?



Not from Squid. That is a job for network availability software.

You could hack up a script to scan squid's access.log for the peer 
hierarchy codes (DIRECT, FIRST_UP_PARENT, etc.) being used.
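A minimal sketch of such a script (the log path and mail command in the usage comment are placeholders; the hierarchy codes are the standard ones squid writes in access.log):

```shell
#!/bin/sh
# count_peer_hits: count access.log lines whose hierarchy code says a
# parent peer served the request (rather than DIRECT).
count_peer_hits() {
    grep -cE 'FIRST_UP_PARENT|DEFAULT_PARENT|ANY_OLD_PARENT|ROUNDROBIN_PARENT' "$1"
}

# Example cron usage (placeholder addresses):
#   hits=$(count_peer_hits /var/log/squid/access.log)
#   [ "$hits" -gt 0 ] && echo "peer used $hits times" | \
#       mail -s "squid peer alert" admins@example.com
```

Run against freshly rotated logs (or remember the last-seen offset) so the same peer usage is not reported twice.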



Note that the setting is only "prefer" _direct. Squid can go to the peer 
even when direct network access is working perfectly, if the origin web 
server simply takes too long to reply to a connect attempt.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] Squid + Squidguard loaded but not filtering anything

2010-09-17 Thread Amos Jeffries

On 17/09/10 21:06, Babelo Gmvsdm wrote:


Hi,
I have a very strange behaviour with squid today. It loads normally:

root  2308  0.0  0.0   8164  1940 ?  Ss  10:53  0:00 /usr/sbin/squid3 -YC -f /etc/squid3/squid.conf
proxy 2310  2.8  0.7  38740 15580 ?  S   10:53  0:00 (squid) -YC -f /etc/squid3/squid.conf
proxy 2312  1.2  0.2   6688  4540 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
[...19 more identical (squidGuard) helper processes, PIDs 2313-2331...]

I checked my iptables and it seems to be ok:
DNAT   tcp  --  anywhere anywheretcp dpt:www 
to:10.2.3.2:3128
Squid seems to load ok:

2010/09/17 10:53:24| Creating Swap Directories
2010/09/17 10:53:24| Starting Squid Cache version 3.1.2 for i486-pc-linux-gnu...
2010/09/17 10:53:24| Process ID 2310
2010/09/17 10:53:24| With 65535 file descriptors available
2010/09/17 10:53:24| Initializing IP Cache...
2010/09/17 10:53:24| DNS Socket created at [::], FD 7
2010/09/17 10:53:24| Adding nameserver 1.2.3.4 from /etc/resolv.conf
2010/09/17 10:53:24| Adding nameserver 1.2.3.5 from /etc/resolv.conf
2010/09/17 10:53:24| helperOpenServers: Starting 20/20 'squidGuard' processes
2010/09/17 10:53:25| Unlinkd pipe opened on FD 52
2010/09/17 10:53:25| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
2010/09/17 10:53:25| Store logging disabled
2010/09/17 10:53:25| Swap maxSize 0 + 262144 KB, estimated 20164 objects
2010/09/17 10:53:25| Target number of buckets: 1008
2010/09/17 10:53:25| Using 8192 Store buckets
2010/09/17 10:53:25| Max Mem size: 262144 KB
2010/09/17 10:53:25| Max Swap size: 0 KB
2010/09/17 10:53:25| Using Least Load store dir selection
2010/09/17 10:53:25| Set Current Directory to /var/spool/squid3
2010/09/17 10:53:25| Loaded Icons.
2010/09/17 10:53:25| Accepting intercepted HTTP connections at 0.0.0.0:3128, FD 53.
2010/09/17 10:53:25| Accepting ICP messages at [::]:3130, FD 54.
2010/09/17 10:53:25| HTCP Disabled.
2010/09/17 10:53:25| Squid modules loaded: 0
2010/09/17 10:53:25| Adaptation support is off.
2010/09/17 10:53:25| Ready to serve requests.
2010/09/17 10:53:26| storeLateRelease: released 0 objects

But it lets everything through; the squidGuard blacklists are totally bypassed.
Please help.
Cheers
Herc.



(Something weird happened to the wrap.)

Do you have any evidence that the requests are arriving at squidGuard, 
or of what it's doing with them?
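One way to gather such evidence (a sketch; the URL is a placeholder, and the squidGuard path should match your install) is to feed squidGuard a request on stdin in the same redirector format squid uses, i.e. "URL client-ip/fqdn ident method". A blacklisted URL should come back rewritten to your block page, while an empty line means squidGuard passed it through:

```
echo "http://some-blacklisted-site.example/ 10.2.3.99/- - GET" | \
    squidGuard -c /usr/local/squidGuard/squidGuard.conf -d
```

If that works on the command line but not via squid, the problem is likely in how squid hands requests to the helpers (e.g. the url_rewrite_program line) rather than in the blacklists themselves.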



PS: please upgrade to 3.1.8 as soon as possible; several major security 
problems have been resolved since your version was released.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] SSL Reverse Proxy to Support Multiple Web Site WITHOUT wildcard crt

2010-09-17 Thread Amos Jeffries

On 17/09/10 19:32, Nikolaos Pavlidis wrote:

Hello Amos, all,

Thank you for your response. I understand what you mean (that's
something, at least), but I fail to see how the syntax should look.


Answers inline.



My config is as follows; please advise (this is not working, of course):

# NETWORK OPTIONS
#
-
http_port 80 accel defaultsite=www.domain.com vhost
https_port 443 cert=/etc/squid/uob/sid_domain.crt
key=/etc/squid/uob/sid_domain.key cafile=/etc/squid/uob/sid_domain.ca
defaultsite=sid.domain.com vhost

>
> https_port 443 cert=/etc/squid/uob/helpdesk_domain.crt
> key=/etc/squid/uob/helpdesk_domain.key
> cafile=/etc/squid/uob/helpdesk_domain.ca defaultsite=helpdesk.domain.com
> vhost

The public-facing IP address is needed to open multiple same-numbered ports.

(wrapped for easy reading)

https_port 10.0.0.1:443 accel vhost defaultsite=sid.domain.com
   cert=/etc/squid/uob/sid_domain.crt
   key=/etc/squid/uob/sid_domain.key
   cafile=/etc/squid/uob/sid_domain.ca

https_port 10.0.0.2:443 accel vhost defaultsite=helpdesk.domain.com
   cert=/etc/squid/uob/helpdesk_domain.crt
   key=/etc/squid/uob/helpdesk_domain.key
   cafile=/etc/squid/uob/helpdesk_domain.ca



visible_hostname www.domain.com
unique_hostname cache1.domain.com
offline_mode off
icp_port 3130
request_body_max_size 32 MB

# OPTIONS WHICH AFFECT THE CACHE SIZE
#
-
cache_mem 4096 MB
maximum_object_size 8 MB
maximum_object_size_in_memory 256 KB

# LOGFILE PATHNAMES AND CACHE DIRECTORIES
#
-
cache_dir aufs /var/cache/squid 61440 16 256
emulate_httpd_log on
logfile_rotate 100
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log /var/log/squid/access.log combined


Just out of interest: how does forcing the apache "common" format with 
emulate_httpd_log mix with explicitly forcing a locally defined 
"combined" format?

 Which one do you expect to be used in the log?


cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log


Only if you need it. Otherwise:
 cache_store_log none


debug_options ALL,1,33,3,20,3


(A space is needed between each section,level option pair.)
debug_options ALL,1 33,3 20,3



# OPTIONS FOR EXTERNAL SUPPORT PROGRAMS
#
-
auth_param basic children 10
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

# OPTIONS FOR TUNING THE CACHE
#
-
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i \.css 1440 50% 2880 override-expire
refresh_pattern -i \.swf 1440 50% 2880 ignore-reload override-expire


Missing:
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0


refresh_pattern . 1440 50% 4320 override-expire

# ACCESS CONTROLS
#
-

acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl purge method PURGE
acl CONNECT method CONNECT
acl shoutcast rep_header X-HTTP09-First-Line ^ICY\s[0-9]
upgrade_http0.9 deny shoutcast
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

# reverce-proxy configuration
#
-

cache_peer 194.80.213.28 sibling 80 3130 proxy-only no-digest
no-netdb-exchange


(this is where a deny rule for requests from itself comes in handy to block looping)

cache_peer_access 194.80.213.28 deny from_cache2
cache_peer_access 194.80.213.28 allow all



cache_peer 10.1.62.230 parent 80 0 no-query originserver no-digest
name=lhdl_cst_srv login=PASS
acl sites_lhdl_cst dstdomain lhdl.cst.domain.com
http_access allow sites_lhdl_cst
cache_peer_access lhdl_cst_srv allow sites_lhdl_cst
cache_peer_access lhdl_cst_srv deny from_cache2


missing "deny all" there.




cache_peer 212.219.119.48 parent 443 0 no-query originserver ssl
sslflags=DONT_VERIFY_PEER no-digest name=beweb_srv_ssl login=PASS
acl sites_beweb_ssl dstdomain sid.domain.com
http_access allow sites_beweb_ssl
cache_peer_access beweb_srv_ssl allow sites_beweb_ssl
cache_peer_access 

[squid-users] Alerting when cache Peer is used.

2010-09-17 Thread GIGO .

I have configured my proxy servers in two regions for backup internet path of 
each other by declaring the following directives.
 
Directives on Proxy A:
 
cache_peer A parent 8080 0 proxy-only
prefer_direct on
nonhierarchical_direct off
cache_peer_access A allow all
 
 
Directives on Proxy B:
 
cache_peer B parent 8080 0 proxy-only
prefer_direct on
nonhierarchical_direct off
cache_peer_access B allow all
 
 
Is there a way to generate an email alert to the admins whenever a cache 
peer is used?
 
Thanking you, &
 
Best Regards,
 
Bilal
  

Re: [squid-users] SSL Reverse Proxy to Support Multiple Web Site WITHOUT wildcard crt

2010-09-17 Thread Nikolaos Pavlidis
Hello Amos, all,

Thank you for your response. I understand what you mean (that's
something, at least), but I fail to see how the syntax should look.

My config is as follows please advise(this is not working of course):

# NETWORK OPTIONS
#
-
http_port 80 accel defaultsite=www.domain.com vhost
https_port 443 cert=/etc/squid/uob/sid_domain.crt
key=/etc/squid/uob/sid_domain.key cafile=/etc/squid/uob/sid_domain.ca
defaultsite=sid.domain.com vhost
https_port 443 cert=/etc/squid/uob/helpdesk_domain.crt
key=/etc/squid/uob/helpdesk_domain.key
cafile=/etc/squid/uob/helpdesk_domain.ca defaultsite=helpdesk.domain.com
vhost
visible_hostname www.domain.com
unique_hostname cache1.domain.com
offline_mode off
icp_port 3130
request_body_max_size 32 MB

# OPTIONS WHICH AFFECT THE CACHE SIZE
#
-
cache_mem 4096 MB
maximum_object_size 8 MB
maximum_object_size_in_memory 256 KB

# LOGFILE PATHNAMES AND CACHE DIRECTORIES
#
-
cache_dir aufs /var/cache/squid 61440 16 256
emulate_httpd_log on
logfile_rotate 100
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log /var/log/squid/access.log combined
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
debug_options ALL,1 33,3 20,3

# OPTIONS FOR EXTERNAL SUPPORT PROGRAMS
#
-
auth_param basic children 10
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

# OPTIONS FOR TUNING THE CACHE
#
-
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i \.css        1440    50%     2880    override-expire
refresh_pattern -i \.swf        1440    50%     2880    ignore-reload override-expire
refresh_pattern .               1440    50%     4320    override-expire

# ACCESS CONTROLS
#
-

acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl purge method PURGE
acl CONNECT method CONNECT
acl shoutcast rep_header X-HTTP09-First-Line ^ICY\s[0-9]
upgrade_http0.9 deny shoutcast
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

# reverse-proxy configuration
#
-

cache_peer 194.80.213.28 sibling 80 3130 proxy-only no-digest
no-netdb-exchange

cache_peer 10.1.62.230 parent 80 0 no-query originserver no-digest
name=lhdl_cst_srv login=PASS
acl sites_lhdl_cst dstdomain lhdl.cst.domain.com
http_access allow sites_lhdl_cst
cache_peer_access lhdl_cst_srv allow sites_lhdl_cst
cache_peer_access lhdl_cst_srv deny from_cache2
cache_peer_access lhdl_cst_srv deny all

cache_peer 212.219.119.48 parent 443 0 no-query originserver ssl
sslflags=DONT_VERIFY_PEER no-digest name=beweb_srv_ssl login=PASS
acl sites_beweb_ssl dstdomain sid.domain.com
http_access allow sites_beweb_ssl
cache_peer_access beweb_srv_ssl allow sites_beweb_ssl
cache_peer_access beweb_srv_ssl deny from_cache2
cache_peer_access beweb_srv_ssl deny all

cache_peer 10.1.108.15 parent 443 0 no-query originserver ssl
sslflags=DONT_VERIFY_PEER no-digest name=helpdesk_srv_ssl login=PASS
acl sites_helpdesk_ssl dstdomain helpdesk.domain.com
http_access allow sites_helpdesk_ssl
cache_peer_access helpdesk_srv_ssl allow sites_helpdesk_ssl
cache_peer_access helpdesk_srv_ssl deny from_cache2
cache_peer_access helpdesk_srv_ssl deny all

# forward-proxy security restrictions
#
-

http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all

http_reply_access allow all
acl from_cache2 src 194.80.213.28
icp_access allow from_cache2
icp_access deny all

# ADMINISTRATIVE PARAMETERS
#
-

shutdown_lifetime 15 second
httpd_suppress_version_string on
cache_mgr cache...@d
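For reference, the two https_port 443 lines in the NETWORK OPTIONS section above cannot both bind the wildcard address on port 443: without a wildcard certificate (and assuming no SNI support, as in Squid of this era), each certificate needs its own listening IP. A sketch of the usual workaround; the 10.1.62.x addresses are placeholders:

```
# sketch: bind each certificate to its own IP address (addresses are placeholders)
https_port 10.1.62.241:443 cert=/etc/squid/uob/sid_domain.crt key=/etc/squid/uob/sid_domain.key cafile=/etc/squid/uob/sid_domain.ca defaultsite=sid.domain.com vhost
https_port 10.1.62.242:443 cert=/etc/squid/uob/helpdesk_domain.crt key=/etc/squid/uob/helpdesk_domain.key cafile=/etc/squid/uob/helpdesk_domain.ca defaultsite=helpdesk.domain.com vhost
```

DNS for sid.domain.com and helpdesk.domain.com would then point at the respective addresses.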

RE: [squid-users] Problem accessing a particular site through squid

2010-09-17 Thread Seb Harrington
-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: 15 September 2010 15:21
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Problem accessing a particular site through
squid

On 15/09/10 23:24, Seb Harrington wrote:
>
> Hi everyone,
>
> I have a problem when accessing http://smallsteps4life.direct.gov.uk/
> through squid.
>
> When accessing the site directly the site is properly formatted, when 
> accessing through squid the site appears 'unformatted', some of the 
> images do not load and it looks as if the CSS has not been applied.
>
> I thought this behaviour was a little strange so I've tested it on two
> more instances of squid, one a default fresh install allowing
> everything through (the all acl).
>
> When accessing the site these are the logs:
>   access.log: http://pastebin.com/HtkyfjUJ
>   store.log: http://pastebin.com/0NjnDZzW
>   cache.log: did not output anything useful or informative.
>
> I'm using the ubuntu version of squid3 (apt-get squid3) and I'm using 
> ubuntu 10.04 Lucid Lynx.
>
> Squid version: Squid Cache: Version 3.0.STABLE19
>
> Could someone please run that website through their version of squid for
> me and let me know if this is a squid issue, a website issue or a bug in
> the ubuntu packaged version of squid.
>
> Cheers,
>
> Seb
>

Hi Amos,

Thanks for the reply,

> That trace from the "working" squid? There is zero CSS in it. Just
> JavaScript files that generate page content on the fly. Most of the
> content seems to be going through HTTPS which passes straight through
> Squid.

That was the trace from squid freshly installed from apt-get in ubuntu. I
wanted to try a fresh installation so I could prove whether or not squid
was causing the website to display incorrectly. It is showing the same
behaviour as the production squid proxy.

> The all ACL working where regular config catches only some occasional
> files makes me think either those files are on a domain being blocked,
> or you have regex patterns that are catching more than you are aware of.

The squid conf as being used on the fresh installed squid is here:
http://pastebin.com/5QTAL2px

All I've changed is I've uncommented the acl all src all on line # 578
and changed http_access allow all on line # 629. I have made no other
modifications, and the site still displays incorrectly. Any other ideas?

That is a very valid point about the CSS. I've looked at the source of
the webpage and all CSS/JS seems to come from the same domain and
subdomain, but it is weird that the CSS isn't coming through in the
logs.

Cheers,

Seb

This email carries a disclaimer, a copy of which may be read at 
http://learning.longhill.org.uk/disclaimer


RE: [squid-users] Problem accessing a particular site through squid

2010-09-17 Thread Seb Harrington
Hi Chad,

Thanks for your reply.

> Is your squid configured to advertise that it is a proxy to the sites
> it connects to?

How do I do this please?

Cheers,

Seb

-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.
 


>>> "Seb Harrington"  9/16/2010 6:15 
>>> AM >>>
Hi DaniL,

Thanks for your reply.

> Do you have a content filter installed on your machine?

On the production squid server we run DansGuardian as a content filter,
but on the other two servers I installed squid3 onto (to check whether
DG or squid was at fault) it was just squid, and the problem with the
site remained.

Cheers,

Seb

This email carries a disclaimer, a copy of which may be read at
http://learning.longhill.org.uk/disclaimer


Travel Impressions made the following annotations
-
"This message and any attachments are solely for the intended recipient
and may contain confidential or privileged information.  If you are not
the intended recipient, any disclosure, copying, use, or distribution of
the information included in this message and any attachments is
prohibited.  If you have received this communication in error, please
notify us by reply e-mail and immediately and permanently delete this
message and any attachments.
Thank you."



Re: [squid-users] Automatic redirection on igoogle.fr

2010-09-17 Thread Babelo Gmvsdm

The problem is back. Is this what you wanted, Amos?

HTTP/1.1 302 Moved Temporarily
Date: Fri, 17 Sep 2010 09:44:04 GMT
Server: Apache mod_fcgid/2.3.5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 
FrontPage/5.0.2.2635
X-Powered-By: PHP/5.2.14
Location: http://newwave.orge.pl/?q=
Content-Length: 0
Content-Type: text/html
X-Cache: MISS from Web-Filter
X-Cache-Lookup: MISS from Web-Filter:3128
Via: 1.1 Web-Filter (squid)
Proxy-Connection: keep-alive

Cheers

Herc.
  

[squid-users] Squid + Squidguard loaded but not filtering anything

2010-09-17 Thread Babelo Gmvsdm

Hi,
I have a very strange behaviour with squid today. It loads normally:

root      2308  0.0  0.0   8164  1940 ?  Ss  10:53  0:00 /usr/sbin/squid3 -YC -f /etc/squid3/squid.conf
proxy     2310  2.8  0.7  38740 15580 ?  S   10:53  0:00 (squid) -YC -f /etc/squid3/squid.conf
proxy     2312  1.2  0.2   6688  4540 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2313  0.8  0.2   6688  4544 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2314  1.0  0.2   6684  4532 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2315  1.0  0.2   6684  4532 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2316  1.0  0.2   6684  4536 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2317  1.0  0.2   6684  4536 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2318  1.2  0.2   6684  4536 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2319  1.2  0.2   6684  4536 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2320  1.0  0.2   6684  4532 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2321  1.0  0.2   6684  4532 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2322  1.2  0.2   6684  4532 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2323  1.6  0.2   6684  4532 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2324  1.2  0.2   6684  4532 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2325  1.2  0.2   6684  4532 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2326  1.4  0.2   6684  4532 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2327  1.4  0.2   6684  4532 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2328  1.2  0.2   6684  4532 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2329  1.2  0.2   6684  4532 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2330  1.4  0.2   6684  4536 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
proxy     2331  1.0  0.2   6684  4536 ?  S   10:53  0:00 (squidGuard) -c /usr/local/squidGuard/squidGuard.conf
I checked my iptables and they seem to be OK:
DNAT       tcp  --  anywhere             anywhere            tcp dpt:www 
to:10.2.3.2:3128
Squid seems to load ok:
2010/09/17 10:53:24| Creating Swap Directories
2010/09/17 10:53:24| Starting Squid Cache version 3.1.2 for i486-pc-linux-gnu...
2010/09/17 10:53:24| Process ID 2310
2010/09/17 10:53:24| With 65535 file descriptors available
2010/09/17 10:53:24| Initializing IP Cache...
2010/09/17 10:53:24| DNS Socket created at [::], FD 7
2010/09/17 10:53:24| Adding nameserver 1.2.3.4 from /etc/resolv.conf
2010/09/17 10:53:24| Adding nameserver 1.2.3.5 from /etc/resolv.conf
2010/09/17 10:53:24| helperOpenServers: Starting 20/20 'squidGuard' processes
2010/09/17 10:53:25| Unlinkd pipe opened on FD 52
2010/09/17 10:53:25| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
2010/09/17 10:53:25| Store logging disabled
2010/09/17 10:53:25| Swap maxSize 0 + 262144 KB, estimated 20164 objects
2010/09/17 10:53:25| Target number of buckets: 1008
2010/09/17 10:53:25| Using 8192 Store buckets
2010/09/17 10:53:25| Max Mem size: 262144 KB
2010/09/17 10:53:25| Max Swap size: 0 KB
2010/09/17 10:53:25| Using Least Load store dir selection
2010/09/17 10:53:25| Set Current Directory to /var/spool/squid3
2010/09/17 10:53:25| Loaded Icons.
2010/09/17 10:53:25| Accepting intercepted HTTP connections at 0.0.0.0:3128, FD 53.
2010/09/17 10:53:25| Accepting ICP messages at [::]:3130, FD 54.
2010/09/17 10:53:25| HTCP Disabled.
2010/09/17 10:53:25| Squid modules loaded: 0
2010/09/17 10:53:25| Adaptation support is off.
2010/09/17 10:53:25| Ready to serve requests.
2010/09/17 10:53:26| storeLateRelease: released 0 objects
But it lets everything pass; the squidGuard blacklists are totally bypassed.
Please help.
Cheers
Herc.
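One well-known squidGuard failure mode matches this symptom: if squidGuard cannot read its config or blacklist databases at startup, it drops into emergency (pass-through) mode and approves every URL, while its helper processes still appear in ps exactly as above. Worth checking squidGuard's own log for errors, and that squid.conf actually wires the helper in. A sketch, with the config path taken from the ps listing; the binary path is an assumption for your install:

```
# squid.conf (sketch -- binary path is an assumption, config path from ps above)
url_rewrite_program /usr/local/bin/squidGuard -c /usr/local/squidGuard/squidGuard.conf
url_rewrite_children 20
```

If the directive is present, the next place to look is squidGuard's log file for complaints about the blacklist databases at helper startup.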