Re: [squid-users] Problems with Squid 3.5 and freshclam

2014-06-03 Thread Amos Jeffries

On 3/06/2014 11:11 p.m., DI Peter Burgstaller wrote:
> Dear all,
> 
> I upgraded my Squid installation from the default CentOS 2.6 package to
> the current 3.5 version.

You mean 3.5.0.0 (aka 3.HEAD right now) or 3.4.5?

Amos



[squid-users] Problems with Squid 3.5 and freshclam

2014-06-03 Thread DI Peter Burgstaller
Dear all,

I upgraded my Squid installation from the default CentOS 2.6 package to the
current 3.5 version.
Since then, a number of network services no longer work via Squid.
The most problematic one is freshclam.
I can see in access.log that the files are being transferred, even with a
result code of 200.
However, the software does not seem to "get" the entire file.

$ grep clamav /var/log/squid/access.log

1401792483.486   2113 10.1.1.1 TCP_MISS/200 24621072 GET
http://db.local.clamav.net/daily.cvd - HIER_DIRECT/81.223.20.171
application/octet-stream
1401793369.051  10741 10.1.1.1 TCP_MISS/200 64721048 GET
http://db.at.clamav.net/main.cvd - HIER_DIRECT/81.223.20.171
application/octet-stream
1401793409.859   5755 10.1.1.1 TCP_MISS/200 64721048 GET
http://db.at.clamav.net/main.cvd - HIER_DIRECT/81.223.20.171
application/octet-stream
1401793455.523  10597 10.1.1.1 TCP_MISS/200 64721048 GET
http://db.at.clamav.net/main.cvd - HIER_DIRECT/81.223.20.171
application/octet-stream
1401793505.637  20049 10.1.1.1 TCP_MISS/200 64721119 GET
http://db.local.clamav.net/main.cvd - HIER_DIRECT/193.1.193.64 text/plain
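To dig into entries like these, the default-format log fields can be parsed programmatically; a small sketch (an editor's addition, not from the thread) that pulls the status and byte counts out of two of the lines above:

```python
# Parse Squid native access.log lines:
# timestamp duration client result/status bytes method URL user peer type
log_lines = [
    "1401792483.486   2113 10.1.1.1 TCP_MISS/200 24621072 GET "
    "http://db.local.clamav.net/daily.cvd - HIER_DIRECT/81.223.20.171 "
    "application/octet-stream",
    "1401793369.051  10741 10.1.1.1 TCP_MISS/200 64721048 GET "
    "http://db.at.clamav.net/main.cvd - HIER_DIRECT/81.223.20.171 "
    "application/octet-stream",
]

entries = []
for line in log_lines:
    f = line.split()
    entries.append({
        "status": f[3],      # e.g. TCP_MISS/200
        "bytes": int(f[4]),  # reply size sent to the client, headers included
        "url": f[6],
    })

for e in entries:
    print(e["url"], e["status"], e["bytes"])
```

Comparing the logged byte counts against the size of the file freshclam actually wrote on disk shows whether the truncation happens inside or after the proxy.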

A freshclam -v shows the following output:

[root@proxy clamav]# freshclam -v
Current working dir is /var/lib/clamav
Max retries == 3
ClamAV update process started at Tue Jun  3 13:02:38 2014
Using IPv6 aware code
Querying current.cvd.clamav.net
TTL: 1094
Software version from DNS: 0.98.3
Connecting via proxy
Retrieving http://db.at.clamav.net/main.cvd
Trying to download http://db.at.clamav.net/main.cvd (IP: 10.1.1.1)
nonblock_recv: recv timing out (30 secs)
WARNING: getfile: Download interrupted: Operation now in progress (IP:
10.1.1.1)
WARNING: Can't download main.cvd from db.at.clamav.net
Querying main.0.77.0.0.0A0E0FE9.ping.clamav.net
Trying again in 5 secs...
ClamAV update process started at Tue Jun  3 13:03:24 2014
Using IPv6 aware code
Querying current.cvd.clamav.net
TTL: 1048
Software version from DNS: 0.98.3
Connecting via proxy
Retrieving http://db.at.clamav.net/main.cvd
Trying to download http://db.at.clamav.net/main.cvd (IP: 10.1.1.1)
nonblock_recv: recv timing out (30 secs)
WARNING: getfile: Download interrupted: Operation now in progress (IP:
10.1.1.1)
WARNING: Can't download main.cvd from db.at.clamav.net
Querying main.0.77.0.0.0A0E0FE9.ping.clamav.net
Trying again in 5 secs...
ClamAV update process started at Tue Jun  3 13:04:04 2014
Using IPv6 aware code
Querying current.cvd.clamav.net
TTL: 1008
Software version from DNS: 0.98.3
Connecting via proxy
Retrieving http://db.at.clamav.net/main.cvd
Trying to download http://db.at.clamav.net/main.cvd (IP: 10.1.1.1)
nonblock_recv: recv timing out (30 secs)
ERROR: getfile: Download interrupted: Operation now in progress (IP:
10.1.1.1)
ERROR: Can't download main.cvd from db.at.clamav.net
Querying main.0.77.0.0.0A0E0FE9.ping.clamav.net
Giving up on db.at.clamav.net...
ClamAV update process started at Tue Jun  3 13:04:45 2014
Using IPv6 aware code
Querying current.cvd.clamav.net
TTL: 967
Software version from DNS: 0.98.3
Connecting via proxy
Retrieving http://db.local.clamav.net/main.cvd
Trying to download http://db.local.clamav.net/main.cvd (IP: 10.1.1.1)
nonblock_recv: recv timing out (30 secs)
ERROR: getfile: Download interrupted: Operation now in progress (IP:
10.1.1.1)
ERROR: Can't download main.cvd from db.local.clamav.net
Querying main.0.77.0.0.0A0E0FE9.ping.clamav.net
Giving up on db.local.clamav.net...
Update failed. Your network may be down or none of the mirrors listed in
/etc/freshclam.conf is working. Check
http://www.clamav.net/support/mirror-problem for possible reasons.

A direct connection - without squid - works as expected.
Thanks very much for your help, Peter
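Not part of the original thread, but a hedged pointer: the `nonblock_recv: recv timing out (30 secs)` lines match freshclam's receive timeout, so if the proxy is slow to start streaming the ~60 MB main.cvd, raising that timeout in freshclam.conf is a common workaround (a sketch; the proxy address is taken from the log above and the values are illustrative):

```
# /etc/freshclam.conf -- proxy settings plus a longer receive timeout.
HTTPProxyServer 10.1.1.1
HTTPProxyPort 3128
# Give slow proxied transfers longer than the 30s shown in the log.
ReceiveTimeout 300
ConnectTimeout 60
```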

-- 
Best regards,
DI Peter Burgstaller
---
Head of Hosted Services

SKIDATA AG
Untersbergstrasse 40
A-5083 Grödig / Salzburg
[p] +43 (0) 6246 888-4155
[f] +43 (0) 6246 888-7
[e] peter.burgstal...@skidata.com
[w] www.skidata.com
[§] www.skidata.com/legal-at.html
Please consider the environment before printing this e-mail.





RE: [squid-users] Problems with Group detection with ADS

2014-05-21 Thread Puschmann, Sven
Hi Amos,

Samba/Winbind version: 3.6.6 (from Debian APT sources)
Squid version: 3.1.20 (from Debian APT sources)
Both proxies run the same versions.

There are two domains with mixed subnets; the proxy servers have unique
names and IP addresses, and both resolve correctly via DNS.

The new proxy server has nothing to do with the running one; it is a newly
installed, separate system.

With wbinfo -u the new proxy lists only the users from its own domain, as
does the running proxy (for its own domain).

The output of wbinfo_group.pl with your suggestion:
===
echo "user.name@ pxy-standard" | /usr/lib/squid3/wbinfo_group.pl
failed to call wbcGetGroups: WBC_ERR_DOMAIN_NOT_FOUND
Could not get groups for user user.name@
ERR

echo "user.name@. pxy-standard" | 
/usr/lib/squid3/wbinfo_group.pl
failed to call wbcGetGroups: WBC_ERR_DOMAIN_NOT_FOUND
Could not get groups for user user.name@.
ERR
===

Same output on the running proxy (with a user from its own domain).
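An editor's aside (not in the thread): WBC_ERR_DOMAIN_NOT_FOUND from wbcGetGroups usually means winbind cannot map the domain part of the name; the smb.conf [global] settings below are the usual suspects to compare between the two proxies (standard Samba parameters, illustrative values):

```
[global]
   # Resolve users/groups from the other domain in the forest too.
   allow trusted domains = yes
   # Accept "user.name" without an explicit DOMAIN prefix.
   winbind use default domain = yes
   # Separator wbinfo_group.pl expects in DOMAIN+user input.
   winbind separator = +
```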

Greetings
Sven Puschmann


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, 21 May 2014 10:22
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Problems with Group detection with ADS

So NTLM and Basic user names work.

How about Kerberos credentials?  (user.name@DOMAIN-NAME)


> Does anybody have an idea what the problem might be? I'm really confused
> that it works via IP address but not via DNS name. DNS resolution is
> working (from any client).
> 

* Squid version(s)?

* Samba version?

* is there anything different about the IPs the proxy hostname resolves
to on each site?

* are the Kerberos keytabs for the proxy's hostname correctly installed on
the client machines in the new location?
 - compare the sets available to users at each location and see if there
is a difference.


Amos



Re: [squid-users] Problems with Group detection with ADS

2014-05-21 Thread Amos Jeffries
On 21/05/2014 8:00 p.m., Puschmann, Sven wrote:
> Hi Folks,
> 
> I've installed a new Squid server for our holding company (same
> Active Directory forest, but another domain) and I have a little problem
> with it.
> 
> Here's the Auth and ACL External Config from both Servers (running and newly 
> installed)
> 
> Running Config (Part):
> ===
> ### Kerberos
> auth_param negotiate program /usr/local/bin/negotiate_wrapper -d --ntlm 
> /usr/bin/ntlm_auth --diagnostics --helper-protocol=squid-2.5-ntlmssp 
> --domain= --kerberos /usr/lib/squid3/squid_kerb_auth -d -s 
> GSS_C_NO_NAME
> auth_param negotiate children 10
> auth_param negotiate keep_alive off
> 
> ### NTLM
> auth_param ntlm program /usr/bin/ntlm_auth --diagnostics 
> --helper-protocol=squid-2.5-ntlmssp --domain=
> auth_param ntlm children 10
> auth_param ntlm keep_alive off
> 
> ### BASIC
> auth_param basic program /usr/lib/squid3/squid_ldap_auth -R -b 
> "dc=,dc=" -D squid@. -W 
> /etc/squid3/ldappass.txt -f sAMAccountName=%s -h 
> auth_param basic children 10
> auth_param basic realm Internet Proxy
> auth_param basic credentialsttl 1 minute
> 
> 
> ### Access Regeldefinitionen ###
> 
> acl auth proxy_auth REQUIRED
> 
> external_acl_type testForNTGroup %LOGIN /usr/lib/squid3/wbinfo_group.pl
> external_acl_type urlblacklist_lookup ttl=60 %URI /usr/local/bin/url_lookup 
> adult,aggressive,artnudes,chat,dating,desktopsillies,dialers,drugs,filehosting,gambling,games,hacking,instantmessaging,mail,mixed_adult,naturism,onlineauctions,onlinegames,phishing,porn,proxy,ringtones,sexuality,sexualityeducation,socialnetworking,spyware,violence,virusinfected,warez,webmail
> external_acl_type urlblacklist_lookup_soc ttl=60 %URI 
> /usr/local/bin/url_lookup 
> adult,aggressive,artnudes,chat,dating,desktopsillies,dialers,drugs,filehosting,gambling,games,hacking,instantmessaging,mail,mixed_adult,naturism,onlineauctions,onlinegames,phishing,porn,proxy,ringtones,sexuality,sexualityeducation,spyware,violence,virusinfected,warez,webmail
> 
> acl Full external testForNTGroup RZ-PXY-Full
> acl Standard external testForNTGroup RZ-PXY-Standard
> acl Blocked external testForNTGroup RZ-PXY-Blocked
> acl StandardSocial external testForNTGroup RZ-PXY-SocialMedia
> acl StandardVideo external testForNTGroup RZ-PXY-Videoportale
> acl StandardAdvanced external testForNTGroup RZ-PXY-StandardAdvanced
> ===
> 
> Problem Config (same part):
> ===
> ### Kerberos
> auth_param negotiate program /usr/local/bin/negotiate_wrapper -d --ntlm 
> /usr/bin/ntlm_auth --diagnostics --helper-protocol=squid-2.5-ntlmssp 
> --domain= --kerberos /usr/lib/squid3/squid_kerb_auth -d -s 
> GSS_C_NO_NAME
> auth_param negotiate children 10
> auth_param negotiate keep_alive off
> 
> ### NTLM
> auth_param ntlm program /usr/bin/ntlm_auth --diagnostics 
> --helper-protocol=squid-2.5-ntlmssp --domain=
> auth_param ntlm children 10
> auth_param ntlm keep_alive off
> 
> ### BASIC
> auth_param basic program /usr/lib/squid3/squid_ldap_auth -R -b 
> "dc=,dc=" -D squid@. -W 
> /etc/squid3/ldappass.txt -f sAMAccountName=%s -h 
> auth_param basic children 10
> auth_param basic realm Internet Proxy
> auth_param basic credentialsttl 1 minute
> 
> 
> ### Access Regeldefinitionen ###
> 
> acl auth proxy_auth REQUIRED
> 
> external_acl_type testForNTGroup %LOGIN /usr/lib/squid3/wbinfo_group.pl
> external_acl_type urlblacklist_lookup ttl=60 %URI /usr/local/bin/url_lookup 
> adult,aggressive,artnudes,blog,chat,dating,desktopsillies,dialers,drugs,filehosting,gambling,games,hacking,instantmessaging,mail,mixed_adult,naturism,onlineauctions,onlinegames,phishing,porn,proxy,ringtones,sexuality,sexualityeducation,socialnetworking,social_networks,spyware,violence,virusinfected,warez,webmail
> external_acl_type urlblacklist_lookup_soc ttl=60 %URI 
> /usr/local/bin/url_lookup 
> adult,aggressive,artnudes,chat,dating,desktopsillies,dialers,drugs,filehosting,gambling,games,hacking,instantmessaging,mail,mixed_adult,naturism,onlineauctions,onlinegames,phishing,porn,proxy,ringtones,sexuality,sexualityeducation,socialnetworking,spyware,violence,virusinfected,warez,webmail
> 
> acl Full external testForNTGroup pxy-full
> acl Standard external testForNTGroup pxy-standard
> acl Blocked external testForNTGroup pxy-blocked
> acl StandardSocial external testForNTGroup pxy-socialmedia
> acl StandardVideo external testForNTGroup pxy-videoportale
> acl StandardAdvanced external testForNTGroup pxy-standardadvanced
> ===
> 
> The problem is:
> If the user connects via the hostname to the proxy server, he lands in
> the last "Deny All" ACL because the proxy server cannot determine the
> user's groups correctly. But if I set the proxy via the direct IP
> address, everything is okay.

[squid-users] Problems with Group detection with ADS

2014-05-21 Thread Puschmann, Sven
Hi Folks,

I've installed a new Squid server for our holding company (same
Active Directory forest, but another domain) and I have a little problem
with it.

Here's the Auth and ACL External Config from both Servers (running and newly 
installed)

Running Config (Part):
===
### Kerberos
auth_param negotiate program /usr/local/bin/negotiate_wrapper -d --ntlm 
/usr/bin/ntlm_auth --diagnostics --helper-protocol=squid-2.5-ntlmssp 
--domain= --kerberos /usr/lib/squid3/squid_kerb_auth -d -s 
GSS_C_NO_NAME
auth_param negotiate children 10
auth_param negotiate keep_alive off

### NTLM
auth_param ntlm program /usr/bin/ntlm_auth --diagnostics 
--helper-protocol=squid-2.5-ntlmssp --domain=
auth_param ntlm children 10
auth_param ntlm keep_alive off

### BASIC
auth_param basic program /usr/lib/squid3/squid_ldap_auth -R -b 
"dc=,dc=" -D squid@. -W 
/etc/squid3/ldappass.txt -f sAMAccountName=%s -h 
auth_param basic children 10
auth_param basic realm Internet Proxy
auth_param basic credentialsttl 1 minute


### Access Regeldefinitionen ###

acl auth proxy_auth REQUIRED

external_acl_type testForNTGroup %LOGIN /usr/lib/squid3/wbinfo_group.pl
external_acl_type urlblacklist_lookup ttl=60 %URI /usr/local/bin/url_lookup 
adult,aggressive,artnudes,chat,dating,desktopsillies,dialers,drugs,filehosting,gambling,games,hacking,instantmessaging,mail,mixed_adult,naturism,onlineauctions,onlinegames,phishing,porn,proxy,ringtones,sexuality,sexualityeducation,socialnetworking,spyware,violence,virusinfected,warez,webmail
external_acl_type urlblacklist_lookup_soc ttl=60 %URI /usr/local/bin/url_lookup 
adult,aggressive,artnudes,chat,dating,desktopsillies,dialers,drugs,filehosting,gambling,games,hacking,instantmessaging,mail,mixed_adult,naturism,onlineauctions,onlinegames,phishing,porn,proxy,ringtones,sexuality,sexualityeducation,spyware,violence,virusinfected,warez,webmail

acl Full external testForNTGroup RZ-PXY-Full
acl Standard external testForNTGroup RZ-PXY-Standard
acl Blocked external testForNTGroup RZ-PXY-Blocked
acl StandardSocial external testForNTGroup RZ-PXY-SocialMedia
acl StandardVideo external testForNTGroup RZ-PXY-Videoportale
acl StandardAdvanced external testForNTGroup RZ-PXY-StandardAdvanced
===

Problem Config (same part):
===
### Kerberos
auth_param negotiate program /usr/local/bin/negotiate_wrapper -d --ntlm 
/usr/bin/ntlm_auth --diagnostics --helper-protocol=squid-2.5-ntlmssp 
--domain= --kerberos /usr/lib/squid3/squid_kerb_auth -d -s 
GSS_C_NO_NAME
auth_param negotiate children 10
auth_param negotiate keep_alive off

### NTLM
auth_param ntlm program /usr/bin/ntlm_auth --diagnostics 
--helper-protocol=squid-2.5-ntlmssp --domain=
auth_param ntlm children 10
auth_param ntlm keep_alive off

### BASIC
auth_param basic program /usr/lib/squid3/squid_ldap_auth -R -b 
"dc=,dc=" -D squid@. -W 
/etc/squid3/ldappass.txt -f sAMAccountName=%s -h 
auth_param basic children 10
auth_param basic realm Internet Proxy
auth_param basic credentialsttl 1 minute


### Access Regeldefinitionen ###

acl auth proxy_auth REQUIRED

external_acl_type testForNTGroup %LOGIN /usr/lib/squid3/wbinfo_group.pl
external_acl_type urlblacklist_lookup ttl=60 %URI /usr/local/bin/url_lookup 
adult,aggressive,artnudes,blog,chat,dating,desktopsillies,dialers,drugs,filehosting,gambling,games,hacking,instantmessaging,mail,mixed_adult,naturism,onlineauctions,onlinegames,phishing,porn,proxy,ringtones,sexuality,sexualityeducation,socialnetworking,social_networks,spyware,violence,virusinfected,warez,webmail
external_acl_type urlblacklist_lookup_soc ttl=60 %URI /usr/local/bin/url_lookup 
adult,aggressive,artnudes,chat,dating,desktopsillies,dialers,drugs,filehosting,gambling,games,hacking,instantmessaging,mail,mixed_adult,naturism,onlineauctions,onlinegames,phishing,porn,proxy,ringtones,sexuality,sexualityeducation,socialnetworking,spyware,violence,virusinfected,warez,webmail

acl Full external testForNTGroup pxy-full
acl Standard external testForNTGroup pxy-standard
acl Blocked external testForNTGroup pxy-blocked
acl StandardSocial external testForNTGroup pxy-socialmedia
acl StandardVideo external testForNTGroup pxy-videoportale
acl StandardAdvanced external testForNTGroup pxy-standardadvanced
===

The problem is:
If the user connects via the hostname to the proxy server, he lands in the
last "Deny All" ACL because the proxy server cannot determine the user's
groups correctly. But if I set the proxy via the direct IP address,
everything is okay.
On the running Squid (first config snippet) there is no such problem.

Here are some D

RE: [squid-users] Problems with Custom Error Pages

2014-01-31 Thread Puschmann, Sven
>> deny_info ERR_DENIED_URLBS blockedsites !Full all
>
>NP: hark back to the first problem. deny_info takes *one* ACL name for the 
>custom page to be linked to. Not a set of ACLs.
>
>> http_access deny blockedsites !Full all
>
>Second problem;
> "blockedsites" is not the last ACL on the line, so it is not the reason for 
> denial. It is just one of the steps to get to that reason. "all" is the 
> reason here.
>
>NP: you can create dummy ACLs for linking to the deny_info like this:
>
>  acl dummy_urlbs src all
>  deny_info ERR_DENIED_URLBS dummy_urlbs
>  http_access deny blockedsites !Full dummy_urlbs

Thank you SO much! I just moved the "blockedsites" ACL to be the last one,
and now it works fine :)

Now it's: http_access deny !Full all blockedsites
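A side note from the editor (not from the thread): since the `all` ACL matches every request, it adds nothing in the middle of that line; the rule can be written with blockedsites last, which also keeps it as the ACL that deny_info matches against:

```
deny_info ERR_DENIED_URLBS blockedsites
http_access deny !Full blockedsites
```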



Re: [squid-users] Problems with Custom Error Pages

2014-01-31 Thread Amos Jeffries
On 1/02/2014 1:52 a.m., Puschmann, Sven wrote:
> Hi there,
> 
> I have a big problem. I'm configuring a Squid proxy for our company and
> want to show custom error pages.
> 
> The proxy uses Active Directory authentication with group-based policies.
> 
> Now I want to show error pages in our corporate identity.
> 
> Here's my config (I need to use  in the http_access rule to prevent 
> showing re-authentication windows):
> http_access allow prioritysites
> 
> http_access deny !Safe_ports !ftp
> http_access deny CONNECT !SSL_ports !ftp
> http_access allow CONNECT SSL_ports ftp
> 
> deny_info ERR_DENIED_AUTH !auth

First problem;
 the deny_info directive is just for linking an ACL to the response
content which should be displayed whenever that ACL is the reason for
the deny.

The correct way to write the above line is:
  deny_info ERR_DENIED_AUTH auth
...

> http_access deny !auth

... which will be sent to the client for any authentication rejection
done by the login line above.

> http_access allow allowedsites
> deny_info ERR_DENIED_BLOCKED Blocked
> http_access deny Blocked

NP: since the only thing you are going to do with "Full" is allow it, why
not allow it up here before doing any of the denies below?

> deny_info ERR_DENIED_URLBS blockedsites !Full all

NP: hark back to the first problem. deny_info takes *one* ACL name for
the custom page to be linked to. Not a set of ACLs.

> http_access deny blockedsites !Full all

Second problem;
 "blockedsites" is not the last ACL on the line, so it is not the reason
for denial. It is just one of the steps to get to that reason. "all" is
the reason here.

NP: you can create dummy ACLs for linking to the deny_info like this:

  acl dummy_urlbs src all
  deny_info ERR_DENIED_URLBS dummy_urlbs
  http_access deny blockedsites !Full dummy_urlbs

> deny_info ERR_DENIED_BADKEY bad_keywords !Full all
> http_access deny bad_keywords !Full all
> deny_info ERR_DENIED_URLBL urlblacklist !Full all
> http_access deny !urlblacklist !Full all
> http_access allow Standard
> http_access allow Full
> deny_info ERR_DENIED_SONST all
> http_access deny all
> 
> 
> The Problem is, that my Squid always shows the Blocked Sites Error Page, even 
> when the Bad-Keyword ACL acts.

Don't you mean it's always showing the ERR_DENIED_SONST page?
 That is because that page was linked to the "all" ACL, and the "all"
ACL is the last one on most of your deny lines.
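Applying the dummy-ACL trick above to the remaining deny lines from the quoted config would look roughly like this (an editor's sketch using the ACL and page names already in the thread):

```
# One dummy ACL per custom page, placed last so it becomes the "reason"
# for the denial that deny_info is matched against.
acl dummy_badkey src all
deny_info ERR_DENIED_BADKEY dummy_badkey
http_access deny bad_keywords !Full dummy_badkey

acl dummy_urlbl src all
deny_info ERR_DENIED_URLBL dummy_urlbl
http_access deny !urlblacklist !Full dummy_urlbl
```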

Amos


Re: [squid-users] problems with some requests

2014-01-29 Thread Amos Jeffries
On 29/01/2014 9:26 p.m., m.shahve...@ece.ut.ac.ir wrote:
> For example I searched something on "https://www.google.com" and
> access.log is as below:
> 
> 1390982819.881    651 10.1.116.50 TCP_MISS/200 855 POST
> http://clients1.google.com/ocsp - HIER_DIRECT/216.239.32.20
> application/ocsp-response


These are HTTP requests for OCSP certificate information *about* HTTPS
clients/servers. It is not HTTPS traffic.

It is one of the more nasty oddities of SSL/TLS that it requires working
un-encrypted HTTP connectivity to fetch certificate verification
information :-(.


The HTTPS "GET https://www.google.com ..." part is going through a
different connection encrypted on port 443.

Amos



Re: [squid-users] problems with some requests

2014-01-29 Thread m . shahverdi
For example I searched something on "https://www.google.com" and
access.log is as below:

1390982819.881    651 10.1.116.50 TCP_MISS/200 855 POST
http://clients1.google.com/ocsp - HIER_DIRECT/216.239.32.20
application/ocsp-response
1390982820.131    655 10.1.116.50 TCP_MISS/200 855 POST
http://clients1.google.com/ocsp - HIER_DIRECT/216.239.32.20
application/ocsp-response
1390982820.869    981 10.1.116.50 TCP_MISS/200 1878 POST
http://gtglobal-ocsp.geotrust.com/ - HIER_DIRECT/23.59.139.27
application/ocsp-response
1390982821.129    997 10.1.116.50 TCP_MISS/200 1878 POST
http://gtglobal-ocsp.geotrust.com/ - HIER_DIRECT/23.59.139.27
application/ocsp-response
1390982830.019    671 10.1.116.50 TCP_MISS/200 855 POST
http://clients1.google.com/ocsp - HIER_DIRECT/216.239.32.20
application/ocsp-response
1390982830.196    676 10.1.116.50 TCP_MISS/200 855 POST
http://clients1.google.com/ocsp - HIER_DIRECT/216.239.32.20
application/ocsp-response
1390982830.266    665 10.1.116.50 TCP_MISS/200 855 POST
http://clients1.google.com/ocsp - HIER_DIRECT/216.239.32.20
application/ocsp-response
1390982830.440    669 10.1.116.50 TCP_MISS/200 855 POST
http://clients1.google.com/ocsp - HIER_DIRECT/216.239.32.20
application/ocsp-response


> On 29/01/2014 7:28 p.m., m.shahve...@ece.ut.ac.ir wrote:
>> Thanks a lot, but I'm running a squid on another system that is passing
>> https requests and logging them in "access.log" perfectly! I use same
>> config file for both.
>>
>
> Logging them as what exactly? (the full exact access.log lines please).
>
> Are you really sure that is Squid on the other machine?
>  Because HTTPS has *never* been valid on port-80 which is what
> "http_port ... tproxy" configures Squid to receive.
>
> Amos
>




Re: [squid-users] problems with some requests

2014-01-28 Thread Amos Jeffries
On 29/01/2014 7:28 p.m., m.shahve...@ece.ut.ac.ir wrote:
> Thanks a lot, but I'm running a squid on another system that is passing
> https requests and logging them in "access.log" perfectly! I use same
> config file for both.
> 

Logging them as what exactly? (the full exact access.log lines please).

Are you really sure that is Squid on the other machine?
 Because HTTPS has *never* been valid on port-80 which is what
"http_port ... tproxy" configures Squid to receive.

Amos


Re: [squid-users] problems with some requests

2014-01-28 Thread m . shahverdi
Thanks a lot, but I'm running Squid on another system that is passing
HTTPS requests and logging them in access.log perfectly! I use the same
config file for both.




> On 29/01/2014 6:55 p.m., m.shahve...@ece.ut.ac.ir wrote:
>> Hi,
>> I have a problem with ftp and https requests.
>> I'm running squid in debug mode to trace function calls for a ftp and a
>> https request and finding below lines in cache.log:
>
>
> What makes you think your Squid is capable of receiving HTTPS or FTP
> request messages?
>
>
>>
>> for a https request I'm getting:
>> **
>> client_side.cc(2862) clientParseRequests: local=216.239.32.20:443
>> remote=10.1.116.50 FD 10 flags=17: attempting to parse
>> HttpParser.cc(29) reset: Request buffer is 
>> HttpParser.cc(39) parseRequestFirstLine: parsing possible request: 
>> HttpParser.cc(248) HttpParserParseReqLine: Parser: retval -1: from
>> 0->49:
>> method 0->-1; url -1->-1; version -1->-1 (0/0)
>
> The first byte of HTTPS is binary, which is not a valid HTTP character.
>
>
>> **
>> In fact the request is unrecognizable for squid.
>> and for a ftp request:
>> **
>> AsyncCall.cc(30) make: make call ConnStateData::clientReadRequest
>> [call39]
>> AsyncJob.cc(117) callStart: ConnStateData status in: [ job3]
>> client_side.cc(2923) clientReadRequest: local=10.1.116.49:22
>> remote=10.1.116.50 FD 10 flags=17 size 0
>> client_side.cc(2959) clientReadRequest: local=10.1.116.49:22
>> remote=10.1.116.50 FD 10 flags=17 closed?
>> client_side.cc(2401) connFinishedWithConn: local=10.1.116.49:22
>> remote=10.1.116.50 FD 10 flags=17 closed
>> comm.cc(1102) _comm_close: comm_close: start closing FD 10
>> **
>> That's very strange! Squid could not read the request from the socket!
>
> * FTP protocol starts with the server announcing itself to the client.
> * Your agent speaking FTP waits for that announcement.
>
> * HTTP protocol starts with client announcing its request to the proxy
> or server.
> * Squid being an HTTP proxy waits for the request.
>
> * After waiting a while for some traffic to happen TCP protocol simply
> closes the socket.
>
>
> Because:
> 1) Squid is "an HTTP caching proxy", not an FTP proxy **.
>
> 2) You have only configured Squid to receive explicit/direct proxy HTTP
> traffic and TPROXY intercepted HTTP traffic.
>
>> http_port 3128
>> http_port 3129 tproxy
>
>
> ** FTP protocol relaying by Squid is being experimented with but not yet
> available in any of the formal releases.
>
> Amos
>
>




Re: [squid-users] problems with some requests

2014-01-28 Thread Amos Jeffries
On 29/01/2014 6:55 p.m., m.shahve...@ece.ut.ac.ir wrote:
> Hi,
> I have a problem with ftp and https requests.
> I'm running squid in debug mode to trace function calls for a ftp and a
> https request and finding below lines in cache.log:


What makes you think your Squid is capable of receiving HTTPS or FTP
request messages?


> 
> for a https request I'm getting:
> **
> client_side.cc(2862) clientParseRequests: local=216.239.32.20:443
> remote=10.1.116.50 FD 10 flags=17: attempting to parse
> HttpParser.cc(29) reset: Request buffer is 
> HttpParser.cc(39) parseRequestFirstLine: parsing possible request: 
> HttpParser.cc(248) HttpParserParseReqLine: Parser: retval -1: from 0->49:
> method 0->-1; url -1->-1; version -1->-1 (0/0)

The first byte of HTTPS is binary, which is not a valid HTTP character.
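A quick illustration (the editor's, not from the thread): a TLS session opens with a handshake record whose first byte is 0x16, a control byte that can never begin a valid HTTP request line, which is exactly why the parser trace above fails:

```python
import string

# First byte of the TLS record carrying a ClientHello:
# content type 22 (0x16, "handshake").
tls_first_byte = 0x16

# HTTP methods are tokens built from RFC 7230 "tchar" characters.
tchar = set(string.ascii_letters + string.digits + "!#$%&'*+-.^_`|~")

method_start_ok = chr(tls_first_byte) in tchar
print(method_start_ok)  # False: Squid cannot parse this as an HTTP method
```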


> **
> In fact the request is unrecognizable for squid.
> and for a ftp request:
> **
> AsyncCall.cc(30) make: make call ConnStateData::clientReadRequest [call39]
> AsyncJob.cc(117) callStart: ConnStateData status in: [ job3]
> client_side.cc(2923) clientReadRequest: local=10.1.116.49:22
> remote=10.1.116.50 FD 10 flags=17 size 0
> client_side.cc(2959) clientReadRequest: local=10.1.116.49:22
> remote=10.1.116.50 FD 10 flags=17 closed?
> client_side.cc(2401) connFinishedWithConn: local=10.1.116.49:22
> remote=10.1.116.50 FD 10 flags=17 closed
> comm.cc(1102) _comm_close: comm_close: start closing FD 10
> **
> That's very strange! Squid could not read the request from the socket!

* FTP protocol starts with the server announcing itself to the client.
* Your agent speaking FTP waits for that announcement.

* HTTP protocol starts with client announcing its request to the proxy
or server.
* Squid being an HTTP proxy waits for the request.

* After waiting a while for some traffic to happen TCP protocol simply
closes the socket.


Because:
1) Squid is "an HTTP caching proxy", not an FTP proxy **.

2) You have only configured Squid to receive explicit/direct proxy HTTP
traffic and TPROXY intercepted HTTP traffic.

> http_port 3128
> http_port 3129 tproxy


** FTP protocol relaying by Squid is being experimented with but not yet
available in any of the formal releases.
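A follow-up note from the editor: the experimental FTP relay mentioned above later shipped in Squid 3.5 as the ftp_port directive; a minimal sketch assuming a 3.5+ build (the port number is an arbitrary example):

```
# Receive native FTP from clients and relay it (Squid 3.5+ only).
ftp_port 2121

# HTTP traffic stays on the poster's existing ports.
http_port 3128
http_port 3129 tproxy
```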

Amos



[squid-users] problems with some requests

2014-01-28 Thread m . shahverdi
Hi,
I have a problem with FTP and HTTPS requests.
I'm running Squid in debug mode to trace function calls for an FTP and an
HTTPS request, and I find the lines below in cache.log:

for a https request I'm getting:
**
client_side.cc(2862) clientParseRequests: local=216.239.32.20:443
remote=10.1.116.50 FD 10 flags=17: attempting to parse
HttpParser.cc(29) reset: Request buffer is 
HttpParser.cc(39) parseRequestFirstLine: parsing possible request: 
HttpParser.cc(248) HttpParserParseReqLine: Parser: retval -1: from 0->49:
method 0->-1; url -1->-1; version -1->-1 (0/0)
**
In fact the request is unrecognizable for squid.
and for a ftp request:
**
AsyncCall.cc(30) make: make call ConnStateData::clientReadRequest [call39]
AsyncJob.cc(117) callStart: ConnStateData status in: [ job3]
client_side.cc(2923) clientReadRequest: local=10.1.116.49:22
remote=10.1.116.50 FD 10 flags=17 size 0
client_side.cc(2959) clientReadRequest: local=10.1.116.49:22
remote=10.1.116.50 FD 10 flags=17 closed?
client_side.cc(2401) connFinishedWithConn: local=10.1.116.49:22
remote=10.1.116.50 FD 10 flags=17 closed
comm.cc(1102) _comm_close: comm_close: start closing FD 10
**
That's very strange! Squid could not read the request from the socket!

here is my config file:
**
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow all

# Deny requests to certain unsafe ports
#http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
#http_access deny CONNECT !SSL_ports
#http_access allow localnet
http_access allow localhost

# Squid normally listens to port 3128
http_port 3128
http_port 3129 tproxy

debug_options rotate=1 ALL,5
# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid3

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
refresh_pattern .               0       20%     4320

cache deny all
**
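An aside from the editor (not raised in the replies): because `http_access allow all` comes first in the config above, the commented-out safety denies below it would never be reached even if re-enabled; the stock ordering puts the denies first and the catch-all last:

```
# Ordering from the default squid.conf: specific denies before broad allows.
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow all
```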



[squid-users] Problems checking if an object is in cache or not

2013-12-04 Thread Donoso Gabilondo, Daniel
I need to check whether an object is in the Squid cache or not. I use Squid
3.2.0.12 on Fedora 16.

I saw that squidclient should do this, but it says the objects are "MISS" and
I don't know why, because they are cached. (I checked it with Wireshark.)

I tried executing this command:

squidclient -h localhost -p 3128 -t 1 "http://192.168.230.10/myvideos/VEA_ESP.mov"

and this is the result:

HTTP/1.1 405 Method Not Allowed
Server: Apache-Coyote/1.1
Allow: POST, GET, DELETE, OPTIONS, PUT, HEAD
Content-Length: 0
Date: Wed, 04 Dec 2013 10:40:25 GMT
X-Cache: MISS from pc02
X-Cache-Lookup: MISS from pc02:3128
Via: 1.1 pc02 (squid/3.2.0.12)
Connection: close

Why is it giving the Method Not Allowed error?
Why is it answering that the objects are "MISS" when they are cached?

Here is my squid.conf file content:
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access allow localhost
http_access allow all
http_port 3128
hierarchy_stoplist cgi-bin ?

cache_dir ufs /hd/SQUID 7000 16 256

coredump_dir /var/spool/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       0%      0

cache_mem 128 MB
maximum_object_size 4194304 KB
range_offset_limit -1
access_log none
acl Purge method PURGE
acl Get method GET
http_access allow all Purge
http_access allow all Get
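A hedged observation from the editor: with `refresh_pattern . 0 0% 0`, any response lacking explicit Expires/Cache-Control freshness is treated as stale immediately, so repeat requests get revalidated (or refetched) and can log as MISS even though the object is on disk; a pattern that permits heuristic freshness instead might look like:

```
# Serve objects without explicit freshness for up to 20% of their age,
# capped at 3 days (4320 minutes); values are illustrative.
refresh_pattern . 0 20% 4320
```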



Re: [squid-users] Problems authenticating against A.D.

2013-10-09 Thread Amos Jeffries

On 10/10/2013 2:19 p.m., Luke Pascoe wrote:

Hi all,

I've been banging my head against an auth issue with Squid3 for some
time now, I'm hoping someone here will be able to shine a light on it.

I have installed squid 3 (3.1.20) on Debian 7 (using the Debian package)

I've configured it according to the wiki doc:
http://wiki.squid-cache.org/ConfigExamples/Authenticate/WindowsActiveDirectory

For the most part, this configuration works great. Browsing through a
browser and most applications works fine using SSO.


Indicating that it is unlikely to be a proxy issue. But it might still be
proxy related if the particular apps in question are relying on
something unusual in the protocol.



However, some applications fail to authenticate correctly. Notably,
windows update and Microsoft Lync. These attempt to auth using the
logged in user account, fail, then pop up an authentication dialog.
Putting correct credentials into the dialog doesn't work either.

Every test I've run suggests that each of the components are working
correctly. Kerberos is configured correctly, squid_kerb_auth works,
ntlm_auth works as does wbinfo.

TCPDumps of sessions that work (through a browser, IE, Chrome, FF all
work) and ones that don't (Windows update) appear pretty much the
same. In both cases the server provides a challenge and the client
responds (dumps available if you want them)

Here's what I see in cache.log for a successful auth:

2013/10/10 14:02:11| negotiate_wrapper: Got 'YR
TlRMTVNTUAABl4II4gAGAbEdDw==' from squid
(length: 59).
2013/10/10 14:02:11| negotiate_wrapper: Decode
'TlRMTVNTUAABl4II4gAGAbEdDw==' (decoded
length: 40).
2013/10/10 14:02:11| negotiate_wrapper: received type 1 NTLM token
2013/10/10 14:02:11| negotiate_wrapper: Return 'TT
TlRMTVNTUAACDgAOADgVgoniTIZLWbbwZ5wAAHAAcABGBgEAAA9QAEEAQwBJAEYASQBDAAIADgBQAEEAQwBJAEYASQBDAAEADABCAEEAQwBPAE8ATgAEABoAcABhAGMAaQBmAGkAYwAuAGwAbwBjAGEAbAADACgAYgBhAGMAbwBvAG4ALgBwAGEAYwBpAGYAaQBjAC4AbABvAGMAYQBsAAA=
'
2013/10/10 14:02:11| negotiate_wrapper: Got 'KK
TlRMTVNTUAADGAAYAIgYABgAoA4ADgBYDAAMAGYWABYAcgAAABAAEAC4FYKI4gYBsR0PLtwz2n5LAwh+0a8fdq1dRVAAQQBDAEkARgBJAEMAbgBiAG8AdwBrAGUASABTAE4AWgAtAE4AQgBPAFcASwBFAHv2o2FAquhEAMG+Tip01j5dx+x1+2hdSAjkN+OskudoRIttHfifAVCtWaNrbmpBARM='
from squid (length: 271).
2013/10/10 14:02:11| negotiate_wrapper: Decode
'TlRMTVNTUAADGAAYAIgYABgAoA4ADgBYDAAMAGYWABYAcgAAABAAEAC4FYKI4gYBsR0PLtwz2n5LAwh+0a8fdq1dRVAAQQBDAEkARgBJAEMAbgBiAG8AdwBrAGUASABTAE4AWgAtAE4AQgBPAFcASwBFAHv2o2FAquhEAMG+Tip01j5dx+x1+2hdSAjkN+OskudoRIttHfifAVCtWaNrbmpBARM='
(decoded length: 200).
2013/10/10 14:02:11| negotiate_wrapper: received type 3 NTLM token
2013/10/10 14:02:11| negotiate_wrapper: Return 'AF = nbowke'

And here's what I see for a failed auth:

2013/10/10 14:02:11| negotiate_wrapper: Got 'YR
TlRMTVNTUAABl4II4gAGAbEdDw==' from squid
(length: 59).
2013/10/10 14:02:11| negotiate_wrapper: Decode
'TlRMTVNTUAABl4II4gAGAbEdDw==' (decoded
length: 40).
2013/10/10 14:02:11| negotiate_wrapper: received type 1 NTLM token
2013/10/10 14:02:11| negotiate_wrapper: Return 'TT
TlRMTVNTUAACDgAOADgVgoniHY15O8+h+dcAAHAAcABGBgEAAA9QAEEAQwBJAEYASQBDAAIADgBQAEEAQwBJAEYASQBDAAEADABCAEEAQwBPAE8ATgAEABoAcABhAGMAaQBmAGkAYwAuAGwAbwBjAGEAbAADACgAYgBhAGMAbwBvAG4ALgBwAGEAYwBpAGYAaQBjAC4AbABvAGMAYQBsAAA=
'
2013/10/10 14:02:11| negotiate_wrapper: Got 'YR
TlRMTVNTUAADGAAYAIIYABgAmg4ADgBYDAAMAGYQABAAcgAAABAAEACyFYKI4gYBsR0PfygaAzoaCa6+fpmNu2/r21AAQQBDAEkARgBJAEMAZABqAG8AaABuAHMASABBAE0AVgBFAFQAMAAyAOAyYXzujroFAFQceVB4lD50mtMNAiSke4gMxp1YqbAVMGv56F/9kmCz3UFpS+lKfgo='
from squid (length: 263).
2013/10/10 14:02:11| negotiate_wrapper: Decode
'TlRMTVNTUAADGAAYAIIYABgAmg4ADgBYDAAMAGYQABAAcgAAABAAEACyFYKI4gYBsR0PfygaAzoaCa6+fpmNu2/r21AAQQBDAEkARgBJAEMAZABqAG8AaABuAHMASABBAE0AVgBFAFQAMAAyAOAyYXzujroFAFQceVB4lD50mtMNAiSke4gMxp1YqbAVMGv56F/9kmCz3UFpS+lKfgo='
(decoded length: 194).
2013/10/10 14:02:11| negotiate_wrapper: received type 3 NTLM token
2013/10/10 14:02:11| negotiate_wrapper: Return 'NA =
NT_STATUS_INVALID_PARAMETER'

The only difference I see there is that negotiate_wrapper gives the
prefix "KK" for the successful response, but "YR" for the failed one.
Is it possible negotiate_wrapper is incorrectly classifying the
response?


This explains the KK/YR and rest of the helper protocol you are seeing:
http://wiki.squid-cache.org/Features/AddonHelpers#Negotiate_and_NTLM_Scheme

YR is given when a user sends a new connection without credentials, or 
if limits around the re-use of an existing connection token have been 
exceeded. Normally Type 1 tokens are used to generate Type 2 tokens 
(from Squid to client).


KK is given when a Negotiate/NTLM hand
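Whichever prefix the wrapper uses, the "type 1" / "type 3" classification in the log lines comes from reading the NTLM message-type field inside the token. A minimal sketch of that check (the base64 tokens in the log above are truncated by the archive, so a synthetic token is used instead):

```python
import base64
import struct

def ntlm_message_type(b64_token: str) -> int:
    """Return the NTLM message type (1 = negotiate, 2 = challenge,
    3 = authenticate) carried in a base64-encoded NTLMSSP token."""
    raw = base64.b64decode(b64_token)
    if raw[:8] != b"NTLMSSP\x00":
        raise ValueError("not an NTLMSSP token")
    # the message type is a little-endian uint32 at offset 8
    return struct.unpack_from("<I", raw, 8)[0]

# Synthetic Type 1 token: signature + type field + zeroed remainder.
token = base64.b64encode(b"NTLMSSP\x00" + struct.pack("<I", 1) + b"\x00" * 24)
print(ntlm_message_type(token.decode()))  # -> 1
```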

[squid-users] Problems authenticating against A.D.

2013-10-09 Thread Luke Pascoe
Hi all,

I've been banging my head against an auth issue with Squid3 for some
time now, I'm hoping someone here will be able to shine a light on it.

I have installed squid 3 (3.1.20) on Debian 7 (using the Debian package)

I've configured it according to the wiki doc:
http://wiki.squid-cache.org/ConfigExamples/Authenticate/WindowsActiveDirectory

For the most part, this configuration works great. Browsing through a
browser and most applications works fine using SSO.

However, some applications fail to authenticate correctly. Notably,
windows update and Microsoft Lync. These attempt to auth using the
logged in user account, fail, then pop up an authentication dialog.
Putting correct credentials into the dialog doesn't work either.

Every test I've run suggests that each of the components are working
correctly. Kerberos is configured correctly, squid_kerb_auth works,
ntlm_auth works as does wbinfo.

TCPDumps of sessions that work (through a browser, IE, Chrome, FF all
work) and ones that don't (Windows update) appear pretty much the
same. In both cases the server provides a challenge and the client
responds (dumps available if you want them)

Here's what I see in cache.log for a successful auth:

2013/10/10 14:02:11| negotiate_wrapper: Got 'YR
TlRMTVNTUAABl4II4gAGAbEdDw==' from squid
(length: 59).
2013/10/10 14:02:11| negotiate_wrapper: Decode
'TlRMTVNTUAABl4II4gAGAbEdDw==' (decoded
length: 40).
2013/10/10 14:02:11| negotiate_wrapper: received type 1 NTLM token
2013/10/10 14:02:11| negotiate_wrapper: Return 'TT
TlRMTVNTUAACDgAOADgVgoniTIZLWbbwZ5wAAHAAcABGBgEAAA9QAEEAQwBJAEYASQBDAAIADgBQAEEAQwBJAEYASQBDAAEADABCAEEAQwBPAE8ATgAEABoAcABhAGMAaQBmAGkAYwAuAGwAbwBjAGEAbAADACgAYgBhAGMAbwBvAG4ALgBwAGEAYwBpAGYAaQBjAC4AbABvAGMAYQBsAAA=
'
2013/10/10 14:02:11| negotiate_wrapper: Got 'KK
TlRMTVNTUAADGAAYAIgYABgAoA4ADgBYDAAMAGYWABYAcgAAABAAEAC4FYKI4gYBsR0PLtwz2n5LAwh+0a8fdq1dRVAAQQBDAEkARgBJAEMAbgBiAG8AdwBrAGUASABTAE4AWgAtAE4AQgBPAFcASwBFAHv2o2FAquhEAMG+Tip01j5dx+x1+2hdSAjkN+OskudoRIttHfifAVCtWaNrbmpBARM='
from squid (length: 271).
2013/10/10 14:02:11| negotiate_wrapper: Decode
'TlRMTVNTUAADGAAYAIgYABgAoA4ADgBYDAAMAGYWABYAcgAAABAAEAC4FYKI4gYBsR0PLtwz2n5LAwh+0a8fdq1dRVAAQQBDAEkARgBJAEMAbgBiAG8AdwBrAGUASABTAE4AWgAtAE4AQgBPAFcASwBFAHv2o2FAquhEAMG+Tip01j5dx+x1+2hdSAjkN+OskudoRIttHfifAVCtWaNrbmpBARM='
(decoded length: 200).
2013/10/10 14:02:11| negotiate_wrapper: received type 3 NTLM token
2013/10/10 14:02:11| negotiate_wrapper: Return 'AF = nbowke'

And here's what I see for a failed auth:

2013/10/10 14:02:11| negotiate_wrapper: Got 'YR
TlRMTVNTUAABl4II4gAGAbEdDw==' from squid
(length: 59).
2013/10/10 14:02:11| negotiate_wrapper: Decode
'TlRMTVNTUAABl4II4gAGAbEdDw==' (decoded
length: 40).
2013/10/10 14:02:11| negotiate_wrapper: received type 1 NTLM token
2013/10/10 14:02:11| negotiate_wrapper: Return 'TT
TlRMTVNTUAACDgAOADgVgoniHY15O8+h+dcAAHAAcABGBgEAAA9QAEEAQwBJAEYASQBDAAIADgBQAEEAQwBJAEYASQBDAAEADABCAEEAQwBPAE8ATgAEABoAcABhAGMAaQBmAGkAYwAuAGwAbwBjAGEAbAADACgAYgBhAGMAbwBvAG4ALgBwAGEAYwBpAGYAaQBjAC4AbABvAGMAYQBsAAA=
'
2013/10/10 14:02:11| negotiate_wrapper: Got 'YR
TlRMTVNTUAADGAAYAIIYABgAmg4ADgBYDAAMAGYQABAAcgAAABAAEACyFYKI4gYBsR0PfygaAzoaCa6+fpmNu2/r21AAQQBDAEkARgBJAEMAZABqAG8AaABuAHMASABBAE0AVgBFAFQAMAAyAOAyYXzujroFAFQceVB4lD50mtMNAiSke4gMxp1YqbAVMGv56F/9kmCz3UFpS+lKfgo='
from squid (length: 263).
2013/10/10 14:02:11| negotiate_wrapper: Decode
'TlRMTVNTUAADGAAYAIIYABgAmg4ADgBYDAAMAGYQABAAcgAAABAAEACyFYKI4gYBsR0PfygaAzoaCa6+fpmNu2/r21AAQQBDAEkARgBJAEMAZABqAG8AaABuAHMASABBAE0AVgBFAFQAMAAyAOAyYXzujroFAFQceVB4lD50mtMNAiSke4gMxp1YqbAVMGv56F/9kmCz3UFpS+lKfgo='
(decoded length: 194).
2013/10/10 14:02:11| negotiate_wrapper: received type 3 NTLM token
2013/10/10 14:02:11| negotiate_wrapper: Return 'NA =
NT_STATUS_INVALID_PARAMETER'

The only difference I see there is that negotiate_wrapper gives the
prefix "KK" for the successful response, but "YR" for the failed one.
Is it possible negotiate_wrapper is incorrectly classifying the
response?

I've Googled the crap out of this and haven't found anything useful.

Any suggestions of things I can test, please let me know, I've
completely run out of ideas.

Thanks.

squid.conf:
--

visible_hostname bacoon.pacific.local
err_html_text http://serviceplus/CAisd/pdmweb.exe

### negotiate kerberos and ntlm authentication
auth_param negotiate program /usr/local/bin/negotiate_wrapper -d
--ntlm /usr/bin/ntlm_auth --diagnostics
--helper-protocol=squid-2.5-ntlmssp --domain=PACIFIC --kerberos
/usr/lib/squid3/squid_kerb_auth -d -s GSS_C_NO_NAME
auth_param negotiate children 256
auth_param negotiate keep

AW: [squid-users] Problems with helper ntlm_fake_auth

2013-10-08 Thread Vonlanthen, Elmar
Hello Amos

> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
> Sent: Tuesday, 8 October 2013 03:35
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Problems with helper ntlm_fake_auth
> 
> FYI: in future this type of code related post should be sent to squid-dev
> mailing list. Thank you.

Ok, I will do that next time.

> > The first problem is that the NTLM response header of type "TT" will be
> generated wrong.
> >
> > This one has been generated with the new helper ntlm_fake_auth:
> > ntlm_fake_auth.cc(219): pid=29811 :sending 'TT' to squid with data:
> > []   4E 54 4C 4D 53 53 50 00   02 00 00 00 09 00 09 00   NTLMSSP. 
> > 
> > [0010]   28 00 00 00 07 82 08 A2   CE 7D 62 FA 44 55 80 E0    
> > ..b.DU..
> > [0020]   00 00 00 00 00 00 3A 00   57 4F 52 4B 47 52 4F 55    
> > WORKGROU
> > [0030]   50  P
> >
> > And this one with the old helper fakeauth_auth (which is working):
> > ntlm-auth[31700](fakeauth_auth.c:421): sending 'TT' to squid with data:
> > []   4E 54 4C 4D 53 53 50 00   02 00 00 00 0A 00 0A 00   NTLMSSP. 
> > 
> > [0010]   30 00 00 00 07 82 08 A2   7B E5 5C 0B 49 DB 6D 36   0... 
> > I.m6
> > [0020]   00 00 00 00 00 00 00 00   00 00 00 00 3A 00 00 00    
> > 
> > [0030]   00 00 00 00 00 00 00 00   00 00 00 00    
> >
> > It seems that the total length of the header has a wrong size and the char
> ":" (0x3a) will be placed in the field "reserved". The client doesn't accept 
> the
> packet with the new response header and is sending a RST.
> 
> Total length being the wrong size can be expected after the 3rd problem you
> mention below. The entire structure is dependent on field sizes being
> calculated and encoded correctly. Otherwise random extra space or garbage
> ends up in the output. Those 00 octets at the end look suspiciously like such.
> Can you please test this problem again and re-do the analysis after the size
> and case patches are applied. A re-analysis on fixed field contents may help
> with that TODO answer...

Unfortunately it doesn't work with only the patches 2 and 3:

2013/10/08 09:06:20.215 kid1| auth_ntlm.cc(248) fixHeader: Sending type:40 
header: 'NTLM 
TlRMTVNTUAACCQAJACgHggiizn1i+kRVgOA6AFdPUktHUk9VUA=='
ntlm_fake_auth.cc(219): pid=5679 :sending 'TT' to squid with data:
[]   4E 54 4C 4D 53 53 50 00   02 00 00 00 09 00 09 00   NTLMSSP. 
[0010]   28 00 00 00 07 82 08 A2   CE 7D 62 FA 44 55 80 E0    ..b.DU..
[0020]   00 00 00 00 00 00 3A 00   57 4F 52 4B 47 52 4F 55    WORKGROU
[0030]   50  P

Wireshark shows this:
NTLM Secure Service Provider
  NTLMSSP identifier: NTLMSSP
  NTLM Message Type: NTLMSSP_CHALLENGE (0x0002)
  Target Name:  <<< Why ""
Length: 9
Maxlen: 9
Offset: 40
  Flags: 0xa2088207
  NTLM Server Challenge: 074657c2b2de2963
  Reserved: 3a00<<< 3a is still in the "Reserved" field
  <<< Address List is missing

Compare it with fakeauth_auth:
NTLM Secure Service Provider
  NTLMSSP identifier: NTLMSSP
  NTLM Message Type: NTLMSSP_CHALLENGE (0x0002)
  Target Name:
Length: 10
Maxlen: 10
Offset: 48
  Flags: 0xa2088207
  NTLM Server Challenge: 074657c2b2de2963
  Reserved: 
  Address List: Empty
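The two traces differ exactly in this security-buffer bookkeeping. The fields Wireshark decodes sit at fixed offsets in the Type 2 header and can be read back directly from the hex dump; a short sketch (Python used purely for illustration, bytes transcribed from the working fakeauth_auth dump above):

```python
import struct

# First 20 bytes of the working fakeauth_auth Type 2 (challenge) header.
raw = bytes.fromhex("4e544c4d53535000020000000a000a0030000000")

assert raw[:8] == b"NTLMSSP\x00"              # signature
msg_type, = struct.unpack_from("<I", raw, 8)  # 2 = NTLMSSP_CHALLENGE
# TargetName security buffer: uint16 len, uint16 maxlen, uint32 offset
tlen, tmax, toff = struct.unpack_from("<HHI", raw, 12)
print(msg_type, tlen, tmax, toff)             # -> 2 10 10 48
```

These match the Length 10 / Maxlen 10 / Offset 48 that Wireshark reports for the working helper, which is why a miscalculated length shifts later fields (such as Reserved) out of place.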

> > +// doesn't work with this:
> >   // TODO: find out what this context means, and why only the 
> > fake
> auth helper contains it.
> > -chal.context_high = htole32(0x003a<<16);
> > +//chal.context_high = htole32(0x003a<<16);
> > +// tweak payload, offset and length to get it working:
> > +chal.payload[4] = 0x3a;
> > +chal.target.offset = 48;
> > +chal.target.len = 10;
> > +chal.target.maxlen = 18;
> >
> >   len = sizeof(chal) - sizeof(chal.payload) +
> le16toh(chal.target.maxlen);
> >   data = (char *) base64_encode_bin((char *) &chal, len);
> >
> > In the code there is a comment "TODO: find out what this context
> > means...". I think there is really some work to do. ;-)
> 
> Yes there is. The other helpers are working perfectly fine without this
> context value being set. Which is why it was elided from these versions.
> Before this gets changed yet again, do you have any ex

Re: [squid-users] Problems with helper ntlm_fake_auth

2013-10-07 Thread Amos Jeffries

On 7/10/2013 10:54 p.m., Vonlanthen, Elmar wrote:

Hello all

There are some problems with the helper module ntlm_fake_auth. I did the tests 
with Squid-3.2.13 but 3.3.9 is affected as well.


Hi Vonlanthen,
  First off thank you for testing this in such detail.

FYI: in future this type of code related post should be sent to 
squid-dev mailing list. Thank you.


The big problem we have is that none of the current development team are 
able to test the NTLM helpers properly. So your assistance in that area 
is very welcome. I am applying the latter two fixes now. The first one 
is affected by those and the behaviour may change as a result.



The first problem is that the NTLM response header of type "TT" will be 
generated wrong.

This one has been generated with the new helper ntlm_fake_auth:
ntlm_fake_auth.cc(219): pid=29811 :sending 'TT' to squid with data:
[]   4E 54 4C 4D 53 53 50 00   02 00 00 00 09 00 09 00   NTLMSSP. 
[0010]   28 00 00 00 07 82 08 A2   CE 7D 62 FA 44 55 80 E0    ..b.DU..
[0020]   00 00 00 00 00 00 3A 00   57 4F 52 4B 47 52 4F 55    WORKGROU
[0030]   50  P

And this one with the old helper fakeauth_auth (which is working):
ntlm-auth[31700](fakeauth_auth.c:421): sending 'TT' to squid with data:
[]   4E 54 4C 4D 53 53 50 00   02 00 00 00 0A 00 0A 00   NTLMSSP. 
[0010]   30 00 00 00 07 82 08 A2   7B E5 5C 0B 49 DB 6D 36   0... I.m6
[0020]   00 00 00 00 00 00 00 00   00 00 00 00 3A 00 00 00    
[0030]   00 00 00 00 00 00 00 00   00 00 00 00    

It seems that the total length of the header has a wrong size and the char ":" (0x3a) 
will be placed in the field "reserved". The client doesn't accept the packet with the new 
response header and is sending a RST.


Total length being the wrong size can be expected after the 3rd problem 
you mention below. The entire structure is dependent on field sizes 
being calculated and encoded correctly. Otherwise random extra space or 
garbage ends up in the output. Those 00 octets at the end look 
suspiciously like such.
Can you please test this problem again and re-do the analysis after the 
size and case patches are applied. A re-analysis on fixed field contents 
may help with that TODO answer...




Now, if I tweak the header by setting authenticate_ntlm_domain to an empty 
string and tweaking the target value and payload, it is working (ugly 
workaround, I know):

diff -aur a/helpers/ntlm_auth/fake/ntlm_fake_auth.cc 
b/helpers/ntlm_auth/fake/ntlm_fake_auth.cc
--- a/helpers/ntlm_auth/fake/ntlm_fake_auth.cc  2013-09-30 11:48:40.231386531 
+0200
+++ b/helpers/ntlm_auth/fake/ntlm_fake_auth.cc  2013-10-01 10:28:07.727699795 
+0200
@@ -96,7 +96,7 @@
  #define SEND4(X,Y,Z,W) {debug("sending '" X "' to squid\n",Y,Z,W); printf(X 
"\n",Y,Z,W);}
  #endif

-const char *authenticate_ntlm_domain = "WORKGROUP";

+const char *authenticate_ntlm_domain = "";
  int strip_domain_enabled = 0;
  int NTLM_packet_debug_enabled = 0;

@@ -209,8 +209,14 @@

  } else {
  ntlm_make_challenge(&chal, authenticate_ntlm_domain, NULL, 
nonce, NTLM_NONCE_LEN, NTLM_NEGOTIATE_ASCII);
  }
+// doesn't work with this:
  // TODO: find out what this context means, and why only the fake 
auth helper contains it.
-chal.context_high = htole32(0x003a<<16);
+//chal.context_high = htole32(0x003a<<16);
+// tweak payload, offset and length to get it working:
+chal.payload[4] = 0x3a;
+chal.target.offset = 48;
+chal.target.len = 10;
+chal.target.maxlen = 18;

  len = sizeof(chal) - sizeof(chal.payload) + le16toh(chal.target.maxlen);

  data = (char *) base64_encode_bin((char *) &chal, len);

In the code there is a comment "TODO: find out what this context means...". I 
think there is really some work to do. ;-)


Yes there is. The other helpers are working perfectly fine without this 
context value being set. Which is why it was elided from these versions.
Before this gets changed yet again, do you have any explanation for what 
those seemingly arbitrary random values mean?




Another problem is the presentation of domain and username. First the domain 
was previously shown in upp

[squid-users] Problems with helper ntlm_fake_auth

2013-10-07 Thread Vonlanthen, Elmar
Hello all

There are some problems with the helper module ntlm_fake_auth. I did the tests 
with Squid-3.2.13 but 3.3.9 is affected as well.

The first problem is that the NTLM response header of type "TT" will be 
generated wrong.

This one has been generated with the new helper ntlm_fake_auth:
ntlm_fake_auth.cc(219): pid=29811 :sending 'TT' to squid with data:
[]   4E 54 4C 4D 53 53 50 00   02 00 00 00 09 00 09 00   NTLMSSP. 
[0010]   28 00 00 00 07 82 08 A2   CE 7D 62 FA 44 55 80 E0    ..b.DU..
[0020]   00 00 00 00 00 00 3A 00   57 4F 52 4B 47 52 4F 55    WORKGROU
[0030]   50  P

And this one with the old helper fakeauth_auth (which is working):
ntlm-auth[31700](fakeauth_auth.c:421): sending 'TT' to squid with data:
[]   4E 54 4C 4D 53 53 50 00   02 00 00 00 0A 00 0A 00   NTLMSSP. 
[0010]   30 00 00 00 07 82 08 A2   7B E5 5C 0B 49 DB 6D 36   0... I.m6
[0020]   00 00 00 00 00 00 00 00   00 00 00 00 3A 00 00 00    
[0030]   00 00 00 00 00 00 00 00   00 00 00 00    

It seems that the total length of the header has a wrong size and the char ":" 
(0x3a) will be placed in the field "reserved". The client doesn't accept the 
packet with the new response header and is sending a RST.

Now, if I tweak the header by setting authenticate_ntlm_domain to an empty 
string and tweaking the target value and payload, it is working (ugly 
workaround, I know):

diff -aur a/helpers/ntlm_auth/fake/ntlm_fake_auth.cc 
b/helpers/ntlm_auth/fake/ntlm_fake_auth.cc
--- a/helpers/ntlm_auth/fake/ntlm_fake_auth.cc  2013-09-30 11:48:40.231386531 +0200
+++ b/helpers/ntlm_auth/fake/ntlm_fake_auth.cc  2013-10-01 10:28:07.727699795 +0200
@@ -96,7 +96,7 @@
 #define SEND4(X,Y,Z,W) {debug("sending '" X "' to squid\n",Y,Z,W); printf(X "\n",Y,Z,W);}
 #endif

-const char *authenticate_ntlm_domain = "WORKGROUP";
+const char *authenticate_ntlm_domain = "";
 int strip_domain_enabled = 0;
 int NTLM_packet_debug_enabled = 0;

@@ -209,8 +209,14 @@
 } else {
 ntlm_make_challenge(&chal, authenticate_ntlm_domain, NULL, nonce, NTLM_NONCE_LEN, NTLM_NEGOTIATE_ASCII);
 }
+// doesn't work with this:
 

Re: [squid-users] Problems with cache peering, sourcehash, *_uses_indirect, and follow_x_forwarded_for

2013-09-23 Thread Amos Jeffries

On 24/09/2013 9:06 a.m., Martín Ferco wrote:

Hello,

I'm trying to use DansGuardian together with Squid and load-balancing
to use more than one ISP.

I've been able to achieve this by using cache_peer, and I should be
able to perform load balancing with the following two lines:

{{{
cache_peer squid-isp1 parent 13128 0 no-query round-robin sourcehash proxy-only
cache_peer squid-isp2 parent 23128 0 no-query round-robin sourcehash proxy-only
}}}

These two cache-peers run on the same box, as you can see.


Problem #1:
  round-robin is one type of peer selection, sourcehash is a different 
type. Only one method will be used to select between these peers.



I've also made sure that indirect options are set properly like this:

acl_uses_indirect_client on
delay_pool_uses_indirect_client on
log_uses_indirect_client on
follow_x_forwarded_for allow localhost


Problem #2:
  notice how none of these options mention cache_peer or outbound 
connections.



I'm sure that's working fine as the logs show the correct information
for different IP addresses (and not 127.0.0.1, where DansGuardian is
running as well).

Now, the problem with the original two lines is "sourcehash". It looks
like it's *NOT* using the 'indirect' feature. I've set squid debug
options to "39,2", and the following is shown in the logs:

{{{
2013/09/23 15:10:20| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
2013/09/23 15:10:20| peerSourceHashSelectParent: selected squid-isp1
2013/09/23 15:10:20| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
2013/09/23 15:10:20| peerSourceHashSelectParent: selected squid-isp1
2013/09/23 15:10:20| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
2013/09/23 15:10:20| peerSourceHashSelectParent: selected squid-isp1
2013/09/23 15:10:21| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
2013/09/23 15:10:21| peerSourceHashSelectParent: selected squid-isp1
2013/09/23 15:10:21| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
}}}

So, basically, the IP where DansGuardian is running is being hashed,
instead of the original one. When looking at the source code for
version 2.7.STABLE9 (the one I'm using), it looks like client_addr is
used instead of the indirect one as the key in
"src/peer_sourcehash.c":

{{{
key = inet_ntoa(request->client_addr);
}}}

This also seems to happen in the latest 3.3 version of squid.

Could this be fixed by adding the following lines to that file, after
that line shown above:

{{{
#if FOLLOW_X_FORWARDED_FOR
key = inet_ntoa(request->indirect_client_addr);
#endif /* FOLLOW_X_FORWARDED_FOR */
}}}

Are you aware of this problem, or am I doing something wrong?


It is not a problem per-se.
* sourcehash is a hashing algorithm based on inbound TCP connection details.
* "indirect client" feature is about network state of a TCP connection 
unrelated to Squid.


If round-robin is sufficient for your needs I suggest dropping the 
sourcehash entirely.



Also, I recommend an upgrade to the 3.3 Squid if you can. 2.7 is getting 
very outdated.


Amos


[squid-users] Problems with cache peering, sourcehash, *_uses_indirect, and follow_x_forwarded_for

2013-09-23 Thread Martín Ferco
Hello,

I'm trying to use DansGuardian together with Squid and load-balancing
to use more than one ISP.

I've been able to achieve this by using cache_peer, and I should be
able to perform load balancing with the following two lines:

{{{
cache_peer squid-isp1 parent 13128 0 no-query round-robin sourcehash proxy-only
cache_peer squid-isp2 parent 23128 0 no-query round-robin sourcehash proxy-only
}}}

These two cache-peers run on the same box, as you can see.

I've also made sure that indirect options are set properly like this:

acl_uses_indirect_client on
delay_pool_uses_indirect_client on
log_uses_indirect_client on
follow_x_forwarded_for allow localhost

I'm sure that's working fine as the logs show the correct information
for different IP addresses (and not 127.0.0.1, where DansGuardian is
running as well).

Now, the problem with the original two lines is "sourcehash". It looks
like it's *NOT* using the 'indirect' feature. I've set squid debug
options to "39,2", and the following is shown in the logs:

{{{
2013/09/23 15:10:20| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
2013/09/23 15:10:20| peerSourceHashSelectParent: selected squid-isp1
2013/09/23 15:10:20| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
2013/09/23 15:10:20| peerSourceHashSelectParent: selected squid-isp1
2013/09/23 15:10:20| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
2013/09/23 15:10:20| peerSourceHashSelectParent: selected squid-isp1
2013/09/23 15:10:21| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
2013/09/23 15:10:21| peerSourceHashSelectParent: selected squid-isp1
2013/09/23 15:10:21| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
}}}

So, basically, the IP where DansGuardian is running is being hashed,
instead of the original one. When looking at the source code for
version 2.7.STABLE9 (the one I'm using), it looks like client_addr is
used instead of the indirect one as the key in
"src/peer_sourcehash.c":

{{{
key = inet_ntoa(request->client_addr);
}}}
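The effect of hashing the proxy's own address instead of the indirect client address can be sketched in a few lines (illustrative only; Squid's peer_sourcehash uses its own weighted hash function, not MD5):

```python
import hashlib

peers = ["squid-isp1", "squid-isp2"]

def sourcehash_select(client_ip: str) -> str:
    """Pick a parent cache by hashing the (direct) client address."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return peers[int(digest, 16) % len(peers)]

# When every request is forwarded by DansGuardian, Squid only ever sees
# 127.0.0.1 as client_addr, so the same parent is selected every time:
choices = {sourcehash_select("127.0.0.1") for _ in range(100)}
print(len(choices))  # -> 1
```

This is exactly the behaviour in the debug log above: every hash is calculated for 127.0.0.1, so squid-isp1 is always selected and no balancing happens.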

This also seems to happen in the latest 3.3 version of squid.

Could this be fixed by adding the following lines to that file, after
that line shown above:

{{{
#if FOLLOW_X_FORWARDED_FOR
key = inet_ntoa(request->indirect_client_addr);
#endif /* FOLLOW_X_FORWARDED_FOR */
}}}

Are you aware of this problem, or am I doing something wrong?

Thanks,
Martín.


Re: [squid-users] Problems browsing some pages

2013-07-12 Thread Amos Jeffries

On 12/07/2013 3:38 p.m., Gustavo Esquivel wrote:

Hi,
I recently installed the latest version of CentOS and installed Squid 3.1
(using yum install squid).
I already set up Squid and it works really great!
The only problem I have is that some pages do not load; I just get a
white page or sometimes an error.

I have problems with these websites, for example: http://www.pucp.edu.pe/
http://www.hp.com


and others...

any idea?


Please try the current release of Squid. You can find details about 
newer CentOS packages here:

http://wiki.squid-cache.org/KnowledgeBase/CentOS

Amos


[squid-users] Problems browsing some pages

2013-07-11 Thread Gustavo Esquivel
Hi,
I recently installed the latest version of CentOS and installed Squid 3.1
(using yum install squid).
I already set up Squid and it works really great!
The only problem I have is that some pages do not load; I just get a
white page or sometimes an error.

I have problems with these websites, for example: http://www.pucp.edu.pe/
http://www.hp.com


and others...

any idea?
thanks a lot!
Best Regards,

Gustavo Esquivel


Re: [squid-users] Problems with SQUID 3.2.6 and /usr/sbin/wbinfo_group.pl

2013-05-14 Thread Amos Jeffries

On 15/05/2013 2:15 a.m., Claudio ML wrote:

On 14/05/2013 15:51, Amos Jeffries wrote:

On 15/05/2013 1:28 a.m., Claudio ML wrote:

Hello all,

A big problem with Squid 3.2.6 and wbinfo_group.pl.

NOTE: 3.2 does not ship with "wbinfo_group.pl" any longer. The 3.2
script is named "ext_wbinfo_group_acl".


   When the script is
called, I get this in the log:

2013/05/14 15:17:25| externalAclLookup: 'chkgrp' queue overload
(ch=0x7fe832dab308)

Significant parts of my squid.conf are:

external_acl_type chkgrp children=15 ttl=0 %LOGIN
/usr/sbin/wbinfo_group.pl -d
acl libero external chkgrp "navlibera"
http_access allow libero all

The strange thing is I don't get any debug output from wbinfo_group.pl -d in
the logs, only the line reported above.

If I try to run the wbinfo_group.pl script manually, it works as
expected:

echo "domain\utente navlibera" | /usr/sbin/wbinfo_group.pl -d
Debugging mode ON.
Got domain\navlibera from squid
User:  -domain\utente-
Group: -navlibera-
SID:   -S-1-5-21-449068364-4053775113-1626773979-32150-
GID:   -4031-
Sending OK to squid
OK

What can it be? I don't know what else to try... it is such a strange
problem. Note: with a previous version of Squid it works correctly.

The 3.2 helpers bang-line (#!/usr/sbin/perl -w) is now set during
build time instead of hard-coded to /usr/sbin/perl, and the helper is
renamed to ext_wbinfo_group_acl. There are no other code related
changes since 2010.

Is your test done with the same user credentials Squid will be using
to run the helper?

Amos



Thank you for your response. I have already solved the issue by modifying the
line where the script is called to the following:

external_acl_type chkgrp ipv4 children=15 ttl=0 %LOGIN
/usr/sbin/wbinfo_group.pl -d

And it works, finally!

PS: the perl script I use is copied from a previous version of Squid.


Any particular reason? (ie things you patched in that might be useful to 
others?)


Cheers
Amos


Re: [squid-users] Problems with SQUID 3.2.6 and /usr/sbin/wbinfo_group.pl

2013-05-14 Thread Claudio ML
On 14/05/2013 15:51, Amos Jeffries wrote:
> On 15/05/2013 1:28 a.m., Claudio ML wrote:
>> Hello all,
>>
>> A big problem with Squid 3.2.6 and wbinfo_group.pl.
>
> NOTE: 3.2 does not ship with "wbinfo_group.pl" any longer. The 3.2
> script is named "ext_wbinfo_group_acl".
>
>>   When the script is
>> called, I get this in the log:
>>
>> 2013/05/14 15:17:25| externalAclLookup: 'chkgrp' queue overload
>> (ch=0x7fe832dab308)
>>
>> Significant parts of my squid.conf are:
>>
>> external_acl_type chkgrp children=15 ttl=0 %LOGIN
>> /usr/sbin/wbinfo_group.pl -d
>> acl libero external chkgrp "navlibera"
>> http_access allow libero all
>>
>> The strange thing is I don't get any debug output from wbinfo_group.pl -d
>> in the logs, only the line reported above.
>>
>> If I try to run the wbinfo_group.pl script manually, it works as
>> expected:
>>
>> echo "domain\utente navlibera" | /usr/sbin/wbinfo_group.pl -d
>> Debugging mode ON.
>> Got domain\navlibera from squid
>> User:  -domain\utente-
>> Group: -navlibera-
>> SID:   -S-1-5-21-449068364-4053775113-1626773979-32150-
>> GID:   -4031-
>> Sending OK to squid
>> OK
>>
>> What can it be? I don't know what else to try... it is such a strange
>> problem. Note: with a previous version of Squid it works correctly.
>
> The 3.2 helpers bang-line (#!/usr/sbin/perl -w) is now set during
> build time instead of hard-coded to /usr/sbin/perl, and the helper is
> renamed to ext_wbinfo_group_acl. There are no other code related
> changes since 2010.
>
> Is your test done with the same user credentials Squid will be using
> to run the helper?
>
> Amos
>
>
Thank you for your response. I have already solved the issue by modifying the
line where the script is called to the following:

external_acl_type chkgrp ipv4 children=15 ttl=0 %LOGIN
/usr/sbin/wbinfo_group.pl -d

And it works, finally!

PS: the perl script I use is copied from a previous version of Squid.

Cordially,

Claudio Prono.



Re: [squid-users] Problems with SQUID 3.2.6 and /usr/sbin/wbinfo_group.pl

2013-05-14 Thread Amos Jeffries

On 15/05/2013 1:28 a.m., Claudio ML wrote:

Hello all,

A big problem with Squid 3.2.6 and wbinfo_group.pl.


NOTE: 3.2 does not ship with "wbinfo_group.pl" any longer. The 3.2 
script is named "ext_wbinfo_group_acl".



  When the script is
called, I get this in the log:

2013/05/14 15:17:25| externalAclLookup: 'chkgrp' queue overload
(ch=0x7fe832dab308)

Significant parts of my squid.conf are:

external_acl_type chkgrp children=15 ttl=0 %LOGIN
/usr/sbin/wbinfo_group.pl -d
acl libero external chkgrp "navlibera"
http_access allow libero all

The strange thing is I don't get any debug output from wbinfo_group.pl -d in
the logs, only the line reported above.

If I try to run the wbinfo_group.pl script manually, it works as expected:

echo "domain\utente navlibera" | /usr/sbin/wbinfo_group.pl -d
Debugging mode ON.
Got domain\navlibera from squid
User:  -domain\utente-
Group: -navlibera-
SID:   -S-1-5-21-449068364-4053775113-1626773979-32150-
GID:   -4031-
Sending OK to squid
OK

What can it be? I don't know what else to try... it is such a strange
problem. Note: with a previous version of Squid it works correctly.


The 3.2 helpers bang-line (#!/usr/sbin/perl -w) is now set during build 
time instead of hard-coded to /usr/sbin/perl, and the helper is renamed 
to ext_wbinfo_group_acl. There are no other code related changes since 2010.


Is your test done with the same user credentials Squid will be using to 
run the helper?


Amos


[squid-users] Problems with SQUID 3.2.6 and /usr/sbin/wbinfo_group.pl

2013-05-14 Thread Claudio ML
Hello all,

A big problem with Squid 3.2.6 and wbinfo_group.pl. When the script is
called, I get this in the log:

2013/05/14 15:17:25| externalAclLookup: 'chkgrp' queue overload
(ch=0x7fe832dab308)

Significant parts of my squid.conf are:

external_acl_type chkgrp children=15 ttl=0 %LOGIN
/usr/sbin/wbinfo_group.pl -d
acl libero  externalchkgrp "navlibera"
http_access allow libero all

The strange thing is that I don't see any debug output from wbinfo_group.pl -d
in the logs, only the line reported above.

If I run the wbinfo_group.pl script manually, it works as expected:

echo "domain\utente navlibera" | /usr/sbin/wbinfo_group.pl -d
Debugging mode ON.
Got domain\navlibera from squid
User:  -domain\utente-
Group: -navlibera-
SID:   -S-1-5-21-449068364-4053775113-1626773979-32150-
GID:   -4031-
Sending OK to squid
OK

What can it be? I don't know what else to try... it is such a strange
problem. The same setup with a previous version of Squid works correctly.

Thanks,

Claudio.





Re: [squid-users] problems with a site

2013-03-08 Thread Helmut Hullen
Hello, Luigi,

You wrote on 08.03.13:

> I have some problems with the following site:
> http://www.wasteservmalta.com/
> the .NET framework on the site gives me an exception. Without Squid I
> have no problems.

Here: no problem.
Squid 3.2.3

Best regards!
Helmut


RE: [squid-users] problems with a site

2013-03-08 Thread jiluspo


> -Original Message-
> From: Luigi Vianello [mailto:lviane...@ambientesc.it]
> Sent: Friday, March 08, 2013 5:18 PM
> To: squid-users@squid-cache.org
> Subject: [squid-users] problems with a site
> 
> Hi,
> I have some problems with the following site:
> http://www.wasteservmalta.com/
I'd like to know more about sites that don't work when Squid (forward proxy)
is used, but this site works fine for me.
> the .NET framework on the site gives me an exception. Without Squid I
> have no problems.
> Thanks for any help.
> 
> Email secured by Check Point



[squid-users] problems with a site

2013-03-08 Thread Luigi Vianello
Hi,
I have some problems with the following site:
http://www.wasteservmalta.com/
the .NET framework on the site gives me an exception. Without Squid I have
no problems.
Thanks for any help.


Re: [squid-users] problems compiling Squid 3.2.3 on 32bit

2012-11-12 Thread Amos Jeffries

On 13/11/2012 12:47 a.m., schumacher wrote:

Hi,

I have some problems compiling Squid-3.2.3 on Centos 5.5 or older 
Fedora boxes (all 32bit).

Squid 3.1.x worked just fine on the very same servers.
Compiling on Centos 6.0 (64bit) works fine too.

This didn't work out:
export CFLAGS="${CFLAGS} -march=i486"

http://www.linuxquestions.org/questions/linux-software-2/glibc-make-error-undefined-reference-to-%60__sync_fetch_and_add_4-a-571961/ 





Squid-3 is C++ code built using a C++ compiler. What do you think setting
the C-compiler flags will do?


see "./configure --help" for available flags variables you can set. 
CXXFLAGS is probably what you wanted there.
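Concretely, the flags can be passed straight to configure as arguments; a sketch of a build-configuration fragment, assuming the 32-bit target from the question (the -march value is the one from the linked thread, not a verified fix):

```shell
# Pass the flag through CXXFLAGS (C++ sources) rather than only CFLAGS
# (C sources).  -march=i486 enables the __sync_* atomic builtins that
# plain i386 code generation lacks.
./configure CXXFLAGS="-march=i486" CFLAGS="-march=i486"
make
```

Passing them as configure arguments (rather than exporting them in the environment) also makes config.status remember them on re-runs.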




Here are the error messages when compiling Squid:


libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT::get() 
const':
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:47: 
undefined reference to `__sync_fetch_and_add_4'


Squid-3.2 requires a minimum of GCC version 4 to compile, preferably
4.2 or later.


Amos



[squid-users] problems compiling Squid 3.2.3 on 32bit

2012-11-12 Thread schumacher

Hi,

I have some problems compiling Squid-3.2.3 on Centos 5.5 or older Fedora 
boxes (all 32bit).

Squid 3.1.x worked just fine on the very same servers.
Compiling on Centos 6.0 (64bit) works fine too.

This didn't work out:
export CFLAGS="${CFLAGS} -march=i486"

http://www.linuxquestions.org/questions/linux-software-2/glibc-make-error-undefined-reference-to-%60__sync_fetch_and_add_4-a-571961/


Here are the error messages when compiling Squid:


libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT::get() 
const':
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:47: undefined 
reference to `__sync_fetch_and_add_4'
libIpcIo.a(IpcIoFile.o): In function 
`Ipc::Atomic::WordT::operator+=(int)':
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:31: undefined 
reference to `__sync_add_and_fetch_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT::get() 
const':
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:47: undefined 
reference to `__sync_fetch_and_add_4'
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:47: undefined 
reference to `__sync_fetch_and_add_4'
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:47: undefined 
reference to `__sync_fetch_and_add_4'
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:47: undefined 
reference to `__sync_fetch_and_add_4'
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:47: undefined 
reference to `__sync_fetch_and_add_4'
libIpcIo.a(IpcIoFile.o):/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:47: 
more undefined references to `__sync_fetch_and_add_4' follow
libIpcIo.a(IpcIoFile.o): In function 
`Ipc::Atomic::WordT::operator+=(int)':
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:31: undefined 
reference to `__sync_add_and_fetch_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT::get() 
const':
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:47: undefined 
reference to `__sync_fetch_and_add_4'
libIpcIo.a(IpcIoFile.o): In function 
`Ipc::Atomic::WordT::swap_if(int, int)':
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:38: undefined 
reference to `__sync_bool_compare_and_swap_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT::get() 
const':
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:47: undefined 
reference to `__sync_fetch_and_add_4'
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:47: undefined 
reference to `__sync_fetch_and_add_4'
libIpcIo.a(IpcIoFile.o): In function 
`Ipc::Atomic::WordT::swap_if(int, int)':
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:38: undefined 
reference to `__sync_bool_compare_and_swap_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT::get() 
const':
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:47: undefined 
reference to `__sync_fetch_and_add_4'
libIpcIo.a(IpcIoFile.o): In function 
`Ipc::Atomic::WordT::swap_if(int, int)':
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:38: undefined 
reference to `__sync_bool_compare_and_swap_4'
libIpcIo.a(IpcIoFile.o): In function 
`Ipc::Atomic::WordT::operator-=(int)':
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:32: undefined 
reference to `__sync_sub_and_fetch_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT::get() 
const':
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:47: undefined 
reference to `__sync_fetch_and_add_4'
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:47: undefined 
reference to `__sync_fetch_and_add_4'
/home/zsoft/proxy/squid-3.2.3/src/../src/ipc/AtomicWord.h:47: undefined 
reference to `__sync_fetch_and_add_4'
ipc/.libs/libipc.a(Queue.o): In function 
`Ipc::Atomic::WordT::swap_if(int, int)':
/home/zsoft/proxy/squid-3.2.3/src/ipc/../../src/ipc/AtomicWord.h:38: 
undefined reference to `__sync_bool_compare_and_swap_4'
ipc/.libs/libipc.a(Queue.o): In function 
`Ipc::Atomic::WordT::swap_if(int, int)':
/home/zsoft/proxy/squid-3.2.3/src/ipc/Queue.cc:256: undefined reference 
to `__sync_bool_compare_and_swap_4'
ipc/.libs/libipc.a(ReadWriteLock.o): In function 
`Ipc::Atomic::WordT::operator--(int)':
/home/zsoft/proxy/squid-3.2.3/src/ipc/../../src/ipc/AtomicWord.h:36: 
undefined reference to `__sync_fetch_and_sub_4'
ipc/.libs/libipc.a(ReadWriteLock.o): In function 
`Ipc::Atomic::WordT::operator+=(int)':
/home/zsoft/proxy/squid-3.2.3/src/ipc/../../src/ipc/AtomicWord.h:31: 
undefined reference to `__sync_add_and_fetch_4'
ipc/.libs/libipc.a(ReadWriteLock.o): In function 
`Ipc::Atomic::WordT::get() const':
/home/zsoft/proxy/squid-3.2.3/src/ipc/../../src/ipc/AtomicWord.h:47: 
undefined reference to `__sync_fetch_and_add_4'
/home/zsoft/proxy/squid-3.2.3/src/ipc/../../src/ipc/AtomicWord.h:47: 
undefined reference to `__sync_fetch_and_add_4'
/home/zsoft/proxy/squid-3.2.3/src/ipc/../../src/ipc/AtomicWord.h:47: 
undefined reference to `__sync_fetch_and_add_4'
/home/zsoft/proxy/squid-3.2.3/src/ipc/../../src/ipc/AtomicWord.h:47: 
undefined reference to `__sync_fetch

Re: [squid-users] Problems with squid Ntlm

2012-09-26 Thread Eliezer Croitoru

On 9/26/2012 11:15 PM, Noc Phibee Telecom wrote:

On 26/09/2012 17:31, Noc Phibee Telecom wrote:

Hi

I've been trying to find out why I have had a problem with my Squid for a
week now.

On the server I have:

-  a Squid using NTLM authentication, forwarding
-  to a DansGuardian for content filtering, forwarding
-  to a Squid used as a cache


The problem is that users complain they are disconnected from their
sessions (Citrix), something that was not happening a week ago.

If users connect directly to the Squid used as a cache, there is no
problem. It looks like a problem with NTLM auth, but I cannot find why.


Does anyone have an idea for debugging this?

Thanks,
Jerome






Any ideas?



Can you give more data on the environment?
Can you use Kerberos? It is a better choice.
You could try squidGuard instead of DansGuardian, which puts another
proxy in the chain; that is a bad idea and could be the reason.

Just use one Squid with NTLM for both caching and filtering.

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Problems with squid Ntlm

2012-09-26 Thread Noc Phibee Telecom

On 26/09/2012 17:31, Noc Phibee Telecom wrote:

Hi

I've been trying to find out why I have had a problem with my Squid for a
week now.

On the server I have:

-  a Squid using NTLM authentication, forwarding
-  to a DansGuardian for content filtering, forwarding
-  to a Squid used as a cache


The problem is that users complain they are disconnected from their
sessions (Citrix), something that was not happening a week ago.

If users connect directly to the Squid used as a cache, there is no
problem. It looks like a problem with NTLM auth, but I cannot find why.


Does anyone have an idea for debugging this?

Thanks,
Jerome






Any ideas?




[squid-users] Problems with squid Ntlm

2012-09-26 Thread Noc Phibee Telecom

Hi

I've been trying to find out why I have had a problem with my Squid for a
week now.

On the server I have:

-  a Squid using NTLM authentication, forwarding
-  to a DansGuardian for content filtering, forwarding
-  to a Squid used as a cache


The problem is that users complain they are disconnected from their
sessions (Citrix), something that was not happening a week ago.

If users connect directly to the Squid used as a cache, there is no
problem. It looks like a problem with NTLM auth, but I cannot find why.


Does anyone have an idea for debugging this?

Thanks,
Jerome



Re: [squid-users] problems with ssl_crtd

2012-09-25 Thread Linos
Sure, you have it attached.

Miguel Angel.

On 24/09/12 20:10, Ahmed Talha Khan wrote:
> Linos,
> 
> I have not debugged the issue yet. I will post results when I do.
> 
> Can anyone kindly provide the FATAL patch?
> 
> -talha
> 
> On Mon, Sep 24, 2012 at 9:47 PM, Linos  wrote:
>> On 24/09/12 12:52, Amos Jeffries wrote:
>>> On 24/09/2012 8:44 p.m., Linos wrote:
 On 20/09/12 12:58, Ahmed Talha Khan wrote:
> Hey Guys, All
>
> I have started facing a very similar issue now. I have been using
> squid-3.HEAD-20120421-r12120 for about 5 months without any issues.
> Suddenly, since yesterday, I've started getting crashes in the ssl_crtd
> process.
>
>
> In my case I am the only user, but I observe that the behaviour is
> random. Sometimes it crashes and sometimes it works. Different HTTPS
> pages give the crash. Even non-HTTPS pages have caused the crash.
>
> These occur especially on Google HTTPS pages like docs, mail, calendar
> etc.
>
> The signing cert is also ok and has NOT expired.
>
>
> My squid conf looks like this:
> ***
> sslproxy_cert_error allow all
>
> sslcrtd_program /usr/local/squid-3.3/libexec/ssl_crtd -s
> /usr/local/squid-3.3/var/lib/ssl_db -M 4MB
> sslcrtd_children 5
>
> http_port 192.168.8.134:3128 ssl-bump generate-host-certificates=on
> dynamic_cert_mem_cache_size=4MB
> cert=/home/asif/squid/www.sample.com.pem
> key=/home/asif/squid/www.sample.com.pem
>
> http_port 192.168.8.134:8080
>
> https_port 192.168.8.134:3129 ssl-bump generate-host-certificates=on
> dynamic_cert_mem_cache_size=4MB
> cert=/home/asif/squid/www.sample.com.pem
> key=/home/asif/squid/www.sample.com.pem
> ***
>
> The ssl_db directory is initialized properly with correct permissions.
>
> ***
> [talha@localhost lib]$ pwd
> /usr/local/squid-3.3/var/lib
>
> [talha@localhost lib]$ ls -al
> total 24
> drwxrwxrwx 3 root   root  4096 Sep 20 15:31 .
> drwxrwxrwx 6 root   root  4096 Sep 20 15:05 ..
> drwxrwxrwx 3 nobody talha 4096 Sep 20 15:31 ssl_db
>
> The size file also has some values in it, and cert generation also
> seems to work, but then it all suddenly crashes.
> **
>
>
>
> 2012/09/20 14:57:45| Starting Squid Cache version
> 3.HEAD-20120425-r12120 for x86_64-unknown-linux-gnu...
> 2012/09/20 14:57:45| Process ID 23826
> 2012/09/20 14:57:45| Process Roles: master worker
> 2012/09/20 14:57:45| With 1024 file descriptors available
> 2012/09/20 14:57:45| Initializing IP Cache...
> 2012/09/20 14:57:45| DNS Socket created at [::], FD 5
> 2012/09/20 14:57:45| DNS Socket created at 0.0.0.0, FD 6
> 2012/09/20 14:57:45| Adding nameserver 192.168.8.1 from /etc/resolv.conf
> 2012/09/20 14:57:45| Adding domain localdomain from /etc/resolv.conf
> 2012/09/20 14:57:45| helperOpenServers: Starting 5/5 'ssl_crtd' processes
> 2012/09/20 14:57:45| Logfile: opening log
> daemon:/usr/local/squid-3.3/var/logs/access.log
> 2012/09/20 14:57:45| Logfile Daemon: opening log
> /usr/local/squid-3.3/var/logs/access.log
> 2012/09/20 14:57:45| Logfile: opening log 
> /usr/local/squid-3.3/var/logs/icap-log
> 2012/09/20 14:57:45| WARNING: log parameters now start with a module
> name. Use 'stdio:/usr/local/squid-3.3/var/logs/icap-log'
>
>
> 2012/09/20 14:57:45| Store logging disabled
> 2012/09/20 14:57:45| Swap maxSize 0 + 262144 KB, estimated 20164 objects
> 2012/09/20 14:57:45| Target number of buckets: 1008
> 2012/09/20 14:57:45| Using 8192 Store buckets
> 2012/09/20 14:57:45| Max Mem  size: 262144 KB
> 2012/09/20 14:57:45| Max Swap size: 0 KB
> 2012/09/20 14:57:45| Using Least Load store dir selection
> 2012/09/20 14:57:45| Set Current Directory to 
> /usr/local/squid-3.3/var/cache
> 2012/09/20 14:57:45| Loaded Icons.
> 2012/09/20 14:57:45| HTCP Disabled.
> 2012/09/20 14:57:45| /usr/local/squid-3.3/var/run/squid.pid: (13)
> Permission denied
> 2012/09/20 14:57:45| WARNING: Could not write pid file
> 2012/09/20 14:57:45| Squid plugin modules loaded: 0
> 2012/09/20 14:57:45| Adaptation support is on
> 2012/09/20 14:57:45| Accepting SSL bumped HTTP Socket connections at
> local=192.168.8.134:3128 remote=[::] FD 20 flags=9
> 2012/09/20 14:57:45| Accepting HTTP Socket connections at
> local=192.168.8.134:8080 remote=[::] FD 21 flags=9
> 2012/09/20 14:57:45| Accepting SSL bumped HTTPS Socket connections at
> local=192.168.8.134:3129 remote=[::] FD 22 flags=9
> 2012/09/20 14:57:46| storeLateRelease: released 0 objects
>
> (ssl_crtd): Cannot create
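The "(ssl_crtd): Cannot create ssl certificate or private key" failure in the quoted log usually means the helper cannot read or write its certificate database. A common recovery, sketched as a setup fragment under the assumption that the paths from the quoted config apply and that the helper runs as the "nobody" user shown in the ls output (adjust both to your install, and run it while Squid is stopped):

```shell
# Hypothetical recovery sketch: re-create the ssl_crtd database and give
# it to the user the helper runs as.  Path and user are assumptions
# taken from the quoted config and ls output, not verified values.
SSL_DB=/usr/local/squid-3.3/var/lib/ssl_db

rm -rf "$SSL_DB"                                      # drop the old DB
/usr/local/squid-3.3/libexec/ssl_crtd -c -s "$SSL_DB" # -c = create mode
chown -R nobody "$SSL_DB"                             # match helper's user
```

If the helper still exits, running the same ssl_crtd command by hand as that user usually surfaces the underlying OpenSSL error.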

Re: [squid-users] problems with ssl_crtd

2012-09-24 Thread Ahmed Talha Khan
Linos,

I have not debugged the issue yet. I will post results when I do.

Can anyone kindly provide the FATAL patch?

-talha

On Mon, Sep 24, 2012 at 9:47 PM, Linos  wrote:
> On 24/09/12 12:52, Amos Jeffries wrote:
>> On 24/09/2012 8:44 p.m., Linos wrote:
>>> On 20/09/12 12:58, Ahmed Talha Khan wrote:
 Hey Guys, All

 I have started facing a very similar issue now. I have been using
 squid-3.HEAD-20120421-r12120 for about 5 months without any issues.
 Suddenly, since yesterday, I've started getting crashes in the ssl_crtd
 process.


 In my case I am the only user, but I observe that the behaviour is
 random. Sometimes it crashes and sometimes it works. Different HTTPS
 pages give the crash. Even non-HTTPS pages have caused the crash.

 These occur especially on Google HTTPS pages like docs, mail, calendar
 etc.

 The signing cert is also ok and has NOT expired.


 My squid conf looks like this:
 ***
 sslproxy_cert_error allow all

 sslcrtd_program /usr/local/squid-3.3/libexec/ssl_crtd -s
 /usr/local/squid-3.3/var/lib/ssl_db -M 4MB
 sslcrtd_children 5

 http_port 192.168.8.134:3128 ssl-bump generate-host-certificates=on
 dynamic_cert_mem_cache_size=4MB
 cert=/home/asif/squid/www.sample.com.pem
 key=/home/asif/squid/www.sample.com.pem

 http_port 192.168.8.134:8080

 https_port 192.168.8.134:3129 ssl-bump generate-host-certificates=on
 dynamic_cert_mem_cache_size=4MB
 cert=/home/asif/squid/www.sample.com.pem
 key=/home/asif/squid/www.sample.com.pem
 ***

 The ssl_db directory is initialized properly with correct permissions.

 ***
 [talha@localhost lib]$ pwd
 /usr/local/squid-3.3/var/lib

 [talha@localhost lib]$ ls -al
 total 24
 drwxrwxrwx 3 root   root  4096 Sep 20 15:31 .
 drwxrwxrwx 6 root   root  4096 Sep 20 15:05 ..
 drwxrwxrwx 3 nobody talha 4096 Sep 20 15:31 ssl_db

 The size file also has some values in it, and cert generation also
 seems to work, but then it all suddenly crashes.
 **



 2012/09/20 14:57:45| Starting Squid Cache version
 3.HEAD-20120425-r12120 for x86_64-unknown-linux-gnu...
 2012/09/20 14:57:45| Process ID 23826
 2012/09/20 14:57:45| Process Roles: master worker
 2012/09/20 14:57:45| With 1024 file descriptors available
 2012/09/20 14:57:45| Initializing IP Cache...
 2012/09/20 14:57:45| DNS Socket created at [::], FD 5
 2012/09/20 14:57:45| DNS Socket created at 0.0.0.0, FD 6
 2012/09/20 14:57:45| Adding nameserver 192.168.8.1 from /etc/resolv.conf
 2012/09/20 14:57:45| Adding domain localdomain from /etc/resolv.conf
 2012/09/20 14:57:45| helperOpenServers: Starting 5/5 'ssl_crtd' processes
 2012/09/20 14:57:45| Logfile: opening log
 daemon:/usr/local/squid-3.3/var/logs/access.log
 2012/09/20 14:57:45| Logfile Daemon: opening log
 /usr/local/squid-3.3/var/logs/access.log
 2012/09/20 14:57:45| Logfile: opening log 
 /usr/local/squid-3.3/var/logs/icap-log
 2012/09/20 14:57:45| WARNING: log parameters now start with a module
 name. Use 'stdio:/usr/local/squid-3.3/var/logs/icap-log'


 2012/09/20 14:57:45| Store logging disabled
 2012/09/20 14:57:45| Swap maxSize 0 + 262144 KB, estimated 20164 objects
 2012/09/20 14:57:45| Target number of buckets: 1008
 2012/09/20 14:57:45| Using 8192 Store buckets
 2012/09/20 14:57:45| Max Mem  size: 262144 KB
 2012/09/20 14:57:45| Max Swap size: 0 KB
 2012/09/20 14:57:45| Using Least Load store dir selection
 2012/09/20 14:57:45| Set Current Directory to 
 /usr/local/squid-3.3/var/cache
 2012/09/20 14:57:45| Loaded Icons.
 2012/09/20 14:57:45| HTCP Disabled.
 2012/09/20 14:57:45| /usr/local/squid-3.3/var/run/squid.pid: (13)
 Permission denied
 2012/09/20 14:57:45| WARNING: Could not write pid file
 2012/09/20 14:57:45| Squid plugin modules loaded: 0
 2012/09/20 14:57:45| Adaptation support is on
 2012/09/20 14:57:45| Accepting SSL bumped HTTP Socket connections at
 local=192.168.8.134:3128 remote=[::] FD 20 flags=9
 2012/09/20 14:57:45| Accepting HTTP Socket connections at
 local=192.168.8.134:8080 remote=[::] FD 21 flags=9
 2012/09/20 14:57:45| Accepting SSL bumped HTTPS Socket connections at
 local=192.168.8.134:3129 remote=[::] FD 22 flags=9
 2012/09/20 14:57:46| storeLateRelease: released 0 objects

 (ssl_crtd): Cannot create ssl certificate or private key.
 2012/09/20 14:58:23| WARNING: ssl_crtd #2 exited
 2012/09/20 14:58:23| Too few ssl_crtd processes are running (need 1/5)

 2012/09/20 14:58:23| Starting new help

Re: [squid-users] problems with ssl_crtd

2012-09-24 Thread Linos
On 24/09/12 12:52, Amos Jeffries wrote:
> On 24/09/2012 8:44 p.m., Linos wrote:
>> On 20/09/12 12:58, Ahmed Talha Khan wrote:
>>> Hey Guys, All
>>>
>>> I have started facing a very similar issue now. I have been using
>>> squid-3.HEAD-20120421-r12120 for about 5 months without any issues.
>>> Suddenly, since yesterday, I've started getting crashes in the ssl_crtd
>>> process.
>>>
>>>
>>> In my case I am the only user, but I observe that the behaviour is
>>> random. Sometimes it crashes and sometimes it works. Different HTTPS
>>> pages give the crash. Even non-HTTPS pages have caused the crash.
>>>
>>> These occur especially on Google HTTPS pages like docs, mail, calendar etc.
>>>
>>> The signing cert is also ok and has NOT expired.
>>>
>>>
>>> My squid conf looks like this:
>>> ***
>>> sslproxy_cert_error allow all
>>>
>>> sslcrtd_program /usr/local/squid-3.3/libexec/ssl_crtd -s
>>> /usr/local/squid-3.3/var/lib/ssl_db -M 4MB
>>> sslcrtd_children 5
>>>
>>> http_port 192.168.8.134:3128 ssl-bump generate-host-certificates=on
>>> dynamic_cert_mem_cache_size=4MB
>>> cert=/home/asif/squid/www.sample.com.pem
>>> key=/home/asif/squid/www.sample.com.pem
>>>
>>> http_port 192.168.8.134:8080
>>>
>>> https_port 192.168.8.134:3129 ssl-bump generate-host-certificates=on
>>> dynamic_cert_mem_cache_size=4MB
>>> cert=/home/asif/squid/www.sample.com.pem
>>> key=/home/asif/squid/www.sample.com.pem
>>> ***
>>>
>>> The ssl_db directory is initialized properly with correct permissions.
>>>
>>> ***
>>> [talha@localhost lib]$ pwd
>>> /usr/local/squid-3.3/var/lib
>>>
>>> [talha@localhost lib]$ ls -al
>>> total 24
>>> drwxrwxrwx 3 root   root  4096 Sep 20 15:31 .
>>> drwxrwxrwx 6 root   root  4096 Sep 20 15:05 ..
>>> drwxrwxrwx 3 nobody talha 4096 Sep 20 15:31 ssl_db
>>>
>>> The size file also has some values in it, and cert generation also
>>> seems to work, but then it all suddenly crashes.
>>> **
>>>
>>>
>>>
>>> 2012/09/20 14:57:45| Starting Squid Cache version
>>> 3.HEAD-20120425-r12120 for x86_64-unknown-linux-gnu...
>>> 2012/09/20 14:57:45| Process ID 23826
>>> 2012/09/20 14:57:45| Process Roles: master worker
>>> 2012/09/20 14:57:45| With 1024 file descriptors available
>>> 2012/09/20 14:57:45| Initializing IP Cache...
>>> 2012/09/20 14:57:45| DNS Socket created at [::], FD 5
>>> 2012/09/20 14:57:45| DNS Socket created at 0.0.0.0, FD 6
>>> 2012/09/20 14:57:45| Adding nameserver 192.168.8.1 from /etc/resolv.conf
>>> 2012/09/20 14:57:45| Adding domain localdomain from /etc/resolv.conf
>>> 2012/09/20 14:57:45| helperOpenServers: Starting 5/5 'ssl_crtd' processes
>>> 2012/09/20 14:57:45| Logfile: opening log
>>> daemon:/usr/local/squid-3.3/var/logs/access.log
>>> 2012/09/20 14:57:45| Logfile Daemon: opening log
>>> /usr/local/squid-3.3/var/logs/access.log
>>> 2012/09/20 14:57:45| Logfile: opening log 
>>> /usr/local/squid-3.3/var/logs/icap-log
>>> 2012/09/20 14:57:45| WARNING: log parameters now start with a module
>>> name. Use 'stdio:/usr/local/squid-3.3/var/logs/icap-log'
>>>
>>>
>>> 2012/09/20 14:57:45| Store logging disabled
>>> 2012/09/20 14:57:45| Swap maxSize 0 + 262144 KB, estimated 20164 objects
>>> 2012/09/20 14:57:45| Target number of buckets: 1008
>>> 2012/09/20 14:57:45| Using 8192 Store buckets
>>> 2012/09/20 14:57:45| Max Mem  size: 262144 KB
>>> 2012/09/20 14:57:45| Max Swap size: 0 KB
>>> 2012/09/20 14:57:45| Using Least Load store dir selection
>>> 2012/09/20 14:57:45| Set Current Directory to /usr/local/squid-3.3/var/cache
>>> 2012/09/20 14:57:45| Loaded Icons.
>>> 2012/09/20 14:57:45| HTCP Disabled.
>>> 2012/09/20 14:57:45| /usr/local/squid-3.3/var/run/squid.pid: (13)
>>> Permission denied
>>> 2012/09/20 14:57:45| WARNING: Could not write pid file
>>> 2012/09/20 14:57:45| Squid plugin modules loaded: 0
>>> 2012/09/20 14:57:45| Adaptation support is on
>>> 2012/09/20 14:57:45| Accepting SSL bumped HTTP Socket connections at
>>> local=192.168.8.134:3128 remote=[::] FD 20 flags=9
>>> 2012/09/20 14:57:45| Accepting HTTP Socket connections at
>>> local=192.168.8.134:8080 remote=[::] FD 21 flags=9
>>> 2012/09/20 14:57:45| Accepting SSL bumped HTTPS Socket connections at
>>> local=192.168.8.134:3129 remote=[::] FD 22 flags=9
>>> 2012/09/20 14:57:46| storeLateRelease: released 0 objects
>>>
>>> (ssl_crtd): Cannot create ssl certificate or private key.
>>> 2012/09/20 14:58:23| WARNING: ssl_crtd #2 exited
>>> 2012/09/20 14:58:23| Too few ssl_crtd processes are running (need 1/5)
>>>
>>> 2012/09/20 14:58:23| Starting new helpers
>>> 2012/09/20 14:58:23| helperOpenServers: Starting 1/5 'ssl_crtd' processes
>>> 2012/09/20 14:58:23| client_side.cc(3478) sslCrtdHandleReply:
>>> "ssl_crtd" helper return  reply
>>> (ssl_crtd): Cannot create ssl certificate or private key.
>>>
>>> 2012/09/20 14:58:23| WARNING: ssl_crtd #1 exi

Re: [squid-users] problems with ssl_crtd

2012-09-24 Thread Linos
On 24/09/12 12:52, Amos Jeffries wrote:
> On 24/09/2012 8:44 p.m., Linos wrote:
>> On 20/09/12 12:58, Ahmed Talha Khan wrote:
>>> Hey Guys, All
>>>
>>> I have started facing a very similar issue now. I have been using
>>> squid-3.HEAD-20120421-r12120 for about 5 months without any issues.
>>> Suddenly, since yesterday, I've started getting crashes in the ssl_crtd
>>> process.
>>>
>>>
>>> In my case I am the only user, but I observe that the behaviour is
>>> random. Sometimes it crashes and sometimes it works. Different HTTPS
>>> pages give the crash. Even non-HTTPS pages have caused the crash.
>>>
>>> These occur especially on Google HTTPS pages like docs, mail, calendar etc.
>>>
>>> The signing cert is also ok and has NOT expired.
>>>
>>>
>>> My squid conf looks like this:
>>> ***
>>> sslproxy_cert_error allow all
>>>
>>> sslcrtd_program /usr/local/squid-3.3/libexec/ssl_crtd -s
>>> /usr/local/squid-3.3/var/lib/ssl_db -M 4MB
>>> sslcrtd_children 5
>>>
>>> http_port 192.168.8.134:3128 ssl-bump generate-host-certificates=on
>>> dynamic_cert_mem_cache_size=4MB
>>> cert=/home/asif/squid/www.sample.com.pem
>>> key=/home/asif/squid/www.sample.com.pem
>>>
>>> http_port 192.168.8.134:8080
>>>
>>> https_port 192.168.8.134:3129 ssl-bump generate-host-certificates=on
>>> dynamic_cert_mem_cache_size=4MB
>>> cert=/home/asif/squid/www.sample.com.pem
>>> key=/home/asif/squid/www.sample.com.pem
>>> ***
>>>
>>> The ssl_db directory is initialized properly with correct permissions.
>>>
>>> ***
>>> [talha@localhost lib]$ pwd
>>> /usr/local/squid-3.3/var/lib
>>>
>>> [talha@localhost lib]$ ls -al
>>> total 24
>>> drwxrwxrwx 3 root   root  4096 Sep 20 15:31 .
>>> drwxrwxrwx 6 root   root  4096 Sep 20 15:05 ..
>>> drwxrwxrwx 3 nobody talha 4096 Sep 20 15:31 ssl_db
>>>
>>> The size file also has some values in it, and cert generation also
>>> seems to work, but then it all suddenly crashes.
>>> **
>>>
>>>
>>>
>>> 2012/09/20 14:57:45| Starting Squid Cache version
>>> 3.HEAD-20120425-r12120 for x86_64-unknown-linux-gnu...
>>> 2012/09/20 14:57:45| Process ID 23826
>>> 2012/09/20 14:57:45| Process Roles: master worker
>>> 2012/09/20 14:57:45| With 1024 file descriptors available
>>> 2012/09/20 14:57:45| Initializing IP Cache...
>>> 2012/09/20 14:57:45| DNS Socket created at [::], FD 5
>>> 2012/09/20 14:57:45| DNS Socket created at 0.0.0.0, FD 6
>>> 2012/09/20 14:57:45| Adding nameserver 192.168.8.1 from /etc/resolv.conf
>>> 2012/09/20 14:57:45| Adding domain localdomain from /etc/resolv.conf
>>> 2012/09/20 14:57:45| helperOpenServers: Starting 5/5 'ssl_crtd' processes
>>> 2012/09/20 14:57:45| Logfile: opening log
>>> daemon:/usr/local/squid-3.3/var/logs/access.log
>>> 2012/09/20 14:57:45| Logfile Daemon: opening log
>>> /usr/local/squid-3.3/var/logs/access.log
>>> 2012/09/20 14:57:45| Logfile: opening log 
>>> /usr/local/squid-3.3/var/logs/icap-log
>>> 2012/09/20 14:57:45| WARNING: log parameters now start with a module
>>> name. Use 'stdio:/usr/local/squid-3.3/var/logs/icap-log'
>>>
>>>
>>> 2012/09/20 14:57:45| Store logging disabled
>>> 2012/09/20 14:57:45| Swap maxSize 0 + 262144 KB, estimated 20164 objects
>>> 2012/09/20 14:57:45| Target number of buckets: 1008
>>> 2012/09/20 14:57:45| Using 8192 Store buckets
>>> 2012/09/20 14:57:45| Max Mem  size: 262144 KB
>>> 2012/09/20 14:57:45| Max Swap size: 0 KB
>>> 2012/09/20 14:57:45| Using Least Load store dir selection
>>> 2012/09/20 14:57:45| Set Current Directory to /usr/local/squid-3.3/var/cache
>>> 2012/09/20 14:57:45| Loaded Icons.
>>> 2012/09/20 14:57:45| HTCP Disabled.
>>> 2012/09/20 14:57:45| /usr/local/squid-3.3/var/run/squid.pid: (13)
>>> Permission denied
>>> 2012/09/20 14:57:45| WARNING: Could not write pid file
>>> 2012/09/20 14:57:45| Squid plugin modules loaded: 0
>>> 2012/09/20 14:57:45| Adaptation support is on
>>> 2012/09/20 14:57:45| Accepting SSL bumped HTTP Socket connections at
>>> local=192.168.8.134:3128 remote=[::] FD 20 flags=9
>>> 2012/09/20 14:57:45| Accepting HTTP Socket connections at
>>> local=192.168.8.134:8080 remote=[::] FD 21 flags=9
>>> 2012/09/20 14:57:45| Accepting SSL bumped HTTPS Socket connections at
>>> local=192.168.8.134:3129 remote=[::] FD 22 flags=9
>>> 2012/09/20 14:57:46| storeLateRelease: released 0 objects
>>>
>>> (ssl_crtd): Cannot create ssl certificate or private key.
>>> 2012/09/20 14:58:23| WARNING: ssl_crtd #2 exited
>>> 2012/09/20 14:58:23| Too few ssl_crtd processes are running (need 1/5)
>>>
>>> 2012/09/20 14:58:23| Starting new helpers
>>> 2012/09/20 14:58:23| helperOpenServers: Starting 1/5 'ssl_crtd' processes
>>> 2012/09/20 14:58:23| client_side.cc(3478) sslCrtdHandleReply:
>>> "ssl_crtd" helper return  reply
>>> (ssl_crtd): Cannot create ssl certificate or private key.
>>>
>>> 2012/09/20 14:58:23| WARNING: ssl_crtd #1 exi

Re: [squid-users] problems with ssl_crtd

2012-09-24 Thread Amos Jeffries

On 24/09/2012 8:44 p.m., Linos wrote:

On 20/09/12 12:58, Ahmed Talha Khan wrote:

Hey Guys, All

I have started facing a very similar issue now. I have been using
squid-3.HEAD-20120421-r12120 for about 5 months without any issues.
Suddenly, since yesterday, I've started getting crashes in the ssl_crtd
process.


In my case I am the only user, but I observe that the behaviour is
random. Sometimes it crashes and sometimes it works. Different HTTPS
pages give the crash. Even non-HTTPS pages have caused the crash.

These occur especially on Google HTTPS pages like docs, mail, calendar etc.

The signing cert is also ok and has NOT expired.


My squid conf looks like this:
***
sslproxy_cert_error allow all

sslcrtd_program /usr/local/squid-3.3/libexec/ssl_crtd -s
/usr/local/squid-3.3/var/lib/ssl_db -M 4MB
sslcrtd_children 5

http_port 192.168.8.134:3128 ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB
cert=/home/asif/squid/www.sample.com.pem
key=/home/asif/squid/www.sample.com.pem

http_port 192.168.8.134:8080

https_port 192.168.8.134:3129 ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB
cert=/home/asif/squid/www.sample.com.pem
key=/home/asif/squid/www.sample.com.pem
***

The ssl_db directory is initialized properly with correct permissions.

***
[talha@localhost lib]$ pwd
/usr/local/squid-3.3/var/lib

[talha@localhost lib]$ ls -al
total 24
drwxrwxrwx 3 root   root  4096 Sep 20 15:31 .
drwxrwxrwx 6 root   root  4096 Sep 20 15:05 ..
drwxrwxrwx 3 nobody talha 4096 Sep 20 15:31 ssl_db

The size file also has some values in it, and cert generation also
seems to work, but then it all suddenly crashes.
**



2012/09/20 14:57:45| Starting Squid Cache version
3.HEAD-20120425-r12120 for x86_64-unknown-linux-gnu...
2012/09/20 14:57:45| Process ID 23826
2012/09/20 14:57:45| Process Roles: master worker
2012/09/20 14:57:45| With 1024 file descriptors available
2012/09/20 14:57:45| Initializing IP Cache...
2012/09/20 14:57:45| DNS Socket created at [::], FD 5
2012/09/20 14:57:45| DNS Socket created at 0.0.0.0, FD 6
2012/09/20 14:57:45| Adding nameserver 192.168.8.1 from /etc/resolv.conf
2012/09/20 14:57:45| Adding domain localdomain from /etc/resolv.conf
2012/09/20 14:57:45| helperOpenServers: Starting 5/5 'ssl_crtd' processes
2012/09/20 14:57:45| Logfile: opening log
daemon:/usr/local/squid-3.3/var/logs/access.log
2012/09/20 14:57:45| Logfile Daemon: opening log
/usr/local/squid-3.3/var/logs/access.log
2012/09/20 14:57:45| Logfile: opening log /usr/local/squid-3.3/var/logs/icap-log
2012/09/20 14:57:45| WARNING: log parameters now start with a module
name. Use 'stdio:/usr/local/squid-3.3/var/logs/icap-log'


2012/09/20 14:57:45| Store logging disabled
2012/09/20 14:57:45| Swap maxSize 0 + 262144 KB, estimated 20164 objects
2012/09/20 14:57:45| Target number of buckets: 1008
2012/09/20 14:57:45| Using 8192 Store buckets
2012/09/20 14:57:45| Max Mem  size: 262144 KB
2012/09/20 14:57:45| Max Swap size: 0 KB
2012/09/20 14:57:45| Using Least Load store dir selection
2012/09/20 14:57:45| Set Current Directory to /usr/local/squid-3.3/var/cache
2012/09/20 14:57:45| Loaded Icons.
2012/09/20 14:57:45| HTCP Disabled.
2012/09/20 14:57:45| /usr/local/squid-3.3/var/run/squid.pid: (13)
Permission denied
2012/09/20 14:57:45| WARNING: Could not write pid file
2012/09/20 14:57:45| Squid plugin modules loaded: 0
2012/09/20 14:57:45| Adaptation support is on
2012/09/20 14:57:45| Accepting SSL bumped HTTP Socket connections at
local=192.168.8.134:3128 remote=[::] FD 20 flags=9
2012/09/20 14:57:45| Accepting HTTP Socket connections at
local=192.168.8.134:8080 remote=[::] FD 21 flags=9
2012/09/20 14:57:45| Accepting SSL bumped HTTPS Socket connections at
local=192.168.8.134:3129 remote=[::] FD 22 flags=9
2012/09/20 14:57:46| storeLateRelease: released 0 objects

(ssl_crtd): Cannot create ssl certificate or private key.
2012/09/20 14:58:23| WARNING: ssl_crtd #2 exited
2012/09/20 14:58:23| Too few ssl_crtd processes are running (need 1/5)

2012/09/20 14:58:23| Starting new helpers
2012/09/20 14:58:23| helperOpenServers: Starting 1/5 'ssl_crtd' processes
2012/09/20 14:58:23| client_side.cc(3478) sslCrtdHandleReply:
"ssl_crtd" helper return  reply
(ssl_crtd): Cannot create ssl certificate or private key.

2012/09/20 14:58:23| WARNING: ssl_crtd #1 exited
2012/09/20 14:58:23| Too few ssl_crtd processes are running (need 1/5)
2012/09/20 14:58:23| storeDirWriteCleanLogs: Starting...
2012/09/20 14:58:23|   Finished.  Wrote 0 entries.
2012/09/20 14:58:23|   Took 0.00 seconds (  0.00 entries/sec).
FATAL: The ssl_crtd helpers are crashing too rapidly, need help!

Squid Cache (Version 3.HEAD-20120425-r12120): Terminated abnormally.
CPU Usage: 0.355 seconds = 0.289 user + 0.066 sys
Maximum Resident Size: 71104 KB
P

Re: [squid-users] problems with ssl_crtd

2012-09-24 Thread Linos
On 20/09/12 12:58, Ahmed Talha Khan wrote:

Re: [squid-users] problems with ssl_crtd

2012-09-21 Thread Linos
On 21/09/12 09:20, Amos Jeffries wrote:

I tested squid-3.HEAD-20120921-r12321; Squid itself crashes very quickly with
this version, so I have had no time to test the SSL problem:

squid3 -N
2012/09/21 11:09:49| SECURITY NOTICE: auto-converting deprecated "ssl_bump allow
" to "ssl_bump client-first " which is usually inferior to the newer
server-first bumping mode. Update your ssl_bump rules.
Abortado (`core' generado)

About the core file: no matter what I put in squid.conf, Squid does not generate
it. I have this line right now:
coredump_dir /var/log/squid3

I have also tried using the Squid cache_dir itself, which does not work either.
I have run it in gdb and got this backtrace:


#0  0x7579a445 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7579dbab in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x556cf63d in xassert (
msg=0x55906778 "!conn() || conn()->clientConnection == NULL ||
conn()->clientConnection->fd == aDescriptor", file=, line=103)
at debug.cc:565
#3  0x557c8985 in ACLFilledChecklist::fd (this=0x5691b418,
aDescriptor=11) at FilledChecklist.cc:103
#4  0x556f73bd in FwdState::initiateSSL (this=0x57b00268)
at forward.cc:831
#5  0x557fd204 in AsyncCall::make (this=0x577c9cf0)
at AsyncCall.cc:35
#6  0x55800227 in AsyncCallQueue::fireNext (this=)
at AsyncCallQueue.cc:52
#7  0x55800380 in AsyncCallQueue::fire (this=0x55d5aba0)
at AsyncCallQueue.cc:38
#8  0x556e8604 in EventLoop::runOnce (this=0x7fffe460)
at EventLoop.cc:130
#9  0x556e86d8 in EventLoop::run (this=0x7fffe460)
at EventLoop.cc:94
#10 0x55749249 in SquidMain (argc=,
argv=) at main.cc:1518
#11 0x55678536 in SquidMainSafe (argv=,
argc=) at main.cc:1240
#12 main (argc=, argv=) at main.cc:1232


Regards,
Miguel Angel.
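
For the missing core file: coredump_dir only tells Squid which directory to change into; the kernel must also permit core files at all. A quick generic-Linux check (not Squid-specific, a sketch only):

```shell
# coredump_dir sets the directory, but cores are still suppressed if the
# process core-size rlimit is 0 or the kernel pipes cores elsewhere.
ulimit -c                          # "0" means no core files for children
cat /proc/sys/kernel/core_pattern  # a leading '|' pipes cores to a program
ulimit -c unlimited                # lift the limit in this shell,
ulimit -c                          # which now prints "unlimited",
                                   # then restart squid3 from this shell
```

Starting Squid from a shell prepared this way lets child helpers inherit the unlimited core size.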


Re: [squid-users] problems with ssl_crtd

2012-09-21 Thread Linos
On 21/09/12 09:20, Amos Jeffries wrote:

I have not tried a recent snapshot, but I am going to do that right now.

I have added the -d option; now I have this line in squid.conf:
sslcrtd_program /usr/lib/squid3/ssl_crtd -d -s /var/spool/squid3/squid_ssl_db -M
16MB

Still, I don't get anything new in cache.log. This is the last crash:

(ssl_crtd): Cannot create ssl certificate or private key.
2012/09/21 10:33:10| WARNING: ssl_crtd #2 exited
2012/09/21 10:33:10| Too few ssl_crtd processes are running (need 1/10)
2012/09/21 10:33:10| Starting new helpers
2012/09/21 10:33:10| helperOpenServers: Starting 1/10 'ssl_crtd' processes
2012/09/21 10:33:10| client_side.cc(3477) sslCrtdHandleReply: "ssl_crtd" helper
return  reply
(ssl_crtd): Cannot create ssl certificate or private key.
2012/09/21 10:33:10| WARNING: ssl_crtd #1 exited
2012/09/21 10:33:10| Too few ssl_crtd processes are running (need 1/10)
2012/09/21 10:33:10| Closing HTTP port 0.0.0.0:3128
2012/09/21 10:33:10| Closing HTTP port [::]:3150
2012/09/21 10:33:10| storeDirWriteCleanLogs: Starting...
2012/09/21 10:33:10| 65536 entries written so far.
2012/09/21 10:33:10|   Finished.  Wrote 112080 entries.
2012/09/21 10:33:10|   Took 0.04 seconds (2691254.86 entries/sec).
FATAL: The ssl_crtd helpers are crashing too rapidly, need help!

Squid Cache (Version 3.2.1): Terminated abnormally.
(ssl_crtd): Cannot create ssl certificate or private key.
CPU Usage: 1.196 seconds = 0.720 user + 0.476 sys
Maximum Resident Size: 199824 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:   34196 KB
Ordinary blocks:33966 KB 52 blks
Small blocks:   0 KB  1 blks
Holding blocks: 37268 KB  8 blks
Free Small blocks:  0 KB
Free Ordinary blocks: 229 KB
Total in use:   71234 KB 208%
Total free:   229 KB 1%


I have tried to attach to the five ssl_crtd processes, but after the crash I get:

[Inferior 1 (process 465) exited normally]
[Inferior 1 (process 463) exited normally]
[Inferior 1 (process 464) exited normally]
[Inferior 1 (process 466) exited with code 01]
[Inferior 1 (process 467) exited with code 01]

So no backtrace, neither in gdb nor in cache.log.

The environment problem seems to be related to Google domains; I don't know if I
could trigger it with others, but certainly not as easily.

I am going to try the latest snapshot in a while and post my results here.

Regards,
Miguel Angel.
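
"Cannot create ssl certificate or private key" is the helper's generic failure message, so before the next crash it can help to eyeball the database state by hand. A sketch, using the db path from the config above and assuming the on-disk layout discussed in this thread (a size file plus the stored certificates):

```shell
DB=/var/spool/squid3/squid_ssl_db   # path from the sslcrtd_program line above

ls -l "$DB"          # the database files and their ownership/permissions
cat "$DB/size"       # recorded byte total; an empty size file is the
                     # known bad state mentioned earlier in this thread
ls "$DB" | head      # spot-check the generated entries
```

If the recorded size disagrees wildly with the actual contents, rebuilding the database is the next step.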


Re: [squid-users] problems with ssl_crtd

2012-09-21 Thread Linos
On 20/09/12 12:58, Ahmed Talha Khan wrote:

I can confirm my problem is not reproducible with https://www.apple.com (for
example); at least, not nearly as easily as with Google domains.

Regards,
Miguel Angel.



Re: [squid-users] problems with ssl_crtd

2012-09-21 Thread Amos Jeffries
Firstly, is this problem still occurring with a recent snapshot? We have 
done a lot of stabilization on squid-3 in the months working up towards the 
3.2.1 release, and the SSL code has had two new features added to improve 
the bumping process and behaviours.



Secondly, the issue as you found is not in Squid but in the helper. You 
should be able to add the -d option to the helper command line to get a 
debug trace out of it into cache.log. Set Squid to a normal (0 or 1) debug 
level to avoid any Squid debug output confusing the helper traces.


In 3.2 helpers crashing is not usually a fatal event; you will simply 
see an annoying amount of this:

"

2012/09/20 14:58:23| WARNING: ssl_crtd #2 exited
2012/09/20 14:58:23| Too few ssl_crtd processes are running (need 1/5)
2012/09/20 14:58:23| Starting new helpers
"


In this case there is something in the cert database or system 
environment which is triggering the crash and persisting across into 
newly started helpers, crashing them as well. This is the one case where 
Squid is still killed by helpers dying faster than they can be sent 
lookups, thus the


"FATAL: The ssl_crtd helpers are crashing too rapidly, need help!"

HTH
Amos
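
When freshly started helpers keep dying the same way, a corrupted certificate database is the usual suspect. One recovery path, sketched only, using the /usr/local/squid-3.3 paths quoted in this thread (adjust prefix, db path, -M size and the user the helpers run as for your install):

```shell
# Sketch: move the ssl_crtd database aside and recreate it from scratch.
set -e
CRTD=/usr/local/squid-3.3/libexec/ssl_crtd
DB=/usr/local/squid-3.3/var/lib/ssl_db

squid -k shutdown || true        # stop Squid before touching the db
mv "$DB" "$DB.bak.$(date +%s)"   # keep the old db for post-mortem
"$CRTD" -c -s "$DB" -M 4MB       # -c initializes a fresh database
chown -R nobody "$DB"            # match your cache_effective_user
```

If the crashes stop after the rebuild, the saved copy can be diffed against the fresh one to find what the helpers were choking on.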



Re: [squid-users] problems with ssl_crtd

2012-09-20 Thread Guy Helmer

On Sep 20, 2012, at 4:52 AM, Linos  wrote:

> On 19/09/12 16:46, Guy Helmer wrote:
> 
> Hi,
>   i have been trying to debug with gdb attaching existing process, the strange
> it's that ssl_ctrd seems to exit normally in this test, here you have it 
> (sorry
> for the spanish locale, i will use english next time, the only file with 
> symbols
> it's ssl_crtd itself):
> 
> 
> GNU gdb (Ubuntu/Linaro 7.4-2012.04-0ubuntu2) 7.4-2012.04
> Copyright (C) 2012 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later 
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-linux-gnu".
> Para las instrucciones de informe de errores, vea:
> .
> (gdb) attach 10495
> Adjuntando a process 10495
> Leyendo símbolos desde /usr/lib/squid3/ssl_crtd...Leyendo símbolos desde
> /usr/lib/debug/usr/lib/squid3/ssl_crtd...hecho.
> hecho.
> Leyendo símbolos desde /lib/x86_64-linux-gnu/libcrypto.so.0.9.8...(no se
> encontraron símbolos de depuración)hecho.
> Símbolos cargados para /lib/x86_64-linux-gnu/libcrypto.so.0.9.8
> Leyendo símbolos desde /usr/lib/x86_64-linux-gnu/libstdc++.so.6...(no se
> encontraron símbolos de depuración)hecho.
> Símbolos cargados para /usr/lib/x86_64-linux-gnu/libstdc++.so.6
> Leyendo símbolos desde /lib/x86_64-linux-gnu/libgcc_s.so.1...(no se 
> encontraron
> símbolos de depuración)hecho.
> Símbolos cargados para /lib/x86_64-linux-gnu/libgcc_s.so.1
> Leyendo símbolos desde /lib/x86_64-linux-gnu/libc.so.6...(no se encontraron
> símbolos de depuración)hecho.
> Símbolos cargados para /lib/x86_64-linux-gnu/libc.so.6
> Leyendo símbolos desde /lib/x86_64-linux-gnu/libdl.so.2...(no se encontraron
> símbolos de depuración)hecho.
> Símbolos cargados para /lib/x86_64-linux-gnu/libdl.so.2
> Leyendo símbolos desde /lib/x86_64-linux-gnu/libz.so.1...(no se encontraron
> símbolos de depuración)hecho.
> Símbolos cargados para /lib/x86_64-linux-gnu/libz.so.1
> Leyendo símbolos desde /lib/x86_64-linux-gnu/libm.so.6...(no se encontraron
> símbolos de depuración)hecho.
> Símbolos cargados para /lib/x86_64-linux-gnu/libm.so.6
> Leyendo símbolos desde /lib64/ld-linux-x86-64.so.2...(no se encontraron 
> símbolos
> de depuración)hecho.
> Símbolos cargados para /lib64/ld-linux-x86-64.so.2
> 0x7f3ef414f0a0 in read () from /lib/x86_64-linux-gnu/libc.so.6
> (gdb) continue
> Continuando.
> [Inferior 1 (process 10495) exited normally]
> (gdb) bt
> No stack.

You may have attached to an ssl_crtd child process that successfully ran 
without a crash. If you can access some sites but not others, that could 
happen… 

> 
> I have tried attaching to squid3 process itself and i have received a signal 
> here:
> 
> GNU gdb (Ubuntu/Linaro 7.4-2012.04-0ubuntu2) 7.4-2012.04
> Copyright (C) 2012 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later 
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-linux-gnu".
> Para las instrucciones de informe de errores, vea:
> .
> (gdb) attach 10732
> Adjuntando a process 10732
> Leyendo símbolos desde /usr/sbin/squid3...coLeyendo símbolos desde
> /usr/lib/debug/usr/sbin/squid3...ntinue
> hecho.
> hecho.
> Leyendo símbolos desde /lib/x86_64-linux-gnu/libpthread.so.0...(no se
> encontraron símbolos de depuración)hecho.
> [Depuración de hilo usando libthread_db enabled]
> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
> Símbolos cargados para /lib/x86_64-linux-gnu/libpthread.so.0
> Leyendo símbolos desde /usr/lib/x86_64-linux-gnu/libxml2.so.2...(no se
> encontraron símbolos de depuración)hecho.
> Símbolos cargados para /usr/lib/x86_64-li

Re: [squid-users] problems with ssl_crtd

2012-09-20 Thread Linos
On 20/09/12 12:58, Ahmed Talha Khan wrote:

Re: [squid-users] problems with ssl_crtd

2012-09-20 Thread Ahmed Talha Khan
Hey Guy, All

I have started facing a very similar issue now. I have been using
squid-3.HEAD-20120421-r12120 for about 5 months without any issues.
Suddenly, since yesterday, I've started getting crashes in the ssl_crtd
process.


In my case I am the only user, but I observe that the behaviour is
random. Sometimes it crashes and sometimes it works. Different https
pages give the crash. Even non-https pages have caused the crash.

These occur especially on Google https pages like docs, mail, calendar, etc.

The signing cert is also OK and has NOT expired.


My squid conf looks like this:
***
sslproxy_cert_error allow all

sslcrtd_program /usr/local/squid-3.3/libexec/ssl_crtd -s
/usr/local/squid-3.3/var/lib/ssl_db -M 4MB
sslcrtd_children 5

http_port 192.168.8.134:3128 ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB
cert=/home/asif/squid/www.sample.com.pem
key=/home/asif/squid/www.sample.com.pem

http_port 192.168.8.134:8080

https_port 192.168.8.134:3129 ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB
cert=/home/asif/squid/www.sample.com.pem
key=/home/asif/squid/www.sample.com.pem
***

The ssl_db directory is initialized properly with correct permissions.

***
[talha@localhost lib]$ pwd
/usr/local/squid-3.3/var/lib

[talha@localhost lib]$ ls -al
total 24
drwxrwxrwx 3 root   root  4096 Sep 20 15:31 .
drwxrwxrwx 6 root   root  4096 Sep 20 15:05 ..
drwxrwxrwx 3 nobody talha 4096 Sep 20 15:31 ssl_db

The size file also has some values in it, and cert generation also
seems to work, but then suddenly it all crashes.
**



2012/09/20 14:57:45| Starting Squid Cache version
3.HEAD-20120425-r12120 for x86_64-unknown-linux-gnu...
2012/09/20 14:57:45| Process ID 23826
2012/09/20 14:57:45| Process Roles: master worker
2012/09/20 14:57:45| With 1024 file descriptors available
2012/09/20 14:57:45| Initializing IP Cache...
2012/09/20 14:57:45| DNS Socket created at [::], FD 5
2012/09/20 14:57:45| DNS Socket created at 0.0.0.0, FD 6
2012/09/20 14:57:45| Adding nameserver 192.168.8.1 from /etc/resolv.conf
2012/09/20 14:57:45| Adding domain localdomain from /etc/resolv.conf
2012/09/20 14:57:45| helperOpenServers: Starting 5/5 'ssl_crtd' processes
2012/09/20 14:57:45| Logfile: opening log
daemon:/usr/local/squid-3.3/var/logs/access.log
2012/09/20 14:57:45| Logfile Daemon: opening log
/usr/local/squid-3.3/var/logs/access.log
2012/09/20 14:57:45| Logfile: opening log /usr/local/squid-3.3/var/logs/icap-log
2012/09/20 14:57:45| WARNING: log parameters now start with a module
name. Use 'stdio:/usr/local/squid-3.3/var/logs/icap-log'


2012/09/20 14:57:45| Store logging disabled
2012/09/20 14:57:45| Swap maxSize 0 + 262144 KB, estimated 20164 objects
2012/09/20 14:57:45| Target number of buckets: 1008
2012/09/20 14:57:45| Using 8192 Store buckets
2012/09/20 14:57:45| Max Mem  size: 262144 KB
2012/09/20 14:57:45| Max Swap size: 0 KB
2012/09/20 14:57:45| Using Least Load store dir selection
2012/09/20 14:57:45| Set Current Directory to /usr/local/squid-3.3/var/cache
2012/09/20 14:57:45| Loaded Icons.
2012/09/20 14:57:45| HTCP Disabled.
2012/09/20 14:57:45| /usr/local/squid-3.3/var/run/squid.pid: (13)
Permission denied
2012/09/20 14:57:45| WARNING: Could not write pid file
2012/09/20 14:57:45| Squid plugin modules loaded: 0
2012/09/20 14:57:45| Adaptation support is on
2012/09/20 14:57:45| Accepting SSL bumped HTTP Socket connections at
local=192.168.8.134:3128 remote=[::] FD 20 flags=9
2012/09/20 14:57:45| Accepting HTTP Socket connections at
local=192.168.8.134:8080 remote=[::] FD 21 flags=9
2012/09/20 14:57:45| Accepting SSL bumped HTTPS Socket connections at
local=192.168.8.134:3129 remote=[::] FD 22 flags=9
2012/09/20 14:57:46| storeLateRelease: released 0 objects

(ssl_crtd): Cannot create ssl certificate or private key.
2012/09/20 14:58:23| WARNING: ssl_crtd #2 exited
2012/09/20 14:58:23| Too few ssl_crtd processes are running (need 1/5)

2012/09/20 14:58:23| Starting new helpers
2012/09/20 14:58:23| helperOpenServers: Starting 1/5 'ssl_crtd' processes
2012/09/20 14:58:23| client_side.cc(3478) sslCrtdHandleReply:
"ssl_crtd" helper return  reply
(ssl_crtd): Cannot create ssl certificate or private key.

2012/09/20 14:58:23| WARNING: ssl_crtd #1 exited
2012/09/20 14:58:23| Too few ssl_crtd processes are running (need 1/5)
2012/09/20 14:58:23| storeDirWriteCleanLogs: Starting...
2012/09/20 14:58:23|   Finished.  Wrote 0 entries.
2012/09/20 14:58:23|   Took 0.00 seconds (  0.00 entries/sec).
FATAL: The ssl_crtd helpers are crashing too rapidly, need help!

Squid Cache (Version 3.HEAD-20120425-r12120): Terminated abnormally.
CPU Usage: 0.355 seconds = 0.289 user + 0.066 sys
Maximum Resident Size: 71104 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total spac

Re: [squid-users] problems with ssl_crtd

2012-09-20 Thread Linos
On 19/09/12 16:46, Guy Helmer wrote:
>>
>> Thanks for reply.
>>
>> i checked the squid_ssl_db/size because i found the empty file problem 
>> searching
>> for my own problem in the mailing list, it's ok in my host, the file have the
>> content "139264" right now.
>>
>> I can't found the core file, do i need to do something for it to generate? 
>> maybe
>> a configure script option or squid.conf change to activate it?
>>
>> Regards,
>> Miguel Angel.
> 
> I have
> 
> coredump_dir /var/log/squid
> 
> to get coredumps in my /var/log/squid directory. Now that I think about it, I 
> don't remember if this works for ssl_crtd though -- seems like I have had to 
> start "gdb ssl_crtd" and then attach to one of the ssl_crtd processes, then 
> generate HTTPS traffic to trigger the request to ssl_crtd and get a backtrace 
> when ssl_crtd gets the segfault signal…
> 
> Guy
> 

Hi,
   I have been trying to debug with gdb by attaching to an existing process;
the strange thing is that ssl_crtd seems to exit normally in this test. Here you
have it (sorry for the Spanish locale, I will use English next time; the only
file with symbols is ssl_crtd itself):


GNU gdb (Ubuntu/Linaro 7.4-2012.04-0ubuntu2) 7.4-2012.04
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
.
(gdb) attach 10495
Attaching to process 10495
Reading symbols from /usr/lib/squid3/ssl_crtd...Reading symbols from
/usr/lib/debug/usr/lib/squid3/ssl_crtd...done.
done.
Reading symbols from /lib/x86_64-linux-gnu/libcrypto.so.0.9.8...(no debugging
symbols found)done.
Loaded symbols for /lib/x86_64-linux-gnu/libcrypto.so.0.9.8
Reading symbols from /usr/lib/x86_64-linux-gnu/libstdc++.so.6...(no debugging
symbols found)done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libstdc++.so.6
Reading symbols from /lib/x86_64-linux-gnu/libgcc_s.so.1...(no debugging
symbols found)done.
Loaded symbols for /lib/x86_64-linux-gnu/libgcc_s.so.1
Reading symbols from /lib/x86_64-linux-gnu/libc.so.6...(no debugging symbols
found)done.
Loaded symbols for /lib/x86_64-linux-gnu/libc.so.6
Reading symbols from /lib/x86_64-linux-gnu/libdl.so.2...(no debugging symbols
found)done.
Loaded symbols for /lib/x86_64-linux-gnu/libdl.so.2
Reading symbols from /lib/x86_64-linux-gnu/libz.so.1...(no debugging symbols
found)done.
Loaded symbols for /lib/x86_64-linux-gnu/libz.so.1
Reading symbols from /lib/x86_64-linux-gnu/libm.so.6...(no debugging symbols
found)done.
Loaded symbols for /lib/x86_64-linux-gnu/libm.so.6
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols
found)done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
0x7f3ef414f0a0 in read () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) continue
Continuing.
[Inferior 1 (process 10495) exited normally]
(gdb) bt
No stack.



I have tried attaching to the squid3 process itself, and I received a signal
here:

GNU gdb (Ubuntu/Linaro 7.4-2012.04-0ubuntu2) 7.4-2012.04
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
.
(gdb) attach 10732
Attaching to process 10732
Reading symbols from /usr/sbin/squid3...coReading symbols from
/usr/lib/debug/usr/sbin/squid3...ntinue
done.
done.
Reading symbols from /lib/x86_64-linux-gnu/libpthread.so.0...(no debugging
symbols found)done.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Loaded symbols for /lib/x86_64-linux-gnu/libpthread.so.0
Reading symbols from /usr/lib/x86_64-linux-gnu/libxml2.so.2...(no debugging
symbols found)done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libxml2.so.2
Reading symbols from /lib/x86_64-linux-gnu/libexpat.so.1...(no debugging
symbols found)done.
Loaded symbols for /lib/x86_64-linux-gnu/libexpat.so.1
Reading symbols from /lib/x86_64-linux-gnu/libssl.so.0.9.8...(no debugging
symbols found)done.
Loaded symbols for /lib

Re: [squid-users] problems with ssl_crtd

2012-09-19 Thread Linos
On 19/09/12 17:26, Eliezer Croitoru wrote:
> On 9/19/2012 1:44 PM, Linos wrote:
>> Hi,
>> i have been using Squid squid-3.2.0.17-20120527-r11561 in an Ubuntu 
>> Server
>> 12.04 some time with ssl-bump without problems for a year, the ca cert 
>> expired
>> some days ago and with the new ca cert i installed squid 3.2.1 stable.
>>
>> Now the proxy exits every time 10 or more users use HTTPS at the same time.
>> It's pretty strange: I have tried to downgrade to the old Squid version,
>> but I can't get the proxy to be stable with either version. I tried
>> recreating another cert just in case (same problem), and I also recreated
>> squid_ssl_db and the cache_dir; no matter what I do it keeps crashing.
>> The cache log reads like this:
>>
> 
>>
>> I am using this ssl-bump line in squid.conf:
>> http_port 3150 ssl-bump generate-host-certificates=on
>> dynamic_cert_mem_cache_size=16MB cert=/etc/squid3/ssl_cert/myCA.pem
>>
>> I generated this myCA.pem using the instructions here
>> http://wiki.squid-cache.org/Features/DynamicSslCert
> 
> Do you still have the old pem file?
> If it's expired, OK, but Squid should still be running, just creating
> defective certificates.
I have the old pem, yes, but Squid is working fine with the new one until more
than 5~6 people visit an HTTPS site at the same time, so it doesn't seem to be
a problem with a non-working certificate. I will test with the old one anyway.

> 
> Did you change ownership of the directory and files?
I have checked the ownership and files many times, and recreated the
directories several times too.

> Did you try to run the command from a shell to see if it works?
It works; when launched by Squid it also works for some time before failing.
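For reference, the helper can also be exercised by hand to separate helper
problems from Squid problems. A minimal command sketch (hypothetical paths;
the -c/-s/-M options are the ones the ssl-bump documentation describes for
creating and selecting the certificate database):

# Initialize a fresh certificate database (run as the squid user):
$ /usr/lib/squid3/ssl_crtd -c -s /var/spool/squid3/ssl_db
# Then run it interactively the way Squid does:
$ /usr/lib/squid3/ssl_crtd -s /var/spool/squid3/ssl_db -M 16MB

If the manual run also prints "Cannot create ssl certificate or private key",
the problem is in the helper or its database rather than in Squid itself.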

> 
> Eliezer
> 

Miguel Angel.




Re: [squid-users] problems with ssl_crtd

2012-09-19 Thread Linos
On 19/09/12 16:46, Guy Helmer wrote:
> 
> On Sep 19, 2012, at 9:03 AM, Linos  wrote:
> 
>> On 19/09/12 15:30, Guy Helmer wrote:
>>> On Sep 19, 2012, at 5:44 AM, Linos  wrote:
>>>
 Hi,
i have been using Squid squid-3.2.0.17-20120527-r11561 in an Ubuntu 
 Server
 12.04 some time with ssl-bump without problems for a year, the ca cert 
 expired
 some days ago and with the new ca cert i installed squid 3.2.1 stable.

 Now the proxy exits every time 10 or more users use HTTPS at the same time.
 It's pretty strange: I have tried to downgrade to the old Squid version,
 but I can't get the proxy to be stable with either version. I tried
 recreating another cert just in case (same problem), and I also recreated
 squid_ssl_db and the cache_dir; no matter what I do it keeps crashing.
 The cache log reads like this:


 --
 2012/09/19 11:58:00| Starting Squid Cache version 3.2.1 for 
 x86_64-pc-linux-gnu...
 2012/09/19 11:58:00| Process ID 30077
 2012/09/19 11:58:00| Process Roles: master worker
 2012/09/19 11:58:00| With 65535 file descriptors available
 2012/09/19 11:58:00| Initializing IP Cache...
 2012/09/19 11:58:00| DNS Socket created at [::], FD 4
 2012/09/19 11:58:00| DNS Socket created at 0.0.0.0, FD 5
 2012/09/19 11:58:00| Adding nameserver 80.58.61.250 from squid.conf
 2012/09/19 11:58:00| Adding nameserver 8.8.8.8 from squid.conf
 2012/09/19 11:58:00| helperOpenServers: Starting 5/10 'ssl_crtd' processes
 2012/09/19 11:58:00| helperOpenServers: Starting 5/20 
 'request_body_max_size.sh'
 processes
 2012/09/19 11:58:00| Logfile: opening log daemon:/var/log/squid3/access.log
 2012/09/19 11:58:00| Logfile Daemon: opening log /var/log/squid3/access.log
 2012/09/19 11:58:00| Unlinkd pipe opened on FD 31
 2012/09/19 11:58:00| Local cache digest enabled; rebuild/rewrite every 
 3600/3600 sec
 2012/09/19 11:58:00| Store logging disabled
 2012/09/19 11:58:00| Swap maxSize 1536 + 262144 KB, estimated 312442 
 objects
 2012/09/19 11:58:00| Target number of buckets: 15622
 2012/09/19 11:58:00| Using 16384 Store buckets
 2012/09/19 11:58:00| Max Mem  size: 262144 KB
 2012/09/19 11:58:00| Max Swap size: 1536 KB
 2012/09/19 11:58:00| Rebuilding storage in /mnt/squid/squid3 (clean log)
 2012/09/19 11:58:00| Using Least Load store dir selection
 2012/09/19 11:58:00| Set Current Directory to /mnt/squid/squid3
 2012/09/19 11:58:00| Loaded Icons.
 2012/09/19 11:58:00| HTCP Disabled.
 2012/09/19 11:58:00| Squid plugin modules loaded: 0
 2012/09/19 11:58:00| Adaptation support is off.
 2012/09/19 11:58:00| Accepting NAT intercepted HTTP Socket connections at
 local=0.0.0.0:3128 remote=[::] FD 36 flags=41
 2012/09/19 11:58:00| Accepting SSL bumped HTTP Socket connections at
 local=[::]:3150 remote=[::] FD 37 flags=9
 2012/09/19 11:58:00| Store rebuilding is 16.55% complete
 2012/09/19 11:58:00| Done reading /mnt/squid/squid3 swaplog (24167 entries)
 2012/09/19 11:58:00| Finished rebuilding storage from disk.
 2012/09/19 11:58:00| 24167 Entries scanned
 2012/09/19 11:58:00| 0 Invalid entries.
 2012/09/19 11:58:00| 0 With invalid flags.
 2012/09/19 11:58:00| 24167 Objects loaded.
 2012/09/19 11:58:00| 0 Objects expired.
 2012/09/19 11:58:00| 0 Objects cancelled.
 2012/09/19 11:58:00| 0 Duplicate URLs purged.
 2012/09/19 11:58:00| 0 Swapfile clashes avoided.
 2012/09/19 11:58:00|   Took 0.12 seconds (204025.29 objects/sec).
 2012/09/19 11:58:00| Beginning Validation Procedure
 2012/09/19 11:58:00|   Completed Validation Procedure
 2012/09/19 11:58:00|   Validated 24167 Entries
 2012/09/19 11:58:00|   store_swap_size = 732468.00 KB
 2012/09/19 11:58:01| storeLateRelease: released 0 objects
 (ssl_crtd): Cannot create ssl certificate or private key.
 2012/09/19 12:03:20| WARNING: ssl_crtd #1 exited
 2012/09/19 12:03:20| Too few ssl_crtd processes are running (need 1/10)
 2012/09/19 12:03:20| Starting new helpers
 2012/09/19 12:03:20| helperOpenServers: Starting 1/10 'ssl_crtd' processes
 2012/09/19 12:03:20| client_side.cc(3477) sslCrtdHandleReply: "ssl_crtd" 
 helper
 return  reply
 (ssl_crtd): Cannot create ssl certificate or private key.
 2012/09/19 12:03:20| WARNING: ssl_crtd #2 exited
 2012/09/19 12:03:20| Too few ssl_crtd processes are running (need 1/10)
 2012/09/19 12:03:20| Closing HTTP port 0.0.0.0:3128
 2012/09/19 12:03:20| Closing HTTP port [::]:3150
 2012/09/19 12:03:20| storeDirWriteCleanLogs: Starting...
 2012/09/19 12:03:20|   Finished.  Wrote 24195 entries.
 2012/09/19 12:03:20|   Took 0.02 seconds (1321120.4

Re: [squid-users] problems with ssl_crtd

2012-09-19 Thread Eliezer Croitoru

On 9/19/2012 1:44 PM, Linos wrote:

Hi,
I have been using Squid squid-3.2.0.17-20120527-r11561 on an Ubuntu Server
12.04 with ssl-bump, without problems, for a year. The CA cert expired some
days ago, and with the new CA cert I installed squid 3.2.1 stable.

Now the proxy exits every time 10 or more users use HTTPS at the same time.
It's pretty strange: I have tried to downgrade to the old Squid version, but I
can't get the proxy to be stable with either version. I tried recreating
another cert just in case (same problem), and I also recreated squid_ssl_db
and the cache_dir; no matter what I do it keeps crashing. The cache log reads
like this:





I am using this ssl-bump line in squid.conf:
http_port 3150 ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=16MB cert=/etc/squid3/ssl_cert/myCA.pem

I generated this myCA.pem using the instructions here
http://wiki.squid-cache.org/Features/DynamicSslCert
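For comparison, the wiki's CA generation boils down to one self-signed
certificate whose key and cert live in a single PEM file, as Squid's cert=
option expects. A minimal sketch (hypothetical subject name; the exact
command on the wiki page may differ):

```shell
# Create a self-signed CA cert plus key in a single PEM file
# for Squid's dynamic certificate generation (ssl-bump).
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
  -subj "/O=Example/CN=Squid Proxy CA" \
  -keyout myCA.pem -out myCA.pem

# DER copy for importing into client browsers as a trusted root:
openssl x509 -in myCA.pem -outform DER -out myCA.der
```

An expired CA would not stop certificate generation as such, which matches
Eliezer's point below that an expired PEM should merely yield defective
certificates.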


Do you still have the old pem file?
If it's expired, OK, but Squid should still be running, just creating
defective certificates.


Did you change ownership of the directory and files?
Did you try to run the command from a shell to see if it works?

Eliezer



I don't know what more to do. Could I do something to get a clearer error? I
have tried using "debug_options ALL,9" but I only get much more noise (noise
to me, at least). What could I do?

Regards,
Miguel Angel.
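Linos's debug_options question has a narrower alternative worth noting:
rather than raising every section to level 9, only the SSL-related cache.log
section can be raised. A sketch of the idea (assuming section 83 covers the
SSL/certificate code in this Squid release; section numbering can vary):

# squid.conf: keep everything at the default level 1,
# but log the SSL code (section 83) at level 5
debug_options ALL,1 83,5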




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] problems with ssl_crtd

2012-09-19 Thread Guy Helmer

On Sep 19, 2012, at 9:03 AM, Linos  wrote:

> On 19/09/12 15:30, Guy Helmer wrote:
>> On Sep 19, 2012, at 5:44 AM, Linos  wrote:
>> 
>>> Hi,
>>> i have been using Squid squid-3.2.0.17-20120527-r11561 in an Ubuntu 
>>> Server
>>> 12.04 some time with ssl-bump without problems for a year, the ca cert 
>>> expired
>>> some days ago and with the new ca cert i installed squid 3.2.1 stable.
>>> 
>>> Now the proxy exits every time 10 or more users use HTTPS at the same time.
>>> It's pretty strange: I have tried to downgrade to the old Squid version,
>>> but I can't get the proxy to be stable with either version. I tried
>>> recreating another cert just in case (same problem), and I also recreated
>>> squid_ssl_db and the cache_dir; no matter what I do it keeps crashing.
>>> The cache log reads like this:
>>> 
>>> 
>>> --
>>> 2012/09/19 11:58:00| Starting Squid Cache version 3.2.1 for 
>>> x86_64-pc-linux-gnu...
>>> 2012/09/19 11:58:00| Process ID 30077
>>> 2012/09/19 11:58:00| Process Roles: master worker
>>> 2012/09/19 11:58:00| With 65535 file descriptors available
>>> 2012/09/19 11:58:00| Initializing IP Cache...
>>> 2012/09/19 11:58:00| DNS Socket created at [::], FD 4
>>> 2012/09/19 11:58:00| DNS Socket created at 0.0.0.0, FD 5
>>> 2012/09/19 11:58:00| Adding nameserver 80.58.61.250 from squid.conf
>>> 2012/09/19 11:58:00| Adding nameserver 8.8.8.8 from squid.conf
>>> 2012/09/19 11:58:00| helperOpenServers: Starting 5/10 'ssl_crtd' processes
>>> 2012/09/19 11:58:00| helperOpenServers: Starting 5/20 
>>> 'request_body_max_size.sh'
>>> processes
>>> 2012/09/19 11:58:00| Logfile: opening log daemon:/var/log/squid3/access.log
>>> 2012/09/19 11:58:00| Logfile Daemon: opening log /var/log/squid3/access.log
>>> 2012/09/19 11:58:00| Unlinkd pipe opened on FD 31
>>> 2012/09/19 11:58:00| Local cache digest enabled; rebuild/rewrite every 
>>> 3600/3600 sec
>>> 2012/09/19 11:58:00| Store logging disabled
>>> 2012/09/19 11:58:00| Swap maxSize 1536 + 262144 KB, estimated 312442 
>>> objects
>>> 2012/09/19 11:58:00| Target number of buckets: 15622
>>> 2012/09/19 11:58:00| Using 16384 Store buckets
>>> 2012/09/19 11:58:00| Max Mem  size: 262144 KB
>>> 2012/09/19 11:58:00| Max Swap size: 1536 KB
>>> 2012/09/19 11:58:00| Rebuilding storage in /mnt/squid/squid3 (clean log)
>>> 2012/09/19 11:58:00| Using Least Load store dir selection
>>> 2012/09/19 11:58:00| Set Current Directory to /mnt/squid/squid3
>>> 2012/09/19 11:58:00| Loaded Icons.
>>> 2012/09/19 11:58:00| HTCP Disabled.
>>> 2012/09/19 11:58:00| Squid plugin modules loaded: 0
>>> 2012/09/19 11:58:00| Adaptation support is off.
>>> 2012/09/19 11:58:00| Accepting NAT intercepted HTTP Socket connections at
>>> local=0.0.0.0:3128 remote=[::] FD 36 flags=41
>>> 2012/09/19 11:58:00| Accepting SSL bumped HTTP Socket connections at
>>> local=[::]:3150 remote=[::] FD 37 flags=9
>>> 2012/09/19 11:58:00| Store rebuilding is 16.55% complete
>>> 2012/09/19 11:58:00| Done reading /mnt/squid/squid3 swaplog (24167 entries)
>>> 2012/09/19 11:58:00| Finished rebuilding storage from disk.
>>> 2012/09/19 11:58:00| 24167 Entries scanned
>>> 2012/09/19 11:58:00| 0 Invalid entries.
>>> 2012/09/19 11:58:00| 0 With invalid flags.
>>> 2012/09/19 11:58:00| 24167 Objects loaded.
>>> 2012/09/19 11:58:00| 0 Objects expired.
>>> 2012/09/19 11:58:00| 0 Objects cancelled.
>>> 2012/09/19 11:58:00| 0 Duplicate URLs purged.
>>> 2012/09/19 11:58:00| 0 Swapfile clashes avoided.
>>> 2012/09/19 11:58:00|   Took 0.12 seconds (204025.29 objects/sec).
>>> 2012/09/19 11:58:00| Beginning Validation Procedure
>>> 2012/09/19 11:58:00|   Completed Validation Procedure
>>> 2012/09/19 11:58:00|   Validated 24167 Entries
>>> 2012/09/19 11:58:00|   store_swap_size = 732468.00 KB
>>> 2012/09/19 11:58:01| storeLateRelease: released 0 objects
>>> (ssl_crtd): Cannot create ssl certificate or private key.
>>> 2012/09/19 12:03:20| WARNING: ssl_crtd #1 exited
>>> 2012/09/19 12:03:20| Too few ssl_crtd processes are running (need 1/10)
>>> 2012/09/19 12:03:20| Starting new helpers
>>> 2012/09/19 12:03:20| helperOpenServers: Starting 1/10 'ssl_crtd' processes
>>> 2012/09/19 12:03:20| client_side.cc(3477) sslCrtdHandleReply: "ssl_crtd" 
>>> helper
>>> return  reply
>>> (ssl_crtd): Cannot create ssl certificate or private key.
>>> 2012/09/19 12:03:20| WARNING: ssl_crtd #2 exited
>>> 2012/09/19 12:03:20| Too few ssl_crtd processes are running (need 1/10)
>>> 2012/09/19 12:03:20| Closing HTTP port 0.0.0.0:3128
>>> 2012/09/19 12:03:20| Closing HTTP port [::]:3150
>>> 2012/09/19 12:03:20| storeDirWriteCleanLogs: Starting...
>>> 2012/09/19 12:03:20|   Finished.  Wrote 24195 entries.
>>> 2012/09/19 12:03:20|   Took 0.02 seconds (1321120.45 entries/sec).
>>> FATAL: The ssl_crtd helpers are crashing too rapidly, need help!
>>> 
>>> Squid Cache (Version 3.2.1): Terminated ab

Re: [squid-users] problems with ssl_crtd

2012-09-19 Thread Linos
On 19/09/12 15:30, Guy Helmer wrote:
> On Sep 19, 2012, at 5:44 AM, Linos  wrote:
> 
>> Hi,
>>  i have been using Squid squid-3.2.0.17-20120527-r11561 in an Ubuntu 
>> Server
>> 12.04 some time with ssl-bump without problems for a year, the ca cert 
>> expired
>> some days ago and with the new ca cert i installed squid 3.2.1 stable.
>>
>> Now the proxy exits every time 10 or more users use HTTPS at the same time.
>> It's pretty strange: I have tried to downgrade to the old Squid version,
>> but I can't get the proxy to be stable with either version. I tried
>> recreating another cert just in case (same problem), and I also recreated
>> squid_ssl_db and the cache_dir; no matter what I do it keeps crashing.
>> The cache log reads like this:
>>
>>
>> --
>> 2012/09/19 11:58:00| Starting Squid Cache version 3.2.1 for 
>> x86_64-pc-linux-gnu...
>> 2012/09/19 11:58:00| Process ID 30077
>> 2012/09/19 11:58:00| Process Roles: master worker
>> 2012/09/19 11:58:00| With 65535 file descriptors available
>> 2012/09/19 11:58:00| Initializing IP Cache...
>> 2012/09/19 11:58:00| DNS Socket created at [::], FD 4
>> 2012/09/19 11:58:00| DNS Socket created at 0.0.0.0, FD 5
>> 2012/09/19 11:58:00| Adding nameserver 80.58.61.250 from squid.conf
>> 2012/09/19 11:58:00| Adding nameserver 8.8.8.8 from squid.conf
>> 2012/09/19 11:58:00| helperOpenServers: Starting 5/10 'ssl_crtd' processes
>> 2012/09/19 11:58:00| helperOpenServers: Starting 5/20 
>> 'request_body_max_size.sh'
>> processes
>> 2012/09/19 11:58:00| Logfile: opening log daemon:/var/log/squid3/access.log
>> 2012/09/19 11:58:00| Logfile Daemon: opening log /var/log/squid3/access.log
>> 2012/09/19 11:58:00| Unlinkd pipe opened on FD 31
>> 2012/09/19 11:58:00| Local cache digest enabled; rebuild/rewrite every 
>> 3600/3600 sec
>> 2012/09/19 11:58:00| Store logging disabled
>> 2012/09/19 11:58:00| Swap maxSize 1536 + 262144 KB, estimated 312442 
>> objects
>> 2012/09/19 11:58:00| Target number of buckets: 15622
>> 2012/09/19 11:58:00| Using 16384 Store buckets
>> 2012/09/19 11:58:00| Max Mem  size: 262144 KB
>> 2012/09/19 11:58:00| Max Swap size: 1536 KB
>> 2012/09/19 11:58:00| Rebuilding storage in /mnt/squid/squid3 (clean log)
>> 2012/09/19 11:58:00| Using Least Load store dir selection
>> 2012/09/19 11:58:00| Set Current Directory to /mnt/squid/squid3
>> 2012/09/19 11:58:00| Loaded Icons.
>> 2012/09/19 11:58:00| HTCP Disabled.
>> 2012/09/19 11:58:00| Squid plugin modules loaded: 0
>> 2012/09/19 11:58:00| Adaptation support is off.
>> 2012/09/19 11:58:00| Accepting NAT intercepted HTTP Socket connections at
>> local=0.0.0.0:3128 remote=[::] FD 36 flags=41
>> 2012/09/19 11:58:00| Accepting SSL bumped HTTP Socket connections at
>> local=[::]:3150 remote=[::] FD 37 flags=9
>> 2012/09/19 11:58:00| Store rebuilding is 16.55% complete
>> 2012/09/19 11:58:00| Done reading /mnt/squid/squid3 swaplog (24167 entries)
>> 2012/09/19 11:58:00| Finished rebuilding storage from disk.
>> 2012/09/19 11:58:00| 24167 Entries scanned
>> 2012/09/19 11:58:00| 0 Invalid entries.
>> 2012/09/19 11:58:00| 0 With invalid flags.
>> 2012/09/19 11:58:00| 24167 Objects loaded.
>> 2012/09/19 11:58:00| 0 Objects expired.
>> 2012/09/19 11:58:00| 0 Objects cancelled.
>> 2012/09/19 11:58:00| 0 Duplicate URLs purged.
>> 2012/09/19 11:58:00| 0 Swapfile clashes avoided.
>> 2012/09/19 11:58:00|   Took 0.12 seconds (204025.29 objects/sec).
>> 2012/09/19 11:58:00| Beginning Validation Procedure
>> 2012/09/19 11:58:00|   Completed Validation Procedure
>> 2012/09/19 11:58:00|   Validated 24167 Entries
>> 2012/09/19 11:58:00|   store_swap_size = 732468.00 KB
>> 2012/09/19 11:58:01| storeLateRelease: released 0 objects
>> (ssl_crtd): Cannot create ssl certificate or private key.
>> 2012/09/19 12:03:20| WARNING: ssl_crtd #1 exited
>> 2012/09/19 12:03:20| Too few ssl_crtd processes are running (need 1/10)
>> 2012/09/19 12:03:20| Starting new helpers
>> 2012/09/19 12:03:20| helperOpenServers: Starting 1/10 'ssl_crtd' processes
>> 2012/09/19 12:03:20| client_side.cc(3477) sslCrtdHandleReply: "ssl_crtd" 
>> helper
>> return  reply
>> (ssl_crtd): Cannot create ssl certificate or private key.
>> 2012/09/19 12:03:20| WARNING: ssl_crtd #2 exited
>> 2012/09/19 12:03:20| Too few ssl_crtd processes are running (need 1/10)
>> 2012/09/19 12:03:20| Closing HTTP port 0.0.0.0:3128
>> 2012/09/19 12:03:20| Closing HTTP port [::]:3150
>> 2012/09/19 12:03:20| storeDirWriteCleanLogs: Starting...
>> 2012/09/19 12:03:20|   Finished.  Wrote 24195 entries.
>> 2012/09/19 12:03:20|   Took 0.02 seconds (1321120.45 entries/sec).
>> FATAL: The ssl_crtd helpers are crashing too rapidly, need help!
>>
>> Squid Cache (Version 3.2.1): Terminated abnormally.
>> CPU Usage: 1.896 seconds = 0.740 user + 1.156 sys
>> Maximum Resident Size: 144640 KB
>> Page faults with physical i/o: 0
>> Memory usa

Re: [squid-users] problems with ssl_crtd

2012-09-19 Thread Guy Helmer
On Sep 19, 2012, at 5:44 AM, Linos  wrote:

> Hi,
>   i have been using Squid squid-3.2.0.17-20120527-r11561 in an Ubuntu 
> Server
> 12.04 some time with ssl-bump without problems for a year, the ca cert expired
> some days ago and with the new ca cert i installed squid 3.2.1 stable.
> 
> Now the proxy exits every time 10 or more users use HTTPS at the same time.
> It's pretty strange: I have tried to downgrade to the old Squid version,
> but I can't get the proxy to be stable with either version. I tried
> recreating another cert just in case (same problem), and I also recreated
> squid_ssl_db and the cache_dir; no matter what I do it keeps crashing.
> The cache log reads like this:
> 
> 
> --
> 2012/09/19 11:58:00| Starting Squid Cache version 3.2.1 for 
> x86_64-pc-linux-gnu...
> 2012/09/19 11:58:00| Process ID 30077
> 2012/09/19 11:58:00| Process Roles: master worker
> 2012/09/19 11:58:00| With 65535 file descriptors available
> 2012/09/19 11:58:00| Initializing IP Cache...
> 2012/09/19 11:58:00| DNS Socket created at [::], FD 4
> 2012/09/19 11:58:00| DNS Socket created at 0.0.0.0, FD 5
> 2012/09/19 11:58:00| Adding nameserver 80.58.61.250 from squid.conf
> 2012/09/19 11:58:00| Adding nameserver 8.8.8.8 from squid.conf
> 2012/09/19 11:58:00| helperOpenServers: Starting 5/10 'ssl_crtd' processes
> 2012/09/19 11:58:00| helperOpenServers: Starting 5/20 
> 'request_body_max_size.sh'
> processes
> 2012/09/19 11:58:00| Logfile: opening log daemon:/var/log/squid3/access.log
> 2012/09/19 11:58:00| Logfile Daemon: opening log /var/log/squid3/access.log
> 2012/09/19 11:58:00| Unlinkd pipe opened on FD 31
> 2012/09/19 11:58:00| Local cache digest enabled; rebuild/rewrite every 
> 3600/3600 sec
> 2012/09/19 11:58:00| Store logging disabled
> 2012/09/19 11:58:00| Swap maxSize 1536 + 262144 KB, estimated 312442 
> objects
> 2012/09/19 11:58:00| Target number of buckets: 15622
> 2012/09/19 11:58:00| Using 16384 Store buckets
> 2012/09/19 11:58:00| Max Mem  size: 262144 KB
> 2012/09/19 11:58:00| Max Swap size: 1536 KB
> 2012/09/19 11:58:00| Rebuilding storage in /mnt/squid/squid3 (clean log)
> 2012/09/19 11:58:00| Using Least Load store dir selection
> 2012/09/19 11:58:00| Set Current Directory to /mnt/squid/squid3
> 2012/09/19 11:58:00| Loaded Icons.
> 2012/09/19 11:58:00| HTCP Disabled.
> 2012/09/19 11:58:00| Squid plugin modules loaded: 0
> 2012/09/19 11:58:00| Adaptation support is off.
> 2012/09/19 11:58:00| Accepting NAT intercepted HTTP Socket connections at
> local=0.0.0.0:3128 remote=[::] FD 36 flags=41
> 2012/09/19 11:58:00| Accepting SSL bumped HTTP Socket connections at
> local=[::]:3150 remote=[::] FD 37 flags=9
> 2012/09/19 11:58:00| Store rebuilding is 16.55% complete
> 2012/09/19 11:58:00| Done reading /mnt/squid/squid3 swaplog (24167 entries)
> 2012/09/19 11:58:00| Finished rebuilding storage from disk.
> 2012/09/19 11:58:00| 24167 Entries scanned
> 2012/09/19 11:58:00| 0 Invalid entries.
> 2012/09/19 11:58:00| 0 With invalid flags.
> 2012/09/19 11:58:00| 24167 Objects loaded.
> 2012/09/19 11:58:00| 0 Objects expired.
> 2012/09/19 11:58:00| 0 Objects cancelled.
> 2012/09/19 11:58:00| 0 Duplicate URLs purged.
> 2012/09/19 11:58:00| 0 Swapfile clashes avoided.
> 2012/09/19 11:58:00|   Took 0.12 seconds (204025.29 objects/sec).
> 2012/09/19 11:58:00| Beginning Validation Procedure
> 2012/09/19 11:58:00|   Completed Validation Procedure
> 2012/09/19 11:58:00|   Validated 24167 Entries
> 2012/09/19 11:58:00|   store_swap_size = 732468.00 KB
> 2012/09/19 11:58:01| storeLateRelease: released 0 objects
> (ssl_crtd): Cannot create ssl certificate or private key.
> 2012/09/19 12:03:20| WARNING: ssl_crtd #1 exited
> 2012/09/19 12:03:20| Too few ssl_crtd processes are running (need 1/10)
> 2012/09/19 12:03:20| Starting new helpers
> 2012/09/19 12:03:20| helperOpenServers: Starting 1/10 'ssl_crtd' processes
> 2012/09/19 12:03:20| client_side.cc(3477) sslCrtdHandleReply: "ssl_crtd" 
> helper
> return  reply
> (ssl_crtd): Cannot create ssl certificate or private key.
> 2012/09/19 12:03:20| WARNING: ssl_crtd #2 exited
> 2012/09/19 12:03:20| Too few ssl_crtd processes are running (need 1/10)
> 2012/09/19 12:03:20| Closing HTTP port 0.0.0.0:3128
> 2012/09/19 12:03:20| Closing HTTP port [::]:3150
> 2012/09/19 12:03:20| storeDirWriteCleanLogs: Starting...
> 2012/09/19 12:03:20|   Finished.  Wrote 24195 entries.
> 2012/09/19 12:03:20|   Took 0.02 seconds (1321120.45 entries/sec).
> FATAL: The ssl_crtd helpers are crashing too rapidly, need help!
> 
> Squid Cache (Version 3.2.1): Terminated abnormally.
> CPU Usage: 1.896 seconds = 0.740 user + 1.156 sys
> Maximum Resident Size: 144640 KB
> Page faults with physical i/o: 0
> Memory usage for squid via mallinfo():
>total space in arena:   18900 KB
>Ordinary blocks:18674 KB 54 blks
>Sma

[squid-users] problems with ssl_crtd

2012-09-19 Thread Linos
Hi,
I have been using Squid squid-3.2.0.17-20120527-r11561 on an Ubuntu Server
12.04 with ssl-bump, without problems, for a year. The CA cert expired some
days ago, and with the new CA cert I installed squid 3.2.1 stable.

Now the proxy exits every time 10 or more users use HTTPS at the same time.
It's pretty strange: I have tried to downgrade to the old Squid version, but I
can't get the proxy to be stable with either version. I tried recreating
another cert just in case (same problem), and I also recreated squid_ssl_db
and the cache_dir; no matter what I do it keeps crashing. The cache log reads
like this:


--
2012/09/19 11:58:00| Starting Squid Cache version 3.2.1 for 
x86_64-pc-linux-gnu...
2012/09/19 11:58:00| Process ID 30077
2012/09/19 11:58:00| Process Roles: master worker
2012/09/19 11:58:00| With 65535 file descriptors available
2012/09/19 11:58:00| Initializing IP Cache...
2012/09/19 11:58:00| DNS Socket created at [::], FD 4
2012/09/19 11:58:00| DNS Socket created at 0.0.0.0, FD 5
2012/09/19 11:58:00| Adding nameserver 80.58.61.250 from squid.conf
2012/09/19 11:58:00| Adding nameserver 8.8.8.8 from squid.conf
2012/09/19 11:58:00| helperOpenServers: Starting 5/10 'ssl_crtd' processes
2012/09/19 11:58:00| helperOpenServers: Starting 5/20 'request_body_max_size.sh'
processes
2012/09/19 11:58:00| Logfile: opening log daemon:/var/log/squid3/access.log
2012/09/19 11:58:00| Logfile Daemon: opening log /var/log/squid3/access.log
2012/09/19 11:58:00| Unlinkd pipe opened on FD 31
2012/09/19 11:58:00| Local cache digest enabled; rebuild/rewrite every 
3600/3600 sec
2012/09/19 11:58:00| Store logging disabled
2012/09/19 11:58:00| Swap maxSize 1536 + 262144 KB, estimated 312442 objects
2012/09/19 11:58:00| Target number of buckets: 15622
2012/09/19 11:58:00| Using 16384 Store buckets
2012/09/19 11:58:00| Max Mem  size: 262144 KB
2012/09/19 11:58:00| Max Swap size: 1536 KB
2012/09/19 11:58:00| Rebuilding storage in /mnt/squid/squid3 (clean log)
2012/09/19 11:58:00| Using Least Load store dir selection
2012/09/19 11:58:00| Set Current Directory to /mnt/squid/squid3
2012/09/19 11:58:00| Loaded Icons.
2012/09/19 11:58:00| HTCP Disabled.
2012/09/19 11:58:00| Squid plugin modules loaded: 0
2012/09/19 11:58:00| Adaptation support is off.
2012/09/19 11:58:00| Accepting NAT intercepted HTTP Socket connections at
local=0.0.0.0:3128 remote=[::] FD 36 flags=41
2012/09/19 11:58:00| Accepting SSL bumped HTTP Socket connections at
local=[::]:3150 remote=[::] FD 37 flags=9
2012/09/19 11:58:00| Store rebuilding is 16.55% complete
2012/09/19 11:58:00| Done reading /mnt/squid/squid3 swaplog (24167 entries)
2012/09/19 11:58:00| Finished rebuilding storage from disk.
2012/09/19 11:58:00| 24167 Entries scanned
2012/09/19 11:58:00| 0 Invalid entries.
2012/09/19 11:58:00| 0 With invalid flags.
2012/09/19 11:58:00| 24167 Objects loaded.
2012/09/19 11:58:00| 0 Objects expired.
2012/09/19 11:58:00| 0 Objects cancelled.
2012/09/19 11:58:00| 0 Duplicate URLs purged.
2012/09/19 11:58:00| 0 Swapfile clashes avoided.
2012/09/19 11:58:00|   Took 0.12 seconds (204025.29 objects/sec).
2012/09/19 11:58:00| Beginning Validation Procedure
2012/09/19 11:58:00|   Completed Validation Procedure
2012/09/19 11:58:00|   Validated 24167 Entries
2012/09/19 11:58:00|   store_swap_size = 732468.00 KB
2012/09/19 11:58:01| storeLateRelease: released 0 objects
(ssl_crtd): Cannot create ssl certificate or private key.
2012/09/19 12:03:20| WARNING: ssl_crtd #1 exited
2012/09/19 12:03:20| Too few ssl_crtd processes are running (need 1/10)
2012/09/19 12:03:20| Starting new helpers
2012/09/19 12:03:20| helperOpenServers: Starting 1/10 'ssl_crtd' processes
2012/09/19 12:03:20| client_side.cc(3477) sslCrtdHandleReply: "ssl_crtd" helper
return  reply
(ssl_crtd): Cannot create ssl certificate or private key.
2012/09/19 12:03:20| WARNING: ssl_crtd #2 exited
2012/09/19 12:03:20| Too few ssl_crtd processes are running (need 1/10)
2012/09/19 12:03:20| Closing HTTP port 0.0.0.0:3128
2012/09/19 12:03:20| Closing HTTP port [::]:3150
2012/09/19 12:03:20| storeDirWriteCleanLogs: Starting...
2012/09/19 12:03:20|   Finished.  Wrote 24195 entries.
2012/09/19 12:03:20|   Took 0.02 seconds (1321120.45 entries/sec).
FATAL: The ssl_crtd helpers are crashing too rapidly, need help!

Squid Cache (Version 3.2.1): Terminated abnormally.
CPU Usage: 1.896 seconds = 0.740 user + 1.156 sys
Maximum Resident Size: 144640 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:   18900 KB
Ordinary blocks:18674 KB 54 blks
Small blocks:   0 KB  1 blks
Holding blocks: 37552 KB  9 blks
Free Small blocks:  0 KB
Free Ordinary blocks: 225 KB
Total in use:   56226 KB 297%
Tota

Re: [squid-users] problems configuring squid with zph (packet marking)

2012-08-30 Thread Amos Jeffries

On 31/08/2012 12:01 a.m., Mustafa Raji wrote:

Thanks for your help.
I will use clientside_tos.
About marking packets using netfilter: I really want to mark only TCP_HIT
packets, not all packets; TCP_MISS should not be included in the marking.


Then neither tcp_outgoing_tos nor clientside_tos is what you want: they mark
*all* the ACL-matched traffic, and we have no ACL to filter by HIT/MISS
status in squid-3 yet.


You only want "qos_flows local-hit" by itself.

Amos
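Amos's suggestion reduces to a one-line squid.conf change. A minimal sketch
(the 0x30 value is only an illustrative TOS mark; per the tcp_outgoing_tos
documentation quoted later in this thread, only multiples of 4 are safe
because the two low bits belong to ECN):

# squid.conf: mark only locally cached hits with TOS 0x30;
# misses pass through unmarked, so a router can policy-route hits.
qos_flows local-hit=0x30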


I will try 3.2. Kindly, would you tell me which Linux OS you used (the most
commonly used Linux distribution) with this version of Squid 3.2?

once again thanks

--- On Thu, 8/30/12, Andrew Beverley  wrote:


From: Andrew Beverley 
Subject: Re: [squid-users] problems configuring squid with zph (packet marking)
To: "Mustafa Raji" 
Cc: squid-users@squid-cache.org
Date: Thursday, August 30, 2012, 10:07 AM
On Thu, 2012-08-30 at 00:14 -0700,
Mustafa Raji wrote:

hi i have a problem with zph configuration in squid

3.1.11 in the squid

wiki i find the zph configuration directive is

qos_flows and i want to

mark the local-hit packet to root this packets locally

the configuration in squid.conf file is

acl localnet 10.10.10.0/24
tcp_outgoing_tos 0xFF localnet

I think you want clientside_tos if you want to affect
packets going to
the local client. tcp_outgoing_tos is for packets going to
the remote
server.


qos_flows local-hit=0xFF

applying these configuration directive and dump the

packets using

tcpdump shows that, this configuration does not works

for me some

output of tcpdump

 From http://www.squid-cache.org/Doc/config/tcp_outgoing_tos/
"Often only multiples of 4 is usable as the two rightmost
bits have been
redefined for use by ECN (RFC 3168 section 23.1)"


i am using squid 3.1.11 with --enable-zph-qos, squid

works in the

intercept mode and the os is debian squeeze

You may want to consider upgrading to the 3.2 branch
(although I
appreciate that this is not a Debian stable package). A lot
of
improvements have been made to the qos_flows code, and there
is also the
option to use netfilter marks which you may find more
flexible.

Andy







Re: [squid-users] problems configuring squid with zph (packet marking)

2012-08-30 Thread Andrew Beverley
On Thu, 2012-08-30 at 05:01 -0700, Mustafa Raji wrote:
> i will try to use the 3.2, kindly would you tell me the linux os you
> used (most used linux distribution with 3.2)

[ Please don't top-post ]

I use Debian and compile v3.2 myself. I am not aware of any Linux
distribution shipping v3.2. Someone else may be able to advise. 




Re: [squid-users] problems configuring squid with zph (packet marking)

2012-08-30 Thread Mustafa Raji
Thanks for your help.
I will use clientside_tos.
About marking packets using netfilter: I really want to mark only TCP_HIT packets;
TCP_MISS packets should not be included in the packet marking.
I will try 3.2. Could you kindly tell me which Linux OS (the most commonly
used distribution) you run this version of Squid 3.2 on?

Once again, thanks.

--- On Thu, 8/30/12, Andrew Beverley  wrote:

> From: Andrew Beverley 
> Subject: Re: [squid-users] problems configuring squid with zph (packet 
> marking)
> To: "Mustafa Raji" 
> Cc: squid-users@squid-cache.org
> Date: Thursday, August 30, 2012, 10:07 AM
> On Thu, 2012-08-30 at 00:14 -0700,
> Mustafa Raji wrote:
> > hi i have a problem with zph configuration in squid
> 3.1.11 in the squid
> > wiki i find the zph configuration directive is
> qos_flows and i want to
> > mark the local-hit packet to root this packets locally
> > 
> > the configuration in squid.conf file is 
> > 
> > acl localnet 10.10.10.0/24
> > tcp_outgoing_tos 0xFF localnet
> 
> I think you want clientside_tos if you want to affect
> packets going to
> the local client. tcp_outgoing_tos is for packets going to
> the remote
> server.
> 
> > qos_flows local-hit=0xFF
> > 
> > applying these configuration directive and dump the
> packets using
> > tcpdump shows that, this configuration does not works
> for me some
> > output of tcpdump 
> 
> From http://www.squid-cache.org/Doc/config/tcp_outgoing_tos/
> "Often only multiples of 4 is usable as the two rightmost
> bits have been
> redefined for use by ECN (RFC 3168 section 23.1)"
> 
> > i am using squid 3.1.11 with --enable-zph-qos, squid
> works in the
> > intercept mode and the os is debian squeeze
> 
> You may want to consider upgrading to the 3.2 branch
> (although I
> appreciate that this is not a Debian stable package). A lot
> of
> improvements have been made to the qos_flows code, and there
> is also the
> option to use netfilter marks which you may find more
> flexible.
> 
> Andy
> 
> 
> 


Re: [squid-users] problems configuring squid with zph (packet marking)

2012-08-30 Thread Andrew Beverley
On Thu, 2012-08-30 at 00:14 -0700, Mustafa Raji wrote:
> hi i have a problem with zph configuration in squid 3.1.11 in the squid
> wiki i find the zph configuration directive is qos_flows and i want to
> mark the local-hit packet to root this packets locally
> 
> the configuration in squid.conf file is 
> 
> acl localnet 10.10.10.0/24
> tcp_outgoing_tos 0xFF localnet

I think you want clientside_tos if you want to affect packets going to
the local client. tcp_outgoing_tos is for packets going to the remote
server.
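A sketch of the two directives side by side (note the poster's acl line is also missing its "src" type; 0x30 is an assumed ECN-safe value):

```
acl localnet src 10.10.10.0/24      # the original post omitted "src"
clientside_tos 0x30 localnet        # marks packets going back to the LAN client
# tcp_outgoing_tos 0x30 localnet    # would mark packets toward the origin server instead
```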

> qos_flows local-hit=0xFF
> 
> applying these configuration directive and dump the packets using
> tcpdump shows that, this configuration does not works for me some
> output of tcpdump 

From http://www.squid-cache.org/Doc/config/tcp_outgoing_tos/
"Often only multiples of 4 is usable as the two rightmost bits have been
redefined for use by ECN (RFC 3168 section 23.1)"
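That also explains the tcpdump output later in the thread: a requested TOS of 0xFF appears on the wire as 0xfc, because the kernel owns the two low-order ECN bits. A quick check:

```shell
# The two low-order bits of the old TOS byte belong to ECN (RFC 3168),
# so only values that are multiples of 4 pass through unchanged.
printf 'requested 0xFF -> on the wire 0x%02X\n' $(( 0xFF & ~0x03 ))   # 0xFC
printf 'requested 0x30 -> on the wire 0x%02X\n' $(( 0x30 & ~0x03 ))   # 0x30
```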

> i am using squid 3.1.11 with --enable-zph-qos, squid works in the
> intercept mode and the os is debian squeeze

You may want to consider upgrading to the 3.2 branch (although I
appreciate that this is not a Debian stable package). A lot of
improvements have been made to the qos_flows code, and there is also the
option to use netfilter marks which you may find more flexible.

Andy




[squid-users] problems configuring squid with zph (packet marking)

2012-08-30 Thread Mustafa Raji
Hi,
I have a problem with the ZPH configuration in Squid 3.1.11.
In the Squid wiki I found that the ZPH configuration directive is qos_flows, and I
want to mark the local-hit packets so I can route these packets locally.

the configuration in squid.conf file is 

acl localnet 10.10.10.0/24
tcp_outgoing_tos 0xFF localnet
qos_flows local-hit=0xFF

Applying these configuration directives and dumping the packets with tcpdump
shows that this configuration does not work for me.
Some output of tcpdump:

09:19:59.320185 IP (tos 0xfc, ttl 64, id 36951, offset 0, flags [DF], proto TCP 
(6), length 52)
192.168.40.2.55494 > 97.74.215.200.80: Flags [.], cksum 0x1a65 (correct), 
ack 4165, win 63848, options [nop,nop,TS val 21596676 ecr 60185494], length 0
09:19:59.320224 IP (tos 0x0, ttl 64, id 16215, offset 0, flags [DF], proto TCP 
(6), length 1428)
192.168.40.2.80 > 10.10.10.72.50177: Flags [P.], cksum 0x49d7 (correct), 
seq 2863:4251, ack 1, win 507, length 1388
09:19:59.345520 IP (tos 0xfc, ttl 64, id 36952, offset 0, flags [DF], proto TCP 
(6), length 52)
192.168.40.2.55494 > 97.74.215.200.80: Flags [.], cksum 0x14f3 (correct), 
ack 5553, win 63848, options [nop,nop,TS val 21596682 ecr 60185494], length 0
09:19:59.345558 IP (tos 0x0, ttl 64, id 16216, offset 0, flags [DF], proto TCP 
(6), length 1428)


10.10.10.82.50139 > 192.168.40.2.80: Flags [.], cksum 0x76b8 (correct), ack 
2092, win 16425, length 0
09:31:16.036706 IP (tos 0x0, ttl 101, id 24249, offset 0, flags [DF], proto TCP 
(6), length 1440)
208.71.40.183.80 > 192.168.40.2.58412: Flags [.], cksum 0xffa1 (correct), 
ack 1393, win 32748, options [nop,nop,TS val 2588446075 ecr 21630103], length 0
09:22:13.461840 IP (tos 0x0, ttl 74, id 52103, offset 0, flags [DF], proto TCP 
(6), length 52)
66.220.152.16.80 > 192.168.40.2.32847: Flags [.], cksum 0xefe5 (correct), 
ack 2809858213, win 40, options [nop,nop,TS val 1685014915 ecr 21630147], 
length 0
09:22:13.462201 IP (tos 0x0, ttl 74, id 52104, offset 0, flags [DF], proto TCP 
(6), length 52)
66.220.152.16.80 > 192.168.40.2.32847: Flags [.], cksum 0xebb6 (correct), 
ack 1066, win 46, options [nop,nop,TS val 1685014915 ecr 21630147], length 0
09:22:13.584147 IP (tos 0x0, ttl 74, id 52105, offset 0, flags [DF], proto TCP 
(6), length 1440)

i am using squid 3.1.11 with --enable-zph-qos, squid works in the intercept 
mode and the os is debian squeeze
Please, what is the problem with my configuration?
Note that the TOS of my outgoing packets is always 0x0 or 0xfc.


Re: [squid-users] Problems with Hotmail & attachments

2012-04-25 Thread Amos Jeffries

On 25/04/2012 6:59 p.m., Jose A. Vidal wrote:

Thank you for your suggestions Amos.

To describe the actual problem I'll write down the steps:

1.- Open hotmail account (via http://www.hotmail.com) -> OK
2.- Create new mail -> OK
3.- Try to add some attachment -> the popup screen to select the file 
does not open




This is not much help. Hotmail uses multiple page display systems 
including HTML, Silverlight, Flash, something for mobiles, etc., so we 
can't be sure anyone testing is seeing the same behaviour as you do.


And then there is the Hotmail system design ... 
http://wiki.squid-cache.org/KnowledgeBase/Hotmail


Your detection of a difference between HTTP and HTTPS shows it's probably 
that security system design still biting badly. I was getting kind of 
hopeful this year; nobody has mentioned Hotmail problems since early 
2011, and there was talk of Hotmail site redesign improvements late last 
year.


Amos



Re: [squid-users] Problems with Hotmail & attachments

2012-04-24 Thread Amos Jeffries

On 25/04/2012 12:48 a.m., Jose A. Vidal wrote:

Hi all,

I have a transparent configuration of squid 2.6.STABLE21 without 
SquidGuard or other add-ons.


Tried an upgrade? 2.7.STABLE9 at minimum, although even that is about to 
get deprecated now.




I have configured the iptables to redirect  tcp 80 to standard
Squid port and forwarded all other ports to reach destinations.


Everything is fine:
1.- clients can open their hotmail/gmail accounts;
2.- clients can send mails;
3.- clients can send mails with attachments using gmail;
but
4.- clients can not send mails with attachments using hotmail.

I have googled to find a workaround or solution but have not found
anything:

This  is what I've found and applied without success:
#  hotmail
acl hotmail dstdomain .hotmail.com .passport.com .msn.com 
.passport.net .live.con

balance_on_multiple_ip off
header_access Accept-Encoding deny hotmail
always_direct allow hotmail

Any ideas?



You describe what you did, but the symptom description leaves us 
wondering what the actual problem is.

What are the relevant HTTP traffic headers?

PS. I have seen identical behaviour from SquirrelMail installations 
which fail to handle POST requests from IPv6 clients due to bad 
X-Forwarded-For handling issues.


Amos


[squid-users] Problems with Hotmail & attachments

2012-04-24 Thread Jose A. Vidal

Hi all,

I have a transparent configuration of squid 2.6.STABLE21 without 
SquidGuard or other add-ons.

I have configured the iptables to redirect  tcp 80 to standard
Squid port and forwarded all other ports to reach destinations.


Everything is fine:
1.- clients can open their hotmail/gmail accounts;
2.- clients can send mails;
3.- clients can send mails with attachments using gmail;
but
4.- clients can not send mails with attachments using hotmail.

I have googled to find a workaround or solution but have not found
anything:

This  is what I've found and applied without success:
#  hotmail
acl hotmail dstdomain .hotmail.com .passport.com .msn.com .passport.net 
.live.con

balance_on_multiple_ip off
header_access Accept-Encoding deny hotmail
always_direct allow hotmail

Any ideas?

Thank you.











Re: [squid-users] Problems with NTLM

2012-04-19 Thread Harry Mills
Can you give any more details about what isn't working? Is it not 
authenticating for https, or not able to fetch https pages?


Harry


On 19/04/2012 18:43, Wladner Klimach wrote:

I've included the squid user in the group and it is working now! But HTTPS
access is not working. Any clue what could be causing such a
problem?

Regards,

Wladner

2012/4/19 Harry Mills:

On 19/04/2012 17:52, Wladner Klimach wrote:


Look what I've got from cache.log from a Windows XP client :

[2012/04/19 13:45:04,  0] utils/ntlm_auth.c:558(winbind_pw_check)
   Login for user [REDECAMARA]\[P_991064]@[CAINF-269652] failed due to
[winbind client not authorized to use winbindd_pam_auth_crap. Ensure
permissions on /var/lib/samba/winbindd_privileged are set correctly.]
[2012/04/19 13:45:04,  0]
utils/ntlm_auth.c:833(manage_squid_ntlmssp_request)
2012/04/19 13:45:04.390| authenticateNTLMHandleReply: helper:
'0x12212b08' sent us 'BH NT_STATUS_ACCESS_DENIED'
   NTLMSSP BH: NT_STATUS_ACCESS_DENIED
2012/04/19 13:45:04.390| ntlm/auth_ntlm.cc(504) releaseAuthServer:
releasing NTLM auth server '0x12212b08'
2012/04/19 13:45:04.390| authenticateNTLMHandleReply: Error validating
user via NTLM. Error returned 'BH NT_STATUS_ACCESS_DENIED'

Which user do I have to grant permission to access winbindd_privileged?



On my Redhat setup I have the following perms:

drwxr-x--- 2 root wbpriv

I have put the squid user into the wbpriv group.
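For reference, a sketch of the commands involved (group and user names vary by distribution: wbpriv on Red Hat, winbindd_priv on Debian, and the Squid user may be squid or proxy):

```shell
# Commands you would run as root on the proxy (shown only as comments here):
#   chgrp wbpriv /var/lib/samba/winbindd_privileged
#   chmod 750    /var/lib/samba/winbindd_privileged   # i.e. drwxr-x---
#   usermod -a -G wbpriv squid
#   service squid restart    # group membership is picked up at restart
#
# Demonstrate the target mode on a scratch directory:
DEMO=/tmp/winbindd_priv_demo
mkdir -p "$DEMO"
chmod 750 "$DEMO"
stat -c '%a' "$DEMO"        # prints 750
```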

Regards

Harry




regards,

Wladner

2012/4/18 Simon Dwyer:


HI Wladner,

I get that second message when i forget to start the winbind service.

On CentOS: service winbind start

Simon

On Wed, 2012-04-18 at 16:05 -0300, Wladner Klimach wrote:


Hi everyone,

I'm trying to implement the NTLM scheme on my Squid box. I've already
configured Samba and winbind, so I can check with wbinfo and even
run /usr/bin/ntlm_auth at the shell, and it works. But for some hidden
reason Squid is not getting the same result. Look what is popping up in
the cache.log:

2012/04/18 15:59:58.404| authenticateNTLMHandleReply: helper:
'0x1fe158b8' sent us 'NA NT_STATUS_UNSUCCESSFUL'

[2012/04/18 16:00:01, 0] utils/ntlm_auth.c:get_winbind_netbios_name(172)
   could not obtain winbind netbios name!

I hope some nice soul can help me!

regards,

Wladner










Re: [squid-users] Problems with NTLM

2012-04-19 Thread Harry Mills

On 19/04/2012 17:52, Wladner Klimach wrote:

Look what I've got from cache.log from a Windows XP client :

[2012/04/19 13:45:04,  0] utils/ntlm_auth.c:558(winbind_pw_check)
   Login for user [REDECAMARA]\[P_991064]@[CAINF-269652] failed due to
[winbind client not authorized to use winbindd_pam_auth_crap. Ensure
permissions on /var/lib/samba/winbindd_privileged are set correctly.]
[2012/04/19 13:45:04,  0] utils/ntlm_auth.c:833(manage_squid_ntlmssp_request)
2012/04/19 13:45:04.390| authenticateNTLMHandleReply: helper:
'0x12212b08' sent us 'BH NT_STATUS_ACCESS_DENIED'
   NTLMSSP BH: NT_STATUS_ACCESS_DENIED
2012/04/19 13:45:04.390| ntlm/auth_ntlm.cc(504) releaseAuthServer:
releasing NTLM auth server '0x12212b08'
2012/04/19 13:45:04.390| authenticateNTLMHandleReply: Error validating
user via NTLM. Error returned 'BH NT_STATUS_ACCESS_DENIED'

Which user do I have to grant permission to access winbindd_privileged?


On my Redhat setup I have the following perms:

drwxr-x--- 2 root wbpriv

I have put the squid user into the wbpriv group.

Regards

Harry



regards,

Wladner

2012/4/18 Simon Dwyer:

HI Wladner,

I get that second message when i forget to start the winbind service.

On CentOS: service winbind start

Simon

On Wed, 2012-04-18 at 16:05 -0300, Wladner Klimach wrote:

Hi everyone,

I'm trying to implement the NTLM scheme on my Squid box. I've already
configured Samba and winbind, so I can check with wbinfo and even
run /usr/bin/ntlm_auth at the shell, and it works. But for some hidden
reason Squid is not getting the same result. Look what is popping up in
the cache.log:

2012/04/18 15:59:58.404| authenticateNTLMHandleReply: helper:
'0x1fe158b8' sent us 'NA NT_STATUS_UNSUCCESSFUL'

[2012/04/18 16:00:01, 0] utils/ntlm_auth.c:get_winbind_netbios_name(172)
   could not obtain winbind netbios name!

I hope some nice soul can help me!

regards,

Wladner







Re: [squid-users] Problems with NTLM

2012-04-19 Thread Wladner Klimach
Look what I've got from cache.log from a Windows XP client :

[2012/04/19 13:45:04,  0] utils/ntlm_auth.c:558(winbind_pw_check)
  Login for user [REDECAMARA]\[P_991064]@[CAINF-269652] failed due to
[winbind client not authorized to use winbindd_pam_auth_crap. Ensure
permissions on /var/lib/samba/winbindd_privileged are set correctly.]
[2012/04/19 13:45:04,  0] utils/ntlm_auth.c:833(manage_squid_ntlmssp_request)
2012/04/19 13:45:04.390| authenticateNTLMHandleReply: helper:
'0x12212b08' sent us 'BH NT_STATUS_ACCESS_DENIED'
  NTLMSSP BH: NT_STATUS_ACCESS_DENIED
2012/04/19 13:45:04.390| ntlm/auth_ntlm.cc(504) releaseAuthServer:
releasing NTLM auth server '0x12212b08'
2012/04/19 13:45:04.390| authenticateNTLMHandleReply: Error validating
user via NTLM. Error returned 'BH NT_STATUS_ACCESS_DENIED'

Which user do I have to grant permission to access winbindd_privileged?

regards,

Wladner

2012/4/18 Simon Dwyer :
> HI Wladner,
>
> I get that second message when i forget to start the winbind service.
>
> on Centos : service start winbind
>
> Simon
>
> On Wed, 2012-04-18 at 16:05 -0300, Wladner Klimach wrote:
>> Hi everyone,
>>
>> I'm trying to implement NTLM scheme in my squid box. I've already
>> configured samba and winbind so that I can check with wbinfo and even
>> run /usr/bin/ntlm_auth at the shell and it works. But for some hidden
>> problem squid is not having the same result. Look what is poping up at
>> the cache.log:
>>
>> 2012/04/18 15:59:58.404| authenticateNTLMHandleReply: helper:
>> '0x1fe158b8' sent us 'NA NT_STATUS_UNSUCCESSFUL'
>>
>> [2012/04/18 16:00:01, 0] utils/ntlm_auth.c:get_winbind_netbios_name(172)
>>   could not obtain winbind netbios name!
>>
>> I hope some nice soul can help me!
>>
>> regards,
>>
>> Wladner
>
>


Re: [squid-users] Problems with NTLM

2012-04-18 Thread Simon Dwyer
HI Wladner,

I get that second message when i forget to start the winbind service.

On CentOS: service winbind start

Simon

On Wed, 2012-04-18 at 16:05 -0300, Wladner Klimach wrote:
> Hi everyone,
> 
> I'm trying to implement NTLM scheme in my squid box. I've already
> configured samba and winbind so that I can check with wbinfo and even
> run /usr/bin/ntlm_auth at the shell and it works. But for some hidden
> problem squid is not having the same result. Look what is poping up at
> the cache.log:
> 
> 2012/04/18 15:59:58.404| authenticateNTLMHandleReply: helper:
> '0x1fe158b8' sent us 'NA NT_STATUS_UNSUCCESSFUL'
> 
> [2012/04/18 16:00:01, 0] utils/ntlm_auth.c:get_winbind_netbios_name(172)
>   could not obtain winbind netbios name!
> 
> I hope some nice soul can help me!
> 
> regards,
> 
> Wladner




[squid-users] Problems with NTLM

2012-04-18 Thread Wladner Klimach
Hi everyone,

I'm trying to implement the NTLM scheme on my Squid box. I've already
configured Samba and winbind, so I can check with wbinfo and even
run /usr/bin/ntlm_auth at the shell, and it works. But for some hidden
reason Squid is not getting the same result. Look what is popping up in
the cache.log:

2012/04/18 15:59:58.404| authenticateNTLMHandleReply: helper:
'0x1fe158b8' sent us 'NA NT_STATUS_UNSUCCESSFUL'

[2012/04/18 16:00:01, 0] utils/ntlm_auth.c:get_winbind_netbios_name(172)
  could not obtain winbind netbios name!
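For comparison, a typical squid.conf helper setup for this scheme (the helper path and child count are assumptions; --helper-protocol=squid-2.5-ntlmssp is the mode ntlm_auth documents for Squid):

```
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
```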

I hope some nice soul can help me!

regards,

Wladner


Re: [squid-users] Problems with squid in a campus setup

2012-03-27 Thread Marcus Kool



But read the FAQ about memory usage and a large disk cache:
http://wiki.squid-cache.org/SquidFaq/SquidMemory
Squid uses an additional 512*14 MB = 7.1 GB for the index of the disk
  cache. I suggest to downsize to 1 GB in-memory index which implies to
use only 73 GB disk cache.


Ah okay, here's one of my initial mistakes. I used only 10 MB for my
calculation but of course we use a 64bit squid. Out of curiosity and because I
want to learn: the reasoning for shrinking the disk cache from 512 GB to 73 GB
is because a big cache as we have it at the moment only leads to lots of stale
objects in cache which additionally burden the CPU and RAM because of in-
memory metadata?


The downsize is required to stop exhausting the resources of the server.
The server needs space for OS, file system buffer, TCP buffers, other
processes etc.
Once you downsized, you can add a little more disk cache in small steps
observing the performance over longer periods of time.

You did not publish the whole squid.conf, but for large disk caches
the parameters cache_swap_low and cache_swap_high are important.
They should differ only by 1 point. I recommend
cache_swap_low 93
cache_swap_high 94

Marcus


Re: [squid-users] Problems with squid in a campus setup

2012-03-27 Thread Eliezer Croitoru

On 27/03/2012 12:25, Christian Loth wrote:

Hello,

first of all, thank you for your recommendations.

On Monday 26 March 2012 16:34:21 Marcus Kool wrote:

Youtube may be hogging your pipe but it is better to know than to guess.


Of course, before we decided for the proxy setup we investigated bandwidth
usage. HTTP Traffic was about 60-70% of our traffic, and a good chunk of that 
was
youtube. That's why we decided, that we would try a squid setup with youtube
caching. However it took a while before we found a solution for caching
youtube, as 3.1 hasn't implemented the necessary features yet.


The access.log shows content sizes so with a simple awk script it should
be easy to find out.

I have also seen many sites where advertisements and trackers consume 15%
bandwidth. This may vary. So blocking ads and trackers is a thing to
consider.


Thanks for this insight! This would of course be a welcome saving of bandwidth
in my personal opinion. I'm just not sure if we're allowed to do this, as the
patron of the proxy is a public-law institution and as such bound to anti-
censorship laws. Need to check with a Legalese translator.



Do not expect too much from web caching. More and more websites
  intentionally make their sites not cacheable. Look at the percentage of
  TCP_MISS in access.log or use a second awk script to find out more about
  cacheability.


Every bit counts. Before we apply for an increase of (expensive) uplink
bandwidth we want to play every trick we have up our sleeve. At the moment our
cache is still cold, because for getting the proxy running again I had to
completely wipe the cache. At the moment we have a hit:miss ratio of about
1:5.  For youtube caching we have a saved bandwidth around 100 GB for the 27th
of march (one video in particular had a size of 768 MB and was watched 19
times). Online lectures are currently en vogue.



I recommend going for a newer Squid: 3.1.19 is stable and fixes issues that
3.1.10 has.


Will do so.


On Linux, aufs has a better performance than diskd


Thanks again for this tip!




Additional memory for storing objects is 2048 MB:

cache_mem 2048 MB


Seems right. But you also need virtual memory for Squid to be able to
fork processes without issues. Do you have 8 GB of swap?


Yes. 10 GB actually.



But read the FAQ about memory usage and a large disk cache:
http://wiki.squid-cache.org/SquidFaq/SquidMemory
Squid uses an additional 512*14 MB = 7.1 GB for the index of the disk
  cache. I suggest to downsize to 1 GB in-memory index which implies to
use only 73 GB disk cache.


Ah okay, here's one of my initial mistakes. I used only 10 MB for my
calculation but of course we use a 64bit squid. Out of curiosity and because I
want to learn: the reasoning for shrinking the disk cache from 512 GB to 73 GB
is because a big cache as we have it at the moment only leads to lots of stale
objects in cache which additionally burden the CPU and RAM because of in-
memory metadata?
Mostly because a Squid cache (not nginx) of 512 GB will consume a lot 
of memory for its index, while you would prefer to serve other content 
from memory, and also because of the stale objects.
If you are using nginx for YouTube caching, remember to disable Squid's 
caching of the YouTube videos.
If you don't, what will happen is that nginx will create new headers for 
the cached objects; Squid will then cache them but will never use 
them again.
(If you didn't change the stock rule of not caching objects containing "?" 
or "cgi-bin", you are safe.)
By the way, if you have some time to analyze the proxy logs, you can 
find other sites where nginx can serve the stale files. 
I have used it to cache Windows updates and some other video sites as well; 
the pattern was similar, so I had a huge list and could serve most of 
the stale content from nginx, which left a lot of RAM for the Squid index.
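The "basic not cache" defaults Eliezer refers to are the stock QUERY lines; if they were removed, restoring something like the following keeps the nginx-served dynamic URLs out of Squid's cache:

```
# Stock squid.conf default (pre-3.1 style): never cache dynamic-looking URLs.
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
```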


P.S.
Forgot to say: you must never let your proxy's RAM usage reach its limit, 
or else swapping will happen and slow the server down.


Regards,
Eliezer



Best regards,
  - Christian Loth




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Problems with squid in a campus setup

2012-03-27 Thread Christian Loth
Hello,

first of all, thank you for your recommendations.

On Monday 26 March 2012 16:34:21 Marcus Kool wrote:
> Youtube may be hogging your pipe but it is better to know than to guess.

Of course, before we decided for the proxy setup we investigated bandwidth 
usage. HTTP Traffic was about 60-70% of our traffic, and a good chunk of that 
was 
youtube. That's why we decided, that we would try a squid setup with youtube 
caching. However it took a while before we found a solution for caching 
youtube, as 3.1 hasn't implemented the necessary features yet.

> The access.log shows content sizes so with a simple awk script it should
> be easy to find out.
> 
> I have also seen many sites where advertisements and trackers consume 15%
> bandwidth. This may vary. So blocking ads and trackers is a thing to
> consider.

Thanks for this insight! This would of course be a welcome saving of bandwidth 
in my personal opinion. I'm just not sure if we're allowed to do this, as the 
patron of the proxy is a public-law institution and as such bound to anti-
censorship laws. Need to check with a Legalese translator.

> 
> Do not expect too much from web caching. More and more websites
>  intentionally make their sites not cacheable. Look at the percentage of
>  TCP_MISS in access.log or use a second awk script to find out more about
>  cacheability.

Every bit counts. Before we apply for an increase of (expensive) uplink 
bandwidth we want to play every trick we have up our sleeve. At the moment our 
cache is still cold, because for getting the proxy running again I had to 
completely wipe the cache. At the moment we have a hit:miss ratio of about
1:5.  For youtube caching we have a saved bandwidth around 100 GB for the 27th 
of march (one video in particular had a size of 768 MB and was watched 19 
times). Online lectures are currently en vogue.

> 
> I recommend going for a newer Squid: 3.1.19 is stable and fixes issues that
> 3.1.10 has.

Will do so.

> On Linux, aufs has a better performance than diskd

Thanks again for this tip!

> 
> > Additional memory for storing objects is 2048 MB:
> >
> > cache_mem 2048 MB
> 
> Seems right. But you also need virtual memory for Squid being able to
> fork processes without issues. Do have have 8 GB swap ?

Yes. 10 GB actually.

> 
> But read the FAQ about memory usage and a large disk cache:
> http://wiki.squid-cache.org/SquidFaq/SquidMemory
> Squid uses an additional 512*14 MB = 7.1 GB for the index of the disk
>  cache. I suggest to downsize to 1 GB in-memory index which implies to
> use only 73 GB disk cache.

Ah okay, here's one of my initial mistakes. I used only 10 MB for my 
calculation but of course we use a 64bit squid. Out of curiosity and because I 
want to learn: the reasoning for shrinking the disk cache from 512 GB to 73 GB 
is because a big cache as we have it at the moment only leads to lots of stale 
objects in cache which additionally burden the CPU and RAM because of in-
memory metadata?


Best regards,
 - Christian Loth
 


Re: [squid-users] Problems with squid in a campus setup

2012-03-26 Thread Marcus Kool

Youtube may be hogging your pipe but it is better to know than to guess.
The access.log shows content sizes so with a simple awk script it should
be easy to find out.

I have also seen many sites where advertisements and trackers consume 15%
bandwidth. This may vary. So blocking ads and trackers is a thing to
consider.

Do not expect too much from web caching. More and more websites intentionally
make their sites not cacheable. Look at the percentage of TCP_MISS in
access.log or use a second awk script to find out more about cacheability.


First some information about the setup: the hardware itself is a Xeon E3110
server with 8 GB of RAM and lots of diskspace. OS is CentOS 6.2, a derivate of
Red Hat Enterprise Linux and I'm using the CentOS flavour of Squid, version
squid-3.1.10-1.el6_2.2.x86_64.


I recommend going for a newer Squid: 3.1.19 is stable and fixes issues that
3.1.10 has.


Half a TB is planned for squid webobjects with the following line:

cache_dir diskd /var/cache/proxy/squid 512000 16 256 Q1=72 Q2=64


On Linux, aufs has a better performance than diskd


Additional memory for storing objects is 2048 MB:

cache_mem 2048 MB


Seems right. But you also need virtual memory for Squid to be able to
fork processes without issues. Do you have 8 GB of swap?

But read the FAQ about memory usage and a large disk cache:
http://wiki.squid-cache.org/SquidFaq/SquidMemory
Squid uses an additional 512*14 MB = 7.1 GB for the index of the disk cache.
I suggest downsizing to a 1 GB in-memory index, which implies
using only a 73 GB disk cache.
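The arithmetic, using the FAQ's 64-bit rule of thumb of roughly 14 MB of index RAM per GB of disk cache:

```shell
# 512 GB disk cache -> index RAM needed (MB):
echo $(( 512 * 14 ))      # 7168 MB, i.e. ~7 GB
# Disk cache (GB) that a 1 GB (1024 MB) index budget supports:
echo $(( 1024 / 14 ))     # 73 GB
```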


Squid works in combination with an NGINX proxy setup for caching youtube video
content, as this is probably the greatest bandwidth hog. It is configured as a
cache_peer and a regexp acl:

acl youtube_videos url_regex -i 
^http://[^/]+(\.youtube\.com|\.googlevideo\.com|\.video\.google\.com)/(videoplayback|get_video|videodownload)\?
acl range_request req_header Range .
acl begin_param url_regex -i [?&]begin=
acl id_param url_regex -i [?&]id=
acl itag_param url_regex -i [?&]itag=
acl sver3_param url_regex -i [?&]sver=3
cache_peer 127.0.0.1 parent 8081 0 proxy-only no-query connect-timeout=5 
no-digest
cache_peer_access 127.0.0.1 allow youtube_videos id_param itag_param 
sver3_param !begin_param !range_request
cache_peer_access 127.0.0.1 deny all
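A quick way to sanity-check which URLs that url_regex diverts to the peer (the sample URLs here are made up):

```shell
# The youtube_videos ACL pattern from the squid.conf above:
pat='^http://[^/]+(\.youtube\.com|\.googlevideo\.com|\.video\.google\.com)/(videoplayback|get_video|videodownload)\?'

# A videoplayback URL matches and is sent to the nginx cache_peer:
echo 'http://o-o.preferred.googlevideo.com/videoplayback?id=abc&itag=34' \
    | grep -Eiq "$pat" && echo "goes to nginx peer"

# An ordinary watch page does not match and stays with Squid:
echo 'http://www.youtube.com/watch?v=abc' \
    | grep -Eiq "$pat" || echo "stays with squid"
```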

Squid seemed to be in an infinite restarting loop and the following excerpts
from cache.log seem relevant.

The first restart had the following line in cache.log after about 2 weeks of
operation:

2012/03/25 11:23:45| assertion failed: filemap.cc:76: "fm->max_n_files<= (1<<  
24)"

After that we have a rinse and repeat of squid restarting until after cache
validation and then:

2012/03/26 09:16:30| storeLateRelease: released 0 objects
2012/03/26 09:16:30| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 17: (2) No such file or directory
2012/03/26 09:16:30| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 19: (2) No such file or directory
[..several more of the same..]
2012/03/26 09:16:30| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 132: (2) No such file or directory
2012/03/26 09:16:30| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 137: (2) No such file or directory
2012/03/26 09:16:32| assertion failed: filemap.cc:76: "fm->max_n_files<= (1<<  
24)"


The assertion failure is not common. A very old bugfix (Squid 2.6) suggests that
it is related to a large cache.
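The assert caps the UFS/diskd file map at 2^24 entries per cache_dir. With Squid's traditional average-object-size assumption of about 13 KB (an assumption here — check your own store.log), a 512000 MB cache_dir overflows that cap:

```shell
echo $(( 1 << 24 ))               # 16777216 file slots per cache_dir
echo $(( 512000 * 1024 / 13 ))    # ~40 million estimated objects -- far over the cap
```

Splitting the space across several smaller cache_dir lines, or simply shrinking the cache as suggested above, keeps each directory under the limit.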


this line again.

I'm not sure what exactly happened. Judging from the name of the assert, it had
something to do with a maximum number of files. But is it a Squid limitation or
a filesystem limitation? The number of file descriptors is set to 4096.
The filesystem type is ext4.

So finally here are my questions:
1) What exactly happened and how can I fix it?
2) From your experience, are the resources used adequate for the use case
 given?
3) Is there a better way to cache video content with Squid 3.1 aside from using
 a cache_peer proxy?
4) Are there other hints and tips that you could share regarding such a setup?

Thanks in advance and best regards,
- Christian Loth


[squid-users] Problems with squid in a campus setup

2012-03-26 Thread Christian Loth
Hello everyone,

First of all, my practical experience with Squid is as yet rather limited, so
please bear with me. I couldn't find my specific problem in the FAQ (or, if it
is in the FAQ, I couldn't recognize it as my problem), and Google wasn't helpful
either.

Some weeks ago I've been given the task to setup and operate a squid proxy for
roundabout 1500 users. We are managing internet connections for several student
dormitories on a university campus and recently switched from an old-fashioned 
volume-based fee to a flat fee. However, we misjudged the change in user
behaviour and our 100 Mbit uplink was soon congested. The main motivation for
using Squid is to save bandwidth and to improve the average user experience.
For a minimally invasive approach we decided to use an intercept
configuration.

And it's been a rocky ride, mostly because of a hard-to-find hardware fault.
The hardware has been replaced, and it seemed we had normal operation.
Until yesterday, that is.

First some information about the setup: the hardware itself is a Xeon E3110 
server with 8 GB of RAM and lots of disk space. The OS is CentOS 6.2, a
derivative of Red Hat Enterprise Linux, and I'm using the CentOS flavour of
Squid, version squid-3.1.10-1.el6_2.2.x86_64.

Half a TB is planned for Squid web objects with the following line:

cache_dir diskd /var/cache/proxy/squid 512000 16 256 Q1=72 Q2=64

Additional memory for storing objects is 2048 MB:

cache_mem 2048 MB

Squid works in combination with an NGINX proxy setup for caching YouTube video
content, as this is probably the greatest bandwidth hog. It is configured as a
cache_peer with a regexp ACL:

acl youtube_videos url_regex -i 
^http://[^/]+(\.youtube\.com|\.googlevideo\.com|\.video\.google\.com)/(videoplayback|get_video|videodownload)\?
acl range_request req_header Range .
acl begin_param url_regex -i [?&]begin=
acl id_param url_regex -i [?&]id=
acl itag_param url_regex -i [?&]itag=
acl sver3_param url_regex -i [?&]sver=3
cache_peer 127.0.0.1 parent 8081 0 proxy-only no-query connect-timeout=5 
no-digest
cache_peer_access 127.0.0.1 allow youtube_videos id_param itag_param 
sver3_param !begin_param !range_request
cache_peer_access 127.0.0.1 deny all

Squid seemed to be in an infinite restarting loop and the following excerpts 
from cache.log seem relevant.

The first restart had the following line in cache.log after about 2 weeks of 
operation:

2012/03/25 11:23:45| assertion failed: filemap.cc:76: "fm->max_n_files <= (1 << 
24)"

After that we have a rinse and repeat of squid restarting until after cache 
validation and then:

2012/03/26 09:16:30| storeLateRelease: released 0 objects
2012/03/26 09:16:30| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 17: (2) No such file or directory
2012/03/26 09:16:30| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 19: (2) No such file or directory
[..several more of the same..]
2012/03/26 09:16:30| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 132: (2) No such file or directory
2012/03/26 09:16:30| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 137: (2) No such file or directory
2012/03/26 09:16:32| assertion failed: filemap.cc:76: "fm->max_n_files <= (1 << 
24)"

And then this line appears again.

I'm not sure what exactly happened. Judging from the name of the assert, it had
something to do with a maximum number of files. But is it a Squid limitation or
a filesystem limitation? The file-descriptor limit is set to 4096, and the
filesystem type is ext4.
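A rough back-of-the-envelope check (my arithmetic, not from the thread; it assumes Squid's default store_avg_object_size of 13 KB) shows why a 512000 MB cache_dir can collide with that assertion, which caps a single UFS/diskd filemap at 2^24 slots:

```shell
# The failed assertion "fm->max_n_files <= (1 << 24)" limits one
# cache_dir to at most 2^24 file slots.
max_files=$((1 << 24))
echo "$max_files"               # → 16777216

# Estimated object count for a 512000 MB cache_dir at ~13 KB/object
# (Squid's default store_avg_object_size) is well above that limit:
echo $((512000 * 1024 / 13))    # roughly 40 million slots would be needed
```

If that estimate is in the right ballpark, splitting the store into several smaller cache_dir entries, each staying below the slot limit, would be one way to avoid the assertion.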

So finally here are my questions:
1) What exactly happened and how can I fix it?
2) From your experience, are the resources used adequate for the given use case?
3) Is there a better way to cache video content with Squid 3.1 aside from using
a cache_peer proxy?
4) Are there other hints and tips that you could share regarding such a setup?

Thanks in advance and best regards,
- Christian Loth



Re: [squid-users] problems when monitoring traffic in squid

2012-02-20 Thread Amos Jeffries

On 20/02/2012 1:39 a.m., Mustafa Raji wrote:

Hi,
I installed a Squid box with one interface; this interface is connected to a
MikroTik router, which in turn is connected to the Internet.
Now when I check the traffic on the cache server using iptables with the -v
option, it shows that the amount of traffic passing through the INPUT chain
(which I suppose is the traffic coming from the Internet)


No. The INPUT chain handles all traffic entering software on the box (i.e.
from client to Squid, *plus* from Internet to Squid). This is possibly
confusing your analysis.
Also, -v shows counters for many rules; without seeing the display it is
hard to tell what you are looking at.


I would suggest creating a LOG rule or an empty chain first in your
iptables config that matches the particular traffic you want to
measure (port 80 coming from or going to anywhere not on your LAN), then
using that rule's counter as the traffic measure.
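A minimal sketch of that suggestion (the chain name and the 10.0.0.0/8 LAN range are placeholders; adjust them to your network):

```
# Empty accounting chain; the counters of interest live on the jump rules.
iptables -N WEB_ACCT
# Port-80 replies arriving from outside the LAN:
iptables -I INPUT  -p tcp --sport 80 ! -s 10.0.0.0/8 -j WEB_ACCT
# Port-80 requests leaving for outside the LAN:
iptables -I OUTPUT -p tcp --dport 80 ! -d 10.0.0.0/8 -j WEB_ACCT
# Read the packet/byte counters on the jump rules:
iptables -v -n -x -L INPUT  | grep WEB_ACCT
iptables -v -n -x -L OUTPUT | grep WEB_ACCT
```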




is bigger than the amount of traffic that passes through the OUTPUT chain
(which I suppose is the client traffic),


No. The OUTPUT chain handles all traffic leaving software on the box (i.e.
Squid to client, *plus* Squid to Internet). To measure local traffic, use
a rule like the one suggested for INPUT.


NP: I do find it very strange that while watching a video more is
arriving than leaving. Even a cache HIT should not have that behaviour.
It seems like something very strange is going on (compressing data as
it passes through? fetching a whole large file while delivering only a
small range of it?).



Why did this happen? There is no cache error or warning in cache.log,
and the cache server seems to be working fine when I look at access.log.
As a test I checked the rule for cached objects: when I open a video that is
cached in the Squid box, the video is served normally, but the amount of
traffic in the INPUT chain increases. Why, when the video works from the cache
on the client side? The traffic in the INPUT chain even increases very fast
(e.g. 18 MB in about 5 seconds); I really don't have that much bandwidth, so
the video must be served from the cache. Please, can you help me: if the video
is served from the cache, why is there an increase in the INPUT chain, which I
supposed would only carry stale objects from the page?


I think most of what you are seeing is client-to-Squid traffic and the
resulting Squid-to-client traffic. But there is some weirdness, like the
INPUT chain increasing so much, so a cleaner analysis would be
worthwhile just to be certain what is actually going on.


Amos


[squid-users] problems when monitoring traffic in squid

2012-02-19 Thread Mustafa Raji
Hi,
I installed a Squid box with one interface; this interface is connected to a
MikroTik router, which in turn is connected to the Internet.
Now when I check the traffic on the cache server using iptables with the -v
option, it shows that the amount of traffic passing through the INPUT chain
(which I suppose is the traffic coming from the Internet) is bigger than the
amount of traffic that passes through the OUTPUT chain (which I suppose is the
client traffic). Why did this happen? There is no cache error or warning in
cache.log, and the cache server seems to be working fine when I look at
access.log.
As a test I checked the rule for cached objects: when I open a video that is
cached in the Squid box, the video is served normally, but the amount of
traffic in the INPUT chain increases. Why, when the video works from the cache
on the client side? The traffic in the INPUT chain even increases very fast
(e.g. 18 MB in about 5 seconds); I really don't have that much bandwidth, so
the video must be served from the cache. Please, can you help me: if the video
is served from the cache, why is there an increase in the INPUT chain, which I
supposed would only carry stale objects from the page?

thank you 
with my best regards 
 




RE: [squid-users] Problems with Active Sync over squid with basic auth. Any successful config for Active Sync and Outlook Anywhere on Exchange 2010 replacing an ISA server?

2012-01-20 Thread Isenberg, Holger
Configuration is stable now. Tested with several Active Sync mobile clients and 
Desktop Outlook 2010. The only part not yet tested is Kerberos and NTLM based 
authentication where parameter connection-auth might be relevant.

It's almost the same as given in
http://wiki.squid-cache.org/ConfigExamples/Reverse/OutlookWebAccess, with the
connection-auth parameter added and SSL options changed, as I'm using a
wildcard certificate. To disable the caching function, proxy-only, no-query
and no-digest are added.


# Reverse Proxy for Active Sync, Outlook Webaccess, Outlook Anywhere (RPC over 
HTTPS)
# as frontend for Exchange 2010
# squid.conf for squid 3.1.18
# http://wiki.squid-cache.org/ConfigExamples/Reverse/OutlookWebAccess

# Debugging:
#debug_options ALL,3

logformat combined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log /var/log/squidext/access.log combined
cache_log /var/log/squidext/cache.log

cache_effective_user squidext
cache_effective_group squidext
pid_filename /var/run/squidext.pid

httpd_suppress_version_string on 
cache_mgr nomail_address_given
visible_hostname webmail.domain.com
via off
forwarded_for transparent
ignore_expect_100 on
ssl_unclean_shutdown on

# Internet connectors
https_port 172.17.201.25:443 accel \
cert=/etc/ssl/certs/domain.com.pem key=/etc/ssl/private/domain.com.pem \
defaultsite=webmail.domain.com

# destination server (Exchange)
cache_peer 192.168.100.24 parent 443 0 \
ssl ssldomain=*.domain.com sslcafile=/etc/ssl/certs/equifax_CA.pem \
proxy-only no-query no-digest front-end-https=on originserver \
login=PASS connection-auth=on name=exchange 
forceddomain=webmail.domain.com

acl srcall src all
acl EXCH dstdomain webmail.domain.com
never_direct allow EXCH
http_access allow EXCH
http_access deny srcall
cache_peer_access exchange allow EXCH
cache_peer_access exchange deny srcall

# eof

 


RE: [squid-users] Problems with Active Sync over squid with basic auth. Any successful config for Active Sync and Outlook Anywhere on Exchange 2010 replacing an ISA server?

2012-01-19 Thread Isenberg, Holger
Using 3.1.18 now, with login=PASS instead and connection-auth=on added (both
on the cache_peer line), Active Sync works.

cache_peer 192.168.100.24 parent 443 0 \
ssl sslflags=DONT_VERIFY_PEER \
sslcert=/etc/ssl/certs/webmail.domain.com.pem 
sslkey=/etc/ssl/certs/webmail.domain.com.pem \
proxy-only no-query no-digest front-end-https=on sourcehash round-robin 
originserver \
login=PASS connection-auth=on name=exchange 
forceddomain=webmail.domain.com

I'll reply again in a few days, if this configuration is stable...


> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
> Sent: Thursday, January 19, 2012 11:13 AM
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Problems with Active Sync over 
> squid with basic auth. Any successful config for Active Sync 
> and Outlook Anywhere on Exchange 2010 replacing an ISA server?
>
> 401 status means the header not being accepted is the 
> "Authorization:" 
> header.
> 
> Connection is unchanged from what was passed to Squid, just 
> re-positioned.
> 
> Surrogate-Capability is a bit new yes, but HTTP requires ignoring 
> unsupported headers. IIS would be incapable of performing 
> regular HTTP 
> traffic if it were that sensitive to unknown headers coming from 
> clients. Weird stuff is the norm rather than the exception in HTTP.
> 
> 
> To debug further you can try opening a connection to IIS with 
> telnet and 
> send variations of those headers to it cut-n-paste style. Or use the 
> squidclient tool to tailor the request particulars.
> 
> 
> Amos
> 
> 


Re: [squid-users] Problems with Active Sync over squid with basic auth. Any successful config for Active Sync and Outlook Anywhere on Exchange 2010 replacing an ISA server?

2012-01-19 Thread Amos Jeffries

On 19/01/2012 10:13 p.m., Isenberg, Holger wrote:

Is anyone using Squid successfully as a reverse proxy for Outlook Anywhere
(RPC over HTTPS) and Active Sync for Exchange 2010?

Trying to use Squid 3.2.0.13 to replace an ISA server forwarding RPC over
HTTPS for Outlook Anywhere, and Active Sync for Outlook mobile devices like
Android and iPhone, I had some success, but problems with some Active Sync
clients are still a show stopper.

RPC over https works fine with that squid version.

The problem is the very first http OPTIONS request for Active Sync which is 
using http Basic Authentication from an Android with TouchDown as client app. 
The cache.log shows the following request and response:

Mobile sending:
OPTIONS /Microsoft-Server-ActiveSync HTTP/1.1
User-Agent: TouchDown(MSRPC)/7.1.00012/
TD-Info: com.nitrodesk.droid20.nitroid/7.1.00012/NON-PCF/
Connection: keep-alive
X-MS-PolicyKey: 0
MS-ASProtocolVersion: 2.5
Authorization: Basic dGVxx==
Content-Length: 0
Host: webmail.domain.com

Squid sending to IIS (Basic dGV... is the same as above):
OPTIONS /Microsoft-Server-ActiveSync HTTP/1.1
User-Agent: TouchDown(MSRPC)/7.1.00012/
TD-Info: com.nitrodesk.droid20.nitroid/7.1.00012/NON-PCF/
X-MS-PolicyKey: 0
MS-ASProtocolVersion: 2.5
Authorization: Basic dGVxxx==
Content-Length: 0
Host: webmail.domain.com
Surrogate-Capability: webmail.domain.com="Surrogate/1.0"
Cache-Control: max-age=259200
Connection: keep-alive

IIS responding:
HTTP/1.1 401 Unauthorized
Content-Type: text/html
Server: Microsoft-IIS/7.5
WWW-Authenticate: Basic realm="webmail.domain.com"
X-Powered-By: ASP.NET
Date: Wed, 18 Jan 2012 14:38:32 GMT
Content-Length: 1344

There the connection is closed by the client. Maybe the headers added by squid 
are not accepted by IIS? Is there any parameter to disable adding 
Surrogate-Capability, Cache-Control and Connection to the forwarded request?


401 status means the header not being accepted is the "Authorization:" 
header.


Connection is unchanged from what was passed to Squid, just re-positioned.

Surrogate-Capability is a bit new yes, but HTTP requires ignoring 
unsupported headers. IIS would be incapable of performing regular HTTP 
traffic if it were that sensitive to unknown headers coming from 
clients. Weird stuff is the norm rather than the exception in HTTP.



To debug further you can try opening a connection to IIS with telnet and 
send variations of those headers to it cut-n-paste style. Or use the 
squidclient tool to tailor the request particulars.



Amos


[squid-users] Problems with Active Sync over squid with basic auth. Any successful config for Active Sync and Outlook Anywhere on Exchange 2010 replacing an ISA server?

2012-01-19 Thread Isenberg, Holger
Is anyone using Squid successfully as a reverse proxy for Outlook Anywhere
(RPC over HTTPS) and Active Sync for Exchange 2010?

Trying to use Squid 3.2.0.13 to replace an ISA server forwarding RPC over
HTTPS for Outlook Anywhere, and Active Sync for Outlook mobile devices like
Android and iPhone, I had some success, but problems with some Active Sync
clients are still a show stopper.

RPC over https works fine with that squid version.

The problem is the very first http OPTIONS request for Active Sync which is 
using http Basic Authentication from an Android with TouchDown as client app. 
The cache.log shows the following request and response:

Mobile sending:
OPTIONS /Microsoft-Server-ActiveSync HTTP/1.1
User-Agent: TouchDown(MSRPC)/7.1.00012/
TD-Info: com.nitrodesk.droid20.nitroid/7.1.00012/NON-PCF/
Connection: keep-alive
X-MS-PolicyKey: 0
MS-ASProtocolVersion: 2.5
Authorization: Basic dGVxx==
Content-Length: 0
Host: webmail.domain.com

Squid sending to IIS (Basic dGV... is the same as above):
OPTIONS /Microsoft-Server-ActiveSync HTTP/1.1
User-Agent: TouchDown(MSRPC)/7.1.00012/
TD-Info: com.nitrodesk.droid20.nitroid/7.1.00012/NON-PCF/
X-MS-PolicyKey: 0
MS-ASProtocolVersion: 2.5
Authorization: Basic dGVxxx==
Content-Length: 0
Host: webmail.domain.com
Surrogate-Capability: webmail.domain.com="Surrogate/1.0"
Cache-Control: max-age=259200
Connection: keep-alive

IIS responding:
HTTP/1.1 401 Unauthorized
Content-Type: text/html
Server: Microsoft-IIS/7.5
WWW-Authenticate: Basic realm="webmail.domain.com"
X-Powered-By: ASP.NET
Date: Wed, 18 Jan 2012 14:38:32 GMT
Content-Length: 1344

There the connection is closed by the client. Maybe the headers added by squid 
are not accepted by IIS? Is there any parameter to disable adding 
Surrogate-Capability, Cache-Control and Connection to the forwarded request?

/opt/squid32/sbin/squid -v
Squid Cache: Version 3.2.0.13
configure options:  '--prefix=/opt/squid32' '--enable-ssl'


squid.conf:

cache_effective_user squidext
cache_effective_group squidext
pid_filename /var/run/squidext.pid

acl srcall src all
acl EXCH dstdomain webmail.domain.com

ssl_unclean_shutdown on

httpd_suppress_version_string on 
cache_mgr noemailaddress
visible_hostname webmail.domain.com

# Internet connector
https_port 172.17.200.25:443 accel cert=/etc/ssl/certs/webmail.domain.com.pem \
   key=/etc/ssl/certs/webmail.domain.com.pem defaultsite=webmail.domain.com

# destination server (IIS for Exchange)
cache_peer 192.168.100.24 parent 443 0 \
ssl sslflags=DONT_VERIFY_PEER \
sslcert=/etc/ssl/certs/webmail.domain.com.pem 
sslkey=/etc/ssl/certs/webmail.domain.com.pem \
proxy-only no-query no-digest front-end-https=on sourcehash round-robin 
originserver \
login=PASSTHRU name=exchange forceddomain=webmail.domain.com

debug_options ALL,2
logformat combined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log stdio:/var/log/squidext/access.log combined
cache_log /var/log/squidext/cache.log

never_direct allow EXCH
http_access allow EXCH
http_access deny srcall
cache_peer_access exchange allow EXCH
cache_peer_access exchange deny srcall

via off
forwarded_for transparent

#eof


Re: [squid-users] Problems with Internet Download Manager and Squid 2.7 Stable 9

2012-01-18 Thread Matus UHLAR - fantomas

On 19.01.12 05:16, Saiful Alam wrote:

I am using Squid 2.7 Stable on Ubuntu 10.10 x64

Files like
mp3 which have a refresh_pattern defined, and downloaded within the
browser download manager is cached, but if I download the file with
Internet Download Manager 6.05, the file is not cached. Note that IDM by
default uses 8 connections to download a single file. Again, if I
reduce the default connection number to 1, and try to download again
with IDM, then the file is cached instantly.

Most users in our
network have IDM as their primary download manager, and if we can't
cache objects downloaded with IDM, then :


I'm afraid this is due to Squid not being able to cache partial files at this
time. Another problem can be caused by current Squid not being able to do
collapsed forwarding (fetching the same file only once even when there are
multiple requests for it), which is related to this problem and AFAIK
in progress for 3.2.


Download managers that download multiple chunks in parallel encounter
this behaviour, and it often results in data not being cached.

You can configure Squid to work around this behaviour by setting the
quick_abort_* values to fetch whole files even when they are not fully
requested, in the hope that the rest will be requested later.


Problems arise in cases like Windows updates, which are packaged in large
files of which only small parts are needed, so you would fetch data that is
not needed (you can avoid this by having a local WSUS server).

Other problems can come from download managers fetching the same file from
multiple sources, but that is again a problem with the download managers.


Note that most download managers open multiple connections to get more of the
bandwidth, which results in leaving less of the bandwidth to others.


That results in increased overhead (multiple connections, ACK packets,
etc.), and when more people use download managers, they only cause more
overhead to flow through your company's line.


I hope that any sane network admin configures networks so users cannot
steal bandwidth from others, only from themselves.


The most sane solution to this problem is to configure the download managers
to fetch each file only once.

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Support bacteria - they're the only culture some people have. 


Re: [squid-users] Problems with Internet Download Manager and Squid 2.7 Stable 9

2012-01-18 Thread Amos Jeffries

On 19.01.2012 07:16, Saiful Alam wrote:

Hi,
I am using Squid 2.7 Stable on Ubuntu 10.10 x64

Files like
 mp3 which have a refresh_pattern defined, and downloaded within the
browser download manager is cached, but if I download the file with
Internet Download Manager 6.05, the file is not cached. Note that IDM 
by

 default uses 8 connections to download a single file. Again, if I
reduce the default connection number to 1, and try to download again
with IDM, then the file is cached instantly.



"Cached instantly" is a bit of a strange description. Surely it
requires download time before caching? Or did you mean something else
entirely?



Consider what that download manager is doing. Splitting the file into 
8+ pieces and requesting each of those pieces as different HTTP 
requests. Squid cannot cache partial files, only whole files.



You can check for the download manager's User-Agent value with a
"browser" type ACL, and also use a "maxconn" type ACL to block more than 1
connection at a time from it.
 The range_offset_limit and quick_abort settings will also help with these
partial file requests. Their use is best known for Windows Updates, but it
applies to any big partially-fetched file. http://wiki.squid-cache.org/SquidFaq/WindowsUpdate
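A sketch combining those ACL types (the User-Agent pattern is a guess for IDM, so check access.log for the real string; range_offset_limit -1 makes Squid fetch the whole object for range requests):

```
# Hypothetical UA pattern for the download manager:
acl dlmanager browser -i internet.download.manager
acl oneconn maxconn 1
# Deny a matched download manager any connections beyond the first:
http_access deny dlmanager oneconn
# Fetch the entire object even when only a byte range was requested:
range_offset_limit -1
```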



Alternatively you can balance the users' mistaken perception of the DL
manager's benefit with delay_pools, which use the "browser" ACL type, and
slow its rate of download so that users see it as worse than normal
traffic when they use it, encouraging migration away from the manager.
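A class-1 delay-pool sketch for that approach (the UA pattern and rate values are illustrative only):

```
# Hypothetical UA pattern for the download manager:
acl dlmanager browser -i internet.download.manager
delay_pools 1
delay_class 1 1
delay_access 1 allow dlmanager
delay_access 1 deny all
# One aggregate bucket: ~32 KB/s sustained for all matched traffic.
delay_parameters 1 32000/32000
```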


It is polite to inform your users about the proxy and the effect these
managers are having before going to such extremes, and to try to make them
understand that they get *faster* downloads by working with the proxy's
cache sharing. You can perhaps advise alternative methods of getting
fast downloads at the same time, to encourage the change; see below.




Most users in our
network have IDM as their primary download manager, and if we can't
cache objects downloaded with IDM, then :



This is a strong sign that they perceive the multiple connections the 
manager provides as a faster network connection than the proxied 
traffic. I've usually seen this sort of perception growing out of the 
old browsers limitation of only opening ~2 connections to a proxy, which 
makes things appear really slow when big objects are filling one of the 
connections.
 There is a "connections to server" setting in browsers which can be 
raised to 8-10 to double or quadruple the bandwidth availability for 
each user without needing a manager to do it specifically. NP: be 
careful you have enough FD available on the Squid before letting them 
know about that.
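For Firefox that setting is the about:config preference sketched below (IE keeps the equivalent limit in the registry under MaxConnectionsPerServer); the value 10 is only an example:

```
// user.js / about:config entry raising per-proxy persistent connections
user_pref("network.http.max-persistent-connections-per-proxy", 10);
```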


Amos


[squid-users] Problems with Internet Download Manager and Squid 2.7 Stable 9

2012-01-18 Thread Saiful Alam

Hi,
I am using Squid 2.7 Stable on Ubuntu 10.10 x64

Files such as mp3, which have a refresh_pattern defined, are cached when
downloaded with the browser's download manager, but if I download the file
with Internet Download Manager 6.05, the file is not cached. Note that IDM by
default uses 8 connections to download a single file. However, if I
reduce the default connection number to 1 and try the download again
with IDM, then the file is cached.

Most users in our 
network have IDM as their primary download manager, and if we can't 
cache objects downloaded with IDM, then :

Any help or ideas are welcomed.

Regards,
Saiful

Re: [squid-users] R: [squid-users] Problems authenticator on huge systems

2011-10-13 Thread Luis Daniel Lucio Quiroz
2011/10/13 Job :
> Hello Luis,
> nice reply, first of all, very very interesting...
>
> I noticed in 3.1.8 it seems i cannot place the credenstialttl directive, i 
> can only - in the ntlm schema - insert this: auth_param ntlm keep_alive on.
>
> Is it right? I read it could give some incompatibility problems with IE.
>
> Are there some other parameters to put, in the ntlm schema, 5-minutes cache?
>
> Thank you again,
> Francesco
>
> 
> Da: Luis Daniel Lucio Quiroz [luis.daniel.lu...@gmail.com]
> Inviato: giovedì 13 ottobre 2011 15.49
> A: fra...@itcserra.net
> Cc: squid-users@squid-cache.org
> Oggetto: Re: [squid-users] Problems authenticator on huge systems
>
> 2011/10/13 Francesco :
>> Hello,
>>
>> in a proxy server with some hunderds of users, i experience temporary
>> problems with ntlm authentication; Squid says access deny for some
>> minutes, then everything returns working without any actions.
>>
>> In cache.log i noticed these errors:
>> AuthNTLMUserRequest::authenticate: attempt to perform authentication
>> without a connection!
>>
>> I raised up the per-process max open files to 4096; do you think i am low
>> of authenticator process (200)?
>> Could it be this the problem?
>>
>> I have no cache on ntlm auth helper...
>>
>> Thank you,
>> Francesco
>>
>
> HELO Franchesco,
>
> My first toughts is you shall consider a ntlm cache, about 5 minutes.
> The fact is, that NTLM authentication does not work as basic
> authentication.  I mean, in basic authentication, once the  browser
> sends credentials, it always send credentials each time without
> requesting them again.  In  ntlm, as my understanding, it is quite
> different, browsers after a lapse of time will stop sending
> credentials (the hash).  So a cache will  really offload the samba/AD
> you are forwarding auth requests.
>
> Taking as a reference your message, and without other evidence, i
> guess problem is not between browser-squid, it could be
> squid-ad/samba.
>
> LD
> http://www.twitter.com/ldlq

Give a read here

http://www.squid-cache.org/Versions/v3/3.1/cfgman/authenticate_ttl.html

This may help you,
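Combined with the NTLM scheme discussed above, that directive could look like the sketch below (the helper path and child count are assumptions for a Samba winbind setup, not from the thread):

```
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 200
auth_param ntlm keep_alive on
# Cache validated credentials for 5 minutes to offload the AD/Samba side:
authenticate_ttl 5 minutes
```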

Please avoid top-posting; it is very hard to follow the conversation.

LD
http://www.twitter.com/ldlq


[squid-users] R: [squid-users] Problems authenticator on huge systems

2011-10-13 Thread Job
Hello Luis,
nice reply, first of all, very very interesting...

I noticed that in 3.1.8 it seems I cannot use the credentialsttl directive; in
the NTLM scheme I can only insert: auth_param ntlm keep_alive on.

Is that right? I read that it could cause some incompatibility problems with IE.

Are there other parameters to set up a 5-minute cache in the NTLM scheme?

Thank you again,
Francesco


Da: Luis Daniel Lucio Quiroz [luis.daniel.lu...@gmail.com]
Inviato: giovedì 13 ottobre 2011 15.49
A: fra...@itcserra.net
Cc: squid-users@squid-cache.org
Oggetto: Re: [squid-users] Problems authenticator on huge systems

2011/10/13 Francesco :
> Hello,
>
> in a proxy server with some hunderds of users, i experience temporary
> problems with ntlm authentication; Squid says access deny for some
> minutes, then everything returns working without any actions.
>
> In cache.log i noticed these errors:
> AuthNTLMUserRequest::authenticate: attempt to perform authentication
> without a connection!
>
> I raised up the per-process max open files to 4096; do you think i am low
> of authenticator process (200)?
> Could it be this the problem?
>
> I have no cache on ntlm auth helper...
>
> Thank you,
> Francesco
>

HELO Franchesco,

My first toughts is you shall consider a ntlm cache, about 5 minutes.
The fact is, that NTLM authentication does not work as basic
authentication.  I mean, in basic authentication, once the  browser
sends credentials, it always send credentials each time without
requesting them again.  In  ntlm, as my understanding, it is quite
different, browsers after a lapse of time will stop sending
credentials (the hash).  So a cache will  really offload the samba/AD
you are forwarding auth requests.

Taking as a reference your message, and without other evidence, i
guess problem is not between browser-squid, it could be
squid-ad/samba.

LD
http://www.twitter.com/ldlq

Re: [squid-users] Problems authenticator on huge systems

2011-10-13 Thread Luis Daniel Lucio Quiroz
2011/10/13 Francesco :
> Hello,
>
> in a proxy server with some hunderds of users, i experience temporary
> problems with ntlm authentication; Squid says access deny for some
> minutes, then everything returns working without any actions.
>
> In cache.log i noticed these errors:
> AuthNTLMUserRequest::authenticate: attempt to perform authentication
> without a connection!
>
> I raised up the per-process max open files to 4096; do you think i am low
> of authenticator process (200)?
> Could it be this the problem?
>
> I have no cache on ntlm auth helper...
>
> Thank you,
> Francesco
>

HELO Francesco,

My first thought is that you should consider an NTLM cache of about 5 minutes.
The fact is that NTLM authentication does not work like basic
authentication. I mean, with basic authentication, once the browser
sends credentials it keeps sending them with every request without
being asked again. With NTLM, as I understand it, it is quite
different: after a lapse of time browsers will stop sending
credentials (the hash). So a cache will really offload the Samba/AD
server to which you are forwarding auth requests.

Taking your message as a reference, and without other evidence, I
guess the problem is not between browser and Squid; it could be
between Squid and AD/Samba.

LD
http://www.twitter.com/ldlq


[squid-users] Problems authenticator on huge systems

2011-10-13 Thread Francesco
Hello,

in a proxy server with some hundreds of users, I experience temporary
problems with NTLM authentication; Squid says access denied for some
minutes, then everything goes back to working without any action.

In cache.log I noticed these errors:
AuthNTLMUserRequest::authenticate: attempt to perform authentication
without a connection!

I raised the per-process max open files to 4096; do you think I am low
on authenticator processes (200)?
Could this be the problem?

I have no cache on the NTLM auth helper...

Thank you,
Francesco


[squid-users] Problems setting up Kerberos authentication

2011-09-21 Thread Nikolaos Milas

Hello,

I am setting up Kerberos auth on Squid (3.1.15), but it won't work.
The browser (IE 8) keeps popping up the username/password window, but
authentication is never successful. Yet I don't see any logging of
failed authentication attempts in the Kerberos logs at all! It's as if Squid
is not communicating with the Kerberos server. Yet kinit from the command
line works fine (see details below).


What am I doing wrong? Am I missing something?

I need your help.

Thanks,
Nick

Details of the setup follow (true names/IP addresses have been changed):

I have a working Kerberos Server (MIT Kerberos 5 on CentOS 5.6) on 
kerb.example.com and I am setting up squid on squid.example.com; it's 
Squid 3.1.15.x86_64 as RPM on CentOS 5.6 (from here: 
ftp://ftp.pbone.net/mirror/ftp.pramberger.at/systems/linux/contrib/rhel5/x86_64/squid3-3.1.15-1.el5.pp.x86_64.rpm).


Host squid.example.com is also setup as a kerberos client.

So, I have added to kerberos a host:

   host/squid.example@example.com

and a service:

   HTTP/squid.example@example.com

Then, I created a keytab file (httpsquid.keytab) for the latter:

[root@squid]# kadmin.local
Authenticating as principal userx/ad...@example.com with password.
kadmin.local:  addprinc HTTP/squid.example@example.com
WARNING: no policy specified for HTTP/squid.example@example.com; 
defaulting to no policy

Enter password for principal "HTTP/squid.example@example.com":
Re-enter password for principal "HTTP/squid.example@example.com":
Principal "HTTP/squid.example@example.com" created.
kadmin.local:  ktadd -k /etc/krb5kdc/httpsquid.keytab HTTP/squid.example.com
Entry for principal HTTP/squid.example.com with kvno 2, encryption type 
AES-256 CTS mode with 96-bit SHA-1 HMAC added to keytab 
WRFILE:/etc/krb5kdc/httpsquid.keytab.
Entry for principal HTTP/squid.example.com with kvno 2, encryption type 
AES-128 CTS mode with 96-bit SHA-1 HMAC added to keytab 
WRFILE:/etc/krb5kdc/httpsquid.keytab.
Entry for principal HTTP/squid.example.com with kvno 2, encryption type 
Triple DES cbc mode with HMAC/sha1 added to keytab 
WRFILE:/etc/krb5kdc/httpsquid.keytab.
Entry for principal HTTP/squid.example.com with kvno 2, encryption type 
ArcFour with HMAC/md5 added to keytab WRFILE:/etc/krb5kdc/httpsquid.keytab.
Entry for principal HTTP/squid.example.com with kvno 2, encryption type 
DES with HMAC/sha1 added to keytab WRFILE:/etc/krb5kdc/httpsquid.keytab.
Entry for principal HTTP/squid.example.com with kvno 2, encryption type 
DES cbc mode with RSA-MD5 added to keytab 
WRFILE:/etc/krb5kdc/httpsquid.keytab.


...moved it to /etc/squid and changed ownership to root:squid and 
permissions: 640.


I have checked that the keytab file works:

   [root@squid]# kinit -V -k -t httpsquid.keytab HTTP/squid.example.com
   Authenticated to Kerberos v5

I also added to the start of /etc/init.d/squid the lines:

   KRB5_KTNAME=/etc/squid/httpsquid.keytab
   export KRB5_KTNAME

Then, I checked that kerberos authentication is enabled (as explained 
e.g. here: 
http://publib.boulder.ibm.com/infocenter/ltscnnct/v2r0/index.jsp?topic=/com.ibm.connections.25.help/t_install_kerb_edit_browsers.html), 
then I specified (in IE, Internet Options / Connections / LAN Settings) 
squid.example.com as a Proxy on port 3128 and I have tried to visit any 
page. As I explained, the browser (IE 8) keeps popping up the 
username/password window, but authentication is never successful. I have 
tried the following as username, without success:

userx
EXAMPLE.COM\userx
us...@example.com
us...@example.com

On the other hand, Firefox 6 (with similar settings) doesn't show any 
pop-up window; it just fails.


I have tried the following three configuration alternatives, but it 
made no difference:

auth_param negotiate program /usr/libexec/squid/squid_kerb_auth -d
auth_param negotiate program /usr/libexec/squid/squid_kerb_auth -d -s 
HTTP/squid.example.com

auth_param negotiate program /usr/libexec/squid/squid_kerb_auth


Here is /etc/squid/squid.conf:
---
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 10.10.10.0/24

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager

http_access allow localhost

auth_param negotiate program /usr/libexec/squid/squid_kerb_auth -d
auth_param negotiate children 10
auth_param negotiate keep_alive on

acl auth proxy_auth REQUIRED

http_access allow auth
#http_

Re: [squid-users] Problems compiling 3.1.12.3-RC with ICAP on RHEL

2011-06-20 Thread Lindsay Hill

On 06/21/2011 09:05 AM, Luis Daniel Lucio Quiroz wrote:

Le lundi 20 juin 2011 12:46:12 Lindsay Hill a écrit :

Hi all

Is anyone else seeing problems with compiling the 3.1.12.3 RC on RHEL,
with --enable-icap-client?

It seems that patch 10313
(http://www.squid-cache.org/Versions/v3/3.1/changesets/squid-3.1-10313.patch
) causes issues.




Similar here
http://kenobi.mandriva.com/queue/failure/2010.1/main/testing/20110620204119.dlucio.kenobi.12580/log/squid-3.1.12.3-3mdv2010.2/
using gcc 4.4 and 4.6


More comments on this here: http://bugs.squid-cache.org/show_bug.cgi?id=3153


Re: [squid-users] Problems compiling 3.1.12.3-RC with ICAP on RHEL

2011-06-20 Thread Luis Daniel Lucio Quiroz
Le lundi 20 juin 2011 12:46:12 Lindsay Hill a écrit :
> Hi all
> 
> Is anyone else seeing problems with compiling the 3.1.12.3 RC on RHEL,
> with --enable-icap-client?
> 
> It seems that patch 10313
> (http://www.squid-cache.org/Versions/v3/3.1/changesets/squid-3.1-10313.patch
> ) causes issues. This is the output I'm getting:
> 
> 
> ngs -Wcomments -Werror  -D_REENTRANT -m64 -O2 -g -m64 -mtune=generic -c
> -o Initiate.lo Initiate.cc
> libtool: compile:  g++ -DHAVE_CONFIG_H -I../.. -I../../include
> -I../../src -I../../include -I../../libltdl -I/usr/include/libxml2
> -I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings -Wcomments
> -Werror -D_REENTRANT -m64 -O2 -g -m64 -mtune=generic -c Initiate.cc
> -fPIC -DPIC -o .libs/Initiate.o
> /bin/sh ../../libtool --tag=CXX   --mode=compile g++ -DHAVE_CONFIG_H
> -I../.. -I../../include -I../../src -I../../include  -I../../libltdl
> -I/usr/include/libxml2  -I/usr/include/libxml2 -Wall -Wpointer-arith
> -Wwrite-strings -Wcomments -Werror  -D_REENTRANT -m64 -O2 -g -m64
> -mtune=generic -c -o Initiator.lo Initiator.cc
> libtool: compile:  g++ -DHAVE_CONFIG_H -I../.. -I../../include
> -I../../src -I../../include -I../../libltdl -I/usr/include/libxml2
> -I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings -Wcomments
> -Werror -D_REENTRANT -m64 -O2 -g -m64 -mtune=generic -c Initiator.cc
> -fPIC -DPIC -o .libs/Initiator.o
> Initiate.cc: In destructor 'virtual Adaptation::AnswerCall::~AnswerCall()':
> Initiate.cc:41: error: request for member 'message' in
> '((Adaptation::AnswerCall*)this)->Adaptation::AnswerCall::.AsyncC
> allT::dialer.Adaptation::AnswerDialer::
> .UnaryMemFunT::arg1', which is of non-class
> type 'HttpMsg*'
> Initiate.cc:41: error: request for member 'message' in
> '((Adaptation::AnswerCall*)this)->Adaptation::AnswerCall::.AsyncC
> allT::dialer.Adaptation::AnswerDialer::
> .UnaryMemFunT::arg1', which is of non-class
> type 'HttpMsg*'
> Initiate.cc:42: error: request for member 'message' in
> '((Adaptation::AnswerCall*)this)->Adaptation::AnswerCall::.AsyncC
> allT::dialer.Adaptation::AnswerDialer::
> .UnaryMemFunT::arg1', which is of non-class
> type 'HttpMsg*'
> Initiate.cc: In member function 'void
> Adaptation::Initiate::sendAnswer(HttpMsg*)':
> Initiate.cc:94: error: 'answer' was not declared in this scope
> make[4]: *** [Initiate.lo] Error 1
> make[4]: *** Waiting for unfinished jobs
> libtool: compile:  g++ -DHAVE_CONFIG_H -I../.. -I../../include
> -I../../src -I../../include -I../../libltdl -I/usr/include/libxml2
> -I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings -Wcomments
> -Werror -D_REENTRANT -m64 -O2 -g -m64 -mtune=generic -c Initiator.cc -o
> Initiator.o >/dev/null 2>&1
> make[4]: Leaving directory
> `/usr/src/redhat/BUILD/squid-3.1.12.3/src/adaptation'
> make[3]: *** [all-recursive] Error 1
> make[3]: Leaving directory
> `/usr/src/redhat/BUILD/squid-3.1.12.3/src/adaptation'
> make[2]: *** [all-recursive] Error 1
> make[2]: Leaving directory `/usr/src/redhat/BUILD/squid-3.1.12.3/src'
> make[1]: *** [all] Error 2
> make[1]: Leaving directory `/usr/src/redhat/BUILD/squid-3.1.12.3/src'
> make: *** [all-recursive] Error 1
> error: Bad exit status from /var/tmp/rpm-tmp.34664 (%build)
> 
> 
> If I reverse that patch, Squid compiles OK.
> 
> Thoughts?
> 
>   - Lindsay



Similar here
http://kenobi.mandriva.com/queue/failure/2010.1/main/testing/20110620204119.dlucio.kenobi.12580/log/squid-3.1.12.3-3mdv2010.2/
using gcc 4.4 and 4.6


[squid-users] Problems compiling 3.1.12.3-RC with ICAP on RHEL

2011-06-19 Thread Lindsay Hill

Hi all

Is anyone else seeing problems with compiling the 3.1.12.3 RC on RHEL, 
with --enable-icap-client?


It seems that patch 10313 
(http://www.squid-cache.org/Versions/v3/3.1/changesets/squid-3.1-10313.patch) 
causes issues. This is the output I'm getting:



ngs -Wcomments -Werror  -D_REENTRANT -m64 -O2 -g -m64 -mtune=generic -c 
-o Initiate.lo Initiate.cc
libtool: compile:  g++ -DHAVE_CONFIG_H -I../.. -I../../include 
-I../../src -I../../include -I../../libltdl -I/usr/include/libxml2 
-I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings -Wcomments 
-Werror -D_REENTRANT -m64 -O2 -g -m64 -mtune=generic -c Initiate.cc  
-fPIC -DPIC -o .libs/Initiate.o
/bin/sh ../../libtool --tag=CXX   --mode=compile g++ -DHAVE_CONFIG_H  
-I../.. -I../../include -I../../src -I../../include  -I../../libltdl  
-I/usr/include/libxml2  -I/usr/include/libxml2 -Wall -Wpointer-arith 
-Wwrite-strings -Wcomments -Werror  -D_REENTRANT -m64 -O2 -g -m64 
-mtune=generic -c -o Initiator.lo Initiator.cc
libtool: compile:  g++ -DHAVE_CONFIG_H -I../.. -I../../include 
-I../../src -I../../include -I../../libltdl -I/usr/include/libxml2 
-I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings -Wcomments 
-Werror -D_REENTRANT -m64 -O2 -g -m64 -mtune=generic -c Initiator.cc  
-fPIC -DPIC -o .libs/Initiator.o

Initiate.cc: In destructor 'virtual Adaptation::AnswerCall::~AnswerCall()':
Initiate.cc:41: error: request for member 'message' in 
'((Adaptation::AnswerCall*)this)->Adaptation::AnswerCall::.AsyncCallT::dialer.Adaptation::AnswerDialer::.UnaryMemFunTHttpMsg*>::arg1', which is of non-class type 'HttpMsg*'
Initiate.cc:41: error: request for member 'message' in 
'((Adaptation::AnswerCall*)this)->Adaptation::AnswerCall::.AsyncCallT::dialer.Adaptation::AnswerDialer::.UnaryMemFunTHttpMsg*>::arg1', which is of non-class type 'HttpMsg*'
Initiate.cc:42: error: request for member 'message' in 
'((Adaptation::AnswerCall*)this)->Adaptation::AnswerCall::.AsyncCallT::dialer.Adaptation::AnswerDialer::.UnaryMemFunTHttpMsg*>::arg1', which is of non-class type 'HttpMsg*'
Initiate.cc: In member function 'void 
Adaptation::Initiate::sendAnswer(HttpMsg*)':

Initiate.cc:94: error: 'answer' was not declared in this scope
make[4]: *** [Initiate.lo] Error 1
make[4]: *** Waiting for unfinished jobs
libtool: compile:  g++ -DHAVE_CONFIG_H -I../.. -I../../include 
-I../../src -I../../include -I../../libltdl -I/usr/include/libxml2 
-I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings -Wcomments 
-Werror -D_REENTRANT -m64 -O2 -g -m64 -mtune=generic -c Initiator.cc -o 
Initiator.o >/dev/null 2>&1
make[4]: Leaving directory 
`/usr/src/redhat/BUILD/squid-3.1.12.3/src/adaptation'

make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory 
`/usr/src/redhat/BUILD/squid-3.1.12.3/src/adaptation'

make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/usr/src/redhat/BUILD/squid-3.1.12.3/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/src/redhat/BUILD/squid-3.1.12.3/src'
make: *** [all-recursive] Error 1
error: Bad exit status from /var/tmp/rpm-tmp.34664 (%build)


If I reverse that patch, Squid compiles OK.

Thoughts?

 - Lindsay


[squid-users] Problems with bandwith

2011-06-07 Thread patric . glazar
Hello,

we still have problems with our delay pool settings!

I tried delay pools with class 1.
We want downloads from our PMServer (LEMSS) to be limited to 20 KB/s, but 
clients in the intranet should download patches from Squid at full speed.
The destination is always the same: 10.1.1.1.
Squid should download the full file first before serving it to the clients, 
and the cache should be refreshed once a year.

What is working:
the download limit from the PMServer to Squid; but clients receive the patches 
very slowly, and files in the cache get a new date every 3 to 5 days.

acl LEMSS dst 10.1.1.1/32

delay_pools 1
delay_class 1 1
delay_parameters 1 2/2
delay_access 1 allow LEMSS
delay_access 1 deny all
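For reference, the values in delay_parameters are bytes per second
(restore/max), so a 20 KB/s cap for the LEMSS pool would look roughly like
the sketch below. This is only what seems intended; note that the 2/2 shown
above would throttle the pool to 2 bytes/s, unless it is an archive artifact:

```
# Sketch: pool 1 throttles traffic for requests destined to the
# LEMSS server to about 20 KB/s (values are bytes per second).
delay_pools 1
delay_class 1 1
delay_parameters 1 20480/20480
delay_access 1 allow LEMSS
delay_access 1 deny all
```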



hierarchy_stoplist cgi-bin ?

refresh_pattern ^ftp:   1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  525600
refresh_pattern .   0   20% 4320

range_offset_limit -1
collapsed_forwarding on 
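To keep the patch files cached for up to a year, a more specific
refresh_pattern could be added before the catch-all rule. This is a hedged
sketch only; the URL pattern and the override options are assumptions, not
taken from the original configuration:

```
# Hypothetical: treat objects fetched from the patch server as fresh
# for up to a year (525600 minutes), overriding server headers.
refresh_pattern -i 10\.1\.1\.1 525600 100% 525600 override-expire override-lastmod
```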

Thank you for your advice!

 Best regards
Patric 



Disclaimer:

This message is only for informational purposes and is intended solely for 
the use of the addressee.



[squid-users] problems squid_kerb_auth

2011-05-29 Thread spiderslack

Hello

I'm testing Squid with Kerberos; Squid and Kerberos are configured as 
follows:


squid.conf
auth_param negotiate program /usr/lib/squid3/squid_kerb_auth -d
auth_param negotiate children 10
auth_param negotiate keep_alive on

acl auth proxy_auth REQUIRED

http_access allow auth
http_access deny all


krb5.conf
[libdefaults]
default_realm = VIALACTEA.CORP
krb4_config = /etc/krb.conf
krb4_realms = /etc/krb.realms
kdc_timesync = 1
ccache_type = 4
forwardable = true
proxiable = true
dns_lookup_realm = true
dns_lookup_kdc = true
v4_instance_resolve = false
v4_name_convert = {
host = {
rcmd = host
ftp = ftp
}
plain = {
something = something-else
}
}
fcc-mit-ticketflags = true
[realms]
VIALACTEA.CORP = {
kdc = 192.168.1.155
admin_server = 192.168.1.155
}
[domain_realm]
.vialactea.corp = VIALACTEA.CORP
vialactea.corp = VIALACTEA.CORP
[login]
krb4_convert = true
krb4_get_tickets = false


On the client, I configured the proxy address and set the following 
Firefox variables to the domain name:

network.negotiate-auth.delegation-uris
network.negotiate-auth.trusted-uris

When trying to browse, I get the following messages in the logs with 
debugging enabled:
2011/05/29 02:42:57| squid_kerb_auth: Got 'YR 
TlRMTVNTUAABl4II4gAGAbAdDw==' from squid 
(length: 59).

2011/05/29 02:42:57| squid_kerb_auth: received type 1 NTLM token
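The helper message makes sense once the token prefix is decoded: the base64
blob in the log begins with the NTLMSSP signature, i.e. the browser is
sending an NTLM type 1 token rather than a Kerberos/SPNEGO one, which
squid_kerb_auth cannot process. A quick check (only the signature bytes are
inspected; the tail of the token may be truncated by the list archive):

```shell
#!/bin/sh
# Decode the start of the base64 token from the cache.log excerpt above.
token="TlRMTVNTUAABl4II4gAGAbAdDw=="

# NTLM messages begin with the signature "NTLMSSP" followed by a NUL byte;
# a Kerberos/SPNEGO token would not start with these bytes.
sig=$(printf '%s' "$token" | base64 -d | head -c 7)
echo "$sig"    # → NTLMSSP
```

A common cause of this fallback is reaching the proxy by IP address instead
of its DNS host name, so the client cannot obtain a Kerberos service ticket
and falls back to NTLM.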

Does anyone have any idea what the problem is? On the workstation I 
installed Kerbtray, and it shows the ticket.


Regards.



Re: [squid-users] Problems with Squid and Active Directory

2011-04-26 Thread Amos Jeffries

On 26/04/11 21:32, olaf.bo...@hvbg.hessen.de wrote:

Hello!

For a few weeks I have had Squid version 2.7.STABLE7 on Ubuntu Server 10.04. All 
worked fine - different users in an AD group could reach the internet through 
my proxy, so my Squid configuration seems to be OK. Since the name 
of the AD group was changed, it is no longer possible to reach the internet 
through the proxy. The error is:
"Access control configuration prevents your request from being allowed at this 
time."

Switching back to the old group name, all works fine again; switching to the 
new one gives the same error as above.

I changed the debug options and found this entry in cache.log:
"Could not convert sid S-1-5-21-3365863304-72330373-946326852-415981 to gid"

Is that a problem of Squid? Or is it a problem of Samba?
What to do?


The error is produced by winbind, so I doubt it is a Squid problem.

Check that AD has a SID "S-1-5-21-3365863304-72330373-946326852-415981".
Then check that SID has the correct group GID associated with it.
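Both checks can be run directly against winbind with Samba's wbinfo tool
(assuming it is installed; the snippet is guarded so it degrades gracefully
where it is not):

```shell
#!/bin/sh
# Probe winbind's mapping for the SID reported in cache.log.
sid="S-1-5-21-3365863304-72330373-946326852-415981"
if command -v wbinfo >/dev/null 2>&1; then
    wbinfo --sid-to-name "$sid"   # which AD object is this SID?
    wbinfo --sid-to-gid "$sid"    # does winbind map it to a gid?
else
    echo "wbinfo not available"
fi
```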

FWIW: Others mentioning this error years ago have had to do things like 
patch, upgrade or re-install their samba or winbind binaries.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1


[squid-users] Problems with Squid and Active Directory

2011-04-26 Thread Olaf.Boldt
Hello!

For a few weeks I have had Squid version 2.7.STABLE7 on Ubuntu Server 10.04. All 
worked fine - different users in an AD group could reach the internet through 
my proxy, so my Squid configuration seems to be OK. Since the name 
of the AD group was changed, it is no longer possible to reach the internet 
through the proxy. The error is:
"Access control configuration prevents your request from being allowed at this 
time."

Switching back to the old group name, all works fine again; switching to the 
new one gives the same error as above.

I changed the debug options and found this entry in cache.log:
"Could not convert sid S-1-5-21-3365863304-72330373-946326852-415981 to gid"

Is that a problem of Squid? Or is it a problem of Samba?
What to do?

Thanks!
Olaf



Re: [squid-users] Problems with transparancy and pf

2011-04-09 Thread Amos Jeffries

On 07/04/11 08:03, Leslie Jensen wrote:


On 2011-04-06 05:32, Amos Jeffries wrote:



Thank you. I've split the wiki examples we have for PF into separate
OpenBSD and FreeBSD pages and added a new section for the altered
OpenBSD syntax.

Would any of you mind reading through and checking the texts? please?
http://wiki.squid-cache.org/ConfigExamples/Intercept/OpenBsdPf
http://wiki.squid-cache.org/ConfigExamples/Intercept/FreeBsdPf

Amos


For squid31 on FreeBSD there are several options already set.

I think it would be helpful to mention a little more in the wiki.

The configure options that are available are:
(From the Makefile)

OPTIONS= SQUID_KERB_AUTH "Install Kerberos authentication helpers" on \
SQUID_LDAP_AUTH "Install LDAP authentication helpers" off \
SQUID_NIS_AUTH "Install NIS/YP authentication helpers" on \
SQUID_SASL_AUTH "Install SASL authentication helpers" off \
SQUID_IPV6 "Enable IPv6 support" on \
SQUID_DELAY_POOLS "Enable delay pools" off \
SQUID_SNMP "Enable SNMP support" on \
SQUID_SSL "Enable SSL support for reverse proxies" off \
SQUID_PINGER "Install the icmp helper" off \
SQUID_DNS_HELPER "Use the old 'dnsserver' helper" off \
SQUID_HTCP "Enable HTCP support" on \
SQUID_VIA_DB "Enable forward/via database" off \
SQUID_CACHE_DIGESTS "Enable cache digests" off \
SQUID_WCCP "Enable Web Cache Coordination Prot. v1" on \
SQUID_WCCPV2 "Enable Web Cache Coordination Prot. v2" off \
SQUID_STRICT_HTTP "Be strictly HTTP compliant" off \
SQUID_IDENT "Enable ident (RFC 931) lookups" on \
SQUID_REFERER_LOG "Enable Referer-header logging" off \
SQUID_USERAGENT_LOG "Enable User-Agent-header logging" off \
SQUID_ARP_ACL "Enable ACLs based on ethernet address" off \
SQUID_IPFW "Enable transparent proxying with IPFW" off \
SQUID_PF "Enable transparent proxying with PF" off \
SQUID_IPFILTER "Enable transp. proxying with IPFilter" off \
SQUID_FOLLOW_XFF "Follow X-Forwarded-For headers" off \
SQUID_ECAP "En. loadable content adaptation modules" off \
SQUID_ICAP "Enable ICAP client functionality" off \
SQUID_ESI "Enable ESI support (experimental)" off \
SQUID_AUFS "Enable the aufs storage scheme" on \
SQUID_COSS "Enable COSS (currently not available)" off \
SQUID_KQUEUE "Use kqueue(2) (experimental)" on \
SQUID_LARGEFILE "Support log and cache files >2GB" off \
SQUID_STACKTRACES "Create backtraces on fatal errors" off \
SQUID_DEBUG "Enable debugging options" off



Thank you. I see none of the NAT lookup features are turned on.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.6


Re: [squid-users] Problems with transparancy and pf

2011-04-06 Thread Indunil Jayasooriya
>
> Thank you. I've split the wiki examples we have for PF into separate OpenBSD
> and FreeBSD pages and added a new section for the altered OpenBSD syntax.
>
> Would any of you mind reading through and checking the texts? please?

yes

>  http://wiki.squid-cache.org/ConfigExamples/Intercept/OpenBsdPf

OK , Thanks very much.



With Squid Cache: Version 2.7.STABLE9 on OpenBSD 4.8


I have below lines for transparency with PF


# macros
ext_if="em0"
int_if="em1"
lan_net="192.168.0.0/24"

# Default deny
block in log
block out log

antispoof quick for { lo $int_if $ext_if }

#These 2 are the rules for transparency with PF

pass in log on $int_if proto tcp from $lan_net to any port 80 \
rdr-to 127.0.0.1 port 3128

pass out log on $ext_if inet proto tcp from  $ext_if to any \
  port 80
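As noted elsewhere in this thread, FreeBSD 8.x still ships a pf from around
OpenBSD 4.2, which predates the 4.7 rdr-to syntax used above. For comparison,
a sketch of the equivalent redirection in the older syntax (assuming the same
macros; untested):

```
# Older pf syntax (OpenBSD <= 4.6, FreeBSD 8.x), same macros as above
rdr pass on $int_if inet proto tcp from $lan_net to any port 80 -> \
    127.0.0.1 port 3128

pass out log on $ext_if inet proto tcp from $ext_if to any port 80
```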





-- 
Thank you
Indunil Jayasooriya


Re: [squid-users] Problems with transparancy and pf

2011-04-06 Thread Leslie Jensen


On 2011-04-06 05:32, Amos Jeffries wrote:



Thank you. I've split the wiki examples we have for PF into separate
OpenBSD and FreeBSD pages and added a new section for the altered
OpenBSD syntax.

Would any of you mind reading through and checking the texts? please?
http://wiki.squid-cache.org/ConfigExamples/Intercept/OpenBsdPf
http://wiki.squid-cache.org/ConfigExamples/Intercept/FreeBsdPf

Amos


For squid31 on FreeBSD there are several options already set.

I think it would be helpful to mention a little more in the wiki.

The configure options that are available are:
(From the Makefile)

OPTIONS=  SQUID_KERB_AUTH "Install Kerberos authentication helpers" on \
  SQUID_LDAP_AUTH "Install LDAP authentication helpers" off \
  SQUID_NIS_AUTH "Install NIS/YP authentication helpers" on \
  SQUID_SASL_AUTH "Install SASL authentication helpers" off \
  SQUID_IPV6 "Enable IPv6 support" on \
  SQUID_DELAY_POOLS "Enable delay pools" off \
  SQUID_SNMP "Enable SNMP support" on \
  SQUID_SSL "Enable SSL support for reverse proxies" off \
  SQUID_PINGER "Install the icmp helper" off \
  SQUID_DNS_HELPER "Use the old 'dnsserver' helper" off \
  SQUID_HTCP "Enable HTCP support" on \
  SQUID_VIA_DB "Enable forward/via database" off \
  SQUID_CACHE_DIGESTS "Enable cache digests" off \
  SQUID_WCCP "Enable Web Cache Coordination Prot. v1" on \
  SQUID_WCCPV2 "Enable Web Cache Coordination Prot. v2" off \
  SQUID_STRICT_HTTP "Be strictly HTTP compliant" off \
  SQUID_IDENT "Enable ident (RFC 931) lookups" on \
  SQUID_REFERER_LOG "Enable Referer-header logging" off \
  SQUID_USERAGENT_LOG "Enable User-Agent-header logging" off \
  SQUID_ARP_ACL "Enable ACLs based on ethernet address" off \
  SQUID_IPFW "Enable transparent proxying with IPFW" off \
  SQUID_PF "Enable transparent proxying with PF" off \
  SQUID_IPFILTER "Enable transp. proxying with IPFilter" off \
  SQUID_FOLLOW_XFF "Follow X-Forwarded-For headers" off \
  SQUID_ECAP "En. loadable content adaptation modules" off \
  SQUID_ICAP "Enable ICAP client functionality" off \
  SQUID_ESI "Enable ESI support (experimental)" off \
  SQUID_AUFS "Enable the aufs storage scheme" on \
  SQUID_COSS "Enable COSS (currently not available)" off \
  SQUID_KQUEUE "Use kqueue(2) (experimental)" on \
  SQUID_LARGEFILE "Support log and cache files >2GB" off \
  SQUID_STACKTRACES "Create backtraces on fatal errors" off \
  SQUID_DEBUG "Enable debugging options" off



/Leslie


Re: [squid-users] Problems with transparancy and pf

2011-04-05 Thread Amos Jeffries

On Tue, 5 Apr 2011 10:49:37 -0400, Kevin Wilcox wrote:

On Wed, Mar 30, 2011 at 01:06, Indunil Jayasooriya wrote:

Some PF syntax has changed since OpenBSD 4.7; one change is rdr. Please see 
this:


http://www.openbsd.org/faq/upgrade47.html


So, when it comes to FreeBSD 8.2, I do NOT know whether this syntax 
is present. Please check.


I hate to follow up so late (a week later) but I just got this and
thought it worth commenting.

The FreeBSD 8.x line is still using an extremely dated version of pf,
from circa OpenBSD 4.2.

-HEAD has some newer code, from (I think) OpenBSD 4.5, but nothing
recent enough to incorporate the syntax changes.

My understanding is that more recent pf code is more closely coupled 
to OpenBSD at the OS level, which makes it more difficult to port 
to/import into FreeBSD; it's highly unlikely that any version of pf 
using the newer syntax will be pulled into the 8.x/9.x lines.

kmw



Thank you. I've split the wiki examples we have for PF into separate 
OpenBSD and FreeBSD pages and added a new section for the altered 
OpenBSD syntax.


Would any of you mind reading through and checking the texts? please?
  http://wiki.squid-cache.org/ConfigExamples/Intercept/OpenBsdPf
  http://wiki.squid-cache.org/ConfigExamples/Intercept/FreeBsdPf

Amos


Re: [squid-users] Problems with transparancy and pf

2011-04-05 Thread Kevin Wilcox
On Wed, Mar 30, 2011 at 01:06, Indunil Jayasooriya  wrote:

> Some PF syntax has changed since OpenBSD 4.7; one change is rdr. Please see this:
>
> http://www.openbsd.org/faq/upgrade47.html
>
>
> So, when it comes to FreeBSD 8.2, I do NOT know whether this syntax
> is present. Please check.

I hate to follow up so late (a week later) but I just got this and
thought it worth commenting.

The FreeBSD 8.x line is still using an extremely dated version of pf,
from circa OpenBSD 4.2.

-HEAD has some newer code, from (I think) OpenBSD 4.5, but nothing
recent enough to incorporate the syntax changes.

My understanding is that more recent pf code is more closely coupled
to OpenBSD at the OS level, which makes it more difficult to port
to/import into FreeBSD; it's highly unlikely that any version of pf
using the newer syntax will be pulled into the 8.x/9.x lines.

kmw


  1   2   3   4   5   6   7   8   9   >