Re: [squid-users] Server-first SSL bump in Squid 3.5.x

2015-03-18 Thread Dan Charlesworth
Right, I see.

So I’ve got a special ACL to always allow that test URL for the sake of our 
certcheck … but it matches by dstdomain. So if there are rules saying 
“always redirect to the certificate splash page if you can’t reach the 
URL”, it will never pass, because the initial CONNECT step can never 
match a dstdomain and will always be DENIED.

So what I really need to do is change that test URL’s ACL to be a dst instead 
(and find a URL that isn’t going to resolve to different IPs over time). Okay.
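
Something roughly like this, I suppose — with the ACL name and address here being placeholders for whatever the test URL resolves to:

 acl certcheck_ip dst 203.0.113.10
 http_access allow CONNECT certcheck_ip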

While we’re at it, is there a Peek & Splice "equivalent" of the config I posted 
before?
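
My guess, going by the documented peek-then-bump pattern (untested on our side), would be something like:

 acl step1 at_step SslBump1
 ssl_bump splice localhost
 ssl_bump peek step1
 ssl_bump bump all

…but I’d appreciate confirmation.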

Kind regards
Dan

> On 19 Mar 2015, at 5:18 pm, Amos Jeffries  wrote:
> 
> On 19/03/2015 6:36 p.m., Dan Charlesworth wrote:
>> Hey y’all
>> 
>> Finally got 3.5.2 running. I was under the impression that using 
>> server-first SSL bump would still be compatible, despite all the Peek & 
>> Splice changes, but apparently not. Hopefully someone can explain what might 
>> be going wrong here ...
>> 
> 
> Sadly "being compatible" with a broken design does not mean "working".
> server-first only works nicely if the client, Squid, and server are
> operating with the same TLS features - which is uncommon.
> 
> 
>> Using the same SSL Bump config that we used for 3.4, we're now seeing this 
>> happen:
>> 19/Mar/2015-16:21:32 22 d4:f4:6f:71:90:e6 10.0.1.71 TCP_DENIED 200 0 
>> CONNECT 94.31.29.230:443 - server-first - HIER_NONE/- - -
>> 
> 
> The CONNECT request in the clear-text HTTP layer is now subject to
> access controls before any bumping takes place. Earlier Squid would let
> the CONNECT through if you were bumping, even if it would have been
> blocked by your access controls normally.
> 
> This is unrelated to server-first or any other ssl_bump action.
> 
>> Instead of this:
>> 19/Mar/2015-14:42:04 736 d4:f4:6f:71:90:e6 10.0.1.71 TCP_MISS 200 96913 
>> GET https://code.jquery.com/jquery-1.11.0.min.js - server-first 
>> Mozilla/5.0%20(iPhone;%20CPU%20iPhone%20OS%208_2%20like%20Mac%20OS%20X)%20AppleWebKit/600.1.4%20(KHTML,%20like%20Gecko)%20Mobile/12D508
>>  ORIGINAL_DST/94.31.29.53 application/x-javascript -
>> 
> 
> That is a different HTTP message from inside the encryption.
> 
> 
> Amos
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Server-first SSL bump in Squid 3.5.x

2015-03-18 Thread Amos Jeffries
On 19/03/2015 6:36 p.m., Dan Charlesworth wrote:
> Hey y’all
> 
> Finally got 3.5.2 running. I was under the impression that using server-first 
> SSL bump would still be compatible, despite all the Peek & Splice changes, 
> but apparently not. Hopefully someone can explain what might be going wrong 
> here ...
> 

Sadly "being compatible" with a broken design does not mean "working".
server-first only works nicely if the client, Squid, and server are
operating with the same TLS features - which is uncommon.


> Using the same SSL Bump config that we used for 3.4, we're now seeing this 
> happen:
> 19/Mar/2015-16:21:32 22 d4:f4:6f:71:90:e6 10.0.1.71 TCP_DENIED 200 0 
> CONNECT 94.31.29.230:443 - server-first - HIER_NONE/- - -
> 

The CONNECT request in the clear-text HTTP layer is now subject to
access controls before any bumping takes place. Earlier Squid would let
the CONNECT through if you were bumping, even if it would have been
blocked by your access controls normally.

This is unrelated to server-first or any other ssl_bump action.
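
A rough sketch only (the port name comes from your intercepting https_port line; adapt it to whatever policy you actually want): let the intercepted CONNECTs through to the bumping step and keep the real controls on the decrypted requests, e.g.

 acl bumped_port myportname 3130
 http_access allow CONNECT bumped_port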

> Instead of this:
> 19/Mar/2015-14:42:04 736 d4:f4:6f:71:90:e6 10.0.1.71 TCP_MISS 200 96913 
> GET https://code.jquery.com/jquery-1.11.0.min.js - server-first 
> Mozilla/5.0%20(iPhone;%20CPU%20iPhone%20OS%208_2%20like%20Mac%20OS%20X)%20AppleWebKit/600.1.4%20(KHTML,%20like%20Gecko)%20Mobile/12D508
>  ORIGINAL_DST/94.31.29.53 application/x-javascript -
> 

That is a different HTTP message from inside the encryption.


Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Server-first SSL bump in Squid 3.5.x

2015-03-18 Thread Dan Charlesworth
Hey y’all

Finally got 3.5.2 running. I was under the impression that using server-first 
SSL bump would still be compatible, despite all the Peek & Splice changes, but 
apparently not. Hopefully someone can explain what might be going wrong here ...

Using the same SSL Bump config that we used for 3.4, we're now seeing this happen:
19/Mar/2015-16:21:32 22 d4:f4:6f:71:90:e6 10.0.1.71 TCP_DENIED 200 0 
CONNECT 94.31.29.230:443 - server-first - HIER_NONE/- - -

Instead of this:
19/Mar/2015-14:42:04 736 d4:f4:6f:71:90:e6 10.0.1.71 TCP_MISS 200 96913 GET 
https://code.jquery.com/jquery-1.11.0.min.js - server-first 
Mozilla/5.0%20(iPhone;%20CPU%20iPhone%20OS%208_2%20like%20Mac%20OS%20X)%20AppleWebKit/600.1.4%20(KHTML,%20like%20Gecko)%20Mobile/12D508
 ORIGINAL_DST/94.31.29.53 application/x-javascript -

This request happens in a little splash page which is designed to test if 
squid’s CA cert is installed on the client and redirect them to some 
instructions if it’s not. This definitely isn’t happening for all intercepted 
HTTPS requests, just this (particularly important) one and some others.

SSL Bump config:
ssl_bump none localhost
ssl_bump server-first all
sslproxy_cert_error deny all

sslcrtd_program /usr/bin/squid_ssl_crtd -s /path/to/squid/ssl_db -M 4MB
sslcrtd_children 32 startup=5 idle=1

DNAT intercepting port config:
https_port 3130 intercept name=3130 ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=4MB cert=/path/to/squid/proxy-cert.cer 
key=/path/to/squid/proxy-key.key

Thanks!
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] i want to block images with size more than 40 KB

2015-03-18 Thread Amos Jeffries
On 19/03/2015 1:35 p.m., snakeeyes wrote:
> Thank you so much, Amos and Leonardo.
> Can you provide me with any sample config to start with?
> It feels very difficult to me.
> I had a look at the "ACL elements" section in the wiki about matching the size 
> of an image but didn't find anything clear.
> So again I feel that I will create an access list that matches sizes greater 
> than 50 bytes and with a MIME type like jpg or bmp, and then deny it.
> 
> Could you help me with a startup config, please?

You mean hand over a cut-and-paste example that you can use, and then not
understand how to fix it when things go wrong?

Sure:
 acl images rep_header Content-Type ^image/ ^x-image/
 acl small rep_header Content-Length ^[1234]?[0-9]$
 http_reply_access deny small images


BUT like Leonardo said, censoring the Internet is not as easy as all that.

* Images come in *many* data formats (Content-Type values), some of
which are shared with other non-image things - like octet-stream which
literally means "unknown binary data". They can come embedded inside
other objects, JSON, CSS, archive files (like zip / gzip / xz / ar /
cab) ... even plain old HTML can have base64 blobs of image data in it
which get decoded by a script... and so on.

For every point of censorship there is a bypass.

* The Content-Length header is also not guaranteed to exist. The object
may be of undefined length, streamed in small chunks, or sent as a blob whose
size is not known until the end of the transaction.


What it comes down to is that you need to know exactly what you are
looking for in the protocol, and use the appropriate ACL types to match
with. Which in turn requires knowing what ACLs you have available and
how to use them to construct *_access rules matching your needs.
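
If you do want to go down the size path for the 40 KB case in your subject
line, one rough sketch is below. The ACL is only an approximation of "image",
replies over the limit get cut off with an error page rather than cleanly
denied, and anything not matching keeps the default of no limit:

 acl images rep_header Content-Type ^image/
 reply_body_max_size 40 KB images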


When you do have to make abnormal things happen be as precise and
specific as you can. Every bit of fuzz/approximation *will* cause
trouble at some point during production traffic.


So, why are you doing this?

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 3.5.2 will only start if cache directory is empty

2015-03-18 Thread Amos Jeffries
On 19/03/2015 11:29 a.m., Stanford Prescott wrote:
> I posted this message to the list a few days ago but haven't received any
> responses yet. I am hoping someone might be able to provide some insight on
> what is going on.
> 
> I have been trying to get Squid 3.5.2 to work with the Smoothwall Express
> 3.1 Linux firewall distribution. Specifically, I have modified the Squid
> version included with Smoothwall Express 3.1 to enable HTTPS caching. I
> have had this working successfully up to Squid version 3.4.10. Now with
> trying to upgrade to Squid 3.5.2 I am having problems that I didn't
> encounter with prior versions of Squid.
> 
> The first issue I had, which is now resolved, was improper permissions of
> the shm folder (in SWE found in /dev/shm). Changing the folder permissions
> to Squid user and group allowed Squid 3.5.2 to start. However, now it will
> only start with an empty cache directory.

Ouch. /dev/shm is a folder for system shared-memory sockets to be
created by applications. It should be owned by root user and group, with
777 permissions. Squid (or the OS kernel) should be able to create
"files" inside it, but it should not be owned by Squid.

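Restoring the usual ownership is roughly this, run as root (1777 being the
common mode, i.e. world-writable with the sticky bit):

 chown root:root /dev/shm
 chmod 1777 /dev/shm
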

> Once it starts with an empty
> cache directory, it seems to function correctly as far as caching SSL
> encrypted web pages. However, if Squid needs to be restarted for any
> reason, it will not restart until the cache directory
> (/var/spool/squid/cache) is emptied.

That HTTP data cache is unrelated to the SSL session cache. Its contents
should not matter.


> *2015/03/14 00:29:47 kid1| helperOpenServers: Starting 5/5 'ssl_crtd'
> processes*
> *FATAL: Ipc::Mem::Segment::open failed to
> shm_open(/squid-ssl_session_cache.shm): (2) No such file or directory*
> 

> 
> What is the "squid-ssl_session_cache". Am I supposed to define that
> somewhere in the
> Squid configuration? Is that why I am getting that error message because an
> ssl_session_cache is not defined somewhere?

The .shm is the name of a shared-memory socket "file". You have
sslproxy_session_cache_size defined with a size, so the SSL session
ticket cache is used.

Please try patching your Squid with
.
It should resolve many permissions issues Squid 3.5 workers are having
on startup.


> 
> This is my squid.conf file with SSL caching using ssl-bump enabled.
> 

> 
> *# A random port for forward-proxy port needed for SSL*
> *http_port 8081*
> 
> *http_port 192.168.100.1:800  intercept ssl-bump
> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
> cert=/var/smoothwall/mods/proxy/ssl_cert/squidCA.pem*
> 
> *https_port 192.168.100.1:808  intercept
> ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
> cert=/var/smoothwall/mods/proxy/ssl_cert/squidCA.pem*

Why two ports? One is usually sufficient.

> 
> *sslproxy_cert_error allow all*
> *sslproxy_flags DONT_VERIFY_PEER*

Please remove the DONT_VERIFY_PEER flag setting. It allows external
servers to corrupt your TLS certificates with garbage and hijack
connections.


> *ssl_bump server-first all*
> 
> *ssl_bump none localhostgreen*
> *sslcrtd_program /var/smoothwall/mods/proxy/libexec/ssl_crtd -s
> /var/smoothwall/mods/proxy/lib/ssl_db -M 4MB*
> *sslcrtd_children 5*
> 
> *sslproxy_session_cache_size 4 MB*



> 
> *cache_access_log /var/log/squid/access.log*
> *cache_log /var/log/squid/cache.log*

You don't need these three:
> *cache_store_log none*
> *error_directory /usr/share/errors/en-us*
> *log_mime_hdrs off*

.. all they do is set the defaults to be used.


> 
> *request_header_access Content-Type allow all*
> *request_header_access Date allow all*
> *request_header_access Host allow all*
> *request_header_access If-Modified-Since allow all*
> *request_header_access Pragma allow all*
> *request_header_access Accept allow all*
> *request_header_access Accept-Charset allow all*
> *request_header_access Accept-Encoding allow all*
> *request_header_access Accept-Language allow all*
> *request_header_access Connection allow all*
> *request_header_access All allow all*

The above settings do nothing but waste CPU time. You can remove them.

What you are instructing Squid to do is effectively "allow certain
headers X, Y, Z, oh and every other header too".

> 
> *shutdown_lifetime 3 seconds*

NOTE: a very short shutdown time can corrupt the HTTP data cache, as the
memory index does not have enough time to finish saving to disk.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid + AD + Kerb auth question

2015-03-18 Thread Markus Moeller
Hi Joao

Then you hit

http_access allow localnet


and not

http_access allow ad_auth

Comment out the following line in squid.conf 

http_access allow localnet


and try again.
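
The relevant part of your posted squid.conf would then read roughly:

 # http_access allow localnet
 http_access allow localhost
 http_access allow ad_auth
 http_access deny all

so requests from the LAN have to authenticate, and the user name should show up in access.log.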

Markus

From: Joao Paulo Monticelli Gaspar 
Sent: Wednesday, March 18, 2015 11:38 PM
To: Markus Moeller 
Subject: Re: [squid-users] Squid + AD + Kerb auth question

Yes, I'm using localnet; this is a virtual test lab environment. Here are some 
log entries:

1426694349.225  59653 192.168.1.251 TCP_MISS/200 4775 CONNECT 
p5-ib4juqow2smme-qg5sbffb457kogr5-505177-i2-v6exp3-ds.metric.gstatic.com:443 - 
DIRECT/216.58.222.35 -
1426694352.258  62686 192.168.1.251 TCP_MISS/200 4774 CONNECT 
p5-ib4juqow2smme-qg5sbffb457kogr5-505177-i1-v6exp3-v4.metric.gstatic.com:443 - 
DIRECT/216.58.222.46 -
1426694613.543  58996 192.168.1.251 TCP_MISS/200 1112 CONNECT 
safebrowsing.google.com:443 - DIRECT/173.194.42.133 -

When I looked at the access.log manual pages I saw that if Squid can't get the user 
info it logs a "-" in that field, and we can see it there. But why can't it 
get the user info?


2015-03-18 20:20 GMT-03:00 Markus Moeller :

  Hi,

From which network do you surf ?  From localnet ? 

Can you send sample log entries ?

  Markus

  From: Joao Paulo Monticelli Gaspar 
  Sent: Wednesday, March 18, 2015 9:18 PM
  To: Markus Moeller 
  Subject: Re: [squid-users] Squid + AD + Kerb auth question

  squid.conf 

  visible_hostname proxy.joznet.local

  auth_param negotiate program /usr/lib64/squid/squid_kerb_auth
  auth_param negotiate children 10
  auth_param negotiate keep_alive on
  auth_param basic credentialsttl 2 hours

  acl ad_auth proxy_auth REQUIRED

  acl manager proto cache_object
  acl localhost src 127.0.0.1/32 ::1
  acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

  acl localnet src 192.168.1.0/24 # RFC1918 possible internal network
  acl localnet src fc00::/7   # RFC 4193 local private network range
  acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines

  acl SSL_ports port 443
  acl Safe_ports port 80 # http
  acl Safe_ports port 21 # ftp
  acl Safe_ports port 443 # https
  acl Safe_ports port 70 # gopher
  acl Safe_ports port 210 # wais
  acl Safe_ports port 1025-65535 # unregistered ports
  acl Safe_ports port 280 # http-mgmt
  acl Safe_ports port 488 # gss-http
  acl Safe_ports port 591 # filemaker
  acl Safe_ports port 777 # multiling http
  acl CONNECT method CONNECT

  http_access allow manager localhost
  http_access deny manager

  http_access deny !Safe_ports


  http_access deny CONNECT !SSL_ports


  http_access allow localnet

  http_access allow localhost
  http_access allow ad_auth
  http_access deny all


  http_port 3128

  hierarchy_stoplist cgi-bin ?


  coredump_dir /var/spool/squid


  refresh_pattern ^ftp: 1440 20% 10080

  refresh_pattern ^gopher: 1440 0% 1440
  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
  refresh_pattern . 0 20% 4320

  

  krb5.conf

  [logging]
  default = FILE:/var/log/krb5libs.log
  kdc = FILE:/var/log/krb5kdc.log
  admin_server = FILE:/var/log/kadmind.log

  [libdefaults]
  default_realm = JOZNET.LOCAL
  dns_lookup_realm = false
  dns_lookup_kdc = false
  ticket_lifetime = 24h
  renew_lifetime = 7d
  forwardable = true

  ; for Windows 2008 with AES

  ;default_tgs_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-crc 
des-cbc-md5
  ;default_tkt_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-crc 
des-cbc-md5
  ;permitted_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-crc 
des-cbc-md5

  ; for MIT/Heimdal kdc no need to restrict encryption type

  [realms]
  JOZNET.LOCAL = {
kdc = srvjoznt.joznet.local:88
admin_server = srvjoznt.joznet.local:749
default_domain = joznet.local 
  }

  [domain_realm]
  .joznet.local= JOZNET.LOCAL
  joznet.local= JOZNET.LOCAL

  [pam]
  debug = false
  ticket_lifetime = 36000
  renew_lifetime = 36000
  forwardable = true
  krb4_convert = false


  2015-03-18 17:54 GMT-03:00 Markus Moeller :

What does the config file look like?

Markus

"Joao Paulo Monticelli Gaspar"  wrote in message 
news:CAFjXhx=idbdxeqxbzy56tr5m3fztasu2tqgwlclydi_s-s3...@mail.gmail.com...
Hey people 

I have a question and couldn't find the answer anywhere yet. I'm using Squid 
integrated with a W2K8 AD server with Kerberos auth, and everything works fine; the 
main reason for choosing this setup is the single sign-on capability of the 
configuration. But in my access.log I can't see the users that are visiting 
the sites...

Is it possible to show that info with this setup, or with any other setup that 
maintains the SSO?

Thx in advance.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Squid + AD + Kerb auth question

2015-03-18 Thread Markus Moeller
Hi,

  From which network do you surf ?  From localnet ? 

  Can you send sample log entries ?

Markus

From: Joao Paulo Monticelli Gaspar 
Sent: Wednesday, March 18, 2015 9:18 PM
To: Markus Moeller 
Subject: Re: [squid-users] Squid + AD + Kerb auth question

squid.conf 

visible_hostname proxy.joznet.local

auth_param negotiate program /usr/lib64/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on
auth_param basic credentialsttl 2 hours

acl ad_auth proxy_auth REQUIRED

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 192.168.1.0/24 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines

acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager

http_access deny !Safe_ports


http_access deny CONNECT !SSL_ports


http_access allow localnet

http_access allow localhost
http_access allow ad_auth
http_access deny all


http_port 3128

hierarchy_stoplist cgi-bin ?


coredump_dir /var/spool/squid


refresh_pattern ^ftp: 1440 20% 10080

refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320


krb5.conf

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = JOZNET.LOCAL
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true

; for Windows 2008 with AES

;default_tgs_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-crc 
des-cbc-md5
;default_tkt_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-crc 
des-cbc-md5
;permitted_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-crc 
des-cbc-md5

; for MIT/Heimdal kdc no need to restrict encryption type

[realms]
JOZNET.LOCAL = {
  kdc = srvjoznt.joznet.local:88
  admin_server = srvjoznt.joznet.local:749
  default_domain = joznet.local 
}

[domain_realm]
.joznet.local= JOZNET.LOCAL
joznet.local= JOZNET.LOCAL

[pam]
debug = false
ticket_lifetime = 36000
renew_lifetime = 36000
forwardable = true
krb4_convert = false


2015-03-18 17:54 GMT-03:00 Markus Moeller :

  What does the config file look like?

  Markus

  "Joao Paulo Monticelli Gaspar"  wrote in message 
news:CAFjXhx=idbdxeqxbzy56tr5m3fztasu2tqgwlclydi_s-s3...@mail.gmail.com...
  Hey people 

  I have a question and couldn't find the answer anywhere yet. I'm using Squid 
integrated with a W2K8 AD server with Kerberos auth, and everything works fine; the 
main reason for choosing this setup is the single sign-on capability of the 
configuration. But in my access.log I can't see the users that are visiting 
the sites...

  Is it possible to show that info with this setup, or with any other setup that 
maintains the SSO?

  Thx in advance.

--
  ___
  squid-users mailing list
  squid-users@lists.squid-cache.org
  http://lists.squid-cache.org/listinfo/squid-users


  ___
  squid-users mailing list
  squid-users@lists.squid-cache.org
  http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 3.5.2 will only start if cache directory is empty

2015-03-18 Thread Stanford Prescott
I posted this message to the list a few days ago but haven't received any
responses yet. I am hoping someone might be able to provide some insight on
what is going on.

I have been trying to get Squid 3.5.2 to work with the Smoothwall Express
3.1 Linux firewall distribution. Specifically, I have modified the Squid
version included with Smoothwall Express 3.1 to enable HTTPS caching. I
have had this working successfully up to Squid version 3.4.10. Now with
trying to upgrade to Squid 3.5.2 I am having problems that I didn't
encounter with prior versions of Squid.

The first issue I had, which is now resolved, was improper permissions of
the shm folder (in SWE found in /dev/shm). Changing the folder permissions
to Squid user and group allowed Squid 3.5.2 to start. However, now it will
only start with an empty cache directory. Once it starts with an empty
cache directory, it seems to function correctly as far as caching SSL
encrypted web pages. However, if Squid needs to be restarted for any
reason, it will not restart until the cache directory
(/var/spool/squid/cache) is emptied.

The error I am getting when trying to start Squid 3.5.2 without an empty
cache is

*2015/03/14 00:29:47 kid1| Current Directory is /*

*2015/03/14 00:29:47 kid1| Starting Squid Cache version 3.5.2 for
i586-pc-linux-gnu...*
*2015/03/14 00:29:47 kid1| Service Name: squid*
*2015/03/14 00:29:47 kid1| Process ID 7261*
*2015/03/14 00:29:47 kid1| Process Roles: worker*
*2015/03/14 00:29:47 kid1| With 1024 file descriptors available*
*2015/03/14 00:29:47 kid1| Initializing IP Cache...*
*2015/03/14 00:29:47 kid1| DNS Socket created at 0.0.0.0, FD 8*
*2015/03/14 00:29:47 kid1| Adding nameserver 127.0.0.1 from
/etc/resolv.conf*
*2015/03/14 00:29:47 kid1| helperOpenServers: Starting 5/5 'ssl_crtd'
processes*
*FATAL: Ipc::Mem::Segment::open failed to
shm_open(/squid-ssl_session_cache.shm): (2) No such file or directory*

*Squid Cache (Version 3.5.2): Terminated abnormally.*
*CPU Usage: 0.027 seconds = 0.020 user + 0.007 sys*
*Maximum Resident Size: 26752 KB*
*Page faults with physical i/o: 0*
*2015/03/14 00:29:47.830 kid1| Acl.cc(380) ~ACL: freeing ACL *


What is the "squid-ssl_session_cache". Am I supposed to define that
somewhere in the
Squid configuration? Is that why I am getting that error message because an
ssl_session_cache is not defined somewhere?

This is my squid.conf file with SSL caching using ssl-bump enabled.

*visible_hostname smoothwall*

*# Uncomment the following to send debug info to /var/log/squid/cache.log*
*debug_options ALL,1 33,2 28,9*

*# ACCESS CONTROLS*
*# *
*acl localhostgreen src 192.168.100.1*
*acl localnetgreen src 192.168.100.0/24 *

*acl SSL_ports port 445 443 441 563*
*acl Safe_ports port 80  # http*
*acl Safe_ports port 81  # smoothwall http*
*acl Safe_ports port 21  # ftp *
*acl Safe_ports port 445 443 441 563 # https, snews*
*acl Safe_ports port 70  # gopher*
*acl Safe_ports port 210 # wais  *
*acl Safe_ports port 1025-65535 # unregistered ports*
*acl Safe_ports port 280# http-mgmt*
*acl Safe_ports port 488# gss-http *
*acl Safe_ports port 591# filemaker*
*acl Safe_ports port 777# multiling http*

*acl CONNECT method CONNECT*

*# TAG: http_access*
*# *


*http_access deny !Safe_ports*
*http_access deny CONNECT !SSL_ports*

*http_access allow localnetgreen*
*http_access allow CONNECT localnetgreen*

*http_access allow localhostgreen*
*http_access allow CONNECT localhostgreen*

*# http_port and https_port*
*#*

*# A random port for forward-proxy port needed for SSL*
*http_port 8081*

*http_port 192.168.100.1:800  intercept ssl-bump
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
cert=/var/smoothwall/mods/proxy/ssl_cert/squidCA.pem*

*https_port 192.168.100.1:808  intercept
ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
cert=/var/smoothwall/mods/proxy/ssl_cert/squidCA.pem*

*sslproxy_cert_error allow all*
*sslproxy_flags DONT_VERIFY_PEER*
*ssl_bump server-first all*

*ssl_bump none localhostgreen*
*sslcrtd_program /var/smoothwall/mods/proxy/libexec/ssl_crtd -s
/var/smoothwall/mods/proxy/lib/ssl_db -M 4MB*
*sslcrtd_children 5*

*sslproxy_session_cache_size 4 MB*

*http_access deny all*

*cache_replacement_policy heap GDSF*
*memory_replacement_policy heap GDSF*

*# CACHE OPTIONS*
*#
*
*cache_effective_user squid*
*cache_effective_group squid*

*cache_swap_high 100*
*cache_swap_low 80*

*cache_mem 8 MB*
*maximum_object_size_in_memory 512 KB*

*cache_access_log /var/log/squid/access.log*
*cache_log /var/log/squid/cache.log*
*cache_store_log none*
*error_directory /usr/share/errors/en-us*

Re: [squid-users] Squid + AD + Kerb auth question

2015-03-18 Thread Markus Moeller
What does the config file look like?

Markus

"Joao Paulo Monticelli Gaspar"  wrote in message 
news:CAFjXhx=idbdxeqxbzy56tr5m3fztasu2tqgwlclydi_s-s3...@mail.gmail.com...
Hey people 

I have a question and couldn't find the answer anywhere yet. I'm using Squid 
integrated with a W2K8 AD server with Kerberos auth, and everything works fine; the 
main reason for choosing this setup is the single sign-on capability of the 
configuration. But in my access.log I can't see the users that are visiting 
the sites...

Is it possible to show that info with this setup, or with any other setup that 
maintains the SSO?

Thx in advance.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] v3.5.x RPM for CentOS 6

2015-03-18 Thread Eliezer Croitoru

Hey Dan,

I will put more effort into it and will try to publish 3.4.12 and 3.5.2 
for CentOS 6 this week.


About the pinger issue: indeed there is one. The suid bit is not set inside 
the spec files and I will add it later, while also considering patching 
the squid.conf default for pinger from on to off.
From my experience, sysadmins of CentOS-based systems have enough 
experience to understand that suid should not be taken lightly.


As a sysadmin: not just anyone who asks me to set suid on a file 
will get it.


Eliezer

References:
http://www.rpm.org/max-rpm/s1-rpm-anywhere-specifying-file-attributes.html
http://www.linuxnix.com/2011/12/suid-set-suid-linuxunix.html
http://www.cyberciti.biz/faq/unix-bsd-linux-setuid-file/
http://www.squid-cache.org/Doc/config/pinger_enable/
http://stackoverflow.com/questions/10312344/why-traceroute-sends-udp-packets-and-not-icmp-ones


On 18/03/2015 05:16, Dan Charlesworth wrote:

Hey Eliezer

Do you have any plans to maintain a Squid 3.5.x rpm for CentOS 6?

I can see you’ve published one for CentOS 7. In fact I tried to use your spec 
file from the EL7 version to build an EL6 rpm, but ran into errors when 
updating from 3.4.12:

1. Installing the separate squid-helpers package had a dependency error I’m not 
sure how to resolve:
---> Package squid-helpers.x86_64 7:3.5.2-1.el6 will be installed
--> Processing Dependency: perl(Crypt::OpenSSL::X509) for package: 
7:squid-helpers-3.5.2-1.el6.x86_64
--> Processing Dependency: perl(DBI) for package: 
7:squid-helpers-3.5.2-1.el6.x86_64
--> Running transaction check
---> Package perl-DBI.x86_64 0:1.609-4.el6 will be installed
---> Package squid-helpers.x86_64 7:3.5.2-1.el6 will be installed
--> Processing Dependency: perl(Crypt::OpenSSL::X509) for package: 
7:squid-helpers-3.5.2-1.el6.x86_64
--> Finished Dependency Resolution
Error: Package: 7:squid-helpers-3.5.2-1.el6.x86_64 (getbusi-dev)
Requires: perl(Crypt::OpenSSL::X509)
  You could try using --skip-broken to work around the problem

  You could try running: rpm -Va --nofiles --nodigest

2. Having disabled all the helpers which are missing because of that package 
everything was okay except for an error regarding the “ICMP Pinger”:
2015/03/18 14:13:25| pinger: Initialising ICMP pinger ...
2015/03/18 14:13:25|  icmp_sock: (1) Operation not permitted
2015/03/18 14:13:25| pinger: Unable to start ICMP pinger.
2015/03/18 14:13:25|  icmp_sock: (1) Operation not permitted
2015/03/18 14:13:25| pinger: Unable to start ICMPv6 pinger.
2015/03/18 14:13:25| FATAL: pinger: Unable to open any ICMP sockets.

Do you have any advice on how to overcome these issues?

Thanks!
Dan




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] v3.5.x RPM for CentOS 6

2015-03-18 Thread Eliezer Croitoru

Hey List,

Sorry, but it takes time (for me) to test Squid 3.5.
I have built a testing beta of 3.5 for CentOS 7 but have yet to publish it 
officially.


Since you have asked: the main issue is that the RH RPM auto-building 
tools help find dependencies, and therefore most of the helpers 
need/require the EPEL repositories.
Due to this I have separated the Squid "core" and "helpers" into 
different packages.


The pinger is part of the "core" Squid package, and one of the tests I 
ran on the 6 branch was whether the pinger would get the suid flag, since it 
"needs" it, or rather the OS "requires" it, since it handles ICMP.


For most regular setups the pinger is not needed to make the service work 
and serve the clients.
If your setup requires it then you will need to enable suid; otherwise you 
can disable the pinger using "pinger_enable off". Also look at:

http://www.squid-cache.org/Doc/config/pinger_enable/

"chmod u+s /path/pinger" should be the correct suid set command (and I 
think that if you ask about it then you probably don't need it).


Eliezer

On 18/03/2015 13:11, Amos Jeffries wrote:

perl ?

The helpers that require it are scripts, not compiled binaries. So they
should run with any perl 4/5 version normally installed.

Even just installing the modules with cpan should work.



>- How do I grant the “pinger” the correct permissions in CentOS 6?

Should be possible just to install Squid with:
   make install install-pinger

Amos


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Increase number of ext_ldap_group_acl processes

2015-03-18 Thread Rich549
Thanks! I've added the children-startup=15 to my config but it seems to be
ignoring it. An excerpt of my config is:

external_acl_type internet_domain_group children-startup=15 %LOGIN
/usr/lib/squid3/ext_ldap_group_acl -R -P -b (this then goes on to provide
details of AD structure etc).

Have I missed something?
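
Reading the external_acl_type documentation again, the helper-count settings
seem to be the children-* family, so perhaps it wants children-max alongside
children-startup? Something like this, though the numbers are just a guess and
the LDAP arguments are elided as before:

 external_acl_type internet_domain_group children-max=30 children-startup=15 children-idle=5 %LOGIN /usr/lib/squid3/ext_ldap_group_acl -R -P -b ...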



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Increase-number-of-ext-ldap-group-acl-processes-tp4670484p4670488.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid + AD + Kerb auth question

2015-03-18 Thread Joao Paulo Monticelli Gaspar
Hey people

I have a question and couldn't find the answer anywhere yet. I'm using Squid
integrated with a W2K8 AD server with Kerberos auth, and everything works fine;
the main reason for choosing this setup is the single sign-on capability
of the configuration. But in my access.log I can't see the users that are
visiting the sites...

Is it possible to show that info with this setup, or with any other setup
that maintains the SSO?

Thx in advance.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] i want to block images with size more than 40 KB

2015-03-18 Thread snakeeyes
Thank you so much, Amos and Leonardo.
Can you provide me with any sample config to start with?
It feels very difficult to me.
I had a look at the "ACL elements" section in the wiki about matching the size 
of an image but didn't find anything clear.
So again I feel that I will create an access list that matches sizes greater 
than 50 bytes and with a MIME type like jpg or bmp, and then deny it.

Could you help me with a startup config, please?

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Leonardo Rodrigues
Sent: Wednesday, March 18, 2015 7:32 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] i want to block images with size more than 40 KB

On 18/03/15 08:06, Amos Jeffries wrote:
> On 19/03/2015 5:57 a.m., snakeeyes wrote:
>> I need help blocking images that have a size less than 40 KB
>>
>>   
> Use the Squid provided access controls to manage access to things.
> 
>

 You should know that you cannot evaluate the response size using only the 
request data. So to achieve what you want, data from the reply must be 
considered as well, the response size for example.

 Images can be identified by the presence of '.jpg' or '.png' in the 
request URL, but images can be generated on-the-fly by scripts as well, so you 
won't see those extensions all the time. In that case, analyzing the reply's MIME 
headers can be useful as well; the reply MIME type containing 'image' is a great 
indication that we're receiving an image.

 Put all that together and you'll achieve the rules you want. 
But keep in mind that you'll probably break A LOT of sites that 'slice' 
images, background images, menus and all sorts of things. I would call that a 
VERY bad idea, but it can be achieved with a few rules.



-- 


Atenciosamente / Sincerely,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Increase number of ext_ldap_group_acl processes

2015-03-18 Thread Amos Jeffries
On 19/03/2015 3:36 a.m., Rich549 wrote:
> Just a quick question, if I want to increase the number of ext_ldap_group_acl
> processes that start with Squid then what would I need to add into my
> config?
> 
> My reason for asking is because I keep seeing this in the cache.log:
> 
> 2015/03/18 12:25:58| WARNING: external ACL 'internet_domain_group' queue
> overload. Using stale result.
> 
> Restarting the Squid service clears the error and I don't see it for weeks
> but I'd rather just put a fix in place and have read that increasing the
> number of processes can do that.


Please see the squid.conf documentation:
 

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Increase number of ext_ldap_group_acl processes

2015-03-18 Thread Rich549
Just a quick question, if I want to increase the number of ext_ldap_group_acl
processes that start with Squid then what would I need to add into my
config?

My reason for asking is because I keep seeing this in the cache.log:

2015/03/18 12:25:58| WARNING: external ACL 'internet_domain_group' queue
overload. Using stale result.

Restarting the Squid service clears the error and I don't see it for weeks
but I'd rather just put a fix in place and have read that increasing the
number of processes can do that.

Thanks,

Rich



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Increase-number-of-ext-ldap-group-acl-processes-tp4670484.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] i want to block images with size more than 40 KB

2015-03-18 Thread Leonardo Rodrigues

On 18/03/15 08:06, Amos Jeffries wrote:

On 19/03/2015 5:57 a.m., snakeeyes wrote:

I need help blocking images that have a size less than 40 KB

  

Use the Squid provided access controls to manage access to things.




You should know that you cannot evaluate the response size using 
only the request data. So to achieve what you want, data from the reply 
must be considered as well, the response size for example.


Images can be identified by the presence of '.jpg' or '.png' in the 
request URL, but images can be generated on-the-fly by scripts as well, 
so you won't see those extensions all the time. In that case, analyzing 
the reply's MIME headers can be useful as well; the reply MIME type containing 
'image' is a great indication that we're receiving an image.


Put all that together and you'll achieve the rules you want. 
But keep in mind that you'll probably break A LOT of sites that 'slice' 
images, background images, menus and all sorts of things. I would call 
that a VERY bad idea, but it can be achieved with a few rules.




--


Atenciosamente / Sincerely,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid SMP and SNMP

2015-03-18 Thread Amos Jeffries
On 19/03/2015 2:50 a.m., Eugene M. Zheganin wrote:
> Hi.
> 
> On 18.03.2015 16:04, Amos Jeffries wrote:
>>
>> SNMP is on the list of SMP-aware features.
>>
>> The worker receiving the SNMP request will contact other workers to
>> fetch the data for producing the SNMP response. This may take some time.
>>
> Yeah, but it seems like it doesn't happen. Plus, I'm getting the errors
> in the cache.log on each attempt:
> 
> [root@taiga:etc/squid]# snmpwalk localhost:3402
> 1.3.6.1.4.1.3495.1.2.1.0 
> Timeout: No Response from localhost:3402
> 
> and in the log:
> 
> 2015/03/18 18:48:26 kid3| comm_udp_sendto: FD 34, (family=2)
> 127.0.0.1:46682: (22) Invalid argument

Process kid3 (SMP coordinator) is attempting to respond.

Since you configured:
  snmp_port 340${process_number}

and the coordinator is process number 3 I think it will be using port
3403 for that response.
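
i.e. with two workers (which is what the two bound ports suggest) the macro
expands per process roughly as:

 workers 2
 snmp_port 340${process_number}
 # kid1 (worker) -> 3401, kid2 (worker) -> 3402, kid3 (coordinator) -> 3403

so it may be worth testing snmpwalk against 3403 as well.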

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid SMP and SNMP

2015-03-18 Thread Eugene M. Zheganin
Hi.

On 18.03.2015 16:04, Amos Jeffries wrote:
>
> SNMP is on the list of SMP-aware features.
>
> The worker receiving the SNMP request will contact other workers to
> fetch the data for producing the SNMP response. This may take some time.
>
Yeah, but it seems like it doesn't happen. Plus, I'm getting the errors
in the cache.log on each attempt:

[root@taiga:etc/squid]# snmpwalk localhost:3402
1.3.6.1.4.1.3495.1.2.1.0 
Timeout: No Response from localhost:3402

and in the log:

2015/03/18 18:48:26 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:46682: (22) Invalid argument
2015/03/18 18:48:49 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument
2015/03/18 18:48:50 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument
2015/03/18 18:48:51 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument
2015/03/18 18:48:52 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument
2015/03/18 18:48:53 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument
2015/03/18 18:48:54 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument

Thanks.
Eugene.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] (about external_acl_type problem ) two people can't login and access internet together

2015-03-18 Thread johnzeng


Hello Amos:

   Thanks again, I tested that part and solved 
the problem just now.



Have a good day with you .


Best Regards

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] v3.5.x RPM for CentOS 6

2015-03-18 Thread Amos Jeffries
On 18/03/2015 7:03 p.m., Dan Charlesworth wrote:
> Hi Donny
> 
> I gathered that much. I guess what I specifically am asking for is:
> 
> - Which CentOS 6 package includes the missing perl modules?

perl ?

The helpers that require it are scripts, not compiled binaries. So they
should run with any perl 4/5 version normally installed.

Even just installing the modules with cpan should work.


> - How do I grant the “pinger” the correct permissions in CentOS 6?

Should be possible just to install Squid with:
  make install install-pinger

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] i want to block images with size more than 40 KB

2015-03-18 Thread Amos Jeffries
On 19/03/2015 5:57 a.m., snakeeyes wrote:
> Hi Guys
> 
> I need help blocking images that have a size less than 40 KB
> 
> Any guidance or help will be appreciated
> 
>  

Use the Squid provided access controls to manage access to things.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid SMP and SNMP

2015-03-18 Thread Amos Jeffries
On 18/03/2015 9:50 p.m., Eugene M. Zheganin wrote:
> Hi.
> 
> I'm gathering statistics from squid using SNMP. When I use a single
> process everything is fine, but when it comes to multiple workers SNMP
> doesn't work - I get a timeout when trying to read data with snmpwalk.
> 
> I'm using the following tweak:
> 
> snmp_port 340${process_number}
> 
> both workers bind on ports 3401 and 3402 indeed, but then I got this
> timeout.
> Does anyone have a success story about squid SMP and SNMP ?


SNMP is on the list of SMP-aware features.

The worker receiving the SNMP request will contact other workers to
fetch the data for producing the SNMP response. This may take some time.


> 
> I wrote a message about this problem about a year or so ago, when it was 3.3.x,
> but the situation didn't change.

Nothing has changed in SNMP or other mgr report generation code since then.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] (about external_acl_type problem ) two people can't login and access internet together

2015-03-18 Thread Amos Jeffries
On 18/03/2015 10:04 p.m., johnzeng wrote:
> 
> Hello All
> 
> 
>  If possible, please give me some advice, thanks.
> 
> 
> 
>  Perhaps ttl=50 is too low; maybe I will update the ttl value to
> ttl=3600 cache=1048576.

Whatever. You know better than anyone else about that decision.

> 
> I still have a question: are the cached results for external_acl the
> responses from the helper program?
> 

Yes.

> for example :
> 
> if FORMAT is %SRC and the helper program returns "OK\n",
> 
> then external_acl_type tells squid to cache it under that %SRC (for example:
> the client is 192.168.0.21, so 192.168.0.21 will be cached as the key)


The %SRC format is the cache key. The "OK" is the value cached for the
key "192.168.0.21".

Whenever "192.168.0.21" is looked up the stored value for it may be used
instead of calling the helper again.


> 
> if the helper program returns "ERR\n",
> 
> will it cache no value, or cache the src IP as a cached negative value...?

"ERR" is a successful "negative lookup". The negative_ttl=N value
applies to how long those get stored.

Squid default is to store both positive (OK) and negative (ERR) results
for the same TTL.
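
For example, on the external_acl_type line from your splash-portal config that
would look roughly like this (the TTL numbers here are only illustrative):

 external_acl_type session ipv4 concurrency=10 ttl=3600 negative_ttl=60 cache=1048576 %SRC /accerater/webgui/public/wifiportal/logincheck.php

i.e. an "OK" answer for a given %SRC would be reused for an hour, an "ERR"
answer for one minute.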

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] (about external_acl_type problem ) two people can't login and access internet together

2015-03-18 Thread johnzeng

Hello All


 If possible, please give me some advice, thanks.



 Perhaps ttl=50 is too low; maybe I will update the ttl value to
ttl=3600 cache=1048576.

I still have a question: are the cached results for external_acl the
responses from the helper program?

for example:

if FORMAT is %SRC and the helper program returns "OK\n",

then external_acl_type tells squid to cache it under that %SRC (for example:
the client is 192.168.0.21, so 192.168.0.21 will be cached as the key)

if the helper program returns "ERR\n",

will it cache no value, or cache the src IP as a cached negative value...?


Is my understanding correct?

  external_acl_type name [options] FORMAT.. /path/to/helper [helper 
arguments..]

Options:

  ttl=n TTL in seconds for cached results (defaults to 3600
for 1 hour)

  negative_ttl=n
TTL for cached negative lookups (default same
as ttl)



  cache=n   Limit the result cache size, default is 262144.
The expanded FORMAT value is used as the cache key, so
if the details in FORMAT are highly variable a larger
cache may be needed to produce reduction in helper load.




http://www.squid-cache.org/Versions/v3/3.5/cfgman/external_acl_type.html


Hello All:

I am testing a splash portal via external_acl_type ...

The first person can log in and access the internet, but when the
second person logs in and gets internet access,

the first person has to log in again. And when the first person
logs in again and gets internet access,

the second person has to log in again.


What I mean is: only one person can access the internet at the same
time.



I guess the [channel-ID] handling is wrong in my config, but I can't confirm it.


If concurrency=10,

how do I identify or find the correct [channel-ID],

and

is the return value format correct for squid?

for example

fwrite(STDOUT, $stream_id." ERR\n");



If possible, please give me some advice.



http://wiki.squid-cache.org/Features/AddonHelpers#Access_Control_.28ACL.29
http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper

Squid.conf ---

external_acl_type session ipv4 concurrency=10 ttl=50 %SRC
/accerater/webgui/public/wifiportal/logincheck.php
acl session_login external session
acl splash_page url_regex -i ^http://192.168.0.198/wifiportal/index.html

deny_info http://192.168.0.198/wifiportal/index.html session_login

http_access allow splash_page
http_access deny !session_login

--Helper program config ( php )-

while (!feof(STDIN))
{
$stream_line = trim(fgets(STDIN));
$stream_array = split("[ ]+", $stream_line);
$stream_ip = trim($stream_array[1]);
$stream_id = trim($stream_array[0]);

.

fwrite(STDOUT, $stream_id." ERR\n");



fwrite(STDOUT, $stream_id." OK\n");






___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Random SSL bump DB corruption

2015-03-18 Thread Yuri Voinov

As far as I can tell,

this problem is produced by one of Apple's services over HTTPS.

When a client queries something like iTunes, squid gets a strange certificate 
which corrupts the DB.


I have found no solution at this time. Just stop squid and clean up the SSL db.
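
The cleanup is roughly this (the db path is taken from the log excerpts below;
the helper path should be whatever your sslcrtd_program line points at, and I
assume Squid runs as the "squid" user):

 squid -k shutdown
 rm -rf /usr/local/mwf/mwf13/squid/ssl_db
 /usr/bin/squid_ssl_crtd -c -s /usr/local/mwf/mwf13/squid/ssl_db -M 4MB
 chown -R squid:squid /usr/local/mwf/mwf13/squid/ssl_db
 squid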

WBR, Yuri

18.03.15 11:21, Dan Charlesworth wrote:

Bumpity bump

Had this go down exactly the same way this past Monday at Deployment #1.

On 10 Mar 2015, at 4:51 pm, Dan Charlesworth wrote:


Hey folks

After having many of our systems running Squid 3.4.12 for a couple of 
weeks now we had two different deployments fail today due to SSL DB 
corruption.


Never seen this in almost 9 months of SSL bump being in production 
and there were no problems in either cache log until the “wrong 
number of fields” lines, apparently.


Anyone else?

Deployment #1 log excerpt:
wrong number of fields on line 505 (looking for field 6, got 1, '' left)
(squid_ssl_crtd): The SSL certificate database 
/usr/local/mwf/mwf13/squid/ssl_db is corrupted. Please rebuild

2015/03/10 09:04:24 kid1| WARNING: ssl_crtd #Hlpr0 exited
2015/03/10 09:04:24 kid1| Too few ssl_crtd processes are running 
(need 1/32)

2015/03/10 09:04:24 kid1| Starting new helpers
2015/03/10 09:04:24 kid1| helperOpenServers: Starting 1/32 
'squid_ssl_crtd' processes

2015/03/10 09:04:24 kid1| "ssl_crtd" helper returned  reply.
wrong number of fields on line 505 (looking for field 6, got 1, '' left)
(squid_ssl_crtd): The SSL certificate database 
/usr/local/mwf/mwf13/squid/ssl_db is corrupted. Please rebuild


Deployment #2 log excerpt:
wrong number of fields on line 2 (looking for field 6, got 1, '' left)
(squid_ssl_crtd): The SSL certificate database 
/usr/local/mwf/mwf13/squid/ssl_db is corrupted. Please rebuild

2015/03/10 15:29:16 kid1| WARNING: ssl_crtd #Hlpr0 exited
2015/03/10 15:29:16 kid1| Too few ssl_crtd processes are running 
(need 1/32)

2015/03/10 15:29:16 kid1| Starting new helpers
2015/03/10 15:29:16 kid1| helperOpenServers: Starting 1/32 
'squid_ssl_crtd' processes

2015/03/10 15:29:17 kid1| "ssl_crtd" helper returned  reply.
wrong number of fields on line 2 (looking for field 6, got 1, '' left)
(squid_ssl_crtd): The SSL certificate database 
/usr/local/mwf/mwf13/squid/ssl_db is corrupted. Please rebuild






___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid SMP and SNMP

2015-03-18 Thread Eugene M. Zheganin
Hi.

I'm gathering statistics from squid using SNMP. When I use a single
process everything is fine, but when it comes to multiple workers SNMP
doesn't work - I get a timeout when trying to read data with snmpwalk.

I'm using the following tweak:

snmp_port 340${process_number}

both workers bind on ports 3401 and 3402 indeed, but then I got this
timeout.
Does anyone have a success story about squid SMP and SNMP ?

I wrote a message about this problem about a year or so ago, when it was 3.3.x,
but the situation didn't change.
Should I report this as a bug ?

Thanks.
Eugene.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] i want to block images with size more than 40 KB

2015-03-18 Thread snakeeyes
Hi Guys

I need help blocking images that have a size less than 40 KB

 

 

Any guidance or help will be appreciated

 

regards

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users