[squid-users] CentOS Linux 7 / Squid Cache: Version 3.5.20 / ecap clamav

2016-12-19 Thread bjoern wahl
Hello!

I would like to switch from SLES to a CentOS Squid proxy server and just
learned that ICAP is no longer up to date.

Better to use eCAP, but I am not able to find a good howto telling me how
to get it to work in my environment.

Can anybody help me out ?

Thanks, Björn.

Operator: Klinikum Westmünsterland GmbH
Registered office: Am Boltenhof 7, 46325 Borken
Commercial register: Coesfeld, HRB No. 4184 | VAT ID: DE123762133
Managing directors: Christoph Bröcker, Ludger Hellmann (spokesman)
Chairman of the supervisory board: Jürgen Büngeler

This e-mail contains confidential or legally protected information. If you
are not the intended recipient, please inform the sender immediately and
delete this e-mail. Unauthorised copying of this e-mail or passing on the
information it contains is not permitted.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squidcliente stopped working!

2016-12-19 Thread Amos Jeffries
On 20/12/2016 9:52 a.m., Sameh Onaissi wrote:
> 
>> On Dec 19, 2016, at 1:31 PM, Antony Stone wrote:
>>
>> On Monday 19 December 2016 at 17:44:11, Sameh Onaissi wrote:
>>
>>> Hello,
>>>
>>> I was using squid client to get cache stats, however this morning it
>>> completely stopped working.
>>
>>> <img src="http://mydomainname.com/squid/access_denied.jpg"
>>> alt="Acceso Denegado" style="width:704px;height:428px;">
>>
>>> the html code is the code of my redirect page whenever a client tries to
>>> access a blacklisted website.
>>
>> How big is your blacklist?  Could you show us what's in it?
>>
>> Have you added the proxy itself to the whitelist?
> 
> The blacklist consists of the ads, porn, socialnet and spyware lists of the 
> BL list. 
> 
> I added both LAN and WAN IPs of the server to the whitelist but didn’t help.
> 

What URL was being requested that got the above access denied response?

Use the -vv parameter to squidclient and "debug_options 11,2" in squid.conf
to have the request headers logged, and find that out.
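A minimal sketch of that combination (assuming squidclient is run on the proxy host itself):

```
# squid.conf: raise debug section 11 (HTTP) to level 2
debug_options ALL,1 11,2
```

Then apply it with `squid -k reconfigure`, repeat the failing `squidclient -vv mgr:info` request, and look in cache.log for the logged request headers.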


> So, I changed my default acl setting in squid guard config file to pass all 
> for now (I know it is not ideal), just to monitor the cache as I am trying to 
> get the HIT ratio up. (currently only at 7.8%)
>   
> squid guard config: pastebin.com/bbe8CWLE
> 

So your SG config just makes basic IP-, URL- and time-based allow or
redirect decisions.

I suggest you drop SG entirely and move that config into your squid.conf:


# Time rules
# abbrev for weekdays:
# s = sun, m = mon, t =tue, w = wed, h = thu, f = fri, a = sat
acl non-working-hours time MTWHF 18:00-24:00 00:00-08:00
acl non-working-hours time SA 00:00-24:00

# Source addresses
acl exempt src 10.0.0.90 10.0.0.167
acl youtubers src 10.0.0.1-10.0.0.4
acl localnet src 10.0.0.0/24

# Destination classes
acl blah_domains dstdomain "adv/domains"
acl blah_domains dstdomain "deny/domains"
acl blah_domains dstdomain "porn/domains"
acl blah_domains dstdomain "spyware/domains"
acl blah_domains dstdomain "socialnet/domains"

acl blah_urls dstdom_regex "adv/urls"
acl blah_urls dstdom_regex "deny/urls"
acl blah_urls dstdom_regex "porn/urls"
acl blah_urls dstdom_regex "spyware/urls"
acl blah_urls dstdom_regex "socialnet/urls"

acl stuff_always_blocked anyof blah_domains blah_urls

acl whitelist_domains dstdomain "whitelist/domains"
acl whitelist_urls dstdom_regex "whitelist/urls"
acl whitelist anyof whitelist_domains whitelist_urls
deny_info 302:http://example.com/squid/denegado.html whitelist

acl youtubers_domains dstdomain "socialnet/domains"
acl youtubers_urls dstdom_regex "adv/urls"
# renamed to avoid reusing "youtubers", which is already a src ACL above;
# one ACL name cannot have two different types
acl youtubers_sites anyof youtubers_domains youtubers_urls
deny_info 302:http://example.com/squid/denegado.html youtubers
deny_info 302:http://example.com/squid/denegado.html youtubers_sites

# Policies
http_access deny !localnet
deny_info 302:http://example.com/squid/denegado.html localnet

http_access allow exempt
http_access allow youtubers !youtubers_sites !stuff_always_blocked
http_access deny youtubers
http_access allow non-working-hours
http_access allow whitelist !stuff_always_blocked
http_access deny whitelist
http_access allow localnet

deny_info 302:http://example.com/squid/denegado.html all
http_access deny all
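After moving the rules over, it is worth letting Squid check the result before reloading (standard squid commands; paths as in your installation):

```
# parse-check the edited squid.conf, then apply it
squid -k parse && squid -k reconfigure
```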


> 
>>
>>> squid.conf: http://pastebin.com/TQ8H6bRp
>>
>> Quote from your config:
>>
>>  acl Safe_ports port 587 #SMTP
>>
>> Did you read Amos' reply "SMTP is the #1 worst protocol to let anywhere near 
>> an HTTP proxy.  Preventing what you have allowed to happen is one of the 
>> primary reasons Safe_ports exists in the first place!”
> 

> The reason I allow 587 is because the Squid proxy lives on the same
> server as a mail server which needs this port, and several clients have
> their mail clients (Outlook, etc.) already configured to use this port.

Bogus. You should know it is possible for two pieces of software to
run on one machine without interfering with each other.

Whether or not a mailserver exists on the same machine has nothing to do
with Squid.

Your mailserver itself should be using that port and controlling what
traffic can use it. *HTTP* traffic should never be allowed to flow from
the proxy software through to the mailserver software.

Amos



Re: [squid-users] sslpassword_program

2016-12-19 Thread creditu

On Sun, Dec 18, 2016, at 11:24 PM, Amos Jeffries wrote:
> On 19/12/2016 5:59 p.m., creditu wrote:
> > 
> > On Sun, Dec 18, 2016, at 01:21 PM, Michael Pelletier wrote:
> >> Check your file permissions on the key.
> >>
> >> On Dec 18, 2016 2:13 PM, creditu wrote:
> >>
> >>> I'm having trouble getting the sslpassword_program working for an
> >>> encrypted key.  Config looks like this:
> >>>
> >>> sslpassword_program /usr/local/bin/pass.sh
> >>> https_port 10.10.10.1:443 accel vhost cert=/etc/squid/www.crt
> >>> key=/etc/squid/private.key
> >>>
> >>> On start, cache log states "Ignoring https_port 10.10.10.1:443 due to
> >>> SSL initialization failure."
> >>> On stop, console states "Failed to acquire SSL private key
> >>> '/etc/squid/private.key': error:0200100D:system library:fopen:Permission
> >>> denied"
> >>>
> >>> Removing the passphrase from the private key, squid starts normally.
> >>> Permissions on the encrypted and non-encrypted keys are the same.  I
> >>> also tried putting the pass.sh program in /bin.  The pass.sh program
> >>> looks like this:
> >>> #!/bin/sh
> >>> echo "testing"
> >>>
> >>> The hash of the private key modulus and the certificate modulus match as
> >>> well.
> >>>
> >>> Am I missing something? This is on squid 3.1.
> 
> If the ideas below don't help, can you try an upgrade? There are a few
> fixes in 3.2 and 3.3 related to that directive.
> 
> > 
> > Checked the perms and they are identical to those of the private key that I
> > stripped the password out of.  They are also in the same directory.  The
> > one without a password works fine.
> 
> The one without a password is being opened by OpenSSL directly.
> 
> The one with a password is being opened in Squid's operating context, which
> should be root, but may also be the low-privilege proxy user at the time
> the script is run.
> 
> So you need the key file to be readable by whichever of those privilege
> contexts Squid is using at the time. (Sorry I can't be more precise, I'm
> not sure myself which is used in 3.1).
> 
> If you have SELinux or AppArmor they may also be interfering with the
> privileged access.
> 
> The script itself needs either executable permissions set, or squid.conf
> containing the full shell interpreter path as well as the script path.
>  ie. "sslpassword_program /bin/sh /usr/local/bin/pass.sh"
> 
> 
> >  Also tried encrypting with des3
> > versus aes128 and that didn't make a difference either.   Gotta be
> > missing something.
> 
> >  The error points to a perms problem, but not seeing
> > how since everything is the same.
> 
> The error message says fopen() command is not permitted for whichever
> user account is trying to access the .key file.
>  It's not clear if that is fopen() of the .key file, or fopen() of the
> pass.sh file before running it.
> 
> The way you describe the issues below hints to me that it is the
> permission to access the script which is breaking things.
> 
> 
> Also, those old Squid versions had some issues with processing errno at the
> wrong times. So there is a small but non-zero chance that the error is
> actually something else. :-(
> 
> 
> >  Also, added a line in the
> > sslpassword_program to touch a file to see if it got executed and it
> > didn't create the file. Additionally, ran the stat command on the 
> > /usr/local/bin/pass.sh after squid started up
> 
> FYI: That test only works if your filesystem has been configured to
> record access times. Using such a setup with Squid will cause major
> slowdown as cache-related files and logs get accessed *a lot*, so it is
> typically disabled via the fstab "noatime" setting if anyone with expertise
> has tuned the proxy machine before you.
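For reference, the tuning Amos mentions is an fstab mount option; a hypothetical entry for a dedicated cache partition (device, mount point and filesystem here are placeholders) could look like:

```
/dev/sdb1  /var/cache/squid  ext4  defaults,noatime  0  2
```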
> 
> 
> > and the access time never
> changes.  It seems like the shell script may not be getting executed for some
> reason.  I'm able to launch the shell script from the command line and
> it echoes out the pass fine.
> 
> This kind of implies the file permission problem is for Squid to open
> the script "file" before running what's inside.
> 
> Check /usr/local/bin/pass.sh ownership, executable rights, and
> SELinux/AppArmor permissions (whichever is present on that machine).
> 
> Amos

Thanks.  Worked down the list and the problem ended up being SELinux. 
Of course I would have sworn that it was not in enforcing mode.
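The helper setup Amos described can be reproduced in a few lines; a sketch using the thread's example script and "testing" passphrase (the /tmp path is arbitrary, not anything Squid mandates):

```shell
# Recreate the pass.sh helper from the thread.
cat > /tmp/pass.sh <<'EOF'
#!/bin/sh
echo "testing"
EOF

# The script must be executable by the user Squid runs helpers as;
# otherwise squid.conf needs the interpreter spelled out, e.g.
#   sslpassword_program /bin/sh /tmp/pass.sh
chmod 0755 /tmp/pass.sh

# Sanity check: the passphrase must appear on stdout.
/tmp/pass.sh
```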


Re: [squid-users] squidcliente stopped working!

2016-12-19 Thread Sameh Onaissi

> On Dec 19, 2016, at 1:31 PM, Antony Stone wrote:
> 
> On Monday 19 December 2016 at 17:44:11, Sameh Onaissi wrote:
> 
>> Hello,
>> 
>> I was using squid client to get cache stats, however this morning it
>> completely stopped working.
> 
>> <img src="http://mydomainname.com/squid/access_denied.jpg"
>> alt="Acceso Denegado" style="width:704px;height:428px;">
> 
>> the html code is the code of my redirect page whenever a client tries to
>> access a blacklisted website.
> 
> How big is your blacklist?  Could you show us what's in it?
> 
> Have you added the proxy itself to the whitelist?

The blacklist consists of the ads, porn, socialnet and spyware lists of the BL 
list. 

I added both LAN and WAN IPs of the server to the whitelist but didn’t help.

So, I changed my default acl setting in squid guard config file to pass all for 
now (I know it is not ideal), just to monitor the cache as I am trying to get 
the HIT ratio up. (currently only at 7.8%)

squid guard config: pastebin.com/bbe8CWLE



> 
>> squid.conf: http://pastebin.com/TQ8H6bRp
> 
> Quote from your config:
> 
>   acl Safe_ports port 587 #SMTP
> 
> Did you read Amos' reply "SMTP is the #1 worst protocol to let anywhere near 
> an HTTP proxy.  Preventing what you have allowed to happen is one of the 
> primary reasons Safe_ports exists in the first place!”

The reason I allow 587 is because the Squid proxy lives on the same server as a 
mail server which needs this port, and several clients have their mail clients 
(Outlook, etc.) already configured to use this port.

> 
> http://lists.squid-cache.org/pipermail/squid-users/2016-December/013776.html
> 
> By the way, what did you have to fix to prevent those public IP addresses 
> being 
> able to access your Squid proxy?

I basically let them get blocked by squid for a day or two and they stopped. I 
just allowed LAN source IPs.

> 
> http://lists.squid-cache.org/pipermail/squid-users/2016-December/013764.html
> 
> 
> Antony.
> 
> -- 
> Pavlov is in the pub enjoying a pint.
> The barman rings for last orders, and Pavlov jumps up exclaiming "Damn!  I 
> forgot to feed the dog!"
> 
>   Please reply to the list;
> please *don't* CC me.



Re: [squid-users] squidcliente stopped working!

2016-12-19 Thread Antony Stone
On Monday 19 December 2016 at 17:44:11, Sameh Onaissi wrote:

> Hello,
> 
> I was using squid client to get cache stats, however this morning it
> completely stopped working.

> http://mydomainname.com/squid/access_denied.jpg;
> alt="Acceso Denegado" style="width:704px;height:428px;">

> the html code is the code of my redirect page whenever a client tries to
> access a blacklisted website.

How big is your blacklist?  Could you show us what's in it?

Have you added the proxy itself to the whitelist?

> squid.conf: http://pastebin.com/TQ8H6bRp

Quote from your config:

acl Safe_ports port 587 #SMTP

Did you read Amos' reply "SMTP is the #1 worst protocol to let anywhere near 
an HTTP proxy.  Preventing what you have allowed to happen is one of the 
primary reasons Safe_ports exists in the first place!"

http://lists.squid-cache.org/pipermail/squid-users/2016-December/013776.html

By the way, what did you have to fix to prevent those public IP addresses being 
able to access your Squid proxy?

http://lists.squid-cache.org/pipermail/squid-users/2016-December/013764.html


Antony.

-- 
Pavlov is in the pub enjoying a pint.
The barman rings for last orders, and Pavlov jumps up exclaiming "Damn!  I 
forgot to feed the dog!"

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] squid.conf blocking live video stream

2016-12-19 Thread Robert Watson
The site I was having trouble with was video.foxnews.com.  The page loads
but the actual video hangs with the "spinning wheel of death".  I took Amos's
suggestion and added deny rules for the Via and X-Forwarded-For headers, and
that fixed the issue, but I was trying to create the anonymous "paranoid"
proxy setup initially, and Amos's suggestion won't achieve that.

On Mon, Dec 19, 2016 at 9:28 AM, Robert Watson wrote:

> The site I was having trouble with was video.foxnews.com.  I took Amos's
> suggestion and added deny rules for the Via and X-Forwarded-For headers, and
> that fixed the issue, but I was trying to create the anonymous "paranoid"
> proxy setup initially, and Amos's suggestion won't achieve that.
>
> On Sun, Dec 18, 2016 at 2:25 PM, Eliezer Croitoru wrote:
>
>> Hey Robert,
>>
>> Can you be more specific?
>> “Not working” can depend on a couple of things and on the nature of the
>> streaming system.
>> I know that many streaming sites do work under a transparent Squid, so it’s
>> not really clear what exactly is failing from the spectrum of
>> options.
>> Can you give examples for streaming sites that do work and others that do
>> not?
>> The first that pops in my mind to test it would be:
>> https://www.youtube.com/
>> https://www.crunchyroll.com/
>> https://rutube.ru/
>> And many others that are mentioned at:
>> http://www.unveiltech.com/indexsquidvideobooster.php (under Smart Cache)
>>
>> And take Amos's suggestion about restricting the headers more selectively.
>> Depending on your system policy, you may find that for most sites
>> you won’t have any issues letting any headers pass, but for selective sites
>> you would want to take another policy: block in general and
>> leave aside the specific "allowed" headers approach.
>>
>> Also, have you tried to disable the virus scan to verify if it’s the
>> culprit for the streaming issue?
>>
>> Please give one example so I and maybe others would be able to grasp the
>> issue in some way.
>>
>> Thanks,
>> Eliezer
>>
>> 
>> Eliezer Croitoru 
>> Linux System Administrator
>> Mobile: +972-5-28704261
>> Email: elie...@ngtech.co.il
>>
>>
>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
>> Behalf Of Robert Watson
>> Sent: Saturday, December 17, 2016 7:00 AM
>> To: squid-users@lists.squid-cache.org
>> Subject: [squid-users] squid.conf blocking live video stream
>>
>> Sorry if this shows up twice on the mailing list...
>> I've setup a transparent proxy squid v3.5.22 on a x86_64 Arch Linux
>> server.
>> The transparent proxy is working fine for web page caching but live video
>> isn't getting through.  I thought it was a netfilter issue but bypassing
>> the
>> proxy fixes this issue.
>>
>> acl localnet src 10.20.0.0/16  # RFC1918 possible internal network
>> acl SSL_ports port 443  # https
>> acl Safe_ports port 80  # http
>> acl Safe_ports port 554 # rtsp
>> acl Safe_ports port 1935# rtmp
>> acl Safe_ports port 21  # ftp
>> acl Safe_ports port 443 # https
>> acl Safe_ports port 1025-65535  # unregistered ports
>> acl CONNECT method CONNECT
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>> http_access allow localhost manager
>> http_access deny manager
>> http_access deny to_localhost
>> http_access allow localnet
>> http_access allow localhost
>> http_access deny all
>> visible_hostname server.ourhome.net
>> http_port 10.20.30.1:3128 intercept disable-pmtu-discovery=transparent
>> http_port 127.0.0.0:8181
>> coredump_dir /var/cache/squid
>> refresh_pattern ^ftp:    1440  20%  10080
>> refresh_pattern ^gopher: 1440  0%   1440
>> refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
>> refresh_pattern .        0     20%  4320
>> #
>> # Anonymous Proxy settings
>> include /etc/squid/extra/anonymous.conf
>> #
>> # Virus scanning via C-ICAP
>> #
>> include /etc/squid/extra/c-icap.conf
>> #
>>
>> By the process of elimination I've narrowed it down to the anonymous proxy
>> settings...
>> anonymous.conf
>>
>> forwarded_for off
>> request_header_access Allow allow all
>> request_header_access Authorization allow all
>> request_header_access WWW-Authenticate allow all
>> request_header_access Proxy-Authorization allow all
>> request_header_access Proxy-Authenticate allow all
>> request_header_access Cache-Control allow all
>> request_header_access Content-Encoding allow all
>> request_header_access Content-Length allow all
>> request_header_access Content-Type allow all
>> request_header_access Date allow all
>> request_header_access Expires allow all
>> request_header_access Host allow all
>> request_header_access If-Modified-Since allow all
>> request_header_access Last-Modified allow all
>> request_header_access Location allow all
>> request_header_access Pragma allow all
>> 

Re: [squid-users] squidcliente stopped working!

2016-12-19 Thread Alex Rousskov
On 12/19/2016 09:44 AM, Sameh Onaissi wrote:
> squid client returns numbers based on traffic on 3128 by default right?

No, the above statement is incorrect. The cache manager interface
reports whole-Squid statistics by default, including all listening ports.

Alex.



[squid-users] squidcliente stopped working!

2016-12-19 Thread Sameh Onaissi
Hello,

I was using squidclient to get cache stats; however, this morning it completely 
stopped working.


When I run squidclient mgr:info I get the following

HTTP/1.1 200 OK
Date: Mon, 19 Dec 2016 16:33:44 GMT
Server: Apache/2.4.7 (Ubuntu)
Last-Modified: Fri, 25 Nov 2016 16:55:22 GMT
ETag: "bd-54222fce80317"
Accept-Ranges: bytes
Content-Length: 189
Vary: Accept-Encoding
Content-Type: text/html
Age: 539
X-Cache: HIT from hostname
X-Cache-Lookup: HIT from hostname:3128
Via: 1.1 hostname (squid/3.5.22)
Connection: close






<img src="http://mydomainname.com/squid/access_denied.jpg" alt="Acceso 
Denegado" style="width:704px;height:428px;">





The HTML code is that of my redirect page, shown whenever a client tries to access 
a blacklisted website.

squid.conf: http://pastebin.com/TQ8H6bRp

Any idea how to fix this?


ON SIDE NOTE:
squidclient returns numbers based on traffic on 3128 by default, right? But in 
my intercept setup I have HTTPS traffic going through 3127 and ssl-bump on port 
3129. How can I account for all traffic being cached?

Thank you!
Sam




Re: [squid-users] Missing cache files

2016-12-19 Thread Odhiambo Washington
On 19 December 2016 at 16:06, Eliezer Croitoru  wrote:

> Did you notice these errors:
> FATAL: The ssl_crtd helpers are crashing too rapidly, need help!
>
> Squid Cache (Version 3.5.23): Terminated abnormally.
> CPU Usage: 63.837 seconds = 28.308 user + 35.529 sys
> Maximum Resident Size: 171488 KB
> Page faults with physical i/o: 154
> 2016/12/18 17:25:05| Set Current Directory to /opt/squid-3.5/var/logs/
> 2016/12/18 17:25:06| Starting Squid Cache version 3.5.23 for
> i386-unknown-freebsd9.3...
> 2016/12/18 17:25:06| Service Name: squid
> 2016/12/18 17:25:06| Process ID 2943
> 2016/12/18 17:25:06| Process Roles: master worker
> 2016/12/18 17:25:06| NOTICE: Could not increase the number of
> filedescriptors
> 2016/12/18 17:25:06| With 32768 file descriptors available
> 2016/12/18 17:25:06| Initializing IP Cache...
> 2016/12/18 17:25:06| DNS Socket created at [::], FD 10
> 2016/12/18 17:25:06| DNS Socket created at 0.0.0.0, FD 11
> 2016/12/18 17:25:06| Adding domain crownkenya.com from /etc/resolv.conf
> 2016/12/18 17:25:06| Adding nameserver 192.168.55.254 from /etc/resolv.conf
> 2016/12/18 17:25:06| helperOpenServers: Starting 5/15 'ssl_crtd' processes
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not
> permitted
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not
> permitted
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not
> permitted
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not
> permitted
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not
> permitted
> 2016/12/18 17:25:06| helperOpenServers: Starting 5/10 'perl' processes
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not
> permitted
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not
> permitted
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not
> permitted
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not
> permitted
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not
> permitted
> 2016/12/18 17:25:11| Logfile: opening log stdio:/opt/squid-3.5/var/logs/
> access.log
>
> 
>

Funnily enough, I did not see those.



> And it's good to know you are running FreeBSD 9.3...(32 bit..)
>

Yes, might soon become 10.3 or even 11.


>
> You need to fix the issues with the helpers before anything else since
> these are blockers for squid to operate right.
> The missing file is a side effect which happens at almost the same time.
> I would have started with looking at the lines:
> sslcrtd_program /opt/squid-3.5/libexec/ssl_crtd -s /opt/squid-3.5/ssl_db
> -M 4MB
> store_id_program /usr/local/bin/perl /opt/squid-3.5/scripts/store-id.pl


I have started by disabling the stuff to do with ssl_bump, etc., because
it's not practical to use it in this environment.



> And see what is causing this operation is not permitted.
> It can be rights or another issue but you must resolve it.
>

I doubt it is rights at all. I checked my /etc/devfs.conf, which is the
only other place I thought could have an issue, but it looks fine.



> And before diving hard into StoreID make sure your squid just runs fine
> with ssl bump.
>

I abandoned ssl bump because it wasn't practical in the environment.



> Then jump into StoreID and feel free to share your wishes for this
> service..(caching youtube, Microsoft updates etc..)
>
> Let me know if you need anything.
>
>
Sure. I will.




-- 
Best regards,
Odhiambo WASHINGTON,
Nairobi,KE
+254 7 3200 0004/+254 7 2274 3223
"Oh, the cruft."


Re: [squid-users] Missing cache files

2016-12-19 Thread Amos Jeffries
On 20/12/2016 2:06 a.m., Eliezer Croitoru wrote:
> Did you notice these errors:
> FATAL: The ssl_crtd helpers are crashing too rapidly, need help!
> 

Unfortunately that helper does not have much debug output to figure out why it's
crashing. There are a couple of bug reports open about this crashing,
but only after it has been running a while in some past execution.

The current workaround seems to be erasing the cert DB it uses and
re-generating it. Any help debugging that would be very welcome.


> Squid Cache (Version 3.5.23): Terminated abnormally.
> CPU Usage: 63.837 seconds = 28.308 user + 35.529 sys
> Maximum Resident Size: 171488 KB
> Page faults with physical i/o: 154
> 2016/12/18 17:25:05| Set Current Directory to /opt/squid-3.5/var/logs/
> 2016/12/18 17:25:06| Starting Squid Cache version 3.5.23 for 
> i386-unknown-freebsd9.3...
> 2016/12/18 17:25:06| Service Name: squid
> 2016/12/18 17:25:06| Process ID 2943
> 2016/12/18 17:25:06| Process Roles: master worker
> 2016/12/18 17:25:06| NOTICE: Could not increase the number of filedescriptors
> 2016/12/18 17:25:06| With 32768 file descriptors available
> 2016/12/18 17:25:06| Initializing IP Cache...
> 2016/12/18 17:25:06| DNS Socket created at [::], FD 10
> 2016/12/18 17:25:06| DNS Socket created at 0.0.0.0, FD 11
> 2016/12/18 17:25:06| Adding domain crownkenya.com from /etc/resolv.conf
> 2016/12/18 17:25:06| Adding nameserver 192.168.55.254 from /etc/resolv.conf
> 2016/12/18 17:25:06| helperOpenServers: Starting 5/15 'ssl_crtd' processes
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
> 2016/12/18 17:25:06| helperOpenServers: Starting 5/10 'perl' processes
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
> 2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
> 2016/12/18 17:25:11| Logfile: opening log 
> stdio:/opt/squid-3.5/var/logs/access.log
> 
> 
> And it's good to know you are running FreeBSD 9.3...(32 bit..)
> 
> You need to fix the issues with the helpers before anything else since these 
> are blockers for squid to operate right.
> The missing file is a side effect which happens at almost the same time.
> I would have started with looking at the lines:
> sslcrtd_program /opt/squid-3.5/libexec/ssl_crtd -s /opt/squid-3.5/ssl_db -M 
> 4MB
> store_id_program /usr/local/bin/perl /opt/squid-3.5/scripts/store-id.pl
> 
> And see what is causing this operation is not permitted.

That is a known bug on BSD. The child process fork()'d to run the helper
already has been down-privileged by Squid. On BSD systems that means the
test to see if it is still root fails - unfortunately loudly.

Amos



[squid-users] 4.0.17 assert http->storeEntry()->objectLen() >= headers_sz

2016-12-19 Thread Heiler Bemerguy


Hi guys, I have been getting crashes with this message:

2016/12/19 10:00:06 kid1| assertion failed: client_side_reply.cc:1167: 
"http->storeEntry()->objectLen() >= headers_sz"



Clean rock-store databases from 3 days ago:
cache_dir rock /cache  135000 min-size=0 max-size=12288 slot-size=12288
cache_dir rock /cache2 135000 min-size=12289 max-size=65536
cache_dir rock /cache3 135000 min-size=65537 max-size=262144
cache_dir rock /cache4 135000 min-size=262145

--
Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751



Re: [squid-users] Squid Websocket Issue

2016-12-19 Thread Hardik Dangar
Based on Amos's answer:

acl serverIsws ssl::server_name .w0.whatsapp.com
acl serverIsws ssl::server_name .w1.whatsapp.com

acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump !serverIsws all
ssl_bump splice all

Will the above work?

Or should I splice first and bump all the others later?

This is very interesting. I will definitely try this when I reach the
office.
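For comparison, a splice-first ordering in the style of the Squid wiki's peek-and-splice examples might look roughly like this (a sketch only, not verified against this setup; it splices the matched WhatsApp servers and bumps everything else):

```
acl serverIsws ssl::server_name .w0.whatsapp.com
acl serverIsws ssl::server_name .w1.whatsapp.com
acl step1 at_step SslBump1

ssl_bump peek step1
ssl_bump splice serverIsws
ssl_bump bump all
```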

On Mon, Dec 19, 2016 at 6:40 PM, Eliezer Croitoru 
wrote:

> I can give a hint: once you see the request you can identify, using an
> ICAP/eCAP service, a couple of details about the request.
> Basically I had a regex which allowed any WhatsApp traffic to be spliced
> by the SNI domain name.
> It should be something like "w[0-9]+\.web\.whatsapp\.com$" to match the
> required domains for WhatsApp to be spliced.
> If nobody will try it before me it's on my todo list for this release
> (3.5.23, 4.0.17).
>
> Eliezer
>
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
> Behalf Of Amos Jeffries
> Sent: Monday, December 19, 2016 8:51 AM
> To: Hardik Dangar 
> Cc: Squid Users 
> Subject: Re: [squid-users] Squid Websocket Issue
>
> On 19/12/2016 12:14 p.m., Hardik Dangar wrote:
> > Can you give me one example please?
> > Like in the above example,
> > the w4.web.whatsapp.com domain is fixed.
> > Are you suggesting I can create an acl and bypass it in Squid?
> >
>
> You are the first person to ask about WhatsApp traffic.
>
> These might be a useful starting point
>  Configuration_Examples>
>
> What the examples are doing for banks is what you want to do for WhatsApp.
>
> The trick though will be figuring out how to splice *before* seeing what
> type of HTTP request exists inside the tunnel. If you are lucky the app
> will be using SNI.
>
> Amos
>


Re: [squid-users] Squid Websocket Issue

2016-12-19 Thread Eliezer Croitoru
I can give a hint: once you see the request you can identify, using an 
ICAP/eCAP service, a couple of details about the request.
Basically I had a regex which allowed any WhatsApp traffic to be spliced by 
the SNI domain name.
It should be something like "w[0-9]+\.web\.whatsapp\.com$" to match the 
required domains for WhatsApp to be spliced.
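The suggested pattern can be sanity-checked outside Squid; a quick sketch in Python (the hostnames are made-up test values):

```python
import re

# the pattern suggested above for WhatsApp Web SNI names
pattern = re.compile(r"w[0-9]+\.web\.whatsapp\.com$")

print(bool(pattern.search("w4.web.whatsapp.com")))   # numbered host, matches
print(bool(pattern.search("w12.web.whatsapp.com")))  # any wN host, matches
print(bool(pattern.search("web.whatsapp.com")))      # no wN. prefix -> no match
```

Note the pattern is only end-anchored; on the Squid side the equivalent mechanism would presumably be an `ssl::server_name_regex` acl rather than the plain `ssl::server_name` type.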
If nobody will try it before me it's on my todo list for this release (3.5.23, 
4.0.17).
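
The splice-by-SNI idea above can be sketched as a squid.conf fragment
(untested; the ACL names are illustrative and the regex is taken from this
thread):

```
# Sketch only: splice WhatsApp Web TLS by SNI instead of bumping it.
acl step1 at_step SslBump1
acl whatsapp_sni ssl::server_name_regex -i w[0-9]+\.web\.whatsapp\.com$
ssl_bump peek step1
ssl_bump splice whatsapp_sni
ssl_bump bump all
```

With this, matching CONNECT tunnels are spliced after the peek at the
ClientHello, so the app's non-HTTP traffic inside the tunnel is left alone.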

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Amos Jeffries
Sent: Monday, December 19, 2016 8:51 AM
To: Hardik Dangar 
Cc: Squid Users 
Subject: Re: [squid-users] Squid Websocket Issue

On 19/12/2016 12:14 p.m., Hardik Dangar wrote:
> can you give me one example please ?
> like in the above example.
> w4.web.whatsapp.com domain is fixed
> are you suggesting i can create acl and by pass it to squid ?
> 

You are the first person to ask about WhatsApp traffic.

These might be a useful starting point


What the examples are doing for banks is what you want to do for WhatsApp.

The trick though will be figuring out how to splice *before* seeing what type 
of HTTP request exists inside the tunnel. If you are lucky the app will be 
using SNI.

Amos


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Missing cache files

2016-12-19 Thread Eliezer Croitoru
Did you notice these errors:
FATAL: The ssl_crtd helpers are crashing too rapidly, need help!

Squid Cache (Version 3.5.23): Terminated abnormally.
CPU Usage: 63.837 seconds = 28.308 user + 35.529 sys
Maximum Resident Size: 171488 KB
Page faults with physical i/o: 154
2016/12/18 17:25:05| Set Current Directory to /opt/squid-3.5/var/logs/
2016/12/18 17:25:06| Starting Squid Cache version 3.5.23 for 
i386-unknown-freebsd9.3...
2016/12/18 17:25:06| Service Name: squid
2016/12/18 17:25:06| Process ID 2943
2016/12/18 17:25:06| Process Roles: master worker
2016/12/18 17:25:06| NOTICE: Could not increase the number of filedescriptors
2016/12/18 17:25:06| With 32768 file descriptors available
2016/12/18 17:25:06| Initializing IP Cache...
2016/12/18 17:25:06| DNS Socket created at [::], FD 10
2016/12/18 17:25:06| DNS Socket created at 0.0.0.0, FD 11
2016/12/18 17:25:06| Adding domain crownkenya.com from /etc/resolv.conf
2016/12/18 17:25:06| Adding nameserver 192.168.55.254 from /etc/resolv.conf
2016/12/18 17:25:06| helperOpenServers: Starting 5/15 'ssl_crtd' processes
2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
2016/12/18 17:25:06| helperOpenServers: Starting 5/10 'perl' processes
2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
2016/12/18 17:25:06| WARNING: no_suid: setuid(0): (1) Operation not permitted
2016/12/18 17:25:11| Logfile: opening log 
stdio:/opt/squid-3.5/var/logs/access.log


And it's good to know you are running FreeBSD 9.3 (32-bit).

You need to fix the issues with the helpers before anything else, since
these are blockers that prevent Squid from operating correctly.
The missing file is a side effect which happens at almost the same time.
I would start by looking at these lines:
sslcrtd_program /opt/squid-3.5/libexec/ssl_crtd -s /opt/squid-3.5/ssl_db -M 4MB
store_id_program /usr/local/bin/perl /opt/squid-3.5/scripts/store-id.pl

And see what is causing the "Operation not permitted" errors.
It can be a permissions problem or another issue, but you must resolve it.
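
If the ssl_crtd certificate database itself is the problem, a typical
recovery looks like this (paths taken from the sslcrtd_program line above;
the squid:squid owner is an assumption, match it to your
cache_effective_user):

```
# Stop squid, then (re)initialize the ssl_crtd certificate database
# and hand it to the user squid runs as (user name assumed):
/opt/squid-3.5/sbin/squid -k shutdown
rm -rf /opt/squid-3.5/ssl_db
/opt/squid-3.5/libexec/ssl_crtd -c -s /opt/squid-3.5/ssl_db -M 4MB
chown -R squid:squid /opt/squid-3.5/ssl_db
```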
Before diving into StoreID, make sure your Squid runs fine with just
SSL Bump. Then move on to StoreID and feel free to share your wishes for
this service (caching YouTube, Microsoft updates, etc.).

Let me know if you need anything.

Eliezer


http://ngtech.co.il/lmgtfy/
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


From: Odhiambo Washington [mailto:odhia...@gmail.com] 
Sent: Monday, December 19, 2016 1:37 PM
To: Eliezer Croitoru 
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Missing cache files

Hi,

I have added details.txt and also fixed perms on cache.log

On 19 December 2016 at 14:10, Eliezer Croitoru  
wrote:
The file:
http://gw.crownkenya.com/~wash/3.5.22/cache.log.txt

isn't accessible.
Also missing details like OS version and other things.
It seems like a very simple setup and the first thing to check is permissions 
to the whole directory tree:
/opt/squid-3.5/var/cache

Please add the output of:
# ls -la /opt/squid-3.5/var/cache/

Eliezer


http://ngtech.co.il/lmgtfy/
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
Behalf Of Odhiambo Washington
Sent: Monday, December 19, 2016 10:11 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Missing cache files

Hi Eliezer,

I have put the files on this link: http://bit.ly/2h1bzqp


On 19 December 2016 at 01:38, Eliezer Croitoru 
 wrote:
Can you give more details on the setup? squid.conf, and cache.log dumps.
Files do not usually disappear by themselves, but it’s not clear what erased them.
If you do not have a swap file at /opt/squid-3.5/var/cache/ then you should
ask yourself how squid started at all.

Please fill up the missing pieces,
Eliezer


Eliezer Croitoru 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


From: squid-users 
[mailto:squid-users-boun...@lists.squid-cache.org] On
Behalf Of Odhiambo Washington
Sent: Saturday, December 17, 2016 12:41 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Missing cache files

Hi,

I keep seeing something that I think is odd. Squid has been exiting on

Re: [squid-users] cache_peer and PROXY protocol

2016-12-19 Thread Amos Jeffries
On 20/12/2016 12:44 a.m., David Touzeau wrote:
> 
> Hi
> 
> Squid accepts the PROXY protocol on http_port; is there a chance to see the
> PROXY protocol supported in cache_peer when you need to link 2 Squids?
> 

'a chance' only at this point unless somebody (you?) wants to sponsor
it. It is on my TODO list, way down under TLS improvements and HTTP/2
support - both big projects.
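
In the meantime two Squids can still be chained without the PROXY protocol,
at the cost of losing the original client address on the parent; a minimal
sketch for the child (the IP and port are assumptions):

```
# Child squid: forward all traffic to a parent squid at 192.0.2.10:3128.
# Client IPs are not conveyed; X-Forwarded-For is the usual workaround.
cache_peer 192.0.2.10 parent 3128 0 no-query default
never_direct allow all
```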

What is your use-case ?

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Crash: every 1-2 hour: kernel: Out of memory: Kill process (squid)

2016-12-19 Thread noc
Oh sorry, I missed some replies.

>I think you still have a forwarding loop. Does the Cisco WCCP send port
>443 connections from Squid to reach the Internet instead of sending them
>back into Squid?

interface TenGigabitEthernet0/2/0.501
 description for WCCP
 encapsulation dot1Q 501
 ip address 192.168.253.1 255.255.255.0
 no ip redirects
 no ip unreachables
 no ip proxy-arp
 ip nat inside
 ip wccp redirect exclude in

interface TenGigabitEthernet0/2/0.600
 description SQUID external IP
 encapsulation dot1Q 600
 ip address 1.1.1.65 255.255.255.192
 no ip redirects
 no ip proxy-arp
 ip wccp 70 redirect in

I'd change:
(config)# interface TenGigabitEthernet0/2/0.600
(config-subif)# no ip wccp 70 redirect in

Then restart squid and wait for results.
I'll report back.

--
Sergey


> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
> Behalf Of Amos Jeffries
> Sent: Thursday, December 15, 2016 7:52 AM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Crash: every 1-2 hour: kernel: Out of
> memory: Kill process (squid)
> 
> On 15/12/2016 6:24 a.m., n...@forceline.net wrote:
> > Eliezer, thanks for your reply. Guides:
> > http://wiki.squid-cache.org/Features/SslBump
> > http://wiki.squid-cache.org/Features/SslPeekAndSplice
> > https://habrahabr.ru/post/267851/  <-- Russian lang
> > https://habrahabr.ru/post/272733/  <-- Russian lang
> >
> >> First goes first change this: 13130:
> > Done, nothing changed. Squid died.
> >
> > Maybe it will work fine with lower load, even with https. But I don't
> > understand why it was killed by the kernel rather than just being
> > denied new memory.
> >
> > http://wiki.squid-cache.org/Features/SslBump
> >> Memory usage
> >>
> >> /!\ Warning: Unlike the rest of this page at the time of writing, this
> >> section applies to Squid-3.3 and possibly later code capable of dynamic
> >> SSL certificate generation and origin server certificate mimicking. The
> >> current section text is intended primarily for developers and early
> >> adopters facing excessive memory consumption in certain SslBump
> >> environments. These notes may be relocated elsewhere if a better
> >> location is found.
> >>
> >> Current documentation is specific to bump-server-first configurations.
> >
> > In attach server statistic.
> >
> 
> 
> I think you still have a forwarding loop. Does the Cisco WCCP send port
> 443 connections from Squid to reach the Internet instead of sending them
> back into Squid?
> 
> The Via header will protect against HTTP messages looping, but the TLS
> handshake traffic has no such protection.
> 
> Amos
> 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] cache_peer and PROXY protocol

2016-12-19 Thread David Touzeau

Hi

Squid accepts the PROXY protocol on http_port; is there a chance to see the
PROXY protocol supported in cache_peer when you need to link 2 Squids?
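
For reference, the receiving side that Squid already supports can be
sketched like this (the frontend address, ACL name, and port are
assumptions):

```
# Accept PROXY-protocol connections, but only from a trusted frontend.
acl frontend src 192.0.2.1
http_port 3129 require-proxy-header
proxy_protocol_access allow frontend
proxy_protocol_access deny all
```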

Best regards.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Missing cache files

2016-12-19 Thread Odhiambo Washington
Hi,

I have added details.txt and also fixed perms on cache.log
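
If permissions check out but the cache_dir tree itself is damaged or was
wiped, a common recovery (paths from this thread; the squid:squid owner is
an assumption, match it to your cache_effective_user) is to rebuild it:

```
# Stop squid, rebuild the cache_dir structure, fix ownership:
/opt/squid-3.5/sbin/squid -k shutdown
/opt/squid-3.5/sbin/squid -z
chown -R squid:squid /opt/squid-3.5/var/cache
```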

On 19 December 2016 at 14:10, Eliezer Croitoru  wrote:

> The file:
> http://gw.crownkenya.com/~wash/3.5.22/cache.log.txt
>
> isn't accessible.
> Also missing details like OS version and other things.
> It seems like a very simple setup and the first thing to check is
> permissions to the whole directory tree:
> /opt/squid-3.5/var/cache
>
> Please add the output of:
> # ls -la /opt/squid-3.5/var/cache/
>
> Eliezer
>
> 
> http://ngtech.co.il/lmgtfy/
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
> Behalf Of Odhiambo Washington
> Sent: Monday, December 19, 2016 10:11 AM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Missing cache files
>
> Hi Eliezer,
>
> I have put the files on this link: http://bit.ly/2h1bzqp
>
>
On 19 December 2016 at 01:38, Eliezer Croitoru <elie...@ngtech.co.il> wrote:
> Can you give more details on the setup? squid.conf, and cache.log dumps.
> Files do not usually disappear by themselves, but it’s not clear what erased
> them.
> If you do not have a swap file at /opt/squid-3.5/var/cache/ then you should
> ask yourself how squid started at all.
>
> Please fill up the missing pieces,
> Eliezer
>
> 
> Eliezer Croitoru 
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
> On
> Behalf Of Odhiambo Washington
> Sent: Saturday, December 17, 2016 12:41 PM
> To: squid-users@lists.squid-cache.org
> Subject: [squid-users] Missing cache files
>
> Hi,
>
> I keep seeing something that I think is odd. Squid has been exiting on
> signal 6, and I keep seeing this:
>
> root@gw:/usr/local/openssl # tail -f /opt/squid-3.5/var/logs/cache.log
> 2016/12/17 13:38:32| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2016/12/17 13:38:32|/opt/squid-3.5/var/cache/00/26/264D
> 2016/12/17 13:40:24| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2016/12/17 13:40:24|/opt/squid-3.5/var/cache/00/3B/3B56
> 2016/12/17 13:42:34| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2016/12/17 13:42:34|/opt/squid-3.5/var/cache/00/6B/6B0D
> 2016/12/17 13:43:36| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2016/12/17 13:43:36|/opt/squid-3.5/var/cache/00/00/0050
> 2016/12/17 13:44:25| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2016/12/17 13:44:25|/opt/squid-3.5/var/cache/00/AF/AFF1
>
> So, what could be making the files disappear?
>
>
> --
> Best regards,
> Odhiambo WASHINGTON,
> Nairobi,KE
> +254 7 3200 0004/+254 7 2274 3223
> "Oh, the cruft."
>
>
>
>
> --
> Best regards,
> Odhiambo WASHINGTON,
> Nairobi,KE
> +254 7 3200 0004/+254 7 2274 3223
> "Oh, the cruft."
>
>


-- 
Best regards,
Odhiambo WASHINGTON,
Nairobi,KE
+254 7 3200 0004/+254 7 2274 3223
"Oh, the cruft."
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users