Re: [squid-users] Two connections per client

2016-03-15 Thread Amos Jeffries
On 16/03/2016 12:38 p.m., Chris Nighswonger wrote:
> Why does netstat show two connections per client connection to Squid:
> 
> tcp0  0 127.0.0.1:3128  127.0.0.1:34167
> ESTABLISHED
> tcp0  0 127.0.0.1:34167 127.0.0.1:3128
> ESTABLISHED
> 
> In this case, there is a content filter running in front of Squid on the
> same box. The same netstat command filtered on the content filter port
> shows only one connection per client:
> 
> tcp0  0 192.168.x.x:8080  192.168.x.y:1310   ESTABLISHED
> 

Details of your Squid configuration are needed to answer that.
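It may also help to rerun netstat with the owning-process column shown (assuming a Linux netstat; -p needs root), so you can see which local program holds each end of that loopback pair:

 sudo netstat -tnp | grep 3128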

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] gzip deflate

2016-03-15 Thread Amos Jeffries
On 16/03/2016 11:26 a.m., joe wrote:
> Any way of having decompression in the future?
> There are lots of nice public sources using zlib that could help make
> Squid better.
> 

There is an eCAP module for that. Most reports have been that adding
compression makes traffic slower.
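If you want to experiment anyway, an eCAP adapter is wired into squid.conf roughly like this - only a sketch; the module path and the ecap:// service URI are placeholders that must match whatever the adapter you build actually installs and documents:

 loadable_modules /usr/local/lib/ecap_adapter_gzip.so
 ecap_enable on
 ecap_service gzip_svc respmod_precache ecap://example.org/ecap_gzip bypass=on
 adaptation_access gzip_svc allow all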

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Sudden but sustained high bandwidth usage

2016-03-15 Thread Amos Jeffries
On 16/03/2016 6:51 a.m., Heiler Bemerguy wrote:
> 
> Hi joe, Eliezer, Amos.. today I saw something different regarding high
> bandwidth and caching of windows updates ranged requests..
> 
> A client begins a windows update, it does a:
> HEAD to check size or something, which is ok..
> then a ranged GET, which outputs a TCP_MISS/206,
> then the next GET gives a TCP_SWAPFAIL_MISS/206.
> Lots of other TCP_MISS/206, then in the end, TCP_MISS_ABORTED/000
> 
> A large number of parallel connections are being made because of each
> GET.. ok, I know squid can't do much about it.. but then, why doesn't the
> content get cached in the end?
> I mean, the way it is, it will happen every day.. are these
> "TCP_MISS_ABORTED" really the client aborting the download? I doubt it...
> 

It is. These things happen a lot more often than you might expect. All
it takes is Squid not knowing exactly where the object ends, or a brief
network outage, and it will hang for a while (after finishing) until the
client disconnects.


> Take a look and see if you can understand:

I can. The big question is whether *you* understand what is going on.
This log extract shows a nice sequential series of Range requests being
fetched and satisfied, until at the end the server stops providing data
and the client disconnects after a ~5min timeout.

Bandwidth consumed is presumably the correct amount for those downloads.
The log does not record server or total bandwidth consumption, only the
client-delivered payload size. So it naturally gives few hints about
what your "high bandwidth" problem actually is.

Also, you have logged in "human" format, which makes the time-differential
math needed to figure out whether those are sequential or parallel
transactions VERY difficult.
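Switching the access log back to the built-in "squid" logformat gives UTC epoch timestamps plus the elapsed milliseconds, which makes that arithmetic trivial; a one-line sketch, assuming the stock log location:

 access_log daemon:/var/log/squid/access.log squid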

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] access from same ID and different IP addresses.

2016-03-15 Thread asakura
Hello,

Recently, in our environment, the CPU load on the Squid proxy server
has been spiking to 100%.

Example:
PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
29767 squid 20   0 1430m 1.3g 5332 R 99.1 17.4   6836:56 squid
16856 squid 20   0 29764 3280 1620 S  2.0  0.0  68:46.34 squid_kerb_auth
16860 squid 20   0 29760 3272 1616 S  1.7  0.0  43:53.67 squid_kerb_auth
16855 squid 20   0 22636 1244 1000 S  0.3  0.0   2:57.66 negotiate_wrapp
21437 asakura   20   0 15432 1632  932 R  0.3  0.0   0:01.02 top
26167 root  20   0 19088 2248 1060 S  0.3  0.0   1016:14 syslog-ng
---

As a result of our investigation, we suspect that the CPU load reaches
100% when a user attempts to log in from many different IP addresses.

This time, Squid was accessed from 20 or more PCs using the same user ID.
When we disable user authentication for the target segment, the CPU load
stays low.

We would like to know whether the CPU load goes up when Squid is accessed
from a large number of different IP addresses with the same user ID.

Our environment is as follows:
- squid-3.5.1 with squid_kerb_auth (sorry, old version...) x 5 servers
- using a BIG-IP LTM load balancer
- "follow_x_forwarded_for" option enabled
- about 5300 user IDs
- about 6300 IP addresses
- most user authentication is Active Directory (Kerberos), only a little NTLM
- normally, CPU load is about 20%

Regards,
Kazuhiro
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Two connections per client

2016-03-15 Thread Chris Nighswonger
Why does netstat show two connections per client connection to Squid:

tcp0  0 127.0.0.1:3128  127.0.0.1:34167
ESTABLISHED
tcp0  0 127.0.0.1:34167 127.0.0.1:3128
ESTABLISHED

In this case, there is a content filter running in front of Squid on the
same box. The same netstat command filtered on the content filter port
shows only one connection per client:

tcp0  0 192.168.x.x:8080  192.168.x.y:1310   ESTABLISHED

Thanks,
Chris
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] gzip deflate

2016-03-15 Thread joe
Any way of having decompression in the future?
There are lots of nice public sources using zlib that could help make
Squid better.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/gzip-deflate-tp4676698.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Sudden but sustained high bandwidth usage

2016-03-15 Thread Eliezer Croitoru

Hey,

Your words describe the BUG in its wildest and simplest form.
Please file a bug report so the progress can be followed.
Writing more and more here will not really help as it is.

Eliezer

On 15/03/2016 19:51, Heiler Bemerguy wrote:


Hi joe, Eliezer, Amos.. today I saw something different regarding high
bandwidth and caching of windows updates ranged requests..

A client begins a windows update, it does a:
HEAD to check size or something, which is ok..
then a ranged GET, which outputs a TCP_MISS/206,
then the next GET gives a TCP_SWAPFAIL_MISS/206.
Lots of other TCP_MISS/206, then in the end, TCP_MISS_ABORTED/000

A large number of parallel connections are being made because of each
GET.. ok, I know squid can't do much about it.. but then, why doesn't the
content get cached in the end?
I mean, the way it is, it will happen every day.. are these
"TCP_MISS_ABORTED" really the client aborting the download? I doubt it...

Take a look and see if you can understand:


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid eat bandwidth

2016-03-15 Thread HackXBack
My squid server has one ethernet interface.
The squid box is connected to a MikroTik RouterOS device, which has the
users connected to it, and on the MikroTik I can redirect the users'
port 80 traffic to the squid server.
Now I see that squid pulls more internet traffic than it delivers to the
users; in other words it takes more bandwidth than it gives, so it eats
bandwidth.
Another thing: if I stop the port 80 redirection, squid stops delivering
bandwidth to the users, which is expected, but squid keeps pulling
bandwidth for about an hour. That means squid is still fetching files
until it finishes them.
And I don't use range_offset_limit at all, which is the setting that can
cause this problem.
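Should I set the quick_abort_* directives so that squid drops the server-side fetch as soon as the client is gone? For example something like this (just my guess from the docs, not tested):

quick_abort_min 0 KB
quick_abort_max 0 KB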



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-eat-bandwidth-tp4676641p4676696.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Sudden but sustained high bandwidth usage

2016-03-15 Thread Heiler Bemerguy


Hi joe, Eliezer, Amos.. today I saw something different regarding high 
bandwidth and caching of windows updates ranged requests..


A client begins a windows update, it does a:
HEAD to check size or something, which is ok..
then a ranged GET, which outputs a TCP_MISS/206,
then the next GET gives a TCP_SWAPFAIL_MISS/206.
Lots of other TCP_MISS/206, then in the end, TCP_MISS_ABORTED/000

A large number of parallel connections are being made because of each 
GET.. ok, I know squid can't do much about it.. but then, why doesn't the 
content get cached in the end?
I mean, the way it is, it will happen every day.. are these 
"TCP_MISS_ABORTED" really the client aborting the download? I doubt it...


Take a look and see if you can understand:

[Tue Mar 15 12:36:12 2016].159 95 10.88.100.100 TCP_MISS/200 508 
HEAD 
http://fg.v4.download.windowsupdate.com/msdownload/update/software/crup/2012/11/windows8-rt-kb2769165-x64_e97d07dda97d56be1561ece773aaff90fb61f7e6.psf 
- HIER_DIRECT/13.107.4.50 application/octet-stream
[Tue Mar 15 12:36:36 2016].321  23798 10.88.100.100 TCP_MISS/206 5441 
GET 
http://fg.v4.download.windowsupdate.com/msdownload/update/software/crup/2012/11/windows8-rt-kb2769165-x64_e97d07dda97d56be1561ece773aaff90fb61f7e6.psf 
- HIER_DIRECT/13.107.4.50 application/octet-stream
[Tue Mar 15 12:37:00 2016].353  24026 10.88.100.100 
TCP_SWAPFAIL_MISS/206 14155 GET 
http://fg.v4.download.windowsupdate.com/msdownload/update/software/crup/2012/11/windows8-rt-kb2769165-x64_e97d07dda97d56be1561ece773aaff90fb61f7e6.psf 
- HIER_DIRECT/13.107.4.50 application/octet-stream
[Tue Mar 15 12:37:39 2016].293  38939 10.88.100.100 TCP_MISS/206 22954 
GET 
http://fg.v4.download.windowsupdate.com/msdownload/update/software/crup/2012/11/windows8-rt-kb2769165-x64_e97d07dda97d56be1561ece773aaff90fb61f7e6.psf 
- HIER_DIRECT/13.107.4.50 application/octet-stream
[Tue Mar 15 12:38:04 2016].711  25415 10.88.100.100 TCP_MISS/206 31488 
GET 
http://fg.v4.download.windowsupdate.com/msdownload/update/software/crup/2012/11/windows8-rt-kb2769165-x64_e97d07dda97d56be1561ece773aaff90fb61f7e6.psf 
- HIER_DIRECT/13.107.4.50 application/octet-stream
[Tue Mar 15 12:38:40 2016].959  36243 10.88.100.100 TCP_MISS/206 33880 
GET 
http://fg.v4.download.windowsupdate.com/msdownload/update/software/crup/2012/11/windows8-rt-kb2769165-x64_e97d07dda97d56be1561ece773aaff90fb61f7e6.psf 
- HIER_DIRECT/13.107.4.50 application/octet-stream
[Tue Mar 15 12:39:19 2016].051  38089 10.88.100.100 TCP_MISS/206 38228 
GET 
http://fg.v4.download.windowsupdate.com/msdownload/update/software/crup/2012/11/windows8-rt-kb2769165-x64_e97d07dda97d56be1561ece773aaff90fb61f7e6.psf 
- HIER_DIRECT/13.107.4.50 application/octet-stream
[Tue Mar 15 12:39:27 2016].767   8714 10.88.100.100 TCP_MISS/206 38826 
GET 
http://fg.v4.download.windowsupdate.com/msdownload/update/software/crup/2012/11/windows8-rt-kb2769165-x64_e97d07dda97d56be1561ece773aaff90fb61f7e6.psf 
- HIER_DIRECT/13.107.4.50 application/octet-stream
[Tue Mar 15 12:40:36 2016].177  68404 10.88.100.100 TCP_MISS/206 42085 
GET 
http://fg.v4.download.windowsupdate.com/msdownload/update/software/crup/2012/11/windows8-rt-kb2769165-x64_e97d07dda97d56be1561ece773aaff90fb61f7e6.psf 
- HIER_DIRECT/13.107.4.50 application/octet-stream
[Tue Mar 15 12:43:01 2016].193 145011 10.88.100.100 TCP_MISS/206 47404 
GET 
http://fg.v4.download.windowsupdate.com/msdownload/update/software/crup/2012/11/windows8-rt-kb2769165-x64_e97d07dda97d56be1561ece773aaff90fb61f7e6.psf 
- HIER_DIRECT/13.107.4.50 application/octet-stream
[Tue Mar 15 12:44:06 2016].563  65368 10.88.100.100 TCP_MISS/206 52196 
GET 
http://fg.v4.download.windowsupdate.com/msdownload/update/software/crup/2012/11/windows8-rt-kb2769165-x64_e97d07dda97d56be1561ece773aaff90fb61f7e6.psf 
- HIER_DIRECT/13.107.4.50 application/octet-stream
[Tue Mar 15 12:45:58 2016].017 111451 10.88.100.100 TCP_MISS/206 52577 
GET 
http://fg.v4.download.windowsupdate.com/msdownload/update/software/crup/2012/11/windows8-rt-kb2769165-x64_e97d07dda97d56be1561ece773aaff90fb61f7e6.psf 
- HIER_DIRECT/13.107.4.50 application/octet-stream
[Tue Mar 15 12:47:18 2016].578  80558 10.88.100.100 TCP_MISS/206 51278 
GET 
http://fg.v4.download.windowsupdate.com/msdownload/update/software/crup/2012/11/windows8-rt-kb2769165-x64_e97d07dda97d56be1561ece773aaff90fb61f7e6.psf 
- HIER_DIRECT/13.107.4.50 application/octet-stream
[Tue Mar 15 12:49:44 2016].069 145484 10.88.100.100 TCP_MISS/206 52524 
GET 
http://fg.v4.download.windowsupdate.com/msdownload/update/software/crup/2012/11/windows8-rt-kb2769165-x64_e97d07dda97d56be1561ece773aaff90fb61f7e6.psf 
- HIER_DIRECT/13.107.4.50 application/octet-stream
[Tue Mar 15 12:50:41 2016].665  57594 10.88.100.100 TCP_MISS/206 53211 
GET 
http://fg.v4.download.windowsupdate.com/msdownload/update/software/crup/2012/11/windows8-rt-kb2769165-x64_e97d07dda97d56be1561ece773aaff90fb61f7e6.psf 
- HIER_DIRECT/13.107.4.50 application/octet-stream
[Tue Mar 15 12:51:50 2016].167  68500 10.88

Re: [squid-users] how i will avoid the warning info ? "This cache hit is still fresh and more than 1 day old"

2016-03-15 Thread joe
If you don't want your clients to see a specific reply header, just add
this. Be warned: denying some headers might break your clients'
browsing.
Example:
reply_header_access Warning deny all
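If you only want to hide it for some sites, the directive also takes an acl list, for example (the domain here is just an illustration):

acl quiet_sites dstdomain .example.com
reply_header_access Warning deny quiet_sites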



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-i-will-avoid-the-warning-info-This-cache-hit-is-still-fresh-and-more-than-1-day-old-tp4676683p4676694.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid with LDAP-authentication: bypass selected URLs

2016-03-15 Thread FredB
I guess you have an acl with proxy_auth ?
Something like acl ldapauth proxy_auth REQUIRED ?

So you can just add http_access allow ldapauth !pdfdoc and perhaps http_access 
allow pdfdoc after
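Spelled out, one safe ordering looks like this (acl names taken from this thread; the exempt destinations are allowed before anything that evaluates the auth acl, so no 407 challenge is sent for them):

acl ldapauth proxy_auth REQUIRED
acl pdfdoc dstdomain "/etc/squid/urlListe"

# exempt destinations first, no credentials asked for them
http_access allow pdfdoc
# everything else requires a valid LDAP login
http_access allow ldapauth
http_access deny all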

Fred

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Bandwidth control with delay pool

2016-03-15 Thread FredB
You can easily do this with an acl; delay pools are a very powerful tool.

E.g.:

64k of bandwidth for each authenticated user, except for the acl bp, and only
during the times included in the acl desk:

acl my_ldap_auth proxy_auth REQUIRED 
acl bp dstdom_regex "/etc/squid/limit"

acl desk time 09:00-12:00
acl desk time 13:30-16:00

delay_pools 1
delay_class 1 4
delay_access 1 allow my_ldap_auth desk !bp
delay_parameters 1 -1/-1 -1/-1 -1/-1 64000/64000

Be careful, a recent version is needed (squid 3.5) to avoid some bugs with https
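For the Facebook-only 10K limit from the original question, a simpler class 1 pool (one bucket shared by all matching traffic) can also do the job; a rough sketch where the domain list and the 10000 bytes/sec figure are only examples:

acl facebook dstdomain .facebook.com .fbcdn.net
delay_pools 1
delay_class 1 1
delay_access 1 allow facebook
delay_access 1 deny all
delay_parameters 1 10000/10000

Class 1 throttles all matching clients together; class 2 or 3 is needed if each host should get its own bucket.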

Fred
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Bandwidth control with delay pool

2016-03-15 Thread Luigi Kurihara
Good afternoon,

I need to control the bandwidth in Squid for certain domains/IPs/servers.
My client needs me to reduce the bandwidth on his network for employees
using Facebook, e.g. Facebook limited to 10K while all other sites stay
at normal speed.
I have read many tutorials and searched Google; one solution I found is
TC (Traffic Control), but I really want to do it in Squid.
Can anyone help me?

Thanks for any explanation.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid with LDAP-authentication: bypass selected URLs

2016-03-15 Thread Verwaiser
Hello,
we use user authentication against an LDAP server.
We want to use a PDF document which connects to an internet address
(europa.eu) for a kind of examination. The PDF doesn't ask for
proxy authentication, so I tried to bypass authentication in Squid using ACLs like:

acl alle src 0.0.0.0/0.0.0.0
acl pdfdoc dstdomain "/etc/squid/urlListe"
http_access allow pdfdoc alle

with entries "europa.eu" and "*.europa.eu" and some more in the file
urlListe 

Also I tried:

acl CONNECT method CONNECT
acl wuCONNECT dstdomain webgate.ec.europa.eu
http_access allow CONNECT wuCONNECT

The result is always the same: Acrobat Reader reports "connection
failed".


In access.log I find:
192.168.12.23 - - [15/Mar/2016:10:32:37 +0100] "GET
http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en/disallowedcertstl.cab?
HTTP/1.1" 407 2066 "-" "Microsoft-CryptoAPI/6.1" TCP_DENI
ED:NONE
192.168.12.23 - - [15/Mar/2016:10:32:37 +0100] "GET
http://ocsp.globalsign.com/rootr1/MEwwSjBIMEYwRDAJBgUrDgMCGgkUNl8qJUC99BM00qP%2F8%2FUsCCwQAAURO8EJH
HTTP/1
.1" 407 2219 "-" "Microsoft-CryptoAPI/6.1" TCP_DENIED:NONE
192.168.12.23 - - [15/Mar/2016:10:32:37 +0100] "GET
http://crl.globalsign.net/root.crl HTTP/1.1" 407 1889 "-"
"Microsoft-CryptoAPI/6.1" TCP_DENIED:NONE
192.168.12.23 - - [15/Mar/2016:10:32:37 +0100] "GET
http://ocsp2.globalsign.com/gsorganizationvalsha2g2/MFMwUTBPMEBl7BwQUlt5h8b0cFilTHMDMfTuDAEDmGnwCEhEhiMXAk3Q
3QqEElr8w7e7kcA%3D%3D HTTP/1.1" 407 2303 "-" "Microsoft-CryptoAPI/6.1"
TCP_DENIED:NONE
192.168.12.23 - - [15/Mar/2016:10:32:37 +0100] "GET
http://crl.globalsign.com/gs/gsorganizationvalsha2g2.crl HTTP/1.1" 407 1955
"-" "Microsoft-CryptoAPI/6.1" TCP_DENIED:NONE
192.168.12.23 - - [15/Mar/2016:10:32:37 +0100] "CONNECT
webgate.ec.europa.eu:443 HTTP/1.0" 200 3154 "-" "Mozilla/3.0 (compatible;
Acrobat 5.0; Windows)" TCP_MISS:DIRECT

Any idea whether I can do something in squid.conf to allow the connection?

Holger

PS: Using the internet at home, without Squid, the PDF document works fine.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-with-LDAP-authentication-bypass-selected-URLs-tp4676689.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] how i will avoid the warning info ? "This cache hit is still fresh and more than 1 day old"

2016-03-15 Thread Amos Jeffries
On 16/03/2016 1:19 a.m., johnzeng wrote:
> 
> Hello Dear Sir :
> 
> I found a warning via Firebug; how can I avoid this warning?



Is the statement made in the header incorrect?

> 
> Age 474416
> Cache-Control max-age=31536
> Content-Length 1556
> Content-Type image/jpeg
> Date Sat, 05 Mar 2016 01:38:36 GMT
> Expires Thu, 31 Dec 2037 23:55:55 GMT
> Last-Modified Wed, 25 Mar 2015 13:00:08 GMT

> Warning 113 squid_cache2 (squid/3.5.2) This cache hit is still fresh and
> more than 1 day old


Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FreeBSD and Kerberos: RC4 keytabs work, AES256 don't

2016-03-15 Thread Victor Sudakov
Marko Cupać wrote:
> 
> I am setting up new AD-integrated squid server, so I thought I might as
> well upgrade kerberos crypto on keytabs.
> 
> It seems that, at least on FreeBSD 10.2-RELEASE-p13, squid-3.5.15
> compiled with GSSAPI_BASE (kerberos from base system) can't
> authenticate users via kerberos using AES256 keytabs.
> 
> Testing with kinit works, but squid auth does not. I am getting these
> in cache.log:
> BH gss_accept_sec_context() failed:  Miscellaneous failure (see text).
> unknown mech-code 0 for mech unknown

What is the encryption type of the ticket (for HTTP/proxy@YOUR.REALM)
that the Windows KDC gives you? You can figure this out with klist.exe or
kerbtray.exe.
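For example, from a command prompt on the client machine:

 C:\> klist

and look for the "KerbTicket Encryption Type" line of the HTTP/ service ticket (the exact wording varies a little between Windows versions).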

In my case, the Windows KDC never issues an AES256 ticket for some
reason, even if the squid service principal has one in the AD.

-- 
Victor Sudakov,  VAS4-RIPE, VAS47-RIPN
sip:suda...@sibptus.tomsk.ru
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Windows Installer

2016-03-15 Thread Rafael Akchurin
Hello Patrick,

ROOTDRIVE is an MSI property, not an environment variable. See for example at 
https://msdn.microsoft.com/en-us/library/windows/desktop/aa367988%28v=vs.85%29.aspx
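For example, to force the install onto the C: drive you can pass the property straight to msiexec (the MSI file name below is just a placeholder; note the trailing backslash, ROOTDRIVE expects a drive root path):

 msiexec /i squid.msi ROOTDRIVE=C:\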

Best regards,
Rafael Akchurin
Diladele B.V.

--
Please take a look at Web Safety - our ICAP based web filter server for Squid 
proxy


From: Patrick Flaherty [mailto:patrick.flahe...@verizon.net]
Sent: Sunday, March 13, 2016 6:08 PM
To: Rafael Akchurin 
Subject: RE: [squid-users] Squid Windows Installer

Hi Rafael,

Thank you for your response.

How do I use the ROOTDRIVE variable?

Do I set ROOTDRIVE as an environment variable prior to calling the MSI? If so 
and I always want the C drive, do I set:

ROOTDRIVE=C
or
ROOTDRIVE=C:

Thank You for your quick response.
Patrick

From: Rafael Akchurin [mailto:rafael.akchu...@diladele.com]
Sent: Sunday, March 13, 2016 12:33 PM
To: vze2k...@verizon.net; 
squid-users@lists.squid-cache.org
Subject: RE: [squid-users] Squid Windows Installer

Hi Patrick,

Yes, this is the default behavior of Wix/InstallShield that the disk with the 
most space is picked up for installation.
You can override this behavior by directly specifying ROOTDRIVE variable during 
installation using msiexec.

Best regards,
Rafael Akchurin
Diladele B.V.

--
Please take a look at Web Safety - our ICAP based web filter server for Squid 
proxy



From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of vze2k...@verizon.net
Sent: Saturday, March 12, 2016 11:04 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid Windows Installer

Hi,

The Squid Windows installer defaults to F:\squid on my machine where I have a 
C,  D (CD),  E (Windows created Recovery Disk) and F (My USB Backup Drive). Why 
did the installer pick the F drive by default? I'm writing an installer that 
wraps around the squid msi installer and this causes problems that I do not 
think I can control. I thought it would always default to C:\Squid. Maybe it is 
selecting the drive with the most free space, which my F drive has?

Any help or guidance here would be greatly appreciated. Thank you Diladele for 
producing this installer.

Best,
Patrick


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squidGuard: redirect to squid-internal URLs no longer working with 3.5?

2016-03-15 Thread Silamael
On 03/15/2016 12:52 PM, Amos Jeffries wrote:
>> So, if I try this, I get a 404 response and the ERR_INVALID_REQ page.
> 
> Okay. That is the correct behaviour for this situation.
> Squid does not normally load anything at the
> /squid-internal-static/error-access-denied path.
> 
> A second bug / undefined behaviour that got fixed in 3.5 was incorrect
> HTTP status codes being delivered on cachemgr and internal responses. It
> now returns 404 Not Found when an internal URL does not point at an
> object, rather than "access denied" - because access wasn't denied; the
> file was not found.
> 
> The /squid-internal-static/* objects to be loaded are configured in the
> squid /etc/squid/mime.conf configuration file. (though some OS distros
> move it out of /etc/squid for some reason).
> 
> However, that would just make the response a 200 OK with the internal
> object as the payload. If you want to retain 403 you need to block the
> re-written URL from being serviced by Squid:
> 
>  acl SG_deny urlpath_regex ^/squid-internal-static/error-access-denied$
>  adapted_http_access deny SG_deny
> 
> (If that doesn't work you can use miss_access instead)
> 
> 
>> With debugging I can see that there is a request like
>> GET ://127.0.0.1:3128/squid-internal-static/error-access-denied
>> This is indeed an invalid request...
> 
> Nod. Though the internal ID being used is correct, so it's only the
> output display and upstream messages which are broken. I'm looking into
> that now, but the fix won't change anything relating to your actual
> problem. You will still need to use the above config settings to trigger
> a 403.
> 
> 
>> If I use http:// instead of internal:// the whole request is forwarded
>> to the upstream cache peer and again replied with ERR_INVALID_REQ...
>> With
>> internal://$visible_hostname:3128/squid-internal-static/error-access-denied
>> the response is a 400 Bad Request with ERR_UNSUP_REQ.
>> According to debugging here again the internal schema is not passed
>> along when building the GET request.
>>
>> BTW, the old URL I used worked for years!
> 
> The problem with relying on undefined behaviour is that it can work for
> a long time then disappear without warning.
> 
> What I think was going on previously was the parser handling the
> URL-rewriter output accepted the URL with 'squid-internal-static' as
> hostname. When Squid got to the upstream forwarding stage the check for
> internal status did a sub-string check and (wrongly) found
> "/squid-internal-". So decided to handle it as internal. But the
> internal-server logics could not find any object and (wrongly) generated
> a 403 error page.
> 
> So a chain of at least 2 bugs being relied on to produce a 403 Access
> Denied, when it should not have. We have now fixed those bugs, so what
> you had relying on them goes splat.
> 
> Amos
> 

Hi Amos,

Many thanks for these clarifications.

Greetings,
Matthias
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Core dump / xassert on FwdState::unregister (3.5.15)

2016-03-15 Thread squid
On 2016-03-15 09:40, sq...@peralex.com wrote:
> On 2016-03-15 09:05, Amos Jeffries wrote:
>> On 15/03/2016 7:34 p.m., squid wrote:
>>
>> This is bug 4447. Please update to a build from the 3.5 snapshot.
>>
> 
> Thanks.  I'll give that a try.
> 

Looks like it's working correctly now - been running for 4 hours without
any problems.  Thanks for the assistance.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] how i will avoid the warning info ? "This cache hit is still fresh and more than 1 day old"

2016-03-15 Thread johnzeng

Hello Dear Sir :

I found a warning via Firebug; how can I avoid this warning?

Age 474416
Cache-Control max-age=31536
Content-Length 1556
Content-Type image/jpeg
Date Sat, 05 Mar 2016 01:38:36 GMT
Expires Thu, 31 Dec 2037 23:55:55 GMT
Last-Modified Wed, 25 Mar 2015 13:00:08 GMT
Server JDWS
Via http/1.1 BJ-Y-JCS-208 ( [cHs f ]), http/1.1 GZ-CT-1-JCS-107 ( [cRs f
]), 1.1 squid_cache2 (squid/3.5
.2)
Warning 113 squid_cache2 (squid/3.5.2) This cache hit is still fresh and
more than 1 day old
X-Cache HIT from squid_cache2


refresh_pattern \.htmll$ 480 50% 22160 reload-into-ims
refresh_pattern \.htm$ 480 50% 22160 reload-into-ims
refresh_pattern \.jpeg$ 10080 90% 43200 reload-into-ims
refresh_pattern \.jpg$ 10080 90% 43200 reload-into-ims


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] FreeBSD and Kerberos: RC4 keytabs work, AES256 don't

2016-03-15 Thread Marko Cupać
Hi,

I am setting up a new AD-integrated Squid server, so I thought I might as
well upgrade the Kerberos crypto on the keytabs.

It seems that, at least on FreeBSD 10.2-RELEASE-p13, squid-3.5.15
compiled with GSSAPI_BASE (kerberos from base system) can't
authenticate users via kerberos using AES256 keytabs.

Testing with kinit works, but squid auth does not. I am getting these
in cache.log:
BH gss_accept_sec_context() failed:  Miscellaneous failure (see text).
unknown mech-code 0 for mech unknown

Any help appreciated.
-- 
Before enlightenment - chop wood, draw water.
After  enlightenment - chop wood, draw water.

Marko Cupać
https://www.mimar.rs/
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squidGuard: redirect to squid-internal URLs no longer working with 3.5?

2016-03-15 Thread Amos Jeffries
On 15/03/2016 9:25 p.m., Silamael wrote:
> On 03/15/2016 12:10 AM, Amos Jeffries wrote:
>> On 15/03/2016 2:22 a.m., Silamael wrote:
>>>
>>> On 03/14/2016 02:16 PM, Kinkie wrote:
 Hi,
   .. has it ever? internal:// doesn't seem like a recognized protocol to 
 me.
>>> It worked till the update to Squid 3.5.
>>>
>>
>> It should not have. That was a bug.
>>
>> The correct syntax for 'internal:' URI looks like:
>>
>>   internal://$visible_hostname:3128/squid-internal-static/...
> 
> Sorry, I don't get it. Formerly we had
> internal://squid-internal-static/error-access-denied and this resulted
> in the ERR_ACCESS_DENIED page being delivered to the client.
> Now you say this has been wrong all the time and the correct path would
> be internal://$visible_hostname:3128/squid-internal-static/...

Yes.

> So, if I try this, I get a 404 response and the ERR_INVALID_REQ page.

Okay. That is the correct behaviour for this situation.
Squid does not normally load anything at the
/squid-internal-static/error-access-denied path.

A second bug / undefined behaviour that got fixed in 3.5 was incorrect
HTTP status codes being delivered on cachemgr and internal responses. It
now returns 404 Not Found when an internal URL does not point at an
object, rather than "access denied" - because access wasn't denied; the
file was not found.

The /squid-internal-static/* objects to be loaded are configured in the
squid /etc/squid/mime.conf configuration file. (though some OS distros
move it out of /etc/squid for some reason).

However, that would just make the response a 200 OK with the internal
object as the payload. If you want to retain 403 you need to block the
re-written URL from being serviced by Squid:

 acl SG_deny urlpath_regex ^/squid-internal-static/error-access-denied$
 adapted_http_access deny SG_deny

 (If that doesn't work you can use miss_access instead)


> With debugging I can see that there is a request like
> GET ://127.0.0.1:3128/squid-internal-static/error-access-denied
> This is indeed an invalid request...

Nod. Though the internal ID being used is correct, so it's only the
output display and upstream messages which are broken. I'm looking into
that now, but the fix won't change anything relating to your actual
problem. You will still need to use the above config settings to trigger
a 403.


> If I use http:// instead of internal:// the whole request is forwarded
> to the upstream cache peer and again replied with ERR_INVALID_REQ...
> With
> internal://$visible_hostname:3128/squid-internal-static/error-access-denied
> the response is a 400 Bad Request with ERR_UNSUP_REQ.
> According to debugging here again the internal schema is not passed
> along when building the GET request.
> 
> BTW, the old URL I used worked for years!

The problem with relying on undefined behaviour is that it can work for
a long time then disappear without warning.

What I think was going on previously is that the parser handling the
URL-rewriter output accepted the URL with 'squid-internal-static' as the
hostname. When Squid got to the upstream forwarding stage, the check for
internal status did a sub-string match and (wrongly) found
"/squid-internal-", so it decided to handle the request as internal. But
the internal-server logic could not find any object and (wrongly)
generated a 403 error page.

So a chain of at least 2 bugs was being relied on to produce a 403 Access
Denied when it should not have. We have now fixed those bugs, so what
you had relying on them goes splat.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Landing- Disclaimer-Page for an Exchange 2013 Reverse Proxy

2016-03-15 Thread Squid Users
Hi,

I've installed a Squid reverse proxy for an MS-Exchange test installation to 
reach OWA from the outside.

My current environment is as follows:

Squid Version 3.4.8 with ssl on a Debian Jessie (self compiled)
The Squid and the exchange system are in the internal network with private 
ip-addresses (same network segment)
The access to the squid system is realized by port forwarding (tcp/80, tcp/443, 
tcp/22) from a public ip-address
Used certificate is from letsencrypt (san-certificate, used by both servers)

Current Status:

Pre-Login works
Outlook access to OWA works (other protocols not tested yet)
https://portal.xxx.de doesn't work (Forwarding denied)
(which is quite normal because there is no acl for it)

How can I achieve the following:

1) Access to https://portal.xxx.de ends up on a kind of "landing page" with
instructions on how to use the Exchange test installation
(the web server can be the IIS on the Exchange system, Apache on the Squid
system, or a third system)

2) Is there a way to integrate the initial password dialog into that web page?

Kind regards
Bob


Squid configuration:

# Hostname
visible_hostname portal.xxx.de

# External access
https_port 192.168.xxx.21:443 accel 
cert=/root/letsencrypt/certs/xxx.de/cert.pem 
key=/root/letsencrypt/certs/xxx.de/privkey.pem 
cafile=/root/letsencrypt/certs/xxx.de/fullchain.pem defaultsite=portal.xxx.de

# Internal server
cache_peer 192.168.xxx.20 parent 443 0 no-query originserver login=PASS ssl 
sslflags=DONT_VERIFY_PEER sslcert=/root/letsencrypt/certs/xxx.de/cert.pem 
sslkey=/root/letsencrypt/certs/xxx.de/privkey.pem name=ExchangeServer

# Access to the following addresses is allowed
acl EXCH url_regex -i ^https://portal.xxx.de$
acl EXCH url_regex -i ^https://portal.xxx.de/owa.*$
acl EXCH url_regex -i ^https://portal.xxx.de/Microsoft-Server-ActiveSync.*$
acl EXCH url_regex -i ^https://portal.xxx.de/ews.*$
acl EXCH url_regex -i ^https://portal.xxx.de/autodiscover.*$
acl EXCH url_regex -i ^https://portal.xxx.de/rpc/.*$

# Auth
auth_param basic program /usr/lib/squid3/basic_ncsa_auth /etc/squid3/passwd
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive on

# Rules
acl ncsa_users proxy_auth REQUIRED
http_access allow ncsa_users
cache_peer_access ExchangeServer allow EXCH
never_direct allow EXCH
http_access allow EXCH
http_access deny all
miss_access allow EXCH
miss_access deny all

# Logging
access_log /var/log/squid3/access.log squid
debug_options ALL,9

cache_mgr x...@xxx.de
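One idea I had for (1) is to publish the landing page on one of those web servers and have Squid redirect bare portal requests to it via deny_info - a rough, untested sketch where the landing URL and acl name are made up:

acl landing url_regex -i ^https://portal\.xxx\.de/?$
deny_info 303:https://portal.xxx.de/landing/ landing
http_access deny landing

Would that be the right direction?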



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squidGuard: redirect to squid-internal URLs no longer working with 3.5?

2016-03-15 Thread Silamael
On 03/15/2016 12:10 AM, Amos Jeffries wrote:
> On 15/03/2016 2:22 a.m., Silamael wrote:
>>
>> On 03/14/2016 02:16 PM, Kinkie wrote:
>>> Hi,
>>>   .. has it ever? internal:// doesn't seem like a recognized protocol to me.
>> It worked till the update to Squid 3.5.
>>
> 
> It should not have. That was a bug.
> 
> The correct syntax for 'internal:' URI looks like:
> 
>   internal://$visible_hostname:3128/squid-internal-static/...

Sorry, I don't get it. Formerly we had
internal://squid-internal-static/error-access-denied and this resulted
in the ERR_ACCESS_DENIED page being delivered to the client.
Now you say this has been wrong all the time and the correct path would
be internal://$visible_hostname:3128/squid-internal-static/...
So, if I try this, I get a 404 response and the ERR_INVALID_REQ page.
With debugging I can see that there is a request like
GET ://127.0.0.1:3128/squid-internal-static/error-access-denied
This is indeed an invalid request...
If I use http:// instead of internal:// the whole request is forwarded
to the upstream cache peer and again replied with ERR_INVALID_REQ...
With
internal://$visible_hostname:3128/squid-internal-static/error-access-denied
the response is a 400 Bad Request with ERR_UNSUP_REQ.
According to debugging here again the internal schema is not passed
along when building the GET request.

BTW, the old URL I used worked for years!

-- Matthias
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Need advice on some crazy access control requirements

2016-03-15 Thread Rafael Akchurin
Hello Victor,

Please send me e-mail at supp...@diladele.com .
We should not pollute the list with these off-topic details.

Best regards,
Rafael

-Original Message-
From: Victor Sudakov [mailto:suda...@sibptus.tomsk.ru] 
Sent: Tuesday, March 15, 2016 8:45 AM
To: Rafael Akchurin 
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Need advice on some crazy access control requirements

Rafael Akchurin wrote:
> > 
> > In order to scan the contents of the files being downloaded you 
> > might need to have eCAP or ICAP module/server attached to your Squid.
> > Please take a look at Web Safety - our ICAP based web filter server 
> > for Squid proxy
> 
> > It's for pfSense, isn't it? Would it work on stock FreeBSD 9 and 10 we are 
> > using on our proxy servers?
> 
> We build ICAP on native FreeBSD 10 and then provide instructions/tutorial how 
> to install/adapt it for pfSense (which is also FreeBSD 10 inside).
> It will not work on FreeBSD 9 though :(

Can I request a trial version?

--
Victor Sudakov,  VAS4-RIPE, VAS47-RIPN
sip:suda...@sibptus.tomsk.ru
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Need advice on some crazy access control requirements

2016-03-15 Thread Victor Sudakov
Rafael Akchurin wrote:
> > 
> > In order to scan the contents of the files being downloaded you might 
> > need to have eCAP or ICAP module/server attached to your Squid.  
> > Please take a look at Web Safety - our ICAP based web filter server 
> > for Squid proxy
> 
> > It's for pfSense, isn't it? Would it work on stock FreeBSD 9 and 10 we are 
> > using on our proxy servers?
> 
> We build ICAP on native FreeBSD 10 and then provide instructions/tutorial how 
> to install/adapt it for pfSense (which is also FreeBSD 10 inside).
> It will not work on FreeBSD 9 though :(

Can I request a trial version?

-- 
Victor Sudakov,  VAS4-RIPE, VAS47-RIPN
sip:suda...@sibptus.tomsk.ru
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Core dump / xassert on FwdState::unregister (3.5.15)

2016-03-15 Thread squid
On 2016-03-15 09:05, Amos Jeffries wrote:
> On 15/03/2016 7:34 p.m., squid wrote:
>>
>> I'm running FreeBSD 9.3-STABLE and Squid 3.5.15 and I'm getting regular
>> core dumps with the following stack.  Note that I have disabled caching.
>>  Any suggestions?  I've logged a bug (4467):
>>
>> #0  0x000801b8c96c in thr_kill () from /lib/libc.so.7
>> #1  0x000801c55fcb in abort () from /lib/libc.so.7
>> #2  0x005d2545 in xassert (msg=0x8b8816 "serverConnection() ==
>> conn", file=0x8b8513 "FwdState.cc", line=447) at debug.cc:544
> 
> This is bug 4447. Please update to a build from the 3.5 snapshot.
> 

Thanks.  I'll give that a try.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Core dump / xassert on FwdState::unregister (3.5.15)

2016-03-15 Thread Amos Jeffries
On 15/03/2016 7:34 p.m., squid wrote:
> 
> I'm running FreeBSD 9.3-STABLE and Squid 3.5.15 and I'm getting regular
> core dumps with the following stack.  Note that I have disabled caching.
>  Any suggestions?  I've logged a bug (4467):
> 
> #0  0x000801b8c96c in thr_kill () from /lib/libc.so.7
> #1  0x000801c55fcb in abort () from /lib/libc.so.7
> #2  0x005d2545 in xassert (msg=0x8b8816 "serverConnection() ==
> conn", file=0x8b8513 "FwdState.cc", line=447) at debug.cc:544

This is bug 4447. Please update to a build from the 3.5 snapshot.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Need advice on some crazy access control requirements

2016-03-15 Thread Rafael Akchurin
Hello Victor,

We build ICAP on native FreeBSD 10 and then provide instructions/tutorial how 
to install/adapt it for pfSense (which is also FreeBSD 10 inside).
It will not work on FreeBSD 9 though :(

Best regards,
Rafael Akchurin
Diladele B.V.

-Original Message-
From: Victor Sudakov [mailto:suda...@sibptus.tomsk.ru] 
Sent: Tuesday, March 15, 2016 4:02 AM
To: Rafael Akchurin 
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Need advice on some crazy access control requirements

Rafael Akchurin wrote:
> 
> In order to scan the contents of the files being downloaded you might 
> need to have eCAP or ICAP module/server attached to your Squid.  
> Please take a look at Web Safety - our ICAP based web filter server 
> for Squid proxy

It's for pfSense, isn't it? Would it work on stock FreeBSD 9 and 10 we are 
using on our proxy servers?

--
Victor Sudakov,  VAS4-RIPE, VAS47-RIPN
sip:suda...@sibptus.tomsk.ru
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users