Re: [squid-users] Clarity on sending intercepted HTTPS traffic upstream to a cache_peer

2017-01-27 Thread Amos Jeffries
On 28/01/2017 1:32 p.m., Charlie Orford wrote:
> On 27/01/2017 23:43, Alex Rousskov wrote:
>> On 01/27/2017 04:04 PM, Charlie Orford wrote:
>>> A post from another user on this list seems to suggest they successfully
>>> got squid to do what we want
>>> (http://lists.squid-cache.org/pipermail/squid-users/2015-November/007955.html)
>>>
>>> but when emulating their setup (i.e. peeking at step1, staring at step2
>>> and then bumping at step3) we get the same
>>> SQUID_X509_V_ERR_DOMAIN_MISMATCH error.
>> I suggest the following order:
>>
>>1. Decide whether your Squid should bump or splice.
>>2. Find the configuration that does what you decided in #1.
>>
>> So far, you have given no reasons to warrant bumping so I assume you do
>> not need or want to bump anything. Thus, you should ignore any
>> configurations that contain "stare", "bump", or deprecated "*-first"
>> ssl_bump actions.
> 
> Sorry if my original intent wasn't clear. Obviously it makes no sense
> intercepting ssl traffic if we're going to splice everything.
> 
> Our design goal is: intercept and bump local client https traffic on
> squid1 (so we can filter certain urls, cache content etc.) and then
> forward the request on to the origin server via an upstream squid2
> (which has internet access).

Under a narrow set of splice conditions you can get traffic through the
2-proxy hierarchy. But that is a very limited set of circumstances, and
it definitely does not work with 'bump' involved anywhere.

As Alex pointed out, step3 also eliminates the CONNECT ability, which I
was not aware of a year ago when I wrote the original email you referenced.


The problem is that *any* server or peer TLS communication prior to
deciding to splice eliminates the ability to use a fake-CONNECT. That is
absolute, because *all* TLS server/peer communication has to go through
the CONNECT tunnel - or none can. Anything happening prior to its
existence won't be TLS-authenticated with the origin server.
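In concrete terms that means the splice decision has to be made from the
client hello alone, before Squid speaks TLS to anything upstream. A minimal
sketch of such an arrangement, reusing the directives and peer address from
the config quoted later in this thread (whether a particular 3.5 release
actually relays the resulting fake CONNECT to the peer is exactly the caveat
above, so treat this as something to test, not a known-good recipe):

  acl step1 at_step SslBump1

  # decide from the client hello only; no server-side peeking or staring
  ssl_bump peek step1
  ssl_bump splice all

  # send everything via the parent proxy
  never_direct allow all
  cache_peer 192.168.0.1 parent 3128 0 no-query no-digest name=WWW_GATEWAY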

> 
> The user who posted
> http://lists.squid-cache.org/pipermail/squid-users/2015-November/007955.html
> seems to have successfully done this but I can't replicate it. After

They did not because Squid _cannot_ do it.

Note that their cache_peer has the 'ssl' flag enabled, so their transparent
traffic is basing the auto-generated certs on the peer's certificate -
which is what you already tried and decided was not workable for you.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Clarity on sending intercepted HTTPS traffic upstream to a cache_peer

2017-01-27 Thread Charlie Orford

On 27/01/2017 23:43, Alex Rousskov wrote:

On 01/27/2017 04:04 PM, Charlie Orford wrote:

A post from another user on this list seems to suggest they successfully
got squid to do what we want
(http://lists.squid-cache.org/pipermail/squid-users/2015-November/007955.html)
but when emulating their setup (i.e. peeking at step1, staring at step2
and then bumping at step3) we get the same
SQUID_X509_V_ERR_DOMAIN_MISMATCH error.

I suggest the following order:

   1. Decide whether your Squid should bump or splice.
   2. Find the configuration that does what you decided in #1.

So far, you have given no reasons to warrant bumping so I assume you do
not need or want to bump anything. Thus, you should ignore any
configurations that contain "stare", "bump", or deprecated "*-first"
ssl_bump actions.


Sorry if my original intent wasn't clear. Obviously it makes no sense 
intercepting ssl traffic if we're going to splice everything.


Our design goal is: intercept and bump local client https traffic on 
squid1 (so we can filter certain urls, cache content etc.) and then 
forward the request on to the origin server via an upstream squid2 
(which has internet access).


The user who posted 
http://lists.squid-cache.org/pipermail/squid-users/2015-November/007955.html 
seems to have successfully done this but I can't replicate it. After 
doing a lot of googling (and semi-successfully trying to interpret Amos' 
various replies whenever bumping and cache_peers come up on this list) 
I'm beginning to wonder if it is indeed possible or if that user simply 
mistook what he was seeing when he posted that message (e.g. didn't 
notice that squid was actually not bumping his client connections).


Charlie









___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Clarity on sending intercepted HTTPS traffic upstream to a cache_peer

2017-01-27 Thread Alex Rousskov
On 01/27/2017 04:04 PM, Charlie Orford wrote:

> Clients get a SQUID_X509_V_ERR_DOMAIN_MISMATCH error (because the
> auto-generated cert squid1 gives to the client contains the domain of
> the cache_peer *not* the ultimate origin server).

Under normal circumstances, Squid should generate no certificates in
your setup AFAICT.


> The above is with the following ssl_bump directives set in squid1's config:
> 
> ssl_bump peek step1
> ssl_bump peek step2
> ssl_bump splice step3

In other words:

  ssl_bump peek all
  ssl_bump splice all

Why not just do this instead:

  ssl_bump splice all

I have not tested this, but I do not understand why you want a
three-step SslBump to blindly forward an SSL connection to a peer. I
would use the minimal number of steps possible: one if it works or two
if I have to because of some Squid bugs/missing features.

When everything goes OK, Squid should generate no certificates in either
case, but with three-step SslBump, there are a lot more opportunities
for Squid to detect problems and want to send an error response to the
client. To send an error message, Squid bumps the connection to the
client (which does require fake certificate generation).

Finally, I do not know whether Squid is capable of peeking at the origin
server through a peer, but I doubt it is. If my guess is correct, then
three-step splicing will not work for you because during step3 your
Squid will already be talking to the origin server rather than a
cache_peer (or will fail because it cannot do that).


> A post from another user on this list seems to suggest they successfully
> got squid to do what we want
> (http://lists.squid-cache.org/pipermail/squid-users/2015-November/007955.html)
> but when emulating their setup (i.e. peeking at step1, staring at step2
> and then bumping at step3) we get the same
> SQUID_X509_V_ERR_DOMAIN_MISMATCH error.

I suggest the following order:

  1. Decide whether your Squid should bump or splice.
  2. Find the configuration that does what you decided in #1.

So far, you have given no reasons to warrant bumping so I assume you do
not need or want to bump anything. Thus, you should ignore any
configurations that contain "stare", "bump", or deprecated "*-first"
ssl_bump actions.


HTH,

Alex.


> On 27/01/2017 18:24, Charlie Orford wrote:
>> Hi list
>>
>> We're using squid 3.5.23 and trying to achieve the following:
>>
>> client https request (not proxy aware) -> squid1 (https NAT intercept)
>> -> upstream squid2 (configured as a cache_peer in squid1) -> origin
>> server (e.g. www.google.com)
>>
>> Amos mentioned in this thread
>> http://lists.squid-cache.org/pipermail/squid-users/2016-March/009468.html
>> that:
>>
>> > Squid can:
>> >
>> >  A) relay CONNECT message from client to any upstream proxy.
>> >
>> >  B) generate CONNECT message on arriving intercepted HTTPS and relay
>> > that to upstream proxy *IF* (and only if) ssl_bump selects the 'splice'
>> > action.
>> >
>> >  C) relay https:// URLs to an upstream TLS proxy.
>> >
>> >
>> > That is all at present.
>> >
>> > Squid cannot (yet) generate CONNECT messages to try and fetch TLS
>> > details via a non-TLS cache_peer. If you are able to sponsor that
>> > enhancement work patches are welcome, or sponsorship $$ to help pay
>> > persons working on these things (Christos / measurement-factory) are
>> > also welcome.
>>
>> Option B seems to cover what we need i.e. squid1 wraps arriving
>> intercepted HTTPS traffic in a CONNECT and sends it upstream to squid2
>> which in turn tunnels it to the origin server. Unfortunately, we can't
>> get it to work: as soon as squid1 receives a client HTTPS request it
>> exits with "assertion failed: PeerConnector.cc:116: "peer->use_ssl""
>> in cache.log
>>
>> Relevant config for squid1:
>> ##
>> acl localnet src 10.100.0.0/24
>> acl SSL_ports port 443
>> acl Safe_ports port 80  # http
>> acl Safe_ports port 443 # https
>> acl Safe_ports port 777 # multiling http
>> acl CONNECT method CONNECT
>> acl Blocked_domains dstdomain "/etc/squid3/blocked.domains.acl"
>> acl step1 at_step SslBump1
>> acl step2 at_step SslBump2
>> acl step3 at_step SslBump3
>> acl MITM_TRAFFIC myportname MITM_port
>>
>> http_access allow manager localhost
>> http_access deny manager
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>> http_access deny to_localhost
>> http_access deny Blocked_domains
>> http_access allow localhost
>> http_access allow localnet
>> http_access deny all
>> http_reply_access allow all
>>
>> https_port 8443 ssl-bump intercept
>> cert=/etc/squid3/root_ca.combined.pem generate-host-certificates=on
>> dynamic_cert_mem_cache_size=8MB name=MITM_port
>> sslcrtd_program /usr/lib/squid3/ssl_crtd -s /var/lib/squid3/ssl_db -M 4MB
>>
>> ssl_bump peek all
>> ssl_bump splice all
>>
>> nonhierarchical_direct off
>> never_direct allow all
>> prefer_direct off
>> cache_peer 192.168.0.1 parent 3128 0 no-query no-digest
>> no-ne

Re: [squid-users] Clarity on sending intercepted HTTPS traffic upstream to a cache_peer

2017-01-27 Thread Charlie Orford

To follow up:

Adding ssl to the cache_peer directive on squid1 (and changing squid2 so 
it listens for connections on an https_port) gets us a little further 
but still doesn't work.
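
For reference, the change amounts to something like this (the peer port and
certificate path below are placeholders; the rest of the cache_peer line is
the one from my original post):

squid1:
  cache_peer 192.168.0.1 parent 3129 0 no-query no-digest no-netdb-exchange ssl name=WWW_GATEWAY

squid2:
  https_port 3129 cert=/etc/squid3/squid2_proxy.pem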


Clients get a SQUID_X509_V_ERR_DOMAIN_MISMATCH error (because the 
auto-generated cert squid1 gives to the client contains the domain of 
the cache_peer *not* the ultimate origin server).


The above is with the following ssl_bump directives set in squid1's config:

ssl_bump peek step1
ssl_bump peek step2
ssl_bump splice step3

A post from another user on this list seems to suggest they successfully 
got squid to do what we want 
(http://lists.squid-cache.org/pipermail/squid-users/2015-November/007955.html) 
but when emulating their setup (i.e. peeking at step1, staring at step2 
and then bumping at step3) we get the same 
SQUID_X509_V_ERR_DOMAIN_MISMATCH error.


Setting sslflags=DONT_VERIFY_DOMAIN on the cache_peer directive has no 
effect.


Connecting to squid1 with a proxy aware client (on a standard http_port 
with the ssl-bump flag set but no intercept) also results in the same 
problem.


Where are we going wrong?

Charlie

On 27/01/2017 18:24, Charlie Orford wrote:

Hi list

We're using squid 3.5.23 and trying to achieve the following:

client https request (not proxy aware) -> squid1 (https NAT intercept) 
-> upstream squid2 (configured as a cache_peer in squid1) -> origin 
server (e.g. www.google.com)


Amos mentioned in this thread 
http://lists.squid-cache.org/pipermail/squid-users/2016-March/009468.html 
that:


> Squid can:
>
>  A) relay CONNECT message from client to any upstream proxy.
>
>  B) generate CONNECT message on arriving intercepted HTTPS and relay
> that to upstream proxy *IF* (and only if) ssl_bump selects the 'splice'
> action.
>
>  C) relay https:// URLs to an upstream TLS proxy.
>
>
> That is all at present.
>
> Squid cannot (yet) generate CONNECT messages to try and fetch TLS
> details via a non-TLS cache_peer. If you are able to sponsor that
> enhancement work patches are welcome, or sponsorship $$ to help pay
> persons working on these things (Christos / measurement-factory) are
> also welcome.

Option B seems to cover what we need i.e. squid1 wraps arriving 
intercepted HTTPS traffic in a CONNECT and sends it upstream to squid2 
which in turn tunnels it to the origin server. Unfortunately, we can't 
get it to work: as soon as squid1 receives a client HTTPS request it 
exits with "assertion failed: PeerConnector.cc:116: "peer->use_ssl"" 
in cache.log


Relevant config for squid1:
##
acl localnet src 10.100.0.0/24
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 443 # https
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl Blocked_domains dstdomain "/etc/squid3/blocked.domains.acl"
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
acl MITM_TRAFFIC myportname MITM_port

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access deny Blocked_domains
http_access allow localhost
http_access allow localnet
http_access deny all
http_reply_access allow all

https_port 8443 ssl-bump intercept 
cert=/etc/squid3/root_ca.combined.pem generate-host-certificates=on 
dynamic_cert_mem_cache_size=8MB name=MITM_port

sslcrtd_program /usr/lib/squid3/ssl_crtd -s /var/lib/squid3/ssl_db -M 4MB

ssl_bump peek all
ssl_bump splice all

nonhierarchical_direct off
never_direct allow all
prefer_direct off
cache_peer 192.168.0.1 parent 3128 0 no-query no-digest 
no-netdb-exchange name=WWW_GATEWAY



Relevant config for squid2:
##
acl localnet src 192.168.0.0/24
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 443 # https
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_port 3128

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access allow localnet
http_access deny all

http_reply_access allow all


Is what we want to do currently achievable with the latest 3.5 branch 
or have we misunderstood what Amos was stating (some of his posts 
about ssl interception and cache_peer support can be fairly cryptic)?


If it is achievable, does the cache_peer link itself also need to be 
encrypted (via the ssl flag) to make it work?


Thanks,
Charlie





___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Clarity on sending intercepted HTTPS traffic upstream to a cache_peer

2017-01-27 Thread Charlie Orford

Hi list

We're using squid 3.5.23 and trying to achieve the following:

client https request (not proxy aware) -> squid1 (https NAT intercept) 
-> upstream squid2 (configured as a cache_peer in squid1) -> origin 
server (e.g. www.google.com)


Amos mentioned in this thread 
http://lists.squid-cache.org/pipermail/squid-users/2016-March/009468.html 
that:


> Squid can:
>
>  A) relay CONNECT message from client to any upstream proxy.
>
>  B) generate CONNECT message on arriving intercepted HTTPS and relay
> that to upstream proxy *IF* (and only if) ssl_bump selects the 'splice'
> action.
>
>  C) relay https:// URLs to an upstream TLS proxy.
>
>
> That is all at present.
>
> Squid cannot (yet) generate CONNECT messages to try and fetch TLS
> details via a non-TLS cache_peer. If you are able to sponsor that
> enhancement work patches are welcome, or sponsorship $$ to help pay
> persons working on these things (Christos / measurement-factory) are
> also welcome.

Option B seems to cover what we need i.e. squid1 wraps arriving 
intercepted HTTPS traffic in a CONNECT and sends it upstream to squid2 
which in turn tunnels it to the origin server. Unfortunately, we can't 
get it to work: as soon as squid1 receives a client HTTPS request it 
exits with "assertion failed: PeerConnector.cc:116: "peer->use_ssl"" in 
cache.log


Relevant config for squid1:
##
acl localnet src 10.100.0.0/24
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 443 # https
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl Blocked_domains dstdomain "/etc/squid3/blocked.domains.acl"
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
acl MITM_TRAFFIC myportname MITM_port

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access deny Blocked_domains
http_access allow localhost
http_access allow localnet
http_access deny all
http_reply_access allow all

https_port 8443 ssl-bump intercept cert=/etc/squid3/root_ca.combined.pem 
generate-host-certificates=on dynamic_cert_mem_cache_size=8MB 
name=MITM_port

sslcrtd_program /usr/lib/squid3/ssl_crtd -s /var/lib/squid3/ssl_db -M 4MB

ssl_bump peek all
ssl_bump splice all

nonhierarchical_direct off
never_direct allow all
prefer_direct off
cache_peer 192.168.0.1 parent 3128 0 no-query no-digest 
no-netdb-exchange name=WWW_GATEWAY



Relevant config for squid2:
##
acl localnet src 192.168.0.0/24
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 443 # https
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_port 3128

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access allow localnet
http_access deny all

http_reply_access allow all


Is what we want to do currently achievable with the latest 3.5 branch or 
have we misunderstood what Amos was stating (some of his posts about ssl 
interception and cache_peer support can be fairly cryptic)?


If it is achievable, does the cache_peer link itself also need to be 
encrypted (via the ssl flag) to make it work?


Thanks,
Charlie



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread joseph
I'm not here to fight. Don't mention the RFC, because it is already violating
the RFC just by using --enable-http-violations.
Please re-read my post, or get someone to explain the structure of it;
otherwise there is no benefit in explaining or defending the RFC.
So please read my point of view carefully, or else we are just wasting time
with a one-year-experienced guy.
Bye folks





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Not-all-html-objects-are-being-cached-tp4681293p4681368.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread Yuri Voinov


27.01.2017 19:35, Garri Djavadyan wrote:
> On Fri, 2017-01-27 at 17:58 +0600, Yuri wrote:
>> 27.01.2017 17:54, Garri Djavadyan wrote:
>>> On Fri, 2017-01-27 at 15:47 +0600, Yuri wrote:
 --2017-01-27 15:29:54--  https://www.microsoft.com/ru-kz/
 Connecting to 127.0.0.1:3128... connected.
 Proxy request sent, awaiting response...
 HTTP/1.1 200 OK
 Cache-Control: no-cache, no-store
 Pragma: no-cache
 Content-Type: text/html
 Expires: -1
 Server: Microsoft-IIS/8.0
 CorrelationVector: BzssVwiBIUaXqyOh.1.1
 X-AspNet-Version: 4.0.30319
 X-Powered-By: ASP.NET
 Access-Control-Allow-Headers: Origin, X-Requested-With,
 Content-
 Type,
 Accept
 Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
 Access-Control-Allow-Credentials: true
 P3P: CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD
 TAI
 TELo
 OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI"
 X-Frame-Options: SAMEORIGIN
 Vary: Accept-Encoding
 Content-Encoding: gzip
 Date: Fri, 27 Jan 2017 09:29:56 GMT
 Content-Length: 13322
 Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.1; domain=.microsoft.com;
 expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
 Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.2; domain=.microsoft.com;
 expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
 Strict-Transport-Security: max-age=0; includeSubDomains
 X-CCC: NL
 X-CID: 2
 X-Cache: MISS from khorne
 X-Cache-Lookup: MISS from khorne:3128
 Connection: keep-alive
 Length: 13322 (13K) [text/html]
 Saving to: 'index.html'

 index.html  100%[==>]  13.01K --.-
 KB/sin
 0s

 2017-01-27 15:29:57 (32.2 MB/s) - 'index.html' saved
 [13322/13322]

 Can you explain me - for what static index.html has this:

 Cache-Control: no-cache, no-store
 Pragma: no-cache

 ?

 What can be broken to ignore CC in this page?
>>> Hi Yuri,
>>>
>>>
>>> Why do you think the page returned for URL
>>> [https://www.microsoft.com/ru-kz/] is static and not a dynamically
>>> generated one?
>> And for me, what's the difference? Does it change anything? In
>> addition, 
>> it is easy to see on the page and even the eyes - strangely enough -
>> to 
>> open its code. And? What do you see there?
> I see an official home page of Microsoft company for KZ region. The
> page is full of javascripts and products offer. It makes sense to
> expect that the page could be changed intensively enough.
In essence, what is there to say, beyond a general discussion of
particulars or examples? As I said - this is just one example; there are a
lot of them. And as they say, sometimes it's better to chew than to talk.
>
>
>>> The index.html file is default file name for wget.
>> And also the name of the default home page in the web. Imagine - I
>> know 
>> the obvious things. But the question was about something else.
>>> man wget:
>>>--default-page=name
>>> Use name as the default file name when it isn't known
>>> (i.e., for
>>> URLs that end in a slash), instead of index.html.
>>>
>>> In fact the https://www.microsoft.com/ru-kz/index.html is a stub
>>> page
>>> (The page you requested cannot be found.).
>> You living in wrong region. This is geo-dependent page, as obvious,
>> yes?
> What I mean is that the pages https://www.microsoft.com/ru-kz/ and
> https://www.microsoft.com/ru-kz/index.html are not the same. You can
> easily confirm it.
>
>
>> Again. What is the difference? I open it from different
>> workstations, 
>> from different browsers - I see the same thing. The code is
>> identical. I 
>> can is to cache? Yes or no?
> I'm a new member of Squid community (about 1 year). While tracking for
> community activity I found that you can't grasp the advantages of
> HTTP/1.1 over HTTP/1.0 for caching systems. Especially, its ability to
> _safely_ cache and serve same amount (but I believe even more) of the
> objects as HTTP/1.0 compliant caches do (while not breaking internet).
> The main tool of HTTP/1.1 compliant proxies is _revalidation_ process.
> HTTP/1.1 compliant caches like Squid tend to cache all possible objects
> but later use revalidation for dubious requests. In fact the
> revalidation is not costly process, especially using conditional GET
> requests.
Nuff said. Let's stop wasting time. Take a look at the attachment.
>
> I found that most of your complains in the mail list and Bugzilla are
> related to HTTPS scheme. FYI: The primary tool (revalidation) does not
> work for HTTPS scheme using all current Squid branches at the moment.
> See bug 4648.
Forgot about it. Now I've solved all of my problems.
>
> Try to apply the proposed patch and update all related bug reports.
I have no unresolved problems with caching. For me personally, this
debate is only of academic interest. You can continue to spen

Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread Garri Djavadyan
On Fri, 2017-01-27 at 06:15 -0800, joseph wrote:
> Hi. It's not about the https scheme, it's about everything.

Hi,

First of all, I can't brag about my own English and writing style, but your
writing style is _very_ offensive to other members. Please try to do
better. Above all, it is very difficult to catch the idea of many of your
sentences. I believe punctuation marks could help a lot. Thanks in
advance.

> I decided not to get involved in the argument...
> but why not, it's the last time, so I should say it once:
> they are right that most admins have no knowledge, so it's OK to
> baby-sit
> them as it is,
> but
> --enable-http-violations should fully ignore cache control, and in
> refresh_pattern
> the admin should control the behaviour he needs; otherwise they
> should take
> out --enable-http-violations, or allow us to do so by
> controlling
> Pragma: no-cache and Cache-Control: no-cache etc.
> in both request and reply

Squid, as an HTTP/1.1 compliant cache, successfully caches and serves
CC:no-cache replies. Below is an excerpt from RFC 7234:

5.2.2.2.  no-cache

   The "no-cache" response directive indicates that the response MUST
   NOT be used to satisfy a subsequent request without successful
   validation on the origin server.

The key word is _validation_. There is nothing bad about revalidation.
It is inexpensive but saves us from possible problems. The log entry
'TCP_REFRESH_UNMODIFIED' should be as welcome as TCP_HIT or TCP_MEM_HIT.

Example:

$ curl -v -s -x http://127.0.0.1:3128 http://sandbox.comnet.local/test.
bin >/dev/null

< HTTP/1.1 200 OK
< Last-Modified: Wed, 31 Aug 2016 19:00:00 GMT
< Accept-Ranges: bytes
< Content-Length: 262146
< Content-Type: application/octet-stream
< Expires: Thu, 01 Dec 1994 16:00:00 GMT
< Date: Fri, 27 Jan 2017 14:55:09 GMT
< Server: Apache
< ETag: "ea0cd5-40002-53b62b438ac00"
< Cache-Control: no-cache
< Age: 3
< X-Cache: HIT from gentoo.comnet.uz
< Via: 1.1 gentoo.comnet.uz (squid/3.5.23-BZR)
< Connection: keep-alive

1485528912.222     18 127.0.0.1 TCP_REFRESH_UNMODIFIED/200 262565 GET http://sandbox.comnet.local/test.bin - HIER_DIRECT/192.168.24.5 application/octet-stream


As you can see, there are no problems with the no-cache reply.


I advise you to consider every specific case where you believe Squid's
transition to HTTP/1.1 compliance restricts you from caching something.


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread joseph
Hi. It's not about the https scheme, it's about everything.
I decided not to get involved in the argument...
but why not, it's the last time, so I should say it once:
they are right that most admins have no knowledge, so it's OK to baby-sit
them as it is,
but
--enable-http-violations should fully ignore cache control, and in refresh_pattern
the admin should control the behaviour he needs; otherwise they should take
out --enable-http-violations, or allow us to do so by
controlling
Pragma: no-cache and Cache-Control: no-cache etc.
in both request and reply.
And it's up to us to fix broken sites, since almost 80% or more of the web
admins and programmers using them do so just to prevent caching, not because
caching breaks the page.
It has nothing to do with an old damn page where we can fix the object to be fresh.
Soon all web programmers will use those controls and squid will suck,
ending up as a cache server not able to cache anything at all, lol.
Let other admins use squid without --enable-http-violations if they are worried
about breaking badly-written sites,
and let the other good admins who know what they are doing control what they need,
with --enable-http-violations fully open, no restrictions at all.
https is rarely used; it can't be used everywhere, it depends on the country.
Bye,
joseph
As for my setup: I have http only, and as it is squid saves me only 5% of
all the http bandwidth.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Not-all-html-objects-are-being-cached-tp4681293p4681365.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Strange behavior - reload service failed, but not start.... (solved)

2017-01-27 Thread Antony Stone
On Friday 27 January 2017 at 14:36:01, erdosain9 wrote:

> Hi, again.
> Now, i do this
> 
> [root@squid ips]# ps aux | grep squid
> root  2228  0.0  0.0 130900   344 ?Ss   ene24   0:00
> /usr/sbin/squid -sYC

... snip ...

> [root@squid ips]# systemctl stop squid
> [root@squid ips]# pkill squid
> [root@squid ips]# squid -z
> 
> And now it is working, also with the systemctl command; but anyway, you
> recommend using the squid -k commands instead, no??

Well, if you started it with systemctl / systemd, then it's a good idea to 
stop it with systemctl / systemd.

However:

On Thursday 26 January 2017 at 03:57:48, Amos Jeffries wrote:

> On 26/01/2017 5:38 a.m., erdosain9 wrote:
> 
> > some other approach??
> 
> Not using systemd to control Squid-3. The two are not compatible. As you
> just found out the hard way.
> 
> Squid is not a daemon, it is a Daemon + Manager in one binary/process.
> systemd is based around the naive assumption that everything is a simple
> daemon and gets horribly confuzled when reality bites. It is not alone,
> upstart has the same issues. Basically only start/stop work, and even
> those only most of the time if done very carefully.
> 
> Your choices with systemd are (1) use the 'squid -k' commands, or (2)
> upgrade to Squid-4 and install the tools/systemd/squid.service file we
> provide for that version.

Therefore avoid using systemd with Squid, and you should be able to manage it 
normally.
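
For completeness, the 'squid -k' management commands Amos refers to are along
these lines (apart from 'parse', which only checks the config locally, they
signal the running master process via its PID file, so run them as a user
that can read squid.conf and that PID file):

  squid -k parse        # syntax-check squid.conf
  squid -k reconfigure  # re-read squid.conf without restarting
  squid -k rotate       # rotate the log files
  squid -k shutdown     # graceful shutdown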


Antony.

-- 
A user interface is like a joke.
If you have to explain it, it didn't work.

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Strange behavior - reload service failed, but not start.... (solved)

2017-01-27 Thread erdosain9
Hi again.
Now I do this:

[root@squid ips]# ps aux | grep squid
root  2228  0.0  0.0 130900   344 ?Ss   ene24   0:00
/usr/sbin/squid -sYC
squid 2230  6.2 64.9 1341864 1205160 ? Rene24 263:30 (squid-1)
-sYC
squid 2231  0.4  0.1  68196  1948 ?Sene24  20:35 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2232  0.0  0.1  68196  1944 ?Sene24   1:21 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2233  0.0  0.1  68196  1948 ?Sene24   0:32 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2234  0.0  0.1  68196  1952 ?Sene24   0:17 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2235  0.0  0.1  68196  1944 ?Sene24   0:11 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2236  0.0  0.0  33712   216 ?Sene24   1:48
(logfile-daemon) /var/log/squid/access.log
squid 2237  0.0  0.0  33560   220 ?Sene24   0:20 (unlinkd)
squid 2238  0.8  0.0  34084   484 ?Sene24  34:55 diskd
2283524 2283525 2283526
squid 2239  0.0  0.1  68196  1944 ?Sene24   0:06 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2240  0.0  0.1  68196  1944 ?Sene24   0:04 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2241  0.0  0.1  68196  1944 ?Sene24   0:02 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2242  0.0  0.1  68196  1944 ?Sene24   0:01 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2243  0.0  0.1  68196  1940 ?Sene24   0:01 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2244  0.0  0.1  68184  1932 ?Sene24   0:01 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2245  0.0  0.1  68196  1948 ?Sene24   0:01 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2246  0.0  0.1  68196  1940 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2247  0.0  0.1  68196  1940 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2248  0.0  0.1  68196  2076 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2278  0.0  0.1  68196  1940 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2325  0.0  0.1  68196  2064 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2368  0.0  0.1  68196  1984 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2369  0.0  0.1  68196  2168 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2371  0.0  0.0  68152  1656 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2397  0.0  0.1  68180  1920 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2398  0.0  0.1  68188  1920 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2399  0.0  0.1  68184  1924 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2400  0.0  0.1  68184  1932 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2401  0.0  0.1  68180  2032 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2402  0.0  0.1  68180  2032 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2403  0.0  0.0  68152  1648 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2404  0.0  0.0  68152  1620 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2405  0.0  0.0  68152  1612 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2406  0.0  0.1  68188  1920 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2407  0.0  0.0  68152  1612 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 2408  0.0  0.0  68152  1608 ?Sene24   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
root  8128  0.0  0.0 112672   972 pts/0S+   10:24   0:00 grep
--color=auto squid
[root@squid ips]# systemctl stop squid
[root@squid ips]# pkill squid
[root@squid ips]# squid -z


And now it is working, also with the systemctl command; but anyway, you
recommend using the squid -k commands instead, no??

Thanks again.

pd: this is process now. 
[root@squid ips]# ps aux | grep squid
root  8156  0.0  1.3 130900 25272 ?Ss   10:26   0:00
/usr/sbin/squid -sYC
squid 8158  6.5 18.7 452532 347580 ?   S10:26   0:42 (squid-1)
-sYC
squid 8165  0.0  0.0  33560  1300 ?S10:26   0:00 (unlinkd)
squid 8166  1.0  0.0  34084  1572 ?S10:26   0:06 diskd
8353796 8353797 8353798
squid 8182  0.0  0.0  33712  1304 ?S10:28   0:00
(logfile-daemon) /var/log/squid/access.log
squid 8183  0.5  0.2  68188  4940 ?S10:28   0:02 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 8184  0.0  0.2  68152  4708 ?S10:28   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 8185  0.0  0.2  68192  4936 ?S10:28   0:00 (ssl_crtd)
-s /var/lib/ssl_db -M 4MB
squid 8186  0.0  0.2  68152  4708 ?S10:28   0:00 (ssl_crtd)
-s /var/lib/

Re: [squid-users] Strange behavior - reload service failed, but not start....

2017-01-27 Thread Antony Stone
On Friday 27 January 2017 at 14:13:55, erdosain9 wrote:

> Ok, thanks.
> But something more is wrong; look at this:
> 
> [root@squid ips]# squid -k restart
> squid: ERROR: Could not send signal 21 to process 8083: (3) No such process
> 
> [root@squid ips]# squid -k shutdown
> squid: ERROR: Could not send signal 15 to process 8083: (3) No such process
> 
> [root@squid ips]# squid -k kill
> squid: ERROR: Could not send signal 9 to process 8083: (3) No such process
> 
> [root@squid ips]# squid -k debug
> squid: ERROR: Could not send signal 12 to process 8083: (3) No such process
> 
> ..mmm... what's going on here???
> 
> But actually squid is running and working,

What does ps -ax tell you the process ID for it is?

I bet it's not 8083...

> Also, if I make a change in squid.conf it doesn't take it - neither with
> systemctl nor, as you can see, with any squid -k command

Sounds like a permissions problem to me - what are the ownerships and 
permissions on your squid.conf file, and on the Squid PID file?


Antony.

-- 
I want to build a machine that will be proud of me.

 - Danny Hillis, creator of The Connection Machine

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread Garri Djavadyan
On Fri, 2017-01-27 at 17:58 +0600, Yuri wrote:
> 
> 27.01.2017 17:54, Garri Djavadyan wrote:
> > On Fri, 2017-01-27 at 15:47 +0600, Yuri wrote:
> > > --2017-01-27 15:29:54--  https://www.microsoft.com/ru-kz/
> > > Connecting to 127.0.0.1:3128... connected.
> > > Proxy request sent, awaiting response...
> > > HTTP/1.1 200 OK
> > > Cache-Control: no-cache, no-store
> > > Pragma: no-cache
> > > Content-Type: text/html
> > > Expires: -1
> > > Server: Microsoft-IIS/8.0
> > > CorrelationVector: BzssVwiBIUaXqyOh.1.1
> > > X-AspNet-Version: 4.0.30319
> > > X-Powered-By: ASP.NET
> > > Access-Control-Allow-Headers: Origin, X-Requested-With,
> > > Content-
> > > Type,
> > > Accept
> > > Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
> > > Access-Control-Allow-Credentials: true
> > > P3P: CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD
> > > TAI
> > > TELo
> > > OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI"
> > > X-Frame-Options: SAMEORIGIN
> > > Vary: Accept-Encoding
> > > Content-Encoding: gzip
> > > Date: Fri, 27 Jan 2017 09:29:56 GMT
> > > Content-Length: 13322
> > > Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.1; domain=.microsoft.com;
> > > expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
> > > Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.2; domain=.microsoft.com;
> > > expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
> > > Strict-Transport-Security: max-age=0; includeSubDomains
> > > X-CCC: NL
> > > X-CID: 2
> > > X-Cache: MISS from khorne
> > > X-Cache-Lookup: MISS from khorne:3128
> > > Connection: keep-alive
> > > Length: 13322 (13K) [text/html]
> > > Saving to: 'index.html'
> > > 
> > > index.html  100%[==>]  13.01K --.-
> > > KB/sin
> > > 0s
> > > 
> > > 2017-01-27 15:29:57 (32.2 MB/s) - 'index.html' saved
> > > [13322/13322]
> > > 
> > > Can you explain me - for what static index.html has this:
> > > 
> > > Cache-Control: no-cache, no-store
> > > Pragma: no-cache
> > > 
> > > ?
> > > 
> > > What can be broken to ignore CC in this page?
> > 
> > Hi Yuri,
> > 
> > 
> > Why do you think the page returned for URL
> > [https://www.microsoft.com/ru-kz/] is static and not a dynamically
> > generated one?
> 
> And for me, what's the difference? Does it change anything? In
> addition, 
> it is easy to see on the page and even the eyes - strangely enough -
> to 
> open its code. And? What do you see there?

I see the official home page of Microsoft for the KZ region. The page is
full of javascript and product offers. It makes sense to expect that the
page could change fairly intensively.


> > The index.html file is default file name for wget.
> 
> And also the name of the default home page in the web. Imagine - I
> know 
> the obvious things. But the question was about something else.
> > 
> > man wget:
> >    --default-page=name
> > Use name as the default file name when it isn't known
> > (i.e., for
> > URLs that end in a slash), instead of index.html.
> > 
> > In fact the https://www.microsoft.com/ru-kz/index.html is a stub
> > page
> > (The page you requested cannot be found.).
> 
> You living in wrong region. This is geo-dependent page, as obvious,
> yes?

What I mean is that the pages https://www.microsoft.com/ru-kz/ and
https://www.microsoft.com/ru-kz/index.html are not the same. You can
easily confirm it.


> Again. What is the difference? I open it from different
> workstations, 
> from different browsers - I see the same thing. The code is
> identical. I 
> can is to cache? Yes or no?

I'm a new member of the Squid community (about 1 year). While following
the community activity I have found that you can't grasp the advantages of
HTTP/1.1 over HTTP/1.0 for caching systems - especially its ability to
_safely_ cache and serve the same amount (I believe even more) of
objects as HTTP/1.0 compliant caches do, while not breaking the internet.
The main tool of HTTP/1.1 compliant proxies is the _revalidation_ process.
HTTP/1.1 compliant caches like Squid tend to cache all possible objects
but later use revalidation for dubious requests. In fact revalidation is
not a costly process, especially using conditional GET requests.
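
For illustration, a revalidation is just a small conditional exchange; using
the test.bin object from the curl example elsewhere in this thread (the header
values are copied from there, so treat them as placeholders), it looks roughly
like:

  GET /test.bin HTTP/1.1
  Host: sandbox.comnet.local
  If-Modified-Since: Wed, 31 Aug 2016 19:00:00 GMT
  If-None-Match: "ea0cd5-40002-53b62b438ac00"

  HTTP/1.1 304 Not Modified
  ETag: "ea0cd5-40002-53b62b438ac00"
  Date: Fri, 27 Jan 2017 14:55:09 GMT

Only headers travel on the wire; the cached body is then served to the client,
which is what a TCP_REFRESH_UNMODIFIED entry in access.log indicates.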

I found that most of your complaints on the mailing list and in Bugzilla are
related to the HTTPS scheme. FYI: the primary tool (revalidation) does not
work for the HTTPS scheme in any current Squid branch at the moment.
See bug 4648.

Try to apply the proposed patch and update all related bug reports.

HTH


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Strange behavior - reload service failed, but not start....

2017-01-27 Thread erdosain9
Ok, thanks.
But something more is wrong; look at this:

[root@squid ips]# squid -k restart
squid: ERROR: Could not send signal 21 to process 8083: (3) No such process

[root@squid ips]# squid -k shutdown
squid: ERROR: Could not send signal 15 to process 8083: (3) No such process

[root@squid ips]# squid -k kill
squid: ERROR: Could not send signal 9 to process 8083: (3) No such process

[root@squid ips]# squid -k debug
squid: ERROR: Could not send signal 12 to process 8083: (3) No such process

..mmm... what's going on here???

But actually squid is running and working, so...
Also, if I make a change in squid.conf it doesn't take it - neither with
systemctl nor, as you can see, with any squid -k command







--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Strange-behavior-reload-service-failed-but-not-start-tp4681317p4681360.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread Yuri



27.01.2017 18:25, Antony Stone wrote:

On Friday 27 January 2017 at 13:15:21, Yuri wrote:


27.01.2017 18:05, Antony Stone wrote:


You're entitled to do whatever you want to, following standards and
recommendations or not - just don't complain when choosing not to follow
those standards and recommendations results in behaviour different from
what you wanted (or what someone else intended).

All this crazy debate reminds me of Microsoft Windows. Windows is better
to know why the administrator should not have full access. Windows is
better to know how to work. Windows is better to know how to tell the
system administrator so that he called the system administrator.

That should remind you of OS X and Android as well, at the very least (and
quite possibly systemd as well)

My opinion is that it's your choice whether to run Microsoft Windows (or Apple
OS X, or Google Android) or not - but you have to accept it as a whole
package; you can't say "I want some of the neat features, but I want them to
work *my* way".

If you don't accept all aspects of the package, then don't use it.
I just want to have a choice and an opportunity to say - "F*ck you, man, 
I'm the System Administrator".


If you do not want to violate the RFC - then remove HTTP violations support
entirely. If you remember, this mode is now enabled by default.


You do not have to tell me what to use. I am an administrator and wish to
be able to select my tools, and not be in a situation where the choice
is made for me.






Antonio, you've seen at least once, so I complained about the
consequences of my own actions?

You seem to continually complain that people are recommending not to try going
against standards, or trying to defeat the anti-caching directives on websites
you find.

It's your choice to try doing that; people are saying "but if you do that, bad
things will happen, or things will break, or it just won't work the way you
want it to", and then you say "but I don't like having to follow the rules".

That's what I meant about complaining about the consequences of your actions.
It is my right and my choice. Personally, I do not complain of the 
consequences, having enough tools to solve any problem.


Enough lecturing me. The OP asked why his static html was not cached. He
has been told that in fact there be dragons there, and why he is wrong in
his desire to cache anything and everything.



Antony.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread Antony Stone
On Friday 27 January 2017 at 13:15:21, Yuri wrote:

> 27.01.2017 18:05, Antony Stone wrote:
> 
> > You're entitled to do whatever you want to, following standards and
> > recommendations or not - just don't complain when choosing not to follow
> > those standards and recommendations results in behaviour different from
> > what you wanted (or what someone else intended).
> 
> All this crazy debate reminds me of Microsoft Windows. Windows is better
> to know why the administrator should not have full access. Windows is
> better to know how to work. Windows is better to know how to tell the
> system administrator so that he called the system administrator.

That should remind you of OS X and Android as well, at the very least (and 
quite possibly systemd as well)

My opinion is that it's your choice whether to run Microsoft Windows (or Apple 
OS X, or Google Android) or not - but you have to accept it as a whole 
package; you can't say "I want some of the neat features, but I want them to 
work *my* way".

If you don't accept all aspects of the package, then don't use it.

> Antonio, you've seen at least once, so I complained about the
> consequences of my own actions?

You seem to continually complain that people are recommending not to try going 
against standards, or trying to defeat the anti-caching directives on websites 
you find.

It's your choice to try doing that; people are saying "but if you do that, bad 
things will happen, or things will break, or it just won't work the way you 
want it to", and then you say "but I don't like having to follow the rules".

That's what I meant about complaining about the consequences of your actions.


Antony.

-- 
"Life is just a lot better if you feel you're having 10 [small] wins a day 
rather than a [big] win every 10 years or so."

 - Chris Hadfield, former skiing (and ski racing) instructor

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread Yuri



27.01.2017 18:05, Antony Stone wrote:

On Friday 27 January 2017 at 12:58:52, Yuri wrote:


Again. What is the difference? I open it from different workstations,
from different browsers - I see the same thing. The code is identical. I
can is to cache? Yes or no?

You're entitled to do whatever you want to, following standards and
recommendations or not - just don't complain when choosing not to follow those
standards and recommendations results in behaviour different from what you
wanted (or what someone else intended).
All this crazy debate reminds me of Microsoft Windows. Windows knows
better why the administrator should not have full access. Windows knows
better how things should work. Windows knows better than the person who
is merely called the system administrator.


Antonio, have you seen, even once, me complaining about the
consequences of my own actions?




Oh, and by the way, what did you mean earlier when you said:


You either wear pants or remove the cross, as they say.

?

This is the punchline of a good Russian joke about a priest who had sex.
I meant that one should either stop having sex - or take off the pectoral
cross. The point is the need to be consistent.



Antony.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread Antony Stone
On Friday 27 January 2017 at 12:58:52, Yuri wrote:

> Again. What is the difference? I open it from different workstations,
> from different browsers - I see the same thing. The code is identical. I
> can is to cache? Yes or no?

You're entitled to do whatever you want to, following standards and 
recommendations or not - just don't complain when choosing not to follow those 
standards and recommendations results in behaviour different from what you 
wanted (or what someone else intended).

Oh, and by the way, what did you mean earlier when you said:

> You either wear pants or remove the cross, as they say.

?


Antony.

-- 
"640 kilobytes (of RAM) should be enough for anybody."

 - Bill Gates

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread Yuri
I understand that you want to conclusively prove your case. But for the
sake of objectivity - are only dynamic pages dynamically generated? Maybe
the decision should still be left to the administrator? If I see that
something is broken, or users complain to me - has the *cache deny*
directive been cancelled already?



27.01.2017 17:54, Garri Djavadyan wrote:

On Fri, 2017-01-27 at 15:47 +0600, Yuri wrote:

--2017-01-27 15:29:54--  https://www.microsoft.com/ru-kz/
Connecting to 127.0.0.1:3128... connected.
Proxy request sent, awaiting response...
HTTP/1.1 200 OK
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Type: text/html
Expires: -1
Server: Microsoft-IIS/8.0
CorrelationVector: BzssVwiBIUaXqyOh.1.1
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Access-Control-Allow-Headers: Origin, X-Requested-With, Content-
Type,
Accept
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
Access-Control-Allow-Credentials: true
P3P: CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD TAI
TELo
OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI"
X-Frame-Options: SAMEORIGIN
Vary: Accept-Encoding
Content-Encoding: gzip
Date: Fri, 27 Jan 2017 09:29:56 GMT
Content-Length: 13322
Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.1; domain=.microsoft.com;
expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.2; domain=.microsoft.com;
expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
Strict-Transport-Security: max-age=0; includeSubDomains
X-CCC: NL
X-CID: 2
X-Cache: MISS from khorne
X-Cache-Lookup: MISS from khorne:3128
Connection: keep-alive
Length: 13322 (13K) [text/html]
Saving to: 'index.html'

index.html  100%[==>]  13.01K --.-KB/sin
0s

2017-01-27 15:29:57 (32.2 MB/s) - 'index.html' saved [13322/13322]

Can you explain me - for what static index.html has this:

Cache-Control: no-cache, no-store
Pragma: no-cache

?

What can be broken to ignore CC in this page?

Hi Yuri,


Why do you think the page returned for URL [https://www.microsoft.com/ru-kz/]
is static and not a dynamically generated one?

The index.html file is default file name for wget.

man wget:
   --default-page=name
Use name as the default file name when it isn't known (i.e., for
URLs that end in a slash), instead of index.html.

In fact the https://www.microsoft.com/ru-kz/index.html is a stub page
(The page you requested cannot be found.).


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread Yuri



27.01.2017 17:54, Garri Djavadyan wrote:

On Fri, 2017-01-27 at 15:47 +0600, Yuri wrote:

--2017-01-27 15:29:54--  https://www.microsoft.com/ru-kz/
Connecting to 127.0.0.1:3128... connected.
Proxy request sent, awaiting response...
HTTP/1.1 200 OK
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Type: text/html
Expires: -1
Server: Microsoft-IIS/8.0
CorrelationVector: BzssVwiBIUaXqyOh.1.1
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Access-Control-Allow-Headers: Origin, X-Requested-With, Content-
Type,
Accept
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
Access-Control-Allow-Credentials: true
P3P: CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD TAI
TELo
OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI"
X-Frame-Options: SAMEORIGIN
Vary: Accept-Encoding
Content-Encoding: gzip
Date: Fri, 27 Jan 2017 09:29:56 GMT
Content-Length: 13322
Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.1; domain=.microsoft.com;
expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.2; domain=.microsoft.com;
expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
Strict-Transport-Security: max-age=0; includeSubDomains
X-CCC: NL
X-CID: 2
X-Cache: MISS from khorne
X-Cache-Lookup: MISS from khorne:3128
Connection: keep-alive
Length: 13322 (13K) [text/html]
Saving to: 'index.html'

index.html  100%[==>]  13.01K --.-KB/sin
0s

2017-01-27 15:29:57 (32.2 MB/s) - 'index.html' saved [13322/13322]

Can you explain me - for what static index.html has this:

Cache-Control: no-cache, no-store
Pragma: no-cache

?

What can be broken to ignore CC in this page?

Hi Yuri,


Why do you think the page returned for URL [https://www.microsoft.com/ru-kz/]
is static and not a dynamically generated one?
And for me, what's the difference? Does it change anything? Besides, it
is easy enough to look at the page with your own eyes and - strangely
enough - to open its source code. And? What do you see there?


The index.html file is default file name for wget.
And it is also the default home page name on the web. Imagine that - I know
the obvious things. But the question was about something else.


man wget:
   --default-page=name
Use name as the default file name when it isn't known (i.e., for
URLs that end in a slash), instead of index.html.

In fact the https://www.microsoft.com/ru-kz/index.html is a stub page
(The page you requested cannot be found.).

You are in the wrong region. This is a geo-dependent page, obviously, yes?

Again: what is the difference? I open it from different workstations,
from different browsers - I see the same thing. The code is identical. So
can I cache it? Yes or no?



Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread Garri Djavadyan
On Fri, 2017-01-27 at 15:47 +0600, Yuri wrote:
> --2017-01-27 15:29:54--  https://www.microsoft.com/ru-kz/
> Connecting to 127.0.0.1:3128... connected.
> Proxy request sent, awaiting response...
>    HTTP/1.1 200 OK
>    Cache-Control: no-cache, no-store
>    Pragma: no-cache
>    Content-Type: text/html
>    Expires: -1
>    Server: Microsoft-IIS/8.0
>    CorrelationVector: BzssVwiBIUaXqyOh.1.1
>    X-AspNet-Version: 4.0.30319
>    X-Powered-By: ASP.NET
>    Access-Control-Allow-Headers: Origin, X-Requested-With, Content-
> Type, 
> Accept
>    Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
>    Access-Control-Allow-Credentials: true
>    P3P: CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD TAI
> TELo 
> OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI"
>    X-Frame-Options: SAMEORIGIN
>    Vary: Accept-Encoding
>    Content-Encoding: gzip
>    Date: Fri, 27 Jan 2017 09:29:56 GMT
>    Content-Length: 13322
>    Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.1; domain=.microsoft.com; 
> expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
>    Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.2; domain=.microsoft.com; 
> expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
>    Strict-Transport-Security: max-age=0; includeSubDomains
>    X-CCC: NL
>    X-CID: 2
>    X-Cache: MISS from khorne
>    X-Cache-Lookup: MISS from khorne:3128
>    Connection: keep-alive
> Length: 13322 (13K) [text/html]
> Saving to: 'index.html'
> 
> index.html  100%[==>]  13.01K --.-KB/sin
> 0s
> 
> 2017-01-27 15:29:57 (32.2 MB/s) - 'index.html' saved [13322/13322]
> 
> Can you explain me - for what static index.html has this:
> 
> Cache-Control: no-cache, no-store
> Pragma: no-cache
> 
> ?
> 
> What can be broken to ignore CC in this page?

Hi Yuri,


Why do you think the page returned for URL [https://www.microsoft.com/ru-kz/]
is static and not a dynamically generated one?

The index.html file is default file name for wget.

man wget:
  --default-page=name
   Use name as the default file name when it isn't known (i.e., for
   URLs that end in a slash), instead of index.html.

In fact the https://www.microsoft.com/ru-kz/index.html is a stub page
(The page you requested cannot be found.).


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] transparent http and https filter with white-list only

2017-01-27 Thread Sergey Klusov
Hello. I'm trying to get a transparent setup working that allows only certain
domains, and I have the problem that in order to allow https ("ssl_bump splice
allowed_domains") I have to "http_access allow all", thus allowing all
other http traffic through. Otherwise https traffic is not allowed at all.


Here is my config:

===config===
http_port 10.96.243.1:3128 intercept options=NO_SSLv3:NO_SSLv2
http_port 10.96.243.1:3130 options=NO_SSLv3:NO_SSLv2
https_port 10.96.243.1:3129 intercept ssl-bump 
options=ALL:NO_SSLv3:NO_SSLv2 connection-auth=off 
cert=/etc/squid/squidCA.pem

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 443 # https
acl CONNECT method CONNECT

acl http_allow dstdomain "/etc/squid/http_allow_domains.txt"
acl https_allow ssl::server_name "/etc/squid/https_allow_domains.txt"

sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER

acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice https_allow
ssl_bump terminate all

cache deny all

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager

http_access allow all http_allow
http_access allow all https_allow
http_access deny all

always_direct allow all

coredump_dir /var/spool/squid

refresh_pattern .   0   0%  0

logformat ssl %ts.%03tu %6tr %>a %la:%lp %Ss/%03>Hs %sni 
%ru %[un %Sh/%
access_log daemon:/var/log/squid/access.log logformat=ssl
cut==

files with domain names:
=
# cat http_allow_domains.txt
.google.com
# cat https_allow_domains.txt
.google.com
=

With this config, HTTP filtering works and a request for https://google.com 
gets replied to with a self-signed Squid deny message.
If I replace "http_access deny all" with "http_access allow all", HTTPS 
filtering starts working, allowing https://google.com and resetting 
other HTTPS requests, BUT it allows any HTTP traffic through as well!


What am I doing wrong?
I need my server to pass ONLY the HTTP domains listed in 
"/etc/squid/http_allow_domains.txt" and the HTTPS domains listed in 
"/etc/squid/https_allow_domains.txt".

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread Yuri



27.01.2017 9:10, Amos Jeffries wrote:

On 27/01/2017 9:46 a.m., Yuri Voinov wrote:


27.01.2017 2:44, Matus UHLAR - fantomas wrote:

26.01.2017 2:22, boruc wrote:

After a little bit of analyzing requests and responses with WireShark I
noticed that many sites that weren't cached had different
combination of
below parameters:

Cache-Control: no-cache, no-store, must-revalidate, post-check,
pre-check,
private, public, max-age, public
Pragma: no-cache

On 26.01.17 02:44, Yuri Voinov wrote:

If the webmaster has done this - he had good reason to. Trying to break
the RFC in this way, you break the Internet.

Actually, no. If the webmaster has done the above - he has no damn idea what
those mean (private and public?), and how to provide properly cacheable
content.

It was sarcasm.


You may have intended it to be. But you spoke the simple truth.

Other than 'public', there really are situations where there is "good reason"
to send that set of controls all at once.

For example: any admin who wants a RESTful or SaaS application to
actually work for all their potential customers.


I have been watching the below cycle take place for the past 20 years in
HTTP:

Webmaster: don't cache this please.

   "Cache-Control: no-store"

Proxy Admin: ignore-no-store


Webmaster: I meant it. Don't deliver anything you cached without fetching
an updated version.

   ... "no-store, no-cache"

Proxy Admin: ignore-no-cache


Webmaster: Really, you MUST revalidate before using this data.

  ... "no-store, no-cache, must-revalidate"

Proxy Admin: ignore-must-revalidate


Webmaster: Really I meant it. This is non-storable PRIVATE DATA!

... "no-store, no-cache, must-revalidate, private"

Proxy Admin: ignore-private


Webmaster: Seriously. I'm changing it on EVERY request! Don't store it.

... "no-store, no-cache, must-revalidate, private, max-age=0"
"Expires: -1"

Proxy Admin: ignore-expires


Webmaster: are you one of those dumb HTTP/1.0 proxies that don't
understand Cache-Control?

"Pragma: no-cache"
"Expires: 1 Jan 1970"

Proxy Admin: hehe! I already ignore-no-cache ignore-expires


Webmaster: F*U!  May your clients batch up their traffic to slam you
with it all at once!

... "no-store, no-cache, must-revalidate, private, max-age=0,
pre-check=1, post-check=1"


Proxy Admin: My bandwidth! I need to cache more!

Webmaster: Doh! Oh well, so I have to write my application to force new
content then.

Proxy Admin: ignore-reload


Webmaster: Now what? Oh, HTTPS won't have any damn proxies in the way

... the cycle repeats again within HTTPS. Took all of 5 years this time.

... the cycle repeats again within SPDY. That took only ~1 year.

... the cycle repeats again within CoAP. The standards are not even
finished yet and it's already underway.


Stop this cycle of stupidity. It really HAS "broken the Internet".

All that would be just great if webmasters were conscientious. I will 
give just one example.


Only one example.

root @ khorne /patch # wget -S http://www.microsoft.com
--2017-01-27 15:29:54--  http://www.microsoft.com/
Connecting to 127.0.0.1:3128... connected.
Proxy request sent, awaiting response...
  HTTP/1.1 302 Found
  Server: AkamaiGHost
  Content-Length: 0
  Location: http://www.microsoft.com/ru-kz/
  Date: Fri, 27 Jan 2017 09:29:54 GMT
  X-CCC: NL
  X-CID: 2
  X-Cache: MISS from khorne
  X-Cache-Lookup: MISS from khorne:3128
  Connection: keep-alive
Location: http://www.microsoft.com/ru-kz/ [following]
--2017-01-27 15:29:54--  http://www.microsoft.com/ru-kz/
Reusing existing connection to 127.0.0.1:3128.
Proxy request sent, awaiting response...
  HTTP/1.1 301 Moved Permanently
  Server: AkamaiGHost
  Content-Length: 0
  Location: https://www.microsoft.com/ru-kz/
  Date: Fri, 27 Jan 2017 09:29:54 GMT
  Set-Cookie: 
akacd_OneRF=1493285394~rv=7~id=6a2316770abdbb58a85c16676a0f84fd; path=/; 
Expires=Thu, 27 Apr 2017 09:29:54 GMT

  X-CCC: NL
  X-CID: 2
  X-Cache: MISS from khorne
  X-Cache-Lookup: MISS from khorne:3128
  Connection: keep-alive
Location: https://www.microsoft.com/ru-kz/ [following]
--2017-01-27 15:29:54--  https://www.microsoft.com/ru-kz/
Connecting to 127.0.0.1:3128... connected.
Proxy request sent, awaiting response...
  HTTP/1.1 200 OK
  Cache-Control: no-cache, no-store
  Pragma: no-cache
  Content-Type: text/html
  Expires: -1
  Server: Microsoft-IIS/8.0
  CorrelationVector: BzssVwiBIUaXqyOh.1.1
  X-AspNet-Version: 4.0.30319
  X-Powered-By: ASP.NET
  Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, 
Accept

  Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
  Access-Control-Allow-Credentials: true
  P3P: CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD TAI TELo 
OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI"

  X-Frame-Options: SAMEORIGIN
  Vary: Accept-Encoding
  Content-Encoding: gzip
  Date: Fri, 27 Jan 2017 09:29:56 GMT
  Content-Length: 13322
  Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.1; domain=.microsoft.com; 
expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
  Set-Cookie: MS-CV=BzssV

Re: [squid-users] squid on it's own server

2017-01-27 Thread Antony Stone
On Friday 27 January 2017 at 05:17:28, John Pearson wrote:

> hi all, my current setup: laptop(10.0.1.10) and squid-box(10.0.1.11) and
> debian router(10.0.1.1).
> 
> I am doing wget on laptop
> 
> wget squid-cache.org
> 
> I am redirecting packets on the router to squid-box by changing the
> destination MAC address

Well, that's a novel way of doing policy routing...

> and destination IP and port address.

Oh dear.

> I am able to see the packets reaching the squid-box and in squid log I am
> seeing many
> 
> 10.0.1.11 TCP_MISS/503 47502 GET http://squid-cache.org/ - ORIGINAL_DST/
> 10.0.1.11 text/html
> 
> The log stream is really fast. All I see on laptop is “HTTP request sent,
> awaiting response …" Any advice? thanks!

Yes, do NOT change the destination IP address on ANY machine except the one 
which Squid is running on.

See http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect and pay 
attention to the part which says "This configuration is given for use *on the 
squid box*."

Get the packets *to* that box however you like, but don't change them along 
the way.
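
A minimal, hedged sketch of the idea (the interface name and intercept port
below are assumptions; the wiki page above is the authoritative version). The
router only routes the packets towards the Squid box without touching any
addresses; on the Squid box itself, something like:

# run on the Squid box only, assuming clients arrive on eth0 and squid.conf
# has a matching "http_port 3129 intercept" line for the redirected traffic
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3129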


Antony.

-- 
It may not seem obvious, but (6 x 5 + 5) x 5 - 55 equals 5!

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid reverse proxy (accelerator) for MS Exchange OWA

2017-01-27 Thread Vieri




- Original Message -
From: Alex Rousskov 

>> It's interesting to note that the following actually DOES give more
>> information (unsupported protocol):
>
> * If the server sent nothing, then Curl gave you potentially incorrect
> information (i.e., Curl is just _guessing_ what went wrong).


I never tried telling Squid to use TLS 1.1 ONLY, so I never got to see Squid's 
log when using that protocol. I'm supposing I would have seen the same thing in 
Squid as I've seen with cURL.
So I'm sure Squid would log useful information for the sysadmin but... (see 
below).

>> Maybe if Squid gets an SSL negotiation error with no apparent reason
>> then it might need to retry connecting by being more explicit, just
>> like in my cURL and openssl binary examples above.
>
> Sorry, I do not know what "retry connecting by being more explicit"
> means. AFAICT, neither Curl nor s_client tried reconnecting in your
> examples. Also, an appropriate default for a command-line client is
> often a bad default for a proxy. It is complicated.


Let me rephrase my point, but please keep in mind that I have no idea how Squid 
actually behaves. Simply put, when Squid tries to connect for the first time, 
it will probably (I'm guessing here) try the most secure protocol known today 
(i.e. TLS 1.2), or let OpenSSL decide by default, which is probably the same. In 
my case, the server replies with nothing. That would be like running:

# curl -k -v https://10.215.144.21
or
# openssl s_client -connect 10.215.144.21:443

They give me the same information as Squid's log... almost nothing.

So my point is, if that first connection fails and gives me nothing for TLS 1.2 
(or whatever the default is), two things can happen: either the remote site is 
failing or it doesn't support the protocol. Why not "try again", but this time 
being more specific? It would be like doing something like this:

# openssl s_client -connect 10.215.144.21:443 || openssl s_client -connect 
10.215.144.21:443 -tls1_1 || openssl s_client -connect 10.215.144.21:443 -tls1
 

Of course, this shouldn't be done each and every time it tries to connect 
because it would probably cause performance issues. If Squid successfully 
connects with TLS 1.0 then it could "remember" that for later connections to 
the same peer. It could also forget it after a sensible timeout, in case the 
remote peer starts supporting a safer protocol.

> Agreed in general, but the devil is in the details. Improving this is
> difficult, and nobody is working on it at the moment AFAIK.


I can imagine it must be difficult...


Instead of improving the source code, maybe a FAQ or some doc related to "squid 
error negotiating SSL" would help, describing what to try when the error message 
is a mere "handshake failure". In the end, it's as simple as setting ssloptions 
correctly (in my case, NO_SSLv3,NO_SSLv2,NO_TLSv1_2,NO_TLSv1_1). I know there 
could be many other reasons for such a failure, but at least that would be a 
good starting point.
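
As a hedged illustration only (the peer address is taken from the examples
above, but the rest of the line, including the name= label, is an assumption
rather than a verified working config), the relevant cache_peer setting could
look roughly like:

# sketch: force the peer connection down to TLS 1.0 by disabling everything else
cache_peer 10.215.144.21 parent 443 0 no-query originserver ssl ssloptions=NO_SSLv2,NO_SSLv3,NO_TLSv1_1,NO_TLSv1_2 name=owa_peer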


Or even better... if Squid detects an SSL handshake failure with no extra info 
like in my case, can't it simply log an extra string that would look something 
like "Failed to negotiate SSL for unknown reason. Try setting ssloptions 
(cache_peer) or options (https_port) with a combination of NO_SSLv2 NO_SSLv3 
NO_TLSv1 NO_TLSv1_1 NO_TLSv1_2. Find out which SSL protocol is supported by the 
remote peer. If the connection still fails then you will need to analyze 
traffic with the peer to find out the reason."

In my case, that would have been enough info in Squid's log to fix the issue.

Thanks again.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users