Re: [squid-users] Caching http google deb files

2016-10-22 Thread Eliezer Croitoru
Well, you are right about that, but for me it's simpler to write an ICAP
service to do that than to hack the Squid code.
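
For reference, the squid.conf side of wiring in such a service would look
roughly like this (the service name, vectoring point and ICAP URI below are
placeholders for whatever the local service actually implements):

 icap_enable on
 icap_service vary_mangler respmod_precache icap://127.0.0.1:1344/vary
 adaptation_access vary_mangler allow all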

Eliezer


Eliezer Croitoru  
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il
 

From: Heiler Bemerguy [mailto:heiler.bemer...@cinbesa.com.br] 
Sent: Saturday, October 22, 2016 21:18
To: Eliezer Croitoru 
Cc: squid-us...@squid-cache.org
Subject: Re: [squid-users] Caching http google deb files


Hi Eliezer
I've never used ICAP, and I think hacking the code is way faster than
creating/using a separate service for that. And I'm not sure, but I don't
think I can manage to get this done with the current Squid options.
This patch will make Squid NOT ignore objects in replies with "Vary: *";
it will consider them valid and cacheable.
And it will only treat as valid those Vary options that begin with
"accept", or the "user-agent" one.

-- 
Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751
On 21/10/2016 16:07, Eliezer Croitoru wrote:
Instead of modifying the code, would you consider using an ICAP service
that will mangle this?
I am unsure about the risks of doing so, but why patch the sources if you
can resolve it with the current mainstream capabilities and API?

Eliezer


Eliezer Croitoru 
  
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il  
 

From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
Behalf Of Heiler Bemerguy
Sent: Friday, October 21, 2016 18:21
To: squid-us...@squid-cache.org  
Subject: Re: [squid-users] Caching http google deb files


Hello,
I've limited the "vary" usage and gained some hits by making these
modifications to the http.cc code (original lines commented out, new
checks added):
while (strListGetItem(&vary, ',', &item, &ilen, &pos)) {
    SBuf name(item, ilen);
    if (name == asterisk) {
        /* original code: discard the whole vary mark on "Vary: *"
        vstr.clear();
        break; */
        continue;   /* changed: just skip the "*" entry */
    }
    name.toLower();

    /* added: only keep Vary entries beginning with "accept", or "user-agent" */
    if (name.cmp("accept", 6) != 0 &&
        name.cmp("user-agent", 10) != 0)
        continue;

    if (!vstr.isEmpty())
        vstr.append(", ", 2);
    vstr.append(name);
    /* ... rest of the loop body unchanged ... */




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching http google deb files

2016-10-22 Thread Heiler Bemerguy


Hi Eliezer

I've never used ICAP, and I think hacking the code is way faster than 
creating/using a separate service for that. And I'm not sure, but I 
don't think I can manage to get this done with the current Squid options.


This patch will make Squid NOT ignore objects in replies with "Vary: *"; 
it will consider them valid and cacheable.
And it will only treat as valid those Vary options that begin with 
"accept", or the "user-agent" one.



--
Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751

On 21/10/2016 16:07, Eliezer Croitoru wrote:

Instead of modifying the code, would you consider using an ICAP service
that will mangle this?
I am unsure about the risks of doing so, but why patch the sources if you
can resolve it with the current mainstream capabilities and API?

Eliezer


Eliezer Croitoru 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il
  


From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
Behalf Of Heiler Bemerguy
Sent: Friday, October 21, 2016 18:21
To: squid-us...@squid-cache.org
Subject: Re: [squid-users] Caching http google deb files


Hello,
I've limited the "vary" usage and gained some hits by making these
modifications to the http.cc code (original lines commented out, new
checks added):
 while (strListGetItem(&vary, ',', &item, &ilen, &pos)) {
     SBuf name(item, ilen);
     if (name == asterisk) {
         /* original code: discard the whole vary mark on "Vary: *"
         vstr.clear();
         break; */
         continue;   /* changed: just skip the "*" entry */
     }
     name.toLower();

     /* added: only keep Vary entries beginning with "accept", or "user-agent" */
     if (name.cmp("accept", 6) != 0 &&
         name.cmp("user-agent", 10) != 0)
         continue;

     if (!vstr.isEmpty())
         vstr.append(", ", 2);
     vstr.append(name);
     /* ... rest of the loop body unchanged ... */





___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] possible to intercept https traffic in TCP_TUNNEL CONNECT method ?

2016-10-22 Thread Antony Stone
On Saturday 22 October 2016 at 15:42:23, --Ahmad-- wrote:

> Hi guys
> say that I have a Squid proxy server
> and I was capturing traffic on that server.

You mean using an ICAP or eCAP service?

> say that all users were using ip:port —> ((tcp_connect tunnel)) mode of
> squid

I'm not sure what you mean here - are you saying the clients are configured to 
use the proxy, or that the proxy is operating in intercept mode, and the 
clients don't know?

> the question being asked here is … will I be able to see HTTPS traffic,
> like Facebook, as normal traffic? or encrypted?

You can always see the encrypted traffic - you don't need Squid for that - just 
run tcpdump, wireshark or similar on a router between your clients and the 
Internet.  Encrypted traffic isn't going to tell you much, though.

> to put the question another way … is it possible to hack HTTPS traffic
> and see it unencrypted?

Yes - you perform a Man-in-the-Middle attack, which requires configuring the 
clients to accept fake certificates from Squid by trusting its built-in 
Certificate Authority.  In other words, you cannot do it without clients 
knowing that the certificate presented by Squid does not belong to the site 
they're visiting.

Also, all technical possibilities aside, it may well be illegal for you to do 
this, depending on where you are and who your users are.

See http://wiki.squid-cache.org/Features/SslPeekAndSplice and 
http://wiki.squid-cache.org/SquidFaq/ContentAdaptation for more details.
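
As a minimal sketch (not a recommendation), the bumping side of such a
setup would look roughly like this in squid.conf; the certificate paths
are placeholders, and ca.pem is the CA certificate the clients must be
told to trust:

 http_port 3128 ssl-bump generate-host-certificates=on \
   key=/etc/squid/ssl_cert/ca.key cert=/etc/squid/ssl_cert/ca.pem
 sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/ssl_db -M 16MB
 ssl_bump bump all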


Antony.

-- 
"Life is just a lot better if you feel you're having 10 [small] wins a day 
rather than a [big] win every 10 years or so."

 - Chris Hadfield, former skiing (and ski racing) instructor

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Peeking on TLS traffic: unknown cipher returned

2016-10-22 Thread James Lay
Excellent...glad it worked.
James
On Sat, 2016-10-22 at 10:35 -0300, Leandro Barragan wrote:
> Thanks a lot James, compiling Squid 3.5.22 using that specific commit
> of LibreSSL worked like a charm! I no longer have those "unknown cipher
> returned" errors. I do have some errors with a tiny number of sites,
> but I suppose it's because of server-side misconfigurations that
> LibreSSL simply doesn't like.
> 
> 
> On 21 October 2016 at 13:01, James Lay wrote:
> > 
> > On 2016-10-21 09:58, Leandro Barragan wrote:
> > > 
> > > 
> > > James, thanks for your advice! I've read your email on this list
> > > about LibreSSL. I tried to compile Squid with LibreSSL in the first
> > > place because of what you wrote about ChaCha20. But unfortunately, I
> > > couldn't; compilation stopped because of some obscure error.
> > > 
> > > Do you remember what version of Squid and LibreSSL you used? BTW I
> > > tried with OpenSSL 1.0.2g applying the CloudFlare ChaCha20 patch, but
> > > it doesn't work either; same error (unknown cipher).
> > > 
> > > Thanks!
> > > 
> > > On 21 October 2016 at 10:55, James Lay wrote:
> > > > 
> > > > 
> > > > On 2016-10-20 20:15, Leandro Barragan wrote:
> > > > > 
> > > > > 
> > > > > 
> > > > > Thanks for your time Alex! I modified my original config based
> > > > > on Amos' recommendations, so I think now I have a more consistent
> > > > > peek & splice config:
> > > > > 
> > > > >  acl TF ssl::server_name_regex -i facebook fbcdn twitter reddit
> > > > >  ssl_bump peek all
> > > > >  ssl_bump terminate TF
> > > > >  ssl_bump splice all
> > > > > 
> > > > > As you mentioned, terminate closes the connection, it doesn't
> > > > > serve an error page (when it works, i.e. with reddit and twitter).
> > > > > 
> > > > > I've compiled Squid 3.5.22 using OpenSSL 1.0.2j and I'm having the
> > > > > exact same issue, even with this new config. Based on what you
> > > > > explained, I think it's an OpenSSL problem and Squid can't do
> > > > > anything about it. I have two reasons to believe that:
> > > > > 
> > > > > 1) The "unknown cipher returned" error gets triggered on
> > > > > terminated and non-terminated (e.g. microsoft.com) sites, which
> > > > > makes me think it has nothing to do with Squid ACLs.
> > > > > 2) All problematic sites use a new cipher called "ChaCha20" (e.g.
> > > > > TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 according to Qualys'
> > > > > online analyzer and the TestSSLServer tool).
> > > > > 
> > > > > A lot of sites are using this new cipher. I'm back at the
> > > > > beginning; I will continue trying to compile Squid with patched
> > > > > versions of OpenSSL or LibreSSL.
> > > > > 
> > > > > Thanks!
> > > > > 
> > > > > On 20 October 2016 at 01:01, Alex Rousskov wrote:
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > On 10/19/2016 12:44 AM, Leandro Barragan wrote:
> > > > > > 
> > > > > > > > error:140920F8:SSL routines:SSL3_GET_SERVER_HELLO:unknown
> > > > > > > > cipher returned (1/-1/0)
> > > > > > 
> > > > > > > I fail to see why this is happening. I only need to peek on the
> > > > > > > connection and make a decision based on SNI,
> > > > > > 
> > > > > > Please note that "peek and make a decision based on SNI" is not
> > > > > > what your configuration tells Squid to do. Your configuration
> > > > > > tells Squid to peek during step2, which means making a decision
> > > > > > based on server certificates (and SNI).
> > > > > > 
> > > > > > > I'm not Bumping, so I
> > > > > > > don't understand why ciphers matter in my situation.
> > > > > > 
> > > > > > The ciphers matter because Squid v3 uses OpenSSL parsers during
> > > > > > step1, step2, and step3. FWIW, Squid v4 uses OpenSSL parsers
> > > > > > during step2 (a little) and step3. It is possible to completely
> > > > > > remove OpenSSL from step2 but there is currently no project to
> > > > > > do that AFAIK.
> > > > > > 
> > > > > > > > ssl_bump peek all step1
> > > > > > > > ssl_bump peek all step2
> > > > > > > > ssl_bump terminate face step3
> > > > > > > > ssl_bump terminate twitter step3
> > > > > > > > ssl_bump splice all step3
> > > > > > 
> > > > > > BTW, "step1", "step2", and "step3" ACLs do nothing useful in the
> > > > > > above config. You can safely remove them to arrive at the
> > > > > > equivalent ssl_bump configuration.
> > > > > > 
> > > > > > On 10/19/2016 07:42 AM, Amos Jeffries wrote:

Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-22 Thread Yuri Voinov



On 22.10.2016 19:32, gar...@comnet.uz wrote:
> On 2016-10-22 17:56, Antony Stone wrote:
>> Disclaimer: I am not a Squid developer.
>>
>> On Saturday 22 October 2016 at 14:43:55, gar...@comnet.uz wrote:
>>
>>> IMO:
>>>
>>> The only reason that I believe [explains] why Squid's core developers
>>> tend to move HTTP-violating settings away from average users is to
>>> prevent possible abuse/misuse.
>>
>> I believe the reason is that one of Squid's goals is to be RFC compliant,
>> therefore it does not contain features which violate HTTP.
>>
>>> Nevertheless, I believe that the core developers should publish an
>>> _official_ explanation regarding the tendency, as it often becomes the
>>> "center of gravity" of many topics.
>>
>> Which "tendency"?
>>
>> What are you asking for an official explanation of?
>>
>>
>> Antony.
>
> Since I started using Squid, its configuration has always been RFC
compliant by default, _but_ there were always knobs for users to make it
violate HTTP. It was in the hands of users to decide how to handle a web
resource. Now that is not always possible, and this topic is evidence of
it. For example, in terms of this topic, users can't violate this RFC
statement [1]:
>
>A Vary field value of "*" signals that anything about the request
>might play a role in selecting the response representation, possibly
>including elements outside the message syntax (e.g., the client's
>network address).  A recipient will not be able to determine whether
>this response is appropriate for a later request without forwarding
>the request to the origin server.  A proxy MUST NOT generate a Vary
>field with a "*" value.
>
> [1] https://tools.ietf.org/html/rfc7231#section-7.1.4
Well, what of it? The RFC authors got good money from Google for ignoring
caching-level standards, because that is profitable for Google. "Hey,"
they say, "these dumb bastards all have unlimited internet! Let them pay!"

And Google is not the only example in this case. I have seen, for
example, a http://www.example.com/big_fucking_favicon.ico?null=0 design,
where the size of the icon was hundreds of kilobytes! How about that? Do
not tell me that this is required for the functioning of the site - could
there be code in the picture?

So what's the bottom line? Let's keep riding on horseback, dressed in
white, and praying to the RFCs!

> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] possible to intercept https traffic in TCP_TUNNEL CONNECT method ?

2016-10-22 Thread --Ahmad--
Hi guys,
say that I have a Squid proxy server
and I was capturing traffic on that server.

say that all users were using ip:port —> ((tcp_connect tunnel)) mode of squid

the question being asked here is … will I be able to see HTTPS traffic,
like Facebook, as normal traffic?
or encrypted?


to put the question another way … is it possible to hack HTTPS traffic and
see it unencrypted?


cheers

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Peeking on TLS traffic: unknown cipher returned

2016-10-22 Thread Leandro Barragan
Thanks a lot James, compiling Squid 3.5.22 using that specific commit
of LibreSSL worked like a charm! I no longer have those "unknown cipher
returned" errors. I do have some errors with a tiny number of sites,
but I suppose it's because of server-side misconfigurations that
LibreSSL simply doesn't like.


On 21 October 2016 at 13:01, James Lay  wrote:
> On 2016-10-21 09:58, Leandro Barragan wrote:
>>
>> James, thanks for your advice! I've read your email on this list about
>> LibreSSL. I tried to compile Squid with LibreSSL in the first place
>> because of what you wrote about ChaCha20. But unfortunately, I
>> couldn't; compilation stopped because of some obscure error.
>>
>> Do you remember what version of Squid and LibreSSL you used? BTW I
>> tried with OpenSSL 1.0.2g applying the CloudFlare ChaCha20 patch, but
>> it doesn't work either; same error (unknown cipher).
>>
>> Thanks!
>>
>> On 21 October 2016 at 10:55, James Lay  wrote:
>>>
>>> On 2016-10-20 20:15, Leandro Barragan wrote:


 Thanks for your time Alex! I modified my original config based on Amos'
 recommendations, so I think now I have a more consistent peek & splice
 config:

  acl TF ssl::server_name_regex -i facebook fbcdn twitter reddit
  ssl_bump peek all
  ssl_bump terminate TF
  ssl_bump splice all

 As you mentioned, terminate closes the connection, it doesn't serve an
 error page (when it works, i.e. with reddit and twitter).

 I've compiled Squid 3.5.22 using OpenSSL 1.0.2j and I'm having the
 exact same issue, even with this new config. Based on what you
 explained, I think it's an OpenSSL problem and Squid can't do anything
 about it. I have two reasons to believe that:

 1) The "unknown cipher returned" error gets triggered on terminated
 and non-terminated (e.g. microsoft.com) sites, which makes me think it
 has nothing to do with Squid ACLs.
 2) All problematic sites use a new cipher called "ChaCha20" (e.g.
 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 according to Qualys'
 online analyzer and the TestSSLServer tool).

 A lot of sites are using this new cipher. I'm back at the beginning, I
 will continue trying to compile Squid with patched versions of OpenSSL
 or LibreSSL.

 Thanks!

 On 20 October 2016 at 01:01, Alex Rousskov
  wrote:
>
>
> On 10/19/2016 12:44 AM, Leandro Barragan wrote:
>
>>> error:140920F8:SSL routines:SSL3_GET_SERVER_HELLO:unknown cipher
>>> returned (1/-1/0)
>
>
>
>> I fail to see why this is happening. I only need to peek on the
>> connection and make a decision based on SNI,
>
>
>
> Please note that "peek and make a decision based on SNI" is not what
> your configuration tells Squid to do. Your configuration tells Squid to
> peek during step2, which means making a decision based on server
> certificates (and SNI).
>
>
>> I'm not Bumping, so I
>> don't understand why ciphers matter in my situation.
>
>
>
> The ciphers matter because Squid v3 uses OpenSSL parsers during step1,
> step2, and step3. FWIW, Squid v4 uses OpenSSL parsers during step2 (a
> little) and step3. It is possible to completely remove OpenSSL from
> step2 but there is currently no project to do that AFAIK.
>
>
>>> ssl_bump peek all step1
>>> ssl_bump peek all step2
>>> ssl_bump terminate face step3
>>> ssl_bump terminate twitter step3
>>> ssl_bump splice all step3
>
>
>
> BTW, "step1", "step2", and "step3" ACLs do nothing useful in the above
> config. You can safely remove them to arrive at the equivalent ssl_bump
> configuration.
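
For reference, a minimal sketch of that equivalent configuration, keeping
the ACL names from the quoted config:

 ssl_bump peek all
 ssl_bump terminate face
 ssl_bump terminate twitter
 ssl_bump splice all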
>
>
> On 10/19/2016 07:42 AM, Amos Jeffries wrote:
>>
>>
>> Terminate means impersonating the server and responding to the client
>> with an HTTPS error page.
>
>
>
> Terminate means "close client and server connections immediately". The
> problem is not with the terminate action but with peeking (which relies
> on OpenSSL, especially during step2, especially in Squid v3).
>
>
> HTH,
>
> Alex.
>>>
>>>
>>>
>>> FWIW I've had great success with the git version of LibreSSL and using
>>> the below:
>>>
>>> ./configure --prefix=/opt/libressl
>>>
>>> and for squid:
>>>
>>> ./configure --prefix=/opt --with-openssl=/opt/libressl --enable-ssl
>>> --enable-ssl-crtd
>>>
>>> James
>
>
> I'm currently using squid-3.5.22 and the LibreSSL git commit below:
>
> commit b7ba692f72f232602efb3e720ab0510406bae69c
> Author: Brent Cook 
> Date:   Wed Sep 14 23:40:10 2016 -0500
>
> What's the error you're getting when you try and compile?
>
>
> James
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> 

Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-22 Thread garryd

On 2016-10-22 17:56, Antony Stone wrote:

Disclaimer: I am not a Squid developer.

On Saturday 22 October 2016 at 14:43:55, gar...@comnet.uz wrote:


IMO:

The only reason that I believe [explains] why Squid's core developers tend
to move HTTP-violating settings away from average users is to prevent
possible abuse/misuse.


I believe the reason is that one of Squid's goals is to be RFC compliant,
therefore it does not contain features which violate HTTP.


Nevertheless, I believe that the core developers should publish an
_official_ explanation regarding the tendency, as it often becomes the
"center of gravity" of many topics.


Which "tendency"?

What are you asking for an official explanation of?


Antony.


Since I started using Squid, its configuration has always been RFC 
compliant by default, _but_ there were always knobs for users to make it 
violate HTTP. It was in the hands of users to decide how to handle a web 
resource. Now that is not always possible, and this topic is evidence of 
it. For example, in terms of this topic, users can't violate this RFC 
statement [1]:


   A Vary field value of "*" signals that anything about the request
   might play a role in selecting the response representation, possibly
   including elements outside the message syntax (e.g., the client's
   network address).  A recipient will not be able to determine whether
   this response is appropriate for a later request without forwarding
   the request to the origin server.  A proxy MUST NOT generate a Vary
   field with a "*" value.

[1] https://tools.ietf.org/html/rfc7231#section-7.1.4
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-22 Thread Yuri Voinov

I will explain why I am extremely outraged by this position. Every
single major player - both the web companies and the suppliers of caching
solutions (BlueCoat, ThunderCache etc.) - violates the RFCs to one degree
or another. And the developers' position is to be paladins in white robes
who strictly follow these _recommendations_ (not standards, please note!)
and to be holier than the Pope, even at the expense of their users!

And, strangest of all, this position enjoys the most support among the
users.

At the same time everybody forgets one simple thing: traffic is money. A
lot of money. Almost nowhere is there any truly unlimited Internet, and
we - the users - are paying money for it. And because of the position of
the developers, we lose money. We are told: "Relax, you can always make a
fork. Or you can always build whatever crutches you like. This is open
source, baby!" We can - and we do. But that is not a solution. That is
disregarding the problem.

Yes, we can make a fork. Yes, we can buy a commercial solution. But then
the question arises: why does Squid exist at all? For pathos? Or as a
source of commercial forks?

The trend is that whoever can violate the RFCs with impunity makes a lot
of money, while the rest of us calm ourselves that this is the standard
and everyone is required to follow it. Go on! Most people believe that
Squid is worth nothing as a caching proxy! And they are right. Vanilla
Squid achieves no more than a 10% byte hit ratio, with increasing latency.
Yes, I know that it is not currently marketed as a caching proxy. In that
case I'll just take another proxy, without the useless features - features
that cannot work without breaking the RFC recommendations are simply not
needed.

On 22.10.2016 18:56, Antony Stone wrote:
> Disclaimer: I am not a Squid developer.
>
> On Saturday 22 October 2016 at 14:43:55, gar...@comnet.uz wrote:
>
>> IMO:
>>
>> The only reason that I believe [explains] why Squid's core developers
>> tend to move HTTP-violating settings away from average users is to
>> prevent possible abuse/misuse.
>
> I believe the reason is that one of Squid's goals is to be RFC compliant,
> therefore it does not contain features which violate HTTP.
>
>> Nevertheless, I believe that the core developers should publish an
>> _official_ explanation regarding the tendency, as it often becomes the
>> "center of gravity" of many topics.
>
> Which "tendency"?
>
> What are you asking for an official explanation of?
>
>
> Antony.
>




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-22 Thread Antony Stone
Disclaimer: I am not a Squid developer.

On Saturday 22 October 2016 at 14:43:55, gar...@comnet.uz wrote:

> IMO:
> 
> The only reason that I believe [explains] why Squid's core developers
> tend to move HTTP-violating settings away from average users is to
> prevent possible abuse/misuse.

I believe the reason is that one of Squid's goals is to be RFC compliant, 
therefore it does not contain features which violate HTTP.

> Nevertheless, I believe that the core developers should publish an
> _official_ explanation regarding the tendency, as it often becomes the
> "center of gravity" of many topics.

Which "tendency"?

What are you asking for an official explanation of?


Antony.

-- 
"640 kilobytes (of RAM) should be enough for anybody."

 - Bill Gates

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-22 Thread Yuri Voinov



On 22.10.2016 18:43, gar...@comnet.uz wrote:
> On 2016-10-22 16:05, Yuri Voinov wrote:
>> Good explanations do not always help to get a good solution. A person
>> needs not an explanation but a solution.
>>
>> So far I've seen a lot of excellent reasons why Squid cannot do
>> so-and-so in the normal configuration. However, these explanations do
>> not help in solving the problems.
>>
>> Nothing personal, just an observation.
>
> IMO:
>
> The only reason that I believe explains why Squid's core developers tend
> to move HTTP-violating settings away from average users is to prevent
> possible abuse/misuse. Options like 'refresh_pattern ... ignore-vary' can
> severely affect the browsing experience if used by people without enough
> knowledge of the HTTP protocol(s). The abuse can easily compromise the
> reputation of the Squid software.
>
> Fortunately, the license of Squid permits modification of the software.
> There are many ways to get desired, not yet implemented features into
> Squid:
>
> * A group of enthusiasts can easily make a fork project, name it
> "Humboldt", for example, and implement options like 'refresh_pattern ...
> ignore-vary' and 'host_forgery_verification off'. For example, some time
> ago there was the Lusca project, which implemented address spoofing
> (like TProxy) for BSD systems (among other features). The feature was
> highly demanded and the Squid project later implemented it for BSD
> systems as well. Now Lusca is not so popular.
>
> * Commercial organizations like ISPs or any other enterprise can hire a
> developer to implement the options.
>
> * Many system administrators with programming skills can successfully
> modify the Squid sources to reach the goal. The squid-users list and
> bugzilla remember those success stories.
>
> Nevertheless, I believe that the core developers should publish an
> _official_ explanation regarding the tendency, as it often becomes the
> "center of gravity" of many topics.
I do not think anyone is authorized to make official statements on behalf
of the developers, nor on behalf of the community as a whole.
>
>
> Garri
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
Who would argue? You can always make a fork. The result is just another
reinvented bicycle, yet another Linux distribution number 2049. I do not
think that is something any open community should aspire to.

And the trend is obvious. Large Internet companies pay a lot of money
for advertising, and they absolutely do not care about users and their
traffic. Only one rhetorical question arises: do the key developers think
about the users, or about the large companies and their income?

Actually, this is a completely pointless debate. We have all known the
developers' position for a long time: "You can modify the code and go to
the devil."

I do not see the point in any further discussion, Garri. Your position
is quite clear.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-22 Thread garryd

On 2016-10-22 16:05, Yuri Voinov wrote:

Good explanations do not always help to get a good solution. A person
needs not an explanation but a solution.

So far I've seen a lot of excellent reasons why Squid cannot do
so-and-so in the normal configuration. However, these explanations do
not help in solving the problems.

Nothing personal, just an observation.


IMO:

The only reason that I believe explains why Squid's core developers tend 
to move HTTP-violating settings away from average users is to prevent 
possible abuse/misuse. Options like 'refresh_pattern ... ignore-vary' can 
severely affect the browsing experience if used by people without enough 
knowledge of the HTTP protocol(s). The abuse can easily compromise the 
reputation of the Squid software.


Fortunately, the license of Squid permits modification of the software. 
There are many ways to get desired, not yet implemented features into 
Squid:


* A group of enthusiasts can easily make a fork project, name it 
"Humboldt", for example, and implement options like 'refresh_pattern ... 
ignore-vary' and 'host_forgery_verification off'. For example, some time 
ago there was the Lusca project, which implemented address spoofing 
(like TProxy) for BSD systems (among other features). The feature was 
highly demanded and the Squid project later implemented it for BSD 
systems as well. Now Lusca is not so popular.


* Commercial organizations like ISPs or any other enterprise can hire a 
developer to implement the options.


* Many system administrators with programming skills can successfully 
modify the Squid sources to reach the goal. The squid-users list and 
bugzilla remember those success stories.



Nevertheless, I believe that the core developers should publish an 
_official_ explanation regarding the tendency, as it often becomes the 
"center of gravity" of many topics.


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-22 Thread Yuri Voinov



On 22.10.2016 16:55, gar...@comnet.uz wrote:
> On 2016-10-22 13:53, Rui Lopes wrote:
>> Hello,
>>
>> I'm trying to receive a cached version of
>> googlechromestandaloneenterprise64.msi with:
>>
>> refresh_pattern googlechromestandaloneenterprise64\.msi 4320 100% 4320
>> override-expire override-lastmod reload-into-ims ignore-reload
>> ignore-no-store ignore-private
>>
>> and trying it with the following httpie command:
>>
>> https_proxy=http://10.10.10.222:3128 http --verify=no -o chrome.msi
>> 'https://dl.google.com/tag/s/appguid=%7B----%7D=%7B----%7D=en=4=0=Google%20Chrome=true/dl/chrome/install/googlechromestandaloneenterprise64.msi'
>>
>> but squid never caches the response. it always shows:
>>
>> 1477125665.643   4040 10.10.10.1 TCP_MISS/200 50323942 GET
>> https://dl.google.com/tag/s/appguid=%7B----%7D=%7B----%7D=en=4=0=Google%20Chrome=true/dl/chrome/install/googlechromestandaloneenterprise64.msi
>> - HIER_DIRECT/216.58.210.174 [2] application/octet-stream
>>
>> how can I make it cache?
>>
>> -- RGL
>>
>> PS I'm using squid 3.5.12-1ubuntu7.2 and my full squid.conf is:
>>
>> acl localnet src 10.0.0.0/8 [1]
>> acl SSL_ports port 443
>> acl Safe_ports port 80
>> acl Safe_ports port 443
>> acl CONNECT method CONNECT
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>> http_access allow localhost manager
>> http_access deny manager
>> http_access allow localnet
>> http_access allow localhost
>> http_access deny all
>>
>> http_port \
>> 3128 \
>> ssl-bump \
>> generate-host-certificates=on \
>> dynamic_cert_mem_cache_size=16MB \
>> key=/etc/squid/ssl_cert/ca.key \
>> cert=/etc/squid/ssl_cert/ca.pem
>>
>> ssl_bump bump all
>>
>> sslcrtd_program \
>> /usr/lib/squid/ssl_crtd \
>> -s /var/lib/ssl_db \
>> -M 16MB \
>> -b 4096 \
>> sslcrtd_children 5
>>
>> # a ~15 GiB cache (only caches files that have a length of 2 GiB or
>> less).
>> maximum_object_size 2 GB
>> cache_dir ufs /var/spool/squid 15000 16 256
>>
>> cache_store_log daemon:/var/log/squid/store.log
>>
>> shutdown_lifetime 2 seconds
>>
>> coredump_dir /var/spool/squid
>>
>> refresh_pattern googlechromestandaloneenterprise64\.msi 4320 100% 4320
>> override-expire override-lastmod reload-into-ims ignore-reload
>> ignore-no-store ignore-private
>>
>>
>>
>> Links:
>> --
>> [1] http://10.0.0.0/8
>> [2] http://216.58.210.174
>
> Hi,
>
> It has already been well explained by Amos this month:
>
> http://lists.squid-cache.org/pipermail/squid-users/2016-October/012869.html
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

Good explanations do not always help to get a good solution. A person
needs not an explanation but a solution.

So far I've seen a lot of excellent reasons why Squid cannot do
so-and-so in the normal configuration. However, these explanations do
not help in solving the problems.

Nothing personal, just an observation.




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-22 Thread garryd

On 2016-10-22 13:53, Rui Lopes wrote:

Hello,

I'm trying to receive a cached version of
googlechromestandaloneenterprise64.msi with:

refresh_pattern googlechromestandaloneenterprise64\.msi 4320 100% 4320
override-expire override-lastmod reload-into-ims ignore-reload
ignore-no-store ignore-private

and trying it with the following httpie command:

https_proxy=http://10.10.10.222:3128 http --verify=no -o chrome.msi
'https://dl.google.com/tag/s/appguid=%7B----%7D=%7B----%7D=en=4=0=Google%20Chrome=true/dl/chrome/install/googlechromestandaloneenterprise64.msi'

but squid never caches the response. it always shows:

1477125665.643   4040 10.10.10.1 TCP_MISS/200 50323942 GET
https://dl.google.com/tag/s/appguid=%7B----%7D=%7B----%7D=en=4=0=Google%20Chrome=true/dl/chrome/install/googlechromestandaloneenterprise64.msi
- HIER_DIRECT/216.58.210.174 [2] application/octet-stream

how can I make it cache?

-- RGL

PS I'm using squid 3.5.12-1ubuntu7.2 and my full squid.conf is:

acl localnet src 10.0.0.0/8 [1]
acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access deny all

http_port \
3128 \
ssl-bump \
generate-host-certificates=on \
dynamic_cert_mem_cache_size=16MB \
key=/etc/squid/ssl_cert/ca.key \
cert=/etc/squid/ssl_cert/ca.pem

ssl_bump bump all

sslcrtd_program \
/usr/lib/squid/ssl_crtd \
-s /var/lib/ssl_db \
-M 16MB \
-b 4096 \
sslcrtd_children 5

# a ~15 GiB cache (only caches files that have a length of 2 GiB or
less).
maximum_object_size 2 GB
cache_dir ufs /var/spool/squid 15000 16 256

cache_store_log daemon:/var/log/squid/store.log

shutdown_lifetime 2 seconds

coredump_dir /var/spool/squid

refresh_pattern googlechromestandaloneenterprise64\.msi 4320 100% 4320
override-expire override-lastmod reload-into-ims ignore-reload
ignore-no-store ignore-private



Links:
--
[1] http://10.0.0.0/8
[2] http://216.58.210.174


Hi,

It has already been well explained by Amos this month:

http://lists.squid-cache.org/pipermail/squid-users/2016-October/012869.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-22 Thread Yuri Voinov

Try to use Store-ID. Your URL seems dynamic, so Squid can never cache it
as-is.

Don't forget: Google, like many other web companies, actively counteracts
caching. It is likely that even Store-ID will not help you.
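
A minimal sketch of that approach, using the storeid_file_rewrite helper
that ships with Squid 3.5 (the .squid.internal replacement URL below is an
arbitrary internal key, and the pattern assumes the download URL always
ends in the same .msi name):

 # squid.conf
 store_id_program /usr/lib/squid/storeid_file_rewrite /etc/squid/storeid.db
 store_id_children 5 startup=1

 # /etc/squid/storeid.db (two tab-separated columns: regex, replacement)
 ^https:\/\/dl\.google\.com\/.*googlechromestandaloneenterprise64\.msi	http://dl.google.com.squid.internal/googlechromestandaloneenterprise64.msi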


On 22.10.2016 14:53, Rui Lopes wrote:
> Hello,
>
> I'm trying to receive a cached version of
> googlechromestandaloneenterprise64.msi with:
>
> refresh_pattern googlechromestandaloneenterprise64\.msi 4320 100% 4320
> override-expire override-lastmod reload-into-ims ignore-reload
> ignore-no-store ignore-private
>
> and trying it with the following httpie command:
>
> https_proxy=http://10.10.10.222:3128 http --verify=no -o chrome.msi
> 'https://dl.google.com/tag/s/appguid=%7B----%7D=%7B----%7D=en=4=0=Google%20Chrome=true/dl/chrome/install/googlechromestandaloneenterprise64.msi'
>
> but squid never caches the response. it always shows:
>
> 1477125665.643   4040 10.10.10.1 TCP_MISS/200 50323942 GET
> https://dl.google.com/tag/s/appguid=%7B----%7D=%7B----%7D=en=4=0=Google%20Chrome=true/dl/chrome/install/googlechromestandaloneenterprise64.msi
> - HIER_DIRECT/216.58.210.174 application/octet-stream
>
> how can I make it cache?
>
> -- RGL
>
> PS I'm using squid 3.5.12-1ubuntu7.2 and my full squid.conf is:
> acl localnet src 10.0.0.0/8 
> acl SSL_ports port 443
> acl Safe_ports port 80
> acl Safe_ports port 443
> acl CONNECT method CONNECT
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow localhost manager
> http_access deny manager
> http_access allow localnet
> http_access allow localhost
> http_access deny all
>
> http_port \
> 3128 \
> ssl-bump \
> generate-host-certificates=on \
> dynamic_cert_mem_cache_size=16MB \
> key=/etc/squid/ssl_cert/ca.key \
> cert=/etc/squid/ssl_cert/ca.pem
>
> ssl_bump bump all
>
> sslcrtd_program \
> /usr/lib/squid/ssl_crtd \
> -s /var/lib/ssl_db \
> -M 16MB \
> -b 4096 \
> sslcrtd_children 5
>
> # a ~15 GiB cache (only caches files that have a length of 2 GiB or less).
> maximum_object_size 2 GB
> cache_dir ufs /var/spool/squid 15000 16 256
>
> cache_store_log daemon:/var/log/squid/store.log
>
> shutdown_lifetime 2 seconds
>
> coredump_dir /var/spool/squid
>
> refresh_pattern googlechromestandaloneenterprise64\.msi 4320 100% 4320
> override-expire override-lastmod reload-into-ims ignore-reload
> ignore-no-store ignore-private
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-22 Thread Rui Lopes
Hello,

I'm trying to receive a cached version
of googlechromestandaloneenterprise64.msi with:

refresh_pattern googlechromestandaloneenterprise64\.msi 4320 100% 4320
override-expire override-lastmod reload-into-ims ignore-reload
ignore-no-store ignore-private

and trying it with the following httpie command:

https_proxy=http://10.10.10.222:3128 http --verify=no -o chrome.msi '
https://dl.google.com/tag/s/appguid=%7B----%7D=%7B----%7D=en=4=0=Google%20Chrome=true/dl/chrome/install/googlechromestandaloneenterprise64.msi
'

but Squid never caches the response. It always shows:

1477125665.643   4040 10.10.10.1 TCP_MISS/200 50323942 GET
https://dl.google.com/tag/s/appguid=%7B----%7D=%7B----%7D=en=4=0=Google%20Chrome=true/dl/chrome/install/googlechromestandaloneenterprise64.msi
- HIER_DIRECT/216.58.210.174 application/octet-stream

How can I make it cache?

-- RGL

PS I'm using squid 3.5.12-1ubuntu7.2 and my full squid.conf is:

acl localnet src 10.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access deny all

http_port \
3128 \
ssl-bump \
generate-host-certificates=on \
dynamic_cert_mem_cache_size=16MB \
key=/etc/squid/ssl_cert/ca.key \
cert=/etc/squid/ssl_cert/ca.pem

ssl_bump bump all

sslcrtd_program \
/usr/lib/squid/ssl_crtd \
-s /var/lib/ssl_db \
-M 16MB \
-b 4096 \
sslcrtd_children 5

# a ~15 GiB cache (only caches files that have a length of 2 GiB or less).
maximum_object_size 2 GB
cache_dir ufs /var/spool/squid 15000 16 256

cache_store_log daemon:/var/log/squid/store.log

shutdown_lifetime 2 seconds

coredump_dir /var/spool/squid

refresh_pattern googlechromestandaloneenterprise64\.msi 4320 100% 4320
override-expire override-lastmod reload-into-ims ignore-reload
ignore-no-store ignore-private
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users