[squid-dev] Squid's mailman

2024-02-09 Thread Marcus Kool

Hi all,

I posted a message today to squid-us...@lists.squid-cache.org and received 2 DMARC failure reports (more may follow in the next 24 hours) indicating that the message that mailman forwards to list members is 
rejected for some of them - and there could be many more if their servers block without sending a DMARC report.
It seems that the mailman software that lists.squid-cache.org runs does not follow current best practices: delivery fails for any subscriber whose mail server 
does SPF checks, notices that the From header does not match the IP address of Squid's mailman server, and rejects/quarantines the message.

It could be that many messages (not just mine) do not reach the mailboxes of 
list subscribers.
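For reference, Mailman 2.1 (2.1.18 and later) has per-list settings intended to mitigate exactly this DMARC problem by rewriting the From: header of forwarded posts; whether this server runs a version that exposes them is an assumption on my part:

```
# via bin/config_list or the list admin web UI ("Privacy options"):
# act only when the poster's domain publishes a DMARC reject/quarantine policy
dmarc_moderation_action = 1    # 1 = Munge From, 2 = Wrap Message

# or rewrite From: unconditionally for all posts ("General options"):
from_is_list = 1               # 1 = Munge From
```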

Marcus

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] squid website certificate error

2022-08-29 Thread Marcus Kool

I typed the name of the website without https in the address bar.  I am not 
sure how it was redirected to the https address (it could be my browser history 
or the web server).

In Firefox and Vivaldi I get the correct site.  When I type 'www.squid-cache.org' in the address bar of Chrome it goes very wrong, showing the contents of https://grass.osgeo.org/.  Maybe Chrome tries 
https first and then http.


Marcus

On 29/08/2022 18:52, Francesco Chemolli wrote:

The squid website is not supposed to be over https, because it’s served by 
multiple mirrors not necessarily under the project’s control.
We have some ideas on how to change this but need the developer time to do it.
Help is welcome :)

On Mon, 29 Aug 2022 at 15:25, Marcus Kool  wrote:

Has anybody already complained that the certificates for squid-cache.org 
<http://squid-cache.org> and www.squid-cache.org <http://www.squid-cache.org> 
are messed up?

Marcus


--
@mobile


[squid-dev] squid website certificate error

2022-08-29 Thread Marcus Kool

Has anybody already complained that the certificates for squid-cache.org and 
www.squid-cache.org are messed up?

Marcus



[squid-dev] TLS 1.3 0rtt

2018-11-15 Thread Marcus Kool
After reading https://www.privateinternetaccess.com/blog/2018/11/supercookey-a-supercookie-built-into-tls-1-2-and-1-3/ I am wondering if the TLS 1.3 implementation in Squid will have an option to 
disable the 0rtt feature so that user tracking is reduced.


Marcus



Re: [squid-dev] Block users dynamically

2018-05-30 Thread Marcus Kool



On 30/05/18 08:55, Daniel Berredo wrote:

Have you ever considered using an external ACL helper?


eh, for what ?


On Tue, May 29, 2018, 9:57 PM Marcus Kool <marcus.k...@urlfilterdb.com> wrote:


On 28/05/18 15:10, dean wrote:
 > I am implementing modifications to Squid 3.5.27 for a thesis project. At
some point in the code, I need to block a user. What I'm doing is writing to an
external file that is used in the configuration,
 > like Squish does. But it does not block the user; however, when I
reconfigure Squid, it does block it. Is there something I do not know? When I change
the file, should I reconfigure Squid? Is there
 > another way to block users dynamically from the Squid code?

You can use ufdbGuard for this purpose.  ufdbGuard is a free URL redirector 
for Squid which can be configured to re-read lists of usernames or lists of IP 
addresses every X minutes (the default for X is 15).
So if you maintain a blacklist of usernames and write the name of the user 
to the defined file, ufdbguardd will block these users.
If the user must be blocked immediately you need to reload ufdbguardd; 
otherwise you wait until the configured interval for re-reading the user list 
expires, and after a few minutes the user gets
blocked.

Note that reloading ufdbguardd does not interfere with Squid: all 
activity by browsers and Squid continues normally.

Marcus





Re: [squid-dev] Block users dynamically

2018-05-29 Thread Marcus Kool


On 28/05/18 15:10, dean wrote:
I am implementing modifications to Squid 3.5.27 for a thesis project. At some point in the code, I need to block a user. What I'm doing is writing to an external file that is used in the configuration, 
like Squish does. But it does not block the user; however, when I reconfigure Squid, it does block it. Is there something I do not know? When I change the file, should I reconfigure Squid? Is there 
another way to block users dynamically from the Squid code?


You can use ufdbGuard for this purpose.  ufdbGuard is a free URL redirector for 
Squid which can be configured to re-read lists of usernames or lists of IP 
addresses every X minutes (the default for X is 15).
So if you maintain a blacklist of usernames and write the name of the user to 
the defined file, ufdbguardd will block these users.
If the user must be blocked immediately you need to reload ufdbguardd; otherwise you wait until the configured interval for re-reading the user list expires, and after a few minutes the user gets 
blocked.


Note that reloading ufdbguardd does not interfere with Squid: all activity 
by browsers and Squid continues normally.
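As an alternative to ufdbGuard (and closer to what was suggested in the other reply), Squid's external_acl_type interface can do the same thing: the helper reads one lookup key per line on stdin and answers OK (match) or ERR (no match). A minimal sketch in Python, where the blocklist path and the %LOGIN format are my assumptions, not something from this thread:

```python
#!/usr/bin/env python3
"""Minimal Squid external ACL helper that blocks usernames listed in a file.

squid.conf (sketch):
  external_acl_type blocked ttl=60 %LOGIN /usr/local/bin/blocked_users.py
  acl blocked_user external blocked
  http_access deny blocked_user
"""
import os
import sys

BLOCKLIST = "/etc/squid/blocked_users.txt"   # assumed path

_cache = {"mtime": None, "users": set()}

def load_blocklist(path=BLOCKLIST):
    """Re-read the blocklist only when the file's mtime changes."""
    try:
        mtime = os.stat(path).st_mtime
    except OSError:
        _cache["users"] = set()              # missing file: nobody is blocked
        _cache["mtime"] = None
        return _cache["users"]
    if mtime != _cache["mtime"]:
        with open(path) as f:
            _cache["users"] = {line.strip() for line in f if line.strip()}
        _cache["mtime"] = mtime
    return _cache["users"]

def answer(line, path=BLOCKLIST):
    """One protocol exchange: 'OK' means the ACL matches (user is blocked)."""
    user = line.strip()
    return "OK" if user in load_blocklist(path) else "ERR"

def main():
    # Squid sends one lookup per line and expects one reply per line.
    for line in sys.stdin:
        print(answer(line), flush=True)

if __name__ == "__main__":
    main()
```

With `ttl=60`, Squid caches each verdict for a minute, so edits to the blocklist take effect within roughly that time without reconfiguring Squid.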

Marcus



[squid-dev] wiki.squid-cache.org has an expired certificate

2017-11-08 Thread Marcus Kool




Re: [squid-dev] [PATCH] Bug 4662 adding --with-libressl build option

2017-02-01 Thread Marcus Kool



Do you think we can compromise and call it USE_OPENSSL_OR_LIBRESSL ?


or call it USE_OPENSSL_API

and then the code will eventually have none or few occurrences of
USE_OPENSSL and USE_LIBRESSL to deal with OpenSSL and LibreSSL specifics.

Marcus


Re: [squid-dev] [RFC] simplifying ssl_bump complexity

2016-11-28 Thread Marcus Kool



On 11/28/2016 03:58 PM, Alex Rousskov wrote:

On 11/28/2016 06:30 AM, Marcus Kool wrote:

On 11/27/2016 11:20 PM, Alex Rousskov wrote:

It would be nice to prohibit truly impossible actions at the syntax
level, but I suspect that the only way to make that possible is to focus
on final actions [instead of steps] and require at *most* one ssl_bump
rule for each of the supported final actions:

  ssl_bump splice    ...rules that define when to splice...
  ssl_bump bump      ...rules that define when to bump...
  ssl_bump terminate ...rules that define when to terminate...
  # no other ssl_bump lines allowed!

The current intermediate actions (peek and stare) would have to go into
the ACLs. There will be no ssl_bump rules for them at all. In other
words, the admin would be required to _always_ write an equivalent of

  if (a1() && a2() && ...)
  then
  splice
  elsif (b1() && b2() && ...)
  then
  bump
  elsif (c1() && c2() && ...)
  then
  terminate
  else
  splice or bump, depending on state
  (or some other default; this decision is secondary)
  endif

where a1(), b2(), and other functions/ACLs may peek or stare as needed
to get the required information.
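For comparison, the closest an admin gets to that three-action tree in the current configuration language, where peek is still an explicit ssl_bump action rather than an ACL side effect, is something like this (the ACL names are illustrative):

```
acl step1 at_step SslBump1
acl nobump ssl::server_name .bank.example.com

ssl_bump peek step1        # intermediate action: obtain the SNI first
ssl_bump splice nobump     # final action: pass through without decryption
ssl_bump bump all          # final action: decrypt everything else
```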



The above if-then-else tree is clear.


It is clear at the top level. I am not sure we can also make the
secondary level (i.e., intermediate peek/stare actions) clear because
they have to be side effects in this design and side effects are rarely
clear. We can try though.



I like your suggestion to drop steps in the configuration and make
Squid more intelligent to take decisions at the appropriate
moments (steps).


Please note that the current configuration does not contain implicit
steps either; in its spirit/intent, it is essentially the same as the
above three-action sketch. Needless to say, most correct configurations
do contain an explicit step ACL or two, but they are supposed to be used
to essentially implement the above three-action logic.

Also, to avoid misunderstanding, I am not (yet) advocating any specific
configuration approach, including the one I started to sketch above. I
am only documenting a possible solution to the "How to make impossible
actions invalid at the syntax level" problem you have raised.


Yeah, I used a bit optimistic wording.  It cannot be solved at the syntax
level, but should be possible at the semantic level.


You mentioned admins being surprised about Squid bumping for a
notification of an error and one way to improve that is to replace
'terminate' by 'terminate_with_error' (with bumping) and 'quick_terminate'
(no bumping, just close fd).  The quick_terminate, if used, is also
faster, which is an added benefit.


I replied to the item where admins get confused when Squid bumps
to generate an error, and suggested a way to make Squid
terminate instead of bumping and generating an error.


Terminate is always an instant TCP connection closure, without any
bumping and without any errors being delivered to the user.

AFAIK, admins are not surprised by bumping when a terminate rule matches
(that would be a Squid bug). Admins are surprised by bumping when a
splice, peek, or stare rule matches. That surprise is a different
problem (that we should also try to solve, of course -- see the three
"core problem" bullets in my original response).






The only comment that I want to make without starting a
new thread is that I think that conceptual terms are better
than technical terms (hence my preference for 'passthrough'
instead of 'splice').  But let's save this discussion for later.


Personally, I doubt we should spend much time discussing whether
"passthrough" is better than "splice", but I agree that nobody should be
discussing naming until the much bigger (and, hopefully, solvable)
problems are solved.



I suspect that not looking at some SSL Hellos will always be needed
because some of those Hellos are not Hellos at all and it takes too much
time/resources for the SSL Hellos parser to detect some non-SSL Hellos.
Besides that, it is always nice to be able to selectively bypass the
complex Hello parser code in emergencies.




Perhaps fewer resources are used if there is a two-stage parser:
1) quick scan of the input for the data layout of a ClientHello
2) do the complex parsing.
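Such a stage-1 pre-check can be very cheap. As an illustration (a generic TLS record-layout check, not Squid's actual parser): a ClientHello must arrive in a handshake record (content type 0x16) whose first handshake message has type 0x01, so a handful of byte comparisons rejects most non-TLS traffic before any expensive parsing starts:

```python
def looks_like_client_hello(data: bytes) -> bool:
    """Stage-1 check: does this byte prefix have the layout of a TLS
    ClientHello?  No semantic parsing of ciphers or extensions; assumes
    the hello is not fragmented across records (a stage-2 job)."""
    if len(data) < 9:
        return False                        # too short to decide
    if data[0] != 0x16:                     # record type must be 'handshake'
        return False
    if data[1] != 0x03:                     # record version major: SSLv3/TLS
        return False
    record_len = int.from_bytes(data[3:5], "big")
    if record_len < 4:                      # must hold a handshake header
        return False
    if data[5] != 0x01:                     # handshake type must be ClientHello
        return False
    hello_len = int.from_bytes(data[6:9], "big")
    return hello_len + 4 <= record_len      # hello fits inside the record
```

For example, plain HTTP bytes such as `GET / HTTP/1.1` fail on the very first comparison.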


Squid v4 already uses a two-stage Hello parser (the first stage is built
into Squid and the second stage is provided by OpenSSL).

You are oversimplifying the general protocol recognition problem because
you are focusing on the trivial cases. For example, one of the reasons
the first-stage parser cannot be always "quick" is because, in some
cases, there is no data to parse -- the server has to speak before the
client will send anything. Another example is SSL-like traffic that is
not really SSL. And a third example is Squid/OpenSSL limitations in
handling a

Re: [squid-dev] g++ 4.8.x and std::regex problems

2016-11-28 Thread Marcus Kool



On 11/28/2016 07:46 PM, Alex Rousskov wrote:

Please undo that commit and let's discuss whether switching from
libregex to std::regex now is a good idea.


Thank you,

Alex.


Has anybody considered using RE2?
It is a regex library that is fast, C++ source, high quality, public domain, 
and is supported by older compilers.

Marcus


Re: [squid-dev] [RFC] simplifying ssl_bump complexity

2016-11-28 Thread Marcus Kool



On 11/27/2016 11:20 PM, Alex Rousskov wrote:

On 11/19/2016 07:06 PM, Amos Jeffries wrote:

On 20/11/2016 12:08 p.m., Marcus Kool wrote:

The current ssl bump steps allow problematic configs where Squid
bumps or stares in one step and splices in another step,
which can be resolved (made impossible) by a new configuration syntax.


It would be nice to prohibit truly impossible actions at the syntax
level, but I suspect that the only way to make that possible is to focus
on final actions [instead of steps] and require at *most* one ssl_bump
rule for each of the supported final actions:

  ssl_bump splice    ...rules that define when to splice...
  ssl_bump bump      ...rules that define when to bump...
  ssl_bump terminate ...rules that define when to terminate...
  # no other ssl_bump lines allowed!

The current intermediate actions (peek and stare) would have to go into
the ACLs. There will be no ssl_bump rules for them at all. In other
words, the admin would be required to _always_ write an equivalent of

  if (a1() && a2() && ...)
  then
  splice
  elsif (b1() && b2() && ...)
  then
  bump
  elsif (c1() && c2() && ...)
  then
  terminate
  else
  splice or bump, depending on state
  (or some other default; this decision is secondary)
  endif

where a1(), b2(), and other functions/ACLs may peek or stare as needed
to get the required information.


The above if-then-else tree is clear.
I like your suggestion to drop steps in the configuration and make
Squid more intelligent to take decisions at the appropriate
moments (steps).

You mentioned admins being surprised about Squid bumping for a
notification of an error and one way to improve that is to replace
'terminate' by 'terminate_with_error' (with bumping) and 'quick_terminate'
(no bumping, just close fd).  The quick_terminate, if used, is also
faster, which is an added benefit.



I am not sure such a change is desirable, but it is worth considering, I
guess.

Please note that I am ignoring the directives/actions naming issue for now.


AFAICT, the syntax proposed by Amos (i.e., making stepN mandatory) does
not solve this particular problem at all:

  # Syntactically valid nonsense:
  ssl_bump_step1 splice all
  ssl_bump_step2 bump all


and neither does yours:

  # Syntactically valid nonsense?
  tls_server_hello passthrough all
  tls_client_hello terminate all



Correct.  It would be nice to have a better configuration syntax
where impossible rules are easier to avoid and/or Squid has
intelligence to detect nonsense rules and produce an error.


Below is a new proposal to attempt to make the configuration
more intuitive and less prone to admin misunderstandings.

First the admin must define if there is any bumping at all.
This could be done with
https_decryption on|off
This is similar to tls_new_connection peek|splice but much
more intuitive.


I do not see why

  https_decryption off

is more intuitive (or more precise) than

  ssl_bump splice all


For me this is because of the terminology used, and because
with 'https_decryption off' one does not write anything
that has 'bump' in it, so the admin does not even have to
read the documentation to learn that 'ssl_bump splice all'
means 'no decryption'.


especially after you consider the order of directives.

Again, I am ignoring the naming issue for now. You may assume any name
you want for any directive or ACL.


All right for now.
The only comment that I want to make without starting a
new thread is that I think that conceptual terms are better
than technical terms (hence my preference for 'passthrough'
instead of 'splice').  But let's save this discussion for later.


Iff https_decryption is on:

1) the "connection" step:
When a browser uses "CONNECT <FQDN>" Squid does not need to make
peek or splice decisions.
When Squid intercepts a connection to "port 443 of <IP address>" no peek
or splice decision is made here any more.
This step becomes obsolete in the proposed configuration.



I am still hoping that will happen. But also still getting pushback that
people want to terminate or splice without even looking at the
clear-text hello details.


I suspect that not looking at some SSL Hellos will always be needed
because some of those Hellos are not Hellos at all and it takes too much
time/resources for the SSL Hellos parser to detect some non-SSL Hellos.
Besides that, it is always nice to be able to selectively bypass the
complex Hello parser code in emergencies.


Perhaps fewer resources are used if there is a two-stage parser:
1) quick scan of the input for the data layout of a ClientHello without
   semantically parsing the content, e.g. look at the CipherSuite field and verify
   that the whole field has legal characters, without verifying
   that it is a valid SSL cipher list.
2) do the complex parsing.

Stage 1 should 

Re: [squid-dev] [RFC] simplifying ssl_bump complexity

2016-11-24 Thread Marcus Kool

Hi Amos,

Can you share your thoughts ?

Thanks
Marcus


On 11/20/2016 10:55 AM, Marcus Kool wrote:



On 11/20/2016 12:06 AM, Amos Jeffries wrote:

On 20/11/2016 12:08 p.m., Marcus Kool wrote:



[snip]



I like the intent of the proposal and the new directives tls_*.
What currently makes configuration in Squid 3/4 difficult is
the logic of 'define in step x what to do in the next step' and
IMO this logic is the main cause of misunderstandings and
incorrect configurations.  Also the terms 'bump' and 'splice'
do not help ease of understanding.  Since Squid evolved and
bumping changed from 3.3 - 3.4 - 3.5 to 4.x, and likely will
change again in 5.x, there is an opportunity to improve
things more than is proposed.
There is also a difference in dealing with transparent intercepted
connections and direct connections (browsers doing a CONNECT)
which also causes some misunderstandings.
The current ssl bump steps allow problematic configs where Squid
bumps or stares in one step and splices in another step,
which can be resolved (made impossible) by a new configuration syntax.

I propose to use a new logic for the configuration directives
where 'define in step x what to do in the next step' is replaced
with a new logic 'define in step x what to do _now_'.


From reading the below I think you are mistaking what "now" means to
Squid. Input access control directives in squid.conf make a decision
about what action to do based on some state that just arrived.


Maybe it is necessary to redefine 'now' but my point remains that
'define in step x what to do in the next step' is the cause of
most misunderstandings.


For example:
 HTTP message just finished parsing -> check http_access what to do with it.
 HTTP reply message just arrived -> check http_reply_access what to do
with it.

Thus my proposal was along the lines of:
  client hello received -> check tls_client_hello what to do with it.
  server hello received -> check tls_server_hello what to do with it.


For both hello messages: is the decision moment the moment where it
has been peeked at?



Below is a new proposal to attempt to make the configuration
more intuitive and less prone to admin misunderstandings.

First the admin must define if there is any bumping at all.
This could be done with
https_decryption on|off
This is similar to tls_new_connection peek|splice but much
more intuitive.

Iff https_decryption is on:

1) the "connection" step:
When a browser uses "CONNECT <FQDN>" Squid does not need to make
peek or splice decisions.
When Squid intercepts a connection to "port 443 of <IP address>" no peek
or splice decision is made here any more.
This step becomes obsolete in the proposed configuration.


I am still hoping that will happen. But also still getting pushback that
people want to terminate or splice without even looking at the
clear-text hello details.


We must know the reasons behind this pushback.  Only then can sane
decisions be made.



2) the "TLS client hello" step:
When a browser uses CONNECT, Squid has a FQDN and does not need
to peek at a TLS client hello message. It can use the tls_client_hello
directives given below.


Sadly this is not correct. Squid still needs to get the client hello
details at this point. They are needed to perform bump before the server
hello is received, and to "terminate with an error message" without
contacting a server.


yes, correct.  Squid must do this.  But does it have to be configured?


When Squid intercepts a connection, Squid always peeks to retrieve
the SNI which is the equivalent of the FQDN used by a CONNECT.
In this step admins may want to define what Squid must do, e.g.
tls_client_hello passthrough aclfoo
Note that the acl 'aclfoo' can use tls::client_servername and
tls::client_servername should always have a FQDN if the connection
is https.  tls::client_servername expands to the IP address if
the SNI of an intercepted connection could not be retrieved.


What if the SNI contradicts the CONNECT message FQDN ?
What if a raw-IP in the CONNECT message (or TCP SYN) does not belong to
the server named in SNI ?


:-)  I left this out on purpose to not make the post even larger than it was.
There is of course a lot of error checking.  The question is if
we have to configure it.  If yes, can we get away with one directive based
on an acl that uses tls::handshake_failure ?


Squid would now be diverting the client transparently to a server other
than the one it expects and caching under that FQDN. But the server cert
would still authenticate as being the SNI host, so TLS cannot detect the
diversion.

The fake CONNECT's are a bit messy but IMHO we can only get rid of the
first one done for intercepted connections. Although that alone would
make both cases handle the same way.


I do not know anything about the code that generates the fake CONNECT
of a transparent int

Re: [squid-dev] [RFC] simplifying ssl_bump complexity

2016-11-20 Thread Marcus Kool



On 11/20/2016 12:06 AM, Amos Jeffries wrote:

On 20/11/2016 12:08 p.m., Marcus Kool wrote:



[snip]



I like the intent of the proposal and the new directives tls_*.
What currently makes configuration in Squid 3/4 difficult is
the logic of 'define in step x what to do in the next step' and
IMO this logic is the main cause of misunderstandings and
incorrect configurations.  Also the terms 'bump' and 'splice'
do not help ease of understanding.  Since Squid evolved and
bumping changed from 3.3 - 3.4 - 3.5 to 4.x, and likely will
change again in 5.x, there is an opportunity to improve
things more than is proposed.
There is also a difference in dealing with transparent intercepted
connections and direct connections (browsers doing a CONNECT)
which also causes some misunderstandings.
The current ssl bump steps allow problematic configs where Squid
bumps or stares in one step and splices in another step,
which can be resolved (made impossible) by a new configuration syntax.

I propose to use a new logic for the configuration directives
where 'define in step x what to do in the next step' is replaced
with a new logic 'define in step x what to do _now_'.


From reading the below I think you are mistaking what "now" means to
Squid. Input access control directives in squid.conf make a decision
about what action to do based on some state that just arrived.


Maybe it is necessary to redefine 'now' but my point remains that
'define in step x what to do in the next step' is the cause of
most misunderstandings.


For example:
 HTTP message just finished parsing -> check http_access what to do with it.
 HTTP reply message just arrived -> check http_reply_access what to do
with it.

Thus my proposal was along the lines of:
  client hello received -> check tls_client_hello what to do with it.
  server hello received -> check tls_server_hello what to do with it.


For both hello messages: is the decision moment the moment where it
has been peeked at?



Below is a new proposal to attempt to make the configuration
more intuitive and less prone to admin misunderstandings.

First the admin must define if there is any bumping at all.
This could be done with
https_decryption on|off
This is similar to tls_new_connection peek|splice but much
more intuitive.

Iff https_decryption is on:

1) the "connection" step:
When a browser uses "CONNECT <FQDN>" Squid does not need to make
peek or splice decisions.
When Squid intercepts a connection to "port 443 of <IP address>" no peek
or splice decision is made here any more.
This step becomes obsolete in the proposed configuration.


I am still hoping that will happen. But also still getting pushback that
people want to terminate or splice without even looking at the
clear-text hello details.


We must know the reasons behind this pushback.  Only then can sane
decisions be made.



2) the "TLS client hello" step:
When a browser uses CONNECT, Squid has a FQDN and does not need
to peek at a TLS client hello message. It can use the tls_client_hello
directives given below.


Sadly this is not correct. Squid still needs to get the client hello
details at this point. They are needed to perform bump before the server
hello is received, and to "terminate with an error message" without
contacting a server.


yes, correct.  Squid must do this.  But does it have to be configured?


When Squid intercepts a connection, Squid always peeks to retrieve
the SNI which is the equivalent of the FQDN used by a CONNECT.
In this step admins may want to define what Squid must do, e.g.
tls_client_hello passthrough aclfoo
Note that the acl 'aclfoo' can use tls::client_servername and
tls::client_servername should always have a FQDN if the connection
is https.  tls::client_servername expands to the IP address if
the SNI of an intercepted connection could not be retrieved.


What if the SNI contradicts the CONNECT message FQDN ?
What if a raw-IP in the CONNECT message (or TCP SYN) does not belong to
the server named in SNI ?


:-)  I left this out on purpose to not make the post even larger than it was.
There is of course a lot of error checking.  The question is if
we have to configure it.  If yes, can we get away with one directive based
on an acl that uses tls::handshake_failure ?


Squid would now be diverting the client transparently to a server other
than the one it expects and caching under that FQDN. But the server cert
would still authenticate as being the SNI host, so TLS cannot detect the
diversion.

The fake CONNECT's are a bit messy but IMHO we can only get rid of the
first one done for intercepted connections. Although that alone would
make both cases handle the same way.


I do not know anything about the code that generates the fake CONNECT
of a transparent interception connection, but logically there should
not be a fake CONNECT for true HTTPS (TLS+HTTP) si

Re: [squid-dev] [RFC] simplifying ssl_bump complexity

2016-11-19 Thread Marcus Kool



On 11/19/2016 08:07 AM, Amos Jeffries wrote:

Since the ssl_bump directive went in, my original opinion of it as being too
complicated and confusing has pretty much been demonstrated as correct
by the vast number of misconfigurations and failed attempts by people to
use it without direct assistance from those of us involved in its design.

Since we are also transitioning to a world where 'SSL' does not exist
any longer I think v5 is a good time to rename and redesign the
directive a bit.

I propose going back to the older config style where each step has its
own directive name which self-documents what it does. That will reduce
the confusion about what is going on at each 'step', and allow us a
chance to have clearly documented default actions for each step.

For example:
 tls_new_connection
  - default: peek all
  - or run ssl_bump check if that directive exists

 tls_client_hello
  - default: splice all
  - or run ssl_bump check if that directive exists

 tls_server_hello
  - default: terminate all
  - or run ssl_bump check if that directive exists


I like the intent of the proposal and the new directives tls_*.
What currently makes configuration in Squid 3/4 difficult is
the logic of 'define in step x what to do in the next step' and
IMO this logic is the main cause of misunderstandings and
incorrect configurations.  Also the terms 'bump' and 'splice'
do not help ease of understanding.  Since Squid evolved and
bumping changed from 3.3 - 3.4 - 3.5 to 4.x, and likely will
change again in 5.x, there is an opportunity to improve
things more than is proposed.
There is also a difference in dealing with transparent intercepted
connections and direct connections (browsers doing a CONNECT)
which also causes some misunderstandings.
The current ssl bump steps allow problematic configs where Squid
bumps or stares in one step and splices in another step,
which can be resolved (made impossible) by a new configuration syntax.

I propose to use a new logic for the configuration directives
where 'define in step x what to do in the next step' is replaced
with a new logic 'define in step x what to do _now_'.

Below is a new proposal to attempt to make the configuration
more intuitive and less prone to admin misunderstandings.

First the admin must define if there is any bumping at all.
This could be done with
https_decryption on|off
This is similar to tls_new_connection peek|splice but much
more intuitive.

Iff https_decryption is on:

1) the "connection" step:
When a browser uses "CONNECT <FQDN>" Squid does not need to make
peek or splice decisions.
When Squid intercepts a connection to "port 443 of <IP address>" no peek
or splice decision is made here any more.
This step becomes obsolete in the proposed configuration.

2) the "TLS client hello" step:
When a browser uses CONNECT, Squid has a FQDN and does not need
to peek at a TLS client hello message. It can use the tls_client_hello
directives given below.
When Squid intercepts a connection, Squid always peeks to retrieve
the SNI which is the equivalent of the FQDN used by a CONNECT.
In this step admins may want to define what Squid must do, e.g.
tls_client_hello passthrough aclfoo
Note that the acl 'aclfoo' can use tls::client_servername and
tls::client_servername should always have a FQDN if the connection
is https.  tls::client_servername expands to the IP address if
the SNI of an intercepted connection could not be retrieved.

For https connections with a client hello without the SNI extension:
tls_client_hello passthrough|terminate aclbar
where aclbar can contain tls::client_hello_missing_sni

For connections that do not use TLS (i.e. no valid
TLS client hello message was seen):
tls_client_hello passthrough|terminate aclbar2
where aclbar2 may contain tls::handshake_failure

To define that the TLS handshake continues, the config can contain
tls_client_hello continue
This is basically a no-op and is not required, but it enhances the readability
of a configuration.

3) the "TLS server hello" step:
Usually no directives are needed since actions are rarely taken
based on the server hello message, so the default is
tls_server_hello continue
The tls_server_hello can be used to terminate specific connections.
In this step many types of certificate errors can be detected,
and the Squid configuration must provide a way to define
what to do for specific errors, optionally per FQDN.
E.g. allow the admin to define that connections with self-signed certificates
are terminated but the self-signed cert for domain foo.example.com
is allowed.  See also the example config below and the use of
tls::server_servername.

What is left is a configuration directive for connections
that use TLS as an encryption wrapper but do not use HTTP
inside the TLS wrapper:
tls_no_http passthrough|terminate   # similar to on_unsupported_protocol

An example configuration looks like this:
https_decryption on
acl banks tls::client_servername .bank1.example.org
acl no_sni tls::client_hello_missing_sni
acl no_handshake tls::h

Re: [squid-dev] [PATCH] Support tunneling of bumped non-HTTP traffic. Other SslBump fixes.

2016-10-14 Thread Marcus Kool
I started testing this patch and observed one unwanted side effect:

When a client connects to mtalk.google.com,
Squid sends the following line to the URL rewriter:
(unknown)://173.194.76.188:443 / - NONE
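
Any url_rewrite_program helper therefore has to tolerate such pseudo-URLs. A
minimal Python sketch, assuming the Squid 3.4+ helper reply syntax where
"ERR" means "leave the URL unchanged" (the pass-through policy here is
illustrative only):

```python
import sys

def rewrite_reply(request_line: str) -> str:
    """Build a reply for one rewriter request line; the URL is the first
    whitespace-separated token.  'ERR' tells Squid to leave the URL as-is."""
    parts = request_line.split()
    url = parts[0] if parts else ""
    if url.startswith("(unknown)://"):
        return "ERR"    # unparseable bumped traffic: do not rewrite it
    # ... real rewriting decisions would go here, e.g. (hypothetical):
    # return "OK rewrite-url=http://blocked.example/"
    return "ERR"

if __name__ == "__main__":
    for line in sys.stdin:
        print(rewrite_reply(line.strip()), flush=True)
```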

Marcus

Quoting Christos Tsantilas:

Use case: Skype groups appear to use TLS-encrypted MSNP protocol  
instead of HTTPS. This change allows Squid admins using SslBump to  
tunnel Skype groups and similar non-HTTP traffic bytes via  
"on_unsupported_protocol tunnel all". Previously, the combination  
resulted in encrypted HTTP 400 (Bad Request) messages sent to the  
client (that does not speak HTTP).


Also this patch:
 * fixes bug 4529: !EBIT_TEST(entry->flags, ENTRY_FWD_HDR_WAIT)  
assertion in FwdState.cc.


 * when splicing transparent connections during SslBump step1, avoid  
access-logging an extra record and log %ssl::bump_mode as the  
expected "splice" not "none".


 * handles an XXX comment inside clientTunnelOnError for possible  
memory leak of client streams related objects


 * fixes TunnelStateData logging in the case of splicing after peek.

This is a Measurement Factory project.



___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Benchmarking Performance with reuseport

2016-08-13 Thread Marcus Kool

This article better explains the benefits of SO_REUSEPORT:
https://lwn.net/Articles/542629/

A key paragraph is this:
The problem with this technique, as Tom pointed out, is that when
multiple threads are waiting in the accept() call, wake-ups are not
fair, so that, under high load, incoming connections may be
distributed across threads in a very unbalanced fashion. At Google,
they have seen a factor-of-three difference between the thread
accepting the most connections and the thread accepting the
fewest connections; that sort of imbalance can lead to
underutilization of CPU cores. By contrast, the SO_REUSEPORT
implementation distributes connections evenly across all of the
threads (or processes) that are blocked in accept() on the same port.

So using SO_REUSEPORT seems very beneficial for SMP-based Squid.
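
The kernel-side balancing described above is easy to try out; a minimal
sketch (Python on Linux >= 3.9; the two listeners stand in for SMP workers):

```python
import socket

def reuseport_listener(port: int) -> socket.socket:
    """Open a TCP listener with SO_REUSEPORT set, so that several worker
    processes can each bind the same port and let the kernel spread
    incoming connections among them."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen(128)
    return s

# Two "workers" sharing one port; the second bind() would fail with
# EADDRINUSE if SO_REUSEPORT were not set on both sockets.
worker_a = reuseport_listener(0)        # kernel picks a free port
port = worker_a.getsockname()[1]
worker_b = reuseport_listener(port)     # second listener shares that port
```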

Marcus


On 08/09/2016 09:19 PM, Henrik Nordström wrote:

Thu 2016-08-04 at 23:12 +1200, Amos Jeffries wrote:



I imagine that Nginx are seeing latency reduction due to no longer
needing a central worker that receives the connection then spawns a
whole new process to handle it. The behaviour sort of makes sense for
a
web server (which Nginx is at heart still, a copy of Apache) spawning
CGI processes to handle each request. But kind of daft in these
HTTP/1.1
multiplexed performance-centric days.


No, it's only about accepting new connections on existing workers.

Many high load sites still run with non-persistent connections to keep
worker count down, and these benefit a lot from this change.

Sites using persistent connections only benefit marginally. But the
larger the worker count, the higher the benefit, as the load from new
connections gets distributed by the kernel instead of by a stampeding
herd of workers.

Regards
Henrik





[squid-dev] Benchmarking Performance with reuseport

2016-08-03 Thread Marcus Kool

https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/
is an interesting short article about using the SO_REUSEPORT socket
option which increased performance of nginx and had better balancing
of connections across sockets of workers.
Since Squid has the issue that load is not very well balanced between
workers, I thought it would be interesting to look at.

Marcus


Re: [squid-dev] HTTP meetup in Stockholm

2016-07-12 Thread Marcus Kool


On 07/12/2016 06:53 AM, Henrik Nordström wrote:

Tue 2016-07-12 at 18:34 +1200, Amos Jeffries wrote:

I'm much more in favour of binary formats. The HTTP/2 HPACK design
lends
itself very easily to binary header values (ie sending integers as
interger encoded value). Following PHK's lead on those.


json is very ambiguous, with no defined schema or type restrictions.
It is up to the receiver to guess type information from the format while
parsing, which in itself is a mess from a security point of view.

The beauty of json is that it is trivially extensible with new data,
and has all the basic data constructs you need for arbitrary data (name
tagging, strings, integers, floats, booleans, arrays, dictionaries and
maybe something more). But for the same reason it is also unsuitable for
HTTP header information, which should be concise, terse and unambiguous
with little room for syntax errors.

Regards
Henrik


Extensible json headers seem to lend themselves to putting a lot of
application-specific stuff in headers instead of in the payload. The
headers should be used for the protocol only.
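
Henrik's type-guessing point is easy to demonstrate with any JSON parser;
for example:

```python
import json

# The "same" header value can arrive as two different types; without a
# schema the receiver must guess and normalize.
a = json.loads('{"max-age": 3600}')
b = json.loads('{"max-age": "3600"}')
assert isinstance(a["max-age"], int)
assert isinstance(b["max-age"], str)

# Number precision is unspecified too: Python keeps big integers exact,
# while parsers that map JSON numbers to IEEE doubles silently round.
big = json.loads(str(2**53 + 1))
assert big == 2**53 + 1   # true here, not guaranteed in other parsers
```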

Squid has had many issues in the past with non-conformity to standards.
The Squid developers obviously want to stick with the standards and are
forced by non-conformant apps and servers to support non-conformity.
Can this workshop be used to address this?

Marcus


Re: [squid-dev] [RFC] on_crash

2015-12-09 Thread Marcus Kool



On 12/09/2015 09:20 PM, Alex Rousskov wrote:

On 12/09/2015 02:28 PM, Amos Jeffries wrote:

The above
considerations are all good reasons for us not to be bundling by default
IMO.


I agree.

Alex.


I did not get what the script does; does it call gdb?

A script/executable that calls gdb and produces a readable stack trace of all 
squid processes is a powerful tool which makes debugging an issue much easier 
for many admins.
So I suggest releasing the binaries and scripts that you have, installing them by default in a new subdirectory, e.g. .../debugbin or .../sbin/debug, and _not_ configuring them in the default squid.conf,
to prevent them being used accidentally.


If you do not want to bundle, then what is the alternative?
Make a download area on squid-cache.org for the binaries and scripts ?

Marcus


Re: [squid-dev] Fake CONNECT requests during SSL Bump

2015-09-27 Thread Marcus Kool



On 09/26/2015 10:21 PM, Amos Jeffries wrote:

On 24/09/2015 11:40 p.m., Marcus Kool wrote:


On 09/24/2015 02:13 AM, Eliezer Croitoru wrote:

On 23/09/2015 04:52, Amos Jeffries wrote:

Exactly. They are processing steps. Not messages to be adapted.

Amos


+1 For that.


[...]


In any case the bottom line from me is that for now ICAP and ECAP are
called ADAPTATION services and not ACL services.
It can be extended to do so and it's not a part of the RFCs or
definitions and it might be the right way to do things but it will
require simple enough libraries that will let most admins (if not all)
to be able to implement their ACL logics using these
protocol\implementations.

Eliezer


ICAP is an adaptation protocol that almost everybody uses for access
control.


Which is the wrong way to do it in many cases. All the effort put into
avoiding the use of the correct APIs is frustrating.

Adaptation is for adaptation, not authorization.



The ICAP server must be able to see all traffic going through Squid so
that it can do what it was designed for and block (parts) of websites
and other data streams.


Blocking *parts* of websites is not access control. It is censorship and
copyright violation.


We may interpret the words "parts of websites" differently.  I find
removing an ad or a tracking JavaScript neither censorship nor a violation.
Even if you want to call removing ads censorship, it is _desired_ censorship,
i.e. censorship in the good sense of the word, like you censor a child not
to see everything that the world offers.


Rejecting the requests with 451 status is the correct (and legal) way to
go about that.



Other data streams may not be HTTP(S)-based and hence are not bumped,
but for the ICAP server to be able to do its thing, it still needs a
(fake) CONNECT.

Going back to Steve's original message, I think that it is not necessary
to generate a (fake) CONNECT for each bump step,
but to send exactly one CONNECT at the moment that Squid makes a
decision.  I.e. when Squid decides to bump or splice.



No. CONNECT means a change of traffic protocols is happening on the
connection.

Specifically it is a placeholder for the request message that would have
been sent by the client if it was using an explicit-proxy and the
protocol stack was being changed properly by an explicit HTTP layer
CONNECT message, instead of intercepted.


The first layer protocol received is TCP.

  - If that was intercepted we use a CONNECT with raw-IP to represent the
machine NAT subsystem requesting a tunnel.
   + For explicit proxies this actually exists as bytes of an on-wire
message and can have domain names from the client.

  - If that HTTP-over-TCP CONNECT request is allowed it gets serviced,
maybe adapted. Otherwise it gets rejected.

  - If ssl_bump says to do anything other than splice with it ...


The second layer protocol is TLS-over-HTTP (-over-TCP).

  - If the connection was intercepted and SNI was now found to be present.
+ We reset (some of) the connection state and go back to "first layer
protocol" processing but using the SNI instead of raw-IP.
+ For the *sole* reason of better emulating the real explicit-proxy
behaviour. Otherwise...

  - If splicing is chosen the tunnel is enacted.

  - If bumping is done ...


The third protocol is HTTP-over-TLS (-over-HTTP-over-TCP).

  - If that HTTP(S) request is allowed it gets serviced, maybe adapted.
Otherwise it gets rejected.


** Interception proxies run through layer 1, maybe 1 again, and 3.
** Explicit proxies run through layers 1 and 3.


IMO we are better off removing the raw-IP CONNECT by always peeking at
the SNI than going the other way and adding more repeated CONNECT
processing.

Contextually a second CONNECT for explicit proxy is meaningless since
the SNI value is just the FQDN used for all the bumped HTTPS requests
URLs. There is no real HTTP message or even a protocol change action for
the adaptation to associate a client response with.


Sorry, I read your response 3 times, but I fail to understand the point.
What is wrong with Squid sending exactly one message to an ICAP server
when a CONNECT message is being bumped ?

Marcus


One part lacking that we do need to fix soonish is adding hostVerify
logic to enforce the inner layer (3) message's Host: header == SNI
security requirement.

Amos




Re: [squid-dev] Fake CONNECT requests during SSL Bump

2015-09-24 Thread Marcus Kool


On 09/24/2015 02:13 AM, Eliezer Croitoru wrote:

On 23/09/2015 04:52, Amos Jeffries wrote:

Exactly. They are processing steps. Not messages to be adapted.

Amos


+1 For that.


[...]


In any case the bottom line from me is that for now ICAP and ECAP are called 
ADAPTATION services and not ACL services.
It can be extended to do so and it's not a part of the RFCs or definitions and 
it might be the right way to do things but it will require simple enough 
libraries that will let most admins (if not all)
to be able to implement their ACL logics using these protocol\implementations.

Eliezer


ICAP is an adaptation protocol that almost everybody uses for access control.

The ICAP server must be able to see all traffic going through Squid so that it 
can do what it was designed for and block (parts) of websites and other data 
streams.
Other data streams may not be HTTP(S)-based and hence are not bumped, but for 
the ICAP server to be able to do its thing, it still needs a (fake) CONNECT.

Going back to Steve's original message, I think that it is not necessary to 
generate a (fake) CONNECT for each bump step,
but to send exactly one CONNECT at the moment that Squid makes a decision.  
I.e. when Squid decides to bump or splice.

Marcus


Re: [squid-dev] download squid 3.5.8 fails

2015-09-02 Thread Marcus Kool

The normal download URL works now also.

Thanks

Marcus

On 09/02/2015 04:14 PM, Amos Jeffries wrote:

On 3/09/2015 3:23 a.m., Marcus Kool wrote:


The download of the 3.5.8 sources fails :-(

wget -vvv http://www.squid-cache.org/Versions/v3/3.5/squid-3.5.8.tar.gz
--2015-09-02 17:16:43--
http://www.squid-cache.org/Versions/v3/3.5/squid-3.5.8.tar.gz
Resolving www.squid-cache.org (www.squid-cache.org)... 92.223.231.190,
209.169.10.131
Connecting to www.squid-cache.org
(www.squid-cache.org)|92.223.231.190|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2015-09-02 17:16:43 ERROR 404: Not Found.



Yeah. I'm having trouble with one of the mirrors too.

Try west.squid-cache.org as the domain. That one I know works.


Amos





[squid-dev] download squid 3.5.8 fails

2015-09-02 Thread Marcus Kool


The download of the 3.5.8 sources fails :-(

wget -vvv http://www.squid-cache.org/Versions/v3/3.5/squid-3.5.8.tar.gz
--2015-09-02 17:16:43--  
http://www.squid-cache.org/Versions/v3/3.5/squid-3.5.8.tar.gz
Resolving www.squid-cache.org (www.squid-cache.org)... 92.223.231.190, 
209.169.10.131
Connecting to www.squid-cache.org (www.squid-cache.org)|92.223.231.190|:80... 
connected.
HTTP request sent, awaiting response... 404 Not Found
2015-09-02 17:16:43 ERROR 404: Not Found.

Best regards,

Marcus


Re: [squid-dev] bug 4303

2015-08-19 Thread Marcus Kool



On 08/19/2015 06:00 AM, Tsantilas Christos wrote:

Hi Marcus, Amos
I just posted the squid-3.5 version of the patch under the mail-thread
  "[PATCH] Ignore impossible SSL bumping actions, as intended and..."

Regards,
   Christos


Hi Christos,

I applied the patch and tested it.
Squid works fine: the stock 3.5.7 gave 6 assertion failures yesterday and 1 
hour of testing today showed none.

Thanks

Marcus



Re: [squid-dev] bug 4303

2015-08-18 Thread Marcus Kool



On 08/18/2015 12:36 PM, Amos Jeffries wrote:

On 19/08/2015 12:56 a.m., Marcus Kool wrote:

Amos, Christos,

Christos' patch seems not to work for plain 3.5.7 sources.
What do you suggest to try ?   Will there be a snapshot release that is
suitable for testing ?


Christos now has it in trunk, but the last snapshot refused to build due
to a compiler issue in the build farm, which is now resolved. Tomorrow's
trunk snapshot should be r14229 or later with it in.

Next round of backports to 3.5 should include it there in 2-3 days as
well unless something goes wrong in the portage.


Thanks, I will wait for the 3.5 backport.  Will the patch be announced on the 
list?

marcus


Amos





Re: [squid-dev] bug 4303

2015-08-18 Thread Marcus Kool

Amos, Christos,

Christos' patch seems not to work for plain 3.5.7 sources.
What do you suggest to try ?   Will there be a snapshot release that is 
suitable for testing ?

Marcus


On 08/12/2015 11:32 AM, Marcus Kool wrote:

Amos,
I tried the patch but several hunks failed.
It seems that the patch is not compatible with the 3.5.7 release code or I am 
doing something wrong (see below).
Marcus

[root@srv018 squid-3.5.7]# patch -b -p0 --dry-run < ../squid-sslbump-patch
checking file src/acl/Acl.h
Hunk #1 succeeded at 150 (offset 1 line).
checking file src/acl/BoolOps.cc
checking file src/acl/BoolOps.h
Hunk #1 FAILED at 45.
1 out of 1 hunk FAILED
checking file src/acl/Checklist.cc
checking file src/acl/Checklist.h
checking file src/acl/Tree.cc
Hunk #2 FAILED at 69.
1 out of 2 hunks FAILED
checking file src/acl/Tree.h
Hunk #1 FAILED at 23.
1 out of 1 hunk FAILED
checking file src/client_side.cc
Hunk #1 FAILED at 4181.
Hunk #2 FAILED at 4247.
2 out of 2 hunks FAILED
checking file src/ssl/PeerConnector.cc
Hunk #1 FAILED at 214.
1 out of 1 hunk FAILED


On 08/12/2015 10:25 AM, Amos Jeffries wrote:

On 13/08/2015 12:48 a.m., Marcus Kool wrote:

yesterday I filed bug 4303 - assertion failed in PeerConnector:743 squid
3.5.7
I am not sure if it is a duplicate of bug 4259 since that bug
description has almost no info to compare against.

I enclosed a small fragment of cache.log in the bug report but the debug
setting was ALL,1 93,3 61,9 so cache.log is very large.
In case that you need a larger fragment of cache.log, I can provide it.



Thanks Marcus.

I was about to reply to the bug report, but this is better.

I suspect this is a case of Squid going the wrong way in ssl_bump
interpretation. Specifically the peek action at stage 3.

Would you be able to try Christos' patch at the end of the mail here:
<http://lists.squid-cache.org/pipermail/squid-dev/2015-August/002981.html>


Amos





Re: [squid-dev] bug 4303

2015-08-12 Thread Marcus Kool

Amos,
I tried the patch but several hunks failed.
It seems that the patch is not compatible with the 3.5.7 release code or I am 
doing something wrong (see below).
Marcus

[root@srv018 squid-3.5.7]# patch -b -p0 --dry-run < ../squid-sslbump-patch
checking file src/acl/Acl.h
Hunk #1 succeeded at 150 (offset 1 line).
checking file src/acl/BoolOps.cc
checking file src/acl/BoolOps.h
Hunk #1 FAILED at 45.
1 out of 1 hunk FAILED
checking file src/acl/Checklist.cc
checking file src/acl/Checklist.h
checking file src/acl/Tree.cc
Hunk #2 FAILED at 69.
1 out of 2 hunks FAILED
checking file src/acl/Tree.h
Hunk #1 FAILED at 23.
1 out of 1 hunk FAILED
checking file src/client_side.cc
Hunk #1 FAILED at 4181.
Hunk #2 FAILED at 4247.
2 out of 2 hunks FAILED
checking file src/ssl/PeerConnector.cc
Hunk #1 FAILED at 214.
1 out of 1 hunk FAILED


On 08/12/2015 10:25 AM, Amos Jeffries wrote:

On 13/08/2015 12:48 a.m., Marcus Kool wrote:

yesterday I filed bug 4303 - assertion failed in PeerConnector:743 squid
3.5.7
I am not sure if it is a duplicate of bug 4259 since that bug
description has almost no info to compare against.

I enclosed a small fragment of cache.log in the bug report but the debug
setting was ALL,1 93,3 61,9 so cache.log is very large.
In case that you need a larger fragment of cache.log, I can provide it.



Thanks Marcus.

I was about to reply to the bug report, but this is better.

I suspect this is a case of Squid going the wrong way in ssl_bump
interpretation. Specifically the peek action at stage 3.

Would you be able to try Christos' patch at the end of the mail here:
<http://lists.squid-cache.org/pipermail/squid-dev/2015-August/002981.html>


Amos





[squid-dev] bug 4303

2015-08-12 Thread Marcus Kool

yesterday I filed bug 4303 - assertion failed in PeerConnector:743 squid 3.5.7
I am not sure if it is a duplicate of bug 4259 since that bug description has 
almost no info to compare against.

I enclosed a small fragment of cache.log in the bug report but the debug 
setting was ALL,1 93,3 61,9 so cache.log is very large.
In case that you need a larger fragment of cache.log, I can provide it.

Best regards,

Marcus


Re: [squid-dev] [PATCH] Temporary fix to restore compatibility with Amazon

2015-06-24 Thread Marcus Kool



On 06/24/2015 05:24 PM, Kinkie wrote:

My 2c: I vote for reality; possibly with a shaming announce message; I
wouldn't even recommend logging the violation: there is nothing the
average admin can do about it. I would consider adding a shaming
comment in the release notes.


more cents:

correct.

A standard can be considered a strong guideline, but if important sites
violate the standard (i.e. users/admins complain), then Squid
should be able to cope with it, or it risks being abandoned because
Squid cannot cope with traffic of sites that otherwise work without Squid.

For an admin it is irrelevant if the problem is caused by Squid or by
a website.  And the admin who dares to say to its users "only visit
sites that comply with the standards" probably gets fired.



On Wed, Jun 24, 2015 at 10:12 PM, Alex Rousskov wrote:

On 06/24/2015 05:26 AM, Amos Jeffries wrote:


On 24/06/2015 5:55 p.m., Alex Rousskov wrote:

 This temporary trunk fix adds support for request URIs containing
'|' characters. Such URIs are used by popular Amazon product (and
probably other) sites: /images/I/ID1._RC|ID2.js,ID3.js,ID4.js_.js

Without this fix, all requests for affected URIs time out while Squid
waits for the end of request headers it has already received(*).




This is not right. Squid should be identifying the message as
non-HTTP/1.x (which it isn't due to the URI syntax violation) and
treating it as such.


I agree that Amazon violates URI syntax. On the other hand, the message
can be interpreted as HTTP/1.x for all practical purposes AFAICT. If you
want to implement a different fix, please do so. Meanwhile, folks
suffering from this serious regression can try the temporary fix I posted.



The proper long-term fix is to allow any character in URI as long as we
can reliably parse the request line (and, later, URI components). There
is no point in hurting users by rejecting requests while slowly
accumulating the list of benign characters used by web sites but
prohibited by some RFC.
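
For illustration, the '|' in the Amazon path above is indeed outside the
characters RFC 3986 allows un-encoded in a URI; a conforming client would
percent-encode it, which is trivial to check:

```python
from urllib.parse import quote, unquote

# Path from the Amazon example discussed in this thread.
raw = "/images/I/ID1._RC|ID2.js,ID3.js,ID4.js_.js"

encoded = quote(raw)            # percent-encodes characters outside the safe set
assert "%7C" in encoded         # '|' must be sent as %7C by a conforming client
assert unquote(encoded) == raw  # round-trips back to the original path
```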



The *proper* long term fix is to obey the standards in regard to message
syntax so applications stop using these invalid (when un-encoded)
characters and claiming HTTP/1.1 support.


We had "standards vs reality" and "policing traffic" discussions several
times in the past, with no signs of convergence towards a single
approach, so I am not going to revisit that discussion now. We continue
to disagree [while Squid users continue to suffer].


Thank you,

Alex.








Re: [squid-dev] Death of SSLv3

2015-05-07 Thread Marcus Kool



On 05/07/2015 07:03 AM, Amos Jeffries wrote:

It's done. SSLv3 is now a "MUST NOT use" protocol from RFC 7525.


good decision.


It's time for us to start ripping out from trunk all features and hacks
supporting its use. Over the coming days I will be submitting patches to
remove the squid.conf settings, similar to SSLv2 removal earlier.

The exceptions which may remain are SSLv3 features which are used by the
still-supported TLS versions. Such as session resume, and the SSLv3
format of Hello message (though not the SSLv3 protocol IDs).


are you sure you want to do this _now_ ?

It is predictable that users will complain with
"I know this provider is stupid and uses SSLv3 but I _need_ to access that site for 
our business"
and use this as a reason not to upgrade or blame squid.

It may not be that much extra work to have a new option "use_sslv3" with the
default setting OFF, and not rip the SSLv3 code yet.  Also, if you do not rip
SSLv3, Squid can detect that a site uses SSLv3 and give a useful error message
like "this site insists on using the unsafe SSLv3 protocol"
instead of a confusing "unknown protocol".

Marcus



Christos, if you can keep this in mind for all current / pending, and
future "SSL" work.

Amos





Re: [squid-dev] [PATCH] Non-HTTP bypass

2015-01-02 Thread Marcus Kool



On 12/31/2014 02:31 PM, Alex Rousskov wrote:

On 12/31/2014 03:33 AM, Marcus Kool wrote:

On 12/31/2014 05:54 AM, Alex Rousskov wrote:

What would help is to decide whether we want to focus on

A) multiple conditions for establishing a TCP tunnel;
B) multiple ways to handle an unrecognized protocol error; OR
C) multiple ways to handle multiple errors.

IMO, we want (B) or perhaps (C) while leaving (A) as a separate
out-of-scope feature.

The proposed patch implements (B). To implement (C), the patch needs to
add an ACL type to distinguish an "unrecognized protocol" error from
other errors.




From an administrator's point of view, the admins that want Squid to
filter internet access definitely want (B).  They want (B) to block
audio, video, SSH tunnels, VPNs, chat, file sharing, webdisks and all
sorts of applications (but not all!) that use port 443.


Agreed, except this is not limited to port 443. The scope includes
intercepted port 80 connections and even CONNECT tunnels.


If CONNECT tunnels are in scope, then so are all the applications that use it,
including webdisk, audio, video, SSH etc.

I think it was Amos who said that application builders should use
application-specific ports, but the reality is that all firewalls block
those ports by default.
Skype was one of the first applications that worked everywhere, even behind
a corporate firewall and it was done using CONNECT to the web proxy.
And from a security point of view I think that administrators prefer that
applications use CONNECT to the web proxy to have more control and logging
about what traffic is going from a LAN to the internet.


Basically this means that admins desire more fine-grained control over
what to do with each tunnel.


There are two different needs here, actually:

1. A choice of actions (i.e., "what to do") when dealing with an
unsupported protocol. Currently, there is only one action: Send an HTTP
error response. The proposed feature adds another action (tunnel) and,
more importantly, adds a configuration interface to support more actions
later.


Sending an HTTP error to an application that does not speak HTTP is not
very useful.  Skype, SSH, videoplayers etc. only get confused at best.
Simply closing the tunnel may be better and may result in an end user message
'cannot connect to ...' instead of 'server sends garbage' or 'undefined 
protocol'.

Marcus


2. A way to further classify an unsupported protocol (i.e.,
"fine-grained control"). I started a new thread on this topic as it is
not about the proposed bypass feature.


Cheers,

Alex.



Re: [squid-dev] unsupported protocol classification

2014-12-31 Thread Marcus Kool



On 12/31/2014 02:23 PM, Alex Rousskov wrote:

[ I am changing the Subject line for this sub-thread because this new
discussion is not really relevant to the unsupported protocol bypass
feature, even though that bypass feature will be used by those who need
to classify unsupported protocols. ]


On 12/31/2014 03:33 AM, Marcus Kool wrote:


The current functionality of filtering is divided between Squid itself and
3rd party software (ICAP daemons and URL redirectors).


... as well as external ACLs and eCAP adapters.



I plead for an interface where an external helper can decide what to do
with an unknown protocol inside a tunnel, because it is much more flexible
than using ACLs and extending Squid with detection of (many) protocols.


I doubt pleading will be enough, unfortunately, because a considerable
amount of coding and design expertise is required to fulfill your dream.
IMO, a quality implementation would involve:


It is clear to me that this functionality will not be implemented next week,
but for me it is not a dream.  It is a reality that filtering is becoming more
important; just wait until a headline in the news comes along like
"secret document stolen via a web tunnel" and everybody will want it.
The risk is real and it is so simple to abuse CONNECT on port 443 for anything
that it is extremely likely that it is already being used for illegal actions
and will continue to be used for illegal actions.

There is also not much point in having a web proxy that can filter 50% or 99%
of what you want to filter.  If you cannot filter everything and especially 
cannot
filter known security risks, the filter solution is very weak.
That is why ufdbGuard currently sends probes to sites that an application 
CONNECTs to.
The probes tell ufdbGuard what type of traffic is to be expected but
are also not 100% reliable since a probe is not the same as an inspection
of the real traffic.


1. Encoding the tunnel information (including traffic) in [small]
HTTP-like messages to be passed to ICAP/eCAP services. It is important
to get this API design right while anticipating complications like
servers that speak first, agents that do not send Hellos until they hear
the other agent Hello, and fragmented Hellos. Most likely, the design
will involve two tightly linked but concurrent streams of adaptation
messages: user->Squid->origin and origin->Squid->user. Let's call that
TUNMOD, as opposed to the existing REQMOD and RESPMOD.


Getting the design right is definitely important.  Therefore I like
to bring up this issue once in a while so that with the design decisions
made today of related parts, it will be easier to implement TUNMOD
in the future.


2. Writing adaptation hooks to pass tunnel information (using TUNMOD
design above) to adaptation services. The primary difficulty here is
handling incremental "give me more" and "give them more" decisions while
shoveling tunneled bytes. The current tunneling code does not do any
adaptation at all so the developers would be starting from scratch
(albeit with good examples available from non-tunneling code dealing
with HTTP/FTP requests and HTTP/FTP responses).


It can be simpler.  TUNMOD replies can be limited to
DONTKNOW - continue with what is happening and keep the TUNMOD server informed
ACCEPT - continue and do not inform the TUNMOD server any more about this tunnel
BLOCK - close the tunnel

I think there is no need for adaptation since one accepts a webdisk, voice
chat, VPN or whatever, or one does not accept it. So adaptation as is
used for HTTP, is not an important feature.
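
As a sketch only (TUNMOD does not exist in Squid; the verdict names are the
ones proposed above, and the classification rules here are invented for
illustration), the reply handling could be as small as:

```python
from enum import Enum

class TunmodVerdict(Enum):
    """The three reply kinds proposed for a hypothetical TUNMOD service."""
    DONTKNOW = "dontknow"   # keep shoveling bytes, keep informing the server
    ACCEPT = "accept"       # stop informing the server about this tunnel
    BLOCK = "block"         # close the tunnel

def classify(first_bytes: bytes) -> TunmodVerdict:
    """Toy stand-in for an external TUNMOD service's decision logic."""
    if first_bytes.startswith(b"SSH-"):     # e.g. forbid SSH inside CONNECT
        return TunmodVerdict.BLOCK
    if len(first_bytes) < 8:                # not enough data to decide yet
        return TunmodVerdict.DONTKNOW
    return TunmodVerdict.ACCEPT
```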

Sending an HTTP error on a tunnel is only useful if the tunnel uses
SSL-encapsulated HTTP.


3. Implementing more actions than the already implemented "start a blind
tunnel" and "respond with an error". The "shovel this to the other side
and then come back to me with the newly received bytes" action would be
essential in many production cases, for example.

The above is a large project. I do not recall any projects of that size
and complexity implemented without sponsors in recent years but YMMV.


We will see.  Maybe there will be a sponsor to do this.

It is 15:38 local time and my last post of the year.
Happy New Year to all.

Marcus


Please note that modern Squid already has an API that lets 3rd party
software pick one of the supported actions. It is called annotations:
External software sends Squid an annotation and the admin configures
Squid to do X when annotation Y is received in context Z.



A while back, when we discussed the older sslBump not being able to cope
with Skype, I suggested using ICAP so that the ICAP daemon receives a
REQMOD/RESPMOD message with the CONNECT and the intercepted content, which
is also a valid option for me.


Yes, ICAP/eCAP is the right direction here IMO, but there are several
challenges on that road. I tried to detail them above.


HTH,

Alex.




Re: [squid-dev] [PATCH] Non-HTTP bypass

2014-12-31 Thread Marcus Kool



On 12/31/2014 05:54 AM, Alex Rousskov wrote:
[...]


What would help is to decide whether we want to focus on

   A) multiple conditions for establishing a TCP tunnel;
   B) multiple ways to handle an unrecognized protocol error; OR
   C) multiple ways to handle multiple errors.

IMO, we want (B) or perhaps (C) while leaving (A) as a separate
out-of-scope feature.

The proposed patch implements (B). To implement (C), the patch needs to
add an ACL type to distinguish an "unrecognized protocol" error from
other errors.
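For context, case (B) is essentially what later shipped in Squid as the
on_unsupported_protocol directive; a hedged sketch (directive name and
actions per later Squid 4 documentation, treated here as an assumption
relative to this 2014 patch):

```
# Tunnel unrecognized protocols only on intercepted port-443 traffic;
# respond with an error everywhere else. ACL name is illustrative.
acl intercepted443 localport 443
on_unsupported_protocol tunnel intercepted443
on_unsupported_protocol respond all
```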


From an administrator's point of view, the admins that want Squid to
filter internet access definitely want (B).  They want (B) to block
audio, video, SSH tunnels, VPNs, chat, file sharing, webdisks and all sorts
of applications (but not all!) that use port 443.  Basically
this means that admins desire finer-grained control over what to
do with each tunnel.

The current filtering functionality is divided between Squid itself and
3rd party software (ICAP daemons and URL redirectors).
I plead for an interface where an external helper can decide what to do
with an unknown protocol inside a tunnel, because that is much more flexible
than using ACLs and extending Squid with detection of (many) protocols.

A while back, when we discussed the older sslBump not being able to cope
with Skype, I suggested using ICAP so that the ICAP daemon receives a
REQMOD/RESPMOD message with the CONNECT and the intercepted content, which
is also a valid option for me.

I wish you all a blessed and Happy New Year!
Marcus




[...]



Thank you,

Alex.
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev




Re: [squid-dev] FYI: the C++11 roadmap

2014-11-05 Thread Marcus Kool

On 6/11/2014 1:27 a.m., Marcus Kool wrote:
>
>
> On 11/05/2014 02:01 AM, Amos Jeffries wrote: On 6/05/2014 2:21
> a.m., Amos Jeffries wrote:
>>>> I have just announced the change in 3.4.5 regarding C++11
>>>> support and accompanied it with a notice that GCC version 4.8
>>>> is likely to become the minimum version later this calendar
>>>> year.
>>>>
>>>> As it stands (discussed earlier):
>>>>
>>>> * Squid-3.4 needs to build with any GCC 4.0+ version with
>>>> C++03.
>>>>
>>>> * Squid-3.6 will need to build with C++11.
>>>>
>>>> * The Squid-3.5 situation is still in limbo and will depend
>>>> on how long we go before it branches.
>
> Squid-3.5 retains GCC 4.0+ support shared with older versions.
>
>>>>
>>>> We have a growing list of items needing C++11 features for
>>>> simpler implementation. At this point I am going to throw a
>>>> peg out and say Sept or Oct for starting to use C++11
>>>> specific code features.


This "peg" has now moved to Nov. The code cleanup and polishing
patches now going into trunk work fine with C++11 builds but
possibly not with compilers older than Clang 3.3 or GCC 4.8.


Are you referring here to building Squid 3.5 or 3.6?
Do you think that changing to C++11 is OK for both versions?


It is understandable that developers want to move forward.
Almost everybody does. But system administrators tend to do the
opposite: they stay on a stable platform for as long as possible
while they wait for a new platform to mature. Let's look at the
most popular Linux server platform: Redhat 6 / CentOS 6. Redhat 6
has gcc 4.4 while the successor Redhat 7 has gcc 4.8. This means
that when Squid gets a new requirement with gcc 4.8 the new
versions will not run on Redhat 6.



Because Redhat 6 is the most popular and stable server platform
and Redhat 7 is not mature and does not meet my expectations of a
stable platform (especially systemd is still a big troublemaker),
this implies that choosing for C++11 effectively means that new
releases of Squid will not be installed any longer on the most
popular Linux server platform.



So I think my opinion from the point of view of a system
administrator is clearly "choose for installability, choose
C++03".


"If an admin is so dearly wanting aged-stability. What are they doing
building the latest Squid release in its youth?"


Admins do not want "the latest", they want "the latest proven".
For me that is 3.4.9, and when 3.5.5 comes out it will be time to evaluate
upgrading.  It also depends on the number and nature of complaints
on the mailing list.


Seriously though, these are my reasons for the timing:

* Both clang 3.3 and gcc 4.8 were released in early 2013. By the time
these Squid-3.6 builds get near a beta release these compiler versions
will already be nearing their 3rd birthdays.


The man page of gcc 4.8 explains the -std=c11 option like this:

   ISO C11, the 2011 revision of the ISO C standard.  Support is
   incomplete and experimental.

The man page of gcc 4.9.1 has a new text:

   ISO C11, the 2011 revision of the ISO C standard.  This
   standard is substantially completely supported, modulo bugs,
   extended identifiers (supported except for corner cases when
   -fextended-identifiers is used), floating-point issues (mainly
   but not entirely relating to optional C11 features from Annexes
   F and G) and the optional Annexes K (Bounds-checking
   interfaces) and L (Analyzability).

Not sure if we can rely on man pages; it seems that gcc 4.9 is a safer
'minimal requirement'.


* Most of C++11'isms expected to be useful are the early "C++0x"
features (atomics, auto etc.) which are supported by GCC 4.4 from
RHEL6 anyway. RHEL6 people just need to explicitly pass one of the
special C++0x flags when building. The reason for naming those
specific compilers is that they are the first to offer *full* C++11
support.
   ** if someone wants to work on auto-detecting RHEL6 / CentOS6.5 and
auto-enabling that option, great.
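As a concrete illustration of the early-C++0x subset mentioned above
(auto and atomics; my own example, not Squid source), compiled with
-std=c++0x or -std=c++11 on compilers that ship the <atomic> header
(some older releases named it <cstdatomic>):

```cpp
#include <atomic>

// auto and std::atomic are among the early "C++0x" features discussed
// above; no full-C++11 features are used here.
int incrementAndRead(std::atomic<int>& counter) {
    counter.fetch_add(1);          // atomic read-modify-write
    auto value = counter.load();   // type deduced via auto
    return value;
}
```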

* It is growing more and more difficult to prevent useful C++isms
showing up.
http://www.squid-cache.org/Versions/v3/3.5/changesets/squid-3-13627.patch
took several days with clean rebuilds to even find out what the
problem was. Some, like auto, have now been around in the literature and
example code for 5+ years.

...regarding RHEL and CentOS specifically...

* RHEL/CentOS are the only OS family which has this problem. The
others are all on shorter or simply earlier cycles of turnover and
will be providing the compilers in their mainstream OS versions by
about the same time Squid-3.6 goes beta, if not already now.


Suse 11 SP3 does not have gcc 4.8 either (only 4.7).
Suse 12 is also not yet mature, so it has the same problem as Redhat 7.


* If RHEL7 matc

Re: [squid-dev] FYI: the C++11 roadmap

2014-11-05 Thread Marcus Kool



On 11/05/2014 02:01 AM, Amos Jeffries wrote:


On 6/05/2014 2:21 a.m., Amos Jeffries wrote:

I have just announced the change in 3.4.5 regarding C++11 support
and accompanied it with a notice that GCC version 4.8 is likely to
become the minimum version later this calendar year.

As it stands (discussed earlier):

* Squid-3.4 needs to build with any GCC 4.0+ version with C++03.

* Squid-3.6 will need to build with C++11.

* The Squid-3.5 situation is still in limbo and will depend on how
long we go before it branches.


Squid-3.5 retains GCC 4.0+ support shared with older versions.



We have a growing list of items needing C++11 features for simpler
implementation. At this point I am going to throw a peg out and say
Sept or Oct for starting to use C++11 specific code features.


This "peg" has now moved to Nov. The code cleanup and polishing
patches now going into trunk work fine with C++11 builds but possibly
not with compilers older than Clang 3.3 or GCC 4.8.


It is understandable that developers want to move forward. Almost everybody 
does.
But system administrators tend to do the opposite: they stay on a stable
platform for as long as possible while they wait for a new platform to mature.
Let's look at the most popular Linux server platform: Redhat 6 / CentOS 6.
Redhat 6 has gcc 4.4 while the successor Redhat 7 has gcc 4.8.
This means that when Squid gets a new requirement with gcc 4.8 the new versions
will not run on Redhat 6.

Because Redhat 6 is the most popular and stable server platform and Redhat 7
is not mature and does not meet my expectations of a stable platform
(especially systemd is still a big troublemaker), this implies that choosing
for C++11 effectively means that new releases of Squid will not be
installed any longer on the most popular Linux server platform.

So I think my opinion from the point of view of a system administrator
is clearly "choose for installability, choose C++03".

just my 2ct.

Marcus


Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev




Re: [squid-dev] [PATCH] adapting 100-Continue / A Bug 4067 fix

2014-11-01 Thread Marcus Kool



On 11/01/2014 03:59 PM, Tsantilas Christos wrote:

Hi all,

This patch is a fix for bug 4067:
 http://bugs.squid-cache.org/show_bug.cgi?id=4067


The bug report has a list of bugs.  Which ones are addressed?

Thanks
Marcus


Currently Squid fails to handle "100 Continue" requests/responses correctly
when adaptation/ICAP is used. When a data-upload request enters Squid (e.g. a
PUT request with an "Expect: 100-continue" header), Squid sends the ICAP
headers and HTTP headers to the ICAP server and then gets stuck waiting
forever for the request body data.

This patch implements the "force_request_body_continuation" access list
directive, which controls whether Squid itself tells the HTTP client to
proceed with sending the request body.


An allow match tells Squid to respond with the HTTP 100 or FTP 150 (Please
Continue) control message on its own, before forwarding the request to an
adaptation service or peer. Such a response usually forces the request sender
to proceed with sending the body. A deny match tells Squid to delay that
control response until the origin server confirms that the request body is
needed. Delaying is the default behavior.
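Based on the description above, a squid.conf sketch (the ACL name is my
own; the directive name is as given in the patch):

```
# Answer upload requests with "100 Continue" ourselves so the body
# reaches the ICAP service; everything else keeps the default
# delaying behavior. The ACL name "uploads" is illustrative.
acl uploads method PUT POST
force_request_body_continuation allow uploads
```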

This is a Measurement Factory project


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

