Re: [squid-users] Splice certain SNIs which served by the same IP

2022-02-22 Thread Christos Tsantilas

On 22/2/22 9:45 p.m., Eliezer Croitoru wrote:

Just to mention that once Squid is not splicing the connection, it has
full control at the URL level.

Exactly.

For many HTTP/2 sites the SNI does not provide enough information for a
splicing/bumping decision.


The Google sites are one example. You cannot safely bump google.com or
youtube.com while splicing gmail.com. You have to weigh the risks and
probably splice all Google sites, including gmail.com.




I do not know the scenario, but I have yet to see a similar case, and
it's probably because I am bumping almost all connections.


... and because Squid, while proxying, uses the HTTP/1.1 protocol, not HTTP/2.

Regards,
   Christos



Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Splice certain SNIs which served by the same IP

2022-02-21 Thread Christos Tsantilas

Hi Ben,

When HTTP/2 is used, requests for two different domains may be served over
the same TLS connection if both domains are served by the same remote
server and use the same TLS certificate.

There is a description here:
   https://daniel.haxx.se/blog/2016/08/18/http2-connection-coalescing/

And a similar problem report here:
   https://bugs.chromium.org/p/chromium/issues/detail?id=1176673
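
You can check whether two domains are candidates for this coalescing by
comparing the subjectAltName lists of their certificates, e.g. (requires
OpenSSL 1.1.1+ for the -ext option; the hostnames are just examples):

# openssl s_client -connect www.youtube.com:443 -servername www.youtube.com \
    </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName

If the other domain (or a wildcard covering it) appears in the same
subjectAltName list, an HTTP/2 client may reuse one connection for both.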

Regards,
   Christos


On 14/2/22 3:49 p.m., Ben Goz wrote:

By the help of God.

Hi,
My Squid version is 4.15, using it in a tproxy configuration.

I'm using SSL bump to intercept HTTPS connections, but I want to splice
several domains.
The problem is that when I splice some Google domains, e.g. youtube.com,
then the gmail.com domain is also spliced.

I know that it is very common for Google to host multiple domains on a
single server, and I suspect that when I splice, for example, youtube.com,
it also splices google.com.


Here is my Squid configuration for SSL bump:

https_port  ssl-bump tproxy generate-host-certificates=on 
options=ALL dynamic_cert_mem_cache_size=4MB 
cert=/usr/local/squid/etc/ssl_cert/myCA.pem 
dhparams=/usr/local/squid/etc/dhparam.pem sslflags=NO_DEFAULT_CA


acl DiscoverSNIHost at_step SslBump1

acl NoSSLIntercept ssl::server_name  "/usr/local/squid/etc/url-no-bump"
acl NoSSLInterceptRegexp ssl::server_name_regex -i 
"/usr/local/squid/etc/url-no-bump-regexp"

ssl_bump splice NoSSLInterceptRegexp_always
ssl_bump splice NoSSLIntercept
ssl_bump splice NoSSLInterceptRegexp
ssl_bump peek DiscoverSNIHost
ssl_bump bump all





Re: [squid-users] websockets through Squid

2020-10-21 Thread Christos Tsantilas

Hi Vieri,

I attached a patch to bug 5084 which may help us debug the issue:
   https://bugs.squid-cache.org/attachment.cgi?id=3772

The patch is for squid-v5 and produces debug messages at debug level 1.

Regards,
   Christos




On 17/10/20 11:36 p.m., Alex Rousskov wrote:

On 10/16/20 11:58 AM, Vieri wrote:


I pinpointed one particular request that's failing:

2020/10/16 16:56:37.250 kid1| 85,2| client_side_request.cc(745) clientAccessCheckDone: The 
request GET 
https://ed1lncb62601.webex.com/direct?type=websocket&dtype=binary&rand=1602860196950&uuidtag=G7609603-81A2-4B8D-A1C0-C379CC9B12G9&gatewayip=PUB_IPv4_ADDR_2
 is ALLOWED; last ACL checked: all

It is in this log:

https://drive.google.com/file/d/1OrB42Cvom2PNmV-dnfLVrnMY5IhJkcpS/view?usp=sharing


Thank you, that helped a lot!

I see that Squid decides that the client has closed the connection.
Squid propagates that connection closure to the server. Due to a Squid
bug (or my misunderstanding), cache.log does not contain enough details
to tell whether the client actually closed the connection in this case
and, if it did, whether it did so nicely or due to some TLS error.

I filed bug #5084 in the hope of improving the handling of this case. At
the very least, the log should categorize the closure, but it is also possible
that the client did not actually close the connection, and Squid is
completely mishandling the situation (in addition to being silent about
it). See https://bugs.squid-cache.org/show_bug.cgi?id=5084 for details.

I cannot volunteer to work on this further right now, but I hope this
triage will help another volunteer (or a paid contractor) to make
further progress.

https://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F

Alex.




Re: [squid-users] sslBump adventures in enterprise production environment

2015-11-17 Thread Christos Tsantilas

On 11/16/2015 08:00 AM, Eugene M. Zheganin wrote:

Hi.

On 16.11.2015 00:14, Yuri Voinov wrote:


It's common knowledge. Squid is unable to pass an unknown protocol on
the standard port. Consequently, the ability to proxy this protocol does
not exist.

If it were simply tunneling... But it is not HTTPS, and not just
HTTP-over-443. This is a more complicated and very marginal protocol.


I'm really sorry to tell you that, but you are perfectly wrong. These
non-HTTPS tunnels have been working for years. And this isn't HTTPS
because of:

# openssl s_client -connect login.icq.com:443
CONNECTED(0003)
34379270680:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown
protocol:/usr/src/secure/lib/libssl/../../../crypto/openssl/ssl/s23_clnt.c:782:


This does not look like the SSL protocol.
It cannot be used on an SSL-bumping Squid port.

The "on_unsupported_protocol" configuration parameter, which exists in
squid-trunk and squid-4.x, may be useful in your case.
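
For example, a sketch (squid-4.x syntax; check your release's
documentation for the exact actions available):

# Tunnel client traffic that does not parse as TLS/HTTP on a bumping port,
# instead of rejecting it:
on_unsupported_protocol tunnel all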




---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 297 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---

Eugene.





Re: [squid-users] how to get c-icap url category from squid access log

2015-11-04 Thread Christos Tsantilas

On 11/04/2015 08:34 AM, Murat K wrote:

Hi guys,

Please, can someone tell me if it is possible to send URL category info
from c-icap to the Squid access log?


The ICAP response headers can be logged using the "adapt::<last_h" formatting code in Squid.


If you are using the url_check c-icap service, then you can log the
X-Attribute, X-Response-Info and X-Response-Desc ICAP headers.


If you are using a custom c-icap service, then you should send
information from the ICAP server to Squid by setting an ICAP response header.
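
For example, a sketch of logging one of these headers (the header name
comes from the url_check service; the format name "icapfmt" and the log
path are arbitrary):

logformat icapfmt %ts.%03tu %>a %rm %ru %>Hs %{X-Attribute}adapt::<last_h
access_log /var/log/squid/access-icap.log icapfmt

The %{Header}adapt::<last_h code logs the named header from the last ICAP
response; verify the exact syntax against your Squid version's logformat
documentation.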





thanks so much





Re: [squid-users] Assert(call->dialer.handler == callback)

2015-05-05 Thread Christos Tsantilas

Hi Steve,
 We have similar crashes.

I created a new bug report in squid bugzilla (I did not find any other
similar report), using your stack trace:

  http://bugs.squid-cache.org/show_bug.cgi?id=4238

I also attached a patch here, which probably fixes this problem. Can you
please test it?


Regards,
   Christos


On 04/30/2015 07:14 PM, Steve Hill wrote:


I've just migrated a system from Squid 3.4.10 to 3.5.3 and I'm getting
frequent crashes with an assertion of "call->dialer.handler == callback"
in Read.cc:comm_read_cancel().

call->dialer.handler == (IOCB *) 0x7ffe1493b2d0


callback == 


This is quite a busy system doing server-first ssl_bump and I get a lot
of SSL negotiation errors in cache.log (these were present under 3.4.10
too).  I think a good chunk of these are Team Viewer, which abuses
CONNECTs to port 443 of remote servers to do non-SSL traffic, so
obviously isn't going to work with ssl_bump.  I _suspect_ that the
assertion may be being triggered by these SSL errors (e.g. connection
being unexpectedly torn down because SSL negotiation failed?), but I
can't easily prove that.

I don't quite understand the comm_read_cancel() function though - as far
as I can see, the callback parameter is only used in the assert() - is
that correct?


Stack trace:
#0  0x7ffe1155d625 in raise (sig=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1  0x7ffe1155ee05 in abort () at abort.c:92
#2  0x7ffe148210df in xassert (msg=Unhandled dwarf expression opcode
0xf3
) at debug.cc:544
#3  0x7ffe14a62787 in comm_read_cancel (fd=600,
callback=0x7ffe148c8dd0 ,
 data=0x7ffe176c8298) at Read.cc:204
#4  0x7ffe148c5e62 in IdleConnList::clearHandlers
(this=0x7ffe176c8298, conn=...) at pconn.cc:157
#5  0x7ffe148c94ab in IdleConnList::findUseable
(this=0x7ffe176c8298, key=...) at pconn.cc:269
#6  0x7ffe148c979d in PconnPool::pop (this=0x7ffe145db010, dest=...,
domain=Unhandled dwarf expression opcode 0xf3
) at pconn.cc:449
#7  0x7ffe14852142 in FwdState::pconnPop (this=Unhandled dwarf
expression opcode 0xf3
) at FwdState.cc:1153
#8  0x7ffe14855605 in FwdState::connectStart (this=0x7ffe2034c4e8)
at FwdState.cc:850
#9  0x7ffe14856a31 in FwdState::startConnectionOrFail
(this=0x7ffe2034c4e8) at FwdState.cc:398
#10 0x7ffe148d2fa5 in peerSelectDnsPaths (psstate=0x7ffe1fd0c028) at
peer_select.cc:302
#11 0x7ffe148d6a1d in peerSelectDnsResults (ia=0x7ffe14f0ac20,
details=Unhandled dwarf expression opcode 0xf3
) at peer_select.cc:383
#12 0x7ffe148a8e71 in ipcache_nbgethostbyname (name=Unhandled dwarf
expression opcode 0xf3
) at ipcache.cc:518
#13 0x7ffe148d23c1 in peerSelectDnsPaths (psstate=0x7ffe1fd0c028) at
peer_select.cc:259
#14 0x7ffe148d6a1d in peerSelectDnsResults (ia=0x7ffe14f0ac20,
details=Unhandled dwarf expression opcode 0xf3
) at peer_select.cc:383
#15 0x7ffe148a8e71 in ipcache_nbgethostbyname (name=Unhandled dwarf
expression opcode 0xf3
) at ipcache.cc:518
#16 0x7ffe148d23c1 in peerSelectDnsPaths (psstate=0x7ffe1fd0c028) at
peer_select.cc:259
#17 0x7ffe148d382b in peerSelectFoo (ps=0x7ffe1fd0c028) at
peer_select.cc:522
#18 0x7ffe149bba6a in ACLChecklist::checkCallback
(this=0x7ffe2065b9e8, answer=...) at Checklist.cc:167
#19 0x7ffe148d3f5a in peerSelectFoo (ps=0x7ffe1fd0c028) at
peer_select.cc:459
#20 0x7ffe148d5176 in peerSelect (paths=0x7ffe2034c540,
request=0x7ffe1b660b70, al=Unhandled dwarf expression opcode 0xf3
) at peer_select.cc:163
#21 0x7ffe14852ae3 in FwdState::Start (clientConn=...,
entry=0x7ffe1b0da790, request=0x7ffe1b660b70, al=...) at FwdState.cc:366
#22 0x7ffe14801401 in clientReplyContext::processMiss
(this=0x7ffe1fcf5838) at client_side_reply.cc:691
#23 0x7ffe14801eb0 in clientReplyContext::doGetMoreData
(this=0x7ffe1fcf5838) at client_side_reply.cc:1797
#24 0x7ffe14805a89 in ClientHttpRequest::httpStart
(this=0x7ffe1dcda618) at client_side_request.cc:1518
#25 0x7ffe14808cac in ClientHttpRequest::processRequest
(this=0x7ffe1dcda618) at client_side_request.cc:1504
#26 0x7ffe14809013 in ClientHttpRequest::doCallouts
(this=0x7ffe1dcda618) at client_side_request.cc:1830
#27 0x7ffe1480b453 in checkNoCacheDoneWrapper (answer=...,
data=0x7ffe1e5db378) at client_side_request.cc:1400
#28 0x7ffe149bba6a in ACLChecklist::checkCallback
(this=0x7ffe1c88b4a8, answer=...) at Checklist.cc:167
#29 0x7ffe1480b40a in ClientRequestContext::checkNoCache
(this=0x7ffe1e5db378) at client_side_request.cc:1385
#30 0x7ffe14809c04 in ClientHttpRequest::doCallouts
(this=0x7ffe1dcda618) at client_side_request.cc:1748
#31 0x7ffe1480d109 in ClientRequestContext::clientAccessCheckDone
(this=0x7ffe1e5db378, answer=Unhandled dwarf expression opcode 0xf3
) at client_side_request.cc:821
#32 0x7ffe1480d898 in ClientRequestContext::clientAccessCheck2
(this=0x7ffe1e5db378) at client_side_request.cc:718
#33 0x7ffe14809767 in ClientHttpRequest::doCallouts
(this=0x7ffe1dcda618) at client_side_request.cc:

Re: [squid-users] light weight ICAP server that isn't dead :o)

2015-02-10 Thread Christos Tsantilas

On 02/10/2015 01:00 AM, Luis Miguel Silva wrote:


The most interesting one seems to be C-ICAP but I don't like that it
hasn't even reached a 1.0 version...


If you believe that it is interesting, then at least test it to see if it
matches your needs.


The version number has to do with its goals and the number of implemented
features, not with its stability.


Regards,
   Christos



What do you guys recommend I adopt?

Thank you,
Luis Silva



Re: [squid-users] Correctly implementing peak-splice

2014-11-05 Thread Christos Tsantilas

On 11/04/2014 02:26 PM, James Lay wrote:


Thanks a bunch Christos,

That list of IP's is things like apple.com, textnow.me, and windows
updates...IP's that simply don't bump well.  My setup is a linux box
that's a router...one NIC internal IP, the other external IP.  Via
iptables redirect, I'm transparently intercepting the web traffic of a
few devices, only allowing them access to the list of sites in url.txt.
The issue with using the broken_sites list is that I have to specify large
chunks of netblocks, which I lose control and visibility of. What I'm
really hoping for is a way for Squid to be able to, in my case at least,
look at either the server_name extension in the Client


You need to build your own external_acl helper which takes the client SNI
(server_name extension) as input. Read the Squid wiki for information
about external ACL helpers:

 http://wiki.squid-cache.org/Features/AddonHelpers#Access_Control_.28ACL.29

It is easy to build one in Perl or as a shell script. I suggest building
an external_acl helper which returns "OK" when the SNI matches or when no
SNI information exists.


You can use the following configuration or similar:
#
external_acl_type EXTACL %ssl::>sni /path-to-my/external-acl-helper.sh
acl EXTACL external EXTACL

acl step1 at_step  SslBump1
acl step2 at_step  SslBump2
acl step3 at_step  SslBump3

# At first step peek all
ssl_bump peek step1 all
ssl_bump splice step2 EXTACL
ssl_bump bump all
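
A minimal sketch of such a helper in shell (the list-file path and the
exact-match rule are assumptions to adapt; Squid substitutes "-" when no
SNI value is available):

```shell
#!/bin/sh
# Hypothetical external_acl helper: reads one SNI value per line from Squid
# and answers OK when the SNI appears in a domain list (one domain per line)
# or when no SNI was sent ("-"); otherwise answers ERR.
check_sni() {
    list="$1"
    while read -r sni; do
        # Squid sends "-" for an empty %ssl::>sni value.
        if [ "$sni" = "-" ] || grep -qxF "$sni" "$list" 2>/dev/null; then
            echo OK
        else
            echo ERR
        fi
    done
}

# As the standalone helper script referenced by external_acl_type,
# it would end with something like:
# check_sni /usr/local/squid/etc/url-no-bump
```

Keep the helper simple: one answer line per input line, written
immediately, or Squid will stall waiting for the reply.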



Hello, or, if that's not present, look at the dNSName of certificate
being sent, check the access against url.txt, and either allow or deny.


In your case the server certificate information will not work well. By
the time this information is available:

1) in peek mode, you cannot bump any more;
2) in stare mode, you cannot splice any more.
There are exceptions to the above rules (for example, when the client
uses the same SSL library as Squid), but the SSL protocol is safe enough
to not allow us to do anything better than this.


Regards,
   Christos



Ssl_bump does work well for most sites...and I understand we are
performing a man in the middle attack so it's not supposed to be easy.
Again my hope isn't really to perform a mitm...more of an access control
type thing.  Thanks again Christos...I hope I explained this well
enough.

James




Re: [squid-users] Correctly implementing peak-splice

2014-11-04 Thread Christos Tsantilas

On 11/03/2014 03:00 PM, James Lay wrote:


Thanks Christos,

So here's where I'm at...my full test config below:
..
..

logformat common %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %ssl::>cert_subject


The %ssl::>cert_subject code will print the subject of the client
certificate, if there is any. In most cases the client does not send any
certificate.

Logging the server certificate subject is not yet implemented.



The above works, but allows all sites regardless of what's in url.txt.


If you want to use a list of URLs to restrict which sites should be
bumped, you should use an external_acl helper.
You can send the external_acl helper the client SNI information (at
at_step SslBump2) and/or the server certificate subject (at at_step
SslBump3).



Additionally, there's no logging of any kind.  The allow part makes
sense as this is the last ACL, the no logging part is confusing.  If I
add:

acl broken_sites dst 69.25.139.128/25

and change to
ssl_bump peek step1 broken_sites
ssl_bump peek step2 broken_sites
ssl_bump splice step3 broken_sites


This will splice any connection to broken_sites and will not bump any
other request.




that works, but again...I get no logging, which is worse then "ssl_bump
splice broken_sites", and defeats the purpose of trying to avoid having
to create the broken_sites ACL in the first place.  Lastly, if I try and
change splice to peek or bump it's broken with odd log entries such as:


It will help if you describe what you are trying to do.
The acl broken_sites includes only IP addresses. It looks like
peek-and-splice is not needed in your application.

You can just use "ssl_bump none broken_sites"



Nov  3 05:45:23 gateway (squid-1): 192.168.1.110 - -
[03/Nov/2014:05:45:23 -0700] "GET https://www.google.com/ HTTP/1.1" 503
3854 TAG_NONE:HIER_NONE -
Nov  3 05:45:31 gateway (squid-1): 192.168.1.110 - -
[03/Nov/2014:05:45:31 -0700] "CONNECT 206.190.36.45:443 HTTP/1.1" 403
3402 TCP_DENIED:HIER_NONE -
Nov  3 05:45:31 gateway (squid-1): 192.168.1.110 - -
[03/Nov/2014:05:45:31 -0700] "#026#003#001 %BB/%CESsJ%B3%C2%BC%CC%BD%90
HTTP/1.1" 400 3577 TAG_NONE:HIER_NONE -

Is there something I am missing?  I've been really reading through the
squid site, but I can't find any examples of peek splice.  Thank you.

James



Re: [squid-users] Correctly implementing peak-splice

2014-11-03 Thread Christos Tsantilas

On 10/30/2014 02:06 PM, James Lay wrote:

Hello all,

Here is my complete config for trying out peek/splice. This currently
does not work... is there something obvious that I'm missing? The current
error is:

Oct 30 06:03:14 gateway squid: 192.168.1.110 - - [30/Oct/2014:06:03:14
-0600] "GET https://www.google.com/ HTTP/1.1" 503 3854
TAG_NONE:HIER_NONE

and on the page I get a 71 protocol error and a SSL3_WRITE_PENDING:bad
write retry.


- You should use the at_step ACL to configure a different bumping mode
for each bumping step.


- If you used "peek" mode in the SslBump1 and SslBump2 steps, then in the
SslBump3 step you should use "splice". If you select "bump", you will
most likely get SSL connection errors.

The "peek" mode in the SslBump3 step is interpreted as "bump" mode.

- If you selected peek mode in the SslBump1 and SslBump2 steps, in most
cases you can select only "terminate" or "splice" for the SslBump3 step.


The following configuration should work:

# Bumping steps:
acl step1 at_step  SslBump1
acl step2 at_step  SslBump2
acl step3 at_step  SslBump3

# Selecting bumping mode
ssl_bump peek step1 all
ssl_bump peek step2 all
ssl_bump splice step3 all

Regards,
Christos


Re: [squid-users] squid-3.4.8 sslbump breaks facebook

2014-10-16 Thread Christos Tsantilas


A patch for this bug is attached to bug report 4102.
Please test it and report any problems.

Regards,
  Christos



On 10/16/2014 12:14 PM, Amm wrote:


On 10/16/2014 02:35 PM, Jason Haar wrote:

On 16/10/14 20:54, Jason Haar wrote:

I also checked the ssl_db/certs dir and
removed the facebook certs and restarted - didn't help

let me rephrase that. I deleted the dirtree and re-ran "ssl_crtd -s
/usr/local/squid/var/lib/ssl_db -c" - ie restarted with an empty cache.
It didn't help. It created a new fake facebook cert - but the cert
doesn't fully match the characteristics of the "real" cert


http://bugs.squid-cache.org/show_bug.cgi?id=4102

Please add weight to bug report :)

Amm.

