Re: [squid-users] TOS squid-3.5.0.4

2015-05-31 Thread Amos Jeffries
On 31/05/2015 11:38 p.m., Marcel Fossua wrote:
 Hi All
 Let's see if some of you can help me troubleshoot the issue I have with
 squid-3.5.0.4

Please upgrade. 3.5.5 was released the other day, and Eliezer has new
CentOS 6/7 packages available already.

 on CentOS 6.6, configured with tproxy.
 In fact the issue is related to QoS stuff; I just set things according to
 the manual:
 qos_flows tos local-hit=0x30
 qos_flows mark local-hit=0x30
 qos_flows tos sibling-hit=0x31
 qos_flows mark sibling-hit=0x31
 qos_flows tos parent-hit=0x32
 qos_flows mark parent-hit=0x32

As the manual states, only TOS values that are multiples of 4 are available
for use. The final two bits of the TOS field are used by the TCP ECN
feature. You should be seeing log warnings as Squid masks away those 0x1
/ 0x2 bits to produce a valid DiffServ value.
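That masking can be sanity-checked with a few lines of Python (the constant name is my own; the 0xFC value follows from the two ECN bits being the lowest bits of the byte):

```python
# The DiffServ code point occupies the top six bits of the old TOS byte;
# the bottom two bits (0x1 / 0x2) belong to ECN, so Squid clears them.
DSCP_MASK = 0xFC  # binary 1111 1100

def effective_tos(requested):
    """TOS value left after the ECN bits are masked away."""
    return requested & DSCP_MASK

# Marcel's sibling-hit=0x31 and parent-hit=0x32 both collapse to 0x30:
for tos in (0x30, 0x31, 0x32):
    print(hex(tos), '->', hex(effective_tos(tos)))
```

In other words, usable values are always multiples of 4: 0x00, 0x04, 0x08, 0x0C, and so on.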

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TOS squid-3.5.0.4

2015-05-31 Thread Amos Jeffries
On 1/06/2015 11:35 a.m., Marcel Fossua wrote:
 Hi Amos, thanks for your reply. I just upgraded to 3.5.5, but compiled from
 source to get --enable-ecap.
 But I can't figure out what you mean exactly concerning the TOS part:
 did you mean that what I set is OK or not?

 qos_flows tos

The above line does nothing.


 qos_flows local-hit=0x30

  0x30 & 0xFC => 0x30

OK.

 qos_flows parent-hit=0x32

  0x32 & 0xFC => 0x30

not OK.

The second hex digit value can only be 0x0, 0x4, 0x8, or 0xC.

Also, the qos_flows lines are missing the identifier to determine
whether it's the TCP TOS or iptables NFMARK being set...

  qos_flows tos local-hit=0x30
  qos_flows tos parent-hit=0x34

Or just:
  qos_flows tos local-hit=0x30 parent-hit=0x34

Amos



Re: [squid-users] TOS squid-3.5.0.4

2015-05-31 Thread Marcel Fossua
No luck.
Still not getting any result at all; I think the issue could be with my
Mikrotik box.

# Marking packets with DSCP (for Mikrotik 6.x) for cache hit content coming
from SQUID Proxy

/ip firewall mangle
add action=mark-packet chain=prerouting disabled=no dscp=12 \
    new-packet-mark=squid-connection passthrough=no comment="SQUID - TOS 12"
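For reference, the dscp=12 in that rule and Squid's local-hit=0x30 are the same value seen from two sides: DSCP is the top six bits of the TOS byte. A quick sketch of the arithmetic (function names are my own):

```python
# TOS byte layout: DSCP (6 bits) << 2 | ECN (2 bits),
# so dscp = tos >> 2 and tos = dscp << 2.
def tos_to_dscp(tos):
    return tos >> 2

def dscp_to_tos(dscp):
    return dscp << 2

print(tos_to_dscp(0x30))     # 12 -> matches the RouterOS dscp=12 match
print(hex(dscp_to_tos(12)))  # 0x30 -> matches qos_flows local-hit=0x30
```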

http://squid-web-proxy-cache.1019090.n4.nabble.com/file/n4671467/Captura_de_pantalla_2015-05-29_a_las_21.png
 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TOS-squid-3-5-0-4-tp4671459p4671467.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] ssl_bump and SNI

2015-05-31 Thread Nathan Hoad
Hello,

Here are some excerpts of what I've used, and an example Python helper:

https_port 60099 intercept ssl-bump tcpkeepalive
cert=/path/to/cert.pem key=/path/to/key.pem options=NO_SSLv2,NO_SSLv3
generate-host-certificates=on

external_acl_type sni ttl=30 concurrency=X children-max=Y
children-startup=Z %ssl::sni /path/to/your/helper

acl sni_exclusions external sni
acl tcp_level at_step SslBump1
acl client_hello_peeked at_step SslBump2

ssl_bump peek tcp_level all
ssl_bump splice client_hello_peeked sni_exclusions
ssl_bump bump all

Helper:

import sys

# Read one request per line; external ACL helpers must reply line-by-line
# and flush each reply so Squid sees it immediately.
line = sys.stdin.readline()

# run loop until an empty read, which indicates the process should shut down.
while line:
    concurrency_id, sni = line.split()

    if sni == 'wellsfargo.com':
        sys.stdout.write('%s OK\n' % concurrency_id)
    else:
        sys.stdout.write('%s ERR\n' % concurrency_id)
    sys.stdout.flush()

    line = sys.stdin.readline()
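The wire protocol the helper speaks is one request per line ("<channel-id> <SNI>", the channel ID coming from concurrency=) and one reply per line ("<channel-id> OK|ERR"). A self-contained simulation of that exchange (the handle() wrapper is my own, not part of the real helper):

```python
import io

def handle(stream_in, stream_out):
    # Same logic as the helper: split each request line into the
    # concurrency channel ID and the SNI value, answer OK or ERR.
    line = stream_in.readline()
    while line:
        concurrency_id, sni = line.split()
        verdict = 'OK' if sni == 'wellsfargo.com' else 'ERR'
        stream_out.write('%s %s\n' % (concurrency_id, verdict))
        line = stream_in.readline()

out = io.StringIO()
handle(io.StringIO('0 wellsfargo.com\n1 example.com\n'), out)
print(out.getvalue())  # "0 OK\n1 ERR\n"
```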

Hope that helps,

Nathan.

On 30 May 2015 at 01:14, James Lay j...@slave-tothe-box.net wrote:
 On 2015-05-29 08:57 AM, Nathan Hoad wrote:

 Yes, I have it working on about a dozen deployments so far, using an
 external ACL to make bumping decisions based on the SNI server name
 and a few other things. No complaints from me, it Just Works.
 On 29/05/2015 5:50 pm, sp_ ap...@yandex.ru wrote:

 Hello,

 Does anyone have squid 3.5 working with intercept + https?
 I've googled a lot, but it seems there isn't any positive experience
 with it.






 Nathan,

 Care to post your config and external helper?  I know I'd love to see
 concrete examples.  Thank you.

 James



[squid-users] Ssl-bump deep dive (intercept last post and final thoughts)

2015-05-31 Thread James Lay
So this has been REALLY good!  The tl;dr: ssl-bumping is pretty easy
even with intercept; ssl-bumping with access control is a little more
difficult.  Jump to the config to skip the chit chat.

My goal has always been to have a content filter based on URL regex.  This
works just fine for http traffic, but is much more difficult for https
traffic, simply because you may or may not know the host you're
going to, depending on the site/app.  I'll be real honest here...I'm
only doing this to protect/filter the traffic of two kids, on laptops,
iPhone, and Android phone, so it's a mixed bag of content and, since
it's just the two of them in a home environment, I get to play around
and see what works and what doesn't.

Below is as close as I can get to transparent intercept ssl-bump with
content filtering, using a list of domains/URLs, for both http and
https.  I still have to use a list of broken sites, which are large
netblocks (17.0.0.0/8...Apple anyone?), because for some of these I just
can't seem to get host/domain information during the SSL handshake.  As I
discovered after attempting to put this into production, I have not
been able to emulate, using wget or curl, an https session that doesn't
have any SNI information, so that threw me for a loop.  TextNow is a
great example (I'm including a packet capture of this in this post).
There's no host information in the client hello...there's no host
information in the server hello...buried deep in the certificate ONLY
is the commonName=.*textnow.me...that's it.  This dashed my hopes of
using a url_regex for access control with all https sessions.  I have
%ssl::cert_subject in my logging, and I never did see this log in any
of my tests...and I tested a BUNCH of different peek/stare/splice/bump
combinations...so I don't think squid is actually seeing this from the
certificate.

Another challenge is getting http url_regex filtering to work with https
filtering.  My method of filtering means not having an "http_access
allow localnet", which directly conflicted with also trying to filter
https.  The solution was to add an ACL for port 443, then http_access to
just allow it, as our filtering was going to happen for https further
down.
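A minimal sketch of that arrangement (the ACL name is my own placeholder, not from the actual config):

```
# Let port-443 CONNECTs through http_access unfiltered; the real
# allow/deny decision for https happens later, at the ssl_bump rules.
acl https_port port 443
http_access allow https_port
```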

I know there's a fair amount of people who just want to plop in some
config files, run a few commands, and be up and running.  The below
configuration has two additional files it references: http_url.txt,
which is a list of domains/URLs (\.apple\.com for example), and the
aptly named broken, which is an IP list (17.0.0.0/8).  The broken list
should be (semi) trusted: sites that we just can't get SNI or
hostname information from.  If you've created a single cert/key pair
from the Squid documentation, you won't need the key= line in your
https_port directive.  If you've followed along in my posts, you already
have the configure line from my previous posts.  Change the
commands/config to fit where your squid config and ssl_db are.  So after
configuring, make sure you:

sudo /opt/libexec/ssl_crtd -c -s /opt/var/ssl_db
sudo chown -R nobody /opt/var/ssl_db/

As I believe in a lot of logging, and actually looking at said logging,
below is what you can expect to see in your logs (mine logs to syslog,
again, change this if you log to a different file):

Allowed http to .apple.com in http_url.txt:
May 31 17:03:48 gateway (squid-1): 192.168.1.100 - -
[31/May/2015:17:03:48 -0600] GET
http://init.ess.apple.com/WebObjects/VCInit.woa/wa/getBag? HTTP/1.1 - -
200 5243 TCP_MISS:ORIGINAL_DST -
Denied http to symcb.com not in http_url.txt
May 31 17:03:48 gateway (squid-1): 192.168.1.100 - -
[31/May/2015:17:03:48 -0600] GET http://sd.symcb.com/sd.crt HTTP/1.1 -
- 403 3618 TCP_DENIED:HIER_NONE -
Spliced https IP in broken.txt (google block 216.58.192.0/19)
May 31 17:04:34 gateway (squid-1): 192.168.1.101 - -
[31/May/2015:17:04:34 -0600] CONNECT 216.58.216.138:443 HTTP/1.1 - -
200 568 TCP_TUNNEL:ORIGINAL_DST peek
Spliced https (IP in broken.txt that we got SNI for) and bumped sites in
http_url.txt look exactly the same:
May 31 17:09:45 gateway (squid-1): 192.168.1.100 - -
[31/May/2015:17:09:45 -0600] CONNECT 23.222.157.21:443 HTTP/1.1
init.itunes.apple.com - 200 30314 TCP_TUNNEL:ORIGINAL_DST peek

The only drag with the configuration is that you won't see when an https
session is terminated when the IP/URL is not in broken.txt or
http_url.txt:

[17:20:53 jlay@analysis:~$] wget -d
--ca-certificate=/etc/ssl/certs/sslsplit.crt https://www.yahoo.com
Setting --ca-certificate (cacertificate) to /etc/ssl/certs/sslsplit.crt
DEBUG output created by Wget 1.16.1 on linux-gnu.

URI encoding = ‘UTF-8’
--2015-05-31 17:20:59--  https://www.yahoo.com/
Resolving www.yahoo.com (www.yahoo.com)... 206.190.36.45,
206.190.36.105, 2001:4998:c:a06::2:4008
Caching www.yahoo.com = 206.190.36.45 206.190.36.105
2001:4998:c:a06::2:4008
Connecting to www.yahoo.com (www.yahoo.com)|206.190.36.45|:443...
connected.
Created socket 3.
Releasing 0x7fdf67eecdd0 (new refcount 1).
Initiating 

Re: [squid-users] Ssl-bump deep dive (intercept last post and final thoughts)

2015-05-31 Thread James Lay
On Mon, 2015-06-01 at 13:00 +1200, Amos Jeffries wrote:

Re: [squid-users] TOS squid-3.5.0.4

2015-05-31 Thread Marcel Fossua
Thanks Amos
I will try it.

Rgds





Re: [squid-users] Ssl-bump deep dive (intercept last post and final thoughts)

2015-05-31 Thread Amos Jeffries
On 1/06/2015 11:56 a.m., James Lay wrote:

Re: [squid-users] Conditional question

2015-05-31 Thread Amos Jeffries
On 31/05/2015 10:24 a.m., James Lay wrote:
 On Sun, 2015-05-31 at 08:45 +1200, Amos Jeffries wrote:
 
 On 31/05/2015 4:48 a.m., James Lay wrote:
 Per the docs:

 #  Conditional configuration
 #
 #   If-statements can be used to make configuration directives
 #   depend on conditions:
 #
 #   if CONDITION
 #   ... regular configuration directives ...
 #   [else
 #   ... regular configuration directives ...]
 #   endif
 #
 #   The else part is optional. The keywords if, else, and
 endif
 #   must be typed on their own lines, as if they were regular
 #   configuration directives.
 #
 #   NOTE: An else-if condition is not supported.
 #
 #   These individual conditions types are supported:
 #
 #   true
 #   Always evaluates to true.
 #   false
 #   Always evaluates to false.
 #   integer = integer
 #   Equality comparison of two integer numbers.

 Anyone have any examples, documentation, heck ANYTHING that can show how
 this works?  I can't seem to find a thing besides the above.

 Those are for process controls (SMP, named services, etc).
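For what it's worth, a sketch of the kind of process control those conditionals are meant for: an SMP setup where each worker gets its own cache_dir, keyed on the ${process_number} macro (paths and sizes here are placeholders):

```
# squid.conf: per-worker cache directories in an SMP setup
workers 2
if ${process_number} = 1
cache_dir rock /var/cache/squid/rock1 1000
else
cache_dir rock /var/cache/squid/rock2 1000
endif
```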

  My goal is
 something like the below:

 if port = 80
 http_access deny all
 else
 http_access allow all
 endif

 But nothing I'm trying as the condition expression is working.  Thank
 you.

 The default Squid configuration should just work...

   http_access deny !Safe_ports
   http_access deny CONNECT !SSL_Ports
   ...
   # this one permits the CONNECT *:443 requests to get bumped
   http_access allow localnet
   ..
   http_access deny all

 If you are using any other access controls on your client traffic you
 need to keep in mind that Squid is dealing with CONNECT raw-IP:443 ...
 requests in http_access / adapted_http_access / url_rewrite_access /
 adaptation_access / ssl_bump prior to bumping them.

 Amos
 
 
 Hi again Amos,
 
 So...my method of access control might be weird.  I have a regex list of
 sites that work fine via http (say \.acer\.com).  So, I allow access to
 this list via:
 
 acl allowed_http_sites url_regex /opt/etc/squid/http_url.txt
 http_access allow allowed_http_sites
 http_access deny !allowed_http_sites

By using url_regex to match domain names you are preventing any chance
for Squid to perform a reverse-DNS lookup on the raw-IP CONNECT requests
and see if the rDNS site name matches an entry in your list.

If you made that ACL a dstdomain (which is the right type to be matching
domains with) you would see this rDNS behaviour and only have to add
domain entries for sites where the rDNS is different from the popular
domain names.
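That change might look like this (reusing James's file path; note the file entries would then be plain dstdomain values such as .apple.com rather than regexes like \.apple\.com):

```
# dstdomain matches hostnames, and triggers rDNS on raw-IP CONNECTs
acl allowed_http_sites dstdomain "/opt/etc/squid/http_url.txt"
http_access allow allowed_http_sites
http_access deny !allowed_http_sites
```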

 
 This works well for allowing access to the list of sites...the lack of
 "http_access allow localnet" makes this happen.  With the above, however,
 ssl_bumping stops working as I get:
 
 [16:18:22 jlay@powerbook:~/test$ wget
 --ca-certificate=/etc/ssl/certs/sslsplit_ca_cert.pem -d
 https://www.msn.com
 DEBUG output created by Wget 1.16 on linux-gnu.
 
 URI encoding = ‘UTF-8’
 --2015-05-30 16:19:46--  https://www.msn.com/
 Certificates loaded: 173
 Resolving www.msn.com (www.msn.com)... 204.79.197.203
 Caching www.msn.com = 204.79.197.203
 Connecting to www.msn.com (www.msn.com)|204.79.197.203|:443...
 connected.
 Created socket 4.
 Releasing 0x10c3ef98 (new refcount 1).
 The certificate's owner does not match hostname ‘www.msn.com’
 
 May 30 16:19:46 analysis squid: 192.168.1.73 - - [30/May/2015:16:19:46
 -0600] CONNECT 204.79.197.203:443 HTTP/1.1 - 200 0
 TCP_DENIED:HIER_NONE peek
 
 Adding "http_access allow localnet" makes ssl_bumping work correctly, but
 then "http_access deny !allowed_http_sites" does not work.  I'm having
 a hard time getting both http and https filtering to play well together
 with one instance of squid.  I'd like to try and just go with one, but
 if I have to I'll go with two.  Anyway, thanks again for looking...I hope
 I'm explaining this well.

The above-mentioned solution, or (as you found) allowing all port 443
traffic through to at least the bumping stage, will do it.

Amos


[squid-users] TOS squid-3.5.0.4

2015-05-31 Thread Marcel Fossua
Hi All,
Let's see if some of you can help me troubleshoot the issue I have with
squid-3.5.0.4 on CentOS 6.6, configured with tproxy.
In fact the issue is related to QoS stuff; I just set things according to
the manual:

qos_flows tos local-hit=0x30
qos_flows mark local-hit=0x30
qos_flows tos sibling-hit=0x31
qos_flows mark sibling-hit=0x31
qos_flows tos parent-hit=0x32
qos_flows mark parent-hit=0x32
qos_flows tos disable-preserve-miss
tcpdump output

tcpdump -vni eth1 | grep 'tos 0x30'

tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535
bytes

01:37:24.787867 IP (tos 0x30, ttl 64, id 38723, offset 0, flags [DF], proto
TCP (6), length 534)

01:37:24.788003 IP (tos 0x30, ttl 64, id 38724, offset 0, flags [DF], proto
TCP (6), length 2920)

01:37:24.788019 IP (tos 0x30, ttl 64, id 38726, offset 0, flags [DF], proto
TCP (6), length 1256)
01:37:24.788141 IP (tos 0x30, ttl 64, id 38727, offset 0, flags [DF], proto
TCP (6), length 2920)

but for sure it's not marking anything while sending traffic to my pppoe
BRAS (MK)


