Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Garri Djavadyan
On Mon, 2016-10-24 at 21:05 +0500, Garri Djavadyan wrote:
> On 2016-10-24 19:40, Garri Djavadyan wrote:
> > 
> > So, the big G sends 304 only to HEAD requests, although it is a
> > violation [1], AIUI:
> > 
> > curl --head -H 'If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT' \
> >   -H 'If-None-Match: "101395"' \
> >   http://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
> > HTTP/1.1 304 Not Modified
> > ETag: "101395"
> > Server: downloads
> > Vary: *
> > X-Content-Type-Options: nosniff
> > X-Frame-Options: SAMEORIGIN
> > X-Xss-Protection: 1; mode=block
> > Date: Mon, 24 Oct 2016 14:36:32 GMT
> > Connection: keep-alive
> > 
> > ---
> > 
> > $ curl --verbose -H 'If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT' \
> >   -H 'If-None-Match: "101395"' \
> >   http://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb > /dev/null
> > > 
> > > GET /linux/direct/google-chrome-stable_current_amd64.deb HTTP/1.1
> > > Host: dl.google.com
> > > User-Agent: curl/7.50.3
> > > Accept: */*
> > > If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT
> > > If-None-Match: "101395"
> > > 
> > < HTTP/1.1 200 OK
> > < Accept-Ranges: bytes
> > < Content-Type: application/x-debian-package
> > < ETag: "101395"
> > < Last-Modified: Thu, 20 Oct 2016 08:29:09 GMT
> > < Server: downloads
> > < Vary: *
> > < X-Content-Type-Options: nosniff
> > < X-Frame-Options: SAMEORIGIN
> > < X-Xss-Protection: 1; mode=block
> > < Date: Mon, 24 Oct 2016 14:38:19 GMT
> > < Content-Length: 45532350
> > < Connection: keep-alive
> > 
> > [1] https://tools.ietf.org/html/rfc7234#section-4.3.5
> 
> Actually I mixed up SHOULD and MUST. RFC 7231, section 4.3.2 states [1]:
> ...
> The server SHOULD send the same header fields in response to a HEAD 
> request as it would have sent if
> the request had been a GET, except that the payload header fields 
> (Section 3.3) MAY be omitted.
> ...
> 
> So, big G does not follow the recommendation, but does not violate
> the 
> standard.
> 
> [1] https://tools.ietf.org/html/rfc7231#section-4.3.2
> 
> Garri

I've overlooked that the statement applies to header _fields_, not to
the reply code. The full paragraph states:

   The HEAD method is identical to GET except that the server MUST NOT
   send a message body in the response (i.e., the response terminates
   at the end of the header section).  The server SHOULD send the same
   header fields in response to a HEAD request as it would have sent if
   the request had been a GET, except that the payload header fields
   (Section 3.3) MAY be omitted.  This method can be used for obtaining
   metadata about the selected representation without transferring the
   representation data and is often used for testing hypertext links  
   for validity, accessibility, and recent modification.

Nevertheless, the last sentence in the above excerpt uses the word 'can',
as does the following excerpt from section 4.3.5 [1]:

   A response to the HEAD method is identical to what an equivalent
   request made with a GET would have been, except it lacks a body.
   This property of HEAD responses can be used to invalidate or update
   a cached GET response if the more efficient conditional GET request
   mechanism is not available (due to no validators being present in  
   the stored response) or if transmission of the representation body  
   is not desired even if it has changed.

So, a HEAD request _can_ be used as a reliable source for object
revalidation. How should the 'can' be interpreted? RFC 2119 [2] does
not specify that.


AIUI, that exact case leaves two choices:

* Implement something like 'revalidate_using_head [[!]acl]'
* Contact Google and inform them about the behavior

The former is an RFC-compliant way to solve that particular case, but
requires costly development effort and may become useless after some time.
The latter may also break HEAD revalidation, but gives hope that the
GET conditionals may be fixed.

[1] https://tools.ietf.org/html/rfc7234#section-4.3.5
[2] https://tools.ietf.org/html/rfc2119
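For anyone who just wants to keep the installer cached in the meantime, a
possible squid.conf workaround (the pattern and lifetimes below are only an
illustration, not a tested recommendation):

refresh_pattern -i dl\.google\.com/.*\.(deb|msi)$ 10080 80% 10080

Note this only postpones revalidation; once Squid does send the conditional
GET, the server still replies 200 with the full body.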
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] External nat'ed transparent proxy

2016-10-24 Thread Eliezer Croitoru
+1 for Amos' direction.
I am still trying to understand what the difference is between a router and a
switch, since they seem to have the same CPU but are missing one or two
embedded instructions.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Amos Jeffries
Sent: Friday, September 30, 2016 20:36
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] External nat'ed transparent proxy

On 1/10/2016 12:27 a.m., Henry Paulissen wrote:
> Hi Matus,
> 
> 
> On 30-09-16 12:36, Matus UHLAR - fantomas wrote:
>> On 29.09.16 16:39, Henry Paulissen wrote:
>>> In the company I work for we are currently using squid v2 proxies in 
>>> transparent mode to intercept traffic from servers to the outside 
>>> (access control).
>>>
>>> The technical solution for this is roughly as follows:
>>> [server] -> [gateway] -> [firewall]
>>>  |
>>>--- DNAT -
>>>   v
>>> [squid]  -> [gateway] -> [firewall] -> [internet router]
>>
>> this is a bad configuration. The firewall in the path should NOT use 
>> DNAT, since it makes the important part of connection (destination 
>> IP) invisible to squid.
>>
> 
> That is what the HTTP Host header can be used for... for Squid to
> figure out the destination of the request. (Isn't it?)

That is what it was intended for 20 or so years ago. But times change, and
nowadays we have to deal with browsers that can be sent a simple script and
instructed to do all sorts of nasty things in the traffic. If you want the gory
details you can find my previous answers to people asking this same question
repeatedly over the last 5 years.

The TL;DR is: no, that is no longer safe to do and Squid will not do it any
more. Simply don't use DNAT on the port 80 (or 443) packets before they hit the
machine running Squid. Routing is a more powerful feature than most realize;
make use of it.
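A rough sketch of the routing approach, assuming a Linux router and made-up
addresses (10.0.0.2 = Squid box, 10.0.1.0/24 = clients):

# on the router: steer client port-80 traffic towards the Squid box without
# rewriting the destination address
iptables -t mangle -A PREROUTING -s 10.0.1.0/24 -p tcp --dport 80 -j MARK --set-mark 1
ip rule add fwmark 1 table 100
ip route add default via 10.0.0.2 table 100

# on the Squid box only: local redirect to the intercept port
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3129

That keeps the original destination IP visible to Squid, which is the point
being made above.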

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Web Whatsapp, Dropbox... problem

2016-10-24 Thread Eliezer Croitoru
It took me a while, and I hope that I will be able to get the dumps this week.
I started working on an example of ebtables-level traffic redirection towards
the Squid machine.
The scenario should be a good example for embedded devices which operate
mostly at the bridge level rather than at the CPU and iptables level.
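For context, the usual ebtables recipe looks roughly like this (a sketch, not
the actual example being prepared; in the broute table DROP means "route
instead of bridge", so the frame reaches the local IP stack and the normal
iptables rules feeding Squid):

ebtables -t broute -A BROUTING -p IPv4 --ip-protocol tcp --ip-destination-port 80 \
  -j redirect --redirect-target DROP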

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Amos Jeffries
Sent: Thursday, September 29, 2016 07:16
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Web Whatsapp, Dropbox... problem

On 29/09/2016 11:27 a.m., Eliezer Croitoru wrote:
> I am also testing this issue and I have the next settings:
> acl DiscoverSNIHost at_step SslBump1
> acl NoSSLIntercept ssl::server_name_regex -i "/etc/squid/url.nobump"
> ssl_bump splice NoSSLIntercept
> ssl_bump peek DiscoverSNIHost
> ssl_bump bump all
> sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/squid/ssl -M 4MB
> sslcrtd_children 10
> read_ahead_gap 64 MB
> sslproxy_cert_error allow all
> tls_outgoing_options flags=DONT_VERIFY_PEER
> acl foreignProtocol squid_error ERR_PROTOCOL_UNKNOWN ERR_TOO_BIG
> on_unsupported_protocol tunnel foreignProtocol
> 
> (Which is not recommended for production as is!!!)
> 
> Now the "/etc/squid/url.nobump" file contains:
> # WU (Squid 3.5.x and above with SSL Bump)
> # Only these sites must be spliced.
> update\.microsoft\.com$
> update\.microsoft\.com\.akadns\.net$
> v10\.vortex\-win\.data\.microsoft.com$
> settings\-win\.data\.microsoft\.com$
> # The next are trusted SKYPE addresses
> a\.config\.skype\.com$
> pipe\.skype\.com$
> mail\.rimon\.net\.il$
> w[0-9]+\.web\.whatsapp\.com$
> \.web\.whatsapp\.com$
> web\.whatsapp\.com$
> ## END OF NO BUMP DOMAINS
> 
> And squid 4.0.14 doesn't tunnel the requests.
> The above is with:
> http_port 3128
> http_port 13128 intercept
> https_port 13129 intercept ssl-bump \
>cert=/etc/squid/ssl_cert/myCA.pem \
>  generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
> 
> On the 443 intercept port.
> Access log output:
> 1475100891.636 000445 192.168.10.112 NONE/200 0 CONNECT 
> 158.85.224.178:443 - ORIGINAL_DST/158.85.224.178 - 52:54:00:bc:9f:73
> 1475100908.469 000223 192.168.10.112 TCP_MISS/200 508 GET 
> https://web.whatsapp.com/status.json - ORIGINAL_DST/31.13.90.51 
> text/json 52:54:00:bc:9f:73
> 1475100952.107 000445 192.168.10.112 NONE/200 0 CONNECT 
> 158.85.224.178:443 - ORIGINAL_DST/158.85.224.178 - 52:54:00:bc:9f:73
> 1475100968.832 000191 192.168.10.112 NONE/200 0 CONNECT 
> 216.58.214.110:443 - ORIGINAL_DST/216.58.214.110 - 52:54:00:bc:9f:73
> 1475100968.984 000199 192.168.10.112 NONE/200 0 CONNECT 
> 172.217.22.14:443 - ORIGINAL_DST/172.217.22.14 - 52:54:00:bc:9f:73
> 1475101012.572 000447 192.168.10.112 NONE/200 0 CONNECT 
> 158.85.224.178:443 - ORIGINAL_DST/158.85.224.178 - 52:54:00:bc:9f:73
> 1475101033.232 000621 192.168.10.112 NONE/200 0 CONNECT 
> 31.13.66.49:443 - ORIGINAL_DST/31.13.66.49 - 52:54:00:bc:9f:73
> 1475101034.470 001224 192.168.10.112 TCP_MISS/200 512 GET 
> https://web.whatsapp.com/status.json - ORIGINAL_DST/31.13.66.49 
> text/json 52:54:00:bc:9f:73
> 1475101073.039 000446 192.168.10.112 NONE/200 0 CONNECT 
> 158.85.224.178:443 - ORIGINAL_DST/158.85.224.178 - 52:54:00:bc:9f:73
> 1475101133.502 000448 192.168.10.112 NONE/200 0 CONNECT 
> 158.85.224.178:443 - ORIGINAL_DST/158.85.224.178 - 52:54:00:bc:9f:73
> 
> Now the issue is more than just this, since I cannot see any logs about the
> websocket connections, i.e. to the domains:
> w3.web.whatsapp.com
> 

They might be in the ones with a raw IP in the NONE/200 lines, since
server_name_regex matches against the TLS certificate details, which do not
necessarily get logged as a URL domain name when a splice is done.

The SNI _should_ be made the CONNECT URI domain. But when it matches the server
cert subjectAltName, that is definitely not a client-requested value.
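(If it helps, the SNI can also be logged explicitly; a sketch assuming Squid
3.5 or later, with a made-up logformat name:

logformat snifmt %ts.%03tu %>a %Ss/%03>Hs %rm %ru sni="%ssl::>sni"
access_log /var/log/squid/access_sni.log snifmt)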


> and couple other similar.
> 
> What I did until now is to bypass specific domains IP addresses using 
> ipset+iptables.
> I believe that squid can do much better then it's doing now.

Can you get a packet dump to see what its TLS handshake details actually are,
on both the client and server sides of Squid?
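For example (a sketch only; interfaces, addresses and file names are
placeholders):

# client-facing side
tcpdump -i eth0 -s 0 -w client-side.pcap 'host 192.168.10.112 and tcp port 443'
# server-facing side
tcpdump -i eth1 -s 0 -w server-side.pcap 'host 31.13.90.51 and tcp port 443'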

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid with ASR9001

2016-10-24 Thread Yuri Voinov

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
 
No, Juniper is not my area ;)

It is impossible to know everything :)



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid with ASR9001

2016-10-24 Thread Yuri Voinov

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
 
Compared with PBR - definitely.

If the OS TCP stack supports bridging - exactly.
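(A minimal sketch of the bridge side on Linux, for illustration only -
interface names are made up:

ip link add name br0 type bridge
ip link set eth1 master br0
ip link set eth2 master br0
ip link set eth1 up; ip link set eth2 up; ip link set br0 up

Frames pulled out of the bridge path, e.g. by an ebtables broute rule, can then
be handed to Squid with ordinary iptables REDIRECT/TPROXY rules.)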

25.10.2016 3:59, Eliezer Croitoru wrote:
> So what you are illustrating is that if we will handle the connection
> interception using bridge tables it would be much more efficient than
> policy-based routing.
> I believe it's very simple to implement in Linux.
>
> Eliezer
>
> 
> Eliezer Croitoru 
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
> 
>
> From: Yuri Voinov [mailto:yvoi...@gmail.com]
> Sent: Monday, October 24, 2016 22:01
> To: Eliezer Croitoru 
> Cc: 'Garth van Sittert | BitCo' ;
> squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Squid with ASR9001
>
>
> Well, if we're talking about squid-based appliances.
>
> http://wiki.squid-cache.org/ConfigExamples/Intercept/CiscoIOSv15Wccp2
>
> In this article descrived approx. half-year experimental experience with
> various LAN topologies, and Cisco devices.
>
> More common:
>
>
http://www.cisco.com/c/en/us/td/docs/ios/12_2/configfun/configuration/guide
> /ffun_c/fcf018.html
>
https://supportforums.cisco.com/document/143961/understanding-wccp-redirect
> ion-and-assignment-methods-waas
>
> Cisco has not best-in-the-world documentation, yes, but everything depends
> on an understanding of network protocols and basic architecture.
>

Re: [squid-users] Squid with ASR9001

2016-10-24 Thread Yuri Voinov

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
 
Ah - in Squid's wiki, don't pay attention to Variant III.

This is a minimalistic topology; the main goal was as few hardware units
as possible (for cost minimization).

Also, the VRF variant is still in progress. We can't solve the DNS proxying
task as required yet.

25.10.2016 0:44, Eliezer Croitoru wrote:
> Well I do agree on most of the things but it seems that CPU is missing in
> some devices and there for a simpler protocol is better but…. CPU…
> Admins in many cases do not use their own to understand the complexity but
> from what I do see in the jobs market employers expect the unexpected.
> Or if to be more accurate: They expect a mage which knows and understand
> every single protocol language and piece of hardware.
>
> Can you gather me what ever documentation on the WCCP protocol?
> I want to see how simple it would be to implement the same concepts
with an
> HTTP\tcp interface.
>
> Eliezer
>
> 
> Eliezer Croitoru 
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
> 
>
> From: Yuri Voinov [mailto:yvoi...@gmail.com]
> Sent: Monday, October 24, 2016 21:07
> To: Eliezer Croitoru ; 'Garth van Sittert | BitCo'
> ; squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Squid with ASR9001
>
>
> No.
>
> 24.10.2016 23:40, Eliezer Croitoru wrote:
> > And why would you want this
>   exactly?
>
>   > The most simple thing is to use routing policy and to monitor
>   the proxy in
>
>   > a much higher level then WCCP.
> Based on my personal experience with WCCP (over 6 years). PBR is VERY
> router's CPU consumpted.
> WCCP - is not (L2, not GRE. GRE performs on CPU, L2 on control-plane and
> hardware-accelerated).
>
> However, using edge router for WCCP is not so good idea by another reason.
> It breaks good network architecture in most cases. I'm not CCA, but
ever for
> me it's obvious.
>
> So, underlying aggregations switches is more appropriate target for WCCP,
> because of they can be uses L2 WCCP - which is extremely fast.
>
> > For example fetch a web page or
>   a statistics page every 10 seconds.
>
>   > It’s considered pretty right in the industry.
>
>   > For routers it’s a whole another story but for a rock solid
>   system I do not
>
>   > believe WCCP is a must.
> Depending of router. Branch router must have. Just take a look on whole
> Cisco's router's range. Just for interest.
>
> > Any juniper and Cisco + others
>   these days do not rely on WCCP since it’s
>
>   > considered a hassle to maintain.
> Cats delicious. You just do not know how to cook them :)
>
> WCCP is a very simple protocol. While there may be poorly documented.
There
> is another problem - very few people well versed in networking
technologies,
> few details delves into what makes. The vast majority simply copy-paste
> configs without a single thought in his head, not bothering to understand.
>
> What is there to maintain? Just configure it once and sit on the ass
> straight.
>
>
>   > Eliezer
>
>
>
>   > 
>
>   > Eliezer Croitoru 
>  
>
>   > Linux System Administrator
>
>   > Mobile: +972-5-28704261
>
>   > Email: elie...@ngtech.co.il 
>
>
>
>
>
>   > From: squid-users
>   [mailto:squid-users-boun...@lists.squid-cache.org] On
>
>   > Behalf Of Yuri
>
>   > Sent: Monday, October 24, 2016 14:06
>
>   > To: Garth van Sittert | BitCo 
>  ;
>
>   > squid-users@lists.squid-cache.org
> 
>
>   > Subject: Re: [squid-users] Squid with ASR9001
>
>
>
>   > Ha, it seems ASR9000 really does not support WCCP exactly.
>   You right.
>
>
>
>   > WCCP supported on Nexus, on ASR1000... So, your router only
>   can use PBR or
>
>   > analoquie.
>
>
>
>   > The only idea is to buy 3750 as aggregation switch, config
>   WCCP on it and
>
>   > connect to your ASR by fiber trunk.
>
>   > 24.10.2016 16:30, Garth van Sittert | BitCo пишет:
>
>
>
>   > By Cisco employee - “Correct, there is no WCCP and no plans
>   for it
>
>   > either... :(”
>
>
>   https://supportforums.cisco.com/discussion/12227051/ios-xr-and-wccp
>
>
>
>   > WCCP supported platforms –
>
>
>
>
>
https://supportforums.cisco.com/document/133201/wccp-platform-support-overv
> i
>
>   > ew
>
>
>
>   > Our ASR9001 has no commands that support wccp anywhere…
>
>
>
>
>
>
>
>
>
>
>
>   > Garth van Sittert | Chief Executive Officer 
>
>   > (BSC Physics & Computer Science)
>
>   > Tel: 087 135  Ext: 201
>
>   > ga...@bitco.co.za 
>    
>
>   > bitco.co.za  
>
>
>
>
>   > From: Yuri [mailto:yvoi...@gmail.com]
>

Re: [squid-users] Squid with ASR9001

2016-10-24 Thread Yuri Voinov

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
 
Well, if we're talking about squid-based appliances.

http://wiki.squid-cache.org/ConfigExamples/Intercept/CiscoIOSv15Wccp2

This article describes approximately half a year of experimental experience
with various LAN topologies and Cisco devices.

More common:

http://www.cisco.com/c/en/us/td/docs/ios/12_2/configfun/configuration/guide/ffun_c/fcf018.html
https://supportforums.cisco.com/document/143961/understanding-wccp-redirection-and-assignment-methods-waas

Cisco does not have the best documentation in the world, yes, but everything
depends on an understanding of network protocols and basic architecture.
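For reference, a minimal L2 WCCP sketch of both sides (addresses and interface
names are examples, not taken from the article):

! Cisco IOS side
ip wccp web-cache
interface GigabitEthernet0/1
 ip wccp web-cache redirect in

# squid.conf side
wccp2_router 192.168.1.1
wccp2_forwarding_method l2
wccp2_return_method l2
wccp2_service standard 0
http_port 3129 intercept

The redirected packets still need the usual local iptables REDIRECT rule on the
Squid box.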

25.10.2016 0:44, Eliezer Croitoru wrote:
> Well I do agree on most of the things but it seems that CPU is missing in
> some devices and there for a simpler protocol is better but…. CPU…
Yes. A router has a CPU. :) Not only ASICs. :) PBR is a problem, because
EVERY policy/ACL match is handled on the CPU.

This brings us to the other side - the rules / policies must be
carefully optimized - something too few people do, until the router
chokes on CPU overload.

> Admins in many cases do not use their own to understand the complexity but
> from what I do see in the jobs market employers expect the unexpected.
Admins, in most cases, understand nothing and do not bother trying to
grasp and understand more deeply than the first three to five seconds. ;)

Present company excluded, of course. :)

> Or if to be more accurate: They expect a mage which knows and understand
> every single protocol language and piece of hardware.
>
> Can you gather me what ever documentation on the WCCP protocol?
> I want to see how simple it would be to implement the same concepts
with an
> HTTP\tcp interface.
That's really all there is. The main thing is to understand how the network
works at L2 and L3 in the OSI model, plus a bit of network hardware knowledge.

>
> Eliezer
>
> 
> Eliezer Croitoru 
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
> 
>
> From: Yuri Voinov [mailto:yvoi...@gmail.com]
> Sent: Monday, October 24, 2016 21:07
> To: Eliezer Croitoru ; 'Garth van Sittert | BitCo'
> ; squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Squid with ASR9001
>
>
> No.
>
> 24.10.2016 23:40, Eliezer Croitoru wrote:
> > And why would you want this
>   exactly?
>
>   > The most simple thing is to use routing policy and to monitor
>   the proxy in
>
>   > a much higher level then WCCP.
> Based on my personal experience with WCCP (over 6 years). PBR is VERY
> router's CPU consumpted.
> WCCP - is not (L2, not GRE. GRE performs on CPU, L2 on control-plane and
> hardware-accelerated).
>
> However, using edge router for WCCP is not so good idea by another reason.
> It breaks good network architecture in most cases. I'm not CCA, but
ever for
> me it's obvious.
>
> So, underlying aggregations switches is more appropriate target for WCCP,
> because of they can be uses L2 WCCP - which is extremely fast.
>
> > For example fetch a web page or
>   a statistics page every 10 seconds.
>
>   > It’s considered pretty right in the industry.
>
>   > For routers it’s a whole another story but for a rock solid
>   system I do not
>
>   > believe WCCP is a must.
> Depending of router. Branch router must have. Just take a look on whole
> Cisco's router's range. Just for interest.
>
> > Any juniper and Cisco + others
>   these days do not rely on WCCP since it’s
>
>   > considered a hassle to maintain.
> Cats delicious. You just do not know how to cook them :)
>
> WCCP is a very simple protocol. While there may be poorly documented.
There
> is another problem - very few people well versed in networking
technologies,
> few details delves into what makes. The vast majority simply copy-paste
> configs without a single thought in his head, not bothering to understand.
>
> What is there to maintain? Just configure it once and sit on the ass
> straight.

Re: [squid-users] sourcehash load balance

2016-10-24 Thread André Janna

On 23/10/2016 10:35 a.m., Amos Jeffries wrote:


On 22/10/2016 12:21 a.m., André Janna wrote:

I set up a Squid proxy that forwards all requests to 2 parent caches.
I'm using Squid version 3.5.19.
My goal is that multiple connections from a client to a server should be
forwarded to the same parent, so that the server sees all requests coming
from the same IP address.

I'm using the following configuration:
cache_peer squid1 parent 3128 0 no-query sourcehash
cache_peer squid2 parent 3128 0 no-query sourcehash
never_direct allow all

Looking at access.log, some requests are tagged as CLOSEST_PARENT instead
of SOURCEHASH_PARENT, so it seems that Squid is not always using the source
hash rule to forward requests to the parent caches.
For instance:
1477046954.047   3882 10.11.2.4 TCP_TUNNEL/200 21935 CONNECT
sso.cisco.com:443 - SOURCEHASH_PARENT/10.0.33.12 -
1477046968.056 21 10.11.2.4 TCP_MISS/200 1012 POST
http://ocsp.digicert.com/ - CLOSEST_PARENT/10.0.33.13
application/ocsp-response
1477047782.038 22 10.11.2.4 TCP_MISS/204 307 GET
http://clients1.google.com/generate_204 - SOURCEHASH_PARENT/10.0.33.12 -
1477047782.045    181 10.11.2.4 TCP_MISS/200 745 GET
http://tags.bluekai.com/site/2964? - CLOSEST_PARENT/10.0.33.13 image/gif

So requests from the same client are not sent to the same parent cache.
How can I force Squid to always use source hash parent selection method?

Check your setting for nonhierarchical_direct. It should be 'off'. The
default is 'on'.
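i.e. something like this (just restating the combination being discussed, not
a tested config):

cache_peer squid1 parent 3128 0 no-query sourcehash
cache_peer squid2 parent 3128 0 no-query sourcehash
never_direct allow all
nonhierarchical_direct off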

Amos




Hi Amos,
I've just added "nonhierarchical_direct off" to my setting but there are 
still requests that are forwarded using a parent selection method 
different from "sourcehash".

For instance:
1477331305.998 13 10.11.0.12 TCP_MISS/200 4267 GET 
http://www.cisco.com/etc/designs/cdc/dmr/icons/arrows-grey.png - 
SOURCEHASH_PARENT/10.0.33.12 image/png
1477331306.948 10 10.11.0.12 TCP_MISS/304 609 GET 
http://platform.twitter.com/widgets.js - CLOSEST_PARENT/10.0.33.13 -


Regards,
Andre

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid with ASR9001

2016-10-24 Thread Yuri Voinov

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
 
No.

24.10.2016 23:40, Eliezer Croitoru wrote:
> And why would you want this exactly?
> The most simple thing is to use routing policy and to monitor the proxy in
> a much higher level then WCCP.
Based on my personal experience with WCCP (over 6 years): PBR is VERY
CPU-consuming for the router.
WCCP is not (L2, not GRE; GRE is handled on the CPU, L2 on the control plane,
hardware-accelerated).

However, using the edge router for WCCP is not such a good idea for another
reason: it breaks good network architecture in most cases. I'm not a CCA,
but even to me it's obvious.

So, the underlying aggregation switches are a more appropriate target for
WCCP, because they can use L2 WCCP - which is extremely fast.

> For example fetch a web page or a statistics page every 10 seconds.
> It's considered pretty right in the industry.
> For routers it's a whole another story but for a rock solid system I do not
> believe WCCP is a must.
Depends on the router. A branch router must have it. Just take a look at
Cisco's whole router range, just for interest.

> Any juniper and Cisco + others these days do not rely on WCCP since it's
> considered a hassle to maintain.
Cats are delicious. You just do not know how to cook them :)

WCCP is a very simple protocol, though it may be poorly documented. There is
another problem - very few people are well versed in networking technologies,
and few delve into the details of how things work. The vast majority simply
copy-paste configs without a single thought, not bothering to understand.

What is there to maintain? Just configure it once and sit tight.
>
> Eliezer
>
> 
> Eliezer Croitoru 
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
> 
>
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
> Behalf Of Yuri
> Sent: Monday, October 24, 2016 14:06
> To: Garth van Sittert | BitCo ;
> squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Squid with ASR9001
>
> Ha, it seems the ASR9000 really does not support WCCP exactly. You're right.
>
> WCCP is supported on Nexus, on ASR1000... So, your router can only use PBR or
> an analogue.
>
> The only idea is to buy a 3750 as an aggregation switch, configure WCCP on it
> and connect it to your ASR by fiber trunk.
> 24.10.2016 16:30, Garth van Sittert | BitCo wrote:
> 
> By Cisco employee - “Correct, there is no WCCP and no plans for it
> either... :(”
> https://supportforums.cisco.com/discussion/12227051/ios-xr-and-wccp
> 
> WCCP supported platforms –
>
>
https://supportforums.cisco.com/document/133201/wccp-platform-support-overvi
> ew
> 
> Our ASR9001 has no commands that support wccp anywhere…
> 
> 
> 
> 
> 
> Garth van Sittert | Chief Executive Officer 
> (BSC Physics & Computer Science)
> Tel: 087 135  Ext: 201
> ga...@bitco.co.za  
> bitco.co.za 
> 
> 
> From: Yuri [mailto:yvoi...@gmail.com]
> Sent: Monday, 24 October 2016 12:12 PM
> To: Garth van Sittert | BitCo 
>  ; squid-users@lists.squid-cache.org
> 
> Subject: Re: [squid-users] Squid with ASR9001
> 
> 
> 
> 24.10.2016 13:16, Garth van Sittert | BitCo wrote:
> > Yes, it looks like all of the ASR9000 range which makes use of IOS XR no
> > longer supports WCCP.
> Please, provide a proof link from Cisco.
>
>
> > Policy Based Routing has been replaced by ACL Based Forwarding or ABF.
> So? This is a terminology difference, if any.
>
>
> 
> 
> 
> 
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
> Behalf Of Yuri Voinov
> Sent: Sunday, 23 October 2016 9:35 PM
> To: squid-users@lists.squid-cache.org
> 
> Subject: Re: [squid-users] Squid with ASR9001
> 
>
>
>
> 23.10.2016 23:16, Garth van Sittert | BitCo wrote:
> > Good day all
> >
> > Has anyone had any experience setting up Squid with any IOS XR Cisco
> > routers?  The Cisco ASR9000 range doesn't support WCCP and I cannot
> > find any examples online.
> Seriously, the entire range?
>
> Who said that it does not support WCCP? It is obligated to support it, if
> only because it is not home dish soap. That's when Cisco writes in the
> documentation that it is not supported - then we cry.
>
> > I have also found quotes regarding PBR on the ASR9000… “With IOS XR
> > traditional policy-based routing (PBR) is history”
> Is this some crazy forum talk? PBR is fundamental functionality for a
> router, especially for a router at this level. I find it somewhat difficult
> to imagine a company that completely cuts down its business by releasing a
> device incompatible with everything. That is only possible in OpenSource,
> but not in a huge IT business company. AFAIK.
>
> > I plan to use this on our 10Gbps ISP traffic to improve
>

Re: [squid-users] skype connection problem

2016-10-24 Thread Eliezer Croitoru
Just to understand the scenario:
You have, let's say, 1 client on network 192.168.0.0/24.
You have a proxy at 192.168.0.200.
The client doesn't have a gateway in the network, i.e. it cannot run DNS queries
or pings to the Internet.
The client must define the proxy in order to access any Internet resources.
Right?
The proxy has access to DNS and to the IP stack, NATed or not.

I believe it would be pretty simple to reproduce this so another party can
verify the issue.

Let me know if I got the situation right.

Eliezer


Eliezer Croitoru  
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il
 

From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
Behalf Of N V
Sent: Monday, October 24, 2016 01:11
To: squid-us...@squid-cache.org
Subject: [squid-users] skype connection problem

hi there,
I've had problems with Windows Skype clients where the only Internet
connection is through Squid. The clients can log in successfully, but when they
make a call, it hangs after 12 seconds.

I checked the client connections and see that the client attempts to connect
directly even if the proxy is properly configured.

My Squid version is 3.5.12.
The Skype clients have the latest version available.
Does anyone have the same issues?
Any ideas?

thanks in advance!
Nicolás.

PS: sorry about my English
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] skype connection problem

2016-10-24 Thread Yuri Voinov

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
 


24.10.2016 22:28, Nicolas Valera wrote:
>
>
> On 10/24/2016 01:21 PM, Yuri Voinov wrote:
>>
>
> 24.10.2016 22:19, Nicolas Valera wrote:
> >>> Hi Yuri, thanks for the answer!
> >>>
> >>> we don't have the squid in transparent mode in this network.
> So, you route all traffic to proxy box?
> > Yes, clients do not have direct Internet access
Here is the root of the problem. Skype does not always use HTTP/HTTPS as
transport. Just let Skype connections bypass the proxy and it will work.

In a transparent environment, non-HTTP/HTTPS connections are not routed to the proxy.
>
> >>> the squid configuration is very basic. here is the conf:
> >>>
> >>>
-
> >>> http_port 1280 connection-auth=off
> >>> forwarded_for delete
> >>> httpd_suppress_version_string on
> >>> client_persistent_connections off
> >>>
> >>> cache_mem 16 GB
> >>> maximum_object_size_in_memory 8 MB
> >>>
> >>> url_rewrite_program /usr/bin/squidGuard
> >>> url_rewrite_children 10
> >>> url_rewrite_access allow all
> >>>
> >>> acl numeric_IPs dstdom_regex
>
^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9a-f]+)?:([0-9a-f:]+)?:([0-9a-f]+|0-9\.]+)?\])):443
> >>> acl Skype_UA browser ^skype
> >>>
> >>> acl SSL_ports port 443 563 873 1445 2083 8000 8088 10017 8443 5443
> 7443 50001
> >>> acl Safe_ports port 80 82 88 182 210 554 591 777 873 1001 21 443 70
> 280 488
> >>> acl Safe_ports port 1025-65535  # unregistered ports
> >>>
> >>> acl CONNECT method CONNECT
> >>> acl safe_method method GET
> >>> acl safe_method method PUT
> >>> acl safe_method method POST
> >>> acl safe_method method HEAD
> >>> acl safe_method method CONNECT
> >>> acl safe_method method OPTIONS
> >>> acl safe_method method PROPFIND
> >>> acl safe_method method REPORT
> >>> acl safe_method method MERGE
> >>> acl safe_method method MKACTIVITY
> >>> acl safe_method method CHECKOUT
> >>>
> >>> http_access deny !Safe_ports
> >>> http_access allow CONNECT localnet numeric_IPS Skype_UA
> >>> http_access deny CONNECT !SSL_ports
> >>> http_access deny !safe_method
> >>> http_access allow localnet
> >>> http_access allow localhost
> >>> http_access deny all
> >>>
> >>> refresh_pattern ^ftp:144020%10080
> >>> refresh_pattern ^gopher:14400%1440
> >>> refresh_pattern -i (/cgi-bin/|\?) 00%0
> >>> refresh_pattern Packages\.tar$ 0   20%4320 refresh-ims
> ignore-no-cache
> >>> refresh_pattern Packages\.bz2$ 0   20%4320 refresh-ims
> ignore-no-cache
> >>> refresh_pattern Sources\.bz2$  0   20%4320 refresh-ims
> ignore-no-cache
> >>> refresh_pattern Release\.gpg$  0   20%4320 refresh-ims
> >>> refresh_pattern Release$   0   20%4320 refresh-ims
> >>> refresh_pattern -i
> microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
> 43200 reload-into-ims ignore-no-cache
> >>> refresh_pattern -i
> windowsupdate.com/.*\.(esd|cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)
> 4320 80% 43200 reload-into-ims ignore-no-cache
> >>> refresh_pattern -i
> windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
> 43200 reload-into-ims ignore-no-cache
> >>> refresh_pattern -i
> live.net/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200
> reload-into-ims ignore-no-cache
> >>> refresh_pattern .020%4320
> >>>
> >>>
-
> >>>
> >>> please, can you send me your settings for ssl bump?
> Copy-n-paste unknown configs is very bad idea, Nicolas.
>
> > sorry about that!
> > the only way to make skype works through squid is with ssl bump?
No. Just permit Skype TCP traffic to bypass the proxy.
>
> >>>
> >>> thanks again!
> >>> nicolás.
> >>>
> >>> On 10/23/2016 07:28 PM, Yuri Voinov wrote:
> 
> >>>
> >>>
> >>> 24.10.2016 4:11, N V wrote:
> >>> >>> hi there,
> >>> >>> i've had problems with windows skype clients with the only
internet
> >>> connection is through squid. the clients can login successful but when
> >>> they make a call, it hangs after 12 secconds.
> >>> >>>
> >>> >>> I checked the client connections and see that attempts to connect
> >>> directly even if the proxy is properly configured.
> >>> Exactly, Skype does not use HTTP to calls. So, why you expect it calls
> >>> should goes via proxy?
> >>> >>>
> >>> >>> my squid version is 3.5.12
> >>> >>> the skype clients have the last version available.
> >>> >>> does anyone have the same issues?
> >>> >>> any idea?
> >>> With properly configured ssl bump and transparent proxy we have
not any
> >>> problems with skype. I don't know your details.
> >>> >>>
> >>> >>> thanks in advance!
> >>> >>> Nicolás.
> >>> >>>
> >>> >>> pd. sorry about my english
> >>> >>>
> >>> >>>
> >>> >>>
> >>> >>> ___
> >>> >>> squid-users mailing list
> >>> >>> squid-users@lists.squid-cache.org
> >>> >>> http://lists.squid-cache.org/listinfo/squid-users
> >>>
> 
>

Re: [squid-users] skype connection problem

2016-10-24 Thread Nicolas Valera



On 10/24/2016 01:21 PM, Yuri Voinov wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


24.10.2016 22:19, Nicolas Valera wrote:

Hi Yuri, thanks for the answer!

we don't have the squid in transparent mode in this network.

So, you route all traffic to proxy box?

Yes, clients do not have direct Internet access



the squid configuration is very basic. here is the conf:

-
http_port 1280 connection-auth=off
forwarded_for delete
httpd_suppress_version_string on
client_persistent_connections off

cache_mem 16 GB
maximum_object_size_in_memory 8 MB

url_rewrite_program /usr/bin/squidGuard
url_rewrite_children 10
url_rewrite_access allow all

acl numeric_IPs dstdom_regex

^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9a-f]+)?:([0-9a-f:]+)?:([0-9a-f]+|0-9\.]+)?\])):443

acl Skype_UA browser ^skype

acl SSL_ports port 443 563 873 1445 2083 8000 8088 10017 8443 5443

7443 50001

acl Safe_ports port 80 82 88 182 210 554 591 777 873 1001 21 443 70

280 488

acl Safe_ports port 1025-65535  # unregistered ports

acl CONNECT method CONNECT
acl safe_method method GET
acl safe_method method PUT
acl safe_method method POST
acl safe_method method HEAD
acl safe_method method CONNECT
acl safe_method method OPTIONS
acl safe_method method PROPFIND
acl safe_method method REPORT
acl safe_method method MERGE
acl safe_method method MKACTIVITY
acl safe_method method CHECKOUT

http_access deny !Safe_ports
http_access allow CONNECT localnet numeric_IPS Skype_UA
http_access deny CONNECT !SSL_ports
http_access deny !safe_method
http_access allow localnet
http_access allow localhost
http_access deny all

refresh_pattern ^ftp:144020%10080
refresh_pattern ^gopher:14400%1440
refresh_pattern -i (/cgi-bin/|\?) 00%0
refresh_pattern Packages\.tar$ 0   20%4320 refresh-ims

ignore-no-cache

refresh_pattern Packages\.bz2$ 0   20%4320 refresh-ims

ignore-no-cache

refresh_pattern Sources\.bz2$  0   20%4320 refresh-ims

ignore-no-cache

refresh_pattern Release\.gpg$  0   20%4320 refresh-ims
refresh_pattern Release$   0   20%4320 refresh-ims
refresh_pattern -i

microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
43200 reload-into-ims ignore-no-cache

refresh_pattern -i

windowsupdate.com/.*\.(esd|cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)
4320 80% 43200 reload-into-ims ignore-no-cache

refresh_pattern -i

windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
43200 reload-into-ims ignore-no-cache

refresh_pattern -i

live.net/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200
reload-into-ims ignore-no-cache

refresh_pattern .020%4320

-

please, can you send me your settings for ssl bump?

Copy-n-paste unknown configs is very bad idea, Nicolas.


Sorry about that!
Is the only way to make Skype work through Squid with SSL bump?



thanks again!
nicolás.

On 10/23/2016 07:28 PM, Yuri Voinov wrote:





24.10.2016 4:11, N V wrote:
>>> hi there,
>>> i've had problems with windows skype clients with the only internet
connection is through squid. the clients can login successful but when
they make a call, it hangs after 12 secconds.
>>>
>>> I checked the client connections and see that attempts to connect
directly even if the proxy is properly configured.
Exactly, Skype does not use HTTP for calls. So why do you expect its calls
to go via the proxy?
>>>
>>> my squid version is 3.5.12
>>> the skype clients have the last version available.
>>> does anyone have the same issues?
>>> any idea?
With a properly configured SSL bump and transparent proxy we don't have any
problems with Skype. I don't know your details.
>>>
>>> thanks in advance!
>>> Nicolás.
>>>
>>> pd. sorry about my english
>>>
>>>
>>>
>>> ___
>>> squid-users mailing list
>>> squid-users@lists.squid-cache.org
>>> http://lists.squid-cache.org/listinfo/squid-users





___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


- --
Cats - delicious. You just do not know how to cook them.



___
squid-users mailing li

Re: [squid-users] skype connection problem

2016-10-24 Thread Yuri Voinov

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
 

24.10.2016 22:19, Nicolas Valera wrote:
> Hi Yuri, thanks for the answer!
>
> we don't have the squid in transparent mode in this network.
So, you route all traffic to the proxy box?

> the squid configuration is very basic. here is the conf:
>
> -
> http_port 1280 connection-auth=off
> forwarded_for delete
> httpd_suppress_version_string on
> client_persistent_connections off
>
> cache_mem 16 GB
> maximum_object_size_in_memory 8 MB
>
> url_rewrite_program /usr/bin/squidGuard
> url_rewrite_children 10
> url_rewrite_access allow all
>
> acl numeric_IPs dstdom_regex
^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9a-f]+)?:([0-9a-f:]+)?:([0-9a-f]+|0-9\.]+)?\])):443
> acl Skype_UA browser ^skype
>
> acl SSL_ports port 443 563 873 1445 2083 8000 8088 10017 8443 5443
7443 50001
> acl Safe_ports port 80 82 88 182 210 554 591 777 873 1001 21 443 70
280 488
> acl Safe_ports port 1025-65535  # unregistered ports
>
> acl CONNECT method CONNECT
> acl safe_method method GET
> acl safe_method method PUT
> acl safe_method method POST
> acl safe_method method HEAD
> acl safe_method method CONNECT
> acl safe_method method OPTIONS
> acl safe_method method PROPFIND
> acl safe_method method REPORT
> acl safe_method method MERGE
> acl safe_method method MKACTIVITY
> acl safe_method method CHECKOUT
>
> http_access deny !Safe_ports
> http_access allow CONNECT localnet numeric_IPS Skype_UA
> http_access deny CONNECT !SSL_ports
> http_access deny !safe_method
> http_access allow localnet
> http_access allow localhost
> http_access deny all
>
> refresh_pattern ^ftp:144020%10080
> refresh_pattern ^gopher:14400%1440
> refresh_pattern -i (/cgi-bin/|\?) 00%0
> refresh_pattern Packages\.tar$ 0   20%4320 refresh-ims
ignore-no-cache
> refresh_pattern Packages\.bz2$ 0   20%4320 refresh-ims
ignore-no-cache
> refresh_pattern Sources\.bz2$  0   20%4320 refresh-ims
ignore-no-cache
> refresh_pattern Release\.gpg$  0   20%4320 refresh-ims
> refresh_pattern Release$   0   20%4320 refresh-ims
> refresh_pattern -i
microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
43200 reload-into-ims ignore-no-cache
> refresh_pattern -i
windowsupdate.com/.*\.(esd|cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)
4320 80% 43200 reload-into-ims ignore-no-cache
> refresh_pattern -i
windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
43200 reload-into-ims ignore-no-cache
> refresh_pattern -i
live.net/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200
reload-into-ims ignore-no-cache
> refresh_pattern .020%4320
>
> -
>
> please, can you send me your settings for ssl bump?
Copy-pasting unknown configs is a very bad idea, Nicolas.

>
> thanks again!
> nicolás.
>
> On 10/23/2016 07:28 PM, Yuri Voinov wrote:
>>
>
>
> 24.10.2016 4:11, N V wrote:
> >>> hi there,
> >>> i've had problems with windows skype clients with the only internet
> connection is through squid. the clients can login successful but when
> they make a call, it hangs after 12 secconds.
> >>>
> >>> I checked the client connections and see that attempts to connect
> directly even if the proxy is properly configured.
> Exactly, Skype does not use HTTP to calls. So, why you expect it calls
> should goes via proxy?
> >>>
> >>> my squid version is 3.5.12
> >>> the skype clients have the last version available.
> >>> does anyone have the same issues?
> >>> any idea?
> With properly configured ssl bump and transparent proxy we have not any
> problems with skype. I don't know your details.
> >>>
> >>> thanks in advance!
> >>> Nicolás.
> >>>
> >>> pd. sorry about my english
> >>>
> >>>
> >>>
> >>> ___
> >>> squid-users mailing list
> >>> squid-users@lists.squid-cache.org
> >>> http://lists.squid-cache.org/listinfo/squid-users
>
>>
>>
>>
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

- -- 
Cats - delicious. You just do not know how to cook them.




Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Yuri Voinov

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
 


24.10.2016 22:05, Garri Djavadyan wrote:
> On 2016-10-24 19:40, Garri Djavadyan wrote:
>> So, the big G sends 304 only to HEAD requests, although it is a
>> violation [1], AIUI:
>>
>> curl --head -H 'If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT' -H
>> 'If-None-Match: "101395"' http://dl.google.com/linux/direct/google-chro
>> me-stable_current_amd64.deb
>> HTTP/1.1 304 Not Modified
>> ETag: "101395"
>> Server: downloads
>> Vary: *
>> X-Content-Type-Options: nosniff
>> X-Frame-Options: SAMEORIGIN
>> X-Xss-Protection: 1; mode=block
>> Date: Mon, 24 Oct 2016 14:36:32 GMT
>> Connection: keep-alive
>>
>> ---
>>
>> $ curl --verbose -H 'If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT'
>> -H 'If-None-Match: "101395"' http://dl.google.com/linux/direct/google-c
>> hrome-stable_current_amd64.deb > /dev/null
>>> GET /linux/direct/google-chrome-stable_current_amd64.deb HTTP/1.1
>>> Host: dl.google.com
>>> User-Agent: curl/7.50.3
>>> Accept: */*
>>> If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT
>>> If-None-Match: "101395"
>>>
>> < HTTP/1.1 200 OK
>> < Accept-Ranges: bytes
>> < Content-Type: application/x-debian-package
>> < ETag: "101395"
>> < Last-Modified: Thu, 20 Oct 2016 08:29:09 GMT
>> < Server: downloads
>> < Vary: *
>> < X-Content-Type-Options: nosniff
>> < X-Frame-Options: SAMEORIGIN
>> < X-Xss-Protection: 1; mode=block
>> < Date: Mon, 24 Oct 2016 14:38:19 GMT
>> < Content-Length: 45532350
>> < Connection: keep-alive
>>
>> [1] https://tools.ietf.org/html/rfc7234#section-4.3.5
>
> Actually I mixed SHOULD agains MUST. The RFC 7231, section 4.3.2
states [1]:
> ...
> The server SHOULD send the same header fields in response to a HEAD
request as it would have sent if
> the request had been a GET, except that the payload header fields
(Section 3.3) MAY be omitted.
> ...
>
> So, big G does not follow the recommendation, but does not violate the
standard.
Of course, they do not violate the standards. They just don't quite
follow the recommendations. It also does not interfere with critical
transactions, right? It just prevents caching. But that is no problem here -
you can download the file? That's enough. Isn't it?

The Corporation of Good allows itself not to follow the recommendations.
What is permitted to Jupiter is not permitted to the bull, isn't it? They
do everything by the rule "Because we can". We, instead, must follow
"Because we can't".

Nothing personal, no trolling. Just a note.

>
> [1] https://tools.ietf.org/html/rfc7231#section-4.3.2
>
> Garri
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

- -- 
Cats - delicious. You just do not know how to cook them.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] skype connection problem

2016-10-24 Thread Nicolas Valera

Hi Yuri, thanks for the answer!

We don't have Squid in transparent mode in this network.
The Squid configuration is very basic; here is the conf:

-
http_port 1280 connection-auth=off
forwarded_for delete
httpd_suppress_version_string on
client_persistent_connections off

cache_mem 16 GB
maximum_object_size_in_memory 8 MB

url_rewrite_program /usr/bin/squidGuard
url_rewrite_children 10
url_rewrite_access allow all

acl numeric_IPs dstdom_regex 
^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9a-f]+)?:([0-9a-f:]+)?:([0-9a-f]+|0-9\.]+)?\])):443

acl Skype_UA browser ^skype

acl SSL_ports port 443 563 873 1445 2083 8000 8088 10017 8443 5443 7443 
50001

acl Safe_ports port 80 82 88 182 210 554 591 777 873 1001 21 443 70 280 488
acl Safe_ports port 1025-65535  # unregistered ports

acl CONNECT method CONNECT
acl safe_method method GET
acl safe_method method PUT
acl safe_method method POST
acl safe_method method HEAD
acl safe_method method CONNECT
acl safe_method method OPTIONS
acl safe_method method PROPFIND
acl safe_method method REPORT
acl safe_method method MERGE
acl safe_method method MKACTIVITY
acl safe_method method CHECKOUT

http_access deny !Safe_ports
http_access allow CONNECT localnet numeric_IPS Skype_UA
http_access deny CONNECT !SSL_ports
http_access deny !safe_method
http_access allow localnet
http_access allow localhost
http_access deny all

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern Packages\.tar$  0       20%     4320 refresh-ims ignore-no-cache
refresh_pattern Packages\.bz2$  0       20%     4320 refresh-ims ignore-no-cache
refresh_pattern Sources\.bz2$   0       20%     4320 refresh-ims ignore-no-cache

refresh_pattern Release\.gpg$   0       20%     4320 refresh-ims
refresh_pattern Release$        0       20%     4320 refresh-ims
refresh_pattern -i 
microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 
43200 reload-into-ims ignore-no-cache
refresh_pattern -i 
windowsupdate.com/.*\.(esd|cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 
4320 80% 43200 reload-into-ims ignore-no-cache
refresh_pattern -i 
windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 
43200 reload-into-ims ignore-no-cache
refresh_pattern -i 
live.net/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 
reload-into-ims ignore-no-cache

refresh_pattern .   0   20% 4320

-

please, can you send me your settings for ssl bump?

thanks again!
nicolás.

On 10/23/2016 07:28 PM, Yuri Voinov wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256



24.10.2016 4:11, N V wrote:

hi there,
i've had problems with windows skype clients with the only internet

connection is through squid. the clients can login successful but when
they make a call, it hangs after 12 secconds.


I checked the client connections and see that attempts to connect

directly even if the proxy is properly configured.
Exactly, Skype does not use HTTP for calls. So why do you expect its calls
to go via the proxy?


my squid version is 3.5.12
the skype clients have the last version available.
does anyone have the same issues?
any idea?

With a properly configured SSL bump and transparent proxy we don't have any
problems with Skype. I don't know your details.


thanks in advance!
Nicolás.

pd. sorry about my english



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users





___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Garri Djavadyan

On 2016-10-24 19:40, Garri Djavadyan wrote:

So, the big G sends 304 only to HEAD requests, although it is a
violation [1], AIUI:

curl --head -H 'If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT' \
  -H 'If-None-Match: "101395"' \
  http://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
HTTP/1.1 304 Not Modified
ETag: "101395"
Server: downloads
Vary: *
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
Date: Mon, 24 Oct 2016 14:36:32 GMT
Connection: keep-alive

---

$ curl --verbose -H 'If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT' \
  -H 'If-None-Match: "101395"' \
  http://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb > /dev/null

GET /linux/direct/google-chrome-stable_current_amd64.deb HTTP/1.1
Host: dl.google.com
User-Agent: curl/7.50.3
Accept: */*
If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT
If-None-Match: "101395"


< HTTP/1.1 200 OK
< Accept-Ranges: bytes
< Content-Type: application/x-debian-package
< ETag: "101395"
< Last-Modified: Thu, 20 Oct 2016 08:29:09 GMT
< Server: downloads
< Vary: *
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-Xss-Protection: 1; mode=block
< Date: Mon, 24 Oct 2016 14:38:19 GMT
< Content-Length: 45532350
< Connection: keep-alive

[1] https://tools.ietf.org/html/rfc7234#section-4.3.5


Actually, I mixed up SHOULD and MUST. RFC 7231, section 4.3.2 states
[1]:

...
The server SHOULD send the same header fields in response to a HEAD 
request as it would have sent if
the request had been a GET, except that the payload header fields 
(Section 3.3) MAY be omitted.

...

So, big G does not follow the recommendation, but does not violate the 
standard.


[1] https://tools.ietf.org/html/rfc7231#section-4.3.2

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Error "ipcacheParse: No Address records in response to"

2016-10-24 Thread erdosain9
By the way...

When I get this error

2016/10/24 12:13:36 kid1| ipcacheParse: No Address records in response to
'client.wns.windows.com' 
2016/10/24 12:13:36 kid1| ipcacheParse: No Address records in response to
'client.wns.windows.com' 
2016/10/24 12:13:36 kid1| ipcacheParse: No Address records in response to
'client.wns.windows.com' 
2016/10/24 12:13:36 kid1| ipcacheParse: No Address records in response to
'skp03.epimg.net' 
2016/10/24 12:13:36 kid1| ipcacheParse: No Address records in response to
'www.msftncsi.com' 
2016/10/24 12:13:41 kid1| ipcacheParse: No Address records in response to
'client-s.gateway.messenger.live.com' 
2016/10/24 12:13:41 kid1| ipcacheParse: No Address records in response to
'client-s.gateway.messenger.live.com' 
2016/10/24 12:13:41 kid1| ipcacheParse: No Address records in response to
'client-s.gateway.messenger.live.com' 
2016/10/24 12:13:54 kid1| Error negotiating SSL connection on FD 76: (104)
Connection reset by peer 
2016/10/24 12:14:06 kid1| ipcacheParse: No Address records in response to
'www.posadadonantonio.com' 
2016/10/24 12:14:06 kid1| ipcacheParse: No Address records in response to
'www.posadadonantonio.com' 
2016/10/24 12:14:16 kid1| ipcacheParse: No Address records in response to
'www.msftncsi.com' 
2016/10/24 12:14:31 kid1| ipcacheParse: No Address records in response to
'c.live.com' 



I put

set envar ipv6=yes

in the Juniper, and this error does not happen again, but Google sometimes
gives an IPv6 answer, and Squid doesn't work then.
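
For the IPv6 part, a commonly suggested squid.conf line - a hedged sketch, assuming the goal is simply to make Squid prefer A records over AAAA answers - is:

dns_v4_first on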

(sorry for my english)



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Error-ipcacheParse-No-Address-records-in-response-to-tp4680254p4680255.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Error "ipcacheParse: No Address records in response to"

2016-10-24 Thread erdosain9
Hi.
The Squid was working perfectly... but I needed to change the router (due to some
problems). So I'm using a Juniper firewall as the router...
So now I have this error

2016/10/24 12:13:27 kid1| WARNING: All 32/32 ssl_crtd processes are busy.
2016/10/24 12:13:27 kid1| WARNING: 32 pending requests queued
2016/10/24 12:13:27 kid1| WARNING: Consider increasing the number of
ssl_crtd processes in your config file.
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:27 kid1| Queue overload, rejecting
2016/10/24 12:13:36 kid1| ipcacheParse: No Address records in response to
'client.wns.windows.com'
2016/10/24 12:13:36 kid1| ipcacheParse: No Address records in response to
'client.wns.windows.com'
2016/10/24 12:13:36 kid1| ipcacheParse: No Address records in response to
'client.wns.windows.com'
2016/10/24 12:13:36 kid1| ipcacheParse: No Address records in response to
'skp03.epimg.net'
2016/10/24 12:13:36 kid1| ipcacheParse: No Address records in response to
'www.msftncsi.com'
2016/10/24 12:13:41 kid1| ipcacheParse: No Address records in response to
'client-s.gateway.messenger.live.com'
2016/10/24 12:13:41 kid1| ipcacheParse: No Address records in response to
'client-s.gateway.messenger.live.com'
2016/10/24 12:13:41 kid1| ipcacheParse: No Address records in response to
'client-s.gateway.messenger.live.com'
2016/10/24 12:13:54 kid1| Error negotiating SSL connection on FD 76: (104)
Connection reset by peer
2016/10/24 12:14:06 kid1| ipcacheParse: No Address records in response to
'www.posadadonantonio.com'
2016/10/24 12:14:06 kid1| ipcacheParse: No Address records in response to
'www.posadadonantonio.com'
2016/10/24 12:14:16 kid1| ipcacheParse: No Address records in response to
'www.msftncsi.com'
2016/10/24 12:14:31 kid1| ipcacheParse: No Address records in response to
'c.live.com'
2016/10/24 12:14:33 kid1| Error negotiating SSL connection on FD 35:
error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca (1/0)
2016/10/24 12:14:56 kid1| ipcacheParse: No Address records in response to
'outlook.live.com'
2016/10/24 12:14:56 kid1| ipcacheParse: No Address records in response to
'outlook.live.com'
2016/10/24 12:14:56 kid1| ipcacheParse: No Address records in response to
'outlook.live.com'
2016/10/24 12:14:56 kid1| ipcacheParse: No Address records in response to
'outlook.live.com'
2016/10/24 12:15:16 kid1| ipcacheParse: No Address records in response to
'ipv6.msftncsi.com'
2016/10/24 12:15:16 kid1| ipcacheParse: No Address records in response to
'ocsp.digicert.com'
2016/10/24 12:15:31 kid1| ipcacheParse: No Address records in response to
'outlook.live.com'
2016/10/24 12:15:31 kid1| ipcacheParse: No Address records in response to
'outlook.live.com'
2016/10/24 12:15:31 kid1| ipcacheParse: No Address records in response to
'outlook.live.com'
2016/10/24 12:15:31 kid1| ipcacheParse: No Address records in response to
'outlook.live.com'
2016/10/24 12:15:31 kid1| ipcacheParse: No Address records in response to
'outlook.live.com'
2016/10/24 12:15:31 kid1| ipcacheParse: No Address records in response to
'fravega.vteximg.com.br'
2016/10/24 12:15:36 kid1| ipcacheParse: No Address records in response to
'www.clarin.com'
2016/10/24 12:15:36 kid1| ipcacheParse: No Address records in response to
'www.clarin.com'


What can I do??? By the way, sometimes google.com answers with an
IPv6 address... and Squid does not know what to do with that.
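
For the ssl_crtd overload warnings at the top of that log, the usual direction is to enlarge the helper pool. A hedged squid.conf sketch (the numbers are only a starting point to tune, not values from this thread):

sslcrtd_children 64 startup=8 idle=4

And if the IPv6-only answers are the problem, dns_v4_first on is the related knob.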

Thanks!



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Error-ipcacheParse-No-Address-records-in-response-to-tp4680254.html
Sent from the Squid - Users mailing list archive at Nabble.com.
__

Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Yuri
I'm sorry to interrupt - I remember someone saying that you need to
always abide by the RFCs? Well, how do you say that to Google?



24.10.2016 20:40, Garri Djavadyan writes:

On Tue, 2016-10-25 at 01:22 +1300, Amos Jeffries wrote:

On 25/10/2016 12:32 a.m., Garri Djavadyan wrote:

On Mon, 2016-10-24 at 23:51 +1300, Amos Jeffries wrote:

On 24/10/2016 9:59 p.m., Garri Djavadyan wrote:

Nevertheless, the topic surfaced new details regarding the Vary
and
I
tried conditional requests on same URL (Google Chrome) from
different
machines/IPs. Here results:

$ curl --head --header "If-Modified-Since: Thu, 22 Oct 2016
08:29:09
GMT" https://dl.google.com/linux/direct/google-chrome-stable_cu
rren
t_am
d64.deb
HTTP/1.1 304 Not Modified
Etag: "101395"
Server: downloads
Vary: *
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
Date: Mon, 24 Oct 2016 08:53:44 GMT
Alt-Svc: quic=":443"; ma=2592000; v="36,35,34"



$ curl --head --header 'If-None-Match: "101395"' https://dl.goo
gle.
com/
linux/direct/google-chrome-stable_current_amd64.deb
HTTP/1.1 304 Not Modified
Etag: "101395"
Last-Modified: Thu, 20 Oct 2016 08:29:09 GMT
Server: downloads
Vary: *
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
Date: Mon, 24 Oct 2016 08:54:18 GMT
Alt-Svc: quic=":443"; ma=2592000; v="36,35,34"


Sweet! Far better than I was expecting. That means this patch
should
work:

=== modified file 'src/http.cc'
--- src/http.cc 2016-10-08 22:19:44 +
+++ src/http.cc 2016-10-24 10:50:16 +
@@ -593,7 +593,7 @@
  while (strListGetItem(&vary, ',', &item, &ilen, &pos)) {
  SBuf name(item, ilen);
  if (name == asterisk) {
-vstr.clear();
+vstr = asterisk;
  break;
  }
  name.toLower();
@@ -947,6 +947,12 @@
  varyFailure = true;
  } else {
  entry->mem_obj->vary_headers = vary;
+
+// RFC 7231 section 7.1.4
+// Vary:* can be cached, but has mandatory
revalidation
+static const SBuf asterisk("*");
+if (vary == asterisk)
+EBIT_SET(entry->flags, ENTRY_REVALIDATE_ALWAYS);
  }
  }


Amos

I have applied the patch. Below my results.

In access.log I see:

1477307991.672  49890 127.0.0.1 TCP_REFRESH_MODIFIED/200 45532786
GET h
ttp://dl.google.com/linux/direct/google-chrome-
stable_current_amd64.deb
  - HIER_DIRECT/173.194.222.136 application/x-debian-package

In packet capture, I see that Squid doesn't use conditional
request:

GET /linux/direct/google-chrome-stable_current_amd64.deb HTTP/1.1
User-Agent: curl/7.50.3
Accept: */*
Host: dl.google.com
Via: 1.1 gentoo.comnet.uz (squid/3.5.22)
X-Forwarded-For: 127.0.0.1
Cache-Control: max-age=259200
Connection: keep-alive

Hmmm. That looks to me like the new patch is working (log says
REFRESH
being done) but there is some bug in the revalidate logic not adding
the
required headers.
  If thats right, then that bug might be causing other revalidate
traffic
to have major /200 issues.

I'm in need of sleep right now. If you can grab a ALL,9 cache.log
trace
and mail it to me I will take a look in the morning. Otherwise I will
try to replicate the case myself and track it down in the next few
days.

Amos

Sorry, I probably analysed the header of first request. I tried again,
and found that Squid sends the header correctly:

GET /linux/direct/google-chrome-stable_current_amd64.deb HTTP/1.1
If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT
If-None-Match: "101395"
User-Agent: curl/7.50.3
Accept: */*
Host: dl.google.com
Via: 1.1 gentoo.comnet.uz (squid/3.5.22)
X-Forwarded-For: 127.0.0.1
Cache-Control: max-age=259200
Connection: keep-alive


Sad enough, the reply is:

HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Type: application/x-debian-package
ETag: "101395"
Last-Modified: Thu, 20 Oct 2016 08:29:09 GMT
Server: downloads
Vary: *
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
Date: Mon, 24 Oct 2016 13:41:13 GMT
Content-Length: 45532350
Connection: keep-alive


So, the big G sends 304 only to HEAD requests, although it is a
violation [1], AIUI:

curl --head -H 'If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT' -H
'If-None-Match: "101395"' http://dl.google.com/linux/direct/google-chro
me-stable_current_amd64.deb
HTTP/1.1 304 Not Modified
ETag: "101395"
Server: downloads
Vary: *
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
Date: Mon, 24 Oct 2016 14:36:32 GMT
Connection: keep-alive

---

$ curl --verbose -H 'If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT'
-H 'If-None-Match: "101395"' http://dl.google.com/linux/direct/google-c
hrome-stable_current_amd64.deb > /dev/null

GET /linux/direct/google-chrome-stable_current_amd64.deb HTTP/1.1
Host: dl.google.com
User-Agent: curl/7.50.3
Accept: */*
If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT
If-None-Match: "101395"
  

< HTTP/1.1 200 OK
<

Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Garri Djavadyan
On Tue, 2016-10-25 at 01:22 +1300, Amos Jeffries wrote:
> On 25/10/2016 12:32 a.m., Garri Djavadyan wrote:
> > 
> > On Mon, 2016-10-24 at 23:51 +1300, Amos Jeffries wrote:
> > > 
> > > On 24/10/2016 9:59 p.m., Garri Djavadyan wrote:
> > > > 
> > > > Nevertheless, the topic surfaced new details regarding the Vary
> > > > and
> > > > I
> > > > tried conditional requests on same URL (Google Chrome) from
> > > > different
> > > > machines/IPs. Here results:
> > > > 
> > > > $ curl --head --header "If-Modified-Since: Thu, 22 Oct 2016
> > > > 08:29:09
> > > > GMT" https://dl.google.com/linux/direct/google-chrome-stable_cu
> > > > rren
> > > > t_am
> > > > d64.deb
> > > > HTTP/1.1 304 Not Modified
> > > > Etag: "101395"
> > > > Server: downloads
> > > > Vary: *
> > > > X-Content-Type-Options: nosniff
> > > > X-Frame-Options: SAMEORIGIN
> > > > X-Xss-Protection: 1; mode=block
> > > > Date: Mon, 24 Oct 2016 08:53:44 GMT
> > > > Alt-Svc: quic=":443"; ma=2592000; v="36,35,34"
> > > > 
> > > > 
> > > > 
> > > > $ curl --head --header 'If-None-Match: "101395"' https://dl.goo
> > > > gle.
> > > > com/
> > > > linux/direct/google-chrome-stable_current_amd64.deb 
> > > > HTTP/1.1 304 Not Modified
> > > > Etag: "101395"
> > > > Last-Modified: Thu, 20 Oct 2016 08:29:09 GMT
> > > > Server: downloads
> > > > Vary: *
> > > > X-Content-Type-Options: nosniff
> > > > X-Frame-Options: SAMEORIGIN
> > > > X-Xss-Protection: 1; mode=block
> > > > Date: Mon, 24 Oct 2016 08:54:18 GMT
> > > > Alt-Svc: quic=":443"; ma=2592000; v="36,35,34"
> > > > 
> > > 
> > > Sweet! Far better than I was expecting. That means this patch
> > > should
> > > work:
> > > 
> > > === modified file 'src/http.cc'
> > > --- src/http.cc 2016-10-08 22:19:44 +
> > > +++ src/http.cc 2016-10-24 10:50:16 +
> > > @@ -593,7 +593,7 @@
> > >  while (strListGetItem(&vary, ',', &item, &ilen, &pos)) {
> > >  SBuf name(item, ilen);
> > >  if (name == asterisk) {
> > > -vstr.clear();
> > > +vstr = asterisk;
> > >  break;
> > >  }
> > >  name.toLower();
> > > @@ -947,6 +947,12 @@
> > >  varyFailure = true;
> > >  } else {
> > >  entry->mem_obj->vary_headers = vary;
> > > +
> > > +// RFC 7231 section 7.1.4
> > > +// Vary:* can be cached, but has mandatory
> > > revalidation
> > > +static const SBuf asterisk("*");
> > > +if (vary == asterisk)
> > > +EBIT_SET(entry->flags, ENTRY_REVALIDATE_ALWAYS);
> > >  }
> > >  }
> > > 
> > > 
> > > Amos
> > 
> > I have applied the patch. Below my results.
> > 
> > In access.log I see:
> > 
> > 1477307991.672  49890 127.0.0.1 TCP_REFRESH_MODIFIED/200 45532786
> > GET h
> > ttp://dl.google.com/linux/direct/google-chrome-
> > stable_current_amd64.deb
> >  - HIER_DIRECT/173.194.222.136 application/x-debian-package
> > 
> > In packet capture, I see that Squid doesn't use conditional
> > request:
> > 
> > GET /linux/direct/google-chrome-stable_current_amd64.deb HTTP/1.1
> > User-Agent: curl/7.50.3
> > Accept: */*
> > Host: dl.google.com
> > Via: 1.1 gentoo.comnet.uz (squid/3.5.22)
> > X-Forwarded-For: 127.0.0.1
> > Cache-Control: max-age=259200
> > Connection: keep-alive
> 
> Hmmm. That looks to me like the new patch is working (log says
> REFRESH
> being done) but there is some bug in the revalidate logic not adding
> the
> required headers.
>  If thats right, then that bug might be causing other revalidate
> traffic
> to have major /200 issues.
> 
> I'm in need of sleep right now. If you can grab a ALL,9 cache.log
> trace
> and mail it to me I will take a look in the morning. Otherwise I will
> try to replicate the case myself and track it down in the next few
> days.
> 
> Amos

Sorry, I probably analysed the headers of the first request. I tried again,
and found that Squid sends the headers correctly:

GET /linux/direct/google-chrome-stable_current_amd64.deb HTTP/1.1
If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT
If-None-Match: "101395"
User-Agent: curl/7.50.3
Accept: */*
Host: dl.google.com
Via: 1.1 gentoo.comnet.uz (squid/3.5.22)
X-Forwarded-For: 127.0.0.1
Cache-Control: max-age=259200
Connection: keep-alive


Sadly enough, the reply is:

HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Type: application/x-debian-package
ETag: "101395"
Last-Modified: Thu, 20 Oct 2016 08:29:09 GMT
Server: downloads
Vary: *
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
Date: Mon, 24 Oct 2016 13:41:13 GMT
Content-Length: 45532350
Connection: keep-alive


So, the big G sends 304 only to HEAD requests, although it is a
violation [1], AIUI:

curl --head -H 'If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT' -H
'If-None-Match: "101395"' http://dl.google.com/linux/direct/google-chro
me-stable_current_amd64.deb
HTTP/1.1 304 Not Modified
ETag: "101395"
Server: downloads
Vary: *
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Prote

Re: [squid-users] ERROR: Cannot connect to 127.0.0.1:3128

2016-10-24 Thread Amos Jeffries
On 24/10/2016 9:34 p.m., Михаил wrote:
> Hi!
> Could you write me if you had managed to emulate the problem that I have?
> Best regards, Misha.

I have not been able to replicate it here. I think I remember seeing it
a few years back, but not recently and trying last week my Squid worked
okay.

I was suspicious that the ::1 was being resolved. But your -vv output
shows it is finding 127.0.0.1 just fine. Something in the proxy is
denying the transaction, but from your config it looks like it should be
allowed through without any problem.


As a wild guess; try commenting out the ::1 entry in your /etc/hosts
file. Squid loads that file into its internal DNS cache and maybe the
entry is causing an issue on the Squid side of things.
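
For example, something along these lines in /etc/hosts (the surrounding entries are only an illustration of a typical file):

127.0.0.1   localhost
# ::1       localhost ip6-localhost ip6-loopback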

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Amos Jeffries
On 25/10/2016 12:32 a.m., Garri Djavadyan wrote:
> On Mon, 2016-10-24 at 23:51 +1300, Amos Jeffries wrote:
>> On 24/10/2016 9:59 p.m., Garri Djavadyan wrote:
>>> Nevertheless, the topic surfaced new details regarding the Vary and
>>> I
>>> tried conditional requests on same URL (Google Chrome) from
>>> different
>>> machines/IPs. Here results:
>>>
>>> $ curl --head --header "If-Modified-Since: Thu, 22 Oct 2016
>>> 08:29:09
>>> GMT" https://dl.google.com/linux/direct/google-chrome-stable_curren
>>> t_am
>>> d64.deb
>>> HTTP/1.1 304 Not Modified
>>> Etag: "101395"
>>> Server: downloads
>>> Vary: *
>>> X-Content-Type-Options: nosniff
>>> X-Frame-Options: SAMEORIGIN
>>> X-Xss-Protection: 1; mode=block
>>> Date: Mon, 24 Oct 2016 08:53:44 GMT
>>> Alt-Svc: quic=":443"; ma=2592000; v="36,35,34"
>>>
>>> 
>>>
>>> $ curl --head --header 'If-None-Match: "101395"' https://dl.google.
>>> com/
>>> linux/direct/google-chrome-stable_current_amd64.deb 
>>> HTTP/1.1 304 Not Modified
>>> Etag: "101395"
>>> Last-Modified: Thu, 20 Oct 2016 08:29:09 GMT
>>> Server: downloads
>>> Vary: *
>>> X-Content-Type-Options: nosniff
>>> X-Frame-Options: SAMEORIGIN
>>> X-Xss-Protection: 1; mode=block
>>> Date: Mon, 24 Oct 2016 08:54:18 GMT
>>> Alt-Svc: quic=":443"; ma=2592000; v="36,35,34"
>>>
>>
>> Sweet! Far better than I was expecting. That means this patch should
>> work:
>>
>> === modified file 'src/http.cc'
>> --- src/http.cc 2016-10-08 22:19:44 +
>> +++ src/http.cc 2016-10-24 10:50:16 +
>> @@ -593,7 +593,7 @@
>>  while (strListGetItem(&vary, ',', &item, &ilen, &pos)) {
>>  SBuf name(item, ilen);
>>  if (name == asterisk) {
>> -vstr.clear();
>> +vstr = asterisk;
>>  break;
>>  }
>>  name.toLower();
>> @@ -947,6 +947,12 @@
>>  varyFailure = true;
>>  } else {
>>  entry->mem_obj->vary_headers = vary;
>> +
>> +// RFC 7231 section 7.1.4
>> +// Vary:* can be cached, but has mandatory revalidation
>> +static const SBuf asterisk("*");
>> +if (vary == asterisk)
>> +EBIT_SET(entry->flags, ENTRY_REVALIDATE_ALWAYS);
>>  }
>>  }
>>
>>
>> Amos
> 
> I have applied the patch. Below my results.
> 
> In access.log I see:
> 
> 1477307991.672  49890 127.0.0.1 TCP_REFRESH_MODIFIED/200 45532786 GET h
> ttp://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
>  - HIER_DIRECT/173.194.222.136 application/x-debian-package
> 
> In packet capture, I see that Squid doesn't use conditional request:
> 
> GET /linux/direct/google-chrome-stable_current_amd64.deb HTTP/1.1
> User-Agent: curl/7.50.3
> Accept: */*
> Host: dl.google.com
> Via: 1.1 gentoo.comnet.uz (squid/3.5.22)
> X-Forwarded-For: 127.0.0.1
> Cache-Control: max-age=259200
> Connection: keep-alive

Hmmm. That looks to me like the new patch is working (log says REFRESH
being done) but there is some bug in the revalidate logic not adding the
required headers.
 If that's right, then that bug might be causing other revalidate traffic
to have major /200 issues.

I'm in need of sleep right now. If you can grab a ALL,9 cache.log trace
and mail it to me I will take a look in the morning. Otherwise I will
try to replicate the case myself and track it down in the next few days.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Amos Jeffries
On 24/10/2016 11:49 p.m., Yuri wrote:
> 
> 
> 24.10.2016 16:42, Alex Crow пишет:
>> On 24/10/16 11:26, Yuri wrote:
>>
>>> No, Amos, I'm not trolling your or another developers.
>>>
>>> I just really do not understand why there is a caching proxy, which
>>> is almost nothing can cache in the modern world. And that in vanilla
>>> version gives a maximum of 10-30% byte hit. From me personally, it
>>> needs no justification and no explanation. And the results.
>>>
>>> I can not explain to management why no result, referring to your
>>> explanations or descriptions of standards. I think it's understandable.
>>>
>>> At the present time to obtain any acceptable result it is necessary
>>> to make a hell of a lot of effort. To maintenance such installation
>>> is not easy.
>>>
>>> And as with every new version of the caching level it falls - and it
>>> is very easy to check - it is very difficult to explain to
>>> management, is not it?
>>>
>>> It's not my imagination - this is confirmed by dozens of Squid
>>> administrators, including me personally familiar. Therefore, I would
>>> heed to claim that I lie or deliberately introduce someone else astray.
>>>
>>
>> I'd rather have to explain to management about a low hitrate than have
>> to explain why they weren't seeing the content they expected to see,
>> or that some vital transaction did not go through, but, hey look here,
>> we're saving 80% of web traffic bill!
> So, what are you talking about - the smallest of the problems that can
> be easily testing and solving by existing functionality - such as no_cache.
> 
> In any case - it is the choice of each and I only wish to have all
> possible tools. And not to have my hands tied.

Squid is moving to HTTP/1.1 specifications. It no longer does some
things the HTTP/1.0-ish way.

I keep mentioning over and over again:  the controls you keep asking for
are only needed by the HTTP/1.0 behaviours ... to make the HTTP/1.0
proxy operate more like HTTP/1.1 !! But not quite identical to a 1.1
proxy/cache because it adds traffic problems that the true 1.1
proxy/cache does not allow to happen.

Squid is being converted to HTTP/1.1 native behaviour. The controls no
longer are needed in the bits that have been converted, the current
releases do *better* than what you are asking for when faced with
Cache-Control:no-cache, private etc. which have been converted already.

Having controls to force the old 1990's behaviour on today's Internet
traffic only leads to old bugs and problems being forced on clients. The
gains you got from those controls on HTTP/1.0 traffic are now just
happening naturally with HTTP/1.1 - no knobs need turning on/off for it
to happen.

So the old settings are going away (replaced). If the new behaviour
needs new settings that is something to discover as Squid improves.
Evidence so far is that there are few needed, but that could change.


So lets put it this way:
 You started with a proxy that could do X and be forced to also do a Y
and a Z thing.
 You then upgraded to a proxy that did X and Y, and be forced to do a Z
thing.
 So you complain that you can no longer force the new proxy to do Y thing.

Makes no sense to me unless I assume you are confused by the way the
forced-Y looks different to the real Y - though both are almost the same
thing. The difference being how real-Y fixed some nasty bugs caused by
forced-Y.


> 
> Especially when there are competing products that provide the desired
> results. Yes, they cost money. But management is not often ask - just
> buy what they need, by they opinion, and you - with Squid - went to look
> for a job. So simple.
> 

There will always be other products that do part of what Squid does
along with other things Squid does not. Just like Squid does part of
what they do and other things they do not.

Causing a product installation to produce corrupted traffic responses
does not help with that product's reputation compared to 'the
competition' - no matter whether its Squid or something else. Whereas
reliable and accurate data transfer integrity is a cornerstone for good
reputation in any caching or networking product.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Garri Djavadyan
On Mon, 2016-10-24 at 23:51 +1300, Amos Jeffries wrote:
> On 24/10/2016 9:59 p.m., Garri Djavadyan wrote:
> > Nevertheless, the topic surfaced new details regarding the Vary and
> > I
> > tried conditional requests on same URL (Google Chrome) from
> > different
> > machines/IPs. Here results:
> > 
> > $ curl --head --header "If-Modified-Since: Thu, 22 Oct 2016
> > 08:29:09
> > GMT" https://dl.google.com/linux/direct/google-chrome-stable_curren
> > t_am
> > d64.deb
> > HTTP/1.1 304 Not Modified
> > Etag: "101395"
> > Server: downloads
> > Vary: *
> > X-Content-Type-Options: nosniff
> > X-Frame-Options: SAMEORIGIN
> > X-Xss-Protection: 1; mode=block
> > Date: Mon, 24 Oct 2016 08:53:44 GMT
> > Alt-Svc: quic=":443"; ma=2592000; v="36,35,34"
> > 
> > 
> > 
> > $ curl --head --header 'If-None-Match: "101395"' https://dl.google.
> > com/
> > linux/direct/google-chrome-stable_current_amd64.deb 
> > HTTP/1.1 304 Not Modified
> > Etag: "101395"
> > Last-Modified: Thu, 20 Oct 2016 08:29:09 GMT
> > Server: downloads
> > Vary: *
> > X-Content-Type-Options: nosniff
> > X-Frame-Options: SAMEORIGIN
> > X-Xss-Protection: 1; mode=block
> > Date: Mon, 24 Oct 2016 08:54:18 GMT
> > Alt-Svc: quic=":443"; ma=2592000; v="36,35,34"
> > 
> 
> Sweet! Far better than I was expecting. That means this patch should
> work:
> 
> === modified file 'src/http.cc'
> --- src/http.cc 2016-10-08 22:19:44 +
> +++ src/http.cc 2016-10-24 10:50:16 +
> @@ -593,7 +593,7 @@
>  while (strListGetItem(&vary, ',', &item, &ilen, &pos)) {
>  SBuf name(item, ilen);
>  if (name == asterisk) {
> -vstr.clear();
> +vstr = asterisk;
>  break;
>  }
>  name.toLower();
> @@ -947,6 +947,12 @@
>  varyFailure = true;
>  } else {
>  entry->mem_obj->vary_headers = vary;
> +
> +// RFC 7231 section 7.1.4
> +// Vary:* can be cached, but has mandatory revalidation
> +static const SBuf asterisk("*");
> +if (vary == asterisk)
> +EBIT_SET(entry->flags, ENTRY_REVALIDATE_ALWAYS);
>  }
>  }
> 
> 
> Amos

I have applied the patch. Below are my results.

In access.log I see:

1477307991.672  49890 127.0.0.1 TCP_REFRESH_MODIFIED/200 45532786 GET h
ttp://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
 - HIER_DIRECT/173.194.222.136 application/x-debian-package

In the packet capture, I see that Squid doesn't use a conditional request:

GET /linux/direct/google-chrome-stable_current_amd64.deb HTTP/1.1
User-Agent: curl/7.50.3
Accept: */*
Host: dl.google.com
Via: 1.1 gentoo.comnet.uz (squid/3.5.22)
X-Forwarded-For: 127.0.0.1
Cache-Control: max-age=259200
Connection: keep-alive
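
For reference, a capture like the one above can be taken with something along these lines; the interface name is an assumption:

tcpdump -i eth0 -s 0 -A 'tcp port 80 and host dl.google.com'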

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] external_acl_type problem

2016-10-24 Thread reinerotto
>But the startup should be 0 in all Squid-3.2+ like you say. Are you
applying any patches to external_acl.cc or helper/ChildConfig.cc ? <

No patches. 
Now I rebuilt squid on a 32-bit debian, with default ./configure opts.
Same effect:
2016/10/24 09:54:09 kid1| helperOpenServers: Starting 5/5 'check_delay.sh'
processes

having this in squid.conf:

external_acl_type check_delay ttl=0 cache=0 %SRC /etc/squid/check_delay.sh
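
For comparison, a minimal non-concurrent helper matching that line - purely hypothetical contents, since the real check_delay.sh was not posted - would be:

#!/bin/sh
# Squid writes one %SRC value per line; the helper must answer OK or ERR per line.
while read src; do
    echo OK
done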



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/external-acl-type-problem-tp4680203p4680247.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid with ASR9001

2016-10-24 Thread Yuri

Ha, it seems the ASR9000 really does not support WCCP. You're right.

WCCP is supported on Nexus and on ASR1000... So your router can only use PBR
or an analogue.

The only idea is to buy a 3750 as an aggregation switch, configure WCCP on it
and connect it to your ASR by a fiber trunk.


24.10.2016 16:30, Garth van Sittert | BitCo writes:


By Cisco employee - “Correct, there is no WCCP and no plans for it 
either... :(”


https://supportforums.cisco.com/discussion/12227051/ios-xr-and-wccp

WCCP supported platforms –

https://supportforums.cisco.com/document/133201/wccp-platform-support-overview

Our ASR9001 has no commands that support wccp anywhere…


Garth van Sittert | Chief Executive Officer
/(BSC Physics & Computer Science)/
Tel: 087 135  Ext: 201
ga...@bitco.co.za 
bitco.co.za 

*From:*Yuri [mailto:yvoi...@gmail.com]
*Sent:* Monday, 24 October 2016 12:12 PM
*To:* Garth van Sittert | BitCo ; 
squid-users@lists.squid-cache.org

*Subject:* Re: [squid-users] Squid with ASR9001

24.10.2016 13:16, Garth van Sittert | BitCo writes:

Yes, it looks like all of the ASR9000 range which makes use of IOS
XR no longer supports WCCP.

Please provide a proof link from Cisco.

Policy Based Routing has been replaced by ACL Based Forwarding or ABF.

So? This is a terminology difference, if any.

*From:*squid-users
[mailto:squid-users-boun...@lists.squid-cache.org] *On Behalf Of
*Yuri Voinov
*Sent:* Sunday, 23 October 2016 9:35 PM
*To:* squid-users@lists.squid-cache.org

*Subject:* Re: [squid-users] Squid with ASR9001


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256



23.10.2016 23:16, Garth van Sittert | BitCo writes:

> Good day all
>
> Has anyone had any experience setting up Squid with any IOS XR Cisco
> routers?  The Cisco ASR9000 range doesn’t support WCCP and I cannot
> find any examples online.

Seriously, the entire range?

Who said that it does not support WCCP? It is obliged to support it, if
only because it is not a home dish soap. That's when Cisco writes in the
documentation that it is not supported - and then we cry.

> I have also found quotes regarding PBR on the ASR9000… “With IOS XR
> traditional policy-based routing (PBR) is history”

What crazy forum is saying that? PBR is fundamental functionality for a
router, especially for a router at this level. I find it hard to imagine
a company that completely cuts down its business by releasing a device
incompatible with everything else. This is only possible in OpenSource,
but not in a huge IT-business company. AFAIK.

> I plan to use this on our 10Gbps ISP traffic to improve customer
> experience…

There are no examples because solutions at such a level rarely use Squid.
Personally, I do not have a machine to play with and write an example for
Squid's wiki. As you know, a router is not a trinket you give your wife
as a Christmas present.

> Garth



-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEbBAEBCAAGBQJYDRDSAAoJENNXIZxhPexG7roH90gh9VtKKk4g7WKscldhl5ki
tjs5d46Wl6uIWOI0

Re: [squid-users] [External] Re: Issue when connecting to apple APN

2016-10-24 Thread Alaa Hassan Barqawi
Dears,
Finally we found the issue: as we are running Squid on RHEL 7, the
dnsmasq service was stopped.
We just started it and everything worked fine!
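
On RHEL 7 that amounts to something like the following (assuming resolv.conf points at the local dnsmasq):

systemctl start dnsmasq
systemctl enable dnsmasq    # so it comes back after a reboot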


علاء حسن جميل برقاوي
Ala'a Hasan Jamil Barqawi
System Integrator

8191 Takhassusi Road, Olaya
Riyadh 12333 - 3038, KSA
Tel +(9661) 2887513
Mobile +(966) 565050590
abarq...@elm.sa 
www.elm.sa

-Original Message-
From: Alaa Hassan Barqawi 
Sent: Monday, October 24, 2016 1:28 PM
To: 'Antony Stone'; squid-users@lists.squid-cache.org
Subject: RE: [External] Re: [squid-users] Issue when connecting to apple APN

I used the same configuration as this URL 
https://panaharjuna.wordpress.com/2009/12/17/speed-your-squid-server-using-google-public-dns/
But unfortunately, when it comes to resolving the Apple APN gateway it fails and
returns 503 Service Unavailable. Any hope of solving it, please?
Access.log

1477295971.100  0 192.168.186.37 TCP_MISS/503 0 CONNECT 
gateway.push.apple.com:2195 - HIER_NONE/- -




-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Antony Stone
Sent: Monday, October 24, 2016 12:42 PM
To: squid-users@lists.squid-cache.org
Subject: [External] Re: [squid-users] Issue when connecting to apple APN

On Monday 24 October 2016 at 11:36:34, Antony Stone wrote:

> On Monday 24 October 2016 at 11:27:17, Alaa Hassan Barqawi wrote:
> > Dears,
> > I am facing issue in connecting with apple APN gateway.push.apple.com :
> > 2195 The name cannot be resolved although I am using google DNS 
> > servers and it throws an error Unable to determine IP address from 
> > host name gateway.push.apple.com The DNS server returned:
> > No DNS records
> 
> There is no A (or ) record, but it is a CNAME:
> 
> $ dig gateway.push.apple.com
> 
> ; <<>> DiG 9.8.4-rpz2+rl005.12-P1 <<>> gateway.push.apple.com ;; 
> global options: +cmd ;; Got answer:
> ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4722 ;; flags: qr 
> rd ra; QUERY: 1, ANSWER: 9, AUTHORITY: 0, ADDITIONAL: 0
> 
> ;; QUESTION SECTION:
> ;gateway.push.apple.com.IN  A
> 
> ;; ANSWER SECTION:
> gateway.push.apple.com. 193 IN  CNAME   gateway.push-
> apple.com.akadns.net.
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.129.25
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.134.21
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.135.152
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.135.149
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.134.150
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.136.184
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.137.150
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.142.26
> 
> ;; Query time: 19 msec
> ;; SERVER: 80.68.80.24#53(80.68.80.24) ;; WHEN: Mon Oct 24 10:35:09
> 2016 ;; MSG SIZE  rcvd: 215
> 
> Are you using your own DNS server, or someone else's?

I apologise for not noticing "I am using Google DNS servers".

However, sending the above query to 8.8.8.8 gives me precisely the same result.


Antony.

--
The Magic Words are Squeamish Ossifrage.

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Amos Jeffries
On 24/10/2016 9:59 p.m., Garri Djavadyan wrote:
> Hi Amos,
> 
> Thank you very much for so detailed explanation. I've made conclusions
> from presented information. I deeply regret, that the topic took so
> many time from you. I believe, information presented here will be
> helpful for the community.
> 
> Nevertheless, the topic surfaced new details regarding the Vary and I
> tried conditional requests on same URL (Google Chrome) from different
> machines/IPs. Here results:
> 
> $ curl --head --header "If-Modified-Since: Thu, 22 Oct 2016 08:29:09
> GMT" https://dl.google.com/linux/direct/google-chrome-stable_current_am
> d64.deb
> HTTP/1.1 304 Not Modified
> Etag: "101395"
> Server: downloads
> Vary: *
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-Xss-Protection: 1; mode=block
> Date: Mon, 24 Oct 2016 08:53:44 GMT
> Alt-Svc: quic=":443"; ma=2592000; v="36,35,34"
> 
> 
> 
> $ curl --head --header 'If-None-Match: "101395"' https://dl.google.com/
> linux/direct/google-chrome-stable_current_amd64.deb 
> HTTP/1.1 304 Not Modified
> Etag: "101395"
> Last-Modified: Thu, 20 Oct 2016 08:29:09 GMT
> Server: downloads
> Vary: *
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-Xss-Protection: 1; mode=block
> Date: Mon, 24 Oct 2016 08:54:18 GMT
> Alt-Svc: quic=":443"; ma=2592000; v="36,35,34"
> 

Sweet! Far better than I was expecting. That means this patch should work:

=== modified file 'src/http.cc'
--- src/http.cc 2016-10-08 22:19:44 +
+++ src/http.cc 2016-10-24 10:50:16 +
@@ -593,7 +593,7 @@
 while (strListGetItem(&vary, ',', &item, &ilen, &pos)) {
 SBuf name(item, ilen);
 if (name == asterisk) {
-vstr.clear();
+vstr = asterisk;
 break;
 }
 name.toLower();
@@ -947,6 +947,12 @@
 varyFailure = true;
 } else {
 entry->mem_obj->vary_headers = vary;
+
+// RFC 7231 section 7.1.4
+// Vary:* can be cached, but has mandatory revalidation
+static const SBuf asterisk("*");
+if (vary == asterisk)
+EBIT_SET(entry->flags, ENTRY_REVALIDATE_ALWAYS);
 }
 }
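
For anyone wanting to try it, a hedged way to apply and rebuild from the Squid source root (vary-asterisk.patch is just a hypothetical file name holding the diff above; configure options are whatever the local build already uses):

patch -p0 < vary-asterisk.patch
make && sudo make install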


Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Yuri



24.10.2016 16:42, Alex Crow writes:

On 24/10/16 11:26, Yuri wrote:


No, Amos, I'm not trolling your or another developers.

I just really do not understand why there is a caching proxy, which 
is almost nothing can cache in the modern world. And that in vanilla 
version gives a maximum of 10-30% byte hit. From me personally, it 
needs no justification and no explanation. And the results.


I can not explain to management why no result, referring to your 
explanations or descriptions of standards. I think it's understandable.


At the present time to obtain any acceptable result it is necessary 
to make a hell of a lot of effort. To maintenance such installation 
is not easy.


And as with every new version of the caching level it falls - and it 
is very easy to check - it is very difficult to explain to 
management, is not it?


It's not my imagination - this is confirmed by dozens of Squid 
administrators, including me personally familiar. Therefore, I would 
heed to claim that I lie or deliberately introduce someone else astray.




I'd rather have to explain to management about a low hitrate than have 
to explain why they weren't seeing the content they expected to see, 
or that some vital transaction did not go through, but, hey look here, 
we're saving 80% of web traffic bill!
So, what you are talking about is the smallest of the problems, one that can
be easily tested and solved by existing functionality - such as no_cache.

In any case - it is everyone's own choice, and I only wish to have all
possible tools, and not to have my hands tied.

Especially when there are competing products that provide the desired
results. Yes, they cost money. But management often does not ask - they just
buy what they think they need, and you - with Squid - go looking
for a job. So simple.







___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Alex Crow

On 24/10/16 11:26, Yuri wrote:


No, Amos, I'm not trolling your or another developers.

I just really do not understand why there is a caching proxy, which is 
almost nothing can cache in the modern world. And that in vanilla 
version gives a maximum of 10-30% byte hit. From me personally, it 
needs no justification and no explanation. And the results.


I can not explain to management why no result, referring to your 
explanations or descriptions of standards. I think it's understandable.


At the present time to obtain any acceptable result it is necessary to 
make a hell of a lot of effort. To maintenance such installation is 
not easy.


And as with every new version of the caching level it falls - and it 
is very easy to check - it is very difficult to explain to management, 
is not it?


It's not my imagination - this is confirmed by dozens of Squid 
administrators, including me personally familiar. Therefore, I would 
heed to claim that I lie or deliberately introduce someone else astray.




I'd rather have to explain to management about a low hitrate than have 
to explain why they weren't seeing the content they expected to see, or 
that some vital transaction did not go through, but, hey look here, 
we're saving 80% of web traffic bill!



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid with ASR9001

2016-10-24 Thread Garth van Sittert | BitCo

By Cisco employee - “Correct, there is no WCCP and no plans for it either... :(”
https://supportforums.cisco.com/discussion/12227051/ios-xr-and-wccp

WCCP supported platforms –

https://supportforums.cisco.com/document/133201/wccp-platform-support-overview

Our ASR9001 has no commands that support wccp anywhere…





Garth van Sittert | Chief Executive Officer
(BSC Physics & Computer Science)
Tel: 087 135  Ext: 201
ga...@bitco.co.za
bitco.co.za



From: Yuri [mailto:yvoi...@gmail.com]
Sent: Monday, 24 October 2016 12:12 PM
To: Garth van Sittert | BitCo ; 
squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid with ASR9001




24.10.2016 13:16, Garth van Sittert | BitCo writes:
Yes, it looks like all of the ASR9000 range which makes use of IOS XR no longer 
supports WCCP.
Please provide a proof link from Cisco.

Policy Based Routing has been replaced by ACL Based Forwarding or ABF.

So? This is a terminology difference, if any.





From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Yuri Voinov
Sent: Sunday, 23 October 2016 9:35 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid with ASR9001


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256



23.10.2016 23:16, Garth van Sittert | BitCo writes:

> Good day all
>
> Has anyone had any experience setting up Squid with any IOS XR Cisco
> routers?  The Cisco ASR9000 range doesn’t support WCCP and I cannot
> find any examples online.

Seriously, the entire range?

Who said that it does not support WCCP? It is obliged to support it, if
only because it is not a home dish soap. That's when Cisco writes in the
documentation that it is not supported - and then we cry.

> I have also found quotes regarding PBR on the ASR9000… “With IOS XR
> traditional policy-based routing (PBR) is history”

What crazy forum is saying that? PBR is fundamental functionality for a
router, especially for a router at this level. I find it hard to imagine
a company that completely cuts down its business by releasing a device
incompatible with everything else. This is only possible in OpenSource,
but not in a huge IT-business company. AFAIK.

> I plan to use this on our 10Gbps ISP traffic to improve customer
> experience…

There are no examples because solutions at such a level rarely use Squid.
Personally, I do not have a machine to play with and write an example for
Squid's wiki. As you know, a router is not a trinket you give your wife
as a Christmas present.

> Garth

-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEbBAEBCAAGBQJYDRDSAAoJENNXIZxhPexG7roH90gh9VtKKk4g7WKscldhl5ki
tjs5d46Wl6uIWOI0XyK7+94wKGV2oE4cAnoTqmDesxe058r8H67djDJvehIW9s1Q
zjd3DI4Th8QXEzMn5LnxqVSYz3WmANV5Jf/UsUQsUzPzgW2VHOpA8YfLPfEgbvhZ
zeJRG0gMg5fgyFlt90pK1p0v6sAOEB2leigxiWBXI27BEDajBnnSfbqeMvqanDgI
9Cwh1itpkukDNeU7e/e9y1sHLAJrJ8Z0V7ag2iqYb4KJv/SqkcCAsjX1aSv3VpDE
M4OvE+2tRT3v8ud4gIQroQmWrbNKCaBFgKI1tM82ojErj6FgTmv/5FjxHGq1Cw==
=YLEX
-END PGP SIGNATURE-

Re: [squid-users] [External] Re: Issue when connecting to apple APN

2016-10-24 Thread Alaa Hassan Barqawi
I used the same configuration as this URL 
https://panaharjuna.wordpress.com/2009/12/17/speed-your-squid-server-using-google-public-dns/
But unfortunately, when it comes to resolving the Apple APN gateway it fails and
returns 503 Service Unavailable.
Any hope of solving it, please?
Access.log

1477295971.100  0 192.168.186.37 TCP_MISS/503 0 CONNECT 
gateway.push.apple.com:2195 - HIER_NONE/- -




-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Antony Stone
Sent: Monday, October 24, 2016 12:42 PM
To: squid-users@lists.squid-cache.org
Subject: [External] Re: [squid-users] Issue when connecting to apple APN

On Monday 24 October 2016 at 11:36:34, Antony Stone wrote:

> On Monday 24 October 2016 at 11:27:17, Alaa Hassan Barqawi wrote:
> > Dears,
> > I am facing issue in connecting with apple APN gateway.push.apple.com :
> > 2195 The name cannot be resolved although I am using google DNS 
> > servers and it throws an error Unable to determine IP address from 
> > host name gateway.push.apple.com The DNS server returned:
> > No DNS records
> 
> There is no A (or ) record, but it is a CNAME:
> 
> $ dig gateway.push.apple.com
> 
> ; <<>> DiG 9.8.4-rpz2+rl005.12-P1 <<>> gateway.push.apple.com ;; 
> global options: +cmd ;; Got answer:
> ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4722 ;; flags: qr 
> rd ra; QUERY: 1, ANSWER: 9, AUTHORITY: 0, ADDITIONAL: 0
> 
> ;; QUESTION SECTION:
> ;gateway.push.apple.com.IN  A
> 
> ;; ANSWER SECTION:
> gateway.push.apple.com. 193 IN  CNAME   gateway.push-
> apple.com.akadns.net.
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.129.25
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.134.21
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.135.152
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.135.149
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.134.150
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.136.184
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.137.150
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.142.26
> 
> ;; Query time: 19 msec
> ;; SERVER: 80.68.80.24#53(80.68.80.24) ;; WHEN: Mon Oct 24 10:35:09 
> 2016 ;; MSG SIZE  rcvd: 215
> 
> Are you using your own DNS server, or someone else's?

I apologise for not noticing "I am using Google DNS servers".

However, sending the above query to 8.8.8.8 gives me precisely the same result.


Antony.

-- 
The Magic Words are Squeamish Ossifrage.

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Yuri

No, Amos, I'm not trolling you or the other developers.

I just really do not understand why there is a caching proxy which can
cache almost nothing in the modern world, and which in the vanilla
version gives a maximum of a 10-30% byte hit. For me personally, it needs
no justification and no explanation - only results.

I cannot explain to management why there is no result by referring to your
explanations or descriptions of standards. I think that's understandable.

At the present time, obtaining any acceptable result requires a hell of a
lot of effort. Maintaining such an installation is not easy.

And the caching level falls with every new version - which is very easy to
check - and that is very difficult to explain to management, is it not?

It's not my imagination - this is confirmed by dozens of Squid
administrators, including people personally familiar to me. Therefore, I
object to claims that I lie or deliberately lead someone else astray.



24.10.2016 12:03, Amos Jeffries writes:

On 24/10/2016 6:28 a.m., gar...@comnet.uz wrote:

On 2016-10-23 18:31, Amos Jeffries wrote:

On 23/10/2016 2:32 a.m., garryd wrote:

Since I started use Squid, it's configuration always RFC compliant by
default, _but_ there were always knobs for users to make it HTTP
violent. It was in hands of users to decide how to handle a web
resource. Now it is not always possible, and the topic is an evidence.
For example, in terms of this topic, users can't violate this RFC
statement [1]:

A Vary field value of "*" signals that anything about the request
might play a role in selecting the response representation, possibly
including elements outside the message syntax (e.g., the client's
network address).  A recipient will not be able to determine whether
this response is appropriate for a later request without forwarding
the request to the origin server.  A proxy MUST NOT generate a Vary
field with a "*" value.

[1] https://tools.ietf.org/html/rfc7231#section-7.1.4


Please name the option in any version of Squid which allowed Squid to
cache those "Vary: *" responses.

No such option ever existed. For the 20+ years Vary has existed Squid
has behaved in the same way it does today. For all that time you did not
notice these responses.

You are absolutely right, but there were not such abuse vector in the
past (at least in my practice). There were tools provided by devs to
admins to protect against trending abuse cases.

What trend? There is exactly one mentioned URL that I'm aware of, the
Chrome browser download URL. I've posted two reasons why Chrome uses the
Vary:* header. Just opinions of mine, but formed after actual
discussions with the Chrome developers some years back.


[I very much dislike writing this. But you seem to have been sucked in
and deserve to know the history.]

All the fuss that is going on AFAICS was started by Yuri. His comment
history here and in bugzilla, and in private responses, ranges from
uncompromising "cache everything no matter what - do what I say, now!"
(repeatedly, in unrelated bugzilla reports), to "f*ck the RFCs and anyone
following them, just store everything, I don't care about what happens"
(this morning's post), to personal attacks against anyone who mentions
that the previous stance might have problems (all the "Squid developers
believe/say/..." comments - none of which match what the team has
actually said to him or believes).

There is one other email address which changes its name occasionally and
posts almost exactly the same words as Yuri's. So it looks to me like Yuri
and some sock puppets are performing a campaign to spread lies and FUD
about Squid and hurt the people doing work on it.

Not exactly a good way to get people to do things for free. But it seems
to have worked on getting you and a few others now doing the coding part
for him at no cost, and I have now wasted time responding to you and
thinking of a solution for it that might get accepted for merge.


This particular topic is not the first to have such behaviour by Yuri.
There have been other things where someone made a mistake (overlooked
something) and all hell full of insults broke loose at them. And several
other cases where missing features in Squid did not get instant
obedience to quite blunt and insulting demands. Followed by weeks of
insults until the bug was fixed by other people - then suddenly polite
Yuri comes back overnight.


As a developer, I personally decided not to write the requested code.
Not in the way demanded. This seems to have upset Yuri who has taken to
insulting me and the rest of the dev team as a whole. I'm not sure if he
is trolling to intentionally cause the above mentioned effects, or
really in need of medical assistance to deal with work related stress.

[/history]



So, the question arose:
what changed in Squid development policy?

In policy: Nothing I'm aware of in the past 10 years.

What changed on the Internet? A new bunch of RFCs came out, and the
servers and clients Squid talks to all got updated to follow those
documents more closely.

Re: [squid-users] Squid with ASR9001

2016-10-24 Thread Yuri



24.10.2016 13:16, Garth van Sittert | BitCo wrote:


Yes, it looks like all of the ASR9000 range which makes use of IOS XR 
no longer supports WCCP.



Please provide a link to Cisco documentation that confirms this.


Policy Based Routing has been replaced by ACL Based Forwarding or ABF.


So? That is just a terminology difference, if anything.
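
For readers who land here looking for how to send port 80 towards a Squid
box on IOS XR without WCCP, here is a rough, untested sketch of what
ABF-based redirection is supposed to look like (syntax recalled from
Cisco's ABF documentation and not verified on an ASR9001; the proxy
next-hop 192.0.2.10 and the interface name are placeholders):

! Hypothetical ABF ACL: push HTTP towards the proxy, forward the rest normally.
ipv4 access-list REDIRECT-TO-SQUID
 10 permit tcp any any eq www nexthop1 ipv4 192.0.2.10
 20 permit ipv4 any any
!
interface GigabitEthernet0/0/0/0
 ipv4 access-group REDIRECT-TO-SQUID ingress

The Squid side would still need an interception-capable http_port
(intercept or tproxy) on the machine receiving that traffic.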


From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
On Behalf Of Yuri Voinov
Sent: Sunday, 23 October 2016 9:35 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid with ASR9001


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256



23.10.2016 23:16, Garth van Sittert | BitCo wrote:
> Good day all
>
> Has anyone had any experience setting up Squid with any IOS XR Cisco
> routers?  The Cisco ASR9000 range doesn’t support WCCP and I cannot
> find any examples online.
Seriously, the entire range?

Who said that it does not support WCCP? It must support it, if only 
because this is not some cheap household gadget. If Cisco writes in the 
documentation that it is not supported - then we can cry about it.

> I have also found quotes regarding PBR on the ASR9000… “With IOS XR
> traditional policy-based routing (PBR) is history”
What crazy forum is saying that? PBR is fundamental functionality for a 
router, especially for a router of this class. I find it difficult to 
imagine a company completely cutting off its own business by releasing a 
device that is incompatible with everything. That is only possible in 
open source, not in a huge IT business. AFAIK.

> I plan to use this on our 10Gbps ISP traffic to improve customer
> experience…
There are no examples because solutions at that level rarely use Squid. 
Personally, I do not have such a machine to play with and write up an 
example for Squid's wiki. As you know, routers of this class are not 
handed out as Christmas trinkets.

> Garth

-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEbBAEBCAAGBQJYDRDSAAoJENNXIZxhPexG7roH90gh9VtKKk4g7WKscldhl5ki
tjs5d46Wl6uIWOI0XyK7+94wKGV2oE4cAnoTqmDesxe058r8H67djDJvehIW9s1Q
zjd3DI4Th8QXEzMn5LnxqVSYz3WmANV5Jf/UsUQsUzPzgW2VHOpA8YfLPfEgbvhZ
zeJRG0gMg5fgyFlt90pK1p0v6sAOEB2leigxiWBXI27BEDajBnnSfbqeMvqanDgI
9Cwh1itpkukDNeU7e/e9y1sHLAJrJ8Z0V7ag2iqYb4KJv/SqkcCAsjX1aSv3VpDE
M4OvE+2tRT3v8ud4gIQroQmWrbNKCaBFgKI1tM82ojErj6FgTmv/5FjxHGq1Cw==
=YLEX
-END PGP SIGNATURE-



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Issue when connecting to apple APN

2016-10-24 Thread Antony Stone
On Monday 24 October 2016 at 11:36:34, Antony Stone wrote:

> On Monday 24 October 2016 at 11:27:17, Alaa Hassan Barqawi wrote:
> > Dears,
> > I am facing issue in connecting with apple APN gateway.push.apple.com :
> > 2195 The name cannot be resolved although I am using google DNS servers
> > and it throws an error Unable to determine IP address from host name
> > gateway.push.apple.com The DNS server returned:
> > No DNS records
> 
> There is no A (or AAAA) record, but it is a CNAME:
> 
> $ dig gateway.push.apple.com
> 
> ; <<>> DiG 9.8.4-rpz2+rl005.12-P1 <<>> gateway.push.apple.com
> ;; global options: +cmd
> ;; Got answer:
> ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4722
> ;; flags: qr rd ra; QUERY: 1, ANSWER: 9, AUTHORITY: 0, ADDITIONAL: 0
> 
> ;; QUESTION SECTION:
> ;gateway.push.apple.com.   IN  A
> 
> ;; ANSWER SECTION:
> gateway.push.apple.com. 193 IN  CNAME   gateway.push-
> apple.com.akadns.net.
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.129.25
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.134.21
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.135.152
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.135.149
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.134.150
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.136.184
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.137.150
> gateway.push-apple.com.akadns.net. 60 IN A  17.188.142.26
> 
> ;; Query time: 19 msec
> ;; SERVER: 80.68.80.24#53(80.68.80.24)
> ;; WHEN: Mon Oct 24 10:35:09 2016
> ;; MSG SIZE  rcvd: 215
> 
> Are you using your own DNS server, or someone else's?

I apologise for not noticing "I am using Google DNS servers".

However, sending the above query to 8.8.8.8 gives me precisely the same 
result.
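
For reference, the query against Google's resolver uses the same syntax,
just with an explicit server; the answer section is omitted here because
the A records rotate frequently:

$ dig @8.8.8.8 gateway.push.apple.com A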


Antony.

-- 
The Magic Words are Squeamish Ossifrage.

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Issue when connecting to apple APN

2016-10-24 Thread Antony Stone
On Monday 24 October 2016 at 11:27:17, Alaa Hassan Barqawi wrote:

> Dears,
> I am facing issue in connecting with apple APN gateway.push.apple.com :
> 2195 The name cannot be resolved although I am using google DNS servers
> and it throws an error Unable to determine IP address from host name
> gateway.push.apple.com The DNS server returned:
> No DNS records

There is no A (or AAAA) record, but it is a CNAME:

$ dig gateway.push.apple.com

; <<>> DiG 9.8.4-rpz2+rl005.12-P1 <<>> gateway.push.apple.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4722
;; flags: qr rd ra; QUERY: 1, ANSWER: 9, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;gateway.push.apple.com.   IN  A

;; ANSWER SECTION:
gateway.push.apple.com. 193 IN  CNAME   gateway.push-
apple.com.akadns.net.
gateway.push-apple.com.akadns.net. 60 IN A  17.188.129.25
gateway.push-apple.com.akadns.net. 60 IN A  17.188.134.21
gateway.push-apple.com.akadns.net. 60 IN A  17.188.135.152
gateway.push-apple.com.akadns.net. 60 IN A  17.188.135.149
gateway.push-apple.com.akadns.net. 60 IN A  17.188.134.150
gateway.push-apple.com.akadns.net. 60 IN A  17.188.136.184
gateway.push-apple.com.akadns.net. 60 IN A  17.188.137.150
gateway.push-apple.com.akadns.net. 60 IN A  17.188.142.26

;; Query time: 19 msec
;; SERVER: 80.68.80.24#53(80.68.80.24)
;; WHEN: Mon Oct 24 10:35:09 2016
;; MSG SIZE  rcvd: 215

Are you using your own DNS server, or someone else's?


Antony.

-- 
"There is no reason for any individual to have a computer in their home."

 - Ken Olsen, President of Digital Equipment Corporation (DEC, later consumed 
by Compaq, later merged with HP)

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Issue when connecting to apple APN

2016-10-24 Thread Alaa Hassan Barqawi
Dears,
I am facing an issue connecting to the Apple APN host
gateway.push.apple.com on port 2195. The name cannot be resolved,
although I am using Google DNS servers, and it throws an error:
Unable to determine IP address from host name gateway.push.apple.com
The DNS server returned:
No DNS records

Thanks for your support


This e-mail message and all attachments transmitted with it are intended solely 
for the use of the addressee and may contain legally privileged and 
confidential information. If the reader of this message is not the intended 
recipient, or an employee or agent responsible for delivering this message to 
the intended recipient, you are hereby notified that any dissemination, 
distribution, copying, or other use of this message or its attachments is 
strictly prohibited. If you have received this message in error, please notify 
the sender immediately by replying to this message and please delete it from 
your computer.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Garri Djavadyan
On Mon, 2016-10-24 at 19:03 +1300, Amos Jeffries wrote:
> On 24/10/2016 6:28 a.m., gar...@comnet.uz wrote:
> > 
> > On 2016-10-23 18:31, Amos Jeffries wrote:
> > > 
> > > On 23/10/2016 2:32 a.m., garryd wrote:
> > > > 
> > > > Since I started using Squid, its configuration has always been
> > > > RFC compliant by default, _but_ there were always knobs for users
> > > > to make it violate HTTP. It was in the hands of users to decide
> > > > how to handle a web resource. Now that is not always possible,
> > > > and this topic is evidence of it.
> > > > For example, in terms of this topic, users can't violate this
> > > > RFC
> > > > statement [1]:
> > > > 
> > > >    A Vary field value of "*" signals that anything about the
> > > > request
> > > >    might play a role in selecting the response representation,
> > > > possibly
> > > >    including elements outside the message syntax (e.g., the
> > > > client's
> > > >    network address).  A recipient will not be able to determine
> > > > whether
> > > >    this response is appropriate for a later request without
> > > > forwarding
> > > >    the request to the origin server.  A proxy MUST NOT generate
> > > > a Vary
> > > >    field with a "*" value.
> > > > 
> > > > [1] https://tools.ietf.org/html/rfc7231#section-7.1.4
> > > 
> > > 
> > > Please name the option in any version of Squid which allowed
> > > Squid to
> > > cache those "Vary: *" responses.
> > > 
> > > No such option ever existed. For the 20+ years Vary has existed
> > > Squid
> > > has behaved in the same way it does today. For all that time you
> > > did not
> > > notice these responses.
> > 
> > You are absolutely right, but there was no such abuse vector in the
> > past (at least in my practice). There were tools provided by the
> > devs to admins to protect against trending abuse cases.
> 
> What trend? There is exactly one mentioned URL that I'm aware of, the
> Chrome browser download URL. I've posted two reasons why Chrome uses
> the
> Vary:* header. Just opinions of mine, but formed after actual
> discussions with the Chrome developers some years back.
> 
> 
> [I very much dislike writing this. But you seem to have been sucked
> in
> and deserve to know the history.]
> 
> All the fuss that is going on AFAICS was started by Yuri. His comment
> history here and in bugzilla, and in private responses, ranges from
> uncompromising "cache everything no matter what - do what I say, now!"
> (repeatedly, in unrelated bugzilla reports), to "f*ck the RFCs and
> anyone following them, just store everything, I don't care about what
> happens" (this morning's post), to personal attacks against anyone who
> mentions that the previous stance might have problems (all the "Squid
> developers believe/say/..." comments - none of which match what the
> team has actually said to him or believes).
> 
> There is one other email address which changes its name occasionally
> and posts almost exactly the same words as Yuri's. So it looks to me
> like Yuri and some sock puppets are performing a campaign to spread
> lies and FUD about Squid and hurt the people doing work on it.
> 
> Not exactly a good way to get people to do things for free. But it
> seems
> to have worked on getting you and a few others now doing the coding
> part
> for him at no cost, and I have now wasted time responding to you and
> thinking of a solution for it that might get accepted for merge.
> 
> 
> This particular topic is not the first to have such behaviour by
> Yuri.
> There have been other things where someone made a mistake (overlooked
> something) and all hell full of insults broke loose at them. And
> several
> other cases where missing features in Squid did not get instant
> obedience to quite blunt and insulting demands. Followed by weeks of
> insults until the bug was fixed by other people - then suddenly
> polite
> Yuri comes back overnight.
> 
> 
> As a developer, I personally decided not to write the requested code.
> Not in the way demanded. This seems to have upset Yuri who has taken
> to
> insulting me and the rest of the dev team as a whole. I'm not sure if
> he
> is trolling to intentionally cause the above mentioned effects, or
> really in need of medical assistance to deal with work related
> stress.
> 
> [/history]
> 
> 
> > 
> > So, the question arose:
> > what changed in Squid development policy?
> 
> In policy: Nothing I'm aware of in the past 10 years.
> 
> What changed on the Internet? A new bunch of RFCs came out, and the
> servers and clients Squid talks to all got updated to follow those
> documents more closely.
> 
> What changed in Squid? The dev team have been slowly adding the new
> abilities to Squid. One by one, it's only ~90% (maybe less) compliant
> with the MUST conditions, and not even close to that on the SHOULDs,
> MAYs, and implied processing abilities.
> 
> 
> What do you think should happen to Squid when all the software it
> talks
> to speaks and expects what the RFCs say they should expect from
> recipie

Re: [squid-users] ERROR: Cannot connect to 127.0.0.1:3128

2016-10-24 Thread Михаил
Hi! Could you write me whether you have managed to reproduce the problem
that I have? Best regards, Misha.

14.10.2016, 18:51, "Михаил" wrote:

Hi. Ready.

# squidclient -vv mgr:info | head -n 40
stub time| WARNING: BCP 177 violation. IPv6 transport forced OFF by build parameters.
verbosity level set to 2
Request:
GET cache_object://localhost/info HTTP/1.0
Host: localhost
User-Agent: squidclient/3.5.21
Accept: */*
Connection: close

.
Transport detected: IPv4-only
Resolving localhost ...
Connecting... localhost (127.0.0.1:3128)
Connected to: localhost (127.0.0.1:3128)
Sending HTTP request ... done.
HTTP/1.1 403 Forbidden
Server: squid
Mime-Version: 1.0
Date: Fri, 14 Oct 2016 10:46:56 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3676
X-Squid-Error: ERR_ACCESS_DENIED 0
X-Cache: MISS from uis-proxy-rop.office.ipe.corp
Via: 1.1 uis-proxy-rop.office.ipe.corp (squid)
Connection: close

ERROR: The requested URL could not be retrieved
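
A 403 with X-Squid-Error: ERR_ACCESS_DENIED on a cache_object:// request
normally means the manager request is being denied by the http_access
rules, not that the connection itself failed. A sketch of the relevant
stock rules from a default squid.conf (names and ordering in the real
config may differ):

# http_access rules are evaluated top-down; the first match wins.
# localhost and manager are built-in ACLs in Squid 3.5.
http_access allow localhost manager
http_access deny manager

If an earlier http_access deny rule matches 127.0.0.1 before these two
lines, squidclient mgr:info from the local machine will get exactly this
403.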

Re: [squid-users] Squid with ASR9001

2016-10-24 Thread Garth van Sittert | BitCo
Yes, it looks like all of the ASR9000 range which makes use of IOS XR no longer 
supports WCCP.

Policy Based Routing has been replaced by ACL Based Forwarding or ABF.




From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Yuri Voinov
Sent: Sunday, 23 October 2016 9:35 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid with ASR9001


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256



23.10.2016 23:16, Garth van Sittert | BitCo wrote:
> Good day all
>
> Has anyone had any experience setting up Squid with any IOS XR Cisco
> routers?  The Cisco ASR9000 range doesn’t support WCCP and I cannot
> find any examples online.
Seriously, the entire range?

Who said that it does not support WCCP? It must support it, if only 
because this is not some cheap household gadget. If Cisco writes in the 
documentation that it is not supported - then we can cry about it.
> I have also found quotes regarding PBR on the ASR9000… “With IOS XR
> traditional policy-based routing (PBR) is history”
What crazy forum is saying that? PBR is fundamental functionality for a 
router, especially for a router of this class. I find it difficult to 
imagine a company completely cutting off its own business by releasing a 
device that is incompatible with everything. That is only possible in 
open source, not in a huge IT business. AFAIK.
> I plan to use this on our 10Gbps ISP traffic to improve customer
> experience…
There are no examples because solutions at that level rarely use Squid. 
Personally, I do not have such a machine to play with and write up an 
example for Squid's wiki. As you know, routers of this class are not 
handed out as Christmas trinkets.
> Garth

-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEbBAEBCAAGBQJYDRDSAAoJENNXIZxhPexG7roH90gh9VtKKk4g7WKscldhl5ki
tjs5d46Wl6uIWOI0XyK7+94wKGV2oE4cAnoTqmDesxe058r8H67djDJvehIW9s1Q
zjd3DI4Th8QXEzMn5LnxqVSYz3WmANV5Jf/UsUQsUzPzgW2VHOpA8YfLPfEgbvhZ
zeJRG0gMg5fgyFlt90pK1p0v6sAOEB2leigxiWBXI27BEDajBnnSfbqeMvqanDgI
9Cwh1itpkukDNeU7e/e9y1sHLAJrJ8Z0V7ag2iqYb4KJv/SqkcCAsjX1aSv3VpDE
M4OvE+2tRT3v8ud4gIQroQmWrbNKCaBFgKI1tM82ojErj6FgTmv/5FjxHGq1Cw==
=YLEX
-END PGP SIGNATURE-
[BitCo Email 
Footer]
The information contained in this message is intended solely for the individual 
to whom it is specifically and originally addressed. This message and its 
contents may contain confidential or privileged information from BitCo. If you 
are not the intended recipient, you are hereby notified that any disclosure or 
distribution, is strictly prohibited. If you receive this email in error, 
please notify BitCo immediately and delete it. BitCo does not accept any 
liability or responsibility if action is taken in reliance on the contents of 
this information.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users