Re: [squid-users] New Squid 3.5 reconfigure causes service down

2017-10-05 Thread Amos Jeffries

On 06/10/17 05:44, Nicola Ferrari (#554252) wrote:

On 05/10/2017 18:25, Alex Rousskov wrote:

The "couple of minutes" part might be related to your upgrade and, if
so, you may be able to avoid such delays. For list readers not familiar
with Debian releases, which _Squid_ version are you upgrading from?



I was running squid 3.4 on top of Debian 8 (jessie)
I upgraded to squid 3.5 on top of Debian 9 (stretch)


I suggest starting by figuring out what Squid is doing during those
"couple of minutes", if you have not already.


What I notice by checking cache.log is that it stops for a while on

helperOpenServers: Starting 1/60 'ntlm_auth' processes
2017/10/05 11:36:06 kid1| Starting new ntlmauthenticator helpers...

This was not a usual behaviour on Squid 3.4;


The behaviour of starting helpers has been present since forever - 
though it may not have been logged correctly. The "Starting N/N 
'helper_name' processes" log entry was added with dynamic helpers in 
Squid-3.2, so it should have been visible in Jessie.


The 1/60 indicates that the number of ntlm_auth helpers running was 1 
less than your startup=N configuration value. The N defaults to the max 
value (60) if not configured explicitly.
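
If the delay comes from spawning all helpers at once, the startup burst can 
be tuned down explicitly. A minimal sketch (the helper path and the numbers 
are assumptions; adjust to your setup):

```
# squid.conf: spawn only a few ntlm_auth helpers at (re)start and
# let Squid fork more on demand, up to the maximum of 60
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 60 startup=5 idle=5
```

With startup=5, a reconfigure only has to spawn 5 processes before serving 
traffic again, instead of all 60.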





At the moment of the upgrade, I had to adjust various paths from
"/squid3" to "/squid" ..

I checked authenticators path and other occurrences in conf file,
everything seems to be ok.


Did Squid start properly and at least seem to work okay after the 
upgrade and before you manually ran the "-k reconfigure" ?


FYI: Stretch brings somewhat deeper SELinux integration in the 
background. The packaged init script updates the SELinux permissions for 
cache_dir. But if you have any custom directories for other things you 
may need to run /sbin/restorecon on them manually after any changes to 
the path or OS permissions - or do it anyway just in case SELinux is 
being confused.
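
For example (the paths are illustrative assumptions; use your actual 
cache_dir and log locations):

```
# re-apply the expected SELinux file contexts, recursively and verbosely
/sbin/restorecon -Rv /var/spool/squid
/sbin/restorecon -Rv /var/log/squid
```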




Just for testing purposes, I would try my config on a new clean install,
just to be sure this is not related to the upgrade in some way, and let
you know!



Please also try a full clean restart of Squid:

 Shut down completely using the init script. If any 'squid' or 
'(squid-N)' process remains after that, use kill -9 to halt that process, 
and manually delete the squid.pid / squid3.pid file if any still exists.


 Starting Squid using the Stretch package init script should then 
ensure that the expected paths have the right permissions, and run the 
'-k parse' checks for you.
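
As a sketch, the full clean restart might look like this on Stretch (the 
service name and PID file locations are assumptions based on the Debian 
defaults):

```
# 1. shut down completely via the packaged init
systemctl stop squid
# 2. check for leftover 'squid' or '(squid-N)' processes
pgrep -a squid
# 3. only if anything remained:
pkill -9 squid
rm -f /var/run/squid.pid /var/run/squid3.pid
# 4. start again via the init script (re-applies permissions, runs -k parse)
systemctl start squid
```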


Since this is the Samba NTLM helper you should also check that the 
Samba, winbind etc components still have it enabled. Behaviour is 
undefined if the OS components are only partially functioning.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL Bump Failures with Google and Wikipedia

2017-10-05 Thread Rafael Akchurin
Hello Eliezer,

From desktop Firefox/Chrome, go to YouTube. The responses will be br (Brotli) encoded.

Best regards,
Rafael Akchurin

> On 6 Oct 2017 at 02:43, Eliezer Croitoru  wrote:
> 
> Hey Yuri and Rafael,
> 
> I have tried to find a site which uses Brotli compression, but have yet to find one.
> Also I have not seen any Brotli request headers in Firefox or Chrome; maybe 
> there is a specific browser which uses it?
> 
> Thanks,
> Eliezer
> 
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
> 
> 
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Yuri
> Sent: Sunday, October 1, 2017 04:08
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] SSL Bump Failures with Google and Wikipedia
> 
> I guess in HTTP headers. =-O :-D
> 
> 
> On 01.10.2017 at 7:05, Eliezer Croitoru wrote:
>> Hey Rafael,
>> 
>> Where have you seen the details about brotli being used?
>> 
>> Thanks,
>> Eliezer
>> 
>> 
>> Eliezer Croitoru
>> Linux System Administrator
>> Mobile: +972-5-28704261
>> Email: elie...@ngtech.co.il
>> 
>> 
>> 
>> -Original Message-
>> From: Rafael Akchurin [mailto:rafael.akchu...@diladele.com]
>> Sent: Sunday, October 1, 2017 01:16
>> To: Jeffrey Merkey 
>> Cc: Eliezer Croitoru ; squid-users 
>> 
>> Subject: Re: [squid-users] SSL Bump Failures with Google and Wikipedia
>> 
>> Hello Jeff,
>> 
>> Do not forget Google and YouTube are now using brotli encoding 
>> extensively, not only gzip.
>> 
>> Best regards,
>> Rafael Akchurin
>> 
>>> On 30 Sep 2017 at 23:49, Jeffrey Merkey  wrote:
 On 9/30/17, Eliezer Croitoru  wrote:
 Hey Jeffrey,
 
 What happens when you disable the next icap service this way:
 icap_service service_avi_resp respmod_precache icap://127.0.0.1:1344/cherokee bypass=0
 adaptation_access service_avi_resp deny all
 
 Is it still the same?
 What I suspect is that the requests are defined to accept gzip-compressed 
 objects and the ICAP service is not gunzipping them, which results in 
 what you see.
 
 To make sure that squid is not at fault here, try to disable both 
 ICAP services and then add them back one at a time and see which part 
 of this triangle is giving you trouble.
 I enhanced an ICAP library written in Go at:
 https://github.com/elico/icap
 
 And I have a couple of examples on how to work with HTTP requests and 
 responses at:
 https://github.com/andybalholm/redwood/
 https://github.com/andybalholm/redwood/search?utf8=%E2%9C%93&q=gzip&type=
 
 Let me know if you need help finding out the issue.
 
 All The Bests,
 Eliezer
 
 
 Eliezer Croitoru
 Linux System Administrator
 Mobile: +972-5-28704261
 Email: elie...@ngtech.co.il
 
 
 
 -Original Message-
 From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
 On Behalf Of Jeffrey Merkey
 Sent: Saturday, September 30, 2017 23:28
 To: squid-users 
 Subject: [squid-users] SSL Bump Failures with Google and Wikipedia
 
 Hello All,
 
 I have been working with the squid server and icap and I have been 
 running into problems with content cached from google and wikipedia.
 Some sites using https, such as Centos.org work perfectly with ssl 
 bumping and I get the decrypted content as html and it's readable.
 Other sites, such as google and wikipedia return what looks like 
 encrypted traffic, or perhaps mime encoded data, I am not sure which.
 
 Are there cases where squid will default to direct mode and not 
 decrypt the traffic?  I am using the latest squid server 3.5.27.  I 
 really would like to get this working with google and wikipedia.  I 
 reviewed the page source code from the browser viewer and it looks 
 nothing like the data I am getting via the icap server.
 
 Any assistance would be greatly appreciated.
 
 The config I am using is:
 
 #
 # Recommended minimum configuration:
 #
 
 # Example rule allowing access from your local networks.
 # Adapt to list your (internal) IP networks from where browsing
 # should be allowed
 
 acl localnet src 127.0.0.1
 acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
 acl localnet src 172.16.0.0/12  # RFC1918 possible internal network 
 acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
 acl localnet src fc00::/7   # RFC 4193 local private network range
 acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
 
 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher

Re: [squid-users] YouTube\GoogleVideo caching is here, tested and verified.

2017-10-05 Thread Mohd Akhbar
Hi,
would you mind sharing the detailed steps, or have you documented it
anywhere that we can refer to? For me, I would like to test it in my school
environment, and it may also be beneficial for education purposes. Another
thing: does your solution save a specific video resolution like 720p, or is
it VBR?

And what Squid version is available for this solution? I'm using version 3.5
(latest stable).

Thank you

On Wed, Oct 4, 2017 at 7:54 AM, Eliezer Croitoru 
wrote:

> Hey All,
>
> After quite some time I have compiled a solution for YouTube video caching
> for PCs.
> The next step would be to cache it for mobile devices but I am looking for
> someone who wants this solution.
> The only drawback is that this solution is designed for
> intercept\transparent proxies and not tproxy but it might work on these
> too.
>
> The installation is pretty simple on a systemd based OS like Debian 8+9,
> Ubuntu 16.04, CentOS\RHEL 7, Oracle Enterprise Linux 7, and openSUSE Leap
> and above.
> It consists of four components:
> - local redis DB
> - ICAP service binary
> - ICAP service system service unit file
> - ruby StoreID script
>
> And of course a special squid.conf with ssl_bump enabled.
>
> For now I will not package it using RPM or DEB and will use a shell script
> or README.md that describes the installation and usage steps.
> Contact me here or by a PM.
>
> Eliezer
>
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL Bump Failures with Google and Wikipedia

2017-10-05 Thread Eliezer Croitoru
Hey Yuri and Rafael,

I have tried to find a site which uses Brotli compression, but have yet to find one.
Also I have not seen any Brotli request headers in Firefox or Chrome; maybe 
there is a specific browser which uses it?

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Yuri
Sent: Sunday, October 1, 2017 04:08
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SSL Bump Failures with Google and Wikipedia

I guess in HTTP headers. =-O :-D


On 01.10.2017 at 7:05, Eliezer Croitoru wrote:
> Hey Rafael,
>
> Where have you seen the details about brotli being used?
>
> Thanks,
> Eliezer
>
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
>
> -Original Message-
> From: Rafael Akchurin [mailto:rafael.akchu...@diladele.com]
> Sent: Sunday, October 1, 2017 01:16
> To: Jeffrey Merkey 
> Cc: Eliezer Croitoru ; squid-users 
> 
> Subject: Re: [squid-users] SSL Bump Failures with Google and Wikipedia
>
> Hello Jeff,
>
> Do not forget Google and YouTube are now using brotli encoding 
> extensively, not only gzip.
>
> Best regards,
> Rafael Akchurin
>
>> On 30 Sep 2017 at 23:49, Jeffrey Merkey  wrote:
>>> On 9/30/17, Eliezer Croitoru  wrote:
>>> Hey Jeffrey,
>>>
>>> What happens when you disable the next icap service this way:
>>> icap_service service_avi_resp respmod_precache icap://127.0.0.1:1344/cherokee bypass=0
>>> adaptation_access service_avi_resp deny all
>>>
>>> Is it still the same?
>>> What I suspect is that the requests are defined to accept gzip-compressed 
>>> objects and the ICAP service is not gunzipping them, which results in 
>>> what you see.
>>>
>>> To make sure that squid is not at fault here, try to disable both 
>>> ICAP services and then add them back one at a time and see which part 
>>> of this triangle is giving you trouble.
>>> I enhanced an ICAP library written in Go at:
>>> https://github.com/elico/icap
>>>
>>> And I have a couple of examples on how to work with HTTP requests and 
>>> responses at:
>>> https://github.com/andybalholm/redwood/
>>> https://github.com/andybalholm/redwood/search?utf8=%E2%9C%93&q=gzip&type=
>>>
>>> Let me know if you need help finding out the issue.
>>>
>>> All The Bests,
>>> Eliezer
>>>
>>> 
>>> Eliezer Croitoru
>>> Linux System Administrator
>>> Mobile: +972-5-28704261
>>> Email: elie...@ngtech.co.il
>>>
>>>
>>>
>>> -Original Message-
>>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
>>> On Behalf Of Jeffrey Merkey
>>> Sent: Saturday, September 30, 2017 23:28
>>> To: squid-users 
>>> Subject: [squid-users] SSL Bump Failures with Google and Wikipedia
>>>
>>> Hello All,
>>>
>>> I have been working with the squid server and icap and I have been 
>>> running into problems with content cached from google and wikipedia.
>>> Some sites using https, such as Centos.org work perfectly with ssl 
>>> bumping and I get the decrypted content as html and it's readable.
>>> Other sites, such as google and wikipedia return what looks like 
>>> encrypted traffic, or perhaps mime encoded data, I am not sure which.
>>>
>>> Are there cases where squid will default to direct mode and not 
>>> decrypt the traffic?  I am using the latest squid server 3.5.27.  I 
>>> really would like to get this working with google and wikipedia.  I 
>>> reviewed the page source code from the browser viewer and it looks 
>>> nothing like the data I am getting via the icap server.
>>>
>>> Any assistance would be greatly appreciated.
>>>
>>> The config I am using is:
>>>
>>> #
>>> # Recommended minimum configuration:
>>> #
>>>
>>> # Example rule allowing access from your local networks.
>>> # Adapt to list your (internal) IP networks from where browsing
>>> # should be allowed
>>>
>>> acl localnet src 127.0.0.1
>>> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
>>> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network 
>>> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
>>> acl localnet src fc00::/7   # RFC 4193 local private network range
>>> acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
>>>
>>> acl SSL_ports port 443
>>> acl Safe_ports port 80  # http
>>> acl Safe_ports port 21  # ftp
>>> acl Safe_ports port 443 # https
>>> acl Safe_ports port 70  # gopher
>>> acl Safe_ports port 210 # wais
>>> acl Safe_ports port 1025-65535  # unregistered ports
>>> acl Safe_ports port 280 # http-mgmt
>>> acl Safe_ports port 488 # gss-http
>>> acl Safe_ports port 591 # filemaker
>>> acl Safe_ports port 777 # multiling http
>>> acl CONNECT method CONNECT
>>>
>>> #
>>> # Recommended minimum Access Permission configuration:

Re: [squid-users] Enable tproxy in Squid 3.5 running on Debian 9

2017-10-05 Thread Eliezer Croitoru
Hey,

Can you clarify the network topology of your setup?
Also, is Squid another machine on the LAN behind another router, or does 
Squid sit in a DMZ?
Can you add the IP addresses of:
- client machine
- squid
- router of this network?

Thanks,
Eliezer


http://ngtech.co.il/lmgtfy/
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of xpro6000
Sent: Thursday, October 5, 2017 19:54
To: Amos Jeffries 
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Enable tproxy in Squid 3.5 running on Debian 9

I'm back to square one then, and it looks like there is no way to tell Squid to 
use the same connecting ip for the outgoing ip, which is what I need.

On Thu, Oct 5, 2017 at 3:49 AM, Amos Jeffries  
wrote:
On 05/10/17 15:01, xpro6000 wrote:
I'm trying to setup tproxy with Squid 3.5 for the purpose of having the same 
outgoing ip as the connecting ip. (I have thousands of IPs and I can not add 
them one by one)

I started with a fresh install of Debian 9, installed Squid by

apt install squid

then I added

http_port 3129 tproxy

to squid.conf

I then ran the following commands for iptables

iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT

iptables  -t mangle -A PREROUTING -p tcp -m socket -j DIVERT

iptables  -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 
0x1/0x1 --on-port 3129


I can use the proxy with no problems on port 3128, but on Firefox I get a 
message "The proxy server is refusing connections" when I set the proxy to port 
3129. Did I miss any steps or am I doing something wrong?

You missed the fact that TPROXY is an MITM operation. You *cannot* setup the 
browser to use the proxy directly to its tproxy port. You have to route the 
packets to the proxy machine without any explicit browser or client 
configuration.

Only the Squid machine bits (and thus behaviour) are different with TPROXY vs 
NAT interception.

...
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access allow all

Do not do "allow all" like this. Setup the localnet ACL to your LAN range(s) 
properly and only allow those clients through the proxy.

Then you can use the recommended default:
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost
 http_access deny manager
 http_access allow localnet
 http_access deny all

Amos
___
squid-users mailing list
mailto:squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] SNI-based forwarding to parent proxy

2017-10-05 Thread C. Kroeger
Hi there,

I'm new to Squid and trying to get a certain problem solved. I have a
setup with a VPN server redirecting any traffic to its ports 80/443 to a
Squid server. The users within that VPN can browse the web (both HTTP and
HTTPS) without any problems.

However, I need to redirect HTTP(S) traffic for a list of domains to
another proxy. While this works fine for HTTP, it doesn't work for HTTPS,
even with the peek-and-splice functionality available in 3.5+.

Below is my current configuration:

```
http_port 3128
https_port 3130 intercept ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl/squid.pem
options=NO_SSLv2:NO_SSLv3
sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid/ssl_db -M
4MB sslcrtd_children 8 startup=1 idle=1

# peek SNI and splice all https connections for tunneling
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice all

# ACL for SNIs that need to be forwarded to another proxy
acl sni_fwd ssl::server_name .google.com

# redirect matching traffic to another proxy
cache_peer 10.0.2.115 parent 3128 0 no-query default name=px2
cache_peer_access px2 allow sni_fwd
cache_peer_access px2 deny all
```

Surprisingly, HTTP requests are sent to px2, but HTTPS ones are not.
What am I doing wrong here?

Note: Requests not matching the SNI ACL shall not be forwarded and should
be processed directly.
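
One possibility worth checking (an assumption on my part, not a verified
fix): for intercepted HTTPS that is spliced at step 2, Squid may still
prefer to go DIRECT to the origin, in which case the cache_peer_access
rules are never applied. Forcing the matching traffic through the peer
would look like:

```
# deny direct connections for SNI-matched traffic so px2 must be used
never_direct allow sni_fwd
```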

Best regards,
Christian
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Enable tproxy in Squid 3.5 running on Debian 9

2017-10-05 Thread Alex K
You will need to transparently redirect the traffic, not explicitly point
your browser to Squid. The mentioned firewall rules seem correct. You will
also need a policy route for the marked traffic.
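
The policy route for the marked traffic is typically set up as follows (the
table number 100 is an arbitrary choice; the fwmark matches the 0x1 mark
set in the DIVERT chain):

```
# deliver packets marked 0x1 to the local stack so TPROXY can accept them
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
```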

On Oct 5, 2017 7:54 PM, "xpro6000"  wrote:

I'm back to square one then, and it looks like there is no way to tell
Squid to use the same connecting ip for the outgoing ip, which is what I
need.

On Thu, Oct 5, 2017 at 3:49 AM, Amos Jeffries  wrote:

> On 05/10/17 15:01, xpro6000 wrote:
>
>> I'm trying to setup tproxy with Squid 3.5 for the purpose of having the
>> same outgoing ip as the connecting ip. (I have thousands of IPs and I can
>> not add them one by one)
>>
>> I started with a fresh install of Debian 9, installed Squid by
>>
>> apt install squid
>>
>> then I added
>>
>> http_port 3129 tproxy
>>
>> to squid.conf
>>
>> I then ran the following commands for iptables
>>
>> iptables -t mangle -N DIVERT
>> iptables -t mangle -A DIVERT -j MARK --set-mark 1
>> iptables -t mangle -A DIVERT -j ACCEPT
>>
>> iptables  -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
>>
>> iptables  -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
>> --tproxy-mark 0x1/0x1 --on-port 3129
>>
>>
>> I can use the proxy with no problems on port 3128, but on Firefox I get a
>> message "The proxy server is refusing connections" when I set the proxy to
>> port 3129. Did I miss any steps or am I doing something wrong?
>>
>
> You missed the fact that TPROXY is an MITM operation. You *cannot* setup
> the browser to use the proxy directly to its tproxy port. You have to route
> the packets to the proxy machine without any explicit browser or client
> configuration.
>
> Only the Squid machine bits (and thus behaviour) are different with TPROXY
> vs NAT interception.
>
> ...
>
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>> http_access allow localhost manager
>> http_access deny manager
>> http_access allow localhost
>> http_access allow all
>>
>
> Do not do "allow all" like this. Setup the localnet ACL to your LAN
> range(s) properly and only allow those clients through the proxy.
>
> Then you can use the recommended default:
>  http_access deny !Safe_ports
>  http_access deny CONNECT !SSL_ports
>  http_access allow localhost
>  http_access deny manager
>  http_access allow localnet
>  http_access deny all
>
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] New Squid 3.5 reconfigure causes service down

2017-10-05 Thread Alex Rousskov
On 10/05/2017 10:44 AM, Nicola Ferrari (#554252) wrote:
> On 05/10/2017 18:25, Alex Rousskov wrote:
>> The "couple of minutes" part might be related to your upgrade and, if
>> so, you may be able to avoid such delays. For list readers not familiar
>> with Debian releases, which _Squid_ version are you upgrading from?

> I was running squid 3.4 on top of Debian 8 (jessie)
> I upgraded to squid 3.5 on top of Debian 9 (stretch)

>> I suggest starting by figuring out what Squid is doing during those
>> "couple of minutes", if you have not already.
> 
> What I notice by checking cache.log is that it stops for a while on
> 
> helperOpenServers: Starting 1/60 'ntlm_auth' processes
> 2017/10/05 11:36:06 kid1| Starting new ntlmauthenticator helpers...
> 
> This was not a usual behaviour on Squid 3.4;

The next task is to figure out what changed related to that line (i.e.,
to starting ntlmauthenticator helpers). Here are a few things you may
want to check: Do you start the same number of helpers as before? Does
starting a single helper take longer in v3.5 than in v3.4? Does Squid
v3.5 consume a lot more RAM before it tries to start that helper than
Squid v3.4 consumed? Does Squid v3.5 helper itself consume a lot more
RAM than Squid v3.4 helper?

Something must have changed. If you can pinpoint that change, it is
likely that you can reverse or work around it. Since we probably do not
know what that change is, have no access to your server, and no free
time to investigate, you have to narrow the suspects down yourself.


> Just for testing purposes, I would try my config on a new clean install

That is a good initial test as well. And remember that you are not
looking for something that broke or does not work. You are looking for
something that works differently. And your initial focus should be on
things that affect helper startup (i.e., fork() and exec() system
calls): process size, number of processes, etc. Commands like strace can
help you measure delays down to a single system call level if needed.
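
For instance, a hedged sketch of such a measurement (the PID is a
placeholder for the squid worker process):

```
# follow child processes (-f) with microsecond timestamps (-tt),
# tracing only process-creation and exec system calls
strace -f -tt -e trace=fork,vfork,clone,execve -p <kid1-pid>
```

The gaps between timestamps around each execve show where the helper
startup time goes.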


Good luck,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Enable tproxy in Squid 3.5 running on Debian 9

2017-10-05 Thread xpro6000
I'm back to square one then, and it looks like there is no way to tell
Squid to use the same connecting ip for the outgoing ip, which is what I
need.

On Thu, Oct 5, 2017 at 3:49 AM, Amos Jeffries  wrote:

> On 05/10/17 15:01, xpro6000 wrote:
>
>> I'm trying to setup tproxy with Squid 3.5 for the purpose of having the
>> same outgoing ip as the connecting ip. (I have thousands of IPs and I can
>> not add them one by one)
>>
>> I started with a fresh install of Debian 9, installed Squid by
>>
>> apt install squid
>>
>> then I added
>>
>> http_port 3129 tproxy
>>
>> to squid.conf
>>
>> I then ran the following commands for iptables
>>
>> iptables -t mangle -N DIVERT
>> iptables -t mangle -A DIVERT -j MARK --set-mark 1
>> iptables -t mangle -A DIVERT -j ACCEPT
>>
>> iptables  -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
>>
>> iptables  -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
>> --tproxy-mark 0x1/0x1 --on-port 3129
>>
>>
>> I can use the proxy with no problems on port 3128, but on Firefox I get a
>> message "The proxy server is refusing connections" when I set the proxy to
>> port 3129. Did I miss any steps or am I doing something wrong?
>>
>
> You missed the fact that TPROXY is an MITM operation. You *cannot* setup
> the browser to use the proxy directly to its tproxy port. You have to route
> the packets to the proxy machine without any explicit browser or client
> configuration.
>
> Only the Squid machine bits (and thus behaviour) are different with TPROXY
> vs NAT interception.
>
> ...
>
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>> http_access allow localhost manager
>> http_access deny manager
>> http_access allow localhost
>> http_access allow all
>>
>
> Do not do "allow all" like this. Setup the localnet ACL to your LAN
> range(s) properly and only allow those clients through the proxy.
>
> Then you can use the recommended default:
>  http_access deny !Safe_ports
>  http_access deny CONNECT !SSL_ports
>  http_access allow localhost
>  http_access deny manager
>  http_access allow localnet
>  http_access deny all
>
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] New Squid 3.5 reconfigure causes service down

2017-10-05 Thread Nicola Ferrari (#554252)
On 05/10/2017 18:25, Alex Rousskov wrote:
> The "couple of minutes" part might be related to your upgrade and, if
> so, you may be able to avoid such delays. For list readers not familiar
> with Debian releases, which _Squid_ version are you upgrading from?
>

I was running squid 3.4 on top of Debian 8 (jessie)
I upgraded to squid 3.5 on top of Debian 9 (stretch)

> I suggest starting by figuring out what Squid is doing during those
> "couple of minutes", if you have not already.

What I notice by checking cache.log is that it stops for a while on

helperOpenServers: Starting 1/60 'ntlm_auth' processes
2017/10/05 11:36:06 kid1| Starting new ntlmauthenticator helpers...

This was not a usual behaviour on Squid 3.4;

At the moment of the upgrade, I had to adjust various paths from
"/squid3" to "/squid" ..

I checked authenticators path and other occurrences in conf file,
everything seems to be ok.

Just for testing purposes, I would try my config on a new clean install,
just to be sure this is not related to the upgrade in some way, and let
you know!

Nick



-- 
+-+
| Linux User  #554252 |
+-+

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] New Squid 3.5 reconfigure causes service down

2017-10-05 Thread Alex Rousskov
On 10/05/2017 03:20 AM, Nicola Ferrari (#554252) wrote:

> issuing the "squid -k reconfigure" command (i.e. to
> adjust ACLs in the conf file), the result is not just a configuration reload:
> authenticator processes are restarted,

As you have discovered already, running heavy unnecessary actions is a
known problem with Squid hot reconfiguration support. An upgrade may
have an effect on certain aspects of that problem, but the problem
itself is as old as Squid.


> causing an "out-of-service" for all users, for a couple of minutes.

The "couple of minutes" part might be related to your upgrade and, if
so, you may be able to avoid such delays. For list readers not familiar
with Debian releases, which _Squid_ version are you upgrading from?


> Basically the same issue as in this thread:
> https://serverfault.com/questions/247835/squid-3-reloading-makes-it-stop-serving-requests

The symptoms are the same but the underlying cause may be different
(unless you have already checked but did not tell us).


> I'm in doubt whether reducing the number of helpers would be a good idea,
> since we need to serve ca. 300 simultaneous users.
> 
> Before the recent upgrade, with the previous Debian 8, reload took only
> some seconds..
> 
> Is there any best practice to get a "hot-configurable" system?
> Do you have any suggestions?


I suggest starting by figuring out what Squid is doing during those
"couple of minutes", if you have not already. The mailing list thread
linked from the above serverfault answer shows how to do that and has
several potentially useful comments. Compare the new logs with those of
your older Squid. What has changed related to the startup delays?


FWIW, there is now a low-priority project to support fast ACL-only
reconfiguration. We have the initial high-level design and some code,
but it will take a while (possibly a year or more) to complete at its
current priority.

http://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F


HTH,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Is your kerberos ticket expired?

2017-10-05 Thread erdosain9
Hi.
All is working fine, but I'm getting this error in root's mail:

--


From r...@squid.domain.lan  Tue Oct  3 04:00:02 2017
Return-Path: 
X-Original-To: root
Delivered-To: r...@squid.domain.lan
Received: by squid.domain.lan (Postfix, from userid 0)
id 2581F8066D7F; Tue,  3 Oct 2017 04:00:02 -0300 (ART)
From: "(Cron Daemon)" 
To: r...@squid.domain.lan
Subject: Cron   msktutil --auto-update --verbose --computer-name
squidproxy-k | logger -t msktutil > /dev/null
Content-Type: text/plain; charset=UTF-8
Auto-Submitted: auto-generated
Precedence: bulk
X-Cron-Env: 
X-Cron-Env: 
X-Cron-Env: 
X-Cron-Env: 
X-Cron-Env: 
X-Cron-Env: 
X-Cron-Env: 
X-Cron-Env: 
X-Cron-Env: 
Message-Id: <20171003070002.2581f8066...@squid.domain.lan>
Date: Tue,  3 Oct 2017 04:00:02 -0300 (ART)

SASL/GSSAPI authentication started
Error: ldap_sasl_interactive_bind_s failed (Local error)
Error: ldap_connect failed
--> Is your kerberos ticket expired? You might try re-"kinit"ing.

From r...@squid.domain.lan  Wed Oct  4 04:00:02 2017
Return-Path: 
X-Original-To: root
Delivered-To: r...@squid.domain.lan
Received: by squid.domain.lan (Postfix, from userid 0)
id 24EC282EEFD7; Wed,  4 Oct 2017 04:00:02 -0300 (ART)
From: "(Cron Daemon)" 
To: r...@squid.domain.lan
Subject: Cron   msktutil --auto-update --verbose --computer-name
squidproxy-k | logger -t msktutil > /dev/null
Content-Type: text/plain; charset=UTF-8
Auto-Submitted: auto-generated
Precedence: bulk
X-Cron-Env: 
X-Cron-Env: 
X-Cron-Env: 
X-Cron-Env: 
X-Cron-Env: 
X-Cron-Env: 
X-Cron-Env: 
X-Cron-Env: 
X-Cron-Env: 
Message-Id: <20171004070002.24ec282ee...@squid.domain.lan>
Date: Wed,  4 Oct 2017 04:00:02 -0300 (ART)

SASL/GSSAPI authentication started
Error: ldap_sasl_interactive_bind_s failed (Local error)
Error: ldap_connect failed
--> Is your kerberos ticket expired? You might try re-"kinit"ing.

From r...@squid.domain.lan  Thu Oct  5 04:00:02 2017
Return-Path: 
X-Original-To: root
Delivered-To: r...@squid.domain.lan
Received: by squid.domain.lan (Postfix, from userid 0)
id 9B89F8057477; Thu,  5 Oct 2017 04:00:02 -0300 (ART)
From: "(Cron Daemon)" 
To: r...@squid.domain.lan
Subject: Cron   msktutil --auto-update --verbose --computer-name
squidproxy-k | logger -t msktutil > /dev/null
Content-Type: text/plain; charset=UTF-8
Auto-Submitted: auto-generated
Precedence: bulk
Message-Id: <20171005070002.9b89f8057...@squid.domain.lan>
Date: Thu,  5 Oct 2017 04:00:02 -0300 (ART)

SASL/GSSAPI authentication started
Error: ldap_sasl_interactive_bind_s failed (Local error)
Error: ldap_connect failed
--> Is your kerberos ticket expired? You might try re-"kinit"ing.
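
The "Local error" from ldap_sasl_interactive_bind_s usually means the cron job has no valid Kerberos credentials at the moment it runs. A common mitigation is to obtain a ticket from the machine keytab immediately before msktutil runs. This is only a sketch: the keytab path, principal name and schedule below are assumptions, not taken from this setup.

```
# Hypothetical crontab entry: kinit from the host keytab first, so
# msktutil's LDAP bind has a fresh ticket. Adjust keytab/principal.
0 4 * * * /usr/bin/kinit -k -t /etc/krb5.keytab 'SQUIDPROXY-K$@DOMAIN.LAN' \
    && msktutil --auto-update --verbose --computer-name squidproxy-k \
    2>&1 | logger -t msktutil
```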



[root@squid network-scripts]# systemctl status squid
● squid.service - Squid Web Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/squid.service; enabled; vendor
preset: disabled)
   Active: active (running) since vie 2017-09-22 11:17:42 ART; 1 weeks 5
days ago
 Docs: man:squid(8)
  Process: 25024 ExecStop=/usr/sbin/squidshut.sh (code=exited,
status=0/SUCCESS)
  Process: 14166 ExecReload=/usr/sbin/squid -kreconf (code=exited,
status=0/SUCCESS)
  Process: 25048 ExecStart=/usr/sbin/squid -sYC (code=exited,
status=0/SUCCESS)
  Process: 25046 ExecStartPre=/usr/bin/chown squid.squid /var/run/squid
(code=exited, status=0/SUCCESS)
  Process: 25044 ExecStartPre=/usr/bin/mkdir -p /var/run/squid (code=exited,
status=0/SUCCESS)
 Main PID: 4613 (squid)
   CGroup: /system.slice/squid.service
   ├─ 4613 (squid-1) -sYC
   ├─ 4630 (unlinkd)
   ├─ 4631 diskd 4723716 4723717 4723718
   ├─14169 (logfile-daemon) /var/log/squid/access.log
   ├─14170 (ssl_crtd) -s /var/lib/ssl_db -M 4MB
   ├─14171 (ssl_crtd) -s /var/lib/ssl_db -M 4MB
   ├─14172 (ssl_crtd) -s /var/lib/ssl_db -M 4MB
   ├─14173 (ssl_crtd) -s /var/lib/ssl_db -M 4MB
   ├─14174 (ssl_crtd) -s /var/lib/ssl_db -M 4MB
   ├─14175 (ext_kerberos_ldap_group_acl) -g i-f...@domain.lan
   ├─14176 (ext_kerberos_ldap_group_acl) -g i-f...@domain.lan
   ├─14177 (ext_kerberos_ldap_group_acl) -g i-f...@domain.lan
   ├─14178 (ext_kerberos_ldap_group_acl) -g i-f...@domain.lan
   ├─14179 (ext_kerberos_ldap_group_acl) -g i-f...@domain.lan
   ├─14180 (ext_kerberos_ldap_group_acl) -g i-limit...@domain.lan
   ├─14181 (ext_kerberos_ldap_group_acl) -g i-limit...@domain.lan
   ├─14182 (ext_kerberos_ldap_group_acl) -g i-limit...@domain.lan
   ├─14183 (ext_kerberos_ldap_group_acl) -g i-limit...@domain.lan
   ├─14184 (ext_kerberos_ldap_group_acl) -g i-limit...@domain.lan
  ├─14185 (negotiate_ke

Re: [squid-users] Pages sometimes load as a mess of random (?) symbols

2017-10-05 Thread Grey
Firstly, thanks a lot for taking the time to check my configuration and
provide such detailed suggestions; I think I've followed all of them and
fixed the problems you pointed out.
We have a Windows domain, and all those "all" directives were inherited from
our old proxy server (running Squid version 3.1.20); they were meant to spare
domain users any credential popups while still prompting non-domain users for
credentials. If I'm understanding your comments correctly, I can safely remove
them and get the same result, am I right?
We were also having an authentication issue, where domain users sometimes
received a credentials popup (which shouldn't happen, since only Kerberos auth
is enabled) and had to click "Cancel" and reload the page to resume browsing.
Could the presence of all those "all" directives have caused that too, in your
opinion?

The new configuration should look like this, if I didn't miss or misunderstand
anything (I've added a whitelist rule that I missed earlier):

### TESTSQUID1 ###

http_port 3128
dns_v4_first on
pinger_enable off
netdb_filename none

error_default_language it
cache_mgr helpd...@test.it

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth -r -d
auth_param negotiate children 150
auth_param negotiate children 150 startup=20 idle=10
auth_param negotiate keep_alive on

external_acl_type ProxyUser children-max=75 %LOGIN
/usr/lib/squid/ext_kerberos_ldap_group_acl -g INTERNET@TEST.LOCAL -D
TEST.LOCAL -S testldap
acl ProxyUser external ProxyUser

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny manager

acl destsquid dstdomain .testsquid1 .testsquid2
http_access allow destsquid

acl siti_whitelist dstdomain "/etc/squid/siti_whitelist"

acl AUTH proxy_auth REQUIRED
http_access deny !AUTH

http_access allow siti_whitelist
http_access allow ProxyUser
http_access deny all

icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service service_req reqmod_precache bypass=1
icap://testicap:1344/REQ-Service
adaptation_access service_req allow all
icap_service service_resp respmod_precache bypass=0
icap://testicap:1344/resp
adaptation_access service_resp allow all

coredump_dir /var/spool/squid

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

Getting back to the main problem: I've set "icap_enable off" and reloaded
Squid, then tried again and got the same problem. Since we're not using any
cache parent and Squid isn't using ICAP at the moment, can I assume there's
nothing else I can do and I just have to ignore the problem?
The thing that bugs me is that only Chrome seems to have this particular
problem... could this be a bug, or simply a behaviour difference between
Chrome and IE/Firefox?
Thanks again for all your patience :)



--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] New Squid 3.5 reconfigure causes service down

2017-10-05 Thread Nicola Ferrari (#554252)
Hi List!

We're experiencing problems with a just-upgraded squid install (from
Debian 8 to Debian 9, using packages in repos). Here are the details
from squid -version:

Squid Cache: Version 3.5.23
Service Name: squid
Debian linux

We use "negotiate kerberos" authenticators to offer Active Directory SSO.

We're also running squidguard.

Lines in config file are:

[...]
# NEGOTIATE KERBEROS AUTH
auth_param negotiate program /usr/lib/squid/negotiate_wrapper_auth
--ntlm /usr/$
auth_param negotiate children 60
auth_param negotiate keep_alive off
[...]
url_rewrite_program /usr/bin/squidGuard -c /etc/squidguard/squidGuard.conf


The problem is that issuing the "squid -k reconfigure" command (e.g. to
adjust ACLs in the conf file) results in more than a configuration reload:
the authenticator processes are restarted, causing an out-of-service period
of a couple of minutes for all users.

Basically the same issue as in this thread:
https://serverfault.com/questions/247835/squid-3-reloading-makes-it-stop-serving-requests

I'm not sure whether reducing the number of helpers would be a good idea,
since we need to serve ca. 300 simultaneous users.
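
One commonly suggested mitigation (a sketch only; the numbers below are assumptions to be tuned for your load, not values from this setup) is to use the dynamic helper options available since Squid 3.2, so that only a few helpers start immediately and the rest are spawned on demand rather than all 60 at once:

```
# Hypothetical values -- adjust to your traffic:
# start 10 helpers at (re)configure time, keep 5 spare, allow up to 60
auth_param negotiate children 60 startup=10 idle=5
```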

Before the recent upgrade, on Debian 8, a reload took only a few seconds.

Is there any best practice to keep the system hot-configurable?
Do you have any suggestion?

Thanks!
Best regards,


PS: English isn't my first language, so please excuse any mistakes..

-- 
+-+
| Linux User  #554252 |
+-+



Re: [squid-users] Enable tproxy in Squid 3.5 running on Debian 9

2017-10-05 Thread Amos Jeffries

On 05/10/17 15:01, xpro6000 wrote:
I'm trying to set up tproxy with Squid 3.5 so that the outgoing IP matches
the connecting client IP. (I have thousands of IPs and cannot add them one
by one.)


I started with a fresh install of Debian 9, installed Squid by

apt install squid

then I added

http_port 3129 tproxy

to squid.conf

I then ran the following commands for iptables

iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT

iptables  -t mangle -A PREROUTING -p tcp -m socket -j DIVERT

iptables  -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY 
--tproxy-mark 0x1/0x1 --on-port 3129



I can use the proxy with no problems on port 3128, but on Firefox I get 
a message "The proxy server is refusing connections" when I set the 
proxy to port 3129. Did I miss any steps or am I doing something wrong?


You missed the fact that TPROXY is an MITM operation. You *cannot* set up
the browser to use the proxy directly on its tproxy port. You have to
route the packets to the proxy machine without any explicit browser or
client configuration.


Only the Squid machine bits (and thus behaviour) are different with 
TPROXY vs NAT interception.
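
For reference, the usual companion to the iptables rules quoted above is a policy-routing setup on the Squid box so that the marked packets are delivered to the local stack. This is a sketch of the standard TPROXY recipe; the table number 100 is an arbitrary choice:

```
# Route packets marked 0x1 by the DIVERT chain to the local stack
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
```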


...

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access allow all


Do not do "allow all" like this. Set up the localnet ACL with your LAN
range(s) and only allow those clients through the proxy.


Then you can use the recommended default:
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost
 http_access deny manager
 http_access allow localnet
 http_access deny all

Amos


Re: [squid-users] Pages sometimes load as a mess of random (?) symbols

2017-10-05 Thread Amos Jeffries

On 05/10/17 19:42, Grey wrote:

Sorry for not including enough information in the first place.

1. Here's my config; keep in mind it's a test server that will eventually
replace the (not updated) one we're using right now, so the configuration
is kinda bare-bones:

### TESTSQUID1 ###

http_port 3128
dns_v4_first on
pinger_enable off
netdb_filename none

error_default_language it
cache_mgr helpd...@test.it

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth -r -d
auth_param negotiate children 150
auth_param negotiate keep_alive on

external_acl_type ProxyUser children-max=75 %LOGIN
/usr/lib/squid/ext_kerberos_ldap_group_acl -g INTERNET@TEST.LOCAL -D
TEST.LOCAL -S testldap
acl ProxyUser external ProxyUser

acl AUTH proxy_auth REQUIRED
http_access deny !AUTH all


So two problems.
1) 'all' here means clients with incorrect OR missing auth credentials
do not get challenged for working credentials. Since any sane client
security system will not present credentials until told they are
necessary, the above should rightfully prevent *any* secure clients from
using this proxy.


2) your custom config lines should be placed below the default security 
settings. This is especially important for ACLs like auth which involve 
a lot of background work. The default settings are there to block things 
like DoS or attacks that can be trivially and quickly denied, and to do 
so with minimal CPU expense.




http_access deny !Safe_ports all
http_access deny CONNECT !SSL_ports all
http_access allow localhost manager
http_access deny manager all
http_access allow localhost all


If you place the "allow localhost" above the "deny manager" you can 
remove one extra line of checks.




acl destsquid dstdomain .testquid1 .testsquid2
http_access allow destsquid all


The 'all' ACL is a pointless waste of CPU cycles on all of the lines above.



http_access allow ProxyUser all
The 'all' ACL here *might* prevent unauthenticated clients from being 
challenged for credentials like the 'deny !AUTH' line did. But YMMV. It 
either does that or is pointless.


The current 3.5 provides the %un format code which should not generate 
an auth challenge. That should eliminate the need for the all-hack here.
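
As a sketch of that approach (assuming the same helper and realm as in the config above; %un passes whatever username is already known without triggering an auth challenge, so the all-hack becomes unnecessary):

```
external_acl_type ProxyUser children-max=75 %un \
    /usr/lib/squid/ext_kerberos_ldap_group_acl -g INTERNET@TEST.LOCAL \
    -D TEST.LOCAL -S testldap
acl ProxyUser external ProxyUser
```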




http_access deny all

icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service service_req reqmod_precache bypass=1
icap://testicap:1344/REQ-Service
adaptation_access service_req allow all
icap_service service_resp respmod_precache bypass=0
icap://testicap:1344/resp
adaptation_access service_resp allow all

coredump_dir /var/spool/squid

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

2. This is the access log when first loading the page:

1507185342.611  0 99.99.99.99 TCP_DENIED/407 5179 GET
http://www.tomshardware.com/ - HIER_NONE/- text/html
1507185344.121   1473 99.99.99.99 TCP_MISS/200 48225 GET
http://www.tomshardware.com/ testuser HIER_DIRECT/23.40.112.227 text/html
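
For readers unfamiliar with these entries, here is a minimal parser sketch for the default native access.log layout (field order per the stock format: timestamp, elapsed ms, client, result/status, bytes, method, URL, user, hierarchy/peer, content type):

```python
def parse_native_log(line):
    """Parse one line of Squid's default 'native' access.log format."""
    parts = line.split()
    if len(parts) != 10:
        raise ValueError("unexpected field count: %d" % len(parts))
    result, _, status = parts[3].partition("/")      # e.g. TCP_DENIED/407
    hierarchy, _, peer = parts[8].partition("/")     # e.g. HIER_NONE/-
    return {
        "timestamp": float(parts[0]),   # seconds.milliseconds since epoch
        "elapsed_ms": int(parts[1]),
        "client": parts[2],
        "result": result,
        "status": int(status),
        "bytes": int(parts[4]),
        "method": parts[5],
        "url": parts[6],
        "user": parts[7],               # '-' when no user was authenticated
        "hierarchy": hierarchy,
        "peer": peer,
        "content_type": parts[9],
    }

entry = parse_native_log(
    "1507185342.611 0 99.99.99.99 TCP_DENIED/407 5179 GET "
    "http://www.tomshardware.com/ - HIER_NONE/- text/html")
print(entry["result"], entry["status"], entry["user"])  # TCP_DENIED 407 -
```

Note how the 407 entry has '-' as the user: the client has not yet answered the proxy-auth challenge at that point, which is why a single denied line before an authenticated TCP_MISS is normal.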

And this is the one after reloading:



By "reloading" do you mean:

 * using a testing tool that sends an identical repeat request? or
 * clicking + pressing enter in a browser address bar? or
 * pressing the browser reload button? or
 * pressing the force-refresh (F5) button? or
 * holding shift while doing any of the above?

Only the first two above methods will perform a clean HTTP test request. 
The others all deliver cache controls to force specific cache behaviour 
which void the test results.




1507185356.932    187 99.99.99.99 TCP_MISS/200 47858 GET
http://www.tomshardware.com/ testuser HIER_DIRECT/23.40.112.227 text/html
1507185357.425  0 99.99.99.99 TCP_DENIED/407 4440 GET
http://platform.twitter.com/widgets.js - HIER_NONE/- text/html
1507185357.482 13 99.99.99.99 TCP_MISS/200 2019 GET
http://www.tomshardware.com/medias/favicon/favicon-32x32.png? testuser
HIER_DIRECT/23.40.112.227 image/png
1507185357.548 61 99.99.99.99 TCP_REFRESH_UNMODIFIED/304 516 GET
http://platform.twitter.com/widgets.js testuser HIER_DIRECT/199.96.57.6 -
1507185357.565  0 99.99.99.99 TCP_DENIED/407 4178 CONNECT
www.tomshardware.com:443 - HIER_NONE/- text/html
1507185357.924  0 99.99.99.99 TCP_DENIED/407 4190 CONNECT
syndication.twitter.com:443 - HIER_NONE/- tex