Re: [squid-users] squid in container aborted on low memory server

2019-03-05 Thread George Xie
more detail of the backtrace:

(gdb) up
#4  0x558a3d0a in comm_init () at comm.cc:1206
1206        fd_table = (fde *) xcalloc(Squid_MaxFD, sizeof(fde));
(gdb) p Squid_MaxFD
$1 = 1048576
(gdb) p sizeof(fde)
$2 = 392
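
(If anyone wants to reproduce this kind of backtrace: a minimal sketch, assuming
a Debian-style install, is to let the aborting squid process leave a core file
and then open it with gdb. The paths below are assumptions; adjust them to your
setup, and the core file name/location depends on your kernel.core_pattern.)

    # allow core dumps before starting squid
    ulimit -c unlimited
    # in squid.conf, point cores at a directory writable by the squid user:
    coredump_dir /var/spool/squid
    # after the FATAL abort, inspect the core:
    gdb /usr/sbin/squid /var/spool/squid/core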

It seems Squid_MaxFD is way too large, and its value is directly from ulimit:

# ulimit -n
1048576

therefore, I tried adding this option:

max_filedesc 4096

now squid works and only takes ~50 MB of memory.
thanks very much for your help!
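
For reference, the numbers line up with the earlier strace output: 1,048,576
descriptors x 392 bytes per fde entry is roughly 411 MB (essentially the
411176960-byte mmap that failed), while 4096 x 392 is only about 1.6 MB. As an
alternative (or in addition) to max_filedesc, the limit can also be lowered at
the container level, assuming a reasonably recent Docker; the image name below
is just a placeholder:

    # cap the nofile limit for this container only
    docker run --ulimit nofile=4096:4096 my-squid-image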

Xie Shi

On Tue, Mar 5, 2019 at 12:34 AM Alex Rousskov wrote:
>
> On 3/3/19 9:39 PM, George Xie wrote:
>
> > Squid version: 3.5.23-5+deb9u1
>
> > http_port 127.0.0.1:3128
> > cache deny all
> > access_log none
>
> Unfortunately, this configuration wastes RAM: Squid is not yet smart
> enough to understand that you do not want any caching and may allocate
> 256+ MB of memory cache plus supporting indexes. To correct that default
> behavior, add this:
>
>   cache_mem 0
>
> Furthermore, older Squids, possibly including your no-longer-supported
> version, may allocate shared memory indexes where none are needed. That
> might explain why you see your Squid allocating a 392 MB table.
>
> If you want to know what is going on for sure, then configure malloc to
> dump core on allocation failures and post a stack trace leading to that
> allocation failure so that we know _what_ Squid was trying to allocate
> when it ran out of RAM.
>
>
> HTH,
>
> Alex.
>
>
> > runs in a container with following Dockerfile:
> >
> > FROM debian:9
> > RUN apt update && \
> > apt install --yes squid
> >
> >
> > the total memory of the host server is very low, only 592m, about 370m
> > free memory.
> > if I start squid in the container, squid will abort immediately.
> >
> > error messages in /var/log/squid/cache.log:
> >
> >
> > FATAL: xcalloc: Unable to allocate 1048576 blocks of 392 bytes!
> >
> > Squid Cache (Version 3.5.23): Terminated abnormally.
> > CPU Usage: 0.012 seconds = 0.004 user + 0.008 sys
> > Maximum Resident Size: 47168 KB
> >
> >
> > error message captured with strace -f -e trace=memory:
> >
> > [pid   920] mmap(NULL, 411176960, PROT_READ|PROT_WRITE,
> > MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
> >
> >
> > it appears that squid (or glibc) tries to allocate 392m memory, which is
> > larger than host free memory 370m.
> > but I guess squid don't need that much memory, I have another running
> > squid instance, which only uses < 200m memory.
> > the oddest thing is if I run squid on the host (also Debian 9) directly,
> > not in the container, squid could start and run as normal.
> >
> > am I doing something wrong thing here?
> >
> > Xie Shi
> >
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > http://lists.squid-cache.org/listinfo/squid-users
> >
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid in container aborted on low memory server

2019-03-05 Thread Amos Jeffries
On 4/03/19 9:45 pm, George Xie wrote:
> > On 4/03/19 5:39 pm, George Xie wrote:
> > > hi all:
> > >
> > > Squid version: 3.5.23-5+deb9u1
> > > Docker version 18.09.3, build 774a1f4
> > > Linux instance-4 4.9.0-8-amd64 #1 SMP Debian 4.9.130-2 (2018-10-27)
> > > x86_64 GNU/Linux
> > >
> > > I have the following squid config:
> > >
> > >
> > >     http_port 127.0.0.1:3128 
> > >     cache deny all
> > >     access_log none
> > >
> > What is it exactly that you think this is doing in regards to Squid
> > memory needs?
> >
> 
> 
> sorry, I don't get your question.
>  

I was asking to find out what you thought those settings were doing.

As Alex already pointed out, "cache deny all" does not reduce Squid's
memory needs in any way. It just means that 256 MB of that RAM is
allocated pointlessly.

So, if you actually do not want the proxy caching anything, then
disabling cache_mem (set it to 0 as per Alex's response) would be the
best course of action before you go any further.

Or, if you *do* want caching and were only trying to disable it to test
the memory issue, then your test was flawed and produces an incorrect
conclusion. Just reducing cache_mem would be best in that case - set it
to a value that should reasonably fit this container and see if the
proxy runs okay.
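
For example, a minimal sketch of a small, non-caching forward proxy for a
container of this size might look like the following (the values are
assumptions to adapt, not recommendations):

    http_port 127.0.0.1:3128
    cache deny all
    cache_mem 0
    access_log none
    # or, if some caching is wanted, keep it well below the 256 MB default:
    # cache_mem 32 MB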


...
> > >
> > > it appears that squid (or glibc) tries to allocate 392m memory,
> which is
> > > larger than host free memory 370m.
> > > but I guess squid don't need that much memory, I have another
> running
> > > squid instance, which only uses < 200m memory.
> > No doubt it is configured to use less memory. For example by reducing
> > the default memory cache size.
> >
> 
> 
> that running squid instance has the same config.
>  

Then something odd is going on between the two. They should indeed have
had the same behaviour (either both work, or both fail with the same error).

Whatever the issue is, it is being triggered by the large blocks of RAM
allocated by a default Squid. The easiest of those to modify is cache_mem.


> 
> > > the oddest thing is if I run squid on the host (also Debian 9)
> directly,
> > > not in the container, squid could start and run as normal.
> > >
> > Linux typically allows RAM over-allocation. Which works okay so
> long as
> > there is sufficient swap space and there is time between memory
> usage to
> > do the swap in/out process.
> > Amos
> 
> 
> swap is disabled on the host server, and so it is in the container. 
> 
> after all, I wonder why squid would try to claim 392 MB of memory if it
> doesn't need that much.
> 

Squid thinks it does. All client traffic is denied caching by that
"deny all". BUT ... there are internally generated items which also use
the cache. So the default 256 MB RAM cache is allocated, with only those
few small things being put in it.

You could set it to '0' or to some small value and the allocation size
should go down accordingly.


That said, every bit of client traffic headed towards the proxy uses a
variable amount of memory, and at peak times Squid may need to allocate
large blocks.

So disabling swap entirely on the server is not a great idea. It just
moves the failure and shutdown to peak traffic times, when it is least
wanted.
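
A quick way to compare what each environment actually allows (standard Linux
tools, run both inside the container and on the host) is something like:

    free -m                               # total and free RAM as seen from here
    ulimit -n                             # per-process file descriptor limit
    cat /proc/sys/vm/overcommit_memory    # 0=heuristic, 1=always, 2=strict accounting
    swapon --show                         # lists any active swap devices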


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid and url modifying

2019-03-05 Thread Egoitz Aurrekoetxea
Hi Alex, 

What you said about http_reply_access could work for me... but I have a
problem... 

Can http_reply_access and some form of url_regex, dstdom_regex or
similar cause a redirect based on matching content? 

I mean : 

https://a.b.c.cloud.aaa.bbb 

to be redirected to : 

https://a.b.c 

Let's say, matching everything except cloud.aaa.bbb? 

Cheers!!

---

EGOITZ AURREKOETXEA 
Systems Department 
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia) 
ego...@sarenet.es 
www.sarenet.es [3] 
Before printing this email, please consider whether it is really
necessary. 

On 2019-03-05 08:13, Alex Rousskov wrote:

> On 3/4/19 11:20 AM, Egoitz Aurrekoetxea wrote:
> 
>> Clients, will ask :
>> 
>> https://ooo..ttt.thesquidserver.org/
> 
>> So the answer [to the second question] I assume should be yes.
> 
> If I am interpreting your answers correctly, then your setup looks like
> a reverse proxy to me. In that case, you do not need SslBump and
> interception. You do need an web server certificate for the
> ooo..ttt.thesquidserver.org domain, issued by a well-trusted CA.
> Do you already have that?
> 
>> I have DNAT rules, for being able to
>> redirect tcp/80 and tcp/443 to squid's port silently.
> 
> Please note that your current Squid configuration is not a reverse proxy
> configuration. It is an interception configuration. It also lacks
> https_port for handling port 443 traffic. There are probably some
> documents on Squid wiki (and/or elsewhere) explaining how to configure
> Squid to become a reverse proxy. Follow them.
> 
>> I wanted to setup a proxy machine which I wanted to be able to receive
>> url like :
>> 
>> - www.iou.net.theproxy.com/hj.php?ui=9 [1]
>> 
>> If this site returns clean content (scanned by Icap server) the url
>> redirector should return :
>> 
>> - www.iou.net/hj.php?ui=9 [2]  (the real
>> url) as URL.
> 
> OK.
> 
>> - Is it possible with Squid to achieve my goal?. With Squid, a
>> redirector, and a Icap daemon which performs virus scanning...
> 
> A redirector seems out of scope here -- it works on requests while you
> want to rewrite (scanned by ICAP) responses.
> 
> It is probably possible to use deny_info to respond with a redirect
> message. To trigger a deny_info action, you would have to configure your
> Squid to block virus-free responses, which is rather strange!
> 
>> - For plain http the config and the URL seem to be working BUT the virus
>> are not being scanned. Could the config be adjusted for that?.
> 
> I would start by removing the redirector, "intercept", SslBump, and
> disabling ICAP. Configure your Squid as a reverse proxy without any
> virus scanning. Then add ICAP. Get the virus scanning working without
> any URL manipulation. Once that is done, you can adjust Squid to block
> virus-free responses (via http_reply_access) and trigger a deny_info
> response containing an HTTP redirect.
> 
> Please note that once the browser gets a redirect to another site, that
> browser is not going to revisit your reverse proxy for any content
> related to that other site -- all requests for that other site will go
> from the browser to that other site. Your proxy will not be in the loop
> anymore. If that is not what you want, then you cannot use redirects at
> all -- you would have to accelerate that other site for all requests
> instead and make sure that other site does not contain absolute URLs
> pointing the browser away from your reverse proxy.
> 
> Disclaimer: I have not tested the above ideas and, again, I may be
> misinterpreting what you really want to achieve.
> 
> Alex.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
 

Links:
--
[1] http://www.iou.net.theproxy.com/hj.php?ui=9
[2] http://www.iou.net/hj.php?ui=9
[3] http://www.sarenet.es
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid and url modifying

2019-03-05 Thread Egoitz Aurrekoetxea
Hi!, 

I have Squid configured with the virus scanning software using ICAP and
working. But, when I do : 

acl matchear_todo url_regex [-i] ^.*$
http_reply_access deny matchear_todo
deny_info   http://172.16.8.61/redirigir.php?url=%s matchear_todo 

it's always redirecting me without passing through the ICAP system... I
wanted the redirection to be done only when content is clean... this is
doing it always... have I missed something? 

Cheers! 

---

EGOITZ AURREKOETXEA 
Systems Department 
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia) 
ego...@sarenet.es 
www.sarenet.es [3] 
Before printing this email, please consider whether it is really
necessary. 

On 2019-03-05 08:13, Alex Rousskov wrote:

> On 3/4/19 11:20 AM, Egoitz Aurrekoetxea wrote:
> 
>> Clients, will ask :
>> 
>> https://ooo..ttt.thesquidserver.org/
> 
>> So the answer [to the second question] I assume should be yes.
> 
> If I am interpreting your answers correctly, then your setup looks like
> a reverse proxy to me. In that case, you do not need SslBump and
> interception. You do need an web server certificate for the
> ooo..ttt.thesquidserver.org domain, issued by a well-trusted CA.
> Do you already have that?
> 
>> I have DNAT rules, for being able to
>> redirect tcp/80 and tcp/443 to squid's port silently.
> 
> Please note that your current Squid configuration is not a reverse proxy
> configuration. It is an interception configuration. It also lacks
> https_port for handling port 443 traffic. There are probably some
> documents on Squid wiki (and/or elsewhere) explaining how to configure
> Squid to become a reverse proxy. Follow them.
> 
>> I wanted to setup a proxy machine which I wanted to be able to receive
>> url like :
>> 
>> - www.iou.net.theproxy.com/hj.php?ui=9 [1]
>> 
>> If this site returns clean content (scanned by Icap server) the url
>> redirector should return :
>> 
>> - www.iou.net/hj.php?ui=9 [2]  (the real
>> url) as URL.
> 
> OK.
> 
>> - Is it possible with Squid to achieve my goal?. With Squid, a
>> redirector, and a Icap daemon which performs virus scanning...
> 
> A redirector seems out of scope here -- it works on requests while you
> want to rewrite (scanned by ICAP) responses.
> 
> It is probably possible to use deny_info to respond with a redirect
> message. To trigger a deny_info action, you would have to configure your
> Squid to block virus-free responses, which is rather strange!
> 
>> - For plain http the config and the URL seem to be working BUT the virus
>> are not being scanned. Could the config be adjusted for that?.
> 
> I would start by removing the redirector, "intercept", SslBump, and
> disabling ICAP. Configure your Squid as a reverse proxy without any
> virus scanning. Then add ICAP. Get the virus scanning working without
> any URL manipulation. Once that is done, you can adjust Squid to block
> virus-free responses (via http_reply_access) and trigger a deny_info
> response containing an HTTP redirect.
> 
> Please note that once the browser gets a redirect to another site, that
> browser is not going to revisit your reverse proxy for any content
> related to that other site -- all requests for that other site will go
> from the browser to that other site. Your proxy will not be in the loop
> anymore. If that is not what you want, then you cannot use redirects at
> all -- you would have to accelerate that other site for all requests
> instead and make sure that other site does not contain absolute URLs
> pointing the browser away from your reverse proxy.
> 
> Disclaimer: I have not tested the above ideas and, again, I may be
> misinterpreting what you really want to achieve.
> 
> Alex.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
 

Links:
--
[1] http://www.iou.net.theproxy.com/hj.php?ui=9
[2] http://www.iou.net/hj.php?ui=9
[3] http://www.sarenet.es
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] How to secure your cryptocurrencies

2019-03-05 Thread b4uwallet
The craze for dealing in cryptocurrencies is at its peak right now.
Research into this fast-moving sector suggests that more than 2 million clients
are dealing with blockchain, and thousands of them perform buying and selling
transactions every day.
It is widely accepted that dealing in cryptocurrency can be profitable, and
millions of people have been enriched by blockchain.
Despite all these profitable statements, the question that really matters is:
is your money secure? Is it safe?
This kind of question occurs to everyone at least once, so B4U Wallet and
Exchange offers a very secure arrangement. Today, we will tell you how you can
secure your cryptocurrencies with us.
Protecting your software
It is our foremost duty to secure your funds, so the first step our team (B4U
wallet) takes is to protect your wallet software. It is similar to keeping a
backup of the application on a personal computer, and it proves very helpful in
the case of hacking. After this protection, essentially every click is
protected; you do not need to worry about clicking or viewing attachments.
Mobile application safety
If you have downloaded the wallet application, it is first secured by your
lock (nobody can even view it without your PIN code). If somebody does learn
your PIN, you still do not need to worry: two-factor authentication is also
provided.
No intermediary account needed
While using the wallet application you can transfer all your funds without any
helper account such as PayPal. Each transaction is performed securely, and you
do not need to take the risk yourself. We work as a responsible team for our
clients.
Backups in hardware
It is common to back up your general information, which is good, but what if
you lose your PC, or your data is removed by software? That is the most
troubling situation. Moreover, anyone who wants to hack your account will
certainly remove your backups too. Stay calm; you do not need to worry about
it. We back up your data on our own hardware, so whether the PC is broken or
restored, you will definitely get your data back in a very promising way.
Access keys
You will get access keys when you join B4U wallet. We provide access via your
phone number, email, fax or a security question. This lets you get your hands
back on your account even if it is lost. That is good news!
Encryption
Our services include encryption. Even if your phone is lost, your other
devices will be authorized to change your password and keep your money secure.
One thing to keep in mind is that your funds are in safe hands and you will
not lose them. You will be given a solution for every case, but you must be
the rightful owner of the funds.
Keylogger
When you create an account on B4U wallet and exchange, after your account has
been verified you will be provided with a key logger. A hacker may obtain your
password but not your key log, so it is a far safer way to keep your
cryptocurrency safe.
Two-step verification
In addition to all these services, two-step verification is provided: set up
your account, open it, and set a two-step verification code. Without knowing
this code nobody can perform a single transaction from your account. It makes
your wallet much safer!
Cold wallet service
You can easily keep all your data safe with a cold wallet, an offline store
that keeps bitcoin offline on a drive. It can be used for everyday
transactions, and with this service no one else can get at your funds. This
wallet is also used for viewing your account details and savings.
All of these services are provided simply to satisfy you. So trust us, and we
will surely secure your each and every transaction.




--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid fallback

2019-03-05 Thread Amos Jeffries
On 5/03/19 6:50 am, ronin1907 wrote:
> Hello,
> 
> I'm installing squid and it's working fine, and when I check from
> http://ipv6-test.com/ the fallback is working fine. My question is this:
> how can I turn this option off?
> 

Er, why?


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Issues setting up a proxy for malware scanning

2019-03-05 Thread Amos Jeffries
On 5/03/19 6:20 am, Egoitz Aurrekoetxea wrote:
> Hi mates!
> 
> 
> I was trying to setup a Squid server for the following matter. I wanted
> to have some modified url pointing to my Squid proxy, so that Squid to
> be able to connect to destination, scan the content and if all is ok,
> return a 3xx to the real URL. For that purpose I use the following
> configuration https://pastebin.com/raw/mP73fame . The url redirector in
> that config is  https://pastebin.com/p6Usmq75
> 
> 
> I'm facing the two following problems, probably due to not having a
> large experience in Squid :


It seems you do not have much experience with HTTP either.

You would be far better off forgetting this whole fake-domains and URL
redirection thing. It is almost the hardest possible way to do what you
say you want to do.

Just divert the client traffic to the proxy and scan (or not) as it goes
through. There are then no problems with changing HTTP response objects just
to get the traffic to a state where the scanner receives it, and no confusing
the scanner, which may at times be checking the domain naming pattern as part
of the signature.
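
As a rough sketch of that simpler approach (the ICAP URI, port and service
name are assumptions; point them at whatever your AV daemon actually listens
on):

    # plain interception port for the DNAT'd port-80 traffic
    http_port 3129 intercept
    # hand every response to the scanner at the pre-cache vectoring point
    icap_enable on
    icap_service service_av respmod_precache icap://127.0.0.1:1344/respmod bypass=off
    adaptation_access service_av allow all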


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid in container aborted on low memory server

2019-03-05 Thread Alex Rousskov
On 3/4/19 9:45 PM, George Xie wrote:

> #4  0x558a3d0a in comm_init () at comm.cc:1206
> 1206        fd_table = (fde *) xcalloc(Squid_MaxFD, sizeof(fde));
> (gdb) p Squid_MaxFD
> $1 = 1048576
> (gdb) p sizeof(fde)
> $2 = 392
> 
> It seems Squid_MaxFD is way too large, and its value is directly from ulimit:
> 
> # ulimit -n
> 1048576
> 
> therefore, I tried adding this option:
> 
> max_filedesc 4096
> 
> now squid works and only takes ~50 MB of memory.
> thanks very much for your help!

Glad you figured it out!

Alex.


> Xie Shi
> On Tue, Mar 5, 2019 at 12:22 PM George Xie  wrote:
>>
>>> To correct that default
>>> behavior, add this:
>>>   cache_mem 0
>>
>> thanks for your advice, but actually, I have tried this option before and
>> found no difference. besides, I have also tried `memory_pools off`.
>>
>>> Furthermore, older Squids, possibly including your no-longer-supported
>>> version, may allocate shared memory indexes where none are needed. That
>>> might explain why you see your Squid allocating a 392 MB table.
>>
>> that's fair, I will give squid 4.4 a try later.
>>
>>> If you want to know what is going on for sure, then configure malloc to
>>> dump core on allocation failures and post a stack trace leading to that
>>> allocation failure so that we know _what_ Squid was trying to allocate
>>> when it ran out of RAM.
>>
>> hope following backtrace is helpful:
>>
>> (gdb) bt
>> #0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
>> #1  0x7562e42a in __GI_abort () at abort.c:89
>> #2  0x55728eb5 in fatal_dump (
>> message=0x55e764e0  "xcalloc: Unable to allocate
>> 1048576 blocks of 392 bytes!\n") at fatal.cc:113
>> #3  0x55a09837 in xcalloc (n=1048576, sz=sz@entry=392) at 
>> xalloc.cc:90
>> #4  0x558a3d0a in comm_init () at comm.cc:1206
>> #5  0x55789104 in SquidMain (argc=, 
>> argv=0x7fffed48)
>> at main.cc:1481
>> #6  0x5568a48b in SquidMainSafe (argv=,
>> argc=)
>> at main.cc:1261
>> #7  main (argc=, argv=) at main.cc:1254
>>
>>
>> Xie Shi
>>
>>
>> On Tue, Mar 5, 2019 at 12:34 AM Alex Rousskov
>>  wrote:
>>>
>>> On 3/3/19 9:39 PM, George Xie wrote:
>>>
 Squid version: 3.5.23-5+deb9u1
>>>
 http_port 127.0.0.1:3128
 cache deny all
 access_log none
>>>
>>> Unfortunately, this configuration wastes RAM: Squid is not yet smart
>>> enough to understand that you do not want any caching and may allocate
>>> 256+ MB of memory cache plus supporting indexes. To correct that default
>>> behavior, add this:
>>>
>>>   cache_mem 0
>>>
>>> Furthermore, older Squids, possibly including your no-longer-supported
>>> version, may allocate shared memory indexes where none are needed. That
>>> might explain why you see your Squid allocating a 392 MB table.
>>>
>>> If you want to know what is going on for sure, then configure malloc to
>>> dump core on allocation failures and post a stack trace leading to that
>>> allocation failure so that we know _what_ Squid was trying to allocate
>>> when it ran out of RAM.
>>>
>>>
>>> HTH,
>>>
>>> Alex.
>>>
>>>
 runs in a container with following Dockerfile:

 FROM debian:9
 RUN apt update && \
 apt install --yes squid


 the total memory of the host server is very low, only 592m, about 370m
 free memory.
 if I start squid in the container, squid will abort immediately.

 error messages in /var/log/squid/cache.log:


 FATAL: xcalloc: Unable to allocate 1048576 blocks of 392 bytes!

 Squid Cache (Version 3.5.23): Terminated abnormally.
 CPU Usage: 0.012 seconds = 0.004 user + 0.008 sys
 Maximum Resident Size: 47168 KB


 error message captured with strace -f -e trace=memory:

 [pid   920] mmap(NULL, 411176960, PROT_READ|PROT_WRITE,
 MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)


 it appears that squid (or glibc) tries to allocate 392m memory, which is
 larger than host free memory 370m.
 but I guess squid don't need that much memory, I have another running
 squid instance, which only uses < 200m memory.
 the oddest thing is if I run squid on the host (also Debian 9) directly,
 not in the container, squid could start and run as normal.

 am I doing something wrong thing here?

 Xie Shi

 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users

>>>
>>> ___
>>> squid-users mailing list
>>> squid-users@lists.squid-cache.org
>>> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Need help blocking an specific HTTPS website

2019-03-05 Thread Felipe Arturo Polanco
I confirm that. I can see TCP_DENIED requests in the access.log for
web.whatsapp.com, but the website still loads.

1551192823.356 47 192.168.112.144 TCP_DENIED/403 4453 GET
https://web.whatsapp.com/ws - HIER_NONE/- text/html

On Mon, Mar 4, 2019 at 7:21 PM Leonardo Rodrigues wrote:

> On 04/03/2019 19:27, Felipe Arturo Polanco wrote:
>
> Hi,
>
> I have been trying to block https://web.whatsapp.com/ from squid and I
> have been unable to.
>
> So far I have this:
>
> I can block other HTTPS websites fine
> I can block www.whatsapp.com fine
> I cannot block web.whatsapp.com
>
> I have HTTPS transparent interception enabled and I am bumping all TCP
> connections, but still this one doesn't appear to get blocked by squid.
>
> This is part of my configuration:
> ===
> acl blockwa1 url_regex whatsapp\.com$
> acl blockwa2 dstdomain .whatsapp.com
> acl blockwa3 ssl::server_name .whatsapp.com
> acl step1 at_step SslBump1
>
>
> blockwa1 and blockwa2 should definitely block web.whatsapp.com ..
> your rules seem right.
>
> Can you confirm the web.whatsapp.com accesses are going through squid?
> Do these accesses appear in your access.log with something different than
> a DENIED status?
>
>
>
> --
>
>
>   Atenciosamente / Sincerily,
>   Leonardo Rodrigues
>   Solutti Tecnologia
>   http://www.solutti.com.br
>
>   Minha armadilha de SPAM, NÃO mandem email
>   gertru...@solutti.com.br
>   My SPAMTRAP, do not email it
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid and url modifying

2019-03-05 Thread Alex Rousskov
On 3/5/19 1:57 AM, Egoitz Aurrekoetxea wrote:

> I have Squid configured with the virus scanning software using ICAP and
> working. But, when I do :
> 
> acl matchear_todo url_regex [-i] ^.*$

FYI: "[-i]" is documentation syntax that means an optional flag called
"-i". If you want to use that "-i" flag, then type

  acl matchear_todo url_regex -i ^.*$

... but keep in mind that "-i" makes no sense when your regular
expression does not contain lowercase or uppercase letters. Adding "-i"
would not change which URLs such a regular expression would match.
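
For illustration, -i only ever matters for patterns that contain letters; the
ACL name and domain below are made up:

    # with -i this matches .example.com, .Example.COM, .EXAMPLE.com, etc.
    acl example_domains dstdom_regex -i \.example\.com$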


> http_reply_access deny matchear_todo
> deny_info   http://172.16.8.61/redirigir.php?url=%s matchear_todo

Why are you blocking based on URL instead of blocking based on the ICAP
scan result? In your earlier specifications, you wanted to
block/redirect only those transactions that were certified virus-free by
your ICAP client. The above matchear_todo ACL does not do that.


> it's always redirecting me without passing the own ICAP system...

Looking at the Squid code, what you describe overall seems impossible --
Squid checks http_reply_access _after_ the RESPMOD transaction, not
before it. Adding http_reply_access cannot disable ICAP scans AFAICT!
Are you sure it has that effect in your use case?


> I
> wanted the redirection to be done only when content is clean... this is
> doing it always... have I missed something?

Your ACL says nothing about "clean". It says "always". How does your
ICAP service mark "clean" (or "dirty") HTTP responses? Your ACL needs to
match that marking (or the absence of that marking).

Alex.


> On 2019-03-05 08:13, Alex Rousskov wrote:
> 
>> On 3/4/19 11:20 AM, Egoitz Aurrekoetxea wrote:
>>
>>> Clients, will ask :
>>>
>>> https://ooo..ttt.thesquidserver.org/
>>
>>> So the answer [to the second question] I assume should be yes.
>>
>> If I am interpreting your answers correctly, then your setup looks like
>> a reverse proxy to me. In that case, you do not need SslBump and
>> interception. You do need a web server certificate for the
>> ooo..ttt.thesquidserver.org domain, issued by a well-trusted CA.
>> Do you already have that?
>>
>>
>>> I have DNAT rules, for being able to
>>> redirect tcp/80 and tcp/443 to squid's port silently.
>>
>> Please note that your current Squid configuration is not a reverse proxy
>> configuration. It is an interception configuration. It also lacks
>> https_port for handling port 443 traffic. There are probably some
>> documents on Squid wiki (and/or elsewhere) explaining how to configure
>> Squid to become a reverse proxy. Follow them.
>>
>>
>>> I wanted to setup a proxy machine which I wanted to be able to receive
>>> url like :
>>>
>>> - www.iou.net.theproxy.com/hj.php?ui=9
>>> 
>>>
>>> If this site returns clean content (scanned by Icap server) the url
>>> redirector should return :
>>>
>>> - www.iou.net/hj.php?ui=9 
>>>  (the real
>>> url) as URL.
>>
>> OK.
>>
>>
>>> - Is it possible with Squid to achieve my goal?. With Squid, a
>>> redirector, and a Icap daemon which performs virus scanning...
>>
>> A redirector seems out of scope here -- it works on requests while you
>> want to rewrite (scanned by ICAP) responses.
>>
>> It is probably possible to use deny_info to respond with a redirect
>> message. To trigger a deny_info action, you would have to configure your
>> Squid to block virus-free responses, which is rather strange!
>>
>>
>>> - For plain http the config and the URL seem to be working BUT the virus
>>> are not being scanned. Could the config be adjusted for that?.
>>
>>
>> I would start by removing the redirector, "intercept", SslBump, and
>> disabling ICAP. Configure your Squid as a reverse proxy without any
>> virus scanning. Then add ICAP. Get the virus scanning working without
>> any URL manipulation. Once that is done, you can adjust Squid to block
>> virus-free responses (via http_reply_access) and trigger a deny_info
>> response containing an HTTP redirect.
>>
>>
>> Please note that once the browser gets a redirect to another site, that
>> browser is not going to revisit your reverse proxy for any content
>> related to that other site -- all requests for that other site will go
>> from the browser to that other site. Your proxy will not be in the loop
>> anymore. If that is not what you want, then you cannot use redirects at
>> all -- you would have to accelerate that other site for all requests
>> instead and make sure that other site does not contain absolute URLs
>> pointing the browser away from your reverse proxy.
>>
>>
>> Disclaimer: I have not tested the above ideas and, again, I may be
>> misinterpreting what you really want to achieve.
>>
>> Alex.
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> 
>> http://lists.squid-cache.org/listinfo/s

Re: [squid-users] Squid and url modifying

2019-03-05 Thread Egoitz Aurrekoetxea
Hi Alex!! 

I do answer below!! Many many thanks in advance...

---

EGOITZ AURREKOETXEA 
Systems Department 
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia) 
ego...@sarenet.es 
www.sarenet.es [3] 
Before printing this email, please consider whether it is really
necessary. 

On 2019-03-05 17:45, Alex Rousskov wrote:

> On 3/5/19 1:57 AM, Egoitz Aurrekoetxea wrote:
> 
>> I have Squid configured with the virus scanning software using ICAP and
>> working. But, when I do :
>> 
>> acl matchear_todo url_regex [-i] ^.*$
> 
> FYI: "[-i]" is documentation syntax that means an optional flag called
> "-i". If you want to use that "-i" flag, then type
> 
> acl matchear_todo url_regex -i ^.*$
> 
> ... but keep in mind that "-i" makes no sense when you regular
> expression does not contain small or capital characters. Adding "-i"
> would not change what URLs such a regular expression would match. 
> 
> I SEE... I THOUGHT IT WAS FOR MATCHING CASE INSENSITIVELY... SOME SORT OF 
> I/__/ 
> 
>> http_reply_access deny matchear_todo
>> deny_info   http://172.16.8.61/redirigir.php?url=%s matchear_todo
> 
> Why are you blocking based on URL instead of blocking based on the ICAP
> scan result? In your earlier specifications, you wanted to
> block/redirect only those transactions that were certified virus-free by
> your ICAP client. The above matchear_todo ACL does not do that. 
> 
> THAT WAS AN ATTEMPT OF ACHIEVING MY GOAL. REDIRECT REQUESTS TO A PHP WHICH 
> DOES THE REQUEST TO A "NEXT SQUID" AND THEN RETURN ONE THING OR ANOTHER 
> 
> SORRY, THAT'S WRONG. I HAVE DONE TONS OF TESTS... AT PRESENT... I DON'T 
> REALLY KNOW HOW TO DO THAT... I WOULD BE VERY THANKFUL IF YOU COULD GUIDE ME 
> ON HOW COULD I DO IT... IS IT POSSIBLE TO BE DONE FROM SQUID SIDE?. OR DOES 
> THE OWN ICAP IMPLEMENTATION DIRECTLY RETURN A 3XX ANSWER?. 
> 
>> it's always redirecting me without passing the own ICAP system...
> 
> Looking at the Squid code, what you describe overall seems impossible --
> Squid checks http_reply_access _after_ the RESPMOD transaction, not
> before it. Adding http_reply_access cannot disable ICAP scans AFAICT!
> Are you sure it has that effect in your use case? 
> 
> IT SEEMED TO DO SO YES I'LL TRY IT AGAIN 
> 
>> I
>> wanted the redirection to be done only when content is clean... this is
>> doing it always... have I missed something?
> 
> Your ACL says nothing about "clean". It says "always". How does your
> ICAP service mark "clean" (or "dirty") HTTP responses? Your ACL needs to
> match that marking (or the absence of that marking). 
> 
> COULD YOU GIVE ME A CLUE OF HOW COULD I DO IT?. 
> 
> Alex. 
> 
> THANKS ALEX 
> 
> On 2019-03-05 08:13, Alex Rousskov wrote:
> 
> On 3/4/19 11:20 AM, Egoitz Aurrekoetxea wrote:
> 
> Clients, will ask :
> 
> https://ooo..ttt.thesquidserver.org/ 
> So the answer [to the second question] I assume should be yes. 
> If I am interpreting your answers correctly, then your setup looks like
> a reverse proxy to me. In that case, you do not need SslBump and
> interception. You do need a web server certificate for the
> ooo..ttt.thesquidserver.org domain, issued by a well-trusted CA.
> Do you already have that?
> 
> I have DNAT rules, for being able to
> redirect tcp/80 and tcp/443 to squid's port silently. 
> Please note that your current Squid configuration is not a reverse proxy
> configuration. It is an interception configuration. It also lacks
> https_port for handling port 443 traffic. There are probably some
> documents on Squid wiki (and/or elsewhere) explaining how to configure
> Squid to become a reverse proxy. Follow them.
> 
> I wanted to setup a proxy machine which I wanted to be able to receive
> url like :
> 
> - www.iou.net.theproxy.com/hj.php?ui=9 [1]
> 
> 
> If this site returns clean content (scanned by Icap server) the url
> redirector should return :
> 
> - www.iou.net/hj.php?ui=9 [2] 
>  (the real
> url) as URL. 
> OK.
> 
> - Is it possible with Squid to achieve my goal?. With Squid, a
> redirector, and a Icap daemon which performs virus scanning... 
> A redirector seems out of scope here -- it works on requests while you
> want to rewrite (scanned by ICAP) responses.
> 
> It is probably possible to use deny_info to respond with a redirect
> message. To trigger a deny_info action, you would have to configure your
> Squid to block virus-free responses, which is rather strange!
> 
> - For plain http the config and the URL seem to be working BUT the virus
> are not being scanned. Could the config be adjusted for that?. 
> 
> I would start by removing the redirector, "intercept", SslBump, and
> disabling ICAP. Configure your Squid as a reverse proxy without any
> virus scanning. Then add ICAP. Get the virus scanning working without
> any URL manipulation. Once that is done, you can adjust Squid to block
> virus-free r

Re: [squid-users] Squid and url modifying

2019-03-05 Thread Alex Rousskov
On 3/5/19 9:59 AM, Egoitz Aurrekoetxea wrote:

> On 2019-03-05 17:45, Alex Rousskov wrote:
>> On 3/5/19 1:57 AM, Egoitz Aurrekoetxea wrote:
>>
>>> I have Squid configured with the virus scanning software using ICAP and
>>> working. But, when I do :
>>>
>>> acl matchear_todo url_regex [-i] ^.*$

>> FYI: "[-i]" is documentation syntax that means an optional flag called
>> "-i". If you want to use that "-i" flag, then type
>>
>>   acl matchear_todo url_regex -i ^.*$
>>
>> ... but keep in mind that "-i" makes no sense when your regular
>> expression does not contain lowercase or uppercase letters. Adding "-i"
>> would not change which URLs such a regular expression would match.
  
> I see... I thought it was for matching case insensitively...

You thought correctly. The -i flag enables case insensitive matches
indeed, but you are specifying that flag incorrectly (extra square
brackets), and it makes no sense to specify it at all for your specific
regular expression!


>>> http_reply_access deny matchear_todo
>>> deny_info   http://172.16.8.61/redirigir.php?url=%s matchear_todo

>> Why are you blocking based on URL instead of blocking based on the ICAP
>> scan result? In your earlier specifications, you wanted to
>> block/redirect only those transactions that were certified virus-free by
>> your ICAP client. The above matchear_todo ACL does not do that.
  
>> *That was an attempt of achieving my goal. Redirect requests to a php
>> which does the request to a "next Squid" and then return one thing or
>> another*

Sounds like you are asking about one thing and then testing/discussing
another. Doing so makes helping you more difficult. Focus on making the
simplest use case working first.


>> Is it possible to be done from Squid side?

Probably (as long as your ICAP service can signal clean/dirty status in
a way Squid ACLs can detect). Since you appear to change the
problem/goal, I am not sure what the answer to this question is.


> Or does the own ICAP implementation directly return a 3xx answer?

That works as well. In that case, you do not need deny_info tricks.

>> Your ACL says nothing about "clean". It says "always". How does your
>> ICAP service mark "clean" (or "dirty") HTTP responses? Your ACL needs to
>> match that marking (or the absence of that marking).
  
> Could you give me a clue of how could I do it?

I cannot because I do not know what your ICAP service is capable of (and
do not have the time to research that). For example, if your ICAP
service can add an HTTP header to dirty HTTP responses, then you can use
the corresponding Squid ACL to detect the presence of that header in the
adapted response.
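
As a minimal sketch of that idea (the header name is purely hypothetical --
check what, if anything, your ICAP service actually adds to the adapted HTTP
response):

    # matches responses where the ICAP service set an infection-marker header
    acl icap_marked_dirty rep_header X-Infection-Found .
    http_reply_access deny icap_marked_dirty

Your redirect-the-clean-ones logic would then key off the absence of that
marking instead.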

Alex.
  

>>> On 2019-03-05 08:13, Alex Rousskov wrote:
>>>
 On 3/4/19 11:20 AM, Egoitz Aurrekoetxea wrote:

> Clients, will ask :
>
> https://ooo..ttt.thesquidserver.org/

> So the answer [to the second question] I assume should be yes.

 If I am interpreting your answers correctly, then your setup looks like
 a reverse proxy to me. In that case, you do not need SslBump and
 interception. You do need an web server certificate for the
 ooo..ttt.thesquidserver.org domain, issued by a well-trusted CA.
 Do you already have that?


> I have DNAT rules, for being able to
> redirect tcp/80 and tcp/443 to squid's port silently.

 Please note that your current Squid configuration is not a reverse proxy
 configuration. It is an interception configuration. It also lacks
 https_port for handling port 443 traffic. There are probably some
 documents on Squid wiki (and/or elsewhere) explaining how to configure
 Squid to become a reverse proxy. Follow them.


> I wanted to setup a proxy machine which I wanted to be able to receive
> url like :
>
> - www.iou.net.theproxy.com/hj.php?ui=9
> 
> 
>
> If this site returns clean content (scanned by Icap server) the url
> redirector should return :
>
> - www.iou.net/hj.php?ui=9 
> 
>  (the real
> url) as URL.

 OK.


> - Is it possible with Squid to achieve my goal?. With Squid, a
> redirector, and a Icap daemon which performs virus scanning...

 A redirector seems out of scope here -- it works on requests while you
 want to rewrite (scanned by ICAP) responses.

 It is probably possible to use deny_info to respond with a redirect
 message. To trigger a deny_info action, you would have to configure your
 Squid to block virus-free responses, which is rather strange!


> - For plain http the config and the URL seem to be working BUT the
> virus
> are not being scanned. Could the config be adjusted for that?.


 I would start by removing th

Re: [squid-users] Need help blocking an specific HTTPS website

2019-03-05 Thread Amos Jeffries
On 6/03/19 5:11 am, Felipe Arturo Polanco wrote:
> I confirm that, I can see TCP_DENIED requests on the access.log to
> web.whatsapp.com  but still the websites loads.
> 
> 1551192823.356     47 192.168.112.144 TCP_DENIED/403 4453 GET
> https://web.whatsapp.com/ws - HIER_NONE/- text/html
> 


Perhaps WhatsApp uses other protocols to get through when denied by the
proxy.

Have you tried blocking UDP ports 80 and 443 (the QUIC protocol) in your
firewall?

And of course ports 4244, 5222, 5223, 5228 and 5242.
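
A rough firewall sketch, assuming the proxy box is also the gateway and an
iptables-based setup (chain names and interfaces will differ per network):

    # QUIC / HTTP over UDP
    iptables -A FORWARD -p udp -m multiport --dports 80,443 -j REJECT
    # WhatsApp's other known ports, TCP and UDP
    iptables -A FORWARD -p tcp -m multiport --dports 4244,5222,5223,5228,5242 -j REJECT
    iptables -A FORWARD -p udp -m multiport --dports 4244,5222,5223,5228,5242 -j REJECT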


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid and url modifying

2019-03-05 Thread Egoitz Aurrekoetxea
Hi!, 

Thank you so much for all your effort. We have finally got it done by
using a mixed solution: a script plus Squid in its currently configured mode
:) 

I really wanted to thank you for all your time, because it has been like gold
for me :) :) 

Bye mates! 

On 2019-03-05 18:48, Alex Rousskov wrote:

> On 3/5/19 9:59 AM, Egoitz Aurrekoetxea wrote:
> 
> On 2019-03-05 17:45, Alex Rousskov wrote: On 3/5/19 1:57 AM, Egoitz 
> Aurrekoetxea wrote:
> 
> I have Squid configured with the virus scanning software using ICAP and
> working. But, when I do :
> 
> acl matchear_todo url_regex [-i] ^.*$

>> FYI: "[-i]" is documentation syntax that means an optional flag called
>> "-i". If you want to use that "-i" flag, then type
>> 
>> acl matchear_todo url_regex -i ^.*$
>> 
>> ... but keep in mind that "-i" makes no sense when your regular
>> expression does not contain lowercase or uppercase letters. Adding "-i"
>> would not change which URLs such a regular expression would match.

> I see... I thought it was for matching case insensitively...

You thought correctly. The -i flag enables case insensitive matches
indeed, but you are specifying that flag incorrectly (extra square
brackets), and it makes no sense to specify it at all for your specific
regular expression!

> http_reply_access deny matchear_todo
> deny_info   http://172.16.8.61/redirigir.php?url=%s matchear_todo

>> Why are you blocking based on URL instead of blocking based on the ICAP
>> scan result? In your earlier specifications, you wanted to
>> block/redirect only those transactions that were certified virus-free by
>> your ICAP client. The above matchear_todo ACL does not do that.

>> *That was an attempt of achieving my goal. Redirect requests to a php
>> which does the request to a "next Squid" and then return one thing or
>> another*

Sounds like you are asking about one thing and then testing/discussing
another. Doing so makes helping you more difficult. Focus on making the
simplest use case working first.

>> Is it possible to be done from Squid side?

Probably (as long as your ICAP service can signal clean/dirty status in
a way Squid ACLs can detect). Since you appear to change the
problem/goal, I am not sure what the answer to this question is.

> Or does the own ICAP implementation directly return a 3xx answer?

That works as well. In that case, you do not need deny_info tricks.

>> Your ACL says nothing about "clean". It says "always". How does your
>> ICAP service mark "clean" (or "dirty") HTTP responses? Your ACL needs to
>> match that marking (or the absence of that marking).

> Could you give me a clue of how could I do it?

I cannot because I do not know what your ICAP service is capable of (and
do not have the time to research that). For example, if your ICAP
service can add an HTTP header to dirty HTTP responses, then you can use
the corresponding Squid ACL to detect the presence of that header in the
adapted response.

Alex.

> On 2019-03-05 08:13, Alex Rousskov wrote:
> 
> On 3/4/19 11:20 AM, Egoitz Aurrekoetxea wrote:
> 
> Clients, will ask :
> 
> https://ooo..ttt.thesquidserver.org/ 
> So the answer [to the second question] I assume should be yes. 
> If I am interpreting your answers correctly, then your setup looks like
> a reverse proxy to me. In that case, you do not need SslBump and
> interception. You do need a web server certificate for the
> ooo..ttt.thesquidserver.org domain, issued by a well-trusted CA.
> Do you already have that?
> 
> I have DNAT rules, for being able to
> redirect tcp/80 and tcp/443 to squid's port silently. 
> Please note that your current Squid configuration is not a reverse proxy
> configuration. It is an interception configuration. It also lacks
> https_port for handling port 443 traffic. There are probably some
> documents on Squid wiki (and/or elsewhere) explaining how to configure
> Squid to become a reverse proxy. Follow them.
> 
> I wanted to setup a proxy machine which I wanted to be able to receive
> url like :
> 
> - www.iou.net.theproxy.com/hj.php?ui=9 [1]
> 
> 
> 
> If this site returns clean content (scanned by Icap server) the url
> redirector should return :
> 
> - www.iou.net/hj.php?ui=9 [2] 
> 
>  (the real
> url) as URL. 
> OK.
> 
> - Is it possible with Squid to achieve my goal?. With Squid, a
> redirector, and a Icap daemon which performs virus scanning... 
> A redirector seems out of scope here -- it works on requests while you
> want to rewrite (scanned by ICAP) responses.
> 
> It is probably possible to use deny_info to respond with a redirect
> message. To trigger a deny_info action, you would have to configure your
> Squid to block virus-free responses, which is rather strange!
> 
> - For plain http the config and the URL seem to be working BUT the
> virus
> are not being scanned.