Re: [squid-users] squid in container aborted on low memory server

2019-03-04 Thread George Xie
>
> > On 4/03/19 5:39 pm, George Xie wrote:
> > > hi all:
> > >
> > > Squid version: 3.5.23-5+deb9u1
> > > Docker version 18.09.3, build 774a1f4
> > > Linux instance-4 4.9.0-8-amd64 #1 SMP Debian 4.9.130-2 (2018-10-27)
> > > x86_64 GNU/Linux
> > >
> > > I have the following squid config:
> > >
> > >
> > > http_port 127.0.0.1:3128
> > > cache deny all
> > > access_log none
> > >
> > What is it exactly that you think this is doing in regards to Squid
> > memory needs?
> >
>

sorry, I don't get your question.


> > >
> > > runs in a container with following Dockerfile:
> > >
> > > FROM debian:9
> > > RUN apt update && \
> > > apt install --yes squid
> > >
> > >
> > > the total memory of the host server is very low, only 592m, about 370m
> > > free memory.
> > > if I start squid in the container, squid will abort immediately.
> > >
> > > error messages in /var/log/squid/cache.log:
> > >
> > >
> > > FATAL: xcalloc: Unable to allocate 1048576 blocks of 392 bytes!
> > >
> > > Squid Cache (Version 3.5.23): Terminated abnormally.
> > > CPU Usage: 0.012 seconds = 0.004 user + 0.008 sys
> > > Maximum Resident Size: 47168 KB
> > >
> > >
> > > error message captured with strace -f -e trace=memory:
> > >
> > > [pid   920] mmap(NULL, 411176960, PROT_READ|PROT_WRITE,
> > > MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate
> memory)
> > >
> > >
> > > it appears that squid (or glibc) tries to allocate 392m memory, which
> is
> > > larger than host free memory 370m.
> > > but I guess squid don't need that much memory, I have another running
> > > squid instance, which only uses < 200m memory.
> > No doubt it is configured to use less memory. For example by reducing
> > the default memory cache size.
> >
>

that running squid instance has the same config.


> > > the oddest thing is if I run squid on the host (also Debian 9)
> directly,
> > > not in the container, squid could start and run as normal.
> > >
> > Linux typically allows RAM over-allocation. Which works okay so long as
> > there is sufficient swap space and there is time between memory usage to
> > do the swap in/out process.
> > Amos
>

swap is disabled on the host server, and so is it in the container.

after all, I wonder why squid would try to claim 392m of memory if it doesn't
need that much.

XieShi
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid-3.5.28 slowdown

2019-03-04 Thread Enrico Heine
Hm, I do at least "believe" that it is very likely to be the same with eCAP,
but I don't know that protocol in any way, so I can't give a qualified answer
on that.

Anyway, if it is your issue, then you can run the test command provided at any
time and watch the issue slowly emerge until it reaches an amount where tcp_mem
gets too big and a network rate limit is triggered by the kernel, which then
finally results in slow networking performance that can only be resolved with a
squid restart. Also check /var/log/kern.log for the point in time when you had
the slowness issue; you should see some lines there which you can provide
here.
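As a side note: on Linux the same count can be taken straight from /proc/net/tcp, without netstat. A small sketch (the hex socket-state code 08 is CLOSE_WAIT in the kernel's tables):

```python
#!/usr/bin/env python3
"""Count TCP sockets in CLOSE_WAIT by reading /proc/net/tcp and
/proc/net/tcp6 directly (Linux only). Equivalent in spirit to the
`netstat -pa | grep CLOSE_WAIT | wc -l` test from this thread."""

CLOSE_WAIT = "08"  # hex socket-state code for CLOSE_WAIT

def count_close_wait():
    total = 0
    for path in ("/proc/net/tcp", "/proc/net/tcp6"):
        try:
            with open(path) as f:
                next(f)  # skip the header line
                for line in f:
                    fields = line.split()
                    # field 3 ("st") holds the socket state in hex
                    if len(fields) > 3 and fields[3] == CLOSE_WAIT:
                        total += 1
        except FileNotFoundError:
            pass  # e.g. /proc not available or IPv6 disabled
    return total

if __name__ == "__main__":
    print("CLOSE_WAIT sockets:", count_close_wait())
```

If the number keeps growing between runs, the leak described above is a likely suspect.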

Anyway, if it is your bug, please share this info with us.

Br, Flashdown

Am 1. März 2019 23:43:49 MEZ schrieb Michael Hendrie :
>
>> On 1 Mar 2019, at 9:34 pm, Enrico Heine 
>wrote:
>> 
>> >>just a shot into the dark<<, is it possible that you use the
>adaption service for ICAP?
>
>There is an eCAP adaptation service but not ICAP; would eCAP be
>affected by the same condition reported in the bug report you linked to?
>Early in the investigation I did set 'ecap_enable off' and do 'squid
>-k reconfigure' while the condition was present, but it didn't restore
>speed; a full squid restart was required.
>
>> If so, fast test, this should return 0 if u are not affected by this,
>if higher than 0 check the link below:
>> netstat -pa | grep CLOSE_WAIT | wc -l 
>> 
>> also have a look into /var/log/kern.log 
>
>I will check these out next time the condition occurs
>
>Thanks,
>
>Michael

-- 
This message was sent from my Android device with K-9 Mail.


Re: [squid-users] Squid and url modifying

2019-03-04 Thread Alex Rousskov
On 3/4/19 12:53 AM, Egoitz Aurrekoetxea wrote:

> My idea is simple. I wanted specific url, to be filtered through the
> proxy. How can I manage this URL to be checked by the proxy?.

To answer your questions correctly, we need to translate the vague
description above into one of the many Squid configurations that may
match that vague description. In the hope of doing that, I am asking these two
basic questions:

1. Do clients/browsers request
https://ooo..ttt.thesquidserver.org/... URLs? Or do they request
https://ooo..ttt/... URLs?

For the purpose of the next question, let's assume that the answer to the
above question is: "Clients request https://publicDomain/... URLs"
(where "publicDomain" is one of the two domains mentioned in that
question). Let's further assume that when clients do a DNS lookup for
publicDomain they get a publicIp IP address back.

2. Does your Squid listen on port 443 of publicIp?

Alex.



> I assumed,
> I could modify the real and original content where urls appeared by
> setting for instance :
> 
> 
> - Being the real url : https://ooo..ttt/u?ii=99&j=88
> 
> 
> - I would rewrite in the own content the URL so that  the new URL is now
> : https://ooo..ttt.thesquidserver.org/u?ii=99&j=88
> 
> The domain thesquidserver.org
>  will be used
> for doing wildcards. For instance : *.thesquidserver.org
> *.*.thesquidserver.org etc... will resolve to the ip of the Squid
> server. But I don't want any url being asked as
> whatever.thesquidserver.org to be checked... just those ones I have
> wrote in some place...
> 
> 
> So I was trying to write some content managing script, which should
> check if that URL is needed to be checked and in case it should, check
> it against an icap service. If Icap service gives you all to be right,
> redirect you to the real site (just removing the thesquidserver.org for
> the URL for instance). If that URL contains malware for instance, give
> you an error page.
> 
> 
> This is all what I was trying to do... Some time ago, I used Squid with
> Dansguardian for this kind of purposes, but now I wanted to do something
> slightly different. I wanted to pass a request (if should be passed) to
> an icap service and later, depending on the result of that ICAP service
> (which I don't really know how could I check with an script) redirect to
> the real site or give an error page.
> 
> 
> Is this purpose perhaps the reason why url redirector programs
> exist? I'm trying to see the entire puzzle :)


> El 2019-03-02 23:21, Alex Rousskov escribió:
> 
>> On 3/1/19 5:59 AM, Egoitz Aurrekoetxea wrote:
>>
>>> Is it possible for Squid to do something like :
>>
>>> - Receive request :
>>> https://ooo..ttt.thesquidserver.org/u?ii=99&j=88
>>
>>> and
>>
>>> to really perform a request as : https://ooo..ttt/u?ii=99&j=88
>>
>> How does your Squid receive the former request? Amos' answer probably
>> assumes that your Squid is _not_ ooo..ttt.thesquidserver.org,
>> but the name you have chosen for your example may imply that it is.
>>
>> * If your Squid is _intercepting_ traffic destined for the real
>> ooo..ttt.thesquidserver.org, then see Amos' answer.
>>
>> * If your Squid is representing ooo..ttt.thesquidserver.org,
>> then your Squid is a reverse proxy that ought to have the certificate
>> key for that domain, and none of the SslBump problems that Amos
>> mentioned apply.
>>
>> Please clarify what your use case is.
>>
>> Alex.
>>
>>
>>
>>> I mean not to redirect users with url redirection. Just act as a proxy
>>> but having Squid the proper knowledge internally to be able to make
>>> the proper request to the destination?. Is it possible without
>>> redirecting url, to return for instance a 403 error to the source web
>>> browser in order to not be able to access to the site if some kind of
>>> circumstances are given?.
>>>
>>>
>>> If the last config, was not possible... perhaps I needed to just to
>>> redirect forcibly?. I have read for that purpose you can use URL
>>> redirectors so I assume the concept is :
>>>
>>>
>>> - Receive request :
>>> https://ooo..ttt.thesquidserver.org/u?ii=99&j=88
>>>
>>>
>>> and
>>>
>>>
>>> to really perform a request as : https://ooo..ttt/u?ii=99&j=88
>>> 
>>>
>>>
>>> If all conditions for allowing to see the content are OK, return the web
>>> browser a 301 redirect answer with the
>>> https://ooo..ttt/u?ii=99&j=88
>>>  URL. Else,
>>> just return a 403 or redirect you to a Forbidden page... I think this
>>> could be implemented with URL redirectors...but... the fact is... which
>>> kind of conditions or env situations can you use for validating the
>>> content inside the url redirector?.
>>>
>>>
>>>
>>> Thanks a lot 

Re: [squid-users] squid in container aborted on low memory server

2019-03-04 Thread Alex Rousskov
On 3/3/19 9:39 PM, George Xie wrote:

> Squid version: 3.5.23-5+deb9u1

> http_port 127.0.0.1:3128
> cache deny all
> access_log none

Unfortunately, this configuration wastes RAM: Squid is not yet smart
enough to understand that you do not want any caching and may allocate
256+ MB of memory cache plus supporting indexes. To correct that default
behavior, add this:

  cache_mem 0

Furthermore, older Squids, possibly including your no-longer-supported
version, may allocate shared memory indexes where none are needed. That
might explain why you see your Squid allocating a 392 MB table.

If you want to know what is going on for sure, then configure malloc to
dump core on allocation failures and post a stack trace leading to that
allocation failure so that we know _what_ Squid was trying to allocate
when it ran out of RAM.
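Putting this together, a minimal low-memory configuration for the no-cache setup above might look like this (a sketch; cache_mem is the key directive, and memory_pools off is an optional extra that returns freed memory to the OS instead of pooling it):

```
http_port 127.0.0.1:3128
cache deny all
cache_mem 0
access_log none
# optional: do not keep freed memory pooled inside the process
memory_pools off
```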


HTH,

Alex.


> runs in a container with following Dockerfile:
> 
> FROM debian:9
> RUN apt update && \
> apt install --yes squid
> 
> 
> the total memory of the host server is very low, only 592m, about 370m
> free memory.
> if I start squid in the container, squid will abort immediately. 
> 
> error messages in /var/log/squid/cache.log:
> 
> 
> FATAL: xcalloc: Unable to allocate 1048576 blocks of 392 bytes!
> 
> Squid Cache (Version 3.5.23): Terminated abnormally.
> CPU Usage: 0.012 seconds = 0.004 user + 0.008 sys
> Maximum Resident Size: 47168 KB
> 
> 
> error message captured with strace -f -e trace=memory:
> 
> [pid   920] mmap(NULL, 411176960, PROT_READ|PROT_WRITE,
> MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
> 
> 
> it appears that squid (or glibc) tries to allocate 392m memory, which is
> larger than host free memory 370m.
> but I guess squid don't need that much memory, I have another running
> squid instance, which only uses < 200m memory.
> the oddest thing is if I run squid on the host (also Debian 9) directly,
> not in the container, squid could start and run as normal.
> 
> am I doing something wrong here?
> 
> Xie Shi
> 


[squid-users] Issues setting up a proxy for malware scanning

2019-03-04 Thread Egoitz Aurrekoetxea
Hi mates! 

I was trying to set up a Squid server for the following purpose. I wanted
to have a modified url pointing to my Squid proxy, so that Squid can
connect to the destination, scan the content and, if all is ok,
return a 3xx to the real URL. For that purpose I use the following
configuration https://pastebin.com/raw/mP73fame . The url redirector in
that config is https://pastebin.com/p6Usmq75

I'm facing the two following problems, probably due to not having much
experience with Squid:

- I need the Sophos ICAP service to scan the content and verify there's no
malware there, before returning a 30X redirect to the real url.

- https content is not being redirected... I get the following error : 

curl -vv
https://2016.eicar.org.cloud-protection.sarenet.es/download/eicarcom2.zip
*   Trying 172.16.8.41...
* TCP_NODELAY set
* Connected to 2016.eicar.org.cloud-protection.sarenet.es (172.16.8.41)
port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection:
ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
* Closing connection 0
curl: (35) error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown
protocol 

Could anyone give us a clue for fixing these two issues? Is this
configuration possible?

Best regards,


[squid-users] Squid fallback

2019-03-04 Thread ronin1907
Hello,

I'm installing squid; it's working fine, and when I check from
http://ipv6-test.com/ the fallback works fine. My question is:
how can I disable this option?





Re: [squid-users] Squid and url modifying

2019-03-04 Thread Egoitz Aurrekoetxea
Hi Alex, 

I'm so sorry... I have tried explaining it the best I could... sorry

Clients, will ask : 

https://ooo..ttt.thesquidserver.org/ 

but the redirector, if the site is virus free (checked with an icap daemon),
should return a 302 to https://ooo..ttt/

For the second question: I have DNAT rules to silently redirect tcp/80 and
tcp/443 to squid's port. So the answer, I assume, should be yes.

I'll try to say again, what I'm trying to do. 

I wanted to setup a proxy machine which I wanted to be able to receive
url like : 

- www.iou.net.theproxy.com/hj.php?ui=9

If this site returns clean content (scanned by Icap server) the url
redirector should return : 

- www.iou.net/hj.php?ui=9 (the real url) as URL.

I'm using this config https://pastebin.com/raw/mP73fame and this
redirector code https://pastebin.com/p6Usmq75 
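(The pastebin helper above isn't reproduced here, but as an illustration, a minimal url_rewrite helper of this shape might look like the sketch below. It assumes Squid's url_rewrite helper result syntax `OK status=302 url=...` available from Squid 3.4 onwards, and uses the hypothetical `.theproxy.com` suffix from the example; ports and concurrency channel-IDs are ignored in this sketch.)

```python
#!/usr/bin/env python3
"""Minimal Squid url_rewrite helper sketch: if the requested host ends
with the proxy suffix, answer with a 302 redirect to the real host.
Reads one request line per URL on stdin, writes one result line each."""
import sys
from urllib.parse import urlsplit, urlunsplit

SUFFIX = ".theproxy.com"  # hypothetical suffix from the example above

def rewrite(line):
    url = line.split()[0]          # first token of the helper line is the URL
    parts = urlsplit(url)
    host = parts.hostname or ""    # note: lowercased, port dropped (sketch)
    if host.endswith(SUFFIX):
        real = urlunsplit((parts.scheme, host[:-len(SUFFIX)],
                           parts.path, parts.query, parts.fragment))
        return "OK status=302 url=" + real
    return "ERR"                   # leave all other URLs untouched

if __name__ == "__main__" and not sys.stdin.isatty():
    for req in sys.stdin:
        req = req.strip()
        if req:
            print(rewrite(req), flush=True)
```

The ICAP virus-scanning decision would still happen separately (adaptation happens on the request/response bodies, not inside the rewriter), so this only shows the redirect half of the idea.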

So I would say my questions are : 

- Is it possible to achieve my goal with Squid, a redirector, and an ICAP
daemon which performs virus scanning?

- For plain http the config and the URL seem to be working, BUT the viruses
are not being scanned. Could the config be adjusted for that?

Cheers! 

---

EGOITZ AURREKOETXEA
Systems department
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia)
ego...@sarenet.es
www.sarenet.es
Before printing this email, consider whether it is really necessary.

El 2019-03-04 17:23, Alex Rousskov escribió:

> On 3/4/19 12:53 AM, Egoitz Aurrekoetxea wrote:
> 
>> My idea is simple. I wanted specific url, to be filtered through the
>> proxy. How can I manage this URL to be checked by the proxy?.
> 
> To answer your questions correctly, we need to translate the vague
> description above into one of the many Squid configurations that may
> match that vague description. In hope to do that, I am asking these two
> basic questions:
> 
> 1. Do clients/browsers request
> https://ooo..ttt.thesquidserver.org/... URLs? Or do they request
> https://ooo..ttt/... URLs?
> 
> For the purpose of the next question, let's assume that the answer to the
> above question is: "Clients request https://publicDomain/... URLs"
> (where "publicDomain" is one of the two domains mentioned in that
> question). Let's further assume that when clients do a DNS lookup for
> publicDomain they get a publicIp IP address back.
> 
> 2. Does your Squid listen on port 443 of publicIp?
> 
> Alex.
> 
>> I assumed,
>> I could modify the real and original content where urls appeared by
>> setting for instance :
>> 
>> - Being the real url : https://ooo..ttt/u?ii=99&j=88
>> 
>> 
>> - I would rewrite in the own content the URL so that  the new URL is now
>> : https://ooo..ttt.thesquidserver.org/u?ii=99&j=88
>> 
>> The domain thesquidserver.org
>>  will be used
>> for doing wildcards. For instance : *.thesquidserver.org
>> *.*.thesquidserver.org etc... will resolve to the ip of the Squid
>> server. But I don't want any url being asked as
>> whatever.thesquidserver.org to be checked... just those ones I have
>> wrote in some place...
>> 
>> So I was trying to write some content managing script, which should
>> check if that URL is needed to be checked and in case it should, check
>> it against an icap service. If Icap service gives you all to be right,
>> redirect you to the real site (just removing the thesquidserver.org for
>> the URL for instance). If that URL contains malware for instance, give
>> you an error page.
>> 
>> This is all what I was trying to do... Some time ago, I used Squid with
>> Dansguardian for this kind of purposes, but now I wanted to do something
>> slightly different. I wanted to pass a request (if should be passed) to
>> an icap service and later, depending on the result of that ICAP service
>> (which I don't really know how could I check with an script) redirect to
>> the real site or give an error page.
>> 
>> Is this purpose perhaps the reason why url redirector programs
>> exist? I'm trying to see the entire puzzle :)
> 
> El 2019-03-02 23:21, Alex Rousskov escribió:
> 
> On 3/1/19 5:59 AM, Egoitz Aurrekoetxea wrote:
> 
> Is it possible for Squid to do something like : 
> - Receive request :
> https://ooo..ttt.thesquidserver.org/u?ii=99&j=88 
> and 
> to really perform a request as : https://ooo..ttt/u?ii=99&j=88 
> How does your Squid receive the former request? Amos' answer probably
> assumes that your Squid is _not_ ooo..ttt.thesquidserver.org,
> but the name you have chosen for your example may imply that it is.
> 
> * If your Squid is _intercepting_ traffic destined for the real
> ooo..ttt.thesquidserver.org, then see Amos' answer.
> 
> * If your Squid is representing ooo..ttt.thesquidserver.org,
> then your Squid is a reverse proxy that ought to have the certificate
> key for that domain, and none 

Re: [squid-users] squid in container aborted on low memory server

2019-03-04 Thread Matus UHLAR - fantomas

On 3/3/19 9:39 PM, George Xie wrote:

Squid version: 3.5.23-5+deb9u1


debian 9, currently stable, soon to be replaced by debian 10, which contains
squid-4.4


http_port 127.0.0.1:3128
cache deny all
access_log none


On 04.03.19 09:34, Alex Rousskov wrote:

Unfortunately, this configuration wastes RAM: Squid is not yet smart
enough to understand that you do not want any caching and may allocate
256+ MB of memory cache plus supporting indexes. To correct that default
behavior, add this:

 cache_mem 0


this should help most.


Furthermore, older Squids, possibly including your no-longer-supported
version


it's supported, just not by the squid developers. There are many SW
distributions that try to support software for longer than just a few
weeks/months, e.g. during a whole few-year release cycle.


might explain why you see your Squid allocating a 392 MB table.

If you want to know what is going on for sure, then configure malloc to
dump core on allocation failures and post a stack trace leading to that
allocation failure so that we know _what_ Squid was trying to allocate
when it ran out of RAM.

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
The early bird may get the worm, but the second mouse gets the cheese. 


[squid-users] Need help blocking an specific HTTPS website

2019-03-04 Thread Felipe Arturo Polanco
Hi,

I have been trying to block https://web.whatsapp.com/ from squid and I have
been unable to.

So far I have this:

I can block other HTTPS websites fine
I can block www.whatsapp.com fine
I cannot block web.whatsapp.com

I have HTTPS transparent interception enabled and I am bumping all TCP
connections, but still this one doesn't appear to get blocked by squid.

This is part of my configuration:
===
acl blockwa1 url_regex whatsapp\.com$
acl blockwa2 dstdomain .whatsapp.com
acl blockwa3 ssl::server_name .whatsapp.com
acl step1 at_step SslBump1

http_access deny blockwa1
http_access deny blockwa2
http_access deny blockwa3

ssl_bump peek step1
ssl_bump bump all
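For reference, one commonly suggested arrangement is to act on the SNI after the peek, terminating matching connections before any bumping happens (an untested sketch reusing the ACL names above; ssl::server_name is only reliably populated once step 1 has peeked at the client hello):

```
acl step1 at_step SslBump1
ssl_bump peek step1
# the SNI from the peeked client hello is now available:
ssl_bump terminate blockwa3
ssl_bump bump all
```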


Can anyone advise me here?

Thanks,


Re: [squid-users] icap not answering

2019-03-04 Thread steven
Ah, thank you for that clarification. The python icap servers I tested so
far are not very promising, but at least there's a connection now.


Sadly, squid does not allow http access at all, only https access.



access.log


1551740163.106  0 192.168.10.116 TCP_MISS/500 4776 GET 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-to-listen-to-HTTPS-td4682393.html 
- HIER_NONE/- text/html
1551740163.173  0 192.168.10.116 TCP_IMS_HIT/304 294 GET 
http://backup:3128/squid-internal-static/icons/SN.png - HIER_NONE/- 
image/png


backup is the host where squid is running on


the webpage shown in the browser says: *Unable to forward this request 
at this time.*



cache.log

2019/03/05 00:08:30.319 kid1| 28,4| Eui48.cc(179) lookup: 
id=0x5559d1923114 query ARP table
2019/03/05 00:08:30.319 kid1| 28,4| Eui48.cc(224) lookup: 
id=0x5559d1923114 query ARP on each interface (160 found)
2019/03/05 00:08:30.319 kid1| 28,4| Eui48.cc(230) lookup: 
id=0x5559d1923114 found interface lo
2019/03/05 00:08:30.319 kid1| 28,4| Eui48.cc(230) lookup: 
id=0x5559d1923114 found interface eth0
2019/03/05 00:08:30.319 kid1| 28,4| Eui48.cc(239) lookup: 
id=0x5559d1923114 looking up ARP address for 192.168.10.116 on eth0
2019/03/05 00:08:30.319 kid1| 28,4| Eui48.cc(275) lookup: 
id=0x5559d1923114 got address a4:34:d9:ea:b3:34 on eth0
2019/03/05 00:08:30.319 kid1| 28,3| Checklist.cc(70) preCheck: 
0x5559d14e2f78 checking slow rules
2019/03/05 00:08:30.319 kid1| 28,5| Acl.cc(124) matches: checking 
(ssl_bump rules)
2019/03/05 00:08:30.320 kid1| 28,5| Checklist.cc(397) bannedAction: 
Action 'ALLOWED/3' is not banned
2019/03/05 00:08:30.320 kid1| 28,5| Acl.cc(124) matches: checking 
(ssl_bump rule)

2019/03/05 00:08:30.320 kid1| 28,5| Acl.cc(124) matches: checking step1
2019/03/05 00:08:30.320 kid1| 28,3| Acl.cc(151) matches: checked: step1 = 1
2019/03/05 00:08:30.320 kid1| 28,3| Acl.cc(151) matches: checked: 
(ssl_bump rule) = 1
2019/03/05 00:08:30.320 kid1| 28,3| Acl.cc(151) matches: checked: 
(ssl_bump rules) = 1
2019/03/05 00:08:30.320 kid1| 28,3| Checklist.cc(63) markFinished: 
0x5559d14e2f78 answer ALLOWED for match
2019/03/05 00:08:30.320 kid1| 28,3| Checklist.cc(163) checkCallback: 
ACLChecklist::checkCallback: 0x5559d14e2f78 answer=ALLOWED
2019/03/05 00:08:30.320 kid1| 28,3| Checklist.cc(70) preCheck: 
0x5559d19279a8 checking slow rules
2019/03/05 00:08:30.320 kid1| 28,5| Acl.cc(124) matches: checking 
http_access
2019/03/05 00:08:30.320 kid1| 28,5| Checklist.cc(397) bannedAction: 
Action 'ALLOWED/0' is not banned
2019/03/05 00:08:30.320 kid1| 28,5| Acl.cc(124) matches: checking 
http_access#1

2019/03/05 00:08:30.320 kid1| 28,5| Acl.cc(124) matches: checking localnet
2019/03/05 00:08:30.320 kid1| 28,9| Ip.cc(96) aclIpAddrNetworkCompare: 
aclIpAddrNetworkCompare: compare: 
192.168.10.116:45900/[:::::::ff00] 
(192.168.10.0:45900)  vs 
192.168.10.0-[::]/[:::::::ff00]
2019/03/05 00:08:30.320 kid1| 28,3| Ip.cc(538) match: aclIpMatchIp: 
'192.168.10.116:45900' found
2019/03/05 00:08:30.320 kid1| 28,3| Acl.cc(151) matches: checked: 
localnet = 1
2019/03/05 00:08:30.320 kid1| 28,3| Acl.cc(151) matches: checked: 
http_access#1 = 1
2019/03/05 00:08:30.320 kid1| 28,3| Acl.cc(151) matches: checked: 
http_access = 1
2019/03/05 00:08:30.320 kid1| 28,3| Checklist.cc(63) markFinished: 
0x5559d19279a8 answer ALLOWED for match
2019/03/05 00:08:30.320 kid1| 28,3| Checklist.cc(163) checkCallback: 
ACLChecklist::checkCallback: 0x5559d19279a8 answer=ALLOWED
2019/03/05 00:08:30.320 kid1| 28,4| FilledChecklist.cc(67) 
~ACLFilledChecklist: ACLFilledChecklist destroyed 0x7fff85d5a130
2019/03/05 00:08:30.320 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x7fff85d5a130
2019/03/05 00:08:30.320 kid1| 28,4| FilledChecklist.cc(67) 
~ACLFilledChecklist: ACLFilledChecklist destroyed 0x7fff85d5a130
2019/03/05 00:08:30.320 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x7fff85d5a130
2019/03/05 00:08:30.320 kid1| 28,4| FilledChecklist.cc(67) 
~ACLFilledChecklist: ACLFilledChecklist destroyed 0x5559d19279a8
2019/03/05 00:08:30.320 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x5559d19279a8
2019/03/05 00:08:30.320 kid1| 28,4| FilledChecklist.cc(67) 
~ACLFilledChecklist: ACLFilledChecklist destroyed 0x5559d14e2f78
2019/03/05 00:08:30.320 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x5559d14e2f78





current squid config:

#icap
icap_enable off
icap_preview_enable off
icap_send_client_ip on
icap_send_client_username on
icap_service service_req reqmod_precache bypass=1 
icap://127.0.0.1:1344/request

adaptation_access service_req allow all
icap_service service_resp respmod_precache bypass=0 
icap://127.0.0.1:1344/response

adaptation_access service_resp allow all
acl localnet src 192.168.10.0/24
acl CONNECT method CONNECT
http_access allow localnet

Re: [squid-users] Need help blocking an specific HTTPS website

2019-03-04 Thread Leonardo Rodrigues

Em 04/03/2019 19:27, Felipe Arturo Polanco escreveu:

Hi,

I have been trying to block https://web.whatsapp.com/ from squid and I 
have been unable to.


So far I have this:

I can block other HTTPS websites fine
I can block www.whatsapp.com  fine
I cannot block web.whatsapp.com 

I have HTTPS transparent interception enabled and I am bumping all TCP 
connections, but still this one doesn't appear to get blocked by squid.


This is part of my configuration:
===
acl blockwa1 url_regex whatsapp\.com$
acl blockwa2 dstdomain .whatsapp.com 
acl blockwa3 ssl::server_name .whatsapp.com 
acl step1 at_step SslBump1



    blockwa1 and blockwa2 should definitely block web.whatsapp.com ..
your rules seem right.


    Can you confirm the web.whatsapp.com accesses are going through
squid? Are these accesses in your access.log with a status other than
DENIED?




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it




Re: [squid-users] SSL Accel Connection Reset

2019-03-04 Thread chia123
Hi Robert,
How did you resolve this issue?
From what I read, curl didn't support https proxies until version 7.52.0.
I'm running into a similar problem where my machine is sending a plaintext
CONNECT to the https proxy instead of starting a TLS handshake.
I'm using python urllib2, but I also tried a pre-7.52.0 curl and it has
the same issue.
Any help will be greatly appreciated.

Thanks





Re: [squid-users] squid in container aborted on low memory server

2019-03-04 Thread George Xie
> To correct that default
> behavior, add this:
>   cache_mem 0

thanks for your advice, but actually I have tried this option before and
found no difference. besides, I have also tried `memory_pools off`.

> Furthermore, older Squids, possibly including your no-longer-supported
> version, may allocate shared memory indexes where none are needed. That
> might explain why you see your Squid allocating a 392 MB table.

that's fair, I will give squid 4.4 a try later.

> If you want to know what is going on for sure, then configure malloc to
> dump core on allocation failures and post a stack trace leading to that
> allocation failure so that we know _what_ Squid was trying to allocate
> when it ran out of RAM.

hope the following backtrace is helpful:

(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x7562e42a in __GI_abort () at abort.c:89
#2  0x55728eb5 in fatal_dump (
message=0x55e764e0  "xcalloc: Unable to allocate
1048576 blocks of 392 bytes!\n") at fatal.cc:113
#3  0x55a09837 in xcalloc (n=1048576, sz=sz@entry=392) at xalloc.cc:90
#4  0x558a3d0a in comm_init () at comm.cc:1206
#5  0x55789104 in SquidMain (argc=, argv=0x7fffed48)
at main.cc:1481
#6  0x5568a48b in SquidMainSafe (argv=,
argc=)
at main.cc:1261
#7  main (argc=, argv=) at main.cc:1254
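One observation (an assumption, not confirmed in the thread): the 1048576 count in frame #3 matches the default file-descriptor limit in many container setups, and comm_init() allocates per-descriptor tables, so 1048576 descriptors × 392 bytes ≈ 392 MB would explain the numbers exactly. A quick way to check the limit:

```python
#!/usr/bin/env python3
"""Sketch: relate the process's open-file limit to the failed
allocation above (1048576 blocks of 392 bytes). Assumes Squid sizes
a 392-byte-per-descriptor table by RLIMIT_NOFILE."""
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("nofile soft limit:", soft)
print("a 392-byte-per-fd table would need ~%.0f MB" % (soft * 392 / 2**20))
# If the container reports 1048576 here, capping the limit at container
# start (e.g. `docker run --ulimit nofile=1024:4096 ...`) should shrink
# the table Squid tries to allocate.
```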


Xie Shi


On Tue, Mar 5, 2019 at 12:34 AM Alex Rousskov
 wrote:
>
> On 3/3/19 9:39 PM, George Xie wrote:
>
> > Squid version: 3.5.23-5+deb9u1
>
> > http_port 127.0.0.1:3128
> > cache deny all
> > access_log none
>
> Unfortunately, this configuration wastes RAM: Squid is not yet smart
> enough to understand that you do not want any caching and may allocate
> 256+ MB of memory cache plus supporting indexes. To correct that default
> behavior, add this:
>
>   cache_mem 0
>
> Furthermore, older Squids, possibly including your no-longer-supported
> version, may allocate shared memory indexes where none are needed. That
> might explain why you see your Squid allocating a 392 MB table.
>
> If you want to know what is going on for sure, then configure malloc to
> dump core on allocation failures and post a stack trace leading to that
> allocation failure so that we know _what_ Squid was trying to allocate
> when it ran out of RAM.
>
>
> HTH,
>
> Alex.
>
>
> > runs in a container with following Dockerfile:
> >
> > FROM debian:9
> > RUN apt update && \
> > apt install --yes squid
> >
> >
> > the total memory of the host server is very low, only 592m, about 370m
> > free memory.
> > if I start squid in the container, squid will abort immediately.
> >
> > error messages in /var/log/squid/cache.log:
> >
> >
> > FATAL: xcalloc: Unable to allocate 1048576 blocks of 392 bytes!
> >
> > Squid Cache (Version 3.5.23): Terminated abnormally.
> > CPU Usage: 0.012 seconds = 0.004 user + 0.008 sys
> > Maximum Resident Size: 47168 KB
> >
> >
> > error message captured with strace -f -e trace=memory:
> >
> > [pid   920] mmap(NULL, 411176960, PROT_READ|PROT_WRITE,
> > MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
> >
> >
> > it appears that squid (or glibc) tries to allocate 392m memory, which is
> > larger than host free memory 370m.
> > but I guess squid don't need that much memory, I have another running
> > squid instance, which only uses < 200m memory.
> > the oddest thing is if I run squid on the host (also Debian 9) directly,
> > not in the container, squid could start and run as normal.
> >
> > am I doing something wrong here?
> >
> > Xie Shi
> >


Re: [squid-users] icap not answering

2019-03-04 Thread Amos Jeffries
On 5/03/19 12:10 pm, steven wrote:
> Ah, thank you for that clarification. The Python ICAP servers I tested
> so far are not very promising, but at least there's a connection now.
> 
> Sadly, Squid does not allow HTTP access at all, only HTTPS access.
> 

Er, that would be because the only http_port you have is configured with
'accel' - making it a reverse-proxy port. But you do not have any
cache_peer configured to handle that type of traffic.


So, is there any particular reason you have that port receiving 'accel'
/ reverse-proxy mode traffic?
If not, remove that mode flag and things should all work for HTTP too.
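For illustration only, an untested sketch of what the http_port line from the config quoted further down might look like with the 'accel' mode flag removed:

```
http_port 3128 ssl-bump generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB cert=/etc/squid/myCA.pem
```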


> 
> access.log
> 
> 
> 1551740163.106  0 192.168.10.116 TCP_MISS/500 4776 GET
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-to-listen-to-HTTPS-td4682393.html
> - HIER_NONE/- text/html

> 1551740163.173  0 192.168.10.116 TCP_IMS_HIT/304 294 GET
> http://backup:3128/squid-internal-static/icons/SN.png - HIER_NONE/-
> image/png
> 

That is part of the 500 error page being delivered.

Since you are running a reverse-proxy, the Squid visible host name
really should be a FQDN so visitors can resolve the URLs of content
provided by Squid.


> backup is the host where squid is running on
> 
> 
> the webpage shown in the browser says: *Unable to forward this request
> at this time.*
> 
> 
> cache.log
> 

The log section provided shows only the first http_access and ssl_bump
rules deciding to allow the client to contact the proxy so it can peek
at the TLS client handshake.


> current squid config:
> 
> #icap
> icap_enable off
> icap_preview_enable off
> icap_send_client_ip on
> icap_send_client_username on
> icap_service service_req reqmod_precache bypass=1
> icap://127.0.0.1:1344/request
> adaptation_access service_req allow all
> icap_service service_resp respmod_precache bypass=0
> icap://127.0.0.1:1344/response
> adaptation_access service_resp allow all
> acl localnet src 192.168.10.0/24
> acl CONNECT method CONNECT

NP: the CONNECT ACL should be a built-in now. No need for the line above :-)


> http_access allow localnet
...
> http_port 3128 accel ssl-bump generate-host-certificates=on \
> dynamic_cert_mem_cache_size=4MB cert=/etc/squid/myCA.pem



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid and url modifying

2019-03-04 Thread Alex Rousskov
On 3/4/19 11:20 AM, Egoitz Aurrekoetxea wrote:

> Clients, will ask :
> 
> https://ooo..ttt.thesquidserver.org/

> So the answer [to the second question] I assume should be yes.

If I am interpreting your answers correctly, then your setup looks like
a reverse proxy to me. In that case, you do not need SslBump and
interception. You do need a web server certificate for the
ooo..ttt.thesquidserver.org domain, issued by a well-trusted CA.
Do you already have that?


> I have DNAT rules, for being able to
> redirect tcp/80 and tcp/443 to squid's port silently.

Please note that your current Squid configuration is not a reverse proxy
configuration. It is an interception configuration. It also lacks
https_port for handling port 443 traffic. There are probably some
documents on Squid wiki (and/or elsewhere) explaining how to configure
Squid to become a reverse proxy. Follow them.
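For example, a minimal reverse-proxy sketch for a single accelerated site might look like the following. All host names and paths here are hypothetical, and on Squid 3.x the peer encryption flag is 'ssl' rather than 'tls'; check the wiki examples before adapting:

```
https_port 443 accel cert=/etc/squid/site.pem defaultsite=www.example.com
cache_peer origin.example.com parent 443 0 no-query originserver tls name=origin
acl our_site dstdomain www.example.com
http_access allow our_site
cache_peer_access origin allow our_site
http_access deny all
```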


> I wanted to setup a proxy machine which I wanted to be able to receive
> url like :
> 
> - www.iou.net.theproxy.com/hj.php?ui=9
> 
> If this site returns clean content (scanned by Icap server) the url
> redirector should return :
> 
> - www.iou.net/hj.php?ui=9  (the real
> url) as URL.

OK.


> - Is it possible with Squid to achieve my goal? With Squid, a
> redirector, and an ICAP daemon which performs virus scanning...

A redirector seems out of scope here -- it works on requests while you
want to rewrite (scanned by ICAP) responses.

It is probably possible to use deny_info to respond with a redirect
message. To trigger a deny_info action, you would have to configure your
Squid to block virus-free responses, which is rather strange!


> - For plain HTTP the config and the URL seem to be working BUT the viruses
> are not being scanned. Could the config be adjusted for that?


I would start by removing the redirector, "intercept", SslBump, and
disabling ICAP. Configure your Squid as a reverse proxy without any
virus scanning. Then add ICAP. Get the virus scanning working without
any URL manipulation. Once that is done, you can adjust Squid to block
virus-free responses (via http_reply_access) and trigger a deny_info
response containing an HTTP redirect.
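An untested configuration sketch of that last step, assuming the ICAP service marks clean responses with a (hypothetical) X-Scan-Result header. Note the redirect URL below is the static example from this thread; a real deployment would need some way to derive the target from the request:

```
# Hypothetical header set by the ICAP server on virus-free responses
acl clean_reply rep_header X-Scan-Result -i ^clean$

# "Block" clean responses, and turn that denial into a 302 redirect
http_reply_access deny clean_reply
deny_info 302:http://www.iou.net/hj.php?ui=9 clean_reply
```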


Please note that once the browser gets a redirect to another site, that
browser is not going to revisit your reverse proxy for any content
related to that other site -- all requests for that other site will go
from the browser to that other site. Your proxy will not be in the loop
anymore. If that is not what you want, then you cannot use redirects at
all -- you would have to accelerate that other site for all requests
instead and make sure that other site does not contain absolute URLs
pointing the browser away from your reverse proxy.


Disclaimer: I have not tested the above ideas and, again, I may be
misinterpreting what you really want to achieve.

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid and url modifying

2019-03-04 Thread Egoitz Aurrekoetxea
Good morning Alex, 

Thank you so much for your time. Your interpretation is, I would say,
almost exact. I say almost because I wanted the machine to be a reverse
proxy for multiple sites, not just for the sites you host or similar...
And yes, if all is OK I wanted to "block" the request by returning a 301
pointing directly at the site. Yes, I know the browser won't traverse the
proxy any more after that, BUT it will only go direct if the content is
clean. If it is not, it will receive an error response from the ICAP
server, so all is fine then... 

I'll look carefully at your comments and will report back here :) 

Thank you so much!

---

EGOITZ AURREKOETXEA 
Systems Dept. 
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia) 
ego...@sarenet.es 
www.sarenet.es [3] 
Before printing this email, please consider whether it is necessary to
do so. 

On 2019-03-05 08:13, Alex Rousskov wrote:

> On 3/4/19 11:20 AM, Egoitz Aurrekoetxea wrote:
> 
>> Clients, will ask :
>> 
>> https://ooo..ttt.thesquidserver.org/
> 
>> So the answer [to the second question] I assume should be yes.
> 
> If I am interpreting your answers correctly, then your setup looks like
> a reverse proxy to me. In that case, you do not need SslBump and
> interception. You do need a web server certificate for the
> ooo..ttt.thesquidserver.org domain, issued by a well-trusted CA.
> Do you already have that?
> 
>> I have DNAT rules, for being able to
>> redirect tcp/80 and tcp/443 to squid's port silently.
> 
> Please note that your current Squid configuration is not a reverse proxy
> configuration. It is an interception configuration. It also lacks
> https_port for handling port 443 traffic. There are probably some
> documents on Squid wiki (and/or elsewhere) explaining how to configure
> Squid to become a reverse proxy. Follow them.
> 
>> I wanted to setup a proxy machine which I wanted to be able to receive
>> url like :
>> 
>> - www.iou.net.theproxy.com/hj.php?ui=9 [1]
>> 
>> If this site returns clean content (scanned by Icap server) the url
>> redirector should return :
>> 
>> - www.iou.net/hj.php?ui=9 [2]  (the real
>> url) as URL.
> 
> OK.
> 
>> - Is it possible with Squid to achieve my goal? With Squid, a
>> redirector, and an ICAP daemon which performs virus scanning...
> 
> A redirector seems out of scope here -- it works on requests while you
> want to rewrite (scanned by ICAP) responses.
> 
> It is probably possible to use deny_info to respond with a redirect
> message. To trigger a deny_info action, you would have to configure your
> Squid to block virus-free responses, which is rather strange!
> 
>> - For plain HTTP the config and the URL seem to be working BUT the viruses
>> are not being scanned. Could the config be adjusted for that?
> 
> I would start by removing the redirector, "intercept", SslBump, and
> disabling ICAP. Configure your Squid as a reverse proxy without any
> virus scanning. Then add ICAP. Get the virus scanning working without
> any URL manipulation. Once that is done, you can adjust Squid to block
> virus-free responses (via http_reply_access) and trigger a deny_info
> response containing an HTTP redirect.
> 
> Please note that once the browser gets a redirect to another site, that
> browser is not going to revisit your reverse proxy for any content
> related to that other site -- all requests for that other site will go
> from the browser to that other site. Your proxy will not be in the loop
> anymore. If that is not what you want, then you cannot use redirects at
> all -- you would have to accelerate that other site for all requests
> instead and make sure that other site does not contain absolute URLs
> pointing the browser away from your reverse proxy.
> 
> Disclaimer: I have not tested the above ideas and, again, I may be
> misinterpreting what you really want to achieve.
> 
> Alex.
 

Links:
--
[1] http://www.iou.net.theproxy.com/hj.php?ui=9
[2] http://www.iou.net/hj.php?ui=9
[3] http://www.sarenet.es
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users