Re: [squid-users] Question: cache_mem share among multiple squid instances with the same service_name in SMP mode

2024-05-20 Thread Zhijian Li (Fujitsu)
Alex,


On 20/05/2024 22:09, Alex Rousskov wrote:
> On 2024-05-20 03:35, Zhijian Li (Fujitsu) wrote:
> 
>> In SMP mode, is it possible that cache_mem can be shared among
>> multiple squid instances with the same service_name?
> 
> Short answer: "Do not run multiple SMP Squid instances with the same 
> service_name".
> 
> SMP Squid cache[1] is not supposed to be shared across Squid instances. Any 
> configurations or actions that result in such sharing are unsupported and may 
> lead to undefined behavior. Squid may not emit warnings or other diagnostics 
> in such unsupported cases -- no Squid code has been written specifically to 
> detect and warn about unsupported memory sharing.
> 
> For example, running multiple identically-built Squid instances on the same 
> "box"[2] with the _same_ service name is unsupported and may lead to 
> undefined behavior, especially if SMP Squid cache[1] is enabled.
> 
> Running multiple identically-built Squid instances on the same "box"[2] with 
> _different_ service names is supported on some OSes because it does not lead 
> to unsupported sharing. In other environments, it may lead to undefined 
> behavior. This limitation is a Squid bug or a missing feature. Developers 
> wishing to remove this limitation should look at 
> shm_portable_segment_name_is_path() description and use case.
> 

Understood, many thanks for your detailed explanation.


Thanks
Zhijian

> [1]: Here, "SMP Squid cache" applies to both cache_dir storage (on disk and 
> in shared memory) and cache_mem storage (in shared memory). Very similar 
> reasoning applies to non-caching SMP Squid instances as well, but the 
> question was about caching, so I will not detail these other cases.
> 
> [2]: Here, the term "box" is used to mean "isolation environment": "Same box" 
> means the same OS instance, the same container instance (if containerized), 
> and the same filesystem (i.e. no chroot, jails, or similar isolation tricks 
> for each Squid instance). Various OSes isolate shared memory segments 
> differently, but many use file systems for some shared memory artifacts. If 
> artifacts from different Squid instances clash, Squid behavior is undefined.
> 
> 
> HTH,
> 
> Alex.
> 
>>
>> Per SmpScale[1], "memory object cache (in most environments)" can be shared
>> among workers
>> Per smp-enabled-squid[2], "Each set of SMP-aware processes will interact 
>> only with other processes using the same service name"
>>
>> So if I have multiple (SMP mode + same service_name) squid instances, would
>> they share the cache_mem objects?
>>
>> [1] https://wiki.squid-cache.org/Features/SmpScale#what-can-workers-share
>> [2] 
>> https://wiki.squid-cache.org/KnowledgeBase/MultipleInstances#smp-enabled-squid
>>
>>
>> Thanks
>> Zhijian
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> https://lists.squid-cache.org/listinfo/squid-users
> 


Re: [squid-users] Question: cache_mem share among multiple squid instances with the same service_name in SMP mode

2024-05-20 Thread Alex Rousskov

On 2024-05-20 03:35, Zhijian Li (Fujitsu) wrote:


In SMP mode, is it possible that cache_mem can be shared among
multiple squid instances with the same service_name?


Short answer: "Do not run multiple SMP Squid instances with the same 
service_name".


SMP Squid cache[1] is not supposed to be shared across Squid instances. 
Any configurations or actions that result in such sharing are 
unsupported and may lead to undefined behavior. Squid may not emit 
warnings or other diagnostics in such unsupported cases -- no Squid code 
has been written specifically to detect and warn about unsupported 
memory sharing.


For example, running multiple identically-built Squid instances on the 
same "box"[2] with the _same_ service name is unsupported and may lead 
to undefined behavior, especially if SMP Squid cache[1] is enabled.


Running multiple identically-built Squid instances on the same "box"[2] 
with _different_ service names is supported on some OSes because it does 
not lead to unsupported sharing. In other environments, it may lead to 
undefined behavior. This limitation is a Squid bug or a missing feature. 
Developers wishing to remove this limitation should look at 
shm_portable_segment_name_is_path() description and use case.
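
For concreteness, the supported setup described above can be sketched as follows (instance names and config paths here are hypothetical, not from this thread):

```
# Two independent SMP Squid instances on the same box, each with its
# own service name (-n) so their shared-memory artifacts do not clash:
squid -n frontend -f /etc/squid/frontend.conf
squid -n backend  -f /etc/squid/backend.conf

# Later management commands must name the same instance:
squid -n frontend -k reconfigure
```

Each instance also needs its own listening ports, cache directories, and log files in its configuration file.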


[1]: Here, "SMP Squid cache" applies to both cache_dir storage (on disk 
and in shared memory) and cache_mem storage (in shared memory). Very 
similar reasoning applies to non-caching SMP Squid instances as well, 
but the question was about caching, so I will not detail these other cases.


[2]: Here, the term "box" is used to mean "isolation environment": "Same 
box" means the same OS instance, the same container instance (if 
containerized), and the same filesystem (i.e. no chroot, jails, or 
similar isolation tricks for each Squid instance). Various OSes isolate 
shared memory segments differently, but many use file systems for some 
shared memory artifacts. If artifacts from different Squid instances 
clash, Squid behavior is undefined.



HTH,

Alex.



Per SmpScale[1], "memory object cache (in most environments)" can be shared
among workers
Per smp-enabled-squid[2], "Each set of SMP-aware processes will interact only with 
other processes using the same service name"

So if I have multiple (SMP mode + same service_name) squid instances, would
they share the cache_mem objects?

[1] https://wiki.squid-cache.org/Features/SmpScale#what-can-workers-share
[2] 
https://wiki.squid-cache.org/KnowledgeBase/MultipleInstances#smp-enabled-squid


Thanks
Zhijian


[squid-users] Question: cache_mem share among multiple squid instances with the same service_name in SMP mode

2024-05-20 Thread Zhijian Li (Fujitsu)
Hi All,

In SMP mode, is it possible that cache_mem can be shared among multiple squid
instances with the same service_name?

Per SmpScale[1], "memory object cache (in most environments)" can be shared
among workers
Per smp-enabled-squid[2], "Each set of SMP-aware processes will interact only 
with other processes using the same service name"

So if I have multiple (SMP mode + same service_name) squid instances, would
they share the cache_mem objects?

[1] https://wiki.squid-cache.org/Features/SmpScale#what-can-workers-share
[2] 
https://wiki.squid-cache.org/KnowledgeBase/MultipleInstances#smp-enabled-squid


Thanks
Zhijian


Re: [squid-users] Question for Enterprise adoption

2023-04-05 Thread Alex Rousskov

On 4/5/23 14:17, Thoppae, Venkataganesh x (Contractor) wrote:

We are looking to get squid proxy as an authorized product for internal 
use at Fannie Mae and the approval team here has the below question, 
which is generally asked for all third party products. Any response or 
guidance would be extremely helpful.


"Are any advanced algorithms, predictive analytics, dynamic components, 
machine learning, or artificial intelligence are used within or by this 
product, and whether any part of the input or output process involves 
any of these techniques. 


Yes, for some definition of those techniques.

For example, Squid supports such "dynamic components" as eCAP adaptation 
modules (that may use AI and similar techniques), and some Squid 
developers use "auto-complete or suggested text functionalities" and 
automated "translation" mentioned below as examples of (presumably) the 
"techniques" in the above question.


Nothing related to the above question should affect Squid adoption at 
Fannie Mae IMHO: FWIW, Squid code is created and/or curated by humans, 
just like it has been for decades, but we may use any new techniques at 
our disposal, so the past should not really matter.



Examples of non-traditional modeling 
capabilities include but are not limited to: auto-complete or suggested 
text functionalities; optical character recognition (OCR) or other image 
recognition and processing; transcription, translation, speech-to-text 
or text-to-speech; search engines; virtual assistants; and other 
assistive technologies."


Please note that the above examples do not quite match the question: The 
question does not use the words "non-traditional modeling capabilities".


If you explain the actual goal of these questions, you may receive 
better guidance. Right now, it feels like virtually any complex software 
is likely to qualify for a "yes" answer because the question is so vague 
and so broad. For example, auto-complete has been in use for many years, 
long before large language models and AI bots using them became popular.



Cheers,

Alex.



[squid-users] Question for Enterprise adoption

2023-04-05 Thread Thoppae, Venkataganesh x (Contractor)

Hello
We are looking to get squid proxy as an authorized product for internal use at 
Fannie Mae and the approval team here has the below question, which is 
generally asked for all third party products. Any response or guidance would be 
extremely helpful.

"Are any advanced algorithms, predictive analytics, dynamic components, machine 
learning, or artificial intelligence are used within or by this product, and 
whether any part of the input or output process involves any of these 
techniques. Examples of non-traditional modeling capabilities include but are 
not limited to: auto-complete or suggested text functionalities; optical 
character recognition (OCR) or other image recognition and processing; 
transcription, translation, speech-to-text or text-to-speech; search engines; 
virtual assistants; and other assistive technologies."

Regards
Venkataganesh Thoppae



Fannie Mae Confidential


Re: [squid-users] Question regarding the release process

2023-01-27 Thread Athos Ribeiro

On Sun, Jan 22, 2023 at 02:55:54AM +1300, Amos Jeffries wrote:

On 21/01/2023 4:02 am, Athos Ribeiro wrote:

Hi! I am trying to understand how accurate the discussion in
http://lists.squid-cache.org/pipermail/squid-dev/2015-March/001853.html
is nowadays,


That discussion was the latest on Squid numbering. There have been 
modifications to release timing, but the numbering still follows that 
plan.



so I can understand the guides in
http://wiki.squid-cache.org/DeveloperResources/ReleaseProcess regarding
the release process.


Feature changes should be proposed against the "master" branch in our 
github. If/when accepted they become part of the next Squid-N release 
series. Which are now on a 
.


Point releases within a series should only contain documentation
corrections, bug fixes, and security vulnerability fixes. Exceptions may
occur at the release maintainer's choice.



What I am interested in understanding is if there is a hard commitment
on the backwards compatibility between point releases since 4.x.
In other words, is there any commitment for no feature changes between
5.x and 5.(x+1)?


Yes. Feature changes should only occur between Squid-(N).x and 
Squid-(N+m).x with non-0 'm'.


HTH


It does. Thanks, Amos!


Amos


--
Athos Ribeiro


Re: [squid-users] Question regarding the release process

2023-01-21 Thread Amos Jeffries

On 21/01/2023 4:02 am, Athos Ribeiro wrote:

Hi! I am trying to understand how accurate the discussion in
http://lists.squid-cache.org/pipermail/squid-dev/2015-March/001853.html
is nowadays,


That discussion was the latest on Squid numbering. There have been 
modifications to release timing, but the numbering still follows that plan.



so I can understand the guides in
http://wiki.squid-cache.org/DeveloperResources/ReleaseProcess regarding
the release process.


Feature changes should be proposed against the "master" branch in our 
github. If/when accepted they become part of the next Squid-N release 
series. Which are now on a .


Point releases within a series should only contain documentation
corrections, bug fixes, and security vulnerability fixes. Exceptions may
occur at the release maintainer's choice.



What I am interested in understanding is if there is a hard commitment
on the backwards compatibility between point releases since 4.x.
In other words, is there any commitment for no feature changes between
5.x and 5.(x+1)?


Yes. Feature changes should only occur between Squid-(N).x and 
Squid-(N+m).x with non-0 'm'.


HTH
Amos



[squid-users] Question regarding the release process

2023-01-20 Thread Athos Ribeiro

Hi! I am trying to understand how accurate the discussion in
http://lists.squid-cache.org/pipermail/squid-dev/2015-March/001853.html
is nowadays, so I can understand the guides in
http://wiki.squid-cache.org/DeveloperResources/ReleaseProcess regarding
the release process.

What I am interested in understanding is if there is a hard commitment
on the backwards compatibility between point releases since 4.x.

In other words, is there any commitment for no feature changes between
5.x and 5.(x+1)?

regards,


--
Athos Ribeiro


Re: [squid-users] Question about compatibility SQUID 3.5.12 and UBUNTU 16.04 or UBUNTU 18.04

2022-01-18 Thread Amos Jeffries

On 19/01/22 04:02, Massimiliano Toscano wrote:


  Hi ,

i have a Linux UBUNTU 16 to update

and possibly to upgrade and bring to UBUNTU 18.04

root@tortella1:~# cat /etc/issue
Ubuntu 16.04.4 LTS

root@tortella1:~# squid -v
Squid Cache: Version 3.5.12
Service Name: squid
Ubuntu linux

when we tried the first time, SQUID 3.5 no longer worked.

Could I ask, please: does SQUID 3.5.12 not work on UBUNTU 18?



Major releases of Ubuntu come with entirely different sets of system 
libraries and requirements.


The source code of Squid can usually be said to work for any OS. But the
compiled binary is specific to that OS version. There is usually a need
to rebuild it for different OS major versions like Ubuntu 16.04 vs 18.04.




Maybe I should exclude the SQUID package from my upgrade to UBUNTU 18?



That depends on why you have been using squid-3.5.12 with Ubuntu 16.04,
which ships squid-3.3.8.


If you simply needed an upgrade and have no special customizations, then
you should be able to simply install the squid-3.5.26 package from
Ubuntu 18.04 and stop using the older 3.5.12.


If you have special customizations in your Squid build that are not
included in official Squid, then you will need to do one of the following:
 * rebuild your 3.5.12 Squid package binaries for the Ubuntu 18.04 
system, or
 * port your customization to the squid-3.5.26 sources provided by 
Ubuntu 18.04.


I advise the latter (see below for why). You can find the necessary
commands on our Debian wiki page. Ubuntu should be the same process,
except their deb-src URL will be different.
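
For reference, that Debian/Ubuntu package rebuild boils down to roughly this sequence (a sketch; the unpacked source directory name depends on the exact package version):

```
# Fetch build dependencies and the distro's squid source package
sudo apt-get build-dep squid
apt-get source squid

# Apply local customizations, then rebuild the binary packages
cd squid-*/
debuild -us -uc

# Install the freshly built packages
sudo dpkg -i ../squid*.deb
```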




And before that, do I have to update the other packages of UBUNTU 16?



If your Squid was built as a .deb package and installed you can use 
"aptitude hold X" to prevent upgrades happening for package X. With that 
you can safely use the Ubuntu APT system to upgrade everything unrelated 
to running Squid first. Then build your new .deb package and install it.
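
That sequence might look like this (assuming the package is simply named squid; apt-mark is the plain-APT equivalent of the aptitude command mentioned above):

```
# Pin the locally built squid package so the general upgrade skips it
sudo aptitude hold squid          # or: sudo apt-mark hold squid

# Upgrade everything unrelated to Squid
sudo apt-get update && sudo apt-get dist-upgrade

# After building and installing the new squid .deb, release the hold
sudo apt-mark unhold squid
```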



If you do have to rebuild from sources, you can prepare your new build
of Squid: do test builds before the upgrade, upgrade the OS, then
re-build for the upgraded system.


There are other, more complicated methods if neither of those is doable,
but I shall not go into specifics unless you need them.



HTH
Amos


Re: [squid-users] Question about compatibility SQUID 3.5.12 and UBUNTU 16.04 or UBUNTU 18.04

2022-01-18 Thread Massimiliano Toscano
>  Hi ,
>
> i have a Linux UBUNTU 16 to update
>
> and possibly to upgrade and bring to UBUNTU 18.04
>
> root@tortella1:~# cat /etc/issue
> Ubuntu 16.04.4 LTS
>
> root@tortella1:~# squid -v
> Squid Cache: Version 3.5.12
> Service Name: squid
> Ubuntu linux
>
> when we tried the first time, SQUID 3.5 no longer worked.
>
> Could I ask, please: does SQUID 3.5.12 not work on UBUNTU 18?
>
> Maybe I should exclude the SQUID package from my upgrade to UBUNTU 18?
>
> And before that, do I have to update the other packages of UBUNTU 16?
>
>
> Thanks a lot in advance
>
> Kind Regards
>
> Max
>
> Milan, Italy
>
>


Re: [squid-users] Question regarding TPROXY and sslBump

2020-02-15 Thread Amos Jeffries
On 16/02/20 2:58 am, Felipe Polanco wrote:
> Thanks for the reply,
> 
> Speaking strictly about TPROXY, are there any limitations compared to
> regular transparent intercept?

I assume that by "regular transparent intercept" you mean NAT intercept.

The primary difference between TPROXY and NAT ... is that NAT is *not*
"transparent". All the differences derive from that.

To use TPROXY the machine running it must have the ability to spoof IPs
on packets outgoing from Squid and to properly deliver them afterwards.
This primarily affects Squid hosted in cloud services where that
low-level control is not permitted or quite difficult.

With TPROXY, the problems NAT introduces by having a different IP
address on traffic arriving at servers largely disappear. But all other
issues related to middleware touching the messages in transit remain the same.

Amos


Re: [squid-users] Question regarding TPROXY and sslBump

2020-02-15 Thread Felipe Polanco
Thanks for the reply,

Speaking strictly about TPROXY, are there any limitations compared to
regular transparent intercept?

We have full control of the network and TCP routing.

We have done regular https intercept in the past and it is working fine, but
now we would like to try TPROXY in bridging mode instead of routing mode.

Thanks,

On Sat, Feb 15, 2020 at 3:17 AM Amos Jeffries  wrote:

> On 15/02/20 10:28 am, Felipe Polanco wrote:
> > Hi,
> >
> > Can squid running in TPROXY mode intercept and decrypt HTTPS payload
> > with sslBump?
> >
>
> Maybe. It can do so about as well as NAT intercept mode can.
>
> Whether TPROXY works depends on what level of access you have to
> control the TCP packet routing.
>
> Whether SSL-Bump can decrypt depends on what TLS features are being used
> by the HTTPS traffic - and whether it is HTTPS at all.
>
> These things are only loosely related.
>
>
> Amos


Re: [squid-users] Question regarding TPROXY and sslBump

2020-02-14 Thread Amos Jeffries
On 15/02/20 10:28 am, Felipe Polanco wrote:
> Hi,
> 
> Can squid running in TPROXY mode intercept and decrypt HTTPS payload
> with sslBump?
> 

Maybe. It can do so about as well as NAT intercept mode can.

Whether TPROXY works depends on what level of access you have to
control the TCP packet routing.

Whether SSL-Bump can decrypt depends on what TLS features are being used
by the HTTPS traffic - and whether it is HTTPS at all.

These things are only loosely related.


Amos


[squid-users] Question regarding TPROXY and sslBump

2020-02-14 Thread Felipe Polanco
Hi,

Can squid running in TPROXY mode intercept and decrypt HTTPS payload with
sslBump?

This is for an in-line Layer 2 proxy application.

Thanks,


Re: [squid-users] Question about HTTPS transparent proxy with cache_peer

2020-02-08 Thread Amos Jeffries
On 9/02/20 3:47 pm, Felipe Arturo Polanco wrote:
> Thanks for the reply,
> 
> Is there documentation for squid 5 on this feature?
> 

Just the release notes.

There is nothing special to configure, though. If the peer is allowed by
your policy but does not support TLS on its connections, Squid just
automatically tries to use a CONNECT tunnel to go through it.

Amos


Re: [squid-users] Question about HTTPS transparent proxy with cache_peer

2020-02-08 Thread Felipe Arturo Polanco
Thanks for the reply,

Is there documentation for squid 5 on this feature?

On Sat, Feb 8, 2020, 8:34 PM Amos Jeffries  wrote:

> On 9/02/20 5:17 am, Felipe Arturo Polanco wrote:
> > Hi,
> >
> > Can squid be set up as a transparent proxy for HTTP and HTTPS and at
> > the same time use an upstream proxy?
> >
> > It means converting GET request from a client to a CONNECT request to an
> > upstream server.
> >
>
> That depends on the GET request URL, the Squid version, and upstream
> proxy capabilities.
>
> For http:// URLs, yes all Squid versions can relay to an upstream proxy
> as normal GET requests.
>
> For https:// URLs, all Squid versions that support SSL-Bump can send
> requests to an upstream peer that supports TLS/SSL connections from
> Squid as normal GET requests.
>
> The ability to generate CONNECT tunnels for sending HTTPS traffic
> through plain-text peers has been added in Squid-5.
>
> Amos


Re: [squid-users] Question about HTTPS transparent proxy with cache_peer

2020-02-08 Thread Amos Jeffries
On 9/02/20 5:17 am, Felipe Arturo Polanco wrote:
> Hi,
> 
> Can squid be set up as a transparent proxy for HTTP and HTTPS and at
> the same time use an upstream proxy?
> 
> It means converting GET request from a client to a CONNECT request to an
> upstream server.
> 

That depends on the GET request URL, the Squid version, and upstream
proxy capabilities.

For http:// URLs, yes all Squid versions can relay to an upstream proxy
as normal GET requests.

For https:// URLs, all Squid versions that support SSL-Bump can send
requests to an upstream peer that supports TLS/SSL connections from
Squid as normal GET requests.

The ability to generate CONNECT tunnels for sending HTTPS traffic
through plain-text peers has been added in Squid-5.
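
A minimal Squid-5 configuration sketch of that arrangement (hostnames, ports, and the certificate path are placeholders, and the ssl_bump rules are reduced to the simplest form):

```
# Intercept plain HTTP and bump HTTPS
http_port 3129 intercept
https_port 3130 intercept ssl-bump tls-cert=/etc/squid/ca.pem \
    generate-host-certificates=on
ssl_bump bump all

# Send all traffic through the upstream proxy; for bumped HTTPS,
# Squid-5 opens a CONNECT tunnel through this plain-text peer
cache_peer upstream.example.net parent 3128 0 no-query default
never_direct allow all
```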

Amos


[squid-users] Question about HTTPS transparent proxy with cache_peer

2020-02-08 Thread Felipe Arturo Polanco
Hi,

Can squid be set up as a transparent proxy for HTTP and HTTPS and at
the same time use an upstream proxy?

It means converting GET request from a client to a CONNECT request to an
upstream server.

Thanks,


Re: [squid-users] Question: Force the caching of 302 responses without Expires header and with Strict-Transport-Security max-age header?

2020-01-04 Thread Amos Jeffries
On 5/01/20 7:24 am, Andrei Pozolotin wrote:
> Amos, hello:
> 
> On 2020-01-04 05:14, Amos Jeffries wrote:
>> Expires header is an HTTP/1.0 protocol feature. Its absence has no
>> meaning.
>> The 302 response is explicitly defined in HTTP as a *temporary* object
>> which can change at any time. The *presence* of Cache-Control:max-age or
>> Expires set a minimum time the response is guaranteed not to change.
> 
> 1. perhaps an argument could be made that these are semantically identical:
> * Cache-Control: max-age=
> * Strict-Transport-Security: max-age=
> 

They are not. One relates to hop-by-hop message storage. The other
relates to end-to-end connection setup.


> 2. and therefore "Strict-Transport-Security" should be handled
> by squid "Cache-Control" related features such as refresh_pattern
> http://www.squid-cache.org/Doc/config/refresh_pattern/
> 

As Alex said Squid does nothing with Strict-Transport-Security headers.
They are for the client UA software, irrelevant to middleware like Squid.


>> Since your use-case is software archive mirrors, you should investigate
>> whether the objects stored there are truly identical. If they are, the
>> Store-ID feature can be used to de-duplicate the URLs the 302 are
>> pointing at so *they* are cached efficiently.
>>  
> 
> 3. thank you for the StoreID idea
> 
> 4. I have already implemented it:
> https://github.com/random-python/nspawn/tree/master/src/main/nspawn/app/hatcher/service/image-proxy/etc/squid
> 
> 
> 5. it does improve performance; however, the two preceding TCP_MISS/302
> hits for every archive URL do contribute significantly to the overall
> response delay


(Warning: I have not tested this idea yet; if it does not work, it can
break the downloads completely. Treat with extreme care.)

You may be able to improve that a little by adding the original 302 URL
to the Store-ID map. However you MUST then add a store_miss rule to
prevent those URLs being stored in the cache.

The idea is that once one of the real download objects is stored, Squid
uses it as a substitute for the 302. But the 302 payload can never be
used as a substitute for the real object.
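
Configuration-wise, that extension might be sketched like this (untested, per the warning above; the helper path is a placeholder for a script implementing the mapping):

```
# Map both the redirecting 302 URLs and the final mirror URLs onto
# one canonical Store-ID via a helper
store_id_program /usr/local/bin/archive-store-id
store_id_children 5 startup=1

# Never cache responses from the redirecting hosts themselves, so the
# 302 payload can never be stored under the shared ID
acl archive_redirectors dstdomain archive.archlinux.org archive.org
store_miss deny archive_redirectors
```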

Amos


Re: [squid-users] Question: Force the caching of 302 responses without Expires header and with Strict-Transport-Security max-age header?

2020-01-04 Thread Andrei Pozolotin

Amos, hello:

On 2020-01-04 05:14, Amos Jeffries wrote:
Expires header is an HTTP/1.0 protocol feature. Its absence has no
meaning.

The 302 response is explicitly defined in HTTP as a *temporary* object
which can change at any time. The *presence* of Cache-Control:max-age or
Expires set a minimum time the response is guaranteed not to change.


1. perhaps an argument could be made that these are semantically 
identical:

* Cache-Control: max-age=
* Strict-Transport-Security: max-age=

2. and therefore "Strict-Transport-Security" should be handled
by squid "Cache-Control" related features such as refresh_pattern
http://www.squid-cache.org/Doc/config/refresh_pattern/

Since your use-case is software archive mirrors, you should investigate
whether the objects stored there are truly identical. If they are, the
Store-ID feature can be used to de-duplicate the URLs the 302 are
pointing at so *they* are cached efficiently.


3. thank you for the StoreID idea

4. I have already implemented it:
https://github.com/random-python/nspawn/tree/master/src/main/nspawn/app/hatcher/service/image-proxy/etc/squid

5. it does improve performance; however, the two preceding TCP_MISS/302
hits for every archive URL do contribute significantly to the overall
response delay


Thanks again,

Andrei.


Re: [squid-users] Question: Force the caching of 302 responses without Expires header and with Strict-Transport-Security max-age header?

2020-01-04 Thread Amos Jeffries
On 4/01/20 11:49 pm, Andrei Pozolotin wrote:
> Alex:
> 
> On 2020-01-03 14:19, Alex Rousskov wrote:
>>> Question: how can one force the caching of 302 responses
>>> without the Expires header and with Strict-Transport-Security max-age
>>> header?
>>
>>
>> You can modify Squid to handle Strict-Transport-Security specially or
>> you can write an ICAP or eCAP service that would add a "more standard"
>> Cache-Control:max-age header to the response (with even more work, it
>> would be possible to drop the added response header before it leaves
>> Squid).
> 
> 1. thank you for your suggestions
> 
> 2. just to confirm I got this right:
> 
> there is no way to use any current squid configuration options
> or any existing squid plugins to cache 302 responses without an Expires
> header; instead one must write some brand-new code, correct?

Expires header is an HTTP/1.0 protocol feature. Its absence has no meaning.

The 302 response is explicitly defined in HTTP as a *temporary* object
which can change at any time. The *presence* of Cache-Control:max-age or
Expires set a minimum time the response is guaranteed not to change.



Since your use-case is software archive mirrors, you should investigate
whether the objects stored there are truly identical. If they are, the
Store-ID feature can be used to de-duplicate the URLs the 302 are
pointing at so *they* are cached efficiently.
 


Amos


Re: [squid-users] Question: Force the caching of 302 responses without Expires header and with Strict-Transport-Security max-age header?

2020-01-04 Thread Andrei Pozolotin

Alex:

On 2020-01-03 14:19, Alex Rousskov wrote:

Question: how can one force the caching of 302 responses
without the Expires header and with Strict-Transport-Security max-age 
header?



You can modify Squid to handle Strict-Transport-Security specially or
you can write an ICAP or eCAP service that would add a "more standard"
Cache-Control:max-age header to the response (with even more work, it
would be possible to drop the added response header before it leaves
Squid).


1. thank you for your suggestions

2. just to confirm I got this right:

there is no way to use any current squid configuration options
or any existing squid plugins to cache 302 responses without an Expires
header; instead one must write some brand-new code, correct?

Andrei


Re: [squid-users] Question: Force the caching of 302 responses without Expires header and with Strict-Transport-Security max-age header?

2020-01-03 Thread Alex Rousskov

On 1/3/20 11:14, Andrei Pozolotin wrote:

3. here are response details via curl:

a)

curl --head 
https://archive.archlinux.org/repos/2020/01/01/community/os/x86_64/python-wheel-0.33.6-3-any.pkg.tar.xz


HTTP/2 302
server: nginx/1.16.1
date: Fri, 03 Jan 2020 17:56:14 GMT
content-type: text/html
content-length: 145
location: 
https://archive.org/download/archlinux_pkg_python-wheel/python-wheel-0.33.6-3-any.pkg.tar.xz 


strict-transport-security: max-age=31536000; includeSubdomains; preload

b)

curl --head 
https://archive.org/download/archlinux_pkg_python-wheel/python-wheel-0.33.6-3-any.pkg.tar.xz


HTTP/1.1 302 Found
Server: nginx/1.14.0 (Ubuntu)
Date: Fri, 03 Jan 2020 17:56:42 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Accept-Ranges: bytes
Location: 
https://ia803100.us.archive.org/6/items/archlinux_pkg_python-wheel/python-wheel-0.33.6-3-any.pkg.tar.xz 


Strict-Transport-Security: max-age=15724800

4. it seems that the Strict-Transport-Security max-age header is ignored
here by squid



Correct. Squid does not know anything about the 
Strict-Transport-Security header. The header is treated like an 
extension header (i.e. it is usually forwarded without interpreting its 
value).




5. any attempt to use any of the refresh_pattern options also has no effect:

http://www.squid-cache.org/Doc/config/refresh_pattern/


Yes, the decision to avoid caching of 302 responses without Expires is 
hard-coded. It is made before refresh_pattern is consulted AFAICT.
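For illustration, a refresh_pattern of the kind the reporter likely tried (the pattern and timing values here are assumptions, not from the original config); per the above, even the override-* options cannot force caching, because the no-Expires-302 decision is made before refresh_pattern is consulted:

```
# No effect for 302-without-Expires responses; shown only to illustrate
# the hard-coded limitation described above (pattern and times are
# hypothetical):
refresh_pattern -i \.pkg\.tar\.xz$ 1440 80% 10080 override-expire override-lastmod ignore-reload
```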




Question: how can one force the caching of 302 responses
without the Expires header and with Strict-Transport-Security max-age 
header?



You can modify Squid to handle Strict-Transport-Security specially or 
you can write an ICAP or eCAP service that would add a "more standard" 
Cache-Control:max-age header to the response (with even more work, it 
would be possible to drop the added response header before it leaves Squid).
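To connect such an ICAP RESPMOD service, squid.conf would need roughly the wiring below. This is a sketch: the service name and URI are assumptions, and the ICAP service that actually rewrites the headers still has to be written separately.

```
# Hypothetical ICAP service that adds a Cache-Control: max-age header
# derived from Strict-Transport-Security; Squid itself only needs this
# wiring to send responses through it.
icap_enable on
icap_service sts_fixup respmod_precache icap://127.0.0.1:1344/respmod bypass=1
adaptation_access sts_fixup allow all
```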



HTH,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Question: Force the caching of 302 responses without Expires header and with Strict-Transport-Security max-age header?

2020-01-03 Thread Andrei Pozolotin
Hello. 

1. this question was asked before, but not yet resolved: 

http://www.squid-cache.org/mail-archive/squid-users/200701/.html 

2. use case: 

the following url goes through a double redirect, both times not providing
an "Expires:" header,

which results in repeated TCP_MISS/302 entries in the squid logs: 

2020-Jan-03 17:45:14 125 192.168.1.106 TCP_MISS/302 565 GET
https://archive.archlinux.org/repos/2020/01/01/community/os/x86_64/python-wheel-0.33.6-3-any.pkg.tar.xz
- HIER_DIRECT/88.198.91.70 text/html   

2020-Jan-03 17:45:14 82 192.168.1.106 TCP_MISS/302 461 GET
https://archive.org/download/archlinux_pkg_python-wheel/python-wheel-0.33.6-3-any.pkg.tar.xz
- HIER_DIRECT/207.241.224.2 text/html   
 

2020-Jan-03 17:45:14 215 192.168.1.106 NONE/200 0 CONNECT
ia803100.us.archive.org:443 - HIER_DIRECT/207.241.232.150 -   

2020-Jan-03 17:45:14  1 192.168.1.106 TCP_HIT/200 38605 GET
https://ia803100.us.archive.org/6/items/archlinux_pkg_python-wheel/python-wheel-0.33.6-3-any.pkg.tar.xz
- HIER_NONE/- application/octet-stream   

3. here are response details via curl: 

a) 

curl --head
https://archive.archlinux.org/repos/2020/01/01/community/os/x86_64/python-wheel-0.33.6-3-any.pkg.tar.xz

HTTP/2 302  
server: nginx/1.16.1 
date: Fri, 03 Jan 2020 17:56:14 GMT 
content-type: text/html 
content-length: 145 
location:
https://archive.org/download/archlinux_pkg_python-wheel/python-wheel-0.33.6-3-any.pkg.tar.xz

strict-transport-security: max-age=31536000; includeSubdomains; preload 

b) 

curl --head
https://archive.org/download/archlinux_pkg_python-wheel/python-wheel-0.33.6-3-any.pkg.tar.xz

HTTP/1.1 302 Found 
Server: nginx/1.14.0 (Ubuntu) 
Date: Fri, 03 Jan 2020 17:56:42 GMT 
Content-Type: text/html; charset=UTF-8 
Connection: keep-alive 
Accept-Ranges: bytes 
Location:
https://ia803100.us.archive.org/6/items/archlinux_pkg_python-wheel/python-wheel-0.33.6-3-any.pkg.tar.xz

Strict-Transport-Security: max-age=15724800

4. it seems that Strict-Transport-Security: max-age header is ignored
here by squid  

5. any attempt to use any of the refresh_pattern options also has no
effect: 

http://www.squid-cache.org/Doc/config/refresh_pattern/ 

6. full squid.conf is posted here: 

https://github.com/random-python/nspawn/blob/master/src/main/nspawn/app/hatcher/service/image-proxy/etc/squid/squid.conf


Question: how can one force the caching of 302 responses 

without the Expires header and with Strict-Transport-Security max-age
header? 

Thank you.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question on Many Clients to Many Proxy Lists

2018-11-30 Thread Wire Cutter
Yes both were before the cache, but I wasn't calling the correct group in
the ACL, which caused the issue.


Thanks for your help.

Now to figure out why it's slow

On Fri, Nov 30, 2018 at 2:17 PM Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 11/30/18 11:51 AM, Wire Cutter wrote:
>
> > cache_peer_access peerA1 allow port_8080
> >
> > cache_peer 192.168.1.2 parent 8800 0 round-robin no-query name=peerA1
>
>
> > Then this is the error I get when I start the service
> >
> > Bungled /etc/squid/squid.conf line 3148: cache_peer_access peerA1 allow
> port_8080
>
> Did you define peerA1 and port_8080 before (you used them on) line 3148?
> If not, you should.
>
> Alex.
>
>
> > On Thu, Nov 29, 2018 at 10:44 AM Alex Rousskov wrote:
> >
> > On 11/29/18 7:57 AM, Wire Cutter wrote:
> >
> > > I’ve created 4 ports for clients to talk to, then created ACL
> > lists for
> > > those ports.  From there I’ve tried (and failed) to create naming
> > groups
> > > for cacheing peers, then added those to ACLs and it fails. Any
> ideas?
> >
> > Use cache_peer_access to allow http_port X traffic (and only that
> > traffic) to peer group Y:
> >
> >   # rules for peer group A
> >   cache_peer_access peerA1 allow receivedOnPortForPeersA
> >   cache_peer_access peerA2 allow receivedOnPortForPeersA
> >   cache_peer_access peerA3 allow receivedOnPortForPeersA
> >   ...
> >   # rules for peer group B
> >   cache_peer_access peerB1 allow receivedOnPortForPeersB
> >   cache_peer_access peerB2 allow receivedOnPortForPeersB
> >   ...
> >
> >
> > Depending on your traffic and needs, you may also need to allow
> > non-hierarchical requests to go to peer:
> >
> >   nonhierarchical_direct off
> >
> > and/or to prohibit direct connections for portX:
> >
> >   never_direct allow receivedOnPortForPeersA
> >   never_direct allow receivedOnPortForPeersB
> >
> >
> > Once you get this working, please make Squid documentation
> improvements
> > that would have allowed you to figure this out on your own.
> >
> >
> > HTH,
> >
> > Alex.
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > 
> > http://lists.squid-cache.org/listinfo/squid-users
> >
> >
> >
> > --
> >
> _
> >
> > This transmission may contain information that is privileged,
> > confidential and exempt from disclosure under applicable law.  If you
> > are not the intended recipient, you are hereby notified that any
> > disclosure, copying, distribution, or use of the information contained
> > herein (including any reliance thereon) is STRICTLY PROHIBITED.  If you
> > received this transmission in error, please immediately contact the
> > sender and destroy the material in its entirety, whether in electronic
> > or hard copy format.
>
>

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question on Many Clients to Many Proxy Lists

2018-11-30 Thread Alex Rousskov
On 11/30/18 11:51 AM, Wire Cutter wrote:

> cache_peer_access peerA1 allow port_8080
> 
> cache_peer 192.168.1.2 parent 8800 0 round-robin no-query name=peerA1


> Then this is the error I get when I start the service 
> 
> Bungled /etc/squid/squid.conf line 3148: cache_peer_access peerA1 allow 
> port_8080

Did you define peerA1 and port_8080 before (you used them on) line 3148?
If not, you should.
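In squid.conf terms, each name must be defined on an earlier line than the line that references it. A minimal ordering sketch (the ACL type assumes the intent is to match the local port the request arrived on; addresses and ports are taken from the thread):

```
# 1) define the ACL name first
acl port_8080 localport 8080
# 2) define the peer name next
cache_peer 192.168.1.2 parent 8800 0 round-robin no-query name=peerA1
# 3) only then reference both names
cache_peer_access peerA1 allow port_8080
```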

Alex.


> On Thu, Nov 29, 2018 at 10:44 AM Alex Rousskov wrote:
> 
> On 11/29/18 7:57 AM, Wire Cutter wrote:
> 
> > I’ve created 4 ports for clients to talk to, then created ACL
> lists for
> > those ports.  From there I’ve tried (and failed) to create naming
> groups
> > for cacheing peers, then added those to ACLs and it fails. Any ideas?
> 
> Use cache_peer_access to allow http_port X traffic (and only that
> traffic) to peer group Y:
> 
>   # rules for peer group A
>   cache_peer_access peerA1 allow receivedOnPortForPeersA
>   cache_peer_access peerA2 allow receivedOnPortForPeersA
>   cache_peer_access peerA3 allow receivedOnPortForPeersA
>   ...
>   # rules for peer group B
>   cache_peer_access peerB1 allow receivedOnPortForPeersB
>   cache_peer_access peerB2 allow receivedOnPortForPeersB
>   ...
> 
> 
> Depending on your traffic and needs, you may also need to allow
> non-hierarchical requests to go to peer:
> 
>   nonhierarchical_direct off
> 
> and/or to prohibit direct connections for portX:
> 
>   never_direct allow receivedOnPortForPeersA
>   never_direct allow receivedOnPortForPeersB
> 
> 
> Once you get this working, please make Squid documentation improvements
> that would have allowed you to figure this out on your own.
> 
> 
> HTH,
> 
> Alex.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> 
> http://lists.squid-cache.org/listinfo/squid-users
> 
> 
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question on Many Clients to Many Proxy Lists

2018-11-30 Thread Wire Cutter
So that's exactly what I did.

#Rules for Peer group - list 1
   cache_peer_access peerA1 allow port_8080
   cache_peer_access peerA2 allow port_8080
   cache_peer_access peerA3 allow port_8080
   cache_peer_access peerA4 allow port_8080

#cache_peer
cache_peer 192.168.1.2 parent 8800 0 round-robin no-query
name=peerA1
cache_peer 192.168.2.2 parent 8800 0 round-robin no-query
name=peerA2
cache_peer 192.168.2.5 parent 8800 0 round-robin no-query
name=peerA3
cache_peer 192.168.2.6  parent 8800 0 round-robin no-query
name=peerA4

Then this is the error I get when I start the service

Nov 30 18:38:11 ubuntu systemd[1]: Starting LSB: Squid HTTP Proxy version
3.x...
Nov 30 18:38:11 ubuntu squid[13974]: Bungled /etc/squid/squid.conf line
3148: cache_peer_access peerA1 allow port_8080
Nov 30 18:38:11 ubuntu squid[13980]: Bungled /etc/squid/squid.conf line
3148: cache_peer_access peerA1 allow port_8080
Nov 30 18:38:11 ubuntu squid[13957]:  * FATAL: Bungled
/etc/squid/squid.conf line 3148: cache_peer_access peerA1 allow port_8080
Nov 30 18:38:11 ubuntu systemd[1]: squid.service: Control process exited,
code=exited status=3
Nov 30 18:38:11 ubuntu systemd[1]: squid.service: Failed with result
'exit-code'.
Nov 30 18:38:11 ubuntu systemd[1]: Failed to start LSB: Squid HTTP Proxy
version 3.x.



On Thu, Nov 29, 2018 at 10:44 AM Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 11/29/18 7:57 AM, Wire Cutter wrote:
>
> > I’ve created 4 ports for clients to talk to, then created ACL lists for
> > those ports.  From there I’ve tried (and failed) to create naming groups
> > for cacheing peers, then added those to ACLs and it fails. Any ideas?
>
> Use cache_peer_access to allow http_port X traffic (and only that
> traffic) to peer group Y:
>
>   # rules for peer group A
>   cache_peer_access peerA1 allow receivedOnPortForPeersA
>   cache_peer_access peerA2 allow receivedOnPortForPeersA
>   cache_peer_access peerA3 allow receivedOnPortForPeersA
>   ...
>   # rules for peer group B
>   cache_peer_access peerB1 allow receivedOnPortForPeersB
>   cache_peer_access peerB2 allow receivedOnPortForPeersB
>   ...
>
>
> Depending on your traffic and needs, you may also need to allow
> non-hierarchical requests to go to peer:
>
>   nonhierarchical_direct off
>
> and/or to prohibit direct connections for portX:
>
>   never_direct allow receivedOnPortForPeersA
>   never_direct allow receivedOnPortForPeersB
>
>
> Once you get this working, please make Squid documentation improvements
> that would have allowed you to figure this out on your own.
>
>
> HTH,
>
> Alex.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question on Many Clients to Many Proxy Lists

2018-11-29 Thread Alex Rousskov
On 11/29/18 7:57 AM, Wire Cutter wrote:

> I’ve created 4 ports for clients to talk to, then created ACL lists for
> those ports.  From there I’ve tried (and failed) to create naming groups
> for cacheing peers, then added those to ACLs and it fails. Any ideas?

Use cache_peer_access to allow http_port X traffic (and only that
traffic) to peer group Y:

  # rules for peer group A
  cache_peer_access peerA1 allow receivedOnPortForPeersA
  cache_peer_access peerA2 allow receivedOnPortForPeersA
  cache_peer_access peerA3 allow receivedOnPortForPeersA
  ...
  # rules for peer group B
  cache_peer_access peerB1 allow receivedOnPortForPeersB
  cache_peer_access peerB2 allow receivedOnPortForPeersB
  ...


Depending on your traffic and needs, you may also need to allow
non-hierarchical requests to go to peer:

  nonhierarchical_direct off

and/or to prohibit direct connections for portX:

  never_direct allow receivedOnPortForPeersA
  never_direct allow receivedOnPortForPeersB


Once you get this working, please make Squid documentation improvements
that would have allowed you to figure this out on your own.


HTH,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Question on Many Clients to Many Proxy Lists

2018-11-29 Thread Wire Cutter
I have an issue with a Squid config I can't figure out. I'm trying to have
a many (hosts) to many (upstream proxies) setup in a single config.

I’ve created 4 ports for clients to talk to, then created ACL lists for
those ports.  From there I’ve tried (and failed) to create naming groups
for cacheing peers, then added those to ACLs and it fails. Any ideas?

Here’s a link on what some people have come up with.
https://www.linuxquestions.org/questions/linux-server-73/squid-multiple-ports-multiple-destinations-4175450243/


Any help would be appreciated
--
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] question about squid and https connection .

2018-07-23 Thread Eliezer Croitoru
OK so it makes more sense when you say it's intentional.

I do not agree with this approach and it's a bit off topic but I got my answer.

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
Sent: Friday, July 20, 2018 6:17 PM
To: Eliezer Croitoru ; 'Squid Users' 

Subject: Re: [squid-users] question about squid and https connection .

On 07/20/2018 03:04 AM, Eliezer Croitoru wrote:
> I think we can use MD5/SHA1/SHA256 or even CRC32 to show the "freshness" of 
> the certificate.

Sorry, you lost me: I see no connection between the previous discussion
about CA keys and your new statement about something you call
certificate "freshness".


> Also this way the ssl_db folder will be free of the burden of tight 600 or 
> 700 permissions.
> 
> Did I get it right?

The stored generated certificates include their private keys so the
database should use tight permissions.


Alex.


> -Original Message-
> From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
> Sent: Thursday, July 19, 2018 11:29 PM
> To: Eliezer Croitoru ; 'Squid Users' 
> 
> Subject: Re: [squid-users] question about squid and https connection .
> 
> On 07/19/2018 12:08 PM, Eliezer Croitoru wrote:
> 
>> So the ROOT CA key which squid is using is being used for all the fake 
>> certificates, why do we need so many copies of it?
> 
> FWIW, I cannot think of any reason to store the CA certificate key in
> the database of generated certificates. That key is only used to sign a
> freshly generated certificate, and the certificate generator never
> regenerates certificates, so I do not see the need to reuse that CA key.
> 
> Alex.
> 
> 
>> -Original Message-
>> From: Alex Rousskov [mailto:rouss...@measurement-factory.com]
>> Sent: Wednesday, July 18, 2018 11:45 PM
>> To: Eliezer Croitoru ; 'Squid Users' 
>> 
>> Subject: Re: [squid-users] question about squid and https connection .
>>
>> On 07/18/2018 02:23 PM, Eliezer Croitoru wrote:
>>
>>
>>> Every certificate have the same properties of the original one except 
>>> the "RSA key" part which it's certifiying.
>>
>> Assuming you are talking about the generated certificates for the same real 
>> certificate X, then yes, they will all have the same (mimicked) fields. 
>> Whether they will be signed by the same CA depends on Squid configuration. 
>> In my answers, I assumed that all those Squids are configured with the same 
>> CA (including the same private key).
>>
>>
>>> So what I'm saying is that you cannot say that every certificate which 
>>> will be created with the same CA will be the same for two different 
>>> 2048 bits RSA keys.
>>
>> ... unless the keys are also the same, which was my and, AFAICT, OP 
>> assumption.
>>
>> Also, unless you are doing something nasty, it probably does not make sense 
>> to configure a bumping Squid with a public CA certificate that is identical 
>> to some other public CA certificate but has a different private key. In 
>> other words, if you are using 200 Squids with a single public CA 
>> certificate, then all those Squids should use the same private key.
>>
>> Alex.
>>
>>
>>
>>> -Original Message-
>>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] 
>>> On Behalf Of Alex Rousskov
>>> Sent: Friday, July 13, 2018 2:01 AM
>>> To: 'Squid Users' 
>>> Subject: Re: [squid-users] question about squid and https connection .
>>>
>>> On 07/12/2018 02:35 PM, Eliezer Croitoru wrote:
>>>
>>>> Every RSA key and certificate pair regardless to the origin server 
>>>> and the SSL-BUMP enabled proxy can be different.
>>>
>>> I cannot find a reasonable interpretation of the above that would 
>>> contradict what I have said. Yes, each unique certificate has its own 
>>> private key, but that is not what Ahmad was asking about AFAICT.
>>>
>>>
>>>> Will it be more accurate to say that just as long as these 200 squid 
>>>> instances(different squid.conf and couple other local variables) use 
>>>> the same exact ssl_db cache directory  then it's probable that they 
>>>> will use the same certificate.
>>>
>>> That statement is incorrect. Squids configured with different CA 
>>> certificates will generate different fake certificates for the same 
>>> real certificate.
>>>
>>>

Re: [squid-users] question about squid and https connection .

2018-07-20 Thread Alex Rousskov
On 07/20/2018 03:04 AM, Eliezer Croitoru wrote:
> I think we can use MD5/SHA1/SHA256 or even CRC32 to show the "freshness" of 
> the certificate.

Sorry, you lost me: I see no connection between the previous discussion
about CA keys and your new statement about something you call
certificate "freshness".


> Also this way the ssl_db folder will be free of the burden of tight 600 or 
> 700 permissions.
> 
> Did I get it right?

The stored generated certificates include their private keys so the
database should use tight permissions.
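A minimal sketch of creating the generated-certificate database directory with the tight permissions described above. The default path and the helper name/location are assumptions; they vary by build and distribution.

```shell
# The stored certificates include private keys, so the database
# directory should be owner-only (0700). Path is a hypothetical default.
db="${SSL_DB:-/var/lib/squid/ssl_db}"
install -d -m 700 "$db"
stat -c %a "$db"    # prints 700 (GNU stat)
# One-time initialization with Squid's generator helper
# (helper name and location vary by build):
#   /usr/lib64/squid/security_file_certgen -c -s "$db" -M 4MB
```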


Alex.


> -Original Message-
> From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
> Sent: Thursday, July 19, 2018 11:29 PM
> To: Eliezer Croitoru ; 'Squid Users' 
> 
> Subject: Re: [squid-users] question about squid and https connection .
> 
> On 07/19/2018 12:08 PM, Eliezer Croitoru wrote:
> 
>> So the ROOT CA key which squid is using is being used for all the fake 
>> certificates, why do we need so many copies of it?
> 
> FWIW, I cannot think of any reason to store the CA certificate key in
> the database of generated certificates. That key is only used to sign a
> freshly generated certificate, and the certificate generator never
> regenerates certificates, so I do not see the need to reuse that CA key.
> 
> Alex.
> 
> 
>> -Original Message-
>> From: Alex Rousskov [mailto:rouss...@measurement-factory.com]
>> Sent: Wednesday, July 18, 2018 11:45 PM
>> To: Eliezer Croitoru ; 'Squid Users' 
>> 
>> Subject: Re: [squid-users] question about squid and https connection .
>>
>> On 07/18/2018 02:23 PM, Eliezer Croitoru wrote:
>>
>>
>>> Every certificate have the same properties of the original one except 
>>> the "RSA key" part which it's certifiying.
>>
>> Assuming you are talking about the generated certificates for the same real 
>> certificate X, then yes, they will all have the same (mimicked) fields. 
>> Whether they will be signed by the same CA depends on Squid configuration. 
>> In my answers, I assumed that all those Squids are configured with the same 
>> CA (including the same private key).
>>
>>
>>> So what I'm saying is that you cannot say that every certificate which 
>>> will be created with the same CA will be the same for two different 
>>> 2048 bits RSA keys.
>>
>> ... unless the keys are also the same, which was my and, AFAICT, OP 
>> assumption.
>>
>> Also, unless you are doing something nasty, it probably does not make sense 
>> to configure a bumping Squid with a public CA certificate that is identical 
>> to some other public CA certificate but has a different private key. In 
>> other words, if you are using 200 Squids with a single public CA 
>> certificate, then all those Squids should use the same private key.
>>
>> Alex.
>>
>>
>>
>>> -Original Message-
>>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] 
>>> On Behalf Of Alex Rousskov
>>> Sent: Friday, July 13, 2018 2:01 AM
>>> To: 'Squid Users' 
>>> Subject: Re: [squid-users] question about squid and https connection .
>>>
>>> On 07/12/2018 02:35 PM, Eliezer Croitoru wrote:
>>>
>>>> Every RSA key and certificate pair regardless to the origin server 
>>>> and the SSL-BUMP enabled proxy can be different.
>>>
>>> I cannot find a reasonable interpretation of the above that would 
>>> contradict what I have said. Yes, each unique certificate has its own 
>>> private key, but that is not what Ahmad was asking about AFAICT.
>>>
>>>
>>>> Will it be more accurate to say that just as long as these 200 squid 
>>>> instances(different squid.conf and couple other local variables) use 
>>>> the same exact ssl_db cache directory  then it's probable that they 
>>>> will use the same certificate.
>>>
>>> That statement is incorrect. Squids configured with different CA 
>>> certificates will generate different fake certificates for the same 
>>> real certificate.
>>>
>>> I assume that Ahmad was asking about a situation where 200 Squid 
>>> instances had the same configuration (including CA certificates).
>>>
>>> Please note that the certificate generator helper gets the signing 
>>> (CA) certificate as a parameter with each generation request (because 
>>> different Squid ports may use different CA certificates). Also, Squid 
>>> probably does not officially support sharing the certificate directory 
>

Re: [squid-users] question about squid and https connection .

2018-07-20 Thread Eliezer Croitoru
I think we can use MD5/SHA1/SHA256 or even CRC32 to show the "freshness" of the 
certificate.
Also this way the ssl_db folder will be free of the burden of tight 600 or 700 
permissions.

Did I get it right?

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
Sent: Thursday, July 19, 2018 11:29 PM
To: Eliezer Croitoru ; 'Squid Users' 

Subject: Re: [squid-users] question about squid and https connection .

On 07/19/2018 12:08 PM, Eliezer Croitoru wrote:

> So the ROOT CA key which squid is using is being used for all the fake 
> certificates, why do we need so many copies of it?

FWIW, I cannot think of any reason to store the CA certificate key in
the database of generated certificates. That key is only used to sign a
freshly generated certificate, and the certificate generator never
regenerates certificates, so I do not see the need to reuse that CA key.

Alex.


> -Original Message-
> From: Alex Rousskov [mailto:rouss...@measurement-factory.com]
> Sent: Wednesday, July 18, 2018 11:45 PM
> To: Eliezer Croitoru ; 'Squid Users' 
> 
> Subject: Re: [squid-users] question about squid and https connection .
> 
> On 07/18/2018 02:23 PM, Eliezer Croitoru wrote:
> 
> 
>> Every certificate have the same properties of the original one except 
>> the "RSA key" part which it's certifiying.
> 
> Assuming you are talking about the generated certificates for the same real 
> certificate X, then yes, they will all have the same (mimicked) fields. 
> Whether they will be signed by the same CA depends on Squid configuration. In 
> my answers, I assumed that all those Squids are configured with the same CA 
> (including the same private key).
> 
> 
>> So what I'm saying is that you cannot say that every certificate which 
>> will be created with the same CA will be the same for two different 
>> 2048 bits RSA keys.
> 
> ... unless the keys are also the same, which was my and, AFAICT, OP 
> assumption.
> 
> Also, unless you are doing something nasty, it probably does not make sense 
> to configure a bumping Squid with a public CA certificate that is identical 
> to some other public CA certificate but has a different private key. In other 
> words, if you are using 200 Squids with a single public CA certificate, then 
> all those Squids should use the same private key.
> 
> Alex.
> 
> 
> 
>> -Original Message-
>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] 
>> On Behalf Of Alex Rousskov
>> Sent: Friday, July 13, 2018 2:01 AM
>> To: 'Squid Users' 
>> Subject: Re: [squid-users] question about squid and https connection .
>>
>> On 07/12/2018 02:35 PM, Eliezer Croitoru wrote:
>>
>>> Every RSA key and certificate pair regardless to the origin server 
>>> and the SSL-BUMP enabled proxy can be different.
>>
>> I cannot find a reasonable interpretation of the above that would 
>> contradict what I have said. Yes, each unique certificate has its own 
>> private key, but that is not what Ahmad was asking about AFAICT.
>>
>>
>>> Will it be more accurate to say that just as long as these 200 squid 
>>> instances(different squid.conf and couple other local variables) use 
>>> the same exact ssl_db cache directory  then it's probable that they 
>>> will use the same certificate.
>>
>> That statement is incorrect. Squids configured with different CA 
>> certificates will generate different fake certificates for the same 
>> real certificate.
>>
>> I assume that Ahmad was asking about a situation where 200 Squid 
>> instances had the same configuration (including CA certificates).
>>
>> Please note that the certificate generator helper gets the signing 
>> (CA) certificate as a parameter with each generation request (because 
>> different Squid ports may use different CA certificates). Also, Squid 
>> probably does not officially support sharing the certificate directory 
>> across Squid instances (even if it works).
>>
>>
>>> Or these 200 squid instances are in SMP mode with 200 workers... If 
>>> these 200 instances do not share memory and certificate cache then 
>>> there is a possibility that the same site from two different sources 
>>> will serve different certificates(due to the different RSA key which 
>>> is different).
>>
>> 200 SMP workers or 200 identically-configured Squid instances will 
>> generate the same fake certificates for the same real certificate.
>> "Stable certificates" is an important requirement for many distributed
>> Squid deployments.
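The per-port CA parameter mentioned in the quoted message can be seen in a configuration like the following sketch (ports, certificate paths, and option spellings are assumptions; older Squid versions spell the option cert= rather than tls-cert=):

```
# Two bumping ports, each signing with its own CA; the generator helper
# receives the matching CA certificate with every generation request.
https_port 3129 intercept ssl-bump tls-cert=/etc/squid/caA.pem
https_port 3130 intercept ssl-bump tls-cert=/etc/squid/caB.pem
sslcrtd_program /usr/lib64/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB
```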

Re: [squid-users] question about squid and https connection .

2018-07-19 Thread Alex Rousskov
On 07/19/2018 12:08 PM, Eliezer Croitoru wrote:

> So the ROOT CA key which squid is using is being used for all the fake 
> certificates, why do we need so many copies of it?

FWIW, I cannot think of any reason to store the CA certificate key in
the database of generated certificates. That key is only used to sign a
freshly generated certificate, and the certificate generator never
regenerates certificates, so I do not see the need to reuse that CA key.

Alex.


> -Original Message-
> From: Alex Rousskov [mailto:rouss...@measurement-factory.com]
> Sent: Wednesday, July 18, 2018 11:45 PM
> To: Eliezer Croitoru ; 'Squid Users' 
> 
> Subject: Re: [squid-users] question about squid and https connection .
> 
> On 07/18/2018 02:23 PM, Eliezer Croitoru wrote:
> 
> 
>> Every certificate have the same properties of the original one except 
>> the "RSA key" part which it's certifiying.
> 
> Assuming you are talking about the generated certificates for the same real 
> certificate X, then yes, they will all have the same (mimicked) fields. 
> Whether they will be signed by the same CA depends on Squid configuration. In 
> my answers, I assumed that all those Squids are configured with the same CA 
> (including the same private key).
> 
> 
>> So what I'm saying is that you cannot say that every certificate which 
>> will be created with the same CA will be the same for two different 
>> 2048 bits RSA keys.
> 
> ... unless the keys are also the same, which was my and, AFAICT, OP 
> assumption.
> 
> Also, unless you are doing something nasty, it probably does not make sense 
> to configure a bumping Squid with a public CA certificate that is identical 
> to some other public CA certificate but has a different private key. In other 
> words, if you are using 200 Squids with a single public CA certificate, then 
> all those Squids should use the same private key.
> 
> Alex.
> 
> 
> 
>> -Original Message-
>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] 
>> On Behalf Of Alex Rousskov
>> Sent: Friday, July 13, 2018 2:01 AM
>> To: 'Squid Users' 
>> Subject: Re: [squid-users] question about squid and https connection .
>>
>> On 07/12/2018 02:35 PM, Eliezer Croitoru wrote:
>>
>>> Every RSA key and certificate pair regardless to the origin server 
>>> and the SSL-BUMP enabled proxy can be different.
>>
>> I cannot find a reasonable interpretation of the above that would 
>> contradict what I have said. Yes, each unique certificate has its own 
>> private key, but that is not what Ahmad was asking about AFAICT.
>>
>>
>>> Will it be more accurate to say that just as long as these 200 squid 
>>> instances(different squid.conf and couple other local variables) use 
>>> the same exact ssl_db cache directory  then it's probable that they 
>>> will use the same certificate.
>>
>> That statement is incorrect. Squids configured with different CA 
>> certificates will generate different fake certificates for the same 
>> real certificate.
>>
>> I assume that Ahmad was asking about a situation where 200 Squid 
>> instances had the same configuration (including CA certificates).
>>
>> Please note that the certificate generator helper gets the signing 
>> (CA) certificate as a parameter with each generation request (because 
>> different Squid ports may use different CA certificates). Also, Squid 
>> probably does not officially support sharing the certificate directory 
>> across Squid instances (even if it works).
>>
>>
>>> Or these 200 squid instances are in SMP mode with 200 workers... If 
>>> these 200 instances do not share memory and certificate cache then 
>>> there is a possibility that the same site from two different sources 
>>> will serve different certificates(due to the different RSA key which 
>>> is different).
>>
>> 200 SMP workers or 200 identically-configured Squid instances will 
>> generate the same fake certificates for the same real certificate.
>> "Stable certificates" is an important requirement for many distributed 
>> Squid deployments.
>>
>> Alex.
>>
>>
>>
>>> -Original Message-
>>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] 
>>> On Behalf Of Alex Rousskov
>>> Sent: Thursday, July 12, 2018 11:27 PM
>>> To: --Ahmad-- ; Squid Users 
>>> 
>>> Subject: Re: [squid-users] question about squid and https connection .
>>>
>>> On 07/12/2018 01:17 PM, --Ahmad-- wrote:
>>>
>&g

Re: [squid-users] question about squid and https connection .

2018-07-19 Thread Eliezer Croitoru
Sorry, a keyboard key broke while I was reviewing the text...

OK, so it doesn't make any sense to store so many copies of the exact same key 
in the ssl_db/certs files.
I took a sample from my certs directory and extracted the keys that are stored 
at the QA server:
## Start
[root@squid4-testing 1]# ll
total 12
-rw-r--r--. 1 root root 1704 Jul 19 20:58 key1.pem
-rw-r--r--. 1 root root 1704 Jul 19 20:58 key2.pem
-rw-r--r--. 1 root root 1704 Jul 19 20:59 rootCA-key.pem
[root@squid4-testing 1]# cat key1.pem |sha256sum
3db2a55499015a4166f8059d378d79032ee85797f92176d7a4d5ad8a2025bec7  -
[root@squid4-testing 1]# cat key2.pem |sha256sum
3db2a55499015a4166f8059d378d79032ee85797f92176d7a4d5ad8a2025bec7  -
[root@squid4-testing 1]# cat rootCA-key.pem |sha256sum
3db2a55499015a4166f8059d378d79032ee85797f92176d7a4d5ad8a2025bec7
## END

So the root CA key that Squid uses is the key stored for every fake 
certificate; why do we need so many copies of it?
I think the helper and the DB store could be simplified, at least for single 
servers.
For small servers this space is nothing, but for large systems it is an issue.
Also, on embedded devices, where every I/O write counts before the flash/NAND 
dies, I think we can do something about it.
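For anyone who wants to measure this on their own box, the duplication can be counted mechanically. This is only a sketch: the stand-in files below take the place of a real ssl_db/certs directory, whose path varies per install.

```shell
# Count distinct keys among PEM files in a directory. Point CERT_DIR at a
# real ssl_db/certs path to measure a live store; self-contained stand-in
# files are used here.
CERT_DIR=$(mktemp -d)
printf 'same-key\n' > "$CERT_DIR/key1.pem"        # stand-ins for real PEM keys
printf 'same-key\n' > "$CERT_DIR/key2.pem"
printf 'same-key\n' > "$CERT_DIR/rootCA-key.pem"
distinct=$(sha256sum "$CERT_DIR"/*.pem | awk '{print $1}' | sort -u | wc -l | tr -d ' ')
files=$(ls "$CERT_DIR" | wc -l | tr -d ' ')
echo "files: $files, distinct keys: $distinct"    # prints: files: 3, distinct keys: 1
rm -rf "$CERT_DIR"
```

When `distinct` is 1, every file in the store holds the same key, which is exactly the waste described above.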

Thanks,
Eliezer

-
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com]
Sent: Wednesday, July 18, 2018 11:45 PM
To: Eliezer Croitoru ; 'Squid Users' 

Subject: Re: [squid-users] question about squid and https connection .

On 07/18/2018 02:23 PM, Eliezer Croitoru wrote:


> Every certificate have the same properties of the original one except 
> the "RSA key" part which it's certifiying.

Assuming you are talking about the generated certificates for the same real 
certificate X, then yes, they will all have the same (mimicked) fields. Whether 
they will be signed by the same CA depends on Squid configuration. In my 
answers, I assumed that all those Squids are configured with the same CA 
(including the same private key).


> So what I'm saying is that you cannot say that every certificate which 
> will be created with the same CA will be the same for two different 
> 2048 bits RSA keys.

... unless the keys are also the same, which was my and, AFAICT, OP assumption.

Also, unless you are doing something nasty, it probably does not make sense to 
configure a bumping Squid with a public CA certificate that is identical to 
some other public CA certificate but has a different private key. In other 
words, if you are using 200 Squids with a single public CA certificate, then 
all those Squids should use the same private key.

Alex.



> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] 
> On Behalf Of Alex Rousskov
> Sent: Friday, July 13, 2018 2:01 AM
> To: 'Squid Users' 
> Subject: Re: [squid-users] question about squid and https connection .
> 
> On 07/12/2018 02:35 PM, Eliezer Croitoru wrote:
> 
>> Every RSA key and certificate pair regardless to the origin server 
>> and the SSL-BUMP enabled proxy can be different.
> 
> I cannot find a reasonable interpretation of the above that would 
> contradict what I have said. Yes, each unique certificate has its own 
> private key, but that is not what Ahmad was asking about AFAICT.
> 
> 
>> Will it be more accurate to say that just as long as these 200 squid 
>> instances(different squid.conf and couple other local variables) use 
>> the same exact ssl_db cache directory  then it's probable that they 
>> will use the same certificate.
> 
> That statement is incorrect. Squids configured with different CA 
> certificates will generate different fake certificates for the same 
> real certificate.
> 
> I assume that Ahmad was asking about a situation where 200 Squid 
> instances had the same configuration (including CA certificates).
> 
> Please note that the certificate generator helper gets the signing 
> (CA) certificate as a parameter with each generation request (because 
> different Squid ports may use different CA certificates). Also, Squid 
> probably does not officially support sharing the certificate directory 
> across Squid instances (even if it works).
> 
> 
>> Or these 200 squid instances are in SMP mode with 200 workers... If 
>> these 200 instances do not share memory and certificate cache then 
>> there is a possibility that the same site from two different sources 
>> will serve different certificates(due to the different RSA key which 
>> is different).
> 
> 200 SMP workers or 200 identically-configured Squid instances will 
> generate the same fake certificates for the same real certificate.
> "Stable certificates" is an impo

Re: [squid-users] question about squid and https connection .

2018-07-19 Thread Eliezer Croitoru
OK, so it doesn't make any sense to store so many copies of the exact same key 
in the ssl_db/certs files.
I took a sample from my certs directory and extracted the keys that are stored 
at the QA server:
[root@squid4-testing 1]# ll
total 12
-rw-r--r--. 1 root root 1704 Jul 19 20:58 key1.pem
-rw-r--r--. 1 root root 1704 Jul 19 20:58 key2.pem
-rw-r--r--. 1 root root 1704 Jul 19 20:59 rootCA-key.pem
[root@squid4-testing 1]# cat key1.pem |sha256sum
3db2a55499015a4166f8059d378d79032ee85797f92176d7a4d5ad8a2025bec7  -
[root@squid4-testing 1]# cat key2.pem |sha256sum
3db2a55499015a4166f8059d378d79032ee85797f92176d7a4d5ad8a2025bec7  -
[root@squid4-testing 1]# cat rootCA-key.pem |sha256sum
3db2a55499015a4166f8059d378d79032ee85797f92176d7a4d5ad8a2025bec7  -


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
Sent: Wednesday, July 18, 2018 11:45 PM
To: Eliezer Croitoru ; 'Squid Users' 

Subject: Re: [squid-users] question about squid and https connection .

On 07/18/2018 02:23 PM, Eliezer Croitoru wrote:


> Every certificate have the same properties of the original one except
> the "RSA key" part which it's certifiying.

Assuming you are talking about the generated certificates for the same
real certificate X, then yes, they will all have the same (mimicked)
fields. Whether they will be signed by the same CA depends on Squid
configuration. In my answers, I assumed that all those Squids are
configured with the same CA (including the same private key).


> So what I'm saying is that you cannot say that every certificate
> which will be created with the same CA will be the same for two
> different 2048 bits RSA keys.

... unless the keys are also the same, which was my and, AFAICT, OP
assumption.

Also, unless you are doing something nasty, it probably does not make
sense to configure a bumping Squid with a public CA certificate that is
identical to some other public CA certificate but has a different
private key. In other words, if you are using 200 Squids with a single
public CA certificate, then all those Squids should use the same private
key.

Alex.



> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Alex Rousskov
> Sent: Friday, July 13, 2018 2:01 AM
> To: 'Squid Users' 
> Subject: Re: [squid-users] question about squid and https connection .
> 
> On 07/12/2018 02:35 PM, Eliezer Croitoru wrote:
> 
>> Every RSA key and certificate pair regardless to the origin server
>> and the SSL-BUMP enabled proxy can be different.
> 
> I cannot find a reasonable interpretation of the above that would
> contradict what I have said. Yes, each unique certificate has its own
> private key, but that is not what Ahmad was asking about AFAICT.
> 
> 
>> Will it be more accurate to say that just as long as these 200 squid
>> instances(different squid.conf and couple other local variables) use
>> the same exact ssl_db cache directory  then it's probable that they
>> will use the same certificate.
> 
> That statement is incorrect. Squids configured with different CA
> certificates will generate different fake certificates for the same real
> certificate.
> 
> I assume that Ahmad was asking about a situation where 200 Squid
> instances had the same configuration (including CA certificates).
> 
> Please note that the certificate generator helper gets the signing (CA)
> certificate as a parameter with each generation request (because
> different Squid ports may use different CA certificates). Also, Squid
> probably does not officially support sharing the certificate directory
> across Squid instances (even if it works).
> 
> 
>> Or these 200 squid instances are in SMP mode with 200 workers... If
>> these 200 instances do not share memory and certificate cache then
>> there is a possibility that the same site from two different sources 
>> will serve different certificates(due to the different RSA key which
>> is different).
> 
> 200 SMP workers or 200 identically-configured Squid instances will
> generate the same fake certificates for the same real certificate.
> "Stable certificates" is an important requirement for many distributed
> Squid deployments.
> 
> Alex.
> 
> 
> 
>> -Original Message-
>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
>> Behalf Of Alex Rousskov
>> Sent: Thursday, July 12, 2018 11:27 PM
>> To: --Ahmad-- ; Squid Users 
>> 
>> Subject: Re: [squid-users] question about squid and https connection .
>>
>> On 07/12/2018 01:17 PM, --Ahmad-- wrote:
>>
>>> if i have pc# 1 and that pc open faceboo

Re: [squid-users] Question about traffic calculate

2018-07-19 Thread Tiraen
live access.log streams is probably the most efficient way of doing this.

Concerning this point: the logs only show one half of the traffic. What about
accounting for incoming + outgoing together, as in

https://alter.org.ua/soft/fbsd/squid_tot_sz/

All the patches I found target old versions of Squid; there is nothing for 3.5.
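For what it's worth, the access.log route Alex suggested can be as simple as an awk pass, though it only sums the Squid-to-client byte count. The field positions below assume the default native "squid" logformat (timestamp, elapsed, client, code/status, bytes, ...); check them against your own logformat directive.

```shell
# Sum reply bytes per client IP from a Squid native-format access.log.
# Point LOG at your real access.log; a two-line sample is used here so the
# sketch is self-contained.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
1531400000.123     56 10.0.0.1 TCP_MISS/200 1500 GET http://example.com/a - HIER_DIRECT/93.184.216.34 text/html
1531400001.456     40 10.0.0.1 TCP_HIT/200 500 GET http://example.com/b - HIER_NONE/- text/html
EOF
awk '{ sent[$3] += $5 } END { for (ip in sent) printf "%s %d\n", ip, sent[ip] }' "$LOG"
# prints: 10.0.0.1 2000
rm -f "$LOG"
```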

2018-06-21 19:20 GMT+03:00 Alex Rousskov :

> On 06/21/2018 05:14 AM, Tiraen wrote:
> > where i can read more about this (I mean the development of custom
> > ICAP/eCAP modules and their connection to the proxy) ?
>
> The best place to start is probably
> https://wiki.squid-cache.org/SquidFaq/ContentAdaptation
>
> If you decide to go the ICAP route, you will need to find the right ICAP
> server for your project. After that, the development will revolve around
> writing a custom adapter for that ICAP server. The above URL links to a
> page with a list of ICAP servers:
> https://wiki.squid-cache.org/Features/ICAP
>
> If you decide to go the eCAP route, you will need to (find somebody to)
> write an eCAP adapter (no server required).
>
> In either case, the required development is similar to writing a plugin
> or loadable module. Any capable developer can do it, but understanding
> of HTTP concepts and familiarity with the ICAP server or eCAP API helps.
>
>
> HTH,
>
> Alex.
>
>
> > 2018-06-13 18:35 GMT+03:00 Alex Rousskov:
> >
> > On 06/13/2018 07:09 AM, Matus UHLAR - fantomas wrote:
> > > On 13.06.18 13:26, Tiraen wrote:
> > >> ICAP will help provide data on incoming / outgoing traffic?
> >
> > > icap can get the data and work with it.
> > > you don't have to manipulate, just do the accounting.
> > > you just need ICAP module that will do it.
> >
> >
> > Yes, it is possible to collect more-or-less accurate incoming request
> > and incoming response stats using an ICAP service, but doing so
> would be
> > very inefficient. Using eCAP would improve performance, but
> interpreting
> > live access.log streams is probably the most efficient way of doing
> > this.
> >
> > IIRC, both eCAP and ICAP interfaces do not see the exact incoming
> > requests and incoming responses because Squid may strip hop-by-hop
> HTTP
> > headers and decode chunked HTTP message bodies before forwarding the
> > incoming message to the adaptation service. If you need exact headers
> > and exact body sizes, then you need more than just the basic ICAP and
> > eCAP interface. Again, access.log is probably an overall better
> choice
> > for capturing that info.
> >
> > Both eCAP and ICAP interfaces do not see outgoing requests and
> outgoing
> > responses because Squid only supports pre-cache vectoring points.
> >
> >
> > HTH,
> >
> > Alex.
> > P.S. In the above, "incoming" is "to Squid" and "outgoing" is "from
> > Squid".
> >
> >
> > >> 2018-06-13 12:54 GMT+03:00 Matus UHLAR - fantomas <
> uh...@fantomas.sk >:
> > >>
> > >>> On 13.06.18 11:51, Tiraen wrote:
> > >>>
> >  either such a question, perhaps someone in the course
> > 
> >  in the SQUID is still not implemented radius accounting?
> > 
> > >>>
> > >>> authentication - yes. But squid doese not support accounting
> (afaik).
> > >>>
> > >>> Maybe there are any third-party modules working correctly?
> > 
> > >>>
> > >>> maybe iCAP module.
> >
> >
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > 
> > http://lists.squid-cache.org/listinfo/squid-users
> > 
> >
> >
> >
> >
> > --
> > With best regards,
> >
> > Vyacheslav Yakushev,
> >
> > Unix system administrator
> >
> > https://t.me/kelewind
>
>


-- 
With best regards,

Vyacheslav Yakushev,

Unix system administrator

https://t.me/kelewind
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] question about squid and https connection .

2018-07-18 Thread Alex Rousskov
On 07/18/2018 02:23 PM, Eliezer Croitoru wrote:


> Every certificate have the same properties of the original one except
> the "RSA key" part which it's certifiying.

Assuming you are talking about the generated certificates for the same
real certificate X, then yes, they will all have the same (mimicked)
fields. Whether they will be signed by the same CA depends on Squid
configuration. In my answers, I assumed that all those Squids are
configured with the same CA (including the same private key).


> So what I'm saying is that you cannot say that every certificate
> which will be created with the same CA will be the same for two
> different 2048 bits RSA keys.

... unless the keys are also the same, which was my and, AFAICT, OP
assumption.

Also, unless you are doing something nasty, it probably does not make
sense to configure a bumping Squid with a public CA certificate that is
identical to some other public CA certificate but has a different
private key. In other words, if you are using 200 Squids with a single
public CA certificate, then all those Squids should use the same private
key.

Alex.



> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Alex Rousskov
> Sent: Friday, July 13, 2018 2:01 AM
> To: 'Squid Users' 
> Subject: Re: [squid-users] question about squid and https connection .
> 
> On 07/12/2018 02:35 PM, Eliezer Croitoru wrote:
> 
>> Every RSA key and certificate pair regardless to the origin server
>> and the SSL-BUMP enabled proxy can be different.
> 
> I cannot find a reasonable interpretation of the above that would
> contradict what I have said. Yes, each unique certificate has its own
> private key, but that is not what Ahmad was asking about AFAICT.
> 
> 
>> Will it be more accurate to say that just as long as these 200 squid
>> instances(different squid.conf and couple other local variables) use
>> the same exact ssl_db cache directory  then it's probable that they
>> will use the same certificate.
> 
> That statement is incorrect. Squids configured with different CA
> certificates will generate different fake certificates for the same real
> certificate.
> 
> I assume that Ahmad was asking about a situation where 200 Squid
> instances had the same configuration (including CA certificates).
> 
> Please note that the certificate generator helper gets the signing (CA)
> certificate as a parameter with each generation request (because
> different Squid ports may use different CA certificates). Also, Squid
> probably does not officially support sharing the certificate directory
> across Squid instances (even if it works).
> 
> 
>> Or these 200 squid instances are in SMP mode with 200 workers... If
>> these 200 instances do not share memory and certificate cache then
>> there is a possibility that the same site from two different sources 
>> will serve different certificates(due to the different RSA key which
>> is different).
> 
> 200 SMP workers or 200 identically-configured Squid instances will
> generate the same fake certificates for the same real certificate.
> "Stable certificates" is an important requirement for many distributed
> Squid deployments.
> 
> Alex.
> 
> 
> 
>> -----Original Message-
>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
>> Behalf Of Alex Rousskov
>> Sent: Thursday, July 12, 2018 11:27 PM
>> To: --Ahmad-- ; Squid Users 
>> 
>> Subject: Re: [squid-users] question about squid and https connection .
>>
>> On 07/12/2018 01:17 PM, --Ahmad-- wrote:
>>
>>> if i have pc# 1 and that pc open facebook .
>>>
>>> then i have other pc # 2 and that other pc open facebook .
>>>
>>>
>>> now  as we know facebook is https .
>>>
>>> so is the key/ cert that used on pc # 1 is same as cert in pc # 2 to 
>>> decrypt the fb encrypted traffic ?
>>
>> Certificates themselves are not used (directly) to decrypt traffic
>> AFAIK, but yes, both PCs will see the same server certificate (ignoring
>> CDNs and other complications).
>>
>>
>>
>>> now in the presence of squid .
>>>
>>> if i used tcp connect method  , will it be different than above ?
>>
>> If you are not bumping the connection, then both PCs will see the same
>> real Facebook certificate as if those PCs did not use a proxy.
>>
>> If you are bumping the connection, then both PCs will see the same fake
>> certificate generated by Squid.
>>
>>
>>
>>> say i used 200 proxies in same squid machine and i used to access FB from 
>>> t

Re: [squid-users] question about squid and https connection .

2018-07-18 Thread Eliezer Croitoru
Alex,

Some properties of the certificate are static, but a certificate certifies a 
specific key. If every certificate were exactly the same as another in all of 
its properties, including the key, then we would be able to fake any 
certificate in the world very quickly.

Correct me if I'm wrong:
Every certificate has the same properties as the original one except the "RSA 
key" part that it is certifying. There is a dynamic variable in every 
certificate when it is created (not talking about timestamps...).

So what I'm saying is that you cannot say that every certificate created with 
the same CA will be the same for two different 2048-bit RSA keys.

Let me know if I got it right.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Alex Rousskov
Sent: Friday, July 13, 2018 2:01 AM
To: 'Squid Users' 
Subject: Re: [squid-users] question about squid and https connection .

On 07/12/2018 02:35 PM, Eliezer Croitoru wrote:

> Every RSA key and certificate pair regardless to the origin server
> and the SSL-BUMP enabled proxy can be different.

I cannot find a reasonable interpretation of the above that would
contradict what I have said. Yes, each unique certificate has its own
private key, but that is not what Ahmad was asking about AFAICT.


> Will it be more accurate to say that just as long as these 200 squid
> instances(different squid.conf and couple other local variables) use
> the same exact ssl_db cache directory  then it's probable that they
> will use the same certificate.

That statement is incorrect. Squids configured with different CA
certificates will generate different fake certificates for the same real
certificate.

I assume that Ahmad was asking about a situation where 200 Squid
instances had the same configuration (including CA certificates).

Please note that the certificate generator helper gets the signing (CA)
certificate as a parameter with each generation request (because
different Squid ports may use different CA certificates). Also, Squid
probably does not officially support sharing the certificate directory
across Squid instances (even if it works).


> Or these 200 squid instances are in SMP mode with 200 workers... If
> these 200 instances do not share memory and certificate cache then
> there is a possibility that the same site from two different sources 
> will serve different certificates(due to the different RSA key which
> is different).

200 SMP workers or 200 identically-configured Squid instances will
generate the same fake certificates for the same real certificate.
"Stable certificates" is an important requirement for many distributed
Squid deployments.

Alex.



> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Alex Rousskov
> Sent: Thursday, July 12, 2018 11:27 PM
> To: --Ahmad-- ; Squid Users 
> 
> Subject: Re: [squid-users] question about squid and https connection .
> 
> On 07/12/2018 01:17 PM, --Ahmad-- wrote:
> 
>> if i have pc# 1 and that pc open facebook .
>>
>> then i have other pc # 2 and that other pc open facebook .
>>
>>
>> now  as we know facebook is https .
>>
>> so is the key/ cert that used on pc # 1 is same as cert in pc # 2 to decrypt 
>> the fb encrypted traffic ?
> 
> Certificates themselves are not used (directly) to decrypt traffic
> AFAIK, but yes, both PCs will see the same server certificate (ignoring
> CDNs and other complications).
> 
> 
> 
>> now in the presence of squid .
>>
>> if i used tcp connect method  , will it be different than above ?
> 
> If you are not bumping the connection, then both PCs will see the same
> real Facebook certificate as if those PCs did not use a proxy.
> 
> If you are bumping the connection, then both PCs will see the same fake
> certificate generated by Squid.
> 
> 
> 
>> say i used 200 proxies in same squid machine and i used to access FB from 
>> the same pc same browser .
>>
>> will facebook see my cert/key i used to decrypt its traffic ?
> 
> If you are asking whether Facebook will know anything about the fake
> certificate generated by Squid for clients, then the answer is "no,
> unless Facebook runs some special client code to deliver (Squid)
> certificate back to Facebook".
> 
> In general, the origin server assumes that the client is talking to it
> directly. Clients may pin or otherwise restrict certificates that they
> trust, but after the connection is successfully established, the server
> may assume that it is talking to the client directly. A paranoid server
> may del

Re: [squid-users] question about squid and https connection .

2018-07-12 Thread Amos Jeffries
On 13/07/18 08:27, Eliezer Croitoru wrote:
> Alex,
> 
> Just to be sure:
> Every RSA key and certificate pair regardless to the origin server and the 
> SSL-BUMP enabled proxy can be different.
> If the key would be the exact same one then we will probably have a very big 
> security issue/risk to my understanding (leaving aside DH).
> 
> Will it be more accurate to say that just as long as these 200 squid 
> instances(different squid.conf and couple other local variables)
> use the same exact ssl_db cache directory  then it's probable that they will 
> use the same certificate.
> Or these 200 squid instances are in SMP mode with 200 workers...
> If these 200 instances do not share memory and certificate cache then there 
> is a possibility that the same site from two different sources
> will serve different certificates(due to the different RSA key which is 
> different).
> 

Instances (in terms of how we defined the term "Squid instance") cannot
share memory. They are completely separate processes. Even when in
SMP-aware operation, they are separate process groups. That is why you
have to use the -n name command line parameter to direct signals at
specific instances.


In regards to the certs: generating a fake cert is a hard-coded algorithm
using the inputs Alex mentioned. The only way differences occur between any
two Squid fake certs is when the real origin server cert given to each of
them is different.
In that case you *do* absolutely want the fake ones to differ as well -
even (and especially) when they come from the same origin server.

Think of Squid as copy-and-pasting cert field values from the origin cert
to the fake cert. You won't be far off from what's really happening.
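That copy-the-fields idea can be sketched with plain openssl. This is not Squid's actual generator (which also mimics SANs, validity dates, and so on, and derives stable serials); it only illustrates that the fake cert reuses the origin's subject while the issuer and key are the proxy's own. All names below are illustrative.

```shell
# Mint a "fake" cert that copies the subject of an origin cert but is signed
# by our own bumping CA with our own key.
t=$(mktemp -d)
# our bumping CA (stands in for Squid's configured CA cert + key)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$t/ca.key" -out "$t/ca.crt" -subj "/CN=My Bumping CA" 2>/dev/null
# a CSR carrying the origin's subject, but with a fresh local key...
openssl req -new -newkey rsa:2048 -nodes \
  -keyout "$t/fake.key" -out "$t/fake.csr" -subj "/CN=www.example.com" 2>/dev/null
# ...signed by the bumping CA
openssl x509 -req -in "$t/fake.csr" -CA "$t/ca.crt" -CAkey "$t/ca.key" \
  -CAcreateserial -days 1 -out "$t/fake.crt" 2>/dev/null
openssl x509 -in "$t/fake.crt" -noout -subject   # same CN as the origin cert
openssl x509 -in "$t/fake.crt" -noout -issuer    # CN = My Bumping CA, not the origin's issuer
rm -rf "$t"
```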

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] question about squid and https connection .

2018-07-12 Thread Alex Rousskov
On 07/12/2018 02:35 PM, Eliezer Croitoru wrote:

> Every RSA key and certificate pair regardless to the origin server
> and the SSL-BUMP enabled proxy can be different.

I cannot find a reasonable interpretation of the above that would
contradict what I have said. Yes, each unique certificate has its own
private key, but that is not what Ahmad was asking about AFAICT.


> Will it be more accurate to say that just as long as these 200 squid
> instances(different squid.conf and couple other local variables) use
> the same exact ssl_db cache directory  then it's probable that they
> will use the same certificate.

That statement is incorrect. Squids configured with different CA
certificates will generate different fake certificates for the same real
certificate.

I assume that Ahmad was asking about a situation where 200 Squid
instances had the same configuration (including CA certificates).

Please note that the certificate generator helper gets the signing (CA)
certificate as a parameter with each generation request (because
different Squid ports may use different CA certificates). Also, Squid
probably does not officially support sharing the certificate directory
across Squid instances (even if it works).


> Or these 200 squid instances are in SMP mode with 200 workers... If
> these 200 instances do not share memory and certificate cache then
> there is a possibility that the same site from two different sources 
> will serve different certificates(due to the different RSA key which
> is different).

200 SMP workers or 200 identically-configured Squid instances will
generate the same fake certificates for the same real certificate.
"Stable certificates" is an important requirement for many distributed
Squid deployments.

Alex.



> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Alex Rousskov
> Sent: Thursday, July 12, 2018 11:27 PM
> To: --Ahmad-- ; Squid Users 
> 
> Subject: Re: [squid-users] question about squid and https connection .
> 
> On 07/12/2018 01:17 PM, --Ahmad-- wrote:
> 
>> if i have pc# 1 and that pc open facebook .
>>
>> then i have other pc # 2 and that other pc open facebook .
>>
>>
>> now  as we know facebook is https .
>>
>> so is the key/ cert that used on pc # 1 is same as cert in pc # 2 to decrypt 
>> the fb encrypted traffic ?
> 
> Certificates themselves are not used (directly) to decrypt traffic
> AFAIK, but yes, both PCs will see the same server certificate (ignoring
> CDNs and other complications).
> 
> 
> 
>> now in the presence of squid .
>>
>> if i used tcp connect method  , will it be different than above ?
> 
> If you are not bumping the connection, then both PCs will see the same
> real Facebook certificate as if those PCs did not use a proxy.
> 
> If you are bumping the connection, then both PCs will see the same fake
> certificate generated by Squid.
> 
> 
> 
>> say i used 200 proxies in same squid machine and i used to access FB from 
>> the same pc same browser .
>>
>> will facebook see my cert/key i used to decrypt its traffic ?
> 
> If you are asking whether Facebook will know anything about the fake
> certificate generated by Squid for clients, then the answer is "no,
> unless Facebook runs some special client code to deliver (Squid)
> certificate back to Facebook".
> 
> In general, the origin server assumes that the client is talking to it
> directly. Clients may pin or otherwise restrict certificates that they
> trust, but after the connection is successfully established, the server
> may assume that it is talking to the client directly. A paranoid server
> may deliver special code to double-check that assumption, but there are
> other, more standard methods to prevent bumping, such as certificate
> pinning and certificate transparency services.
> 
> 
> 
>> is the key/cert of FB to decrypt the https content is same on all browsers 
>> on all computers ?
> 
> If you are asking whether the generated certificates are going to be the
> same for all clients, then the answer is "yes, provided all those 200
> Squids use the same configuration (including the CA certificate) and
> receive the same real certificate from Facebook". Squid's certificate
> generation algorithm generates the same certificate given the same
> configuration and the same origin server certificate.
> 
> 
> HTH,
> 
> Alex.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] question about squid and https connection .

2018-07-12 Thread login mogin
Hi Ahmad,

The proxy will just change your IP when you connect to FB this way, but FB
probably has (or at least should have) many other ways to detect that it is
the same person connecting; browser-based profiling, to name one. They have
your user_agent, browser extensions, cookies, etc.
In other words, you will leave many other footprints.

Best
Logan

--Ahmad-- , 12 Tem 2018 Per, 15:15 tarihinde şunu
yazdı:

> TAHNK YOU Guys ALL .
>
>
> so my question is in another way is :
>
>
> if i have squid proxy using it using the TCP_Connect way .
>
> and from the same pc and same browser and try to open facebook from 200
> different address .
>
> then facebook wont have a footprint that there is 200 different addresses
> hit FB from the same public key /cert .
>
> i just ant to make sure there is no footprint happen .
>
> thats way i asked .
>
> let me know concerns Guys , thanks alot Guys !
>
> > On 12 Jul 2018, at 23:35, Eliezer Croitoru  wrote:
> >
> > Alex,
> >
> > Just to be sure:
> > Every RSA key and certificate pair regardless to the origin server and
> the SSL-BUMP enabled proxy can be different.
> > If the key would be the exact same one then we will probably have a very
> big security issue/risk to my understanding (leaving aside DH).
> >
> > Will it be more accurate to say that just as long as these 200 squid
> instances(different squid.conf and couple other local variables)
> > use the same exact ssl_db cache directory  then it's probable that they
> will use the same certificate.
> > Or these 200 squid instances are in SMP mode with 200 workers...
> > If these 200 instances do not share memory and certificate cache then
> there is a possibility that the same site from two different sources
> > will serve different certificates(due to the different RSA key which is
> different).
> >
> > Thanks,
> > Eliezer
> >
> > 
> > Eliezer Croitoru
> > Linux System Administrator
> > Mobile: +972-5-28704261
> > Email: elie...@ngtech.co.il
> >
> >
> >
> > -Original Message-
> > From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
> Behalf Of Alex Rousskov
> > Sent: Thursday, July 12, 2018 11:27 PM
> > To: --Ahmad-- ; Squid Users <
> squid-users@lists.squid-cache.org>
> > Subject: Re: [squid-users] question about squid and https connection .
> >
> > On 07/12/2018 01:17 PM, --Ahmad-- wrote:
> >
> >> if i have pc# 1 and that pc open facebook .
> >>
> >> then i have other pc # 2 and that other pc open facebook .
> >>
> >>
> >> now  as we know facebook is https .
> >>
> >> so is the key/ cert that used on pc # 1 is same as cert in pc # 2 to
> decrypt the fb encrypted traffic ?
> >
> > Certificates themselves are not used (directly) to decrypt traffic
> > AFAIK, but yes, both PCs will see the same server certificate (ignoring
> > CDNs and other complications).
> >
> >
> >
> >> now in the presence of squid .
> >>
> >> if i used tcp connect method  , will it be different than above ?
> >
> > If you are not bumping the connection, then both PCs will see the same
> > real Facebook certificate as if those PCs did not use a proxy.
> >
> > If you are bumping the connection, then both PCs will see the same fake
> > certificate generated by Squid.
> >
> >
> >
> >> say i used 200 proxies in same squid machine and i used to access FB
> from the same pc same browser .
> >>
> >> will facebook see my cert/key i used to decrypt its traffic ?
> >
> > If you are asking whether Facebook will know anything about the fake
> > certificate generated by Squid for clients, then the answer is "no,
> > unless Facebook runs some special client code to deliver (Squid)
> > certificate back to Facebook".
> >
> > In general, the origin server assumes that the client is talking to it
> > directly. Clients may pin or otherwise restrict certificates that they
> > trust, but after the connection is successfully established, the server
> > may assume that it is talking to the client directly. A paranoid server
> > may deliver special code to double check that assumption, but there are
> > other, more standard methods to prevent bumping such as certificate
> > pinning and certificate transparency services.
> >
> >
> >
> >> is the key/cert of FB to decrypt the https content is same on all
> browsers on all computers ?
> >
> > If you are asking whether the generated certificates are going 

Re: [squid-users] question about squid and https connection .

2018-07-12 Thread --Ahmad--
Thank you all!

Let me put my question another way:

if I have a Squid proxy used via the TCP CONNECT method,

and from the same PC and the same browser I try to open Facebook through 200
different addresses,

then Facebook won't have a footprint showing that 200 different addresses hit
FB with the same public key/cert, right?

I just want to make sure that no such footprint occurs.

That's why I asked.

Let me know your concerns, guys. Thanks a lot!

> On 12 Jul 2018, at 23:35, Eliezer Croitoru  wrote:
> 
> Alex,
> 
> Just to be sure:
> Every RSA key and certificate pair regardless to the origin server and the 
> SSL-BUMP enabled proxy can be different.
> If the key would be the exact same one then we will probably have a very big 
> security issue/risk to my understanding (leaving aside DH).
> 
> Will it be more accurate to say that just as long as these 200 squid 
> instances(different squid.conf and couple other local variables)
> use the same exact ssl_db cache directory  then it's probable that they will 
> use the same certificate.
> Or these 200 squid instances are in SMP mode with 200 workers...
> If these 200 instances do not share memory and certificate cache then there 
> is a possibility that the same site from two different sources
> will serve different certificates(due to the different RSA key which is 
> different).
> 
> Thanks,
> Eliezer
> 
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
> 
> 
> 
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Alex Rousskov
> Sent: Thursday, July 12, 2018 11:27 PM
> To: --Ahmad-- ; Squid Users 
> 
> Subject: Re: [squid-users] question about squid and https connection .
> 
> On 07/12/2018 01:17 PM, --Ahmad-- wrote:
> 
>> if i have pc# 1 and that pc open facebook .
>> 
>> then i have other pc # 2 and that other pc open facebook .
>> 
>> 
>> now  as we know facebook is https .
>> 
>> so is the key/ cert that used on pc # 1 is same as cert in pc # 2 to decrypt 
>> the fb encrypted traffic ?
> 
> Certificates themselves are not used (directly) to decrypt traffic
> AFAIK, but yes, both PCs will see the same server certificate (ignoring
> CDNs and other complications).
> 
> 
> 
>> now in the presence of squid .
>> 
>> if i used tcp connect method  , will it be different than above ?
> 
> If you are not bumping the connection, then both PCs will see the same
> real Facebook certificate as if those PCs did not use a proxy.
> 
> If you are bumping the connection, then both PCs will see the same fake
> certificate generated by Squid.
> 
> 
> 
>> say i used 200 proxies in same squid machine and i used to access FB from 
>> the same pc same browser .
>> 
>> will facebook see my cert/key i used to decrypt its traffic ?
> 
> If you are asking whether Facebook will know anything about the fake
> certificate generated by Squid for clients, then the answer is "no,
> unless Facebook runs some special client code to deliver (Squid)
> certificate back to Facebook".
> 
> In general, the origin server assumes that the client is talking to it
> directly. Clients may pin or otherwise restrict certificates that they
> trust, but after the connection is successfully established, the server
> may assume that it is talking to the client directly. A paranoid server
> may deliver special code to double check that assumption, but there are
> other, more standard methods to prevent bumping such as certificate
> pinning and certificate transparency services.
> 
> 
> 
>> is the key/cert of FB to decrypt the https content is same on all browsers 
>> on all computers ?
> 
> If you are asking whether the generated certificates are going to be the
> same for all clients, then the answer is "yes, provided all those 200
> Squids use the same configuration (including the CA certificate) and
> receive the same real certificate from Facebook". Squid's certificate
> generation algorithm generates the same certificate given the same
> configuration and the same origin server certificate.
> 
> 
> HTH,
> 
> Alex.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] question about squid and https connection .

2018-07-12 Thread Eliezer Croitoru
Alex,

Just to be sure: every RSA key and certificate pair can be different,
regardless of the origin server and the SSL-Bump-enabled proxy. If the key
were the exact same one, we would probably have a very big security
issue/risk, to my understanding (leaving DH aside).

Would it be more accurate to say that, as long as these 200 Squid instances
(with different squid.conf files and a couple of other local variables) use
the exact same ssl_db cache directory, they will probably use the same
certificate? Or that these 200 Squid instances run in SMP mode with 200
workers? If these 200 instances do not share memory and the certificate
cache, then the same site, reached through two different sources, may serve
different certificates (because each instance has a different RSA key).

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Alex Rousskov
Sent: Thursday, July 12, 2018 11:27 PM
To: --Ahmad-- ; Squid Users 

Subject: Re: [squid-users] question about squid and https connection .

On 07/12/2018 01:17 PM, --Ahmad-- wrote:

> if i have pc# 1 and that pc open facebook .
> 
> then i have other pc # 2 and that other pc open facebook .
> 
> 
> now  as we know facebook is https .
> 
> so is the key/ cert that used on pc # 1 is same as cert in pc # 2 to decrypt 
> the fb encrypted traffic ?

Certificates themselves are not used (directly) to decrypt traffic
AFAIK, but yes, both PCs will see the same server certificate (ignoring
CDNs and other complications).



> now in the presence of squid .
> 
> if i used tcp connect method  , will it be different than above ?

If you are not bumping the connection, then both PCs will see the same
real Facebook certificate as if those PCs did not use a proxy.

If you are bumping the connection, then both PCs will see the same fake
certificate generated by Squid.



> say i used 200 proxies in same squid machine and i used to access FB from the 
> same pc same browser .
> 
> will facebook see my cert/key i used to decrypt its traffic ?

If you are asking whether Facebook will know anything about the fake
certificate generated by Squid for clients, then the answer is "no,
unless Facebook runs some special client code to deliver (Squid)
certificate back to Facebook".

In general, the origin server assumes that the client is talking to it
directly. Clients may pin or otherwise restrict certificates that they
trust, but after the connection is successfully established, the server
may assume that it is talking to the client directly. A paranoid server
may deliver special code to double check that assumption, but there are
other, more standard methods to prevent bumping such as certificate
pinning and certificate transparency services.



> is the key/cert of FB to decrypt the https content is same on all browsers on 
> all computers ?

If you are asking whether the generated certificates are going to be the
same for all clients, then the answer is "yes, provided all those 200
Squids use the same configuration (including the CA certificate) and
receive the same real certificate from Facebook". Squid's certificate
generation algorithm generates the same certificate given the same
configuration and the same origin server certificate.


HTH,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] question about squid and https connection .

2018-07-12 Thread Alex Rousskov
On 07/12/2018 01:17 PM, --Ahmad-- wrote:

> if i have pc# 1 and that pc open facebook .
> 
> then i have other pc # 2 and that other pc open facebook .
> 
> 
> now  as we know facebook is https .
> 
> so is the key/ cert that used on pc # 1 is same as cert in pc # 2 to decrypt 
> the fb encrypted traffic ?

Certificates themselves are not used (directly) to decrypt traffic
AFAIK, but yes, both PCs will see the same server certificate (ignoring
CDNs and other complications).



> now in the presence of squid .
> 
> if i used tcp connect method  , will it be different than above ?

If you are not bumping the connection, then both PCs will see the same
real Facebook certificate as if those PCs did not use a proxy.

If you are bumping the connection, then both PCs will see the same fake
certificate generated by Squid.
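[Editor's sketch] For readers unfamiliar with what "bumping" involves on the Squid side, here is a minimal illustrative squid.conf fragment. The directive names are real Squid options, but the CA file path, helper path, and cache sizes are placeholder values, and the helper name varies by Squid version (ssl_crtd in Squid 3.5, security_file_certgen in Squid 4):

```
# Listen with SSL-Bump enabled, generating fake certificates signed by
# the proxy's own CA (ca.pem must be trusted by the clients):
http_port 3128 ssl-bump \
    cert=/etc/squid/ca.pem \
    generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB

# Peek at the TLS ClientHello first, then bump everything:
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all

# Helper that generates and caches the fake certificates on disk:
sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB
```

With a configuration like this, every client of this proxy receives the same generated certificate for a given origin server, which is exactly the behavior discussed in this thread.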



> say i used 200 proxies in same squid machine and i used to access FB from the 
> same pc same browser .
> 
> will facebook see my cert/key i used to decrypt its traffic ?

If you are asking whether Facebook will know anything about the fake
certificate generated by Squid for clients, then the answer is "no,
unless Facebook runs some special client code to deliver (Squid)
certificate back to Facebook".

In general, the origin server assumes that the client is talking to it
directly. Clients may pin or otherwise restrict certificates that they
trust, but after the connection is successfully established, the server
may assume that it is talking to the client directly. A paranoid server
may deliver special code to double check that assumption, but there are
other, more standard methods to prevent bumping such as certificate
pinning and certificate transparency services.



> is the key/cert of FB to decrypt the https content is same on all browsers on 
> all computers ?

If you are asking whether the generated certificates are going to be the
same for all clients, then the answer is "yes, provided all those 200
Squids use the same configuration (including the CA certificate) and
receive the same real certificate from Facebook". Squid's certificate
generation algorithm generates the same certificate given the same
configuration and the same origin server certificate.
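[Editor's sketch] Alex's last point, that the generation algorithm is deterministic in its inputs, can be illustrated with a small sketch. This is not Squid's actual algorithm; `mimic_cert_id` and the byte strings below are invented for the illustration, showing only that identical configuration plus an identical origin certificate yields an identical result:

```python
import hashlib

def mimic_cert_id(ca_cert_pem: bytes, origin_cert_der: bytes) -> str:
    """Deterministically derive an identifier for a generated fake
    certificate. Illustrative only: the result depends solely on the
    signing CA material and the origin server certificate, so two
    proxies with the same inputs produce the same output."""
    digest = hashlib.sha256()
    digest.update(ca_cert_pem)
    digest.update(origin_cert_der)
    return digest.hexdigest()

ca_a = b"CA-material-A"      # placeholder for a real CA cert/key pair
ca_b = b"CA-material-B"
origin = b"origin-cert-DER"  # placeholder for the real server certificate

# Same CA + same origin certificate -> same generated certificate:
print(mimic_cert_id(ca_a, origin) == mimic_cert_id(ca_a, origin))  # True
# Different CA -> different generated certificate:
print(mimic_cert_id(ca_a, origin) == mimic_cert_id(ca_b, origin))  # False
```

The same reasoning explains Eliezer's point earlier in the thread: instances that do not share the same CA material can serve different certificates for the same site.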


HTH,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] question about squid and https connection .

2018-07-12 Thread --Ahmad--
My first question:

if I have PC #1
and that PC opens Facebook,


then I have another PC #2
and that other PC also opens Facebook.


Now, as we know, Facebook is HTTPS.

Is the key/cert used on PC #1 the same as the cert on PC #2 for decrypting
the encrypted FB traffic?


Now, in the presence of Squid:

if I use the TCP CONNECT method, will it be different from the above?

My question put another way:


say I use 200 proxies on the same Squid machine and access FB from the
same PC and the same browser.

Will Facebook see the cert/key I used to decrypt its traffic?

Is the key/cert FB uses to decrypt the HTTPS content the same on all
browsers on all computers?



Kind regards
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question about traffic calculate

2018-06-21 Thread Tiraen
And where can I read more about this (I mean the development of custom
ICAP/eCAP modules and their connection to the proxy)?

2018-06-13 18:35 GMT+03:00 Alex Rousskov :

> On 06/13/2018 07:09 AM, Matus UHLAR - fantomas wrote:
> > On 13.06.18 13:26, Tiraen wrote:
> >> ICAP will help provide data on incoming / outgoing traffic?
>
> > icap can get the data and work with it.
> > you don't have to manipulate, just do the accounting.
> > you just need ICAP module that will do it.
>
>
> Yes, it is possible to collect more-or-less accurate incoming request
> and incoming response stats using an ICAP service, but doing so would be
> very inefficient. Using eCAP would improve performance, but interpreting
> live access.log streams is probably the most efficient way of doing this.
>
> IIRC, both eCAP and ICAP interfaces do not see the exact incoming
> requests and incoming responses because Squid may strip hop-by-hop HTTP
> headers and decode chunked HTTP message bodies before forwarding the
> incoming message to the adaptation service. If you need exact headers
> and exact body sizes, then you need more than just the basic ICAP and
> eCAP interface. Again, access.log is probably an overall better choice
> for capturing that info.
>
> Both eCAP and ICAP interfaces do not see outgoing requests and outgoing
> responses because Squid only supports pre-cache vectoring points.
>
>
> HTH,
>
> Alex.
> P.S. In the above, "incoming" is "to Squid" and "outgoing" is "from Squid".
>
>
> >> 2018-06-13 12:54 GMT+03:00 Matus UHLAR - fantomas :
> >>
> >>> On 13.06.18 11:51, Tiraen wrote:
> >>>
>  either such a question, perhaps someone in the course
> 
>  in the SQUID is still not implemented radius accounting?
> 
> >>>
> >>> authentication - yes. But squid does not support accounting (afaik).
> >>>
> >>> Maybe there are any third-party modules working correctly?
> 
> >>>
> >>> maybe iCAP module.
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
With best regards,

Vyacheslav Yakushev,

Unix system administrator

https://t.me/kelewind
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question about traffic calculate

2018-06-13 Thread Alex Rousskov
On 06/13/2018 07:09 AM, Matus UHLAR - fantomas wrote:
> On 13.06.18 13:26, Tiraen wrote:
>> ICAP will help provide data on incoming / outgoing traffic?

> icap can get the data and work with it.
> you don't have to manipulate, just do the accounting.
> you just need ICAP module that will do it.


Yes, it is possible to collect more-or-less accurate incoming request
and incoming response stats using an ICAP service, but doing so would be
very inefficient. Using eCAP would improve performance, but interpreting
live access.log streams is probably the most efficient way of doing this.

IIRC, both eCAP and ICAP interfaces do not see the exact incoming
requests and incoming responses because Squid may strip hop-by-hop HTTP
headers and decode chunked HTTP message bodies before forwarding the
incoming message to the adaptation service. If you need exact headers
and exact body sizes, then you need more than just the basic ICAP and
eCAP interface. Again, access.log is probably an overall better choice
for capturing that info.

Both eCAP and ICAP interfaces do not see outgoing requests and outgoing
responses because Squid only supports pre-cache vectoring points.
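[Editor's sketch] Alex's recommendation to interpret live access.log streams for traffic accounting can be sketched as follows. This is a minimal illustration assuming Squid's default native log format (field 5 is the reply size, field 8 the username); the sample log lines are invented for the example:

```python
from collections import defaultdict

def per_user_bytes(lines):
    """Sum response bytes per authenticated user from Squid's native
    access.log format. Lines with '-' in the username field (no
    authentication) are skipped."""
    totals = defaultdict(int)
    for line in lines:
        fields = line.split()
        if len(fields) < 8:
            continue  # malformed or truncated line
        user = fields[7]
        if user != "-":
            totals[user] += int(fields[4])
    return dict(totals)

# Invented sample lines in the default native format:
log = [
    "1528276856.390    120 10.0.0.5 TCP_MISS/200 19018 GET http://icanhazip.com/ alice HIER_DIRECT/147.75.40.2 text/plain",
    "1528276857.001     80 10.0.0.6 TCP_HIT/200 512 GET http://example.com/ alice NONE/- text/html",
    "1528276858.200     50 10.0.0.7 TCP_MISS/200 2048 GET http://example.com/ - HIER_DIRECT/93.184.216.34 text/html",
]
print(per_user_bytes(log))  # → {'alice': 19530}
```

In a real deployment this would read a continuously appended file (or a logging daemon pipe) rather than a fixed list, and a custom logformat would change the field positions.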


HTH,

Alex.
P.S. In the above, "incoming" is "to Squid" and "outgoing" is "from Squid".


>> 2018-06-13 12:54 GMT+03:00 Matus UHLAR - fantomas :
>>
>>> On 13.06.18 11:51, Tiraen wrote:
>>>
 either such a question, perhaps someone in the course

 in the SQUID is still not implemented radius accounting?

>>>
>>> authentication - yes. But squid does not support accounting (afaik).
>>>
>>> Maybe there are any third-party modules working correctly?

>>>
>>> maybe iCAP module.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question about traffic calculate

2018-06-13 Thread Matus UHLAR - fantomas

On 13.06.18 13:26, Tiraen wrote:
> ICAP will help provide data on incoming / outgoing traffic?

icap can get the data and work with it.

you don't have to manipulate, just do the accounting.

you just need ICAP module that will do it.



2018-06-13 12:54 GMT+03:00 Matus UHLAR - fantomas :


On 13.06.18 11:51, Tiraen wrote:


either such a question, perhaps someone in the course

in the SQUID is still not implemented radius accounting?



authentication - yes. But squid does not support accounting (afaik).

Maybe there are any third-party modules working correctly?




maybe iCAP module.


--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
REALITY.SYS corrupted. Press any key to reboot Universe.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question about traffic calculate

2018-06-13 Thread Tiraen
Will ICAP help provide data on incoming / outgoing traffic?

2018-06-13 12:54 GMT+03:00 Matus UHLAR - fantomas :

> On 13.06.18 11:51, Tiraen wrote:
>
>> either such a question, perhaps someone in the course
>>
>> in the SQUID is still not implemented radius accounting?
>>
>
> authentication - yes. But squid does not support accounting (afaik).
>
> Maybe there are any third-party modules working correctly?
>>
>
> maybe iCAP module.
>
> --
> Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
> Warning: I wish NOT to receive e-mail advertising to this address.
> Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
> Micro$oft random number generator: 0, 0, 0, 4.33e+67, 0, 0, 0...
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
With best regards,

Vyacheslav Yakushev,

Unix system administrator

https://t.me/kelewind
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question about traffic calculate

2018-06-13 Thread Matus UHLAR - fantomas

On 13.06.18 11:51, Tiraen wrote:
> either such a question, perhaps someone in the course
> in the SQUID is still not implemented radius accounting?

authentication - yes. But squid does not support accounting (afaik).

> Maybe there are any third-party modules working correctly?

maybe iCAP module.

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Micro$oft random number generator: 0, 0, 0, 4.33e+67, 0, 0, 0...
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question about traffic calculate

2018-06-13 Thread Tiraen
One more question, in case someone here knows: is RADIUS accounting still
not implemented in Squid?

Are there perhaps any third-party modules that work correctly?

2018-06-08 22:55 GMT+03:00 Tiraen :

>
>
>
>
>
>
> > What is "online mode"?
> > Perhaps there are other solutions besides log parsing?
> > What information are you trying to get exactly?
>
> There is actual data on incoming/outgoing traffic per user (when
> authorization by login is on).
>
> In principle, all the data is there, except for the thing that I wrote
> above; I do not understand whether it is possible to get the actual
> figures through this method.
>
> > SNMP is built in to squid.
>
> Can I get per-user traffic data that way? If it's not difficult, could
> you give a link to the relevant piece of documentation?
>
>
> 2018-06-08 20:04 GMT+03:00 Alex Crow :
>
>>
>>
>> On 08/06/18 17:29, Amos Jeffries wrote:
>>
>>> On 09/06/18 02:56, Tiraen wrote:
>>>
 Small clarification

 If the normal behavior of the proxy server described above is correct,
 then maybe there are other methods of gathering information on traffic
 in online mode?

>>> What is "online mode" ?
>>>
>>
>> SNMP is built in to squid. You can use it in conjunction with net-snmp
>> proxy mode to gather far more granular performance/caching/response
>> time/per-ip stats than squidclient or logs if that's what you're after.
>>
>>
>> --
>> This message is intended only for the addressee and may contain
>> confidential information. Unless you are that person, you may not
>> disclose its contents or use it in any way and are requested to delete
>> the message along with any attachments and notify us immediately.
>> This email is not intended to, nor should it be taken to, constitute
>> advice.
>> The information provided is correct to our knowledge & belief and must not
>> be used as a substitute for obtaining tax, regulatory, investment, legal
>> or
>> any other appropriate advice.
>>
>> "Transact" is operated by Integrated Financial Arrangements Ltd.
>> 29 Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608
>> 5300.
>> (Registered office: as above; Registered in England and Wales under
>> number: 3727592). Authorised and regulated by the Financial Conduct
>> Authority (entered on the Financial Services Register; no. 190856).
>>
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>>
>
>
>
> --
> With best regards,
>
> Vyacheslav Yakushev,
>
> Unix system administrator
>
> https://t.me/kelewind
>



-- 
With best regards,

Vyacheslav Yakushev,

Unix system administrator

https://t.me/kelewind
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question about traffic calculate

2018-06-08 Thread Tiraen
> What is "online mode"?
> Perhaps there are other solutions besides log parsing?
> What information are you trying to get exactly?

There is actual data on incoming/outgoing traffic per user (when
authorization by login is on).

In principle, all the data is there, except for the thing that I wrote
above; I do not understand whether it is possible to get the actual
figures through this method.

> SNMP is built in to squid.

Can I get per-user traffic data that way? If it's not difficult, could you
give a link to the relevant piece of documentation?


2018-06-08 20:04 GMT+03:00 Alex Crow :

>
>
> On 08/06/18 17:29, Amos Jeffries wrote:
>
>> On 09/06/18 02:56, Tiraen wrote:
>>
>>> Small clarification
>>>
>>> If the normal behavior of the proxy server described above is correct,
>>> then maybe there are other methods of gathering information on traffic
>>> in online mode?
>>>
>> What is "online mode" ?
>>
>
> SNMP is built in to squid. You can use it in conjunction with net-snmp
> proxy mode to gather far more granular performance/caching/response
> time/per-ip stats than squidclient or logs if that's what you're after.
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
With best regards,

Vyacheslav Yakushev,

Unix system administrator

https://t.me/kelewind
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question about traffic calculate

2018-06-08 Thread Alex Crow



On 08/06/18 17:29, Amos Jeffries wrote:
> On 09/06/18 02:56, Tiraen wrote:
>> Small clarification
>>
>> If the normal behavior of the proxy server described above is correct,
>> then maybe there are other methods of gathering information on traffic
>> in online mode?
>
> What is "online mode" ?


SNMP is built in to squid. You can use it in conjunction with net-snmp 
proxy mode to gather far more granular performance/caching/response 
time/per-ip stats than squidclient or logs if that's what you're after.
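[Editor's sketch] Alex's suggestion can be sketched concretely as follows. The directive names and Squid's registered enterprise OID (3495) are real, but the community string, port, and host are placeholder values for illustration:

```
# squid.conf: enable Squid's built-in SNMP agent
#   acl snmppublic snmp_community public
#   snmp_port 3401
#   snmp_access allow snmppublic localhost

# Then walk Squid's MIB with the net-snmp tools:
snmpwalk -v2c -c public -Cc localhost:3401 .1.3.6.1.4.1.3495.1
```

The walk returns cache performance, protocol, and per-client counters that a monitoring system (or net-snmp in proxy mode, as Alex describes) can poll at intervals.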



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question about traffic calculate

2018-06-08 Thread Amos Jeffries
On 09/06/18 02:56, Tiraen wrote:
> Small clarification
> 
> If the normal behavior of the proxy server described above is correct,
> then maybe there are other methods of gathering information on traffic
> in online mode?

What is "online mode" ?

> 
> Perhaps there are other solutions besides log parsing?

What information are you trying to get exactly?


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question about traffic calculate

2018-06-08 Thread Tiraen
Small clarification:

If the proxy server behavior described above is indeed normal, are there
perhaps other methods of gathering information on traffic in online mode?

Perhaps there are other solutions besides log parsing?

2018-06-06 12:21 GMT+03:00 Tiraen :

> >If you are using SSL-Bump features, please consider Squid-4 instead
>
> It is not used at all. Squid does not work with SSL; it is a frontend only.
>
>
> Concerning incorrectly specified options at build
>
> Here on this squid happens the same thing:
>
> squid3 -v
> Squid Cache: Version 3.4.8
> linux
> configure options:  '--build=x86_64-linux-gnu' '--prefix=/usr'
> '--includedir=${prefix}/include' '--mandir=${prefix}/share/man'
> '--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var'
> '--libexecdir=${prefix}/lib/squid3' '--srcdir=.'
> '--disable-maintainer-mode' '--disable-dependency-tracking'
> '--disable-silent-rules' '--datadir=/usr/share/squid3'
> '--sysconfdir=/etc/squid3' '--mandir=/usr/share/man' '--enable-inline'
> '--disable-arch-native' '--enable-async-io=8'
> '--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap'
> '--enable-delay-pools' '--enable-cache-digests' '--enable-icap-client'
> '--enable-follow-x-forwarded-for'
> '--enable-auth-basic=DB,fake,getpwnam,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB'
> '--enable-auth-digest=file,LDAP' '--enable-auth-negotiate=kerberos,wrapper'
> '--enable-auth-ntlm=fake,smb_lm'
> '--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,unix_group,wbinfo_group'
> '--enable-url-rewrite-helpers=fake' '--enable-eui' '--enable-esi'
> '--enable-icmp' '--enable-zph-qos' '--enable-ecap' '--disable-translation'
> '--with-swapdir=/var/spool/squid3' '--with-logdir=/var/log/squid3'
> '--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536'
> '--with-large-files' '--with-default-user=proxy' '--enable-build-info=
> linux' '--enable-linux-netfilter' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g
> -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wall'
> 'LDFLAGS=-fPIE -pie -Wl,-z,relro -Wl,-z,now' 'CPPFLAGS=-D_FORTIFY_SOURCE=2'
> 'CXXFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat
> -Werror=format-security'
>
>
> no out data
>
> Connection: 0x7f18ee951c58
>  FD 15, read 10070, wrote 19018
>  FD desc: Reading next request
>  in: buf 0x7f18ee952070, offset 0, size 4096
>  remote: 127.0.0.1:52827
>  local: 127.0.0.1:8080
>  nrequests: 38
> uri http://icanhazip.com/
> logType TCP_MISS
> out.offset 0, out.size 0
> req_sz 265
> entry 0x7f18ee3bc740/F9929050DEE6E67D2DF51EDCBC0CB80F
> start 1528276856.390709 (2.640371 seconds ago)
> username
> delay_pool 0
>
> Connection: 0x7f18ee874168
>  FD 13, read 10070, wrote 19018
>  FD desc: Reading next request
>  in: buf 0x7f18ee86bb60, offset 0, size 4096
>  remote: 127.0.0.1:52825
>  local: 127.0.0.1:8080
>  nrequests: 38
> uri http://icanhazip.com/
> logType TCP_MISS
> out.offset 0, out.size 0
> req_sz 265
> entry 0x7f18ee87fde0/560E3AC236A180ECB815B5B41527D2BA
> start 1528276856.368609 (2.662471 seconds ago)
> username
>
> delay_pool 0
>
>
>
> 2018-06-06 7:51 GMT+03:00 Amos Jeffries :
>
>> On 06/06/18 07:12, Tiraen wrote:
>> > /The second transaction has not yet reached that state despite 81017sec
>> > having passed.
>> > /
>> > Thank you for clarification.
>> >
>> > About squid version
>> >
>> > /squid -v/
>> > /Squid Cache: Version 3.5.27/
>> ...
>>
>> If you are using SSL-Bump features, please consider Squid-4 instead. The
>> strangely long timeouts on transactions are likely to be a side effect of
>> an old behaviour in Squid-3 seen with transactions that were bumped.
>>
>>
>> > '--enable-ssl'
>> > '--with-open-ssl=/etc/ssl/openssl.cnf'
>>
>> Two problems with the above:
>>
>>  1) the option name is "--with-openssl".
>>
>>  2) that option takes the directory PATH where the OpenSSL development
>> files were installed. If using the OS provided library package *omit*
>> the =PATH portion.
>>
>>
>> Amos
>>
>
>
>
> --
> With best regards,
>
> Vyacheslav Yakushev,
>
> Unix system administrator
>
> https://t.me/kelewind
>



-- 
With best regards,

Vyacheslav Yakushev,

Unix system administrator

https://t.me/kelewind
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question about traffic calculate

2018-06-06 Thread Tiraen
>If you are using SSL-Bump features, please consider Squid-4 instead

It is not used at all. Squid does not work with SSL; only the frontend does.


Concerning the incorrectly specified options at build time

The same thing happens on this squid:

* squid3 -v*
*Squid Cache: Version 3.4.8*
* linux*
*configure options:  '--build=x86_64-linux-gnu' '--prefix=/usr'
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man'
'--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var'
'--libexecdir=${prefix}/lib/squid3' '--srcdir=.'
'--disable-maintainer-mode' '--disable-dependency-tracking'
'--disable-silent-rules' '--datadir=/usr/share/squid3'
'--sysconfdir=/etc/squid3' '--mandir=/usr/share/man' '--enable-inline'
'--disable-arch-native' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap'
'--enable-delay-pools' '--enable-cache-digests' '--enable-icap-client'
'--enable-follow-x-forwarded-for'
'--enable-auth-basic=DB,fake,getpwnam,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB'
'--enable-auth-digest=file,LDAP' '--enable-auth-negotiate=kerberos,wrapper'
'--enable-auth-ntlm=fake,smb_lm'
'--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,unix_group,wbinfo_group'
'--enable-url-rewrite-helpers=fake' '--enable-eui' '--enable-esi'
'--enable-icmp' '--enable-zph-qos' '--enable-ecap' '--disable-translation'
'--with-swapdir=/var/spool/squid3' '--with-logdir=/var/log/squid3'
'--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536'
'--with-large-files' '--with-default-user=proxy' '--enable-build-info=
linux' '--enable-linux-netfilter' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g
-O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wall'
'LDFLAGS=-fPIE -pie -Wl,-z,relro -Wl,-z,now' 'CPPFLAGS=-D_FORTIFY_SOURCE=2'
'CXXFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat
-Werror=format-security'*


*no out data*


*Connection: 0x7f18ee951c58*
* FD 15, read 10070, wrote 19018*
* FD desc: Reading next request*
* in: buf 0x7f18ee952070, offset 0, size 4096*
* remote: 127.0.0.1:52827 *
* local: 127.0.0.1:8080 *
* nrequests: 38*
*uri http://icanhazip.com/ *
*logType TCP_MISS*
*out.offset 0, out.size 0*
*req_sz 265*
*entry 0x7f18ee3bc740/F9929050DEE6E67D2DF51EDCBC0CB80F*
*start 1528276856.390709 (2.640371 seconds ago)*
*username*
*delay_pool 0*

*Connection: 0x7f18ee874168*
* FD 13, read 10070, wrote 19018*
* FD desc: Reading next request*
* in: buf 0x7f18ee86bb60, offset 0, size 4096*
* remote: 127.0.0.1:52825 *
* local: 127.0.0.1:8080 *
* nrequests: 38*
*uri http://icanhazip.com/ *
*logType TCP_MISS*
*out.offset 0, out.size 0*
*req_sz 265*
*entry 0x7f18ee87fde0/560E3AC236A180ECB815B5B41527D2BA*
*start 1528276856.368609 (2.662471 seconds ago)*
*username*


*delay_pool 0*



2018-06-06 7:51 GMT+03:00 Amos Jeffries :

> On 06/06/18 07:12, Tiraen wrote:
> > /The second transaction has not yet reached that state despite 81017sec
> > having passed.
> > /
> > Thank you for clarification.
> >
> > About squid version
> >
> > /squid -v/
> > /Squid Cache: Version 3.5.27/
> ...
>
> If you are using SSL-Bump features, please consider Squid-4 instead. The
> strangely long timeouts on transactions are likely to be a side effect of
> an old behaviour in Squid-3 seen with transactions that were bumped.
>
>
> > '--enable-ssl'
> > '--with-open-ssl=/etc/ssl/openssl.cnf'
>
> Two problems with the above:
>
>  1) the option name is "--with-openssl".
>
>  2) that option takes the directory PATH where the OpenSSL development
> files were installed. If using the OS provided library package *omit*
> the =PATH portion.
>
>
> Amos
>



-- 
With best regards,

Vyacheslav Yakushev,

Unix system administrator

https://t.me/kelewind


Re: [squid-users] Question about traffic calculate

2018-06-05 Thread Amos Jeffries
On 06/06/18 07:12, Tiraen wrote:
> /The second transaction has not yet reached that state despite 81017sec
> having passed.
> /
> Thank you for clarification.
> 
> About squid version
> 
> /squid -v/
> /Squid Cache: Version 3.5.27/
...

If you are using SSL-Bump features, please consider Squid-4 instead. The
strangely long timeouts on transactions are likely to be a side effect of
an old behaviour in Squid-3 seen with transactions that were bumped.


> '--enable-ssl'
> '--with-open-ssl=/etc/ssl/openssl.cnf'

Two problems with the above:

 1) the option name is "--with-openssl".

 2) that option takes the directory PATH where the OpenSSL development
files were installed. If using the OS provided library package *omit*
the =PATH portion.
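Put concretely, the corrected flags would look like this (a sketch; the
surrounding options from the posted configure line are elided):

```
# As posted (broken): misspelled option name, and a file path given
# where only a directory prefix, or nothing, is expected:
#   '--enable-ssl' '--with-open-ssl=/etc/ssl/openssl.cnf'

# Corrected, when building against the OS-provided OpenSSL packages:
./configure ... '--enable-ssl' '--with-openssl'
```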


Amos


Re: [squid-users] Question about traffic calculate

2018-06-05 Thread Tiraen
*The second transaction has not yet reached that state despite
81017sec having passed. *
Thank you for clarification.

About squid version

*squid -v*
*Squid Cache: Version 3.5.27*
*Service Name: squid*
*configure options:  '--build=x86_64-linux-gnu' '--prefix=/usr'
'--includedir=/include' '--mandir=/share/man' '--infodir=/share/info'
'--sysconfdir=/etc' '--localstatedir=/var' '--libexecdir=/lib/squid3'
'--srcdir=.' '--disable-maintainer-mode' '--disable-dependency-tracking'
'--disable-silent-rules' '--datadir=/usr/share/squid3'
'--sysconfdir=/etc/squid3' '--mandir=/usr/share/man' '--enable-inline'
'--disable-arch-native' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap'
'--enable-delay-pools' '--enable-cache-digests' '--enable-icap-client'
'--enable-follow-x-forwarded-for'
'--enable-auth-basic=DB,fake,getpwnam,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB'
'--enable-basic-auth-helpers=squid_radius_auth'
'--enable-auth-digest=file,LDAP' '--enable-auth-negotiate=kerberos,wrapper'
'--enable-auth-ntlm=fake,smb_lm'
'--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,unix_group,wbinfo_group'
'--enable-url-rewrite-helpers=fake' '--enable-eui' '--enable-esi'
'--enable-http-violations' '--enable-icmp' '--enable-zph-qos'
'--disable-translation' '--with-swapdir=/var/spool/squid3'
'--with-logdir=/var/log/squid3' '--with-pidfile=/var/run/squid3.pid'
'--with-filedescriptors=65536' '--with-large-files'
'--with-default-user=proxy' '--enable-ssl'
'--with-open-ssl=/etc/ssl/openssl.cnf' '--enable-linux-netfilter'
'CFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat
-Werror=format-security -Wall' 'LDFLAGS=-fPIE -pie -Wl,-z,relro -Wl,-z,now'
'CPPFLAGS=-D_FORTIFY_SOURCE=2' 'CXXFLAGS=-g -O2 -fPIE
-fstack-protector-strong -Wformat -Werror=format-security'
'build_alias=x86_64-linux-gnu'*

Regarding the configuration where there is no out data:

Squid itself listens on localhost without HTTPS in SMP mode (I checked
without SMP, with the same result).

*netstat -anp | grep squid*
*tcp0  0 127.0.0.1:8080 
0.0.0.0:*   LISTEN  835/(squid-coord-3)*
*tcp0  0 127.0.0.1:8081 
0.0.0.0:*   LISTEN  835/(squid-coord-3)*

In front of Squid stands nghttpx as an SSL/SPDY frontend, with Squid as the backend:


*frontend=0.0.0.0,3000*
*backend=127.0.0.1,8080*
*backend=127.0.0.1,8081*

In this configuration, there is no out data.



2018-06-05 6:25 GMT+03:00 Amos Jeffries :

> On 05/06/18 11:34, Tiraen wrote:
> > Good day. I apologize in advance if this has already been discussed, if
> > so - just give a link to the discussion
> >
> > The proxy server has an interface for viewing current active sessions
> >
> > http://{}:{}/squid-internal-mgr/active_requests
> >
>
> Please be aware these are *not* "sessions". These are transactions,
> which  have one request, one response, and maybe some informational
> messages.
>
> A "session", as far as it relates to HTTP, is an application-level thing
> which includes _multiple_ transactions, and possibly even multiple TCP
> connections at the client end.
>
>
> > or
> >
> > cache_object://%s/active_requests
> >
> > There there is some set of parameters which allow to get the data on
> traffic
> >
> > If the connection to the proxy goes directly and by http we see like
> this:
> >
> > /Connection: 0x8050e0518/
> > /FD 29, read 4247, wrote 13479/
> > /FD desc: Reading next request/
> > /in: buf 0x8045a6fe0, used 0, free 39/
> > /remote: :50340/
> > /local: :8080/
> > /nrequests: 1/
> > /uri ХХХ:443/
> > /logType TCP_TUNNEL/
> > /out.offset 0, out.size 13440/
> > /req_sz 235/
> > /entry 0x0/N/A/
> > /start 1527608373.902584 (73.252258 seconds ago)/
> > /username -/
> > /delay_pool 0/
> >
> >
> > We have both traffic stat
> >
> > /out.offset 0, out.size 13440/
> > /req_sz 235/
> >
>
> The latest transactions request was 235 bytes, its reply was 13440 bytes
> (so far).
>
>
> > But if there is a frontend in front of the SQUID (nghttpx for example
> > and https)
> >
> > we have this
> >
> > /Connection: 0x7f66a317ecf8/
> > /FD 222, read 9192, wrote 526/
> > /FD desc: Reading next request/
> > /in: buf 0x7f66a294fb90, used 0, free 39/
> > /remote: 127.0.0.1:2314 /
> > /local: 127.0.0.1:8081 /
> > /nrequests: 2/
> > /uri nererut.com:443 /
> > /logType TAG_NONE/
> > /out.offset 0, out.size 0/
> > /req_sz 334/
> > /entry (nil)/N/A/
> > /start 1527526715.189831 (81017.831772 seconds ago)/
> > /username 8355fcec-94fd-496c-94d1-a195a5ca7148/
> > /delay_pool 0
> > /
> > without out traffic
> >
> > /out.offset 0, out.size 0/
> > /req_sz 334/
>
> This transaction request was 334 bytes, its reply was 0 bytes (so far).
>
>
> >
> > I certainly did not test why it happens - due to https or proxy, but is
> > it 

[squid-users] Question about traffic calculate

2018-06-04 Thread Tiraen
Good day. I apologize in advance if this has already been discussed; if so,
just give a link to the discussion.

The proxy server has an interface for viewing current active sessions

http://{}:{}/squid-internal-mgr/active_requests

or

cache_object://%s/active_requests

There is a set of parameters there which allows one to get data on the traffic.

If the connection to the proxy goes directly over HTTP, we see something like this:

*Connection: 0x8050e0518*
*FD 29, read 4247, wrote 13479*
*FD desc: Reading next request*
*in: buf 0x8045a6fe0, used 0, free 39*
*remote: :50340*
*local: :8080*
*nrequests: 1*
*uri ХХХ:443*
*logType TCP_TUNNEL*
*out.offset 0, out.size 13440*
*req_sz 235*
*entry 0x0/N/A*
*start 1527608373.902584 (73.252258 seconds ago)*
*username -*
*delay_pool 0*


We have both traffic stat

*out.offset 0, out.size 13440*
*req_sz 235*

But if there is a frontend in front of Squid (for example nghttpx with
HTTPS),

we have this

*Connection: 0x7f66a317ecf8*
*FD 222, read 9192, wrote 526*
*FD desc: Reading next request*
*in: buf 0x7f66a294fb90, used 0, free 39*
*remote: 127.0.0.1:2314 *
*local: 127.0.0.1:8081 *
*nrequests: 2*
*uri nererut.com:443 *
*logType TAG_NONE*
*out.offset 0, out.size 0*
*req_sz 334*
*entry (nil)/N/A*
*start 1527526715.189831 (81017.831772 seconds ago)*
*username 8355fcec-94fd-496c-94d1-a195a5ca7148*

*delay_pool 0*
without out traffic

*out.offset 0, out.size 0*
*req_sz 334*

I have not tested why this happens - whether due to HTTPS or the proxy - but
is it possible to clarify this case?

Thank you in advance for your help
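As an aside, the per-transaction byte counters in these reports can be
totalled mechanically. A sketch in shell (the sum_traffic name and the
inline sample are illustrative; live input would come from
"squidclient mgr:active_requests"):

```shell
#!/bin/sh
# Sum request bytes (req_sz) and reply-so-far bytes (out.size) across
# all transactions in an active_requests report.
sum_traffic() {
    awk '
        /^req_sz/      { in_bytes += $2 }                      # request size
        /^out\.offset/ { gsub(",", "", $4); out_bytes += $4 }  # out.size value
        END { printf "in=%d out=%d\n", in_bytes, out_bytes }
    '
}

# Sample lines taken from the two reports quoted above:
sum_traffic <<'EOF'
out.offset 0, out.size 13440
req_sz 235
out.offset 0, out.size 0
req_sz 334
EOF
# prints: in=569 out=13440
```

As the replies above note, out.size only counts reply bytes written so
far, so a long-lived transaction can legitimately show out.size 0.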


-- 
With best regards,

Vyacheslav Yakushev,

Unix system administrator

https://t.me/kelewind


Re: [squid-users] Question about shutdown_lifetime behavior.

2018-05-03 Thread Cody Herzog
Thanks, Alex.

I do have some concerns that 'reconfigure' may cause disruptions in certain 
situations, but I haven't seen it yet.

Perhaps it is most likely to cause problems when connections are first being 
established, or when they are changing states.

It seems to do a good job of not disrupting established WebSocket connections.

One other idea I had was to use 'iptables' to prevent new TCP connections to 
port 443 as the first phase of shutdown. I think that would probably work, and 
would not have the potential bad behavior of 'reconfigure'.

For my 'reconfigure', I'm simply commenting out the https_port line which 
causes Squid to listen on 443.

Thanks again.


Re: [squid-users] Question about shutdown_lifetime behavior.

2018-05-02 Thread Alex Rousskov
On 05/02/2018 03:07 PM, Cody Herzog wrote:

> So, here is my shutdown sequence:
> 
> 1.) Modify config file to prevent new client connections and 'reconfigure'.
> 2.) Poll active requests until there are no connections to critical services.
> 3.) Issue the shutdown command with a small value for shutdown_lifetime.
> 
> Does that sound reasonable?

It sounds like a reasonable (and clever!) workaround to me.

Ideally, a single "squid -k shutdown" should result in everything you
need done by Squid automatically, with a new ACL-driven directive to
identify "connections to critical services".

Please keep in mind that Squid reconfiguration is still a disruptive
action (unfortunately). If it does not usually affect your critical
services, great, but I am fairly sure it is possible to come up with
specific cases where reconfiguration kills in-progress transactions.

Alex.


Re: [squid-users] Question about shutdown_lifetime behavior.

2018-05-02 Thread Cody Herzog
Thanks again, Amos.

>Then Squid's behaviour already matches your requirements. The "or when timeout 
>occurs" is shutdown_lifetime and you do not have to do anything.

I'm confused by this. After issuing the first shutdown command, my desired 
behavior is for Squid to shut itself down fully as soon as it detects that 
there is no more client activity. My understanding from your first response is 
that Squid will always wait the full timeout, regardless of whether activity 
seems to have stopped.

Ultimately, I ended up having to implement something custom anyway.

My clients have multiple persistent WebSocket connections to different 
services. Some of those services are critical, and some are not. Shutdown must 
be postponed until there are no more active connections to critical services. 
Connections to the non-critical services can last a very long time, and I don't 
want to postpone shutdown because of those connections.
 
To get my desired behavior, I ended up polling 
'cache_object://cache.host.name/active_requests' to check if any critical 
requests are active, and if not, then I issue the shutdown command.

The tricky thing is that 'cache_object' cannot be queried after the first 
shutdown command has been issued, because Squid does not accept any new 
connections. Therefore, I had to find a way to prevent new client connections, 
while still allowing 'cache_object' to keep working. I was able to accomplish 
this by modifying squid.conf and issuing a 'reconfigure'.  Thankfully, the 
'reconfigure' which prevents new client connections does not seem to break any 
of the active WebSocket connections.

So, here is my shutdown sequence:

1.) Modify config file to prevent new client connections and 'reconfigure'.
2.) Poll active requests until there are no connections to critical services.
3.) Issue the shutdown command with a small value for shutdown_lifetime.

Does that sound reasonable?
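Sketched as a script (all names are illustrative; critical_connections()
stands in for whatever cache-manager query identifies connections to the
critical services):

```shell
#!/bin/sh
# Phase-wise shutdown as described above. All names are illustrative.
critical_connections() {
    # Count active transactions touching the critical services; the
    # pattern is a placeholder for the real service hosts.
    squidclient mgr:active_requests | grep -c 'critical-service'
}

drain_and_stop() {
    # 1) squid.conf has already been edited to remove the client-facing
    #    https_port; pick up the change without a restart.
    squid -k reconfigure

    # 2) Poll until no critical transactions remain.
    while [ "$(critical_connections)" -gt 0 ]; do
        sleep 5
    done

    # 3) Final shutdown; shutdown_lifetime in squid.conf caps the wait.
    squid -k shutdown
}
```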


Re: [squid-users] Question about shutdown_lifetime behavior.

2018-05-01 Thread Amos Jeffries
On 02/05/18 05:17, Cody Herzog wrote:
> Thanks very much for the quick response, Amos.
> 
>  
> 
> For my use case, I would like Squid to exit when all client connections
> have been closed or when the timeout occurs, whichever comes first.
> 

Then Squid's behaviour already matches your requirements. The "or when
timeout occurs" is shutdown_lifetime and you do not have to do anything.

> 
> My instances of Squid may be handling several persistent WebSocket
> connections, and I don't want to disrupt those. I will occasionally need
> to perform maintenance, so I want a safe way to stop Squid without
> disrupting user activity.
> 

Nod. It will also likely be handling CONNECT tunnels for other things.
Be aware that these connections can last indefinitely - some have been
known to last on a timescale of several weeks.

If your maintenance is with squid.conf or things loaded by it use "squid
-k reconfigure" instead of a restart cycle. Squid can reload its config
fine with just a pause for any active clients - your netstat approach
could be useful to pick a time with minimal connections to reload the
config.

> 
> I am using a fairly simple Squid configuration, with no caching, so I
> suspect that I can simply monitor the number of active Squid TCP
> connections using 'netstat', and then execute the second shutdown
> command when I detect that all those connections are closed.
> 

Since WebSockets is part of your situation netstat will almost certainly
not work as well as you suspect. These connections can be very
surprising in their lifetimes.


> 
> I've been using the following command to count the number of active
> Squid TCP connections of port 443, which is the only port I use:
> 
>  
> 
> netstat -nat | grep ".*:443.*:" | grep ESTABLISHED | wc -l
> 
>  
> 
> That seems to give me what I want.
> 

Hmm, you might be able to get more useful info from the Squid
filedescriptors report.
  squidclient mgr:filedescriptors


>  
> 
> Is it possible that bad things could happen by stopping Squid when I see
> that all the TCP connections have closed?

There should not be any bad things if you just use -k shutdown (twice). Squid
will take the time it needs for a clean (but immediate) shutdown, so long
as you only do it twice, not more, and do not use the "kill" command.

Amos


Re: [squid-users] Question about shutdown_lifetime behavior.

2018-05-01 Thread Cody Herzog
Thanks very much for the quick response, Amos.

For my use case, I would like Squid to exit when all client connections have 
been closed or when the timeout occurs, whichever comes first.

My instances of Squid may be handling several persistent WebSocket connections, 
and I don't want to disrupt those. I will occasionally need to perform 
maintenance, so I want a safe way to stop Squid without disrupting user 
activity.

I am using a fairly simple Squid configuration, with no caching, so I suspect 
that I can simply monitor the number of active Squid TCP connections using 
'netstat', and then execute the second shutdown command when I detect that all 
those connections are closed.

I've been using the following command to count the number of active Squid TCP 
connections of port 443, which is the only port I use:

netstat -nat | grep ".*:443.*:" | grep ESTABLISHED | wc -l

That seems to give me what I want.
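The same count can be wrapped for scripting; a sketch (the sample lines
stand in for live "netstat -nat" output):

```shell
#!/bin/sh
# Count ESTABLISHED connections involving port 443, as in the
# one-liner above, but as a reusable function fed via stdin.
count_443() {
    grep '.*:443.*:' | grep -c ESTABLISHED
}

count_443 <<'EOF'
tcp 0 0 10.0.0.1:443  203.0.113.9:52100  ESTABLISHED
tcp 0 0 10.0.0.1:443  203.0.113.9:52101  TIME_WAIT
tcp 0 0 10.0.0.1:8080 203.0.113.9:52102  ESTABLISHED
EOF
# prints: 1
```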

Is it possible that bad things could happen by stopping Squid when I see that 
all the TCP connections have closed?

Thanks.

-Cody


Re: [squid-users] Question about shutdown_lifetime behavior.

2018-05-01 Thread Amos Jeffries
On 01/05/18 18:04, Cody Herzog wrote:
> Hello.
> 
> I have a question about shutdown_lifetime:
> 
> http://www.squid-cache.org/Doc/config/shutdown_lifetime/
> 
> Does Squid always wait the full amount of time before shutting down, even 
> after all active connections have closed?

For now yes. Long-term the plan is to have it exit if no clients are
connected, but we have not yet finalized how that is to work internally.


> 
> I'm running Squid 3.5.27 on Ubuntu 16.04.
> 
> If Squid does not have an internal mechanism to complete the shutdown when 
> all active connections have closed, then I may have to create my own based on 
> polling with 'netstat'.
> 

You can try. Squid maintains a number of helpers, though, which may still
be doing things with sockets after clients are gone. Besides dev time,
detecting when all that is completed is the largest blocker at present
to the long-term project.

If you really need a fast shutdown feel free to set the lifetime config
value to a shorter time, or simply call "squid -k shutdown" (or
equivalent init script command).

The "-k shutdown" behaviour is: on first use, to begin the shutdown
period; on second call, to trigger the end of shutdown as if
shutdown_lifetime was reached.


Amos


[squid-users] Question about shutdown_lifetime behavior.

2018-05-01 Thread Cody Herzog
Hello.

I have a question about shutdown_lifetime:

http://www.squid-cache.org/Doc/config/shutdown_lifetime/

Does Squid always wait the full amount of time before shutting down, even after 
all active connections have closed?

Based on my testing, it seems like it does.

However, I found some documentation which indicates that Squid should close as 
soon as all active connections have closed:

https://www.safaribooksonline.com/library/view/squid-the-definitive/0596001622/re91.html

"Squid finally exits when all client connections have been closed or when this 
timeout occurs."

Is that the expected behavior, or will Squid always stay open for the full 
timeout, as I'm observing in my testing?

I searched the FAQ and the Internet at large, but couldn't find a definitive 
answer.

I'm running Squid 3.5.27 on Ubuntu 16.04.

If Squid does not have an internal mechanism to complete the shutdown when all 
active connections have closed, then I may have to create my own based on 
polling with 'netstat'.

Thanks.


Re: [squid-users] Question with ACL and UrlRewrite ?

2018-01-16 Thread Yuri
Maybe because FB has been on HTTPS for some years now?


On 17.01.2018 03:17, Aismel wrote:
>
> Hi,
>
>  
>
> I need to allow all my users to navigate the Internet, but access to the X
> pages only from 14:00 to 20:00, so before that no one can access those
> X pages.
>
>  
>
> I need to redirect a user who asks for www.facebook.com
> to m.facebook.com
>
>  
>
> I found this script but do not know why it doesn’t work
>
>  
>
> #!/usr/bin/perl
>
> $mirror = "m.facebook.com";
>
> $| = 1;
>
> while (<>) {
>     @line = split;
>     $_ = $line[0];
>     if (m/^http:\/\/((?:[a-z0-9]+\.)?\.facebook\.com)\/(.*)/ &&
>         $1 ne $mirror) {
>         print "http://" . $mirror . "/" . $2 . "\n";
>     } else {
>         print $_ . "\n";
>     }
> }
>
>  
>
> PS: I set chmod +x on the script.
>
>  
>
> Thanks any help
>
>  
>
> Best regards
>
>
>

-- 
*
* C++20 : Bug to the future *
*



[squid-users] Question with ACL and UrlRewrite ?

2018-01-16 Thread Aismel
Hi,

 

I need to allow all my users to navigate the Internet, but access to the X
pages only from 14:00 to 20:00, so before that no one can access those X pages.

 

I need to redirect a user who asks for www.facebook.com
to m.facebook.com

 

I found this script but do not know why it doesn't work

 

#!/usr/bin/perl

$mirror = "m.facebook.com";

$| = 1;

while (<>) {
    @line = split;
    $_ = $line[0];
    if (m/^http:\/\/((?:[a-z0-9]+\.)?\.facebook\.com)\/(.*)/ &&
        $1 ne $mirror) {
        print "http://" . $mirror . "/" . $2 . "\n";
    } else {
        print $_ . "\n";
    }
}
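For comparison, the intended redirect can be sketched as a minimal shell
helper. The names and the "302:" answer format are assumptions about
Squid's classic redirector interface; and note, per the reply above, that
with facebook served over HTTPS a rewriter only ever sees CONNECT
requests, so this applies to plain-http traffic only:

```shell
#!/bin/sh
# Sketch of a url_rewrite_program helper for the redirect described
# above. Assumes url_rewrite_concurrency is off, so each request line
# starts with the URL; the "302:" prefix asks the classic redirector
# interface to send an HTTP redirect to the client (an assumption).
rewrite_line() {
    url=$1
    case $url in
        http://www.facebook.com/*)
            echo "302:http://m.facebook.com/${url#http://www.facebook.com/}"
            ;;
        *)
            echo ""   # blank answer: leave the URL unchanged
            ;;
    esac
}

# Helper main loop: one answer line per request line on stdin.
if [ "${1-}" = "helper" ]; then
    while read -r url _rest; do
        rewrite_line "$url"
    done
fi
```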

 

PS: I set chmod +x on the script.

 

Thanks for any help

 

Best regards



Re: [squid-users] question in :src/cf.data.pre "

2017-12-13 Thread Matus UHLAR - fantomas

On 13.12.17 12:13, --Ahmad-- wrote:

ok, great point Amos.
Is there a way to change the default squid config file for squid?

I mean, I want to change the default location from /etc/squid/squid.conf to
something else.

Can I change that in the code before compiling?


https://wiki.squid-cache.org/SquidFaq/CompilingSquid

you can define --sysconfdir at compile time.
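Concretely, either of these moves squid.conf (the /opt/proxy path is just
an example):

```
# At build time: bake a different default config directory into the binary.
./configure --sysconfdir=/opt/proxy/etc ...

# Or at run time, with any binary: point squid at an alternative file.
squid -f /opt/proxy/etc/squid.conf
```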

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Emacs is a complicated operating system without good text editor.


Re: [squid-users] question in :src/cf.data.pre "

2017-12-13 Thread --Ahmad--
ok, great point Amos.
Is there a way to change the default squid config file for squid?

I mean, I want to change the default location from /etc/squid/squid.conf to
something else.

Can I change that in the code before compiling?


cheers 


> On Dec 12, 2017, at 6:42 PM, Amos Jeffries  wrote:
> 
> On 13/12/17 04:12, --Ahmad-- wrote:
>> @amos about your question .
>> i think “include option “ is ok , but i also need it to be hidden and not 
>> added to squid.conf
>> say like :
>> include /var/test.cc
>> would like this is loaded automatically and don’t need to add it in 
>> squid.conf ???
>> i mean each time i run squid it will load like include other file .
>> i tried to add into pre file below config :
>> NAME: include
>> TYPE: string
>> LOC: Config.include
>> DEFAULT: /var/test.cc 
>> DOC_START
>> Hello
>> DOC_END
> 
> Not like this. "include" is a hard-coded behaviour, not a normal directive. 
> It needs to be used like the Debian patch does, with "squid.conf" file being 
> the pre-determined settings and users in control of the other stuff pulled in 
> by 'include' from somewhere else.
> 
> 
> OR, you just use the -f command line option to point at your custom config 
> file when you run Squid "from terminal".
> 
> 
> I wasn't going to bother asking for a fourth time, but since Anthony did "who 
> are you trying to hide this from?" is still unanswered. Anyone who might see 
> these settings in the config file can also find them by other means (ie 
> looking at the machines open ports). So the lengths you are going to hide 
> them is very weird.
> 
> 
> Amos



Re: [squid-users] question in :src/cf.data.pre "

2017-12-12 Thread Amos Jeffries

On 13/12/17 04:12, --Ahmad-- wrote:

@amos about your question .

i think “include option “ is ok , but i also need it to be hidden and 
not added to squid.conf


say like :

include /var/test.cc

would like this is loaded automatically and don’t need to add it in 
squid.conf ???



i mean each time i run squid it will load like include other file .

i tried to add into pre file below config :

NAME: include
TYPE: string
LOC: Config.include
DEFAULT: /var/test.cc 
DOC_START
     Hello
DOC_END



Not like this. "include" is a hard-coded behaviour, not a normal 
directive. It needs to be used like the Debian patch does, with 
"squid.conf" file being the pre-determined settings and users in control 
of the other stuff pulled in by 'include' from somewhere else.



OR, you just use the -f command line option to point at your custom 
config file when you run Squid "from terminal".



I wasn't going to bother asking for a fourth time, but since Anthony did: 
"who are you trying to hide this from?" is still unanswered. Anyone who 
might see these settings in the config file can also find them by other 
means (ie looking at the machine's open ports). So the lengths you are 
going to in order to hide them are very weird.



Amos


Re: [squid-users] question in :src/cf.data.pre "

2017-12-12 Thread --Ahmad--
@antony

I will wait for Amos's reply!

Thanks for your time.

cheers 

> On Dec 12, 2017, at 5:12 PM, --Ahmad--  wrote:
> 
> @amos about your question .
> 
> i think “include option “ is ok , but i also need it to be hidden and not 
> added to squid.conf 
> 
> say like :
> 
> include /var/test.cc 
> 
> would like this is loaded automatically and don’t need to add it in 
> squid.conf ???
> 
> 
> i mean each time i run squid it will load like include other file .
> 
> i tried to add into pre file below config :
> 
> NAME: include
> TYPE: string
> LOC: Config.include
> DEFAULT: /var/test.cc 
> DOC_START
> Hello
> DOC_END
> 
> 
> i guess it will load the line built in squid.conf like ==> include 
> /var/test.cc 
> 
> and then i can add what i like into the file ==>  /var/test.cc 
> 
> 
> 
> but i got fail during compilation process .
> 
> may be my point above help ?
> 
> 
> 
> 
> 
>> On Dec 12, 2017, at 4:31 PM, --Ahmad-- wrote:
>> 
>> acl ip1 myip 1.2.3.4
>> http_access allow ip1
>> http_port 6532
> 



Re: [squid-users] question in :src/cf.data.pre "

2017-12-12 Thread Antony Stone
On Tuesday 12 December 2017 at 16:12:59, --Ahmad-- wrote:

> @amos about your question .
> 
> i think “include option “ is ok , but i also need it to be hidden and not
> added to squid.conf

Who are you trying to hide this from?

Antony.

-- 
In Heaven, the beer is Belgian, the chefs are Italian, the supermarkets are 
British, the mechanics are German, the lovers are French, the entertainment is 
American, and everything is organised by the Swiss.

In Hell, the beer is American, the chefs are British, the supermarkets are 
German, the mechanics are French, the lovers are Swiss, the entertainment is 
Belgian, and everything is organised by the Italians.

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] question in :src/cf.data.pre "

2017-12-12 Thread --Ahmad--
@Amos, about your question:

I think the "include" option is OK, but I also need it to be hidden and not 
added to squid.conf.

Say, like:

include /var/test.cc

I would like this to be loaded automatically, without needing to add it to 
squid.conf, i.e. each time I run Squid it loads that file as an include.

I tried adding the config below to the .pre file:

NAME: include
TYPE: string
LOC: Config.include
DEFAULT: /var/test.cc
DOC_START
Hello
DOC_END

I guessed it would build the line "include /var/test.cc" into squid.conf by 
default, and then I could add whatever I like to /var/test.cc.

But the compilation failed.

Maybe my point above helps?





> On Dec 12, 2017, at 4:31 PM, --Ahmad--  wrote:
> 
> acl ip1 myip 1.2.3.4
> http_access allow ip1
> http_port 6532



Re: [squid-users] question in :src/cf.data.pre "

2017-12-12 Thread --Ahmad--
You are correct.

But I'm asking about something different, like config that is not in the file:

acl ip1 myip 1.2.3.4
http_access allow ip1
http_port 6532

I know the pre.cc file has some config, but I want to add something different.

Thanks
> On Dec 12, 2017, at 4:31 PM, --Ahmad--  wrote:
> 
> acl ip1 myip 1.2.3.4
> http_access allow ip1
> http_port 6532



Re: [squid-users] question in :src/cf.data.pre "

2017-12-12 Thread Amos Jeffries

On 13/12/17 03:31, --Ahmad-- wrote:

as an example i want directives to be added automatically without adding them 
to squid.conf

look below :

acl ip1 myip 1.2.3.4
http_access allow ip1
http_port 6532

so above i want them to be added to squid.conf without add them there

so i run squid in terminal it will contact default squid.conf which has no 
config

but it will have the config above added automatically

make sense ?



Sort of. You should be able to see how fixed default values are already 
being defined for those directives, and follow that pattern to add fixed values.


Your problem will be that the default config is loaded *before* Squid 
starts receiving traffic. The myip value is very dynamic; it changes, and is 
only identifiable, *after* Squid is fully running. It can even change 
between TCP connections arriving if the machine's NIC assignments change.



Since your values are fixed at compile time and cannot be changed 
dynamically, you may do better to have a squid.conf with those settings 
that is always loaded and uses the include directive to pull in any other 
user-configurable content.


Have a look at the way I'm doing separation between Squid packages 
config and admin config for Debian Squid-4 packages:







Amos


Re: [squid-users] question in :src/cf.data.pre "

2017-12-12 Thread Antony Stone
On Tuesday 12 December 2017 at 14:52:08, --Ahmad-- wrote:

> Hello folks .
> wanna ask if possible to add  some directives to be by default added to the
> squid config file and when the squid run after compilation to take effect
> even i don’t add them to squid .conf

http://lists.squid-cache.org/pipermail/squid-users/2016-July/011732.html

Was there anything about the reply you got then which doesn't answer your 
current question?


Antony.

-- 
"640 kilobytes (of RAM) should be enough for anybody."

 - Bill Gates

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] question in :src/cf.data.pre "

2017-12-12 Thread --Ahmad--
As an example, I want directives to be added automatically without adding them 
to squid.conf.

Look below:

acl ip1 myip 1.2.3.4
http_access allow ip1
http_port 6532

I want those added to squid.conf without putting them there myself, so when I 
run Squid in a terminal it reads the default squid.conf, which has no config, 
but has the config above added automatically.

Make sense?



cheers 

> On Dec 12, 2017, at 4:06 PM, Amos Jeffries  wrote:
> 
> On 13/12/17 02:52, --Ahmad-- wrote:
>> Hello folks .
>> wanna ask if possible to add  some directives to be by default added to the 
>> squid config file and when the squid run after compilation to take effect 
>> even i don’t add them to squid .conf
> 
> Directives to do what?
> 
> Amos



Re: [squid-users] question in :src/cf.data.pre "

2017-12-12 Thread Amos Jeffries

On 13/12/17 02:52, --Ahmad-- wrote:

Hello folks .
wanna ask if possible to add  some directives to be by default added to the 
squid config file and when the squid run after compilation to take effect even 
i don’t add them to squid .conf



Directives to do what?

Amos


[squid-users] question in :src/cf.data.pre "

2017-12-12 Thread --Ahmad--
Hello folks,
I want to ask whether it is possible to add some directives that are included 
in the Squid config by default, so that when Squid runs after compilation they 
take effect even if I don't add them to squid.conf.

Is there a way to add them in one shot in that file, src/cf.data.pre?

I had a look and found a formula with a default value, name, DOC_START, 
DOC_END, etc.

Thanks



Re: [squid-users] Question about: ext_session_acl Splash/Portal solution.

2017-10-19 Thread Amos Jeffries

On 19/10/17 19:11, Klaus Tachtler wrote:

Hi Amos,
hi list,

is there a problem with the Splash-Screen example from squid homepage
  - https://wiki.squid-cache.org/ConfigExamples/Portal/Splash
if the squid is BEHIND a e2Guardian(Fork of DansGuardian)?



No, http_access and the related session things are all done first using 
the URLs sent by the client.


Amos


Re: [squid-users] Question about: ext_session_acl Splash/Portal solution.

2017-10-19 Thread Klaus Tachtler

Hi Amos,
hi list,

is there a problem with the Splash-Screen example from squid homepage
 - https://wiki.squid-cache.org/ConfigExamples/Portal/Splash
if the squid is BEHIND a e2Guardian(Fork of DansGuardian)?


Thank you!
Klaus.


--

--
e-Mail  : kl...@tachtler.net
Homepage: http://www.tachtler.net
DokuWiki: http://www.dokuwiki.tachtler.net
--




Re: [squid-users] Question about: ext_session_acl Splash/Portal solution.

2017-10-16 Thread Klaus Tachtler

Hi Amos,

first of all, thank you for the help and advice, but I still have a 
problem; see my latest try:


--- code ---

# Set up the session helper in active mode. Mind the wrap - this is  
one line: - MODIFIED -
external_acl_type session concurrency=100 ttl=3 negative_ttl=0  
children-max=1 %SRC /usr/lib64/squid/ext_session_acl -a -T 60 -b  
/var/lib/squid/sessions/


# Pass the LOGIN command to the session helper with this ACL
acl session_login external session LOGIN

# Normal session ACL as per simple example
acl session_is_active external session

# ACL to match URL - MODIFIED -
acl clicked_login_url url_regex -i http://my.page.net/html/accept.php

# First check for the login URL. If present, login session
http_access allow clicked_login_url session_login

# If we get here, URL not present, so renew session or deny request.
http_access deny !session_is_active

# Deny page to display - MODIFIED -
deny_info 511:splash.php session_is_active

--- code ---

1.) I changed %LOGIN to %SRC in the external_acl_type; after that NO 
authentication request against LDAP was made! If I go back to %LOGIN, an 
authentication request against LDAP comes back as a popup!

2.) Disabled the redirect inside the page 
http://my.page.net/html/accept.php, so the page loads with a 200 and NO 
redirect inside; only http://my.page.net/html/accept.php was displayed.

3.) deny_info now uses 511, and a "symbolic link" inside 
"/usr/share/squid/errors/templates" points to the real location.


---> How it works <---

The splash.php page was shown. If I click on the submit button, 
http://my.page.net/html/accept.php is loaded and shown too, but after 
that it is NOT POSSIBLE to go to Google, for example; the splash page 
is shown over and over again. I'm a little bit frustrated.


I use the squid 3.5.20 version that comes with the CentOS 7.4 base 
repository.


Thank you so much for your patience and help!
Klaus.






Re: [squid-users] Question about: ext_session_acl Splash/Portal solution.

2017-10-16 Thread Amos Jeffries

On 16/10/17 07:17, Klaus Tachtler wrote:

Hi Amos,

after a little bit more testing, of course I must agree with you, it 
doesn't work as expected.


Please can you give me another advice? Where is my fault?

I tried to use the *ACTIVE* example from the squid documentation and 
modified it a little bit on 3 parts of the code, BUT a LOOP are still 
there!


https://wiki.squid-cache.org/ConfigExamples/Portal/Splash#Squid_Configuration_File_-_Active_Mode 



--- code ---

# Set up the session helper in active mode. Mind the wrap - this is one 
line: - *MODIFIED* - (all in one line)
external_acl_type session concurrency=100 ttl=3 negative_ttl=0 
children-max=1 %LOGIN /usr/lib64/squid/ext_session_acl -a -T 60 -b 
/var/lib/squid/sessions/


# Pass the LOGIN command to the session helper with this ACL
acl session_login external session LOGIN

# Normal session ACL as per simple example
acl session_is_active external session

# ACL to match URL - *MODIFIED* -
acl clicked_login_url url_regex -i http://my.pages.net/html/accept.php

# First check for the login URL. If present, login session
http_access allow clicked_login_url session_login

# If we get here, URL not present, so renew session or deny request.
http_access deny !session_is_active

# Deny page to display - *MODIFIED* - NOT using a template with 
HTML-Code 511!

deny_info http://my.pages.net/html/splash.php?url=%u session_is_active



Please double-check that the caching-related headers on both your custom 
URLs are set to make them non-cacheable. 302 is a weak substitute for 
511 semantics, and requires caching headers to clearly and explicitly 
prevent caching *and* to be followed by the client, or the system can 
break badly (which is why 511 was created).



Which exact version of Squid are you using? some of the early v4 had 
issues with the format parameter changes which broke the active session 
mode for a while.



Also, be aware that since the helper API is *only* using %LOGIN, if any 
visitor happens to send a request for the clicked_login_url without 
credentials attached, they will create a logged-in session for anonymous 
access, and the proxy becomes an 'open proxy' for any subsequent client 
requests from *anywhere* for 63 seconds. Things like that are why %SRC 
is usually used, to make a session depend on things not as easily under 
client control, such as the src-IP.



If those don't work I'm stuck as well. The wiki config examples are ones 
I used myself for many years before I moved to the sql_session helper.


Amos


Re: [squid-users] Question about: ext_session_acl Splash/Portal solution.

2017-10-15 Thread Klaus Tachtler

Hi Amos,

after a little bit more testing, of course I must agree with you: it 
doesn't work as expected.

Can you please give me another piece of advice? Where is my mistake?

I tried to use the *ACTIVE* example from the Squid documentation and 
modified it a little on 3 parts of the code, BUT a LOOP is still 
there!


https://wiki.squid-cache.org/ConfigExamples/Portal/Splash#Squid_Configuration_File_-_Active_Mode

--- code ---

# Set up the session helper in active mode. Mind the wrap - this is  
one line: - *MODIFIED* - (all in one line)
external_acl_type session concurrency=100 ttl=3 negative_ttl=0  
children-max=1 %LOGIN /usr/lib64/squid/ext_session_acl -a -T 60 -b  
/var/lib/squid/sessions/


# Pass the LOGIN command to the session helper with this ACL
acl session_login external session LOGIN

# Normal session ACL as per simple example
acl session_is_active external session

# ACL to match URL - *MODIFIED* -
acl clicked_login_url url_regex -i http://my.pages.net/html/accept.php

# First check for the login URL. If present, login session
http_access allow clicked_login_url session_login

# If we get here, URL not present, so renew session or deny request.
http_access deny !session_is_active

# Deny page to display - *MODIFIED* - NOT using a template with HTML-Code 511!
deny_info http://my.pages.net/html/splash.php?url=%u session_is_active

--- code ---

If you want, I can share the code from the pages
- http://my.pages.net/html/accept.php
- http://my.pages.net/html/splash.php?url=%u
too?

The idea behind the two PHP pages is to store the original URL in a PHP 
session via splash.php and to redirect to the original URL inside 
accept.php.



Thank you for your time and your patience with me!
Klaus.


On 15/10/17 10:02, Klaus Tachtler wrote:

Hi Amos,


On 14/10/17 04:40, Klaus Tachtler wrote:>

Why I'm on a loop between splash page and accept page?



You have two *separate* active (-a) session contexts going on  
simultaneously. They are both fighting over the session database.


Oh my god, to delete "-a" on the "session_active_def" was the solution.
I was searching hours and hours for that!

Thank you so much for the simple line you wrote to me!


If that is the only change you made it is still not solved either,  
your sessions will never end.


You need *one* session. You get to pick active or passive:

* Active has specific ACLs for LOGIN/LOGOUT/test.
 - for when the clicked URL just *has* to be the specific one.

* Passive has only one ACL for atomic test+login.
 - for when either click OR refresh OR any other page fetch is  
enough to continue.

 - every test of the ACL updates the session timestamp not to expire.

Amos



- Ende der Nachricht von Amos Jeffries  -








Re: [squid-users] Question about: ext_session_acl Splash/Portal solution.

2017-10-15 Thread Amos Jeffries

On 15/10/17 10:02, Klaus Tachtler wrote:

Hi Amos,


On 14/10/17 04:40, Klaus Tachtler wrote:>

Why I'm on a loop between splash page and accept page?



You have two *separate* active (-a) session contexts going on 
simultaneously. They are both fighting over the session database.


Oh my god, to delete "-a" on the "session_active_def" was the solution.
I was searching hours and hours for that!

Thank you so much for the simple line you wrote to me!


If that is the only change you made it is still not solved either, your 
sessions will never end.


You need *one* session. You get to pick active or passive:

* Active has specific ACLs for LOGIN/LOGOUT/test.
 - for when the clicked URL just *has* to be the specific one.

* Passive has only one ACL for atomic test+login.
 - for when either click OR refresh OR any other page fetch is enough 
to continue.

 - every test of the ACL updates the session timestamp not to expire.
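For comparison, Amos's passive option could look roughly like this (a sketch only, reusing the helper path and timeouts from earlier in this thread, not a tested configuration; mind the wrap, the external_acl_type directive is one line):

```
# Passive mode: no -a flag, and a single session definition.
# Any request both creates and refreshes the session, so no LOGIN ACL is needed.
external_acl_type session ttl=3 negative_ttl=0 children-max=1 %SRC /usr/lib64/squid/ext_session_acl -T 60 -b /var/lib/squid/sessions/

acl session_is_active external session

# No session yet? Deny and show the splash page instead.
http_access deny !session_is_active
deny_info 511:splash.php session_is_active
```

Because every test of the ACL refreshes the session, there is no separate accept URL; any page fetch after the splash denial continues the session.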

Amos


Re: [squid-users] Question about: ext_session_acl Splash/Portal solution.

2017-10-14 Thread Klaus Tachtler

Hi Amos,


On 14/10/17 04:40, Klaus Tachtler wrote:>

Why I'm on a loop between splash page and accept page?



You have two *separate* active (-a) session contexts going on  
simultaneously. They are both fighting over the session database.


Oh my god, to delete "-a" on the "session_active_def" was the solution.
I was searching hours and hours for that!

Thank you so much for the simple line you wrote to me!


Amos


Thank you!
Klaus.






Re: [squid-users] Question about: ext_session_acl Splash/Portal solution.

2017-10-14 Thread Amos Jeffries

On 14/10/17 04:40, Klaus Tachtler wrote:>

Why I'm on a loop between splash page and accept page?



You have two *separate* active (-a) session contexts going on 
simultaneously. They are both fighting over the session database.


Amos


[squid-users] Question about: ext_session_acl Splash/Portal solution.

2017-10-13 Thread Klaus Tachtler

Hi,

I have a running Squid and I would like to show a splash screen, so I did 
the following configuration:



--- code snipped ---
external_acl_type session concurrency=100 ttl=12000 negative_ttl=0  
children=1 %LOGIN /usr/lib64/squid/ext_session_acl -a -T 12000 -b  
/var/lib/squid/sessions/


acl session_login external session LOGIN

external_acl_type session_active_def concurrency=100 ttl=12000  
negative_ttl=0 children-max=1 %LOGIN /usr/lib64/squid/ext_session_acl  
-a -T 12000 -b /var/lib/squid/sessions/


acl session_is_active external session_active_def

acl clicked_login_url url_regex -i http://my.page.net/accept.php

http_access allow clicked_login_url session_login

http_access deny !session_is_active

deny_info http://my.page.net/splash.php?url=%u session_is_active
--- code snipped ---


After user authentication against LDAP, when entering the URL google.de, 
http://my.page.net/splash.php?url=%u is shown; no problem up to this 
point.

BUT, after reaching http://my.page.net/accept.php (by pressing a 
button on the splash page), the splash page comes up over and over again.


The /var/log/squid/access.log shows me this:

--- log ---

1507908437.361  1 192.168.0.10 TCP_DENIED/302 448 GET  
http://google.de/ username HIER_NONE/- text/html


--- log ---

Why am I in a loop between the splash page and the accept page?

Thank you for any help!
Klaus.






Re: [squid-users] Question about refresh_pattern and access time...

2017-10-12 Thread Amos Jeffries

On 12/10/17 19:43, EdouardM wrote:

Amos,
here is an example:
http://au.download.windowsupdate.com/d/msdownload/update/software/updt/2017/09/windows10.0-kb4023814-x86_f07f9708c76807b1b19545521f17d6e2fefdd627.cab
   HTTP/1.1 200 OK
   Cache-Control: public,max-age=172800
   Content-Length: 6150968
   Content-Type: application/vnd.ms-cab-compressed
   Last-Modified: Wed, 27 Sep 2017 21:16:36 GMT
   Accept-Ranges: bytes
   ETag: "09295e6d537d31:0"
   Server: Microsoft-IIS/8.5
   X-Powered-By: ASP.NET
   X-CID: 7
   X-CCC: IN
   X-MSEdge-Ref: Ref A: 7A93444FAE0B4491BE72E049496DEC25 Ref B: BOM02EDGE0119
Ref C: 2017-10-12T06:36:40Z
   X-MSEdge-Ref-OriginShield: Ref A: C8FD899B17694B8AA6641257C2636B9B Ref B:
BOM01EDGE0318 Ref C: 2017-10-11T14:02:56Z
   Date: Thu, 12 Oct 2017 06:36:40 GMT
Length: 6150968 (5.9M) [application/vnd.ms-cab-compressed]

does it mean the object will be valid for 2 days (max-age=172800) only, then
the Squid will download a fresh copy from internet even if we set the
refresh_pattern to 1 month ?


CC:max-age=172800 means it can be served unconditionally from cache for 
172800 seconds from Date: Thu, 12 Oct 2017 06:36:40 GMT.


After Sat, 14 Oct 2017 06:36:40 GMT (Date + CC:max-age) it is stale and 
needs revalidating before any use. The headers from the revalidation 
response will alter the above headers to provide new values for Date and 
maybe other bits.


Altering Date naturally alters the base time for the max-age freshness 
calculation. So it should be served unconditionally for the next 48hrs 
from time of revalidation. Rinse and repeat. **


refresh_pattern is irrelevant here because all the necessary algorithm 
parameters for exact expiry (Date and CC:max-age) are provided by that 
server.



** NP: that is what should happen anyway, modulo bugs in Squid 
calculations or admin configuration settings to force other behaviour.


Amos


Re: [squid-users] Question about refresh_pattern and access time...

2017-10-12 Thread EdouardM
Amos,
here is an example:
http://au.download.windowsupdate.com/d/msdownload/update/software/updt/2017/09/windows10.0-kb4023814-x86_f07f9708c76807b1b19545521f17d6e2fefdd627.cab
  HTTP/1.1 200 OK
  Cache-Control: public,max-age=172800
  Content-Length: 6150968
  Content-Type: application/vnd.ms-cab-compressed
  Last-Modified: Wed, 27 Sep 2017 21:16:36 GMT
  Accept-Ranges: bytes
  ETag: "09295e6d537d31:0"
  Server: Microsoft-IIS/8.5
  X-Powered-By: ASP.NET
  X-CID: 7
  X-CCC: IN
  X-MSEdge-Ref: Ref A: 7A93444FAE0B4491BE72E049496DEC25 Ref B: BOM02EDGE0119
Ref C: 2017-10-12T06:36:40Z
  X-MSEdge-Ref-OriginShield: Ref A: C8FD899B17694B8AA6641257C2636B9B Ref B:
BOM01EDGE0318 Ref C: 2017-10-11T14:02:56Z
  Date: Thu, 12 Oct 2017 06:36:40 GMT
Length: 6150968 (5.9M) [application/vnd.ms-cab-compressed]

Does it mean the object will be valid for 2 days (max-age=172800) only, and
then Squid will download a fresh copy from the internet even if we set the
refresh_pattern to 1 month?

Ed.



--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html


Re: [squid-users] Question about refresh_pattern and access time...

2017-10-11 Thread Amos Jeffries

On 12/10/17 04:57, EdouardM wrote:

Hi All,
question i have in mind about the refresh_pattern and multi-access...
let say i use a refresh_pattern about 1 month with an already cached object,
the question is:
- scenario 1: whatever the number of times the object will be requested,
once the delay (1 month) is expired the Squid will download a fresh copy
from internet ?
- scenario 2: each time the object is requested the ending-date will be
re-calculated for +1 month ?

in both scenarii, the remote object never changes...
from your point of view, what's the correct scenario here ?


Neither of those.

HTTP defines an algorithm for calculating object freshness.


What refresh_pattern does is provide default values for the 
section-4.2.2 parameters *if* the server does not deliver the values itself.


The most recent releases of Squid can also infer Date or Last-Modified 
from message arrival times for the age calculation algorithm.


Amos


[squid-users] Question about refresh_pattern and access time...

2017-10-11 Thread EdouardM
Hi All,
a question I have in mind about refresh_pattern and multi-access...
Let's say I use a refresh_pattern of 1 month with an already-cached object.
The question is:
- scenario 1: whatever the number of times the object is requested, once
the delay (1 month) has expired, Squid will download a fresh copy from the
internet?
- scenario 2: each time the object is requested, the end date is
re-calculated for +1 month?

In both scenarios, the remote object never changes...
From your point of view, which is the correct scenario here?

thanks guys 

bye Ed.





Re: [squid-users] question about : NOTICE: Authentication not applicable onintercepted requests.

2017-02-20 Thread Eliezer Croitoru
Hey,

What you see is not a misconfiguration in the general sense.
Squid, or any other proxy, cannot authenticate on an intercept mode and port
without some kind of special tricks.
There are products which offer to transparently hijack the web traffic and
authenticate to the proxy with the Windows credentials.
These agents use a proxy, but the interception happens on the client side,
while the connection to the proxy is similar to a regular (non-intercept)
one.
There are technical options to "mark" connections or requests at some level
that would satisfy your needs, but they would need to be built or published,
and I have yet to see one of these.
One solution I have seen that does something close is "proxifier"
[https://www.proxifier.com].
I have seen a couple of other Chinese developments doing something similar,
but I have yet to sit down with their code to say I understand what they do.
Take a peek at
https://www.raymond.cc/blog/route-all-internet-software-and-game-connection-through-open-proxy-servers/
This is the next best option for transparently hijacking connections:
https://github.com/ambrop72/badvpn

Hope it helps,
Eliezer


http://ngtech.co.il/lmgtfy/
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
Behalf Of L.P.H. van Belle
Sent: Wednesday, February 15, 2017 11:54 AM
To: squid-us...@squid-cache.org
Subject: [squid-users] question about : NOTICE: Authentication not
applicable onintercepted requests.

Hai, 

I am configuring my Debian Jessie box with Squid 3.5.24 (with SSL enabled),
c-icap, squidclamav, and winbind 4.5.5 for kerberos keytab refreshing.

Now I'm at the point of reducing my logs, and I noticed:
NOTICE: Authentication not applicable on intercepted requests.
messages in squid/cache.log.

I know this is some misconfiguration somewhere, but I'm having a hard time
finding/understanding it.
Where and why? If anyone can help me find and understand it, that would be
very nice.

I can't see my error, and everything else is working fine, except I haven't
tested the kerberos group ACL yet, so I didn't set that http_access yet.

I'm using the following firewall rules:

# Not authenticated web traffic, redirected to squid in intercept mode.
-A PREROUTING -p tcp -i eth0 --dport 80 -j DNAT --to-destination
192.168.0.2:3128
-A PREROUTING -p tcp -i eth0 --dport 443 -j DNAT --to-destination
192.168.0.2:3129
Port 8080 is also open.

Web traffic for PCs which are domain-joined has the proxy set by GPO to
hostname.domain.tld port 8080.
Web traffic for other devices doesn't need to authenticate.
WPAD and DNS wpad are also set.
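For reference, the Squid side of those DNAT rules would normally be listening ports along these lines (a sketch; the certificate path is a placeholder, not taken from the configuration below):

```
http_port 8080                          # explicit proxy, set via GPO/WPAD
http_port 3128 intercept                # DNAT'ed port-80 traffic
https_port 3129 intercept ssl-bump cert=/etc/squid/proxy.pem
```

The intercept flag on 3128/3129 is also why authentication cannot apply on those ports, matching the NOTICE in cache.log.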

Below is mostly from the updated wiki pages.
A big thank-you to Amos, Victor, and others who changed the pages; they look
good. I have some small changes for a pure Debian-based setup with Samba4 as
AD DC and winbind for the Squid member server.

This is my squid config. 
# Created from a running squid version : 3.5.24
# Running os : Debian GNU/Linux 8 (jessie)
# Creation date: 2017-02-15

auth_param negotiate program /usr/lib/squid/negotiate_wrapper_auth
--kerberos /usr/lib/squid/negotiate_kerberos_auth -s
HTTP/proxy2.internal.domain@internal.domain.tld --ntlm
/usr/bin/ntlm_auth --helper-protocol=gss-spnego --domain=NTDOM
auth_param negotiate children 10 startup=5 idle=5
auth_param negotiate keep_alive on
external_acl_type memberof ttl=3600 negative_ttl=3600 %LOGIN
/usr/lib/squid3/ext_kerberos_ldap_group_acl -d -i -m 4 -g
internet-allo...@internal.domain.tld -N
nt...@internal.domain.tld -S
dc1.internal.domain@internal.domain.tld -D INTERNAL.DOMAIN.TLD
acl authenticated proxy_auth REQUIRED

acl certificates rep_mime_type -i ^application/pkix-crl$

acl windows-updates dstdomain "/etc/squid/lists/updates-windows"
acl antivirus-updates dstdomain "/etc/squid/lists/updates-antivirus"
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
machines
acl localnet src 192.168.249.0/24    # Company-1
acl localnet src 10.249.2.0/24   # Company-2
acl localnet src 10.249.3.0/24   # Company-3
acl localnet src 10.249.4.0/24   # Company-4
acl localnet src 10.249.5.0/24   # Company-5

acl SSL_ports port 443  # https
acl SSL_ports port 3952 # CIC client
acl SSL_ports port 10443    # https Cisco 5506x
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 3952    # CIC client
acl Safe_ports 

Re: [squid-users] question about : NOTICE: Authentication not applicable onintercepted requests. ( SOLVED )

2017-02-16 Thread Amos Jeffries
On 16/02/2017 3:38 a.m., L.P.H. van Belle wrote:
> If this one arived in the list. 
> 
>  
> 
> This is solved, the wpad.dat was guiding my to the other proxy while my 
> gateway was set to me new proxy. 
> 
> This happend at the policy refresh and did not notice it. 
> 
> Sorry for the noice. 
> 
>  
> 
> But if you see anything that incorrect, or can have a better setup, please 
> let me know. 
> 
> I always like improvements. 
> 

"no_cache" is an alias of "cache". So you can remove the "no_cache" line
from your config entirely.

>  
> 
> Thanks
> 
>  
> 
> Louis
> 
>  
> 
>  
> 
> 
> Van: L.P.H. van Belle [mailto:be...@bazuin.nl] 
> Verzonden: woensdag 15 februari 2017 10:54
> Aan: 'squid-us...@squid-cache.org'
> Onderwerp: question about : NOTICE: Authentication not applicable on 
> intercepted requests. 
> 
> 
>  
> 
> Hai, 
> 
>  
> 
> In configuring my debian jessie with squid 3.5.24 ( with ssl enabled )  
> c-icap squidclamav and winbind 4.5.5 for kerberos keytab refresing. 
> 
>  
> 
> Now, im at the point of reducing my logs and i nocited : 
> 
> NOTICE: Authentication not applicable on intercepted requests. 
> 
> Messages in squid/cache.log 
> 
>  
> 
> I know this is some misconfiguration somewhere, but I'm having a hard time 
> finding/understanding it. 
> 
> If anyone can help me find and understand where and why, that 
> would be very nice. 
> 
>  
> 
> I can't see my error and everything else is working fine, except I haven't 
> tested the kerberos group acl yet. 
> 
> So I didn't set that http_access yet. 
> 
>  
> 
> I have the following firewall rules: 
> 
>  
> 
> # Not authenticated web traffic, redirected to squid in intercept mode.
> 
> -A PREROUTING -p tcp -i eth0 --dport 80 -j DNAT --to-destination 
> 192.168.0.2:3128
> 
> -A PREROUTING -p tcp -i eth0 --dport 443 -j DNAT --to-destination 
> 192.168.0.2:3129
> 
> Port 8080 is also open. 
> 
>  
> 
> Web traffic for PCs which are domain joined has the proxy set by GPO to 
> hostname.domain.tld port 8080. 
> 
> Web traffic for other devices doesn't need to authenticate. 
> 
> WPAD and DNS wpad is also set. 
> 
>  
> 
> Below is mostly from the updated wiki pages. 
> 
> A big thank you to Amos, Victor and others who changed the pages; looks good.
> 
> I have some small changes for a pure Debian based setup with samba4 as AD DC 
> and winbind for the squid member server. 
> 
>  
> 
>  
> 
> This is my squid config. 
> 
> # Created from a running squid version : 3.5.24
> 
> # Running os : Debian GNU/Linux 8 (jessie)
> 
> # Creation date: 2017-02-15
> 
>  
> 
> auth_param negotiate program /usr/lib/squid/negotiate_wrapper_auth --kerberos 
> /usr/lib/squid/negotiate_kerberos_auth -s 
> HTTP/proxy2.internal.domain@internal.domain.tld --ntlm /usr/bin/ntlm_auth 
> --helper-protocol=gss-spnego --domain=NTDOM
> 
> auth_param negotiate children 10 startup=5 idle=5
> 
> auth_param negotiate keep_alive on
> 
> external_acl_type memberof ttl=3600 negative_ttl=3600 %LOGIN 
> /usr/lib/squid3/ext_kerberos_ldap_group_acl -d -i -m 4 -g 
> internet-allo...@internal.domain.tld -N nt...@internal.domain.tld -S 
> dc1.internal.domain@internal.domain.tld -D INTERNAL.DOMAIN.TLD
> 
> acl authenticated proxy_auth REQUIRED
> 
>  
> 
> acl certificates rep_mime_type -i ^application/pkix-crl$
> 
>  
> 
> acl windows-updates dstdomain "/etc/squid/lists/updates-windows"
> 
> acl antivirus-updates dstdomain "/etc/squid/lists/updates-antivirus"
> 
> acl localnet src fc00::/7   # RFC 4193 local private network range
> 
> acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
> machines
> 
> acl localnet src 192.168.249.0/24# Company-1
> 
> acl localnet src 10.249.2.0/24   # Company-2
> 
> acl localnet src 10.249.3.0/24   # Company-3
> 
> acl localnet src 10.249.4.0/24   # Company-4
> 
> acl localnet src 10.249.5.0/24   # Company-5
> 

Small optimization here. You can configure the four 10.249.x lines as:

  acl localnet src 10.249.2.0-10.249.5.0/24

That saves 3 IP comparisons per request.
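The full localnet block from the config would then look like this (a sketch; note that the src keyword is still required in an address-range ACL):

```
acl localnet src fc00::/7                   # RFC 4193 local private network range
acl localnet src fe80::/10                  # RFC 4291 link-local machines
acl localnet src 192.168.249.0/24           # Company-1
acl localnet src 10.249.2.0-10.249.5.0/24   # Company-2 .. Company-5
```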


>  
> 
> acl SSL_ports port 443  # https
> 
> acl SSL_ports port 3952 # CIC client
> 
> acl SSL_ports port 10443# https Cisco 5506x
> 
> acl Safe_ports port 80  # http
> 
> acl Safe_ports port 21  # ftp
> 
> acl Safe_ports port 443 # https
> 
> acl Safe_ports port 70  # gopher
> 
> acl Safe_ports port 210 # wais
> 
> acl Safe_ports port 1025-65535  # unregistered ports
> 
> acl Safe_ports port 280 # http-mgmt
> 
> acl Safe_ports port 488 # gss-http
> 
> acl Safe_ports port 591 # filemaker
> 
> acl Safe_ports port 777 # multiling http
> 
> acl Safe_ports port 3952# CIC client
> 
> acl Safe_ports port 10443   # https Cisco 5506x

Port numbers over 1024 are already included in the "unregistered ports"
entry. You can simplify by removing these last two lines of Safe_ports.
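The trimmed Safe_ports list would then read (a sketch; the SSL_ports entries for 3952 and 10443 are unaffected and stay as they are):

```
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl Safe_ports port 1025-65535  # unregistered ports (covers 3952 and 10443)
```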

> 
> acl CONNECT method CONNECT
> 
>  
> 
> ## Added : 

Re: [squid-users] question about : NOTICE: Authentication not applicable on intercepted requests. ( SOLVED )

2017-02-16 Thread L . P . H . van Belle
If this one arrived on the list. 

 

This is solved: the wpad.dat was guiding me to the other proxy while my gateway 
was set to my new proxy. 

This happened at the policy refresh and I did not notice it. 

Sorry for the noise. 

 

But if you see anything that is incorrect, or could be set up better, please let 
me know. 

I always like improvements. 

 

Thanks

 

Louis

 

 


From: L.P.H. van Belle [mailto:be...@bazuin.nl] 
Sent: Wednesday, 15 February 2017 10:54
To: 'squid-us...@squid-cache.org'
Subject: question about : NOTICE: Authentication not applicable on 
intercepted requests. 


 

Hi, 

 

I am configuring my Debian jessie box with squid 3.5.24 (with ssl enabled), 
c-icap, squidclamav and winbind 4.5.5 for kerberos keytab refreshing. 

 

Now, I'm at the point of reducing my logs and I noticed: 

NOTICE: Authentication not applicable on intercepted requests. 

Messages in squid/cache.log 

 

I know this is some misconfiguration somewhere, but I'm having a hard time 
finding/understanding it. 

If anyone can help me find and understand where and why, that 
would be very nice. 

 

I can't see my error and everything else is working fine, except I haven't 
tested the kerberos group acl yet. 

So I didn't set that http_access yet. 

 

I have the following firewall rules: 

 

# Not authenticated web traffic, redirected to squid in intercept mode.

-A PREROUTING -p tcp -i eth0 --dport 80 -j DNAT --to-destination 
192.168.0.2:3128

-A PREROUTING -p tcp -i eth0 --dport 443 -j DNAT --to-destination 
192.168.0.2:3129

Port 8080 is also open. 

 

Web traffic for PCs which are domain joined has the proxy set by GPO to 
hostname.domain.tld port 8080. 

Web traffic for other devices doesn't need to authenticate. 

WPAD and DNS wpad is also set. 

 

Below is mostly from the updated wiki pages. 

A big thank you to Amos, Victor and others who changed the pages; looks good.

I have some small changes for a pure Debian based setup with samba4 as AD DC and 
winbind for the squid member server. 

 

 

This is my squid config. 

# Created from a running squid version : 3.5.24

# Running os : Debian GNU/Linux 8 (jessie)

# Creation date: 2017-02-15

 

auth_param negotiate program /usr/lib/squid/negotiate_wrapper_auth --kerberos 
/usr/lib/squid/negotiate_kerberos_auth -s 
HTTP/proxy2.internal.domain@internal.domain.tld --ntlm /usr/bin/ntlm_auth 
--helper-protocol=gss-spnego --domain=NTDOM

auth_param negotiate children 10 startup=5 idle=5

auth_param negotiate keep_alive on

external_acl_type memberof ttl=3600 negative_ttl=3600 %LOGIN 
/usr/lib/squid3/ext_kerberos_ldap_group_acl -d -i -m 4 -g 
internet-allo...@internal.domain.tld -N nt...@internal.domain.tld -S 
dc1.internal.domain@internal.domain.tld -D INTERNAL.DOMAIN.TLD

acl authenticated proxy_auth REQUIRED

 

acl certificates rep_mime_type -i ^application/pkix-crl$

 

acl windows-updates dstdomain "/etc/squid/lists/updates-windows"

acl antivirus-updates dstdomain "/etc/squid/lists/updates-antivirus"

acl localnet src fc00::/7   # RFC 4193 local private network range

acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines

acl localnet src 192.168.249.0/24    # Company-1

acl localnet src 10.249.2.0/24   # Company-2

acl localnet src 10.249.3.0/24   # Company-3

acl localnet src 10.249.4.0/24   # Company-4

acl localnet src 10.249.5.0/24   # Company-5

 

acl SSL_ports port 443  # https

acl SSL_ports port 3952 # CIC client

acl SSL_ports port 10443    # https Cisco 5506x

acl Safe_ports port 80  # http

acl Safe_ports port 21  # ftp

acl Safe_ports port 443 # https

acl Safe_ports port 70  # gopher

acl Safe_ports port 210 # wais

acl Safe_ports port 1025-65535  # unregistered ports

acl Safe_ports port 280 # http-mgmt

acl Safe_ports port 488 # gss-http

acl Safe_ports port 591 # filemaker

acl Safe_ports port 777 # multiling http

acl Safe_ports port 3952    # CIC client

acl Safe_ports port 10443   # https Cisco 5506x

acl CONNECT method CONNECT

 

## Added : Advertising Server Block List merge from YoYo.org and Host-file.net

acl block-asbl dstdomain "/etc/squid/lists/block-asbl-merged-dstdomain"

http_access deny block-asbl

 

acl google_recaptcha urlpath_regex ^\/recaptcha\/api.js

http_access allow google_recaptcha

 

acl NO-CACHE-SITES url_regex "/etc/squid/lists/no-cache-sites"

no_cache deny NO-CACHE-SITES

always_direct allow NO-CACHE-SITES

cache deny NO-CACHE-SITES

 

# 

http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports

http_access allow localhost manager

http_access deny manager

http_access deny to_localhost

 

## allow before auth so all pc's get the needed updates

http_access allow windows-updates

http_access allow antivirus-updates

 

http_access allow authenticated

http_access allow localnet

http_access allow localhost

http_access deny all


[squid-users] question about : NOTICE: Authentication not applicable on intercepted requests.

2017-02-16 Thread L . P . H . van Belle
Hi, 

 

I am configuring my Debian jessie box with squid 3.5.24 (with ssl enabled), 
c-icap, squidclamav and winbind 4.5.5 for kerberos keytab refreshing. 

 

Now, I'm at the point of reducing my logs and I noticed: 

NOTICE: Authentication not applicable on intercepted requests. 

Messages in squid/cache.log 

 

I know this is some misconfiguration somewhere, but I'm having a hard time 
finding/understanding it. 

If anyone can help me find and understand where and why, that 
would be very nice. 

 

I can't see my error and everything else is working fine, except I haven't 
tested the kerberos group acl yet. 

So I didn't set that http_access yet. 

 

I have the following firewall rules: 

 

# Not authenticated web traffic, redirected to squid in intercept mode.

-A PREROUTING -p tcp -i eth0 --dport 80 -j DNAT --to-destination 
192.168.0.2:3128

-A PREROUTING -p tcp -i eth0 --dport 443 -j DNAT --to-destination 
192.168.0.2:3129

Port 8080 is also open. 

 

Web traffic for PCs which are domain joined has the proxy set by GPO to 
hostname.domain.tld port 8080. 

Web traffic for other devices doesn't need to authenticate. 

WPAD and DNS wpad is also set. 

 

Below is mostly from the updated wiki pages. 

A big thank you to Amos, Victor and others who changed the pages; looks good.

I have some small changes for a pure Debian based setup with samba4 as AD DC and 
winbind for the squid member server. 

 

 

This is my squid config. 

# Created from a running squid version : 3.5.24

# Running os : Debian GNU/Linux 8 (jessie)

# Creation date: 2017-02-15

 

auth_param negotiate program /usr/lib/squid/negotiate_wrapper_auth --kerberos 
/usr/lib/squid/negotiate_kerberos_auth -s 
HTTP/proxy2.internal.domain@internal.domain.tld --ntlm /usr/bin/ntlm_auth 
--helper-protocol=gss-spnego --domain=NTDOM

auth_param negotiate children 10 startup=5 idle=5

auth_param negotiate keep_alive on

external_acl_type memberof ttl=3600 negative_ttl=3600 %LOGIN 
/usr/lib/squid3/ext_kerberos_ldap_group_acl -d -i -m 4 -g 
internet-allo...@internal.domain.tld -N nt...@internal.domain.tld -S 
dc1.internal.domain@internal.domain.tld -D INTERNAL.DOMAIN.TLD

acl authenticated proxy_auth REQUIRED

 

acl certificates rep_mime_type -i ^application/pkix-crl$

 

acl windows-updates dstdomain "/etc/squid/lists/updates-windows"

acl antivirus-updates dstdomain "/etc/squid/lists/updates-antivirus"

acl localnet src fc00::/7   # RFC 4193 local private network range

acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines

acl localnet src 192.168.249.0/24    # Company-1

acl localnet src 10.249.2.0/24   # Company-2

acl localnet src 10.249.3.0/24   # Company-3

acl localnet src 10.249.4.0/24   # Company-4

acl localnet src 10.249.5.0/24   # Company-5

 

acl SSL_ports port 443  # https

acl SSL_ports port 3952 # CIC client

acl SSL_ports port 10443    # https Cisco 5506x

acl Safe_ports port 80  # http

acl Safe_ports port 21  # ftp

acl Safe_ports port 443 # https

acl Safe_ports port 70  # gopher

acl Safe_ports port 210 # wais

acl Safe_ports port 1025-65535  # unregistered ports

acl Safe_ports port 280 # http-mgmt

acl Safe_ports port 488 # gss-http

acl Safe_ports port 591 # filemaker

acl Safe_ports port 777 # multiling http

acl Safe_ports port 3952    # CIC client

acl Safe_ports port 10443   # https Cisco 5506x

acl CONNECT method CONNECT

 

## Added : Advertising Server Block List merge from YoYo.org and Host-file.net

acl block-asbl dstdomain "/etc/squid/lists/block-asbl-merged-dstdomain"

http_access deny block-asbl

 

acl google_recaptcha urlpath_regex ^\/recaptcha\/api.js

http_access allow google_recaptcha

 

acl NO-CACHE-SITES url_regex "/etc/squid/lists/no-cache-sites"

no_cache deny NO-CACHE-SITES

always_direct allow NO-CACHE-SITES

cache deny NO-CACHE-SITES

 

# 

http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports

http_access allow localhost manager

http_access deny manager

http_access deny to_localhost

 

## allow before auth so all pc's get the needed updates

http_access allow windows-updates

http_access allow antivirus-updates

 

http_access allow authenticated

http_access allow localnet

http_access allow localhost

http_access deny all

 

http_port 192.168.249.222:3128 intercept connection-auth=off

https_port 192.168.249.222:3129 intercept connection-auth=off ssl-bump 
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB 
cert=/etc/ssl/local/CAcert.pem options=NO_SSLv3 key=/etc/ssl/local/CAkey.pem

 

http_port 192.168.249.222:8080 ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=4MB cert=/etc/ssl/local/CAcert.pem options=NO_SSLv3 
key=/etc/ssl/local/CAkey.pem

sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/ssl_db -M 8MB

acl step1 at_step SslBump1

ssl_bump peek step1

ssl_bump bump all


Re: [squid-users] Question on no-cache

2016-11-09 Thread Amos Jeffries
On 10/11/2016 10:00 a.m., Adiseshu Channasamudhram wrote:
> Hello There,
> 
> I recently upgraded squid from 2.7 to 3.3.8 and started seeing a problem 
> where squid was caching even when no-cache was set.
> 
> I upgraded to 3.5.20 and now what I see is that the content is cached for 60 
> seconds even when the no-cache directive is set.
> 
> I know that a lot of changes have been implemented in 3.5.20 - can someone 
> please help me configure this new squid 3.5.20 so that it does not cache at 
> all when the no-cache directive is set?

If you do not want a response to be stored use "Cache-Control:
no-store". Do not use "no-cache", it does not mean what you think it does.

This should help explain:


And this is probably where that 60 seconds is coming from:

If you just want the backend to be contacted and it supports
revalidation I suggest just reducing that minimum to 0 or 1 seconds.
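A sketch of such a tweak (the domain pattern and the other refresh_pattern
values here are invented placeholders, not taken from anyone's config):

```
# min=0 lets Squid revalidate with the origin immediately instead of
# treating the stored object as fresh for its first N seconds
refresh_pattern -i \.example\.com/ 0 20% 4320
```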

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Question on no-cache

2016-11-09 Thread Adiseshu Channasamudhram
Hello There,

I recently upgraded squid from 2.7 to 3.3.8 and started seeing a problem 
where squid was caching even when no-cache was set.

I upgraded to 3.5.20 and now what I see is that the content is cached for 60 
seconds even when the no-cache directive is set.

I know that a lot of changes have been implemented in 3.5.20 - can someone 
please help me configure this new squid 3.5.20 so that it does not cache at all 
when the no-cache directive is set?

Thanks a lot in advance

Regards

Adi



Re: [squid-users] Question: Is it possible adaptation_service_chain from services with different access lists?

2016-09-26 Thread Yuri Voinov


On 27.09.2016 0:08, Alex Rousskov wrote:
> On 09/26/2016 11:32 AM, Yuri Voinov wrote:
>> On 26.09.2016 23:16, Alex Rousskov wrote:
>>> On 09/26/2016 10:42 AM, Yuri Voinov wrote:
 How can I make a chain of adaptation with
 different acl's for different chained services?
>
>>> By configuring several chains and then writing adaptation_access rules
>>> to select the right chain for a given message.
>
>
>> Aha. I.e., I can specify chain_A with own access rules and one
>> service_A in chain, and then chain_B, also with own access rules and one
>> service_B, and, finally, specify chain_C with chain_A+chain_B and with
>> access "all". Right?
>
> Whether that is right or wrong depends on the specific ACLs. Also, there
> is no need to create single-service chains. If your rulesA are mutually
> exclusive with rulesB, then you can use them like this:
>
>   adaptation_access serviceA rulesA
>   adaptation_access serviceB rulesB
>   adaptation_access chainAB all
>
> However, again, I discourage you from saying "chain_A with own access
> rulesA" because access rules do not belong to a chain. Squid evaluates
> adaptation_access lines in the squid.conf order. Thus, if rulesA are NOT
> mutually exclusive with rulesB, then the following configuration will
> have a different effect from the above three lines:
>
>   adaptation_access serviceB rulesB
>   adaptation_access serviceA rulesA
>   adaptation_access chainAB all
>
> and this configuration does not make any sense at all:
>
>   adaptation_access chainAB all
>   adaptation_access serviceA rulesA
>   adaptation_access serviceB rulesB
>
>
> It is better to think like this:
>
>   adaptation_access serviceA rules1
>   adaptation_access serviceB rules2
>   adaptation_access chainC rules3
>
> serviceA is used when and only when "rules1" matches
> serviceB is used when and only when "!rules1 rules2" matches
> chainC is used when and only when "!rules1 !rules2 rules3" matches
>
> Each message will be sent to either just serviceA, or just serviceB, or
> just ChainC, or no services/chains/sets at all.
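Alex's serviceA/serviceB/chainC layout above can be written out concretely as
(the src ACLs below are invented placeholders, not from the original configs):

```
acl rules1 src 10.0.1.0/24          # placeholder for rulesA
acl rules2 src 10.0.2.0/24          # placeholder for rulesB

adaptation_access serviceA rules1   # used when rules1 matches
adaptation_access serviceB rules2   # used when !rules1 and rules2 match
adaptation_access chainC all        # used for everything else
```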
Ah, yes. Understood.

It is now clear. I rewrote the access rules and now adaptation works in
the chain with the right logic. Thank you for your explanations and your time!
>
>
>
> HTH,
>
> Alex.




