Re: [squid-users] FATAL: assertion failed: mem/PageStack.cc:159: "StoredNode().is_lock_free()"

2024-06-28 Thread Nishant Sharma

On 28/06/24 19:44, Alex Rousskov wrote:
> I do not know the answer to your question. SMP performance penalties are 
> often smaller for smaller cache sizes, but cache size is not the only 
> performance-affecting locking-sensitive parameter, so YMMV.


I was able to compile after commenting out the specific line of code. Squid 
workers start, and I am able to bind them to specific CPU cores.
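For reference, worker-to-core binding of the kind described here is normally 
done with squid.conf's cpu_affinity_map directive. A minimal sketch for a 
2-core box (the process and core numbers are illustrative, not taken from the 
actual config):

```
# squid.conf sketch: two SMP workers, each pinned to its own core
workers 2

# process_numbers refer to kid (worker) processes; cores are numbered from 1
cpu_affinity_map process_numbers=1,2 cores=1,2
```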


I will do some extensive testing in the next few days in SMP and non-SMP 
mode before rolling the new version out in the field.

> Just to avoid a misunderstanding: Other than commenting out the 
> assertion line, no code removal is suggested in my bulleted list quoted 
> above. The first bullet is a speculative "remove the assertion and see 
> what happens" experiment. The second bullet is about reviewing existing 
> code (without code modifications) to validate the need for that 
> assertion. That audit/validation is required to remove the assertion 
> from official Squid sources. That need (and that decision) do not depend 
> on cache sizes and other deployment specifics.


I have already acted on the first item of the bulleted suggestions list :)

For the next two, I can run tests on these devices under various 
workloads and scenarios, if that helps in validation and further 
decision making.


Thanks again for your help.

Regards,
Nishant
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FATAL: assertion failed: mem/PageStack.cc:159: "StoredNode().is_lock_free()"

2024-06-28 Thread Alex Rousskov

On 2024-06-28 01:38, Nishant Sharma wrote:

> On 27/06/24 23:06, Alex Rousskov wrote:
>> and how your traffic tickles them, SMP Squid without atomic locks 
>> might become very slow! We do not (and, IMO, should not) optimize 
>> performance for environments without lock-free atomics!
>>
>> I see the following options for going forward:
>>
>> * Comment out the assertion, void your warranty, and hope for the best.
>> * Audit relevant code to confirm that the assertion is safe to remove.
>> * Find a usable OS/environment that has lock-free 64-bit atomics.
>
> I am not a developer, so I will need some help to understand the code 
> and the repercussions of modifying it.
>
> In our use case, we do not use caching at all except a small in-memory 
> cache of, say, 64MB.
>
> Squid is used for access control with an external ACL helper and SSL 
> Bump, where SMP used to help with version 4.x.
>
> Would it be catastrophic to comment out the assertion and then remove 
> relevant code for such a use case, where there is no disk cache 
> available for probable corruption?


I do not know the answer to your question. SMP performance penalties are 
often smaller for smaller cache sizes, but cache size is not the only 
performance-affecting locking-sensitive parameter, so YMMV.



> ... and then remove relevant code ...

Just to avoid a misunderstanding: Other than commenting out the 
assertion line, no code removal is suggested in my bulleted list quoted 
above. The first bullet is a speculative "remove the assertion and see 
what happens" experiment. The second bullet is about reviewing existing 
code (without code modifications) to validate the need for that 
assertion. That audit/validation is required to remove the assertion 
from official Squid sources. That need (and that decision) do not depend 
on cache sizes and other deployment specifics.



HTH,

Alex.



Re: [squid-users] FATAL: assertion failed: mem/PageStack.cc:159: "StoredNode().is_lock_free()"

2024-06-27 Thread Nishant Sharma

Thanks for your reply, Alex.

On 27/06/24 23:06, Alex Rousskov wrote:
> and how your traffic tickles them, SMP Squid without atomic locks might 
> become very slow! We do not (and, IMO, should not) optimize performance 
> for environments without lock-free atomics!
>
> I see the following options for going forward:
>
> * Comment out the assertion, void your warranty, and hope for the best.
> * Audit relevant code to confirm that the assertion is safe to remove.
> * Find a usable OS/environment that has lock-free 64-bit atomics.


I am not a developer, so I will need some help to understand the code 
and the repercussions of modifying it.


In our use case, we do not use caching at all except a small in-memory 
cache of, say, 64MB.


Squid is used for access control with an external ACL helper and SSL 
Bump, where SMP used to help with version 4.x.


Would it be catastrophic to comment out the assertion and then remove 
relevant code for such a use case, where there is no disk cache 
available for probable corruption?


Regards,
Nishant


Re: [squid-users] FATAL: assertion failed: mem/PageStack.cc:159: "StoredNode().is_lock_free()"

2024-06-27 Thread Alex Rousskov

On 2024-06-27 10:35, Nishant Sharma wrote:

> I am running squid 6.10 on OpenWrt 23.05.2, cross-compiled for 
> ramips / mipsel_24kc, a 32-bit CPU (MT7621A) with 2 cores and 2 
> threads.
>
> Squid fails to start in SMP mode when I set workers > 1.


The assertion in question may be overreaching -- I suspect the relevant 
code works correctly when std::atomic::is_lock_free() is 
false. Depending on how those locks are implemented in your environment, 
and how your traffic tickles them, SMP Squid without atomic locks might 
become very slow! We do not (and, IMO, should not) optimize performance 
for environments without lock-free atomics!


I see the following options for going forward:

* Comment out the assertion, void your warranty, and hope for the best.
* Audit relevant code to confirm that the assertion is safe to remove.
* Find a usable OS/environment that has lock-free 64-bit atomics.



> SMP worked fine with squid 4.13 on the same architecture.


SMP Squid v4 has an ABA problem that could, at least in theory, result 
in silent cache corruption. If you are interested in low-level details, 
please see commit 7a5af8db message:

https://github.com/squid-cache/squid/commit/7a5af8db


HTH,

Alex.





Re: [squid-users] FATAL: assertion failed: mem/PageStack.cc:159: "StoredNode().is_lock_free()"

2024-06-27 Thread Jonathan Lee
I have Squid 5.8 and I can't start it with multiple workers enabled in pfSense 
either. It is a 64-bit 2100 MAX.
Sent from my iPhone

> On Jun 27, 2024, at 08:12, Nishant Sharma  wrote:
> 
> [...]


Re: [squid-users] FATAL: assertion failed: mem/PageStack.cc:159: "StoredNode().is_lock_free()"

2024-06-27 Thread Jonathan Lee
Has anyone run this on a Banana Pi R3 or R4?
Sent from my iPhone

> On Jun 27, 2024, at 08:12, Nishant Sharma  wrote:
> 
> [...]


[squid-users] FATAL: assertion failed: mem/PageStack.cc:159: "StoredNode().is_lock_free()"

2024-06-27 Thread Nishant Sharma

Hello,

I am running squid 6.10 on OpenWrt 23.05.2, cross-compiled for 
ramips / mipsel_24kc, a 32-bit CPU (MT7621A) with 2 cores and 2 
threads.


Squid fails to start in SMP mode when I set workers > 1.

SMP worked fine with squid 4.13 on the same architecture.

I have filed a bug report with Openwrt at

https://github.com/openwrt/packages/issues/24469

where someone suggested, "ramips has one CPU and the assert is that 
system pointers are not 64bit."


Below are the logs for debug_options 54,9:

2024/06/27 19:48:45.888| 54,3| mem/Segment.cc(245) unlink: unlinked 
/squid-cf__metadata.shm segment
2024/06/27 19:48:45.888| 54,3| mem/Segment.cc(128) create: created 
/squid-cf__metadata.shm segment: 8
2024/06/27 19:48:45.888| 54,5| mem/Segment.cc(211) lock: mlock(2)-ing 
disabled
2024/06/27 19:48:45.889| 54,3| mem/Segment.cc(245) unlink: unlinked 
/squid-cf__queues.shm segment
2024/06/27 19:48:45.889| 54,3| mem/Segment.cc(128) create: created 
/squid-cf__queues.shm segment: 32852
2024/06/27 19:48:45.889| 54,5| mem/Segment.cc(211) lock: mlock(2)-ing 
disabled
2024/06/27 19:48:45.890| 54,3| mem/Segment.cc(245) unlink: unlinked 
/squid-cf__readers.shm segment
2024/06/27 19:48:45.890| 54,3| mem/Segment.cc(128) create: created 
/squid-cf__readers.shm segment: 40
2024/06/27 19:48:45.890| 54,5| mem/Segment.cc(211) lock: mlock(2)-ing 
disabled

2024/06/27 19:48:45.891| 54,7| Queue.cc(50) QueueReader: constructed ipcQR1
2024/06/27 19:48:45.891| 54,7| Queue.cc(50) QueueReader: constructed ipcQR2
2024/06/27 19:48:45.891| 54,5| mem/PageStack.cc(129) IdSetMeasurements: 
rounded capacity up from 8192 to 8192
2024/06/27 19:48:45.891| 54,3| mem/Segment.cc(245) unlink: unlinked 
/squid-squid-page-pool.shm segment
2024/06/27 19:48:45.892| 54,3| mem/Segment.cc(128) create: created 
/squid-squid-page-pool.shm segment: 268437592
2024/06/27 19:48:45.892| 54,5| mem/Segment.cc(211) lock: mlock(2)-ing 
disabled
2024/06/27 19:48:45.892| 54,5| mem/PageStack.cc(129) IdSetMeasurements: 
rounded capacity up from 8192 to 8192
2024/06/27 19:48:45| FATAL: assertion failed: mem/PageStack.cc:159: 
"StoredNode().is_lock_free()"



Any pointers would be really helpful.

Thanks in advance.

Regards,
Nishant


Re: [squid-users] Requesting Help to debug my squid

2024-06-25 Thread Alex Rousskov

On 2024-06-25 02:34, Ohms, Jannis wrote:

> I run Squid 6.6. My Squid crashes periodically, and I receive logs 
> similar to this one:
>
> 2024-06-25T08:14:05.044043+02:00 surfer-proxy squid[585630]: FATAL: 
> assertion failed: FilledChecklist.cc:263: "!rfc931[0]"


The above assertion is a Squid bug related to Ident support (RFC 931 
obsoleted by RFC 1413). There are other Ident bugs and problems in 
Squid. Our recent comprehensive fix[1] for master/v7 was superseded by 
removing Ident support itself (master/v7 commit e94ff52 [2]). Squid v6 
still has (buggy) Ident code.


If you do not really need Ident, stop using Ident features[3] in 
squid.conf and disable Ident support when building Squid:

./configure --disable-ident-lookups ...

If you do need Ident, consider writing an external_acl helper that 
performs Ident lookups and then disable native Ident support in Squid.



HTH,

Alex.

[1] https://github.com/squid-cache/squid/pull/1815

[2] 
https://github.com/squid-cache/squid/commit/e94ff5274ce05e6f06d7c789bb2c6452c7886584


[3] Ident features include: ident/ident_regex ACLs, %ui logformat codes, 
%IDENT external_acl_type format code, and ident_lookup_access/ident_timeout

directives.



[squid-users] Requesting Help to debug my squid

2024-06-25 Thread Ohms, Jannis
Hi all,

I run Squid 6.6. My Squid crashes periodically, and I receive logs similar to 
this one:

2024-06-25T08:14:04.539274+02:00 surfer-proxy squid[585630]: Starting Squid 
Cache version 6.6 for x86_64-pc-linux-gnu...
2024-06-25T08:14:04.539391+02:00 surfer-proxy squid[585630]: Service Name: squid
2024-06-25T08:14:04.539457+02:00 surfer-proxy squid[585630]: Process ID 585630
2024-06-25T08:14:04.539525+02:00 surfer-proxy squid[585630]: Process Roles: 
worker
2024-06-25T08:14:04.539587+02:00 surfer-proxy squid[585630]: With 16348 file 
descriptors available
2024-06-25T08:14:04.539647+02:00 surfer-proxy squid[585630]: Initializing IP 
Cache...
2024-06-25T08:14:04.539821+02:00 surfer-proxy squid[585630]: DNS IPv6 socket 
created at [::], FD 8
2024-06-25T08:14:04.539893+02:00 surfer-proxy squid[585630]: DNS IPv4 socket 
created at 0.0.0.0, FD 9
2024-06-25T08:14:04.539955+02:00 surfer-proxy squid[585630]: Adding nameserver 
127.0.0.1 from /etc/resolv.conf
2024-06-25T08:14:04.540026+02:00 surfer-proxy squid[585630]: Adding domain 
biblio.etc.tu-bs.de from /etc/resolv.conf
2024-06-25T08:14:04.540091+02:00 surfer-proxy squid[585630]: Adding domain 
surfer.unibib.local from /etc/resolv.conf
2024-06-25T08:14:04.552194+02:00 surfer-proxy squid[585630]: Logfile: opening 
log stdio:/var/log/squid/access.log
2024-06-25T08:14:04.804840+02:00 surfer-proxy squid[585630]: Unlinkd pipe 
opened on FD 14
2024-06-25T08:14:04.809285+02:00 surfer-proxy squid[585630]: Local cache digest 
enabled; rebuild/rewrite every 3600/3600 sec
2024-06-25T08:14:04.809435+02:00 surfer-proxy squid[585630]: Store logging 
disabled
2024-06-25T08:14:04.809513+02:00 surfer-proxy squid[585630]: Swap maxSize 
102400 + 98304 KB, estimated 15438 objects
2024-06-25T08:14:04.809577+02:00 surfer-proxy squid[585630]: Target number of 
buckets: 771
2024-06-25T08:14:04.809647+02:00 surfer-proxy squid[585630]: Using 8192 Store 
buckets
2024-06-25T08:14:04.809715+02:00 surfer-proxy squid[585630]: Max Mem  size: 
98304 KB
2024-06-25T08:14:04.809775+02:00 surfer-proxy squid[585630]: Max Swap size: 
102400 KB
2024-06-25T08:14:04.810390+02:00 surfer-proxy squid[585630]: Rebuilding storage 
in /var/squid/cache (dirty log)
2024-06-25T08:14:04.810535+02:00 surfer-proxy squid[585630]: Using Least Load 
store dir selection
2024-06-25T08:14:04.810635+02:00 surfer-proxy squid[585630]: Current Directory 
is /
2024-06-25T08:14:04.860606+02:00 surfer-proxy squid[585630]: Finished loading 
MIME types and icons.
2024-06-25T08:14:04.861090+02:00 surfer-proxy squid[585630]: HTCP Disabled.
2024-06-25T08:14:04.862578+02:00 surfer-proxy squid[585630]: Pinger socket 
opened on FD 19
2024-06-25T08:14:04.862850+02:00 surfer-proxy squid[585630]: Squid plugin 
modules loaded: 0
2024-06-25T08:14:04.862931+02:00 surfer-proxy squid[585630]: Adaptation support 
is off.
2024-06-25T08:14:04.862999+02:00 surfer-proxy squid[585630]: Accepting HTTP 
Socket connections at conn3 local=[::]:8080 remote=[::] FD 17 flags=9#012
listening port: 8080
2024-06-25T08:14:04.882088+02:00 surfer-proxy squid[585630]: Done reading 
/var/squid/cache swaplog (540 entries)
2024-06-25T08:14:04.882254+02:00 surfer-proxy squid[585630]: Finished 
rebuilding storage from disk.#012540 Entries scanned#012  0 
Invalid entries#012  0 With invalid flags#012540 Objects 
loaded#012  0 Objects expired#012  0 Objects canceled#012   
   0 Duplicate URLs purged#012  0 Swapfile clashes avoided#012Took 
0.07 seconds (7526.24 objects/sec).
2024-06-25T08:14:04.882339+02:00 surfer-proxy squid[585630]: Beginning 
Validation Procedure
2024-06-25T08:14:04.882619+02:00 surfer-proxy squid[585630]: Completed 
Validation Procedure#012Validated 540 Entries#012store_swap_size = 
18780.00 KB
2024-06-25T08:14:05.044043+02:00 surfer-proxy squid[585630]: FATAL: assertion 
failed: FilledChecklist.cc:263: "!rfc931[0]"#012current master transaction: 
master57
2024-06-25T08:14:05.181283+02:00 surfer-proxy squid[585262]: Squid Parent: 
squid-1 process 585630 exited due to signal 6 with status 0
2024-06-25T08:14:05.181485+02:00 surfer-proxy squid[585262]: Squid Parent: 
squid-1 process 585630 will not be restarted for 3600 seconds due to repeated, 
frequent failures
2024-06-25T08:14:05.182410+02:00 surfer-proxy squid[585262]: Exiting due to 
repeated, frequent failures
2024-06-25T08:14:05.182520+02:00 surfer-proxy squid[585262]: Removing PID file 
(/run/squid.pid)
2024-06-25T08:14:05.191194+02:00 surfer-proxy systemd[1]: squid.service: Main 
process exited, code=exited, status=1/FAILURE
2024-06-25T08:14:05.191417+02:00 surfer-proxy systemd[1]: squid.service: 
Killing process 585610 (pinger) with signal SIGKILL.
2024-06-25T08:14:05.191477+02:00 surfer-proxy systemd[1]: squid.service: 
Killing process 585615 (pinger) with signal SIGKILL.
2024-06-25T08:14:05.191515+02:00 surfer-proxy systemd[1]: squid.service: 
Killing process 585622 (pinger) with signal SIGKILL.

Re: [squid-users] Simulate connections for tuning squid?

2024-06-20 Thread Periko Support
Wow, yay! I will test the tool and let you know, thanks!!!

On Sun, Jun 16, 2024 at 4:25 AM Antony Stone
 wrote:
>
> On Sunday 16 June 2024 at 12:27:03, David Touzeau wrote:
>
> > Hi
> >
> > We have made such a tool for us.
>
> > Available free of charge with no restrictions
>
> I think it would be a good idea to publish a tool like this explicitly under a
> well-known licence.  It makes life considerably simpler for people who care
> about compliance if they can say "these things we use are all under licence X"
> because then everyone knows exactly what that means.
>
>
> Antony.
>
> --
> I don't know, maybe if we all waited then cosmic rays would write all our
> software for us. Of course it might take a while.
>
>  - Ron Minnich, Los Alamos National Laboratory
>
>Please reply to the list;
>  please *don't* CC me.


Re: [squid-users] Anybody still using src_as and dst_as ACLs?

2024-06-20 Thread Stuart Henderson
On 2024-06-16, Alex Rousskov  wrote:
>  Does anybody still have src_as and dst_as ACLs configured in their 
> production Squids? There are several serious problems with those ACLs, 
> and those problems have been present in Squid for many years. I hope 
> that virtually nobody uses those ACLs today.

In case anyone still does, replacing them with a config file fragment,
included from the main squid.conf and generated by simple processing of
the output of bgpq4 or a similar tool, would be fairly straightforward
(and more featureful, as it can follow AS-SET macros).
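In squid.conf terms, such a setup could look like the fragment below. The 
file name, ACL name, and prefixes are made-up examples (AS64500 is a 
documentation ASN); the generated file would be refreshed out of band from 
bgpq4 (or similar) output.

```
# Main squid.conf: pull in the generated AS prefix list
include /etc/squid/as64500.conf

# /etc/squid/as64500.conf, regenerated periodically, contains lines like:
#   acl toAS64500 dst 192.0.2.0/24
#   acl toAS64500 dst 198.51.100.0/22
http_access deny toAS64500
```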




Re: [squid-users] url_rewrite (with rewrite-url): PinnedConnection failure results in total failure

2024-06-18 Thread Alex Rousskov

On 2024-06-18 11:34, Andreas Weigel wrote:

> While updating from a (security-patched) Squid v4 to Squid v6, problems 
> occurred with the url_rewrite feature used for content filtering. The 
> external helper uses "OK rewrite-url=" to point to a custom page that 
> Squid used to retrieve and present to the client. This used to work with 
> SSL interception, but it no longer does in Squid v6. It does not matter 
> whether peek+splice or stare+bump is used; Squid always fails with 
> ERR_CANNOT_FORWARD.


The peek+splice and stare+bump cases are very different with regard to 
how the request destination should be computed (including how pinned 
connection should be reused) IMO. In my response, I will focus on the 
stare+bump case.



> I understand that using the rewrite-url mechanism is discouraged, but 
> would still like to understand if this is intended behavior (and the 
> reasoning behind it) or if this has to be considered a bug.


Overall, I believe that a Squid admin should be able to rewrite any HTTP 
request (destination). Whether to classify the lack of this 
functionality in current Squid bumping code as a regression bug or a 
missing feature is currently unimportant AFAICT.


Needless to say, using another destination after a Squid-to-server 
connection has been pinned to the client-to-Squid connection is not 
quite compatible with some guarantees implied by "pinning", so extra 
care should be taken when (re)enabling this behavior. I would not be 
surprised if we need to introduce several pinning categories, with 
different guarantees and re-pinning options!


For example, a client pinned to a server after a failed Host validation 
should not be able to send a second request to another destination 
without Squid admin permission. I am not sure how to detect such a 
permission when, say, a URL rewriter simply echoes client-chosen 
destination (possibly trusting Squid to block any requests that are 
attempting to escape their "pinned jail"). Distinguishing "intended" 
destination rewrites from canonicalizing or DNS-resolving echoes may be 
difficult!


It is also not clear whether the originally pinned Squid-to-server 
connection should be preserved in such cases (to be used for future 
non-redirected requests received on the same client-to-Squid connection, 
if any). Again, the correct answer may depend on the "pinning category".



HTH,

Alex.





[squid-users] url_rewrite (with rewrite-url): PinnedConnection failure results in total failure

2024-06-18 Thread Andreas Weigel

Hi,

While updating from a (security-patched) Squid v4 to Squid v6, problems 
occurred with the url_rewrite feature used for content filtering. The 
external helper uses "OK rewrite-url=" to point to a custom page that 
Squid used to retrieve and present to the client. This used to work with 
SSL interception, but it no longer does in Squid v6. It does not matter 
whether peek+splice or stare+bump is used; Squid always fails with 
ERR_CANNOT_FORWARD.


Digging into the codebase, I can see that with v4, Squid first checked 
the validity of any pinned connection and, if none was found, still 
retrieved the "replacement page". It seems that commit 
daf80700130b6f98256b75c511916d1a79b23597 changed the logic in that 
regard, causing the FwdState to fail hard and not try any of the 
remaining peer options (see log excerpts below), although I can see 
that it added two selections (PINNED and HIER_DIRECT).


I understand that using the rewrite-url mechanism is discouraged, but I 
would still like to understand whether this is intended behavior (and 
the reasoning behind it) or whether it has to be considered a bug. Any 
pointers to additional resources or explanations that would help me 
understand the behavior would be much appreciated.


Kind regards,
Andreas Weigel

PS:

log excerpt from squid-v4.16 (successful rewrite-url)

2024/06/14 14:46:30.750 kid1| 61,2| /client_side_request.cc(1266)  
clientRedirectDone: URL-rewriter diverts URL from https://youtube.com/  
to  
http://127.0.0.1:8081/?cat=_all==URLLIST==https%3a%2f%2fyoutube%2ecom%2f=
2024/06/14 14:46:30.750 kid1| 33,5| /store_client.cc(317) doCopy:  
store_client::doCopy: co: 0, hi: 0
2024/06/14 14:46:30.750 kid1| 17,3| /FwdState.cc(340) Start:  
'http://127.0.0.1:8081/?cat=_all==URLLIST==https%3a%2f%2fyoutube%2ecom%2f='
2024/06/14 14:46:30.750 kid1| 17,2| /FwdState.cc(142) FwdState:  
Forwarding client request local=172.217.13.206:443  
remote=192.168.180.10:33836 FD 11 flags=33,  
url=http://127.0.0.1:8081/?cat=_all==URLLIST==https%3a%2f%2fyoutube%2ecom%2f
2024/06/14 14:46:30.750 kid1| 17,3| /FwdState.cc(149) FwdState:  
FwdState constructed, this=0x559ad7e89b88
2024/06/14 14:46:30.750 kid1| 44,3| /peer_select.cc(161) peerSelect:  
e:=XIV/0x559ad7e89500*2  
http://127.0.0.1:8081/?cat=_all==URLLIST==https%3a%2f%2fyoutube%2ecom%2f=
2024/06/14 14:46:30.750 kid1| 44,3| /peer_select.cc(472)  
peerSelectFoo: GET 127.0.0.1
2024/06/14 14:46:30.750 kid1| 44,3| /peer_select.cc(477)  
peerSelectFoo: peerSelectFoo: direct = DIRECT_UNKNOWN (always_direct  
to be checked)
2024/06/14 14:46:30.750 kid1| 44,3| /peer_select.cc(218)  
peerCheckAlwaysDirectDone: peerCheckAlwaysDirectDone: ALLOWED
2024/06/14 14:46:30.750 kid1| 44,3| /peer_select.cc(224)  
peerCheckAlwaysDirectDone: direct = DIRECT_YES (always_direct allow)
2024/06/14 14:46:30.750 kid1| 44,3| /peer_select.cc(472)  
peerSelectFoo: GET 127.0.0.1
2024/06/14 14:46:30.750 kid1| 33,7| /client_side.cc(4119)  
validatePinnedConnection: local=192.168.41.35:47842  
remote=172.217.13.206:443 FD 13 flags=1
2024/06/14 14:46:30.750 kid1| 33,3| /client_side.cc(4154)  
unpinConnection: local=192.168.41.35:47842 remote=172.217.13.206:443  
FD 13 flags=1
2024/06/14 14:46:30.750 kid1| 33,5| src/base/AsyncCall.cc(54) cancel:  
will not call ConnStateData::clientPinnedConnectionClosed [call63]  
because comm_remove_close_handler
2024/06/14 14:46:30.750 kid1| 33,3| src/base/AsyncCall.cc(54) cancel:  
will not call ConnStateData::clientPinnedConnectionRead [call64]  
because comm_read_cancel
2024/06/14 14:46:30.750 kid1| 33,3| src/base/AsyncCall.cc(54) cancel:  
will not call ConnStateData::clientPinnedConnectionRead [call64] also  
because comm_read_cancel
2024/06/14 14:46:30.750 kid1| 44,3| /peer_select.cc(978)  
peerAddFwdServer: adding HIER_DIRECT#127.0.0.1
2024/06/14 14:46:30.750 kid1| 44,2| /peer_select.cc(295)  
peerSelectDnsPaths: Find IP destination for:  
http://127.0.0.1:8081/?cat=_all==URLLIST==https%3a%2f%2fyoutube%2ecom%2f=' via  
127.0.0.1
2024/06/14 14:46:30.750 kid1| 44,2| /peer_select.cc(316)  
peerSelectDnsPaths: Found sources for  
'http://127.0.0.1:8081/?cat=_all==URLLIST==https%3a%2f%2fyoutube%2ecom%2f='
2024/06/14 14:46:30.750 kid1| 44,2| /peer_select.cc(317)  
peerSelectDnsPaths:   always_direct = ALLOWED
2024/06/14 14:46:30.750 kid1| 44,2| /peer_select.cc(318)  
peerSelectDnsPaths:never_direct = DENIED
2024/06/14 14:46:30.750 kid1| 44,2| /peer_select.cc(322)  
peerSelectDnsPaths:  DIRECT = local=0.0.0.0  
remote=127.0.0.1:8081 flags=1
2024/06/14 14:46:30.750 kid1| 44,2| /peer_select.cc(331)  
peerSelectDnsPaths:timedout = 0
2024/06/14 14:46:30.750 kid1| 17,3| /FwdState.cc(418)  
startConnectionOrFail:  
http://127.0.0.1:8081/?cat=_all==URLLIST==https%3a%2f%2fyoutube%2ecom%2f=
2024/06/14 14:46:30.750 kid1| 17,3| /FwdState.cc(832) connectStart:  
fwdConnectStart:  
http://127.0.0.1:8081/?cat=_all==URLLIST==https%3a%2f%2fyoutube%2ecom%2f=
2024/06/14 14:46:30.750 kid1| 17,3| 

Re: [squid-users] Anybody still using src_as and dst_as ACLs?

2024-06-17 Thread Jonathan Lee
Is there a different type of directive for source and destination ACLs?
Sent from my iPhone

> On Jun 17, 2024, at 11:03, Alex Rousskov  
> wrote:
> 
> On 2024-06-17 11:43, Jonathan Lee wrote:
>> acl to_ipv6 dst ipv6
>> acl from_ipv6 src ipv6
> 
> 
> Glad I asked! The above configuration is not using "src_as" and "dst_as" ACL 
> types that I am asking about. It is using "src" and "dst" ACL types.
> 
> 
> > I hope that helps our isp is ipv6 only
> 
> Matching IPv6 addresses is completely unrelated to this thread topic, but you 
> may want to see the following commit message about "ipv6" problems recently 
> fixed in master/v7. If you want to discuss IPv6 matching, please start a new 
> mailing list thread!
> https://github.com/squid-cache/squid/commit/51c518d5
> 
> 
> 
> Thank you,
> 
> Alex.
> 
> 
On Jun 17, 2024, at 08:17, Alex Rousskov wrote:
>>> 
>>> On 2024-06-16 19:46, Jonathan Lee wrote:
 I use them for ipv6 blocks they seem to work that way in 5.8
>>> 
>>> Just to double check that we are on the same page here, please share an 
>>> example (or two) of your src_as or dst_as ACL definitions (i.e., "acl ... 
>>> dst_as ..." or similar lines). I do _not_ need the corresponding directives 
>>> that use those AS-based ACLs (e.g., "http_access deny..."), just the "acl" 
>>> lines themselves.
>>> 
>>> As an added bonus, I may be able to confirm whether Squid v5.8 can grok 
>>> responses about Autonomous System Numbers used by your specific 
>>> configuration :-).
>>> 
>>> 
>>> Thank you,
>>> 
>>> Alex.
>>> 
>>> 
> On Jun 16, 2024, at 17:00, Alex Rousskov 
>  wrote:
> 
> Hello,
> 
>Does anybody still have src_as and dst_as ACLs configured in their 
> production Squids? There are several serious problems with those ACLs, 
> and those problems have been present in Squid for many years. I hope that 
> virtually nobody uses those ACLs today.
> 
> If you do use them, please respond (publicly or privately) and, if 
> possible, please indicate whether you have verified that those ACLs are 
> working correctly in your deployment environment.
> 
> 
> Thank you,
> 
> Alex.
> 
> 
>>acl aclname src_as number ...
>>acl aclname dst_as number ...
>>  # [fast]
>>  # Except for access control, AS numbers can be used for
>>  # routing of requests to specific caches. Here's an
>>  # example for routing all requests for AS#1241 and only
>>  # those to mycache.mydomain.net:
>>  # acl asexample dst_as 1241
>>  # cache_peer_access mycache.mydomain.net allow asexample
>>  # cache_peer_access mycache_mydomain.net deny all
>>> 
> 


Re: [squid-users] Anybody still using src_as and dst_as ACLs?

2024-06-17 Thread Alex Rousskov

On 2024-06-17 11:43, Jonathan Lee wrote:

acl to_ipv6 dst ipv6
acl from_ipv6 src ipv6



Glad I asked! The above configuration is not using "src_as" and "dst_as" 
ACL types that I am asking about. It is using "src" and "dst" ACL types.



> I hope that helps our isp is ipv6 only

Matching IPv6 addresses is completely unrelated to this thread topic, 
but you may want to see the following commit message about "ipv6" 
problems recently fixed in master/v7. If you want to discuss IPv6 
matching, please start a new mailing list thread!

https://github.com/squid-cache/squid/commit/51c518d5



Thank you,

Alex.


On Jun 17, 2024, at 08:17, Alex Rousskov 
 wrote:


On 2024-06-16 19:46, Jonathan Lee wrote:

I use them for IPv6 blocks; they seem to work that way in 5.8


Just to double check that we are on the same page here, please share 
an example (or two) of your src_as or dst_as ACL definitions (i.e., 
"acl ... dst_as ..." or similar lines). I do _not_ need the 
corresponding directives that use those AS-based ACLs (e.g., 
"http_access deny..."), just the "acl" lines themselves.


As an added bonus, I may be able to confirm whether Squid v5.8 can 
grok responses about Autonomous System Numbers used by your specific 
configuration :-).



Thank you,

Alex.


On Jun 16, 2024, at 17:00, Alex Rousskov 
 wrote:


Hello,

   Does anybody still have src_as and dst_as ACLs configured in 
their production Squids? There are several serious problems with 
those ACLs, and those problems have been present in Squid for many 
years. I hope that virtually nobody uses those ACLs today.


If you do use them, please respond (publicly or privately) and, if 
possible, please indicate whether you have verified that those ACLs 
are working correctly in your deployment environment.



Thank you,

Alex.



   acl aclname src_as number ...
   acl aclname dst_as number ...
 # [fast]
 # Except for access control, AS numbers can be used for
 # routing of requests to specific caches. Here's an
 # example for routing all requests for AS#1241 and only
 # those to mycache.mydomain.net:
 # acl asexample dst_as 1241
 # cache_peer_access mycache.mydomain.net allow asexample
 # cache_peer_access mycache_mydomain.net deny all







Re: [squid-users] Anybody still using src_as and dst_as ACLs?

2024-06-17 Thread Jonathan Lee
acl to_ipv6 dst ipv6
acl from_ipv6 src ipv6

I then block them with terminate connections.

I hope that helps; our ISP is IPv6-only.
Sent from my iPhone

> On Jun 17, 2024, at 08:17, Alex Rousskov  
> wrote:
> 
> On 2024-06-16 19:46, Jonathan Lee wrote:
>> I use them for ipv6 blocks they seem to work that way in 5.8
> 
> Just to double check that we are on the same page here, please share an 
> example (or two) of your src_as or dst_as ACL definitions (i.e., "acl ... 
> dst_as ..." or similar lines). I do _not_ need the corresponding directives 
> that use those AS-based ACLs (e.g., "http_access deny..."), just the "acl" 
> lines themselves.
> 
> As an added bonus, I may be able to confirm whether Squid v5.8 can grok 
> responses about Autonomous System Numbers used by your specific configuration 
> :-).
> 
> 
> Thank you,
> 
> Alex.
> 
> 
 On Jun 16, 2024, at 17:00, Alex Rousskov 
  wrote:
>>> 
>>> Hello,
>>> 
>>>Does anybody still have src_as and dst_as ACLs configured in their 
>>> production Squids? There are several serious problems with those ACLs, and 
>>> those problems have been present in Squid for many years. I hope that 
>>> virtually nobody uses those ACLs today.
>>> 
>>> If you do use them, please respond (publicly or privately) and, if 
>>> possible, please indicate whether you have verified that those ACLs are 
>>> working correctly in your deployment environment.
>>> 
>>> 
>>> Thank you,
>>> 
>>> Alex.
>>> 
>>> 
acl aclname src_as number ...
acl aclname dst_as number ...
  # [fast]
  # Except for access control, AS numbers can be used for
  # routing of requests to specific caches. Here's an
  # example for routing all requests for AS#1241 and only
  # those to mycache.mydomain.net:
  # acl asexample dst_as 1241
  # cache_peer_access mycache.mydomain.net allow asexample
  # cache_peer_access mycache_mydomain.net deny all
> 


Re: [squid-users] Anybody still using src_as and dst_as ACLs?

2024-06-17 Thread Alex Rousskov

On 2024-06-16 19:46, Jonathan Lee wrote:

I use them for IPv6 blocks; they seem to work that way in 5.8


Just to double check that we are on the same page here, please share an 
example (or two) of your src_as or dst_as ACL definitions (i.e., "acl 
... dst_as ..." or similar lines). I do _not_ need the corresponding 
directives that use those AS-based ACLs (e.g., "http_access deny..."), 
just the "acl" lines themselves.


As an added bonus, I may be able to confirm whether Squid v5.8 can grok 
responses about Autonomous System Numbers used by your specific 
configuration :-).



Thank you,

Alex.



On Jun 16, 2024, at 17:00, Alex Rousskov  
wrote:

Hello,

Does anybody still have src_as and dst_as ACLs configured in their 
production Squids? There are several serious problems with those ACLs, and 
those problems have been present in Squid for many years. I hope that virtually 
nobody uses those ACLs today.

If you do use them, please respond (publicly or privately) and, if possible, 
please indicate whether you have verified that those ACLs are working correctly 
in your deployment environment.


Thank you,

Alex.



acl aclname src_as number ...
acl aclname dst_as number ...
  # [fast]
  # Except for access control, AS numbers can be used for
  # routing of requests to specific caches. Here's an
  # example for routing all requests for AS#1241 and only
  # those to mycache.mydomain.net:
  # acl asexample dst_as 1241
  # cache_peer_access mycache.mydomain.net allow asexample
  # cache_peer_access mycache_mydomain.net deny all





Re: [squid-users] Anybody still using src_as and dst_as ACLs?

2024-06-16 Thread Jonathan Lee
I use them for IPv6 blocks; they seem to work that way in 5.8
Sent from my iPhone

> On Jun 16, 2024, at 17:00, Alex Rousskov  
> wrote:
> 
> Hello,
> 
>Does anybody still have src_as and dst_as ACLs configured in their 
> production Squids? There are several serious problems with those ACLs, and 
> those problems have been present in Squid for many years. I hope that 
> virtually nobody uses those ACLs today.
> 
> If you do use them, please respond (publicly or privately) and, if possible, 
> please indicate whether you have verified that those ACLs are working 
> correctly in your deployment environment.
> 
> 
> Thank you,
> 
> Alex.
> 
> 
>>acl aclname src_as number ...
>>acl aclname dst_as number ...
>>  # [fast]
>>  # Except for access control, AS numbers can be used for
>>  # routing of requests to specific caches. Here's an
>>  # example for routing all requests for AS#1241 and only
>>  # those to mycache.mydomain.net:
>>  # acl asexample dst_as 1241
>>  # cache_peer_access mycache.mydomain.net allow asexample
>>  # cache_peer_access mycache_mydomain.net deny all


[squid-users] Anybody still using src_as and dst_as ACLs?

2024-06-16 Thread Alex Rousskov

Hello,

Does anybody still have src_as and dst_as ACLs configured in their 
production Squids? There are several serious problems with those ACLs, 
and those problems have been present in Squid for many years. I hope 
that virtually nobody uses those ACLs today.


If you do use them, please respond (publicly or privately) and, if 
possible, please indicate whether you have verified that those ACLs are 
working correctly in your deployment environment.



Thank you,

Alex.



acl aclname src_as number ...
acl aclname dst_as number ...
  # [fast]
  # Except for access control, AS numbers can be used for
  # routing of requests to specific caches. Here's an
  # example for routing all requests for AS#1241 and only
  # those to mycache.mydomain.net:
  # acl asexample dst_as 1241
  # cache_peer_access mycache.mydomain.net allow asexample
  # cache_peer_access mycache_mydomain.net deny all



Re: [squid-users] Simulate connections for tuning squid?

2024-06-16 Thread Antony Stone
On Sunday 16 June 2024 at 12:27:03, David Touzeau wrote:

> Hi
> 
> We have made such a tool for us.

> Available free of charge with no restrictions

I think it would be a good idea to publish a tool like this explicitly under a 
well-known licence.  It makes life considerably simpler for people who care 
about compliance if they can say "these things we use are all under licence X" 
because then everyone knows exactly what that means.


Antony.

-- 
I don't know, maybe if we all waited then cosmic rays would write all our 
software for us. Of course it might take a while.

 - Ron Minnich, Los Alamos National Laboratory

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] Simulate connections for tuning squid?

2024-06-16 Thread David Touzeau


Hi

We have made such a tool for us.
I suggest downloading our ISO and install a new server ( virtual )
You will have this feature:
https://wiki.articatech.com/en/proxy-service/tuning/stress-your-proxy-server

You can easily use this feature in a variety of scenarios.

Available free of charge with no restrictions



Le 24/05/2024 à 16:01, Alex Rousskov a écrit :

On 2024-05-24 01:43, Periko Support wrote:


I would like to know if there exists a tool that helps us simulate
connections to squid and helps us tune squid for different scenarios
like small, medium or large networks?


Yes, there are many tools, offering various tradeoffs, including:

* Apache "ab": Not designed for testing proxies but well-known and 
fairly simple.


* Web Polygraph: Designed for testing proxies but has a steep learning 
curve and lacks fresh releases.


* curl/wget/netcat: Not designed for testing performance but 
well-known and very simple.
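To get a feel for what such tools measure, the simplest approach is easy to sketch in Python: N concurrent clients issuing GETs and recording latency. This toy driver hits a throwaway local server so it is self-contained; it is a smoke test, not a benchmark, and no substitute for ab or Web Polygraph:

```python
#!/usr/bin/env python3
"""Toy HTTP load driver: N concurrent clients, M requests each.

Not a benchmark -- just a sketch of generating concurrent traffic and
measuring response latency, against a throwaway local server.
"""
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor


def start_test_server():
    """Start a trivial local HTTP server so the sketch is self-contained."""
    class Quiet(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"hello"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # silence per-request logging
            pass

    srv = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Quiet)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv


def run_load(url, clients=8, requests_per_client=20):
    """Return (ok_count, mean_latency_in_seconds)."""
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({}))  # ignore http_proxy env vars
    latencies = []
    lock = threading.Lock()

    def worker(_):
        ok = 0
        for _ in range(requests_per_client):
            t0 = time.monotonic()
            with opener.open(url) as resp:
                resp.read()
                if resp.status == 200:
                    ok += 1
            with lock:
                latencies.append(time.monotonic() - t0)
        return ok

    with ThreadPoolExecutor(max_workers=clients) as pool:
        total_ok = sum(pool.map(worker, range(clients)))
    return total_ok, sum(latencies) / len(latencies)


if __name__ == "__main__":
    srv = start_test_server()
    url = "http://127.0.0.1:%d/" % srv.server_address[1]
    ok, mean = run_load(url)
    print("%d OK responses, mean latency %.1f ms" % (ok, mean * 1000))
    srv.shutdown()
```

To push the traffic through a proxy instead, swap `ProxyHandler({})` for something like `ProxyHandler({"http": "http://127.0.0.1:3128"})` (proxy address hypothetical).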


Alex.



--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www: https://wiki.articatech.com
www: http://articatech.net


Re: [squid-users] Information Request: "Accept-Ranges" with use of SSL intercept and dynamic update caching

2024-06-14 Thread Jonathan Lee
Thanks for the info. That makes this directive very clear.
Sent from my iPhone

> On Jun 14, 2024, at 01:46, Amos Jeffries  wrote:
> 
> On 11/06/24 16:47, Jonathan Lee wrote:
>> The reason I ask is that sometimes Facebook locks up while I am using it
>> and my fan goes crazy; I close Safari, restart the browser, and it works
>> fine again. It acts like it is restarting a download over and over again.
> 
> Because it is. Those websites use "infinite scrolling" for delivery. 
> Accept-Ranges tells the server that it does not have to re-deliver the entire 
> JSON dataset for the scrolling part in full, every few seconds.
> 
> That header is defined by 
> 
> 
> 
> HTH
> Amos
> 
> 
 On Jun 10, 2024, at 21:45, Jonathan Lee wrote:
>>> 
>>> Hello fellow Squid community can you please help?
>>> 
>>> Should I be using the following if I have SSL certificates, dynamic 
>>> updates, StoreID, and ClamAV running?
>>> 
>>> *request_header_access Accept-Ranges deny all reply_header_access 
>>> Accept-Ranges deny all request_header_replace Accept-Ranges none 
>>> reply_header_replace Accept-Ranges none*
>>> 
>>> None of the documents show what Accept-Ranges does
>>> 
>>> Can anyone help explain this to me?


Re: [squid-users] Upgrade path from squid 4.15 to 6.x

2024-06-14 Thread Amos Jeffries

On 14/06/24 20:43, NgTech LTD wrote:

Hey Amos,

Ok, so with the tools we have available, can we take this case and maybe 
write a brief summary of changes between the squid features versions?




That is what the Release Notes are.

Cheers
Amos


Re: [squid-users] Information Request: "Accept-Ranges" with use of SSL intercept and dynamic update caching

2024-06-14 Thread Amos Jeffries

On 11/06/24 16:47, Jonathan Lee wrote:
The reason I ask is that sometimes Facebook locks up while I am using it 
and my fan goes crazy; I close Safari, restart the browser, and it works 
fine again. It acts like it is restarting a download over and over again.




Because it is. Those websites use "infinite scrolling" for delivery. 
Accept-Ranges tells the server that it does not have to re-deliver the 
entire JSON dataset for the scrolling part in full, every few seconds.
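The mechanics are easy to see in a few lines of Python. This toy sketch (of the range mechanism itself, not of Facebook's actual delivery) runs a server that advertises "Accept-Ranges: bytes" and honors a client's Range header, so the client can re-request only a missing slice and receive "206 Partial Content" instead of the whole body:

```python
#!/usr/bin/env python3
"""Toy demo of HTTP range requests: the server advertises
"Accept-Ranges: bytes" and a client fetches a 100-byte slice,
receiving "206 Partial Content" instead of the full resource."""
import http.server
import re
import threading
import urllib.request

DATA = bytes(range(256)) * 40  # a 10240-byte pretend resource


class RangeHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        m = re.match(r"bytes=(\d+)-(\d+)", self.headers.get("Range", ""))
        if m:
            lo, hi = int(m.group(1)), int(m.group(2))
            chunk = DATA[lo:hi + 1]
            self.send_response(206)  # Partial Content
            self.send_header("Content-Range",
                             "bytes %d-%d/%d" % (lo, hi, len(DATA)))
        else:
            chunk = DATA
            self.send_response(200)
        self.send_header("Accept-Ranges", "bytes")  # "range requests OK here"
        self.send_header("Content-Length", str(len(chunk)))
        self.end_headers()
        self.wfile.write(chunk)

    def log_message(self, *args):  # silence per-request logging
        pass


def fetch_range(port, first, last):
    """Request only bytes first..last and return (status, body)."""
    opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
    req = urllib.request.Request(
        "http://127.0.0.1:%d/" % port,
        headers={"Range": "bytes=%d-%d" % (first, last)})
    with opener.open(req) as resp:
        return resp.status, resp.read()


if __name__ == "__main__":
    srv = http.server.ThreadingHTTPServer(("127.0.0.1", 0), RangeHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    status, body = fetch_range(srv.server_address[1], 100, 199)
    print(status, len(body))  # 206 100
    srv.shutdown()
```

Stripping or replacing the Accept-Ranges header, as in the configuration quoted below, hides that advertisement, so clients fall back to re-fetching whole responses.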


That header is defined by 




HTH
Amos



On Jun 10, 2024, at 21:45, Jonathan Lee wrote:

Hello fellow Squid community can you please help?

Should I be using the following if I have SSL certificates, dynamic updates, 
StoreID, and ClamAV running?

*request_header_access Accept-Ranges deny all reply_header_access 
Accept-Ranges deny all request_header_replace Accept-Ranges none 
reply_header_replace Accept-Ranges none*


None of the documents show what Accept-Ranges does

Can anyone help explain this to me?





Re: [squid-users] Upgrade path from squid 4.15 to 6.x

2024-06-14 Thread NgTech LTD
Hey Amos,

Ok, so with the tools we have available, can we take this case and maybe
write a brief summary of changes between the squid features versions?

I can't guarantee a time limit, but it would be very helpful for the
community to get feedback in such cases.

If anyone has done this kind of task, please share the details with us so
that others can benefit from your invested time.

Thanks,
Eliezer

* I am well aware...

בתאריך יום ו׳, 14 ביוני 2024, 11:36, מאת Amos Jeffries ‏<
squ...@treenet.co.nz>:

>
> Regarding the OP question:
>
> Upgrade for all Squid-3 is to:
>   * read release notes of N thru M versions (as-needed) about existing
> feature changes
>   * install the new version
>   * run "squid -k parse" to identify mandatory changes
>   * fix all "FATAL" and "ERROR" identified
>   * run with new version
>
> ... look at all logged "NOTICE", "UPGRADE" etc, and the Release Notes
> new feature additions to work on operational improvements possible with
> the new version.
>
>
> HTH
> Amos
>
>
> On 10/06/24 19:43, ngtech1ltd wrote:
> >
> > @Alex and @Amos, can you try to help me compile a menu list of
> > functionalities that Squid-Cache can be used for?
> >
>
> The Squid wiki ConfigExamples section does that. Or at least is supposed
> to, with examples per use-case.
>
>
> FYI, this line of discussion you have is well off-topic for Akash's
> original question and I think is probably just adding confusion.
>
>
> Cheers
> Amos
>


Re: [squid-users] Upgrade path from squid 4.15 to 6.x

2024-06-14 Thread Amos Jeffries



Regarding the OP question:

Upgrade for all Squid-3 is to:
 * read release notes of N thru M versions (as-needed) about existing 
feature changes

 * install the new version
 * run "squid -k parse" to identify mandatory changes
 * fix all "FATAL" and "ERROR" identified
 * run with new version

... look at all logged "NOTICE", "UPGRADE" etc, and the Release Notes 
new feature additions to work on operational improvements possible with 
the new version.



HTH
Amos


On 10/06/24 19:43, ngtech1ltd wrote:


@Alex and @Amos, can you try to help me compile a menu list of 
functionalities that Squid-Cache can be used for?




The Squid wiki ConfigExamples section does that. Or at least is supposed 
to, with examples per use-case.



FYI, this line of discussion you have is well off-topic for Akash's 
original question and I think is probably just adding confusion.



Cheers
Amos


Re: [squid-users] Error Question

2024-06-13 Thread Alex Rousskov

On 2024-06-13 11:07, Jonathan Lee wrote:

Bug #1: Coredumps not functional for non-root processes.
https://redmine.pfsense.org/issues/1#change-73638

There is a bug in pfSense not allowing core dumps.


Glad you are making progress! Since pfSense folks believe this bug is 
not specific to Squid, there is nothing more for us to do here (for now).


Alex.



Re: [squid-users] Error Question

2024-06-13 Thread Jonathan Lee
Bug #1: Coredumps not functional for non-root processes. - pfSense bugtracker
redmine.pfsense.org

There is a bug in pfSense not allowing core dumps.

Sent from my iPhone

> On Jun 12, 2024, at 17:58, Jonathan Lee wrote:
> Shell Output - ls -l /var/log/squid/try.sh
> -rwxrwxrwx  1 root  squid  46 Jun 12 17:55 /var/log/squid/try.sh
>> On Jun 12, 2024, at 15:38, Alex Rousskov wrote:
>> If same user does not expose the difference, start the test script from
>> the directory where you told Squid to dump core.


Re: [squid-users] Error Question

2024-06-12 Thread Jonathan Lee
Shell Output - ls -l /var/log/squid/try.sh
-rwxrwxrwx  1 root  squid  46 Jun 12 17:55 /var/log/squid/try.sh
> On Jun 12, 2024, at 15:38, Alex Rousskov  
> wrote:
> 
> If same user does not expose the difference, start the test script from the 
> directory where you told Squid to dump core.



Re: [squid-users] Error Question

2024-06-12 Thread Jonathan Lee
If same user does not expose the difference, start the test script from the 
directory where you told Squid to dump core.

Shell Output - /var/log/squid/try.sh
sh: /var/log/squid/try.sh: Permission denied
I can’t run it; I have set it to chmod 777 and I am running it as root.
I do not have sudo enabled currently; however, I wonder whether adding root
to /var/log/squid would fix it.


> On Jun 12, 2024, at 15:38, Alex Rousskov  
> wrote:
> 
> If same user does not expose the difference, start the test script from the 
> directory where you told Squid to dump core.



Re: [squid-users] Error Question

2024-06-12 Thread Alex Rousskov

On 2024-06-12 17:51, Jonathan Lee wrote:
when killing squid I only get the following and no core dumps. Core 
dumping does work


Glad you have a working "sanity check" test! I agree with FreeBSD forum 
folks that you have proven that your OS does have core dumps enabled (in 
general). Now we need to figure out what is the difference between that 
working test script and Squid.


Please start Squid from the command line, with -N command line option 
(among others that you might be using already), just like you start the 
"sanity check" test script. And then kill Squid as you kill the test script.


If the above does not produce a Squid core file, then I would suspect 
that Squid runs as "squid" user while the test script runs as "root". 
Try starting the test script as "squid" user (you may be able to use 
"sudo -u squid ..." for that).


If same user does not expose the difference, start the test script from 
the directory where you told Squid to dump core.
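To separate the signal mechanics from Squid itself, the same experiment can be scripted: spawn a throwaway child process, send it SIGABRT, and check how it exited. This is a generic Unix sketch, nothing Squid-specific; whether a core file actually appears still depends on the OS limits and sysctl settings discussed above:

```python
#!/usr/bin/env python3
"""Spawn a child process, send it SIGABRT, and report how it exited.
On POSIX systems, a negative returncode -N means 'killed by signal N'."""
import os
import signal
import subprocess
import sys


def abort_child():
    # Throwaway child that would sleep for a minute if left alone.
    child = subprocess.Popen(
        [sys.executable, "-c", "import time; time.sleep(60)"])
    os.kill(child.pid, signal.SIGABRT)  # same effect as: kill -SIGABRT <pid>
    child.wait()
    return child.returncode


if __name__ == "__main__":
    rc = abort_child()
    print("child exited with", rc)
    # A core file appears only if the OS permits dumps for this user,
    # working directory, and resource limits.
```

If this child dumps core but Squid does not, the difference lies in the user, working directory, or privilege handling of the Squid process, which is exactly what the steps above probe.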



HTH,

Alex.


I have tested it with a sanity check with the help of FreeBSD 
forum users. However it just does not show a core dump for me on 
anything: kill -11, kill -6, killall, or kill -SIGABRT. I have it set in 
the config to use the coredump directory also:
https://forums.freebsd.org/threads/core-dumps.93778/page-2
Jun 12 14:49:09 kernel  pid 87824 (squid), jid 0, uid 100: exited on signal 6
Jun 12 14:47:52 kernel  pid 87551 (squid), jid 0, uid 0: exited on signal 11



On Jun 12, 2024, at 10:19, Jonathan Lee  wrote:

You know what it was, it needed to be bound to the loopback and not 
just the LAN, again I am still working on getting a core dump file 
manually. Will update once I get one. Chmod might be needed.

Sent from my iPhone

On Jun 12, 2024, at 06:13, Alex Rousskov 
 wrote:


On 2024-06-11 23:32, Jonathan Lee wrote:


So I just run this on command line SIGABRT squid?


On Unix-like systems, the command to send a process a signal is 
called "kill": https://www.man7.org/linux/man-pages/man1/kill.1p.html


For example, if you want to abort a Squid worker process that has OS 
process ID (PID) 12345, you may do something like this:


  sudo kill -SIGABRT 12345

You can use "ps" or "top" commands to learn PIDs of processes you 
want to signal.



also added an item to the Netgate forum to, but not many users are 
Squid wizards


Beyond using a reasonable coredump_dir value in squid.conf, the 
system administration problems you need to solve to enable Squid core 
dumps are most likely not specific to Squid.



HTH,

Alex.


It’s funny as soon as I enabled the sysctl command and set the 
directory it won’t crash anymore. I also changed it to reside on the 
loopback before it was only on my lan interface. I run an external 
drive as my swap partition or a swap drive, it works I get crash 
reports when playing around with stuff. /dev/da0 or something it 
dumps to it and when it reboots shows in the var/crash folder and 
will display on gui report ready, again if anyone else knows pfSense 
let me know. I also added an item to the Netgate forum to, but not 
many users are Squid wizards so it might take a long time to get any 
community input over there.








Re: [squid-users] Error Question

2024-06-12 Thread Jonathan Lee
when killing squid I only get the following and no core dumps. Core dumping 
does work; I have tested it with a sanity check with the help of FreeBSD forum 
users. However it just does not show a core dump for me on anything: kill -11, 
kill -6, killall, or kill -SIGABRT. I have it set in the config to use the 
coredump directory also:
https://forums.freebsd.org/threads/core-dumps.93778/page-2


Jun 12 14:49:09 kernel  pid 87824 (squid), jid 0, uid 100: exited on 
signal 6
Jun 12 14:47:52 kernel  pid 87551 (squid), jid 0, uid 0: exited on 
signal 11


> On Jun 12, 2024, at 10:19, Jonathan Lee  wrote:
> 
> You know what it was, it needed to be bound to the loopback and not just the 
> LAN, again I am still working on getting a core dump file manually. Will 
> update once I get one. Chmod might be needed. 
> Sent from my iPhone
> 
>> On Jun 12, 2024, at 06:13, Alex Rousskov  
>> wrote:
>> 
>> On 2024-06-11 23:32, Jonathan Lee wrote:
>> 
>>> So I just run this on command line SIGABRT squid?
>> 
>> On Unix-like systems, the command to send a process a signal is called 
>> "kill": https://www.man7.org/linux/man-pages/man1/kill.1p.html
>> 
>> For example, if you want to abort a Squid worker process that has OS process 
>> ID (PID) 12345, you may do something like this:
>> 
>>   sudo kill -SIGABRT 12345
>> 
>> You can use "ps" or "top" commands to learn PIDs of processes you want to 
>> signal.
>> 
>> 
>>> also added an item to the Netgate forum to, but not many users are Squid 
>>> wizards
>> 
>> Beyond using a reasonable coredump_dir value in squid.conf, the system 
>> administration problems you need to solve to enable Squid core dumps are 
>> most likely not specific to Squid.
>> 
>> 
>> HTH,
>> 
>> Alex.
>> 
>> 
>>> It’s funny as soon as I enabled the sysctl command and set the directory it 
>>> won’t crash anymore. I also changed it to reside on the loopback before it 
>>> was only on my lan interface. I run an external drive as my swap partition 
>>> or a swap drive, it works I get crash reports when playing around with 
>>> stuff. /dev/da0 or something it dumps to it and when it reboots shows in 
>>> the var/crash folder and will display on gui report ready, again if anyone 
>>> else knows pfSense let me know. I also added an item to the Netgate forum 
>>> to, but not many users are Squid wizards so it might take a long time to 
>>> get any community input over there.
>> 



Re: [squid-users] Error Question

2024-06-12 Thread Jonathan Lee
You know what it was: it needed to be bound to the loopback and not just the 
LAN. I am still working on getting a core dump file manually and will update 
once I get one. Chmod might be needed. 
Sent from my iPhone

> On Jun 12, 2024, at 06:13, Alex Rousskov  
> wrote:
> 
> On 2024-06-11 23:32, Jonathan Lee wrote:
> 
>> So I just run this on command line SIGABRT squid?
> 
> On Unix-like systems, the command to send a process a signal is called 
> "kill": https://www.man7.org/linux/man-pages/man1/kill.1p.html
> 
> For example, if you want to abort a Squid worker process that has OS process 
> ID (PID) 12345, you may do something like this:
> 
>sudo kill -SIGABRT 12345
> 
> You can use "ps" or "top" commands to learn PIDs of processes you want to 
> signal.
> 
> 
>> also added an item to the Netgate forum too, but not many users are Squid 
>> wizards
> 
> Beyond using a reasonable coredump_dir value in squid.conf, the system 
> administration problems you need to solve to enable Squid core dumps are most 
> likely not specific to Squid.
> 
> 
> HTH,
> 
> Alex.
> 
> 
>> It’s funny as soon as I enabled the sysctl command and set the directory it 
>> won’t crash anymore. I also changed it to reside on the loopback before it 
>> was only on my lan interface. I run an external drive as my swap partition 
>> or a swap drive, it works I get crash reports when playing around with 
>> stuff. /dev/da0 or something it dumps to it and when it reboots shows in the 
>> var/crash folder and will display on gui report ready, again if anyone else 
>> knows pfSense let me know. I also added an item to the Netgate forum too, but 
>> not many users are Squid wizards so it might take a long time to get any 
>> community input over there.
> 


Re: [squid-users] Error Question

2024-06-12 Thread Alex Rousskov

On 2024-06-11 23:32, Jonathan Lee wrote:


So I just run this on command line SIGABRT squid?


On Unix-like systems, the command to send a process a signal is called 
"kill": https://www.man7.org/linux/man-pages/man1/kill.1p.html


For example, if you want to abort a Squid worker process that has OS 
process ID (PID) 12345, you may do something like this:


sudo kill -SIGABRT 12345

You can use "ps" or "top" commands to learn PIDs of processes you want 
to signal.
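A minimal sketch of the same mechanics, using a throwaway process in place of a real Squid worker (the PID 12345 above is only an example):

```shell
# Send SIGABRT to a stand-in process and observe the resulting exit status.
# With core dumps enabled, a real Squid worker killed this way leaves a core
# file in its coredump_dir.
sleep 60 &             # stand-in for a Squid worker
pid=$!                 # its PID, as "ps" or "top" would report it
kill -ABRT "$pid"      # same mechanism as: sudo kill -SIGABRT 12345
wait "$pid" || echo "exit status: $?"   # prints: exit status: 134 (128 + SIGABRT=6)
```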



also added an item to the Netgate forum too, but not many users are 
Squid wizards


Beyond using a reasonable coredump_dir value in squid.conf, the system 
administration problems you need to solve to enable Squid core dumps are 
most likely not specific to Squid.
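For instance, a squid.conf fragment along these lines (the /var/crash path is only the pfSense-specific guess discussed in this thread; pick any directory the Squid effective user can write to):

```
# Directory where Squid should leave a core file if a process crashes
coredump_dir /var/crash
```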



HTH,

Alex.



It’s funny: as soon as I enabled the sysctl command and set the directory, it 
won’t crash anymore. I also changed it to reside on the loopback; before, it was 
only on my LAN interface. I run an external drive as my swap partition (a swap 
drive), and it works: I get crash reports when playing around with stuff. It 
dumps to /dev/da0 or something, and when it reboots the report shows in the 
var/crash folder and displays on the GUI as ready. Again, if anyone else knows 
pfSense, let me know. I also added an item to the Netgate forum too, but not 
many users are Squid wizards, so it might take a long time to get any community 
input over there.




Re: [squid-users] Error Question

2024-06-11 Thread Jonathan Lee
So I just run this on the command line: SIGABRT squid? It’s funny: as soon as I 
enabled the sysctl command and set the directory, it won’t crash anymore. I also 
changed it to reside on the loopback; before, it was only on my LAN interface. I 
run an external drive as my swap partition (a swap drive), and it works: I get 
crash reports when playing around with stuff. It dumps to /dev/da0 or something, 
and when it reboots the report shows in the var/crash folder and displays on the 
GUI as ready. Again, if anyone else knows pfSense, let me know. I also added an 
item to the Netgate forum too, but not many users are Squid wizards, so it might 
take a long time to get any community input over there. 
Sent from my iPhone

> On Jun 11, 2024, at 19:15, Alex Rousskov  
> wrote:
> 
> SIGABRT


Re: [squid-users] Error Question

2024-06-11 Thread Alex Rousskov

On 2024-06-11 18:09, Jonathan Lee wrote:
When I run sysctl debug.kdb.panic=1, I get a crash report for pfSense in 
/var/crash. Should my path for core dumps use my swap drive too?


It is a pfsense-specific question that I do not know the answer for. 
Perhaps others do. However, you may be able to get an answer faster if 
you set coredump_dir in squid.conf to /var/crash, start Squid with that 
configuration, and then kill a running Squid worker with SIGABRT.



HTH,

Alex.



On Jun 11, 2024, at 14:42, Alex Rousskov wrote:

On 2024-06-11 17:06, Jonathan Lee wrote:

I can’t locate the dump file for the segmentation fault; it never 
generates one.


I assume that you cannot locate the core dump file because your 
OS/environment is not configured to produce core dump files. Enabling 
core dumps is a sysadmin task that is mostly independent from Squid 
specifics. The FAQ I linked to earlier has some hints, but none are 
pfsense-specific. If others on the list do not tell you how to enable 
coredumps on pfsense, you may want to ask on pfsense or sysadmin forums.


Alternatively, you can try starting Squid from gdb or attaching gdb to 
a running Squid kid process, but neither is trivial, especially if you 
are using SMP Squid. The same FAQ has some hints.
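A rough sketch of that gdb route (the binary path is an assumption based on the FreeBSD ports layout; replace <kid-pid> with a worker PID from ps):

```
# Attach to a running Squid kid process:
gdb /usr/local/sbin/squid <kid-pid>

# Or grab a one-shot backtrace of all threads without an interactive session:
gdb -p <kid-pid> -batch -ex "thread apply all bt"
```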


BTW, to test whether core dumps are enabled in your environment, you 
do not need to wait for Squid to crash. Instead, you can send a 
SIGABRT signal (as "root" if needed) to any running test process and 
see whether it creates a core dump file when crashing.
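One quick prerequisite check, sketched for a POSIX shell (on FreeBSD/pfSense, core file naming is additionally controlled by the kern.corefile sysctl):

```shell
# A core dump can only be written if the process resource limit allows it.
ulimit -c                # "0" means core dumps are disabled in this shell
ulimit -c unlimited      # lift the soft limit (assumes the hard limit permits it)
ulimit -c                # should now print "unlimited"
```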




I am running a cache and it shows a swap file, however it is not readable.


There are many kinds of swap files, but the core file we need is 
probably not one of them.




I fixed the other issues.


Glad to hear that!

Alex.


On Jun 11, 2024, at 14:00, Alex Rousskov 
 wrote:


On 2024-06-11 14:46, Jonathan Lee wrote:
2024-05-16 14:10:23 [60780] loading dbfile 
/var/db/squidGuard/Nick_Blocks/urls.db

2024/06/11 10:23:05 kid1| FATAL: Received Segment Violation...dying.
2024/06/11 10:23:25 kid1| Starting Squid Cache version 5.8 for 
aarch64-portbld-freebsd14.0...

2024/06/11 10:23:25 kid1| Service Name: squid
2024-06-11 10:23:25 [9471] (squidGuard): can't write to logfile 
/var/log/squidGuard/squidGuard.log

2024-06-11 10:23:25 [9471] New setting: logdir: /var/squidGuard/log
2024-06-11 10:23:25 [9471] New setting: dbhome: /var/db/squidGuard
2024-06-11 10:23:25 [9471] init domainlist 
/var/db/squidGuard/blk_blacklists_adult/domains
2024-06-11 10:23:25 [9471] loading dbfile 
/var/db/squidGuard/blk_blacklists_adult/domains.db
2024-06-11 10:23:25 [9471] init expressionlist 
/var/db/squidGuard/blk_blacklists_adult/expressions

There is my log file being blocked for some reason


Just to avoid a misunderstanding: This mailing list thread is about 
the segmentation fault bug you have reported earlier. The above log 
is _not_ the requested backtrace that we need to triage that bug. If 
there is another problem you need help with, please start a new 
mailing list thread and detail _that_ problem there.



Thank you,

Alex.


On Jun 11, 2024, at 11:24, Jonathan Lee  
wrote:


thanks i have enabled

coredump_dir /var/squid/logs

I will submit a dump as soon as it occurs again

On Jun 11, 2024, at 11:17, Jonathan Lee 
 wrote:


I have attempted to upgrade the program fails to recognize 
”DHParamas Key Size” and will no longer use my certificates and 
shows many errors. I am kind of stuck on 5.8


I do not know where the core dump would be located on pfSense let 
me research this and get back to you.


On Jun 11, 2024, at 11:04, Alex Rousskov 
 wrote:


On 2024-06-11 13:24, Jonathan Lee wrote:

FATAL: Received Segment Violation...dying.
Does anyone know how to fix this??


Please post full backtrace from this failure:
https://wiki.squid-cache.org/SquidFaq/BugReporting#crashes-and-core-dumps

The other information you have already provided may help, but 
without a usable stack trace, it is unlikely that somebody will 
guess what is going on with your Squid.


Please note that you are running Squid v5 that is no longer 
supported by the Squid Project. You should upgrade to v6+. 
However, I do not know whether that upgrade is going to address 
the specific problem you are suffering from.



HTH,

Alex.









Re: [squid-users] Error Question

2024-06-11 Thread Jonathan Lee
When I run sysctl debug.kdb.panic=1, I get a crash report for pfSense in 
/var/crash. Should my path for core dumps use my swap drive too?



> On Jun 11, 2024, at 14:42, Alex Rousskov  
> wrote:
> 
> On 2024-06-11 17:06, Jonathan Lee wrote:
> 
>> I can’t locate the dump file for segmentation fault it never generates one.
> 
> I assume that you cannot locate the core dump file because your 
> OS/environment is not configured to produce core dump files. Enabling core 
> dumps is a sysadmin task that is mostly independent from Squid specifics. The 
> FAQ I linked to earlier has some hints, but none are pfsense-specific. If 
> others on the list do not tell you how to enable coredumps on pfsense, you 
> may want to ask on pfsense or sysadmin forums.
> 
> Alternatively, you can try starting Squid from gdb or attaching gdb to a 
> running Squid kid process, but neither is trivial, especially if you are 
> using SMP Squid. The same FAQ has some hints.
> 
> BTW, to test whether core dumps are enabled in your environment, you do not 
> need to wait for Squid to crash. Instead, you can send a SIGABRT signal (as 
> "root" if needed) to any running test process and see whether it creates a 
> core dump file when crashing.
> 
> 
>> I am running cache it shows a swap file however it is not readable.
> 
> There are many kinds of swap files, but the core file we need is probably not 
> one of them.
> 
> 
>> I fixed the other issues.
> 
> Glad to hear that!
> 
> Alex.
> 
> 
>>> On Jun 11, 2024, at 14:00, Alex Rousskov  
>>> wrote:
>>> 
>>> On 2024-06-11 14:46, Jonathan Lee wrote:
 2024-05-16 14:10:23 [60780] loading dbfile 
 /var/db/squidGuard/Nick_Blocks/urls.db
 2024/06/11 10:23:05 kid1| FATAL: Received Segment Violation...dying.
 2024/06/11 10:23:25 kid1| Starting Squid Cache version 5.8 for 
 aarch64-portbld-freebsd14.0...
 2024/06/11 10:23:25 kid1| Service Name: squid
 2024-06-11 10:23:25 [9471] (squidGuard): can't write to logfile 
 /var/log/squidGuard/squidGuard.log
 2024-06-11 10:23:25 [9471] New setting: logdir: /var/squidGuard/log
 2024-06-11 10:23:25 [9471] New setting: dbhome: /var/db/squidGuard
 2024-06-11 10:23:25 [9471] init domainlist 
 /var/db/squidGuard/blk_blacklists_adult/domains
 2024-06-11 10:23:25 [9471] loading dbfile 
 /var/db/squidGuard/blk_blacklists_adult/domains.db
 2024-06-11 10:23:25 [9471] init expressionlist 
 /var/db/squidGuard/blk_blacklists_adult/expressions
 There is my log file being blocked for some reason
>>> 
>>> Just to avoid a misunderstanding: This mailing list thread is about the 
>>> segmentation fault bug you have reported earlier. The above log is _not_ 
>>> the requested backtrace that we need to triage that bug. If there is 
>>> another problem you need help with, please start a new mailing list thread 
>>> and detail _that_ problem there.
>>> 
>>> 
>>> Thank you,
>>> 
>>> Alex.
>>> 
>>> 
> On Jun 11, 2024, at 11:24, Jonathan Lee  wrote:
> 
> thanks i have enabled
> 
> coredump_dir /var/squid/logs
> 
> I will submit a dump as soon as it occurs again
> 
>> On Jun 11, 2024, at 11:17, Jonathan Lee  wrote:
>> 
>> I have attempted to upgrade the program fails to recognize ”DHParamas 
>> Key Size” and will no longer use my certificates and shows many errors. 
>> I am kind of stuck on 5.8
>> 
>> I do not know where the core dump would be located on pfSense let me 
>> research this and get back to you.
>> 
>>> On Jun 11, 2024, at 11:04, Alex Rousskov 
>>>  wrote:
>>> 
>>> On 2024-06-11 13:24, Jonathan Lee wrote:
 FATAL: Received Segment Violation...dying.
Does anyone know how to fix this??
>>> 
>>> Please post full backtrace from this failure:
>>> https://wiki.squid-cache.org/SquidFaq/BugReporting#crashes-and-core-dumps
>>> 
>>> The other information you have already provided may help, but without a 
>>> usable stack trace, it is unlikely that somebody will guess what is 
>>> going on with your Squid.
>>> 
>>> Please note that you are running Squid v5 that is no longer supported 
>>> by the Squid Project. You should upgrade to v6+. However, I do not know 
>>> whether that upgrade is going to address the specific problem you are 
>>> suffering from.
>>> 
>>> 
>>> HTH,
>>> 
>>> Alex.
>>> 
> 



Re: [squid-users] Error Question

2024-06-11 Thread Alex Rousskov

On 2024-06-11 17:06, Jonathan Lee wrote:

I can’t locate the dump file for the segmentation fault; it never generates 
one. 


I assume that you cannot locate the core dump file because your 
OS/environment is not configured to produce core dump files. Enabling 
core dumps is a sysadmin task that is mostly independent from Squid 
specifics. The FAQ I linked to earlier has some hints, but none are 
pfsense-specific. If others on the list do not tell you how to enable 
coredumps on pfsense, you may want to ask on pfsense or sysadmin forums.


Alternatively, you can try starting Squid from gdb or attaching gdb to a 
running Squid kid process, but neither is trivial, especially if you are 
using SMP Squid. The same FAQ has some hints.


BTW, to test whether core dumps are enabled in your environment, you do 
not need to wait for Squid to crash. Instead, you can send a SIGABRT 
signal (as "root" if needed) to any running test process and see whether 
it creates a core dump file when crashing.




I am running a cache and it shows a swap file, however it is not readable.


There are many kinds of swap files, but the core file we need is 
probably not one of them.




I fixed the other issues.


Glad to hear that!

Alex.


On Jun 11, 2024, at 14:00, Alex Rousskov 
 wrote:


On 2024-06-11 14:46, Jonathan Lee wrote:
2024-05-16 14:10:23 [60780] loading dbfile 
/var/db/squidGuard/Nick_Blocks/urls.db

2024/06/11 10:23:05 kid1| FATAL: Received Segment Violation...dying.
2024/06/11 10:23:25 kid1| Starting Squid Cache version 5.8 for 
aarch64-portbld-freebsd14.0...

2024/06/11 10:23:25 kid1| Service Name: squid
2024-06-11 10:23:25 [9471] (squidGuard): can't write to logfile 
/var/log/squidGuard/squidGuard.log

2024-06-11 10:23:25 [9471] New setting: logdir: /var/squidGuard/log
2024-06-11 10:23:25 [9471] New setting: dbhome: /var/db/squidGuard
2024-06-11 10:23:25 [9471] init domainlist 
/var/db/squidGuard/blk_blacklists_adult/domains
2024-06-11 10:23:25 [9471] loading dbfile 
/var/db/squidGuard/blk_blacklists_adult/domains.db
2024-06-11 10:23:25 [9471] init expressionlist 
/var/db/squidGuard/blk_blacklists_adult/expressions

There is my log file being blocked for some reason


Just to avoid a misunderstanding: This mailing list thread is about 
the segmentation fault bug you have reported earlier. The above log is 
_not_ the requested backtrace that we need to triage that bug. If 
there is another problem you need help with, please start a new 
mailing list thread and detail _that_ problem there.



Thank you,

Alex.


On Jun 11, 2024, at 11:24, Jonathan Lee  
wrote:


thanks i have enabled

coredump_dir /var/squid/logs

I will submit a dump as soon as it occurs again

On Jun 11, 2024, at 11:17, Jonathan Lee  
wrote:


I have attempted to upgrade the program fails to recognize 
”DHParamas Key Size” and will no longer use my certificates and 
shows many errors. I am kind of stuck on 5.8


I do not know where the core dump would be located on pfSense let 
me research this and get back to you.


On Jun 11, 2024, at 11:04, Alex Rousskov 
 wrote:


On 2024-06-11 13:24, Jonathan Lee wrote:

FATAL: Received Segment Violation...dying.
Does anyone know how to fix this??


Please post full backtrace from this failure:
https://wiki.squid-cache.org/SquidFaq/BugReporting#crashes-and-core-dumps

The other information you have already provided may help, but 
without a usable stack trace, it is unlikely that somebody will 
guess what is going on with your Squid.


Please note that you are running Squid v5 that is no longer 
supported by the Squid Project. You should upgrade to v6+. 
However, I do not know whether that upgrade is going to address 
the specific problem you are suffering from.



HTH,

Alex.







Re: [squid-users] Error Question

2024-06-11 Thread Jonathan Lee
I can’t locate the dump file for the segmentation fault; it never generates one. 
I am running a cache and it shows a swap file, however it is not readable.

I fixed the other issues. 

> On Jun 11, 2024, at 14:00, Alex Rousskov  
> wrote:
> 
> On 2024-06-11 14:46, Jonathan Lee wrote:
>> 2024-05-16 14:10:23 [60780] loading dbfile 
>> /var/db/squidGuard/Nick_Blocks/urls.db
>> 2024/06/11 10:23:05 kid1| FATAL: Received Segment Violation...dying.
>> 2024/06/11 10:23:25 kid1| Starting Squid Cache version 5.8 for 
>> aarch64-portbld-freebsd14.0...
>> 2024/06/11 10:23:25 kid1| Service Name: squid
>> 2024-06-11 10:23:25 [9471] (squidGuard): can't write to logfile 
>> /var/log/squidGuard/squidGuard.log
>> 2024-06-11 10:23:25 [9471] New setting: logdir: /var/squidGuard/log
>> 2024-06-11 10:23:25 [9471] New setting: dbhome: /var/db/squidGuard
>> 2024-06-11 10:23:25 [9471] init domainlist 
>> /var/db/squidGuard/blk_blacklists_adult/domains
>> 2024-06-11 10:23:25 [9471] loading dbfile 
>> /var/db/squidGuard/blk_blacklists_adult/domains.db
>> 2024-06-11 10:23:25 [9471] init expressionlist 
>> /var/db/squidGuard/blk_blacklists_adult/expressions
>> There is my log file being blocked for some reason
> 
> Just to avoid a misunderstanding: This mailing list thread is about the 
> segmentation fault bug you have reported earlier. The above log is _not_ the 
> requested backtrace that we need to triage that bug. If there is another 
> problem you need help with, please start a new mailing list thread and detail 
> _that_ problem there.
> 
> 
> Thank you,
> 
> Alex.
> 
> 
>>> On Jun 11, 2024, at 11:24, Jonathan Lee  wrote:
>>> 
>>> thanks i have enabled
>>> 
>>> coredump_dir /var/squid/logs
>>> 
>>> I will submit a dump as soon as it occurs again
>>> 
 On Jun 11, 2024, at 11:17, Jonathan Lee  wrote:
 
 I have attempted to upgrade the program fails to recognize ”DHParamas Key 
 Size” and will no longer use my certificates and shows many errors. I am 
 kind of stuck on 5.8
 
 I do not know where the core dump would be located on pfSense let me 
 research this and get back to you.
 
> On Jun 11, 2024, at 11:04, Alex Rousskov 
>  wrote:
> 
> On 2024-06-11 13:24, Jonathan Lee wrote:
>> FATAL: Received Segment Violation...dying.
>> Does anyone know how to fix this??
> 
> Please post full backtrace from this failure:
> https://wiki.squid-cache.org/SquidFaq/BugReporting#crashes-and-core-dumps
> 
> The other information you have already provided may help, but without a 
> usable stack trace, it is unlikely that somebody will guess what is going 
> on with your Squid.
> 
> Please note that you are running Squid v5 that is no longer supported by 
> the Squid Project. You should upgrade to v6+. However, I do not know 
> whether that upgrade is going to address the specific problem you are 
> suffering from.
> 
> 
> HTH,
> 
> Alex.
> 



Re: [squid-users] Error Question

2024-06-11 Thread Alex Rousskov

On 2024-06-11 14:46, Jonathan Lee wrote:

2024-05-16 14:10:23 [60780] loading dbfile 
/var/db/squidGuard/Nick_Blocks/urls.db
2024/06/11 10:23:05 kid1| FATAL: Received Segment Violation...dying.
2024/06/11 10:23:25 kid1| Starting Squid Cache version 5.8 for 
aarch64-portbld-freebsd14.0...
2024/06/11 10:23:25 kid1| Service Name: squid
2024-06-11 10:23:25 [9471] (squidGuard): can't write to logfile 
/var/log/squidGuard/squidGuard.log
2024-06-11 10:23:25 [9471] New setting: logdir: /var/squidGuard/log
2024-06-11 10:23:25 [9471] New setting: dbhome: /var/db/squidGuard
2024-06-11 10:23:25 [9471] init domainlist 
/var/db/squidGuard/blk_blacklists_adult/domains
2024-06-11 10:23:25 [9471] loading dbfile 
/var/db/squidGuard/blk_blacklists_adult/domains.db
2024-06-11 10:23:25 [9471] init expressionlist 
/var/db/squidGuard/blk_blacklists_adult/expressions

There is my log file being blocked for some reason


Just to avoid a misunderstanding: This mailing list thread is about the 
segmentation fault bug you have reported earlier. The above log is _not_ 
the requested backtrace that we need to triage that bug. If there is 
another problem you need help with, please start a new mailing list 
thread and detail _that_ problem there.



Thank you,

Alex.



On Jun 11, 2024, at 11:24, Jonathan Lee  wrote:

thanks i have enabled

coredump_dir /var/squid/logs

I will submit a dump as soon as it occurs again


On Jun 11, 2024, at 11:17, Jonathan Lee  wrote:

I have attempted to upgrade the program fails to recognize ”DHParamas Key Size” 
and will no longer use my certificates and shows many errors. I am kind of 
stuck on 5.8

I do not know where the core dump would be located on pfSense let me research 
this and get back to you.


On Jun 11, 2024, at 11:04, Alex Rousskov  
wrote:

On 2024-06-11 13:24, Jonathan Lee wrote:

FATAL: Received Segment Violation...dying.
Does anyone know how to fix this??


Please post full backtrace from this failure:
https://wiki.squid-cache.org/SquidFaq/BugReporting#crashes-and-core-dumps

The other information you have already provided may help, but without a usable 
stack trace, it is unlikely that somebody will guess what is going on with your 
Squid.

Please note that you are running Squid v5 that is no longer supported by the 
Squid Project. You should upgrade to v6+. However, I do not know whether that 
upgrade is going to address the specific problem you are suffering from.


HTH,

Alex.









Re: [squid-users] Error Question

2024-06-11 Thread Jonathan Lee
2024-05-16 14:10:23 [60780] loading dbfile 
/var/db/squidGuard/Nick_Blocks/urls.db
2024/06/11 10:23:05 kid1| FATAL: Received Segment Violation...dying.
2024/06/11 10:23:25 kid1| Starting Squid Cache version 5.8 for 
aarch64-portbld-freebsd14.0...
2024/06/11 10:23:25 kid1| Service Name: squid
2024-06-11 10:23:25 [9471] (squidGuard): can't write to logfile 
/var/log/squidGuard/squidGuard.log
2024-06-11 10:23:25 [9471] New setting: logdir: /var/squidGuard/log
2024-06-11 10:23:25 [9471] New setting: dbhome: /var/db/squidGuard
2024-06-11 10:23:25 [9471] init domainlist 
/var/db/squidGuard/blk_blacklists_adult/domains
2024-06-11 10:23:25 [9471] loading dbfile 
/var/db/squidGuard/blk_blacklists_adult/domains.db
2024-06-11 10:23:25 [9471] init expressionlist 
/var/db/squidGuard/blk_blacklists_adult/expressions

There is my log file being blocked for some reason 

> On Jun 11, 2024, at 11:24, Jonathan Lee  wrote:
> 
> thanks i have enabled
> 
> coredump_dir /var/squid/logs
> 
> I will submit a dump as soon as it occurs again
> 
>> On Jun 11, 2024, at 11:17, Jonathan Lee  wrote:
>> 
>> I have attempted to upgrade the program fails to recognize ”DHParamas Key 
>> Size” and will no longer use my certificates and shows many errors. I am 
>> kind of stuck on 5.8
>> 
>> I do not know where the core dump would be located on pfSense let me 
>> research this and get back to you. 
>> 
>>> On Jun 11, 2024, at 11:04, Alex Rousskov  
>>> wrote:
>>> 
>>> On 2024-06-11 13:24, Jonathan Lee wrote:
 FATAL: Received Segment Violation...dying.
Does anyone know how to fix this??
>>> 
>>> Please post full backtrace from this failure:
>>> https://wiki.squid-cache.org/SquidFaq/BugReporting#crashes-and-core-dumps
>>> 
>>> The other information you have already provided may help, but without a 
>>> usable stack trace, it is unlikely that somebody will guess what is going 
>>> on with your Squid.
>>> 
>>> Please note that you are running Squid v5 that is no longer supported by 
>>> the Squid Project. You should upgrade to v6+. However, I do not know 
>>> whether that upgrade is going to address the specific problem you are 
>>> suffering from.
>>> 
>>> 
>>> HTH,
>>> 
>>> Alex.
>>> 
>> 
> 



Re: [squid-users] Error Question

2024-06-11 Thread Jonathan Lee
thanks i have enabled

coredump_dir /var/squid/logs

I will submit a dump as soon as it occurs again

> On Jun 11, 2024, at 11:17, Jonathan Lee  wrote:
> 
> I have attempted to upgrade the program fails to recognize ”DHParamas Key 
> Size” and will no longer use my certificates and shows many errors. I am kind 
> of stuck on 5.8
> 
> I do not know where the core dump would be located on pfSense let me research 
> this and get back to you. 
> 
>> On Jun 11, 2024, at 11:04, Alex Rousskov  
>> wrote:
>> 
>> On 2024-06-11 13:24, Jonathan Lee wrote:
>>> FATAL: Received Segment Violation...dying.
>>> Does anyone know how to fix this??
>> 
>> Please post full backtrace from this failure:
>> https://wiki.squid-cache.org/SquidFaq/BugReporting#crashes-and-core-dumps
>> 
>> The other information you have already provided may help, but without a 
>> usable stack trace, it is unlikely that somebody will guess what is going on 
>> with your Squid.
>> 
>> Please note that you are running Squid v5 that is no longer supported by the 
>> Squid Project. You should upgrade to v6+. However, I do not know whether 
>> that upgrade is going to address the specific problem you are suffering from.
>> 
>> 
>> HTH,
>> 
>> Alex.
>> 
> 



Re: [squid-users] Error Question

2024-06-11 Thread Jonathan Lee
I have attempted to upgrade, but the program fails to recognize ”DHParamas Key 
Size” and will no longer use my certificates; it shows many errors. I am kind of 
stuck on 5.8

I do not know where the core dump would be located on pfSense; let me research 
this and get back to you. 

> On Jun 11, 2024, at 11:04, Alex Rousskov  
> wrote:
> 
> On 2024-06-11 13:24, Jonathan Lee wrote:
>> FATAL: Received Segment Violation...dying.
>> Does anyone know how to fix this??
> 
> Please post full backtrace from this failure:
> https://wiki.squid-cache.org/SquidFaq/BugReporting#crashes-and-core-dumps
> 
> The other information you have already provided may help, but without a 
> usable stack trace, it is unlikely that somebody will guess what is going on 
> with your Squid.
> 
> Please note that you are running Squid v5 that is no longer supported by the 
> Squid Project. You should upgrade to v6+. However, I do not know whether that 
> upgrade is going to address the specific problem you are suffering from.
> 
> 
> HTH,
> 
> Alex.
> 



Re: [squid-users] Error Question

2024-06-11 Thread Alex Rousskov

On 2024-06-11 13:24, Jonathan Lee wrote:

FATAL: Received Segment Violation...dying.

Does anyone know how to fix this??


Please post full backtrace from this failure:
https://wiki.squid-cache.org/SquidFaq/BugReporting#crashes-and-core-dumps

The other information you have already provided may help, but without a 
usable stack trace, it is unlikely that somebody will guess what is 
going on with your Squid.


Please note that you are running Squid v5 that is no longer supported by 
the Squid Project. You should upgrade to v6+. However, I do not know 
whether that upgrade is going to address the specific problem you are 
suffering from.



HTH,

Alex.



Re: [squid-users] Error Question

2024-06-11 Thread Jonathan Lee
> Can you give any more information such as:
> 
> 1. Which version of Squid is this?
> 
>   squidclamav-7.2, squid_radius_auth-1.10, squid-5.8, c-icap-modules-0.5.5_1
> 
> 2. Which version of which Operating System is this running under?
> 
>   pfSense 23.05.1-Release (arm64)
> 
> 3. What appears in Squid's log files preceding this report?
> 
>   Nothing out of the ordinary 

Jun 11 10:29:15  kernel     pid 4993 (squid), jid 0, uid 100: exited on signal 6
Jun 11 10:29:15  (squid-1)  4993   FATAL: Received Segment Violation...dying. listening port: 127.0.0.1:3128
Jun 11 10:23:13  kernel     pid 43536 (squid), jid 0, uid 100: exited on signal 6
Jun 11 10:23:05  (squid-1)  43536  FATAL: Received Segment Violation...dying. connection: conn749025 local=192.168.1.1:3128 remote=192.168.1.5:59502 flags=1
Jun 11 10:19:00  sshguard   98282  Now monitoring attacks.

The logs prior to this are just normal. 
> 4. What traffic was going through Squid at the time?
Facebook causes issues every time this occurs.
> 
> 5. Is this an isolated incident or do you see this more frequently?
This is not isolated; it occurs often.
> 
> 6. Can you reproduce the problem (and if so, tell us how to do so)?
This can be reproduced by looking at Facebook reels for 10-15 minutes; after it 
freezes, Squid restarts and the error resolves.
I have tested many custom options; they all do this with or without StoreID, and 
both do the same thing. 
CONFIG:
# This file is automatically generated by pfSense
# Do not edit manually !

http_port 192.168.1.1:3128 ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
tls-cafile=/usr/local/share/certs/ca-root-nss.crt 
capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 options=NO_SSLv3

http_port 127.0.0.1:3128 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
tls-cafile=/usr/local/share/certs/ca-root-nss.crt 
capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 options=NO_SSLv3

https_port 127.0.0.1:3129 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
tls-cafile=/usr/local/share/certs/ca-root-nss.crt 
capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 options=NO_SSLv3

icp_port 0
digest_generation off
dns_v4_first on
pid_filename /var/run/squid/squid.pid
cache_effective_user squid
cache_effective_group proxy
error_default_language en
icon_directory /usr/local/etc/squid/icons
visible_hostname Lee_Family.home.arpa
cache_mgr jonathanlee...@gmail.com
access_log /var/squid/logs/access.log
cache_log /var/squid/logs/cache.log
cache_store_log none
netdb_filename /var/squid/logs/netdb.state
pinger_enable on
pinger_program /usr/local/libexec/squid/pinger
sslcrtd_program /usr/local/libexec/squid/security_file_certgen -s 
/var/squid/lib/ssl_db -M 4MB -b 2048
tls_outgoing_options cafile=/usr/local/share/certs/ca-root-nss.crt
tls_outgoing_options capath=/usr/local/share/certs/
tls_outgoing_options options=NO_SSLv3
tls_outgoing_options 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
sslcrtd_children 10

logfile_rotate 0
debug_options rotate=0
shutdown_lifetime 3 seconds
# Allow local network(s) on interface(s)
acl localnet src  192.168.1.0/27
forwarded_for transparent
httpd_suppress_version_string on
uri_whitespace strip

acl block_hours time 00:30-05:00
ssl_bump terminate all block_hours
http_access deny all block_hours
acl getmethod method GET
acl to_ipv6 dst ipv6
acl from_ipv6 src ipv6

#tls_outgoing_options options=0x4
request_header_access Accept-Ranges deny all
reply_header_access Accept-Ranges deny all
request_header_replace Accept-Ranges none
reply_header_replace Accept-Ranges none


tls_outgoing_options 
cipher=HIGH:MEDIUM:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
tls_outgoing_options 

Re: [squid-users] Error Question

2024-06-11 Thread Antony Stone
On Tuesday 11 June 2024 at 19:24:43, Jonathan Lee wrote:

> FATAL: Received Segment Violation...dying. connection: conn749025
> local=192.168.1.1:3128 remote=192.168.1.5:59502 flags=1
> 
> Does anyone know how to fix this?

Can you give any more information such as:

1. Which version of Squid is this?

2. Which version of which Operating System is this running under?

3. What appears in Squid's log files preceding this report?

4. What traffic was going through Squid at the time?

5. Is this an isolated incident or do you see this more frequently?

6. Can you reproduce the problem (and if so, tell us how to do so)?


The more information you can give us about your system, the more likely it is 
that we can understand what it might be doing.


Antony.

-- 
"I estimate there's a world market for about five computers."

 - Thomas J Watson, Chairman of IBM

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] Error Question

2024-06-11 Thread Jonathan Lee
FATAL: Received Segment Violation...dying. connection: conn749025 
local=192.168.1.1:3128 remote=192.168.1.5:59502 flags=1

Does anyone know how to fix this?


Re: [squid-users] Howto enable openssl option UNSAFE_LEGACY_RENEGOTIATION ?

2024-06-11 Thread Alex Rousskov

On 2024-06-11 03:33, Dieter Bloms wrote:


I've added that option like:
tls_outgoing_options options=0x40000 ...
but no change.

I tried 0x4 (for SSL_OP_LEGACY_SERVER_CONNECT), but also without any change.


I have seen this behavior before. My current working theory is that 
Squid ignores tls_outgoing_options when SslBump peeks or stares at 
Squid-to-server TLS connection. In case of staring, this smells like a 
Squid bug to me. Peeking case is more nuanced, but Squid code 
modifications are warranted in that case as well.


If your Squid is peeking and splicing Squid-origin connection, then 
please try the following unofficial patch:

https://github.com/measurement-factory/squid/commit/4dad35eb.patch

The patch sets SSL_OP_LEGACY_SERVER_CONNECT unconditionally when 
peeking, for the reasons explained in the patch. This change has been 
proposed for official adoption at

https://github.com/squid-cache/squid/pull/1839


I do not have a patch for the staring use case.


HTH,

Alex.
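For readers who want to see the effect of that option outside Squid, here is a minimal Python sketch (my own illustration, not the patch itself) that sets the same OpenSSL option bit on an outgoing TLS context. `SSL_OP_LEGACY_SERVER_CONNECT` has the OpenSSL bit value 0x4:

```python
import ssl

# Hedged sketch (not the Squid patch itself): enabling OpenSSL's legacy
# (pre-RFC 5746) server renegotiation on an outgoing TLS context, which
# is what SSL_OP_LEGACY_SERVER_CONNECT (bit value 0x4) does.
SSL_OP_LEGACY_SERVER_CONNECT = 0x4

ctx = ssl.create_default_context()
ctx.options |= SSL_OP_LEGACY_SERVER_CONNECT

# The bit is now set on the context's option mask.
print(hex(int(ctx.options) & SSL_OP_LEGACY_SERVER_CONNECT))  # 0x4
```

This mirrors what `tls_outgoing_options options=0x4` asks Squid to do for its own outgoing connections.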




I use a debian bookworm container and when I use openssl s_client
without -legacy_server_connect I can't established a tls connection

--snip--
root@tarski:/# openssl s_client -connect cisco.com:443
CONNECTED(0003)
4097F217F17F:error:0A000152:SSL routines:final_renegotiate:unsafe legacy 
renegotiation disabled:../ssl/statem/extensions.c:893:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5177 bytes and written 322 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
 Protocol  : TLSv1.2
 Cipher: 
 Session-ID: 
869B4016868DFF23D1DAB3A33F99F9879274C1F62FD45BF9DF839B27735FC72C
 Session-ID-ctx:
 Master-Key:
 PSK identity: None
 PSK identity hint: None
 SRP username: None
 Start Time: 1718090662
 Timeout   : 7200 (sec)
 Verify return code: 0 (ok)
 Extended master secret: no
---
root@tarski:/#
--snip--

but when I add the -legacy_server_connect option I can as shown here:

--snip--
---
root@cdxiaphttpproxy04:/# openssl s_client -legacy_server_connect -connect 
cisco.com:443
CONNECTED(0003)
depth=2 C = US, O = IdenTrust, CN = IdenTrust Commercial Root CA 1
verify return:1
depth=1 C = US, O = IdenTrust, OU = HydrantID Trusted Certificate Service, CN = 
HydrantID Server CA O1
verify return:1
depth=0 C = US, ST = California, L = San Jose, O = Cisco Systems Inc., CN = 
www.cisco.com
verify return:1
---
Certificate chain
  0 s:C = US, ST = California, L = San Jose, O = Cisco Systems Inc., CN = 
www.cisco.com
i:C = US, O = IdenTrust, OU = HydrantID Trusted Certificate Service, CN = 
HydrantID Server CA O1
a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
v:NotBefore: Nov 14 05:48:20 2023 GMT; NotAfter: Nov 13 05:47:20 2024 GMT
  1 s:C = US, O = IdenTrust, OU = HydrantID Trusted Certificate Service, CN = 
HydrantID Server CA O1
i:C = US, O = IdenTrust, CN = IdenTrust Commercial Root CA 1
a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
v:NotBefore: Dec 12 16:56:15 2019 GMT; NotAfter: Dec 12 16:56:15 2029 GMT
  2 s:C = US, O = IdenTrust, CN = IdenTrust Commercial Root CA 1
i:C = US, O = IdenTrust, CN = IdenTrust Commercial Root CA 1
a:PKEY: rsaEncryption, 4096 (bit); sigalg: RSA-SHA256
v:NotBefore: Jan 16 18:12:23 2014 GMT; NotAfter: Jan 16 18:12:23 2034 GMT
---
Server certificate
-BEGIN CERTIFICATE-
MIIHkDCCBnigAwIBAgIQQAGLzF+ffeG2bq2GaN2HuTANBgkqhkiG9w0BAQsFADBy
MQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0MS4wLAYDVQQLEyVIeWRy
YW50SUQgVHJ1c3RlZCBDZXJ0aWZpY2F0ZSBTZXJ2aWNlMR8wHQYDVQQDExZIeWRy
YW50SUQgU2VydmVyIENBIE8xMB4XDTIzMTExNDA1NDgyMFoXDTI0MTExMzA1NDcy
MFowajELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExETAPBgNVBAcT
CFNhbiBKb3NlMRswGQYDVQQKExJDaXNjbyBTeXN0ZW1zIEluYy4xFjAUBgNVBAMT
DXd3dy5jaXNjby5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC5
CZi7tsogSJCAE5Zu78Z57FBC67OpK0OkIyVeixqKg57K/wqE4UF59GHHHVwOZhGv
VgsD3jjiQOhxZbUJnaen0+cMH6s1lSRZtiIi2K/Z1Oy+1Gytpw2bYZTbuWHWk1/e
VUgH8dS6PbwQp+/KAzV52Z98asWGzxWYqfJV5GUdC5V2MPDuDRfbrrl6uxVb05tN
69xfCIAR2KJtM64UJifesa7ItQBMzh1TYqPa4A15Ku6MgiuOkUddCrkZWRt1uevD
E6k47uR4wcuM/hF/eSX8wl/BaKrM3eiAc94Thom0wvKzlG0uziL4cux/O6O0na0w
o3WPfbSQltquqVPb9Z1JAgMBAAGjggQoMIIEJDAOBgNVHQ8BAf8EBAMCBaAwgYUG
CCsGAQUFBwEBBHkwdzAwBggrBgEFBQcwAYYkaHR0cDovL2NvbW1lcmNpYWwub2Nz
cC5pZGVudHJ1c3QuY29tMEMGCCsGAQUFBzAChjdodHRwOi8vdmFsaWRhdGlvbi5p
ZGVudHJ1c3QuY29tL2NlcnRzL2h5ZHJhbnRpZGNhTzEucDdjMB8GA1UdIwQYMBaA
FIm4m7ae7fuwxr0N7GdOPKOSnS35MCEGA1UdIAQaMBgwCAYGZ4EMAQICMAwGCmCG
SAGG+S8ABgMwRgYDVR0fBD8wPTA7oDmgN4Y1aHR0cDovL3ZhbGlkYXRpb24uaWRl
bnRydXN0LmNvbS9jcmwvaHlkcmFudGlkY2FvMS5jcmwwggE9BgNVHREEggE0MIIB
MIIJY2lzY28uY29tgg13d3cuY2lzY28uY29tgg53d3cxLmNpc2NvLmNvbYIOd3d3
Mi5jaXNjby5jb22CDnd3dzMuY2lzY28uY29tghB3d3ctMDEuY2lzY28uY29tghB3
d3ctMDIuY2lzY28uY29tghF3d3ctcnRwLmNpc2NvLmNvbYISd3d3MS1zczIuY2lz

Re: [squid-users] Howto enable openssl option UNSAFE_LEGACY_RENEGOTIATION ?

2024-06-11 Thread Dieter Bloms
Hello Alex,

thank you for your answer!

On Mon, Jun 10, Alex Rousskov wrote:

> On 2024-06-10 08:10, Dieter Bloms wrote:
> 
> > I have activated ssl_bump and must activate the UNSAFE_LEGACY_RENEGOTIATION 
> > option to enable access to https://cisco.com.
> > The web server does not support secure renegotiation.
> > 
> > I have tried to set the following options, but squid does not recognize any 
> > of them:
> > 
> > tls_outgoing_options options=UNSAFE_LEGACY_RENEGOTIATION
> > 
> > or
> > 
> > tls_outgoing_options options=ALLOW_UNSAFE_LEGACY_RENEGOTIATION
> > 
> > and
> > 
> > tls_outgoing_options options=SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION
> > 
> > but no matter which syntax I use, I always get the message during squid-k 
> > parse:
> > 
> > “2024/06/10 14:08:17| ERROR: Unknown TLS option 
> > ALLOW_UNSAFE_LEGACY_RENEGOTIATION”
> > 
> > How can I activate secure renegotiation for squid?
> 
> To set an OpenSSL connection option that Squid does not know by name, use
> that option hex value (based on your OpenSSL sources). For example:
> 
> # SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION is defined to be
> # SSL_OP_BIT(18) which is equal to (1 << 18) or 0x40000 in hex.
> tls_outgoing_options options=0x40000
> 
> Disclaimer: I have not tested the above and do not know whether adding that
> option achieves what you want to achieve.

I've added that option like:
tls_outgoing_options options=0x40000 capath=/etc/ssl/certs min-version=1.2 
cipher=TLSv1.2:+aRSA:+SHA384:+SHA256:+DH:-kRSA:!PSK:!eNULL:!aNULL:!DSS:!AESCCM:!CAMELLIA:!ARIA:AES256-SHA:AES128-SHA:@SECLEVEL=1
but no change.

I tried 0x4 (for SSL_OP_LEGACY_SERVER_CONNECT), but also without any change.

I use a debian bookworm container and when I use openssl s_client
without -legacy_server_connect I can't established a tls connection

--snip--
root@tarski:/# openssl s_client -connect cisco.com:443
CONNECTED(0003)
4097F217F17F:error:0A000152:SSL routines:final_renegotiate:unsafe legacy 
renegotiation disabled:../ssl/statem/extensions.c:893:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5177 bytes and written 322 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol  : TLSv1.2
Cipher: 
Session-ID: 869B4016868DFF23D1DAB3A33F99F9879274C1F62FD45BF9DF839B27735FC72C
Session-ID-ctx: 
Master-Key: 
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1718090662
Timeout   : 7200 (sec)
Verify return code: 0 (ok)
Extended master secret: no
---
root@tarski:/# 
--snip--

but when I add the -legacy_server_connect option I can as shown here:

--snip--
---
root@cdxiaphttpproxy04:/# openssl s_client -legacy_server_connect -connect 
cisco.com:443
CONNECTED(0003)
depth=2 C = US, O = IdenTrust, CN = IdenTrust Commercial Root CA 1
verify return:1
depth=1 C = US, O = IdenTrust, OU = HydrantID Trusted Certificate Service, CN = 
HydrantID Server CA O1
verify return:1
depth=0 C = US, ST = California, L = San Jose, O = Cisco Systems Inc., CN = 
www.cisco.com
verify return:1
---
Certificate chain
 0 s:C = US, ST = California, L = San Jose, O = Cisco Systems Inc., CN = 
www.cisco.com
   i:C = US, O = IdenTrust, OU = HydrantID Trusted Certificate Service, CN = 
HydrantID Server CA O1
   a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
   v:NotBefore: Nov 14 05:48:20 2023 GMT; NotAfter: Nov 13 05:47:20 2024 GMT
 1 s:C = US, O = IdenTrust, OU = HydrantID Trusted Certificate Service, CN = 
HydrantID Server CA O1
   i:C = US, O = IdenTrust, CN = IdenTrust Commercial Root CA 1
   a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
   v:NotBefore: Dec 12 16:56:15 2019 GMT; NotAfter: Dec 12 16:56:15 2029 GMT
 2 s:C = US, O = IdenTrust, CN = IdenTrust Commercial Root CA 1
   i:C = US, O = IdenTrust, CN = IdenTrust Commercial Root CA 1
   a:PKEY: rsaEncryption, 4096 (bit); sigalg: RSA-SHA256
   v:NotBefore: Jan 16 18:12:23 2014 GMT; NotAfter: Jan 16 18:12:23 2034 GMT
---
Server certificate
-BEGIN CERTIFICATE-
MIIHkDCCBnigAwIBAgIQQAGLzF+ffeG2bq2GaN2HuTANBgkqhkiG9w0BAQsFADBy
MQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0MS4wLAYDVQQLEyVIeWRy
YW50SUQgVHJ1c3RlZCBDZXJ0aWZpY2F0ZSBTZXJ2aWNlMR8wHQYDVQQDExZIeWRy
YW50SUQgU2VydmVyIENBIE8xMB4XDTIzMTExNDA1NDgyMFoXDTI0MTExMzA1NDcy
MFowajELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExETAPBgNVBAcT
CFNhbiBKb3NlMRswGQYDVQQKExJDaXNjbyBTeXN0ZW1zIEluYy4xFjAUBgNVBAMT
DXd3dy5jaXNjby5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC5
CZi7tsogSJCAE5Zu78Z57FBC67OpK0OkIyVeixqKg57K/wqE4UF59GHHHVwOZhGv
VgsD3jjiQOhxZbUJnaen0+cMH6s1lSRZtiIi2K/Z1Oy+1Gytpw2bYZTbuWHWk1/e
VUgH8dS6PbwQp+/KAzV52Z98asWGzxWYqfJV5GUdC5V2MPDuDRfbrrl6uxVb05tN
69xfCIAR2KJtM64UJifesa7ItQBMzh1TYqPa4A15Ku6MgiuOkUddCrkZWRt1uevD
E6k47uR4wcuM/hF/eSX8wl/BaKrM3eiAc94Thom0wvKzlG0uziL4cux/O6O0na0w

Re: [squid-users] Information Request: "Accept-Ranges" with use of SSL intercept and dynamic update caching

2024-06-10 Thread Jonathan Lee
The reason I ask is that sometimes when I am using Facebook it locks up and my fan 
goes crazy; I close Safari, restart the browser, and it works fine again. It 
acts like it is restarting a download over and over again. 

> On Jun 10, 2024, at 21:45, Jonathan Lee  wrote:
> 
> Hello fellow Squid community can you please help?
> 
> Should I be using the following if I have SSL certificates, dynamic updates, 
> StoreID, and ClamAV running?
> 
> request_header_access Accept-Ranges deny all
> reply_header_access Accept-Ranges deny all
> request_header_replace Accept-Ranges none
> reply_header_replace Accept-Ranges none
> 
> None of the documents show what Accept-Ranges does
> 
> Can anyone help explain this to me?



[squid-users] Information Request: "Accept-Ranges" with use of SSL intercept and dynamic update caching

2024-06-10 Thread Jonathan Lee
Hello fellow Squid community can you please help?

Should I be using the following if I have SSL certificates, dynamic updates, 
StoreID, and ClamAV running?

request_header_access Accept-Ranges deny all
reply_header_access Accept-Ranges deny all
request_header_replace Accept-Ranges none
reply_header_replace Accept-Ranges none

None of the documents show what Accept-Ranges does

Can anyone help explain this to me?
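For context, `Accept-Ranges` is how a server advertises support for byte-range (`Range`) requests. A minimal illustrative sketch of the mechanism (my own example, not Squid code):

```python
# Illustrative sketch (names are mine, not Squid code): what the
# Accept-Ranges / Range mechanism does. A server that advertises
# "Accept-Ranges: bytes" lets clients fetch byte slices of an object;
# stripping or replacing that header, as the squid.conf rules above do,
# pushes clients toward whole-object fetches.
def serve_range(body, range_header):
    """Return (status, payload) for an optional HTTP Range header."""
    if range_header is None:
        return 200, body                      # full response
    unit, _, spec = range_header.partition("=")
    if unit != "bytes":
        return 416, b""                       # range not satisfiable
    start_s, _, end_s = spec.partition("-")
    start = int(start_s)
    end = int(end_s) if end_s else len(body) - 1
    return 206, body[start:end + 1]           # 206 Partial Content

print(serve_range(b"0123456789", "bytes=2-5"))   # (206, b'2345')
print(serve_range(b"0123456789", None))          # (200, b'0123456789')
```

One practical consequence of denying the header: downstream scanners (such as the ClamAV setup mentioned in these threads) can see complete objects instead of fragments, at the cost of resumable downloads.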


Re: [squid-users] Howto enable openssl option UNSAFE_LEGACY_RENEGOTIATION ?

2024-06-10 Thread Alex Rousskov

On 2024-06-10 08:10, Dieter Bloms wrote:


I have activated ssl_bump and must activate the UNSAFE_LEGACY_RENEGOTIATION 
option to enable access to https://cisco.com.
The web server does not support secure renegotiation.

I have tried to set the following options, but squid does not recognize any of 
them:

tls_outgoing_options options=UNSAFE_LEGACY_RENEGOTIATION

or

tls_outgoing_options options=ALLOW_UNSAFE_LEGACY_RENEGOTIATION

and

tls_outgoing_options options=SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION

but no matter which syntax I use, I always get the message during squid-k parse:

“2024/06/10 14:08:17| ERROR: Unknown TLS option 
ALLOW_UNSAFE_LEGACY_RENEGOTIATION”

How can I activate secure renegotiation for squid?


To set an OpenSSL connection option that Squid does not know by name, 
use that option hex value (based on your OpenSSL sources). For example:


# SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION is defined to be
# SSL_OP_BIT(18) which is equal to (1 << 18) or 0x40000 in hex.
tls_outgoing_options options=0x40000

Disclaimer: I have not tested the above and do not know whether adding 
that option achieves what you want to achieve.
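A quick way to double-check such hex values before putting them in squid.conf (a sketch, not Squid code; OpenSSL's SSL_OP_BIT(n) macro is simply (1 << n)):

```python
# Compute OpenSSL option-bit values as used in
# tls_outgoing_options options=0x...
def ssl_op_bit(n):
    return 1 << n

# SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION is SSL_OP_BIT(18)
print(hex(ssl_op_bit(18)))  # 0x40000
# SSL_OP_LEGACY_SERVER_CONNECT is SSL_OP_BIT(2)
print(hex(ssl_op_bit(2)))   # 0x4
```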



HTH,

Alex.



[squid-users] Howto enable openssl option UNSAFE_LEGACY_RENEGOTIATION ?

2024-06-10 Thread Dieter Bloms
Hello,

I have activated ssl_bump and must activate the UNSAFE_LEGACY_RENEGOTIATION 
option to enable access to https://cisco.com.
The web server does not support secure renegotiation.

I have tried to set the following options, but squid does not recognize any of 
them:

tls_outgoing_options options=UNSAFE_LEGACY_RENEGOTIATION

or 

tls_outgoing_options options=ALLOW_UNSAFE_LEGACY_RENEGOTIATION

and

tls_outgoing_options options=SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION

but no matter which syntax I use, I always get the message during squid-k parse:

“2024/06/10 14:08:17| ERROR: Unknown TLS option 
ALLOW_UNSAFE_LEGACY_RENEGOTIATION”

How can I activate secure renegotiation for squid?

-- 
Regeards

  Dieter Bloms

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.


Re: [squid-users] Upgrade path from squid 4.15 to 6.x

2024-06-10 Thread ngtech1ltd
Hey Akash,
(Is this your first name?)
 
There are ways to test the config step by step with docker containers but it 
depends on the config size and complexity.
Even if you cannot share the squid.conf you can still summarize it to a degree.
There are 2 types of proxy services which can be implemented by Squid:
*   Forward
*   Reverse
 
With these, there are tons of building blocks which can be piled up to achieve 
functionality.
@Alex and @Amos, can you try to help me compile a menu list of functionalities 
that Squid-Cache can be used for?
I.e., as a forward proxy.
 
I believe that the project can offer a set of generic recipes for use cases 
that every support case will be able to look at to ask about.
Currently there are many questions which were answered, but not all of them are 
documented well enough to cover the use cases.
I can take this 4.15 to 6.9 upgrade as an example project and document it for 
the benefit of others.
 
I will try to take a peek at the release notes from 4.15 till 6.9 to understand 
if there are specific things to be aware of for my specific use case.
 
The first thing I think should be mentioned is the list of deprecated helpers.
 
Thanks,
Eliezer
 
From: squid-users  On Behalf Of 
Akash Karki (CONT)
Sent: Wednesday, June 5, 2024 5:31 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Upgrade path from squid 4.15 to 6.x
 
Hi Team,
 
We are running on squid ver 4.15 and want to update to n-1 of the latest ver(I 
believe 6.9 is the latest ver).
 
I want to understand if we can go straight from 4.15 to 6.x (n-1 of latest 
version) without any intermediary steps or do we have to  update to 
intermediary first and then move to the n-1 version of 6.9?
 
Kindly send us the detailed guidance!
 
On Wed, Jun 5, 2024 at 3:20 PM Akash Karki (CONT) wrote:
Hi Team,
 
We are running on squid ver 4.15 and want to update to n-1 of the latest ver(I 
believe 6.9 is the latest ver).
 
I want to understand if we can go straight from 4.15 to 6.x (n-1 of latest 
version) without any intermediary steps or do we have to  update to 
intermediary first and then move to the n-1 version of 6.9?
 
Kindly send us the detailed guidance!

 
-- 
Thanks & Regards,
Akash Karki
 
 
Save Nature to Save yourself :) 


 
-- 
Thanks & Regards,
Akash Karki
UK Hawkeye Team
Slack : #uk-monitoring
Confluence : UK Hawkeye
 
Save Nature to Save yourself :) 


 

The information contained in this e-mail may be confidential and/or proprietary 
to Capital One and/or its affiliates and may only be used solely in performance 
of work or services for Capital One. The information transmitted herewith is 
intended only for use by the individual or entity to which it is addressed. If 
the reader of this message is not the intended recipient, you are hereby 
notified that any review, retransmission, dissemination, distribution, copying 
or other use of, or taking of any action in reliance upon this information is 
strictly prohibited. If you have received this communication in error, please 
contact the sender and delete the material from your computer.




 


Re: [squid-users] Samba DNS Invalid zone operation IsSigned

2024-06-09 Thread Amos Jeffries

Hi Ronny,
 This is the Squid users mailing list.  You would be better served 
contacting the Samba help channels for this problem.



Cheers
Amos


On 8/06/24 23:05, Ronny Preiss wrote:

Hi Everybody,

Does someone know where this comes from and how to solve it? I've 
changed nothing for weeks.


- BIND 9.18.18-0ubuntu0.22.04.2-Ubuntu
- Ubuntu 22.04.4 LTS \n \l
- Samba Version 4.19.0 AC-DC

### /var/log/syslog
Jun  8 13:01:31 01-dc02 samba[996]: [2024/06/08 13:01:31.057443,  0] ../../source4/rpc_server/dnsserver/dcerpc_dnsserver.c:1076(dnsserver_query_zone)
Jun  8 13:01:31 01-dc02 samba[996]:   dnsserver: Invalid zone operation IsSigned
Jun  8 13:01:31 01-dc02 samba[996]: [2024/06/08 13:01:31.060313,  0] ../../source4/rpc_server/dnsserver/dcerpc_dnsserver.c:1076(dnsserver_query_zone)
Jun  8 13:01:31 01-dc02 samba[996]:   dnsserver: Invalid zone operation IsSigned
Jun  8 13:01:31 01-dc02 samba[996]: [2024/06/08 13:01:31.061385,  0] ../../source4/rpc_server/dnsserver/dcerpc_dnsserver.c:1076(dnsserver_query_zone)
Jun  8 13:01:31 01-dc02 samba[996]:   dnsserver: Invalid zone operation IsSigned


Regards, Ronny



Re: [squid-users] Any ideas for a project and\or research with AI about squid-cache?

2024-06-09 Thread ngtech1ltd
Hey Jonathan,

First of all, thanks for the response.
I think that all squid-users knows that AI is there since very long ago.
However, since it's a tool of the current times I want to be familiar with the 
tool capabilities.
The AI tools published these days give a specific response to a specific 
requirement and need, following the growth of IT in the world.

The Squid-Cache users list is a place where you can ask a question and a human with 
emotions, sensitivity and knowledge tries to help.
This is one of the only places where I cannot remember anyone responding 
with "google it" or something similar.
The question I am asking myself is: 
I and others will not be here some time in the future and we try to document 
and leave after us things for the future.
I can try to say that we are working here with UDP and not TCP, i.e. we are 
sending what we have into the world with the hope that 
it will reach others, help them, and make them happy and lively.

I believe that it's possible to learn things about Squid-Cache with the 
existing tools in an interactive way.
There is a lot of documentation on Squid-Cache, but some of it is old and some is 
just plain wrong.

I have some spare time here and there and I want to write a set of challenges 
for proxy admins.
I am thinking about it something like:
What might certify a Squid-Cache admin to be capable to be a successful admin?

The first things that most certifications do is to make sure there is 
"knowledge" or that the admin can implement specific use cases.
I believe that above the knowledge and technical capabilities there is a whole 
other level which might be lost when some will not be here.
I would be happy if the AI tools will be able to grasp from the mailing list 
threads something more then just the technical aspect of things.

Do you think a SoundBlaster 16 ISA card on a 386 or Pentium can transfer that?

Squid indeed is a very complex software!!

How can we attract some new Squid users into the list or to try and complete 
couple challenges?
Also, there are new versions of Linux distros around and these are a great 
playground for testing.

I will try to see if the AI can summarize the functionality of Squid-Cache 
(else then the cache itself).
I wrote couple caching tools in the past 10 years and I think that the new AI 
tools might be able to find
couple things which I missed and maybe offer better solutions for couple things.

Maybe some external_acl helpers or maybe to convert existing tools to rust or 
golang.
Maybe even these tools will be able to offer some ideas on how to fix specific 
bugs.
Even if they will not write the whole code, the fact that what someone 
else wrote somewhere on the internet reaches the prompt user means 
we might be able to understand how much a single 
document can affect the end result.

Let's try to follow on this thread later on.

Thanks,
Eliezer

-Original Message-
From: Jonathan Lee  
Sent: Sunday, June 9, 2024 7:43 PM
To: ngtech1...@gmail.com
Cc: squid-users 
Subject: Re: [squid-users] Any ideas for a project and\or research with AI 
about squid-cache?

I hate to tell you this, but the AI you know has been around for many years.
Anyone remember the Sound Blaster 16 ISA card software Dr. Sbaitso?
All AI is just adapted, improved 1980s ideas. It's not new, it's been here for 
years; it is still just if-else code with more data analytics. 

Anyway I use Proxy for checking URL requests and blocking them if needed, 
inspecting HTTPS with antivirus software, caching content and having the 
ability to scan it before it hits users and block it.
Web acceleration.
I primarily use it for inspection and security.
Squid could simply block out all requests to AI if you wanted, I have it set to 
block some.
CCPA in California provides legal avenues for user privacy, not many web 
analytics companies follow the requests to not track so they can be simply put 
blocked out. 

Squid is very complex software.

> On Jun 9, 2024, at 03:10, ngtech1...@gmail.com wrote:
> 
> Hey Everyone,
> 
> I was wondering if there are specific things which can be worked on with an 
> AI as a testing project to challenge an AI.
> I am looking for a set of projects which a beginner squid-cache admin can try 
> to implement to certify himself with real world experience.
> 
> What are the most common use cases of squid-cache these days?
> * Forward proxy
> * Reverse proxy
> * Public proxy services with authentication
> * Caching
> * Authentication proxy against a DB
> * Authentication proxy against LDAP and/or AD
> * Radius authentication
> * Multi factor authentication
> * Captive portal
> * SSL SNI inspection
> * Traffic classification (based on APPS list)
> * Url Filtering
> * Domain based Filtering
> * Internet Usage time limit (30 minutes or any other) based on login or 
> actual traffic.
> * Outband IP address selection
> Etc
> 
> Please help me to fill the list.
> 
> Thanks,
> Eliezer
> 
> 
> Eliezer Croitoru
> NgTech, Tech Support
> 

Re: [squid-users] Any ideas for a project and\or research with AI about squid-cache?

2024-06-09 Thread Jonathan Lee
I hate to tell you this, but the AI you know has been around for many years. Anyone 
remember the Sound Blaster 16 ISA card software Dr. Sbaitso? All AI is just adapted, 
improved 1980s ideas. It's not new, it's been here for years; it is still just if-else 
code with more data analytics. 

Anyway I use Proxy for checking URL requests and blocking them if needed, 
inspecting HTTPS with antivirus software, caching content and having the 
ability to scan it before it hits users and block it. Web acceleration. I 
primarily use it for inspection and security. Squid could simply block out all 
requests to AI if you wanted, I have it set to block some. CCPA in California 
provides legal avenues for user privacy, not many web analytics companies 
follow the requests to not track so they can be simply put blocked out. Squid 
is very complex software.

> On Jun 9, 2024, at 03:10, ngtech1...@gmail.com wrote:
> 
> Hey Everyone,
> 
> I was wondering if there are specific things which can be worked on with an 
> AI as a testing project to challenge an AI.
> I am looking for a set of projects which a beginner squid-cache admin can try 
> to implement to certify himself with real world experience.
> 
> What are the most common use cases of squid-cache these days?
> * Forward proxy
> * Reverse proxy
> * Public proxy services with authentication
> * Caching
> * Authentication proxy against a DB
> * Authentication proxy against LDAP and/or AD
> * Radius authentication
> * Multi factor authentication
> * Captive portal
> * SSL SNI inspection
> * Traffic classification (based on APPS list)
> * Url Filtering
> * Domain based Filtering
> * Internet Usage time limit (30 minutes or any other) based on login or 
> actual traffic.
> * Outband IP address selection
> Etc
> 
> Please help me to fill the list.
> 
> Thanks,
> Eliezer
> 
> 
> Eliezer Croitoru
> NgTech, Tech Support
> Mobile: +972-5-28704261
> Email: ngtech1...@gmail.com
> Web: https://www.ngtech.co.il/
> 
> 



[squid-users] Any ideas for a project and\or research with AI about squid-cache?

2024-06-09 Thread ngtech1ltd
Hey Everyone,

I was wondering if there are specific things which can be worked on with an AI 
as a testing project to challenge an AI.
I am looking for a set of projects which a beginner squid-cache admin can try 
to implement to certify himself with real world experience.

What are the most common use cases of squid-cache these days?
* Forward proxy
* Reverse proxy
* Public proxy services with authentication
* Caching
* Authentication proxy against a DB
* Authentication proxy against LDAP and/or AD
* Radius authentication
* Multi factor authentication
* Captive portal
* SSL SNI inspection
* Traffic classification (based on APPS list)
* Url Filtering
* Domain based Filtering
* Internet Usage time limit (30 minutes or any other) based on login or actual 
traffic.
* Outband IP address selection
Etc

Please help me to fill the list.

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://www.ngtech.co.il/




[squid-users] Samba DNS Invalid zone operation IsSigned

2024-06-08 Thread Ronny Preiss
Hi Everybody,

Does someone know where this comes from and how to solve it? I've changed
nothing for weeks.

- BIND 9.18.18-0ubuntu0.22.04.2-Ubuntu
- Ubuntu 22.04.4 LTS \n \l
- Samba Version 4.19.0 AC-DC

### /var/log/syslog
Jun  8 13:01:31 01-dc02 samba[996]: [2024/06/08 13:01:31.057443,  0] ../../source4/rpc_server/dnsserver/dcerpc_dnsserver.c:1076(dnsserver_query_zone)
Jun  8 13:01:31 01-dc02 samba[996]:   dnsserver: Invalid zone operation IsSigned
Jun  8 13:01:31 01-dc02 samba[996]: [2024/06/08 13:01:31.060313,  0] ../../source4/rpc_server/dnsserver/dcerpc_dnsserver.c:1076(dnsserver_query_zone)
Jun  8 13:01:31 01-dc02 samba[996]:   dnsserver: Invalid zone operation IsSigned
Jun  8 13:01:31 01-dc02 samba[996]: [2024/06/08 13:01:31.061385,  0] ../../source4/rpc_server/dnsserver/dcerpc_dnsserver.c:1076(dnsserver_query_zone)
Jun  8 13:01:31 01-dc02 samba[996]:   dnsserver: Invalid zone operation IsSigned

Regards, Ronny
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] DNS Issue samba_dnsupdate: dns.resolver.NoAnswer

2024-06-08 Thread Ronny Preiss
Hi,

By doing my weekly task log checking, I saw the following error in syslog.
I've changed nothing the last couple of months.

My Environment:

2x Server Ubuntu 22.04.4 LTS with:
- Samba Version 4.19.0 AC-DC (Selfcompiled default values)

Samba version: 4.19.0
Build environment:
Paths:
   BINDIR: /usr/local/samba/bin
   SBINDIR: /usr/local/samba/sbin
   CONFIGFILE: /usr/local/samba/etc/smb.conf
   NCALRPCDIR: /usr/local/samba/var/run/ncalrpc
   LOGFILEBASE: /usr/local/samba/var
   LMHOSTSFILE: /usr/local/samba/etc/lmhosts
   DATADIR: /usr/local/samba/share
   MODULESDIR: /usr/local/samba/lib
   LOCKDIR: /usr/local/samba/var/lock
   STATEDIR: /usr/local/samba/var/locks
   CACHEDIR: /usr/local/samba/var/cache
   PIDDIR: /usr/local/samba/var/run
   PRIVATE_DIR: /usr/local/samba/private
   CODEPAGEDIR: /usr/local/samba/share/codepages
   SETUPDIR: /usr/local/samba/share/setup
   WINBINDD_SOCKET_DIR: /usr/local/samba/var/run/winbindd
   NTP_SIGND_SOCKET_DIR: /usr/local/samba/var/lib/ntp_signd


- DNS Backend Bind (BIND 9.18.18-0ubuntu0.22.04.2-Ubuntu)
- SysVol is  in sync with rsync

### /var/log/syslog
Jun  8 11:54:11 01-dc01 samba[931]: [2024/06/08 11:54:11.351034,  0]
../../lib/util/util_runcmd.c:355(samba_runcmd_io_handler)
Jun  8 11:54:11 01-dc01 samba[931]:
 /usr/local/samba/sbin/samba_dnsupdate: Traceback (most recent call last):
Jun  8 11:54:11 01-dc01 samba[931]: [2024/06/08 11:54:11.352082,  0]
../../lib/util/util_runcmd.c:355(samba_runcmd_io_handler)
Jun  8 11:54:11 01-dc01 samba[931]:
 /usr/local/samba/sbin/samba_dnsupdate:   File
"/usr/local/samba/sbin/samba_dnsupdate", line 883, in <module>
Jun  8 11:54:11 01-dc01 samba[931]: [2024/06/08 11:54:11.352119,  0]
../../lib/util/util_runcmd.c:355(samba_runcmd_io_handler)
Jun  8 11:54:11 01-dc01 samba[931]:
 /usr/local/samba/sbin/samba_dnsupdate: creds = get_credentials(lp)
Jun  8 11:54:11 01-dc01 samba[931]: [2024/06/08 11:54:11.352132,  0]
../../lib/util/util_runcmd.c:355(samba_runcmd_io_handler)
Jun  8 11:54:11 01-dc01 samba[931]:
 /usr/local/samba/sbin/samba_dnsupdate:   File
"/usr/local/samba/sbin/samba_dnsupdate", line 184, in get_credentials
Jun  8 11:54:11 01-dc01 samba[931]: [2024/06/08 11:54:11.352144,  0]
../../lib/util/util_runcmd.c:355(samba_runcmd_io_handler)
Jun  8 11:54:11 01-dc01 samba[931]:
 /usr/local/samba/sbin/samba_dnsupdate: get_krb5_rw_dns_server(creds,
sub_vars['DNSDOMAIN'] + '.')
Jun  8 11:54:11 01-dc01 samba[931]: [2024/06/08 11:54:11.352158,  0]
../../lib/util/util_runcmd.c:355(samba_runcmd_io_handler)
Jun  8 11:54:11 01-dc01 samba[931]:
 /usr/local/samba/sbin/samba_dnsupdate:   File
"/usr/local/samba/sbin/samba_dnsupdate", line 143, in get_krb5_rw_dns_server
Jun  8 11:54:11 01-dc01 samba[931]: [2024/06/08 11:54:11.352203,  0]
../../lib/util/util_runcmd.c:355(samba_runcmd_io_handler)
Jun  8 11:54:11 01-dc01 samba[931]:
 /usr/local/samba/sbin/samba_dnsupdate: rw_dns_servers =
get_possible_rw_dns_server(creds, domain)
Jun  8 11:54:11 01-dc01 samba[931]: [2024/06/08 11:54:11.352239,  0]
../../lib/util/util_runcmd.c:355(samba_runcmd_io_handler)
Jun  8 11:54:11 01-dc01 samba[931]:
 /usr/local/samba/sbin/samba_dnsupdate:   File
"/usr/local/samba/sbin/samba_dnsupdate", line 122, in
get_possible_rw_dns_server
Jun  8 11:54:11 01-dc01 samba[931]: [2024/06/08 11:54:11.352253,  0]
../../lib/util/util_runcmd.c:355(samba_runcmd_io_handler)
Jun  8 11:54:11 01-dc01 samba[931]:
 /usr/local/samba/sbin/samba_dnsupdate: ans_soa =
check_one_dns_name(domain, 'SOA')
Jun  8 11:54:11 01-dc01 samba[931]: [2024/06/08 11:54:11.352267,  0]
../../lib/util/util_runcmd.c:355(samba_runcmd_io_handler)
Jun  8 11:54:11 01-dc01 samba[931]:
 /usr/local/samba/sbin/samba_dnsupdate:   File
"/usr/local/samba/sbin/samba_dnsupdate", line 274, in check_one_dns_name
Jun  8 11:54:11 01-dc01 samba[931]: [2024/06/08 11:54:11.352287,  0]
../../lib/util/util_runcmd.c:355(samba_runcmd_io_handler)
Jun  8 11:54:11 01-dc01 samba[931]:
 /usr/local/samba/sbin/samba_dnsupdate: return resolver.resolve(name,
name_type)
Jun  8 11:54:11 01-dc01 samba[931]: [2024/06/08 11:54:11.352302,  0]
../../lib/util/util_runcmd.c:355(samba_runcmd_io_handler)
Jun  8 11:54:11 01-dc01 samba[931]:
 /usr/local/samba/sbin/samba_dnsupdate:   File
"/usr/lib/python3/dist-packages/dns/resolver.py", line 1202, in resolve
Jun  8 11:54:11 01-dc01 samba[931]: [2024/06/08 11:54:11.352510,  0]
../../lib/util/util_runcmd.c:355(samba_runcmd_io_handler)
Jun  8 11:54:11 01-dc01 samba[931]:
 /usr/local/samba/sbin/samba_dnsupdate: (answer, done) =
resolution.query_result(response, None)
Jun  8 11:54:11 01-dc01 samba[931]: [2024/06/08 11:54:11.352551,  0]
../../lib/util/util_runcmd.c:355(samba_runcmd_io_handler)
Jun  8 11:54:11 01-dc01 samba[931]:
 /usr/local/samba/sbin/samba_dnsupdate:   File
"/usr/lib/python3/dist-packages/dns/resolver.py", line 674, in query_result
Jun  8 11:54:11 01-dc01 samba[931]: [2024/06/08 11:54:11.352693,  0]
../../lib/util/util_runcmd.c:355(samba_runcmd_io_handler)
Jun  8 

Re: [squid-users] [External Sender] Re: Upgrade path from squid 4.15 to 6.x

2024-06-07 Thread Akash Karki (CONT)
Thanks a lot Alex!!

I will get back if anything is required, thanks again :)

On Wed, Jun 5, 2024 at 5:30 PM Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 2024-06-05 12:05, Akash Karki (CONT) wrote:
>
> > Anything specific do we need to check from any documents?
>
> Yes, anything that mentions features or directives you are using (or
> would like to use).
>
>
> > If I can get any document to refer to, that would be great!!
>
> Release notes for Squid vN should be in doc/release-notes/release-N.html
> file inside official Squid source code tarball for vN. For a given major
> Squid version, use the tarball for latest available minor release.
> Official tarballs for various versions are currently available by
> following version-specific links at
> http://www.squid-cache.org/Versions/
>
> The following wiki pages may also contain useful info:
>
> https://wiki.squid-cache.org/Releases/Squid-5
>
> https://wiki.squid-cache.org/Releases/Squid-6
>
>
> HTH,
>
> Alex.
>
>
> > On Wed, Jun 5, 2024 at 4:31 PM Alex Rousskov wrote:
> >
> > On 2024-06-05 10:30, Akash Karki (CONT) wrote:
> >
> >  > I want to understand if we can go straight from 4.15 to 6.x (n-1
> of
> >  > latest version) without any intermediary steps or do we have to
> > update
> >  > to intermediary first and then move to the n-1 version of 6.9?
> >
> > Go straight to the latest v6. While doing that, study release notes
> for
> > all the Squid versions you are skipping to flag configuration
> > directives
> > that you need to adjust (and other upgrade caveats).
> >
> > Needless to say, test before you deploy the upgraded version -- a
> > lot of
> > things have changed since v4, and not all of the changes may be
> covered
> > in release notes. When in doubt, ask (specific) questions.
> >
> >
> > HTH,
> >
> > Alex.
> >
> >
> >
> >  > On Wed, Jun 5, 2024 at 3:20 PM Akash Karki (CONT) wrote:
> >  >
> >  > Hi Team,
> >  >
> >  > We are running on squid ver 4.15 and want to update to n-1 of
> the
> >  > latest ver(I believe 6.9 is the latest ver).
> >  >
> >  > I want to understand if we can go straight from 4.15 to 6.x
> > (n-1 of
> >  > latest version) without any intermediary steps or do we
> have to
> >  > update to intermediary first and then move to the n-1 version
> > of 6.9?
> >  >
> >  > Kindly send us the detailed guidance!
> >  >
> >  > --
> >  > Thanks & Regards,
> >  > Akash Karki
> >  >
> >  >
> >  > Save Nature to Save yourself :)
> >  >
> >  >
> >  >
> >  > --
> >  > Thanks & Regards,
> >  > Akash Karki
> >  > UK Hawkeye Team*
> >  > *
> >  > *Slack : *#uk-monitoring
> >  > *Confluence : *UK Hawkeye
> >  >  > >
> >  >
> >  > Save Nature to Save yourself :)
> >  >
> >
>  
> >  >
> >  >
> >  > The information contained in this e-mail may be confidential
> and/or
> >  > proprietary to Capital One and/or its affiliates and may only be
> > used
> >  > solely in performance of work or services for Capital One. The
> >  > information transmitted herewith is intended only for use by the
> >  > individual or entity to which it is addressed. If the reader of
> this
> >  > message is not the intended recipient, you are hereby notified
> > that any
> >  > review, retransmission, dissemination, distribution, copying or
> > other
> >  > use of, or taking of any action in reliance upon this information
> is
> >  > strictly prohibited. If you have received this communication in
> > error,
> >  > please contact the sender and delete the material from your
> computer.
> >  >
> >  >
> >  >

Re: [squid-users] can't explain 403 denied for authenticated

2024-06-07 Thread Amos Jeffries


On 7/06/24 07:08, Kevin wrote:

 >
 >> acl trellix_phone_cloud dstdomain amcore-ens.rest.gti.trellix.com
 >> http_access deny trellix_phone_cloud
 >> external_acl_type host_based_filter children-max=15 ttl=0 %ACL %DATA %SRC %>rd %>rP /PATH/TO/FILTER/SCRIPT.py
 >> acl HostBasedRules external host_based_filter

 >> http_access allow HostBasedRules
 >> auth_param digest program /usr/lib/squid/digest_file_auth -c 
/etc/squid/passwd

 >> auth_param digest realm squid
 >> auth_param digest children 2
 >> auth_param basic program /usr/lib/squid/basic_ncsa_auth 
/etc/squid/basic_passwd

 >> auth_param basic children 2
 >> auth_param basic realm squidb
 >> auth_param basic credentialsttl 2 hours
 >
 >> acl auth_users proxy_auth REQUIRED
 >> external_acl_type custom_acl_db children-max=15 ttl=0 %ACL %DATA %ul %SRC %>rd %>rP %credentials /PATH/TO/FILTER/SCRIPT.py
 >> acl CustomAclDB external custom_acl_db

 >> http_access allow CustomAclDB
 >
 >
 >Hmm, this use of combined authentication+authorization is a bit tricky
 >with two layers of asynchronous helper lookups going on. That alone
 >might be what is going on with the weird 403's.
 >
 >
 >A better sequence would be:
 >
 ># ensure login is performed
 >http_access deny !auth_users
 >
 ># check the access permissions for whichever user logged in
 >http_access allow CustomAclDB


The first call to the external_acl is to process unauthenticated 
requests.   Is the suggestion to replace


acl auth_users proxy_auth REQUIRED

with

http_access deny !auth_users

before the second external_acl (for authenticated requests)?



No. It is to ensure that "missing credentials" are treated differently 
than "bad credentials". Specifically that any auth challenge response is 
never able to be given "allow" permission.
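
In squid.conf terms, that sequence (a sketch reusing this thread's ACL names, 
with an explicit final deny) is:

```
# challenge/deny anything lacking valid credentials first;
# this is what triggers the 407 toward the client
http_access deny !auth_users
# only after authentication succeeded, consult the external helper
http_access allow CustomAclDB
http_access deny all
```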


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] can't explain 403 denied for authenticated

2024-06-07 Thread Jonathan Lee
You can also add this to lock down the proxy after hours, much like locking a 
door: whatever is inside keeps working (i.e. connections already established), 
but all new connections are blocked. I love this one:

acl block_hours time 00:30-05:00
ssl_bump terminate all block_hours
http_access deny all block_hours

This is meant for use with SSL interception and root certificates installed, 
but it also works for spliced connections, since the terminate-everything line 
comes before everything else.

You can also, if you want, lock the proxy down to specific MAC addresses 
(e.g. a small office or home network).

Use

eui_lookup on


Example of use with mac addresses

acl splice_only src 192.168.1.8 #Tasha iPhone
acl splice_only src 192.168.1.10 #Jon iPhone
acl splice_only src 192.168.1.11 #Amazon Fire
acl splice_only src 192.168.1.15 #Tasha HP
acl splice_only src 192.168.1.16 #iPad

acl splice_only_mac arp  (unique 48bit hardware address here)
acl splice_only_mac arp (unique 48bit hardware address here)
acl splice_only_mac arp (unique 48bit hardware address here)
acl splice_only_mac arp  (unique 48bit hardware address here)
acl splice_only_mac arp  (unique 48bit hardware address here)

acl NoSSLIntercept ssl::server_name_regex -i "/usr/local/pkg/reg.url.nobump"
acl NoBumpDNS dstdomain "/usr/local/pkg/dns.nobump"

acl markBumped annotate_client bumped=true
acl active_use annotate_client active=true
acl bump_only src 192.168.1.3 #webtv
acl bump_only src 192.168.1.4 #toshiba
acl bump_only src 192.168.1.5 #imac
acl bump_only src 192.168.1.9 #macbook
acl bump_only src 192.168.1.13 #dell

acl bump_only_mac arp (unique 48bit hardware address here)
acl bump_only_mac arp  (unique 48bit hardware address here)
acl bump_only_mac arp  (unique 48bit hardware address here)
acl bump_only_mac arp  (unique 48bit hardware address here)
acl bump_only_mac arp  (unique 48bit hardware address here)

ssl_bump peek step1
miss_access deny no_miss active_use
ssl_bump splice https_login active_use
ssl_bump splice splice_only_mac splice_only active_use # ACLs on one line are AND-ed, together with my active_use annotation
ssl_bump splice NoBumpDNS active_use
ssl_bump splice NoSSLIntercept active_use
ssl_bump bump bump_only_mac bump_only active_use
acl activated note active_use true
ssl_bump terminate !activated

acl markedBumped note bumped true
url_rewrite_access deny markedBumped

> On Jun 6, 2024, at 12:08, Kevin  wrote:
> 
> >> uri_whitespace encode
> >
> >Hmm. Accepting whitespace in URLs is a risky choice. One can never be
> >completely sure how third-party agents in the network are handling it
> >before the request arrived.
> >
> >If (big IF) you are able to use "uri_whitespace deny" this proxy would
> >be a bit more secure. This is just a suggestion, you know best here.
> 
> I think that was a workaround for a vulnerability.  If it was, it may no 
> longer be needed.
> 
> 
> >
> >> acl trellix_phone_cloud dstdomain amcore-ens.rest.gti.trellix.com
> >> http_access deny trellix_phone_cloud
> >> external_acl_type host_based_filter children-max=15 ttl=0 %ACL %DATA %SRC %>rd %>rP /PATH/TO/FILTER/SCRIPT.py
> >> acl HostBasedRules external host_based_filter
> >> http_access allow HostBasedRules
> >> auth_param digest program /usr/lib/squid/digest_file_auth -c 
> >> /etc/squid/passwd
> >> auth_param digest realm squid
> >> auth_param digest children 2
> >> auth_param basic program /usr/lib/squid/basic_ncsa_auth 
> >> /etc/squid/basic_passwd
> >> auth_param basic children 2
> >> auth_param basic realm squidb
> >> auth_param basic credentialsttl 2 hours
> >
> >> acl auth_users proxy_auth REQUIRED
> >> external_acl_type custom_acl_db children-max=15 ttl=0 %ACL %DATA %ul %SRC %>rd %>rP %credentials /PATH/TO/FILTER/SCRIPT.py
> >> acl CustomAclDB external custom_acl_db
> >> http_access allow CustomAclDB
> >
> >
> >Hmm, this use of combined authentication+authorization is a bit tricky
> >with two layers of asynchronous helper lookups going on. That alone
> >might be what is going on with the weird 403's.
> >
> >
> >A better sequence would be:
> >
> ># ensure login is performed
> >http_access deny !auth_users
> >
> ># check the access permissions for whichever user logged in
> >http_access allow CustomAclDB
> 
> 
> The first call to the external_acl is to process unauthenticated requests.   
> Is the suggestion to replace
> 
> acl auth_users proxy_auth REQUIRED
> 
> with
> 
> http_access deny !auth_users
> 
> before the second external_acl (for authenticated requests)?
> 
> 
> 
> 
> Thanks again, very much
> 
> 
> Kevin
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] can't explain 403 denied for authenticated

2024-06-06 Thread Kevin
>> uri_whitespace encode 
> 
>Hmm. Accepting whitespace in URLs is a risky choice. One can never be 
>completely sure how third-party agents in the network are handling it 
>before the request arrived. 
> 
>If (big IF) you are able to use "uri_whitespace deny" this proxy would 
>be a bit more secure. This is just a suggestion, you know best here. 

I think that was a workaround for a vulnerability. If it was, it may no longer 
be needed. 


> 
>> acl trellix_phone_cloud dstdomain amcore-ens.rest.gti.trellix.com 
>> http_access deny trellix_phone_cloud 
>> external_acl_type host_based_filter children-max=15 ttl=0 %ACL %DATA %SRC %>rd %>rP /PATH/TO/FILTER/SCRIPT.py 
>> acl HostBasedRules external host_based_filter 
>> http_access allow HostBasedRules 
>> auth_param digest program /usr/lib/squid/digest_file_auth -c 
>> /etc/squid/passwd 
>> auth_param digest realm squid 
>> auth_param digest children 2 
>> auth_param basic program /usr/lib/squid/basic_ncsa_auth 
>> /etc/squid/basic_passwd 
>> auth_param basic children 2 
>> auth_param basic realm squidb 
>> auth_param basic credentialsttl 2 hours 
> 
>> acl auth_users proxy_auth REQUIRED 
>> external_acl_type custom_acl_db children-max=15 ttl=0 %ACL %DATA %ul %SRC %>rd %>rP %credentials /PATH/TO/FILTER/SCRIPT.py 
>> acl CustomAclDB external custom_acl_db 
>> http_access allow CustomAclDB 
> 
> 
>Hmm, this use of combined authentication+authorization is a bit tricky 
>with two layers of asynchronous helper lookups going on. That alone 
>might be what is going on with the weird 403's. 
> 
> 
>A better sequence would be: 
> 
># ensure login is performed 
>http_access deny !auth_users 
> 
># check the access permissions for whichever user logged in 
>http_access allow CustomAclDB 


The first call to the external_acl is to process unauthenticated requests. Is 
the suggestion to replace 

acl auth_users proxy_auth REQUIRED 

with 

http_access deny !auth_users 

before the second external_acl (for authenticated requests)? 




Thanks again, very much 


Kevin 

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] can't explain 403 denied for authenticated

2024-06-05 Thread Amos Jeffries

Free config audit inline ...

On 6/06/24 05:24, Kevin wrote:


Understood.   Here it is:


acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines
acl windows_net src 172.18.114.0/24
acl sample_host src 172.18.115.1/32
acl rsync port 873


You can remove the above line. This "rsync" ACL is unused and the port 
is added directly to the SSL_Ports and Safe_ports.




acl SSL_ports port 443
acl SSL_ports port 873  #873 is rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 873
acl CONNECT method CONNECT


You can remove the above line. "CONNECT" is a built-in ACL.



acl PURGE method PURGE


Your proxy is configured with "cache deny all" preventing anything being 
stored.


As such you can improve performance somewhat by removing the "acl PURGE" 
and all config below that uses it.




acl localhost src 127.0.0.1


You can remove the above line. "localhost" is a built-in ACL.



http_access allow PURGE localhost
http_access deny PURGE
acl URN proto URN
http_access deny URN
http_access deny manager
acl API_FIREFOX dstdomain api.profiler.firefox.com
http_access deny API_FIREFOX
acl ff_browser browser ^Mozilla/5\.0
acl rma_ua browser ^RMA/1\.0.*compatible;.RealMedia
uri_whitespace encode


Hmm. Accepting whitespace in URLs is a risky choice. One can never be 
completely sure how third-party agents in the network are handling it 
before the request arrived.


If (big IF) you are able to use "uri_whitespace deny" this proxy would 
be a bit more secure. This is just a suggestion, you know best here.




acl trellix_phone_cloud dstdomain amcore-ens.rest.gti.trellix.com
http_access deny trellix_phone_cloud
external_acl_type host_based_filter children-max=15 ttl=0 %ACL %DATA %SRC %>rd 
%>rP  /PATH/TO/FILTER/SCRIPT.py
acl HostBasedRules external host_based_filter
http_access allow HostBasedRules
auth_param digest program /usr/lib/squid/digest_file_auth -c /etc/squid/passwd
auth_param digest realm squid
auth_param digest children 2
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/basic_passwd
auth_param basic children 2
auth_param basic realm squidb
auth_param basic credentialsttl 2 hours



acl auth_users proxy_auth REQUIRED
external_acl_type custom_acl_db children-max=15 ttl=0 %ACL %DATA %ul %SRC %>rd 
%>rP %credentials /PATH/TO/FILTER/SCRIPT.py
acl CustomAclDB external custom_acl_db
http_access allow CustomAclDB



Hmm, this use of combined authentication+authorization is a bit tricky 
with two layers of asynchronous helper lookups going on. That alone 
might be what is going on with the weird 403's.



A better sequence would be:

 # ensure login is performed
 http_access deny !auth_users

 # check the access permissions for whichever user logged in
 http_access allow CustomAclDB
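
As an aside for anyone reproducing this setup: an external_acl_type helper 
such as the SCRIPT.py referenced above reads one request per line on stdin 
(the configured format tokens, space-separated) and must write one OK/ERR 
verdict per line. A minimal hypothetical sketch (the real script and its rule 
logic are site-specific; field positions and the allow-list here are 
assumptions for illustration):

```python
#!/usr/bin/env python3
# Hypothetical external ACL helper sketch for squid's line protocol.
# Assumes the host_based_filter format: %ACL %DATA %SRC %>rd %>rP
import sys

ALLOWED = {"example.com"}  # assumption: a simple domain allow-list

def decide(fields):
    """Return True when the request should be allowed."""
    # fields[3] is %>rd (request domain) under the assumed format
    domain = fields[3] if len(fields) > 3 else ""
    return domain in ALLOWED

def main():
    for line in sys.stdin:
        fields = line.split()
        sys.stdout.write("OK\n" if decide(fields) else "ERR\n")
        sys.stdout.flush()  # reply unbuffered, or squid stalls waiting

if __name__ == "__main__":
    main()
```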



acl CRLs url_regex "/etc/squid/conf.d/CRL_urls.txt"
http_access allow CRLs
deny_info 303:https://abc.def.com/



FYI; deny_info is a way to customize what happens when the "deny" action 
is performed an a specific ACL match.


The above deny_info line does nothing unless you name which ACL(s) it is 
to become the action for and also those ACLs are used in a "deny" rule.


For example:

 acl redirect_HTTPS url_regex ^http://example\.com
 deny_info 303:https://example.com%rp redirect_HTTPS
 http_access deny redirect_HTTPS




http_access deny all



So ... none of the http_access lines below here are doing anything.



acl apache rep_header Server ^Apache
icp_access allow localnet
icp_access deny all



These lines...


http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports


.. to here are supposed to protect your proxy against some nasty DDoS 
type attacks.


They need to be first out of all your http_access lines in order to do 
that efficiently.
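
Concretely, the usual recommended opening sequence (mirroring the default 
squid.conf shipped with Squid) looks like:

```
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
# ... site-specific allow/deny rules here ...
http_access deny all
```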



The http_access below are optional from our default squid.conf setup. 
Since your install does not appear to need them they can just be removed.




http_access allow localhost manager
http_access deny manager
http_access allow localhost



http_port 3128
coredump_dir /var/cache/squid
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
logformat squid %ts.%03tu %6tr %>a %Ss/%>Hs %a %Ss/%>Hs %h] [%a %ui %un 

Re: [squid-users] can't explain 403 denied for authenticated

2024-06-05 Thread Kevin

I appreciate very much your looking at my question. 

>Date: Thu, 30 May 2024 22:15:27 +1200 
>From: Amos Jeffries  
>To: squid-users@lists.squid-cache.org 
>Subject: Re: [squid-users] can't explain 403 denied for authenticated 
> user 
>Message-ID: <96746157-da51-47db-8d52-d65239f27...@treenet.co.nz> 
>Content-Type: text/plain; charset=UTF-8; format=flowed 
> 
>On 25/05/24 07:28, Kevin wrote: 
>> Hi, 
>> 
>> We have 2 external ACLs that take a request's data (IP, authenticated 
>> username, URL, user-agent, etc) and uses that information to determine 
>> whether a user or host should be permitted to access that URL. It 
>> almost always works well, but we have a recurring occasional issue that 
>> I can't figure out. 
>> 
>> When it occurs, it's always around 4AM. This particular request occurs 
>> often - averages about once a second throughout the day. 
>> 
>> What we see is a "403 forbidden" for a (should be) permitted site from 
>> an authenticated user from the same IP/user and to the same site that 
>> gets a "202 connection established" every other time. 
> 
>Is maybe 4am the time when your auth system refreshes all nonce? 
> - thus making any currently in-use by the clients invalid until they 
>re-auth. You might see a mix of 403/401/407 in a bunch at such times. 

My auth system is just squid with one flat file each for basic and digest 
authentication (we allow basic only if an application doesn't support digest). 


>Or maybe in a similar style one/some of the clients is broken and fails 
>to update its nonce before it expires at 4am? 

The client is a Java-based application. I can find out more from the team that 
wrote it. 

> - looking at which client agent and IP were getting the 403 and/or the 
>nonce which received 403 will give you hints about this possibility. 

Oh: it's only this one application from this one particular host. 


> 
>Or your network router(s) do garbage collection and terminate 
>long-running connections to free up TCP resources? 
> - thus forcing a lot of client re-connects at 4am, which may: 
> a) overload the auth system/helper, or 
> b) break a transaction that included nonce update for clients - 
>resulting in their next request being invalid nonce. 

I'm not aware of a 4AM connection-terminating task but will confirm. As 
mentioned (in answer to a), the auth system is just squid. Re: b - I hadn't 
thought of that. 


> 
> 
>Or maybe you have log processing software that does "squid -k restart" 
>instead of the proper "squid -k rotate" to get access to the log files? 

I hadn't checked that! But logrotate runs at 00:00 and the rotate script does 
use "squid -k rotate". 

>Or maybe your auth system has a limit on how large nonce-count can become? 
> - I notice that the working request has 0x2F uses and the forbidden 
>has 0x35 (suspiciously close to 50 in decimal) 

I don't know what that is. I didn't find a lot of helpful detail on squid's 
digest auth implementation. 
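
For what it's worth, Squid's digest scheme does cap nonce reuse: the client 
increments a nonce count (nc) on each request, and Squid retires a nonce once 
it exceeds a configured use count or age. The relevant directives (defaults 
shown as I understand them; verify against your version's 
squid.conf.documented) are:

```
auth_param digest nonce_max_count 50
auth_param digest nonce_max_duration 30 minutes
auth_param digest nonce_garbage_interval 5 minutes
```

A well-behaved client receives a stale=true challenge and re-authenticates 
transparently; a client that keeps reusing the retired nonce would be denied, 
which would fit the 0x35 (= 53) nonce-count observation above.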

> 
>> 
>> The difference I see in the logs:? though all the digest auth info looks 
>> okay, the %un field in the log for the usual (successful) request is the 
>> authenticated username, while in the failed request, it's "-". So 
>> though there hasn't been an authentication error or "authentication 
>> required" in the log - and the username is in the authentication details 
>> in the log entry - it seems like squid isn't recognizing that username 
>> as %un. 
> 
> 
>Be aware that a properly behaving client will *not* send credentials 
>until they are requested, after which it should *always* send 
>credentials on that same connection (even if they are not requested 
>explicitly). 

>That means some requests on a multiplex/pipeline/keep-alive connection 
>MAY arrive with credentials and be accepted(2xx)/denied(403) without 
>authentication having occured. Entirely due to your *_access directives 
>sequence. In these cases the log will show auth headers but no value for 
>%un and/or %ul. 
> 
> 
>> 
>> My squid.conf first tests a request to see if an unauthenticated request 
>> from a particular host is permitted. That external ACL doesn't take a 
>> username as an argument. If that external ACL passes, the request is 
>> allowed. 
>> 
> 
>Please *show* the config lines rather than describing what you *think* 
>they do. Their exact ordering matters *a lot*. 
> Obfuscation of sensitive details is okay/expected so long as you make 
>it easy for us to tell that value A and value B are different. 
> 
>FWIW; if your config file is *actually* containing only what you 
>described it is missing a number of things necessary to have a safe and 
>secure proxy. A look at your full config (without comments or empty 
>lines) will help us point out any unnoticed issues for you to consider 
>fixing. 


Understood. Here it is: 


acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # 

Re: [squid-users] [External Sender] Re: Upgrade path from squid 4.15 to 6.x

2024-06-05 Thread Alex Rousskov

On 2024-06-05 12:05, Akash Karki (CONT) wrote:


Anything specific do we need to check from any documents?


Yes, anything that mentions features or directives you are using (or 
would like to use).




If I can get any document to refer to, that would be great!!


Release notes for Squid vN should be in doc/release-notes/release-N.html 
file inside official Squid source code tarball for vN. For a given major 
Squid version, use the tarball for latest available minor release. 
Official tarballs for various versions are currently available by 
following version-specific links at http://www.squid-cache.org/Versions/


The following wiki pages may also contain useful info:
https://wiki.squid-cache.org/Releases/Squid-5
https://wiki.squid-cache.org/Releases/Squid-6


HTH,

Alex.



On Wed, Jun 5, 2024 at 4:31 PM Alex Rousskov wrote:

On 2024-06-05 10:30, Akash Karki (CONT) wrote:

 > I want to understand if we can go straight from 4.15 to 6.x (n-1 of
 > latest version) without any intermediary steps or do we have to 
update

 > to intermediary first and then move to the n-1 version of 6.9?

Go straight to the latest v6. While doing that, study release notes for
all the Squid versions you are skipping to flag configuration
directives
that you need to adjust (and other upgrade caveats).

Needless to say, test before you deploy the upgraded version -- a
lot of
things have changed since v4, and not all of the changes may be covered
in release notes. When in doubt, ask (specific) questions.


HTH,

Alex.



 > On Wed, Jun 5, 2024 at 3:20 PM Akash Karki (CONT) wrote:
 >
 >     Hi Team,
 >
 >     We are running on squid ver 4.15 and want to update to n-1 of the
 >     latest ver(I believe 6.9 is the latest ver).
 >
 >     I want to understand if we can go straight from 4.15 to 6.x
(n-1 of
 >     latest version) without any intermediary steps or do we have to
 >     update to intermediary first and then move to the n-1 version
of 6.9?
 >
 >     Kindly send us the detailed guidance!
 >
 >     --
 >     Thanks & Regards,
 >     Akash Karki
 >
 >
 >     Save Nature to Save yourself :)
 >
 >
 >
 > --
 > Thanks & Regards,
 > Akash Karki
 > UK Hawkeye Team*
 > *
 > *Slack : *#uk-monitoring
 > *Confluence : *UK Hawkeye
 > >
 >
 > Save Nature to Save yourself :)
 >

 >
 >




--
Thanks & Regards,
Akash Karki
UK Hawkeye Team
Slack: #uk-monitoring
Confluence: UK Hawkeye



Save Nature to Save yourself :)




Re: [squid-users] [External Sender] Re: Upgrade path from squid 4.15 to 6.x

2024-06-05 Thread Akash Karki (CONT)
Hi Alex,

Thanks a lot for the response!

Anything specific do we need to check from any documents?
If I can get any document to refer to, that would be great!!

On Wed, Jun 5, 2024 at 4:31 PM Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 2024-06-05 10:30, Akash Karki (CONT) wrote:
>
> > I want to understand if we can go straight from 4.15 to 6.x (n-1 of
> > latest version) without any intermediary steps or do we have to  update
> > to intermediary first and then move to the n-1 version of 6.9?
>
> Go straight to the latest v6. While doing that, study release notes for
> all the Squid versions you are skipping to flag configuration directives
> that you need to adjust (and other upgrade caveats).
>
> Needless to say, test before you deploy the upgraded version -- a lot of
> things have changed since v4, and not all of the changes may be covered
> in release notes. When in doubt, ask (specific) questions.
>
>
> HTH,
>
> Alex.
>
>
>
> > On Wed, Jun 5, 2024 at 3:20 PM Akash Karki (CONT) wrote:
> >
> > Hi Team,
> >
> > We are running on squid ver 4.15 and want to update to n-1 of the
> > latest ver(I believe 6.9 is the latest ver).
> >
> > I want to understand if we can go straight from 4.15 to 6.x (n-1 of
> > latest version) without any intermediary steps or do we have to
> > update to intermediary first and then move to the n-1 version of 6.9?
> >
> > Kindly send us the detailed guidance!
> >
> > --
> > Thanks & Regards,
> > Akash Karki
> >
> >
> > Save Nature to Save yourself :)
> >
> >
> >
> > --
> > Thanks & Regards,
> > Akash Karki
> > UK Hawkeye Team
> > Slack: #uk-monitoring
> > Confluence: UK Hawkeye
> > 
> >
> > Save Nature to Save yourself :)
> > 
> >
> >
> >
> >
> >
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> >
> https://lists.squid-cache.org/listinfo/squid-users
>
>

-- 
Thanks & Regards,
Akash Karki
UK Hawkeye Team
Slack: #uk-monitoring
Confluence: UK Hawkeye


Save Nature to Save yourself :)

__






___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Upgrade path from squid 4.15 to 6.x

2024-06-05 Thread Alex Rousskov

On 2024-06-05 10:30, Akash Karki (CONT) wrote:

I want to understand if we can go straight from 4.15 to 6.x (n-1 of 
latest version) without any intermediary steps or do we have to  update 
to intermediary first and then move to the n-1 version of 6.9?


Go straight to the latest v6. While doing that, study release notes for 
all the Squid versions you are skipping to flag configuration directives 
that you need to adjust (and other upgrade caveats).


Needless to say, test before you deploy the upgraded version -- a lot of 
things have changed since v4, and not all of the changes may be covered 
in release notes. When in doubt, ask (specific) questions.
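One way to mechanize the "study release notes" step is to scan the existing squid.conf for directives the skipped versions dropped, before running the new binary's own `squid -k parse` check. A minimal sketch -- the OBSOLETE table below is an illustrative sample only (build the real list from the v5.x/v6.x release notes), not an authoritative inventory:

```python
# Sketch: flag squid.conf directives that release notes for the skipped
# versions list as removed or replaced. ILLUSTRATIVE sample table only;
# populate it from the actual v5/v6 release notes before relying on it.
OBSOLETE = {
    "dns_v4_first": "removed; Happy Eyeballs connection logic replaces it",
}

def flag_obsolete_directives(conf_text, table=None):
    """Return (line_number, directive, note) for each flagged directive."""
    table = OBSOLETE if table is None else table
    hits = []
    for n, line in enumerate(conf_text.splitlines(), start=1):
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # skip blank lines and comments
        directive = stripped.split()[0]  # first token is the directive name
        if directive in table:
            hits.append((n, directive, table[directive]))
    return hits
```

Running this over a config before the upgrade gives a quick worklist; the new Squid's `-k parse` output remains the final authority.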



HTH,

Alex.




On Wed, Jun 5, 2024 at 3:20 PM Akash Karki (CONT) wrote:

Hi Team,

We are running on squid ver 4.15 and want to update to n-1 of the
latest ver(I believe 6.9 is the latest ver).

I want to understand if we can go straight from 4.15 to 6.x (n-1 of
latest version) without any intermediary steps or do we have to 
update to intermediary first and then move to the n-1 version of 6.9?


Kindly send us the detailed guidance!

-- 
Thanks & Regards,

Akash Karki


Save Nature to Save yourself :)



--
Thanks & Regards,
Akash Karki
UK Hawkeye Team
Slack: #uk-monitoring
Confluence: UK Hawkeye



Save Nature to Save yourself :)







___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] [External Sender] Re: Upgrade path from squid 4.15 to 6.x

2024-06-05 Thread Akash Karki (CONT)
Hi Eliezer,

Thanks for the reply!
I can't share the details with you over mail, sorry for that!

Is there anything which I can refer to in order to figure it out or if we
can get onto a call to chat about it?

On Wed, Jun 5, 2024 at 3:47 PM NgTech LTD  wrote:

> Depends on the config structure.
> If you can send me a private email with the config (sensitive details
> reduced), it will help to understand the scenario.
>
> Eliezer
>
> בתאריך יום ד׳, 5 ביוני 2024, 17:31, מאת Akash Karki (CONT) ‏<
> akash.ka...@capitalone.com>:
>
>> Hi Team,
>>
>> We are running on squid ver 4.15 and want to update to n-1 of the latest
>> ver(I believe 6.9 is the latest ver).
>>
>> I want to understand if we can go straight from 4.15 to 6.x (n-1 of
>> latest version) without any intermediary steps or do we have to  update to
>> intermediary first and then move to the n-1 version of 6.9?
>>
>> Kindly send us the detailed guidance!
>>
>> On Wed, Jun 5, 2024 at 3:20 PM Akash Karki (CONT) <
>> akash.ka...@capitalone.com> wrote:
>>
>>> Hi Team,
>>>
>>> We are running on squid ver 4.15 and want to update to n-1 of the latest
>>> ver(I believe 6.9 is the latest ver).
>>>
>>> I want to understand if we can go straight from 4.15 to 6.x (n-1 of
>>> latest version) without any intermediary steps or do we have to  update to
>>> intermediary first and then move to the n-1 version of 6.9?
>>>
>>> Kindly send us the detailed guidance!
>>>
>>> --
>>> Thanks & Regards,
>>> Akash Karki
>>>
>>>
>>> Save Nature to Save yourself :)
>>>
>>
>>
>> --
>> Thanks & Regards,
>> Akash Karki
>> UK Hawkeye Team
>> *Slack : *#uk-monitoring
>> *Confluence : *UK Hawkeye
>> 
>>
>> Save Nature to Save yourself :)
>> --
>>
>>
>>
>>
>>
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> https://lists.squid-cache.org/listinfo/squid-users
>> 
>>
>

-- 
Thanks & Regards,
Akash Karki
UK Hawkeye Team
Slack: #uk-monitoring
Confluence: UK Hawkeye


Save Nature to Save yourself :)

__






___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Upgrade path from squid 4.15 to 6.x

2024-06-05 Thread NgTech LTD
Depends on the config structure.
If you can send me a private email with the config (sensitive details
reduced), it will help to understand the scenario.

Eliezer

בתאריך יום ד׳, 5 ביוני 2024, 17:31, מאת Akash Karki (CONT) ‏<
akash.ka...@capitalone.com>:

> Hi Team,
>
> We are running on squid ver 4.15 and want to update to n-1 of the latest
> ver(I believe 6.9 is the latest ver).
>
> I want to understand if we can go straight from 4.15 to 6.x (n-1 of latest
> version) without any intermediary steps or do we have to  update to
> intermediary first and then move to the n-1 version of 6.9?
>
> Kindly send us the detailed guidance!
>
> On Wed, Jun 5, 2024 at 3:20 PM Akash Karki (CONT) <
> akash.ka...@capitalone.com> wrote:
>
>> Hi Team,
>>
>> We are running on squid ver 4.15 and want to update to n-1 of the latest
>> ver(I believe 6.9 is the latest ver).
>>
>> I want to understand if we can go straight from 4.15 to 6.x (n-1 of
>> latest version) without any intermediary steps or do we have to  update to
>> intermediary first and then move to the n-1 version of 6.9?
>>
>> Kindly send us the detailed guidance!
>>
>> --
>> Thanks & Regards,
>> Akash Karki
>>
>>
>> Save Nature to Save yourself :)
>>
>
>
> --
> Thanks & Regards,
> Akash Karki
> UK Hawkeye Team
> *Slack : *#uk-monitoring
> *Confluence : *UK Hawkeye
> 
>
> Save Nature to Save yourself :)
> --
>
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Upgrade path from squid 4.15 to 6.x

2024-06-05 Thread Akash Karki (CONT)
Hi Team,

We are running on squid ver 4.15 and want to update to n-1 of the latest
ver(I believe 6.9 is the latest ver).

I want to understand if we can go straight from 4.15 to 6.x (n-1 of latest
version) without any intermediary steps or do we have to  update to
intermediary first and then move to the n-1 version of 6.9?

Kindly send us the detailed guidance!

On Wed, Jun 5, 2024 at 3:20 PM Akash Karki (CONT) <
akash.ka...@capitalone.com> wrote:

> Hi Team,
>
> We are running on squid ver 4.15 and want to update to n-1 of the latest
> ver(I believe 6.9 is the latest ver).
>
> I want to understand if we can go straight from 4.15 to 6.x (n-1 of latest
> version) without any intermediary steps or do we have to  update to
> intermediary first and then move to the n-1 version of 6.9?
>
> Kindly send us the detailed guidance!
>
> --
> Thanks & Regards,
> Akash Karki
>
>
> Save Nature to Save yourself :)
>


-- 
Thanks & Regards,
Akash Karki
UK Hawkeye Team
Slack: #uk-monitoring
Confluence: UK Hawkeye


Save Nature to Save yourself :)

__






___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] IPv6 happy eyeball on dualstack host

2024-06-05 Thread Alex Rousskov

On 2024-06-05 07:31, sachin gupta wrote:

We are shifting to IPv6 dual stack hosts. As per squid documentation,
IPv6 is enabled by default.


That statement is a bit misleading: IPv6 detection or probing is enabled 
in default Squid builds (i.e. ./configure --enable-ipv6 is the default), 
but whether a Squid instance will actually "enable IPv6" also depends on 
the result of certain startup probes or checks. If those startup checks 
fail, Squid will not send DNS AAAA queries.



As per documentation, based on the DNS response squid will try both IPv4 and 
IPv6 if DNS returns both addresses. 


FWIW, this summary does not quite match modern Squid behavior. The 
difference is _not_ important for your current triage because your Squid 
currently does not even request an IPv6 address from DNS. Once you fix 
that, you should _not_ expect Squid to use both IPv4 and IPv6 TCP/IP 
connections in every test case: Squid may or may not use both address 
families, depending on various runtime factors that affect Squid's Happy 
Eyeballs algorithm (e.g., see happy_eyeballs_connect_timeout directive).
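The family-preference behavior above can be sketched as a connection race with a grace period, loosely analogous to happy_eyeballs_connect_timeout. This is a minimal illustration, not Squid's actual algorithm (which tracks far more per-destination state); `connect` is an injected callable standing in for a real TCP connect:

```python
import concurrent.futures

def happy_eyeballs(primary, fallback, connect, grace=0.25):
    """Race two connection attempts: `primary` gets a `grace`-second head
    start; if it has not succeeded by then (or fails outright), `fallback`
    is started too and the first successful attempt wins. Sketch only."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        first = pool.submit(connect, primary)
        try:
            return first.result(timeout=grace)  # primary won within grace
        except concurrent.futures.TimeoutError:
            pass                                # too slow: start the race
        except Exception:
            return pool.submit(connect, fallback).result()  # primary failed
        second = pool.submit(connect, fallback)
        pending = {first, second}
        while pending:
            done, pending = concurrent.futures.wait(
                pending, return_when=concurrent.futures.FIRST_COMPLETED)
            for f in done:
                if f.exception() is None:
                    return f.result()           # first success wins
        raise ConnectionError("both connection attempts failed")
```

With a slow IPv6 attempt and a fast IPv4 one, the fallback wins the race; with a healthy IPv6 path, the fallback is never started -- which is why a given test case may legitimately show only one address family in use.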




But I see that squid is only getting IPv4 address


To be more precise, your Squid does not send a DNS AAAA query after 
sending a DNS A query (no idnsSendSlaveQuery line after idnsALookup 
in your cache.log). That fact suggests that your Squid runs with 
disabled IPv6. I suggest the following triage steps:


1. Examine "/path/to/your/executable/squid -v" output to make sure your 
Squid executable is _not_ built with --disable-ipv6.


2. Examine level-1 cache.log for startup BCP 177 warnings like this one:
   WARNING: BCP 177 violation. Detected non-functional IPv6 loopback

3. Examine _early_ level-2 startup ProbeTransport messages. For example:
   $ your/squid -f your.squid.conf -N -X -d9 2>&1 | grep ProbeTransport
ProbeTransport: Detected IPv6 hybrid or v4-mapping stack...
ProbeTransport: Detected functional IPv6 loopback ...
ProbeTransport: IPv6 transport Enabled
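Step 1 is easy to automate when checking a fleet of hosts: parse the "configure options:" line that `squid -v` prints. A small sketch -- the sample output in the test is hypothetical, but follows the usual format of that line:

```python
def configure_options(squid_v_output):
    """Extract ./configure flags from captured `squid -v` output."""
    for line in squid_v_output.splitlines():
        line = line.strip()
        if line.startswith("configure options:"):
            # Flags are usually printed single-quoted; strip the quotes.
            return [tok.strip("'") for tok in line.split(":", 1)[1].split()]
    return []

def built_without_ipv6(squid_v_output):
    # Triage step 1: was this binary built with --disable-ipv6?
    return "--disable-ipv6" in configure_options(squid_v_output)
```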


Someday, somebody will (a) completely remove --disable-ipv6 and (b) 
improve startup probing code to make steps 1 and 3 completely 
unnecessary. We have recently done a couple of baby steps towards (a).



HTH,

Alex.


though with the dig command I can see the IPv6 address as well. 
Also from the same host, I am able to make a curl request to google using IPv6.


DNS logs for squid

24/06/05 10:41:54.953 kid1| 5,4| AsyncCallQueue.cc(59) fireNext: 
entering helperHandleRead(conn4 local=[::] remote=[::] FD 13 flags=1, 
data=0x55c87a45bb38, size=5, buf=0x55c87a45bd60)


2024/06/05 10:41:54.953 kid1| 5,4| AsyncCall.cc(41) make: make call 
helperHandleRead [call4]


2024/06/05 10:41:54.953 kid1| 78,3| dns_internal.cc(1792) idnsALookup: 
idnsALookup: buf is 32 bytes for www.google.com , 
id = 0xe006


2024/06/05 10:41:54.953 kid1| 5,4| AsyncCall.cc(29) AsyncCall: The 
AsyncCall helperHandleRead constructed, this=0x55c87a9301e0 [call89]


2024/06/05 10:41:54.953 kid1| 5,5| Read.cc(58) comm_read_base: 
comm_read, queueing read for conn4 local=[::] remote=[::] FD 13 flags=1; 
asynCall 0x55c87a9301e0*1


2024/06/05 10:41:54.954 kid1| 5,5| ModEpoll.cc(116) SetSelect: FD 13, 
type=1, handler=1, client_data=0x7f183475a700, timeout=0


2024/06/05 10:41:54.954 kid1| 5,4| AsyncCallQueue.cc(61) fireNext: 
leaving helperHandleRead(conn4 local=[::] remote=[::] FD 13 flags=1, 
data=0x55c87a45bb38, size=5, buf=0x55c87a45bd60)


2024/06/05 10:41:54.955 kid1| 78,3| dns_internal.cc(1318) idnsRead: 
idnsRead: starting with FD 11


2024/06/05 10:41:54.955 kid1| 5,5| ModEpoll.cc(116) SetSelect: FD 11, 
type=1, handler=1, client_data=0, timeout=0


2024/06/05 10:41:54.955 kid1| 78,3| dns_internal.cc(1364) idnsRead: 
idnsRead: FD 11: received 48 bytes from 10.0.32.2:53 


2024/06/05 10:41:54.955 kid1| 78,3| dns_internal.cc(1171) idnsGrokReply: 
idnsGrokReply: QID 0xe006, 1 answers


2024/06/05 10:41:54.955 kid1| 5,5| Connection.cc(99) cloneProfile: 
0x55c87a944210 made conn56 local=0.0.0.0 remote=142.251.215.228:80 
 HIER_DIRECT flags=1


2024/06/05 10:41:54.955 kid1| 5,5| Connection.cc(99) cloneProfile: 
0x55c87a944830 made conn57 local=0.0.0.0 remote=142.251.215.228:80 
 HIER_DIRECT flags=1


2024/06/05 10:41:54.955 kid1| 5,3| ConnOpener.cc(43) ConnOpener: will 
connect to conn57 local=0.0.0.0 remote=142.251.215.228:80 
 HIER_DIRECT flags=1 with 15 timeout


2024/06/05 10:41:54.955 kid1| 5,5| comm.cc(428) comm_init_opened: conn58 
local=0.0.0.0 remote=[::] FD 16 flags=1 is a new socket


2024/06/05 10:41:54.955 kid1| 5,4| AsyncCall.cc(29) AsyncCall: The 
AsyncCall Comm::ConnOpener::earlyAbort constructed, this=0x55c87a944cd0 
[call95]


2024/06/05 10:41:54.955 kid1| 5,5| comm.cc(1004) comm_add_close_handler: 
comm_add_close_handler: FD 16, AsyncCall=0x55c87a944cd0*1


2024/06/05 10:41:54.955 kid1| 5,4| 

[squid-users] IPv6 happy eyeball on dualstack host

2024-06-05 Thread sachin gupta
Hi

We are shifting to IPv6 dual stack hosts. As per squid documentation, IPv6 is
enabled by default. I tried a request on www.google.com, which has both IPv4
and IPv6 addresses. As per documentation, based on the DNS response squid will
try both IPv4 and IPv6 if DNS returns both addresses. But I see that squid is
only getting the IPv4 address, though with the dig command I can see the IPv6
address as well. Also from the same host, I am able to make a curl request to
google using IPv6.

DNS logs for squid

24/06/05 10:41:54.953 kid1| 5,4| AsyncCallQueue.cc(59) fireNext: entering
helperHandleRead(conn4 local=[::] remote=[::] FD 13 flags=1,
data=0x55c87a45bb38, size=5, buf=0x55c87a45bd60)

2024/06/05 10:41:54.953 kid1| 5,4| AsyncCall.cc(41) make: make call
helperHandleRead [call4]

2024/06/05 10:41:54.953 kid1| 78,3| dns_internal.cc(1792) idnsALookup:
idnsALookup: buf is 32 bytes for www.google.com, id = 0xe006

2024/06/05 10:41:54.953 kid1| 5,4| AsyncCall.cc(29) AsyncCall: The
AsyncCall helperHandleRead constructed, this=0x55c87a9301e0 [call89]

2024/06/05 10:41:54.953 kid1| 5,5| Read.cc(58) comm_read_base: comm_read,
queueing read for conn4 local=[::] remote=[::] FD 13 flags=1; asynCall
0x55c87a9301e0*1

2024/06/05 10:41:54.954 kid1| 5,5| ModEpoll.cc(116) SetSelect: FD 13,
type=1, handler=1, client_data=0x7f183475a700, timeout=0

2024/06/05 10:41:54.954 kid1| 5,4| AsyncCallQueue.cc(61) fireNext: leaving
helperHandleRead(conn4 local=[::] remote=[::] FD 13 flags=1,
data=0x55c87a45bb38, size=5, buf=0x55c87a45bd60)

2024/06/05 10:41:54.955 kid1| 78,3| dns_internal.cc(1318) idnsRead:
idnsRead: starting with FD 11

2024/06/05 10:41:54.955 kid1| 5,5| ModEpoll.cc(116) SetSelect: FD 11,
type=1, handler=1, client_data=0, timeout=0

2024/06/05 10:41:54.955 kid1| 78,3| dns_internal.cc(1364) idnsRead:
idnsRead: FD 11: received 48 bytes from 10.0.32.2:53

2024/06/05 10:41:54.955 kid1| 78,3| dns_internal.cc(1171) idnsGrokReply:
idnsGrokReply: QID 0xe006, 1 answers

2024/06/05 10:41:54.955 kid1| 5,5| Connection.cc(99) cloneProfile:
0x55c87a944210 made conn56 local=0.0.0.0 remote=142.251.215.228:80
HIER_DIRECT flags=1

2024/06/05 10:41:54.955 kid1| 5,5| Connection.cc(99) cloneProfile:
0x55c87a944830 made conn57 local=0.0.0.0 remote=142.251.215.228:80
HIER_DIRECT flags=1

2024/06/05 10:41:54.955 kid1| 5,3| ConnOpener.cc(43) ConnOpener: will
connect to conn57 local=0.0.0.0 remote=142.251.215.228:80 HIER_DIRECT
flags=1 with 15 timeout

2024/06/05 10:41:54.955 kid1| 5,5| comm.cc(428) comm_init_opened: conn58
local=0.0.0.0 remote=[::] FD 16 flags=1 is a new socket

2024/06/05 10:41:54.955 kid1| 5,4| AsyncCall.cc(29) AsyncCall: The
AsyncCall Comm::ConnOpener::earlyAbort constructed, this=0x55c87a944cd0
[call95]

2024/06/05 10:41:54.955 kid1| 5,5| comm.cc(1004) comm_add_close_handler:
comm_add_close_handler: FD 16, AsyncCall=0x55c87a944cd0*1

2024/06/05 10:41:54.955 kid1| 5,4| AsyncCall.cc(29) AsyncCall: The
AsyncCall Comm::ConnOpener::timeout constructed, this=0x55c87a944d70
[call96]


Dig Output


dig www.google.com AAAA


; <<>> DiG 9.16.23-RH <<>> www.google.com AAAA

;; global options: +cmd

;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27477

;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1


;; OPT PSEUDOSECTION:

; EDNS: version: 0, flags:; udp: 4096

;; QUESTION SECTION:

;www.google.com. IN AAAA


;; ANSWER SECTION:

www.google.com. 237 IN AAAA 2607:f8b0:400a:804::2004


;; Query time: 0 msec

;; SERVER: 10.0.32.2#53(10.0.32.2)


Can you please help and let me know if I am missing anything.


Regards

Sachin
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] urlfilterdb.com

2024-06-01 Thread Marcus Kool

I am not :-)

On 01/06/2024 06:24, Jonathan Lee wrote:

Marcus, are you the same guy that does the pfSense Squid GUI package 
interface code?
Sent from my iPhone


On May 30, 2024, at 01:38, Marcus Kool  wrote:

Not sure if this message was meant for the Squid mailing list but for those 
who are interested, the DNS provider had an issue with DNSSEC resigning and all 
is well now.

Marcus



On 28/05/2024 15:23, Anton Kornexl wrote:
Hello,

For two days the domain urlfilterdb.com has not resolved to an IP. We get no 
updates to the urlfilter DB and the homepage can't be opened.

Does someone know the reason?

Kind regards

Anton


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] urlfilterdb.com

2024-05-31 Thread Jonathan Lee
Marcus, are you the same guy that does the pfSense Squid GUI package 
interface code?
Sent from my iPhone

> On May 30, 2024, at 01:38, Marcus Kool  wrote:
> 
> Not sure if this message was meant for the Squid mailing list but for those 
> who are interested, the DNS provider had an issue with DNSSEC resigning and 
> all is well now.
> 
> Marcus
> 
> 
>> On 28/05/2024 15:23, Anton Kornexl wrote:
>> Hello,
>> 
>> For two days the domain urlfilterdb.com has not resolved to an IP. We get 
>> no updates to the urlfilter DB and the homepage can't be opened.
>> 
>> Does someone know the reason?
>> 
>> Kind regards
>> 
>> Anton
>> 
>> 
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> https://lists.squid-cache.org/listinfo/squid-users
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Validation of IP address for SSL spliced connections

2024-05-30 Thread Alex Rousskov

On 2024-05-30 02:30, Rik Theys wrote:

On 5/29/24 11:31 PM, Alex Rousskov wrote:

On 2024-05-29 17:06, Rik Theys wrote:

On 5/29/24 5:29 PM, Alex Rousskov wrote:

On 2024-05-29 05:01, Rik Theys wrote:
squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


That observation matches my reading of Squid Host header forgery 
detection code which says "we do not yet handle CONNECT tunnels 
well, so ignore for them". To validate that theory, use 
"debug_options ALL,3" and look for "SECURITY ALERT: Host header 
forgery detected" messages in cache.log.


I've enabled this debug option, but I never see the security alert in 
the logs. Maybe it was introduced in more recent versions? I'm 
currently using Squid 5.5 that comes with Rocky Linux 9.4.


The code logging "SECURITY ALERT: Host header forgery detected" 
messages is present in v5.5, but perhaps it is not triggered in that 
version (or even in modern/supported Squids) when I expect it to be 
triggered. Unfortunately, there are too many variables for me to 
predict what exactly went wrong in your particular test case without 
doing a lot more work (and I cannot volunteer to do that work right now).


Looking at https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery, 
it always seems to mention the Host header. It has no mention of 
performing the same checks for the SNI value. Since we're peeking at the 
request, we can't see the actual Host header being sent.


As Amos has explained, SslBump at step2 is supposed to relay TLS Client 
Hello information via fake CONNECT request headers. SNI should go into 
CONNECT Host header and CONNECT target pseudo-header. That fake CONNECT 
request should then be checked for forgery.
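The intent of that forgery check can be sketched in a few lines: the destination IP the client asked to CONNECT to must be among the addresses the SNI name resolves to. This is an illustration of the idea only, not Squid's implementation -- `resolve` is an injected callable (real code might wrap socket.getaddrinfo), and the real check also has to cope with DNS TTLs and CDN rotation, which is exactly what makes it hard:

```python
def forgery_check(sni_host, intended_ip, resolve):
    """Return True if `intended_ip` is a currently-valid address for
    `sni_host`. On mismatch, Squid would log
    "SECURITY ALERT: Host header forgery detected"."""
    try:
        addresses = set(resolve(sni_host))
    except OSError:
        return False  # unresolvable SNI name: treat as forged
    return intended_ip in addresses
```

A resolver snapshot taken a moment after the client's own lookup can legitimately differ, so false positives are an inherent risk of this approach.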


Whether all of the above actually happens is an open question. I bet a 
short answer is "no". I am not just being extra cautious here based on 
overall poor SslBump code quality! I believe there are "real bugs" on 
that code path because we have fixed some of them (and I hope to find 
the time to post a polished version of those fixes for the official 
review in the foreseeable future). For an example that fuels my 
concerns, see the following unofficial commit message:

https://github.com/measurement-factory/squid/commit/462aedcc


I believe that for my use-case (only splice certain domains and prevent 
connecting to a wrong IP address), there's currently no solution then.


I suspect that there is currently no solution that does not involve 
writing complex external ACL helpers or complex Squid code fixes.



I guess that explains why, if I add a "%ssl::" logformat code for the access log, the field is always "-"?


It may explain that, but other problems may lead to the same "no 
certificate" result as well, of course. You can kind of check by using 
stare/bump instead of peek/splice -- if you see certificate details 
logged in that bumping test, then it is more likely that Squid just 
does not get a plain text certificate in peeking configurations.


I've updated the configuration to use stare/bump instead. The field is 
then indeed added to the log file. A curl request that forces the 
connection to a different IP address then also fails because the 
certificate isn't valid for the name. There's no mention of the Host 
header not matching the IP address, but I assume that check comes after 
the certificate check then.


In most cases, the forgery check should happen before the certificate 
check. I suspect that it does not happen at all in your test case.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] can't explain 403 denied for authenticated user

2024-05-30 Thread Amos Jeffries

On 25/05/24 07:28, Kevin wrote:

Hi,

We have 2 external ACLs that take a request's data (IP, authenticated 
username, URL, user-agent, etc) and uses that information to determine 
whether a user or host should be permitted to access that URL.   It 
almost always works well, but we have a recurring occasional issue that 
I can't figure out.


When it occurs, it's always around 4AM.   This particular request occurs 
often - averages about once a second throughout the day.


What we see is a "403 forbidden" for a (should be) permitted site from 
an authenticated user from the same IP/user and to the same site that 
gets a "202 connection established" every other time.


Is 4am maybe the time when your auth system refreshes all nonces?
 - thus making any currently in-use by the clients invalid until they 
re-auth. You might see a mix of 403/401/407 in a bunch at such times.



Or maybe in a similar style one/some of the clients is broken and fails 
to update its nonce before it expires at 4am?
 - looking at which client agent and IP were getting the 403 and/or the 
nonce which received 403 will give you hints about this possibility.



Or your network router(s) do garbage collection and terminate 
long-running connections to free up TCP resources?

 - thus forcing a lot of client re-connects at 4am, which may:
 a) overload the auth system/helper, or
 b) break a transaction that included nonce update for clients - 
resulting in their next request being invalid nonce.



Or maybe you have log processing software that does "squid -k restart" 
instead of the proper "squid -k rotate" to get access to the log files?


Or maybe your auth system has a limit on how large nonce-count can become?
 - I notice that the working request has 0x2F uses and the forbidden 
has 0x35 (suspiciously close to 50 in decimal)
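Decoding those Digest nc (nonce-count) values -- which are hexadecimal per the Digest scheme -- shows why a ~50-use cap is a plausible suspect:

```python
working_nc = int("2f", 16)    # nc seen on the accepted request
forbidden_nc = int("35", 16)  # nc seen on the 403 request
# 0x2f = 47 uses and 0x35 = 53 uses -- both sit right around a 50-use
# nonce-count limit, consistent with the guess above.
print(working_nc, forbidden_nc)  # → 47 53
```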







The difference I see in the logs:  though all the digest auth info looks 
okay, the %un field in the log for the usual (successful) request is the 
authenticated username, while in the failed request, it's "-".   So 
though there hasn't been an authentication error or "authentication 
required" in the log - and the username is in the authentication details 
in the log entry -  it seems like squid isn't recognizing that username 
as %un.



Be aware that a properly behaving client will *not* send credentials 
until they are requested, after which it should *always* send 
credentials on that same connection (even if they are not requested 
explicitly).


That means some requests on a multiplexed/pipelined/keep-alive connection 
MAY arrive with credentials and be accepted (2xx) or denied (403) without 
authentication having occurred, entirely due to your *_access directives 
sequence. In these cases the log will show auth headers but no value for 
%un and/or %ul.





My squid.conf first tests a request to see if an unauthenticated request 
from a particular host is permitted.  That external ACL doesn't take a 
username as an argument.   If that external ACL passes, the request is 
allowed.




Please *show* the config lines rather than describing what you *think* 
they do. Their exact ordering matters *a lot*.


 Obfuscation of sensitive details is okay/expected so long as you make 
it easy for us to tell that value A and value B are different.



FWIW; if your config file *actually* contains only what you described, 
it is missing a number of things necessary for a safe and secure proxy. 
A look at your full config (without comments or empty lines) will help 
us point out any unnoticed issues for you to consider fixing.




The next line in squid.conf is

acl auth_users proxy_auth REQUIRED



FYI the above just means that Squid is using authentication. It says 
nothing about when the authentication will be (or not be) performed.



... and after that, the external ACL that takes the username as well as 
the other info.





HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Validation of IP address for SSL spliced connections

2024-05-30 Thread Amos Jeffries

On 30/05/24 18:30, Rik Theys wrote:

Hi,

On 5/29/24 11:31 PM, Alex Rousskov wrote:

On 2024-05-29 17:06, Rik Theys wrote:

On 5/29/24 5:29 PM, Alex Rousskov wrote:

On 2024-05-29 05:01, Rik Theys wrote:



squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


That observation matches my reading of Squid Host header forgery 
detection code which says "we do not yet handle CONNECT tunnels 
well, so ignore for them". To validate that theory, use 
"debug_options ALL,3" and look for "SECURITY ALERT: Host header 
forgery detected" messages in cache.log.


I've enabled this debug option, but I never see the security alert in 
the logs. Maybe it was introduced in more recent versions? I'm 
currently using Squid 5.5 that comes with Rocky Linux 9.4.


The code logging "SECURITY ALERT: Host header forgery detected" 
messages is present in v5.5, but perhaps it is not triggered in that 
version (or even in modern/supported Squids) when I expect it to be 
triggered. Unfortunately, there are too many variables for me to 
predict what exactly went wrong in your particular test case without 
doing a lot more work (and I cannot volunteer to do that work right now).


Looking at https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery, 
it always seems to mention the Host header. It has no mention of 
performing the same checks for the SNI value. Since we're peeking at the 
request, we can't see the actual Host header being sent.




FYI, The SSL-Bump feature uses a CONNECT tunnel at the HTTP layer to 
transfer the HTTPS (or encrypted non-HTTPS) content through Squid. The 
SNI value, cert altSubjectName, or raw-IP (whichever most-trusted is 
available) received from peek/bump gets used as the Host header on that 
internal CONNECT tunnel.


The Host header forgery check at HTTP layer is performed on that 
HTTP-level CONNECT request regardless of whether a specific SNI-vs-IP 
check was done by the TLS logic. Ideally both layers would do it, but 
SSL-Bump permutations/complexity makes that hard.




And indeed: if I perform the same test for HTTP traffic, I do see the 
error message:


curl http://wordpress.org --connect-to wordpress.org:80:8.8.8.8:80


I believe that for my use-case (only splice certain domains and prevent 
connecting to a wrong IP address), there's currently no solution then. 
Squid would also have to perform a similar check as the Host check for 
the SNI information. Maybe I can perform the same function with an 
external acl as you've mentioned. I will look into that later. Thanks 
for your time.



IIRC there is at least one SSL-Bump permutation which does server name 
vs IP validation (in a way, not explicitly). But that particular code 
path is not always taken and the SSL-Bump logic does not go out of its 
way to lookup missing details. So likely you are just not encountering 
the rare case that SNI gets verified.




HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] urlfilterdb.com

2024-05-30 Thread Marcus Kool

Not sure if this message was meant for the Squid mailing list but for those who 
are interested, the DNS provider had an issue with DNSSEC resigning and all is 
well now.

Marcus


On 28/05/2024 15:23, Anton Kornexl wrote:

Hello,

For two days the domain urlfilterdb.com has not resolved to an IP. We get no 
updates to the urlfilter DB and the homepage can't be opened.

Does someone know the reason?

Kind regards

Anton


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Validation of IP address for SSL spliced connections

2024-05-30 Thread Rik Theys

Hi,

On 5/29/24 11:31 PM, Alex Rousskov wrote:

On 2024-05-29 17:06, Rik Theys wrote:

On 5/29/24 5:29 PM, Alex Rousskov wrote:

On 2024-05-29 05:01, Rik Theys wrote:



squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


That observation matches my reading of Squid Host header forgery 
detection code which says "we do not yet handle CONNECT tunnels 
well, so ignore for them". To validate that theory, use 
"debug_options ALL,3" and look for "SECURITY ALERT: Host header 
forgery detected" messages in cache.log.


I've enabled this debug option, but I never see the security alert in 
the logs. Maybe it was introduced in more recent versions? I'm 
currently using Squid 5.5 that comes with Rocky Linux 9.4.


The code logging "SECURITY ALERT: Host header forgery detected" 
messages is present in v5.5, but perhaps it is not triggered in that 
version (or even in modern/supported Squids) when I expect it to be 
triggered. Unfortunately, there are too many variables for me to 
predict what exactly went wrong in your particular test case without 
doing a lot more work (and I cannot volunteer to do that work right now).


Looking at https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery, 
it always seems to mention the Host header. It has no mention of 
performing the same checks for the SNI value. Since we're peeking at the 
request, we can't see the actual Host header being sent.


And indeed: if I perform the same test for HTTP traffic, I do see the 
error message:


curl http://wordpress.org --connect-to wordpress.org:80:8.8.8.8:80


I believe that for my use-case (only splice certain domains and prevent 
connecting to a wrong IP address), there's currently no solution then. 
Squid would also have to perform a similar check as the Host check for 
the SNI information. Maybe I can perform the same function with an 
external acl as you've mentioned. I will look into that later. Thanks 
for your time.





Looking at the logs, I'm also having problems determining where each 
ssl-bump step is started.


Yes, it is a known problem (even for developers). There are also bugs 
related to step boundaries.



Peeking at the server certificates happens at step3. In many modern 
use cases, server certificates are encrypted, so a _peeking_ Squid 
cannot see them. To validate, Squid has to bump the tunnel 
(supported today but problematic for other reasons) or be enhanced 
to use out-of-band validation tricks (that come with their own set 
of problems).


I guess that explains why, if I add a "%ssl::" certificate logformat code to the access log, the field is always "-"?


It may explain that, but other problems may lead to the same "no 
certificate" result as well, of course. You can kind of check by using 
stare/bump instead of peek/splice -- if you see certificate details 
logged in that bumping test, then it is more likely that Squid just 
does not get a plain text certificate in peeking configurations.


I've updated the configuration to use stare/bump instead. The field is 
then indeed added to the log file. A curl request that forces the 
connection to a different IP address then also fails because the 
certificate isn't valid for the name. There's no mention of the Host 
header not matching the IP address, but I assume that check comes after 
the certificate check then.


Regards,

Rik


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Validation of IP address for SSL spliced connections

2024-05-29 Thread Alex Rousskov

On 2024-05-29 17:06, Rik Theys wrote:

On 5/29/24 5:29 PM, Alex Rousskov wrote:

On 2024-05-29 05:01, Rik Theys wrote:

acl allowed_clients src "/etc/squid/allowed_clients"
acl allowed_domains dstdomain "/etc/squid/allowed_domains"



http_access allow allowed_clients allowed_domains
http_access allow allowed_clients CONNECT
http_access deny all


Please note that the second http_access rule in the above 
configuration allows CONNECT tunnels to prohibited domains (i.e. 
domains that do not match allowed_domains). Consider restricting your 
"allow...CONNECT" rule to step1. For example:


    http_access allow allowed_clients step1 CONNECT

Thanks, I've updated my configuration.



Please do test any suggested changes. There are too many variables here 
for me to guarantee that a particular set of http_access and ssl_bump 
rules works as expected.



squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


That observation matches my reading of Squid Host header forgery 
detection code which says "we do not yet handle CONNECT tunnels well, 
so ignore for them". To validate that theory, use "debug_options 
ALL,3" and look for "SECURITY ALERT: Host header forgery detected" 
messages in cache.log.


I've enabled this debug option, but I never see the security alert in 
the logs. Maybe it was introduced in more recent versions? I'm currently 
using Squid 5.5 that comes with Rocky Linux 9.4.


The code logging "SECURITY ALERT: Host header forgery detected" messages 
is present in v5.5, but perhaps it is not triggered in that version (or 
even in modern/supported Squids) when I expect it to be triggered. 
Unfortunately, there are too many variables for me to predict what 
exactly went wrong in your particular test case without doing a lot more 
work (and I cannot volunteer to do that work right now).



Looking at the logs, I'm also having problems determining where each 
ssl-bump step is started.


Yes, it is a known problem (even for developers). There are also bugs 
related to step boundaries.



Peeking at the server certificates happens at step3. In many modern 
use cases, server certificates are encrypted, so a _peeking_ Squid 
cannot see them. To validate, Squid has to bump the tunnel (supported 
today but problematic for other reasons) or be enhanced to use 
out-of-band validation tricks (that come with their own set of problems).


I guess that explains why, if I add a "%ssl::" certificate logformat code to the access log, the field is always "-"?


It may explain that, but other problems may lead to the same "no 
certificate" result as well, of course. You can kind of check by using 
stare/bump instead of peek/splice -- if you see certificate details 
logged in that bumping test, then it is more likely that Squid just does 
not get a plain text certificate in peeking configurations.



Is there a way to configure squid to validate that the server 
certificate is valid for the host specified in the SNI header?


IIRC, that validation happens automatically in modern Squid versions 
when Squid receives an (unencrypted) server certificate.



Do you happen to known which version of Squid introduced that check?


IIRC, Squid v5.5 has that code.


HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Validation of IP address for SSL spliced connections

2024-05-29 Thread Rik Theys

Hi,

On 5/29/24 5:29 PM, Alex Rousskov wrote:

On 2024-05-29 05:01, Rik Theys wrote:

acl allowed_clients src "/etc/squid/allowed_clients"
acl allowed_domains dstdomain "/etc/squid/allowed_domains"



http_access allow allowed_clients allowed_domains
http_access allow allowed_clients CONNECT
http_access deny all


Please note that the second http_access rule in the above 
configuration allows CONNECT tunnels to prohibited domains (i.e. 
domains that do not match allowed_domains). Consider restricting your 
"allow...CONNECT" rule to step1. For example:


    http_access allow allowed_clients step1 CONNECT

Thanks, I've updated my configuration.



squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


That observation matches my reading of Squid Host header forgery 
detection code which says "we do not yet handle CONNECT tunnels well, 
so ignore for them". To validate that theory, use "debug_options 
ALL,3" and look for "SECURITY ALERT: Host header forgery detected" 
messages in cache.log.


I've enabled this debug option, but I never see the security alert in 
the logs. Maybe it was introduced in more recent versions? I'm currently 
using Squid 5.5 that comes with Rocky Linux 9.4.


Looking at the logs, I'm also having problems determining where each 
ssl-bump step is started.


Please note that in many environments forgery detection does not work 
well (for cases where it is performed) due to clients and Squid seeing 
different sets of IP addresses for the same host name. There are 
numerous complaints about that in squid-users archives.



For example, if I add "wordpress.org" to my allowed_domains list, the 
following request is allowed:


curl -v https://wordpress.org --connect-to wordpress.org:443:8.8.8.8:443

8.8.8.8 is not a valid IP address for wordpress.org. This could be 
used to bypass the restrictions.


Agreed.


Is there an option in squid to make it perform a forward DNS lookup 
for the domain from the SNI information from step1


FYI: SNI comes from step2. step1 looks at TCP/IP client info.


to validate that the IP address we're trying to connect to is 
actually valid for that host? In the example above, a DNS lookup for 
wordpress.org would return 198.143.164.252 as the IP address. This is 
not the IP address we're trying to connect to, so squid should block 
the request.


AFAICT, there is no built-in support for that in current Squid code. 
One could enhance Squid or write an external ACL to perform that kind 
of validation. See above for details/caveats.



Similar question for the server certificate: I've configured the 
'ssl_bump peek step2 https_domains' line so squid can peek at the 
server certificate.


Peeking at the server certificates happens at step3. In many modern 
use cases, server certificates are encrypted, so a _peeking_ Squid 
cannot see them. To validate, Squid has to bump the tunnel (supported 
today but problematic for other reasons) or be enhanced to use 
out-of-band validation tricks (that come with their own set of problems).


I guess that explains why, if I add a "%ssl::" certificate logformat code to the access log, the field is always "-"?





Is there a way to configure squid to validate that the server 
certificate is valid for the host specified in the SNI header?


IIRC, that validation happens automatically in modern Squid versions 
when Squid receives an (unencrypted) server certificate.



Do you happen to known which version of Squid introduced that check?

Regards,

Rik


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Validation of IP address for SSL spliced connections

2024-05-29 Thread Alex Rousskov

On 2024-05-29 05:01, Rik Theys wrote:

acl allowed_clients src "/etc/squid/allowed_clients"
acl allowed_domains dstdomain "/etc/squid/allowed_domains"



http_access allow allowed_clients allowed_domains
http_access allow allowed_clients CONNECT
http_access deny all


Please note that the second http_access rule in the above configuration 
allows CONNECT tunnels to prohibited domains (i.e. domains that do not 
match allowed_domains). Consider restricting your "allow...CONNECT" rule 
to step1. For example:


http_access allow allowed_clients step1 CONNECT


squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


That observation matches my reading of Squid Host header forgery 
detection code which says "we do not yet handle CONNECT tunnels well, so 
ignore for them". To validate that theory, use "debug_options ALL,3" and 
look for "SECURITY ALERT: Host header forgery detected" messages in 
cache.log.


Please note that in many environments forgery detection does not work 
well (for cases where it is performed) due to clients and Squid seeing 
different sets of IP addresses for the same host name. There are 
numerous complaints about that in squid-users archives.



For example, if I add "wordpress.org" to my allowed_domains list, the 
following request is allowed:


curl -v https://wordpress.org --connect-to wordpress.org:443:8.8.8.8:443

8.8.8.8 is not a valid IP address for wordpress.org. This could be used 
to bypass the restrictions.


Agreed.


Is there an option in squid to make it perform a forward DNS lookup for 
the domain from the SNI information from step1


FYI: SNI comes from step2. step1 looks at TCP/IP client info.
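Annotated, the at_step ACLs commonly used with these rules map to roughly the following visibility (an informal summary of squid.conf semantics, not normative):

```
acl step1 at_step SslBump1   # TCP/IP details only: client and original destination IP/port
acl step2 at_step SslBump2   # plus the TLS ClientHello, e.g. the SNI
acl step3 at_step SslBump3   # plus the server handshake, e.g. its certificate (when unencrypted)
```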


to validate that the IP 
address we're trying to connect to is actually valid for that host? In 
the example above, a DNS lookup for wordpress.org would return 
198.143.164.252 as the IP address. This is not the IP address we're 
trying to connect to, so squid should block the request.


AFAICT, there is no built-in support for that in current Squid code. One 
could enhance Squid or write an external ACL to perform that kind of 
validation. See above for details/caveats.
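One way to sketch such an external ACL is a small helper that answers OK only when the destination IP appears in a forward DNS lookup of the SNI name. This is a hypothetical sketch, not official Squid code; it assumes an external_acl_type line passing the SNI and destination IP (e.g. via the %ssl::>sni and %DST format codes, if your Squid version supports them):

```python
#!/usr/bin/env python3
# Hypothetical external ACL helper (sketch only). Assumed squid.conf wiring:
#   external_acl_type sni_check children-max=10 %ssl::>sni %DST \
#       /usr/local/bin/sni_check.py
#   acl sni_ok external sni_check
import socket
import sys

def sni_matches_dst(sni: str, dst_ip: str) -> bool:
    """True when dst_ip appears in the current DNS answer for sni."""
    try:
        addrs = {info[4][0] for info in socket.getaddrinfo(sni, None)}
    except OSError:
        return False  # unresolvable name: fail closed
    return dst_ip in addrs

def handle_line(line: str) -> str:
    """External ACL protocol: one 'SNI DST-IP' request per line -> OK/ERR."""
    parts = line.split()
    if len(parts) != 2:
        return "ERR"
    return "OK" if sni_matches_dst(parts[0], parts[1]) else "ERR"

if __name__ == "__main__":
    for request in sys.stdin:  # Squid writes one lookup per line
        print(handle_line(request), flush=True)
```

As noted above, clients and Squid can see different DNS answers for the same name, so expect spurious ERRs with CDNs and short-TTL records.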



Similar question for the server certificate: I've configured the 
'ssl_bump peek step2 https_domains' line so squid can peek at the server 
certificate.


Peeking at the server certificates happens at step3. In many modern use 
cases, server certificates are encrypted, so a _peeking_ Squid cannot 
see them. To validate, Squid has to bump the tunnel (supported today but 
problematic for other reasons) or be enhanced to use out-of-band 
validation tricks (that come with their own set of problems).



Is there a way to configure squid to validate that the 
server certificate is valid for the host specified in the SNI header?


IIRC, that validation happens automatically in modern Squid versions 
when Squid receives an (unencrypted) server certificate.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] Validation of IP address for SSL spliced connections

2024-05-29 Thread Rik Theys

Hi,

I'm configuring squid as a transparent proxy where local outbound 
traffic is redirect to a local squid process using tproxy.


I would like to limit the domains the host can contact by having an 
allow list. I have the following config file:


--

acl allowed_clients src "/etc/squid/allowed_clients"

acl allowed_domains dstdomain "/etc/squid/allowed_domains"

acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443
acl CONNECT method CONNECT

acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

# Additional access control lists
acl https_domains ssl::server_name "/etc/squid/allowed_domains"

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
http_access allow allowed_clients allowed_domains
http_access allow allowed_clients CONNECT

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
#http_access allow localnet
#http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 3128
http_port 3129 tproxy
https_port 3130 tproxy ssl-bump cert=/etc/squid/cert/local_ca.pem

# SSL bump configuration
ssl_bump peek step1
ssl_bump peek step2 https_domains
ssl_bump splice step3 https_domains
ssl_bump terminate all

--

When the Host header in an intercepted request matches a domain on the 
allowed_domains list, the request is allowed. Otherwise it's denied as 
expected.


But squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


For example, if I add "wordpress.org" to my allowed_domains list, the 
following request is allowed:


curl -v https://wordpress.org --connect-to wordpress.org:443:8.8.8.8:443

8.8.8.8 is not a valid IP address for wordpress.org. This could be used 
to bypass the restrictions.


Is there an option in squid to make it perform a forward DNS lookup for 
the domain from the SNI information from step1 to validate that the IP 
address we're trying to connect to is actually valid for that host? In 
the example above, a DNS lookup for wordpress.org would return 
198.143.164.252 as the IP address. This is not the IP address we're 
trying to connect to, so squid should block the request.


Similar question for the server certificate: I've configured the 
'ssl_bump peek step2 https_domains' line so squid can peek at the server 
certificate. Is there a way to configure squid to validate that the 
server certificate is valid for the host specified in the SNI header?



Regards,

Rik
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] urlfilterdb.com

2024-05-28 Thread UnveilTech - Support
Hello,

Forget our answer, wrong reply...

Best regards,
UnveilTech Support Team
www.unveiltech.com
Skype: unveiltechsupportteam

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On behalf 
of UnveilTech - Support
Sent: Tuesday, 28 May 2024 19:12
To: Anton Kornexl ; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] urlfilterdb.com

Hello Anton,

We don't use this domain anymore.

Best regards,
UnveilTech Support Team
www.unveiltech.com
Skype: unveiltechsupportteam

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On behalf 
of Anton Kornexl
Sent: Tuesday, 28 May 2024 16:24
To: squid-users@lists.squid-cache.org
Subject: [squid-users] urlfilterdb.com

Hello,

For two days the domain urlfilterdb.com has not resolved to an IP. We get no 
updates to the urlfilter DB and the homepage can't be opened.

Does someone know the reason?

Kind regards

Anton


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] urlfilterdb.com

2024-05-28 Thread UnveilTech - Support
Hello Anton,

We don't use this domain anymore.

Best regards,
UnveilTech Support Team
www.unveiltech.com
Skype: unveiltechsupportteam

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On behalf 
of Anton Kornexl
Sent: Tuesday, 28 May 2024 16:24
To: squid-users@lists.squid-cache.org
Subject: [squid-users] urlfilterdb.com

Hello,

For two days the domain urlfilterdb.com has not resolved to an IP. We get no 
updates to the urlfilter DB and the homepage can't be opened.

Does someone know the reason?

Kind regards

Anton


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] urlfilterdb.com

2024-05-28 Thread Anton Kornexl

Hello,

For two days the domain urlfilterdb.com has not resolved to an IP. We 
get no updates to the urlfilter DB and the homepage can't be opened.


Does someone know the reason?

Kind regards

Anton


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] can't explain 403 denied for authenticated user

2024-05-24 Thread Kevin
Hi, 

We have 2 external ACLs that take a request's data (IP, authenticated username, 
URL, user-agent, etc.) and use that information to determine whether a user or 
host should be permitted to access that URL. It almost always works well, but 
we have a recurring occasional issue that I can't figure out. 

When it occurs, it's always around 4AM. This particular request occurs often - 
averages about once a second throughout the day. 

What we see is a "403 forbidden" for a (should be) permitted site from an 
authenticated user, from the same IP/user and to the same site that gets a "200 
connection established" every other time. 

The difference I see in the logs: though all the digest auth info looks okay, 
the %un field in the log for the usual (successful) request is the 
authenticated username, while in the failed request it's "-". So although there 
hasn't been an authentication error or "authentication required" in the log - 
and the username is present in the authentication details of the log entry - it 
seems like squid isn't recognizing that username as %un. 

My squid.conf first tests a request to see if an unauthenticated request from a 
particular host is permitted. That external ACL doesn't take a username as an 
argument. If that external ACL passes, the request is allowed. 

The next line in squid.conf is 

acl auth_users proxy_auth REQUIRED 

... and after that, the external ACL that takes the username as well as the 
other info. 

The filter has its own log. For most authenticated requests, we see four 
entries in that log for each of this particular request: 

HostBasedRequest 
HostBasedResponse 
UserBasedRequest 
UserBasedResponse 

When the issue occurs, we see a blip in the pattern when looking at only this 
particular request, like: 


HostBasedRequest 
HostBasedResponse 
HostBasedRequest 
HostBasedResponse 
UserBasedRequest 
UserBasedResponse 

so it looks like the requests that experience the issue don't get past that 
"acl auth_users proxy_auth REQUIRED" directive. 

One more clue: The last morning the issue occurred, we saw 8 instances of "403 
forbidden" responses (out of roughly 5800 that hour). When I looked at the log 
entry for one of them (included below) and looked for other instances of the 
cnonce in the digest auth info, I saw that cnonce in five of the eight log 
entries showing the issue. 

Any ideas or suggestions? Here are two logs: one illustrating the issue and the 
other showing how a typical successful request is logged. 



thanks! 

12.34.56.78 - - [20/May/2024:04:00:00 -0400] "CONNECT abc.defg.net:443 
HTTP/1.1" 403 4276 "-" "Java/21.0.1" TCP_DENIED:HIER_NONE [User-Agent: 
Java/21.0.1\r\nAccept: */*\r\nProxy-Connection: 
keep-alive\r\nProxy-Authorization: Digest username="service_uname", 
realm="squid", nonce="eca7c1c8831a2fc2b0afb3ee95862950", nc=0035, 
uri="abc.defg.net:443", response="88f2110f926ba56c3b1a84a1321a051c", 
algorithm=MD5, cnonce="BHGEEDNBMEPIIMIDLABNJHJNAIEHLKPGGEDFCLHG", 
qop=auth\r\nHost: abc.defg.net:443\r\n] [HTTP/1.1 403 Forbidden\r\nServer: 
squid/5.7\r\nMime-Version: 1.0\r\nDate: Mon, 20 May 2024 08:00:00 
GMT\r\nContent-Type: text/html;charset=utf-8\r\nContent-Length: 
3884\r\nX-Squid-Error: ERR_ACCESS_DENIED 0\r\nVary: 
Accept-Language\r\nContent-Language: en\r\nX-Cache: MISS from 
proxy.acme.com\r\nX-Cache-Lookup: NONE from proxy.acme.com:3128\r\nVia: 1.1 
proxy.acme.com (squid/5.7)\r\nConnection: keep-alive\r\n\r\n] 



12.34.56.78 - service_uname [20/May/2024:10:50:10 -0400] "CONNECT 
abc.defg.net:443 HTTP/1.1" 200 2939 "-" "Java/21.0.1" TCP_TUNNEL:HIER_DIRECT 
[User-Agent: Java/21.0.1\r\nAccept: */*\r\nProxy-Connection: 
keep-alive\r\nProxy-Authorization: Digest username="service_uname", 
realm="squid", nonce="3e30376ba9c74cb016b3d8cfe1bf8a81", nc=002f, 
uri="abc.defg.net:443", response="f8f720d4e2bb9324e1a90bcfafecd1c5", 
algorithm=MD5, cnonce="KDJPEGFBCHFLMCJMAPEBMEGJNKOHOBEDOAEINPKL", 
qop=auth\r\nHost: abc.defg.net:443\r\n] [HTTP/1.1 200 Connection 
established\r\n\r\n] 
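For context, this is roughly how the response field in those Proxy-Authorization headers is derived (RFC 7616 Digest with algorithm=MD5 and qop=auth; the values below are placeholders, not the real credentials):

```python
from hashlib import md5

def h(s: str) -> str:
    return md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce, nc, cnonce):
    ha1 = h(f"{user}:{realm}:{password}")   # credentials hash
    ha2 = h(f"{method}:{uri}")              # method/URI hash
    # qop=auth form: the nonce-count (nc) is hashed into the response
    return h(f"{ha1}:{nonce}:{nc}:{cnonce}:auth:{ha2}")

# Example with placeholder values (not the real credentials from the logs):
resp = digest_response("service_uname", "squid", "secret",
                       "CONNECT", "abc.defg.net:443",
                       "3e30376ba9c74cb016b3d8cfe1bf8a81",
                       "002f", "KDJPEGFBCHFLMCJMAPEBMEGJNKOHOBEDOAEINPKL")
```

Because nc feeds the hash, a client must increment it and recompute the response on every reuse of a nonce; an auth backend that caps nc (as the nc=0x35 observation hints) would start rejecting otherwise well-formed requests once the cap is passed.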



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Simulate connections for tuning squid?

2024-05-24 Thread Alex Rousskov

On 2024-05-24 01:43, Periko Support wrote:


I would like to know if there exists a tool that helps us simulate
connections to squid and helps us tune squid for different scenarios
like small, medium or large networks?


Yes, there are many tools, offering various tradeoffs, including:

* Apache "ab": Not designed for testing proxies but well-known and 
fairly simple.


* Web Polygraph: Designed for testing proxies but has a steep learning 
curve and lacks fresh releases.


* curl/wget/netcat: Not designed for testing performance but well-known 
and very simple.


Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] Simulate connections for tuning squid?

2024-05-23 Thread Periko Support
Hello guys.

I would like to know if there exists a tool that helps us simulate
connections to squid and helps us tune squid for different scenarios
like small, medium or large networks?

Any info I would appreciate, thanks!!!
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Adding an extra header to TLS connection

2024-05-23 Thread Alex Rousskov

On 2024-05-23 13:06, Robin Wood wrote:
I've tried searching for Squid and SslBump and haven't found anything useful 
that works with the current version; that is why I'm asking here. I was 
hoping someone could point me at an example that would definitely work 
with the current version of Squid.


FWIW, most of the basics are covered at
https://wiki.squid-cache.org/Features/SslPeekAndSplice

That page was written for a feature introduced in v3.5, but it is not 
specific to that Squid version.



HTH,

Alex.



 > On May 23, 2024, at 08:49, Alex Rousskov wrote:
 >
 > On 2024-05-22 03:49, Robin Wood wrote:
 >
 >> I'm trying to work out how to add an extra header to a TLS
connection.
 >
 > I assume that you want to add a header field to an HTTP request
or response that is being transmitted inside a TLS connection
between a TLS client (e.g., a user browser) and an HTTPS origin server.
 >
 > Do you control the client that originates that TLS connection (or
its OS/environment) or the origin server? If you do not, then what
you want is impossible -- TLS encryption exists, in part, to prevent
such traffic modifications.
 >
 > If you control the client that originates that TLS connection (or
its OS/environment), then you may be able to, in _some_ cases, add
that header by configuring the client (or its OS/environment) to
trust you as a Certificate Authority, minting your own X509
certificates, and configuring Squid to perform a "man in the middle"
attack on client-server traffic, using your minted certificates. You
can search for Squid SslBump to get more information about this
feature, but the area is full of insurmountable difficulties and
misleading advice. Avoid it if at all possible!
 >
 >
 > HTH,
 >
 > Alex.
 >
 >
 >> I've found information on how to do it on what I think is the
pre-3.5 release, but I can't find any useful information on doing it
on the current version.
 >> Could someone give me an example or point me at some
documentation on how to do it.
 >> Thanks
 >> Robin
 >> ___
 >> squid-users mailing list
 >> squid-users@lists.squid-cache.org

 >> https://lists.squid-cache.org/listinfo/squid-users

 >
 > ___
 > squid-users mailing list
 > squid-users@lists.squid-cache.org

 > https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Adding an extra header to TLS connection

2024-05-23 Thread Robin Wood
On Thu, 23 May 2024 at 18:00, Jonathan Lee  wrote:

> I do use ssl bump again it requires certificates installed on the devices,
> and or some and a splice for the others. You must also add a url list for
> items that must never be intercepted like banks etc. I agree it is not an
> easy task, it took me years to get it to work correctly for what I needed.
> When it does work it works beautifully, you can cache updates and reuse
> them, you can use clam AV on https traffic. It’s not for everyone it will
> make you a wizard level 1000 if you can get it going.
>

Jonathan, can you give me an example of it working?

Oddly, you are replying to a message from Alex that I never received.

Alex, in answer to your questions...

I'm doing some testing against a client's site; they require a custom
header to allow my connections through their WAF. I could try to do this
manually with all my tools, but it would be easier to just have Squid do it
for me and then have the tools use Squid as their proxy. I can tell them to
not do cert checking, or I can use my own CA and import it into the system
store, so that is not a problem.

I've tried searching for Squid and sslbump and not found anything useful
that works with the current version, that is why I'm asking here, I was
hoping someone could point me at an example that would definitely work with
the current version of Squid.

Robin


> Sent from my iPhone
>
> > On May 23, 2024, at 08:49, Alex Rousskov <
> rouss...@measurement-factory.com> wrote:
> >
> > On 2024-05-22 03:49, Robin Wood wrote:
> >
> >> I'm trying to work out how to add an extra header to a TLS connection.
> >
> > I assume that you want to add a header field to an HTTP request or
> response that is being transmitted inside a TLS connection between a TLS
> client (e.g., a user browser) and an HTTPS origin server.
> >
> > Do you control the client that originates that TLS connection (or its
> OS/environment) or the origin server? If you do not, then what you want is
> impossible -- TLS encryption exists, in part, to prevent such traffic
> modifications.
> >
> > If you control the client that originates that TLS connection (or its
> OS/environment), then you may be able to, in _some_ cases, add that header
> by configuring the client (or its OS/environment) to trust you as a
> Certificate Authority, minting your own X509 certificates, and configuring
> Squid to perform a "man in the middle" attack on client-server traffic,
> using your minted certificates. You can search for Squid SslBump to get
> more information about this feature, but the area is full of insurmountable
> difficulties and misleading advice. Avoid it if at all possible!
> >
> >
> > HTH,
> >
> > Alex.
> >
> >
> >> I've found information on how to do it on what I think is the pre-3.5
> release, but I can't find any useful information on doing it on the current
> version.
> >> Could someone give me an example or point me at some documentation on
> how to do it.
> >> Thanks
> >> Robin
> >> ___
> >> squid-users mailing list
> >> squid-users@lists.squid-cache.org
> >> https://lists.squid-cache.org/listinfo/squid-users
> >
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > https://lists.squid-cache.org/listinfo/squid-users
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Adding an extra header to TLS connection

2024-05-23 Thread Jonathan Lee
I do use SSL Bump; again, it requires certificates installed on the devices, 
and/or a splice for the others. You must also add a URL list for items 
that must never be intercepted, like banks etc. I agree it is not an easy task; 
it took me years to get it to work correctly for what I needed. When it does 
work, it works beautifully: you can cache updates and reuse them, and you can 
use ClamAV on HTTPS traffic. It's not for everyone, but it will make you a 
wizard level 1000 if you can get it going.
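A rough sketch of the kind of configuration involved (the port, certificate 
path, and no-bump list file are placeholders; the SslPeekAndSplice wiki page 
has the real details):

 http_port 3128 ssl-bump tls-cert=/etc/squid/myCA.pem generate-host-certificates=on
 acl no_bump ssl::server_name "/etc/squid/never_bump.txt"
 acl step1 at_step SslBump1
 ssl_bump peek step1
 ssl_bump splice no_bump
 ssl_bump bump all

The never_bump.txt list is where the bank and similar domains go, so Squid 
splices (tunnels without decrypting) those connections.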
Sent from my iPhone

> On May 23, 2024, at 08:49, Alex Rousskov  
> wrote:
> 
> On 2024-05-22 03:49, Robin Wood wrote:
> 
>> I'm trying to work out how to add an extra header to a TLS connection.
> 
> I assume that you want to add a header field to an HTTP request or response 
> that is being transmitted inside a TLS connection between a TLS client (e.g., 
> a user browser) and an HTTPS origin server.
> 
> Do you control the client that originates that TLS connection (or its 
> OS/environment) or the origin server? If you do not, then what you want is 
> impossible -- TLS encryption exists, in part, to prevent such traffic 
> modifications.
> 
> If you control the client that originates that TLS connection (or its 
> OS/environment), then you may be able to, in _some_ cases, add that header by 
> configuring the client (or its OS/environment) to trust you as a Certificate 
> Authority, minting your own X509 certificates, and configuring Squid to 
> perform a "man in the middle" attack on client-server traffic, using your 
> minted certificates. You can search for Squid SslBump to get more information 
> about this feature, but the area is full of insurmountable difficulties and 
> misleading advice. Avoid it if at all possible!
> 
> 
> HTH,
> 
> Alex.
> 
> 
>> I've found information on how to do it on what I think is the pre-3.5 
>> release, but I can't find any useful information on doing it on the current 
>> version.
>> Could someone give me an example or point me at some documentation on how to 
>> do it.
>> Thanks
>> Robin
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> https://lists.squid-cache.org/listinfo/squid-users
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Adding an extra header to TLS connection

2024-05-23 Thread Alex Rousskov

On 2024-05-22 03:49, Robin Wood wrote:


I'm trying to work out how to add an extra header to a TLS connection.


I assume that you want to add a header field to an HTTP request or 
response that is being transmitted inside a TLS connection between a TLS 
client (e.g., a user browser) and an HTTPS origin server.


Do you control the client that originates that TLS connection (or its 
OS/environment) or the origin server? If you do not, then what you want 
is impossible -- TLS encryption exists, in part, to prevent such traffic 
modifications.


If you control the client that originates that TLS connection (or its 
OS/environment), then you may be able to, in _some_ cases, add that 
header by configuring the client (or its OS/environment) to trust you as 
a Certificate Authority, minting your own X509 certificates, and 
configuring Squid to perform a "man in the middle" attack on 
client-server traffic, using your minted certificates. You can search 
for Squid SslBump to get more information about this feature, but the 
area is full of insurmountable difficulties and misleading advice. Avoid 
it if at all possible!
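
If you do end up bumping the traffic, the header itself can be added with 
Squid's request_header_add directive (available since Squid v3.5). A minimal 
sketch, where the ACL name, header name, and value are placeholders:

 acl header_clients src 203.0.113.0/24
 request_header_add X-Custom-Header "some-token" header_clients

Note the header can only be added to requests Squid can see in plain text, 
i.e. after SslBump decryption for HTTPS traffic.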



HTH,

Alex.


I've found information on how to do it on what I think is the pre-3.5 
release, but I can't find any useful information on doing it on the 
current version.


Could someone give me an example or point me at some documentation on 
how to do it.


Thanks

Robin

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] log_referrer question

2024-05-22 Thread Amos Jeffries

On 22/05/24 07:51, Alex Rousskov wrote:

On 2024-05-21 13:50, Bobby Matznick wrote:
I have been trying to use a combined log format for squid. The below 
line in the squid config is my current attempt.


logformat combined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh


Please do not redefine built-in logformat configurations like "squid" 
and "combined". Name and define your own instead.




For built-in formats do not use logformat directive at all. Just 
configure the log output:


 access_log daemon:/var/log/squid/access.log combined


As Alex said, please do not try to re-define the built-in formats. If 
you must define *a* format with the same/similar details, use a custom 
name for yours.
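
For example, a custom format with the same fields as the built-in "combined" 
could be defined under its own name (the name "mycombined" here is just an 
example):

 logformat mycombined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
 access_log daemon:/var/log/squid/access.log mycombined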




So, checked with squid -v and do not see “—enable-referrer_log” as one 
of the configure options used during install. Would I need to 
reinstall, or is that no longer necessary in version 4.13?


referer_log and the corresponding ./configure options were removed a 
long time ago, probably before v4.13 was released.




Since Squid v3.2 that log has been a built-in logformat. Just configure 
a log like this:


 access_log daemon:/var/log/squid/access.log referrer


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] Adding an extra header to TLS connection

2024-05-22 Thread Robin Wood
Hi
I'm trying to work out how to add an extra header to a TLS connection.

I've found information on how to do it on what I think is the pre-3.5
release, but I can't find any useful information on doing it on the current
version.

Could someone give me an example or point me at some documentation on how
to do it.

Thanks

Robin
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] log_referrer question

2024-05-21 Thread Alex Rousskov

On 2024-05-21 14:47, Bobby Matznick wrote:
To add and maybe clarify what my confusion is, the log entries below 
(hidden internal/external IP’s, domain and username) don’t seem to show 
what I expected, a line marked “referrer”. Am I misunderstanding how 
that should show up in the log?


Kind of: HTTP CONNECT requests normally do not have Referer headers. 
These requests establish a TCP tunnel to an origin server through Squid. 
The "real" requests to origin server are inside that tunnel.


In some cases, it is possible to configure the client and Squid in such 
a way that Squid can look inside that tunnel and find "real" requests, 
but doing so well requires a lot of effort, including becoming a 
Certificate Authority and configuring client to trust certificates 
produced by that Certificate Authority. You can search for SslBump to 
get more information, but the area is full of insurmountable 
difficulties and misleading advice. Avoid it if at all possible.



HTH,

Alex.



--

Message: 1
Date: Tue, 21 May 2024 17:50:49 +
From: Bobby Matznick <bmatzn...@pbandt.bank>
To: "squid-users@lists.squid-cache.org" <squid-users@lists.squid-cache.org>

Subject: [squid-users] log_referrer question
Message-ID: <mw5pr14mb52897188c2ed83596b406151b0...@mw5pr14mb5289.namprd14.prod.outlook.com>

Content-Type: text/plain; charset="utf-8"

I have been trying to use a combined log format for squid. The below 
line in the squid config is my current attempt.


logformat combined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh


It is working, as far as logging the normal stuff I would see before 
having tried to implement referrer. I noticed somewhere that you need to 
build squid with -enable-referrer-log, it was an older version, looked 
like 3.1 and lower, I am using 4.13. So, checked with squid -v and do 
not see "-enable-referrer_log" as one of the configure options used 
during install. Would I need to reinstall, or is that no longer 
necessary in version 4.13? Thanks!!


Bobby


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] log_referrer question

2024-05-21 Thread Alex Rousskov

On 2024-05-21 13:50, Bobby Matznick wrote:
I have been trying to use a combined log format for squid. The below 
line in the squid config is my current attempt.


logformat combined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh


Please do not redefine built-in logformat configurations like "squid" 
and "combined". Name and define your own instead.



It is working, as far as logging the normal stuff I would see before 
having tried to implement referrer. I noticed somewhere that you need to 
build squid with –enable-referrer-log, it was an older version, looked 
like 3.1 and lower, I am using 4.13.


Please upgrade to v6. Squid v4 is not supported by the Squid Project.


So, checked with squid -v and do 
not see “—enable-referrer_log” as one of the configure options used 
during install. Would I need to reinstall, or is that no longer 
necessary in version 4.13?


referer_log and the corresponding ./configure options were removed a 
long time ago, probably before v4.13 was released.


HTH,

Alex.


*From:* squid-users *On Behalf Of* squid-users-requ...@lists.squid-cache.org

*Sent:* Tuesday, April 23, 2024 6:00 AM
*To:* squid-users@lists.squid-cache.org
*Subject:* [External] squid-users Digest, Vol 116, Issue 31



*Caution:*This is an external email and has a suspicious subject or 
content. Please take care when clicking links or opening attachments. 
When in doubt, contact your IT Department


Send squid-users mailing list submissions to
squid-users@lists.squid-cache.org 

To subscribe or unsubscribe via the World Wide Web, visit
https://lists.squid-cache.org/listinfo/squid-users 


or, via email, send a message with subject or body 'help' to
squid-users-requ...@lists.squid-cache.org 



You can reach the person managing the list at
squid-users-ow...@lists.squid-cache.org 



When replying, please edit your Subject line so it is more specific
than "Re: Contents of squid-users digest..."


Today's Topics:

1. Re: Warm cold times (Amos Jeffries)
2. Re: Container Based Issues Lock Down Password and Terminate
SSL (Amos Jeffries)


--

Message: 1
Date: Tue, 23 Apr 2024 19:41:37 +1200
From: Amos Jeffries <squ...@treenet.co.nz>
To: squid-users@lists.squid-cache.org

Subject: Re: [squid-users] Warm cold times
Message-ID: <9d8f4de6-c797-4e70-aaf5-c073f45c3...@treenet.co.nz>

Content-Type: text/plain; charset=UTF-8; format=flowed

On 22/04/24 17:42, Jonathan Lee wrote:
 > Has anyone else taken up the fun challenge of doing windows update 
caching. It is amazing when it works right. It is a complex 
configuration, but it is worth it to see a warm download come down that 
originally took 30 mins instantly to a second client. I didn't know how 
much of the updates are the same across different vendor laptops.

 >

There have been several people over the years.
The collected information is being gathered at


If you would like to check and update the information for the current
Windows 11 and Squid 6, etc. that would be useful.
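
As a taste of what that page covers, the core of the approach is 
refresh_pattern and object-size tuning along these lines (a sketch only; 
the wiki page has the maintained rules, and the patterns change between 
Windows versions):

 range_offset_limit none
 maximum_object_size 6 GB
 refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims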

Wiki updates are now made using github PRs against the repository at
>.





 > Amazing stuff Squid team.
 > I wish I could get some of the Roblox Xbox stuff to cache but it's a 
night to get running with squid in the first place, I had to splice a 
bunch of stuff and also wpad the Xbox system.


FWIW, what I have seen from routing perspective is that Roblox likes to
use custom ports and P2P connections for a lot of things. So no high
expectations there, but anything cacheable is great news.



 >> On Apr 18, 2024, at 23:55, Jonathan Lee wrote:
 >>
 >> Does anyone know the current warm cold download times for dynamic 
cache of windows updates?

 >>
 >> I can say my experience was a massive increase in the warm download 
it was delivered in under a couple mins versus 30 or so to download it 
cold. The warm download was almost instant on the second device. Very 
green energy efficient.

 >>
 >>
 >> Does squid 5.8 or 6 work better on warm delivery?

There is no significant differences AFAIK. They both come down to what
you have configured. That said, the ongoing improvements may make v6
some amount of "better" - even if only trivial.



 >> Is there a way to make 100 percent sure a docker container can?t get 
inside the cache?


For Windows I would expect the only "100% sure" way is to completely
forbid access to the disk where the 

[squid-users] log_referrer question

2024-05-21 Thread Bobby Matznick
To add and maybe clarify what my confusion is, the log entries below (hidden 
internal/external IP's, domain and username) don't seem to show what I 
expected, a line marked "referrer". Am I misunderstanding how that should show 
up in the log? Thanks

1716316179.294  0 ***.***.***.*** TCP_DENIED/407 4048 CONNECT 
cc-api-data.adobe.io:443 - HIER_NONE/- text/html
1716316179.297  0 ***.***.***.***TCP_DENIED/407 4048 CONNECT 
cc-api-data.adobe.io:443 - HIER_NONE/- text/html
1716316179.310  0 ***.***.***.***TCP_DENIED/407 4048 CONNECT 
cc-api-data.adobe.io:443 - HIER_NONE/- text/html
1716316179.313  0 ***.***.***.***TCP_DENIED/407 4112 CONNECT 
ib.adnxs.com:443 - HIER_NONE/- text/html
1716316179.316  0 ***.***.***.***TCP_DENIED/407 4144 CONNECT 
htlb.casalemedia.com:443 - HIER_NONE/- text/html
1716316179.316  0 ***.***.***.***TCP_DENIED/407 4048 CONNECT 
cc-api-data.adobe.io:443 - HIER_NONE/- text/html
1716316179.318  0 ***.***.***.***TCP_DENIED/407 4172 CONNECT 
fastlane.rubiconproject.com:443 - HIER_NONE/- text/html
1716316179.320  0 ***.***.***.***TCP_DENIED/407 4152 CONNECT 
hbopenbid.pubmatic.com:443 - HIER_NONE/- text/html
1716316179.322  20103 ***.***.***.***TCP_TUNNEL/200 3363 CONNECT 
th.bing.com:443 ***\\Username HIER_DIRECT/***.***.***.***-
1716316179.324  0 ***.***.***.***TCP_DENIED/407 4132 CONNECT 
bidder.criteo.com:443 - HIER_NONE/- text/html
1716316179.328  0 ***.***.***.***TCP_DENIED/407 4048 CONNECT 
cc-api-data.adobe.io:443 - HIER_NONE/- text/html
1716316179.331  0 ***.***.***.***TCP_DENIED/407 4048 CONNECT 
cc-api-data.adobe.io:443 - HIER_NONE/- text/html

From: squid-users  On Behalf Of 
squid-users-requ...@lists.squid-cache.org
Sent: Tuesday, May 21, 2024 11:51 AM
To: squid-users@lists.squid-cache.org
Subject: [External] squid-users Digest, Vol 117, Issue 23

Send squid-users mailing list submissions to
squid-users@lists.squid-cache.org

To subscribe or unsubscribe via the World Wide Web, visit
https://lists.squid-cache.org/listinfo/squid-users
or, via email, send a message with subject or body 'help' to
squid-users-requ...@lists.squid-cache.org

You can reach the person managing the list at
squid-users-ow...@lists.squid-cache.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of squid-users digest..."


Today's Topics:

1. log_referrer question (Bobby Matznick)


--

Message: 1
Date: Tue, 21 May 2024 17:50:49 +
From: Bobby Matznick <bmatzn...@pbandt.bank>
To: "squid-users@lists.squid-cache.org" <squid-users@lists.squid-cache.org>
Subject: [squid-users] log_referrer question
Message-ID: <mw5pr14mb52897188c2ed83596b406151b0...@mw5pr14mb5289.namprd14.prod.outlook.com>

Content-Type: text/plain; charset="utf-8"

I have been trying to use a combined log format for squid. The below line in 
the squid config is my current attempt.

logformat combined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh

It is working, as far as logging the normal stuff I would see before having 
tried to implement referrer. I noticed somewhere that you need to build squid 
with -enable-referrer-log, it was an older version, looked like 3.1 and lower, 
I am using 4.13. So, checked with squid -v and do not see 
"-enable-referrer_log" as one of the configure options used during install. 
Would I need to reinstall, or is that no longer necessary in version 4.13? 
Thanks!!

Bobby

From: squid-users <squid-users-boun...@lists.squid-cache.org> On Behalf Of squid-users-requ...@lists.squid-cache.org
Sent: Tuesday, April 23, 2024 6:00 AM
To: squid-users@lists.squid-cache.org
Subject: [External] squid-users Digest, Vol 116, Issue 31

Caution: This is an external email and has a suspicious subject or content. 
Please take care when clicking links or opening attachments. When in doubt, 
contact your IT Department
Send squid-users mailing list submissions to
squid-users@lists.squid-cache.org

To subscribe or unsubscribe via the World Wide Web, visit
https://lists.squid-cache.org/listinfo/squid-users
or, via email, send a message with subject or body 'help' to
