Re: [squid-users] squidclient ERR_ACCESS_DENIED

2024-02-28 Thread Andrea Venturoli

On 2/28/24 12:51, Francesco Chemolli wrote:


Hi Andrea,
   there's https://wiki.squid-cache.org/Features/CacheManager/Index,

although it could probably be more explicit


Hello and thanks.

I had seen that document before posting, but, possibly due to my 
ignorance, I cannot understand how to use it.

For example I see some endpoints listed under the SMP chapter (e.g.
curl http://localhost:8080/squid-internal-mgr/info), but I guess that's 
not a complete list.

Does such a list exist? Where?
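
(If I understand the wiki correctly, the "menu" action should return 
the full list of actions a given build supports, e.g.:

curl http://localhost:8080/squid-internal-mgr/menu

assuming the manager is reachable on localhost:8080.)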

I need to purge some objects from the cache, and I've always used 
something like:

squidclient mgr:objects | grep -i somesite | grep GET | sed "s/.*GET //" | 
xargs -n 1 squidclient -m PURGE


What could be an equivalent using curl/wget?
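
Assuming the PURGE method is permitted by the ACLs (acl Purge method 
PURGE, plus a matching http_access rule), I suppose a rough equivalent 
would be:

curl -s http://localhost:8080/squid-internal-mgr/objects | \
  grep -i somesite | grep GET | sed "s/.*GET //" | \
  xargs -n 1 curl -s -o /dev/null -X PURGE -x localhost:8080

(with localhost:8080 standing in for the real proxy address).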

 bye & Thanks
av.


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squidclient ERR_ACCESS_DENIED

2024-02-27 Thread Andrea Venturoli

On 2/27/24 18:02, Alex Rousskov wrote:

Hello and thanks for answering.



You are suffering from one or several known problems[1,2] related to 
cache manager changes in v6+ code. Without going into complicated 
details, I recommend that you replace deprecated squidclient with curl, 
wget, or another popular client of your choice _and_ then use the URL 
host name (or IP address) and other client configuration parameters that 
"work" in your specific Squid environment. You may need to adjust them 
later, but at least you will have a temporary workaround.


I vaguely remembered squidclient's deprecation (although I searched for 
it and could not find official info on the site).


WRT moving to curl/wget/whatever, is there any documentation I can use?

 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] squidclient ERR_ACCESS_DENIED

2024-02-27 Thread Andrea Venturoli

Hello.

I'm having trouble accessing cachemgr with squidclient.

As a test, I've added the following to my squid.conf as the first 
http_access line:

http_access allow manager


(I know this is dangerous and I've removed it after the test).
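
(A safer variant, I suppose, would have been to restrict manager access 
to a trusted source instead of allowing everyone, something like:

acl mgr_clients src 10.1.2.0/24
http_access allow mgr_clients manager
http_access deny manager

but wide open was enough for the test.)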


Opening "http://10.1.2.39:8080/squid-internal-mgr/info; from a client, I 
see all the stats.

However, squidclient still gets an access denied error:

# squidclient -vv -p 8080 -h 10.1.2.39 mgr:info
verbosity level set to 2
Request:
GET http://10.1.2.39:8080/squid-internal-mgr/info HTTP/1.0
Host: 10.1.2.39:8080
User-Agent: squidclient/6.6
Accept: */*
Connection: close


.
Transport detected: IPv4-only
Resolving 10.1.2.39 ...
Connecting... 10.1.2.39 (10.1.2.39:8080)
Connected to: 10.1.2.39 (10.1.2.39:8080)
Sending HTTP request ... 
done.

HTTP/1.1 403 Forbidden
Server: squid
Mime-Version: 1.0
Date: Tue, 27 Feb 2024 15:33:55 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3691
X-Squid-Error: ERR_ACCESS_DENIED 0
Vary: Accept-Language
Content-Language: en
Cache-Status: proxy2.ventu;fwd=miss;detail=mismatch
Via: 1.1 proxy2.ventu (squid), 1.1 proxy2.ventu (squid)
Cache-Status: proxy2.ventu;fwd=miss;detail=no-cache
Connection: close


This happens regardless of whether I run it on the cache host itself or 
on the same client where the browser works.


In cache.log I see:

2024/02/27 16:34:48 kid1| WARNING: Forwarding loop detected for:
GET /squid-internal-mgr/info HTTP/1.1
Host: proxy2.ventu:8080
User-Agent: squidclient/6.6
Accept: */*
Via: 1.0 proxy2.ventu (squid)
X-Forwarded-For: 10.1.2.18
Cache-Control: max-age=259200
Connection: keep-alive


current master transaction: master2562


Does this mean Squid is connecting to itself as a proxy in order to 
reach itself?
I removed all "*proxy*" env vars and tried running squidclient again, 
but there was no difference.
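
For the record, fetching the manager URL directly with curl (which is 
what the browser does), rather than going through the proxy, does work:

# curl http://10.1.2.39:8080/squid-internal-mgr/info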


Any hint?
Is there a way to get more debugging info from Squid on this?

 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Intercepted connections are not bumped [SOLVED]

2023-12-15 Thread Andrea Venturoli

On 11/27/23 16:59, Andrea Venturoli wrote:


That behaviour is why we typically recommend doing "peek" first


Well, I thought this was what I was doing.

As I said I had:

acl step1 at_step SslBump1
ssl_bump splice !bumphosts !jails
ssl_bump splice splicedom
ssl_bump peek step1
ssl_bump bump all


and I expected "peek step1" would decide what to do first.



However, ssl_bump rules are evaluated in order at each step, so the 
order of directives matters, and I solved it with:

acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice !bumphosts !jails
ssl_bump splice splicedom
ssl_bump bump all




Both orderings seem to work equally well, however, when the proxy is 
used explicitly; the difference only shows up for intercepted connections.

 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Intercepted connections are not bumped

2023-11-27 Thread Andrea Venturoli

On 11/27/23 11:11, Amos Jeffries wrote:


First off, thanks for answering.



For further assistance please also show your http_access and ACL config 
lines. They will be needed for a better analysis of what is going on.


I'll start from here.
It's quite long, but a reduced example is:

acl localnet src 10.1.2.0/24
acl bumphosts src 10.1.2.18
acl SSL_ports port 443
acl SSL_ports port 563 801 3001 8443 19996 19997
acl Safe_ports port 80  # http
acl Safe_ports port 800
acl ftptraffic myportname ftpport
acl fetched_certificate transaction_initiator certificate-fetching
acl splicedom ssl::server_name_regex -i "/usr/local/etc/squid/nobumpsites"
acl step1 at_step SslBump1
ssl_bump splice !bumphosts
ssl_bump splice splicedom
ssl_bump peek step1
ssl_bump bump all
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
adaptation_access service_req deny ftptraffic
adaptation_access service_resp deny ftptraffic
http_access allow localnet
http_access allow localhost


For the sake of an example, let's say I connect from 10.1.2.18 to 
www.google.com.




FYI, Intercepted traffic first gets interpreted as a CONNECT tunnel to 
the TCP dst-IP:port and processed by http_access to see if the client is 
allowed to make that type of connection.


Fine.
Traffic is in fact allowed.



To guess based on the info provided above I suspect that the 
fake-CONNECT raw-IP does not match your "bumphosts" ACL test. Causing 
that "ssl_bump splice !bumphosts" to occur.


Not sure I understand what you mean: is raw-IP the source (in my case 
10.1.2.18) or the destination IP (142.251.209.36)?


"bumphosts" ACLs are local clients (those that SSLBump should be applied 
to): 10.1.2.18 is in this list (in fact it gets SSLBump if explicitly 
using the proxy).




This is what I see in the logs for an intercepted connection (after it's 
closed):



1701100166.601   2203 10.1.2.18 TCP_TUNNEL/500 6622 CONNECT 142.251.209.36:443 
- ORIGINAL_DST/142.251.209.36 -




This is what I see using a proxy-aware application:


1701100243.374    172 10.1.2.18 TCP_MISS/200 49333 GET https://www.google.com/? 
- HIER_DIRECT/142.251.209.36 text/html






That behaviour is why we typically recommend doing "peek" first, then 
the splice checks can be based on whatever TLS SNI value is found.


I don't think it should matter: neither www.google.com nor 
142.251.209.36 are in any ACL.

Or did I understand wrong?
Is this needed for intercepted SSLBump?



I think it worked in the past: has anything changed in this regard 
with Squid 6?



Changed since what version? Over time a lot of small changes can add up 
to large differences.


I first noticed this on 6.4.
Unfortunately I don't remember which version I was using at the time I 
set this up, maybe 5.x, maybe even 4.x.




 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] Intercepted connections are not bumped

2023-11-23 Thread Andrea Venturoli

Hello.

I've got the following config:


...
http_port 8080 ssl-bump cert=/usr/local/etc/squid/proxyCA.pem 
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
https_port 3129 intercept ssl-bump cert=/usr/local/etc/squid/proxyCA.pem 
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
...
acl step1 at_step SslBump1
ssl_bump splice !bumphosts
ssl_bump splice splicedom
ssl_bump peek step1
ssl_bump bump all
...


So I've got port 8080, where proxy-aware clients connect, and port 3129, 
which is fed intercepted HTTPS connections by ipfw.
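
(The ipfw rule is roughly the following, quoted from memory, with em0 
standing in for the actual LAN interface:

ipfw add 100 fwd 127.0.0.1,3129 tcp from 10.1.2.0/24 to any 443 in recv em0
)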


Problem is: if a client connects explicitly via proxy (port 8080) it 
gets SSLBumped; if a client simply connects to its destination https 
port (so directed to 3129) it is tunneled.


Anything wrong in my config?
I think it worked in the past: has anything changed in this regard with 
Squid 6?


 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ftp_port and squidclamav

2021-11-02 Thread Andrea Venturoli



On 10/12/21 16:51, Alex Rousskov wrote:


Squid has a configuration option to work around such adaptation service
deficiencies: force_request_body_continuation. Please see if enabling
that workaround helps in your environment:
http://www.squid-cache.org/Doc/config/force_request_body_continuation/


Thanks, but that didn't work: with "force_request_body_continuation 
allow ftptraffic", I'm able to delete remote files and create remote 
directories, but file uploads still fail.


I'm back to
adaptation_access service_req deny ftptraffic
adaptation_access service_resp deny ftptraffic
which works fine.

 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] How to pass TeamViewer traffic

2021-10-23 Thread Andrea Venturoli



On 10/23/21 18:56, Marcus Kool wrote:

sslbump can be used in peek+splice and peek+bump modes.


Sure.



Depending on what Squid finds in the peek (e.g. a teamviewer FQDN) Squid 
can decide to splice (not interfere with) the connection.


I know.



Perhaps I wasn't clear.
What I was saying is that teamviewer traffic must be spliced, not bumped.

 bye
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] How to pass TeamViewer traffic

2021-10-23 Thread Andrea Venturoli

On 10/22/21 17:24, Alex Rousskov wrote:

I do not know much about TeamViewer, 
...

You do not need SslBump and https_port for this.


AFAIK you *cannot* use SslBump, as TeamViewer pins its certificates.
If someone can prove me wrong, I'd be curious to know how they manage this.

 bye
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ftp_port and squidclamav

2021-10-12 Thread Andrea Venturoli

On 10/12/21 16:51, Alex Rousskov wrote:


I am not sure, but I suspect that you are suffering from your ICAP
service's inability to handle REQMOD transactions with HTTP 100-Continue
semantics, including (but not limited to) FTP STOR requests (translated
into HTTP by Squid).

Squid has a configuration option to work around such adaptation service
deficiencies: force_request_body_continuation. Please see if enabling
that workaround helps in your environment:
http://www.squid-cache.org/Doc/config/force_request_body_continuation/


I'll try and let you know.




P.S. When you sanitized your cache.log, you have stripped lines dumping
message headers. Those lines do not start with the usual
"2021/10/12...|" prefix. Stripping them complicates triage.


Oh, sorry!
I didn't notice them and "grep"ped the time period I was interested in.
Do you want me to extract the logs again?



 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ftp_port and squidclamav

2021-10-12 Thread Andrea Venturoli



On 8/28/21 17:10, Alex Rousskov wrote:

Sorry for taking so long.
Meanwhile I upgraded to Squid 5.0.6, but the problem was not solved.




Reproduce the problem using a single transaction on an otherwise idle
Squid with full debugging enabled and share the corresponding cache.log:
https://wiki.squid-cache.org/SquidFaq/BugReporting#Debugging_a_single_transaction


It's here:
https://www.netfence.it/download/cache.log.bz2




Or, is there any way I can tell Squid to avoid passing FTP traffic
(coming on port 2121) to ICAP (while of course doing that for the rest)?


Yes, the adaptation_access directive controls what traffic goes to your
ICAP services. To match ftp_port traffic, I would give the ftp_port a
name and then try using that name in a myportname ACL. Other ACLs may
also work, but I would start with myportname. If myportname does not
work for ftp_port traffic, it is a Squid bug.


This works.
Thanks!
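
For the archive, the working configuration amounts to naming the 
ftp_port and excluding it from adaptation ("ftpport" is just the label 
I chose):

ftp_port 2121 name=ftpport
acl ftptraffic myportname ftpport
adaptation_access service_req deny ftptraffic
adaptation_access service_resp deny ftptraffic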

 bye
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] ftp_port and squidclamav

2021-08-28 Thread Andrea Venturoli

Hello.

I've got Squid (4.15) configured as an HTTP[S] proxy, with squidclamav:


icap_enable on
icap_send_client_ip on
icap_preview_enable on
icap_preview_size 1024
icap_service service_req reqmod_precache bypass=0 icap://127.0.0.1:1344/squidclamav
adaptation_access service_req allow all
icap_service service_resp respmod_precache bypass=0 icap://127.0.0.1:1344/squidclamav
adaptation_access service_resp allow all


Everything is fine on this side.



Now I'm trying to make it act as an FTP proxy, with:

ftp_port 2121

This works partially: I'm usually able to see remote directories, but 
uploads will fail (timing out on the client side).


If I disable ICAP entirely (commenting out the above lines), the FTP 
proxy works properly.




I'm failing to understand the interaction between the two: even simple 
files fail to upload and I see no signs of ClamAV taking much time to 
scan them.

Is this some known problem?
Any suggestion on how to gain a better understanding?

Or, is there any way I can tell Squid to avoid passing FTP traffic 
(coming on port 2121) to ICAP (while of course doing that for the rest)?


 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid caching webpages now days?

2021-08-01 Thread Andrea Venturoli

On 8/1/21 3:48 AM, Periko Support wrote:


with most of the web sites running under https.


SSL Bumping might help here.
Whether it's worth the hassle, legal, etc... depends on your situation.




Is caching still a good option with Squid?


Generally speaking, I find caching is nowadays mostly irrelevant.

However it can make a huge difference in some edge cases: I had a server 
distributing packages to several groups of machines, and it was behind a 
very slow line.
Downloading each package once rather than several times made the 
difference between minutes and hours.


So YMMV.

 bye
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid "suspending ICAP service for too many failures"

2021-02-01 Thread Andrea Venturoli

On 2/1/21 8:56 AM, Andrea Venturoli wrote:


It could be a network problem.
However, I think that's unlikely (also given the host is monitored and I 
don't see alerts or other signs of such troubles).
While I cannot exclude that completely, I think I should first 
investigate in other directions.


Finally I have some insight: this happens when ClamAV receives a new 
virus definitions database and so reloads.


Notice I'm using 0.103, which "reloads the signature database without 
blocking scanning" (and no, I didn't disable this).
So probably, while that works in theory, the reload slows the system 
enough to cause the timeouts.


I'm now experimenting with increased timeouts and with disabling the 
ICAP failure limit.
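
Concretely, something along these lines (values picked arbitrarily; if 
I read the documentation right, -1 disables the failure limit):

icap_connect_timeout 30 seconds
icap_service_failure_limit -1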


Thanks to all who helped.

 bye
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid "suspending ICAP service for too many failures"

2021-01-31 Thread Andrea Venturoli

On 1/31/21 1:11 AM, Amos Jeffries wrote:


As I said, they live on the same host, so it can't be a network problem.



FYI, that conclusion does not follow. Even on the same host there is a 
full TCP/IP networking stack between Squid and ICAP server doing things 
to the packets. All localhost removes is the potential problems due to 
differences in machine networking stacks.


Network config, firewall rules, packet handling, and/or protocol 
negotiation activities between the software are all still happening that 
may affect the outcome.


Right.
It could be a network problem.
However, I think that's unlikely (also given the host is monitored and I 
don't see alerts or other signs of such troubles).
While I cannot exclude that completely, I think I should first 
investigate in other directions.


 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid "suspending ICAP service for too many failures"

2021-01-30 Thread Andrea Venturoli

On 1/29/21 8:38 PM, Alex Rousskov wrote:


IIRC, you did not disclose timeout suspicions before. This explanation
is news to me, and it eliminates several suspects.


Sorry, I didn't say much, in fact.
I took it for granted that it was C-ICAP that stopped answering; I 
didn't suspect a Squid bug and had no other ideas.





If you are talking about Squid timing out when attempting to establish a
TCP connection with the ICAP server, then this may by as much insight as
you can get from the Squid side.


What I hoped to find in Squid's logs was *what* was being passed to 
C-ICAP when it locked up.

I'll try on the C-ICAP side then.




I do not know much about c-icap, but I would check whether its
configuration or something like crontab results in hourly restarts and
associated loss of connectivity.


AFAIK no.




The network interface or the routing tables might also be reset hourly


They live on the same host.




The ICAP server/service might be running out of descriptors or memory.


I'd expect it to log that, but I'll investigate better.




One potentially useful test is to try to connect to the ICAP server
_while the problem is happening_ using telnet or netcat. When Squid
cannot establish a connection, can you?


I'll try, but it's going to be hard, since this happens for a few 
minutes once a day at most.





Packet captures can tell you whether other Squid-ICAP server connections
were active at the time, whether from-Squid SYN packets were able to
reach the ICAP server, etc.

In other words, basic network troubleshooting steps...


As I said, they live on the same host, so it can't be a network problem.




Higher timeout will delay HTTP client transactions for longer periods of
time, of course. If you want to go down the road of finding workarounds,
then check whether raising that timeout actually helps. It is not yet
clear (to me) whether the connections just need more time to be
established or are simply doomed.


It's not clear to me either, but I suspect so, given the trouble only 
lasts a few minutes.






Same for disabling icap_service_failure_limit?


This is an essential ICAP service (icap_service bypass=off). I assume
there is no backup service -- no adaptation_service_set in play here. If
so, disabling the limit means that fewer HTTP transactions will be
inconvenienced in the long run than if the service were to be suspended.
  Hence, fewer ICAP errors will be delivered to Squid clients.


Agreed.




You can also enable bypass.


I guess this would open the door to an attack: DoS the antivirus 
service, then let something nasty through...




Fixing the problem would be a much better solution, of course.


Sure, I know these are workarounds and I'd rather avoid them, but I'll 
need to consider them as a last resort.




 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid "suspending ICAP service for too many failures"

2021-01-29 Thread Andrea Venturoli

On 1/27/21 6:11 PM, Alex Rousskov wrote:


Enable ICAP debugging and study cache.log for relevant messages,
especially just before the "suspending ICAP service" message shown above.

 debug_options ALL,1 93,7


Thanks a lot.

As expected, I see Squid connections to C-ICAP starting to time out: 
when the number of errors reaches 10, Squid marks the squidclamav 
service as "suspended".


No big surprise. Still I don't get any more insight (Is C-ICAP choking? 
Why? What data triggers this?).




Is it a really bad idea to raise icap_connect_timeout?
Same for disabling icap_service_failure_limit?

Other hints?

 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid "suspending ICAP service for too many failures"

2021-01-27 Thread Andrea Venturoli

Hello.

On a box I manage, Squid occasionally stops for a few minutes, blaming 
a communication error with C-ICAP (running SquidClamAV).


In cache.log I see:

2021/01/04 14:24:24 kid1| suspending ICAP service for too many failures
2021/01/04 14:24:24 kid1| essential ICAP service is suspended: 
icap://127.0.0.1:1344/squidclamav [down,susp,fail11]


This usually happens once a day, always at the same time.
AFAIK there's no particular job running on the server at that time; I 
analyzed squid.log to see whether some client accesses something 
specific at that hour of the day, but came up empty.


Obviously I looked into C-ICAP logs, but, again, found no hint of any 
error or trouble.



Any suggestion on what to do to investigate this?

 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FTP proxy

2020-12-07 Thread Andrea Venturoli

On 12/7/20 4:08 PM, Alex Rousskov wrote:

On 12/7/20 5:03 AM, Andrea Venturoli wrote:


I'm talking about the ports used by the clients to connect to Squid
(besides 21), using passive FTP (i.e. those returned by PASV command).


Just to avoid misunderstanding, "those returned by PASV command" should
be interpreted as "ports returned by Squid to the client in response to
the client PASV command". The PASV command itself does not list ports.


Yes, that's what I meant.
Thanks for clarifying.




When handling a PASV command, Squid creates a listening socket bound to
an ephemeral TCP port selected by the operating system. Ephemeral port
ranges are usually handled by your OS ephemeral ports setting (e.g.,
sysctl net.ipv4.ip_local_port_range).


For the record, since I'm not using Linux, but FreeBSD, I guess that 
would be net.inet.ip.portrange.first/net.inet.ip.portrange.last (or, 
possibly, net.inet.ip.portrange.hifirst/net.inet.ip.portrange.hilast, 
I'd have to check the source).


However those are system-wide settings; I guess there is no equivalent 
of frox.conf's "PassivePorts" setting, then.
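
(For reference, narrowing the system-wide range would look like:

sysctl net.inet.ip.portrange.first=40000
sysctl net.inet.ip.portrange.last=50000

but that affects every application, not just Squid, which is why a 
per-service setting like frox's would have been nicer.)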


Thanks.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FTP proxy

2020-12-07 Thread Andrea Venturoli

On 12/6/20 8:41 PM, Alex Rousskov wrote:


AFAIK, FTP proxy is successfully used in some production environments,
but I bet that most Squid deployments do not use this feature. YMMV.


Thanks.



Is there a way to restrict the port range of the additional connections
(e.g. to 4-5)?


I do not know what connections you are talking about (there are at least
four connections when it comes to a typical proxied FTP transaction).


I'm talking about the ports used by the clients to connect to Squid 
(besides 21), using passive FTP (i.e. those returned by PASV command).


 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FTP proxy

2020-12-06 Thread Andrea Venturoli

On 12/6/20 5:01 PM, Antony Stone wrote:


Oh, so you're in charge of both?


Yes.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FTP proxy

2020-12-06 Thread Andrea Venturoli

On 12/6/20 4:44 PM, Antony Stone wrote:


Where is the firewall, compared to your Squid proxy, in the network?


Squid runs on the firewall itself.




I'm just wondering how you plan to use Squid's native FTP mode to bypass a
firewall, which is therefore presumably blocking FTP...?


It's not blocking FTP for itself, but it's blocking FTP for internal clients.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] FTP proxy

2020-12-06 Thread Andrea Venturoli

Hello.

I'm trying to evaluate FTP proxying with Squid and I have a couple of 
questions.
To be clear, I'm not talking about FTP through HTTP, but about the 
ftp_port option.

I've used frox (http://frox.sourceforge.net/) in the past for this.



I see this feature was introduced in 3.5 as an experimental one; as of 
4.13, is it still experimental, or is it considered stable and dependable?
(For now I'm not interested in logging, interception, etc..., I just 
need to bypass a firewall easily).


Is there a way to restrict the port range of the additional connections 
(e.g. to 4-5)?


 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] (92) Protocol error (TLS code: X509_V_ERR_CERT_HAS_EXPIRED)

2020-06-23 Thread Andrea Venturoli

Hello.

Running Squid 4.11 on FreeBSD 11.3 with SSLBump, for the past few days 
I've had several sites (e.g. https://www.kawasaki.it/) failing with:



The following error was encountered while trying to retrieve the URL: 
https://www.kawasaki.it/

Failed to establish a secure connection to 54.39.161.167

The system returned:

(92) Protocol error (TLS code: X509_V_ERR_CERT_HAS_EXPIRED)

SSL Certificate expired on: May 30 10:48:38 2020 GMT

This proxy and the remote host failed to negotiate a mutually acceptable 
security settings for handling your request. It is possible that the remote 
host does not support secure connections, or the proxy is not satisfied with 
the host security credentials.




When this happens, in cache.log I see:

2020/06/23 15:03:31 kid1| ERROR: negotiating TLS on FD 33: error:14090086:SSL 
routines:ssl3_get_server_certificate:certificate verify failed (1/-1/0)
2020/06/23 15:03:31 kid1| ERROR: negotiating TLS on FD 33: error:14090086:SSL 
routines:ssl3_get_server_certificate:certificate verify failed (1/-1/0)
2020/06/23 15:03:31 kid1| ERROR: negotiating TLS on FD 53: error:14090086:SSL 
routines:ssl3_get_server_certificate:certificate verify failed (1/-1/0)




I know an intermediate certificate expired, but a new one should have 
been published.
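
One check I can think of (assuming Squid uses the ca_root_nss bundle 
from ports) is verifying the chain against that same CA file:

# openssl s_client -connect www.kawasaki.it:443 \
    -servername www.kawasaki.it \
    -CAfile /usr/local/share/certs/ca-root-nss.crt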




What I find strange is that using openssl directly succeeds:


# openssl s_client -connect www.kawasaki.it:https
CONNECTED(0003)
depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global 
Root CA
verify return:1
depth=1 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert CN RSA 
CA G1
verify return:1
depth=0 C = CN, ST = \E7\A6\8F\E5\BB\BA\E7\9C\81, L = 
\E5\8E\A6\E9\97\A8\E5\B8\82, O = 
\E7\BD\91\E5\AE\BF\E7\A7\91\E6\8A\80\E8\82\A1\E4\BB\BD\E6\9C\89\E9\99\90\E5\85\AC\E5\8F\B8\E5\8E\A6\E9\97\A8\E5\88\86\E5\85\AC\E5\8F\B8,
 OU = IT, CN = webssl.chinanetcenter.com
verify return:1
---
Certificate chain
 0 
s:/C=CN/ST=\xE7\xA6\x8F\xE5\xBB\xBA\xE7\x9C\x81/L=\xE5\x8E\xA6\xE9\x97\xA8\xE5\xB8\x82/O=\xE7\xBD\x91\xE5\xAE\xBF\xE7\xA7\x91\xE6\x8A\x80\xE8\x82\xA1\xE4\xBB\xBD\xE6\x9C\x89\xE9\x99\x90\xE5\x85\xAC\xE5\x8F\xB8\xE5\x8E\xA6\xE9\x97\xA8\xE5\x88\x86\xE5\x85\xAC\xE5\x8F\xB8/OU=IT/CN=webssl.chinanetcenter.com
   i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert CN RSA CA G1
 1 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert CN RSA CA G1
   i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Global Root CA
---
Server certificate
-----BEGIN CERTIFICATE-----
[base64 certificate body elided; the message is truncated at this point 
in the archive]

Re: [squid-users] [ext] Squid + ClamAV

2020-03-09 Thread Andrea Venturoli

On 2020-03-09 16:01, Ralf Hildebrandt wrote:


Actually, I don't know :)


Thanks anyway.




In my setup I'm using squid & c-icap with CLAMD. I'm scanning a few
types only:

virus_scan.ScanFileTypes EXECUTABLE ARCHIVE FWS CWS DOCUMENT DATA TEXT


That was an idea I had too, i.e. limiting the scanned types.
By FWS and CWS, do you mean Flash?

I see you don't scan JavaScript: I thought it would be the first thing 
to look into...

Any reasoning behind this?



 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [ext] Squid + ClamAV

2020-03-08 Thread Andrea Venturoli

On 2020-03-06 16:24, Ralf Hildebrandt wrote:

* Andrea Venturoli :

Hello.

Is this the right place to discuss Squid + C-ICAP + SquidClamAV + ClamAV?


What do you need SquidClamAV for?


Interesting question.

I find information on the web scarce, but here (*) it states "In 
practice, configuration with clamd and squidclamav is fastest".


(*) https://wiki.squid-cache.org/ConfigExamples/ContentAdaptation/C-ICAP

Is that wrong? Outdated?
Also, squidclamav allows for whitelists, which I don't see mentioned in 
the other setups.




Do you believe any of the different configurations outlined in that 
document is better?


What do you suggest?
I-CAP + clamd?
I-CAP + libclamav?

Keep in mind I will run clamd anyway for other services.

Or should I ignore that document completely and use something else?

Also, I heard about eCAP, but IIUIC it's still immature. Is that correct?



In any case, are you getting satisfactory performance? Did you need any 
tweak to ClamAV config?




 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid + ClamAV

2020-03-06 Thread Andrea Venturoli

Hello.

Is this the right place to discuss Squid + C-ICAP + SquidClamAV + ClamAV?
Normally I'd look for a specific mailing list, but it seems SquidClamAV 
has none.

If this isn't the right place, can someone give a pointer on where to go?



I setup the whole thing and it's working.
However I often get terrible performance (with ClamAV eating a lot of 
CPU), but find it hard to understand what is being scanned that takes 
so long, and I find the logs of little help.
Also, this does not seem to be always reproducible, since many sites 
will sometimes be very fast and sometimes very slow.


I looked for suggestions on how to tweak ClamAV and/or SquidClamAV 
(e.g. with whitelists), but came up empty.


Any hint?

 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid and DoH

2020-03-01 Thread Andrea Venturoli

On 2020-02-29 10:19, Amos Jeffries wrote:


With ACL that identify the relevant messages:

   acl dns-query-url urlpath_regex ^/dns-query\??
   acl dns-req-message req_header Content-Type ^application/dns-message$

   acl doh_request any-of dns-query-url dns-req-message

   acl doh_reply rep_header Content-Type ^application/dns-message$


Thanks a lot.
I thought maybe there was a specific ready-made keyword, but the above 
is fine.
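
In case it helps others, I assume the ACLs are then applied along these 
lines (request side via http_access, reply side via http_reply_access):

http_access deny doh_request
http_reply_access deny doh_reply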


 bye
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid and DoH

2020-03-01 Thread Andrea Venturoli

On 2020-02-29 14:17, Matus UHLAR - fantomas wrote:


I guess DoH means dns over https and thus needs sslbump enabled.  the easy
but limited way would be to disable connections to publicly available DoH
servers.


Thanks.
Is someone maintaining such a list?

 bye
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid and DoH

2020-02-28 Thread Andrea Venturoli

Hello.

In some corporate environments it might be desirable to have all 
clients use the internal DNS.

This is easily done with firewalls, until DNS-over-HTTPS comes into play.

How does Squid deal with this?
How to block it?

 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid won't download intermediate certificates

2020-01-30 Thread Andrea Venturoli

On 2020-01-30 09:15, i...@schroeffu.ch wrote:

acl fetched_certificate transaction_initiator certificate-fetching
cache allow fetched_certificate
http_access allow fetched_certificate


Thanks!
This is exactly it.

 bye
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid won't download intermediate certificates

2020-01-29 Thread Andrea Venturoli

Hello.

I'm experimenting with SSLBump and I've got a problem: when a client 
visits a site which doesn't provide intermediate SSL certificates, the 
connection fails.
I read Squid 4 should download such certificates itself, however this 
does not succeed.

I see in the logs something like:

1580334345.045  1 - TCP_DENIED/403 3634 GET 
http://secure.globalsign.com/cacert/gsorganizationvalsha2g2r1.crt - HIER_NONE/- 
text/html;charset=utf-8


Seems like an ACL problem.
There is no source IP, but a - (dash): I guess this means the connection 
was originated from Squid itself.


Is there a specific keyword I need to use to allow such connections?
"localhost" doesn't seem to do the trink.

Any help appreciated.

 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] SqStat [was How to catch a big spender ?]

2019-04-05 Thread Andrea Venturoli

On 3/25/19 9:03 PM, Bruno de Paula Larini wrote:


Search for "sqstat". The tool is very simple, but it works for me.


Hello.
I got curious about this and decided to try sqstat.
I'm on FreeBSD and used the version in the port, which is 1.20; this 
seems to be the latest available.


It doesn't even work out of the box, see:

https://mysolution.lk/sqstat-error-error-1-cannot-get-data-server-answered-http1-1-200-ok/


With the patches on that page I was able to start it, but it still got 
some data wrong and I had to make further corrections.



So, is this an abandoned project?
Are there similar alternatives?
Better ways to do the same thing?

 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] What's the best way to ban Let's encrypt based certificates? or whitelist a very narrow list of Root and Intermediates CA?

2019-01-21 Thread Andrea Venturoli

On 1/20/19 11:02 PM, Eliezer Croitoru wrote:

The issue is that these sites are encrypted but do not offer any way of 
assuring real ISO and couple other compatibilities of the ORG.


For a simple home user it’s fine most of the time but for some it’s not.


Just out of curiosity, could you better explain this?
Pointer are enough if you prefer.

 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Regression after upgrading 3.5.27 -> 4.1

2018-07-26 Thread Andrea Venturoli

On 7/25/18 7:07 PM, Andrea Venturoli wrote:

On 7/25/18 6:46 PM, Amos Jeffries wrote:


What is your "squid -v" output?

If --disable-http-violations is used then relaxed parser will not
include those "must never be transmitted in un-escaped form" (RFC 2396)
characters.


It's there!!!

Thanks for pointing me in the correct direction.
I'm off recompiling... will let you know if this solves.


I can confirm removing this flag solved my problem.

Thanks to all.

 bye
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Regression after upgrading 3.5.27 -> 4.1

2018-07-25 Thread Andrea Venturoli

On 7/25/18 6:46 PM, Amos Jeffries wrote:


What is your "squid -v" output?

If --disable-http-violations is used then relaxed parser will not
include those "must never be transmitted in un-escaped form" (RFC 2396)
characters.


It's there!!!

Thanks for pointing me in the correct direction.
I'm off recompiling... will let you know if this solves.

 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Regression after upgrading 3.5.27 -> 4.1

2018-07-25 Thread Andrea Venturoli

On 7/25/18 4:54 PM, Alex Rousskov wrote:

On 07/25/2018 01:12 AM, Andrea Venturoli wrote:

On 7/22/18 3:29 PM, Andrea Venturoli wrote:


http://xxx.xxx.xx/rest?method=navi_path.add=I029=0=X%20-%20Xxx%20xxx%20xxx%2030/12/2014%20-%20_=0={idDoc:%27C0002019%27,clasDoc:%27XX%27,nomeDoc:%27X%20-%20Xxx%20xxx%20xxx%2030/12/2014%20-%20%27,_X_TRACK_ID:%xx----%27}&_ts=1532264445584&_dc=1532264445584



Upon further investigation, I see the problem is the curly braces.



Was disallowing curly brackets a choice or is it a bug?


If your relaxed_header_parser is on, and Squid rejects URLs because they
have curly braces in the path, then this is a Squid bug.

N.B. relaxed_header_parser is on by default.


I have no such option in my squid.conf, so it should be on.
I added it just to be sure the default wasn't off for some reason, but 
it made no difference.


So, should I file a bug on https://bugs.squid-cache.org?

 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Regression after upgrading 3.5.27 -> 4.1

2018-07-25 Thread Andrea Venturoli

On 7/22/18 3:29 PM, Andrea Venturoli wrote:

http://xxx.xxx.xx/rest?method=navi_path.add=I029=0=X%20-%20Xxx%20xxx%20xxx%2030/12/2014%20-%20_=0={idDoc:%27C0002019%27,clasDoc:%27XX%27,nomeDoc:%27X%20-%20Xxx%20xxx%20xxx%2030/12/2014%20-%20%27,_X_TRACK_ID:%xx----%27}&_ts=1532264445584&_dc=1532264445584 


Upon further investigation, I see the problem is the curly braces.
If I encode them (changing { to %7B and } to %7D), the request is 
successful.


While I was not able to determine whether that URL is valid (seemingly 
not according to the old RFC 1738, but maybe yes according to newer 
RFCs), I have no control over that side.

All my users see is that this won't work with Squid, but will work without.



Was disallowing curly brackets a choice or is it a bug?
Perhaps there's some option to tweak?

 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Regression after upgrading 3.5.27 -> 4.1

2018-07-23 Thread Andrea Venturoli

On 7/23/18 2:59 AM, Amos Jeffries wrote:


FYI: The template delivered has inline javascript for hiding the
messages that are irrelevant to this particular request.


Sorry, I'm not sure I understand: template = squid's error page?




If you open the
URL in the browser (not debugging) it should reduce down to the ones
which are relevant.


That's what I've done (and what I reported came after I did this).




You could also look at the debugger info abut the request message sent
and compare those values yourself.


Again, please forgive me... maybe I'm too ignorant about web 
applications, but I'm not understanding what you suggest I should do.




 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Regression after upgrading 3.5.27 -> 4.1

2018-07-22 Thread Andrea Venturoli

Hello.

I'm maintaining several installations on FreeBSD and I've been notified 
that a specific web application no longer works after the upgrade.




Accessing this app with FireFox and Squid 3.5.27, it works correctly.

Doing the same after the upgrade to 4.1 gets the user up to a certain 
point, after which a "Loading" message appears and never goes away.




Using Firefox's network debugger, I see a couple of 400 errors and, in 
fact, if I try to open those URLs I get:



Invalid Request error was encountered while trying to process the request:

Some possible problems are:

Missing or unknown request method.

Missing HTTP Identifier (HTTP/1.0).

Request is too large.

Content-Length missing for POST or PUT requests.

Illegal character in hostname; underscores are not allowed.

HTTP/1.1 "Expect:" feature is being asked from an HTTP/1.0 software.




The above error is not quite informative (too broad) and there's nothing 
useful in the logs.


Here are those two URLs (which unfortunately I have to partially obfuscate):


http://xxx.xxx.xx/rest?method=navi_path.add=I029=0=X%20-%20Xxx%20xxx%20xxx%2030/12/2014%20-%20_=0={idDoc:%27C0002019%27,clasDoc:%27XX%27,nomeDoc:%27X%20-%20Xxx%20xxx%20xxx%2030/12/2014%20-%20%27,_X_TRACK_ID:%xx----%27}&_ts=1532264445584&_dc=1532264445584



http://xxx.xxx.xx/php/ajax/openDocumentREST.php?core=xxx={%22field%22:%22id%22,%22mode%22:%22EQUAL%22,%22value%22:%xx_X%22}I029


(the x and X are always alphanumeric characters).




I'm seeking help on how to better diagnose this: how can I find what 
Squid 4 does not like in those URLs?


None of the above causes seems to apply, IMVHO.

Has some default changed from 3.5 to 4.1 which might trigger this problem?



 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] X-Forwarded-For breaks a site

2017-01-30 Thread Andrea Venturoli

Hello.

I've been invited to visit a web site and I couldn't see it.
Bypassing Squid would solve the problem, so I did some research and saw 
that adding "forwarded_for transparent" to my config would fix it.


I'm wondering what the reason might be...

tcpdump showed that:
1) the initial connection to http://www.xxx.com yields a 302 redirect 
to http://www.xxx.com/md;
2) so a second request goes out to http://www.xxx.com/md and yields a 
301, again redirecting to http://www.xxx.com/md/ (note the trailing 
slash);
3) finally a request goes out for http://www.xxx.com/md/, and here's 
where a difference arises between a direct connection and one through 
Squid (without "forwarded_for transparent").


The answer to a direct connection (or to Squid with "forwarded_for 
transparent") is:

HTTP/1.1 303 See other
Date: Mon, 30 Jan 2017 09:56:18 GMT
Server: Apache
X-Powered-By: PHP/5.3.29
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Set-Cookie: PHPSESSID=www; path=/
Set-Cookie: yy=z; path=/; HttpOnly
Location: http://www.xxx.com/md/it/
Content-Length: 0
Connection: close
Content-Type: text/html; charset=utf-8


The answer to Squid (without "forwarded_for transparent") is:

HTTP/1.1 200 OK
Date: Mon, 30 Jan 2017 09:33:51 GMT
Server: Apache
X-Powered-By: PHP/5.3.29
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Set-Cookie: PHPSESSID=vv; path=/
Content-Length: 0
Keep-Alive: timeout=15, max=98
Connection: Keep-Alive
Content-Type: text/html



The site is a commercial one and, although it features a reserved area, 
I don't see any point in losing visibility to corporate users.
Also, the webserver belongs to a famous ISP which also hosts thousands 
of other sites, so I guess there should be nothing fancy about it.




Can anyone shed some light on this behaviour?
Is this Squid's fault (I don't think so, but I'll just ask)?
Is this a known bug in some version of Apache or PHP or whatever?
Is it dangerous to keep "forwarded_for transparent" in my config?
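
(For reference, if I read the squid.conf documentation right, 
forwarded_for accepts the following values, so "delete" or "truncate" 
might also be worth testing:

forwarded_for on          # append the real client IP (the default)
forwarded_for off         # append "unknown" instead
forwarded_for transparent # do not alter the header at all
forwarded_for delete      # strip the header entirely
forwarded_for truncate    # reduce the header to a single entry
)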



 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] skype connection problem

2016-10-25 Thread Andrea Venturoli

On 10/25/16 16:43, Yuri Voinov wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Wireshark? :)


No good: I don't trust MS not to change them the next day.




In my environment this is not required.


Neither in mine, but some customer insists on using this Skype crap and 
while the Windows version will work through Squid, the Mac one won't (at 
least not with a "new" account).


 bye & Thanks
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] skype connection problem

2016-10-25 Thread Andrea Venturoli

On 10/25/16 16:26, Yuri Voinov wrote:


Your LAN settings are too restrictive. AFAIK you need to permit traffic
to the Skype servers directly from your clients, without the proxy.


Any hint on how to identify those servers?
Any IP list?

 bye & Thanks
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users