Re: [squid-users] problem in configuring squid

2016-10-04 Thread Amos Jeffries
On 5/10/2016 4:42 a.m., Shark wrote:
> Sorry for my bad English.
> 
> I want to make an anonymous HTTPS & HTTP proxy that passes through all
> requests without decrypting or changing them;
> it should only change the source IP address from the client IP to my
> server IP address, and I define the IP addresses of the websites I want
> to access from my client in /etc/hosts.
> So I tried to install Squid on my server, and it works well when I
> set the proxy in the client with the server IP and port 3128: I can
> access HTTP & HTTPS behind this proxy.

By configuring your client with details about the proxy you have
configured a forward (aka explicit) proxy.

That is the best type to use when you can, because it lets you use the
full capabilities of proxying in HTTP.

However, it also means that the clients use neither DNS nor the
/etc/hosts file. The proxy is what does the DNS lookups to decide where
to send the traffic the client(s) ask it to fetch.
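(For completeness: the lookups the proxy performs do honor the proxy
host's own hosts file. A minimal sketch, assuming stock defaults -
hosts_file is a real squid.conf directive and /etc/hosts is simply its
default value:

  # in squid.conf on the Squid server:
  hosts_file /etc/hosts

So any name overrides would have to live on the server, not on the
clients.)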


> but when I try using /etc/hosts I cannot access HTTPS websites.

HTTPS is designed to prevent people from playing around with the
traffic. The 'S' means *secure(d)* - for a good reason.

> I have tried
> to install Squid many times, following every install instruction I
> found by googling.
> I have a server with CentOS 7 and one valid internet IP address.
> 
> To explain further what I want to do: I need my Squid to work like the
> IP 173.161.0.227.
> When I add *173.161.0.227 www.iplocation.net* to
> my client's /etc/hosts,
> I can browse https://www.iplocation.net and it tells me my client IP
> address is 173.161.0.227.
> I want to make my proxy server behave the same as 173.161.0.227.
> 

From what you have said so far it is clear that the domain names you
plan to use this for are owned by somebody other than yourself.


> *My problem now, with the config below, is:*
> 
> when I define *216.55.x.x www.iplocation.net* in
> /etc/hosts on my client, I cannot access https://www.iplocation.net; it
> hangs on connecting and then gives me a timeout error.
> I would appreciate help resolving this problem.
> I asked it before at
> http://serverfault.com/questions/805413/squid-with-iptables-bypass-https
> but I could not resolve it.

Since you are not the owner of that domain name, you do not own the
secret encryption key that HTTPS associates with it.

That means you cannot set up your proxy to perform encryption/decryption
of the traffic when acting as a web server for it.

The only options you have for HTTPS are:

1) use the proxy as a proper forward/explicit proxy, the normal way
HTTP does that;

or

2) forget the idea of setting your own IP as the web server, and use
MITM interception of the clients' normal port 443 traffic with the
SSL-Bump feature enabled in your Squid.
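For reference, a minimal sketch of the Squid side of option (2)
(untested; it assumes a Squid built with SSL-Bump support and a locally
generated CA at the path shown). A peek followed by splice passes the
TLS through untouched; only a "bump" rule would actually decrypt:

  https_port 3130 intercept ssl-bump \
      cert=/etc/squid/ssl_cert/myCA.pem generate-host-certificates=on
  ssl_bump peek all
  ssl_bump splice all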


> 
> *My Iptables config is:*
> 
> iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3130
> 

That is okay. It is the (2) option mentioned above.

Be aware that it is incompatible with the idea of setting /etc/hosts IP
address for the domain as a way to get it to the proxy.

This iptables rule is the way to catch client traffic already on its
way to the *real* domain server(s) and send it through the proxy instead.

It is a bit nasty to work with, but still way better than MITM through
/etc/hosts entries.


> *My squid config is:*
> 

> 
> http_port 3128

Okay. This port will accept traffic from the above option (1) setups.


> http_port 80

No.

> http_port 0.0.0.0:3129 ssl-bump  cert=/etc/squid/ssl_cert/myCA.pem
> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
> https_port 0.0.0.0:3130 ssl-bump intercept
> cert=/etc/squid/ssl_cert/myCA.pem generate-host-certificates=on
> dynamic_cert_mem_cache_size=4MB
> 

Okay. These ports will accept traffic for the above option (2) setups.


> sslproxy_cert_error allow all
> sslproxy_flags DONT_VERIFY_PEER
> 

Those are wrong for any installation, even testing ones. You need to see
the errors to even start finding solutions.

Amos



Re: [squid-users] Whitelist domain ignored?

2016-10-04 Thread Alex Rousskov
On 10/04/2016 05:16 PM, Jok Thuau wrote:
> On Tue, Oct 4, 2016 at 1:41 PM, Jose Torres-Berrocal wrote:

>> I have some clients that use a program that tries to connect to:
>> https://neodecksoftware.com/NeoMedOnline/NeoMedOnlineService.svc


>> /var/squid/acl/whitelist.acl:

>> .assertus.com
>> .neodecksoftware.com


> your whitelist for this domain says that it has "something" followed by
> that domain name...

Good catch! Actually, the problem is even worse. The dstdom_regex will
match even notneodecksoftwarexcom.org IIRC.


>> acl whitelist dstdom_regex -i "/var/squid/acl/whitelist.acl"

Perhaps the configuration author meant to say dstdomain instead of
dstdom_regex? Are there any intentional regular expressions in
/var/squid/acl/whitelist.acl?
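If not, a minimal sketch of that change (untested; it reuses the same
file, whose lines are then treated as domain suffixes rather than
regexes, so the leading dots take on the usual "this domain plus its
subdomains" meaning):

  acl whitelist dstdomain "/var/squid/acl/whitelist.acl"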

Alex.



Re: [squid-users] Introducing delay to HTTP 407 responses

2016-10-04 Thread Alex Rousskov
On 10/04/2016 06:20 AM, Amos Jeffries wrote:
> On 5/10/2016 12:47 a.m., squid-users wrote:

>> I set this up as you suggested, then triggered a 407 response from
>> the cache.  It seems that way; I couldn't see aclMatchHTTPStatus or
>> http-response-407 in the log


> Strange. I was sure Alex did some tests recently and proved that even
> internally generated responses get http_reply_access applied to them.


Yes, see
http://lists.squid-cache.org/pipermail/squid-users/2016-August/012048.html

However, there is a difference between my August tests and this thread.
My tests were for a request parsing error response. Access denials do
not reach the same http_reply_access checks! See "early return"
statements in clientReplyContext::processReplyAccess(), including:

> /** Don't block our own responses or HTTP status messages */
> if (http->logType.oldType == LOG_TCP_DENIED ||
>         http->logType.oldType == LOG_TCP_DENIED_REPLY ||
>         alwaysAllowResponse(reply->sline.status())) {
>     headers_sz = reply->hdr_sz;
>     processReplyAccessResult(ACCESS_ALLOWED);
>     return;
> }

I am not sure whether avoiding http_reply_access in such cases is a
bug/misfeature or the right behavior. As with any exception, it
certainly creates problems for those who want to [ab]use
http_reply_access as a delay hook. FWIW, Squid has had this exception
since 2007:

> revno: 8474
> committer: hno
> branch nick: HEAD
> timestamp: Thu 2007-08-30 19:03:42 +
> message:
>   Bug #2028: FATAL error if using http_reply_access in combination with
>   authentication
>
>   The attached patch bypasses http_reply_access on access denied messages
>   generated by this Squid, and also optimizes processing slightly in the
>   common case of not using any http_reply_access rules at all.


HTH,

Alex.



Re: [squid-users] Whitelist domain ignored?

2016-10-04 Thread Jok Thuau
On Tue, Oct 4, 2016 at 1:41 PM, Jose Torres-Berrocal <
jetsystemservi...@gmail.com> wrote:

> I do not know the correct terms for the problem I have.
>
> I have some clients that use a program that tries to connect to:
> https://neodecksoftware.com/NeoMedOnline/NeoMedOnlineService.svc
>
>
note that there is nothing between "//" and "neodecksoftware.com"...

[snip]

>
> 
> --
> 1475581614.208  0 192.168.1.20 TCP_DENIED/407 3917 CONNECT
> neodecksoftware.com:443 - HIER_NONE/- text/html
> 1475582327.774  0 192.168.1.20 TCP_DENIED/407 3917 CONNECT
> neodecksoftware.com:443 - HIER_NONE/- text/html
>
>
Note that the ACL applies to that CONNECT string, specifically
"neodecksoftware.com".



> /var/squid/acl/whitelist.acl:
>
[snip]

> .assertus.com
> .neodecksoftware.com


Your whitelist entry for this domain says that it has "something"
followed by that domain name...


>
> .office.net

[snip]


>
> # This file is automatically generated by pfSense
> # Do not edit manually !
>
> http_port 192.168.1.1:3128
> http_port 127.0.0.1:3128
>
[snip]

> acl whitelist dstdom_regex -i "/var/squid/acl/whitelist.acl"
>

and your ACL refers to a regular expression...


> http_access allow manager localhost
>
[snip]

> # Always allow access to whitelist domains
> http_access allow whitelist
>

and you allow that whitelist...

In the end, your regular expression doesn't match the bare domain.
"." means "any single character", so the entry requires some character
before "neodecksoftware". You should replace that line with something
like this:
^neodecksoftware\.com

(this is untested).

Note that all your entries need adjusting as well (they may be working, but
not matching the way you think they do).
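For instance (equally untested), an entry that matches both the bare
domain and any subdomain, with the literal dots escaped, might look
like:

(^|\.)neodecksoftware\.com$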

HTH,
Jok


Re: [squid-users] Introducing delay to HTTP 407 responses

2016-10-04 Thread squid-users
> > I set this up as you suggested, then triggered a 407 response from the
> > cache.  It seems that way; I couldn't see aclMatchHTTPStatus or
> > http-response-407 in the log:
> >
> 
> Strange. I was sure Alex did some tests recently and proved that even
> internally generated responses get http_reply_access applied to them.
> Yet no sign of that in your log.
> 
> Is this a very old Squid version?

It's a recent Squid version - 3.5.20 on CentOS 6, built from the SRPM kindly 
provided by Eliezer.

> Or are the "checking http_reply_access" lines just later in the log than
> your snippet covered?

There was nothing more in the log previously posted at the point the 407 
response was returned to the client.

That log did have a lot of other stuff in it though.  Using a much simpler
squid.conf (attached), I tested for differences between authenticated and
unauthenticated requests when "http_reply_access deny all" is in place.  When
credentials are supplied, an HTTP 403 (forbidden) response is returned, as you
would expect.  But when credentials are not supplied, an HTTP 407 response is
returned.  The divergence seems to start around line 31 in cache_noauth.log:

Checklist.cc(63) markFinished: 0x331e4a8 answer AUTH_REQUIRED for 
AuthenticateAcl exception

Perhaps when answer=AUTH_REQUIRED (line 35), http_reply_access is not checked?  
Another difference is that Acl.cc(158) reports async when an authenticated 
request is in place, but not otherwise.  If someone could give me some pointers
on where to look in the source, I can start digging to see if I can find out more.

Luke



cache_auth.log
Description: Binary data


cache_noauth.log
Description: Binary data


squid.conf
Description: Binary data


Re: [squid-users] Whitelist domain ignored?

2016-10-04 Thread Benjamin E. Nichols

Yes, we can see your messages to the group.

While I'm responding: this doesn't address your problem, but we maintain
a free whitelist you may or may not be interested in. It's quite a bit
larger, with no adult and no torrent sites.


http://www.squidblacklist.org/downloads/whitelist.txt




Good Luck!


On 10/4/2016 4:22 PM, Jose Torres-Berrocal wrote:

Just to confirm that I sent the email

Jose E Torres
939-777-4030
JET System Services


On Tue, Oct 4, 2016 at 4:41 PM, Jose Torres-Berrocal
 wrote:

[snip]

Re: [squid-users] Whitelist domain ignored?

2016-10-04 Thread Jose Torres-Berrocal
Just to confirm that I sent the email

Jose E Torres
939-777-4030
JET System Services


On Tue, Oct 4, 2016 at 4:41 PM, Jose Torres-Berrocal
 wrote:
> [snip]


[squid-users] Whitelist domain ignored?

2016-10-04 Thread Jose Torres-Berrocal
I do not know the correct terms for the problem I have.

I have some clients that use a program that tries to connect to:
https://neodecksoftware.com/NeoMedOnline/NeoMedOnlineService.svc

I went to the access.log and found that neodecksoftware.com is being
denied even though I have it in a whitelist file.

Below are the error lines found, the whitelist file content,
and the squid conf:

--
1475581614.208  0 192.168.1.20 TCP_DENIED/407 3917 CONNECT
neodecksoftware.com:443 - HIER_NONE/- text/html
1475582327.774  0 192.168.1.20 TCP_DENIED/407 3917 CONNECT
neodecksoftware.com:443 - HIER_NONE/- text/html

/var/squid/acl/whitelist.acl:
.familymedicinepr.com
.anydesk.com
.teamviewer.com
.secureserver.net
.gmail.com
.mail.yahoo.com
.outlook.com
.aol.com
.libertypr.net
.coqui.net
.prtc.net
.assertus.com
.neodecksoftware.com
.office.net
.microsoft.com
.office.com
.live.com

# This file is automatically generated by pfSense
# Do not edit manually !

http_port 192.168.1.1:3128
http_port 127.0.0.1:3128
icp_port 0
dns_v4_first off
pid_filename /var/run/squid/squid.pid
cache_effective_user squid
cache_effective_group proxy
error_default_language en
icon_directory /usr/local/etc/squid/icons
visible_hostname pfsense
cache_mgr jetsystemservi...@gmail.com
access_log /var/squid/logs/access.log
cache_log /var/squid/logs/cache.log
cache_store_log none
netdb_filename /var/squid/logs/netdb.state
pinger_enable on
pinger_program /usr/local/libexec/squid/pinger

logfile_rotate 31
debug_options rotate=31
shutdown_lifetime 3 seconds
# Allow local network(s) on interface(s)
acl localnet src  192.168.1.0/24 127.0.0.0/8
forwarded_for on
uri_whitespace strip

acl dynamic urlpath_regex cgi-bin \?
cache deny dynamic

cache_mem 512 MB
maximum_object_size_in_memory 256 KB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
minimum_object_size 0 KB
maximum_object_size 4 MB

offline_mode off
cache_swap_low 90
cache_swap_high 95
cache allow all
# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:1440  20%  10080
refresh_pattern ^gopher:  1440  0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0  0%  0
refresh_pattern .0  20%  4320


#Remote proxies


# Setup some default acls
# From 3.2 further configuration cleanups have been done to make
things easier and safer. The manager, localhost, and to_localhost ACL
definitions are now built-in.
# acl localhost src 127.0.0.1/32
acl allsrc src all
acl safeports port 21 70 80 210 280 443 488 563 591 631 777 901  3128
3129 1025-65535 444
acl sslports port 443 563  444

# From 3.2 further configuration cleanups have been done to make
things easier and safer. The manager, localhost, and to_localhost ACL
definitions are now built-in.
#acl manager proto cache_object

acl purge method PURGE
acl connect method CONNECT

# Define protocols used for redirects
acl HTTP proto HTTP
acl HTTPS proto HTTPS
acl whitelist dstdom_regex -i "/var/squid/acl/whitelist.acl"
http_access allow manager localhost

http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !safeports
http_access deny CONNECT !sslports

# Always allow localhost connections
# From 3.2 further configuration cleanups have been done to make
things easier and safer.
# The manager, localhost, and to_localhost ACL definitions are now built-in.
# http_access allow localhost

request_body_max_size 0 KB
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 -1/-1
delay_initial_bucket_level 100
delay_access 1 allow allsrc

# Reverse Proxy settings


# Custom options before auth
connect_timeout 2

# Always allow access to whitelist domains
http_access allow whitelist
auth_param basic program /usr/local/libexec/squid/basic_radius_auth -w
Maint4030 -h pfsense -p
auth_param basic children 5
auth_param basic realm Please enter your credentials to access the proxy
auth_param basic credentialsttl 5 minutes
acl password proxy_auth REQUIRED
# Custom options after auth


http_access allow password localnet
# Default block all to be sure
http_access deny allsrc

--

Cordially,
Jose


Re: [squid-users] Kerberos Ne

2016-10-04 Thread erdosain9
so... any advice about this??
Thanks!





Re: [squid-users] Squid - AD kerberos auth and Linux Server proxy access not working

2016-10-04 Thread Nilesh Gavali
Hi Amos;
OK, we can discuss the issue in two parts: 1. Windows AD authentication
& SSO, and 2. the Linux server unable to access via the squid proxy.

For the first point -
The requirement is to have SSO for accessing the internet via the squid
proxy and, based on the user's AD group membership, to allow access to
specific sites only. I believe the current squid configuration is
working as expected.

For the second point -
The point I would like to highlight here is that the Linux server
IWCCP01 is not part of the domain at all, hence the error below, as
squid is configured for AD_auth. So how can we allow a Linux server or
non-domain machine to access specific sites?
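One untested sketch of the usual approach (the ACL names and file path
are taken from this thread; the IP address is a placeholder): allow the
non-domain host by source IP, on a rule placed before anything that
requires authentication, so that host is never sent a 407 challenge:

acl IWCCP01 src 192.0.2.10    # hypothetical address of the Linux server
acl allowedsite dstdomain "/etc/squid/sitelist/dbs_allowed_site"
http_access allow IWCCP01 allowedsite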

> Error 407 is "proxy auth required", so the proxy is expecting
> authentication for some reason.

> Can you confirm that the hostname vseries-test.bottomline.com is
> contained in your site file /etc/squid/sitelist/dbs_allowed_site ?

YES, we have the entry .bottomline.com, which works fine when accessed
via a Windows machine with the proxy enabled for that user.
==
> Can you temporarily change the line "http_access allow IWCCP01
> allowedsite" to "http_access allow IWCCP01" and see whether the machine
> then gets access?

I made the changes as suggested, but it still gives the same 407 error.

If that works, please list the output of the command:
  grep "bottomline.com" /etc/squid/sitelist/dbs_allowed_site

o/p of above command as below -

[root@Proxy02 ~]# grep "bottomline.com" 
/etc/squid/sitelist/dbs_allowed_site
.bottomline.com
[root@Proxy02 ~]#

===

Thanks & Regards
Nilesh Suresh Gavali




 
Message: 2
Date: Wed, 5 Oct 2016 00:11:08 +1300
From: Amos Jeffries 
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid - AD kerberos auth and Linux Server
 proxy access not working
Message-ID: 
Content-Type: text/plain; charset=utf-8

On 4/10/2016 11:36 p.m., Antony Stone wrote:
> On Tuesday 04 October 2016 at 12:28:44, Nilesh Gavali wrote:
> 
>> Hello Antony;
>> I have double checked the current working configuration of my squid.conf
>> and it has same settings which I posted earlier. Somehow it is working
>> for us.
> 
> I'm not saying the whole thing won't work; I'm saying there is no point
> in having a line "http_access allow ad_auth" following the line
> "http_access deny all".  The ad_auth line can never be invoked.

Not knowing why authentication works is dangerous. You might have been
allowing non-authenticated traffic and invalid user accounts through.

The only reason it does "work" is that the ACL called "USERS" is _not_
actually checking user logins. It is a group checking ACL which requires
authentication to happen before it can be checked.

In this specific case invalid logins cannot be a member of the group. So
they will not get through the proxy.

However, people who accidentally type the user/password wrong, or whose
machines automatically log in with an account that is not a member of
the group, will not be allowed any way to try again short of shutting
down their browser or maybe even logging out of the machine and trying
from another one.

That may or may not be a problem for you.

> 
>> below is the error from access.log file.
>>
>> 1475518342.279  0 10.xx.15.103 TCP_DENIED/407 3589 CONNECT
>> vseries-test.bottomline.com:443 - NONE/- text/html
> 
> Error 407 is "proxy auth required", so the proxy is expecting
> authentication for some reason.
> 
> Can you confirm that the hostname vseries-test.bottomline.com is
> contained in your site file /etc/squid/sitelist/dbs_allowed_site ?
> 
> Can you temporarily change the line "http_access allow IWCCP01
> allowedsite" to "http_access allow IWCCP01" and see whether the machine
> then gets access?
> 

If that works, please list the output of the command:
  grep "bottomline.com" /etc/squid/sitelist/dbs_allowed_site

Amos





Re: [squid-users] Problem with Squid3 Caches

2016-10-04 Thread Antony Stone
On Tuesday 04 October 2016 at 19:43:21, KR wrote:

> > On Oct 4, 2016, at 11:45 AM, Antony Stone wrote:
> > 
> > On Tuesday 04 October 2016 at 17:00:24, KR wrote:
> >> Hello Anthony, Yuri,
> >> 
> >> It seems every line is commented out in the config?
> > 
> > Impossible - otherwise it couldn't generate the error message "FATAL:
> > Bungled /etc/squid/squid.conf line 3467: cache_dir rock /ssd3 ..."
> > 
> > That is telling you that line 3467 of squid.conf starts with the
> > directive "cache_dir”.
> 
> I see, is there an easy way to omit all lines that begin with the # sign?

Well, grep?

eg: grep "^[^#]" will show all lines which start with something other than
a # - in other words, it will omit blank lines and comments.
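A slightly fuller variant (a sketch; the path assumes the stock
location) that also drops indented comments and whitespace-only lines:

grep -Ev '^[[:space:]]*(#|$)' /etc/squid/squid.conf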

> The line in question is
> 
> # Uncomment and adjust the following to add a disk cache directory.
> #cache_dir ufs /var/spool/squid 100 16 256

Please confirm which file you are showing us the information from.

> > Standard Ubuntu?  Which version?
> 
> Standard and current.

So, 16.04?

> >> Attached are two screenshots that are suspect.
> > 
> > Er, what are those screenshots of?  It's certainly not the output of
> > Squid, or its config file.

An answer to this would be helpful.

> >> Ubuntu is running inside of a vm,
> > 
> > Er, so /ssd3 is not an actual SSD, then?  What is it?
> 
> I suspect it is an SSD drive

"Suspect"?

How have you set up this VM?  Is there an actual device mounted on /ssd3, or 
is it just some directory name in your VM?

I'm suspicious that you may have used webmin, and we've had someone here on
> > the list recently who installed Squid on Ubuntu along with webmin, and
> > we then found out that the package maintainer had put the documentation
> > file for squid.conf in place of the actual squid.conf.
> 
> I tried it both its webadmin

Please specify what you mean by this - what is the "it" which "its" refers to 
above?

> and terminal to install.  Same result.  Squid seems to want a cache folder
> on every partition that exists.

I recommend you stop using any graphical tool to try to manage Squid, remove 
the package, and then simply:

1. Install the Squid (maybe called Squid3?  I can't quite recall for Ubuntu) 
package using apt-get or aptitude.

2. Edit the config file /etc/squid/squid.conf to your needs.

Hope that helps,


Antony.

-- 
"The future is already here.   It's just not evenly distributed yet."

 - William Gibson

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] Problem with Squid3 Caches

2016-10-04 Thread KR
I uncommented that line and now I get

Initializing the Squid cache with the command squid3 -f /etc/squid/squid.conf 
-z ..

FATAL: Bungled /etc/squid/squid.conf line 3410: cache_dir rock /hdd1 ... 
min-size=10
Squid Cache (Version 3.5.12): Terminated abnormally.
CPU Usage: 0.008 seconds = 0.004 user + 0.004 sys
Maximum Resident Size: 114480 KB
Page faults with physical i/o: 6



> On Oct 4, 2016, at 11:45 AM, Antony Stone  
> wrote:
> 
> On Tuesday 04 October 2016 at 17:00:24, KR wrote:
> 
>> Hello Anthony, Yuri,
>> 
>> It seems every line is commented out in the config?
> 
> Impossible - otherwise it couldn't generate the error message "FATAL: Bungled 
> /etc/squid/squid.conf line 3467: cache_dir rock /ssd3 ..."
> 
> That is telling you that line 3467 of squid.conf starts with the directive 
> "cache_dir".
> 
>> This is a fresh install.
> 
> Standard Ubuntu?  Which version?
> 
>> ls -al /ssd3 outputs:
>> 
>> total 8
>> drwxr-xr-x  2 root root 4096 Aug 13 18:20 .
>> drwxr-xr-x 30 root root 4096 Oct  3 13:49 ..
> 
> Hm, okay, so that really does exist on your machine, then...
> 
>> Attached are two screenshots that are suspect.
> 
> Er, what are those screenshots of?  It's certainly not the output of Squid, 
> or 
> its config file.
> 
>> Do I need all of these cache folders on every partition?
> 
> You can put your cache directories wherever you like.
> 
>> Ubuntu is running inside of a vm,
> 
> Er, so /ssd3 is not an actual SSD, then?  What is it?
> 
>> default installation method using the setup wizard.
> 
> I'm suspicious that you may have used webmin, and we've had someone here on the 
> list recently who installed Squid on Ubuntu along with webmin, and we then 
> found out that the package maintainer had put the documentation file for 
> squid.conf in place of the actual squid.conf.
> 
> It can still work (not everything is commented out) but it's *far* bigger 
> than 
> it needs to be, and is somewhat confusing to work with.
> 
> 
> Regards,
> 
> 
> Antony.
> 
> -- 
> It may not seem obvious, but (6 x 5 + 5) x 5 - 55 equals 5!
> 
>   Please reply to the list;
> please *don't* CC me.



Re: [squid-users] Squid crash - 3.5.21

2016-10-04 Thread Jasper Van Der Westhuizen


On Mon, 2016-10-03 at 11:33 -0600, Alex Rousskov wrote:

On 10/03/2016 04:50 AM, Jasper Van Der Westhuizen wrote:


This morning I had some problems with some of our proxies. 2 proxies in
cluster A crashed with the errors below; shortly afterwards, 4 in
cluster B did the same. Both clusters are configured to run their cache
in memory, with SMP and 4 workers configured.

FATAL: Received Bus Error...dying.




There are at least two possible reasons:

  1. A bug in Squid and
  2. Memory overallocation by the OS kernel.

To fix the former, the developers will need a stack trace (at least). I
recommend filing a bug report after getting that trace and excluding
reason #2. The Squid wiki and various system administration guides explain
how to make Squid dump core files.
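A minimal sketch of the usual knobs (assumes Linux; coredump_dir is a
real squid.conf directive, while the ulimit is generic OS setup and your
init system may need its own equivalent):

  ulimit -c unlimited            # in the environment that starts Squid
  # and in squid.conf:
  coredump_dir /var/cache/squid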

To check for memory overallocation, you can temporarily start Squid v4.0
with "shared_memory_locking on". Unfortunately, that squid.conf
directive is not available in Squid v3. You may be able to emulate it
using some OS-specific sysctl or environment variables, but doing so may
be far from trivial, and I do not have instructions.




Thanks Alex. We have patched the servers to the latest and will monitor. If it
happens again I will file a bug report and see where it takes us.

Regards
Jasper







Re: [squid-users] Squid-3.5.21: filter FTP content or FTP commands

2016-10-04 Thread oleg gv
Thank you very much. It's my fault - I wrote the wrong ACL.

That'll do it! Yahooo!  LIST, C.?D blocked OK.

2016-10-04 17:55 GMT+03:00 Alex Rousskov :

> On 10/04/2016 06:24 AM, oleg gv wrote:
>
> > Then I try to block FTP-Command and nothing happen. Some from my config:
> >
> > acl rh req_header -i ^FTP-Command
>
> Wrong syntax. Please read req_header documentation carefully and try
> something like:
>
>   acl rh req_header FTP-Command -i LIST
>
> I also recommend renaming the "rh" ACL to something more meaningful like
> "ForbiddenCommand".
>
> Finally, since a regular HTTP request might have an FTP-Command header
> field, you should probably limit your rh-based http_access deny rule to
> transactions accepted at ftp_port(s).
>
>
> > http_access permit all
>
> There is no "permit" action AFAIK. Please use documented "allow" and
> "deny" actions only and copy-paste exact configuration lines when asking
> questions.
>
>
> > request_header_access  "FTP-Command: LIST" deny all
>
> Wrong syntax and wrong option. You want to deny a transaction, not to
> remove a header from that transaction.
>
>
> HTH,
>
> Alex.
>
>


Re: [squid-users] Problem with Squid3 Caches

2016-10-04 Thread Antony Stone
On Tuesday 04 October 2016 at 17:00:24, KR wrote:

> Hello Anthony, Yuri,
> 
> It seems every line is commented out in the config?

Impossible - otherwise it couldn't generate the error message "FATAL: Bungled 
/etc/squid/squid.conf line 3467: cache_dir rock /ssd3 ..."

That is telling you that line 3467 of squid.conf starts with the directive 
"cache_dir".

> This is a fresh install.

Standard Ubuntu?  Which version?

> ls -al /ssd3 outputs:
> 
> total 8
> drwxr-xr-x  2 root root 4096 Aug 13 18:20 .
> drwxr-xr-x 30 root root 4096 Oct  3 13:49 ..

Hm, okay, so that really does exist on your machine, then...

> Attached are two screenshots that are suspect.

Er, what are those screenshots of?  It's certainly not the output of Squid, or 
its config file.

> Do I need all of these cache folders on every partition?

You can put your cache directories wherever you like.

> Ubuntu is running inside of a vm,

Er, so /ssd3 is not an actual SSD, then?  What is it?

> default installation method using the setup wizard.

I'm suspicious that you may have used webmin, and we've had someone here on the 
list recently who installed Squid on Ubuntu along with webmin, and we then 
found out that the package maintainer had put the documentation file for 
squid.conf in place of the actual squid.conf.

It can still work (not everything is commented out) but it's *far* bigger than 
it needs to be, and is somewhat confusing to work with.


Regards,


Antony.

-- 
It may not seem obvious, but (6 x 5 + 5) x 5 - 55 equals 5!

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] problem in configuring squid

2016-10-04 Thread Shark
Sorry for my bad English.

I want to make an anonymous HTTPS & HTTP proxy that passes through all
requests without decrypting or changing them;
it should only change the source IP address from the client IP to my
server IP address, and I define the IP addresses of the websites I want
to access from my client in /etc/hosts.
So I tried to install Squid on my server, and it works well when I set
the proxy in the client with the server IP and port 3128: I can access
HTTP & HTTPS behind this proxy.
But when I try using /etc/hosts I cannot access HTTPS websites. I have
tried to install Squid many times, following every install instruction
I found by googling.
I have a server with CentOS 7 and one valid internet IP address.

To explain further what I want to do: I need my Squid to work like the
IP 173.161.0.227.
When I add *173.161.0.227 www.iplocation.net* to
my client's /etc/hosts,
I can browse https://www.iplocation.net and it tells me my client IP
address is 173.161.0.227.
I want to make my proxy server behave the same as 173.161.0.227.

*My problem now, with the config below, is:*

when I define *216.55.x.x www.iplocation.net* in
/etc/hosts on my client, I cannot access https://www.iplocation.net; it
hangs on connecting and then gives me a timeout error.
I would appreciate help resolving this problem.
I asked it before at
http://serverfault.com/questions/805413/squid-with-iptables-bypass-https
but I could not resolve it.

*My Iptables config is:*

iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3130

*My squid config is:*

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
acl localnet src 127.0.0.1

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http

acl CONNECT method CONNECT

http_access allow !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access allow manager
http_access allow localnet
http_access allow localhost
http_access allow all

http_port 3128
http_port 80
http_port 0.0.0.0:3129 ssl-bump  cert=/etc/squid/ssl_cert/myCA.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
https_port 0.0.0.0:3130 ssl-bump intercept
cert=/etc/squid/ssl_cert/myCA.pem generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB

sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER

cache_dir ufs /var/cache/squid 100 16 256

coredump_dir /var/cache/squid

sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/squid/ssl_db -M 4MB
sslcrtd_children 50 startup=1 idle=1

sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER

ssl_bump peek all
ssl_bump splice all
ssl_bump bump all

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern .   0   20% 4320
forwarded_for delete



On Tue, Oct 4, 2016 at 4:44 PM, Antony Stone <
antony.st...@squid.open.source.it> wrote:

> On Tuesday 04 October 2016 at 14:51:13, Mehdi Yeganeh wrote:
>
> > Thanks for quick replay,
> > I need to use my server, i configure my ip address in some software like
> > antivirus and ...
>
> ... and what?
>
> I do not understand what antivirus software has to do with our discussion.
> Please give details, don't just write "...".
>
> > So, I want all of that working
>
> All of what?
>
> > with my server ip address and for this reason I cannot use torproxy or
> > torproject. I need a proxy server (squid) on my server
>
> In that case install Squid on your server.  What is the problem?
>
> > More details about 173.161.0.227:
> > Its sophos web appliance that use squid on debian and using some other
> > proxy software (Astaro HttpProxy) with squid and
> > iptables for forwarding ports. but i can`t find the other proxy software
> > for download. so, i just have squid alone (although iptables is present)
>
> Okay, so I understand that the machine on that IP address (which appears
> to be
> serving Pennoyer School in Illinois, with connectivity provided by
> Comcast) is
> a "Sophos web appliance" - some sort of combined firewall / proxy / port
> forwarder.
>
> What is the relevance of that machine to your question?
>
> > Please tell me that should i use other tools or squid can do it?
>
> Do what?
>
> Please explain exactly what it is you are trying to achieve, and hoping
> that
> Squid is a solution for.
>
>
> Regards,
>
>
> Antony.
>
> --
> Police have found a cartoonist dead in his house.  They say that 

Re: [squid-users] Squid-3.5.21: filter FTP content or FTP commands

2016-10-04 Thread Alex Rousskov
On 10/04/2016 06:24 AM, oleg gv wrote:

> Then I try to block FTP-Command and nothing happen. Some from my config:
> 
> acl rh req_header -i ^FTP-Command

Wrong syntax. Please read req_header documentation carefully and try
something like:

  acl rh req_header FTP-Command -i LIST

I also recommend renaming the "rh" ACL to something more meaningful like
"ForbiddenCommand".

Finally, since a regular HTTP request might have an FTP-Command header
field, you should probably limit your rh-based http_access deny rule to
transactions accepted at ftp_port(s).
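Putting those pieces together, one untested sketch (it assumes an
ftp_port configured with name=ftp21; the myportname ACL matches on that
name):

  acl ForbiddenCommand req_header FTP-Command -i LIST
  acl FtpRelay myportname ftp21
  http_access deny FtpRelay ForbiddenCommand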


> http_access permit all

There is no "permit" action AFAIK. Please use documented "allow" and
"deny" actions only and copy-paste exact configuration lines when asking
questions.


> request_header_access  "FTP-Command: LIST" deny all

Wrong syntax and wrong option. You want to deny a transaction, not to
remove a header from that transaction.


HTH,

Alex.



Re: [squid-users] Introducing delay to HTTP 407 responses

2016-10-04 Thread Alex Rousskov
On 10/04/2016 05:18 AM, Amos Jeffries wrote:
> On 4/10/2016 11:53 p.m., squid-us...@filter.luko.org wrote:
>> Would the developers be open to adding a configuration-based throttle to 
>> authentication responses

> This helper is the mechanism that we accepted. Anything else would be
> far less useful.

For the record, I agree that the external ACL is the right solution for
now. However, supporting a general built-in "delay" ACL would be a
useful feature worth accepting IMO.
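For reference, a bare-bones sketch of that external ACL approach
(untested; the helper path and names are hypothetical, and the helper
simply sleeps before answering OK):

  # squid.conf
  external_acl_type delay407 ttl=0 negative_ttl=0 %SRC /usr/local/bin/delay_helper.sh
  acl delayed external delay407
  # then reference "delayed" on the http_access line whose denial
  # produces the 407

  # /usr/local/bin/delay_helper.sh
  #!/bin/sh
  while read line; do
      sleep 2    # the delay being introduced
      echo OK
  done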

I know this does not help with the problem at hand. I just wanted to
make a note that there is certainly room for an improvement here if
somebody wants to work on it; the "anything else" phrase was too harsh.

Alex.



Re: [squid-users] Caching http google deb files

2016-10-04 Thread Hardik Dangar
Wow, I hadn't thought of that. Google might need tracking data; that
could be the reason they have blindly put the "Vary: *" header there.
Oh, the irony: the company which lectures all of us on how to deliver
content is doing such a thing.

I have looked at your patch, but how do I enable it? Do I need to write
a custom ACL? I know I need to compile and reinstall after applying the
patch, but what exactly do I need to do in squid.conf? Looking at your
patch I am guessing I need to write an "archive" ACL, or maybe I am too
naive to understand the C code :)

Also,

is reply_header_replace any good for this?


On Tue, Oct 4, 2016 at 7:47 PM, Amos Jeffries  wrote:

> On 5/10/2016 2:34 a.m., Hardik Dangar wrote:
> > Hey Amos,
> >
> > We have about 50 clients which download the same Google Chrome update
> > every 2 or 3 days; that means 2.4 GB. Although the response says Vary,
> > the requested file is the same, and all of it is downloaded via apt
> > update.
> >
> > Is there any option just like ignore-no-store? I know I am asking for
> > too much, but it seems very silly on Google's part that they are
> > sending a Vary header in a place where they shouldn't, as no matter
> > how you access those URLs you are only going to get those deb files.
>
>
> Some things G does only make sense when you ignore all the PR about
> wanting to make the web more efficient and consider it's a company whose
> income is derived from recording data about people's habits and
> activities. Caching can hide that info from them.
>
> >
> > can i hack squid source code to ignore very header ?
> >
>
> Google are explicitly saying the response changes. I suspect there is
> something involving Google account data being embedded in some of the
> downloads. For tracking, etc.
>
>
> If you are wanting to test it I have added a patch to
>  that should implement
> archival of responses where the ACLs match. It is completely untested by
> me beyond building, so YMMV.
>
> Amos
>
>


Re: [squid-users] IPv6 interception crash: Ip::Address::getInAddr : Cannot convert non-IPv4 to IPv4.

2016-10-04 Thread Egerváry Gergely
> Thanks for the testing and feedback. I've applied this as part-2 of the
> bug 4302 updates. It will be in the next releases of 3.5 and 4.x.

you are the hero of the day, thank you very much!

-- 
Gergely EGERVARY



Re: [squid-users] Caching http google deb files

2016-10-04 Thread Amos Jeffries
On 5/10/2016 2:34 a.m., Hardik Dangar wrote:
> Hey Amos,
> 
> We have about 50 clients which download the same Google Chrome update
> every 2 or 3 days; that means 2.4 GB. Although the response says Vary,
> the requested file is the same, and all of it is downloaded via apt
> update.
> 
> Is there any option just like ignore-no-store? I know I am asking for
> too much, but it seems very silly on Google's part that they are sending
> a Vary header in a place where they shouldn't, as no matter how you
> access those URLs you are only going to get those deb files.


Some things G does only make sense when you ignore all the PR about
wanting to make the web more efficient and consider it's a company whose
income is derived from recording data about people's habits and
activities. Caching can hide that info from them.

> 
> Can I hack the squid source code to ignore the Vary header?
> 

Google are explicitly saying the response changes. I suspect there is
something involving Google account data being embedded in some of the
downloads. For tracking, etc.


If you are wanting to test it I have added a patch to
 that should implement
archival of responses where the ACLs match. It is completely untested by
me beyond building, so YMMV.

Amos



Re: [squid-users] Caching http google deb files

2016-10-04 Thread Hardik Dangar
Hey Amos,

After referring to one of your old posts I found that we can use

reply_header_replace

to replace headers. Is it possible to replace the "Vary: *" header with
something appropriate?

Or do I need to look at squid's source code to ignore the Vary header
and recompile?



On Tue, Oct 4, 2016 at 7:04 PM, Hardik Dangar 
wrote:

> Hey Amos,
>
> We have about 50 clients which download the same Google Chrome update
> every 2 or 3 days; that means 2.4 GB. Although the response says Vary,
> the requested file is the same, and all of it is downloaded via apt
> update.
>
> Is there any option just like ignore-no-store? I know I am asking for
> too much, but it seems very silly on Google's part that they are sending
> a Vary header in a place where they shouldn't, as no matter how you
> access those URLs you are only going to get those deb files.
>
> Can I hack the squid source code to ignore the Vary header?
>
>
>
> On Tue, Oct 4, 2016 at 6:51 PM, Amos Jeffries 
> wrote:
>
>> On 5/10/2016 2:05 a.m., Hardik Dangar wrote:
>> > Hello,
>> >
>> > I am trying to cache following deb files as its most requested file in
>> > network. ( google chrome almost every few days many clients update it ).
>> >
>> > http://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
>> > http://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-beta_current_i386.deb
>> >
>> > Response headers for both contains Last modified date which is 10 to 15
>> > days old but squid does not seem to cache it somehow. here is sample
>> > response header for one of the file,
>> >
>> > HTTP Response Header
>> >
>> > Status: HTTP/1.1 200 OK
>> > Accept-Ranges: bytes
>> > Content-Length: 6662208
>> > Content-Type: application/x-debian-package
>> > Etag: "fa383"
>> > Last-Modified: Thu, 15 Sep 2016 19:24:00 GMT
>> > Server: downloads
>> > Vary: *
>>
>> The Vary header says that this response is just one of many that can
>> happen for this URL.
>>
>> The "*" in that header says that the way to determine which the clietn
>> gets is based on something no proxy can ever do. Thus no cache can ever
>> re-use any content it wanted to store. Making any attempts to store it a
>> pointless waste of CPU time, disk and memory space that could better be
>> used by some other more useful object. Squid will not ever cache these
>> responses.
>>
>> (Thank you for the well written request for help anyhow.)
>>
>> Amos
>>
>
>


Re: [squid-users] Squid - AD kerberos auth and Linux Server proxy access not working

2016-10-04 Thread Nilesh Gavali
Hi Amos;
OK, we can discuss the issue in two parts: 1. Windows AD authentication
& SSO, and 2. the Linux server unable to access via the squid proxy.

For the first point -
The requirement is to have SSO for accessing the internet via the squid
proxy and, based on the user's AD group membership, to allow access to
specific sites only. I believe the current squid configuration is
working as expected.

For the second point -
The point I would like to highlight here is that the Linux server
IWCCP01 is not part of the domain at all, hence the error below, as
squid is configured for AD_auth. So how can we allow a Linux server or
non-domain machine to access specific sites?

> Error 407 is "proxy auth required", so the proxy is expecting
> authentication for some reason.

> Can you confirm that the hostname vseries-test.bottomline.com is
> contained in your site file /etc/squid/sitelist/dbs_allowed_site ?

YES, we have the entry .bottomline.com, which works fine when accessed
via a Windows machine with the proxy enabled for that user.
==
> Can you temporarily change the line "http_access allow IWCCP01
> allowedsite" to "http_access allow IWCCP01" and see whether the machine
> then gets access?

 I will test this, and update the results.

If that works, please list the output of the command:
  grep "bottomline.com" /etc/squid/sitelist/dbs_allowed_site

o/p of above command as below -

[root@Proxy02 ~]# grep "bottomline.com" 
/etc/squid/sitelist/dbs_allowed_site
.bottomline.com
[root@Proxy02 ~]#

===

Thanks & Regards
Nilesh Suresh Gavali




 
Message: 2
Date: Wed, 5 Oct 2016 00:11:08 +1300
From: Amos Jeffries 
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid - AD kerberos auth and Linux Server
 proxy access not working
Message-ID: 
Content-Type: text/plain; charset=utf-8

On 4/10/2016 11:36 p.m., Antony Stone wrote:
> On Tuesday 04 October 2016 at 12:28:44, Nilesh Gavali wrote:
> 
>> Hello Antony;
>> I have double checked the current working configuration of my squid.conf
>> and it has same settings which I posted earlier. Somehow it is working
>> for us.
> 
> I'm not saying the whole thing won't work; I'm saying there is no point
> in having a line "http_access allow ad_auth" following the line
> "http_access deny all".  The ad_auth line can never be invoked.

Not knowing why authentication works is dangerous. You might have been
allowing non-authenticated traffic and invalid user accounts through.

The only reason it does "work" is that the ACL called "USERS" is _not_
actually checking user logins. It is a group checking ACL which requires
authentication to happen before it can be checked.

In this specific case invalid logins cannot be a member of the group. So
they will not get through the proxy.

However, people who accidentally type the user/password wrong, or whose
machines automatically log in with an account that is not a member of
the group, will not be allowed any way to try again short of shutting
down their browser or maybe even logging out of the machine and trying
from another one.

That may or may not be a problem for you.

> 
>> below is the error from access.log file.
>>
>> 1475518342.279  0 10.xx.15.103 TCP_DENIED/407 3589 CONNECT
>> vseries-test.bottomline.com:443 - NONE/- text/html
> 
> Error 407 is "proxy auth required", so the proxy is expecting
> authentication for some reason.
> 
> Can you confirm that the hostname vseries-test.bottomline.com is
> contained in your site file /etc/squid/sitelist/dbs_allowed_site ?
> 
> Can you temporarily change the line "http_access allow IWCCP01
> allowedsite" to "http_access allow IWCCP01" and see whether the machine
> then gets access?
> 

If that works, please list the output of the command:
  grep "bottomline.com" /etc/squid/sitelist/dbs_allowed_site

Amos



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching http google deb files

2016-10-04 Thread Hardik Dangar
Hey Amos,

We have about 50 clients which download the same Google Chrome update every
2 or 3 days; that means about 2.4 GB each time. Although the response says
Vary, the requested file is the same, and all of it is downloaded via apt
update.

Is there any option just like ignore-no-store? I know I am asking for too
much, but it seems very silly on Google's part to send a Vary header where
they shouldn't, as no matter how you access those URLs you are only going
to get those deb files.

Can I hack the Squid source code to ignore the Vary header?



On Tue, Oct 4, 2016 at 6:51 PM, Amos Jeffries  wrote:

> On 5/10/2016 2:05 a.m., Hardik Dangar wrote:
> > Hello,
> >
> > I am trying to cache the following deb files, as they are the most
> > requested files on our network (Google Chrome; many clients update it
> > every few days).
> >
> > http://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
> > http://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-beta_current_i386.deb
> >
> > Response headers for both contain a Last-Modified date which is 10 to 15
> > days old, but Squid does not seem to cache them somehow. Here is a sample
> > response header for one of the files:
> >
> > HTTP Response Header
> >
> > Status: HTTP/1.1 200 OK
> > Accept-Ranges: bytes
> > Content-Length: 6662208
> > Content-Type: application/x-debian-package
> > Etag: "fa383"
> > Last-Modified: Thu, 15 Sep 2016 19:24:00 GMT
> > Server: downloads
> > Vary: *
>
> The Vary header says that this response is just one of many that can
> happen for this URL.
>
> The "*" in that header says that the way to determine which the clietn
> gets is based on something no proxy can ever do. Thus no cache can ever
> re-use any content it wanted to store. Making any attempts to store it a
> pointless waste of CPU time, disk and memory space that could better be
> used by some other more useful object. Squid will not ever cache these
> responses.
>
> (Thank you for the well written request for help anyhow.)
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching http google deb files

2016-10-04 Thread Amos Jeffries
On 5/10/2016 2:05 a.m., Hardik Dangar wrote:
> Hello,
> 
> I am trying to cache the following deb files, as they are the most
> requested files on our network (Google Chrome; many clients update it
> every few days).
> 
> http://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
> http://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-beta_current_i386.deb
> 
> Response headers for both contain a Last-Modified date which is 10 to 15
> days old, but Squid does not seem to cache them somehow. Here is a sample
> response header for one of the files:
> 
> HTTP Response Header
> 
> Status: HTTP/1.1 200 OK
> Accept-Ranges: bytes
> Content-Length: 6662208
> Content-Type: application/x-debian-package
> Etag: "fa383"
> Last-Modified: Thu, 15 Sep 2016 19:24:00 GMT
> Server: downloads
> Vary: *

The Vary header says that this response is just one of many that can
happen for this URL.

The "*" in that header says that the way to determine which the clietn
gets is based on something no proxy can ever do. Thus no cache can ever
re-use any content it wanted to store. Making any attempts to store it a
pointless waste of CPU time, disk and memory space that could better be
used by some other more useful object. Squid will not ever cache these
responses.
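
A quick way to confirm what the server is sending (assuming curl is
available; the URL is the one from your post):

# curl -sI http://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb | grep -i vary

If that prints "Vary: *", no compliant cache will store the object.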

(Thank you for the well written request for help anyhow.)

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] IPv6 interception crash: Ip::Address::getInAddr : Cannot convert non-IPv4 to IPv4.

2016-10-04 Thread Stephen Borrill
On 04/10/2016 14:10, Amos Jeffries wrote:
> On 5/10/2016 1:16 a.m., Egerváry Gergely wrote:
>>> Getting closer, but still not there...
>>
>> Hah, we need to apply the kern/50198 patch to ipnat_6.c too.
>>
>> --- ip_nat6.c.orig  2015-08-08 18:31:21.0 +0200
>> +++ ip_nat6.c   2016-10-04 14:04:21.0 +0200
>> @@ -2470,8 +2469,8 @@
>> }
>> }
>>
>> -   np->nl_realip6 = nat->nat_ndst6.in6;
>> -   np->nl_realport = nat->nat_ndport;
>> +   np->nl_realip6 = nat->nat_odst6.in6;
>> +   np->nl_realport = nat->nat_odport;
>> }
>> }
>>
>> Thank you very much, Amos, your Squid patch works good with it!
>>
>> Gergely EGERVARY
> 
> Thanks for the testing and feedback. I've applied this as part-2 of the
> bug 4302 updates. It will be in the next releases of 3.5 and 4.x.

Gergely, please update the NetBSD PR with your working kernel patch(es)
and I'll commit them, can't wait for Darren any longer.

-- 
Stephen


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] problem in configuring squid

2016-10-04 Thread Antony Stone
On Tuesday 04 October 2016 at 14:51:13, Mehdi Yeganeh wrote:

> Thanks for the quick reply,
> I need to use my server; I configure my IP address in some software like
> antivirus and ...

... and what?

I do not understand what antivirus software has to do with our discussion.  
Please give details, don't just write "...".

> So, I want all of that working

All of what?

> with my server IP address and for this reason I cannot use torproxy or
> torproject. I need a proxy server (squid) on my server

In that case install Squid on your server.  What is the problem?

> More details about 173.161.0.227:
> It's a Sophos web appliance that uses Squid on Debian along with some other
> proxy software (Astaro HttpProxy) and iptables for forwarding ports. But I
> can't find the other proxy software for download, so I just have Squid
> alone (although iptables is present)

Okay, so I understand that the machine on that IP address (which appears to be 
serving Pennoyer School in Illinois, with connectivity provided by Comcast) is 
a "Sophos web appliance" - some sort of combined firewall / proxy / port 
forwarder.

What is the relevance of that machine to your question?

> Please tell me: should I use other tools, or can Squid do it?

Do what?

Please explain exactly what it is you are trying to achieve, and hoping that 
Squid is a solution for.


Regards,


Antony.

-- 
Police have found a cartoonist dead in his house.  They say that details are 
currently sketchy.

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] IPv6 interception crash: Ip::Address::getInAddr : Cannot convert non-IPv4 to IPv4.

2016-10-04 Thread Amos Jeffries
On 5/10/2016 1:16 a.m., Egerváry Gergely wrote:
>> Getting closer, but still not there...
> 
> Hah, we need to apply the kern/50198 patch to ipnat_6.c too.
> 
> --- ip_nat6.c.orig  2015-08-08 18:31:21.0 +0200
> +++ ip_nat6.c   2016-10-04 14:04:21.0 +0200
> @@ -2470,8 +2469,8 @@
> }
> }
> 
> -   np->nl_realip6 = nat->nat_ndst6.in6;
> -   np->nl_realport = nat->nat_ndport;
> +   np->nl_realip6 = nat->nat_odst6.in6;
> +   np->nl_realport = nat->nat_odport;
> }
> }
> 
> Thank you very much, Amos, your Squid patch works good with it!
> 
> Gergely EGERVARY

Thanks for the testing and feedback. I've applied this as part-2 of the
bug 4302 updates. It will be in the next releases of 3.5 and 4.x.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Caching http google deb files

2016-10-04 Thread Hardik Dangar
Hello,

I am trying to cache the following deb files, as they are the most requested
files on our network (Google Chrome; many clients update it every few days).

http://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
http://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-beta_current_i386.deb

Response headers for both contain a Last-Modified date which is 10 to 15
days old, but Squid does not seem to cache them somehow. Here is a sample
response header for one of the files:

HTTP Response Header

Status: HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 6662208
Content-Type: application/x-debian-package
Etag: "fa383"
Last-Modified: Thu, 15 Sep 2016 19:24:00 GMT
Server: downloads
Vary: *
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
Date: Tue, 04 Oct 2016 12:51:57 GMT
Alt-Svc: quic=":443"; ma=2592000; v="36,35,34,33,32"
Connection: close


I have tried various refresh patterns to cache it, but somehow it's not
cached no matter what I try. Below are 6 different patterns I have already
tried, one by one:

1) refresh_pattern dl-ssl.google.com  2160 100% 10080 ignore-no-cache
reload-into-ims

2) refresh_pattern
http://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_i386.deb
129600 100% 129600 reload-into-ims

3) refresh_pattern dl-ssl.google.com\/dl\/linux\/direct/.*\(.deb|.zip)
43200 80% 129600 reload-into-ims override-lastmod ignore-no-store
refresh-ims store-stale

4) refresh_pattern ^http:\/\/dl-ssl.google.com.*\.(deb|zip)  43200 80%
129600 reload-into-ims

5) refresh_pattern dl.google.com\/.*\.(deb)  129600 100% 129600
reload-into-ims

6) refresh_pattern dl-ssl.google.com\/.*\.(deb)  129600 100% 129600
reload-into-ims


My cache is working fine; at the same time I am able to cache files from
Oracle servers via the following refresh pattern:
refresh_pattern -i download.oracle.com 5259487 20% 5259487 override-expire
override-lastmod ignore-reload ignore-private ignore-auth


So I am not sure what the issue is with the http://dl.google.com/linux
servers. Can anyone give me a clue why it is not working? Is anyone out
there able to cache those files from Google?

Here is the TCP_MISS entry in Squid's access.log file for the above file:
04/Oct/2016:16:37:07 +0530.695  78902 192.168.1.76 TCP_MISS/200 6662561 GET
http://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_i386.deb
- HIER_DIRECT/74.125.68.91 application/x-debian-package

Here is my squid config file,
http://pastebin.com/raw/jY57XJPp


Thanks.
Hardik
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] problem in configuring squid

2016-10-04 Thread Mehdi Yeganeh
Thanks for the quick reply,
I need to use my server; I configure my IP address in some software like
antivirus and ...
So, I want all of that working with my server IP address, and for this
reason I cannot use torproxy or torproject.
I need a proxy server (squid) on my server ...

More details about 173.161.0.227:
It's a Sophos web appliance that uses Squid on Debian along with some other
proxy software (Astaro HttpProxy) and iptables for forwarding ports. But I
can't find the other proxy software for download, so I just have Squid alone
(although iptables is present)

Please tell me: should I use other tools, or can Squid do it?
Thanks.


On Mon, Oct 3, 2016 at 6:52 PM, Antony Stone <
antony.st...@squid.open.source.it> wrote:

> On Monday 03 October 2016 at 17:03:13, Shark wrote:
>
> > I want to config squid to make "open proxy" for both http & https
> > I want make anonymous proxy, without decrypting traffic or etc, just
> change
> > ip address, like this:
> >
> > i find lot of ip port in internet for example: 173.161.0.227
> > when i add some host to /etc/hosts like this:
> >
> > 173.161.0.227 www.iplocation.net
> >
> > its give me true way without ssl blocking in client and my ip changes to
> > 173.161.0.227,
>
> Squid is the wrong tool for this job.
>
> You probably want something like https://www.torproject.org/
>
> Antony.
>
> --
> There are only 10 types of people in the world:
> those who understand binary notation,
> and those who don't.
>
>    Please reply to the list;
>  please *don't* CC me.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid-3.5.21: filter FTP content or FTP commands

2016-10-04 Thread oleg gv
Finally I've managed to reach ftp.intel.com using FileZilla through my
Squid gateway in standard (proxy) mode.

Squid conf:
ftp_port  x.x.x.x  2122

Then I try to block an FTP command and nothing happens. Some lines from my config:

acl rh req_header -i ^FTP-Command
http_access deny rh
http_access allow all

And also add following:

request_header_access  "FTP-Command: LIST" deny all


Connecting to and browsing the remote ftp.intel.com is OK - nothing is blocked.

In squid log i see (fragment):


2016/10/04 15:23:04.177 kid1| 9,2| FtpServer.cc(495) writeReply: FTP Client
REPLY:
-
227 Entering Passive Mode (192,168,33,254,230,30).

--
2016/10/04 15:23:04.177 kid1| 20,2| store.cc(949) checkCachable:
StoreEntry::checkCachable: NO: not cachable
2016/10/04 15:23:04.177 kid1| 20,2| store.cc(949) checkCachable:
StoreEntry::checkCachable: NO: not cachable
2016/10/04 15:23:04.178 kid1| 33,2| FtpServer.cc(699) parseOneRequest:
>>ftp LIST
2016/10/04 15:23:04.178 kid1| 9,2| FtpServer.cc(1320) handleRequest: FTP
Client local=192.168.33.254:2122 remote=192.168.33.10:60838 FD 9 flags=1
2016/10/04 15:23:04.178 kid1| 9,2| FtpServer.cc(1322) handleRequest: FTP
Client REQUEST:
-
GET / HTTP/1.1
FTP-Command: LIST
FTP-Arguments:

--
2016/10/04 15:23:04.178 kid1| 85,2| client_side_request.cc(744)
clientAccessCheckDone: The request GET ftp://ftp.intel.com/ is ALLOWED;
last ACL checked: net33
2016/10/04 15:23:04.178 kid1| 85,2| client_side_request.cc(720)
clientAccessCheck2: No adapted_http_access configuration. default: ALLOW
2016/10/04 15:23:04.178 kid1| 85,2| client_side_request.cc(744)
clientAccessCheckDone: The request GET ftp://ftp.intel.com/ is ALLOWED;
last ACL checked: net33
2016/10/04 15:23:04.178 kid1| 17,2| FwdState.cc(133) FwdState: Forwarding
client request local=192.168.33.254:2122 remote=192.168.33.10:60838 FD 9
flags=1, url=ftp://ftp.intel.com/
2016/10/04 15:23:04.178 kid1| 44,2| peer_select.cc(258) peerSelectDnsPaths:
Find IP destination for: ftp://ftp.intel.com/' via ftp.intel.com
2016/10/04 15:23:04.178 kid1| 44,2| peer_select.cc(258) peerSelectDnsPaths:
Find IP destination for: ftp://ftp.intel.com/' via ftp.intel.com
2016/10/04 15:23:04.178 kid1| 44,2| peer_select.cc(280) peerSelectDnsPaths:
Found sources for 'ftp://ftp.intel.com/'



But I need to block FTP-Command: LIST (for example)
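
A sketch of what might work instead (untested; per the squid.conf
documentation, the req_header ACL takes the header name first, then an
optional -i flag, then a regex matched against that header's value):

acl ftp_list req_header FTP-Command -i ^LIST
http_access deny ftp_list

The original "acl rh req_header -i ^FTP-Command" appears to give "-i" as the
header name, so it can never match anything.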


2016-10-03 20:34 GMT+03:00 Alex Rousskov :

> Please ask these questions on squid-users...
>
> On 10/03/2016 05:51 AM, oleg gv wrote:
> > Thanks, but problems still exist - FTP doesn't work through proxy.
> >
> > 1. I've set in proxy
> > ftp_port 192.168.0.1:2121 
> > 2. set in client browser to use proxy for FTP on 192.168.0.1:2121
> > 
> >
> > Trying to go ftp://ftp.intel.com  and In log of squid i see:
> >
> > FTP Client REPLY:
> > -
> > 530 Must login first
> >
> > 
> >
> > Another variant: setup inerception ftp_proxy (with nat redirect) - and
> > it also doesn'nt work: last commands in log:
> > 2016/10/03 14:43:09.929 kid1| 9,2| FtpRelay.cc(733)
> > dataChannelConnected: connected FTP server data channel:
> > local=8x.xxx.xxx.xxx:41231 remote=192.198.164.82:36034
> >  FD 19 flags=1
> > 2016/10/03 14:43:09.929 kid1| 9,2| FtpClient.cc(791) writeCommand: ftp<<
> > LIST
> >
> > 2016/10/03 14:43:10.125 kid1| 9,2| FtpClient.cc(1108) parseControlReply:
> > ftp>> 125 Data connection already open; Transfer starting.
> >
> > And ftp.intel com is hang, trying to open..
> >
> >
> >
> >
> >
> > 2016-10-01 2:12 GMT+03:00 Alex Rousskov
> >  > >:
> >
> > On 09/30/2016 10:42 AM, oleg gv wrote:
> >
> > > Hello, I've found that NativeFtpRelay appeared in squid 3.5 . Is it
> > > possible to apply http-access acl for FTP proto concerning
> filtering of
> > > FTP methods(commands)
> >
> > Yes, it should be possible.
> >
> >
> > > by analogy of HTTP methods ?
> >
> > Not quite. IIRC, when the HTTP message representing the FTP
> transaction
> > is relayed through Squid, the FTP command name is _not_ stored as an
> > HTTP method. The FTP command name is stored as HTTP "FTP-Command"
> header
> > value. See http://wiki.squid-cache.org/Features/FtpRelay
> > 
> >
> > You should be able to block FTP commands using a req_header ACL.
> >
> >
> > > what other possibilities in squid exist to do this ?
> >
> > An ICAP or eCAP service can also filter relayed FTP messages.
> >
> > Alex.
> >
> >
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Introducing delay to HTTP 407 responses

2016-10-04 Thread Amos Jeffries
On 5/10/2016 12:47 a.m., squid-users wrote:
> Amos,
> 
>> This helper is the mechanism that we accepted. Anything else would be far
>> less useful.
> 
> Makes sense.
> 
>> I think the results you are getting show that the http_status ACL is not
>> working properly.
>>
>> Can you get a "debug_options 28,5" cache.log trace and see if
>> "aclMatchHTTPStatus" is matching anything or "http-response-407" even
>> being tested?
> 
> I set this up as you suggested, then triggered a 407 response from the cache. 
>  It seems that way; I couldn't see aclMatchHTTPStatus or http-response-407 in 
> the log:
> 

Strange. I was sure Alex did some tests recently and proved that even
internally generated responses get http_reply_access applied to them.
Yet no sign of that in your log.

Is this a very old Squid version?

Or are the "checking http_reply_access" lines just later in the log than
your snippet covered?

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] IPv6 interception crash: Ip::Address::getInAddr : Cannot convert non-IPv4 to IPv4.

2016-10-04 Thread Egerváry Gergely
> Getting closer, but still not there...

Hah, we need to apply the kern/50198 patch to ipnat_6.c too.

--- ip_nat6.c.orig  2015-08-08 18:31:21.0 +0200
+++ ip_nat6.c   2016-10-04 14:04:21.0 +0200
@@ -2470,8 +2469,8 @@
}
}

-   np->nl_realip6 = nat->nat_ndst6.in6;
-   np->nl_realport = nat->nat_ndport;
+   np->nl_realip6 = nat->nat_odst6.in6;
+   np->nl_realport = nat->nat_odport;
}
}

Thank you very much, Amos, your Squid patch works good with it!

Gergely EGERVARY
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FW: squid tproxy ssl-bump and Protocol error (TLS code: SQUID_ERR_SSL_HANDSHAKE)

2016-10-04 Thread Amos Jeffries
On 5/10/2016 12:07 a.m., Vieri wrote:
> Hi,
> 
>>> Whatever the reason, for an end-user like me it seems that the XP
>>> client is able to negotiate TLS correctly with Google and
>>> presumably using the cipher DES-CBC3-SHA (maybe after failing
>>> with RC4-MD5 on a first attempt), whereas Squid immediately fails
>>> with RC4-MD5. It doesn't ever seem to try DES-CBC3-SHA even
>>> though it's available in openssl.
>> 
>> ... in this case it might be. But not for the reasons stated. The 
>> problem known so far is that RC4-MD5 cipher. Why it is not being
>> used by your OpenSSL library.
>> 
>> That could bear some further investigation. There may be things you
>> need to enable in the config passed to OpenSSL, or a different
>> build of the library needed. Something along those lines - Im just
>> guessing here.
> 
> Thanks for your reply.
> 
> I don't fully understand your point. I hope you don't mind if I try
> to make a quick recap here below:
> 
> 1) www.google.com ONLY allows the following ciphers for TLS V 1.0
> (which is the highest TLS version for WinXP IE8):
> 
> TLSv1.0 ciphers:
> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA - strong
> TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA - strong
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong
> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA - strong
> TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
> TLS_RSA_WITH_AES_128_CBC_SHA - strong
> TLS_RSA_WITH_AES_256_CBC_SHA - strong
> 
> Correct?
> 

Insufficient data. Assuming true ...


> 2) According to https://www.ssllabs.com/ssltest/viewMyClient.html the
> Windows XP IE8 client supports TLS 1.0 and the following cipher list:
> TLS_RSA_WITH_RC4_128_MD5 (0x4)   INSECURE 128
> TLS_RSA_WITH_RC4_128_SHA (0x5)   INSECURE 128
> TLS_RSA_WITH_3DES_EDE_CBC_SHA (0xa)  112
> TLS_RSA_WITH_DES_CBC_SHA (0x9)   WEAK 56
> TLS_RSA_EXPORT1024_WITH_RC4_56_SHA (0x64)   INSECURE 56
> TLS_RSA_EXPORT1024_WITH_DES_CBC_SHA (0x62)   WEAK 56
> TLS_RSA_EXPORT_WITH_RC4_40_MD5 (0x3)   INSECURE 40
> TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5 (0x6)   INSECURE 40
> TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA (0x13)   Forward Secrecy2  112
> TLS_DHE_DSS_WITH_DES_CBC_SHA (0x12)   WEAK 56
> TLS_DHE_DSS_EXPORT1024_WITH_DES_CBC_SHA (0x63)   WEAK 56
> 
> of which the least weak are:
> 
> TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
> TLS_RSA_WITH_3DES_EDE_CBC_SHA
> 
> Does that sound correct?
> 

Insufficient data. Assuming true ...


> 3) I'm deducing from the previous two points that the only eligible
> cipher is TLS_RSA_WITH_3DES_EDE_CBC_SHA because it's the only cipher
> supported by both google.com and WinXP
> 
> Right?
> 

Yes. Qualified by above assumptions.


> 4) According to https://testssl.sh/openssl-rfc.mappping.html the
> openssl cipher name equivalent for TLS_RSA_WITH_3DES_EDE_CBC_SHA is
> DES-CBC3-SHA.
> 
> Correct?

Yes.

> 
> 5) So if all the previous points are correct, now I'm assuming that
> if I run openssl at the command line on the same system where Squid
> is running then I can "reproduce" what the WinXP client "wants". I
> run the following:
> 
> # openssl s_client -connect google.com:443 -tls1 -cipher DES-CBC3-SHA
> [...]
> SSL-Session:
> Protocol  : TLSv1
> Cipher: DES-CBC3-SHA
> [...]
> (that went well)
> 
> I also run this other command:
> 
> # curl --tlsv1.0 --ciphers DES-CBC3-SHA https://www.google.com --trace trace.log
> 
> The trace.log file contains lines such as:
> == Info: Cipher selection: DES-CBC3-SHA
> Handshake OK and web page is accessed.
> 
> Is it correct to assume at this point that the current openssl build
> on this system is "OK" as far as supporting "Win XP TLS 1.0 ciphers
> to access at least google.com"?

Yes. The build is capable of it. That is one of 3 conditions that must
be met for it to work.

The other two being:

* whether it is enabled in the library config.
 - OpenSSL library has its own conf file somewhere.
 - it is possible that curl and other tools whose primary design purpose
is communication (not testing) override the library normal defaults for
their own use, or re-try certain things after failures. That needs to be
eliminated to be sure.

* that the squid.conf settings combine with those library settings to
cause it to be (or stay) enabled.
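
One squid.conf knob relevant to that last point is the cipher list Squid
uses on its outgoing TLS connections. A sketch only (Squid-3.5 directive;
whether 3DES is acceptable is a policy decision):

sslproxy_cipher DES-CBC3-SHA:HIGH:!aNULL

That would explicitly offer the one cipher the XP client and google.com have
in common, assuming the OpenSSL build provides it.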


> 
> 6) I don't understand why you say that my openssl library does not
> use RC4-MD5 (did I understand your sentence correctly?). Why should
> the RC4-MD5 cipher be used in the first place? Who is requesting it?
> If it's the Windows XP client then it should obviously be discarded
> since google.com does not support it. So maybe this is what IE8 on XP
> does: it first tries RC4-MD5 and when that fails, it goes for
> DES-CBC3-SHA. In any case, when the WinXP client uses Squid as MITM,
> Squid *IS* using the RC4-MD5 cipher *AND* my openssl library *does*
> support this cipher as shown in the following command:
> 
> # openssl ciphers 
> 

Re: [squid-users] Introducing delay to HTTP 407 responses

2016-10-04 Thread squid-users
Amos,

> This helper is the mechanism that we accepted. Anything else would be far
> less useful.

Makes sense.

> I think the results you are getting show that the http_status ACL is not
> working properly.
> 
> Can you get a "debug_options 28,5" cache.log trace and see if
> "aclMatchHTTPStatus" is matching anything or "http-response-407" even
> being tested?

I set this up as you suggested, then triggered a 407 response from the cache.  
It seems that way; I couldn't see aclMatchHTTPStatus or http-response-407 in 
the log:

2016/10/04 22:37:12.656 kid1| 28,3| Checklist.cc(70) preCheck: 0x7ffcaaa6a540 
checking fast rules
2016/10/04 22:37:12.656 kid1| 28,5| Checklist.cc(346) fastCheck: aclCheckFast: 
list: 0x1c3da68
2016/10/04 22:37:12.656 kid1| 28,5| Acl.cc(138) matches: checking snmp_access
2016/10/04 22:37:12.656 kid1| 28,5| Checklist.cc(400) bannedAction: Action 
'ALLOWED/0' is not banned
2016/10/04 22:37:12.656 kid1| 28,5| Acl.cc(138) matches: checking snmp_access#1
2016/10/04 22:37:12.656 kid1| 28,5| Acl.cc(138) matches: checking localhost
2016/10/04 22:37:12.656 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: 
'127.0.0.1:34818' found
2016/10/04 22:37:12.656 kid1| 28,3| Acl.cc(158) matches: checked: localhost = 1
2016/10/04 22:37:12.656 kid1| 28,3| Acl.cc(158) matches: checked: snmp_access#1 
= 1
2016/10/04 22:37:12.656 kid1| 28,3| Acl.cc(158) matches: checked: snmp_access = 
1
2016/10/04 22:37:12.656 kid1| 28,3| Checklist.cc(63) markFinished: 
0x7ffcaaa6a540 answer ALLOWED for match
2016/10/04 22:37:12.656 kid1| 28,4| FilledChecklist.cc(66) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0x7ffcaaa6a540
2016/10/04 22:37:12.656 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x7ffcaaa6a540
2016/10/04 22:37:12.657 kid1| 28,3| Checklist.cc(70) preCheck: 0x7ffcaaa6a540 
checking fast rules
2016/10/04 22:37:12.657 kid1| 28,5| Checklist.cc(346) fastCheck: aclCheckFast: 
list: 0x1c3da68
2016/10/04 22:37:12.657 kid1| 28,5| Acl.cc(138) matches: checking snmp_access
2016/10/04 22:37:12.657 kid1| 28,5| Checklist.cc(400) bannedAction: Action 
'ALLOWED/0' is not banned
2016/10/04 22:37:12.657 kid1| 28,5| Acl.cc(138) matches: checking snmp_access#1
2016/10/04 22:37:12.657 kid1| 28,5| Acl.cc(138) matches: checking localhost
2016/10/04 22:37:12.657 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: 
'127.0.0.1:34818' found
2016/10/04 22:37:12.657 kid1| 28,3| Acl.cc(158) matches: checked: localhost = 1
2016/10/04 22:37:12.657 kid1| 28,3| Acl.cc(158) matches: checked: snmp_access#1 
= 1
2016/10/04 22:37:12.657 kid1| 28,3| Acl.cc(158) matches: checked: snmp_access = 
1
2016/10/04 22:37:12.657 kid1| 28,3| Checklist.cc(63) markFinished: 
0x7ffcaaa6a540 answer ALLOWED for match
2016/10/04 22:37:12.657 kid1| 28,4| FilledChecklist.cc(66) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0x7ffcaaa6a540
2016/10/04 22:37:12.657 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x7ffcaaa6a540
2016/10/04 22:37:17.697 kid1| 28,3| Checklist.cc(70) preCheck: 0x7ffcaaa6a540 
checking fast rules
2016/10/04 22:37:17.697 kid1| 28,5| Checklist.cc(346) fastCheck: aclCheckFast: 
list: 0x1c3da68
2016/10/04 22:37:17.697 kid1| 28,5| Acl.cc(138) matches: checking snmp_access
2016/10/04 22:37:17.697 kid1| 28,5| Checklist.cc(400) bannedAction: Action 
'ALLOWED/0' is not banned
2016/10/04 22:37:17.697 kid1| 28,5| Acl.cc(138) matches: checking snmp_access#1
2016/10/04 22:37:17.697 kid1| 28,5| Acl.cc(138) matches: checking localhost
2016/10/04 22:37:17.697 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: 
'127.0.0.1:34912' found
2016/10/04 22:37:17.697 kid1| 28,3| Acl.cc(158) matches: checked: localhost = 1
2016/10/04 22:37:17.697 kid1| 28,3| Acl.cc(158) matches: checked: snmp_access#1 
= 1
2016/10/04 22:37:17.697 kid1| 28,3| Acl.cc(158) matches: checked: snmp_access = 
1
2016/10/04 22:37:17.697 kid1| 28,3| Checklist.cc(63) markFinished: 
0x7ffcaaa6a540 answer ALLOWED for match
2016/10/04 22:37:17.697 kid1| 28,4| FilledChecklist.cc(66) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0x7ffcaaa6a540
2016/10/04 22:37:17.697 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x7ffcaaa6a540
2016/10/04 22:37:17.698 kid1| 28,3| Checklist.cc(70) preCheck: 0x7ffcaaa6a540 
checking fast rules
2016/10/04 22:37:17.698 kid1| 28,5| Checklist.cc(346) fastCheck: aclCheckFast: 
list: 0x1c3da68
2016/10/04 22:37:17.698 kid1| 28,5| Acl.cc(138) matches: checking snmp_access
2016/10/04 22:37:17.698 kid1| 28,5| Checklist.cc(400) bannedAction: Action 
'ALLOWED/0' is not banned
2016/10/04 22:37:17.698 kid1| 28,5| Acl.cc(138) matches: checking snmp_access#1
2016/10/04 22:37:17.698 kid1| 28,5| Acl.cc(138) matches: checking localhost
2016/10/04 22:37:17.698 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: 
'127.0.0.1:34912' found
2016/10/04 22:37:17.698 kid1| 28,3| Acl.cc(158) matches: checked: localhost = 1
2016/10/04 22:37:17.698 kid1| 28,3| Acl.cc(158) matches: checked: snmp_access#1 
= 

Re: [squid-users] IPv6 interception crash: Ip::Address::getInAddr : Cannot convert non-IPv4 to IPv4.

2016-10-04 Thread Egerváry Gergely
> Aha. Damn macros.
> 
> There are a few changes needed, for both v4/v6 inputs and 'realip'
> processing. This attached patch should be what you need for Squid-3.5 to
> work.

Getting closer, but still not there...

The browser client is 2001:738:7a00:a::a:d, the remote destination is
2001:4c48:2:268::2:1c

The ipnat state table entry:
RDR 2001:738:7a00:a::14 3128  <- -> 2001:4c48:2:268::2:1c 80
[2001:738:7a00:a::a:d 56623]

Squid log:

2016/10/04 13:16:33.365 kid1| 51,3| fd.cc(198) fd_open: fd_open() FD 22
HTTP Request
2016/10/04 13:16:33.366 kid1| 89,5| Intercept.cc(391) Lookup: address
BEGIN: me/client= [2001:738:7a00:a::14]:3128, destination/me=
[2001:738:7a00:a::14]:65491
2016/10/04 13:16:33.366 kid1| 89,9| Intercept.cc(290) IpfInterception:
address: local=[2001:738:7a00:a::14]:3128
remote=[2001:738:7a00:a::14]:65491 FD 22 flags=33
2016/10/04 13:16:33.366 kid1| ERROR: NAT/TPROXY lookup failed to locate
original IPs on local=[2001:738:7a00:a::14]:3128
remote=[2001:738:7a00:a::14]:65491 FD 22 flags=33
2016/10/04 13:16:33.366 kid1| 5,5| TcpAcceptor.cc(287) acceptOne:
Listener: local=[2001:738:7a00:a::14]:3128 remote=[::] FD 19 flags=41
accepted new connection local=[2001:738:7a00:a::14]:3128
remote=[2001:738:7a00:a::14]:65491 FD 22 flags=33 handler
Subscription: 0x16acf40*1

--
Gergely EGERVARY

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Introducing delay to HTTP 407 responses

2016-10-04 Thread Amos Jeffries
On 4/10/2016 11:53 p.m., squid-us...@filter.luko.org wrote:
> Eliezer,
> 
> Thank you for your reply, I tried the following:
> 
>> Hey Luke,
>>
>> Try to use the next line instead:
>> external_acl_type delay ttl=1 negative_ttl=0 cache=0 %SRC %SRCPORT %URI 
>> /tmp/delay.pl
>>
>> And see what happens.
> 
> But it's not introducing a delay into the response.  Running strace across 
> the pid of each child helper doesn't show any activity across those processes 
> either.
> 

The purpose of that helper is to receive all lookups, and actively pause
responding to them. Having any TTL/cache values except "ttl=0
negative_ttl=0 cache=0" in those options bypasses the helper.


> I also tried the approach suggested by Amos:
> 
>> The outcome of that was a 'ext_delayer_acl helper in Squid-3.5
>>
>> 
>>
>> It works slightly differently to what was being discussed in the thread.
>> see the man page for details on how to configure it.
> 
> Using the following config:
> 
> external_acl_type delay concurrency=10 children-max=2 children-startup=1 
> children-idle=1 cache=10 %URI /tmp/ext_delayer_acl -w 1000 -d
> acl http-response-407 http_status 407
> acl delay-1sec external delay
> http_reply_access deny http-response-407 delay-1sec !all
> 
> Debug information from ext_delayer_acl is written to the cache log; I see the 
> processes start up but they are not hit with any requests by Squid.  I also 
> added %SRC %SRCPORT into the configuration, but that didn't seem to help 
> either.
> 
> Would the developers be open to adding a configuration-based throttle to 
> authentication responses, avoiding the need for an external helper?  Or 
> alternatively, is there another way to slow down auth responses?  It's 
> comprising about 90% of the log volume (450,000 requests/hr) in badly 
> affected sites at the moment.
> 

This helper is the mechanism that we accepted. Anything else would be
far less useful.

I think the results you are getting show that the http_status ACL is not
working properly.

Can you get a "debug_options 28,5" cache.log trace and see if
"aclMatchHTTPStatus" is matching anything or "http-response-407" even
being tested?
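
For reference, a minimal way to capture that (assuming the default cache.log
location) might be:

# in squid.conf, temporarily:
debug_options ALL,1 28,5

# then reproduce a 407 and check:
grep -E 'aclMatchHTTPStatus|http-response-407' /var/log/squid/cache.log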

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid - AD kerberos auth and Linux Server proxy access not working

2016-10-04 Thread Amos Jeffries
On 4/10/2016 11:36 p.m., Antony Stone wrote:
> On Tuesday 04 October 2016 at 12:28:44, Nilesh Gavali wrote:
> 
>> Hello Antony;
>> I have double checked the current working configuration of my squid.conf
>> and it has the same settings which I posted earlier. Somehow it is
>> working for us.
> 
> I'm not saying the whole thing won't work; I'm saying there is no point in 
> having a line "http_access allow ad_auth" following the line "http_access
> deny all".  The ad_auth line can never be invoked.

Not knowing why authentication works is dangerous. You might have been
allowing non-authenticated traffic and invalid user accounts through.

The only reason it does "work" is that the ACL called "USERS" is _not_
actually checking user logins. It is a group checking ACL which requires
authentication to happen before it can be checked.

In this specific case invalid logins cannot be a member of the group. So
they will not get through the proxy.

However, people who accidentally type the user/password wrong, or whose
machines automatically login with an account not a member of the group
will not be allowed any way to try again short of shutting down their
browser or maybe even logging out of the machine and trying from another
one.

That may or may not be a problem for you.

> 
>> below is the error from access.log file.
>>
>> 1475518342.279  0 10.xx.15.103 TCP_DENIED/407 3589 CONNECT
>> vseries-test.bottomline.com:443 - NONE/- text/html
> 
> Error 407 is "proxy auth required", so the proxy is expecting authentication 
> for some reason.
> 
> Can you confirm that the hostname vseries-test.bottomline.com is contained in 
> your site file /etc/squid/sitelist/dbs_allowed_site ?
> 
> Can you temporarily change the line "http_access allow IWCCP01 allowedsite" 
> to 
> "http_access allow IWCCP01" and see whether the machine then gets access?
> 

If that works, please list the output of the command:
  grep "bottomline.com" /etc/squid/sitelist/dbs_allowed_site

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FW: squid tproxy ssl-bump and Protocol error (TLS code: SQUID_ERR_SSL_HANDSHAKE)

2016-10-04 Thread Vieri
Hi,

>> Whatever the reason,
>> for an end-user like me it seems that the XP client is able to
>> negotiate TLS correctly with Google and presumably using the cipher
>> DES-CBC3-SHA (maybe after failing with RC4-MD5 on a first attempt),
>> whereas Squid immediately fails with RC4-MD5. It doesn't ever seem to
>> try DES-CBC3-SHA even though it's available in openssl.
>
> ... in this case it might be. But not for the reasons stated. The
> problem known so far is that RC4-MD5 cipher. Why it is not being used by
> your OpenSSL library.
>
> That could bear some further investigation. There may be things you need
> to enable in the config passed to OpenSSL, or a different build of the
> library needed. Something along those lines - Im just guessing here.

Thanks for your reply.

I don't fully understand your point.
I hope you don't mind if I try to make a quick recap here below:

1) www.google.com ONLY allows the following ciphers for TLS V 1.0 (which is the 
highest TLS version for WinXP IE8):

TLSv1.0:
ciphers:
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA - strong
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA - strong
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA - strong
TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
TLS_RSA_WITH_AES_128_CBC_SHA - strong
TLS_RSA_WITH_AES_256_CBC_SHA - strong

Correct?

2) According to https://www.ssllabs.com/ssltest/viewMyClient.html the Windows 
XP IE8 client supports:
TLS 1.0 and the following cipher list:
TLS_RSA_WITH_RC4_128_MD5 (0x4)   INSECURE 128
TLS_RSA_WITH_RC4_128_SHA (0x5)   INSECURE 128
TLS_RSA_WITH_3DES_EDE_CBC_SHA (0xa)  112
TLS_RSA_WITH_DES_CBC_SHA (0x9)   WEAK 56
TLS_RSA_EXPORT1024_WITH_RC4_56_SHA (0x64)   INSECURE 56
TLS_RSA_EXPORT1024_WITH_DES_CBC_SHA (0x62)   WEAK 56
TLS_RSA_EXPORT_WITH_RC4_40_MD5 (0x3)   INSECURE 40
TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5 (0x6)   INSECURE 40
TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA (0x13)   Forward Secrecy2  112
TLS_DHE_DSS_WITH_DES_CBC_SHA (0x12)   WEAK 56
TLS_DHE_DSS_EXPORT1024_WITH_DES_CBC_SHA (0x63)   WEAK 56

of which the least weak are:

TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
TLS_RSA_WITH_3DES_EDE_CBC_SHA

Does that sound correct?

3) I'm deducing from the previous two points that the only eligible cipher is 
TLS_RSA_WITH_3DES_EDE_CBC_SHA because it's the only cipher supported by both 
google.com and WinXP

Right?

4) According to https://testssl.sh/openssl-rfc.mappping.html the openssl cipher 
name equivalent for TLS_RSA_WITH_3DES_EDE_CBC_SHA is DES-CBC3-SHA.

Correct?

5) So if all the previous points are correct, now I'm assuming that if I run 
openssl at the command line on the same system where Squid is running then I 
can "reproduce" what the WinXP client "wants".
I run the following:

# openssl s_client -connect google.com:443 -tls1 -cipher DES-CBC3-SHA
[...]
SSL-Session:
Protocol  : TLSv1
Cipher: DES-CBC3-SHA
[...]
(that went well)

I also run this other command:

# curl --tlsv1.0 --ciphers DES-CBC3-SHA https://www.google.com --trace trace.log

The trace.log file contains lines such as:
== Info: Cipher selection: DES-CBC3-SHA
Handshake OK and web page is accessed.

Is it correct to assume at this point that the current openssl build on this 
system is "OK" as far as supporting "Win XP TLS 1.0 ciphers to access at least 
google.com"?

6) I don't understand why you say that my openssl library does not use RC4-MD5 
(did I understand your sentence correctly?). Why should the RC4-MD5 cipher be 
used in the first place? Who is requesting it? If it's the Windows XP client 
then it should obviously be discarded since google.com does not support it. So 
maybe this is what IE8 on XP does: it first tries RC4-MD5 and when that fails, 
it goes for DES-CBC3-SHA.
In any case, when the WinXP client uses Squid as MITM, Squid *IS* using the 
RC4-MD5 cipher *AND* my openssl library *does* support this cipher as shown in 
the following command:

# openssl ciphers

Re: [squid-users] IPv6 interception crash: Ip::Address::getInAddr : Cannot convert non-IPv4 to IPv4.

2016-10-04 Thread Amos Jeffries
On 4/10/2016 10:52 p.m., Egerváry Gergely wrote:
>> Is there another defined somewhere else? For some reason your Squid is
>> managing to build with just "nl_inip" (no 'addr') in the field name.
> 
> There's a copy in /usr/include/netinet, but it's the same:
> 
> typedef   struct  natlookup {
>   i6addr_tnl_inipaddr;
>   i6addr_tnl_outipaddr;
>   i6addr_tnl_realipaddr;
>   int nl_v;
>   int nl_flags;
>   u_short nl_inport;
>   u_short nl_outport;
>   u_short nl_realport;
> } natlookup_t;
> 
> #define   nl_inip nl_inipaddr.in4
> #define   nl_outipnl_outipaddr.in4
> #define   nl_realip   nl_realipaddr.in4
> #define   nl_inip6nl_inipaddr.in6
> #define   nl_outip6   nl_outipaddr.in6
> #define   nl_realip6  nl_realipaddr.in6
> 
> ... so "nl_inip" is a simple #define to nl_inipaddr.in4
> 
> This is from Squid's Intercept.cc:
> 
> natLookup.nl_inport = htons(newConn->local.port());
> newConn->local.getInAddr(natLookup.nl_inip);
> natLookup.nl_outport = htons(newConn->remote.port());
> newConn->remote.getInAddr(natLookup.nl_outip);
> 
> Is this correct?
> Should we have this in the "else" section of
>   if (newConn->remote.isIPv6()) ... instead?
> 

Aha. Damn macros.

There are a few changes needed, for both v4/v6 inputs and 'realip'
processing. This attached patch should be what you need for Squid-3.5 to
work.

Amos
=== modified file 'src/ip/Intercept.cc'
--- src/ip/Intercept.cc 2016-04-12 06:52:39 +
+++ src/ip/Intercept.cc 2016-10-04 10:35:52 +
@@ -207,16 +207,21 @@
 debugs(89, warningLevel, "IPF (IPFilter v4) NAT does not support IPv6. 
Please upgrade to IPFilter v5.1");
 warningLevel = (warningLevel + 1) % 10;
 return false;
+}
+newConn->local.getInAddr(natLookup.nl_inip);
+newConn->remote.getInAddr(natLookup.nl_outip);
 #else
 natLookup.nl_v = 6;
+newConn->local.getInAddr(natLookup.nl_inipaddr.in6);
+newConn->remote.getInAddr(natLookup.nl_outipaddr.in6);
 } else {
 natLookup.nl_v = 4;
+newConn->local.getInAddr(natLookup.nl_inipaddr.in4);
+newConn->remote.getInAddr(natLookup.nl_outipaddr.in4);
+}
 #endif
-}
 natLookup.nl_inport = htons(newConn->local.port());
-newConn->local.getInAddr(natLookup.nl_inip);
 natLookup.nl_outport = htons(newConn->remote.port());
-newConn->remote.getInAddr(natLookup.nl_outip);
 // ... and the TCP flag
 natLookup.nl_flags = IPN_TCP;
 
@@ -281,7 +286,14 @@
 debugs(89, 9, HERE << "address: " << newConn);
 return false;
 } else {
+#if IPFILTER_VERSION < 503
 newConn->local = natLookup.nl_realip;
+#else
+if (newConn->remote.isIPv6())
+newConn->local = natLookup.nl_realipaddr.in6;
+else
+newConn->local = natLookup.nl_realipaddr.in4;
+#endif
 newConn->local.port(ntohs(natLookup.nl_realport));
 debugs(89, 5, HERE << "address NAT: " << newConn);
 return true;

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Introducing delay to HTTP 407 responses

2016-10-04 Thread squid-users
Eliezer,

Thank you for your reply, I tried the following:

> Hey Luke,
> 
> Try to use the next line instead:
> external_acl_type delay ttl=1 negative_ttl=0 cache=0 %SRC %SRCPORT %URI 
> /tmp/delay.pl
> 
> And see what happens.

But it's not introducing a delay into the response.  Running strace across the 
pid of each child helper doesn't show any activity across those processes 
either.

I also tried the approach suggested by Amos:

> The outcome of that was a 'ext_delayer_acl helper in Squid-3.5
> 
> 
> 
> It works slightly differently to what was being discussed in the thread.
> see the man page for details on how to configure it.

Using the following config:

external_acl_type delay concurrency=10 children-max=2 children-startup=1 
children-idle=1 cache=10 %URI /tmp/ext_delayer_acl -w 1000 -d
acl http-response-407 http_status 407
acl delay-1sec external delay
http_reply_access deny http-response-407 delay-1sec !all

Debug information from ext_delayer_acl is written to the cache log; I see the 
processes start up but they are not hit with any requests by Squid.  I also 
added %SRC %SRCPORT into the configuration, but that didn't seem to help either.

Would the developers be open to adding a configuration-based throttle to 
authentication responses, avoiding the need for an external helper?  Or 
alternatively, is there another way to slow down auth responses?  It's 
comprising about 90% of the log volume (450,000 requests/hr) in badly affected 
sites at the moment.

Luke


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid - AD kerberos auth and Linux Server proxy access not working

2016-10-04 Thread Antony Stone
On Tuesday 04 October 2016 at 12:28:44, Nilesh Gavali wrote:

> Hello Antony;
> I have double checked the current working configuration of my squid.conf
> and it has the same settings which I posted earlier. Somehow it is working
> for us.

I'm not saying the whole thing won't work; I'm saying there is no point in 
having a line "http_access allow ad_auth" following the line "http_access deny 
all".  The ad_auth line can never be invoked.

> below is the error from access.log file.
> 
> 1475518342.279  0 10.xx.15.103 TCP_DENIED/407 3589 CONNECT
> vseries-test.bottomline.com:443 - NONE/- text/html

Error 407 is "proxy auth required", so the proxy is expecting authentication 
for some reason.

Can you confirm that the hostname vseries-test.bottomline.com is contained in 
your site file /etc/squid/sitelist/dbs_allowed_site ?

Can you temporarily change the line "http_access allow IWCCP01 allowedsite" to 
"http_access allow IWCCP01" and see whether the machine then gets access?


Antony.

-- 
+++ Divide By Cucumber Error.  Please Reinstall Universe And Reboot +++

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid - AD kerberos auth and Linux Server proxy access not working

2016-10-04 Thread Nilesh Gavali
Hello Antony;
I have double checked the current working configuration of my squid.conf 
and it has the same settings which I posted earlier. Somehow it is working
for us.

below is the error from access.log file.

1475518342.279  0 10.xx.15.103 TCP_DENIED/407 3589 CONNECT 
vseries-test.bottomline.com:443 - NONE/- text/html


Thanks & Regards
Nilesh Suresh Gavali
--

Message: 5
Date: Tue, 4 Oct 2016 11:08:27 +0100
From: Nilesh Gavali <nilesh.gav...@tcs.com>
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid - AD kerberos auth and Linux Server proxy
 access not working
Message-ID:
 <of227b7bea.03fd80e0-on80258042.0036fe6d-80258042.0037b...@tcs.com>
Content-Type: text/plain; charset="utf-8"

All;

We have a Squid proxy configured with Windows SSO with Kerberos, which works
fine for Windows AD users.
We have a new requirement where one Linux application server needs to access
the Internet via the Squid proxy; we allowed the Linux host access via an
ACL but are getting a denied-access error.
Below is the configuration done to allow the Linux server host IWCCP02.

###
auth_param negotiate program /usr/lib64/squid/squid_kerb_auth -s 
HTTP/proxy02.cust...@cust.in
auth_param negotiate children 20
auth_param negotiate keep_alive on

acl ad_auth proxy_auth REQUIRED

  AD Group membership  

external_acl_type AD_Group ttl=300 negative_ttl=0 %LOGIN 
/usr/lib64/squid/squid_ldap_group -P -R -b "DC=CUST, DC=IN" -D svcproxy -W 
/etc/squid/pswd/pswd -f 
"(&(objectclass=person)(userPrincipalName=%v)(memberof=cn=%a,ou=InternetAccess,ou=Groups,DC=CUST,
DC=IN))" -h Cust.in -s sub -v 3
#
#
acl USER external AD_Group lgInternetAccess_Users
acl allowedsite dstdomain "/etc/squid/sitelist/dbs_allowed_site"

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed

acl IWCCP01 src 10.xx.15.103   # Linux Application server
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines
#
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed


http_access allow IWCCP01 allowedsite
http_access allow USER allowedsite
http_access deny all
http_access allow ad_auth

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 8080
never_direct allow all

cache_peer 10.xx.xx.108 parent 8080 0 default
###


Thanks & Regards
Nilesh Suresh Gavali



--

Re: [squid-users] Squid - AD kerberos auth and Linux Server proxy access not working

2016-10-04 Thread Antony Stone
On Tuesday 04 October 2016 at 12:08:27, Nilesh Gavali wrote:

> All;
> 
> we have Squid proxy configured with Windows SSO with Kerberos which work
> fine for WIndows AD users.
> we have new requirement where one Linux application server need to access
> Internet via squid proxy, we allowed Linux host access via ACL but getting
> denied access error.

> http_access allow IWCCP01 allowedsite
> http_access allow USER allowedsite
> http_access deny all
> http_access allow ad_auth

That makes no sense.  The last rule can never be triggered.  "deny all" does 
exactly what it says.
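
For illustration only, a sketch of an ordering in which every rule can be
reached (using the ACL names from your config; adapt as needed):

http_access allow IWCCP01 allowedsite
http_access allow USER allowedsite
http_access allow ad_auth
http_access deny all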

However, that doesn't explain your problem, so please show what you get in 
your access log for a request from this Linux machine IWCCP01.

Thanks,

Antony.

-- 
"In fact I wanted to be John Cleese and it took me some time to realise that 
the job was already taken."

 - Douglas Adams

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid - AD kerberos auth and Linux Server proxy access not working

2016-10-04 Thread Nilesh Gavali
All;

We have a Squid proxy configured with Windows SSO with Kerberos, which works
fine for Windows AD users.
We have a new requirement where one Linux application server needs to access
the Internet via the Squid proxy; we allowed the Linux host access via an
ACL but are getting a denied-access error.
Below is the configuration done to allow the Linux server host IWCCP02.

###
auth_param negotiate program /usr/lib64/squid/squid_kerb_auth -s 
HTTP/proxy02.cust...@cust.in
auth_param negotiate children 20
auth_param negotiate keep_alive on

acl ad_auth proxy_auth REQUIRED

  AD Group membership  

external_acl_type AD_Group ttl=300 negative_ttl=0 %LOGIN 
/usr/lib64/squid/squid_ldap_group -P -R -b "DC=CUST, DC=IN" -D svcproxy -W 
/etc/squid/pswd/pswd -f 
"(&(objectclass=person)(userPrincipalName=%v)(memberof=cn=%a,ou=InternetAccess,ou=Groups,DC=CUST,
 
DC=IN))" -h Cust.in -s sub -v 3
#
#
acl USER external AD_Group lgInternetAccess_Users
acl allowedsite dstdomain "/etc/squid/sitelist/dbs_allowed_site"

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed

acl IWCCP01 src 10.xx.15.103   # Linux Application server
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines
#
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed


http_access allow IWCCP01 allowedsite
http_access allow USER allowedsite
http_access deny all
http_access allow ad_auth

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 8080
never_direct allow all

cache_peer 10.xx.xx.108 parent 8080 0 default
###


Thanks & Regards
Nilesh Suresh Gavali


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] IPv6 interception crash: Ip::Address::getInAddr : Cannot convert non-IPv4 to IPv4.

2016-10-04 Thread Egerváry Gergely
> Is there another defined somewhere else? For some reason your Squid is
> managing to build with just "nl_inip" (no 'addr') in the field name.

There's a copy in /usr/include/netinet, but it's the same:

typedef struct  natlookup {
i6addr_tnl_inipaddr;
i6addr_tnl_outipaddr;
i6addr_tnl_realipaddr;
int nl_v;
int nl_flags;
u_short nl_inport;
u_short nl_outport;
u_short nl_realport;
} natlookup_t;

#define nl_inip nl_inipaddr.in4
#define nl_outipnl_outipaddr.in4
#define nl_realip   nl_realipaddr.in4
#define nl_inip6nl_inipaddr.in6
#define nl_outip6   nl_outipaddr.in6
#define nl_realip6  nl_realipaddr.in6

... so "nl_inip" is a simple #define to nl_inipaddr.in4

This is from Squid's Intercept.cc:

natLookup.nl_inport = htons(newConn->local.port());
newConn->local.getInAddr(natLookup.nl_inip);
natLookup.nl_outport = htons(newConn->remote.port());
newConn->remote.getInAddr(natLookup.nl_outip);

Is this correct?
Should we have this in the "else" section of
  if (newConn->remote.isIPv6()) ... instead?

--
Gergely EGERVARY
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] IPv6 interception crash: Ip::Address::getInAddr : Cannot convert non-IPv4 to IPv4.

2016-10-04 Thread Amos Jeffries
On 4/10/2016 8:57 p.m., Egerváry Gergely wrote:
>> Apparently the IPFilter 5.1 code defines a 32-bit IPv4-only structure
>> for 128-bit IPv6 addresses to be placed into. That was supposed to be
>> fixed in IPFilter 5.0.3.
>>
>> Can you look through your system for code header files that define
>> "struct natlookup" and show me what they contain?
> 
> in sys/external/bsd/ipf/netinet/ip_nat.h:
> 
> typedef struct natlookup {
>         i6addr_t        nl_inipaddr;
>         i6addr_t        nl_outipaddr;
>         i6addr_t        nl_realipaddr;
>         int             nl_v;
>         int             nl_flags;
>         u_short         nl_inport;
>         u_short         nl_outport;
>         u_short         nl_realport;
> } natlookup_t;
> 

Is there another one defined somewhere else? For some reason your Squid is
managing to build with just "nl_inip" (no 'addr') in the field name.

Amos



Re: [squid-users] IPv6 interception crash: Ip::Address::getInAddr : Cannot convert non-IPv4 to IPv4.

2016-10-04 Thread Egerváry Gergely

> Apparently the IPFilter 5.1 code defines a 32-bit IPv4-only structure
> for 128-bit IPv6 addresses to be placed into. That was supposed to be
> fixed in IPFilter 5.0.3.
>
> Can you look through your system for code header files that define
> "struct natlookup" and show me what they contain?


in sys/external/bsd/ipf/netinet/ip_nat.h:

typedef struct natlookup {
        i6addr_t        nl_inipaddr;
        i6addr_t        nl_outipaddr;
        i6addr_t        nl_realipaddr;
        int             nl_v;
        int             nl_flags;
        u_short         nl_inport;
        u_short         nl_outport;
        u_short         nl_realport;
} natlookup_t;
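Since the old v4-only names are plain compat #defines rather than real
struct members in this layout, the macro's presence can in principle
identify the layout at compile time. A minimal sketch, assuming the
relevant IPFilter header (ip_nat.h) has already been included:

/* Sketch: tell the two natlookup layouts apart at build time.
 * In IPFilter >= 5.x the old v4-only names are compat #defines
 * (per the header excerpt above), so #ifdef works as a feature test. */
#ifdef nl_inip
    /* i6addr_t layout: nl_inipaddr.in4 and nl_inipaddr.in6 both exist */
#else
    /* legacy layout: nl_inip is a plain struct member, IPv4 only */
#endif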


--
Gergely EGERVARY



Re: [squid-users] intercept + IPv6 + IPFilter 5.1

2016-10-04 Thread Stephen Borrill
On 01/10/2016 23:48, Egerváry Gergely wrote:
> Hi,
> 
> Should "intercept" work with IPv6 on NetBSD 7-STABLE and IPFilter 5.1?
> 
> I have the patch applied for kern/50198, and it's working fine with
> IPv4. I only get a connection reset by peer on IPv6.

I found the IPv4 bug, and that PR and patch were done by my work
colleague. Unfortunately we've not done any IPv6 testing.

As well as finding the kernel side of the bug, we found and fixed a
squid-side bug which was related to IPv4 vs IPv6, so this is probably a
good place to start looking:

http://bugs.squid-cache.org/show_bug.cgi?id=4302

-- 
Stephen



Re: [squid-users] IPv6 interception crash: Ip::Address::getInAddr : Cannot convert non-IPv4 to IPv4.

2016-10-04 Thread Amos Jeffries
On 4/10/2016 7:25 p.m., Egerváry Gergely wrote:
>>> 2016/10/03 17:08:03.233 kid1| Ip::Address::getInAddr : Cannot convert
>>> non-IPv4 to IPv4. IPA=[2001:738:7a00:a::14]:3128

Okay, your setup looks fine.

Apparently the IPFilter 5.1 code defines a 32-bit IPv4-only structure
for 128-bit IPv6 addresses to be placed into. That was supposed to be
fixed in IPFilter 5.0.3.

Can you look through your system for code header files that define
"struct natlookup" and show me what they contain?

Amos



Re: [squid-users] IPv6 interception crash: Ip::Address::getInAddr : Cannot convert non-IPv4 to IPv4.

2016-10-04 Thread Egerváry Gergely

>> 2016/10/03 17:08:03.233 kid1| Ip::Address::getInAddr : Cannot convert
>> non-IPv4 to IPv4. IPA=[2001:738:7a00:a::14]:3128

> And what are your squid.conf http_port line(s)?


http_port 127.0.0.1:8080
http_port [::1]:8080
http_port 172.28.0.20:3128 intercept
http_port 172.28.0.20:8080
http_port [2001:738:7a00:a::14]:3128 intercept
http_port [2001:738:7a00:a::14]:8080


> What does squid log about listening HTTP ports on startup?


2016/10/04 08:25:16 kid1| Accepting HTTP Socket connections at 
local=127.0.0.1:8080 remote=[::] FD 14 flags=9
2016/10/04 08:25:16 kid1| Accepting HTTP Socket connections at 
local=[::1]:8080 remote=[::] FD 15 flags=9
2016/10/04 08:25:16 kid1| Accepting NAT intercepted HTTP Socket 
connections at local=172.28.0.20:3128 remote=[::] FD 16 flags=41
2016/10/04 08:25:16 kid1| Accepting HTTP Socket connections at 
local=172.28.0.20:8080 remote=[::] FD 17 flags=9
2016/10/04 08:25:16 kid1| Accepting NAT intercepted HTTP Socket 
connections at local=[2001:738:7a00:a::14]:3128 remote=[::] FD 18 flags=41
2016/10/04 08:25:16 kid1| Accepting HTTP Socket connections at 
local=[2001:738:7a00:a::14]:8080 remote=[::] FD 19 flags=9


--
Gergely EGERVARY
