Re: [squid-users] large downloads got interrupted

2016-08-12 Thread Amos Jeffries
On 11/08/2016 11:17 p.m., Eugene M. Zheganin wrote:
> Hi.
> 
> On 30.06.16 17:19, Amos Jeffries wrote:
>>
>> Okay, I wasn't suggesting you post it here. It's likely to be too big for
>> that.
>>
>> I would look for the messages about the large object, and its FD. Then,
>> for anything about why it was closed by Squid. Not sure what that would be
>> at this point though.
>> There are some scripts in the Squid sources scripts/ directory that
>> might help wade through the log. Or the grep tool.
>>
>>
> I enabled logLevel 2 for all squid facilities, but so far I haven't
> figured out any pattern from the log. The only thing I noticed is that for
> a large download the Recv-Q value reported by netstat for a particular
> squid-to-server connection is extremely high, as is the Send-Q value for
> the connection from squid to the client. I don't know if it's a cause or a
> consequence, but from my point of view this may indicate that buffers
> are overflowing for some reason, which may in turn cause RSTs and
> connection closing - am I right? I still don't know whether it's a
> squid fault or maybe a local OS misconfiguration.

Er. That indicates the buffers are probably full, but not necessarily overflowing.

If the Send-Q is high, that means the client is not reading what Squid
has sent. If the client stays hung like that for too long (15 min IIRC)
Squid will give up on it and close the connections so it can move on to
handling other, more responsive clients. TCP has a much shorter timeout
than Squid, so it may not be Squid aborting, but the TCP stack. Either
way it's for the same reason - the client is not read(2)'ing the traffic.
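If it helps to watch for this while the download stalls, the queues can be polled and filtered. A minimal sketch, assuming BSD/Linux-style "netstat -an" columns (Proto, Recv-Q, Send-Q, Local Address, Foreign Address, State) and the 3128 http_port used elsewhere in this thread - adjust both for your system:

```shell
#!/bin/sh
# Flag connections on Squid's port whose Send-Q exceeds a byte threshold.
# A Send-Q that stays high suggests the peer has stopped read(2)'ing,
# which is what eventually trips the timeout described above.
flag_stuck() {
  awk -v limit="$1" '
    $1 ~ /^tcp/ && $3 > limit && $4 ~ /[.:]3128$/ {
      print $4, "->", $5, "Send-Q:", $3
    }'
}

# Typical use (run repeatedly while the transfer is hung):
#   netstat -an | flag_stuck 65536
```

The [.:]3128 pattern matches both the BSD dot-separated and the Linux colon-separated local-address formats.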

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] large downloads got interrupted

2016-08-11 Thread Eugene M. Zheganin
Hi.

On 30.06.16 17:19, Amos Jeffries wrote:
>
> Okay, I wasn't suggesting you post it here. It's likely to be too big for
> that.
>
> I would look for the messages about the large object, and its FD. Then,
> for anything about why it was closed by Squid. Not sure what that would be
> at this point though.
> There are some scripts in the Squid sources scripts/ directory that
> might help wade through the log. Or the grep tool.
>
>
I enabled logLevel 2 for all squid facilities, but so far I haven't
figured out any pattern from the log. The only thing I noticed is that for
a large download the Recv-Q value reported by netstat for a particular
squid-to-server connection is extremely high, as is the Send-Q value for
the connection from squid to the client. I don't know if it's a cause or a
consequence, but from my point of view this may indicate that buffers
are overflowing for some reason, which may in turn cause RSTs and
connection closing - am I right? I still don't know whether it's a
squid fault or maybe a local OS misconfiguration.

Eugene.


Re: [squid-users] large downloads got interrupted

2016-06-30 Thread Amos Jeffries
On 30/06/2016 2:24 a.m., Eugene M. Zheganin wrote:
> Hi.
> 
> On 29.06.16 05:26, Amos Jeffries wrote:
>> On 28/06/2016 8:46 p.m., Eugene M. Zheganin wrote:
>>> Hi,
>>>
>>> recently I started to get the problem where large downloads via squid are
>>> often interrupted. I tried to investigate it but, to be honest, got
>>> nowhere. However, I took two tcpdump captures, and it seems to me that
>>> for some reason squid sends FIN to its client and correctly closes the
>>> connection (wget reports that the connection is closed), and at the same
>>> time for some reason it sends tons of RSTs towards the server. No
>>> errors are reported in the logs (at least at an ALL,1 loglevel).
>>>
>> It sounds like a timeout or such has happened inside Squid. We'd need to
>> see your squid.conf to see if that was it.
> Well... it's quite long, since it's at a large production site. I guess you
> don't need the acl and auth lines, so without them it's as follows
> (nothing secret in them, they are just really numerous):

Okay. I was kind of hoping you had set some of the timeouts to an
unusually low value. Since it's all default, I think it's one of the
much more difficult bug-related issues.


> 
> The download I test this issue on is:
> - a large iso file, 4G from Yandex mirror
> - goes via plain http (so no sslBump)
> - client is authenticated using basic authentication
> - you can see delay pools in squid.conf, but this is just a
> definition; no clients are assigned to it
> 
> 
> When the connection is closed the client receives a FIN sequence, and squid
> sends a lot of RSTs towards the target server I'm downloading the file from.
> 
>>
>> What version are you using? There have been a few bugs found that can
>> cause unrelated connections to be closed early like this.
> I noticed this problem on squid 3.5.11, but it's reproducible on 3.5.19
> as well.
> 
>> A screen dump of a packet capture does not usually help. We usually only ask
>> for packet captures when one of the devs needs to personally analyse the
>> full traffic behaviour.
>>
>> A cache.log trace at debug level 11,2 shows all the HTTP messages going
>> through in an easier format to read. There might be hints in there, but
>> if it is a timeout like I suspect probably not.
> Well... do you need it already? I should say that it will be way huge.
> Maybe there's a way to grep only the interesting parts?
> 

Okay, I wasn't suggesting you post it here. It's likely to be too big for
that.

I would look for the messages about the large object, and its FD. Then,
for anything about why it was closed by Squid. Not sure what that would be
at this point though.
There are some scripts in the Squid sources scripts/ directory that
might help wade through the log. Or the grep tool.
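For pulling only the interesting parts out of a big cache.log, a plain grep per file descriptor goes a long way. A rough sketch - the "FD <n>" form is how cache.log lines usually tag descriptors, but the exact wording is an assumption, so adjust the pattern to what your log actually shows:

```shell
#!/bin/sh
# Print every cache.log line mentioning a specific file descriptor,
# without also matching longer FD numbers that share a prefix
# (e.g. asking for FD 12 must not match FD 123).
fd_trace() {
  grep -E "FD $1([^0-9]|\$)" "$2"
}

# Typical use, after spotting the large object's FD:
#   fd_trace 123 /var/log/squid/cache.log
```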

Amos



Re: [squid-users] large downloads got interrupted

2016-06-29 Thread Eugene M. Zheganin
Hi.

On 29.06.16 05:26, Amos Jeffries wrote:
> On 28/06/2016 8:46 p.m., Eugene M. Zheganin wrote:
>> Hi,
>>
>> recently I started to get the problem where large downloads via squid are
>> often interrupted. I tried to investigate it but, to be honest, got
>> nowhere. However, I took two tcpdump captures, and it seems to me that
>> for some reason squid sends FIN to its client and correctly closes the
>> connection (wget reports that the connection is closed), and at the same
>> time for some reason it sends tons of RSTs towards the server. No
>> errors are reported in the logs (at least at an ALL,1 loglevel).
>>
> It sounds like a timeout or such has happened inside Squid. We'd need to
> see your squid.conf to see if that was it.
Well... it's quite long, since it's at a large production site. I guess you
don't need the acl and auth lines, so without them it's as follows
(nothing secret in them, they are just really numerous):

===Cut===
# cat /usr/local/etc/squid/squid.conf | grep -v http_access | grep -v
acl | grep -v http_reply_access | egrep -v '^#' | egrep -v '^$'
visible_hostname proxy1.domain1.com
debug_options ALL,1
http_port [fd00::301]:3128 ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
http_port [fd00::316]:3128 ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
http_port 192.168.3.1:3128 ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
http_port 127.0.0.1:3128 ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
http_port 127.0.0.1:3129 intercept
http_port [::1]:3128
http_port [::1]:3129 intercept
https_port 127.0.0.1:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
https_port [::1]:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
icp_port 3130
dns_v4_first off
shutdown_lifetime 5 seconds
workers 2
no_cache deny QUERY
cache_mem 256 MB
cache_dir rock /var/squid/cache 1100
cache_access_log stdio:/var/log/squid/access.fifo
cache_log /var/log/squid/cache.log
cache_store_log none
cache_peer localhost parent 8118 0 no-query default
auth_param negotiate
program /usr/local/libexec/squid/negotiate_wrapper_auth --ntlm
/usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --kerberos
/usr/local
authenticate_ip_ttl 60 seconds
positive_dns_ttl 20 minutes
negative_dns_ttl 120 seconds
negative_ttl 30 seconds
pid_filename /var/run/squid/squid.pid
ftp_user anonymous
ftp_passive on
ipcache_size 16384
fqdncache_size 16384
redirect_children 10
refresh_pattern -i . 0 20% 4320
sslcrtd_program /usr/local/libexec/squid/ssl_crtd -s /var/squid/ssl -M 4MB
sslcrtd_children 15
auth_param negotiate program
/usr/local/libexec/squid/negotiate_wrapper_auth --ntlm
/usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --kerberos
/usr/local/libexec/squid/negotiate_kerberos_auth -s
HTTP/proxy1.domain1@domain.com
auth_param negotiate children 40 startup=5 idle=5
auth_param negotiate keep_alive on
auth_param ntlm program /usr/local/bin/ntlm_auth -d 0
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 60
auth_param basic program /usr/local/libexec/squid/basic_pam_auth
auth_param basic children 35 startup=5 idle=2
auth_param basic realm Squid
auth_param basic credentialsttl 10 minute
auth_param basic casesensitive off
authenticate_ttl 10 minute
authenticate_cache_garbage_interval 10 minute
snmp_access allow fromintranet
snmp_access allow localhost
snmp_access deny all
snmp_port 340${process_number}
snmp_incoming_address 192.168.3.22
tcp_outgoing_address 192.168.3.22 intranet
tcp_outgoing_address fd00::316 intranet6
tcp_outgoing_address 86.109.196.3 ad-megafon
redirector_access deny localhost
redirector_access deny SSL_ports
icp_access allow children
icp_access deny all
always_direct deny fuck-the-system-dstdomain
always_direct deny fuck-the-system
always_direct deny onion
always_direct allow all
never_direct allow fuck-the-system-dstdomain
never_direct allow fuck-the-system
never_direct allow onion
never_direct deny all
miss_access allow manager
miss_access allow all
cache_mgr e...@domain1.com
cache_effective_user squid
cache_effective_group squid
sslproxy_cafile /usr/local/etc/squid/certs/ca.pem
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER
deny_info ERR_NO_BANNER banner
deny_info ERR_UNAUTHORIZED unauthorized
deny_info ERR_OVERQUOTA overquotasall
deny_info ERR_ENTERTAINMENT 

Re: [squid-users] large downloads got interrupted

2016-06-28 Thread Amos Jeffries
On 28/06/2016 8:46 p.m., Eugene M. Zheganin wrote:
> Hi,
> 
> recently I started to get the problem where large downloads via squid are
> often interrupted. I tried to investigate it but, to be honest, got
> nowhere. However, I took two tcpdump captures, and it seems to me that
> for some reason squid sends FIN to its client and correctly closes the
> connection (wget reports that the connection is closed), and at the same
> time for some reason it sends tons of RSTs towards the server. No
> errors are reported in the logs (at least at an ALL,1 loglevel).
> 

It sounds like a timeout or such has happened inside Squid. We'd need to
see your squid.conf to see if that was it.

What version are you using? There have been a few bugs found that can
cause unrelated connections to be closed early like this.

> Screenshots of wireshark interpreting the tcpdump capture are here:
> 

?? The URL seems not to have made it to the mailing list.


> Squid(2a00:7540:1::4) to target server(2a02:6b8::183):
> 
> http://static.enaza.ru/userupload/gyazo/e5b976bf6f3d0cb666f0d504de04.png
> (here you can see that all of a sudden squid starts sending RSTs, which
> continue a long way down the screen, then the connection re-establishes
> (not in the screenshot taken))

A screen dump of a packet capture does not usually help. We usually only ask
for packet captures when one of the devs needs to personally analyse the
full traffic behaviour.

A cache.log trace at debug level 11,2 shows all the HTTP messages going
through in an easier format to read. There might be hints in there, but
if it is a timeout like I suspect probably not.
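One way to get that trace without drowning the rest of the log is to raise only that debug section. A squid.conf fragment along these lines - ALL,1 stays as the baseline already posted in this thread, and 11 is (as far as I recall) Squid's HTTP protocol debug section:

```
# Keep everything else at level 1; raise only the HTTP message
# section (11) to level 2 so request/response traffic is logged.
debug_options ALL,1 11,2
```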

> 
> Squid(fd00::301) to client(fd00::73d):
> 
> http://static.enaza.ru/userupload/gyazo/ccf4982593dc6047edb5d734160e.png  
> (here
> you can see the client connection got closed)

So Squid is closing both connections from the middle. That is pointing
strongly at a timeout, bug, or error in the data transfer.

Amos



[squid-users] large downloads got interrupted

2016-06-28 Thread Eugene M. Zheganin
Hi,

recently I started to get the problem where large downloads via squid are
often interrupted. I tried to investigate it but, to be honest, got
nowhere. However, I took two tcpdump captures, and it seems to me that
for some reason squid sends FIN to its client and correctly closes the
connection (wget reports that the connection is closed), and at the same
time for some reason it sends tons of RSTs towards the server. No
errors are reported in the logs (at least at an ALL,1 loglevel).

Screenshots of wireshark interpreting the tcpdump capture are here:

Squid(2a00:7540:1::4) to target server(2a02:6b8::183):

http://static.enaza.ru/userupload/gyazo/e5b976bf6f3d0cb666f0d504de04.png
(here you can see that all of a sudden squid starts sending RSTs, which
continue a long way down the screen, then the connection re-establishes
(not in the screenshot taken))

Squid(fd00::301) to client(fd00::73d):

http://static.enaza.ru/userupload/gyazo/ccf4982593dc6047edb5d734160e.png  (here
you can see the client connection got closed)
I'm open to any idea that will help me to get rid of this issue.

Thanks.
Eugene.