Re: [squid-users] TCP out of memory

2018-01-29 Thread Vieri
Hi,

I reproduced the problem, and saw that the c-icap server (or its squidclamav 
module) reports a 500 internal server error when clamd is down. I guess that's 
not bypassable?


The c-icap server log reports:

Mon Jan 29 08:30:35 2018, 5134/1290311424, squidclamav.c(1934) dconnect: Mon 
Jan 29 08:30:35 2018, 5134/1290311424, entering.
Mon Jan 29 08:30:35 2018, 5134/1290311424, squidclamav.c(2015) connectINET: Mon 
Jan 29 08:30:35 2018, 5134/1290311424, ERROR Can't connect on 127.0.0.1:3310.
Mon Jan 29 08:30:35 2018, 5134/1290311424, squidclamav.c(2015) connectINET: Mon 
Jan 29 08:30:35 2018, 5134/1290311424, ERROR Can't connect on 127.0.0.1:3310.
Mon Jan 29 08:30:35 2018, 5134/1290311424, squidclamav.c(744) 
squidclamav_end_of_data_handler: Mon Jan 29 08:30:35 2018, 5134/1290311424, 
ERROR Can't connect to Clamd daemon.
Mon Jan 29 08:30:35 2018, 5134/1290311424, An error occured in end-of-data 
handler !return code : -1, req->allow204=1, req->allow206=0


Here's Squid's log:

https://drive.google.com/file/d/18HmM8pOuDQmE4W_vwmSncXEeJSvgDjDo/view?usp=sharing

I was hoping I could relate this to the original topic, but I'm afraid they are 
two different issues.


Thanks,

Vieri


Re: [squid-users] TCP out of memory

2018-01-27 Thread Alex Rousskov
On 01/27/2018 10:47 AM, Yuri wrote:

> He has just disabled the ICAP-based service without disabling ICAP itself,
> so yes, this is expected.

The above logic is flawed: Vieri told Squid to bypass (bypassable) ICAP
errors, but Squid did not bypass an ICAP error. Whether that outcome is
expected depends on whether that specific ICAP error was bypassable.

Yes, I understand that c-icap did not successfully process the message
after clamd went down, but that fact is not important here. What is
important here is how c-icap relayed that problem to Squid. That part I
do not know, so I cannot say whether Squid just could not bypass this
particular ICAP problem (i.e., Squid behavior is expected) or there is a
bug in Squid's bypass code.


> bypass=1 permits Squid to bypass
> adaptation when the ICAP service is overloaded.

Yes, but service overload is _not_ the only problem that bypass=1 can
bypass. The documentation for that option describes the option scope:

> If set to 'on' or '1', the ICAP service is treated as
> optional. If the service cannot be reached or malfunctions,
> Squid will try to ignore any errors and process the message as
> if the service was not enabled. Not all ICAP errors can be
> bypassed.  If set to 0, the ICAP service is treated as
> essential and all ICAP errors will result in an error page
> returned to the HTTP client.
> 
> Bypass is off by default: services are treated as essential.
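
For example, a service declared optional would look something like this in
squid.conf (the service name and URI below are placeholders, not necessarily
Vieri's exact setup):

icap_service clamav_resp respmod_precache bypass=1 icap://127.0.0.1:1344/clamav
adaptation_access clamav_resp allow all

Even with that in place, only the bypassable errors are actually skipped.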

Alex.


Re: [squid-users] TCP out of memory

2018-01-27 Thread Yuri

He has just disabled the ICAP-based service without disabling ICAP itself,
so yes, this is expected.

Vieri, bypass=1 is a different thing. It permits Squid to bypass
adaptation when the ICAP service is overloaded, which is not what you did
here.

27.01.2018 23:41, Alex Rousskov wrote:
> On 01/27/2018 10:33 AM, Vieri wrote:
>
>> I noticed that if I set bypass=1 in squid.conf (regarding ICAP), and
>> if I stop the local clamd service (not the c-icap service), then the
>> clients see Squid's ERR_ICAP_FAILURE page. Is this expected?
>
> Difficult to say for sure without knowing what went wrong between Squid
> and c-icap: Not all ICAP errors can be bypassed. Consider sharing a
> (link to) compressed ALL,9 cache.log collected while reproducing the
> problem using a single HTTP transaction through an otherwise idle Squid.
>
> Alex.



Re: [squid-users] TCP out of memory

2018-01-27 Thread Alex Rousskov
On 01/27/2018 10:33 AM, Vieri wrote:

> I noticed that if I set bypass=1 in squid.conf (regarding ICAP), and
> if I stop the local clamd service (not the c-icap service), then the
> clients see Squid's ERR_ICAP_FAILURE page. Is this expected?

Difficult to say for sure without knowing what went wrong between Squid
and c-icap: Not all ICAP errors can be bypassed. Consider sharing a
(link to) compressed ALL,9 cache.log collected while reproducing the
problem using a single HTTP transaction through an otherwise idle Squid.

Alex.


Re: [squid-users] TCP out of memory

2018-01-27 Thread Vieri
Hi,

I just wanted to add some information to this topic, although I'm not sure if 
it's related.


I noticed that if I set bypass=1 in squid.conf (regarding ICAP), and if I stop 
the local clamd service (not the c-icap service), then the clients see Squid's 
ERR_ICAP_FAILURE page.
Is this expected?

Vieri


Re: [squid-users] TCP out of memory

2018-01-18 Thread Vieri

From: Amos Jeffries
>
> Sorry I have a bit of a distraction going on ATM so have not got to that
> detailed check yet. Good to hear you found a slightly better situation
> though.
[...]
> In normal network conditions it should rise and fall with your peak vs
> off-peak traffic times. I expect with your particular trouble it will
> mostly just go upwards.


No worries. I'd like to confirm that I'm still seeing the same issue with 
c-icap-modules, even though it's slightly better in that the FD numbers grow 
slower, at least at first.
I must say that it seems to be growing faster now. I had 4k two days ago, now I 
have:
Largest file desc currently in use:   6664
Number of file desc currently in use: 6270
So it seems that the more days go by, the faster the FD numbers rise.
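
I'm thinking of tracking this over time with something simple like the
following loop (a rough sketch, untested as a long-running job; the grep is
just meant to catch the two counters above):

# while true; do date; squidclient mgr:info | grep 'desc currently in use'; sleep 3600; done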

Vieri


Re: [squid-users] TCP out of memory

2018-01-16 Thread Amos Jeffries

On 17/01/18 02:37, Vieri wrote:

Hi,

Just a quick follow-up on this.

I dropped squidclamav so I could test c-icap-modules's clamd service instead.
The only difference between the two is that squidclamav was using unix sockets 
while c-icap-modules is using clamd.

At first, the results were good. The open fd numbers were fluctuating, but 
within the 1k-2k limit during the first days. However, today I'm getting 4k, 
and it's only day 5. I suspect I'll be getting 10k+ numbers within another week 
or two. That's when I'll have to restart squid if I don't want the network to
slow to a crawl.

I'm posting info and filedescriptors here:

https://drive.google.com/file/d/1V7Horvvak62U-HjSh5pVEBvVnZhu-iQY/view?usp=sharing

https://drive.google.com/file/d/1P1DAX-dOfW0fzt1sAeyT35brQyoPVodX/view?usp=sharing



Sorry I have a bit of a distraction going on ATM so have not got to that 
detailed check yet. Good to hear you found a slightly better situation 
though.




By the way, what does "Largest file desc currently in use" mean exactly? Should 
this value also drop (eventually) under sane conditions?


The OS assigns FD numbers and prefers to assign with a strong bias 
towards the lowest values. So that can be seen as a fluctuating "water 
level" of approximately how many FD are currently in use. If there are a 
few very long-lived connections and many short ones it may be variably 
incorrect - but is good enough for a rough guide of FD usage.
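
If you want a rough cross-check from the OS side, counting the entries under
/proc should give the real number of open descriptors for the worker process.
A sketch, assuming a single worker and that pidof lists the kid process first:

ls /proc/$(pidof squid | awk '{print $1}')/fd | wc -l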


In normal network conditions it should rise and fall with your peak vs 
off-peak traffic times. I expect with your particular trouble it will 
mostly just go upwards.


Amos


Re: [squid-users] TCP out of memory

2018-01-16 Thread Vieri
Hi,

Just a quick follow-up on this.

I dropped squidclamav so I could test c-icap-modules's clamd service instead.
The only difference between the two is that squidclamav was using unix sockets 
while c-icap-modules is using clamd.

At first, the results were good. The open fd numbers were fluctuating, but 
within the 1k-2k limit during the first days. However, today I'm getting 4k, 
and it's only day 5. I suspect I'll be getting 10k+ numbers within another week 
or two. That's when I'll have to restart squid if I don't want the network to
slow to a crawl.

I'm posting info and filedescriptors here:

https://drive.google.com/file/d/1V7Horvvak62U-HjSh5pVEBvVnZhu-iQY/view?usp=sharing

https://drive.google.com/file/d/1P1DAX-dOfW0fzt1sAeyT35brQyoPVodX/view?usp=sharing

By the way, what does "Largest file desc currently in use" mean exactly? Should 
this value also drop (eventually) under sane conditions?

So I guess moving from squidclamav to c-icap-modules did improve things, but
something is still wrong. I could try moving back to squidclamav in
"clamd mode" instead of unix sockets just to see if I get the same partial 
improvement as the one I've witnessed this week.

Vieri


Re: [squid-users] TCP out of memory

2018-01-11 Thread Vieri
Hi,

I don't know how to cleanly separate the 93,* from the 11,* log lines. I posted
the following:


https://drive.google.com/file/d/1PRJOc6czrA0QEDHkqn3MrmNh08K8JajR/view?usp=sharing

It contains a cache.log generated by:
debug_options rotate=1 ALL,0 93,6 11,6
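
(A crude way to pull out the ICAP-side lines afterwards might be something
like "grep -i icap cache.log", but I'm not sure that catches everything from
section 93, so I'm posting the whole file.)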

I also ran :info and :filedescriptors when I applied the new debug_options 
(*1), and again when I reverted back the debug_options (*2).

I'm using c-icap with squidclamav. I'll try to use c-icap-modules instead ASAP
so I can hopefully eliminate a few variables (if the issue persists, then it
must be a flaw in the c-icap service itself).

Thanks,

Vieri


Re: [squid-users] TCP out of memory

2018-01-09 Thread Vieri


From: Amos Jeffries 
>
> I have only taken a brief look, but so far it looks like the problematic
> sockets are not participating in any ICAP activity.

Do you see that from the cache.log, or from ":filedescriptors"?
If I list my current filedescriptors right now, I get this:

# squidclient mgr:filedescriptors | grep "127.0.0.1:1344"
20 Socket  899  25*   10001  127.0.0.1:1344127.0.0.1
30 Socket0   7164872210  127.0.0.1:1344127.0.0.1
38 Socket  900   0*2564  127.0.0.1:1344127.0.0.1
102 Socket0   6722267689  127.0.0.1:1344127.0.0.1
107 Socket0  102679   203677  127.0.0.1:1344127.0.0.1
113 Socket0   6722267709  127.0.0.1:1344127.0.0.1
115 Socket  886   0*2588  127.0.0.1:1344127.0.0.1
116 Socket  873  25*   10395  127.0.0.1:1344127.0.0.1
129 Socket0  114892   144095  127.0.0.1:1344127.0.0.1
134 Socket  900  25*8863  127.0.0.1:1344127.0.0.1
160 Socket0   6722267687  127.0.0.1:1344127.0.0.1
165 Socket0   7783378401  127.0.0.1:1344127.0.0.1
166 Socket0   6722267702  127.0.0.1:1344127.0.0.1
175 Socket0   6722267698  127.0.0.1:1344127.0.0.1
176 Socket0   6722267698  127.0.0.1:1344127.0.0.1
212 Socket0   6722267742  127.0.0.1:1344127.0.0.1
213 Socket  878   0*2533  127.0.0.1:1344127.0.0.1
226 Socket  873   0*2531  127.0.0.1:1344127.0.0.1
236 Socket0   78332   180786  127.0.0.1:1344127.0.0.1
244 Socket0   6722267698  127.0.0.1:1344127.0.0.1
281 Socket0   6722267685  127.0.0.1:1344127.0.0.1
285 Socket0   78253   149568  127.0.0.1:1344127.0.0.1
298 Socket0   7783378451  127.0.0.1:1344127.0.0.1
305 Socket0   74366   168309  127.0.0.1:1344127.0.0.1
307 Socket0  114519   115068  127.0.0.1:1344127.0.0.1
326 Socket0   6722267698  127.0.0.1:1344127.0.0.1
327 Socket0   6722267687  127.0.0.1:1344127.0.0.1
365 Socket0   70248   114918  127.0.0.1:1344127.0.0.1
372 Socket0   6722267698  127.0.0.1:1344127.0.0.1
390 Socket0   7783378483  127.0.0.1:1344127.0.0.1
404 Socket0   9002290703  127.0.0.1:1344127.0.0.1
464 Socket0   78253   144095  127.0.0.1:1344127.0.0.1
472 Socket0   6722267698  127.0.0.1:1344127.0.0.1
480 Socket  891   0*2514  127.0.0.1:1344127.0.0.1
491 Socket0   6722267685  127.0.0.1:1344127.0.0.1
509 Socket0   6722267687  127.0.0.1:1344127.0.0.1
512 Socket0   6722267703  127.0.0.1:1344127.0.0.1
528 Socket0  131176   155548  127.0.0.1:1344127.0.0.1
536 Socket0   70111   134058  127.0.0.1:1344127.0.0.1
547 Socket0   6722267689  127.0.0.1:1344127.0.0.1
554 Socket0  131860   152673  127.0.0.1:1344127.0.0.1
570 Socket0   6722267707  127.0.0.1:1344127.0.0.1
572 Socket  893   0*2706  127.0.0.1:1344127.0.0.1
596 Socket0   78390   114864  127.0.0.1:1344127.0.0.1
602 Socket0   6722267691  127.0.0.1:1344127.0.0.1
624 Socket0   7267873442  127.0.0.1:1344127.0.0.1
631 Socket0   7164672250  127.0.0.1:1344127.0.0.1
635 Socket0  104333   104896  127.0.0.1:1344127.0.0.1
641 Socket0   6722267687  127.0.0.1:1344127.0.0.1
646 Socket0   6722267698  127.0.0.1:1344127.0.0.1
662 Socket0   6722267698  127.0.0.1:1344127.0.0.1
674 Socket0   6722267691  127.0.0.1:1344127.0.0.1
678 Socket0   6722267687  127.0.0.1:1344127.0.0.1
687 Socket0   6722267702  127.0.0.1:1344127.0.0.1
730 Socket0   6722267691  127.0.0.1:1344127.0.0.1
767 Socket0   74465   152811  127.0.0.1:1344127.0.0.1
772 Socket0   6721767747  127.0.0.1:1344127.0.0.1
815 Socket0   7786478246  127.0.0.1:1344127.0.0.1
848 Socket0   6722267743  127.0.0.1:1344127.0.0.1
865 Socket0   6722267747  127.0.0.1:1344127.0.0.1
890 Socket0   6722267699  127.0.0.1:1344127.0.0.1
943 Socket0   7783378501  127.0.0.1:1344127.0.0.1
1008 Socket0   7421278383  127.0.0.1:1344127.0.0.1
1018 Socket0   7446690630  127.0.0.1:1344127.0.0.1
1099 Socket0   6722267687  127.0.0.1:1344127.0.0.1
1124 Socket0   6722267683  127.0.0.1:1344127.0.0.1
1167 Socket0   6722267687  127.0.0.1:1344127.0.0.1
1273 Socket0   6725867879  127.0.0.1:1344127.0.0.1
1337 Socket0   7424378265  127.0.0.1:1344127.0.0.1


Both Nread and Nwrite seem to be well over 

Re: [squid-users] TCP out of memory

2018-01-08 Thread Amos Jeffries

On 08/01/18 11:13, Vieri wrote:



From: Amos Jeffries 


The open sockets to 127.0.0.1:1344 keep increasing steadily even on high 
network usage, but they do not decrease when there's
little or no traffic.
So, day after day the overall number keeps growing
until I have to restart squid once or twice a week.

In other words, this value keeps growing:
Largest file desc currently in use:   
This other value can decrease at times, but in the long run it keeps growing 
too:
Number of file desc currently in use: 


Ah. What does the cachemgr "filedescriptors" report show when there are
a lot starting to accumulate?

And, are you able to get a cache.log trace with "debug_options 93,6" ?



Here's my cache.log:

https://drive.google.com/file/d/1I8R5sCsIGhYa69QmGrOoHVITuom4uW0k/view?usp=sharing

squidclient's filedescriptors:

https://drive.google.com/file/d/1o6zn-o0atqeqFGSMRhPA9r1AAFJpnpBZ/view?usp=sharing

The info page:

https://drive.google.com/file/d/11iWqjgdt2KK1yWPMsr5o-IyWGyKS7joc/view?usp=sharing

The open fds are at around 7k, but they can easily reach 12k or 13k. That's 
when I start running into trouble.



Thank you.

I have only taken a brief look, but so far it looks like the problematic 
sockets are not participating in any ICAP activity. That implies they 
are possibly TCP connections which never complete their opening 
sequence, or at least the result of connection attempts does not make it 
back to the ICAP code somehow.
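
One way to test that theory from the OS side would be to look at the TCP
states of those sockets as they accumulate, for example:

ss -tan '( sport = :1344 or dport = :1344 )'

A pile of SYN-SENT or CLOSE-WAIT entries there would point at connections
that never finished opening or never finished closing, rather than active
ICAP traffic. (Filter syntax from memory; "netstat -tan | grep 1344" is a
cruder fallback.)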


Amos


Re: [squid-users] TCP out of memory

2018-01-07 Thread Vieri


From: Amos Jeffries 

>> The open sockets to 127.0.0.1:1344 keep increasing steadily even on high 
>> network usage, but they do not decrease when there's
>> little or no traffic.
>> So, day after day the overall number keeps growing
>> until I have to restart squid once or twice a week.
>> 
>> In other words, this value keeps growing:
>> Largest file desc currently in use:   
>> This other value can decrease at times, but in the long run it keeps growing 
>> too:
>> Number of file desc currently in use: 
>> 
> Ah. What does the cachemgr "filedescriptors" report show when there are 
> a lot starting to accumulate?
>
> And, are you able to get a cache.log trace with "debug_options 93,6" ?


Here's my cache.log:

https://drive.google.com/file/d/1I8R5sCsIGhYa69QmGrOoHVITuom4uW0k/view?usp=sharing

squidclient's filedescriptors:

https://drive.google.com/file/d/1o6zn-o0atqeqFGSMRhPA9r1AAFJpnpBZ/view?usp=sharing

The info page:

https://drive.google.com/file/d/11iWqjgdt2KK1yWPMsr5o-IyWGyKS7joc/view?usp=sharing

The open fds are at around 7k, but they can easily reach 12k or 13k. That's 
when I start running into trouble.

Vieri


Re: [squid-users] TCP out of memory

2018-01-05 Thread Amos Jeffries

On 05/01/18 21:59, Vieri wrote:

The open sockets to 127.0.0.1:1344 keep increasing steadily even on high 
network usage, but they do not decrease when there's little or no traffic.
So, day after day the overall number keeps growing until I have to restart 
squid once or twice a week.

In other words, this value keeps growing:
Largest file desc currently in use:   
This other value can decrease at times, but in the long run it keeps growing 
too:
Number of file desc currently in use: 



Ah. What does the cachemgr "filedescriptors" report show when there are 
a lot starting to accumulate?


And, are you able to get a cache.log trace with "debug_options 93,6" ?
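
Something along these lines in squid.conf should do it; adjust the rotate
count to taste:

debug_options rotate=1 ALL,1 93,6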

Amos


Re: [squid-users] TCP out of memory

2018-01-05 Thread Vieri
The open sockets to 127.0.0.1:1344 keep increasing steadily even on high 
network usage, but they do not decrease when there's little or no traffic.
So, day after day the overall number keeps growing until I have to restart 
squid once or twice a week.

In other words, this value keeps growing:
Largest file desc currently in use:   
This other value can decrease at times, but in the long run it keeps growing 
too: 
Number of file desc currently in use: 

I tried changing squid parameters such as:

icap_io_timeout time-units
icap_service_failure_limit
icap_persistent_connections on

I also tried changing c-icap parameters such as:

Timeout
MaxKeepAliveRequests
KeepAliveTimeout
StartServers
MaxServers
MinSpareThreads
MaxSpareThreads
ThreadsPerChild
MaxRequestsPerChild

However, I'm still seeing the same behavior so I reverted back to defaults.
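
For reference, the kind of values I experimented with looked roughly like
this (illustrative only, not my exact settings):

squid.conf:
icap_io_timeout 5 minutes
icap_service_failure_limit 10
icap_persistent_connections on

c-icap.conf:
Timeout 300
MaxKeepAliveRequests 100
KeepAliveTimeout 600
StartServers 3
MaxServers 10
MinSpareThreads 10
MaxSpareThreads 20
ThreadsPerChild 10
MaxRequestsPerChild 0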

This is a small sample of an icap_log taken from Squid:

1515056868.505  9 ::::::: ICAP_OPT/200 353 
OPTIONS icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.506 10 10.215.246.143 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.506 10 10.215.246.143 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.511   1624 10.215.246.143 ICAP_MOD/200 306783 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.547   1450 10.215.246.143 ICAP_ECHO/204 130 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.560 64 10.215.246.143 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.560 64 10.215.246.143 ICAP_ECHO/204 130 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.561  0 10.215.248.99 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.561 65 10.215.246.143 ICAP_ECHO/204 130 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.615  5 10.215.246.143 ICAP_ECHO/204 130 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.660  3 10.215.246.143 ICAP_ECHO/204 130 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.669  3 10.215.246.143 ICAP_ECHO/204 130 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.681  6 10.215.246.143 ICAP_ECHO/204 130 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.701  5 10.215.248.31 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.712 18 10.215.246.143 ICAP_ECHO/204 130 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.720  0 10.215.246.143 ICAP_MOD/200 489 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.739 29 10.215.246.143 ICAP_ECHO/204 130 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.740  0 10.215.246.143 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.750  0 10.215.246.143 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.774  0 10.215.246.143 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.848  0 10.215.246.143 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.865  0 10.215.246.143 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.866  6 10.215.246.143 ICAP_ECHO/204 130 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.879  0 10.215.246.143 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.891  0 10.215.248.31 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.893  2 10.215.246.143 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056868.906  2 10.215.246.143 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056869.132  0 10.215.246.218 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056869.149  0 10.215.246.143 ICAP_ECHO/204 130 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056869.316  6 10.215.246.143 ICAP_ECHO/204 130 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056869.322  3 10.215.246.143 ICAP_ECHO/204 130 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056869.329  1 10.215.246.143 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056869.338  1 10.215.246.143 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056869.346  1 10.215.246.143 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056869.354  1 10.215.246.143 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056869.408  1 10.215.246.143 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056869.421  2 10.215.246.143 ICAP_ECHO/204 130 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056869.431  1 

Re: [squid-users] TCP out of memory

2018-01-04 Thread Amos Jeffries

On 04/01/18 21:51, Vieri wrote:

Hi again,

I haven't taken a look at Squid's source code, but I guess that when Squid 
communicates with a c-icap service it acts as a typical socket client, right?
eg. connect(), write(), read(), close()


Uh, those are system calls for handling TCP connections and data I/O.

Squid uses ICAP protocol to communicate with ICAP services. ICAP 
operates as a layer over TCP. So in a way yes, and in a way no.





Does Squid consider forcing disconnection (close()) if the read() is "too long"?


If you mean "too long" in terms of memory size - there is no such thing 
in TCP. Squid provides a buffer and tells the kernel how much space is 
available there. The OS writes up to that much, no more, maybe less.


If your title "TCP out of memory" refers to an error you are seeing somewhere,
that is entirely an issue of your system kernel and networking stack. Squid
has nothing to do with the internal memory management of TCP traffic.




Is there such a timeout? Is it configurable in squid.conf (only for the c-icap 
connection)?



What timeout do you speak of?

If you mean "too long" earlier in terms of duration to perform a read()
- there is also no such thing. The OS tells Squid when data is ready, and
the data copy from OS memory to Squid's buffer takes a trivial amount of
time to actually happen.



Timeouts can happen between different parts of the ICAP protocol waiting 
for I/O to happen for related message parts. But those have nothing to 
do with TCP memory unless you are suffering a bad case of buffer bloat 
in the network itself.


Amos


[squid-users] TCP out of memory

2018-01-04 Thread Vieri
Hi again,

I haven't taken a look at Squid's source code, but I guess that when Squid 
communicates with a c-icap service it acts as a typical socket client, right?
eg. connect(), write(), read(), close()

Does Squid consider forcing disconnection (close()) if the read() is "too long"?
Is there such a timeout? Is it configurable in squid.conf (only for the c-icap 
connection)?


Thanks,

Vieri


Re: [squid-users] TCP out of memory

2017-12-21 Thread Vieri
BTW, I set icap_service_failure_limit -1 because if I don't, the HTTP clients
get the ERR_ICAP_FAILURE page for *a very long time* if I restart the c-icap 
service.
I have no idea how much they would have to wait because I can't afford users 
seeing this page for more than a minute.
The following test was done in 2 minutes with the default 
icap_service_failure_limit:

# /etc/init.d/squid reload

# /etc/init.d/c-icap restart

# squidclient mgr:info
HTTP/1.1 200 OK
Server: squid
Mime-Version: 1.0
Date: Thu, 21 Dec 2017 08:07:07 GMT
Content-Type: text/plain;charset=utf-8
Expires: Thu, 21 Dec 2017 08:07:07 GMT
Last-Modified: Thu, 21 Dec 2017 08:07:07 GMT
X-Cache: MISS from inf-fw2
X-Cache-Lookup: MISS from inf-fw2:3128
Connection: close

Squid Object Cache: Version 3.5.27-20171101-re69e56c
Build Info:
Service Name: squid
Start Time: Mon, 18 Dec 2017 08:29:33 GMT
Current Time:   Thu, 21 Dec 2017 08:07:07 GMT
Connection information for squid:
Number of clients accessing cache:  569
Number of HTTP requests received:   3381083
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Number of HTCP messages received:   0
Number of HTCP messages sent:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   786.7
Average ICP messages per minute since start:0.0
Select loop called: 93431818 times, 2.760 ms avg
Cache information for squid:
Hits as % of all requests:  5min: 0.9%, 60min: 2.0%
Hits as % of bytes sent:5min: 5.2%, 60min: 9.5%
Memory hits as % of hit requests:   5min: 88.4%, 60min: 60.2%
Disk hits as % of hit requests: 5min: 0.0%, 60min: 0.1%
Storage Swap size:  29156 KB
Storage Swap capacity:  89.0% used, 11.0% free
Storage Mem size:   30648 KB
Storage Mem capacity:   93.5% used,  6.5% free
Mean Object Size:   18.03 KB
Requests given to unlinkd:  49975
Median Service Times (seconds)  5 min60 min:
HTTP Requests (All):   0.05633  0.04047
Cache Misses:  0.10281  0.11465
Cache Hits:0.0  0.0
Near Hits: 0.01745  0.01955
Not-Modified Replies:  0.0  0.0
DNS Lookups:   0.04048  0.03868
ICP Queries:   0.0  0.0
Resource usage for squid:
UP Time:257853.454 seconds
CPU Time:   6005.270 seconds
CPU Usage:  2.33%
CPU Usage, 5 minute avg:6.76%
CPU Usage, 60 minute avg:   2.99%
Maximum Resident Size: 5549424 KB
Page faults with physical i/o: 0
Memory accounted for:
Total accounted:   1004217 KB
memPoolAlloc calls: 980681181
memPoolFree calls:  999432183
File descriptor usage for squid:
Maximum number of file descriptors:   65536
Largest file desc currently in use:   4399
Number of file desc currently in use: 4052
Files queued for open:   0
Available number of file descriptors: 61484
Reserved number of file descriptors:   100
Store Disk files open:   0
Internal Data Structures:
2074 StoreEntries
1911 StoreEntries with MemObjects
1687 Hot Object Cache Items
1617 on-disk objects

Two minutes later, clients are still seeing ERR_ICAP_FAILURE, so I'm setting
icap_service_failure_limit back to -1.
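
An alternative I haven't actually tried is the time-windowed form of that
directive, which should make Squid forget failures after a short interval
instead of keeping the service suspended, e.g.:

icap_service_failure_limit 10 in 30 seconds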

# /etc/init.d/squid reload

# squidclient mgr:info
HTTP/1.1 200 OK
Server: squid
Mime-Version: 1.0
Date: Thu, 21 Dec 2017 08:09:26 GMT
Content-Type: text/plain;charset=utf-8
Expires: Thu, 21 Dec 2017 08:09:26 GMT
Last-Modified: Thu, 21 Dec 2017 08:09:26 GMT
X-Cache: MISS from inf-fw2
X-Cache-Lookup: MISS from inf-fw2:3128
Connection: close

Squid Object Cache: Version 3.5.27-20171101-re69e56c
Build Info:
Service Name: squid
Start Time: Mon, 18 Dec 2017 08:29:33 GMT
Current Time:   Thu, 21 Dec 2017 08:09:26 GMT
Connection information for squid:
Number of clients accessing cache:  569
Number of HTTP requests received:   3382478
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Number of HTCP messages received:   0
Number of HTCP messages sent:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   786.6
Average ICP messages per minute since start:0.0
Select loop called: 93457469 times, 2.761 ms avg
Cache information for squid:
Hits as % of all requests:  5min: 1.2%, 60min: 2.0%
Hits as % of bytes sent:5min: 10.9%, 60min: 9.7%
Memory hits as % of hit requests:   5min: 88.3%, 60min: 60.1%
Disk hits as % of hit requests: 5min: 0.0%, 60min: 0.1%
Storage Swap size:  29156 KB
Storage Swap capacity:  89.0% used, 11.0% free
Storage Mem size:   30648 KB
Storage Mem capacity:   93.5% used,  6.5% free
Mean Object Size:   18.03 KB
Requests given to unlinkd:  49975
Median Service Times (seconds)  5 min60 min:
HTTP Requests (All):   0.05046  0.04047
Cache Misses:  0.11465  0.11465
Cache Hits:0.0  0.0
Near Hits: 0.0  0.01955
Not-Modified Replies:  0.0  0.0
DNS Lookups:   0.04237  

Re: [squid-users] TCP out of memory

2017-12-18 Thread Vieri


From: Amos Jeffries 
>
> What is your ICAP configuration in squid.conf?


icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service squidclamav respmod_precache bypass=0 icap://127.0.0.1:1344/clamav
adaptation_access squidclamav allow all

icap_service_failure_limit -1


Re: [squid-users] TCP out of memory

2017-12-18 Thread Amos Jeffries

On 18/12/17 21:02, Vieri wrote:

Hi,

I need to restart Squid once a week because I see "TCP out of memory" messages 
in syslog.

I see lots of open file descriptors of type "127.0.0.1:1344".

There could be an issue with the c-icap service.

As suggested previously, I dumped a packet trace here:

https://drive.google.com/file/d/1qCkH6YYa7fgeYzm-AoJEpXTDVpzILCQ9/view?usp=sharing

Can anyone please take a look at it? I'm trying to determine whether c-icap is 
closing connections properly.
Maybe the dump's time range is too short to see anything useful?
I also tried looking at the c-icap logs, but unfortunately I don't see anything 
(or I don't know how to interpret them correctly).



What is your ICAP configuration in squid.conf?

Amos


[squid-users] TCP out of memory

2017-12-18 Thread Vieri
Hi,

I need to restart Squid once a week because I see "TCP out of memory" messages 
in syslog.

I see lots of open file descriptors of type "127.0.0.1:1344".

There could be an issue with the c-icap service.

As suggested previously, I dumped a packet trace here:

https://drive.google.com/file/d/1qCkH6YYa7fgeYzm-AoJEpXTDVpzILCQ9/view?usp=sharing

Can anyone please take a look at it? I'm trying to determine whether c-icap is 
closing connections properly.
Maybe the dump's time range is too short to see anything useful?
I also tried looking at the c-icap logs, but unfortunately I don't see anything 
(or I don't know how to interpret them correctly).
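
If anyone does look at the capture: I suppose the thing to check is whether a
FIN or RST ever comes from port 1344. With tshark that would be something
like the following, where icap.pcap is whatever the dump file is named:

tshark -r icap.pcap -Y 'tcp.port==1344 && (tcp.flags.fin==1 || tcp.flags.reset==1)'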


Vieri


[squid-users] TCP: out of memory -- consider tuning tcp_mem

2017-09-18 Thread Vieri
Regarding my previous post, here's some more info.

On a 32 GB RAM system, RAM usage grew up to 20GB when I had those "out of 
memory" messages.

After restarting or even shutting down all 5 Squid instances I still get high
RAM usage, albeit lower than before.

# top

top - 22:42:23 up 19 days, 15:08,  2 users,  load average: 1.30, 1.48, 1.50
Tasks: 342 total,   1 running, 341 sleeping,   0 stopped,   0 zombie
%Cpu0  :  1.5 us,  1.5 sy,  0.0 ni, 95.5 id,  0.0 wa,  0.0 hi,  1.5 si,  0.0 st
%Cpu1  :  0.0 us,  0.0 sy,  0.0 ni, 98.5 id,  0.0 wa,  0.0 hi,  1.5 si,  0.0 st
%Cpu2  :  0.0 us,  0.0 sy,  0.0 ni, 98.5 id,  0.0 wa,  0.0 hi,  1.5 si,  0.0 st
%Cpu3  :  0.0 us,  0.0 sy,  0.0 ni, 97.0 id,  0.0 wa,  0.0 hi,  3.0 si,  0.0 st
%Cpu4  :  0.0 us,  0.0 sy,  0.0 ni, 97.0 id,  0.0 wa,  0.0 hi,  3.0 si,  0.0 st
%Cpu5  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu6  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu7  :  0.0 us,  0.0 sy,  0.0 ni, 95.5 id,  0.0 wa,  0.0 hi,  4.5 si,  0.0 st
KiB Mem : 32865056 total,  3679916 free, 17272208 used, 11912932 buff/cache
KiB Swap: 37036988 total, 35700916 free,  1336072 used. 15100248 avail Mem 

A "ps aux --sort -rss" show c-icap in the lead, so I restarted it.

I then got this reading:

# top

top - 22:55:15 up 19 days, 15:21,  2 users,  load average: 0.91, 1.06, 1.32
Tasks: 276 total,   1 running, 275 sleeping,   0 stopped,   0 zombie
%Cpu0  :  0.0 us,  0.0 sy,  0.0 ni, 98.1 id,  0.0 wa,  0.0 hi,  1.9 si,  0.0 st
%Cpu1  :  0.0 us,  0.0 sy,  0.0 ni, 97.2 id,  0.0 wa,  0.0 hi,  2.8 si,  0.0 st
%Cpu2  :  0.0 us,  0.0 sy,  0.0 ni, 99.1 id,  0.0 wa,  0.0 hi,  0.9 si,  0.0 st
%Cpu3  :  0.0 us,  0.9 sy,  0.0 ni, 98.1 id,  0.0 wa,  0.0 hi,  0.9 si,  0.0 st
%Cpu4  :  0.0 us,  0.0 sy,  0.0 ni, 98.1 id,  0.0 wa,  0.0 hi,  1.9 si,  0.0 st
%Cpu5  :  0.0 us,  0.0 sy,  0.0 ni, 99.1 id,  0.0 wa,  0.0 hi,  0.9 si,  0.0 st
%Cpu6  :  0.0 us,  0.0 sy,  0.0 ni, 97.2 id,  0.0 wa,  0.0 hi,  2.8 si,  0.0 st
%Cpu7  :  0.0 us,  0.0 sy,  0.0 ni, 92.5 id,  0.0 wa,  0.0 hi,  7.5 si,  0.0 st
KiB Mem : 32865056 total, 19145456 free,  1809168 used, 11910432 buff/cache
KiB Swap: 37036988 total, 36221980 free,   815008 used. 30568180 avail Mem 

Starting c-icap again, along with all 5 Squid instances yields this:

# top

top - 22:59:20 up 19 days, 15:25,  2 users,  load average: 1.25, 1.06, 1.24
Tasks: 292 total,   1 running, 291 sleeping,   0 stopped,   0 zombie
%Cpu0  :  0.0 us,  0.0 sy,  0.0 ni, 98.3 id,  0.0 wa,  0.0 hi,  1.7 si,  0.0 st
%Cpu1  :  0.0 us,  0.0 sy,  0.0 ni, 97.7 id,  0.0 wa,  0.0 hi,  2.3 si,  0.0 st
%Cpu2  :  0.3 us,  0.0 sy,  0.0 ni, 98.0 id,  0.0 wa,  0.0 hi,  1.7 si,  0.0 st
%Cpu3  :  0.0 us,  0.0 sy,  0.0 ni, 99.0 id,  0.0 wa,  0.0 hi,  1.0 si,  0.0 st
%Cpu4  :  0.3 us,  0.0 sy,  0.0 ni, 98.3 id,  0.0 wa,  0.0 hi,  1.3 si,  0.0 st
%Cpu5  :  0.0 us,  0.0 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi,  0.3 si,  0.0 st
%Cpu6  :  0.0 us,  0.0 sy,  0.0 ni, 97.3 id,  0.0 wa,  0.0 hi,  2.7 si,  0.0 st
%Cpu7  :  0.0 us,  0.0 sy,  0.0 ni, 93.4 id,  0.0 wa,  0.0 hi,  6.6 si,  0.0 st
KiB Mem : 32865056 total, 19103744 free,  1843460 used, 11917852 buff/cache
KiB Swap: 37036988 total, 36221980 free,   815008 used. 30527568 avail Mem

So, it seems c-icap and/or squidclamav (that's what I'm using for content 
scanning) are responsible for this.

Nothing apparently relevant in the c-icap log though...

Has anyone experienced issues such as memory leaks with c-icap and/or
squidclamav?
I'm using c-icap-0.5.2.

Thanks,

Vieri


[squid-users] TCP: out of memory -- consider tuning tcp_mem

2017-09-18 Thread Vieri
Hi again,

I'm suddenly getting these errors in the log:

2017/09/18 18:13:48 kid1| Error negotiating SSL on FD 11010: error:1409F07F:SSL 
routines:ssl3_write_pending:bad write retry (1/-1/0)
2017/09/18 18:13:57 kid1| Error negotiating SSL on FD 11124: error:1409F07F:SSL 
routines:ssl3_write_pending:bad write retry (1/-1/0)
2017/09/18 18:13:57 kid1| Error negotiating SSL on FD 11124: error:1409F07F:SSL 
routines:ssl3_write_pending:bad write retry (1/-1/0)
2017/09/18 18:14:00 kid1| Error negotiating SSL connection on FD 11064: 
error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher (1/-1)
2017/09/18 18:14:00 kid1| Error negotiating SSL connection on FD 11064: 
error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher (1/-1)
2017/09/18 18:14:03 kid1| Error negotiating SSL connection on FD 10857: 
error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher (1/-1)
2017/09/18 18:14:04 kid1| Error negotiating SSL connection on FD 10857: 
error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher (1/-1)

This must be a kernel issue because I'm getting lots of these in 
/var/log/messages:

kernel: TCP: out of memory -- consider tuning tcp_mem

Here are my values:

# sysctl net.ipv4.tcp_mem
net.ipv4.tcp_mem = 384027   512036  768054
# sysctl net.ipv4.tcp_rmem
net.ipv4.tcp_rmem = 4096    87380   6291456
# sysctl net.ipv4.tcp_wmem
net.ipv4.tcp_wmem = 4096    16384   4194304
# sysctl net.core.rmem_max
net.core.rmem_max = 212992
# sysctl net.core.wmem_max
net.core.wmem_max = 212992

# uname -a
Linux inf-fw2 4.9.34-gentoo #1 SMP Mon Jul 10 11:05:23 CEST 2017 x86_64 AMD 
FX(tm)-8320 Eight-Core Processor AuthenticAMD GNU/Linux

# top
top - 17:51:33 up 19 days, 10:18,  2 users,  load average: 1.38, 1.49, 1.42
Tasks: 344 total,   1 running, 343 sleeping,   0 stopped,   0 zombie
%Cpu0  :  2.2 us,  0.5 sy,  0.0 ni, 93.0 id,  0.0 wa,  0.0 hi,  4.3 si,  0.0 st
%Cpu1  :  0.5 us,  0.0 sy,  0.0 ni, 97.9 id,  0.0 wa,  0.0 hi,  1.6 si,  0.0 st
%Cpu2  :  1.1 us,  0.0 sy,  0.5 ni, 95.2 id,  0.0 wa,  0.0 hi,  3.2 si,  0.0 st
%Cpu3  :  1.1 us,  0.5 sy,  0.0 ni, 96.3 id,  0.0 wa,  0.0 hi,  2.1 si,  0.0 st
%Cpu4  :  2.1 us,  0.0 sy,  0.0 ni, 96.3 id,  0.0 wa,  0.0 hi,  1.6 si,  0.0 st
%Cpu5  :  0.5 us,  0.0 sy,  0.0 ni, 98.9 id,  0.0 wa,  0.0 hi,  0.5 si,  0.0 st
%Cpu6  :  0.5 us,  1.1 sy,  0.0 ni, 96.8 id,  0.0 wa,  0.0 hi,  1.6 si,  0.0 st
%Cpu7  :  1.6 us,  0.0 sy,  0.0 ni, 90.9 id,  0.0 wa,  0.0 hi,  7.5 si,  0.0 st
KiB Mem : 32865056 total,   820664 free, 20358972 used, 11685420 buff/cache
KiB Swap: 37036988 total, 34924984 free,  2112004 used. 12014564 avail Mem

# cat /proc/net/sockstat
sockets: used 13121
TCP: inuse 10010 orphan 11 tw 246 alloc 12597 mem 772909
UDP: inuse 92 mem 59
UDPLITE: inuse 0
RAW: inuse 7
FRAG: inuse 0 memory 0
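
If I read this right, the "mem 772909" figure above is in pages and is already
over the tcp_mem hard limit of 768054 shown earlier, which would explain the
kernel message. A stopgap might be to raise the limits (values illustrative,
roughly double the current ones):

# sysctl -w net.ipv4.tcp_mem="768054 1024072 1536108"

but that would not address whatever is holding on to all that socket memory.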

# cat /proc/net/sockstat6
TCP6: inuse 282
UDP6: inuse 40
UDPLITE6: inuse 0
RAW6: inuse 5
FRAG6: inuse 0 memory 0

#  sysctl -a |grep tcp
fs.nfs.nfs_callback_tcpport = 0
fs.nfs.nlm_tcpport = 0
net.ipv4.tcp_abort_on_overflow = 0
net.ipv4.tcp_adv_win_scale = 1
net.ipv4.tcp_allowed_congestion_control = cubic reno
net.ipv4.tcp_app_win = 31
net.ipv4.tcp_autocorking = 1
net.ipv4.tcp_available_congestion_control = cubic reno
net.ipv4.tcp_base_mss = 1024
net.ipv4.tcp_challenge_ack_limit = 1000
net.ipv4.tcp_congestion_control = cubic
sysctl: reading key "net.ipv6.conf.all.stable_secret"
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_early_retrans = 3
net.ipv4.tcp_ecn = 2
net.ipv4.tcp_ecn_fallback = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_fastopen = 1
net.ipv4.tcp_fastopen_key = 6707aeac-2dd079df-0dee3da3-befd1107
net.ipv4.tcp_fin_timeout = 60
net.ipv4.tcp_frto = 2
net.ipv4.tcp_fwmark_accept = 0
net.ipv4.tcp_invalid_ratelimit = 500
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_limit_output_bytes = 262144
net.ipv4.tcp_low_latency = 0
net.ipv4.tcp_max_orphans = 131072
net.ipv4.tcp_max_reordering = 300
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_max_tw_buckets = 131072
net.ipv4.tcp_mem = 384027   512036  768054
net.ipv4.tcp_min_rtt_wlen = 300
net.ipv4.tcp_min_tso_segs = 2
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_mtu_probing = 0
net.ipv4.tcp_no_metrics_save = 0
net.ipv4.tcp_notsent_lowat = -1
net.ipv4.tcp_orphan_retries = 0
net.ipv4.tcp_pacing_ca_ratio = 120
net.ipv4.tcp_pacing_ss_ratio = 200
net.ipv4.tcp_probe_interval = 600
net.ipv4.tcp_probe_threshold = 8
net.ipv4.tcp_recovery = 1
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_retrans_collapse = 1
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_rmem = 4096    87380   6291456
net.ipv4.tcp_sack = 1
net.ipv4.tcp_slow_start_after_idle = 1
net.ipv4.tcp_stdurg = 0
net.ipv4.tcp_syn_retries = 6
net.ipv4.tcp_synack_retries = 5
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_thin_dupack = 0
net.ipv4.tcp_thin_linear_timeouts = 0
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tso_win_divisor = 3
net.ipv4.tcp_tw_recycle = 0