RE: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-19 Thread Yeggy Javadi
Hi Chris,
Thanks for your email. Below are replies to your questions:

1. Did you upgrade anything recently (like Java VM)?
[YJ] To support Linux 8, only Postgres was upgraded from version 9.3 to 9.6.

2. What is error you are seeing? A full stack trace would be helpful.
[YJ] The application error can occur anywhere with a (Too many open 
files) error once the open-file limit for the Tomcat process has been reached 
(262144 in my environment). For example, I can get a connectivity error when 
pulling info from a server, as below:

 [InventoryPullerTask - 10.1.6.25] ERROR FSSDataCollectorService - Error : The 
XML configuration file failed to be retrieved for server 10.1.6.25. Check 
server connectivity.Error retrieving IPStorConfg for server=10.1.6.25 
error=java.io.FileNotFoundException: 
/usr/local/apache-tomcat-8.5.59/webapps/ROOT/WEB-INF/lib/spring-orm-3.2.10.RELEASE.jar
 (Too many open files) restatus=-1 output=

3. What is your configuration?
[YJ] It is as below in server.xml:


...






4. Are you counting all the open files for a single process or all of the 
sub-processes which represent the threads of the main process? 
Different kernel versions count things differently.
[YJ] I am just getting the process ID of Tomcat and counting open files for 
that process
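
For reference, a per-process descriptor count can be read from `/proc/<pid>/fd`; unlike an unfiltered `lsof`, it has exactly one entry per open descriptor and is shared across all threads of the process. A minimal sketch (the `pgrep` pattern for finding the Tomcat PID is an assumption; `$$` keeps the example self-contained):

```shell
# Count open file descriptors for one process via /proc/<pid>/fd.
# Threads share the fd table, so this is the whole-process count.
pid=$$   # stand-in; e.g. pid=$(pgrep -f org.apache.catalina.startup.Bootstrap)
fd_count=$(ls /proc/"$pid"/fd 2>/dev/null | wc -l)
echo "open fds for $pid: $fd_count"
```

The limit actually in effect for that process (rather than the shell's `ulimit -n`) can be cross-checked in `/proc/<pid>/limits` under "Max open files".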

5. Running lsof, netstat, etc. can you see if some large number of those 
sockets are bound to any specific port (listen or connect)?
[YJ] Here is the netstat output:
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 Yeggy-F8-FMSVA:ssh      10.12.3.33:55236        ESTABLISHED
tcp        0     64 Yeggy-F8-FMSVA:ssh      10.197.255.10:60378     ESTABLISHED
tcp        0      0 localhost.loca:postgres localhost.localdo:36846 ESTABLISHED
tcp        0      0 localhost.loca:postgres localhost.localdo:36850 ESTABLISHED
tcp        0      0 localhost.localdo:11753 localhost.localdo:51302 ESTABLISHED
tcp        0      0 localhost.loca:postgres localhost.localdo:36844 ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:vrace    Yeggy-F8-FMSVA:48922    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:vrace    Yeggy-F8-FMSVA:48964    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:vrace    Yeggy-F8-FMSVA:48968    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:http     10.197.255.10:60817     TIME_WAIT
tcp6       0      0 Yeggy-F8-FMSVA:48968    Yeggy-F8-FMSVA:vrace    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:vrace    Yeggy-F8-FMSVA:48974    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:http     10.197.255.10:60771     TIME_WAIT
tcp6       0      0 Yeggy-F8-FMSVA:48934    Yeggy-F8-FMSVA:vrace    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:48936    Yeggy-F8-FMSVA:vrace    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:48954    Yeggy-F8-FMSVA:vrace    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:48970    Yeggy-F8-FMSVA:vrace    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:48932    Yeggy-F8-FMSVA:vrace    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:vrace    Yeggy-F8-FMSVA:48938    ESTABLISHED
tcp6       0      0 localhost.localdo:51302 localhost.localdo:11753 ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:48956    Yeggy-F8-FMSVA:vrace    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:vrace    Yeggy-F8-FMSVA:48928    ESTABLISHED
tcp6       0      0 localhost.localdo:36844 localhost.loca:postgres ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:48930    Yeggy-F8-FMSVA:vrace    ESTABLISHED
tcp6       0      0 localhost.localdo:35202 localhost.localdo:vrace ESTABLISHED
tcp6       0      0 localhost.localdo:36850 localhost.loca:postgres ESTABLISHED
tcp6       0      0 localhost.localdo:vrace localhost.localdo:35202 ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:vrace    Yeggy-F8-FMSVA:48966    ESTABLISHED
tcp6       0      0 localhost.localdo:51298 localhost.localdo:11753 TIME_WAIT
tcp6       0      0 Yeggy-F8-FMSVA:vrace    Yeggy-F8-FMSVA:48954    ESTABLISHED
tcp6       0   1045 Yeggy-F8-FMSVA:54246    172.22.22.192:https     ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:vrace    Yeggy-F8-FMSVA:48970    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:48918    Yeggy-F8-FMSVA:vrace    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:vrace    Yeggy-F8-FMSVA:48972    ESTABLISHED
tcp6       0      0 localhost.localdo:36846 localhost.loca:postgres ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:48960    Yeggy-F8-FMSVA:vrace    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:vrace    Yeggy-F8-FMSVA:48960    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:48974    Yeggy-F8-FMSVA:vrace    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:http     10.197.255.10:60714     TIME_WAIT
tcp6       0      0 Yeggy-F8-FMSVA:48924    Yeggy-F8-FMSVA:vrace    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:vrace    Yeggy-F8-FMSVA:48924    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:vrace    Yeggy-F8-FMSVA:48924    ESTABLISHED
tcp6       0      0 Yeggy-F8-FMSVA:48972
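
A quick way to attack question 5 is to group the ESTABLISHED sockets by local port and count them; a sketch over canned netstat-style lines (in practice, pipe in real `netstat -tn` output instead of the sample):

```shell
# Count ESTABLISHED connections per local port from netstat-style output
# (fields: proto recv-q send-q local foreign state).
sample='tcp 0 0 127.0.0.1:5432 127.0.0.1:36846 ESTABLISHED
tcp 0 0 127.0.0.1:5432 127.0.0.1:36850 ESTABLISHED
tcp 0 0 10.1.6.1:8080 10.197.255.10:60817 TIME_WAIT'
printf '%s\n' "$sample" |
awk '$6 == "ESTABLISHED" { n = split($4, a, ":"); port[a[n]]++ }
     END { for (p in port) print port[p], p }' |
sort -rn
```

On the sample this prints `2 5432` (two established connections on the Postgres port); a descriptor leak usually shows up as one port with an outsized count.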

Fwd: [Community] try to add an community growth graph to the website

2021-05-19 Thread Shuyang Wu
Hi Mark,

I tried to get contributors from the svn side with `svn log --quiet -v
http://svn.apache.org/repos/asf/tomcat | grep "^r"` and got 100
contributors in total. After removing duplicates against git, we now
have 95 contributors up to 2012 and ~150 up to now (compared to 20/90 and
10/60 before).
(See
https://www.apiseven.com/en/contributor-graph?chart=contributorOverTime=apache/tomcat
for the graph)

Since an email that is not bound to a GitHub account is counted as an
"anonymous" contributor, and I could only get 31 anonymous ones from the
tomcat GitHub repo, quite a few contributors were evidently lost during
the switch. In fact, the earliest commit in the tomcat GitHub repo I
could find is from 2006, so of course something was lost.

As for contributors in 2012: from the data I get, there are still no new
contributors in 2012. There are some commits with "no author", but none
for 2012. (I tried to figure out how to deal with "no author", but it
seems there is nothing I can do on my side. Please correct me if I'm
wrong.) I'll list the svn-side contributors here, so maybe you can tell
if anything looks wrong.
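
The extraction step can be sketched like this, over canned `svn log --quiet -v` revision headers (the real input is the command quoted above; the sample names are illustrative):

```shell
# Unique committer names from svn revision headers of the form:
# r12345 | author | 1999-10-08 20:05:52 -0400 (Fri, 08 Oct 1999) | 1 line
sample='r3 | duncan | 1999-10-11 09:00:00 -0400 (Mon, 11 Oct 1999) | 1 line
r2 | costin | 1999-10-10 17:19:44 -0400 (Sun, 10 Oct 1999) | 1 line
r1 | duncan | 1999-10-08 20:05:52 -0400 (Fri, 08 Oct 1999) | 1 line'
printf '%s\n' "$sample" | grep '^r' |
awk -F'|' '{ gsub(/^ +| +$/, "", $2); print $2 }' |
sort -u
```

which yields each committer once (`costin`, `duncan` for the sample); `wc -l` on the result gives the total count.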

duncan, 1999-10-08 20:05:52 -0400 EDT
(no author), 1999-10-08 20:05:52 -0400 EDT
costin, 1999-10-10 17:19:44 -0400 EDT
craigmcc, 1999-10-12 01:42:09 -0400 EDT
gonzo, 1999-10-12 03:17:47 -0400 EDT
bergsten, 1999-10-12 04:33:51 -0400 EDT
stefano, 1999-10-12 18:43:06 -0400 EDT
akv, 1999-10-12 21:12:44 -0400 EDT
mode, 1999-10-14 21:24:49 -0400 EDT
harishp, 1999-10-14 23:20:35 -0400 EDT
arun, 1999-10-15 18:26:05 -0400 EDT
mandar, 1999-10-17 14:40:06 -0400 EDT
jhunter, 1999-10-17 23:03:40 -0400 EDT
vanitha, 1999-10-18 19:49:24 -0400 EDT
jons, 1999-11-23 14:46:12 -0500 EST
pier, 1999-12-03 07:41:42 -0500 EST
rubys, 1999-12-07 20:37:20 -0500 EST
shemnon, 2000-01-12 01:14:36 -0500 EST
preston, 2000-01-17 05:17:36 -0500 EST
shachor, 2000-02-17 05:37:42 -0500 EST
jon, 2000-03-24 13:43:48 -0500 EST
jluc, 2000-03-29 14:30:26 -0500 EST
nacho, 2000-04-03 20:56:55 -0400 EDT
ed, 2000-06-15 14:58:19 -0400 EDT
alex, 2000-06-22 19:02:53 -0400 EDT
glenn, 2000-07-25 08:13:53 -0400 EDT
jiricka, 2000-07-28 17:41:44 -0400 EDT
pierred, 2000-08-11 17:32:39 -0400 EDT
remm, 2000-08-11 20:17:35 -0400 EDT
dannyc, 2000-08-16 15:53:22 -0400 EDT
horwat, 2000-08-16 20:58:20 -0400 EDT
larryi, 2000-08-26 09:03:38 -0400 EDT
santosh, 2000-10-02 18:44:57 -0400 EDT
arieh, 2000-10-06 16:42:00 -0400 EDT
eduardop, 2000-10-11 20:29:52 -0400 EDT
hgomez, 2000-11-15 06:37:25 -0500 EST
rameshm, 2000-12-15 19:23:33 -0500 EST
marcsaeg, 2000-12-21 14:24:19 -0500 EST
danmil, 2000-12-25 17:31:58 -0500 EST
keith, 2001-02-02 11:41:52 -0500 EST
kief, 2001-02-13 03:58:54 -0500 EST
melaquias, 2001-03-04 17:38:14 -0500 EST
amyroh, 2001-03-21 16:31:46 -0500 EST
clucas, 2001-03-24 01:49:30 -0500 EST
bip, 2001-04-25 21:30:27 -0400 EDT
seguin, 2001-05-12 01:52:38 -0400 EDT
jfclere, 2001-06-05 03:55:52 -0400 EDT
andya, 2001-06-13 17:26:45 -0400 EDT
mmanders, 2001-06-13 17:28:28 -0400 EDT
ccain, 2001-08-31 16:15:12 -0400 EDT
bojan, 2001-09-25 00:33:45 -0400 EDT
billbarker, 2001-10-02 01:38:21 -0400 EDT
kinman, 2001-10-03 15:26:47 -0400 EDT
patrickl, 2001-11-06 16:52:14 -0500 EST
rlubke, 2001-12-12 08:11:47 -0500 EST
manveen, 2002-01-26 15:52:58 -0500 EST
ekr, 2002-05-28 10:19:47 -0400 EDT
dsandberg, 2002-06-05 15:09:17 -0400 EDT
cks, 2002-06-20 12:16:00 -0400 EDT
mturk, 2002-06-23 01:40:29 -0400 EDT
luehe, 2002-06-26 12:50:38 -0400 EDT
morgand, 2002-07-22 14:41:34 -0400 EDT
bobh, 2002-08-14 16:54:57 -0400 EDT
jfarcand, 2002-08-22 08:48:56 -0400 EDT
idarwin, 2002-09-13 12:53:33 -0400 EDT
fhanik, 2003-02-19 15:24:10 -0500 EST
funkman, 2003-06-01 16:57:00 -0400 EDT
yoavs, 2003-06-06 23:35:38 -0400 EDT
ecarmich, 2003-08-23 21:18:44 -0400 EDT
markt, 2003-12-10 16:29:06 -0500 EST
truk, 2004-01-30 16:54:40 -0500 EST
fuankg, 2004-04-06 12:07:58 -0400 EDT
pero, 2004-09-21 03:30:32 -0400 EDT
wrowe, 2005-05-11 19:38:30 -0400 EDT
clar, 2005-06-10 12:24:35 -0400 EDT
bayard, 2005-08-04 20:24:38 -0400 EDT
jim, 2005-11-04 18:47:49 -0500 EST
jhook, 2006-03-05 14:18:11 -0500 EST
rjung, 2006-05-10 04:12:29 -0400 EDT
fcarrion, 2007-03-24 21:08:07 -0400 EDT
kkolinko, 2009-05-15 18:50:29 -0400 EDT
rahul, 2009-08-17 13:39:14 -0400 EDT
timw, 2010-02-07 02:33:25 -0500 EST
kfujino, 2010-03-31 02:08:32 -0400 EDT
jboynes, 2010-07-08 02:12:25 -0400 EDT
schultz, 2010-11-23 17:03:23 -0500 EST
slaurent, 2010-12-02 17:14:23 -0500 EST
eijit, 2011-08-26 00:36:56 -0400 EDT
olamy, 2011-08-30 04:23:49 -0400 EDT
violetagg, 2013-01-31 09:49:04 -0500 EST
kpreisser, 2013-09-24 15:10:44 -0400 EDT
fschumacher, 2014-09-19 11:25:29 -0400 EDT
ognjen, 2015-10-23 11:08:40 -0400 EDT
mgrigorov, 2015-10-27 03:50:01 -0400 EDT
huxing, 2016-08-31 10:04:33 -0400 EDT
csutherl, 2016-10-03 11:55:16 -0400 EDT
ebourg, 2017-01-20 15:24:24 -0500 EST
isapir, 2018-05-21 15:30:01 -0400 EDT
michaelo, 2018-08-21 04:16:42 -0400 EDT
woonsan, 2019-01-08 00:01:45 -0500 EST

I'm not familiar with svn at all :( so 


Tomcat SSL stops working after an undetermined amount of time

2021-05-19 Thread Ezsra McDonald
Environment:
OS: CentOS 7
Apache: apache-tomcat-8.5.65
Java: jdk1.8.0_281

Greetings,

I recently enabled SSL on my Tomcat server HTTP connectors. Something odd
is happening. After some undetermined amount of time the connector stops
responding appropriately to requests. My browser returns the following
message:

"An error occurred during a connection to target.host.com:8080. SSL
received a malformed Alert record.

Error code: SSL_ERROR_RX_MALFORMED_ALERT
"
I do not see anything in the logs to clue me in on what is happening.

I have the following configured for the connector.


When I restart the instance everything works fine for a while. Later, when
I try to look at the tomcat manager, SSL is no longer functioning properly.

Any assistance would be appreciated.

regards,


-- Ez


#tomcat on Freenode?

2021-05-19 Thread Coty Sutherland
Hi all,

I was just notified about some mess going on with Freenode which has
seemingly resulted in a mass exodus of users from the freenode servers.
There are some updates available at
https://gist.github.com/joepie91/df80d8d36cd9d1bde46ba018af497409/ which
make it seem like we should no longer point users to #tomcat on freenode
(we point to it on https://tomcat.apache.org/irc.html).

Should we take any action on that, like remove the page or update it to
point to https://libera.chat/ after we establish a channel there? I'm not
sure how much value there is/was in the freenode channel because questions
are so infrequent, so we may be able to safely drop the reference.



Thanks,
Coty


Re: JEP 411: Deprecate the Security Manager for Removal

2021-05-19 Thread Mark Thomas

On 19/05/2021 17:37, Robert Hicks wrote:

Is that the "same" security manager we flip on for Tomcat or just an
unfortunate naming coincidence?


It is the same one.

If you need the security manager I'd expect, based on typical lifetimes 
of Tomcat major versions, that you'd have a supported version of Tomcat 
where you could use a security manager in its current form for at least 
the next decade. Longer term solutions are still very much TBD.


Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Reload rewrite rules

2021-05-19 Thread Mark Thomas

On 19/05/2021 15:50, Chris Cheshire wrote:

Tomcat 9.0.45 - is there a way to reload the config for the rewrite valve at 
runtime without reloading the web app entirely? JMX operation perhaps?


Not cleanly, no.

You can stop and start the Valve via JMX, but you might see odd redirects 
while that is happening.


Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



JEP 411: Deprecate the Security Manager for Removal

2021-05-19 Thread Robert Hicks
Is that the "same" security manager we flip on for Tomcat or just an
unfortunate naming coincidence?

-- 
Bob


Reload rewrite rules

2021-05-19 Thread Chris Cheshire
Tomcat 9.0.45 - is there a way to reload the config for the rewrite valve at 
runtime without reloading the web app entirely? JMX operation perhaps?
-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: AW: AW: AW: maxConnections behaving unexpected - no connection gets ever refused

2021-05-19 Thread Mark Thomas

On 19/05/2021 13:32, Paul P Wolf wrote:




So we have:
maxThreads=4
maxConnections=10
acceptCount=20





The processing time of each request is 10s (thanks to a 10s sleep, which blocks 
the Thread).

So here is what I see instead (note I don't guess the response time, but do 
actually see/measure it):

0s - 4 requests processing, 11 connections maintained, 20 connections in 
acceptCount, 19 not in acceptCount

5s - 4 requests processing, 11 connections maintained, 20 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out


OK, that looks like clients with a connection timeout of 5s.


10s - 4 requests processing, 11 connections maintained, 16 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out, 4 requests processed

20s - 4 requests processing, 11 connections maintained, 12 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out, 8 requests processed

30s - 4 requests processing, 11 connections maintained, 8 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out, 12 requests processed

40s - 4 requests processing, 11 connections maintained, 4 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out, 16 requests processed

50s - 4 requests processing, 11 connections maintained, 0 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out, 20 requests processed

60s - 4 requests processing, 7 connections maintained, 0 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out, 24 requests processed

70s - 3 requests processing, 3 connections maintained, 0 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out, 28 requests processed

80s - 0 requests processing, 0 connections maintained, 0 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out, 31 requests processed

The interesting thing to note is that, contrary to your statement, I don't see 
timeouts for requests in the acceptCount/backlog.


Two possibilities there.

1. The write timeout is > ~70s.

2. The write is buffered in the network stack (more likely in the client 
I think) and the read timeout is > ~70s.


2 strikes me as more likely than 1.

With a request body bigger than the total amount of buffering in the 
client and server network stacks (typically much larger than you might 
expect) I think you'll see different results, namely write timeouts for 
client connections still in the acceptCount - assuming the write timeout 
is less than ~70s.



To verify my interpretation, I increased the acceptCount and saw that the 
timeouts decreased by the same number. I decreased the acceptCount and saw that 
the timeouts increased.


I agree that is a strong indication that your interpretation is correct. 
Further, I don't see anything to suggest your interpretation is incorrect.



The behaviour seems 100% predictable to me... just not as expected based on the 
documentation.


I was going to ask which bit of the documentation but you cover that below.


Not up to Tomcat. Tomcat can only call Socket.accept() and does so
under the control of maxConnections.

Connection refused == acceptCount/backlog full (or no listening socket).

Connection refusal is entirely under the control of the OS and will be
driven largely by the actual value of acceptCount/backlog.

So if it is not up to Tomcat, then the documentation "Any further simultaneous requests will 
receive "connection refused" errors" is clearly wrong, isn't it?


That statement in the docs ignores acceptCount so it certainly needs 
updating to reflect that.


Beyond that I suspect we are getting into variation in behaviour between 
operating systems. There might also be a local host vs remote host 
aspect but I don't think that is a factor here. The bigger factors will 
be the timeouts configured on the client, the network stack buffering 
behaviour and how the OS handles new connections when the acceptCount is 
full.


I think if we replaced "connection refused" with "connection refused or 
connection timeout" that should cover the possible variations.


Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



AW: AW: AW: maxConnections behaving unexpected - no connection gets ever refused

2021-05-19 Thread Paul P Wolf
Sorry, my message was misformatted, so here it is again with hopefully better 
formatting:

> The clients timeout because they spend more than timeout in the
> acceptCount/backlog queue waiting for Tomcat to call Socket.accept()

OK, so you are stating that clients time out while their requests are in the 
acceptCount/backlog. This is not what I am seeing. See below.

> So we have:
> maxThreads=4
> maxConnections=10
> acceptCount=20
>
> and a request processing time of 1 second.
>
> I'd guess that the OS is using a much larger accept count. Let's model it.
>
> 0s - 50 connections in acceptCount
> 1s - 39 connections in acceptCount, 11 connections maintained by Tomcat,
>      4 requests processing
> 2s - 35 connections in acceptCount, 11 connections maintained by Tomcat,
>      4 requests processing, 4 completed requests
> 3s - 31 connections in acceptCount, 11 connections maintained by Tomcat,
>      4 requests processing, 8 completed requests
> 4s - 27 connections in acceptCount, 11 connections maintained by Tomcat,
>      4 requests processing, 12 completed requests
> 5s - 23 connections in acceptCount, 11 connections maintained by Tomcat,
>      4 requests processing, 16 completed requests
> 6s - 19 connections timed out, 11 connections maintained by Tomcat,
>      4 requests processing, 20 completed requests
> 7s - 19 connections timed out, 7 connections maintained by Tomcat,
>      4 requests processing, 24 completed requests
> 8s - 19 connections timed out, 3 connections maintained by Tomcat,
>      3 requests processing, 28 completed requests
> 9s - 19 connections timed out, 0 connections maintained by Tomcat,
>      0 requests processing, 31 completed requests
>
> That seems to match what you observed. That suggests the OS is using
> an acceptCount of at least 50.

I can see what you mean, but this is not what I see, because your assumption of 
a 1s processing time is wrong. The processing time of each request is 10s 
(thanks to a 10s sleep, which blocks the Thread).

So here is what I see instead (note I don't guess the response time, but do 
actually see/measure it):

0s - 4 requests processing, 11 connections maintained, 20 connections in 
acceptCount, 19 not in acceptCount

5s - 4 requests processing, 11 connections maintained, 20 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out

10s - 4 requests processing, 11 connections maintained, 16 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out, 4 requests processed

20s - 4 requests processing, 11 connections maintained, 12 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out, 8 requests processed

30s - 4 requests processing, 11 connections maintained, 8 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out, 12 requests processed

40s - 4 requests processing, 11 connections maintained, 4 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out, 16 requests processed

50s - 4 requests processing, 11 connections maintained, 0 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out, 20 requests processed

60s - 4 requests processing, 7 connections maintained, 0 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out, 24 requests processed

70s - 3 requests processing, 3 connections maintained, 0 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out, 28 requests processed

80s - 0 requests processing, 0 connections maintained, 0 connections in 
acceptCount, 0 not in acceptCount, 19 timed-out, 31 requests processed

The interesting thing to note is that, contrary to your statement, I don't see 
timeouts for requests in the acceptCount/backlog.

To verify my interpretation, I increased the acceptCount and saw that the 
timeouts decreased by the same number. I decreased the acceptCount and saw that 
the timeouts increased.

The behaviour seems 100% predictable to me... just not as expected based on the 
documentation.

> Not up to Tomcat. Tomcat can only call Socket.accept() and does so
> under the control of maxConnections.
>
> Connection refused == acceptCount/backlog full (or no listening socket).
>
> Connection refusal is entirely under the control of the OS and will be
> driven largely by the actual value of acceptCount/backlog.
So if it is not up to Tomcat, then the documentation "Any further simultaneous 
requests will receive "connection refused" errors" is clearly wrong, isn't it?



Show mandatory disclosures

Further information on data processing in the DB Group is available here: 
http://www.deutschebahn.com/de/konzern/datenschutz

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org





Re: AW: AW: maxConnections behaving unexpected - no connection gets ever refused

2021-05-19 Thread Mark Thomas

On 19/05/2021 12:24, Paul P Wolf wrote:

Thank you Thomas. I carefully read your explanation. It makes sense to me and 
is completely different from what I understood up until this point. With this 
new understanding, the problem still persists. Please let me rephrase my issues 
in the light of what I just learned.

To summarize:
- thread limit defines how many requests can be processed concurrently.


Yes, via maxThreads.


- maxConnections defines how many connections are accepted by Tomcat via 
Socket.accept() and can be monitored by Tomcat. This does not include the 
connections/requests currently being processed in an active thread.


Not quite. This does include connections currently being processed in an 
active thread.



- acceptCount is an OS backlog, which is not monitored by Tomcat, and the OS 
may decide to override the value.


Correct.


- if all threads, maxConnections and acceptCount backlog are full, further 
requests get refused by the OS


Threads don't matter here. Just maxConnections and acceptCount/backlog


Now my still persisting issues:

> Say Tomcat can process 2000 requests a second and the typical client timeout
> is 5s, then an acceptCount/backlog of anything up to 10000 should be OK but
> above that some clients will time out because Tomcat won't be able to clear
> all the backlog before the unprocessed client connections time out.

If there are more requests than there is space in the backlog and the maxConnections is reached, 
why would you expect client timeouts instead of refused connections? Timeouts are what I see, but 
not what I expect, when I read "Any further simultaneous requests will receive 
"connection refused" errors".


Let me expand on the point I was trying to make. Using the 2000 req/s 
number above, a client timeout of 5s, an acceptCount of 20000 and no 
keep-alive I'd expect to see something close to the following:


0s  - 20000 connections in acceptCount
1s  - 18000 connections in acceptCount, 2000 completed requests
2s  - 16000 connections in acceptCount, 4000 completed requests
3s  - 14000 connections in acceptCount, 6000 completed requests
4s  - 12000 connections in acceptCount, 8000 completed requests
5s  - 10000 connections in acceptCount, 10000 completed requests
>5s - 10000 client timeouts, 10000 completed requests

The clients time out because they spend more than the client timeout in the 
acceptCount/backlog queue waiting for Tomcat to call Socket.accept().



Different question around the same issue: What would need to happen, so that 
there would be refused connections instead of client timeouts?


Same scenario as above but with an acceptCount of 5000
0s - 5000 connections in acceptCount, 15000 refused connections
1s - 3000 connections in acceptCount, 15000 refused connections,
 2000 completed requests
2s - 1000 connections in acceptCount, 15000 refused connections,
 4000 completed requests
3s - 15000 refused connections, 5000 completed requests


Your numbers are too close together. If you use numbers that are further
apart, the behaviour should be more obvious. Something like:
maxThreads=4
maxConnections=10
acceptCount=20


What do you mean by "numbers are too close together"? Why would that be an 
issue? What would be far enough? Is there any documentation? The processing speed 
shouldn't be an issue, as the endpoints sleep for 10s.


My point was that with values of 3, 2 and 1 and the off-by-one behaviour 
of maxConnections it is harder to match up observed numbers with 
configuration values. If the configuration values are further apart it 
should be easier to match observations, and changes in observations, 
with configuration values and changes in configuration values.


Keep in mind that my numbers above assume things happen instantly 
whereas in reality there is always going to be an ordering. The observed 
numbers can be slightly different from what you expect sometimes. If 
your configuration values are only 1 apart it will be hard to be sure 
what you are seeing.



Regardless, I tried your suggested configuration and nothing really changed: I 
see 31 successful requests and 19 timed out after 5 seconds. Still not a single 
refused connection. And considering the numbers, the OS acknowledged the 
configured acceptCount number.


So we have:
maxThreads=4
maxConnections=10
acceptCount=20

and a request processing time of 1 second.

I'd guess that the OS is using a much larger accept count. Let's model it.

0s - 50 connections in acceptCount
1s - 39 connections in acceptCount, 11 connections maintained by Tomcat,
 4 requests processing
2s - 35 connections in acceptCount, 11 connections maintained by Tomcat,
 4 requests processing, 4 completed requests
3s - 31 connections in acceptCount, 11 connections maintained by Tomcat,
 4 requests processing, 8 completed requests
4s - 27 connections in acceptCount, 11 connections maintained by Tomcat,
 4 requests processing, 12 completed requests
5s - 23 connections in acceptCount, 11 

AW: AW: maxConnections behaving unexpected - no connection gets ever refused

2021-05-19 Thread Paul P Wolf
Thank you Thomas. I carefully read your explanation. It makes sense to me and 
is completely different from what I understood up until this point. With this 
new understanding, the problem still persists. Please let me rephrase my issues 
in the light of what I just learned.

To summarize:
- thread limit defines how many requests can be processed concurrently.
- maxConnections defines how many connections are accepted by tomcat via 
socket.accept() and can be monitored by tomcat. this does not include the 
connections/requests being currently processed in an active thread.
- acceptCount is an OS backlog, which is not monitored by Tomcat, and the OS 
may decide to override the value.
- if all threads, maxConnections and acceptCount backlog are full, further 
requests get refused by the OS

Now my still persisting issues:
> Say Tomcat can process 2000 requests a second and the typical client timeout
> is 5s, then an acceptCount/backlog of anything up to 10000 should be OK but
> above that some clients will time out because Tomcat won't be able to clear
> all the backlog before the unprocessed client connections time out.
If there are more requests than there is space in the backlog and the 
maxConnections is reached, why would you expect client timeouts instead of 
refused connections? Timeouts are what I see, but not what I expect, when I 
read "Any further simultaneous requests will receive "connection refused" 
errors".

Different question around the same issue: What would need to happen, so that 
there would be refused connections instead of client timeouts?

> Your numbers are too close together. If you use numbers that are further
> apart, the behaviour should be more obvious. Something like:
> maxThreads=4
> maxConnections=10
> acceptCount=20

What do you mean by "numbers are too close together"? Why would that be an 
issue? What would be far enough? Is there any documentation? The processing 
speed shouldn't be an issue, as the endpoints sleep for 10s.

Regardless, I tried your suggested configuration and nothing really changed: I 
see 31 successful requests and 19 timed out after 5 seconds. Still not a single 
refused connection. And considering the numbers, the OS acknowledged the 
configured acceptCount number.

Same question as before: what needs to change to make Tomcat refuse 
connections? This still seems like a bug to me.





Re: AW: maxConnections behaving unexpected - no connection gets ever refused

2021-05-19 Thread Mark Thomas

On 19/05/2021 09:28, Paul P Wolf wrote:




In regards to point 5 and 6, let me try to point out my issues with the 
documentation and your explanations:
- "Each incoming request requires a thread for the duration of that request. If more 
simultaneous requests are received than can be handled by the currently available request 
processing threads, additional threads will be created up to the configured maximum (the 
value of the maxThreads attribute)." So far the documentation sounds good to me.


Ack.


- "If still more simultaneous requests are received, they are stacked up inside the server socket 
created by the Connector, up to the configured maximum (the value of the acceptCount attribute)." This 
is what I meant with "accept queue" or what you call "Tomcat's own queue". Would you 
usually configure the acceptCount to be less than maxConnections or is it completely unrelated to 
maxConnections? I would intuitively assume you would set it to less, but  now I am not sure anymore.


maxConnections is the maximum number of concurrent connections Tomcat is 
prepared to establish via a call to Socket.accept(). Once that limit is 
reached Tomcat will not call Socket.accept() again until one or more 
established connections is closed. Tomcat will then call Socket.accept() 
until maxConnections is reached or Socket.accept() blocks waiting for an 
incoming connection.
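The gate Mark describes can be sketched as a counting limit guarding accept(). This is a rough illustration, not Tomcat's actual Acceptor (which is considerably more involved); the class and method names are mine.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.Semaphore;

// Sketch: a semaphore sized to maxConnections guards every call to
// accept(); once all permits are taken, acquire() blocks and no further
// connections are accepted until one is closed and its permit released.
public class AcceptorSketch {
    static int acceptOnce(int maxConnections) throws IOException, InterruptedException {
        Semaphore connectionLimit = new Semaphore(maxConnections);
        try (ServerSocket server =
                 new ServerSocket(0, 50, InetAddress.getLoopbackAddress())) {
            // One client connects so accept() returns immediately.
            try (Socket client = new Socket(server.getInetAddress(),
                                            server.getLocalPort())) {
                connectionLimit.acquire();           // blocks once the limit is hit
                try (Socket accepted = server.accept()) {
                    return connectionLimit.availablePermits();
                } finally {
                    connectionLimit.release();       // slot freed when connection closes
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("permits left while serving: " + acceptOnce(2));
    }
}
```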


The acceptCount/backlog is maintained by the OS and is invisible to 
Tomcat. It is the queue of connections that have been received by the 
network stack and are waiting for Tomcat to call Socket.accept() to 
start processing them.
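This is visible in the standard java.net API: the second constructor argument of ServerSocket is the backlog hint, the same value Tomcat passes from acceptCount. A small sketch (class and method names are mine) showing that a client's TCP handshake completes in the kernel before the application ever calls accept():

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch: a queued client is already ESTABLISHED from its own point of
// view while it sits in the OS backlog waiting for accept().
public class BacklogDemo {
    static boolean queuedBeforeAccept() throws IOException {
        // backlog of 2 -- the hint the OS may or may not honour
        try (ServerSocket server =
                 new ServerSocket(0, 2, InetAddress.getLoopbackAddress())) {
            try (Socket client = new Socket(server.getInetAddress(),
                                            server.getLocalPort())) {
                // accept() has not been called yet, but the client-side
                // connect has already succeeded: the connection is queued.
                boolean establishedEarly = client.isConnected();
                try (Socket accepted = server.accept()) {
                    return establishedEarly && accepted.isConnected();
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("queued before accept: " + queuedBeforeAccept());
    }
}
```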


maxConnections and acceptCount are entirely unrelated.

The correct setting for maxConnections is driven largely by the maximum 
concurrent load you want (essentially maxThreads), whether you are using 
keep-alive and the percentage of time kept-alive connections are idle vs 
active.


For example, if you had a system that could support 200 concurrently 
processing requests, and those connections used keep-alive and had active 
requests for 10% of the time, then a good value for maxConnections would 
be around 200/10% = 2000
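The sizing arithmetic can be written down directly. A sketch using Mark's example figures (these are illustrative numbers, not recommended defaults; the class and method names are mine):

```java
// Sketch: kept-alive connections that are idle most of the time still
// occupy a connection slot, so the connection limit scales with the
// inverse of the fraction of time a connection is actually busy.
public class MaxConnectionsSizing {
    static int size(int maxThreads, double activeFraction) {
        return (int) Math.round(maxThreads / activeFraction);
    }

    public static void main(String[] args) {
        // 200 concurrently processing requests, active 10% of the time
        System.out.println("maxConnections ~ " + size(200, 0.10));
    }
}
```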


The correct setting for acceptCount is driven by the maximum spike in 
new connections you want to accept. The higher the value, the higher the 
spike in new connection attempts you'll be able to support but those new 
connections will then be queued in the acceptCount/backlog queue waiting 
for Tomcat to process them. You need to consider how quickly Tomcat can 
clear that backlog vs typical client timeouts.


Say Tomcat can process 2000 requests a second and the typical client 
timeout is 5s, then an acceptCount/backlog of anything up to 10000 
should be OK but above that some clients will time out because Tomcat 
won't be able to clear all the backlog before the unprocessed client 
connections time out.
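The budget described here is simply throughput multiplied by the client timeout. A sketch (class and method names are mine):

```java
// Sketch: a connection at position N in the backlog waits roughly
// N / requestsPerSecond seconds before Tomcat calls accept() on it, so
// the backlog must drain within the client timeout.
public class AcceptCountBudget {
    static int maxSafeBacklog(int requestsPerSecond, int clientTimeoutSeconds) {
        return requestsPerSecond * clientTimeoutSeconds;
    }

    public static void main(String[] args) {
        // Mark's example: 2000 req/s throughput, 5 s client timeout.
        System.out.println("acceptCount budget: " + maxSafeBacklog(2000, 5));
    }
}
```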




- "Any further simultaneous requests will receive "connection refused" errors, until 
resources are available to process them." Now if that is the case, then what does 
maxConnections have to do with anything at all?


See above. That is describing what happens when maxConnections has been 
reached AND the acceptCount/backlog queue is full.



Also I just don't see any connections being refused; instead they linger in 
SYN-SENT state. You say that acceptCount can be overridden by the OS, but why would 
that take away Tomcat's ability to monitor those connections, as "they are stacked 
up inside the server socket created by the Connector, up to the configured maximum 
(the value of the acceptCount attribute)"?


No they are not. You are confusing the connections where Tomcat has 
called Socket.accept() but not passed the socket to a thread for 
processing with connections still in the acceptCount/backlog queue.



The way I see it, Tomcat should always be able to monitor those connections or 
never at all. Are there separate acceptCounts for Tomcat/OS/TCP stack?


Tomcat has full visibility of the number of connections where 
Socket.accept() has been called. Tomcat has no visibility of the number 
of connections in the acceptCount/backlog queue.



-  "Note that once the [maxConnection] limit has been reached, the operating system 
may still accept connections based on the acceptCount setting". Here again my 
confusion rises: Does this only apply/make sense if the OS overrides the acceptCount?


There is always an acceptCount. The issue is whether the OS follows the 
setting provided by Tomcat or does its own thing anyway.



If so, would Tomcat still be able to monitor those connections in the server 
socket created by the Connector or not?


See above. Tomcat tracks connections where Socket.accept() has been 
called. Tomcat cannot track connection requests in the 
acceptCount/backlog queue.



If the OS doesn't override the acceptCount, is Tomcat then able to monitor the 
connections?


Whether the OS follows the acceptCount setting provided by Tomcat or not 
has no impact on Tomcat's 

AW: maxConnections behaving unexpected - no connection gets ever refused

2021-05-19 Thread Paul P Wolf
> Paul,
Thanks for the reply. I am not really much further with my main issue, but I 
hope this reply provides more information to you, so you can either clear up my 
confusion or see how Tomcat doesn't work as intended in my case.

> On 5/18/21 07:44, Paul P Wolf wrote:
> > Hi,
> >
> > I am trying to run a spring boot application with an embedded tomcat. In a
> scenario, where there is a lot of load on the service, I would like tomcat to
> refuse connections or return something like a 503 Service Unavailable. From
> what I understand, you could have this behaviour by setting maxConnections
> and and any additional connections get refused. At least this was how the old
> blocking io acceptor worked, from what I understand.
> >
> > The documentation says "The maximum number of connections that the
> server will accept and process at any given time. When this number has been
> reached, the server will accept, but not process, one further connection. This
> additional connection will be blocked until [...]".
> > However the documentation doesn't really state what happens if
> maxConnection+2 connections are reached.
> >
> > I tried to run my application with following settings (embedded-tomcat
> 9.0.45):
> > maxConnections=3
> > acceptCount=2
> > maxThreads=1
> > processorCache=1
> >
> > I created an endpoint, which just sleeps for a few seconds and returns
> > a string. When I create 50 separate connection via curl instances to
> > call that service I see the following behaviour with the NIO Acceptor:
> >
> > *   6 http connections are accepted immediately (maxThreads +
> > acceptCount + maxConnections)
> The maxThreads setting should not be relevant, here. maxConnections
> counts the total connections without regard to how many of them are
> actually having work done (which requires a thread from maxThreads).
>
> So accepting 6 connections really means:
>
> maxConnections (3) + acceptCount (2) + 1 (because maxConnections says it
> will accept 1+maxConnections, which is a little confusing IMO).
>
I tried some different settings to check this and your interpretation seems to 
be correct. Thanks!

> > *   44 http connections aren't established, but neither are they
> > refused. I will call them "blocked", but different from the
> > specification those are 44 blocked connections and not just 1
> What is the TCP state of the first 6 connections? What about the other 44?
> What is the difference between "accepted" and "blocked", and how are you
> telling them apart?
I tell them apart by inferring from curl's behaviour. I have a service endpoint 
sleep for 10s. I start 50 curls in parallel with a 5s connect-timeout. 6 
connections go through (after a total of 60 seconds), while the other 44 fail 
after the 5s timeout. Makes sense?
By that I infer that 44 connections are in the SYN-SENT state and the other 6 
are in the ESTABLISHED state, after the curl instances have started.

> > *   once the first request finishes, the latest (blocked) requests
> > gets a connection and is being processed (not a request from the
> > accept-queue or one of the other already established connections)
> So the queue is behaving "unfairly"?
>
> > *   when there are no further blocked requests, the requests still
> > get processed in last in first out order
> More unfairness.
>
> You didn't post your  configuration (which is pretty critical for
> trying to think about these things), but I suspect you aren't using an
> , which may ensure fairness. (Older Tomcats used an internal
> thread pool which was NOT an "Executor" but later Tomcat should always be
> using an Executor, which was intended to enforce fairness. Hmm.)
Correct, it behaves unfairly. As stated, I am using Spring Boot and an embedded 
Tomcat. From what I understand there is no (obvious) way to provide traditional 
Tomcat configuration files. However, I used a TomcatConnectorCustomizer to 
inspect the connector and there seems to be no Executor configured. Would you 
say then, that there is a problem with the default spring boot configuration 
not using an Executor?
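For reference, the connector settings discussed in this thread can be set through exactly that customizer hook in Spring Boot. The following is a configuration sketch, not a recommendation: it assumes spring-boot-starter-web (Spring Boot 2.x) on the classpath, the bean and class names are mine, and the values are the illustrative ones from this thread.

```java
import org.apache.catalina.connector.Connector;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TomcatTuning {

    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> connectorTuning() {
        return factory -> factory.addConnectorCustomizers((Connector connector) -> {
            // Values from the experiment in this thread -- illustrative only.
            connector.setProperty("maxThreads", "4");
            connector.setProperty("maxConnections", "10");
            connector.setProperty("acceptCount", "20");
        });
    }
}
```

Whether an Executor is configured by default is a separate question; this fragment only shows where the connector attributes from server.xml map to in an embedded setup.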

> > *   I see some timeouts after a while with this setup, depending on
> > what timeouts I set on curl. The requests without an established
> > connection timeout with "connect-timeout" parameter and the ones with
> > established connections depending on the "max-time" parameter.
> When you get a timeout, what kind of timeout are you encountering (on the
> client side)? Is this a "connect timeout" or a "read timeout"?
I encounter both, depending on the configuration. My point here was that I 
could use those timeouts to infer the tcp state as stated above.

> > Now I have a lot of questions and I suspect some bugs, but I wanted to ask
> on this list first:
> >
> >1.  Is there a way to refuse connections, instead of blocking them?
>
> maxConnections + acceptCount *should* be doing this. Remember that
> acceptCount is just a suggestion to the OS. The OS is free to always accept
> 65535 connections into its