> From: David kerber [mailto:dcker...@verizon.net]
> Just over 1000 total, 810 to the port that this application is using.

"Should" be fine on Windows.

> The vast majority are showing a status of TIME_WAIT, a dozen or so in
> ESTABLISHED and one (I think) in FIN_WAIT_1.

Sounds fair enough.  The ESTABLISHED ones are active both ways and able to 
transfer data; the one in FIN_WAIT_1 has been closed at one end but the other 
end is still open; and the ones in TIME_WAIT are fully closed but kept around 
as tombstones so the TCP stack knows to discard any stray segments that arrive 
late for them.  None of those are a surprise.
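
If you want to keep an eye on how those states are distributed over time, a 
quick tally of netstat output is enough.  A minimal sketch in Java, assuming 
Windows-style netstat columns and using port 8080 purely as a stand-in for 
whatever port your app really listens on:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.Map;
    import java.util.TreeMap;

    public class PortStateCount {
        public static void main(String[] args) throws Exception {
            String port = args.length > 0 ? args[0] : "8080";   // assumed port; pass the real one
            Process netstat = new ProcessBuilder("netstat", "-an").start();
            Map<String, Integer> counts = new TreeMap<>();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(netstat.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String trimmed = line.trim();
                    // Only count TCP lines that mention the port we care about
                    if (!trimmed.startsWith("TCP") || !trimmed.contains(":" + port)) {
                        continue;
                    }
                    String[] cols = trimmed.split("\\s+");
                    String state = cols[cols.length - 1];        // last column is the state
                    counts.merge(state, 1, Integer::sum);
                }
            }
            counts.forEach((state, n) -> System.out.println(state + "\t" + n));
        }
    }

Run it a few times while the app is busy and you'll see whether TIME_WAIT is 
merely large or actually growing without bound.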

> That's our corporate connection, so it's shared across all users.  I can
> easily run it up to 100% by doing a large d/l from somewhere (I need to
> plan my patch Tuesday updates to avoid trouble), so my router and
> firewall have no trouble handling the full bandwidth.

Ah, OK.

> However, those are low numbers of high-throughput connections.  This app
> produces large numbers of connections, each with small amounts of data,
> so it may scale differently.

It may, but I'd be a little surprised - IP is IP, and you have enough 
concurrency that latency shouldn't be a problem.

That said, if a client has multiple data items to send in rapid succession, 
does it accumulate those and batch them, or does it send each one as a 
different request?  Or does the situation never arise?
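
Purely to illustrate the distinction (I have no idea how your client is 
actually written; the HTTP endpoint and the newline-delimited payload below 
are made up for the example):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.List;

    public class BatchingSketch {

        // One request (and one short-lived connection) per item: lots of
        // sockets ending up in TIME_WAIT for a given volume of data.
        static void sendEach(String endpoint, List<String> items) throws Exception {
            for (String item : items) {
                post(endpoint, item);
            }
        }

        // Accumulate the items and send them as one request: same data,
        // far fewer connections for the stack to track.
        static void sendBatched(String endpoint, List<String> items) throws Exception {
            post(endpoint, String.join("\n", items));
        }

        private static void post(String endpoint, String body) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
            conn.getResponseCode();   // read the status so the exchange completes cleanly
            conn.disconnect();
        }
    }

If the clients can batch, you trade a little latency on individual items for a 
much smaller connection count, which is usually the right trade for 
small-payload traffic like yours.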

                - Peter
