Peter Crowther wrote:
> From: David kerber [mailto:dcker...@verizon.net]
>> Just over 1000 total, 810 to the port that this application is using.

"Should" be fine on Windows.
That was my gut feeling too, but I'm glad to have it confirmed.

>> The vast majority are showing a status of TIME_WAIT, a dozen or so in
>> ESTABLISHED, and one (I think) in FIN_WAIT_1.

> Sounds fair enough.  The ESTABLISHED ones are active both ways and able to
> transfer data; the one in FIN_WAIT_1 has been closed at one end but the
> other end's still open; and the ones in TIME_WAIT are closed but tombstoned
> so the TCP stack knows to throw away any data that arrives for them.  None
> of those are a surprise.
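
Good to know, thanks.  For anyone else wanting to watch their counts: a
quick way to see the state breakdown for one port is to tally netstat
output.  Here's a rough sketch of the sort of thing I mean (Java, since
that's what we're running; the class name and default port are just
placeholders, so substitute whatever your connector listens on):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.Map;
    import java.util.TreeMap;

    // Quick-and-dirty tally of TCP states for one local port, parsed
    // out of "netstat -an" output on Windows.  The default port below
    // is a placeholder; pass the real connector port as an argument.
    public class ConnStates {
        public static void main(String[] args) throws Exception {
            String port = args.length > 0 ? args[0] : "8080"; // placeholder
            Process p = Runtime.getRuntime().exec("netstat -an");
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(p.getInputStream()));
            Map<String, Integer> counts = new TreeMap<String, Integer>();
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.trim().split("\\s+");
                // Windows lines look like:
                //   TCP  10.0.0.1:8080  10.0.0.2:1234  TIME_WAIT
                if (f.length == 4 && f[0].equals("TCP")
                        && f[1].endsWith(":" + port)) {
                    Integer n = counts.get(f[3]);
                    counts.put(f[3], n == null ? 1 : n + 1);
                }
            }
            System.out.println(counts); // e.g. {ESTABLISHED=12, TIME_WAIT=810}
        }
    }

For a one-off count, netstat -an | find /c "TIME_WAIT" at a command prompt
does much the same, minus the per-port filter.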

>> That's our corporate connection, so it's shared across all users.  I can
>> easily run it up to 100% by doing a large d/l from somewhere (I need to
>> plan my patch Tuesday updates to avoid trouble), so my router and
>> firewall have no trouble handling the full bandwidth.

> Ah, OK.

>> However, those are low numbers of high-throughput connections.  This app
>> produces large numbers of connections, each with small amounts of data,
>> so it may scale differently.

> It may, but I'd be a little surprised - IP is IP, and you have enough
> concurrency that latency shouldn't be a problem.

I was wondering about that.  I knew total data throughput wasn't a major issue here, but wasn't sure how latency would affect it.

> That said, if a client has multiple data items to send in rapid
> succession, does it accumulate those and batch them, or does it send each
> one as a different request?  Or does the situation never arise?

A typical client will have 2 to 5 items to send per transaction (they're actually lines from a data logger's data file), and each line goes out in a separate POST request.  The frequency of transactions varies widely, but typically won't exceed one every 10 or 15 seconds from any given site.  As I mentioned earlier, each data line is small, 20 to 50 bytes.

We had looked at batching up the transmissions before, and it's still an option.  However, it adds a bit of complexity to the software on both ends, though the gain would be far fewer individual requests to process.  For now, we prefer the simplicity of line-by-line transmission, but if we start running into network limitations we'll probably start batching them up.
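
If we do go that route, the client-side change looks pretty small.  Here's
a rough sketch of what I have in mind (the URL is a placeholder, not our
real endpoint, and it assumes we'd change the server to accept one logger
line per row of the POST body, which we haven't done):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.List;

    // Sketch of batching: instead of one POST per logger line, accumulate
    // the 2-5 lines from a transaction and send them newline-separated in
    // a single request.  The URL below is a placeholder, not our real one.
    public class BatchSender {
        public static void send(List<String> lines) throws Exception {
            StringBuilder body = new StringBuilder();
            for (String line : lines) {
                body.append(line).append('\n');
            }
            byte[] bytes = body.toString().getBytes("US-ASCII");

            URL url = new URL("http://example.com/logger/upload"); // placeholder
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "text/plain");
            conn.setFixedLengthStreamingMode(bytes.length);
            OutputStream out = conn.getOutputStream();
            out.write(bytes);
            out.close();

            int rc = conn.getResponseCode(); // forces the request out
            if (rc != HttpURLConnection.HTTP_OK) {
                // hang on to the batch and retry it later
            }
            conn.disconnect();
        }
    }

The matching server-side change (splitting the body on newlines before
processing) is where most of the extra complexity would come in, which is
why we haven't done it yet.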

D


