"No olvides, no traiciones, lo que llevas bien dentro de ti. No olvides, no
traiciones, lo que siempre te ha hecho vivir."

On Tue, Aug 22, 2017 at 11:36 AM, André Warnier (tomcat) <a...@ice-sa.com>
wrote:

> On 22.08.2017 10:50, Grigor Aleksanyan wrote:
>
>> Hi,
>>
>> I have a web application (.war file) running under *apache-tomcat-7.0.52*.
>> It is a proxy application between my C++ client and server apps. Once an
>> HTTP request (from the client) is received by the web application, it
>> propagates the request to the server and sends the response back to the
>> client once it is ready. The server may need a long time before it produces
>> any data to send to the client (it can also send data in chunks with really
>> long delays, etc.).
>>
>> My question is.
>>
>> *Is there a way to detect a client disconnect before the server has
>> something ready to be written to the output stream?* If the server
>> writes something after the client's disconnect, obviously I will get an
>> exception and can handle it properly. But my goal is to detect this
>> without waiting for the server to produce some data to write. I saw a
>> couple of forums and mailing lists where people say that the only way to
>> do this is by writing to the output stream. But in the case of WebSockets
>> I know that I can get notifications that the connection was closed by the
>> client.
>>
>> I believe this is a very common issue people face and there should be a
>> graceful solution for this for HTTP as well.
>>
>> Can you please advise?
>>
>>
> Hi.
> You describe the problem well, and even the solution.
>
> The problem is not at the Tomcat level, nor at the HTTP level. The problem
> is at the lower TCP/IP level, and there is not much that Tomcat or HTTP can
> do about it.
> When a client establishes a TCP connection to a server, this is a
> "virtual" connection. There is no hardware or physical line that is
> dedicated to this connection. In other words, it exists only as long as the
> server and the client agree that it exists.
> (And if there are any firewalls or proxies in-between, it is not even one
> connection, it is multiple chained connections).
>
> If the client just decides to "go away", without explicitly telling the
> server so, the server side is totally unaware that the client has gone,
> until the server tries to send a packet on the connection, and (some time
> after that) expects to receive an "ACK" and does not get one.
> In the case of WebSocket, you have a well-behaved client that
> (supposedly) tells the server when it is closing the connection. So the
> application could test this by sending some kind of "probe" from time to
> time, to check that the client is still there.
> And the client would expect such a probe, would know that it is a probe
> and not data, and could just respond with some kind of "ack" that the
> server would also interpret correctly.
> But even then, if the client host were to just crash (or anything
> in between, e.g. a firewall or a router), you would have the same issue:
> nothing is sent to the server saying that the connection no longer
> exists, so the server thinks it is still there, as long as it does not
> send anything.
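>
> A rough, untested sketch of such a probe, using the JSR-356
> (javax.websocket) API that apache-tomcat-7.0.52 already ships (it needs
> Java 7). The endpoint path and the timeouts are arbitrary; note that a
> client's WebSocket stack answers pings with pongs automatically, per
> RFC 6455:
>
> import java.io.IOException;
> import java.nio.ByteBuffer;
> import java.util.concurrent.Executors;
> import java.util.concurrent.ScheduledExecutorService;
> import java.util.concurrent.ScheduledFuture;
> import java.util.concurrent.TimeUnit;
>
> import javax.websocket.CloseReason;
> import javax.websocket.OnClose;
> import javax.websocket.OnMessage;
> import javax.websocket.OnOpen;
> import javax.websocket.PongMessage;
> import javax.websocket.Session;
> import javax.websocket.server.ServerEndpoint;
>
> @ServerEndpoint("/probe")
> public class ProbeEndpoint {
>
>     private static final ScheduledExecutorService PINGER =
>             Executors.newSingleThreadScheduledExecutor();
>
>     private volatile long lastPong = System.currentTimeMillis();
>     private ScheduledFuture<?> pingTask;
>
>     @OnOpen
>     public void onOpen(final Session session) {
>         // Every 30 seconds send a ping; if no pong has arrived for
>         // more than 60 seconds, assume the client is gone and close.
>         pingTask = PINGER.scheduleAtFixedRate(new Runnable() {
>             @Override
>             public void run() {
>                 try {
>                     if (System.currentTimeMillis() - lastPong > 60000) {
>                         session.close();
>                     } else {
>                         session.getBasicRemote()
>                                .sendPing(ByteBuffer.allocate(0));
>                     }
>                 } catch (IOException e) {
>                     // the send itself failed: the connection is dead
>                 }
>             }
>         }, 30, 30, TimeUnit.SECONDS);
>     }
>
>     @OnMessage
>     public void onPong(PongMessage pong, Session session) {
>         lastPong = System.currentTimeMillis(); // the peer is still there
>     }
>
>     @OnClose
>     public void onClose(Session session, CloseReason reason) {
>         if (pingTask != null) {
>             pingTask.cancel(false);
>         }
>     }
> }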
>
> And you are also right to say that it affects a lot of people and their
> applications.
> It is frustrating, for example, to start some long and heavy search in a
> database, and then, when the result appears, find out that there is
> nobody listening anymore (*).
> There may be some good universal solutions to this, but so far I don't
> know of any.
> Fame awaits you, if you find one that can be universally applicable.
>
>
> (*) and even if you knew that the client had gone, some tasks which you
> started in the background may not be so easy to stop in the middle either.
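
As André says, with plain HTTP the server only notices the disconnect when
it actually sends something. In a servlet, the classic "detect by writing"
check looks roughly like this (just a sketch; Tomcat surfaces the abort as
a ClientAbortException, which is a subclass of IOException):

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LongPollServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        ServletOutputStream out = resp.getOutputStream();
        // While the backend has nothing to send, probe the client by
        // writing a harmless byte and forcing it onto the wire.
        while (!backendHasData()) {
            try {
                out.write(' ');
                out.flush();
                Thread.sleep(5000);
            } catch (IOException clientGone) {
                // Tomcat throws ClientAbortException here once it
                // notices that the client has disconnected.
                return;
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
        // ... stream the real response ...
    }

    // Hypothetical placeholder for the real "is the backend ready?" check.
    private boolean backendHasData() {
        return false;
    }
}

Because of TCP buffering, the exception may only show up a write or two
after the client actually left, and the padding bytes become part of the
response, so this only suits formats that tolerate leading whitespace.
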
For cases that could have the described issue (an HTTP request that may take
too long), I decided to create the "Interactive Task Manager".
The HTTP request starts a task to process something on the server side, and
gets a quick response with the UUID of the task. Then the client can poll,
asking if the task has finished.
When the task finishes, it stores the result as JSON in a temporary file.
When the client gets the result, the Interactive Task Manager deletes the
temporary file.
The Interactive Task Manager has its own thread pool to process the tasks.
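
A rough sketch of the pattern (the class and method names are made up for
illustration; a real version would also need to evict tasks whose results
are never collected):

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.Writer;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InteractiveTaskManager {

    private final ExecutorService pool = Executors.newFixedThreadPool(8);
    private final ConcurrentMap<String, Future<File>> tasks =
            new ConcurrentHashMap<String, Future<File>>();

    /** Starts a task and returns its UUID immediately. */
    public String submit(final Callable<String> job) {
        final String id = UUID.randomUUID().toString();
        tasks.put(id, pool.submit(new Callable<File>() {
            @Override
            public File call() throws Exception {
                String json = job.call(); // the long-running work
                File tmp = File.createTempFile(id, ".json");
                Writer out = new FileWriter(tmp);
                try {
                    out.write(json); // store the result as JSON
                } finally {
                    out.close();
                }
                return tmp;
            }
        }));
        return id;
    }

    /** Returns the JSON result and cleans up, or null if still running. */
    public String poll(String id) throws Exception {
        Future<File> future = tasks.get(id);
        if (future == null || !future.isDone()) {
            return null; // unknown task, or not finished yet
        }
        tasks.remove(id);
        File tmp = future.get(); // rethrows anything the task threw
        try {
            BufferedReader in = new BufferedReader(new FileReader(tmp));
            try {
                StringBuilder sb = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    sb.append(line).append('\n');
                }
                return sb.toString();
            } finally {
                in.close();
            }
        } finally {
            tmp.delete(); // delete the temporary file once delivered
        }
    }
}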

It adds some overhead: polling HTTP requests, plus storing the result to a
file and reading it back. But it doesn't depend on configuration changes or
hardware speed. It's robust.

My view is that HTTP requests should be resolved quickly (always within a
few seconds). If one is not, it's a bug, and we need to reimplement the
backend to fix it.

It's one possible solution. Maybe not the best.
Hope it helps.
