On Sat, Mar 24, 2018 at 5:15 AM, Ben Noordhuis <i...@bnoordhuis.nl> wrote:

> On Thu, Mar 22, 2018 at 11:49 PM, CM <crusader.m...@gmail.com> wrote:
> > 1. does libuv provide any guarantees wrt callback order in case when
> > uv_close() cancels all outstanding requests on a given handle -- i.e.
> > can I rely on all canceled requests' callbacks having already been
> > called by the time close_cb runs?
>
> Yes, request callbacks run before the close callback.
>

I assume that (with Windows IOCP) uv_close() on a handle with outstanding
requests ends up calling CancelIo(Ex), which (to my limited knowledge of
this area) marks the underlying OS request as canceled and returns
immediately, before the cancellation notification is delivered to the
user. I.e. I suspect the uv_close callback might run before all read/write
callbacks have reported a "canceled" error -- unless uv_close() keeps
track of the currently outstanding requests and delays the close_cb
notification until they are all gone. Can you confirm or deny this?
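
In other words, here is the ordering I'd like to rely on (a minimal
sketch; 'tcp' is assumed to be a connected uv_tcp_t, error handling
omitted):

#include <uv.h>

static char data[] = "hello";

static void on_write(uv_write_t* req, int status)
{
    // per the answer above, this must run before on_close; if the
    // write was canceled by uv_close(), status == UV_ECANCELED
}

static void on_close(uv_handle_t* handle)
{
    // the guarantee in question: every request callback on this
    // handle has already fired by the time we get here
}

static void submit_and_close(uv_tcp_t* tcp)
{
    static uv_write_t req;
    uv_buf_t buf = uv_buf_init(data, sizeof(data) - 1);
    uv_write(&req, (uv_stream_t*) tcp, &buf, 1, on_write);
    uv_close((uv_handle_t*) tcp, on_close);   // cancels the write
}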

Consider this pseudocode of a C++ coroutine:

void my_coro()
{
    RingBuffer buf(1024);

    SocketHandle sock(...);   // dtor calls uv_close(), suspends, and
                              // resumes in the uv_close callback
    ...

    // submit read/write requests that access 'buf' in their
    // respective callbacks
    ...

    // here we hit an error and leave (via return or exception)
}


As you can see, it is imperative to have a mechanism that guarantees "no
more callback calls" right before the destructor of 'buf' runs. Whether
uv_close() provides this guarantee, I don't know. uv_shutdown() seems to
provide it (even when shutdown() itself fails?), but only for write
requests.

So, this is quite important: does libuv officially(!!!) provide this
guarantee in uv_close() or not? Even if the current version happens to
provide it, without an official promise I can't rely on it and have to
add synchronization of my own that keeps the coroutine suspended (right
before the destructor of 'buf') until all requests have completed.
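
If it is not official, the fallback I have in mind is manual reference
counting, roughly like this (a sketch; 'IoState' and 'finish_if_done'
are made-up names):

struct IoState {
    int pending;   // in-flight requests + 1 for the handle itself
    // the suspended coroutine handle would live here too
};

static void finish_if_done(IoState* s)
{
    if (--s->pending == 0) {
        // only now is it safe to resume the coroutine and let
        // 'buf' be destroyed -- no callback can touch it anymore
    }
}

static void on_write_done(uv_write_t* req, int status)
{
    finish_if_done((IoState*) req->data);
}

static void on_handle_closed(uv_handle_t* handle)
{
    finish_if_done((IoState*) handle->data);
}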



> You should probably not rely on the order in which different request
> types run, e.g., whether a connect callback comes before or after a
> write callback.
>

I understand this, but I had a somewhat different problem in mind (as
shown above).



> > 2. can the uv_close callback be cancelled or left incomplete? E.g. if
> > I ask the event loop to terminate -- will it wait for all callbacks to
> > complete? What if these callbacks try to queue more requests -- will
> > those get cancelled, or will they fail to submit?
>
> If "ask to terminate" means calling:
>
> 1. uv_stop() -> uv_stop() just tells uv_run() to return at its
> earliest convenience, it doesn't shut down anything.
> 2. uv_loop_close() -> uv_loop_close() fails with UV_EBUSY if there are
> active handles or requests.
>
> A closing handle is considered active, even if it has been unref'd
> with uv_unref().
>
> When a callback creates another handle or request, it keeps the event
> loop alive.
>

Hmm... Let me rephrase: how does one cleanly shut down a running event
loop? uv_stop() gets me out of the loop, but all active handles are still
in memory (attached to the event loop). Their respective handlers
(coroutine stack frames, requests, buffers, etc.) sit on the heap waiting
for notifications (callbacks). What is the proper procedure to unwind all
of this and avoid leaks?

What if, during unwinding, I submit another request or try to open a new
handle?
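
For context, the closest thing I've found to a shutdown procedure is the
uv_walk() pattern: close every remaining handle, then keep running the
loop until the close callbacks have drained (a sketch of my current
understanding, not necessarily the blessed way):

static void close_walk_cb(uv_handle_t* handle, void* arg)
{
    if (!uv_is_closing(handle))
        uv_close(handle, NULL);   // or a callback that frees 'handle'
}

static void shutdown_loop(uv_loop_t* loop)
{
    uv_walk(loop, close_walk_cb, NULL);
    uv_run(loop, UV_RUN_DEFAULT);   // drain the close callbacks
    uv_loop_close(loop);            // should no longer return UV_EBUSY
}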


> > 4. if I submit two write requests -- is it possible for the first one
> > to fail and the second one to succeed? (uv_tcp_t/uv_file_t/etc,
> > Windows/Linux)
>
> In theory, yes; in practice, no.  Libuv stops reading and writing when
> an I/O error happens.  It is theoretically possible to start reading
> and writing again if the error is transient, but in practice you
> simply close the handle.
>

I see... I understand that this shouldn't happen with uv_tcp_t, because
TCP is a "stream", i.e. it guarantees the order of data transfer. I
suspect the underlying OS mechanisms make sure that if the first request
fails, the second will fail too.

But what about uv_udp_t? Given two simultaneous write requests, does
libuv queue the second one until the first completes (and, if the first
fails, fail the second without submitting it to the OS)?
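
I.e., in this situation (a sketch; 'dest' is a resolved destination
address):

static void on_send(uv_udp_send_t* req, int status)
{
    // status < 0 on failure
}

static void two_sends(uv_udp_t* udp, const struct sockaddr* dest)
{
    static uv_udp_send_t req1, req2;
    static char a[] = "first", b[] = "second";
    uv_buf_t b1 = uv_buf_init(a, sizeof(a) - 1);
    uv_buf_t b2 = uv_buf_init(b, sizeof(b) - 1);
    uv_udp_send(&req1, udp, &b1, 1, dest, on_send);
    uv_udp_send(&req2, udp, &b2, 1, dest, on_send);
    // question: if req1 fails, is req2 still handed to the OS,
    // or does libuv fail it as well?
}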

Same for uv_fs_t: I can submit two write requests that update different
locations in the same file. If these writes are immediately "converted"
into async/overlapped requests (on Windows), it is possible for the first
one to fail and the second to succeed -- unless libuv serializes them and
executes them one by one.
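
Concretely (a sketch; 'fd' is an already-open uv_file):

static void on_fs_write(uv_fs_t* req)
{
    // req->result < 0 means the write failed
    uv_fs_req_cleanup(req);
}

static void two_file_writes(uv_loop_t* loop, uv_file fd)
{
    static uv_fs_t req1, req2;
    static char a[] = "aaaa", b[] = "bbbb";
    uv_buf_t b1 = uv_buf_init(a, sizeof(a) - 1);
    uv_buf_t b2 = uv_buf_init(b, sizeof(b) - 1);
    uv_fs_write(loop, &req1, fd, &b1, 1, 0,    on_fs_write);
    uv_fs_write(loop, &req2, fd, &b2, 1, 4096, on_fs_write);
    // can req1's write fail while req2's succeeds, or does libuv
    // serialize them?
}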


-- 
Sincerely yours,
Michael.
