On Wed, Jan 10, 2024 at 5:53 PM 'Mark Maker' via libuv
<libuv@googlegroups.com> wrote:
>
> Hi all,
>
> thanks for a superb lib!
>
> I'm still trying to wrap my head around event-driven network programming in 
> the realm of HTTP and WebSockets.
>
> All the examples one finds (various libs on top of libuv or similar) do very 
> simple things, like echoing messages or reporting the time back. In my 
> application I will have costly, CPU bound, and/or potentially blocking 
> database tasks in the handlers, and/or potentially have to send large data 
> back.
>
> First, I do understand that I need to offload tasks into separate threads.
>
> But then, what I don't understand is how to propagate back-pressure?
>
> From the response sender to the task scheduler, i.e. when the network is too 
> slow to return (large) responses to the clients, it should suspend running 
> more tasks.
> From the tasks to the request/message receiver, i.e. when tasks are backed 
> up, it should suspend receiving new requests/messages from the sockets, and 
> propagate back-pressure through sockets/TCP flow control to the clients.
> From the receiver to the listener, i.e. when requests/messages are backed up, 
> it should suspend accepting new connections, also putting back-pressure into 
> the listen-backlog, and ultimately to new clients (the backlog might in turn 
> allow proper kernel load-balancing in a multi-threaded/multi-process design, 
> using SO_REUSEPORT)
>
> Am I right to assume that in an event-driven design, unless these issues are 
> addressed specifically, excessive buffering, memory exhaustion, and failure 
> are inevitable under client-driven overload?
>
> So I need to call uv_read_stop() and uv_listen_stop() etc. manually, 
> throttling these on/off with all the complexity that entails in the face of 
> potential errors, timeouts etc.?
>
> Or is there a "uv_callback_pipeline" that I could stick between producer and 
> consumer, that buffers a limited number of callbacks (ring buffer) and 
> stops/starts the producer uv_handle_t automatically, according to type?
>
> Note: all this is automatic in a synchronous multi-threaded/multi-process 
> design, by virtue of the underlying socket buffers/listen backlogs, and I'm 
> not yet ready to believe that there is no similar mechanism available in 
> libuv or event-driven programming in general. :-)
>
> _Mark

Yes, you're right that you have to call uv_read_stop() and
uv_listen_stop() when you're not ready to receive more data.

That's the one big design mistake we made back then: reads and accepts
should have been request-based, like writes are, not the firehose
model we have today.

You can reasonably faithfully emulate the request-based model by
calling uv_read_stop() or uv_listen_stop() first thing in your
uv_read_cb or uv_connection_cb callback. That's what I usually do in
my programs.
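A minimal sketch of that pattern, as I understand it: an echo-style TCP server that calls uv_read_stop() as the very first thing in its read callback, so at most one read is "in flight" per connection, and only re-arms reading once the corresponding write has completed. The port number and the echo behavior are just for illustration; in your case the uv_write() would be replaced by handing the buffer to a worker thread and resuming reads from its completion callback.

```c
#include <stdlib.h>
#include <uv.h>

typedef struct {
    uv_write_t req;
    uv_buf_t buf;
} write_req_t;

static void read_cb(uv_stream_t *client, ssize_t nread, const uv_buf_t *buf);

static void alloc_cb(uv_handle_t *handle, size_t suggested, uv_buf_t *buf) {
    buf->base = malloc(suggested);
    buf->len = suggested;
}

static void on_close(uv_handle_t *handle) {
    free(handle);
}

static void write_cb(uv_write_t *req, int status) {
    write_req_t *wr = (write_req_t *) req;
    uv_stream_t *client = req->handle;
    free(wr->buf.base);
    free(wr);
    /* The response has been flushed to the kernel; only now do we
       accept more input from this client. This is what propagates
       back-pressure: a slow reader on the other end delays this
       callback, which delays the next uv_read_start(). */
    if (status == 0)
        uv_read_start(client, alloc_cb, read_cb);
    else
        uv_close((uv_handle_t *) client, on_close);
}

static void read_cb(uv_stream_t *client, ssize_t nread, const uv_buf_t *buf) {
    /* First thing: turn off the firehose. No further read callbacks
       fire for this stream until uv_read_start() is called again. */
    uv_read_stop(client);

    if (nread <= 0) {
        free(buf->base);
        if (nread < 0)  /* EOF or error */
            uv_close((uv_handle_t *) client, on_close);
        else            /* spurious zero-length read; re-arm */
            uv_read_start(client, alloc_cb, read_cb);
        return;
    }

    /* Echo the data back; reads resume in write_cb. */
    write_req_t *wr = malloc(sizeof(*wr));
    wr->buf = uv_buf_init(buf->base, nread);
    uv_write(&wr->req, client, &wr->buf, 1, write_cb);
}

static void connection_cb(uv_stream_t *server, int status) {
    if (status < 0)
        return;
    uv_tcp_t *client = malloc(sizeof(*client));
    uv_tcp_init(server->loop, client);
    if (uv_accept(server, (uv_stream_t *) client) == 0)
        uv_read_start((uv_stream_t *) client, alloc_cb, read_cb);
    else
        uv_close((uv_handle_t *) client, on_close);
}

int main(void) {
    uv_loop_t *loop = uv_default_loop();
    uv_tcp_t server;
    struct sockaddr_in addr;

    uv_tcp_init(loop, &server);
    uv_ip4_addr("127.0.0.1", 7000, &addr);
    uv_tcp_bind(&server, (const struct sockaddr *) &addr, 0);
    uv_listen((uv_stream_t *) &server, 128, connection_cb);
    return uv_run(loop, UV_RUN_DEFAULT);
}
```

The same trick applies one level up: stopping/restarting accepts from your uv_connection_cb leaves excess connections queued in the kernel's listen backlog, which is exactly the back-pressure path the original question describes.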
