Hi Folks:

My Libuv-based server performs all its functions correctly except for TCP
connection termination.

Each TCP connection has a uv_tcp_t connection handle and a uv_poll_t handle
whose allocation and operation are explained below. When the Protocol_Task()
thread needs to terminate a connection, it must stop polling, terminate the
TCP socket connection, and deallocate the handles.

NOTE: I am using the GitHub distribution from the following link on Ubuntu
Linux 15.04.

    https://github.com/nikhilm/uvbook

I have tried the following two approaches.

1) Just use uv_poll_stop() to terminate polling and uv_close() to terminate 
the TCP connection.

2) Use uv_poll_stop() to terminate polling, and then use uv_queue_work() and
   uv_async_send() to wake up the Connect_Loop, in the main() process described
   below, so it can terminate the TCP connection, by proxy, with uv_close().

In both cases the following problem occurs. The callback routine supplied to
uv_close() does not execute until another incoming TCP connection occurs, and
in most cases the Poll_Loop, in the IO_Task() described below, stops invoking
its callback routine, poll_callback(). In case 2, a crash almost always
ensues. (I probably am not using uv_async_send() correctly.)

Do I have a fundamental misunderstanding of how Libuv works, or am I doing
something wrong?
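
For what it's worth, here is my understanding of the intended cross-thread
close pattern: a persistent uv_async_t, initialized once on the loop that owns
the handles, whose callback performs the uv_close() on that loop's own thread.
This is only a sketch of what I think is correct, not my actual code, and the
names (Close_Async, Pending_Close, request_close()) are hypothetical.

static uv_async_t Close_Async;      /* persistent; lives as long as the loop */
static uv_handle_t *Pending_Close;  /* handle to close; set before the send */

void close_async_callback(uv_async_t *async)
{
    /* Runs on the loop thread, so calling uv_close() here is safe. */
    uv_close(Pending_Close, close_callback);
}

/* At startup, on the loop thread:
 *     uv_async_init(&Connect_Loop, &Close_Async, close_async_callback);
 */

void request_close(uv_handle_t *handle)
{
    Pending_Close = handle;         /* needs a lock with more than one connection */
    uv_async_send(&Close_Async);    /* the one Libuv call that is thread safe */
}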

Also, I strongly suspect using Linux recv() to read data is not optimal when
epoll() is being used. My understanding is that there is a way to pass buffers
to epoll() such that data will automatically be inserted in them when a
UV_READABLE event occurs. Do you have any advice about this?
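
Regarding the buffer question, my understanding is that this is what Libuv's
own stream API already does: uv_read_start() takes an allocation callback that
supplies a buffer, and Libuv fills it and passes it to the read callback when
data arrives, with no explicit recv(). A minimal sketch, where process_chunk()
is a hypothetical parser entry point:

void alloc_callback(uv_handle_t *handle, size_t suggested_size, uv_buf_t *buf)
{
    /* Supply a buffer for Libuv to fill. */
    buf->base = (char *) malloc(suggested_size);
    buf->len = suggested_size;
}

void read_callback(uv_stream_t *stream, ssize_t nread, const uv_buf_t *buf)
{
    if(nread > 0)
        process_chunk((CONN_DESC *) stream->data, buf->base, nread);
    else if(nread < 0)      /* UV_EOF or an error */
        uv_close((uv_handle_t *) stream, close_callback);

    free(buf->base);
}

/* Instead of uv_poll_start() plus recv():
 *     uv_read_start((uv_stream_t *) cdesc->conn_handle,
 *                   alloc_callback, read_callback);
 */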

An overview of my Server and the relevant code follow.

Best Regards,

Paul Romero

Multi-Connection TCP Server Functional Architecture Overview
-----------------------------------------------------------------------------------------
There is a connection descriptor for each incoming TCP connection which 
contains all data
needed to manage the connection and perform the relevant functions.
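
For concreteness, the descriptor looks roughly like this; only the fields
referenced in the code below are shown, and MSG_Q stands in for my message
queue type:

typedef struct conn_desc {
    int         fd;             /* TCP socket descriptor */
    uv_tcp_t   *conn_handle;    /* allocated by make_incoming_connection() */
    uv_poll_t  *poll_handle;    /* allocated by the IO_Trigger_Task() */
    MSG_Q       task_input_q;   /* per-connection protocol message queue */
    /* ... protocol state, timers, etc. ... */
} CONN_DESC;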

When the main() process detects an incoming TCP connection, it sends a 
notification message to the
IO_Trigger_Task(). The IO_Trigger_Task() then sets up epoll() monitoring of 
incoming TCP data
for that connection.

Subsequently, the IO_Task() invokes poll_callback() when incoming data is 
available, reads a chunk
of data, and sends a protocol message to the Protocol_Task() when a 
complete protocol message is
recognized.

The Timer_Task() sends an expiration notification message to the 
Protocol_Task() when a protocol
timer expires.

The Protocol_Task() sends messages to the Send_Op_Task() for transmission
across the network. It spawns a DB Operation Task to perform slow database
operations, and the DB Operation Task notifies the Protocol_Task() when the
operation is complete and then terminates.

Loops of type uv_loop_t
-----------------------
* Connect_Loop
* Poll_Loop
* Timer_Loop

Tasks: All Libuv thread tasks run concurrently and are launched by main() 
at startup time.
------------------------------------------------------------------------------------------
* main(): A Linux process that runs the Connect_Loop to detect incoming TCP
  connections. The make_incoming_connection() callback routine accepts incoming
  connections and allocates a uv_tcp_t handle on a per-connection basis. (See
  the first sketch after this list.)

* IO_Trigger_Task(): A Libuv thread that sets up epoll() plumbing for the
  IO_Task() when an incoming TCP connection occurs. It allocates a uv_poll_t
  handle, on a per-connection basis, and calls uv_poll_start() to initiate
  epoll() operation with the Poll_Loop in the IO_Task(). It configures the
  handle to detect UV_READABLE events and handles them with the poll_callback()
  routine. However, it does not run the Poll_Loop. (Basically, this task just
  sets up plumbing; see the second sketch after this list.)

* IO_Task(): A Libuv thread that runs the Poll_Loop to handle incoming TCP
  data, on a per-connection basis. The poll_callback() routine executes and
  uses normal Linux recv() to read chunks of data, in non-blocking mode, when
  a UV_READABLE event occurs. (See the third sketch after this list.)

* Timer_Task(): A Libuv thread that runs the Timer_Loop to handle ticks, and
  whose main function is to detect protocol timer expiration. The tick duration
  is configured with uv_timer_init() and uv_timer_start(), and ticks are
  handled by the timer_callback() routine. (See the fourth sketch after this
  list.)

* Protocol_Task(): A Libuv thread that handles protocol messages sent to it by
  the following tasks on a per-connection basis: IO_Task(), Timer_Task(), and
  the DB Operation Tasks. DB Operation Libuv thread tasks are spawned by the
  Protocol_Task() to perform slow database operations and send a notification
  message to the Protocol_Task() upon completion of the operation. (See the
  fifth sketch after this list.)

* Send_Op_Task(): A Libuv thread that transmits all network-bound messages
  with normal Linux send() on a per-connection basis.
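
The following sketches show the per-task plumbing in skeletal form. Helper
names that do not appear in the real code below (notify_io_trigger_task(),
feed_protocol_parser(), and so on) are hypothetical placeholders.

First, the accept path in main(), which is essentially the standard Libuv
accept pattern:

extern uv_loop_t Connect_Loop;

void make_incoming_connection(uv_stream_t *server, int status)
{
    uv_tcp_t *conn_handle;

    if(status < 0)
        return;

    /* One uv_tcp_t per connection; freed later by close_callback(). */
    conn_handle = (uv_tcp_t *) malloc(sizeof(uv_tcp_t));
    uv_tcp_init(&Connect_Loop, conn_handle);

    if(uv_accept(server, (uv_stream_t *) conn_handle) == 0)
        notify_io_trigger_task(conn_handle);    /* hypothetical */
    else
        uv_close((uv_handle_t *) conn_handle, close_callback);
}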
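
Second, the plumbing set up by the IO_Trigger_Task() for each new connection.
The uv_fileno() call is my assumption about how the socket descriptor is
obtained:

extern uv_loop_t Poll_Loop;

void setup_polling(CONN_DESC *cdesc)    /* hypothetical name */
{
    uv_os_fd_t fd;

    /* Obtain the accepted socket descriptor from the uv_tcp_t handle. */
    uv_fileno((uv_handle_t *) cdesc->conn_handle, &fd);
    cdesc->fd = fd;

    /* One uv_poll_t per connection, watching for incoming data. */
    cdesc->poll_handle = (uv_poll_t *) malloc(sizeof(uv_poll_t));
    uv_poll_init(&Poll_Loop, cdesc->poll_handle, cdesc->fd);
    cdesc->poll_handle->data = (void *) cdesc;  /* back pointer for poll_callback() */
    uv_poll_start(cdesc->poll_handle, UV_READABLE, poll_callback);
}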
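
Third, the shape of poll_callback() in the IO_Task(); the chunk size and the
parser entry point are illustrative:

void poll_callback(uv_poll_t *handle, int status, int events)
{
    CONN_DESC *cdesc = (CONN_DESC *) handle->data;
    char buf[4096];
    ssize_t n;

    if(status < 0 || !(events & UV_READABLE))
        return;

    /* Drain the socket in non-blocking mode. */
    while((n = recv(cdesc->fd, buf, sizeof(buf), MSG_DONTWAIT)) > 0)
        feed_protocol_parser(cdesc, buf, n);    /* hypothetical */
}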
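
Fourth, the tick machinery in the Timer_Task(); TICK_MS and
check_protocol_timers() are hypothetical:

extern uv_loop_t Timer_Loop;
static uv_timer_t Tick_Timer;

void timer_callback(uv_timer_t *handle)
{
    check_protocol_timers();    /* hypothetical: scan for expired protocol timers */
}

void Timer_Task(void *arg)
{
    uv_timer_init(&Timer_Loop, &Tick_Timer);
    uv_timer_start(&Tick_Timer, timer_callback, 0, TICK_MS);  /* TICK_MS in ms */
    uv_run(&Timer_Loop, UV_RUN_DEFAULT);
}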
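
Fifth, how the Protocol_Task() spawns a DB Operation Task; the DB_OP
structure, run_slow_db_operation(), post_msg(), and DB_DONE are hypothetical
placeholders for my message-queue machinery:

typedef struct db_op {
    CONN_DESC *cdesc;
    /* ... query parameters ... */
} DB_OP;

static void db_op_task(void *arg)
{
    DB_OP *op = (DB_OP *) arg;

    run_slow_db_operation(op);                      /* hypothetical */
    post_msg(&op->cdesc->task_input_q, DB_DONE);    /* notify Protocol_Task() */
    free(op);
}

void spawn_db_operation(DB_OP *op)      /* hypothetical name */
{
    uv_thread_t tid;

    uv_thread_create(&tid, db_op_task, op);
}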


Approach 1 Code
---------------
ROUTINE void close_callback(uv_handle_t *handle)
{
    free(handle);
    return;
}

ROUTINE void RELEASE_CONNECTION(CONN_DESC *cdesc)
{
    if(N_Sockets > 0)
        N_Sockets--;

    if(cdesc->poll_handle)
      {
        uv_poll_stop(cdesc->poll_handle);
        free((void *) cdesc->poll_handle);
      }

    if(cdesc->conn_handle)
      {
        struct linger spec;

        /* Force an immediate close rather than TIME_WAIT. */
        spec.l_onoff = TRUE;
        spec.l_linger = 0;
        setsockopt(cdesc->fd, SOL_SOCKET, SO_LINGER, &spec, sizeof(spec));

        uv_close((uv_handle_t *) cdesc->conn_handle, close_callback);
      }

    ENTER_MUTEX(&Service_Q_Mutex);
    DELETE_CONN(cdesc);
    cdesc->fd = -1;
    flush_msg(&cdesc->task_input_q);
    EXIT_MUTEX(&Service_Q_Mutex);

    return;
}

Approach 2 Code
-----------------
ROUTINE void close_callback(uv_handle_t *handle)
{
    free(handle);
    return;
}

typedef struct close_template {
    uv_handle_t *handle;
    void       (*callback)(uv_handle_t *);
} CLOSE_TEMPLATE;

ROUTINE void close_proxy(uv_work_t *data)
{
    CLOSE_TEMPLATE *cparam = (CLOSE_TEMPLATE *) data->data;

    uv_close(cparam->handle, cparam->callback);
    return;
}


extern uv_loop_t Connect_Loop;
static CLOSE_TEMPLATE close_data;

ROUTINE void RELEASE_CONNECTION(CONN_DESC *cdesc)
{
    uv_work_t wreq;
    uv_async_t as_handle;
    struct linger spec;

    if(N_Sockets > 0)
        N_Sockets--;

    //
    // Stop this. TBD: Might need to do this via proxy in the IO_Task() Poll_Loop.
    //
    uv_poll_stop(cdesc->poll_handle);

    uv_async_init(&Connect_Loop, &as_handle, NULL);

    close_data.handle = (uv_handle_t *) cdesc->conn_handle;
    close_data.callback = close_callback;
    //
    // Call uv_close() in the close_proxy()
    //
    wreq.data = (void *) &close_data;
    uv_queue_work(&Connect_Loop, &wreq, close_proxy, NULL);

    spec.l_onoff = TRUE;
    spec.l_linger = 0;
    setsockopt(cdesc->fd, SOL_SOCKET, SO_LINGER, &spec, sizeof(spec));

    uv_async_send(&as_handle);
    uv_close((uv_handle_t *) &as_handle, NULL);

    free(cdesc->poll_handle);

    ENTER_MUTEX(&Service_Q_Mutex);
    DELETE_CONN(cdesc);
    cdesc->fd = -1;
    flush_msg(&cdesc->task_input_q);
    EXIT_MUTEX(&Service_Q_Mutex);

    return;
}
