[ https://issues.apache.org/jira/browse/DISPATCH-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17323942#comment-17323942 ]

ASF GitHub Bot commented on DISPATCH-1878:
------------------------------------------

grs commented on a change in pull request #1129:
URL: https://github.com/apache/qpid-dispatch/pull/1129#discussion_r614972470



##########
File path: src/adaptors/tcp_adaptor.c
##########
@@ -57,11 +57,15 @@ struct qdr_tcp_connection_t {
     qdr_delivery_t       *outstream;
     bool                  ingress;
     bool                  flow_enabled;
+    bool                  incoming_started;
     bool                  egress_dispatcher;
     bool                  connector_closed;//only used if egress_dispatcher=true
     bool                  in_list;         // This connection is in the adaptor's connections list
-    bool                  raw_closed_read;
-    bool                  raw_closed_write;
+    bool                  raw_closed_read;   // proton event seen
+    bool                  raw_closed_write;  // proton event seen or write_close called
+    bool                  raw_read_shutdown; // stream closed
+    bool                  read_eos_seen;
+    qd_buffer_list_t      early_raw_read_bufs; // read from raw conn before ingress stream ready

Review comment:
       I'm not too keen on granting buffers before we are ready for them. It 
doesn't seem necessary to me and it complicates reasoning about the code.

##########
File path: src/adaptors/tcp_adaptor.c
##########
@@ -116,6 +120,26 @@ static inline uint64_t qdr_tcp_conn_linkid(const qdr_tcp_connection_t *conn)
     return conn->instream ? conn->incoming_id : conn->outgoing_id;
 }
 
+static inline const char * qdr_link_direction_name(const qdr_link_t *link)
+{
+    assert(link);
+    return qdr_link_direction(link) == QD_OUTGOING ? "outgoing" : "incoming";
+}
+
+static inline const char * qdr_tcp_connection_role_name(const qdr_tcp_connection_t *tc)
+{
+    assert(tc);
+    return tc->ingress ? "client" : "server";

Review comment:
       I find the use of client and server confusing here. What you call the 
client is actually the server side in tcp terms. I think ingress/egress is less 
ambiguous (perhaps qualified by bridge, e.g. 'ingress bridge') and is already 
used elsewhere even with this patch.

##########
File path: src/adaptors/tcp_adaptor.c
##########
@@ -782,6 +965,14 @@ static void qdr_tcp_open_server_side_connection(qdr_tcp_connection_t* tc)
 
     // This attach passes the ownership of the delivery from the core-side connection and link
     // to the adaptor-side outgoing connection and link.
+    uint64_t i_conn_id = 0;
+    uint64_t i_link_id = 0;
+    uint64_t i_dlv_id = 0;
+    if (!!tc->initial_delivery) {
+        i_conn_id = tc->initial_delivery->conn_id;

Review comment:
       Minor point, but why are these initialised in a separate conditional block up here, when they seem only to be used in the subsequent block guarded by the identical condition, and are not used in the qdr_link_first_attach() call that is the only thing separating the two blocks? I think it would be easier to read if we just declared them at the point of use.

##########
File path: src/adaptors/tcp_adaptor.c
##########
@@ -1257,13 +1489,17 @@ static void qdr_tcp_delivery_update(void *context, qdr_delivery_t *dlv, uint64_t
     void* link_context = qdr_link_get_context(qdr_delivery_link(dlv));
     if (link_context) {
         qdr_tcp_connection_t* tc = (qdr_tcp_connection_t*) link_context;
-        qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG, DLV_FMT" qdr_tcp_delivery_update: disp: %"PRIu64", settled: %s",
+        qd_log(tcp_adaptor->log_source, QD_LOG_DEBUG,
+               DLV_FMT" qdr_tcp_delivery_update: disp: %"PRIu64", settled: %s",
                DLV_ARGS(dlv), disp, settled ? "true" : "false");
 
         //
         // If one of the streaming deliveries is ever settled, the connection must be torn down.

Review comment:
       I think now that we have separate close calls for read and write sides, 
we should close for read if the instream is settled and close for write when 
the outstream is settled.
   
   If we then settle and complete the instream on CLOSED_READ and settle the 
outstream on CLOSED_WRITE, we are then conveying more accurately what each tcp 
endpoint does to its peer. If the instream has not been started when we get 
CLOSED_READ on ingress, we can close the connection entirely (on egress we can 
just send an empty reply). 

##########
File path: src/adaptors/tcp_adaptor.c
##########
@@ -630,64 +780,92 @@ static void handle_connection_event(pn_event_t *e, qd_server_t *qd_server, void
         }
     }
     case PN_RAW_CONNECTION_CLOSED_READ: {
-        qd_log(log, QD_LOG_DEBUG, "[C%"PRIu64"] PN_RAW_CONNECTION_CLOSED_READ", conn->conn_id);
-        conn->q2_blocked = false;
-        handle_incoming_impl(conn, true);
+        qd_log(log, QD_LOG_DEBUG, "[C%"PRIu64"][L%"PRIu64"] PN_RAW_CONNECTION_CLOSED_READ",
+               conn->conn_id, conn->incoming_id);
         sys_mutex_lock(conn->activation_lock);
+        conn->q2_blocked = false;
         conn->raw_closed_read = true;
         sys_mutex_unlock(conn->activation_lock);
-        pn_raw_connection_close(conn->pn_raw_conn);
+        handle_incoming(conn, "PNRC_CLOSED_READ");
         break;
     }
     case PN_RAW_CONNECTION_CLOSED_WRITE: {
-        qd_log(log, QD_LOG_DEBUG, "[C%"PRIu64"] PN_RAW_CONNECTION_CLOSED_WRITE", conn->conn_id);
+        qd_log(log, QD_LOG_DEBUG,
+               "[C%"PRIu64"] PN_RAW_CONNECTION_CLOSED_WRITE",
+               conn->conn_id);
         sys_mutex_lock(conn->activation_lock);
         conn->raw_closed_write = true;
         sys_mutex_unlock(conn->activation_lock);
-        pn_raw_connection_close(conn->pn_raw_conn);
+        if (conn->ingress) {

Review comment:
       Why only on ingress?
   
   I would expect that on CLOSED_WRITE we settle the incoming qdr delivery, 
i.e. outstream (since we can no longer forward any data received). That signals 
the peer that they should close their socket for read.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Client app not getting a response through tcpListener
> -----------------------------------------------------
>
>                 Key: DISPATCH-1878
>                 URL: https://issues.apache.org/jira/browse/DISPATCH-1878
>             Project: Qpid Dispatch
>          Issue Type: Bug
>          Components: Protocol Adaptors
>    Affects Versions: 2.0.0
>            Reporter: Fernando Giorgetti
>            Assignee: Charles E. Rolke
>            Priority: Major
>         Attachments: D-1878 @e148c.svg, 
> DISPATCH-1878_fail-pn-raw-closed-before-ingress-stream-set-up.svg, 
> dispatch-1878-trace.html
>
>
> I have a tcp-echo server running locally through:
> podman run -d --rm --name tcpecho -p 9090:9090 quay.io/skupper/tcp-go-echo
> And I have a router configured with a tcpConnector to localhost at port 9090 
> and a tcpListener at port 9999.
> I am able to use nc (netcat) to send data to the tcp-echo directly (port 
> 9090) or through
> the router (port 9999), if I run "nc 127.0.0.1 9999" and then send the data.
> But if I run it as "echo abcd | nc 127.0.0.1 9999", I do not see a
> response; if I use port 9090 (the original port) instead of 9999 (the
> router tcpListener), I do get the response correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
