Greetings nginx developers,
I work at Acision, and we make use of nginx, especially its mail module, which
we have added considerable code to. I'm currently running into an issue with
non-blocking sockets, where ngx_unix_recv() returns NGX_AGAIN in a way I can't
make sense of, and I was wondering if anyone could help. Forgive me if these
questions are answered somewhere on this list or elsewhere online -- if so, I
haven't been able to find the answers.
So, my chief goal is to know the proper way to create a new ngx_connection_t
(specifically, one with a non-blocking socket, in the mail module) that gets
properly scheduled via the "nginx event queue" -- that is, to create an
ngx_connection_t such that, when data arrives on the socket, the event engine
calls my read handler, and when the socket is ready for writing, the event
engine calls my write handler.
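To be explicit, by "read handler" and "write handler" I mean functions with the
standard ngx_event_handler_pt signature:

static void my_read_handler(ngx_event_t *rev);
static void my_write_handler(ngx_event_t *wev);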
To achieve that goal (to not only create an ngx_connection_t, but also have it
"scheduled" in the proper nginx style), I have so far used
ngx_event_connect_peer(), passing it the address of a local
ngx_peer_connection_t variable, like so:
ngx_peer_connection_t peer;
ngx_str_t peer_name = ngx_string("MyName");   /* sets len to match the literal */
...
/* build peer */
ngx_memzero(&peer, sizeof(ngx_peer_connection_t));   /* avoid garbage in unset fields */
peer.sockaddr = (struct sockaddr *) saddr;
peer.socklen = sizeof(struct sockaddr_in);
peer.name = &peer_name;
peer.get = ngx_event_get_peer;            /* trivial "get" callback for a single peer */
peer.log = s->connection->log;
peer.log_error = NGX_ERROR_ERR;           /* log_error takes ngx_connection_log_error_e values */

rc = ngx_event_connect_peer(&peer);

if (rc == NGX_ERROR || rc == NGX_BUSY || rc == NGX_DECLINED) {
    /* error */
}

/* take ownership of the connection that ngx_event_connect_peer() created */
peer.connection->data = MyData;
peer.connection->pool = s->connection->pool;
peer.connection->read->handler = my_read_handler;
peer.connection->write->handler = my_write_handler;
With that approach, ngx_event_connect_peer() creates the ngx_connection_t as
peer.connection, and by calling ngx_post_event() on peer.connection->read and
peer.connection->write I've been able to force my handlers to run; after that,
the appropriate handler (read or write) does seem to be called when data is to
be received or sent.
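Concretely, to force the handlers I post the two events myself, roughly like
this (assuming the global ngx_posted_events queue as the second argument):

ngx_post_event(peer.connection->read, &ngx_posted_events);
ngx_post_event(peer.connection->write, &ngx_posted_events);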
That approach seemed ok at first, but I've been noticing strange behavior on
the non-blocking sockets within my_read_handler(). In particular, I call
ngx_unix_recv() in my_read_handler() to actually receive data, but in any given
invocation of my_read_handler(), the first call to ngx_unix_recv() never ends
up reading more than 128 bytes, and the second call always returns NGX_AGAIN,
as if no more data were available at that time. However, I know more data is
available! For example, if I keep calling ngx_unix_recv() in a loop, it returns
NGX_AGAIN indefinitely, even though more data is definitely available. Only
when my_read_handler() is invoked again later does ngx_unix_recv() return data,
and once more the first call reads at most 128 bytes and the second returns
NGX_AGAIN; the pattern then repeats. The result is that very large transfers
are extremely slow!
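To make the symptom concrete, my_read_handler() boils down to roughly the
following sketch (the buffer size and the actual data processing are simplified
placeholders, not the real code):

static void
my_read_handler(ngx_event_t *rev)
{
    ngx_connection_t  *c = rev->data;
    u_char             buf[4096];
    ssize_t            n;

    for ( ;; ) {
        n = ngx_unix_recv(c, buf, sizeof(buf));

        if (n == NGX_AGAIN) {
            /* this is where I always end up: the first recv returns at most
               ~128 bytes, and every further recv in this invocation returns
               NGX_AGAIN even though more data is on its way */
            return;
        }

        if (n == NGX_ERROR || n == 0) {
            /* error, or the connection was closed by the peer */
            return;
        }

        /* process the n bytes in buf ... */
    }
}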
Is there something I can do to avoid that behavior? What am I doing wrong? Do
I really want to be using ngx_event_connect_peer() and an ngx_peer_connection_t
to achieve my goal of creating a new nginx connection, which gets scheduled via
the event queue?
Thanks a bunch,
Drew Abbot, Acision