Re: [PATCH] SSL: mark connections as non-reusable before SSL handshake
Hey Maxim,

> As far as I understand, this change isn't useable with an unmodified
> nginx

It is, since nginx modules are free to install those SSL callbacks (for example: ngx_lua's ssl_certificate_by_lua).

> (and introduces some minor pessimization in an unlikely case when the
> first ngx_ssl_handshake() will not return NGX_AGAIN).

Since an SSL/TLS handshake requires at least 1 RTT (even in the case of session resumption), the only case in which ngx_ssl_handshake() wouldn't return NGX_AGAIN is when the handshake fails based on the ClientHello (no shared ciphers, inappropriate fallback, etc.), in which case the connection will be closed and ngx_reusable_connection(c, 0) will be called from ngx_close_connection() anyway. Calling ngx_reusable_connection(c, 0) twice is basically a no-op, so I don't really consider this a pessimization.

Best regards,
Piotr Sikora

_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel
Re: Breaking content generation phase into multiple phases or adding custom events in content phase
Thanks Yichun and Jeff for your inputs, and sorry for the late ack.

I had a brief look at ngx_drizzle. I see that it needs the library to expose the underlying fd, which is not possible in our case, but it looks like an interesting solution. The other cosocket API mechanism seems to use a unix domain socket as per the docs, so this would be similar to using a pipe/socket to handle async APIs, where the callback could trigger an event on the pipe/socket. Please correct me if I am missing something? (never worked on Lua :) )

The use case is very similar to the image optimization case in PageSpeed. I am guessing the cosocket approach can also fit here?

@Jeff, regarding using pipes to trigger events: does it create one pipe per request, or just one pipe, where, say, the module can:
- trigger the async API,
- queue the nginx request and proceed with other requests (not sure this is possible with nginx, as I think an nginx handler cannot return something like NGX_AGAIN to resume the handler where it called the async API),
- when there is an event on the pipe, the nginx pipe event handler can figure out which request's data is ready and respond.

The easiest approach I can think of is to use a domain socket in upstream form, but I wanted to avoid that as it would mean an additional socket descriptor per request. If you can help me understand this at the design level w.r.t. nginx, it will be faster to adapt :)

Thanks again!!
Re: patch to allow loading PKCS #11 URLs
On Mon, 2015-06-22 at 04:11 +0300, Maxim Dounin wrote:
> Hi,
>
>> Yes, I've tried it. It would be specified as:
>>
>> engine:pkcs11:pkcs11:model=SoftHSM%20v2;serial=f0490bea35;pin-value=1234;
>>
>> But it doesn't work, because it doesn't initialize the pkcs11 engine.
>
> Shouldn't initialization of an engine be added to engine:... handling
> then?

Hi,

I am not sure how the original engine approach was intended to work. My understanding from the mail I quoted was that it required the engine to be enabled via the openssl.cnf file. That's why I didn't touch that part. I could update that part as well if you think it would be a nice addition.

> (Just a side note: your patch has ENGINE_init() but no ENGINE_finish().
> It looks like a leak.)

I'm not very familiar with the nginx code base, and that could indeed be an issue. Where would that finish be called? The engine initialization for pkcs11 should stay up for the lifetime of all connections.

>> Furthermore, the engine:pkcs11:pkcs11: approach defeats the purpose of
>> PKCS #11 URLs, which is to use the same string to identify the same
>> keys in all applications.
>
> The goal of the engine:... syntax is to allow nginx to load keys from
> arbitrary engines. With this approach you can use PKCS #11 URLs as
> identifiers for engines which support them - though you have to write a
> prefix "engine:name:" to instruct nginx to load a key from a named
> engine rather than a file. So I don't think that the current approach
> defeats the purpose somehow - it's just a bit more chatty than it can
> be, assuming nginx knows for sure that the only engine useable for
> PKCS #11 URLs is pkcs11.

Note that supporting the pkcs11: URLs directly has the advantage that it doesn't make nginx dependent on any engine to support PKCS #11. The current support relies on engine_pkcs11, which is a 3rd-party module (not in the openssl distribution). It should be future-proof to have a way to load PKCS #11 modules which is independent of the backend used by nginx.

So you could change the internal backend (for example, to use libp11 directly) without requiring all nginx users to change their configuration files and remove the engine:pkcs11: part from their keys. It would of course be good to keep the engine support there for the users that may need other modules than pkcs11, but for HSMs, all crypto card providers do ship a PKCS #11 module. So I would expect that the need for PKCS #11 support would overshadow the need for any other engine supported by openssl.

regards,
Nikos
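To make the two addressing schemes concrete, here is a hypothetical ssl_certificate_key line for each. The directive itself exists in nginx, but the engine: prefix is the syntax under discussion in this thread, not a released feature, and the URL components are made-up example values:

```nginx
# Engine-prefixed form: nginx is told explicitly to load the key via the
# 3rd-party engine_pkcs11 OpenSSL engine; the key id after the prefix is
# engine-specific.
ssl_certificate_key "engine:pkcs11:pkcs11:model=SoftHSM%20v2;serial=f0490bea35;pin-value=1234";

# Direct PKCS #11 URL (what Nikos argues for): the same string would
# identify the same key in any PKCS #11-aware application, regardless of
# which backend (engine_pkcs11, libp11, ...) nginx uses internally.
ssl_certificate_key "pkcs11:model=SoftHSM%20v2;serial=f0490bea35;pin-value=1234";
```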
Re: Fix windows issue with multiple workers
Hi, enclosed you will find an amended fix as a replacement for _sb-win-multi-worker-add-3.patch (I just forgot to save after renaming NGX_SINGLE_WORKER to NGX_CONF_UNSET_PTR before it was committed).

18.06.2015 21:55, Maxim Dounin:
> As I already tried to explain, the approach with inherited sockets is
> basically identical to what nginx does on UNIX with fork(). There are
> no reasons to keep things different if you can do it similarly on both
> platforms.

But it's not even roughly similar; see win32/ngx_process.c + win32/ngx_process_cycle.c in comparison to unix... This is already quite a big difference from unix.

> The goal is to minimize code bloat, not maximize it.

Let me think a bit about how I can make it a little smaller, or unite some code pieces with unix.

Regards,
sebres.

# HG changeset patch
# User sebres serg.bres...@sebres.de
# Date 1434103966 -7200
#      Fri Jun 12 12:12:46 2015 +0200
# Node ID b72d1091430e8899ee7d23c60924c100cba1c3ab
# Parent  76ee2fe9300bdcf0dbf4a05e3ed7a1136b324eb7
backwards compatibility for single worker (without inheritance of socket) + code review / amend fix

diff -r 76ee2fe9300b -r b72d1091430e src/core/ngx_connection.c
--- a/src/core/ngx_connection.c	Thu Jun 11 14:51:59 2015 +0200
+++ b/src/core/ngx_connection.c	Fri Jun 12 12:12:46 2015 +0200
@@ -427,33 +427,33 @@ ngx_open_listening_sockets(ngx_cycle_t *
 
         /* try to use shared sockets of master */
 
         if (ngx_process == NGX_PROCESS_MASTER) {
 
-            if (!shinfo) {
-                shinfo = ngx_get_listening_share_info(cycle, ngx_getpid());
-
-                if (!shinfo) {
-                    failed = 1;
-                    break;
-                }
+            if (!shinfo && ngx_get_listening_share_info(cycle, &shinfo,
+                    ngx_getpid()) != NGX_OK) {
+                failed = 1;
+                break;
             }
 
-            s = ngx_shared_socket(ls[i].sockaddr->sa_family, ls[i].type, 0,
-                    shinfo + i);
+            if (shinfo != NGX_CONF_UNSET_PTR) {
 
-            if (s == (ngx_socket_t) -1) {
-                ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno,
-                              ngx_socket_n " for inherited socket %V failed",
-                              ls[i].addr_text);
-                return NGX_ERROR;
+                s = ngx_shared_socket(ls[i].sockaddr->sa_family, ls[i].type, 0,
+                        shinfo + i);
+
+                if (s == (ngx_socket_t) -1) {
+                    ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno,
+                                  ngx_socket_n " for inherited socket %V failed",
+                                  ls[i].addr_text);
+                    return NGX_ERROR;
+                }
+
+                ngx_log_debug4(NGX_LOG_DEBUG_CORE, log, 0,
+                               "[%d] shared socket %d %V: %d",
+                               ngx_process, i, ls[i].addr_text, s);
+
+                ls[i].fd = s;
+
+                ls[i].listen = 1;
+
+                continue;
             }
-
-            ngx_log_debug4(NGX_LOG_DEBUG_CORE, log, 0,
-                           "[%d] shared socket %d %V: %d",
-                           ngx_process, i, ls[i].addr_text, s);
-
-            ls[i].fd = s;
-
-            ls[i].listen = 1;
-
-            continue;
         }
 
 #endif

diff -r 76ee2fe9300b -r b72d1091430e src/os/win32/ngx_process.c
--- a/src/os/win32/ngx_process.c	Thu Jun 11 14:51:59 2015 +0200
+++ b/src/os/win32/ngx_process.c	Fri Jun 12 12:12:46 2015 +0200
@@ -70,9 +70,7 @@ ngx_spawn_process(ngx_cycle_t *cycle, ch
         return pid;
     }
 
-#if (NGX_WIN32)
     ngx_share_listening_sockets(cycle, pid);
-#endif
 
     ngx_memzero(&ngx_processes[s], sizeof(ngx_process_t));

diff -r 76ee2fe9300b -r b72d1091430e src/os/win32/ngx_process_cycle.c
--- a/src/os/win32/ngx_process_cycle.c	Thu Jun 11 14:51:59 2015 +0200
+++ b/src/os/win32/ngx_process_cycle.c	Fri Jun 12 12:12:46 2015 +0200
@@ -113,11 +113,6 @@ ngx_master_process_cycle(ngx_cycle_t *cy
     events[2] = ngx_reopen_event;
     events[3] = ngx_reload_event;
 
-    /* does not close listener for win32, will be shared */
-#if (!NGX_WIN32)
-    ngx_close_listening_sockets(cycle);
-#endif
-
     if (ngx_start_worker_processes(cycle, NGX_PROCESS_RESPAWN) == 0) {
         exit(2);
     }
@@ -210,11 +205,6 @@ ngx_master_process_cycle(ngx_cycle_t *cy
 
     ngx_cycle = cycle;
 
-    /* does not close listener for win32, will be shared */
-#if (!NGX_WIN32)
-    ngx_close_listening_sockets(cycle);
-#endif
-
     if (ngx_start_worker_processes(cycle, NGX_PROCESS_JUST_RESPAWN)) {
         ngx_quit_worker_processes(cycle, 1);
     }

diff -r 76ee2fe9300b -r b72d1091430e src/os/win32/ngx_socket.c
--- a/src/os/win32/ngx_socket.c	Thu Jun 11 14:51:59 2015 +0200
+++