Re: Add support for buffering in scripted logs

2017-08-14 Thread Alexey Ivanov
You are indeed right about the open_file_cache, but you are still left with a synchronous
write(2) that can block, and blocking the event loop is a major source of latency
variance (unless you use flash-based drives, of course).
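
For context, caching of log file descriptors for variable log paths (so that only write(2) remains on the hot path) is done with the open_log_file_cache directive; a minimal sketch with illustrative values, where the $host-based path is just an example:

open_log_file_cache max=1024 inactive=20s min_uses=2 valid=1m;
access_log /var/log/nginx/$host.access.log main;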

> On Aug 14, 2017, at 2:24 PM, Eran Kornblau <eran.kornb...@kaltura.com> wrote:
> 
>> -Original Message-
>> From: nginx-devel [mailto:nginx-devel-boun...@nginx.org] On Behalf Of Alexey 
>> Ivanov
>> Sent: Monday, August 14, 2017 9:25 PM
>> To: nginx-devel@nginx.org
>> Subject: Re: Add support for buffering in scripted logs
>> 
>> Using syslog for that particular use case seems way more elegant, 
>> customizable, and simple. As a side bonus, you won't block the event loop on VFS 
>> operations (open/write/close).
>> 
> Thanks Alexey.
> 
> Regarding the last point about performance, I ran a quick test comparing:
> 1. Nginx writing to gzip access log -
> access_log /var/log/nginx/access_log.gz main gzip flush=5m;
> 2. Nginx writing to local rsyslog over UDP, which in turn writes to local 
> file -
> access_log syslog:server=127.0.0.1 main;
> 3. Nginx writing to remote rsyslog on the same LAN over UDP -
> access_log syslog:server=192.168.11.94 main;
> 
> Ran this command (execute 10 apache bench tests, take the median value):
> (for i in `seq 1 10`; do ab -c 1000 -n 10 127.0.0.1/alive.html 
> 2>/dev/null | grep '^  50%' ; done) | awk '{print $2}' | xargs
> 
> The nginx location being tested simply does 'return 200 "hello"'.
> 
> Results are:
> 1. 4 6 5 6 3 5 3 5 7 5
> 2. 6 7 5 7 6 7 7 7 7 7
> 3. 5 6 6 6 6 6 6 6 4 6
> 
> The numbers fluctuated a bit from one run to another (did it a few times), 
> but the overall trend was the same -
> syslog is slower than having nginx write with gzip. The difference is not 
> dramatic, but it's visible.
> This makes sense since with buffered writes, nginx only has to perform a 
> syscall once every X messages;
> most messages are written without any syscalls.
> Regarding what you wrote about open/write/close, when using variables (even 
> before this patch),
> it is possible to enable open file cache on log files, so you only have 
> 'write'.
> 
> Eran
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel




Re: Add support for buffering in scripted logs

2017-08-14 Thread Alexey Ivanov
Using syslog for that particular use case seems way more elegant, customizable, 
and simple. As a side bonus, you won't block the event loop on VFS operations 
(open/write/close).
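
For illustration, a minimal sketch of the syslog variant; the server address, facility and tag values here are placeholders, not from the original thread:

access_log syslog:server=127.0.0.1,facility=local7,tag=nginx,severity=info main;
error_log  syslog:server=127.0.0.1 warn;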

> On Aug 14, 2017, at 11:00 AM, Eran Kornblau  wrote:
> 
>> 
>> -Original Message-
>> From: nginx-devel [mailto:nginx-devel-boun...@nginx.org] On Behalf Of Maxim 
>> Dounin
>> Sent: Monday, August 14, 2017 8:34 PM
>> To: nginx-devel@nginx.org
>> Subject: Re: Add support for buffering in scripted logs
>> 
>>> Ok, so is that a final 'no' for this whole feature, or is there anything 
>>> else I can do to get this feature in?
>> 
>> It is certainly not a "final no".  As I wrote in the very first comment, a) 
>> it's just a quick note, nothing more, and b) the feature is questionable.  
>> If a good implementation is submitted, we can consider committing it.
>> 
> That's good, I thought you were just rejecting politely :)
> 
> It would be really great if you could point me to specific parts you think 
> look bad.
> For example, I'm guessing that you don't like the callbacks I added to the open 
> file cache,
> but I was thinking that it's better to do it this way than to duplicate large 
> chunks of code
> and write an open file cache specific to the log module; please let me know if 
> you think otherwise.
> Any feedback you can provide will be appreciated.
> 
> Thanks
> 
> Eran
> 
>> --
>> Maxim Dounin
>> http://nginx.org/
>> ___
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel




Re: coredump in 1.10.3

2017-03-13 Thread Alexey Ivanov
We have a couple of these per week. I was blaming our third-party modules, but 
it seems like vanilla is also affected.

> On Mar 13, 2017, at 7:22 AM, George .  wrote:
> 
> Yes, to me it looks like memory corruption, and it is really hard to guess with 
> only a bt.
> We will run with the in-memory debug log, but we have to wait till the next core. I'll 
> update you when we have more info.
> 
> On Mon, Mar 13, 2017 at 3:55 PM, Valentin V. Bartenev  wrote:
> On Monday 13 March 2017 15:24:46 George . wrote:
> > Hi Valentin, sorry, I sent the mail accidentally before I completed it ;)
> >
> >
> > ssl_proxy_cores # ./nginx -V
> > nginx version: nginx/1.10.3
> > built by gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4)
> > built with OpenSSL 1.0.2g  1 Mar 2016 (running with OpenSSL 1.0.2g-fips  1
> > Mar 2016)
> > TLS SNI support enabled
> > configure arguments: --prefix=/cdn/nginx_ssl_proxy --with-cc-opt='-O0 -g
> > -ggdb -march=core2' --with-debug --with-http_geoip_module
> > --with-http_realip_module --with-http_ssl_module
> > --without-http_charset_module --without-http_ssi_module
> > --without-http_userid_module --without-http_autoindex_module
> > --without-http_scgi_module --without-http_uwsgi_module
> > --without-http_fastcgi_module --without-http_limit_conn_module
> > --without-http_split_clients_module --without-http_limit_req_module
> > --with-http_stub_status_module --with-http_v2_module
> >
> >
> > and some variables values :
> >
> >
> > (gdb) p q
> > $1 = (ngx_queue_t *) 0x3fb0ab0
> > (gdb) p * q
> > $2 = {prev = 0xd3210507e0f72630, next = 0x5f5ded63e9edd904}
> > (gdb) p h2c->waiting
> > $3 = {prev = 0x3ac6ea0, next = 0x3fb0ab0}
> >
> >
> > and here is the config
> >
> [..]
> 
> Unfortunately, backtrace in this case is almost useless.
> 
> You should enable in-memory debug log:
> http://nginx.org/en/docs/debugging_log.html
> 
> Thus it will be possible to trace the events that resulted
> in segfault.
> 
>   wbr, Valentin V. Bartenev
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel




Re: HTTP/2 upstream support

2017-01-18 Thread Alexey Ivanov
Just as a data point: why do you need that functionality? Can you describe your 
particular use case?

> On Jan 17, 2017, at 8:37 AM, Sreekanth M via nginx-devel 
>  wrote:
> 
> 
> Is HTTP/2 proxy support planned ?
> 
> -Sreekanth
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel




Re: How to contribute fix for checking x509 extended key attrs to nginx?

2017-01-10 Thread Alexey Ivanov
On Jan 10, 2017, at 3:41 PM, Ethan Rahn via nginx-devel  
wrote:
> 
> Hello,
> 
> I noticed that nginx does not check x509v3 certificates (in 
> event/ngx_event_openssl.c::ngx_ssl_get_client_verify, as an example) to see 
> that the optional extended key usage settings are correct. I have a patch for 
> this that I would like to contribute, but I'm unable to find contribution 
> guidelines on the nginx website.
http://nginx.org/en/docs/contributing_changes.html

> The effect of this issue is that someone could offer a client certificate 
> that has extended key usage set to, say, serverAuth. This would be a violation 
> of RFC 5280, Section 4.2.1.12. I fix this by checking the bitfield manually 
> to see that the settings are correct.
> 
> Cheers,
> 
> Ethan
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel




Re: Why not remove UNIX domain socket before bind

2016-12-01 Thread Alexey Ivanov
Why not just use `flock(2)` there?

> On Nov 30, 2016, at 6:57 AM, Maxim Dounin  wrote:
> 
> Hello!
> 
> On Tue, Nov 29, 2016 at 01:30:25PM -0800, Shuxin Yang wrote:
> 
>> Is there any reason not to delete UNIX domain socket before bind?
> 
> To name a few, deleting a socket implies that:
> 
> a) any file can be accidentally deleted due to a typo in the
>   listen directive;
> 
> b) attempts to do duplicate listen are not detected and silently
>   break service, e.g., if you start duplicate instance of nginx.
> 
> Instead we delete the socket after closing it.
> 
> --
> Maxim Dounin
> http://nginx.org/
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel




Re: [PATCH] Added the $upstream_connection variable

2016-09-12 Thread Alexey Ivanov
+1 to that.

Connection reuse to an upstream is a very important metric for Edge->DC 
communication.
In our production, since we have nginx on both sides, we gather that metric 
from the other side of the connection. I assume not everybody has that luxury, 
therefore that stat would be useful.
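
For illustration, a minimal sketch of how the proposed variable could be used in a log_format, assuming the patch below is applied; the format name and companion fields are arbitrary:

log_format upstream_reuse '$remote_addr "$request" upstream=$upstream_addr '
                          'conn=$upstream_connection rt=$upstream_response_time';
access_log /var/log/nginx/upstream.log upstream_reuse;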

> On Aug 31, 2016, at 6:38 AM, Jason Stangroome  wrote:
> 
> Hello,
> 
> I am using nginx primarily as a proxy and I am looking to improve the
> visibility and control over keepalive connections to upstreams.
> 
> I have several patches I would like to submit for your consideration
> but I am relatively unfamiliar with the code base so I've started
> simple and I appreciate your feedback. I must say the code base has
> been very approachable to work with.
> 
> To begin I am adding support for an $upstream_connection variable for
> use with the log_format directive. This is essentially the same as the
> $connection variable but applies to upstream connections instead of
> downstream.
> 
> The intent of this variable is to help understand which requests are
> being serviced by the same upstream keepalive connection and which are
> using different connections.
> 
> I think I have followed the Contributing Changes page at nginx.org.
> I've honoured the existing code formatting and my `hg export` output
> follows my signature. I have also executed the tests from the
> nginx-tests repository in a Ubuntu Trusty environment but I did not
> have many nginx modules included in my build.
> 
> Regards,
> 
> Jason
> --
> # HG changeset patch
> # User Jason Stangroome 
> # Date 1472649436 0
> #  Wed Aug 31 13:17:16 2016 +
> # Node ID f06c8a934e3f3ceac2ff393a391234e225cbfcf1
> # Parent  c6372a40c2a731d8816160bf8f55a7a50050c2ac
> Added the $upstream_connection variable
> 
> Allows the connection identifier of the upstream connection used to service a
> proxied request to be logged in the access.log to understand which requests
> are using which upstream keepalive connections.
> 
> diff -r c6372a40c2a7 -r f06c8a934e3f src/http/ngx_http_upstream.c
> --- a/src/http/ngx_http_upstream.c Fri Aug 26 15:33:07 2016 +0300
> +++ b/src/http/ngx_http_upstream.c Wed Aug 31 13:17:16 2016 +
> @@ -161,6 +161,9 @@
> static ngx_int_t ngx_http_upstream_response_length_variable(
> ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data);
> 
> +static ngx_int_t ngx_http_upstream_connection_variable(
> +ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data);
> +
> static char *ngx_http_upstream(ngx_conf_t *cf, ngx_command_t *cmd,
> void *dummy);
> static char *ngx_http_upstream_server(ngx_conf_t *cf, ngx_command_t *cmd,
> void *conf);
> @@ -395,6 +398,10 @@
>   ngx_http_upstream_response_length_variable, 1,
>   NGX_HTTP_VAR_NOCACHEABLE, 0 },
> 
> +{ ngx_string("upstream_connection"), NULL,
> +  ngx_http_upstream_connection_variable, 0,
> +  NGX_HTTP_VAR_NOCACHEABLE, 0 },
> +
> #if (NGX_HTTP_CACHE)
> 
> { ngx_string("upstream_cache_status"), NULL,
> @@ -1804,6 +1811,7 @@
> 
> if (u->state->connect_time == (ngx_msec_t) -1) {
> u->state->connect_time = ngx_current_msec - u->state->response_time;
> +u->state->connection_number = c->number;
> }
> 
> if (!u->request_sent && ngx_http_upstream_test_connect(c) != NGX_OK) {
> @@ -5291,6 +5299,67 @@
> }
> 
> 
> +static ngx_int_t
> +ngx_http_upstream_connection_variable(ngx_http_request_t *r,
> +ngx_http_variable_value_t *v, uintptr_t data)
> +{
> +u_char *p;
> +size_t  len;
> +ngx_uint_t  i;
> +ngx_http_upstream_state_t  *state;
> +
> +v->valid = 1;
> +v->no_cacheable = 0;
> +v->not_found = 0;
> +
> +if (r->upstream_states == NULL || r->upstream_states->nelts == 0) {
> +v->not_found = 1;
> +return NGX_OK;
> +}
> +
> +len = r->upstream_states->nelts * (NGX_ATOMIC_T_LEN + 2);
> +
> +p = ngx_pnalloc(r->pool, len);
> +if (p == NULL) {
> +return NGX_ERROR;
> +}
> +
> +v->data = p;
> +
> +i = 0;
> +state = r->upstream_states->elts;
> +
> +for ( ;; ) {
> +
> +p = ngx_sprintf(p, "%uA", state[i].connection_number);
> +
> +if (++i == r->upstream_states->nelts) {
> +break;
> +}
> +
> +if (state[i].peer) {
> +*p++ = ',';
> +*p++ = ' ';
> +
> +} else {
> +*p++ = ' ';
> +*p++ = ':';
> +*p++ = ' ';
> +
> +if (++i == r->upstream_states->nelts) {
> +break;
> +}
> +
> +continue;
> +}
> +}
> +
> +v->len = p - v->data;
> +
> +return NGX_OK;
> +}
> +
> +
> ngx_int_t
> ngx_http_upstream_header_variable(ngx_http_request_t *r,
> ngx_http_variable_value_t *v, uintptr_t data)
> diff -r c6372a40c2a7 -r f06c8a934e3f 

Re: [PATCH 1 of 2] HTTP: add support for trailers in HTTP responses

2016-07-20 Thread Alexey Ivanov

> On Jul 20, 2016, at 6:23 PM, Maxim Dounin <mdou...@mdounin.ru> wrote:
> 
> Hello!
> 
> On Wed, Jul 20, 2016 at 03:34:46PM -0700, Alexey Ivanov wrote:
> 
>> Speaking of trailers: we had a couple of use cases for HTTP
>> trailers, most of them were around streaming data to the user.
>> 
>> For example, when a webpage is generated we send headers and part
>> of the body (usually up to ``) almost immediately, but
>> then we start querying all the micro services for the content
>> (SOA, yay!).
>> The problem is that upstreams will inevitably fail/timeout, and
>> when that happens there is no way to pass any metadata about the
>> error to nginx, since headers are already sent. Using trailers
>> here may improve MTTR since backend metadata is available on the
>> frontend.
>> 
>> Another example may be computing checksums for data while you
>> stream it and putting it in the trailer. This should reduce TTFB
>> by quite a lot on some workloads we have.
> 
> Do you actually use something like this, or know places where
> something like this is actually used?
These are examples from our production. Currently we are using workarounds for 
both of these problems. Though I'm not sure that we would use trailers if they 
were supported, since it's one of those very obscure HTTP/1.1 features that people do 
not usually know about.

That is starting to change bit by bit though, since people try using gRPC more 
and more.
> 
> Obviously enough, trailers have lots of theoretical use cases, and
> that's why they were introduced in HTTP/1.1 in the first place.
> The problem is that it doesn't seem to be used in the real world
> though.
> 
> --
> Maxim Dounin
> http://nginx.org/
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel




Re: [PATCH 1 of 2] HTTP: add support for trailers in HTTP responses

2016-07-20 Thread Alexey Ivanov
Speaking of trailers: we had a couple of use cases for HTTP trailers, most of 
them were around streaming data to the user.

For example, when a webpage is generated we send headers and part of the 
body (usually up to ``) almost immediately, but then we start querying 
all the microservices for the content (SOA, yay!).
The problem is that upstreams will inevitably fail/timeout, and when that 
happens there is no way to pass any metadata about the error to nginx, since 
headers are already sent. Using trailers here may improve MTTR since backend 
metadata is available on the frontend.

Another example may be computing checksums for data while you stream it and 
putting the digest in a trailer. This should reduce TTFB by quite a lot on some 
workloads we have, since the response can start streaming right away instead of 
being buffered just to compute a checksum header.
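
For illustration only, a sketch of how the checksum use case could look with the trailer support that later landed in nginx (the add_trailer directive); the location, header name and the $upstream_trailer_* variable here are assumptions, not part of this thread:

location /stream/ {
    proxy_pass http://backend;
    # emit a backend-computed digest as a trailer instead of buffering
    # the whole response just to put the digest into a header
    add_trailer X-Body-Checksum $upstream_trailer_x_body_checksum;
}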

> On Jul 13, 2016, at 5:34 PM, Piotr Sikora  wrote:
> 
> Hey Maxim,
> 
>> I'm talking about trailers in general (though it's more about
>> requests than responses).  Normal request (and response)
>> processing in nginx assumes that headers are processed before the
>> body, and adding trailers (which are headers "to be added later")
>> to the picture is likely to have various security implications.
> 
> Let's step back a bit... I have no plans to change the processing
> logic nor merge trailers with headers. Trailers are going to be
> ignored (passed, but not processed) by NGINX, not discarded.
> 
> AFAIK, Apache does the same thing.
> 
> Basically, at this point, trailers act as metadata for the application
> (browser, gRPC, 3rd-party NGINX module, etc.), with no HTTP semantics,
> so there are no security implications for NGINX itself.
> 
> Best regards,
> Piotr Sikora
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel




Re: [PATCH] Variables: added $tcpinfo_retrans

2015-12-21 Thread Alexey Ivanov
# HG changeset patch
# User Alexey Ivanov <savether...@gmail.com>
# Date 1450520577 28800
#  Sat Dec 19 02:22:57 2015 -0800
# Branch tcpi_retrans
# Node ID b018f837480dbad3dc45f1a2ba93fb99bc625ef5
# Parent  78b4e10b4367b31367aad3c83c9c3acdd42397c4
Variables: added $tcpinfo_retrans

This one is useful for debugging poor network conditions.

diff -r 78b4e10b4367 -r b018f837480d auto/unix
--- a/auto/unix Thu Dec 17 16:39:15 2015 +0300
+++ b/auto/unix Sat Dec 19 02:22:57 2015 -0800
@@ -384,6 +384,17 @@
 . auto/feature


+ngx_feature="TCP_INFO_RETRANS"
+ngx_feature_name="NGX_HAVE_TCP_INFO_RETRANS"
+ngx_feature_run=no
+ngx_feature_incs="#include "
+ngx_feature_path=
+ngx_feature_libs=
+ngx_feature_test="struct tcp_info ti;
+  ti.tcpi_retrans"
+. auto/feature
+
+
 ngx_feature="accept4()"
 ngx_feature_name="NGX_HAVE_ACCEPT4"
 ngx_feature_run=no
diff -r 78b4e10b4367 -r b018f837480d src/http/ngx_http_variables.c
--- a/src/http/ngx_http_variables.c Thu Dec 17 16:39:15 2015 +0300
+++ b/src/http/ngx_http_variables.c Sat Dec 19 02:22:57 2015 -0800
@@ -343,6 +343,11 @@

 { ngx_string("tcpinfo_rcv_space"), NULL, ngx_http_variable_tcpinfo,
   3, NGX_HTTP_VAR_NOCACHEABLE, 0 },
+
+#if (NGX_HAVE_TCP_INFO_RETRANS)
+{ ngx_string("tcpinfo_retrans"), NULL, ngx_http_variable_tcpinfo,
+  4, NGX_HTTP_VAR_NOCACHEABLE, 0 },
+#endif
 #endif

 { ngx_null_string, NULL, NULL, 0, 0, 0 }
@@ -1053,6 +1058,12 @@
 value = ti.tcpi_rcv_space;
 break;

+#if (NGX_HAVE_TCP_INFO_RETRANS)
+case 4:
+value = ti.tcpi_retrans;
+break;
+#endif
+
 /* suppress warning */
 default:
 value = 0;
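
For illustration, a minimal sketch of how the variable could be used once this patch is applied; the format name and path are arbitrary, and the companion $tcpinfo_* variables are the ones nginx already exposes when TCP_INFO is available:

log_format tcpdiag '$remote_addr "$request" status=$status '
                   'rtt=$tcpinfo_rtt rttvar=$tcpinfo_rttvar '
                   'cwnd=$tcpinfo_snd_cwnd retrans=$tcpinfo_retrans';
access_log /var/log/nginx/tcpdiag.log tcpdiag;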


> On Dec 21, 2015, at 6:03 AM, David CARLIER <devne...@gmail.com> wrote:
> 
> On 21 December 2015 at 13:57, Maxim Dounin <mdou...@mdounin.ru> wrote:
>> Hello!
>> 
>> On Mon, Dec 21, 2015 at 01:41:03PM +, David CARLIER wrote:
>> 
>>> I think FreeBSD has __tcpi_retrans but not "typedef" it though ...
>> 
>> It's just a placeholder for ABI compatibility, it's not set to
>> anything.
> 
> Yes, it is: many fields are not set (fewer than I first thought ...)
> 
> 
>> And either way it's named differently, so the patch
>> will break things.
>> 
> 
> Sure it would be meaningless in this case to use the FreeBSD field then.
> 
>> --
>> Maxim Dounin
>> http://nginx.org/
>> 
>> ___
>> nginx-devel mailing list
>> nginx-devel@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel




Error counters in nginx

2015-06-12 Thread Alexey Ivanov
Hi.

I have a feature request: from a system administrator's point of view it would be 
nice to have counters for each type of error log message.

For example, right now the nginx error.log consists of a myriad of different error 
message formats:
open() “%s” failed
directory index of “%s” is forbidden
SSL_write() failed
SSL_do_handshake()
zero size but in output chain
client intended to send too large body

To plot that stuff on a dashboard, a sysadmin needs to increase error_log 
verbosity, write a custom daemon that tails it, and then manually write a parser for 
every "interesting" log entry.
It would be nice if `ngx_log_error()` did all that work for us.

So that one could just specify the following config:
location /error_status {
error_status_format csv;
error_status;
}
and then the following command…
$ curl localhost/error_status
...would return:
core.zero_size_buf,2
http.module.autoindex.open.failed,3
http.core.directory_listing_forbidden,5
http.core.body_too_large,7
ssl.alerts.downstream.recv.certificate_expired,11
ssl.alerts.downstream.sent.close_notify,13
ssl.write.failed,17
ssl.handshake.failed,19

Apache Traffic Server, for example, has an abstraction for error stats, so we 
quickly implemented statistics for various TLS alerts [1] received from 
clients/upstreams (after that, rolling out preferred ciphersuite list changes 
and openssl upgrades became basically painless).

FreeBSD, for example, has a nice kernel API for counters: counter(9) [2]. A similar 
approach can be applied to nginx, with each worker maintaining its counters in a 
separate part of shared memory, so updates to them can be lockless (and even 
non-atomic, because there is no preemption inside one worker).

So the questions are:
* Would Nginx Inc. be interested in adding such functionality?
* How much does it interfere with Nginx Plus selling points? If somebody else 
writes that code, will it be possible to "upstream" it?


[1] https://issues.apache.org/jira/browse/TS-3007
[2] https://www.freebsd.org/cgi/man.cgi?query=counter&apropos=0&sektion=9&manpath=FreeBSD+11-current&format=html



Re: problems when use fastcgi_pass to deliver request to backend

2015-05-31 Thread Alexey Ivanov
If your backend can't handle 10k connections then you should limit them there. 
Forwarding requests to a backend that cannot handle them is generally 
a bad idea [1]; it is usually better to fail the request or make it wait for 
an available backend on the proxy itself.

Nginx can retry requests if it gets a timeout or RST (connection refused) from a 
backend [2]. That, combined with tcp_abort_on_overflow [3], the listen(2) backlog 
argument and maybe some application limits, should be enough to fix your problem 
(a config sketch follows the references below).

[1] http://genius.com/James-somers-herokus-ugly-secret-annotated
[2] http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream
[3] https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
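
For illustration, a minimal sketch of the nginx knobs mentioned above; the addresses, backlog and retry limits are placeholders, and tcp_abort_on_overflow is a kernel sysctl, not an nginx directive:

upstream backend {
    server 10.0.0.11:9000;
    server 10.0.0.12:9000;
}

server {
    # keep the kernel accept queue short so overload fails fast
    # (pairs with sysctl net.ipv4.tcp_abort_on_overflow=1)
    listen 80 backlog=128;

    location / {
        include fastcgi_params;
        fastcgi_pass backend;
        # retry another peer only on connect errors/timeouts
        fastcgi_next_upstream error timeout;
        fastcgi_next_upstream_tries 2;
    }
}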

 On May 29, 2015, at 3:48 AM, 林谡 li...@feinno.com wrote:
 
 Thanks for the reply,
 I had read all the discussions you suggested.
 The main reason is that multiplexing seems useless when using the “keep alive” 
 feature and the backend is fast enough.
 It's true! But the real world is more sophisticated.
 
 Our system is very big, and over 5k machines are providing services. In our 
 system, nginx proxy_passes http requests to http applications using “keep 
 alive”; it works well: over 10k requests are processed per second and 
 tcp connections between nginx and the backend stay below 100. But sometimes 
 response times become 1-10s or more for a while, because maybe a db server 
 fails over or the network shrinks. Over 10k tcp connections then need to be 
 set up, as we have seen.
 Our backend is written in java; connections cannot be set up all of a sudden, and 
 the memory needed is big, so GC collections became the bottleneck. GC keeps on 
 working even after the db server or network returns to normal, and the backend 
 server no longer works orderly; I have observed these things several times.
 
   With multiplexing, no more connections are needed and the memory needed is 
 far smaller under such a circumstance. We use multiplexing everywhere in our 
 java applications; it proves my idea.
 
   Nginx is needed for sure for client http access, so I studied fastcgi to 
 solve the above problem, but nginx does not support fastcgi multiplexing, so it 
 can trigger the same problem.
 
   As a conclusion, a big production system really needs nginx to pass 
 requests to the backend with multiplexing. Can you make the nginx development 
 team work on it?
 
 
 
 From: Sergey Brester [mailto:serg.bres...@sebres.de]
 Sent: May 29, 2015, 16:40
 To: nginx-devel@nginx.org
 Cc: 林谡
 Subject: Re: Re: problems when use fastcgi_pass to deliver request to backend
 
 Hi,
 
 It's called fastcgi multiplexing and nginx currently does not implement that 
 (and I don't know .
 
 There were already several discussions about that, so read here, please.
 
 Short, very fast fastcgi processing may be implemented without multiplexing 
 (should be event-driven also).
 
 Regards,
 sebres.
 
 
 
 On 29.05.2015 09:58, 林谡 wrote:
 
 
    /* we support the single request per connection */

    case ngx_http_fastcgi_st_request_id_hi:
        if (ch != 0) {
            ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                          "upstream sent unexpected FastCGI "
                          "request id high byte: %d", ch);
            return NGX_ERROR;
        }
        state = ngx_http_fastcgi_st_request_id_lo;
        break;

    case ngx_http_fastcgi_st_request_id_lo:
        if (ch != 1) {
            ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                          "upstream sent unexpected FastCGI "
                          "request id low byte: %d", ch);
            return NGX_ERROR;
        }
        state = ngx_http_fastcgi_st_content_length_hi;
        break;
 By reading the source code I saw the reason, so can nginx support multiple 
 requests per connection in the future?
 
 From: 林谡
 Sent: May 29, 2015, 11:37
 To: 'nginx-devel@nginx.org'
 Subject: problems when use fastcgi_pass to deliver request to backend
 
 Hi,
  I wrote a fastcgi server and use nginx to pass requests to my server. 
 It has worked so far.
  But I found a problem: nginx always sets requestId = 1 when sending a 
 fastcgi record.
  I was a little upset by this, because according to the fastcgi protocol, a 
 web server can send fastcgi records belonging to different requests 
 simultaneously, with requestIds that are different and kept unique. I really need 
 this feature, because requests can then be handled simultaneously over just one 
 connection.
  Can I find a way out?
 
 
 ___
 nginx-devel mailing list
 nginx-devel@nginx.org
 http://mailman.nginx.org/mailman/listinfo/nginx-devel
 ___
 nginx-devel mailing list
 nginx-devel@nginx.org
 http://mailman.nginx.org/mailman/listinfo/nginx-devel


