Re: How important is the "reuseport" flag for quic?

2024-06-25 Thread J Carter
Hello,

On Sat, 15 Jun 2024 13:58:55 +0100
Dominic Preston  wrote:

> I'm using nginx 1.26.1 from the nginx.org ubuntu repo.
> 
> I find when I remove the "reuseport" flag from the "listen" directive
> for my quic port, a lot of page assets fail to load, and the browser
> ultimately falls back to http/2.
> 
> When I re-add "reuseport", all http/3 requests succeed again.
> 
> How crucial is "reuseport" when using quic on nginx? Is it normal for
> things to break badly without it?

Very. The reuseport parameter is required when using multiple
workers, as per the documentation here[1], so that all UDP packets of
an HTTP/3 (QUIC) connection are routed to and handled by the same
worker, as I understand it.

This is due to limitations in how UDP (being stateless and
sessionless) is handled by the kernel compared to TCP (which is
session oriented).

[1] https://nginx.org/en/docs/quic.html#configuration
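
A minimal sketch of the relevant listen directives (hypothetical
server; note reuseport may only be specified once per address:port
across the configuration):

    server {
        listen 443 ssl;
        listen 443 quic reuseport;  # required with multiple workers
        ...
    }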


Re: Allow response with AD bit in resolver

2024-06-17 Thread J Carter
Hello,

On Mon, 17 Jun 2024 10:22:24 +0100
Kirill A. Korinsky  wrote:

> On Mon, 17 Jun 2024 00:21:27 +0100,
> J Carter  wrote:
> >
> > Well *I* quite agree.
> >
> > I would also suggest that as DNS functionality in nginx is strictly
> > limited to resolving as a client in quite a simplistic fashion, and nginx
> > does not support DNSSEC, it makes little sense to be hyper-strict about
> > the DNSSEC extension bits in general, regardless of what is written in
> > the RFCs.
> >
> > Perhaps it would be better if the patch linked to in my previous
> > response was bumped and reconsidered over your patch, as that would also
> > ignore an incorrectly set CD bit in addition to ignoring the AD bit, which
> > also appears to be a common issue with certain recursive resolvers.  
> 
> Well, the CD bit means that this response contains a response that fails
> DNSSEC, but for some reason was sent back.
> 
> I've checked unbound and it returns SERVFAIL in such a case, or with no CD
> bit set if DNSSEC validation is off.
> 
> Also, I've checked OpenBSD's unwind, which is libunbound-based, which has
> the accept bogus option for forwarder to tolerate invalid DNSSEC.
> 
> Finally, I've tested a random WiFi router running dnsmasq (confirmed by
> fpdns) and it also returns SERVFAIL with broken DNSSEC.
> 
> Do you have an example of zone and resolver that will set CD bit?
> 
> --
> wbr, Kirill

It's caused by DNS cache poisoning (either intentional or
unintentional), from a recursive resolver that caches the CD bit but
does not zero it when a non-DNSSEC query hits that cached response.

I see unbound also has a ticket open for this:
https://github.com/NLnetLabs/unbound/issues/649


Re: Allow response with AD bit in resolver

2024-06-16 Thread J Carter
Hello,

On Sun, 16 Jun 2024 10:07:28 +0100
Kirill A. Korinsky  wrote:

> On Sun, 16 Jun 2024 02:45:15 +0100,
> J Carter  wrote:
> > 
> > Sounds familiar :)
> > 
> > https://mailman.nginx.org/pipermail/nginx-devel/2022-May/YQ3MYP4VNQYWEJS3XYLPMU4HZUKS4PYF.html
> 
> Unfortunately, the AD bit is set by the libunbound-based resolver when it is
> configured to use an upstream forwarder that also supports security.
> 
> My guess is that unbound uses itself as a DNS client in this case and sets
> such a bit on requests to the upstream.
> 
> Probably it can be fixed in unbound / libunbound, but such behavior exists
> right now and affects many different devices...
> 
> Thus, RFC 6840 suggests setting such a bit when a request has one, but does
> not require it, which means that the current logic of libunbound is RFC
> compliant, and nginx violates it by rejecting such a response.
> 

Well *I* quite agree. 

I would also suggest that as DNS functionality in nginx is strictly
limited to resolving as a client in quite a simplistic fashion, and nginx
does not support DNSSEC, it makes little sense to be hyper-strict about
the DNSSEC extension bits in general, regardless of what is written in
the RFCs.

Perhaps it would be better if the patch linked to in my previous
response was bumped and reconsidered over your patch, as that would also
ignore an incorrectly set CD bit in addition to ignoring the AD bit, which
also appears to be a common issue with certain recursive resolvers.


Re: Allow response with AD bit in resolver

2024-06-15 Thread J Carter
On Sun, 16 Jun 2024 04:29:51 +0300
Maxim Dounin  wrote:

> Hello!
> 
> On Sat, Jun 15, 2024 at 12:02:28PM +0100, Kirill A. Korinsky wrote:
> 
> > Greetings,
> > 
> > Here a trivial patch which allows DNS responses with enabled AD bit
> > from used resolver.
> > 
> > Index: src/core/ngx_resolver.c
> > --- src/core/ngx_resolver.c.orig
> > +++ src/core/ngx_resolver.c
> > @@ -1774,7 +1774,7 @@ ngx_resolver_process_response(ngx_resolver_t *r, u_cha
> > (response->nar_hi << 8) + response->nar_lo);
> >  
> >  /* response to a standard query */
> > -if ((flags & 0xf870) != 0x8000 || (trunc && tcp)) {
> > +if ((flags & 0xf850) != 0x8000 || (trunc && tcp)) {
> >  ngx_log_error(r->log_level, r->log, 0,
> >"invalid %s DNS response %ui fl:%04Xi",
> >tcp ? "TCP" : "UDP", ident, flags);
> > 
> 
> Looks good to me, pushed with an appropriate commit log, thanks.
>

Sounds familiar :)

https://mailman.nginx.org/pipermail/nginx-devel/2022-May/YQ3MYP4VNQYWEJS3XYLPMU4HZUKS4PYF.html
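
For reference, a breakdown of the mask change (flag bit positions per
the RFC 1035 / RFC 4035 DNS header layout):

    /*
     * DNS header flags word:
     *   QR 0x8000 | Opcode 0x7800 | AA 0x0400 | TC 0x0200 | RD 0x0100
     *   RA 0x0080 | Z 0x0040 | AD 0x0020 | CD 0x0010 | RCODE 0x000f
     *
     * old mask 0xf870 = QR | Opcode | Z | AD | CD  (AD set -> rejected)
     * new mask 0xf850 = QR | Opcode | Z | CD       (AD bit now ignored)
     */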


Re: Custom HTTP protocol response?

2024-06-15 Thread J Carter
Hello,

On Thu, 13 Jun 2024 16:05:18 +0100
Kirill A. Korinsky  wrote:

> On Mon, 10 Jun 2024 09:56:05 +0100,
> Martin Kjær Jørgensen via nginx  wrote:
> > 
> > 
> > Is this possible without hacking nginx sources or manipulative intermediate
> > proxies?
> > 
> 
> As you may see in ngx_http_header_filter_module.c such string is hardcoded.
> 

Indeed. 

I'd recommend stream njs, with its filter phase handler, js_filter[1],
if you prefer to perform such intermediary manipulation within nginx
itself.

There is an example of performing HTTP manipulation with js_filter
here[2], albeit for injecting a custom header, but the same approach
would work with the 'download' callback.

The Stream module itself can do TLS offloading[3], now has virtual
servers[4], and now has a direct pass-to-http-listener feature[5],
making such hacks more workable and efficient :).

[1]https://nginx.org/en/docs/stream/ngx_stream_js_module.html#js_filter
[2]https://github.com/nginx/njs-examples/blob/master/njs/stream/inject_header.js
[3]https://nginx.org/en/docs/stream/ngx_stream_ssl_module.html 
[4]https://nginx.org/en/docs/stream/ngx_stream_core_module.html#server_name
[5]https://nginx.org/en/docs/stream/ngx_stream_pass_module.html
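
As a rough sketch of how those pieces could fit together (hypothetical
script, certificate paths and upstream; only the directives themselves
are from the modules above):

    stream {
        js_import filter from http_filter.js;  # hypothetical njs script, as in [2]

        server {
            listen 443 ssl;                    # TLS offloading, [3]
            server_name example.com;           # stream virtual server, [4]

            ssl_certificate     /etc/ssl/cert.pem;
            ssl_certificate_key /etc/ssl/key.pem;

            js_filter filter.status_line;      # rewrite bytes in flight, [1]
            proxy_pass 127.0.0.1:8080;         # plain-text http{} listener
        }
    }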


Re: [PATCH 1 of 2] Rewritten host header validation to follow generic parsing rules

2024-06-15 Thread J Carter
Hello Sergey,

I had some trouble importing your patches in one go due to missing
whitespace after patches 2 and 3. In case anyone else has the same
issue, just add one newline after each of those patches.

On Fri, 7 Jun 2024 20:48:23 +0400
Sergey Kandaurov  wrote:

> On Tue, May 28, 2024 at 12:53:46PM +0100, J Carter wrote:
> > Hello Sergey,
> > 
> > On Mon, 27 May 2024 14:21:43 +0400
> > Sergey Kandaurov  wrote:
> > 
> > > # HG changeset patch
> > > # User Sergey Kandaurov 
> > > # Date 1716805272 -14400
> > > #  Mon May 27 14:21:12 2024 +0400
> > > # Node ID e82a7318ed48fdbc1273771bc96357e9dc232975
> > > # Parent  f58b6f6362387eeace46043a6fc0bceb56a6786a
> > > Rewritten host header validation to follow generic parsing rules.
> > > 
> > > It now uses a generic model of state-based machine, with more strict
> > > parsing rules borrowed from ngx_http_validate_host(),
> > 
> > I think you mean "borrowed from ngx_http_parse_request_line()".
> > 
> 
> Sure, tnx.
> 
> The problem is that both functions make a subset of parsing
> and validation, and these sets just partially intersect.
> In particular, ngx_http_parse_request_line() currently detects
> invalid characters in a port subcomponent of authority, and
> ngx_http_validate_host() handles a trailing dot.
> So I think it makes sense to make them unified; this will also
> remove the need to validate host in absolute URIs.
> Below is an updated version with both parsers further polished
> (stream part is excluded for now).
> 
> Also, it may make sense to rename ngx_http_validate_host()
> to something like ngx_http_parse_host(), similar to
> ngx_http_parse_uri(), out of this series.

Makes sense.

> 
> # HG changeset patch
> # User Sergey Kandaurov 
> # Date 171582 -14400
> #  Fri Jun 07 20:26:22 2024 +0400
> # Node ID 0cba4301e4980871de7aceb46acddf8f2b5a7318
> # Parent  02e9411009b987f408214ab4a8b6b6093f843bcd
> Improved parsing of host in absolute URIs.
> 
> When the request line is in the absolute-URI form, a host identified
> by a registered name (reg-name) is now restricted to start with an
> alphanumeric character (see RFC 1123, RFC 3986).  Previously, empty
> domain labels or host starting with a hyphen were accepted.
> 
> Additionally, host with a trailing dot is taken into account.
> 
> diff --git a/src/http/ngx_http_parse.c b/src/http/ngx_http_parse.c
> --- a/src/http/ngx_http_parse.c
> +++ b/src/http/ngx_http_parse.c
> @@ -113,8 +113,10 @@ ngx_http_parse_request_line(ngx_http_req
>  sw_schema_slash_slash,
>  sw_host_start,
>  sw_host,
> +sw_host_dot,
>  sw_host_end,
>  sw_host_ip_literal,
> +sw_host_ip_literal_dot,
>  sw_port,
>  sw_after_slash_in_uri,
>  sw_check_uri,
> @@ -354,27 +356,50 @@ ngx_http_parse_request_line(ngx_http_req
>  break;
>  }
>  
> -state = sw_host;
> +if (ch == '.' || ch == '-') {
> +return NGX_HTTP_PARSE_INVALID_REQUEST;

One inconsistency I noticed while testing these patches is the difference
in the usefulness of error logs between an invalid host header and an
invalid host value in an absolute URI.

Absolute URI parsing simply emits a vague "error while parsing
client request line" for everything, including failed host checks,
whereas a host header failure error is naturally more specific about
what the error is. Perhaps improving that is beyond the scope of this
series, however.

> +}
>  
>  /* fall through */
>  
>  case sw_host:
> +case sw_host_dot:
> +
> +if (ch == '.') {
> +if (state == sw_host_dot) {
> +return NGX_HTTP_PARSE_INVALID_REQUEST;
> +}
> +
> +state = sw_host_dot;
> +break;
> +}
>  
>  c = (u_char) (ch | 0x20);
>  if (c >= 'a' && c <= 'z') {
> +state = sw_host;
> +break;
> +}
> +
> +if ((ch >= '0' && ch <= '9') || ch == '-') {
> +state = sw_host;
>  break;
>  }
>  
> -if ((ch >= '0' && ch <= '9') || ch == '.' || ch == '-') {
> -break;
> +if (state == sw_host_start) {
> +return NGX_HTTP_PARSE_INVALID_REQUEST;
> +}
> +
> +if (state == sw_host_dot) {
> +r->host_end = p - 1;
> +
> +} else {
> +r->host_e

Re: [PATCH] Slice filter: proxy_cache_background_update support (ticket #1348)

2024-06-12 Thread J Carter
On Tue, 11 Jun 2024 02:46:13 +0100
J Carter  wrote:

> Style cleanup of previous patch.
> 
> Also removed the 'status already returned' guard in the slice range
> variable handler, as it is no longer needed given other changes in
> the patch.
>

Additional fixes.

# HG changeset patch
# User J Carter 
# Date 1718225771 -3600
#  Wed Jun 12 21:56:11 2024 +0100
# Node ID 8edd891af4d6474ea139490e3662241212926244
# Parent  02e9411009b987f408214ab4a8b6b6093f843bcd
Slice filter: proxy_cache_background_update support (ticket #1348).

Previously, subrequests of a slice subrequest would have an empty
$slice_range variable value. This prevented
proxy_cache_background_update and friends from successfully
fetching slices and populating the cache correctly.

This occurred for two reasons:
- Firstly, a single context was reused for all slice subrequests,
where each $slice_range value was overwritten by subsequent slice
subrequests.
- Secondly, subrequests not initiated by the slice filter were unable
to access $slice_range in a parent subrequest.

Each slice subrequest now retains its $slice_range, and subrequests of
slice subrequests now utilize the parent slice subrequest's
$slice_range if available.

diff --git a/src/http/modules/ngx_http_slice_filter_module.c 
b/src/http/modules/ngx_http_slice_filter_module.c
--- a/src/http/modules/ngx_http_slice_filter_module.c
+++ b/src/http/modules/ngx_http_slice_filter_module.c
@@ -18,11 +18,16 @@ typedef struct {
 typedef struct {
 off_tstart;
 off_tend;
-ngx_str_trange;
 ngx_str_tetag;
 unsigned last:1;
 unsigned active:1;
 ngx_http_request_t  *sr;
+} ngx_http_slice_shctx_t;
+
+
+typedef struct {
+ngx_str_t range;
+ngx_http_slice_shctx_t *sh;
 } ngx_http_slice_ctx_t;
 
 
@@ -105,6 +110,7 @@ ngx_http_slice_header_filter(ngx_http_re
 ngx_int_trc;
 ngx_table_elt_t *h;
 ngx_http_slice_ctx_t*ctx;
+ngx_http_slice_shctx_t  *sh;
 ngx_http_slice_loc_conf_t   *slcf;
 ngx_http_slice_content_range_t   cr;
 
@@ -113,6 +119,8 @@ ngx_http_slice_header_filter(ngx_http_re
 return ngx_http_next_header_filter(r);
 }
 
+sh = ctx->sh;
+
 if (r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) {
 if (r == r->main) {
 ngx_http_set_ctx(r, NULL, ngx_http_slice_filter_module);
@@ -127,10 +135,10 @@ ngx_http_slice_header_filter(ngx_http_re
 
 h = r->headers_out.etag;
 
-if (ctx->etag.len) {
+if (sh->etag.len) {
 if (h == NULL
-|| h->value.len != ctx->etag.len
-|| ngx_strncmp(h->value.data, ctx->etag.data, ctx->etag.len)
+|| h->value.len != sh->etag.len
+|| ngx_strncmp(h->value.data, sh->etag.data, sh->etag.len)
!= 0)
 {
 ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
@@ -140,7 +148,7 @@ ngx_http_slice_header_filter(ngx_http_re
 }
 
 if (h) {
-ctx->etag = h->value;
+sh->etag = h->value;
 }
 
 if (ngx_http_slice_parse_content_range(r, ) != NGX_OK) {
@@ -163,15 +171,15 @@ ngx_http_slice_header_filter(ngx_http_re
 
 end = ngx_min(cr.start + (off_t) slcf->size, cr.complete_length);
 
-if (cr.start != ctx->start || cr.end != end) {
+if (cr.start != sh->start || cr.end != end) {
 ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
   "unexpected range in slice response: %O-%O",
   cr.start, cr.end);
 return NGX_ERROR;
 }
 
-ctx->start = end;
-ctx->active = 1;
+sh->start = end;
+sh->active = 1;
 
 r->headers_out.status = NGX_HTTP_OK;
 r->headers_out.status_line.len = 0;
@@ -198,16 +206,16 @@ ngx_http_slice_header_filter(ngx_http_re
 r->preserve_body = 1;
 
 if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) {
-if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) {
-ctx->start = slcf->size
- * (r->headers_out.content_offset / slcf->size);
+if (sh->start + (off_t) slcf->size <= r->headers_out.content_offset) {
+sh->start = slcf->size
+* (r->headers_out.content_offset / slcf->size);
 }
 
-ctx->end = r->headers_out.content_offset
-   + r->headers_out.content_length_n;
+sh->end = r->headers_out.content_offset
+  + r->headers_out.content_length_n;
 
 } else {
-ctx->end = cr.complete_length;
+sh->end = cr.complete_length;
 }
 
 return rc;
@@ -217,9 +225,11 @@ ngx_http_slice_header_filter(ngx_http_re
 static ngx_int_t
 ngx_http_slice_body_filter(ngx_http_request_t *r, ngx_chai

Re: [PATCH] Slice filter: proxy_cache_background_update support (ticket #1348)

2024-06-10 Thread J Carter
Style cleanup of previous patch.

Also removed the 'status already returned' guard in the slice range
variable handler, as it is no longer needed given other changes in
the patch.

# HG changeset patch
# User J Carter 
# Date 1718069043 -3600
#  Tue Jun 11 02:24:03 2024 +0100
# Node ID bc3e20f3f1f6f86da2ad44bb5b4742bced210b97
# Parent  02e9411009b987f408214ab4a8b6b6093f843bcd
Slice filter: proxy_cache_background_update support (ticket #1348).

Previously, subrequests of a slice subrequest would have an empty
$slice_range variable value. This prevented
proxy_cache_background_update and friends from successfully
fetching slices and populating the cache correctly.

This occurred for two reasons:
- Firstly, a single context was reused for all slice subrequests,
where each $slice_range value was overwritten by subsequent slice
subrequests.
- Secondly, subrequests not initiated by the slice filter were unable
to access $slice_range in a parent subrequest.

Each slice subrequest now retains its $slice_range, and subrequests of
slice subrequests now utilize the parent slice subrequest's
$slice_range if available.

diff --git a/src/http/modules/ngx_http_slice_filter_module.c 
b/src/http/modules/ngx_http_slice_filter_module.c
--- a/src/http/modules/ngx_http_slice_filter_module.c
+++ b/src/http/modules/ngx_http_slice_filter_module.c
@@ -18,11 +18,16 @@ typedef struct {
 typedef struct {
 off_tstart;
 off_tend;
-ngx_str_trange;
 ngx_str_tetag;
 unsigned last:1;
 unsigned active:1;
 ngx_http_request_t  *sr;
+} ngx_http_slice_shctx_t;
+
+
+typedef struct {
+ngx_str_t range;
+ngx_http_slice_shctx_t *sh;
 } ngx_http_slice_ctx_t;
 
 
@@ -105,6 +110,7 @@ ngx_http_slice_header_filter(ngx_http_re
 ngx_int_trc;
 ngx_table_elt_t *h;
 ngx_http_slice_ctx_t*ctx;
+ngx_http_slice_shctx_t  *sh;
 ngx_http_slice_loc_conf_t   *slcf;
 ngx_http_slice_content_range_t   cr;
 
@@ -113,6 +119,8 @@ ngx_http_slice_header_filter(ngx_http_re
 return ngx_http_next_header_filter(r);
 }
 
+sh = ctx->sh;
+
 if (r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) {
 if (r == r->main) {
 ngx_http_set_ctx(r, NULL, ngx_http_slice_filter_module);
@@ -127,10 +135,10 @@ ngx_http_slice_header_filter(ngx_http_re
 
 h = r->headers_out.etag;
 
-if (ctx->etag.len) {
+if (sh->etag.len) {
 if (h == NULL
-|| h->value.len != ctx->etag.len
-|| ngx_strncmp(h->value.data, ctx->etag.data, ctx->etag.len)
+|| h->value.len != sh->etag.len
+|| ngx_strncmp(h->value.data, sh->etag.data, sh->etag.len)
!= 0)
 {
 ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
@@ -140,7 +148,7 @@ ngx_http_slice_header_filter(ngx_http_re
 }
 
 if (h) {
-ctx->etag = h->value;
+sh->etag = h->value;
 }
 
 if (ngx_http_slice_parse_content_range(r, ) != NGX_OK) {
@@ -163,15 +171,15 @@ ngx_http_slice_header_filter(ngx_http_re
 
 end = ngx_min(cr.start + (off_t) slcf->size, cr.complete_length);
 
-if (cr.start != ctx->start || cr.end != end) {
+if (cr.start != sh->start || cr.end != end) {
 ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
   "unexpected range in slice response: %O-%O",
   cr.start, cr.end);
 return NGX_ERROR;
 }
 
-ctx->start = end;
-ctx->active = 1;
+sh->start = end;
+sh->active = 1;
 
 r->headers_out.status = NGX_HTTP_OK;
 r->headers_out.status_line.len = 0;
@@ -198,16 +206,16 @@ ngx_http_slice_header_filter(ngx_http_re
 r->preserve_body = 1;
 
 if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) {
-if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) {
-ctx->start = slcf->size
- * (r->headers_out.content_offset / slcf->size);
+if (sh->start + (off_t) slcf->size <= r->headers_out.content_offset) {
+sh->start = slcf->size
+* (r->headers_out.content_offset / slcf->size);
 }
 
-ctx->end = r->headers_out.content_offset
-   + r->headers_out.content_length_n;
+sh->end = r->headers_out.content_offset
+  + r->headers_out.content_length_n;
 
 } else {
-ctx->end = cr.complete_length;
+sh->end = cr.complete_length;
 }
 
 return rc;
@@ -217,9 +225,11 @@ ngx_http_slice_header_filter(ngx_http_re
 static ngx_int_t
 ngx_http_slice_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
 {
+u_char *p;
 ngx_int_t   rc;
 ngx_chain_t 

[PATCH] Slice filter: proxy_cache_background_update support (ticket #1348)

2024-06-08 Thread J Carter
# HG changeset patch
# User J Carter 
# Date 1717886685 -3600
#  Sat Jun 08 23:44:45 2024 +0100
# Node ID 1b8a60f7640be4a900ac77d8022b7d8cc6944186
# Parent  02e9411009b987f408214ab4a8b6b6093f843bcd
Slice filter: proxy_cache_background_update support (ticket #1348).

Previously, subrequests of a slice subrequest would have an empty
$slice_range variable value. This prevented
proxy_cache_background_update and friends from successfully
fetching slices and populating the cache correctly.

This occurred for two reasons:
- Firstly, a single context was reused for all slice subrequests,
where each $slice_range value was overwritten by subsequent slice
subrequests.
- Secondly, subrequests not initiated by the slice filter were unable
to access $slice_range in a parent subrequest.

Each slice subrequest now retains its $slice_range, and subrequests of
slice subrequests now utilize the parent slice subrequest's
$slice_range if available.

diff --git a/src/http/modules/ngx_http_slice_filter_module.c 
b/src/http/modules/ngx_http_slice_filter_module.c
--- a/src/http/modules/ngx_http_slice_filter_module.c
+++ b/src/http/modules/ngx_http_slice_filter_module.c
@@ -18,11 +18,16 @@ typedef struct {
 typedef struct {
 off_tstart;
 off_tend;
-ngx_str_trange;
 ngx_str_tetag;
 unsigned last:1;
 unsigned active:1;
 ngx_http_request_t  *sr;
+} ngx_http_slice_shctx_t;
+
+
+typedef struct {
+ngx_str_t range;
+ngx_http_slice_shctx_t *sh;
 } ngx_http_slice_ctx_t;
 
 
@@ -105,6 +110,7 @@ ngx_http_slice_header_filter(ngx_http_re
 ngx_int_trc;
 ngx_table_elt_t *h;
 ngx_http_slice_ctx_t*ctx;
+ngx_http_slice_shctx_t  *sh;
 ngx_http_slice_loc_conf_t   *slcf;
 ngx_http_slice_content_range_t   cr;
 
@@ -113,6 +119,8 @@ ngx_http_slice_header_filter(ngx_http_re
 return ngx_http_next_header_filter(r);
 }
 
+sh = ctx->sh;
+
 if (r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) {
 if (r == r->main) {
 ngx_http_set_ctx(r, NULL, ngx_http_slice_filter_module);
@@ -127,10 +135,10 @@ ngx_http_slice_header_filter(ngx_http_re
 
 h = r->headers_out.etag;
 
-if (ctx->etag.len) {
+if (sh->etag.len) {
 if (h == NULL
-|| h->value.len != ctx->etag.len
-|| ngx_strncmp(h->value.data, ctx->etag.data, ctx->etag.len)
+|| h->value.len != sh->etag.len
+|| ngx_strncmp(h->value.data, sh->etag.data, sh->etag.len)
!= 0)
 {
 ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
@@ -140,7 +148,7 @@ ngx_http_slice_header_filter(ngx_http_re
 }
 
 if (h) {
-ctx->etag = h->value;
+sh->etag = h->value;
 }
 
 if (ngx_http_slice_parse_content_range(r, ) != NGX_OK) {
@@ -163,15 +171,15 @@ ngx_http_slice_header_filter(ngx_http_re
 
 end = ngx_min(cr.start + (off_t) slcf->size, cr.complete_length);
 
-if (cr.start != ctx->start || cr.end != end) {
+if (cr.start != sh->start || cr.end != end) {
 ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
   "unexpected range in slice response: %O-%O",
   cr.start, cr.end);
 return NGX_ERROR;
 }
 
-ctx->start = end;
-ctx->active = 1;
+sh->start = end;
+sh->active = 1;
 
 r->headers_out.status = NGX_HTTP_OK;
 r->headers_out.status_line.len = 0;
@@ -198,16 +206,16 @@ ngx_http_slice_header_filter(ngx_http_re
 r->preserve_body = 1;
 
 if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) {
-if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) {
-ctx->start = slcf->size
+if (sh->start + (off_t) slcf->size <= r->headers_out.content_offset) {
+sh->start = slcf->size
  * (r->headers_out.content_offset / slcf->size);
 }
 
-ctx->end = r->headers_out.content_offset
+sh->end = r->headers_out.content_offset
+ r->headers_out.content_length_n;
 
 } else {
-ctx->end = cr.complete_length;
+sh->end = cr.complete_length;
 }
 
 return rc;
@@ -217,9 +225,11 @@ ngx_http_slice_header_filter(ngx_http_re
 static ngx_int_t
 ngx_http_slice_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
 {
+u_char *p;
 ngx_int_t   rc;
 ngx_chain_t*cl;
-ngx_http_slice_ctx_t   *ctx;
+ngx_http_slice_ctx_t   *ctx, *sr_ctx;
+ngx_http_slice_shctx_t *sh;
 ngx_http_slice_loc_conf_t  *slcf;
 
 ctx = ngx_http_get_module_ctx(r, ngx_http_slice_filter_module);
@@ -228,32 +238,34 @@ ngx_http_slice_body_filter(ngx_http_r

Re: [PATCH 1 of 2] Rewritten host header validation to follow generic parsing rules

2024-05-28 Thread J Carter
Hello Sergey,

On Mon, 27 May 2024 14:21:43 +0400
Sergey Kandaurov  wrote:

> # HG changeset patch
> # User Sergey Kandaurov 
> # Date 1716805272 -14400
> #  Mon May 27 14:21:12 2024 +0400
> # Node ID e82a7318ed48fdbc1273771bc96357e9dc232975
> # Parent  f58b6f6362387eeace46043a6fc0bceb56a6786a
> Rewritten host header validation to follow generic parsing rules.
> 
> It now uses a generic model of state-based machine, with more strict
> parsing rules borrowed from ngx_http_validate_host(),

I think you mean "borrowed from ngx_http_parse_request_line()".

> with additional
> checks for double dots and stripping a port subcomponent.
> 
> Notably, now a port subcomponent of the Host header is restricted
> to digits, using underscores in domain name labels is prohibited.
> 
[...]


Re: NGINX multiple authentication methods (one or the other) AND an IP check seems impossible

2024-05-27 Thread J Carter
Hello,

[...]

> ```
> The goal is to bypass SSO if a correct HTTP Basic Auth header is present 
> while making sure connections are only from said IPs.
> 
> When I disable the IP check it works flawlessly. How could I separate these 
> requirements?
> 
> So (SSO or Basic Auth) and Correct IP

Just use the geo module and "if" to reject unwanted IPs.

"If" is evaluated prior to access & post_access phases, where auth_basic
and co are evaluated.

geo $allowed_ip {
    xxx.xxx.xxx.xxx/24  1;
    default             0;
}

...

location / {
    if ($allowed_ip = 0) {
        return 403;
    }

    # rest of config, without allow/deny
}


Re: [PATCH] Memcached: memcached_flags_set directive

2024-05-26 Thread J Carter
Fixed a typo in the patch description, and an overlooked buffer overflow...

# HG changeset patch
# User J Carter 
# Date 1716779649 -3600
#  Mon May 27 04:14:09 2024 +0100
# Node ID e2ebeb2b05062369626e2de1ba753f318f821d9c
# Parent  f58b6f6362387eeace46043a6fc0bceb56a6786a
Memcached: memcached_flags_set directive.

This directive creates a variable that returns the masked value of
the memcached response flags, by bitwise ANDing the flags with the
specified mask.

The optional 'shift' parameter may be used to right bit-shift the
extracted value, for unpacking packed values.

memcached_flags_set <variable> <mask> [<shift>]

This directive may be specified in http context only.

The purpose of this directive is to provide a more generalized form
of the memcached_gzip_flag directive: to provide the means of
conditionally adding other headers to the response, such as cache
control headers, and of setting their values dynamically using packed
values in the flags.

Example #1:

memcached_flags_set $flags_expire 1;

map $flags_expire $expires_value {
    0 off;
    1 epoch;
}

server {
    location / {
        expires $expires_value;
        memcached_pass ...;
    }
}

Example #2:

memcached_flags_set $flags_expire 1 0;
memcached_flags_set $flags_expire_hour 62 1;

map $flags_expire $expires_value {
    0 off;
    1 ${flags_expire_hour}h;  # 0-24h
}

server {
    location / {
        expires $expires_value;
        memcached_pass ...;
    }
}

diff --git a/src/http/modules/ngx_http_memcached_module.c 
b/src/http/modules/ngx_http_memcached_module.c
--- a/src/http/modules/ngx_http_memcached_module.c
+++ b/src/http/modules/ngx_http_memcached_module.c
@@ -21,9 +21,16 @@
 size_t rest;
 ngx_http_request_t*request;
 ngx_str_t  key;
+ngx_uint_t flags;
 } ngx_http_memcached_ctx_t;
 
 
+typedef struct {
+ngx_uint_t mask;
+ngx_uint_t shift;
+} ngx_http_memcached_set_t;
+
+
 static ngx_int_t ngx_http_memcached_create_request(ngx_http_request_t *r);
 static ngx_int_t ngx_http_memcached_reinit_request(ngx_http_request_t *r);
 static ngx_int_t ngx_http_memcached_process_header(ngx_http_request_t *r);
@@ -33,12 +40,16 @@
 static void ngx_http_memcached_finalize_request(ngx_http_request_t *r,
 ngx_int_t rc);
 
+static ngx_int_t ngx_http_memcached_variable_set(ngx_http_request_t *r,
+ngx_http_variable_value_t *v, uintptr_t data);
 static void *ngx_http_memcached_create_loc_conf(ngx_conf_t *cf);
 static char *ngx_http_memcached_merge_loc_conf(ngx_conf_t *cf,
 void *parent, void *child);
 
 static char *ngx_http_memcached_pass(ngx_conf_t *cf, ngx_command_t *cmd,
 void *conf);
+static char *ngx_http_memcached_set(ngx_conf_t *cf, ngx_command_t *cmd,
+void *conf);
 
 
 static ngx_conf_bitmask_t  ngx_http_memcached_next_upstream_masks[] = {
@@ -130,6 +141,13 @@
   offsetof(ngx_http_memcached_loc_conf_t, gzip_flag),
   NULL },
 
+{ ngx_string("memcached_flags_set"),
+  NGX_HTTP_MAIN_CONF|NGX_CONF_TAKE23,
+  ngx_http_memcached_set,
+  0,
+  0,
+  NULL },
+
   ngx_null_command
 };
 
@@ -219,6 +237,7 @@
 }
 
 ctx->request = r;
+ctx->flags = 0;
 
 ngx_http_set_ctx(r, ctx, ngx_http_memcached_module);
 
@@ -370,20 +389,12 @@
 
 start = p;
 
-while (*p) {
-if (*p++ == ' ') {
-if (mlcf->gzip_flag) {
-goto flags;
-} else {
-goto length;
-}
+while (*p++ != ' ') {
+if (*p == '\0') {
+goto no_valid;
 }
 }
 
-goto no_valid;
-
-flags:
-
 flags = ngx_atoi(start, p - start - 1);
 
 if (flags == (ngx_uint_t) NGX_ERROR) {
@@ -407,7 +418,9 @@
 r->headers_out.content_encoding = h;
 }
 
-length:
+ctx->flags = flags;
+
+/* length */
 
 start = p;
 p = line.data + line.len;
@@ -587,6 +600,42 @@
 }
 
 
+static ngx_int_t
+ngx_http_memcached_variable_set(ngx_http_request_t *r,
+ngx_http_variable_value_t *v, uintptr_t data)
+{
+ngx_http_memcached_ctx_t*ctx;
+ngx_http_memcached_set_t*set;
+ngx_uint_t   value;
+
+set = (ngx_http_memcached_set_t *) data;
+
+ctx = ngx_http_get_module_ctx(r, ngx_http_memcached_module);
+if (ctx == NULL) {
+   v->not_found = 1;
+return NGX_OK;
+}
+
+v->data = ngx_pnalloc(r->pool, sizeof("4294967295") - 1);
+if (v->data == NULL) {
+return NGX_ERROR;
+}
+
+v->len = 0;
+v->valid = 1;
+v->no_cacheable = 1;
+v->not_found = 0;
+
+value = (ctx->flags & set->mask) >> set->shift;
+
+if (value <= 4294967295) {
+v->len = ngx_sprintf(v->data, "%ui", value) - v->data;
+}
+
+return NGX_OK;
+}
+
+
 static void *
 ngx_http_memcached_cr

[PATCH] Memcached: memcached_flags_set directive

2024-05-26 Thread J Carter
# HG changeset patch
# User J Carter 
# Date 1716779649 -3600
#  Mon May 27 04:14:09 2024 +0100
# Node ID e90301dd1516b6dfa5b2b0beec05cfe2b567ea1e
# Parent  f58b6f6362387eeace46043a6fc0bceb56a6786a
Memcached: memcached_flags_set directive.

This directive creates a variable that returns the masked value of
the memcached response flags, by bitwise ANDing the flags with the
specified mask.

The optional 'shift' parameter may be used to right bit-shift the
extracted value, for unpacking packed values.

memcached_flags_set <variable> <mask> [<shift>]

This directive may be specified in http context only.

The purpose of this directive is to provide a more generalized form
of the memcached_gzip_flag directive: to provide the means of
conditionally adding other headers to the response, such as cache
control headers, and of setting their values dynamically using packed
values in the flags.

Example #1:

memcached_flags_set $flags_expire 1;

map $flags_expire $expires_value {
    0 off;
    1 epoch;
}

server {
    location / {
        expires $expires_value;
        memcached_pass ...;
    }
}

Example #2:

memcached_flags_set $flags_expire 1 0;
memcached_flags_set $flags_expire 62 1;

map $flags_expire $expires_value {
    0 off;
    1 ${flags_expire_hour}h;  # 0-24h
}

server {
    location / {
        expires $expires_value;
        memcached_pass ...;
    }
}

diff --git a/src/http/modules/ngx_http_memcached_module.c 
b/src/http/modules/ngx_http_memcached_module.c
--- a/src/http/modules/ngx_http_memcached_module.c
+++ b/src/http/modules/ngx_http_memcached_module.c
@@ -21,9 +21,16 @@
 size_t rest;
 ngx_http_request_t*request;
 ngx_str_t  key;
+ngx_uint_t flags;
 } ngx_http_memcached_ctx_t;
 
 
+typedef struct {
+ngx_uint_t mask;
+ngx_uint_t shift;
+} ngx_http_memcached_set_t;
+
+
 static ngx_int_t ngx_http_memcached_create_request(ngx_http_request_t *r);
 static ngx_int_t ngx_http_memcached_reinit_request(ngx_http_request_t *r);
 static ngx_int_t ngx_http_memcached_process_header(ngx_http_request_t *r);
@@ -33,12 +40,16 @@
 static void ngx_http_memcached_finalize_request(ngx_http_request_t *r,
 ngx_int_t rc);
 
+static ngx_int_t ngx_http_memcached_variable_set(ngx_http_request_t *r,
+ngx_http_variable_value_t *v, uintptr_t data);
 static void *ngx_http_memcached_create_loc_conf(ngx_conf_t *cf);
 static char *ngx_http_memcached_merge_loc_conf(ngx_conf_t *cf,
 void *parent, void *child);
 
 static char *ngx_http_memcached_pass(ngx_conf_t *cf, ngx_command_t *cmd,
 void *conf);
+static char *ngx_http_memcached_set(ngx_conf_t *cf, ngx_command_t *cmd,
+void *conf);
 
 
 static ngx_conf_bitmask_t  ngx_http_memcached_next_upstream_masks[] = {
@@ -130,6 +141,13 @@
   offsetof(ngx_http_memcached_loc_conf_t, gzip_flag),
   NULL },
 
+{ ngx_string("memcached_flags_set"),
+  NGX_HTTP_MAIN_CONF|NGX_CONF_TAKE23,
+  ngx_http_memcached_set,
+  0,
+  0,
+  NULL },
+
   ngx_null_command
 };
 
@@ -219,6 +237,7 @@
 }
 
 ctx->request = r;
+ctx->flags = 0;
 
 ngx_http_set_ctx(r, ctx, ngx_http_memcached_module);
 
@@ -370,20 +389,12 @@
 
 start = p;
 
-while (*p) {
-if (*p++ == ' ') {
-if (mlcf->gzip_flag) {
-goto flags;
-} else {
-goto length;
-}
+while (*p++ != ' ') {
+if (*p == '\0') {
+goto no_valid;
 }
 }
 
-goto no_valid;
-
-flags:
-
 flags = ngx_atoi(start, p - start - 1);
 
 if (flags == (ngx_uint_t) NGX_ERROR) {
@@ -407,7 +418,9 @@
 r->headers_out.content_encoding = h;
 }
 
-length:
+ctx->flags = flags;
+
+/* length */
 
 start = p;
 p = line.data + line.len;
@@ -587,6 +600,38 @@
 }
 
 
+static ngx_int_t
+ngx_http_memcached_variable_set(ngx_http_request_t *r,
+ngx_http_variable_value_t *v, uintptr_t data)
+{
+ngx_http_memcached_ctx_t*ctx;
+ngx_http_memcached_set_t*set;
+
+set = (ngx_http_memcached_set_t *) data;
+
+ctx = ngx_http_get_module_ctx(r, ngx_http_memcached_module);
+if (ctx == NULL) {
+   v->not_found = 1;
+return NGX_OK;
+}
+
+v->data = ngx_pnalloc(r->pool, sizeof("4294967295") - 1);
+if (v->data == NULL) {
+return NGX_ERROR;
+}
+
+v->len = 0;
+v->valid = 1;
+v->no_cacheable = 1;
+v->not_found = 0;
+
+v->len = ngx_sprintf(v->data, "%ui",
+ (ctx->flags & set->mask) >> set->shift) - v->data;
+
+return NGX_OK;
+}
+
+
 static void *
 ngx_http_memcached_create_loc_conf(ngx_conf_t *cf)
 {
@@ -734,3 +779,59 @@
 
 return NGX_CONF_OK;
 }
+
+
+static char *
+ngx_http_memcached_set(ngx_conf_t *cf, ngx_command_t 

Re: Twitter incompatibility

2024-05-19 Thread J Carter
Hello,

On Sun, 19 May 2024 16:47:02 -0400
Saint Michael  wrote:

> I need some help with an Nginx / Twitter problem.
> please open a twitter client x.com
> and post this link
> https://patrician.org/22a51cfb-7d5b-4a97-a687-a10cd1946766/
> and then open a new client and post
> https://xlong.org/p/a3622727-4df1-46f3-aee8-ee0a43194906/
> 
> in the first case, it's an Apache server, and X pulls the twitter card just 
> fine
> in the second, it fails, it posts the link as is, very ugly, no image
> both links work, but only Apache allows twitter to work as intended,
> pulling image called "twitter:card"

When accessing the new page's twitter:image target image directly I see
the following: 

curl http://ssnode1.minixel.com/p/papel-literario-2.jpg -i
HTTP/1.1 200 OK
Server: openresty
Date: Sun, 19 May 2024 22:07:18 GMT
Content-Type: image/jpeg
Content-Length: 93273
Last-Modified: Mon, 29 Apr 2024 23:52:26 GMT
Connection: keep-alive
ETag: "663032ba-16c59"
Cache-Control:
no-cache,: no-store, must-revalidate
Access-Control-Allow-Origin: *
Pragma: no-cache
Accept-Ranges: bytes
...

A) The Cache-Control header appears to be malformed; notice its contents
are on their own line, and Cache-Control itself is empty.

B) Twitter actually caches and serves the preview image; they do not link
directly to the image on your site - perhaps just try removing the
Cache-Control header and Pragma: no-cache altogether.
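
If those headers come from the app behind openresty, one hedged way to
clean them up at the proxy layer (standard proxy/headers module
directives; the upstream name is hypothetical) would be:

    location /p/ {
        proxy_pass http://backend;
        proxy_hide_header Cache-Control;  # drop the malformed header
        proxy_hide_header Pragma;
        add_header Cache-Control "public, max-age=3600";
    }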

Old version has neither of these issues (see below).

curl -i http://patrician.org/papel-literario-2.jpg -L
HTTP/1.1 301 Moved Permanently
Date: Sun, 19 May 2024 22:11:29 GMT
Server: Apache/2.4.37 (CentOS Stream) OpenSSL/1.1.1k
Location: https://patrician.org/papel-literario-2.jpg
Content-Length: 251
Content-Type: text/html; charset=iso-8859-1

HTTP/1.1 200 OK
Date: Sun, 19 May 2024 22:11:29 GMT
Server: Apache/2.4.37 (CentOS Stream) OpenSSL/1.1.1k
Last-Modified: Sun, 19 May 2024 16:15:57 GMT
ETag: "16c59-618d0e65ae1e3"
Accept-Ranges: bytes
Content-Length: 93273
Content-Type: image/jpeg
...

The old version also does an http->https redirect, although I wouldn't
think the lack of that would cause an issue.

Just as an aside, you can use the twitter card validator here[1] to
validate cards - the xlong.org domain's version of the page currently
shows "internal server error" for me; the old domain passes the checks.

[1] https://cards-dev.twitter.com/validator


Re: [PATCH] Configure: add support for Homebrew on Apple Silicon

2024-03-07 Thread J Carter
On Wed, 6 Mar 2024 19:36:32 +0400
Sergey Kandaurov  wrote:

> > On 28 Feb 2024, at 05:24, Piotr Sikora via nginx-devel 
> >  wrote:
> > 
> > # HG changeset patch
> > # User Piotr Sikora 
> > # Date 1708977643 0
> > #  Mon Feb 26 20:00:43 2024 +
> > # Branch patch017
> > # Node ID dd95daa55cf6131a7e845edd6ad3b429bcef6f98
> > # Parent  bb99cbe3a343ae581d2369b990aee66e69679ca2
> > Configure: add support for Homebrew on Apple Silicon.  
> 
> > 
> > Signed-off-by: Piotr Sikora   
> 
> Well, this is weird to pick up install prefix depending on the device.
> Hopefully, maintainers will rethink.  Though given the relevant
> issue #9177 on github is over 3 years old, they would rather not.
> 
> An obvious question is why do you need this change.  Homebrew seems
> to be quite niche to pay attention.  Using appropriate paths in
> --with-cc-opt / --with-ld-opt should work (not tested).
> If it really harms though, I think the change should go in.
> 
> > 
> > diff -r bb99cbe3a343 -r dd95daa55cf6 auto/lib/geoip/conf
> > --- a/auto/lib/geoip/conf Mon Feb 26 20:00:42 2024 +
> > +++ b/auto/lib/geoip/conf Mon Feb 26 20:00:43 2024 +  
> 
> A quick grep for MacPorts search paths suggests that some libraries
> are missing in the change.  If this is on purpose, please reflect
> this in the description.
> 
> > @@ -64,6 +64,23 @@
> > fi
> > 
> > 
> > +if [ $ngx_found = no ]; then
> > +
> > +# Homebrew on Apple Silicon  
> 
> Apple Silicon is something from the marketing language,
> using Apple ARM instead should be fine.
> 
> Notably, Homebrew uses Hardware::CPU.arm Ruby language boolean
> to make the distinction.
> 
> Further, given the smooth decay on Intel-based hardware,
> I'd reduce this just to "Homebrew".
> 

It might be worth keeping at least the 'Apple' distinction in there, if
this is added; Homebrew also supports Linux, and installs to
/home/linuxbrew/.linuxbrew as the default prefix path...

https://docs.brew.sh/Homebrew-on-Linux

> [..]
> 


Re: Question regarding $invalid_referer

2024-03-07 Thread J Carter
Hello,

On Tue, 5 Mar 2024 13:07:53 -0800
"li...@lazygranch.com"  wrote:

> I am presently using a scheme like this to prevent scraping documents. 
> 
>location /images/ {
>   valid_referers none blocked  www.example.com example.com 
> forums.othersite.com ;
> # you can tell the browser that it can only download content from the domains 
> you explicitly allow
> #   if ($invalid_referer) {
> # return 403;
> if ($invalid_referer) {
>   return 302 $scheme://www.example.com;
> ***
> I commented out some old code which just sends an error message. I
> pulled that from the nginx website. I later added the code which sends
> the user to the top level of the website. 
> 
> It works but the results really aren't user friendly. What I rather do
> is if I find an invalid_referer to some document, I would like to
> redirect the request to the html page that has my link to the document. 
> 
> I am relatively sure I will need to hand code the redirection for every
> file, but plan on only doing this for pdfs. Probably 20 files.
> 
> Here is a google referral I pulled from the log file
> 
> *
> 302 172.0.0.0 - - [05/Mar/2024:20:18:52 +] "GET /images/ttr/0701crash.pdf 
> HTTP/2.0" 145 "https://www.google.com/; "Mozilla/5.0 (Linux; Android 10; K) 
> AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Mobile Safari/537.36" 
> "-"
> **
> So I would need to map /images/ttr/0701crash.pdf to the referring page
> on the website.

There isn't really a question in your email :) however, you could use
the SSI module[1] to auto-generate the referring page with the link
dynamically, if you don't already have that.

[1] https://nginx.org/en/docs/http/ngx_http_ssi_module.html
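
For instance, a rough sketch of that approach (hypothetical paths; the
named capture becomes an nginx variable, which SSI echo should be able
to output):

    location ~ ^/referring/(?<doc>.+)$ {
        ssi on;
        try_files /referring-template.html =404;
    }

    # /referring-template.html
    <a href="/images/ttr/<!--# echo var="doc" -->.pdf">Download</a>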

In terms of doing the mapping to some static set of referring pages, if
you already have those, that will depend upon what path scheme you plan
for them in relation to the original files.

A sensible way would be to make
the referring pages' paths related to the pdf names (something like
/referring/0701crash).

In nginx, when you do the redirect, you can do those mappings
dynamically using regex captures. Something like this, using nested
locations:

location /images {
    ...
    location ~ /(.+)\.pdf$ {
        if ($invalid_referer) {
            return 302 $scheme://www.example.com/referring/${1};
        }
    }
}


Re: $request_time variable = 0 for small files.

2024-03-07 Thread J Carter
Hello,

On Thu, 7 Mar 2024 08:17:23 +0200
Clima Gabriel  wrote:

> Greetings,
> I'm investigating a bug, super easy to reproduce.
> Thought you might be curious.
> 
> Minimal Nginx config. Create two files. 100M and 1M:
> dd if=/dev/zero of=/var/www/file100M bs=100M count=1
> dd if=/dev/zero of=/var/www/file1M bs=1M count=1
> 
> Get them files:
> curl --limit-rate 10M   -o /dev/null 127.0.0.42:80/file100M
> curl --limit-rate 100k  -o /dev/null 127.0.0.42:80/file1M
> 
> Both transfers take ~10s, but Nginx logs 0s request_time for the small file.
> 

This isn't an issue with nginx. The response nginx sends
truly does take ~0s to reach the client's socket.

Curl's --limit-rate flag only applies at the application layer; it has
no effect on curl's TCP socket, or its buffers/how fast things are
read into them. The entire response sent by nginx is being
received into curl's TCP socket buffer instantly, which is
auto-scaled to a large window size because you are making these
requests from the local machine.

You can temporarily set the TCP read window to the smallest possible
minimum, default, and maximum to confirm. Like this:

sysctl -w net.ipv4.tcp_rmem="4096 4096 4096"

or just view tcp traffic via wireshark.

> master_process off;
> daemon off;
> error_log /dev/stderr;
> events {}
> http
> {
> log_format req_time  "$request_time";
> server
> {
> server_name 127.0.0.42;
> listen 127.0.0.42:80;
> root /var/www/;
> index index.html;
> location /
> {
> access_log /dev/stderr req_time;
> error_log /dev/stderr;
> }
> }
> }


Re: ssl_reject_handshake breaks other server blocks

2024-03-02 Thread J Carter
Hello Taco,

On Sat, 2 Mar 2024 09:54:46 -0300
Taco de Wolff  wrote:

> Thank you Jordan for the response.
> 

No problem. 

> Including the SNI information in cURL works, thank you. I wasn't aware this
> was so very different from TCP/HTTP2.
> 
> The point I was trying to make about the ssl_certificate options being
> mandatory, is that HTTP/2 also requires SSL 

HTTP/2 can be used without TLS by the way (called h2c), and this is also
implemented in nginx. With curl you can test it easily with the
--http2-prior-knowledge flag against a plain-text port.

The $http2 variable [1] can also be used to easily distinguish h2c from
h2 (with TLS).

Of course, I doubt there is a lot of real-world usage of h2c. Still, it
can be useful for testing :)

[1] https://nginx.org/en/docs/http/ngx_http_v2_module.html#variables
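
For example, a quick test might look like this (hypothetical host;
assuming http2 is enabled on the plain-text listener):

    # HTTP/2 over cleartext (h2c); no TLS handshake involved
    curl --http2-prior-knowledge -I http://example.com/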

> but recognizes that when
> ssl_reject_handshake=on it doesn't need the certificate. For HTTP/3 it
> doesn't seem to recognize that it doesn't need the certificate since it
> will reject handshakes anyways.

I see, but when testing with exactly the configuration you posted, it
does not appear to require them in the default server (on 1.25.4). If I
remove ssl_certificate and ssl_certificate_key directives, it still
works...

1) Are you using any out of band patches in your nginx build (if self
built)?

2) Which TLS library are you using (openssl, boringssl, ect)?

3) Which OS?


Re: ssl_reject_handshake breaks other server blocks

2024-03-01 Thread J Carter
Hello,

On Wed, 28 Feb 2024 21:45:37 -0300
Taco de Wolff  wrote:

> Hi,
> 
> I've noticed at least in 1.24.0 and 1.25.4 that adding an
> ssl_reject_handshake to the default server breaks SNI for other
> servers. Example:
> 
> ```
> server {
> server_name _;
> listen 80 default_server;
> listen 443 default_server ssl;
> listen 443 default_server quic reuseport;
> listen [::]:80 default_server;
> listen [::]:443 default_server ssl;
> listen [::]:443 default_server quic reuseport;
> 
> http2 on;
> 
> # SSL
> ssl_certificate /etc/pki/lego/certificates/server.crt;
> ssl_certificate_key /etc/pki/lego/certificates/server.key;
> ssl_trusted_certificate /etc/pki/lego/certificates/server.crt;
> ssl_reject_handshake on;
> 
> return 444;
> }
> 
> server {
> server_name domain.com;
> listen 443 ssl;
> listen 443 quic;
> listen [::]:443 ssl;
> listen [::]:443 quic;
> 
> http2 on;
> 
> root /srv/www/html;
> 
> # SSL
> ssl_certificate /etc/pki/lego/certificates/server.crt;
> ssl_certificate_key /etc/pki/lego/certificates/server.key;
> ssl_trusted_certificate /etc/pki/lego/certificates/server.crt;
> 
> location / {
> try_files /index.html =404;
> }
> }
> ```
> 
> There are two remarks for this example:
> - While enabling HTTP/3 I had to add the ssl_certificate lines to the
> default server, while using solely HTTP/2 this wasn't necessary. It
> will throw an error on trying to start Nginx, is that a bug?

TLS is mandatory for HTTP/3 (well, more accurately for QUIC).

https://stackoverflow.com/questions/72826710/quic-transfer-protocol-need-not-tls

> - The ssl_reject_handshake in the default server will prevent proper
> SNI matching for domain.com. If I run `curl https://domain.com/` it
> works fine, but `curl -k -H 'Host: domain.com'
> https://ipaddress-of-server/` does not. When I remove
> ssl_reject_handshake it works as expected
> 

If you curl an IP address rather than an FQDN, curl will not include the
SNI extension in the ClientHello at all.

ssl_reject_handshake, as the name suggests, rejects TLS handshakes prior
to completion. Nginx cannot perform a secondary search for the correct
server block using the host/authority header, as that would require
first completing the handshake, and then parsing the host/authority
header.
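
If you do want to test against the IP while still sending SNI, curl's
--resolve option can pin the name to an address, e.g. (placeholder
address, as in your example):

    # connects to the given address but still sends SNI "domain.com"
    curl --resolve domain.com:443:ipaddress-of-server https://domain.com/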

> My intent is to have a default server that responds to non-existing
> domain names. Preferably it responds with 444, but requests over TLS
> (such as old domains names with HTST) will throw a security warning
> that the server's certificates don't match the request's virtual
> host's domain name (as expected).
> 

return 444; is just a special return value that causes nginx to
terminate the connection; nothing gets sent back to the client at all.
return directives (the rewrite module, more accurately) run post TLS
handshake though. For default server TLS connections with your present
configuration, it will never get to that point.

Generally ssl_reject_handshake is preferable for terminating connections
anyway, as it saves performing the heavy TLS handshake.

The return 444 is still relevant for plain-text connections that reach
your default server though, so I'd recommend still keeping it.

> Instead of showing a security warning in the browser I prefer a
> connection error, which is why I want to employ ssl_reject_handshake.

Your present configuration should work fine then.

> Kind regards,
> Taco de Wolff


Re: location {} access_log off -> no such file or directory

2024-02-28 Thread J Carter
Hello,

On Mon, 26 Feb 2024 09:55:10 +0100
"Roberto D. Maggi"  wrote:

> Hi you all,
> 
> I'm trying to improve the reverse proxy's virtual hosts' configuration 
> files of my company,
> 
> but I'm facing an issue that I can't understand:
> 
> 
> In the "location / " block I inserted these lines
> 
> location ~* 
> ^.+\.(eot|otf|woff|woff2|ttf|rss|atom|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$
>  
> {
> access_log off; log_not_found off; expires max;
> }
> 
> and everythings fine,
> 
> ==> /var/log/nginx/MYSITEcom.access.log <==
> 
> 172.18.0.1 - - [26/Feb/2024:08:36:44 +] "GET 
> /wp-content/themes/MYSITE/images/back-numbers.png HTTP/1.1" 200 264666 
> "https://www.MYSITE.com/wp-content/themes/MYSITE/css/style.css?ver=5.6.1; 
> "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) 
> Chrome/122.0.0.0 Safari/537.36"
> 
> it doesn's log the gets to these extentions and so on but when I put the 
> following line,
> 
> location ~* \.(?:css|js)$ {
> expires 1y;
> access_log off;
> add_header Cache-Control "public";
> }
> 
> the site changes aspect and logs are filles with "no such file or directory"
> 
> ==> /var/log/nginx/MYSITE.com.error.log <==
> 2024/02/26 08:34:46 [error] 107#107: *336 open() 
> "/etc/nginx/html/wp-content/themes/MYSITE/webfonts/Roboto-Regular.ttf" 
> failed (2: No such file or directory), client: 172.18.0.1, server: 
> www.MYSITE.com, request: "GET 
> /wp-content/themes/MYSITE/webfonts/Roboto-Regular.ttf HTTP/1.1", host: 
> "www.MYSITE.com", referrer: 
> "https://www.MYSITE.com/wp-content/themes/MYSITE/css/style.css?ver=5.6.1;
> 
> It looks like it changes, some way, the root directory, but being 
> reverse proxies I didn't set it up.
> 
> 
> here below you can fine the virtual host conf file.
> 
> thanks in advance for every suggestion
> 
> Rob
> 

Nested locations don't inherit the proxy_pass directive; you still
need to repeat it there, like this:

location ~* \.(?:css|js)$ {
expires 1y;
access_log off;
add_header Cache-Control "public";
proxy_pass https://MYSITE.portals:97/;
}



Re: [PATCH] Avoiding mixed socket families in PROXY protocol v1 (ticket #2594)

2024-02-21 Thread J Carter
Hello Roman,

On Wed, 21 Feb 2024 17:29:52 +0400
Roman Arutyunyan  wrote:

> Hi,
> 

[...]

> Checking whether the address used in PROXY writer is in fact the address
> that was passed in the PROXY header, is complicated.  This will either require
> setting a flag when PROXY address is set by realip, which is ugly.
> Another approach is checking if the client address written to a PROXY header
> matches the client address in the received PROXY header.  However since
> currently PROXY protocol addresses are stored as text, and not all addresses
> have unique text repersentations, this approach would require refactoring all
> PROXY protocol code + realip modules to switch from text to sockaddr.
> 
> I suggest that we follow the first plan (INADDR_ANY etc).
> 
> > [...]
> 
> Updated patch attached.
> 
> --
> Roman Arutyunyan

> diff --git a/src/core/ngx_proxy_protocol.c b/src/core/ngx_proxy_protocol.c
> --- a/src/core/ngx_proxy_protocol.c
> +++ b/src/core/ngx_proxy_protocol.c
> @@ -279,7 +279,10 @@ ngx_proxy_protocol_read_port(u_char *p, 
>  u_char *
>  ngx_proxy_protocol_write(ngx_connection_t *c, u_char *buf, u_char *last)
>  {
> -ngx_uint_t  port, lport;
> +socklen_t   local_socklen;
> +ngx_uint_t  port, lport;
> +struct sockaddr*local_sockaddr;
> +static ngx_sockaddr_t   default_sockaddr;

I understand you are using the fact that static variables are zero
initialized - to be both INADDR_ANY and "IN6ADDR_ANY" - however, is
this defined behavior for a union (specifically for the ipv6 case)?

I was under the impression only the first declared member, along with
padding bits, was guaranteed to be zeroed.
https://stackoverflow.com/questions/54160137/what-constitutes-as-padding-in-a-union
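
To make the concern concrete, a minimal sketch (illustrative types, not
the nginx ones) of what C11 6.7.9p10 guarantees for static storage
duration:

    union u {
        struct { char tag; } a;          /* first named member */
        struct { char tag; long v; } b;  /* larger member */
    };

    static union u x;  /* guaranteed: x.a zeroed (recursively), padding
                          zeroed; whether all of x.b reads as zero is
                          exactly the question raised above */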

>  
>  if (last - buf < NGX_PROXY_PROTOCOL_V1_MAX_HEADER) {
>  ngx_log_error(NGX_LOG_ALERT, c->log, 0,
> @@ -312,11 +315,21 @@ ngx_proxy_protocol_write(ngx_connection_
>  
>  *buf++ = ' ';
>  
> -buf += ngx_sock_ntop(c->local_sockaddr, c->local_socklen, buf, last - 
> buf,
> - 0);
> +if (c->sockaddr->sa_family == c->local_sockaddr->sa_family) {
> +local_sockaddr = c->local_sockaddr;
> +local_socklen = c->local_socklen;
> +
> +} else {
> +default_sockaddr.sockaddr.sa_family = c->sockaddr->sa_family;
> +
> +local_sockaddr = _sockaddr.sockaddr;
> +local_socklen = sizeof(ngx_sockaddr_t);
> +}
> +
> +buf += ngx_sock_ntop(local_sockaddr, local_socklen, buf, last - buf, 0);
>  
>  port = ngx_inet_get_port(c->sockaddr);
> -lport = ngx_inet_get_port(c->local_sockaddr);
> +lport = ngx_inet_get_port(local_sockaddr);
>  
>  return ngx_slprintf(buf, last, " %ui %ui" CRLF, port, lport);
>  }
> 


Re: NGINX Reverse Proxy terminate TCP connection after 5 minutes of inactivity

2024-02-21 Thread J Carter
Hello,

On Tue, 20 Feb 2024 11:57:27 +0800
Kin Seng  wrote:

> Hi J Carter,
> 
> Thank you for your reply.
> I am capturing the packet from firewall, and the filtering is as per below
> for the previously attached pcap.

I see, I assumed you had run tcpdump on the nginx
host. I'd recommend doing that too then (as well as on the client app
host) if you have a network firewall in the mix - to see what nginx
itself truly sends/receives.
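
e.g. something along these lines on the nginx host itself (interface
and ports are placeholders):

    # capture both legs of the proxied connection at the nginx host
    tcpdump -i any -w nginx.pcap 'port 443 or port 8080'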

> Source : client app -- Dest : nginx proxy , any port to any port
> 
> Source : public server -- Dest : nginx proxy , any port to any port
> 
> Source : nginx proxy -- Dest : client app , any port to any port
> 
> Source : nginx proxy -- Dest : public server , any port to any port.
> 

It shouldn't be missing such data then - although again, this may be
specific to the firewall itself.

> Perhaps I will try to do tcpdump from the client app as well.
> 
> One more info that I notice from client app host, from the netstat command,
> it shows CLOSE_WAIT for the terminated session, it seems like close_wait is
> the symbol that the closing is from external ( in this case client app is
> connect to nginx proxy), is this right?

CLOSE_WAIT on the client would indicate that the other party initiated
the connection close (sent the first FIN) - again, the firewall makes
me more skeptical, as it can have its own timers for closing TCP
connections / its own logic.


Re: NGINX Reverse Proxy terminate TCP connection after 5 minutes of inactivity

2024-02-19 Thread J Carter
Hello,

On Tue, 20 Feb 2024 09:40:13 +0800
Kin Seng  wrote:

> Hi J Carter,
> 
> This is the only results from the whole 5 minutes session (intentionally
> without any transaction to create inactivity). Is there any symptoms which
> can prove that other parties are the one who Initiate the closing?
> 

A packet capture is the easiest way, however it looks like you have
missing data in the PCAP for some reason (such as tcpdump filters).

I suppose you could also perform a packet capture on the client app host
instead of on the nginx host to corroborate the data - that would show
who sent the FIN first.

Also, as Roman says in adjacent thread, debug level logs will also show
what happened.

> On Tue, Feb 20, 2024, 9:33 AM J Carter  wrote:
> 
> > Hello,
> >
> > On Mon, 19 Feb 2024 16:24:48 +0800
> > Kin Seng  wrote:
> >
> > [...]  
> > > Please refer to the attachments for reference.
> > >
> > > On Mon, Feb 19, 2024 at 4:24 PM Kin Seng  wrote:  
> > > > After capturing the tcp packet and check via wireshark, I found out  
> > that  
> > > > the nginx is sending out the RST to the public server and then send  
> > FIN/ACK  
> > > > (refer attached pcap picture) to client application.
> > > >
> > > > I have tried to enable keepalive related parameters as per the nginx
> > > > config above and also check on the OS's TCP tunable and i could not  
> > find  
> > > > any related settings which make NGINX to kill the TCP connection.
> > > >
> > > > Anyone encountering the same issues?
> > > >  
> >
> > The screenshot shows only 1 segment with FIN flag set too which is
> > odd - there should be one from each party in close sequence. Also the
> > client only returns an ACK, rather than FIN+ACK, which it should if
> > nginx was the initiator of closing the connection...


Re: [nginx] Update mime-types

2024-02-19 Thread J Carter
Hello,

On Fri, 16 Feb 2024 14:19:57 +0300
izor...@gmail.com wrote:

> Hello.
> 
> Patch to update current MIME types.
> Most of information is taken from IANA and Wikipedia.
> 
> 

It might be a good idea to provide a reason for each of these mime type
changes, such as what real problem this solves for you or others.

For example:

+audio/mpeg   mp1 mp2 mp3 m1a m2a mpa;

Are you serving .mp1 and .mp2 files? I've never seen such a file in my
life (nor many of the other extensions you've added in the series).

Also it may be a good idea to link to past discussions you've had on
the topic (regardless of language):

https://mailman.nginx.org/pipermail/nginx-ru/2023-November/36Z6S37IZQQWYQXJGFKOMQXFL2XQUJM2.html
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: NGINX Reverse Proxy terminate TCP connection after 5 minutes of inactivity

2024-02-19 Thread J Carter
Hello,

On Mon, 19 Feb 2024 16:24:48 +0800
Kin Seng  wrote:

[...]
> Please refer to the attachments for reference.
> 
> On Mon, Feb 19, 2024 at 4:24 PM Kin Seng  wrote:
> > After capturing the tcp packet and check via wireshark, I found out that
> > the nginx is sending out the RST to the public server and then send FIN/ACK
> > (refer attached pcap picture) to client application.
> >
> > I have tried to enable keepalive related parameters as per the nginx
> > config above and also check on the OS's TCP tunable and i could not find
> > any related settings which make NGINX to kill the TCP connection.
> >
> > Anyone encountering the same issues?
> >

The screenshot shows only 1 segment with the FIN flag set too, which is
odd - there should be one from each party in the close sequence. Also the
client only returns an ACK, rather than FIN+ACK, which it should if
nginx was the initiator of closing the connection...
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: ngx_http_v3_init_session function

2024-02-05 Thread J Carter
On Tue, 6 Feb 2024 00:44:56 +
J Carter  wrote:

> On Tue, 6 Feb 2024 00:16:31 +
> J Carter  wrote:
> 
> > Hello,
> > 
> > On Mon, 5 Feb 2024 23:24:39 +0200
> > Clima Gabriel  wrote:
> > 
> > > Hello everyone,
> > > 
> > > (the code is probably clearer and attached below)
> > > This function modifies what ngx_connection_t->data points to.
> > > ngx_connection_t->data is initially *ngx_http_connection_t.
> > > The *ngx_http_connection_t is assigned to
> > > ngx_http_v3_session_t->http_connection
> > > And the *ngx_http_v3_session_t assigned to ngx_connection_t->data.
> > > 
> > > Result: before ngx_connection_t->data is *ngx_http_connection_t
> > >after ngx_connection_t->data is *ngx_http_v3_session_t
> > 
> > In C, a pointer to struct can be cast to a pointer to the first member
> > of that struct, as there is no padding before the first member per the
> > standard.
> > 
> > The first member of ngx_http_v3_session_t is *ngx_http_connection_t.
> 
> *Sorry typo here - first member is ngx_http_connection_t of course.
> > 
> > Here is the commit where this was implemented.
> > 
> > https://mailman.nginx.org/pipermail/nginx-devel/2023-September/BWH23FTMRUWCUZSNKXJJXEEN76ZYOK62.html
> > 
> > [...]

Oh, I've just realized that is the wrong patch.  There were a couple of
reworks to that patch later that I missed... Here is the actual 
changeset version, with *ngx_http_connection_t as you say:

https://hg.nginx.org/nginx/rev/4939fd04737f

It appears that this macro should be used to get ngx_http_connection_t:

https://hg.nginx.org/nginx/file/tip/src/http/v3/ngx_http_v3.h#l85

However it's likely a good idea to wait to see if the author/s will
comment on if that is safe and correct in all situations.
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: ngx_http_v3_init_session function

2024-02-05 Thread J Carter
On Tue, 6 Feb 2024 00:16:31 +
J Carter  wrote:

> Hello,
> 
> On Mon, 5 Feb 2024 23:24:39 +0200
> Clima Gabriel  wrote:
> 
> > Hello everyone,
> > 
> > (the code is probably clearer and attached below)
> > This function modifies what ngx_connection_t->data points to.
> > ngx_connection_t->data is initially *ngx_http_connection_t.
> > The *ngx_http_connection_t is assigned to
> > ngx_http_v3_session_t->http_connection
> > And the *ngx_http_v3_session_t assigned to ngx_connection_t->data.
> > 
> > Result: before ngx_connection_t->data is *ngx_http_connection_t
> >after ngx_connection_t->data is *ngx_http_v3_session_t
> 
> In C, a pointer to struct can be cast to a pointer to the first member
> of that struct, as there is no padding before the first member per the
> standard.
> 
> The first member of ngx_http_v3_session_t is *ngx_http_connection_t.

*Sorry typo here - first member is ngx_http_connection_t of course.
> 
> Here is the commit where this was implemented.
> 
> https://mailman.nginx.org/pipermail/nginx-devel/2023-September/BWH23FTMRUWCUZSNKXJJXEEN76ZYOK62.html
> 
> [...]
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: ngx_http_v3_init_session function

2024-02-05 Thread J Carter
Hello,

On Mon, 5 Feb 2024 23:24:39 +0200
Clima Gabriel  wrote:

> Hello everyone,
> 
> (the code is probably clearer and attached below)
> This function modifies what ngx_connection_t->data points to.
> ngx_connection_t->data is initially *ngx_http_connection_t.
> The *ngx_http_connection_t is assigned to
> ngx_http_v3_session_t->http_connection
> And the *ngx_http_v3_session_t assigned to ngx_connection_t->data.
> 
> Result: before ngx_connection_t->data is *ngx_http_connection_t
>after ngx_connection_t->data is *ngx_http_v3_session_t

In C, a pointer to struct can be cast to a pointer to the first member
of that struct, as there is no padding before the first member per the
standard.

The first member of ngx_http_v3_session_t is *ngx_http_connection_t.

Here is the commit where this was implemented.

https://mailman.nginx.org/pipermail/nginx-devel/2023-September/BWH23FTMRUWCUZSNKXJJXEEN76ZYOK62.html

[...]
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: modules after upgrade

2024-02-01 Thread J Carter
Hello,

On Thu, 1 Feb 2024 12:28:40 -0500
Larry Martell  wrote:

> On Thu, Feb 1, 2024 at 11:57 AM Sergey A. Osokin  wrote:
> >
> > Hi Larry,
> >
> > On Thu, Feb 01, 2024 at 11:34:08AM -0500, Larry Martell wrote:  
> > > We run Ubuntu 20.04, which has nginx 1.18. I was asked to upgrade it
> > > to 1.25.3, which I did following the instructions here:
> > > https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/.
> > > We had 6 modules installed (http-ndk, http-image-filter, http-lua,
> > > http-xslt-filter, mail, stream) which all got removed in the upgrade.
> > > When I try to reinstall them it fails with  libnginx-mod-http-ndk :
> > > Depends: nginx-common (= 1.18.0-0ubuntu1.4) but it is not going to be
> > > installed. How can I install the modules I need for 1.25.3? Do I have
> > > to build them from source now?  
> >
> > There two types of modules in the list:
> > - native, developed by nginx development team;
> > - third-party, from vendors.
> >
> > So, http-ndk, [1] and http-lua, [2] are third-party modules.  Those
> > modules need to be recompiled with a corresponding version of nginx.
> > I'd recommend to visit reference pages to get details how to build
> > those third-party modules.
> >
> > References
> > --
> > 1. https://github.com/vision5/ngx_devel_kit
> > 2. https://github.com/openresty/lua-nginx-module  
> 
> Hi Sergey,
> 
> I was able to install nginx-module-image-filter with apt, but
> http-xslt-filter, mail, stream all fail with the same message Depends:
> nginx-common (= 1.18.0-0ubuntu1.4).
> 
> I looked at the 2 links you gave and both talk about building nginx
> from source to get those modules included. Is that now required? With
> 1.18 I simply did:
> 

It's nothing to do with the version. What you've done is switch from Ubuntu
provided packages to nginx.org provided packages. Ubuntu provides third
party binary modules that are not in the official repo.
  
The list of official packages in the repo can be quickly seen by
browsing:
https://nginx.org/packages/mainline/ubuntu/pool/nginx/n/

> sudo apt install lua5.3
> sudo apt install libluajit-5.1-2
> sudo apt install lua-sql-mysql
> sudo apt install libnginx-mod-http-lua
> 
> Thanks!
> Larry

For Lua (and LuaJIT) you can build from source using the makefiles from
the nginx official packaging mercurial repo:

https://hg.nginx.org/pkg-oss/file/1c4041361462/contrib/src
https://hg.nginx.org/pkg-oss/file/1c4041361462/build_module.sh 
(1.25.3 commit)

There are many other useful modules there too that are not shipped as
binaries.  
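
For example, something along these lines (a sketch only - check the
script's usage text for the exact flags):

hg clone https://hg.nginx.org/pkg-oss
git clone https://github.com/openresty/lua-nginx-module
cd pkg-oss
./build_module.sh -v 1.25.3 ../lua-nginx-module

which should leave you with an installable package for the dynamic
module.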

However, I don't believe lua-sql-mysql is included there, so you will
still need to build that from sources on github:

https://github.com/openresty/lua-resty-mysql
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: Configuration adjustment for GRPC service

2024-01-27 Thread J Carter
Hello,

On Thu, 25 Jan 2024 14:53:51 +0100
Ľuboš Pinteš  wrote:

> Hello Jason and thank for your reply.
> 
> I am fairly new to this stuff.
> 
> Concerning health checks, does it matter if I have only one simple 
> server? So no load balancing etc.?
> 

Just so you know, active health checks (on the docs.nginx.com admin
guide page) are only in NGINX Plus (the commercial version of nginx).

If you're using OSS nginx you won't have active health checks 
(the 'health_check' directive).
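
In OSS, the closest to failing fast is the grpc timeouts Jason linked,
roughly (values here are only illustrative):

location / {
    grpc_pass grpc://backend;
    # error out quickly if the upstream is down, instead of hanging
    grpc_connect_timeout 2s;
}

One caveat for the streaming method: grpc_read_timeout applies between
two successive reads, so infrequent events may require raising it rather
than lowering it.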

> 
> Dňa 25. 1. 2024 o 14:01 Jason Anderson via nginx napísal(a):
> > Have you tried configuring grpc timeouts on NGINX?
> >
> > This combined with an upstream healthcheck should prevent any client 
> > connections that aren't possible for NGINX to service.
> >
> > https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_connect_timeout
> >
> > https://docs.nginx.com/nginx/admin-guide/load-balancer/grpc-health-check/
> >
> >
> > Regards,
> >
> > Jason
> >
> > On Thu, Jan 25, 2024, 6:01 AM Ľuboš Pinteš  wrote:
> >
> > Hello, everybody,
> > I am implementing a GRPC service which has methods, i.e.
> > request/reply
> > and one streaming method, through which the server sends events at
> > random intervals.
> > The GRPC server is written in Go, the client in C#, we are using
> > Grpc.Core.
> >
> > If the server is not running and I call one of the request/reply
> > methods, an error occurs as I expect. But if I call the streaming
> > method, Nginx accepts the connection and the client gets stuck on
> > calling await events.ResponseStream.MoveNext(...);
> >
> > I would like to ask how to configure Nginx so that an error occurs
> > even
> > if the streaming method is called if the server is not running,
> > e.g. it
> > restarts out of my control if the server on which my service is
> > running
> > restarts.
> >
> > Thank you
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: Limit NGINX log size

2024-01-27 Thread J Carter
Hello,

On Sat, 27 Jan 2024 15:55:42 +0530
Rakshith Kumar  wrote:

> Hello Team,
> 
> I would like to know how to limit the NGINX limit size.
> We would like to set size limit for Nginx log files on App Volumes Manager
> since it consume disk space over time. Can we add any parameters to
> nginx.conf to limit or rotate the logs.
> 
> Location: ..\Program Files (x86)\CloudVolumes\Manager\nginx\logs
> 
> Ex: error_https.log, error.log, access.log files.
> 
> Regards,
> Rakshith

Nginx does not have log rotation capabilities built in, nor can you
limit the size of logs. The logrotate utility is used for this task on
unix/unix-like platforms.

The best choice would be to either write your own utility to do the
rotation, or use a premade windows native utility.

Something like this powershell clone of logrotate might work well:

https://github.com/theohbrothers/Log-Rotate

It's necessary for nginx to reopen the logs post rotation; on Windows I
believe you'll need to use the CLI for that ('nginx -s reopen') - or
restart the service if you have it running as a service.
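
As a sketch, assuming Log-Rotate keeps logrotate's config syntax (that is
its stated goal - do verify against its README), something like:

'C:\Program Files (x86)\CloudVolumes\Manager\nginx\logs\*.log' {
    size 100M
    rotate 5
    postrotate
        # nginx must be able to find its pid file - run from its prefix or pass -p
        & 'C:\Program Files (x86)\CloudVolumes\Manager\nginx\nginx.exe' -s reopen
    endscript
}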
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: nginx-tests: Some SSL tests are failing with openssl 3.2.0

2024-01-26 Thread J Carter
Hello,

On Fri, 26 Jan 2024 16:28:15 -0300
Renato Botelho  wrote:

> Hello!
> 
> I'm building nginx on an environment with openssl 3.2.0 and some SSL 
> tests are failing.  I suspect it's related to openssl version because it 
> works on another env with older openssl.  But maybe I'm wrong.
> 
> Here are results

[..]

I was able to reproduce the exact same result on Arch Linux (also OpenSSL 
3.2.0, so that looks likely).

./ssl_certificate_chain.t .. Dubious, test returned 2 (wstat 
512, 0x200)
Failed 2/5 subtests
./ssl_certificate.t  ok
./ssl_conf_command.t ... ok
./ssl_certificates.t ... ok
./ssl_engine_keys.t  skipped: may not work, leaves 
coredump
./ssl_client_escaped_cert.t  ok
./ssl_proxy_protocol.t . skipped: no realip available
===(2096;27  1/3   4/23  1/5  1/3  0/?  0/5   0/32  0/9 )===
#   Failed test 'crl - no revoked certs'
#   at ./ssl_crl.t line 157.
#   'HTTP/1.1 400 Bad Request
# Server: nginx/1.25.4
# Date: Fri, 26 Jan 2024 20:26:41 GMT
# Content-Type: text/html
# Content-Length: 215
# Connection: close
# X-Verify: FAILED:unsuitable certificate purpose
#
# <html>
# <head><title>400 The SSL certificate error</title></head>
# <body>
# <center><h1>400 Bad Request</h1></center>
# <center>The SSL certificate error</center>
# <hr><center>nginx/1.25.4</center>
# </body>
# </html>
# '
# doesn't match '(?^:SUCCESS)'
./ssl_curve.t .. ok
./ssi_delayed.t  ok
===(2105;27   4/23  3/5  0/?  1/5   2/32  0/9  0/?  0/? )===# Looks 
like you failed 1 test of 5.
./ssl_crl.t  Dubious, test returned 1 (wstat 
256, 0x100)

...and so on

Test Summary Report
---
./ssl_certificate_chain.t(Wstat: 512 (exited 2) Tests: 5 
Failed: 2)
  Failed tests:  2-3
  Non-zero exit status: 2
./ssl_crl.t  (Wstat: 256 (exited 1) Tests: 5 
Failed: 1)
  Failed test:  1
  Non-zero exit status: 1
./ssl_ocsp.t (Wstat: 3328 (exited 13) Tests: 17 
Failed: 13)
  Failed tests:  1-13
  Non-zero exit status: 13
./ssl_verify_depth.t (Wstat: 768 (exited 3) Tests: 11 
Failed: 3)
  Failed tests:  5, 8-9
  Non-zero exit status: 3
Files=416, Tests=2505, 35 wallclock secs ( 1.43 usr  0.71 sys + 41.49 cusr  
8.42 csys = 52.05 CPU)
Result: FAIL
[vagrant@archlinux nginx-tests]$ openssl version
OpenSSL 3.2.0 23 Nov 2023 (Library: OpenSSL 3.2.0 23 Nov 2023)
[vagrant@archlinux nginx-tests]$ uname -a
Linux archlinux 6.7.0-arch3-1 #1 SMP PREEMPT_DYNAMIC Sat, 13 Jan 2024 14:37:14 
+ x86_64 GNU/Linux
[vagrant@archlinux nginx-tests]$ ../nginx/objs/nginx -V
nginx version: nginx/1.25.4
built by gcc 13.2.1 20230801 (GCC)
built with OpenSSL 3.2.0 23 Nov 2023
TLS SNI support enabled
configure arguments: --with-http_ssl_module
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH] SSL: Added SSLKEYLOGFILE key material to debug logging

2024-01-24 Thread J Carter
Hello,

Thanks for the feedback.

On Wed, 24 Jan 2024 12:20:59 +0300
Maxim Dounin  wrote:

> Hello!
> 
> On Sun, Jan 21, 2024 at 10:37:24AM +, J Carter wrote:
> 
> > # HG changeset patch
> > # User J Carter 
> > # Date 1705832811 0
> > #  Sun Jan 21 10:26:51 2024 +
> > # Node ID b00332a5253eefb53bacc024c72f55876c2eac6e
> > # Parent  ee40e2b1d0833b46128a357fbc84c6e23be9be07
> > SSL: Added SSLKEYLOGFILE key material to debug logging.
> > 
> > This patch also introduces the debug_keylog error log level flag, which
> > may be used to granularly enable or omit logging of key material via
> > error level flags (note, it's always enabled when using
> > debug_connection).
> > 
> > Each line of key material is output to the error log as a separate log
> > message, and is prepended with 'ssl keylog: ' for convenient extraction.
> > 
> > The purpose of logging key material is to allow external tools, such as
> > wireshark/tshark, to decrypt captured TLS connections in all situations.
> > 
> > Previously, only TLS 1.2 (and below) connections could be decrypted
> > when specific ciphers suites were used, and when the decrypter had
> > access to the acting server's TLS certificates and keys. It was not
> > possible to decrypt TLS 1.3 traffic without generating SSLKEYLOGFILE on
> > peer, or by using other hacks on nginx host (using GDB, or patched ssl
> > libraries).
> 
> Thanks for the patch.
> 
> Logging session keying material is known to be problematic from 
> ethical point of view.  As such, I would rather avoid introducing 
> relevant functionality in nginx.
> 
> [...]
> 

Could you expand upon your ethical concerns around logging key
material, as opposed to, say, logging / storing request or response
content to disk directly from nginx?

It'd be good to have clarity for future contributions.
  

___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH] SSL: Added SSLKEYLOGFILE key material to debug logging

2024-01-21 Thread J Carter
# HG changeset patch
# User J Carter 
# Date 1705832811 0
#  Sun Jan 21 10:26:51 2024 +
# Node ID b00332a5253eefb53bacc024c72f55876c2eac6e
# Parent  ee40e2b1d0833b46128a357fbc84c6e23be9be07
SSL: Added SSLKEYLOGFILE key material to debug logging.

This patch also introduces the debug_keylog error log level flag, which
may be used to granularly enable or omit logging of key material via
error level flags (note, it's always enabled when using
debug_connection).

Each line of key material is output to the error log as a separate log
message, and is prepended with 'ssl keylog: ' for convenient extraction.

The purpose of logging key material is to allow external tools, such as
wireshark/tshark, to decrypt captured TLS connections in all situations.

Previously, only TLS 1.2 (and below) connections could be decrypted
when specific ciphers suites were used, and when the decrypter had
access to the acting server's TLS certificates and keys. It was not
possible to decrypt TLS 1.3 traffic without generating SSLKEYLOGFILE on
peer, or by using other hacks on nginx host (using GDB, or patched ssl
libraries).

Decrypting inbound and outbound TLS connections is useful when
debugging for a few reasons:

1) Nginx does not have a convenient mechanism for logging response body
sent to client, or response body received from upstream while proxying.
Packet captures provide a convenient mechanism for viewing this
traffic. This same use-case applies to client request body in various
scenarios where it cannot be logged using $request_body.

2) It is often convenient to use wireshark to dissect non-http traffic
proxied via stream module. This will now work in all scenarios,
including when stream module is used to perform TLS offloading, and
then proxies to the upstream over TLS.

3) Many post-handshake TLS issues are better diagnosed through analysis
of decrypted TLS traffic rather than openssl's often ambiguous
alert/error messages.

4) It's a convenient way to debug third party modules that initiate or
accept TLS connections (provided they utilize nginx's native networking
facilities).

Example usage:

error_log /var/log/nginx/error.log debug;

or

error_log /var/log/nginx/error.log debug_keylog;

Live extraction:

tail -f -n0 /var/log/nginx/error.log |\
  grep -Poa --line-buffered '(?<=ssl keylog: ).*' |\
  tee -a /tmp/keylog.log

Wireshark:

1) Navigate 'Edit -> Preferences -> Protocols -> TLS'.
2) Set '(Pre)-Master-Secret log filename' to '/tmp/keylog.log'.

diff -r ee40e2b1d083 -r b00332a5253e src/core/ngx_log.c
--- a/src/core/ngx_log.cMon Dec 25 21:15:48 2023 +0400
+++ b/src/core/ngx_log.cSun Jan 21 10:26:51 2024 +
@@ -86,7 +86,7 @@
 
 static const char *debug_levels[] = {
 "debug_core", "debug_alloc", "debug_mutex", "debug_event",
-"debug_http", "debug_mail", "debug_stream"
+"debug_http", "debug_mail", "debug_stream", "debug_keylog"
 };
 
 
diff -r ee40e2b1d083 -r b00332a5253e src/core/ngx_log.h
--- a/src/core/ngx_log.hMon Dec 25 21:15:48 2023 +0400
+++ b/src/core/ngx_log.hSun Jan 21 10:26:51 2024 +
@@ -30,6 +30,7 @@
 #define NGX_LOG_DEBUG_HTTP        0x100
 #define NGX_LOG_DEBUG_MAIL        0x200
 #define NGX_LOG_DEBUG_STREAM      0x400
+#define NGX_LOG_DEBUG_KEYLOG      0x800
 
 /*
  * do not forget to update debug_levels[] in src/core/ngx_log.c
@@ -37,7 +38,7 @@
  */
 
 #define NGX_LOG_DEBUG_FIRST   NGX_LOG_DEBUG_CORE
-#define NGX_LOG_DEBUG_LAST        NGX_LOG_DEBUG_STREAM
+#define NGX_LOG_DEBUG_LAST        NGX_LOG_DEBUG_KEYLOG
 #define NGX_LOG_DEBUG_CONNECTION  0x80000000
 #define NGX_LOG_DEBUG_ALL         0x7ffffff0
 
diff -r ee40e2b1d083 -r b00332a5253e src/event/ngx_event_openssl.c
--- a/src/event/ngx_event_openssl.c Mon Dec 25 21:15:48 2023 +0400
+++ b/src/event/ngx_event_openssl.c Sun Jan 21 10:26:51 2024 +
@@ -10,6 +10,11 @@
 #include 
 
 
+#if (NGX_DEBUG && OPENSSL_VERSION_NUMBER >= 0x10101000L   \
+ && !defined LIBRESSL_VERSION_NUMBER)
+#define NGX_SSL_KEYLOG 1
+#endif
+
 #define NGX_SSL_PASSWORD_BUFFER_SIZE  4096
 
 
@@ -27,6 +32,9 @@
 static int ngx_ssl_verify_callback(int ok, X509_STORE_CTX *x509_store);
 static void ngx_ssl_info_callback(const ngx_ssl_conn_t *ssl_conn, int where,
 int ret);
+#ifdef NGX_SSL_KEYLOG
+static void ngx_ssl_keylog_callback(const SSL *ssl, const char *line);
+#endif
 static void ngx_ssl_passwords_cleanup(void *data);
 static int ngx_ssl_new_client_session(ngx_ssl_conn_t *ssl_conn,
 ngx_ssl_session_t *sess);
@@ -426,10 +434,28 @@
 
 SSL_CTX_set_info_callback(ssl->ctx, ngx_ssl_info_callback);
 
+#ifdef NGX_SSL_KEYLOG
+SSL_CTX_set_keylog_callback(ssl->ctx, ngx_ssl_keylog_callback);
+#endif
+
 return NGX_OK;
 }
 
 
+#ifdef NGX_SSL_KEYLOG
+
+static void
+ngx_ssl_keylog_callback(const SSL *ssl, const char *line)
+{
+ngx_

Re: [PATCH 4 of 4] AIO operations now add timers (ticket #2162)

2024-01-09 Thread J Carter
Hello,

On Tue, 9 Jan 2024 08:59:14 +0300
Maxim Dounin  wrote:

> Hello!
> 
> On Mon, Jan 08, 2024 at 01:31:11PM +, J Carter wrote:
> 
> > On Mon, 8 Jan 2024 11:25:55 +
> > J Carter  wrote:
> > 
> > > Hello,
> > > 
> > > On Mon, 27 Nov 2023 05:50:27 +0300
> > > Maxim Dounin  wrote:
> > > 
> > > > # HG changeset patch
> > > > # User Maxim Dounin 
> > > > # Date 1701050170 -10800
> > > > #  Mon Nov 27 04:56:10 2023 +0300
> > > > # Node ID 00c3e7333145ddb5ea0eeaaa66b3d9c26973c9c2
> > > > # Parent  61d08e4cf97cc073200ec32fc6ada9a2d48ffe51
> > > > AIO operations now add timers (ticket #2162).
> > > > 
> > > > Each AIO (thread IO) operation being run is now accompanied with 
> > > > 1-minute
> > > > timer.  This timer prevents unexpected shutdown of the worker process 
> > > > while
> > > > an AIO operation is running, and logs an alert if the operation is 
> > > > running
> > > > for too long.  
> > > 
> > > Shouldn't this timer's duration be set to match worker_shutdown_timeout's
> > > duration rather than being hard coded to 60s ?
> > 
> > Ah nevermind, I understand.
> > 
> > These timers will either expire from passing the 60s set duration, or 
> > will expire as worker_shutdown_timeout itself expires, kills the 
> > connection and times out associated timers (including the aio timers). 
> > 
> > Setting it to worker_shutdown_timeout's duration would be pointless 
> > (an 'infinite' timer would give the same result).
> > 
> > So the only situation in which a different value for these AIO 
> > timers would make sense is if these AIO operations are expected to
> > take longer 60s, but less than worker_shutdown_timeout (in cases 
> > where it has been increased from it's own default of 60s).
> > 
> > In that case the AIO operation's timeout would have to be one 
> > (or more) of its own directives, with a value less than 
> > worker_shutdown_timeout. 
> 
> Not really.
> 
> When worker_shutdown_timeout expires, it tries to terminate the 
> request, but it can't as long as an AIO operation is running.  
> When the AIO operation completes, the request will be actually 
> terminated and the worker process will be allowed to exit.  So far 
> so good.
> 
> But if the AIO operation never completes, the timer will expire 
> after 1 minute, will log an alert, and the worker processes will 
> be anyway allowed to exit (with the request still present).  This 
> might not be actually possible though - for example, 
> ngx_thread_pool_exit_worker() will just block waiting for all 
> pending operations to complete.
> 
> In theory, the situation when an AIO operation never completes 
> should never happen, and just a counter of pending AIO 
> operations can be used instead to delay shutdown (which is 
> essentially equivalent to an infinite timer).  
> 
> In practice, though, using a large enough guard timer might be 
> better: it provides additional level of protection against bugs or 
> system misbehaviour, and at least logs an alert if something 
> really weird happens.  It is also looks more universal and in line 
> with current approach of using existing timers as an indicator 
> that something is going on and shutdown should be delayed.
>
> The timer is expected to be "large enough", since we cannot do 
> anything meaningful with an AIO operation which never completes, 
> we can only complain loudly, so the timer should never expire 
> unless there is something really wrong.  This is not the case with 
> worker_shutdown_timeout: it can be set to an arbitrary low value, 
> which is not expected to mean that something is really wrong if 
> the timer expires, but rather means that nginx shouldn't try to 
> process remaining requests, but should instead close them as long 
> as it can do so.  That is, worker_shutdown_timeout does not fit 
> semantically.  Further, worker_shutdown_timeout is not set by 
> default, so it simply cannot be used.
>

Good point, for whatever reason I had it in my mind that it was
set by default (and you're right, it doesn't fit in any case).

> The 1 minute was chosen as it matches default send_timeout, which 
> typically accompanies AIO operations when sending responses (and 
> also delays shutdown, so no "open socket left" alerts are normally 
> seen).  Still, send_timeout is also quite different semantically, 
> and therefore a hardcoded value is used instead.
> 
> I don't think there are valid cases when AIO operations ca

Re: [PATCH 4 of 4] AIO operations now add timers (ticket #2162)

2024-01-08 Thread J Carter
On Mon, 8 Jan 2024 11:25:55 +
J Carter  wrote:

> Hello,
> 
> On Mon, 27 Nov 2023 05:50:27 +0300
> Maxim Dounin  wrote:
> 
> > # HG changeset patch
> > # User Maxim Dounin 
> > # Date 1701050170 -10800
> > #  Mon Nov 27 04:56:10 2023 +0300
> > # Node ID 00c3e7333145ddb5ea0eeaaa66b3d9c26973c9c2
> > # Parent  61d08e4cf97cc073200ec32fc6ada9a2d48ffe51
> > AIO operations now add timers (ticket #2162).
> > 
> > Each AIO (thread IO) operation being run is now accompanied with 1-minute
> > timer.  This timer prevents unexpected shutdown of the worker process while
> > an AIO operation is running, and logs an alert if the operation is running
> > for too long.  
> 
> Shouldn't this timer's duration be set to match worker_shutdown_timeout's
> duration rather than being hard coded to 60s ?

Ah nevermind, I understand.

These timers will either expire from passing the 60s set duration, or 
will expire as worker_shutdown_timeout itself expires, kills the 
connection and times out associated timers (including the aio timers). 

Setting it to worker_shutdown_timeout's duration would be pointless 
(an 'infinite' timer would give the same result).

So the only situation in which a different value for these AIO 
timers would make sense is if these AIO operations are expected to
take longer than 60s, but less than worker_shutdown_timeout (in cases 
where it has been increased from its own default of 60s).

In that case the AIO operation's timeout would have to be one 
(or more) of its own directives, with a value less than 
worker_shutdown_timeout. 

(doesn't seem like it's worth it from the ticket discussion,
sorry for the noise).


> > This fixes "open socket left" alerts during worker processes shutdown
> > due to pending AIO (or thread IO) operations while corresponding requests
> > have no timers.  In particular, such errors were observed while reading
> > cache headers (ticket #2162), and with worker_shutdown_timeout.  
> 
> [...]
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH 4 of 4] AIO operations now add timers (ticket #2162)

2024-01-08 Thread J Carter
Hello,

On Mon, 27 Nov 2023 05:50:27 +0300
Maxim Dounin  wrote:

> # HG changeset patch
> # User Maxim Dounin 
> # Date 1701050170 -10800
> #  Mon Nov 27 04:56:10 2023 +0300
> # Node ID 00c3e7333145ddb5ea0eeaaa66b3d9c26973c9c2
> # Parent  61d08e4cf97cc073200ec32fc6ada9a2d48ffe51
> AIO operations now add timers (ticket #2162).
> 
> Each AIO (thread IO) operation being run is now accompanied with 1-minute
> timer.  This timer prevents unexpected shutdown of the worker process while
> an AIO operation is running, and logs an alert if the operation is running
> for too long.

Shouldn't this timer's duration be set to match worker_shutdown_timeout's
duration rather than being hard coded to 60s ?

> This fixes "open socket left" alerts during worker processes shutdown
> due to pending AIO (or thread IO) operations while corresponding requests
> have no timers.  In particular, such errors were observed while reading
> cache headers (ticket #2162), and with worker_shutdown_timeout.

[...]
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: Calculating requests per second, per IP address

2024-01-01 Thread J Carter
Hello,

On Fri, 29 Dec 2023 09:54:30 -0300
Rejaine Monteiro  wrote:

> Hi all,
> 
> I´m running Nginx community edition and need to implement rate limiting
> 
> There's plenty of guides out there on how to do this, but no guides on how
> to get real values/stats from the access logs
> 
> 
> What I need to get from the NGINX access logs is:
> 
> - Requests per minute, per IP address
> 
> - Average requests per minute, derived from all IP addresses
> 
> - Max requests per minute, per IP address
> 
> We have a few endpoints with different functionalities and we simply cannot
> have a common rule that works for everyone.
> 
> Any tips on a tool or script that could help generate this type of
> information (if possible in real time or collect this data for future
> analysis)?
> 
> 
> I appreciate any tips.
> 

There isn't an existing bespoke tool for this (at least not publicly 
available).

Normally such metrics are generated by:
A) Feeding access logs into a log aggregator platform (Splunk, Loki)
B) Performing / creating custom queries on that platform, to generate
such reports.

Of note, Loki (which is AGPL/free) has a nice user experience for 
this.

https://grafana.com/docs/loki/latest/query/metric_queries/

Other than log aggregators, writing a script (python, perl, bash) is 
likely the fastest approach. Consider sharing it if you do - I'm sure 
others will find it useful.

If you're looking for existing scripts as a starting point, there may be
similar tools for Apache that you could adapt for nginx. Both use
the 'Common Log Format' for access logs by default.

https://github.com/ajohnstone/apache-log-stats

Something like this.
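
Or, as a rough awk sketch of per-IP requests per minute from a Common
Log Format access log (highest counts first):

awk '{ m = substr($4, 2, 17); n[$1 " " m]++ }
     END { for (k in n) print n[k], k }' access.log | sort -rn | head

The max per minute, per IP is then the top line, and the average can be
derived from the same output with a second pass.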
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


[PATCH] Image filter: fixed sharpen parsing

2023-12-30 Thread J Carter
# HG changeset patch
# User J Carter 
# Date 1703993644 0
#  Sun Dec 31 03:34:04 2023 +
# Node ID 1eaf9f76184f849fb21c5d2d543f2aa2df3be40c
# Parent  ee40e2b1d0833b46128a357fbc84c6e23be9be07
Image filter: fixed sharpen parsing.

Previously non numeric values were read as 0. This patch makes such
values an error.

diff -r ee40e2b1d083 -r 1eaf9f76184f 
src/http/modules/ngx_http_image_filter_module.c
--- a/src/http/modules/ngx_http_image_filter_module.c   Mon Dec 25 21:15:48 
2023 +0400
+++ b/src/http/modules/ngx_http_image_filter_module.c   Sun Dec 31 03:34:04 
2023 +
@@ -1639,7 +1639,7 @@
 }
 
 if (cv.lengths == NULL) {
-n = ngx_http_image_filter_value(&value[1]);
+n = ngx_atoi(value[1].data, value[1].len);
 
 if (n < 0) {
 ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH 00 of 12] HTTP/3 proxying to upstreams

2023-12-28 Thread J Carter
On Thu, 28 Dec 2023 17:23:38 +0300
Vladimir Homutov via nginx-devel  wrote:

> On Thu, Dec 28, 2023 at 04:31:41PM +0300, Maxim Dounin wrote:
> > Hello!
> >
> > On Wed, Dec 27, 2023 at 04:17:38PM +0300, Vladimir Homutov via nginx-devel 
> > wrote:
> >  
> > > On Wed, Dec 27, 2023 at 02:48:04PM +0300, Maxim Dounin wrote:  
> > > > Hello!
> > > >
> > > > On Mon, Dec 25, 2023 at 07:52:41PM +0300, Vladimir Homutov via 
> > > > nginx-devel wrote:
> > > >  
> > > > > Hello, everyone,
> > > > >
> > > > > and Merry Christmas to all!
> > > > >
> > > > > I'm a developer of an nginx fork Angie.  Recently we implemented
> > > > > an HTTP/3 proxy support in our fork [1].
> > > > >
> > > > > We'd like to contribute this functionality to nginx OSS community.
> > > > > Hence here is a patch series backported from Angie to the current
> > > > > head of nginx mainline branch (1.25.3)  
> > > >
> > > > Thank you for the patches.
> > > >
> > > > Are there any expected benefits from HTTP/3 being used as a
> > > > protocol to upstream servers?  
> > >
> > > Personally, I don't see much.
> > >
> > > Probably, faster connection establishing to due 0RTT support (need to be
> > > implemented) and better multiplexing (again, if implemented wisely).
> > > I have made some simple benchmarks, and it looks more or less similar
> > > to usual SSL connections.  
> >
> > Thanks for the details.
> >
> > Multiplexing is available since introduction of the FastCGI
> > protocol, yet to see it working in upstream connections.  As for
> > 0-RTT, using keepalive connections is probably more efficient
> > anyway (and not really needed for upstream connections in most
> > cases as well).  
> 
> With HTTP/3 and keepalive we can have just one quic "connection" per upstream
> server (in extreme). We perform heavy handshake once, and leave it open.
> Next we just create HTTP/3 streams to perform request. They can perfectly
> run in parallel and use same quic connection. Probably, this is something
> worth implementing, with limitations of course: we don't want to mix
> requests from different (classes of) clients in same connection, we
> don't want eternal life of such connection and we need means to control
> level of such multiplexing.
> 
Those heavy handshakes wouldn't be the only concern either...

Lack of upstream multiplexing has come up as a concern in the past
with the grpc module (which lacks it) due to that amplification effect
of client side h2 connections and streams being translated into
x*y upstream connections.

This poses a danger of ephemeral port exhaustion when targeting relatively
few upstream servers (such as proxying to an L4 load balancer instead of 
direct to application servers).

This necessitates provisioning a ton of VIPs and using proxy_bind 
(which isn't always practical / is a pain).
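
For illustration, that workaround usually ends up looking something like
the below (addresses are placeholders):

split_clients $request_id $bind_addr {
    25%  10.0.0.10;
    25%  10.0.0.11;
    25%  10.0.0.12;
    *    10.0.0.13;
}

server {
    ...
    location / {
        proxy_bind $bind_addr;
        proxy_pass https://l4-load-balancer;
    }
}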

It would be exactly the same for h3 (and more so once grpc over h3
eventually becomes solid, especially bidi).

> >  
> > > >
> > > > [...]
> > > >  
> > > > > Probably, the HTTP/3 proxy should be implemented in a separate 
> > > > > module.
> > > > > Currently it is a patch to the HTTP proxy module to minimize 
> > > > > boilerplate.  
> > > >
> > > > Sure.  I'm very much against the idea of mixing different upstream
> > > > protocols in a single protocol module.  
> > >
> > > noted.
> > >  
> > > > (OTOH, there are some uncertain plans to make proxy module able to
> > > > work with other protocols based on the scheme, such as in
> > > > "proxy_pass fastcgi://127.0.0.1:9000;".  This is mostly irrelevant
> > > > though, and might never happen.)  
> > >
> > > well, currently we have separate proxying modules that are similar enough 
> > > to
> > > think about merging them like suggested. Not sure if one big module with
> > > methods will worth it, as semantics is slightly different.
> > >
> > > proxy modules are already addons on top of upstream module, which does
> > > the heavy lifting. What requires improvement is probably the
> > > configuration that makes user to remember many similar directives doing
> > > the same thing but for different protocols.  
> >
> > Yep, making things easier to configure (and modify, if something
> > related to configuration directives is changed or additional
> > protocol is added) is the main motivator.  Still, there are indeed
> > differences between protocol modules, and this makes single module
> > inconvenient sometimes.  As such, plans are uncertain (and the
> > previous attempt to do this failed miserably).
> >
> >
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH] Proxy: altered limit_rate to support variables

2023-12-28 Thread J Carter
Hello,

Thanks for the review and feedback.

On Tue, 26 Dec 2023 19:07:37 +0400
Sergey Kandaurov  wrote:

> > On 26 Nov 2023, at 03:37, J Carter  wrote:
> > 
> > # HG changeset patch
> > # User J Carter 
> > # Date 1700949429 0
> > #  Sat Nov 25 21:57:09 2023 +
> > # Node ID 98306e705015758eab0a05103d90e6bdb1da2819
> > # Parent  f366007dd23a6ce8e8427c1b3042781b618a2ade
> > Proxy: altered limit_rate to support variables  
> 
> Since it changes the upstream interface, I'd rather say:
> 
> Upstream: variables support in proxy_limit_rate and friends.
> 

Yep, makes sense.

>
> The change looks good to me, also it compliments variables support
> in the stream proxy module's directives.
> Any particular use-cases you'd like to share for this functionality?
> 

There are several, many are quite 'involved'.

A variable proxy_limit_rate (driven by external means), combined with 
max_conns set on upstream servers gives a reasonable method of 
dynamically capping upstream bandwidth utilization.

One scenario in which this has come up is when nginx is used as a caching 
server, as often the upstream servers cannot support 100% of the 'real' 
traffic load. 

When the cache is cleared or many items expire concurrently, requests
reaching the upstream can spike, and it is necessary to adjust (down) 
proxy_limit_rate in a semi-continuous fashion to prevent overloading
upstreams as they serve responses.

A novel way of showing this using only core modules is: 

upstream backend {
server 127.0.0.1:8080 max_conns=100;
...
}

... 

server {
listen 80;
location / {
...
proxy_pass http://backend;
proxy_hide_header rate;
proxy_limit_rate $upstream_http_rate;
}
}

...

server {
listen 8080;
location / {
#serve a file
...
add_header rate $my_rate;
}
}

Of course, this is most useful where 'backend' isn't nginx, and doesn't
have its own client-side bandwidth limiting (i.e. limit_rate, 
limit_rate_after, limit_conn) - I've just used it in this example for
convenience.

> > 
> > diff -r f366007dd23a -r 98306e705015 
> > src/http/modules/ngx_http_fastcgi_module.c
> > --- a/src/http/modules/ngx_http_fastcgi_module.cTue Nov 14 15:26:02 
> > 2023 +0400
> > +++ b/src/http/modules/ngx_http_fastcgi_module.cSat Nov 25 21:57:09 
> > 2023 +
> > @@ -375,7 +375,7 @@
> > 
> > { ngx_string("fastcgi_limit_rate"),
> >   NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
> > -  ngx_conf_set_size_slot,
> > +  ngx_http_set_complex_value_size_slot,
> >   NGX_HTTP_LOC_CONF_OFFSET,
> >   offsetof(ngx_http_fastcgi_loc_conf_t, upstream.limit_rate),
> >   NULL },
> > @@ -2898,7 +2898,7 @@
> > 
> > conf->upstream.send_lowat = NGX_CONF_UNSET_SIZE;
> > conf->upstream.buffer_size = NGX_CONF_UNSET_SIZE;
> > -conf->upstream.limit_rate = NGX_CONF_UNSET_SIZE;
> > +conf->upstream.limit_rate = NGX_CONF_UNSET_PTR;
> > 
> > conf->upstream.busy_buffers_size_conf = NGX_CONF_UNSET_SIZE;
> > conf->upstream.max_temp_file_size_conf = NGX_CONF_UNSET_SIZE;
> > @@ -3015,8 +3015,8 @@
> >   prev->upstream.buffer_size,
> >   (size_t) ngx_pagesize);
> > 
> > -ngx_conf_merge_size_value(conf->upstream.limit_rate,
> > -  prev->upstream.limit_rate, 0);
> > +ngx_conf_merge_ptr_value(conf->upstream.limit_rate,
> > +  prev->upstream.limit_rate, NULL);
> > 
> > 
> > ngx_conf_merge_bufs_value(conf->upstream.bufs, prev->upstream.bufs,
> > diff -r f366007dd23a -r 98306e705015 
> > src/http/modules/ngx_http_proxy_module.c
> > --- a/src/http/modules/ngx_http_proxy_module.c  Tue Nov 14 15:26:02 
> > 2023 +0400
> > +++ b/src/http/modules/ngx_http_proxy_module.c  Sat Nov 25 21:57:09 
> > 2023 +
> > @@ -494,7 +494,7 @@
> > 
> > { ngx_string("proxy_limit_rate"),
> >   NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
> > -  ngx_conf_set_size_slot,
> > +  ngx_http_set_complex_value_size_slot,
> >   NGX_HTTP_LOC_CONF_OFFSET,
> >   offsetof(ngx_http_proxy_loc_conf_t, upstream.limit_rate),
> >   NULL },
> > @@ -3371,7 +3371,7 @@
> > 
> > conf->upstream.send_lowat = NGX_CONF_UNSET_SIZE;
> > conf->upstream.buffer_size = NGX_CONF_UNSET_SIZE;
> > -conf->upstream.limit_rate = NGX_CONF_UNSET_SIZE;
> > +conf->upstream.limit_rate = NGX_C

Re: Support on Nginx-ingress V3.4.0

2023-12-22 Thread J Carter
Hello,

On Fri, 22 Dec 2023 12:05:51 +0530
Akash Shrivastava  wrote:

> Hi there,
> Urgent support needed on Nginx-ingress 3.4.0

I'd recommend posting ingress controller related questions on the 
discussions section of its Github repo.

https://github.com/nginxinc/kubernetes-ingress/discussions

> 
> https://github.com/nginxinc/kubernetes-ingress/pull/4428
> 
> What does this mean?
> 

From an end user's perspective all they've done is merge all CRDs 
that used to be in /deployments/crds/ into one file /deploy/crds.yaml,
for convenience.
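
i.e. installing or upgrading the CRDs is now a single apply (adjust the
tag to match your release):

kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v3.4.0/deploy/crds.yaml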

There are also now kustomize templates to re-generate the CRDs 
(I can't think of a reason for a user to use those unless you want to 
apply patches - otherwise just use the single CRD file). 

You can see docs on that, including how to use the single CRD file and the
kustomize templates, here:

https://docs.nginx.com/nginx-ingress-controller/installation/installing-nic/installation-with-manifests/#create-custom-resources

> 
> 
> What consideration needs to be taken before uploading this version?

Generally the release notes will list, from version to version, whether 
there is a breaking change or some special consideration that needs to 
be made.

https://docs.nginx.com/nginx-ingress-controller/releases/

At least from the last version, there don't appear to be any.
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: [PATCH 2 of 2] Win32: extended ngx_random range to 0x7fffffff

2023-12-15 Thread J Carter
Hello, 

Thanks for the review and feedback.

On Sat, 16 Dec 2023 05:57:28 +0300
Maxim Dounin  wrote:

> Hello!
> 
> On Sat, Dec 09, 2023 at 08:42:11AM +, J Carter wrote:
> 
> > On Sat, 09 Dec 2023 07:46:10 +
> > J Carter  wrote:
> >   
> > > # HG changeset patch
> > > # User J Carter 
> > > # Date 1702101635 0
> > > #  Sat Dec 09 06:00:35 2023 +
> > > # Node ID 1a77698f82d2580aa8b8f62ce89b4dbb6d678c5d
> > > # Parent  d9275e982a7188a1ea7855295ffa93362ea9830f
> > > Win32: extended ngx_random range to 0x7fffffff
> > > 
> > > rand() is used on win32. RAND_MAX is implementation defined. win32's is
> > > 0x7fff.
> > > 
> > > Existing uses of ngx_random rely upon 0x7fffffff range provided by
> > > posix implementations of random().
> > > 
> > > diff -r d9275e982a71 -r 1a77698f82d2 src/os/win32/ngx_win32_config.h
> > > --- a/src/os/win32/ngx_win32_config.h Sat Dec 09 05:09:07 2023 +
> > > +++ b/src/os/win32/ngx_win32_config.h Sat Dec 09 06:00:35 2023 +
> > > @@ -280,7 +280,9 @@
> > >  
> > >  #define NGX_HAVE_GETADDRINFO 1
> > >  
> > > -#define ngx_random   rand
> > > +#define ngx_random                                                    \
> > > +((rand() << 16) | (rand() << 1) | (rand() >> 14))
> > > +
> > >  #define ngx_debug_init()
> > >  
> > >
> > 
> > ^ my mistake - copying error..
> > 
> > # HG changeset patch
> > # User J Carter 
> > # Date 1702111094 0
> > #  Sat Dec 09 08:38:14 2023 +
> > # Node ID 10ef59a412a330872fc6d46de64f42e32e997b3a
> > # Parent  d9275e982a7188a1ea7855295ffa93362ea9830f
> > Win32: extended ngx_random range to 0x7fffffff  
> 
> Nitpicking:
> 
> Win32: extended ngx_random() range to 0x7fffffff.

Ah thanks, will include the full stop in future.

> 
> > 
> > rand() is used on win32. RAND_MAX is implementation defined. win32's is
> > 0x7fff.
> > 
> > Existing uses of ngx_random rely upon 0x7fffffff range provided by
> > posix implementations of random().  
> 
> Thanks for catching this.
> 
> As far as I can see, the only module which actually relies on the 
> range is the random index module.  

Yes, that was the obvious one - I suspect upstream_random balancers
might act strangely in extreme cases (I'm not entirely sure though).

For other uses I'm not sure at all, as those are more domain-specific 
(like the resolver).

> Relying on the ngx_random()  range generally looks wrong to me, and 
> we might want to change the 
> code to don't.  OTOH, it's the only way to get a completely 
> uniform distribution, and that's what the module tries to do.  As 
> such, it might be good enough to preserve it as is, at least till 
> further changes to ngx_random().

Yes, the alternatives would seem to be #ifdefs for win32 vs unix, 
adding an 'ngx_random_max', or writing (or borrowing) a posix random()
implementation into nginx's codebase.

> 
> Either way, wider range for ngx_random() should be beneficial in 
> other places.
>
> > 
> > diff -r d9275e982a71 -r 10ef59a412a3 src/os/win32/ngx_win32_config.h
> > --- a/src/os/win32/ngx_win32_config.h   Sat Dec 09 05:09:07 2023 +
> > +++ b/src/os/win32/ngx_win32_config.h   Sat Dec 09 08:38:14 2023 +
> > @@ -280,7 +280,9 @@
> >  
> >  #define NGX_HAVE_GETADDRINFO 1
> >  
> > -#define ngx_random   rand
> > +#define ngx_random()   
> >   \  
> 
> Nitpicking: the "\" character should be at the 79th column (in 
> some files at 78th).  This ensures that a diff won't wrap on a 
> standard 80-column terminal.

Thanks, will note for the future.

>
> > +((rand() << 16) | (rand() << 1) | (rand() >> 14))
> > +
> >  #define ngx_debug_init()  
> 
> Using "|" for random numbers looks utterly wrong to me, even if 
> ORed values are guaranteed to not interfere.
> 
> I would rather use "^", and avoid dependency on the particular 
> value of RAND_MAX (other than POSIX-required minimum of 32767) by 
> using something like
> 
>0x7fffffff & ((rand() << 16) ^ (rand() << 8) ^ rand())
> 
> with proper typecasts.

Yep, makes sense.

> 
> Something like this should work:
> 
> diff --git a/src/os/win32/ngx_win32_config.h b/src/os/win32/ngx_win32_config.h
> --- a/src/os/win32/ngx_win32_config.h
> +++ b/src/os/win32/ngx_win32_config.h
> @@ -280,7 +280,11 @@ typedef int sig_atomic_t
>  
>  #define NGX_HAVE_GETADDRINFO 1
>  
> -#define ngx_random   rand
> +#define ngx_random()                                                      \
> +((long) (0x7fffffff & ( ((uint32_t) rand() << 16)                         \
> +  ^ ((uint32_t) rand() << 8)                                              \
> +  ^ ((uint32_t) rand()) )))
> +
>  #define ngx_debug_init()
>  
>  
> 
Looks good to me.

___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: serving files from /proc

2023-12-12 Thread J Carter
On Wed, 13 Dec 2023 02:45:54 +
J Carter  wrote:

> Hello,
> 
> On Tue, 12 Dec 2023 16:17:11 +0100
> Jérôme Loyet  wrote:
> 
> > Hello,
> > 
> > I'm trying to serve some files from /proc but nginx return a 0 bytes
> > content because the file size of many files in /proc/ tree is simply 0 by
> > design.  
> 
> That is correct, reading Virtual File System files would require special 
> handling compared to regular files. Nginx doesn't appear to have this.
> 
> > 
> > here is my sample conf file:
> > ...
> > location = /route {
> >   root /proc/net;
> > }
> > 
> > and the result of the corresponding curl:  
> > > GET /route HTTP/1.1
> > > Host: 172.16.0.3:1513
> > > User-Agent: curl/7.68.0
> > > Accept: */*
> > >
> > * Mark bundle as not supporting multiuse
> > < HTTP/1.1 200 OK
> > < Server: nginx/1.23.1
> > < Date: Tue, 12 Dec 2023 15:08:00 GMT
> > < Content-Type: text/plain
> > < Content-Length: 0
> > < Last-Modified: Tue, 12 Dec 2023 15:08:00 GMT
> > < Connection: keep-alive
> > < ETag: "65787750-0"
> > < Accept-Ranges: bytes
> > 
> > is there a simple way to configure nginx to return the cotent of
> > /proc/net/route or any other file in /proc ?  
> 
> For a solution entirely within nginx, you can use njs to serve 
> /proc/net/route.
> 
> 
> https://github.com/nginx/njs-examples?tab=readme-ov-file#file-io-misc-file-io
> 
> Njs does handle VFS/zero sized files in a special way. If the file is zero
> sized, it will read up to 4096 bytes from the file.
> 

*Correction - it reads up until EOF. So a VFS file of any size will
actually work, even above 4096 bytes.

> 
> https://github.com/nginx/njs/blob/2c937050a4589bbc196db334fef22e6de772dd49/external/njs_fs_module.c#L2732
> 
> 
> > thanks
> > regards
> > ++ Jerome  
> ___
> nginx mailing list
> nginx@nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: serving files from /proc

2023-12-12 Thread J Carter
Hello,

On Tue, 12 Dec 2023 16:17:11 +0100
Jérôme Loyet  wrote:

> Hello,
> 
> I'm trying to serve some files from /proc but nginx return a 0 bytes
> content because the file size of many files in /proc/ tree is simply 0 by
> design.

That is correct, reading Virtual File System files would require special 
handling compared to regular files. Nginx doesn't appear to have this.

> 
> here is my sample conf file:
> ...
> location = /route {
>   root /proc/net;
> }
> 
> and the result of the corresponding curl:
> > GET /route HTTP/1.1
> > Host: 172.16.0.3:1513
> > User-Agent: curl/7.68.0
> > Accept: */*
> >  
> * Mark bundle as not supporting multiuse
> < HTTP/1.1 200 OK
> < Server: nginx/1.23.1
> < Date: Tue, 12 Dec 2023 15:08:00 GMT
> < Content-Type: text/plain
> < Content-Length: 0
> < Last-Modified: Tue, 12 Dec 2023 15:08:00 GMT
> < Connection: keep-alive
> < ETag: "65787750-0"
> < Accept-Ranges: bytes
> 
> is there a simple way to configure nginx to return the cotent of
> /proc/net/route or any other file in /proc ?

For a solution entirely within nginx, you can use njs to serve /proc/net/route.


https://github.com/nginx/njs-examples?tab=readme-ov-file#file-io-misc-file-io

Njs does handle VFS/zero sized files in a special way. If the file is zero
sized, it will read up to 4096 bytes from the file.


https://github.com/nginx/njs/blob/2c937050a4589bbc196db334fef22e6de772dd49/external/njs_fs_module.c#L2732
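
As a rough sketch (file, function and location names are just
illustrative):

# nginx.conf
js_import proc from proc.js;

location = /route {
    js_content proc.route;
}

// proc.js
const fs = require('fs');

function route(r) {
    // readFileSync returns the file contents despite the zero st_size
    let body = fs.readFileSync('/proc/net/route', 'utf8');
    r.headersOut['Content-Type'] = 'text/plain';
    r.return(200, body);
}

export default { route };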


> thanks
> regards
> ++ Jerome
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: [PATCH 2 of 2] Win32: extended ngx_random range to 0x7fffffff

2023-12-09 Thread J Carter
On Sat, 09 Dec 2023 07:46:10 +
J Carter  wrote:

> # HG changeset patch
> # User J Carter 
> # Date 1702101635 0
> #  Sat Dec 09 06:00:35 2023 +
> # Node ID 1a77698f82d2580aa8b8f62ce89b4dbb6d678c5d
> # Parent  d9275e982a7188a1ea7855295ffa93362ea9830f
> Win32: extended ngx_random range to 0x7fffffff
> 
> rand() is used on win32. RAND_MAX is implementation defined. win32's is
> 0x7fff.
> 
> Existing uses of ngx_random rely upon 0x7fffffff range provided by
> posix implementations of random().
> 
> diff -r d9275e982a71 -r 1a77698f82d2 src/os/win32/ngx_win32_config.h
> --- a/src/os/win32/ngx_win32_config.h Sat Dec 09 05:09:07 2023 +
> +++ b/src/os/win32/ngx_win32_config.h Sat Dec 09 06:00:35 2023 +
> @@ -280,7 +280,9 @@
>  
>  #define NGX_HAVE_GETADDRINFO 1
>  
> -#define ngx_random   rand
> +#define ngx_random                                                        \
> +((rand() << 16) | (rand() << 1) | (rand() >> 14))
> +
>  #define ngx_debug_init()
>  
>  

^ my mistake - copying error..

# HG changeset patch
# User J Carter 
# Date 1702111094 0
#  Sat Dec 09 08:38:14 2023 +
# Node ID 10ef59a412a330872fc6d46de64f42e32e997b3a
# Parent  d9275e982a7188a1ea7855295ffa93362ea9830f
Win32: extended ngx_random range to 0x7fffffff

rand() is used on win32. RAND_MAX is implementation defined. win32's is
0x7fff.

Existing uses of ngx_random rely upon 0x7fffffff range provided by
posix implementations of random().

diff -r d9275e982a71 -r 10ef59a412a3 src/os/win32/ngx_win32_config.h
--- a/src/os/win32/ngx_win32_config.h   Sat Dec 09 05:09:07 2023 +
+++ b/src/os/win32/ngx_win32_config.h   Sat Dec 09 08:38:14 2023 +
@@ -280,7 +280,9 @@
 
 #define NGX_HAVE_GETADDRINFO 1
 
-#define ngx_random   rand
+#define ngx_random()                                                       \
+((rand() << 16) | (rand() << 1) | (rand() >> 14))
+
 #define ngx_debug_init()
 
 
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH 1 of 2] Events: debug_random directive

2023-12-09 Thread J Carter
# HG changeset patch
# User J Carter 
# Date 1702098547 0
#  Sat Dec 09 05:09:07 2023 +
# Node ID d9275e982a7188a1ea7855295ffa93362ea9830f
# Parent  f366007dd23a6ce8e8427c1b3042781b618a2ade
Events: debug_random directive

This directive enables debug logging for only a percentage of the connections
that match debug_connection(s).

This directive is useful for sample-style debugging of nginx under high-load
scenarios, where logging all connections would incur excessive overhead.

This directive takes a value between 0.01%-99.99% inclusive.

Example usage:

events {
 worker_connections  1024;
 debug_connection0.0.0.0/0;
 debug_connection::0;
 debug_random1%;
}

diff -r f366007dd23a -r d9275e982a71 src/event/ngx_event.c
--- a/src/event/ngx_event.c Tue Nov 14 15:26:02 2023 +0400
+++ b/src/event/ngx_event.c Sat Dec 09 05:09:07 2023 +
@@ -30,6 +30,8 @@
 static char *ngx_event_use(ngx_conf_t *cf, ngx_command_t *cmd, void *conf);
 static char *ngx_event_debug_connection(ngx_conf_t *cf, ngx_command_t *cmd,
 void *conf);
+static char *ngx_event_debug_random(ngx_conf_t *cf, ngx_command_t *cmd,
+void *conf);
 
 static void *ngx_event_core_create_conf(ngx_cycle_t *cycle);
 static char *ngx_event_core_init_conf(ngx_cycle_t *cycle, void *conf);
@@ -162,6 +164,13 @@
   0,
   NULL },
 
+  { ngx_string("debug_random"),
+  NGX_EVENT_CONF|NGX_CONF_TAKE1,
+  ngx_event_debug_random,
+  0,
+  0,
+  NULL },
+
   ngx_null_command
 };
 
@@ -1254,6 +1263,51 @@
 }
 
 
+static char *
+ngx_event_debug_random(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
+{
+#if (NGX_DEBUG)
+ngx_event_conf_t  *ecf = conf;
+
+ngx_int_t n;
+ngx_str_t *value;
+
+if (ecf->debug_percent != NGX_CONF_UNSET_UINT) {
+return "is duplicate";
+}
+
+value = cf->args->elts;
+
+if (value[1].len == 0 || value[1].data[value[1].len - 1] != '%') {
+goto invalid;
+}
+
+n = ngx_atofp(value[1].data, value[1].len - 1, 2);
+if (n == NGX_ERROR || n == 0 || n > 9999) {
+goto invalid;
+}
+
+ecf->debug_percent = n;
+
+return NGX_CONF_OK;
+
+invalid:
+
+ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+   "invalid percent value \"%V\"", &value[1]);
+
+#else
+
+ngx_conf_log_error(NGX_LOG_WARN, cf, 0,
+   "\"debug_random\" is ignored, you need to rebuild "
+   "nginx using --with-debug option to enable it");
+
+#endif
+
+return NGX_CONF_ERROR;
+}
+
+
 static void *
 ngx_event_core_create_conf(ngx_cycle_t *cycle)
 {
@@ -1279,6 +1333,8 @@
 return NULL;
 }
 
+ecf->debug_percent = NGX_CONF_UNSET_UINT;
+
 #endif
 
 return ecf;
@@ -1369,5 +1425,9 @@
 ngx_conf_init_value(ecf->accept_mutex, 0);
 ngx_conf_init_msec_value(ecf->accept_mutex_delay, 500);
 
+#if (NGX_DEBUG)
+ngx_conf_init_uint_value(ecf->debug_percent, 0);
+#endif
+
 return NGX_CONF_OK;
 }
diff -r f366007dd23a -r d9275e982a71 src/event/ngx_event.h
--- a/src/event/ngx_event.h Tue Nov 14 15:26:02 2023 +0400
+++ b/src/event/ngx_event.h Sat Dec 09 05:09:07 2023 +
@@ -438,6 +438,7 @@
 u_char   *name;
 
 #if (NGX_DEBUG)
+ngx_uint_t    debug_percent;
 ngx_array_t   debug_connection;
 #endif
 } ngx_event_conf_t;
diff -r f366007dd23a -r d9275e982a71 src/event/ngx_event_accept.c
--- a/src/event/ngx_event_accept.c  Tue Nov 14 15:26:02 2023 +0400
+++ b/src/event/ngx_event_accept.c  Sat Dec 09 05:09:07 2023 +
@@ -561,6 +561,12 @@
 break;
 }
 
+if (ecf->debug_percent
+ && ecf->debug_percent <= (ngx_uint_t) ngx_random() % 10000)
+{
+break;
+}
+
 c->log->log_level = NGX_LOG_DEBUG_CONNECTION|NGX_LOG_DEBUG_ALL;
 break;
 
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH 0 of 2] Debug_random directive & win32 ngx_random fix

2023-12-09 Thread J Carter
This patch series reworks previous proposal for debug_random directive,
with (re)consideration for advice on unessecary complexity.

(see mailing list, August 2nd 2023)
https://mailman.nginx.org/pipermail/nginx-devel/2023-August/BQNBXJFIIHFLHRE2ASU4QHPTMWPJUP5D.html

Resubmitting as a series due to later discovered issue with win32's 
ngx_random.

- First patch implements debug_random directive.
- Second patch provides a dependent fix for win32 api's implementation
of ngx_random.

Any feedback would be much appreciated.
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: Limiting number of client TLS connections

2023-12-08 Thread J Carter
Hello again,

By coincidence, and since my previous email, someone has kindly submitted a 
fixed 
window rate limiting example to the NJS examples Github repo. 

https://github.com/nginx/njs-examples/pull/31/files/ba33771cefefdc019ba76bd1f176e25e18adbc67

https://github.com/nginx/njs-examples/tree/master/conf/http/rate-limit

The example is for rate limiting in http context, however I believe you'd be 
able to adapt this for stream (and your use case) with minor modifications 
(use js_access rather than 'if' as mentioned previously, setting key to a 
fixed value).

Just forwarding it on in case you need it.
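
For reference, the stream-side wiring might look something like the below
(a sketch only - the ratelimit.js module and its access() function are
assumed to be your own adaptation of the linked http example):

stream {
    js_import ratelimit from ratelimit.js;

    #shared between workers, holds the rate counters
    js_shared_dict_zone zone=ratelimits:1m;

    server {
        listen 443;
        #js_access takes the place of the 'if' used in the http example
        js_access ratelimit.access;
        proxy_pass 127.0.0.1:8443;
    }
}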


On Sat, 25 Nov 2023 16:03:37 +0800
Zero King  wrote:

> Hi Jordan,
> 
> Thanks for your suggestion. I will give it a try and also try to push 
> our K8s team to implement a firewall if possible.
> 
> On 20/11/23 10:33, J Carter wrote:
> > Hello,
> >
> > A self contained solution would be to double proxy, first through nginx 
> > stream server and then locally back to nginx http server (with proxy_pass 
> > via unix socket, or to localhost on a different port).
> >
> > You can implement your own custom rate limiting logic in the stream server 
> > with NJS (js_access) and use the new js_shared_dict_zone (which is shared 
> > between workers) for persistently storing rate calculations.
> >
> > You'd have additional overhead from the stream tcp proxy and the njs, but 
> > it shouldn't be too great (at least compared to overhead of TLS handshakes).
> >
> > Regards,
> > Jordan Carter.
> >
> > 
> > From: nginx  on behalf of Zero King 
> > Sent: Saturday, November 18, 2023 6:44 AM
> > To: nginx@nginx.org
> > Subject: Limiting number of client TLS connections
> >
> > Hi all,
> >
> > I want Nginx to limit the rate of new TLS connections and the total (or
> > per-worker) number of all client-facing connections, so that under a
> > sudden surge of requests, existing connections can get enough share of
> > CPU to be served properly, while excessive connections are rejected and
> > retried against other servers in the cluster.
> >
> > I am running Nginx on a managed Kubernetes cluster, so tuning kernel
> > parameters or configuring layer 4 firewall is not an option.
> >
> > To serve existing connections well, worker_connections can not be used,
> > because it also affects connections with proxied servers.
> >
> > Is there a way to implement these measures in Nginx configuration?
> > ___
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: NIC deletes all listeners when rejecting new listener on reserved port

2023-12-08 Thread J Carter
Hi Brad,

I'd recommend raising your concern on the NIC GitHub repo's issue tracker.

https://github.com/nginxinc/kubernetes-ingress/issues


On Fri, 8 Dec 2023 04:55:12 +
Brad Bishop via nginx  wrote:

> Hi Folks,
> 
> We're using NGINX Ingress Controller 3.0.2 (NGINX 1.23.3) in AKS on a couple 
> AKSUbuntu-2204gen2containerd-202309.06.0 nodes. We do regular helm release 
> installs of a single-tenanted TCP & HTTP service for law firms. Today we had 
> a P1 issue when we added a listener for a new law firm to GlobalConfiguration 
> and set the port number to 9113. NGINX rejected the change because 9113 is 
> reserved for prometheus - fair enough. But it also immediately deleted all 
> other existing listeners, which broke 100 TransportServers and blocked access 
> to 100 law firms. We reproduced this on a second AKS cluster. Is this the 
> intended behaviour? 
> 
> I expected in this case that NGINX would reject the bad config and revert to 
> last-good config, and the docs suggest this is what should happen:
> https://docs.nginx.com/nginx-ingress-controller/configuration/global-configuration/globalconfiguration-resource/#:~:text=the%20Ingress%20Controller%20will%20ignore%20the%20new%20version
> 
> Thanks,
> Brad Bishop
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: [PATCH] Added merge inheritance to proxy_set_header

2023-11-27 Thread J Carter
Hello Roman,

Thank you for the review and feedback.

On Mon, 27 Nov 2023 16:18:21 +0400
Roman Arutyunyan  wrote:

> Hi Jordan,
> 
> On Thu, Nov 23, 2023 at 02:52:55PM +, J Carter wrote:
> > # HG changeset patch
> > # User J Carter 
> > # Date 1700751016 0
#  Thu Nov 23 14:50:16 2023 +0000
> > # Node ID cc79903ca02eff8aa87238a0f801f8070425d9ab
> > # Parent  7ec761f0365f418511e30b82e9adf80bc56681df
> > Added merge inheritance to proxy_set_header
> > 
> > This patch introduces 'inherit' argument to proxy_set_header
> > directive. When marked as such it will be merge inherited into child
> > contexts regardless of the presence of other proxy_set_header
> > directives within those contexts.
> > 
> > This patch also introduces the 'proxy_set_header_inherit' directive
> > which blocks the merge inheritance in receiving contexts when set to off.
> > 
> > The purpose of the added mechanics is to reduce repetition within the
> > nginx configuration for universally set (or boilerplate) request
> > headers, while maintaining flexibility to set additional headers for
> > specific paths.
> > 
> > There is no change in behavior for existing configurations.
> > 
> > Example below:
> > 
> > server {
> > 
> > ...
> > 
> > proxy_set_header Host $host inherit;
> > proxy_set_header Connection "" inherit;
> > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for inherit;
> > 
> > location /1/ {
> > #sets this header in addition to server context ones.
> > proxy_set_header customHeader1 "customvalue1";
> > proxy_pass http://backend1;
> > }
> > 
> > location /2/ {
> > #sets this header in addition to server context ones.
> > proxy_set_header customheader2 "customValue2";
> > proxy_pass http://backend1;
> > }
> > 
> > location /3/ {
> > #no location specific headers, only server context ones set.
> > proxy_pass http://backend1;
> > }
> > 
> > location /special/ {
> > #Blocks merge inheritance from server context.
> > proxy_set_header_inherit off;
> > proxy_set_header Host "notserverthost.co";
> > proxy_set_header Connection "";
> > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> > proxy_pass http://backend2;
> > }
> > 
> > }
> 
> Even though the inheritance rules we have now are not convenient for some
> people, they do have one benefit - they are simple.  Looking into a single
> location is enough to have a full list of headers.  You introduce inheritable
> headers which break this simplicity.
> 
> I also though about this issue and I think it'd be better to add a separate
> directive/parameter which would explicitly inherit all headers from the outer
> scope.

I agree, your suggested approach significantly reduces code and configuration
complexity, and still has good flexibility.

> 
> server {
> proxy_set_header Host "notserverthost.co";
> 
> location / {
> ...
> proxy_set_header inherit; # explicitly inherit from above
> 
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> }
> }
> 
> [..]
> 

I've kept it as a separate directive for a few reasons:

- I think it makes the config syntax a little less ambiguous (i.e. harder to
misread as setting a header called 'inherit' to an empty value).

- It avoids a custom handler for proxy_set_header directive parsing.

- It allows for this directive itself to be inherited (in normal fashion)
for further reduction in configuration, while still allowing for disabling 
in lower contexts.

I know this last point is against your explicitness point, however the user
would be free to choose between the two approaches. Personally I prefer the one
with fewer config lines.

Please find the revised patch below: 


# HG changeset patch
# User J Carter 
# Date 1701107326 0
#  Mon Nov 27 17:48:46 2023 +0000
# Node ID 33d34dd487501932f141cb6c2124959580ce38e5
# Parent  7ec761f0365f418511e30b82e9adf80bc56681df
Proxy: proxy_set_header_inherit directive

This patch introduces proxy_set_header_inherit which enables merge inheritance
of proxy_set_header directives from higher context when set to 'on' in the
receiving context.

The purpose of the added mechanics is to reduce repetition within the
nginx configuration for universally set (or boilerplate) request
headers, while maintaining flexibility to set additional headers for
specific paths.
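
For illustration, a configuration using the revised directive might look
like this (a sketch of the proposed syntax only):

server {

    proxy_set_header Host $host;
    proxy_set_header Connection "";
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    location /1/ {
        #opts in: inherits the three server context headers, adds one.
        proxy_set_header_inherit on;
        proxy_set_header customHeader1 "customvalue1";
        proxy_pass http://backend1;
    }

    location /special/ {
        #default behavior: server context headers are not merged in.
        proxy_set_header Host "notserverthost.co";
        proxy_pass http://backend2;
    }
}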

[PATCH] Proxy: altered limit_rate to support variables

2023-11-25 Thread J Carter
# HG changeset patch
# User J Carter 
# Date 1700949429 0
#  Sat Nov 25 21:57:09 2023 +0000
# Node ID 98306e705015758eab0a05103d90e6bdb1da2819
# Parent  f366007dd23a6ce8e8427c1b3042781b618a2ade
Proxy: altered limit_rate to support variables
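
Example usage (a sketch of what variable support enables; the map, header
and values below are illustrative only):

map $http_x_speed_tier $rate {
    default    100k;
    premium    0;      #0 disables the limit
}

server {
    location / {
        proxy_limit_rate $rate;
        proxy_pass http://backend;
    }
}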

diff -r f366007dd23a -r 98306e705015 src/http/modules/ngx_http_fastcgi_module.c
--- a/src/http/modules/ngx_http_fastcgi_module.c Tue Nov 14 15:26:02 2023 +0400
+++ b/src/http/modules/ngx_http_fastcgi_module.c Sat Nov 25 21:57:09 2023 +0000
@@ -375,7 +375,7 @@
 
 { ngx_string("fastcgi_limit_rate"),
   NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
-  ngx_conf_set_size_slot,
+  ngx_http_set_complex_value_size_slot,
   NGX_HTTP_LOC_CONF_OFFSET,
   offsetof(ngx_http_fastcgi_loc_conf_t, upstream.limit_rate),
   NULL },
@@ -2898,7 +2898,7 @@
 
 conf->upstream.send_lowat = NGX_CONF_UNSET_SIZE;
 conf->upstream.buffer_size = NGX_CONF_UNSET_SIZE;
-conf->upstream.limit_rate = NGX_CONF_UNSET_SIZE;
+conf->upstream.limit_rate = NGX_CONF_UNSET_PTR;
 
 conf->upstream.busy_buffers_size_conf = NGX_CONF_UNSET_SIZE;
 conf->upstream.max_temp_file_size_conf = NGX_CONF_UNSET_SIZE;
@@ -3015,8 +3015,8 @@
   prev->upstream.buffer_size,
   (size_t) ngx_pagesize);
 
-ngx_conf_merge_size_value(conf->upstream.limit_rate,
-  prev->upstream.limit_rate, 0);
+ngx_conf_merge_ptr_value(conf->upstream.limit_rate,
+  prev->upstream.limit_rate, NULL);
 
 
 ngx_conf_merge_bufs_value(conf->upstream.bufs, prev->upstream.bufs,
diff -r f366007dd23a -r 98306e705015 src/http/modules/ngx_http_proxy_module.c
--- a/src/http/modules/ngx_http_proxy_module.c  Tue Nov 14 15:26:02 2023 +0400
+++ b/src/http/modules/ngx_http_proxy_module.c  Sat Nov 25 21:57:09 2023 +0000
@@ -494,7 +494,7 @@
 
 { ngx_string("proxy_limit_rate"),
   NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
-  ngx_conf_set_size_slot,
+  ngx_http_set_complex_value_size_slot,
   NGX_HTTP_LOC_CONF_OFFSET,
   offsetof(ngx_http_proxy_loc_conf_t, upstream.limit_rate),
   NULL },
@@ -3371,7 +3371,7 @@
 
 conf->upstream.send_lowat = NGX_CONF_UNSET_SIZE;
 conf->upstream.buffer_size = NGX_CONF_UNSET_SIZE;
-conf->upstream.limit_rate = NGX_CONF_UNSET_SIZE;
+conf->upstream.limit_rate = NGX_CONF_UNSET_PTR;
 
 conf->upstream.busy_buffers_size_conf = NGX_CONF_UNSET_SIZE;
 conf->upstream.max_temp_file_size_conf = NGX_CONF_UNSET_SIZE;
@@ -3515,8 +3515,8 @@
   prev->upstream.buffer_size,
   (size_t) ngx_pagesize);
 
-ngx_conf_merge_size_value(conf->upstream.limit_rate,
-  prev->upstream.limit_rate, 0);
+ngx_conf_merge_ptr_value(conf->upstream.limit_rate,
+  prev->upstream.limit_rate, NULL);
 
 ngx_conf_merge_bufs_value(conf->upstream.bufs, prev->upstream.bufs,
   8, ngx_pagesize);
diff -r f366007dd23a -r 98306e705015 src/http/modules/ngx_http_scgi_module.c
--- a/src/http/modules/ngx_http_scgi_module.c   Tue Nov 14 15:26:02 2023 +0400
+++ b/src/http/modules/ngx_http_scgi_module.c   Sat Nov 25 21:57:09 2023 +0000
@@ -223,7 +223,7 @@
 
 { ngx_string("scgi_limit_rate"),
   NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
-  ngx_conf_set_size_slot,
+  ngx_http_set_complex_value_size_slot,
   NGX_HTTP_LOC_CONF_OFFSET,
   offsetof(ngx_http_scgi_loc_conf_t, upstream.limit_rate),
   NULL },
@@ -1301,7 +1301,7 @@
 
 conf->upstream.send_lowat = NGX_CONF_UNSET_SIZE;
 conf->upstream.buffer_size = NGX_CONF_UNSET_SIZE;
-conf->upstream.limit_rate = NGX_CONF_UNSET_SIZE;
+conf->upstream.limit_rate = NGX_CONF_UNSET_PTR;
 
 conf->upstream.busy_buffers_size_conf = NGX_CONF_UNSET_SIZE;
 conf->upstream.max_temp_file_size_conf = NGX_CONF_UNSET_SIZE;
@@ -1413,8 +1413,8 @@
   prev->upstream.buffer_size,
   (size_t) ngx_pagesize);
 
-ngx_conf_merge_size_value(conf->upstream.limit_rate,
-  prev->upstream.limit_rate, 0);
+ngx_conf_merge_ptr_value(conf->upstream.limit_rate,
+  prev->upstream.limit_rate, NULL);
 
 
 ngx_conf_merge_bufs_value(conf->upstream.bufs, prev->upstream.bufs,
diff -r f366007dd23a -r 98306e705015 src/http/modules/ngx_http_uwsgi_module.c
--- a/src/http/modules/ngx_http_uwsgi_module.c  Tue Nov 14 15:26:02 2023 +0400
+++ b/src/http/modules/ngx_http_uwsgi_module.c  Sat Nov 25 21:57:09 2023 +0000
@@ -289,7 +289,7 @@
 
 { ngx_string("uwsgi_limit_rate"),
   NGX_HTTP_MAIN_CONF|NG

Re: Limiting number of client TLS connections

2023-11-25 Thread J Carter
No problem at all :)

One other suggestion if you do go down the double proxy + njs route. Keep an 
eye on the
nginx-devel mailing list (or nginx release notes) for this patch series 
https://mailman.nginx.org/pipermail/nginx-devel/2023-November/QUTQYBNAHLMQMGTKQK57IXDXD23VVIQO.html

The last patch in the series will make proxying from stream to http 
significantly
more efficient, if merged.

On Sat, 25 Nov 2023 16:03:37 +0800
Zero King  wrote:

> Hi Jordan,
> 
> Thanks for your suggestion. I will give it a try and also try to push 
> our K8s team to implement a firewall if possible.
> 
> On 20/11/23 10:33, J Carter wrote:
> > Hello,
> >
> > A self contained solution would be to double proxy, first through nginx 
> > stream server
> > and then locally back to nginx http server (with proxy_pass via unix 
> > socket, or to
> > localhost on a different port).
> >
> > You can implement your own custom rate limiting logic in the stream server 
> > with NJS
> > (js_access) and use the new js_shared_dict_zone (which is shared between 
> > workers) for
> > persistently storing rate calculations.
> >
> > You'd have additional overhead from the stream tcp proxy and the njs, but it
> > shouldn't be too great (at least compared to overhead of TLS handshakes).
> >
> > Regards,
> > Jordan Carter.
> >
> > 
> > From: nginx  on behalf of Zero King 
> > Sent: Saturday, November 18, 2023 6:44 AM
> > To: nginx@nginx.org
> > Subject: Limiting number of client TLS connections
> >
> > Hi all,
> >
> > I want Nginx to limit the rate of new TLS connections and the total (or
> > per-worker) number of all client-facing connections, so that under a
> > sudden surge of requests, existing connections can get enough share of
> > CPU to be served properly, while excessive connections are rejected and
> > retried against other servers in the cluster.
> >
> > I am running Nginx on a managed Kubernetes cluster, so tuning kernel
> > parameters or configuring layer 4 firewall is not an option.
> >
> > To serve existing connections well, worker_connections can not be used,
> > because it also affects connections with proxied servers.
> >
> > Is there a way to implement these measures in Nginx configuration?
> > ___
> > nginx mailing list
> > nginx@nginx.org
> > https://mailman.nginx.org/mailman/listinfo/nginx
> > ___
> > nginx mailing list
> > nginx@nginx.org
> > https://mailman.nginx.org/mailman/listinfo/nginx  
> ___
> nginx mailing list
> nginx@nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


[PATCH] Added merge inheritance to proxy_set_header

2023-11-23 Thread J Carter
# HG changeset patch
# User J Carter 
# Date 1700751016 0
#  Thu Nov 23 14:50:16 2023 +0000
# Node ID cc79903ca02eff8aa87238a0f801f8070425d9ab
# Parent  7ec761f0365f418511e30b82e9adf80bc56681df
Added merge inheritance to proxy_set_header

This patch introduces 'inherit' argument to proxy_set_header
directive. When marked as such it will be merge inherited into child
contexts regardless of the presence of other proxy_set_header
directives within those contexts.

This patch also introduces the 'proxy_set_header_inherit' directive
which blocks the merge inheritance in receiving contexts when set to off.

The purpose of the added mechanics is to reduce repetition within the
nginx configuration for universally set (or boilerplate) request
headers, while maintaining flexibility to set additional headers for
specific paths.

There is no change in behavior for existing configurations.

Example below:

server {

...

proxy_set_header Host $host inherit;
proxy_set_header Connection "" inherit;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for inherit;

location /1/ {
#sets this header in addition to server context ones.
proxy_set_header customHeader1 "customvalue1";
proxy_pass http://backend1;
}

location /2/ {
#sets this header in addition to server context ones.
proxy_set_header customheader2 "customValue2";
proxy_pass http://backend1;
}

location /3/ {
#no location specific headers, only server context ones set.
proxy_pass http://backend1;
}

location /special/ {
#Blocks merge inheritance from server context.
proxy_set_header_inherit off;
proxy_set_header Host "notserverthost.co";
proxy_set_header Connection "";
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://backend2;
}

}

TODO:
- Repeat this implementation for grpc_set_header

diff -r 7ec761f0365f -r cc79903ca02e src/http/modules/ngx_http_proxy_module.c
--- a/src/http/modules/ngx_http_proxy_module.c  Thu Oct 26 23:35:09 2023 +0300
+++ b/src/http/modules/ngx_http_proxy_module.c  Thu Nov 23 14:50:16 2023 +0000
@@ -69,6 +69,11 @@
 ngx_str_t  uri;
 } ngx_http_proxy_vars_t;
 
+typedef struct  {
+ngx_str_t  key;
+ngx_str_t  value;
+ngx_uint_t inherit;  /* unsigned  inherit:1 */
+} ngx_http_proxy_header_src_t;
 
 typedef struct {
 ngx_array_t   *flushes;
@@ -117,6 +122,8 @@
 ngx_uint_t headers_hash_max_size;
 ngx_uint_t headers_hash_bucket_size;
 
+ngx_flag_t headers_inherit;
+
 #if (NGX_HTTP_SSL)
 ngx_uint_t ssl;
 ngx_uint_t ssl_protocols;
@@ -215,6 +222,8 @@
 void *conf);
 static char *ngx_http_proxy_store(ngx_conf_t *cf, ngx_command_t *cmd,
 void *conf);
+static char *ngx_http_proxy_set_header(ngx_conf_t *cf, ngx_command_t *cmd,
+void *conf);
 #if (NGX_HTTP_CACHE)
 static char *ngx_http_proxy_cache(ngx_conf_t *cf, ngx_command_t *cmd,
 void *conf);
@@ -409,12 +418,19 @@
   NULL },
 
 { ngx_string("proxy_set_header"),
-  NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE2,
-  ngx_conf_set_keyval_slot,
+  NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE23,
+  ngx_http_proxy_set_header,
   NGX_HTTP_LOC_CONF_OFFSET,
   offsetof(ngx_http_proxy_loc_conf_t, headers_source),
   NULL },
 
+{ ngx_string("proxy_set_header_inherit"),
+  NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
+  ngx_conf_set_flag_slot,
+  NGX_HTTP_LOC_CONF_OFFSET,
+  offsetof(ngx_http_proxy_loc_conf_t, headers_inherit),
+  NULL },
+
 { ngx_string("proxy_headers_hash_max_size"),
   NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
   ngx_conf_set_num_slot,
@@ -3400,6 +3416,8 @@
 
 conf->upstream.intercept_errors = NGX_CONF_UNSET;
 
+conf->headers_inherit = NGX_CONF_UNSET;
+
 #if (NGX_HTTP_SSL)
 conf->upstream.ssl_session_reuse = NGX_CONF_UNSET;
 conf->upstream.ssl_name = NGX_CONF_UNSET_PTR;
@@ -3444,13 +3462,15 @@
 ngx_http_proxy_loc_conf_t *prev = parent;
 ngx_http_proxy_loc_conf_t *conf = child;
 
-u_char *p;
-size_t  size;
-ngx_int_t   rc;
-ngx_hash_init_t hash;
-ngx_http_core_loc_conf_t   *clcf;
-ngx_http_proxy_rewrite_t   *pr;
-ngx_http_script_compile_t   sc;
+u_char   *p;
+size_tsize;
+ngx_int_t rc;
+ngx_uint_ti;
+ngx_hash_init_t   hash;
+ngx_http_core_loc_co

Re: Limiting number of client TLS connections

2023-11-19 Thread J Carter
Hello,

A self contained solution would be to double proxy, first through nginx stream 
server and then locally back to nginx http server (with proxy_pass via unix 
socket, or to localhost on a different port).

You can implement your own custom rate limiting logic in the stream server with 
NJS (js_access) and use the new js_shared_dict_zone (which is shared between 
workers) for persistently storing rate calculations.

You'd have additional overhead from the stream tcp proxy and the njs, but it 
shouldn't be too great (at least compared to overhead of TLS handshakes).
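
A bare-bones skeleton of that topology (the socket path and js handler
name are placeholders, and the limiting logic itself is omitted):

stream {
    js_import limiter from limiter.js;

    server {
        listen 443;
        js_access limiter.check;    #reject excess connections here
        proxy_pass unix:/var/run/nginx-tls.sock;
    }
}

http {
    server {
        #second hop: TLS is terminated here as usual
        listen unix:/var/run/nginx-tls.sock ssl;
        ...
    }
}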

Regards,
Jordan Carter. 


From: nginx  on behalf of Zero King 
Sent: Saturday, November 18, 2023 6:44 AM
To: nginx@nginx.org
Subject: Limiting number of client TLS connections

Hi all,

I want Nginx to limit the rate of new TLS connections and the total (or
per-worker) number of all client-facing connections, so that under a
sudden surge of requests, existing connections can get enough share of
CPU to be served properly, while excessive connections are rejected and
retried against other servers in the cluster.

I am running Nginx on a managed Kubernetes cluster, so tuning kernel
parameters or configuring layer 4 firewall is not an option.

To serve existing connections well, worker_connections can not be used,
because it also affects connections with proxied servers.

Is there a way to implement these measures in Nginx configuration?
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: How to run a shell script on every request?

2023-08-27 Thread J Carter
+1 on "why are you doing this?".

However, to answer the question - rather than spawning a new shell for every 
request, use a loop in your bash script that is driven by access log output.

For example.

tail -n0 -f /var/log/nginx/access.log | \
while read; 
do echo "one request";
done;

You'll need to handle what happens when log rotation takes place.


On Fri, 18 Aug 2023 21:39:13 -0400
Jeff Dyke  wrote:

> Can you explain why?  I would never tie a script to a request.  I post
> process logs all of the time. If it needs to be in the application, don't
> force it into Nginx.
> 
> Strong statement, but would love to hear why?
> 
> On Fri, Aug 18, 2023 at 9:47 AM Kaushal Shriyan 
> wrote:
> 
> > Hi,
> >
> > I am running nginx version: nginx/1.24.0  on CentOS Linux release 7.9.2009
> > (Core)
> >
> > # nginx -v
> > nginx version: nginx/1.24.0
> > # cat /etc/redhat-release
> > CentOS Linux release 7.9.2009 (Core)
> > #
> >
> > I want to run a shell script every time my nginx server receives any HTTP
> > request. Any simple ways to do this?
> >
> > Please guide me. Thanks in Advance.
> >
> > Best Regards,
> >
> > Kaushal
> >
> > ___
> > nginx mailing list
> > nginx@nginx.org
> > https://mailman.nginx.org/mailman/listinfo/nginx
> >
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: [PATCH] Added debug_random directive

2023-08-26 Thread J Carter
Hello,

Thanks for the review.

> On Mon, Jul 24, 2023 at 05:24:04AM +0100, J Carter wrote:
> 
> [..]
> 
> > # HG changeset patch
> > # User J Carter 
> > # Date 1690171706 -3600
> > #  Mon Jul 24 05:08:26 2023 +0100
> > # Node ID ea91b9aa69d8ce9dc9878209a83b7d538e6bc7e1
> > # Parent  77c1418916f7817a0d020f28d8a08f278a12fe43
> > Added debug_random directive
> 
> This line needs a prefix, for example:
> 
> Events: debug_random directive.
> 

Understood, fixed. 

> See "hg log" for common prefixes used in nginx commits.
> 
> > More bug fixes and style changes.
> > 
> > diff -r 77c1418916f7 -r ea91b9aa69d8 src/event/ngx_event.c
> > --- a/src/event/ngx_event.c Thu Jun 08 14:58:01 2023 +0400
> > +++ b/src/event/ngx_event.c Mon Jul 24 05:08:26 2023 +0100
> > @@ -30,6 +30,8 @@
> >  static char *ngx_event_use(ngx_conf_t *cf, ngx_command_t *cmd, void *conf);
> >  static char *ngx_event_debug_connection(ngx_conf_t *cf, ngx_command_t *cmd,
> >  void *conf);
> > +static char *ngx_event_debug_random(ngx_conf_t *cf, ngx_command_t *cmd,
> > +void *conf);
> >  
> >  static void *ngx_event_core_create_conf(ngx_cycle_t *cycle);
> >  static char *ngx_event_core_init_conf(ngx_cycle_t *cycle, void *conf);
> > @@ -162,6 +164,13 @@
> >0,
> >NULL },
> >  
> > +  { ngx_string("debug_random"),
> > +  NGX_EVENT_CONF|NGX_CONF_TAKE1,
> > +  ngx_event_debug_random,
> > +  0,
> > +  0,
> > +  NULL },
> > +
> >ngx_null_command
> >  };
> >  
> > @@ -496,6 +505,7 @@
> >  size_t   size, cl;
> >  ngx_shm_tshm;
> >  ngx_time_t  *tp;
> > +ngx_cidr_t  *cidr;
> >  ngx_core_conf_t *ccf;
> >  ngx_event_conf_t*ecf;
> >  
> > @@ -507,6 +517,33 @@
> >"using the \"%s\" event method", ecf->name);
> >  }
> >  
> > +if (ecf->debug_connection.nelts == 0
> > +&& ecf->debug_scaled_pct > 0)
> > +{
> > +cidr = ngx_array_push(&ecf->debug_connection);
> > +if (cidr == NULL) {
> > +return NGX_ERROR;
> > +}
> > +/*0.0.0.0/0*/
> > +ngx_memzero(cidr, sizeof(ngx_cidr_t));
> > +cidr->family = AF_INET;
> > +
> > +#ifdef NGX_HAVE_INET6
> > +cidr = ngx_array_push(>debug_connection);
> > +if (cidr == NULL) {
> > +return NGX_ERROR;
> > +}
> > +/*::/0*/
> > +ngx_memzero(cidr, sizeof(ngx_cidr_t));
> > +cidr->family = AF_INET6;
> > +#endif
> > +
> > +} else if (ecf->debug_connection.nelts > 0
> > +   && ecf->debug_scaled_pct == 0)
> > +{
> > +ecf->debug_scaled_pct = NGX_MAX_INT32_VALUE;
> > +}
> > +
> 
> Why do we need this code?  It looks to me like everything can be done in
> ngx_debug_accepted_connection().
> 

The previous attempt (further back in this thread) had this logic in
ngx_debug_accepted_connection(), however I changed it to this style as the
old approach made the logic of that function very messy and introduced new
special cases.

With this method, I am simply utilizing the standard mechanism
(the same one a user would use, debug_connection) to mark all IPv4/IPv6
connections as candidates for selection.

In any case - the "else if" section must be done in this function. 

> >  ccf = (ngx_core_conf_t *) ngx_get_conf(cycle->conf_ctx, 
> > ngx_core_module);
> >  
> >  ngx_timer_resolution = ccf->timer_resolution;
> > @@ -1254,6 +1291,55 @@
> >  }
> >  
> >  
> > +static char *
> > +ngx_event_debug_random(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
> > +{
> > +#if (NGX_DEBUG)
> > +ngx_event_conf_t  *ecf = conf;
> > +
> > +u_char   *c;
> > +ngx_int_t pct;
> > +ngx_uint_tlen;
> > +ngx_str_t*value;
> > +
> > +if (ecf->debug_scaled_pct != NGX_CONF_UNSET_UINT) {
> > +return "is duplicate";
> > +}
> > +
> > +value = cf->args->elts;
> > +c = value[1].data;
> > +len = value[1].len;
> > +
> > +if (c[len-1] != '%') {
> > +ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
> > +   "%V missing '%%'",
> > +   

Re: [PATCH] Added debug_random directive

2023-07-23 Thread J Carter
On Tue, 18 Jul 2023 03:36:02 +
J Carter  wrote:

> On Sat, 15 Jul 2023 03:48:25 +
> J Carter  wrote:
> 
> > # HG changeset patch
> > # User J Carter 
> > # Date 1689391559 -3600
> > #  Sat Jul 15 04:25:59 2023 +0100
> > # Node ID b1ea0a60417e547513654bf9d6bb79714865c780
> > # Parent  77c1418916f7817a0d020f28d8a08f278a12fe43
> > Added debug_random directive
> > 
> > This directive enforces for EITHER a percentage of total connections
> > OR a percentage of connections matched by debug_connection CIDRs
> > to have debug logging enabled.
> > 
> > This is useful for debugging when nginx is under high load
> > (production) - where debugging all connections is not possible without
> > disrupting traffic.
> > 
> > This directive takes a value between 0.00%-100.00% exclusive.
> > 
> 
> # HG changeset patch
> # User J Carter 
> # Date 1689649226 -3600
> #  Tue Jul 18 04:00:26 2023 +0100
> # Node ID 87f6f95e0385e6cd37354979ea61cc2435deb430
> # Parent  77c1418916f7817a0d020f28d8a08f278a12fe43
> Added debug_random directive
> 
> Rework of previous patch. Fixed several bugs.
> 
> Example usage:
> 
> events {
>  worker_connections  1024;
>  #if uncommented, the percentage applies to connections from lo.
>  #debug_connection    127.0.0.0/8;
>  debug_random         1%;
> }
> 
# HG changeset patch
# User J Carter 
# Date 1690171706 -3600
#  Mon Jul 24 05:08:26 2023 +0100
# Node ID ea91b9aa69d8ce9dc9878209a83b7d538e6bc7e1
# Parent  77c1418916f7817a0d020f28d8a08f278a12fe43
Added debug_random directive

More bug fixes and style changes.

diff -r 77c1418916f7 -r ea91b9aa69d8 src/event/ngx_event.c
--- a/src/event/ngx_event.c Thu Jun 08 14:58:01 2023 +0400
+++ b/src/event/ngx_event.c Mon Jul 24 05:08:26 2023 +0100
@@ -30,6 +30,8 @@
 static char *ngx_event_use(ngx_conf_t *cf, ngx_command_t *cmd, void *conf);
 static char *ngx_event_debug_connection(ngx_conf_t *cf, ngx_command_t *cmd,
 void *conf);
+static char *ngx_event_debug_random(ngx_conf_t *cf, ngx_command_t *cmd,
+void *conf);
 
 static void *ngx_event_core_create_conf(ngx_cycle_t *cycle);
 static char *ngx_event_core_init_conf(ngx_cycle_t *cycle, void *conf);
@@ -162,6 +164,13 @@
   0,
   NULL },
 
+  { ngx_string("debug_random"),
+  NGX_EVENT_CONF|NGX_CONF_TAKE1,
+  ngx_event_debug_random,
+  0,
+  0,
+  NULL },
+
   ngx_null_command
 };
 
@@ -496,6 +505,7 @@
 size_t   size, cl;
 ngx_shm_tshm;
 ngx_time_t  *tp;
+ngx_cidr_t  *cidr;
 ngx_core_conf_t *ccf;
 ngx_event_conf_t*ecf;
 
@@ -507,6 +517,33 @@
   "using the \"%s\" event method", ecf->name);
 }
 
+if (ecf->debug_connection.nelts == 0
+&& ecf->debug_scaled_pct > 0)
+{
+cidr = ngx_array_push(&ecf->debug_connection);
+if (cidr == NULL) {
+return NGX_ERROR;
+}
+/*0.0.0.0/0*/
+ngx_memzero(cidr, sizeof(ngx_cidr_t));
+cidr->family = AF_INET;
+
+#ifdef NGX_HAVE_INET6
+cidr = ngx_array_push(&ecf->debug_connection);
+if (cidr == NULL) {
+return NGX_ERROR;
+}
+/*::/0*/
+ngx_memzero(cidr, sizeof(ngx_cidr_t));
+cidr->family = AF_INET6;
+#endif
+
+} else if (ecf->debug_connection.nelts > 0
+   && ecf->debug_scaled_pct == 0)
+{
+ecf->debug_scaled_pct = NGX_MAX_INT32_VALUE;
+}
+
 ccf = (ngx_core_conf_t *) ngx_get_conf(cycle->conf_ctx, ngx_core_module);
 
 ngx_timer_resolution = ccf->timer_resolution;
@@ -1254,6 +1291,55 @@
 }
 
 
+static char *
+ngx_event_debug_random(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
+{
+#if (NGX_DEBUG)
+ngx_event_conf_t  *ecf = conf;
+
+u_char   *c;
+ngx_int_t pct;
+ngx_uint_tlen;
+ngx_str_t*value;
+
+if (ecf->debug_scaled_pct != NGX_CONF_UNSET_UINT) {
+return "is duplicate";
+}
+
+value = cf->args->elts;
+c = value[1].data;
+len = value[1].len;
+
+if (c[len-1] != '%') {
+ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+   "%V missing '%%'",
+   &value[1]);
+return NGX_CONF_ERROR;
+}
+
+pct = ngx_atofp(c, len-1, 2);
+
+if (pct == NGX_ERROR || pct == 0 || pct > 9999) {
+ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+   "%V is an invalid value",
+   &value[1]);
+return NGX_CONF_ERROR;
+}
+
+ecf->debug_scaled_pct = NGX_MAX_INT32_VALUE / 10000 * pct;
+
+#else
+
+ngx_conf_log_error(NGX_LOG_WARN, cf, 0,
+ 

Re: [PATCH] Added debug_random directive

2023-07-17 Thread J Carter
On Sat, 15 Jul 2023 03:48:25 +
J Carter  wrote:

> # HG changeset patch
> # User J Carter 
> # Date 1689391559 -3600
> #  Sat Jul 15 04:25:59 2023 +0100
> # Node ID b1ea0a60417e547513654bf9d6bb79714865c780
> # Parent  77c1418916f7817a0d020f28d8a08f278a12fe43
> Added debug_random directive
> 
> This directive enforces for EITHER a percentage of total connections
> OR a percentage of connections matched by debug_connection CIDRs
> to have debug logging enabled.
> 
> This is useful for debugging when nginx is under high load
> (production) - where debugging all connections is not possible without
> disrupting traffic.
> 
> This directive takes a value between 0.00%-100.00% exclusive.
> 

# HG changeset patch
# User J Carter 
# Date 1689649226 -3600
#  Tue Jul 18 04:00:26 2023 +0100
# Node ID 87f6f95e0385e6cd37354979ea61cc2435deb430
# Parent  77c1418916f7817a0d020f28d8a08f278a12fe43
Added debug_random directive

Rework of previous patch. Fixed several bugs.

Example usage:

events {
 worker_connections  1024;
 #if uncommented, the percentage applies to connections from lo.
 #debug_connection    127.0.0.0/8;
 debug_random         1%;
}

diff -r 77c1418916f7 -r 87f6f95e0385 src/event/ngx_event.c
--- a/src/event/ngx_event.c Thu Jun 08 14:58:01 2023 +0400
+++ b/src/event/ngx_event.c Tue Jul 18 04:00:26 2023 +0100
@@ -30,6 +30,8 @@
 static char *ngx_event_use(ngx_conf_t *cf, ngx_command_t *cmd, void *conf);
 static char *ngx_event_debug_connection(ngx_conf_t *cf, ngx_command_t *cmd,
 void *conf);
+static char *ngx_event_debug_random(ngx_conf_t *cf, ngx_command_t *cmd,
+void *conf);
 
 static void *ngx_event_core_create_conf(ngx_cycle_t *cycle);
 static char *ngx_event_core_init_conf(ngx_cycle_t *cycle, void *conf);
@@ -162,6 +164,13 @@
   0,
   NULL },
 
+  { ngx_string("debug_random"),
+  NGX_EVENT_CONF|NGX_CONF_TAKE1,
+  ngx_event_debug_random,
+  0,
+  0,
+  NULL },
+
   ngx_null_command
 };
 
@@ -496,6 +505,7 @@
 size_t   size, cl;
 ngx_shm_tshm;
 ngx_time_t  *tp;
+ngx_cidr_t  *cidr;
 ngx_core_conf_t *ccf;
 ngx_event_conf_t*ecf;
 
@@ -507,6 +517,29 @@
   "using the \"%s\" event method", ecf->name);
 }
 
+if (ecf->debug_connection.nelts == 0 && ecf->debug_rnd > 0) {
+cidr = ngx_array_push(&ecf->debug_connection);
+if (cidr == NULL) {
+return NGX_ERROR;
+}
+/*0.0.0.0/0*/
+ngx_memzero(cidr, sizeof(ngx_cidr_t));
+cidr->family = AF_INET;
+
+#ifdef NGX_HAVE_INET6
+cidr = ngx_array_push(&ecf->debug_connection);
+if (cidr == NULL) {
+return NGX_ERROR;
+}
+/*::/0*/
+ngx_memzero(cidr, sizeof(ngx_cidr_t));
+cidr->family = AF_INET6;
+#endif
+
+} else if (ecf->debug_connection.nelts > 0 && ecf->debug_rnd == 0) {
+ecf->debug_rnd = NGX_MAX_INT32_VALUE;
+}
+
 ccf = (ngx_core_conf_t *) ngx_get_conf(cycle->conf_ctx, ngx_core_module);
 
 ngx_timer_resolution = ccf->timer_resolution;
@@ -1254,6 +1287,54 @@
 }
 
 
+static char *
+ngx_event_debug_random(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
+{
+#if (NGX_DEBUG)
+ngx_event_conf_t  *ecf = conf;
+
+u_char   *c;
+ngx_int_t pct;
+ngx_uint_tlen;
+ngx_str_t*value;
+
+value = cf->args->elts;
+c = value[1].data;
+len = value[1].len;
+
+if (c[len-1] != '%') {
+ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+   "%V missing '%'",
+   &value[1]);
+return NGX_CONF_ERROR;
+}
+
+pct = ngx_atofp(c, len-1, 2);
+
+if (pct == NGX_ERROR
+|| pct == 0
+|| pct > 9999)
+{
+ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+   "%V is an invalid value",
+   &value[1]);
+return NGX_CONF_ERROR;
+}
+
+ecf->debug_rnd = NGX_MAX_INT32_VALUE / 10000 * pct;
+
+#else
+
+ngx_conf_log_error(NGX_LOG_WARN, cf, 0,
+   "\"debug_random\" is ignored, you need to rebuild "
+   "nginx using --with-debug option to enable it");
+
+#endif
+
+return NGX_CONF_OK;
+}
+
+
 static void *
 ngx_event_core_create_conf(ngx_cycle_t *cycle)
 {
@@ -1279,6 +1360,8 @@
 return NULL;
 }
 
+ecf->debug_rnd = NGX_CONF_UNSET_UINT;
+
 #endif
 
 return ecf;
@@ -1369,5 +1452,7 @@
 ngx_conf_init_value(ecf->accept_mutex, 0);
 ngx_conf_init_msec_value(ecf->accept_mutex_delay, 500);
 
+ngx_conf_init_uint_value(ecf->debug_rnd, 0);
+
 return NGX_CONF_OK;
 }
diff -r 77c1418916f7 -r 87f6f95e0385 src/event/n

[PATCH] Added debug_random directive

2023-07-14 Thread J Carter
# HG changeset patch
# User J Carter 
# Date 1689391559 -3600
#  Sat Jul 15 04:25:59 2023 +0100
# Node ID b1ea0a60417e547513654bf9d6bb79714865c780
# Parent  77c1418916f7817a0d020f28d8a08f278a12fe43
Added debug_random directive

This directive enforces for EITHER a percentage of total connections
OR a percentage of connections matched by debug_connection CIDRs
to have debug logging enabled.

This is useful for debugging when nginx is under high load
(production) - where debugging all connections is not possible without
disrupting traffic.

This directive takes a value between 0.00%-100.00% exclusive.

Example usage:

events {
worker_connections  1024;
#if uncommented, the percentage applies to connections from lo.
#debug_connection    127.0.0.1/8;
debug_random         1%;
}

diff -r 77c1418916f7 -r b1ea0a60417e src/event/ngx_event.c
--- a/src/event/ngx_event.c Thu Jun 08 14:58:01 2023 +0400
+++ b/src/event/ngx_event.c Sat Jul 15 04:25:59 2023 +0100
@@ -30,6 +30,8 @@
 static char *ngx_event_use(ngx_conf_t *cf, ngx_command_t *cmd, void *conf);
 static char *ngx_event_debug_connection(ngx_conf_t *cf, ngx_command_t *cmd,
 void *conf);
+static char *ngx_event_debug_random(ngx_conf_t *cf, ngx_command_t *cmd,
+void *conf);
 
 static void *ngx_event_core_create_conf(ngx_cycle_t *cycle);
 static char *ngx_event_core_init_conf(ngx_cycle_t *cycle, void *conf);
@@ -162,6 +164,13 @@
   0,
   NULL },
 
+  { ngx_string("debug_random"),
+  NGX_EVENT_CONF|NGX_CONF_TAKE1,
+  ngx_event_debug_random,
+  0,
+  0,
+  NULL },
+
   ngx_null_command
 };
 
@@ -1254,6 +1263,53 @@
 }
 
 
+static char *
+ngx_event_debug_random(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
+{
+#if (NGX_DEBUG)
+ngx_event_conf_t  *ecf = conf;
+
+u_char   *c;
+ngx_int_t pct;
+ngx_uint_tlen;
+ngx_str_t*value;
+
+value = cf->args->elts;
+c = value[1].data;
+len = ngx_strlen(c);
+
+if (c[len-1] != '%') {
+ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+   "%V missing '%'",
+   &value[1]);
+return NGX_CONF_ERROR;
+}
+
+pct = ngx_atofp(c, len-1, 2);
+
+if (pct == NGX_ERROR
+|| pct == 0
+|| pct > 9999) {
+ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+   "%V is an invalid value",
+   &value[1]);
+return NGX_CONF_ERROR;
+}
+
+ecf->debug_rnd = NGX_MAX_INT32_VALUE / 10000 * pct;
+
+#else
+
+ngx_conf_log_error(NGX_LOG_WARN, cf, 0,
+   "\"debug_random\" is ignored, you need to
rebuild "
+   "nginx using --with-debug option to enable it");
+
+#endif
+
+return NGX_CONF_OK;
+}
+
+
 static void *
 ngx_event_core_create_conf(ngx_cycle_t *cycle)
 {
diff -r 77c1418916f7 -r b1ea0a60417e src/event/ngx_event.h
--- a/src/event/ngx_event.h Thu Jun 08 14:58:01 2023 +0400
+++ b/src/event/ngx_event.h Sat Jul 15 04:25:59 2023 +0100
@@ -438,6 +438,7 @@
 u_char   *name;
 
 #if (NGX_DEBUG)
+ngx_uint_tdebug_rnd;
 ngx_array_t   debug_connection;
 #endif
 } ngx_event_conf_t;
diff -r 77c1418916f7 -r b1ea0a60417e src/event/ngx_event_accept.c
--- a/src/event/ngx_event_accept.c  Thu Jun 08 14:58:01 2023 +0400
+++ b/src/event/ngx_event_accept.c  Sat Jul 15 04:25:59 2023 +0100
@@ -523,6 +523,14 @@
 struct sockaddr_in6  *sin6;
 ngx_uint_tn;
 #endif
+ngx_uint_t r = ngx_random();
+
+if (!ecf->debug_connection.nelts) {
+c->log->log_level = (r < ecf->debug_rnd) ?
+NGX_LOG_DEBUG_CONNECTION|NGX_LOG_DEBUG_ALL :
+c->log->log_level;
+return;
+}
 
 cidr = ecf->debug_connection.elts;
 for (i = 0; i < ecf->debug_connection.nelts; i++) {
@@ -561,7 +569,9 @@
 break;
 }
 
-c->log->log_level = NGX_LOG_DEBUG_CONNECTION|NGX_LOG_DEBUG_ALL;
+c->log->log_level = (r < ecf->debug_rnd) ?
+NGX_LOG_DEBUG_CONNECTION|NGX_LOG_DEBUG_ALL :
+c->log->log_level;
 break;
 
 next:
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH] Added $http2_stream_id

2023-05-27 Thread J Carter
Hello, 

On Mon, 22 May 2023 00:40:08 +0300
Maxim Dounin  wrote:

> Hello!
> 
> On Sun, May 14, 2023 at 11:59:35PM +0100, J Carter wrote:
> 
> > Hello, 
> > 
> > >On Sun, 14 May 2023 18:48:06 +0100
> > >J Carter  wrote:
> > 
> > > Hello,
> > > 
> > > On Sun, 14 May 2023 17:40:43 +0300
> > > Maxim Dounin  wrote:
> > > 
> > > > Hello!
> > > > 
> > > > On Fri, May 12, 2023 at 03:37:52AM +0100, J Carter wrote:
> > > > 
> > > > > # HG changeset patch
> > > > > # User jordanc.car...@outlook.com
> > > > > # Date 1683858766 -3600
> > > > > #  Fri May 12 03:32:46 2023 +0100
> > > > > # Node ID de1a1b4141e827984cbd0d2feb97f870c32ff289
> > > > > # Parent  b71e69247483631bd8fc79a47cc32b762625b1fb
> > > > > Added $http2_stream_id
> > > > > 
> > > > > Useful for tracing multiplexed requests from client logs or
> > > > > pcaps captured between client and nginx, to nginx's own
> > > > > access logs.
> > > > > 
> > > > > Also useful for matching multiplexed request's access log
> > > > > entries to debug level error logs - which is particularly
> > > > > difficult to do.
> > > > 
> > > > Thanks for the patch, but I would rather not.
> > > > 
> > > > Consider using $connection_requests variable to identify 
> > > > individual requests within a connection,
> > > > or the $request_id 
> > > > variable to identify requests globally.  These do no depend on
> > > > the particular protocol used and can be universally used for
> > > > both HTTP/1.x and HTTP/2.
> > > > 
> > > 
> > > Thanks for the reply.
> > > 
> > > I hadn't considered $connection_requests. Yes that would work fine
> > > for my use-case with some log processing ($connection_requests *
> > > 2 - 1)
> > > 
> > > One thought does come to mind, although it won't effect my
> > > use-case - This may not work if server push is used as that would
> > > increment stream id, but presumably would not increment
> > > connection->requests (I'd need to check that though).
> > 
> > After some additional testing with $connection_requests it appears
> > to not be suitable method of obtaining stream id in access_logs.
> > 
> > The issue is
> > 1) Stream id and connection->requests are incremented on stream
> >/ request initiation.
> > 2) Access logs are written on request finalization.
> > 3) New streams may be initiated at any time.
> > 4) Requests are not necessarily finalized in initiation order.
> >  
> > Therefore making any assumptions as to the stream id associated
> > with a request from to the current value of connection->requests at
> > finalization time is impossible.
> 
> In HTTP/2, for each stream nginx creates a new connection, and 
> r->connection->requests as seen by $connection_requests will be 
> frozen for the request lifetime.  That is, it essentially shows 
> the request sequence number.

I see.. I must've messed my tests of this up somehow - this explanation
makes sense though. 

> 
> > I'd ask that this patch is reconsidered.
> 
> While $connection_requests is certainly not exactly equivalent to 
> HTTP/2 stream id, the $connection_requests is believed to be 
> enough for user-level tasks, as well as for most debugging tasks.
> 

Yes agreed considering the above, I can indeed just offset the value.

Thanks again.
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH] Added $realip_add_x_forwarded_for

2023-05-27 Thread J Carter
Hello,

Thanks for the reply.

On Mon, 22 May 2023 15:48:23 +0300
Maxim Dounin  wrote:

> Hello!
> 
> On Sun, May 14, 2023 at 04:51:58AM +0100, J Carter wrote:
> 
> > # HG changeset patch
> > # User jordanc.car...@outlook.com
> > # Date 1684035158 -3600
> > #  Sun May 14 04:32:38 2023 +0100
> > # Node ID dad6e472ee0d97a738b117f6480987ef135c9e7f
> > # Parent  b71e69247483631bd8fc79a47cc32b762625b1fb
> > Added $realip_add_x_forwarded_for
> > 
> > Resolves Ticket #2127.
> > 
> > Duplicates the functionality of proxy_add_x_forwarded_for, except
> > the true source ip is appended and not the remote address extracted
> > by the real IP module.
> > 
> > In practice this is proxy_add_x_forwarded_for but
> > $realip_remote_addr is used and not $remote_addr.
> > 
> > This follows the same convention as $realip_remote_addr and
> > $real_ip_remote_port, in that it is a drop in replacement for
> > $proxy_add_x_forwarded_for that can be used in contexts that both do
> > and do not have the real_ip directives, with the same results.
> > 
> > An example configuration:
> > 
> > server {
> > listen 80;
> > real_ip_header X-Forwarded-For;
> > set_real_ip_from 127.0.0.1;
> > 
> > location / {
> > proxy_set_header X-Forwarded-For $realip_add_x_forwarded_for;
> > proxy_set_header Remote $remote_addr;
> > proxy_pass http://127.0.0.1:8080;
> > }
> > }  
> 
> Thanks for the patch, but I can't say I like the idea of 
> introducing yet another variable and asking users to change it 
> manually. 

Good point, it should be possible to merge the two.

> This is essentially equivalent to using
> 
> proxy_set_header X-Forwarded-For "$http_x_forwarded_for,
> $realip_remote_addr";
>
> as the ticket suggests.
>

Well yes, but this proxy_set_header example would only be valid if
$http_x_forwarded_for always exists as a request header,
otherwise you'd have a hanging comma at the start. 

It'd need a map to handle that if it were sometimes present and
sometimes not. I imagine avoiding such a map is the reason the
regular $proxy_add_x_forwarded_for variable also exists.
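
For completeness, that map workaround would look something like this
(sketch):

map $http_x_forwarded_for $xff_chain {
    ""       $realip_remote_addr;
    default  "$http_x_forwarded_for, $realip_remote_addr";
}

...

proxy_set_header X-Forwarded-For $xff_chain;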

> Also, it is an open question if $realip_remote_addr should be 
> used, or X-Forwarded-For should be left unmodified if remote addr 
> was set from X-Forwarded-For. 

Leaving it unmodified does seem undesirable, as it means omitting a
proxy hop ($realip_remote_addr) from the X-Forwarded-For chain.

> The realip module instructs nginx 
> to use the address as obtained from the header, and not using it 
> for some purposes looks at least questionable.

I believe this would be the only valid exception to that, given that the
value of $realip_add_x_forwarded_for is only ever going to be used to
overwrite X-Forwarded-For via proxy_set_header for the next hop proxy
or backend app to utilize. It's quite contained.

Also it does seem more sensible than the resulting x-forwarded-for
value shown in the ticket, which would look like nonsense to any
upstream consumer of that value that wishes to analyze the whole chain.

The proxy_add_x_forwarded_for's value in the ticket isn't in
the spirit of the header's purpose either, which is to preserve
addresses of the client & chain of proxies.

> 
> Also, it seems incorrect to use $realip_remote_addr (or keep  
> X-Forwarded-For unmodified) if remote addr was set from other 
> sources, such as PROXY protocol headers.
> 
> Overall, current behaviour might actually be optimal.
> 
> [...]
> 

This is a good point, although perhaps adding both
$remote_addr and $realip_remote_addr to the
x-forwarded-for chain would be better behavior for the other sources
(especially proxy_protocol). 

It'd need some additional checks to ensure
no duplications are introduced (if the x-forwarded-for header already
exists).
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH] Added $http2_stream_id

2023-05-14 Thread J Carter
Hello, 

>On Sun, 14 May 2023 18:48:06 +0100
>J Carter  wrote:

> Hello,
> 
> On Sun, 14 May 2023 17:40:43 +0300
> Maxim Dounin  wrote:
> 
> > Hello!
> > 
> > On Fri, May 12, 2023 at 03:37:52AM +0100, J Carter wrote:
> > 
> > > # HG changeset patch
> > > # User jordanc.car...@outlook.com
> > > # Date 1683858766 -3600
> > > #  Fri May 12 03:32:46 2023 +0100
> > > # Node ID de1a1b4141e827984cbd0d2feb97f870c32ff289
> > > # Parent  b71e69247483631bd8fc79a47cc32b762625b1fb
> > > Added $http2_stream_id
> > > 
> > > Useful for tracing multiplexed requests from client logs or pcaps
> > > captured between client and nginx, to nginx's own access logs.
> > > 
> > > Also useful for matching multiplexed request's access log entries
> > > to debug level error logs - which is particularly difficult to do.
> > 
> > Thanks for the patch, but I would rather not.
> > 
> > Consider using $connection_requests variable to identify 
> > individual requests within a connection,
> > or the $request_id 
> > variable to identify requests globally.  These do no depend on the 
> > particular protocol used and can be universally used for both 
> > HTTP/1.x and HTTP/2.
> > 
> 
> Thanks for the reply.
> 
> I hadn't considered $connection_requests. Yes that would work fine
> for my use-case with some log processing ($connection_requests * 2 -
> 1)
> 
> One thought does come to mind, although it won't effect my use-case -
> This may not work if server push is used as that would increment
> stream id, but presumably would not increment connection->requests
> (I'd need to check that though).

After some additional testing with $connection_requests it appears
to not be suitable method of obtaining stream id in access_logs.

The issue is
1) Stream id and connection->requests are incremented on stream
   / request initiation.
2) Access logs are written on request finalization.
3) New streams may be initiated at any time.
4) Requests are not necessarily finalized in initiation order.
 
Therefore making any assumptions as to the stream id associated with a
request from to the current value of connection->requests at
finalization time is impossible.

I'd ask that this patch is reconsidered.
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: ssl preread for postgres connection

2023-05-14 Thread J Carter
On Sun, 14 May 2023 19:09:30 +0100
J Carter  wrote:

> Hello,
> 
> > On Sun, 14 May 2023 17:33:10 +0300
> > Maxim Dounin  wrote:  
> 
> > Hello!
> > 
> > On Sun, May 14, 2023 at 09:55:54AM +0400, Roman Arutyunyan wrote:
> >   
> > > Hi Eduard,
> > > 
> > > On Sat, May 13, 2023 at 10:43:59PM -0600, Eduard Vercaemer wrote:
> > >  
> > > > for some context, I recently I tried configuring nginx as a tcp
> > > > proxy that routes
> > > > connections based on sni to multiple upstream services
> > > > 
> > > > the server only exposes one tcp port, and receives all
> > > > connections there, for example
> > > > a connection to redis.example.com:1234 would be proxy_pass'ed to
> > > > some port in the
> > > > machine, a connection to www.example.com:1234 to another, etc.
> > > > 
> > > > i used nginx itself to terminate the tls for all services for
> > > > convenience
> > > > 
> > > > the problem:
> > > > now here is the issue, 1: postgres does some weird custom ssl
> > > > stuff, which means I
> > > > cannot terminate the ssl from within nginx  
> > > 
> > > In this case there must be an SSL error logged in nginx error log.
> > > Can you post it?  
> > 
> > Postgres uses their own protocol with STARTTLS-like interface to 
> > initiate SSL handshake, see here:
> > 
> > https://www.postgresql.org/docs/current/protocol-flow.html#id-1.10.6.7.12
> > 
> > That is, it's not going to work with neither SSL termination, nor 
> > SSL preread, and needs an implementation of the Postgres protocol.
> > 
> > [...]
> >   
> 
> Out of curiosity I looked into what 'others' had done for Postgres's
> application level negotiation.
> 
> https://github.com/envoyproxy/envoy/issues/10942
> 
> OP, it might be possible for you to hack this into ssl_preread.c in
> ngx_stream_ssl_preread_handler in a similar fashion to that
> workaround.
> 
> It seems you just need to listen / wait for the SSLRequest magic
> message bytes, send the 'fake' response, then do the normal handshake
> logic.
> 
> https://www.postgresql.org/docs/current/protocol-message-formats.html
> 
> The other issue is if you want TLS from NGINX -> Postgresql Upstream
> you'd need another hack somewhere in ngx_stream_proxy_module.c 
> (or a custom content handler as mentioned above).

Or even in ngx_stream_handler.c to do it properly, similar to how
proxy protocol is handled (obviously with writes too). 
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: ssl preread for postgres connection

2023-05-14 Thread J Carter
Hello,

> On Sun, 14 May 2023 17:33:10 +0300
> Maxim Dounin  wrote:

> Hello!
> 
> On Sun, May 14, 2023 at 09:55:54AM +0400, Roman Arutyunyan wrote:
> 
> > Hi Eduard,
> > 
> > On Sat, May 13, 2023 at 10:43:59PM -0600, Eduard Vercaemer wrote:
> > > for some context, I recently I tried configuring nginx as a tcp
> > > proxy that routes
> > > connections based on sni to multiple upstream services
> > > 
> > > the server only exposes one tcp port, and receives all
> > > connections there, for example
> > > a connection to redis.example.com:1234 would be proxy_pass'ed to
> > > some port in the
> > > machine, a connection to www.example.com:1234 to another, etc.
> > > 
> > > i used nginx itself to terminate the tls for all services for
> > > convenience
> > > 
> > > the problem:
> > > now here is the issue, 1: postgres does some weird custom ssl
> > > stuff, which means I
> > > cannot terminate the ssl from within nginx
> > 
> > In this case there must be an SSL error logged in nginx error log.
> > Can you post it?
> 
> Postgres uses their own protocol with STARTTLS-like interface to 
> initiate SSL handshake, see here:
> 
> https://www.postgresql.org/docs/current/protocol-flow.html#id-1.10.6.7.12
> 
> That is, it's not going to work with neither SSL termination, nor 
> SSL preread, and needs an implementation of the Postgres protocol.
> 
> [...]
> 

Out of curiosity I looked into what 'others' had done for Postgres's
application level negotiation.

https://github.com/envoyproxy/envoy/issues/10942

OP, it might be possible for you to hack this into ssl_preread.c in
ngx_stream_ssl_preread_handler in a similar fashion to that workaround.

It seems you just need to listen / wait for the SSLRequest magic
message bytes, send the 'fake' response, then do the normal handshake
logic.

https://www.postgresql.org/docs/current/protocol-message-formats.html

The other issue is if you want TLS from NGINX -> Postgresql Upstream
you'd need another hack somewhere in ngx_stream_proxy_module.c 
(or a custom content handler as mentioned above).
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: [PATCH] Added $http2_stream_id

2023-05-14 Thread J Carter
Hello,

On Sun, 14 May 2023 17:40:43 +0300
Maxim Dounin  wrote:

> Hello!
> 
> On Fri, May 12, 2023 at 03:37:52AM +0100, J Carter wrote:
> 
> > # HG changeset patch
> > # User jordanc.car...@outlook.com
> > # Date 1683858766 -3600
> > #  Fri May 12 03:32:46 2023 +0100
> > # Node ID de1a1b4141e827984cbd0d2feb97f870c32ff289
> > # Parent  b71e69247483631bd8fc79a47cc32b762625b1fb
> > Added $http2_stream_id
> > 
> > Useful for tracing multiplexed requests from client logs or pcaps
> > captured between client and nginx, to nginx's own access logs.
> > 
> > Also useful for matching multiplexed request's access log entries to
> > debug level error logs - which is particularly difficult to do.
> 
> Thanks for the patch, but I would rather not.
> 
> Consider using $connection_requests variable to identify 
> individual requests within a connection,
> or the $request_id 
> variable to identify requests globally.  These do no depend on the 
> particular protocol used and can be universally used for both 
> HTTP/1.x and HTTP/2.
> 

Thanks for the reply.

I hadn't considered $connection_requests. Yes that would work fine
for my use-case with some log processing ($connection_requests * 2 - 1)
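
e.g. with something like the below (log format illustrative), the stream
id can then be recovered offline as ($connection_requests * 2 - 1) for
client-initiated streams:

log_format h2trace '$remote_addr "$request" '
                   'conn=$connection n=$connection_requests';

access_log /var/log/nginx/access.log h2trace;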

One thought does come to mind, although it won't effect my use-case -
This may not work if server push is used as that would increment stream
id, but presumably would not increment connection->requests
(I'd need to check that though).

___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH] Added $realip_add_x_forwarded_for

2023-05-14 Thread J Carter
Re-sending with non-malformed patch...

# HG changeset patch
# User jordanc.car...@outlook.com
# Date 1684035158 -3600
#  Sun May 14 04:32:38 2023 +0100
# Node ID dad6e472ee0d97a738b117f6480987ef135c9e7f
# Parent  b71e69247483631bd8fc79a47cc32b762625b1fb
Added $realip_add_x_forwarded_for

Resolves Ticket #2127.

Duplicates the functionality of proxy_add_x_forwarded_for, except
the true source ip is appended and not the remote address extracted
by the real IP module.

In practice this is proxy_add_x_forwarded_for but $realip_remote_addr
is used and not $remote_addr.

This follows the same convention as $realip_remote_addr and
$real_ip_remote_port, in that it is a drop in replacement for
$proxy_add_x_forwarded_for that can be used in contexts that both do
and do not have the real_ip directives, with the same results.

An example configuration:

server {
listen 80;
real_ip_header X-Forwarded-For;
set_real_ip_from 127.0.0.1;

location / {
proxy_set_header X-Forwarded-For $realip_add_x_forwarded_for;
proxy_set_header Remote $remote_addr;
proxy_pass http://127.0.0.1:8080;
}
}

server {
listen 8080;

location / {
add_header Echo-X-Forwarded_For $http_x_forwarded_for;
add_header Remote $http_remote;
return 200;
}
}

test with:

curl -I --interface 127.0.0.1 -H "X-Forwarded-For: 10.0.0.1" localhost
curl -I --interface 127.0.0.2 -H "X-Forwarded-For: 10.0.0.1" localhost

diff --git a/src/http/modules/ngx_http_realip_module.c b/src/http/modules/ngx_http_realip_module.c
--- a/src/http/modules/ngx_http_realip_module.c
+++ b/src/http/modules/ngx_http_realip_module.c
@@ -53,6 +53,8 @@
 ngx_http_variable_value_t *v, uintptr_t data);
 static ngx_int_t ngx_http_realip_remote_port_variable(ngx_http_request_t *r,
 ngx_http_variable_value_t *v, uintptr_t data);
+static ngx_int_t ngx_http_realip_add_x_forwarded_for_variable(ngx_http_request_t *r,
+ngx_http_variable_value_t *v, uintptr_t data);
 
 
 static ngx_command_t  ngx_http_realip_commands[] = {
@@ -122,6 +124,9 @@
 { ngx_string("realip_remote_port"), NULL,
   ngx_http_realip_remote_port_variable, 0, 0, 0 },
 
+{ ngx_string("realip_add_x_forwarded_for"), NULL,
+  ngx_http_realip_add_x_forwarded_for_variable, 0, 0, 0 },
+
   ngx_http_null_variable
 };
 
@@ -619,3 +624,55 @@
 
 return NGX_OK;
 }
+
+
+static ngx_int_t
+ngx_http_realip_add_x_forwarded_for_variable(ngx_http_request_t *r,
+ngx_http_variable_value_t *v, uintptr_t data)
+{
+size_t  len;
+u_char *p;
+ngx_str_t  *addr_text;
+ngx_table_elt_t*h, *xfwd;
+ngx_http_realip_ctx_t  *ctx;
+
+v->valid = 1;
+v->no_cacheable = 0;
+v->not_found = 0;
+
+len = 0;
+
+ctx = ngx_http_realip_get_module_ctx(r);
+addr_text = ctx ? &ctx->addr_text : &r->connection->addr_text;
+
+xfwd = r->headers_in.x_forwarded_for;
+
+for (h = xfwd; h; h = h->next) {
+len += h->value.len + sizeof(", ") - 1;
+}
+
+if (len == 0) {
+v->len = addr_text->len;
+v->data = addr_text->data;
+return NGX_OK;
+}
+
+len += addr_text->len;
+
+p = ngx_pnalloc(r->pool, len);
+if (p == NULL) {
+return NGX_ERROR;
+}
+
+v->len = len;
+v->data = p;
+
+for (h = xfwd; h; h = h->next) {
+p = ngx_copy(p, h->value.data, h->value.len);
+*p++ = ','; *p++ = ' ';
+}
+
+ngx_memcpy(p, addr_text->data, addr_text->len);
+
+return NGX_OK;
+}


hgexport.patch
Description: Binary data
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH] Added $realip_add_x_forwarded_for

2023-05-13 Thread J Carter
# HG changeset patch
# User jordanc.car...@outlook.com
# Date 1684035158 -3600
#  Sun May 14 04:32:38 2023 +0100
# Node ID dad6e472ee0d97a738b117f6480987ef135c9e7f
# Parent  b71e69247483631bd8fc79a47cc32b762625b1fb
Added $realip_add_x_forwarded_for

Resolves Ticket #2127.

Duplicates the functionality of proxy_add_x_forwarded_for, except
the true source ip is appended and not the remote address extracted
by the real IP module.

In practice this is proxy_add_x_forwarded_for but $realip_remote_addr
is used and not $remote_addr.

This follows the same convention as $realip_remote_addr and
$real_ip_remote_port, in that it is a drop in replacement for
$proxy_add_x_forwarded_for that can be used in contexts that both do
and do not have the real_ip directives, with the same results.

An example configuration:

server {
listen 80;
real_ip_header X-Forwarded-For;
set_real_ip_from 127.0.0.1;

location / {
proxy_set_header X-Forwarded-For $realip_add_x_forwarded_for;
proxy_set_header Remote $remote_addr;
proxy_pass http://127.0.0.1:8080;
}
}

server {
listen 8080;

 add_header Echo-X-Forwarded_For $http_x_forwarded_for;
 add_header Echo-Remote $http_remote;
 return 200;
}

test with:

curl -I --interface 127.0.0.1 -H "X-Forwarded-For: 10.0.0.1" localhost
curl -I --interface 127.0.0.2 -H "X-Forwarded-For: 10.0.0.1" localhost

diff --git a/src/http/modules/ngx_http_realip_module.c
b/src/http/modules/ngx_http_realip_module.c ---
a/src/http/modules/ngx_http_realip_module.c +++
b/src/http/modules/ngx_http_realip_module.c @@ -53,6 +53,8 @@
 ngx_http_variable_value_t *v, uintptr_t data);
 static ngx_int_t
ngx_http_realip_remote_port_variable(ngx_http_request_t *r,
ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t
ngx_http_realip_add_x_forwarded_for_variable(ngx_http_request_t *r,
+ngx_http_variable_value_t *v, uintptr_t data);
 
 
 static ngx_command_t  ngx_http_realip_commands[] = {
@@ -122,6 +124,9 @@
 { ngx_string("realip_remote_port"), NULL,
   ngx_http_realip_remote_port_variable, 0, 0, 0 },
 
+{ ngx_string("realip_add_x_forwarded_for"), NULL,
+  ngx_http_realip_add_x_forwarded_for_variable, 0, 0, 0 },
+
   ngx_http_null_variable
 };
 
@@ -619,3 +624,55 @@
 
 return NGX_OK;
 }
+
+
+static ngx_int_t
+ngx_http_realip_add_x_forwarded_for_variable(ngx_http_request_t *r,
+ngx_http_variable_value_t *v, uintptr_t data)
+{
+size_t  len;
+u_char *p;
+ngx_str_t  *addr_text;
+ngx_table_elt_t*h, *xfwd;
+ngx_http_realip_ctx_t  *ctx;
+
+v->valid = 1;
+v->no_cacheable = 0;
+v->not_found = 0;
+
+len = 0;
+
+ctx = ngx_http_realip_get_module_ctx(r);
+addr_text = ctx ? &ctx->addr_text : &r->connection->addr_text;
+
+xfwd = r->headers_in.x_forwarded_for;
+
+for (h = xfwd; h; h = h->next) {
+len += h->value.len + sizeof(", ") - 1;
+}
+
+if (len == 0) {
+v->len = addr_text->len;
+v->data = addr_text->data;
+return NGX_OK;
+}
+
+len += addr_text->len;
+
+p = ngx_pnalloc(r->pool, len);
+if (p == NULL) {
+return NGX_ERROR;
+}
+
+v->len = len;
+v->data = p;
+
+for (h = xfwd; h; h = h->next) {
+p = ngx_copy(p, h->value.data, h->value.len);
+*p++ = ','; *p++ = ' ';
+}
+
+ngx_memcpy(p, addr_text->data, addr_text->len);
+
+return NGX_OK;
+}
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH] Added $http2_stream_id

2023-05-11 Thread J Carter
# HG changeset patch
# User jordanc.car...@outlook.com
# Date 1683858766 -3600
#  Fri May 12 03:32:46 2023 +0100
# Node ID de1a1b4141e827984cbd0d2feb97f870c32ff289
# Parent  b71e69247483631bd8fc79a47cc32b762625b1fb
Added $http2_stream_id

Useful for tracing multiplexed requests from client logs or pcaps
captured between client and nginx, to nginx's own access logs.

Also useful for matching multiplexed requests' access log entries to
debug-level error logs - which is particularly difficult to do.
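
An illustrative configuration (a sketch - the format name is arbitrary):

    log_format h2debug '$remote_addr conn=$connection '
                       'stream=$http2_stream_id "$request"';

    access_log /var/log/nginx/h2debug.log h2debug;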

diff --git a/src/http/v2/ngx_http_v2_module.c
b/src/http/v2/ngx_http_v2_module.c ---
a/src/http/v2/ngx_http_v2_module.c +++
b/src/http/v2/ngx_http_v2_module.c @@ -15,6 +15,8 @@
 
 static ngx_int_t ngx_http_v2_variable(ngx_http_request_t *r,
 ngx_http_variable_value_t *v, uintptr_t data);
+static ngx_int_t ngx_http_v2_variable_stream_id(ngx_http_request_t *r,
+ngx_http_variable_value_t *v, uintptr_t data);
 
 static ngx_int_t ngx_http_v2_module_init(ngx_cycle_t *cycle);
 
@@ -213,6 +215,9 @@
 { ngx_string("http2"), NULL,
   ngx_http_v2_variable, 0, 0, 0 },
 
+{ ngx_string("http2_stream_id"), NULL,
+  ngx_http_v2_variable_stream_id, 0, 0, 0 },
+
   ngx_http_null_variable
 };
 
@@ -271,6 +276,32 @@
 
 
 static ngx_int_t
+ngx_http_v2_variable_stream_id(ngx_http_request_t *r,
+ngx_http_variable_value_t *v, uintptr_t data)
+{
+u_char *p;
+
+if (!r->stream) {
+v->not_found = 1;
+return NGX_OK;
+}
+
+p = ngx_pnalloc(r->pool, NGX_INT32_LEN);
+if (p == NULL) {
+return NGX_ERROR;
+}
+
+v->len = ngx_sprintf(p, "%i", r->stream->node->id) - p;
+v->valid = 1;
+v->no_cacheable = 0;
+v->not_found = 0;
+v->data = p;
+
+return NGX_OK;
+}
+
+
+static ngx_int_t
 ngx_http_v2_module_init(ngx_cycle_t *cycle)
 {
 return NGX_OK;
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH] Asynchronous close event handling for single peer upstreams

2023-05-11 Thread J Carter
On Sun, 7 May 2023 21:55:19 +0100
J Carter  wrote:

> # HG changeset patch
> # User jordanc.car...@outlook.com
> # Date 1683491710 -3600
> #  Sun May 07 21:35:10 2023 +0100
> # Node ID e1ec9971da677b763c7576c729576d6f906631ae
> # Parent  b71e69247483631bd8fc79a47cc32b762625b1fb
> Asynchronous close event handling for single peer upstreams
> 
> Prevents additional upstream tries when consecutive asynchronous close
> errors are encountered for single peer upstreams utilizing keepalive
> connections.
> 
> This replaces the current behavior of unlimited retries.
> 
> diff -r b71e69247483 -r e1ec9971da67 src/event/ngx_event_connect.h
> --- a/src/event/ngx_event_connect.h   Mon May 01 19:16:05 2023
> +0400 +++ b/src/event/ngx_event_connect.h Sun May 07 21:35:10
> 2023 +0100 @@ -17,6 +17,7 @@
>  #define NGX_PEER_KEEPALIVE   1
>  #define NGX_PEER_NEXT2
>  #define NGX_PEER_FAILED  4
> +#define NGX_PEER_ASYNC_FAILED8
>  
>  
>  typedef struct ngx_peer_connection_s  ngx_peer_connection_t;
> @@ -64,6 +65,7 @@
>  unsigned transparent:1;
>  unsigned so_keepalive:1;
>  unsigned down:1;
> +unsigned async_failed:1;
>  
>   /* ngx_connection_log_error_e */
>  unsigned log_error:2;
> diff -r b71e69247483 -r e1ec9971da67 src/http/ngx_http_upstream.c
> --- a/src/http/ngx_http_upstream.cMon May 01 19:16:05 2023
> +0400 +++ b/src/http/ngx_http_upstream.c  Sun May 07 21:35:10
> 2023 +0100 @@ -4317,6 +4317,9 @@
>  {
>  state = NGX_PEER_NEXT;
>  
> +} else if (u->peer.cached && ft_type ==
> NGX_HTTP_UPSTREAM_FT_ERROR) {
> +state = NGX_PEER_FAILED | NGX_PEER_ASYNC_FAILED;
> +
>  } else {
>  state = NGX_PEER_FAILED;
>  }
> @@ -4330,11 +4333,6 @@
>"upstream timed out");
>  }
>  
> -if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) {
> -/* TODO: inform balancer instead */
> -u->peer.tries++;
> -}
> -
>  switch (ft_type) {
>  
>  case NGX_HTTP_UPSTREAM_FT_TIMEOUT:
> diff -r b71e69247483 -r e1ec9971da67
> src/http/ngx_http_upstream_round_robin.c ---
> a/src/http/ngx_http_upstream_round_robin.cMon May 01 19:16:05
> 2023 +0400 +++ b/src/http/ngx_http_upstream_round_robin.c Sun
> May 07 21:35:10 2023 +0100 @@ -623,6 +623,12 @@
> ngx_http_upstream_rr_peers_unlock(rrp->peers); 
>  pc->tries = 0;
> +
> +if (state & NGX_PEER_ASYNC_FAILED && !pc->async_failed) {
> +pc->async_failed = 1;
> +pc->tries = 1;
> +}
> +
>  return;
>  }

Hello, 

Any opinions or suggestions for this patch?
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH] Asynchronous close event handling for single peer upstreams

2023-05-07 Thread J Carter
# HG changeset patch
# User jordanc.car...@outlook.com
# Date 1683491710 -3600
#  Sun May 07 21:35:10 2023 +0100
# Node ID e1ec9971da677b763c7576c729576d6f906631ae
# Parent  b71e69247483631bd8fc79a47cc32b762625b1fb
Asynchronous close event handling for single peer upstreams

Prevents additional upstream tries when consecutive asynchronous close
errors are encountered for single peer upstreams utilizing keepalive
connections.

This replaces the current behavior of unlimited retries.

diff -r b71e69247483 -r e1ec9971da67 src/event/ngx_event_connect.h
--- a/src/event/ngx_event_connect.h Mon May 01 19:16:05 2023
+0400 +++ b/src/event/ngx_event_connect.h   Sun May 07 21:35:10
2023 +0100 @@ -17,6 +17,7 @@
 #define NGX_PEER_KEEPALIVE   1
 #define NGX_PEER_NEXT2
 #define NGX_PEER_FAILED  4
+#define NGX_PEER_ASYNC_FAILED8
 
 
 typedef struct ngx_peer_connection_s  ngx_peer_connection_t;
@@ -64,6 +65,7 @@
 unsigned transparent:1;
 unsigned so_keepalive:1;
 unsigned down:1;
+unsigned async_failed:1;
 
  /* ngx_connection_log_error_e */
 unsigned log_error:2;
diff -r b71e69247483 -r e1ec9971da67 src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c  Mon May 01 19:16:05 2023 +0400
+++ b/src/http/ngx_http_upstream.c  Sun May 07 21:35:10 2023 +0100
@@ -4317,6 +4317,9 @@
 {
 state = NGX_PEER_NEXT;
 
+} else if (u->peer.cached && ft_type ==
NGX_HTTP_UPSTREAM_FT_ERROR) {
+state = NGX_PEER_FAILED | NGX_PEER_ASYNC_FAILED;
+
 } else {
 state = NGX_PEER_FAILED;
 }
@@ -4330,11 +4333,6 @@
   "upstream timed out");
 }
 
-if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) {
-/* TODO: inform balancer instead */
-u->peer.tries++;
-}
-
 switch (ft_type) {
 
 case NGX_HTTP_UPSTREAM_FT_TIMEOUT:
diff -r b71e69247483 -r e1ec9971da67
src/http/ngx_http_upstream_round_robin.c ---
a/src/http/ngx_http_upstream_round_robin.c  Mon May 01 19:16:05
2023 +0400 +++ b/src/http/ngx_http_upstream_round_robin.c   Sun
May 07 21:35:10 2023 +0100 @@ -623,6 +623,12 @@
ngx_http_upstream_rr_peers_unlock(rrp->peers); 
 pc->tries = 0;
+
+if (state & NGX_PEER_ASYNC_FAILED && !pc->async_failed) {
+pc->async_failed = 1;
+pc->tries = 1;
+}
+
 return;
 }
 
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: resolver does not work

2023-04-18 Thread J Carter

Hi,

On 18/04/2023 13:58, mailingl...@unix-solution.de wrote:

Hello,
I have a nginx proxy in front of systemd-nspawn containers.
The IPs of the containers are dynamic.

When I start nginx *after* the containers it works.
When the IP of the container is changed while nginx is running I get a
"Bad Gateway" error


The config looks like:

server {
     server_name foobar.example.com
...
    location / {
   resolver 127.0.0.53 valid=10s;
   ...
   proxy_pass http://container;


You need to use a variable in the proxy_pass here instead to do a dynamic
proxy pass. This forces nginx to re-resolve the hostname. The variable's
value should be set to the target hostname (looks like it's 'container'
from your config).


something like:
set $container_hostname 'container';
proxy_pass http://$container_hostname;

Note, you can't target an upstream server group with this technique - it
has to be a hostname that the DNS resolver can resolve.
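
Putting it together with your original config, something like this
(untested sketch):

    server {
        server_name foobar.example.com;

        location / {
            resolver 127.0.0.53 valid=10s;

            set $container_hostname container;
            proxy_pass http://$container_hostname;
        }
    }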


If you need more advanced re-resolving, such as the ability to use
upstream server groups and resolve the servers within them, NGINX Plus
has this feature.


In fact the patches from NGINX Plus that do the dynamic re-resolving are 
already on the devel mailing list - just not integrated.


https://mailman.nginx.org/pipermail/nginx-devel/2023-February/4MCLSVRK7EX6DNKHFZN6CA4SKZUSA3GA.html

So it can also be obtained by compiling from source with that set of
patches applied.


This is another alternative upstream resolver - although not as good, as 
it requires requests to initiate a re-resolve.


https://www.nginx.com/resources/wiki/modules/domain_resolve/


    }
}

nginx is 1.1.18 so it should work as documented in 
http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver
The workaround there 
https://stackoverflow.com/questions/42720618/docker-nginx-stopped-emerg-11-host-not-found-in-upstream/52319161#52319161 doesn't work.


I have also tried to configure an upstream backend and the resolver in the
server part or in the http part.
The errors are: "upstream timed out" or "container could not be resolved 
(3: Host not found)"


What's wrong there?
Best Regards
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


[PATCH] Asynchronous close event handling for single peer upstreams

2023-04-09 Thread J Carter
Hello,

Resubmitting with the changes suggested.

# HG changeset patch
# User jordanc.car...@outlook.com
# Date 1681067934 -3600
#  Sun Apr 09 20:18:54 2023 +0100
# Node ID c8dcf584b36505e42bd2ea2965c1020069adb677
# Parent  5f1d05a21287ba0290dd3a17ad501595b442a194
Asynchronous close event handling for single peer upstreams

Limits single peer upstreams to a single retry when consecutive
asynchronous close events are encountered.

diff -r 5f1d05a21287 -r c8dcf584b365 src/event/ngx_event_connect.h
--- a/src/event/ngx_event_connect.h Tue Mar 28 18:01:54 2023
+0300 +++ b/src/event/ngx_event_connect.h   Sun Apr 09 20:18:54
2023 +0100 @@ -17,6 +17,7 @@
 #define NGX_PEER_KEEPALIVE   1
 #define NGX_PEER_NEXT2
 #define NGX_PEER_FAILED  4
+#define NGX_PEER_ASYNC_FAILED8
 
 
 typedef struct ngx_peer_connection_s  ngx_peer_connection_t;
@@ -64,6 +65,7 @@
 unsigned transparent:1;
 unsigned so_keepalive:1;
 unsigned down:1;
+unsigned async_failed:1;
 
  /* ngx_connection_log_error_e */
 unsigned log_error:2;
diff -r 5f1d05a21287 -r c8dcf584b365 src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c  Tue Mar 28 18:01:54 2023 +0300
+++ b/src/http/ngx_http_upstream.c  Sun Apr 09 20:18:54 2023 +0100
@@ -4317,6 +4317,9 @@
 {
 state = NGX_PEER_NEXT;
 
+} else if (u->peer.cached && ft_type ==
NGX_HTTP_UPSTREAM_FT_ERROR) {
+state = NGX_PEER_FAILED | NGX_PEER_ASYNC_FAILED;
+
 } else {
 state = NGX_PEER_FAILED;
 }
@@ -4330,11 +4333,6 @@
   "upstream timed out");
 }
 
-if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) {
-/* TODO: inform balancer instead */
-u->peer.tries++;
-}
-
 switch (ft_type) {
 
 case NGX_HTTP_UPSTREAM_FT_TIMEOUT:
@@ -4421,7 +4419,6 @@
 return;
 }
 #endif
-
 ngx_http_upstream_finalize_request(r, u, status);
 return;
 }
diff -r 5f1d05a21287 -r c8dcf584b365
src/http/ngx_http_upstream_round_robin.c ---
a/src/http/ngx_http_upstream_round_robin.c  Tue Mar 28 18:01:54
2023 +0300 +++ b/src/http/ngx_http_upstream_round_robin.c   Sun
Apr 09 20:18:54 2023 +0100 @@ -616,14 +616,14 @@
ngx_http_upstream_rr_peer_lock(rrp->peers, peer); 
 if (rrp->peers->single) {
-
-peer->conns--;
+pc->tries = 0;
 
-ngx_http_upstream_rr_peer_unlock(rrp->peers, peer);
-ngx_http_upstream_rr_peers_unlock(rrp->peers);
+if (state & NGX_PEER_ASYNC_FAILED && pc->async_failed == 0) {
+pc->tries = 2;
+pc->async_failed = 1;
+}
 
-pc->tries = 0;
-return;
+goto cleanup;
 }
 
 if (state & NGX_PEER_FAILED) {
@@ -659,6 +659,7 @@
 }
 }
 
+cleanup:
 peer->conns--;
 
 ngx_http_upstream_rr_peer_unlock(rrp->peers, peer);


hgexport
Description: Binary data
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH] Added keepalive_async_fails command

2023-04-02 Thread J Carter

Hello Maxim,

Thank you for the feedback.

I think the points you made are fair - a new directive is possibly
overkill for this issue.

A single peer going through all of its (many) cached connections
when there is a non-asynchronous close connection error is where I've
personally seen the current behavior be most problematic - so this makes
sense, and your direction would still resolve this.

Point 3 would increase overhead/add latency for the average case, as a new
connection would need to be created (assuming additional cached connections
are likely to exist, and to not fail). I'd prefer to avoid that unless
there are strong objections, although implementing this would be fairly
trivial if there are.

However, for point 2 - I do have one question that I'd like your opinion
on regarding multi-peer upstreams - should a (suspected) asynchronous
close event count as a failure in terms of the logic in
upstream_round_robin's free?

In lieu of double trying for multi-peer, it seems like it may be
desirable to avoid counting these as 'real' failures given all the
effects imparted through passive health checks - such as triggering a
failure increment (and/or timeout) as well as adjusting weights
downwards if the weights are set for that upstream peer.

On the other hand, the ambiguity in the cause of the error means not
counting failures at all for connection errors that involve a cached
connection.

On 03/04/2023 02:42, Maxim Dounin wrote:

Hello!

On Sun, Apr 02, 2023 at 06:57:03PM +0100, J Carter wrote:


I've also attached an example nginx.conf and test script that
simulates the asynchronous close events.
Two different test cases can be found within that, one with path
/1 for single peer upstream and /2 for multi-peer.

You should see 2 upstream addresses repeated in a row
per-upstream-server in the access log by default, as it fails
through the cached connections & next performs next upstream
tries.

Any feedback would be appreciated.

# HG changeset patch
# User jordanc.car...@outlook.com
# Date 1680457073 -3600
#  Sun Apr 02 18:37:53 2023 +0100
# Node ID 9ec4d7a8cdf6cdab00d09dff75fa6045f6f5533f
# Parent  5f1d05a21287ba0290dd3a17ad501595b442a194
Added keepalive_async_fails command to keepalive load balancer module.
This value determines the number of suspected keepalive race events
per-upstream-try that will be tolerated before a subsequent network connection
error is considered a true failure.

It looks like you are trying to address issues with re-trying
requests to upstream servers when errors not distinguishable from
asynchronous close events happen, as outlined in ticket #2421
(https://trac.nginx.org/nginx/ticket/2421).

I don't think that introducing another directive for this is a
good idea.  Rather, I would consider modifying the behaviour to
better behave in case of such errors.  In particular, the
following approaches look promising:

- Allow at most one additional try.

- Allow an additional try only if there is a single peer, so
   normally request is not going to be retried at all.

- Don't use cached connections if an error considered to be an
   asynchronous close event happens.

Given that since nginx 1.15.3 we have keepalive_timeout in the
upstream blocks to mitigate potential asynchronous close events,
even something simple like combination of (1) and (2) might be
good enough.  With (3), things are going to be as correct as it
can be.

[...]


___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH] Added keepalive_async_fails command

2023-04-02 Thread J Carter
Re-sending the patch as an attachment as the formatting is still weird,
and fixed a typo I spotted.


On 02/04/2023 18:57, J Carter wrote:

Hello,

I've also attached an example nginx.conf and test script that
simulates the asynchronous close events.
Two different test cases can be found within that, one with path /1
for single peer upstream and /2 for multi-peer.

You should see 2 upstream addresses repeated in a row
per-upstream-server in the access log by default, as it fails
through the cached connections and then performs next-upstream tries.

Any feedback would be appreciated.

# HG changeset patch
# User jordanc.car...@outlook.com
# Date 1680457073 -3600
#  Sun Apr 02 18:37:53 2023 +0100
# Node ID 9ec4d7a8cdf6cdab00d09dff75fa6045f6f5533f
# Parent  5f1d05a21287ba0290dd3a17ad501595b442a194
Added keepalive_async_fails command to keepalive load balancer module.
This value determines the number of suspected keepalive race events
per-upstream-try that will be tolerated before a subsequent network
connection error is considered a true failure.

diff -r 5f1d05a21287 -r 9ec4d7a8cdf6 src/event/ngx_event_connect.h
--- a/src/event/ngx_event_connect.h    Tue Mar 28 18:01:54 2023 +0300
+++ b/src/event/ngx_event_connect.h    Sun Apr 02 18:37:53 2023 +0100
@@ -17,6 +17,7 @@
 #define NGX_PEER_KEEPALIVE   1
 #define NGX_PEER_NEXT    2
 #define NGX_PEER_FAILED  4
+#define NGX_PEER_ASYNC_FAILED    8


 typedef struct ngx_peer_connection_s  ngx_peer_connection_t;
@@ -41,6 +42,7 @@
 ngx_str_t   *name;

 ngx_uint_t   tries;
+    ngx_uint_t   async_fails;
 ngx_msec_t   start_time;

 ngx_event_get_peer_pt    get;
diff -r 5f1d05a21287 -r 9ec4d7a8cdf6 
src/http/modules/ngx_http_upstream_keepalive_module.c
--- a/src/http/modules/ngx_http_upstream_keepalive_module.c    Tue Mar 
28 18:01:54 2023 +0300
+++ b/src/http/modules/ngx_http_upstream_keepalive_module.c    Sun Apr 
02 18:37:53 2023 +0100

@@ -13,6 +13,7 @@
 typedef struct {
 ngx_uint_t max_cached;
 ngx_uint_t requests;
+    ngx_uint_t max_async_fails;
 ngx_msec_t time;
 ngx_msec_t timeout;

@@ -108,6 +109,13 @@
   offsetof(ngx_http_upstream_keepalive_srv_conf_t, requests),
   NULL },

+ { ngx_string("keepalive_async_fails"),
+  NGX_HTTP_UPS_CONF|NGX_CONF_TAKE1,
+  ngx_conf_set_num_slot,
+  NGX_HTTP_SRV_CONF_OFFSET,
+  offsetof(ngx_http_upstream_keepalive_srv_conf_t, max_async_fails),
+  NULL },
+
   ngx_null_command
 };

@@ -160,6 +168,7 @@
 ngx_conf_init_msec_value(kcf->time, 3600000);
 ngx_conf_init_msec_value(kcf->timeout, 60000);
 ngx_conf_init_uint_value(kcf->requests, 1000);
+    ngx_conf_init_uint_value(kcf->max_async_fails, 2);

 if (kcf->original_init_upstream(cf, us) != NGX_OK) {
 return NGX_ERROR;
@@ -320,6 +329,21 @@
 u = kp->upstream;
 c = pc->connection;

+    if (state & NGX_PEER_ASYNC_FAILED) {
+    pc->async_fails++;
+
+    if (pc->async_fails == 2) {
+    pc->async_fails = 0;
+    state = NGX_PEER_FAILED;
+
+    } else {
+    pc->tries++;
+    }
+    goto invalid;
+    }
+
+    pc->async_fails = 0;
+
 if (state & NGX_PEER_FAILED
 || c == NULL
 || c->read->eof
@@ -529,6 +553,8 @@
 conf->time = NGX_CONF_UNSET_MSEC;
 conf->timeout = NGX_CONF_UNSET_MSEC;
 conf->requests = NGX_CONF_UNSET_UINT;
+    conf->max_async_fails = NGX_CONF_UNSET_UINT;
+

 return conf;
 }
diff -r 5f1d05a21287 -r 9ec4d7a8cdf6 src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c    Tue Mar 28 18:01:54 2023 +0300
+++ b/src/http/ngx_http_upstream.c    Sun Apr 02 18:37:53 2023 +0100
@@ -4317,6 +4317,8 @@
 {
 state = NGX_PEER_NEXT;

+    } else if (u->peer.cached && ft_type == 
NGX_HTTP_UPSTREAM_FT_ERROR) {

+    state = NGX_PEER_ASYNC_FAILED;
 } else {
 state = NGX_PEER_FAILED;
 }
@@ -4330,11 +4332,6 @@
   "upstream timed out");
 }

-    if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) {
-    /* TODO: inform balancer instead */
-    u->peer.tries++;
-    }
-
 switch (ft_type) {

 case NGX_HTTP_UPSTREAM_FT_TIMEOUT:
@@ -4421,7 +4418,6 @@
 return;
 }
 #endif
-
 ngx_http_upstream_finalize_request(r, u, status);
 return;
 }
diff -r 5f1d05a21287 -r 9ec4d7a8cdf6 
src/http/ngx_http_upstream_round_robin.c
--- a/src/http/ngx_http_upstream_round_robin.c    Tue Mar 28 18:01:54 
2023 +0300
+++ b/src/http/ngx_http_upstream_round_robin.c    Sun Apr 02 18:37:53 
2023 +0100

@@ -2

[PATCH] Added keepalive_async_fails command

2023-04-02 Thread J Carter

Hello,

I've also attached an example nginx.conf and test script that simulates the 
asynchronous close events.
Two different test cases can be found within that, one with path /1 for single 
peer upstream and /2 for multi-peer.

You should see 2 upstream addresses repeated in a row per-upstream-server in 
the access log by default, as it fails through the cached connections and
then performs next-upstream tries.

Any feedback would be appreciated.

# HG changeset patch
# User jordanc.car...@outlook.com
# Date 1680457073 -3600
#  Sun Apr 02 18:37:53 2023 +0100
# Node ID 9ec4d7a8cdf6cdab00d09dff75fa6045f6f5533f
# Parent  5f1d05a21287ba0290dd3a17ad501595b442a194
Added keepalive_async_fails command to keepalive load balancer module.
This value determines the number of suspected keepalive race events
per-upstream-try that will be tolerated before a subsequent network connection
error is considered a true failure.
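
For illustration, usage would look like this (a sketch - 2 is the
default set by the patch):

    upstream backend {
        server 127.0.0.1:8080;

        keepalive 16;
        keepalive_async_fails 2;
    }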

diff -r 5f1d05a21287 -r 9ec4d7a8cdf6 src/event/ngx_event_connect.h
--- a/src/event/ngx_event_connect.h Tue Mar 28 18:01:54 2023 +0300
+++ b/src/event/ngx_event_connect.h Sun Apr 02 18:37:53 2023 +0100
@@ -17,6 +17,7 @@
 #define NGX_PEER_KEEPALIVE   1
 #define NGX_PEER_NEXT2
 #define NGX_PEER_FAILED  4
+#define NGX_PEER_ASYNC_FAILED8
 
 
 typedef struct ngx_peer_connection_s  ngx_peer_connection_t;

@@ -41,6 +42,7 @@
 ngx_str_t   *name;
 
 ngx_uint_t   tries;

+ngx_uint_t   async_fails;
 ngx_msec_t   start_time;
 
 ngx_event_get_peer_ptget;

diff -r 5f1d05a21287 -r 9ec4d7a8cdf6 
src/http/modules/ngx_http_upstream_keepalive_module.c
--- a/src/http/modules/ngx_http_upstream_keepalive_module.c Tue Mar 28 
18:01:54 2023 +0300
+++ b/src/http/modules/ngx_http_upstream_keepalive_module.c Sun Apr 02 
18:37:53 2023 +0100
@@ -13,6 +13,7 @@
 typedef struct {
 ngx_uint_t max_cached;
 ngx_uint_t requests;
+ngx_uint_t max_async_fails;
 ngx_msec_t time;
 ngx_msec_t timeout;
 
@@ -108,6 +109,13 @@

   offsetof(ngx_http_upstream_keepalive_srv_conf_t, requests),
   NULL },
 
+ { ngx_string("keepalive_async_fails"),

+  NGX_HTTP_UPS_CONF|NGX_CONF_TAKE1,
+  ngx_conf_set_num_slot,
+  NGX_HTTP_SRV_CONF_OFFSET,
+  offsetof(ngx_http_upstream_keepalive_srv_conf_t, max_async_fails),
+  NULL },
+
   ngx_null_command
 };
 
@@ -160,6 +168,7 @@

 ngx_conf_init_msec_value(kcf->time, 3600000);
 ngx_conf_init_msec_value(kcf->timeout, 60000);
 ngx_conf_init_uint_value(kcf->requests, 1000);
+ngx_conf_init_uint_value(kcf->max_async_fails, 2);
 
 if (kcf->original_init_upstream(cf, us) != NGX_OK) {

 return NGX_ERROR;
@@ -320,6 +329,21 @@
 u = kp->upstream;
 c = pc->connection;
 
+if (state & NGX_PEER_ASYNC_FAILED) {

+pc->async_fails++;
+
+if (pc->async_fails == 2) {
+pc->async_fails = 0;
+state = NGX_PEER_FAILED;
+
+} else {
+pc->tries++;
+}
+goto invalid;
+}
+
+pc->async_fails = 0;
+
 if (state & NGX_PEER_FAILED
 || c == NULL
 || c->read->eof
@@ -529,6 +553,8 @@
 conf->time = NGX_CONF_UNSET_MSEC;
 conf->timeout = NGX_CONF_UNSET_MSEC;
 conf->requests = NGX_CONF_UNSET_UINT;
+conf->max_async_fails = NGX_CONF_UNSET_UINT;
+
 
 return conf;

 }
diff -r 5f1d05a21287 -r 9ec4d7a8cdf6 src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c  Tue Mar 28 18:01:54 2023 +0300
+++ b/src/http/ngx_http_upstream.c  Sun Apr 02 18:37:53 2023 +0100
@@ -4317,6 +4317,8 @@
 {
 state = NGX_PEER_NEXT;
 
+} else if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) {

+state = NGX_PEER_ASYNC_FAILED;
 } else {
 state = NGX_PEER_FAILED;
 }
@@ -4330,11 +4332,6 @@
   "upstream timed out");
 }
 
-if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) {

-/* TODO: inform balancer instead */
-u->peer.tries++;
-}
-
 switch (ft_type) {
 
 case NGX_HTTP_UPSTREAM_FT_TIMEOUT:

@@ -4421,7 +4418,6 @@
 return;
 }
 #endif
-
 ngx_http_upstream_finalize_request(r, u, status);
 return;
 }
diff -r 5f1d05a21287 -r 9ec4d7a8cdf6 src/http/ngx_http_upstream_round_robin.c
--- a/src/http/ngx_http_upstream_round_robin.c  Tue Mar 28 18:01:54 2023 +0300
+++ b/src/http/ngx_http_upstream_round_robin.c  Sun Apr 02 18:37:53 2023 +0100
@@ -297,6 +297,7 @@
 r->upstream->peer.get = ngx_http_upstream_get_round_robin_peer;
 r->upstream->peer.free = ngx_http_upstream_free_round_robin_peer;
 r->upstream->peer.tries = ngx_http_upstream_tries(rrp->peers);
+r->upstream->peer.async_fails = 0;
 #if 

Re: [PATCH 5 of 6] Upstream: allow any worker to resolve upstream servers

2023-02-09 Thread J Carter


On 09/02/2023 16:45, Aleksei Bavshin wrote:

On 2/5/2023 7:01 PM, J Carter wrote:

Hi Aleksei,

Why not permanently assign the task of resolving a given upstream 
server group (all servers/peers within it) to a single worker?


It seems that this approach would resolve the SRV issues, and remove 
the need for the shared queue of tasks.


The load would still be spread evenly for the most realistic 
scenarios - which is where there are many upstream server groups of 
few servers, as opposed to few upstream server groups of many servers.


The intent of the change was exactly opposite, to avoid any permanent
assignment of periodic tasks to a worker and allow other processes
to resume resolving if the original assignee exits, whether
normally or abnormally.  I'm not even doing enough for that -- I
should've kept in-progress tasks at the end of the queue with expires 
= resolver timeout + a small constant, and retry from another process 
when the timeout is reached, but the idea was abandoned for a 
minuscule improvement of insertion time. I expect to be asked to 
reconsider, as patch 6/6 does not cover all the possible situations 
where we want to recover a stale task.


Makes sense.

A permanent assignment of a whole upstream would also require
notifying other processes that the upstream is no longer assigned if
the worker exits, or consistently recovering that assignment over a
restart of a single worker (e.g. after a crash - not a regular
situation, but one we should take into account nonetheless).


It's a good point - in my mind I had rendezvous hashing plus a notification
sent to all workers when a fellow worker dies: the next worker in the
rendezvous 'list' would simply assume the dead worker's upstreams while
the new one inits, and hand them back once the replacement worker is up
(would still use some locks).


Or to keep it simple, just wait for the dead worker's replacement to 
reinit, and pick up the former's stale upstreams.
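
To sketch the rendezvous idea in plain C (hypothetical, with FNV-1a as a
stand-in hash - not proposed code): every worker computes the same owner
for a given upstream name from the set of live worker ids, so no shared
assignment state is needed.

    #include <stddef.h>
    #include <stdint.h>

    static uint64_t
    hrw_score(const char *upstream, uint32_t worker)
    {
        const unsigned char  *p;
        uint64_t              h;

        h = 1469598103934665603ULL;              /* FNV-1a offset basis */

        for (p = (const unsigned char *) upstream; *p; p++) {
            h = (h ^ *p) * 1099511628211ULL;     /* FNV-1a prime */
        }

        return (h ^ worker) * 1099511628211ULL;  /* mix in the worker id */
    }

    static uint32_t
    owner_worker(const char *upstream, const uint32_t *alive, size_t n)
    {
        size_t    i;
        uint32_t  best;
        uint64_t  score, best_score;

        best = alive[0];
        best_score = hrw_score(upstream, alive[0]);

        for (i = 1; i < n; i++) {
            score = hrw_score(upstream, alive[i]);

            if (score > best_score) {
                best_score = score;
                best = alive[i];
            }
        }

        return best;
    }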


And the benefit is not quite obvious - I mentioned that resolving SRVs 
with a lot of records may take longer to update the list of peers, but 
the situation with contention is not expected to change significantly* 
if we pin these tasks to a single worker as another worker may be 
doing the same for another upstream. Most importantly, this isn't even 
a bottleneck. It only slightly exacerbates an existing problem with 
certain balancers that already suffer from the overuse of locks, in a 
configuration that was specifically crafted to amplify and highlight 
the difference and is far from these most realistic scenarios.

* Pending verification on a performance test stand.


Well the benefit is that it would prevent the disadvantage you listed, 
and remove at least one other contended lock throughout normal 
operations (the priority queue). But fair enough, yes it makes sense to 
profile it in a wide range of scenarios to see if any of those are
legitimate worries first.



___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel

___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH 5 of 6] Upstream: allow any worker to resolve upstream servers

2023-02-05 Thread J Carter

Hi Aleksei,

Why not permanently assign the task of resolving a given upstream server 
group (all servers/peers within it) to a single worker?


It seems that this approach would resolve the SRV issues, and remove the 
need for the shared queue of tasks.


The load would still be spread evenly for the most realistic scenarios - 
which is where there are many upstream server groups of few servers, as 
opposed to few upstream server groups of many servers.



On 01/02/2023 01:37, Aleksei Bavshin wrote:

# HG changeset patch
# User Aleksei Bavshin 
# Date 1670883784 28800
#  Mon Dec 12 14:23:04 2022 -0800
# Node ID f8eb6b94d8f46008eb5f2f1dbc747750d5755506
# Parent  cfae397f1ea87a35c41febab6162fe5142aa767b
Upstream: allow any worker to resolve upstream servers.

This change addresses one of the limitations of the current re-resolve
code, dependency on the worker 0.  Now, each worker is able to pick a
resolve task from a shared priority queue.

The single worker implementation relies on the fact that each peer is
assigned to a specific worker and no other process may access its data.
Thus, it's safe to keep `peer->host.event` in the shared zone and modify
as necessary.  That assumption becomes invalid once we allow any free
worker to update the peer.  Now, all the workers have to know when the
previous resolution result expires and maintain their own timers.  A
single shared event structure is no longer sufficient.

The obvious solution is to make timer events local to a worker by moving
them up to the nearest object in a local memory, upstream.  From the
upstream timer event handler we can walk the list of the peers and pick
these that are expired and not already owned by another process.

To reduce the time spent under a lock we can keep a priority queue of
pending tasks, sorted by expiration time.  Each worker is able to get an
expired server from the head of the queue, perform the name resolution
and put the peer back with a new expiration time.
Per-upstream or per-zone rbtree was considered as a store for pending
tasks, but there won't be any performance benefit until a certain large
number of servers in the upstream.  Per-zone queues also require more
intricate locking.

The benefits of the change are obvious: we're no longer tied to a
lifetime of the first worker process and the distribution of the tasks
is more even.  There are certain disadvantages though:

- SRV record may resolve into multiple A/AAAA records with different TTL
   kept in a worker-local cache of a resolver.  The next query in the
   same worker will reuse all the cache entries that are still valid.
   With the task distribution implemented, any worker may schedule the
   next update of a peer and thus we lose the benefit of a local cache.

- The change introduces an additional short lock on the list of peers
   and allows to acquire existing long locks from different processes.
   For example, it's possible that different workers will resolve large
   SRV records from the same upstream and attempt to update the list of
   peers at the same time.

diff --git a/src/http/modules/ngx_http_upstream_zone_module.c 
b/src/http/modules/ngx_http_upstream_zone_module.c
--- a/src/http/modules/ngx_http_upstream_zone_module.c
+++ b/src/http/modules/ngx_http_upstream_zone_module.c
@@ -10,6 +10,9 @@
  #include <ngx_http.h>
  
  
+#define NGX_UPSTREAM_RESOLVE_NO_WORKER  (ngx_uint_t) -1

+
+
  static char *ngx_http_upstream_zone(ngx_conf_t *cf, ngx_command_t *cmd,
  void *conf);
  static ngx_int_t ngx_http_upstream_init_zone(ngx_shm_zone_t *shm_zone,
@@ -40,6 +43,13 @@ static ngx_command_t  ngx_http_upstream_
  static ngx_int_t ngx_http_upstream_zone_init_worker(ngx_cycle_t *cycle);
  static void ngx_http_upstream_zone_resolve_timer(ngx_event_t *event);
  static void ngx_http_upstream_zone_resolve_handler(ngx_resolver_ctx_t *ctx);
+static void ngx_http_upstream_zone_resolve_queue_insert(ngx_queue_t *queue,
+ngx_http_upstream_host_t *host);
+static void ngx_http_upstream_zone_start_resolve(
+ngx_http_upstream_srv_conf_t *uscf, ngx_http_upstream_host_t *host);
+static void ngx_http_upstream_zone_schedule_resolve(
+ngx_http_upstream_srv_conf_t *uscf, ngx_http_upstream_host_t *host,
+ngx_msec_t timer);
  
  
  static ngx_http_module_t  ngx_http_upstream_zone_module_ctx = {

@@ -231,6 +241,8 @@ ngx_http_upstream_zone_copy_peers(ngx_sl
  peers->shpool = shpool;
  peers->config = config;
  
+ngx_queue_init(&peers->resolve_queue);

+
   for (peerp = &peers->peer; *peerp; peerp = &peer->next) {
  /* pool is unlocked */
  peer = ngx_http_upstream_zone_copy_peer(peers, *peerp);
@@ -248,6 +260,9 @@ ngx_http_upstream_zone_copy_peers(ngx_sl
  return NULL;
  }
  
+ngx_http_upstream_rr_peer_ref(peers, peer);

+ngx_queue_insert_tail(&peers->resolve_queue, &peer->host->queue);
+
  *peerp = peer;
  peer->id = (*peers->config)++;
  }
@@ -268,6 +283,8 @@ ngx_http_upstream_zone_copy_peers(ngx_sl
  

Re: [PATCH] Added dark mode support to error pages - request in ticket #2434

2023-01-28 Thread J Carter
Adding the patch file, as something is messing with the formatting of
the patches in the email body.
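
For readability here, the gist of the new head macro is roughly the
following - an illustrative sketch only (a prefers-color-scheme media
query), not the verbatim hunk:

    #define ngx_error_page_head(MESSAGE)                              \
    "<html>" CRLF                                                     \
    "<head><title>" MESSAGE "</title>" CRLF                           \
    "<style>@media (prefers-color-scheme: dark)" CRLF                 \
    "{ body { background-color: #111; color: #ccc; } }</style>" CRLF  \
    "</head>" CRLF                                                    \
    "<body>" CRLF                                                     \
    "<center><h1>" MESSAGE "</h1></center>" CRLF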


On 28/01/2023 23:15, J Carter wrote:

Hello,

An example of the dark mode and light mode pages is here 
https://imgur.com/a/n9QJpp4


# HG changeset patch
# User jordanc.car...@outlook.com
# Date 1674945436 0
# Sat Jan 28 22:37:16 2023 +0000
# Branch dark-error
# Node ID 0dcba21038765f6f03098cbdf23f401e89e3648f
# Parent 07b0bee87f32be91a33210bc06973e07c4c1dac9
Added dark mode support to error pages - request in ticket #2434

diff -r 07b0bee87f32 -r 0dcba2103876 src/http/ngx_http_special_response.c
--- a/src/http/ngx_http_special_response.c Wed Dec 21 14:53:27 2022 +0300
+++ b/src/http/ngx_http_special_response.c Sat Jan 28 22:37:16 2023 +
@@ -18,6 +18,27 @@
static ngx_int_t ngx_http_send_refresh(ngx_http_request_t *r);
+#define ngx_error_page_head(MESSAGE) \
+"" CRLF \
+"" CRLF \
+"" MESSAGE "" CRLF \
+"" CRLF \
+"" CRLF \
+"" CRLF \
+"" MESSAGE "" CRLF
+
+
+#define ngx_error_page_head_br(MESSAGE) \
+"" CRLF \
+"" CRLF \
+"400" MESSAGE "" CRLF \
+"" CRLF \
+"" CRLF \
+"" CRLF \
+"400 Bad Request" CRLF \
+"" MESSAGE "" CRLF
+
+
static u_char ngx_http_error_full_tail[] =
"" NGINX_VER "" CRLF
"" CRLF
@@ -57,284 +78,140 @@
"\">" CRLF;
-static char ngx_http_error_301_page[] =
-"" CRLF
-"301 Moved Permanently" CRLF
-"" CRLF
-"301 Moved Permanently" CRLF
-;
+static char ngx_http_error_301_page[] = + ngx_error_page_head("301 
Moved Permanently");

static char ngx_http_error_302_page[] =
-"" CRLF
-"302 Found" CRLF
-"" CRLF
-"302 Found" CRLF
-;
+ ngx_error_page_head("302 Found");
static char ngx_http_error_303_page[] =
-"" CRLF
-"303 See Other" CRLF
-"" CRLF
-"303 See Other" CRLF
-;
+ ngx_error_page_head("303 See Other");
static char ngx_http_error_307_page[] =
-"" CRLF
-"307 Temporary Redirect" CRLF
-"" CRLF
-"307 Temporary Redirect" CRLF
-;
+ ngx_error_page_head("307 Temporary Redirect");
static char ngx_http_error_308_page[] =
-"" CRLF
-"308 Permanent Redirect" CRLF
-"" CRLF
-"308 Permanent Redirect" CRLF
-;
+ ngx_error_page_head("308 Permanent Redirect");
static char ngx_http_error_400_page[] =
-"" CRLF
-"400 Bad Request" CRLF
-"" CRLF
-"400 Bad Request" CRLF
-;
+ ngx_error_page_head("400 Bad Request");
static char ngx_http_error_401_page[] =
-"" CRLF
-"401 Authorization Required" CRLF
-"" CRLF
-"401 Authorization Required" CRLF
-;
+ ngx_error_page_head("401 Authorization Required");
static char ngx_http_error_402_page[] =
-"" CRLF
-"402 Payment Required" CRLF
-"" CRLF
-"402 Payment Required" CRLF
-;
+ ngx_error_page_head("402 Payment Required");
static char ngx_http_error_403_page[] =
-"" CRLF
-"403 Forbidden" CRLF
-"" CRLF
-"403 Forbidden" CRLF
-;
+ ngx_error_page_head("403 Forbidden");
static char ngx_http_error_404_page[] =
-"" CRLF
-"404 Not Found" CRLF
-"" CRLF
-"404 Not Found" CRLF
-;
+ ngx_error_page_head("404 Not Found");
static char ngx_http_error_405_page[] =
-"" CRLF
-"405 Not Allowed" CRLF
-"" CRLF
-"405 Not Allowed" CRLF
-;
+ ngx_error_page_head("405 Not Allowed");
static char ngx_http_error_406_page[] =
-"" CRLF
-"406 Not Acceptable" CRLF
-"" CRLF
-"406 Not Acceptable" CRLF
-;
+ ngx_error_page_head("406 Not Acceptable");
static char ngx_http_error_408_page[] =
-"" CRLF
-"408 Request Time-out" CRLF
-"" CRLF
-"408 Request Time-out" CRLF
-;
+ ngx_error_page_head("408 Request Time-out");
static char ngx_http_error_409_page[] =
-"" CRLF
-"409 Conflict" CRLF
-"" CRLF
-"409 Conflict" CRLF
-;
+ ngx_error_page_head("409 Conflict");
static char ngx_http_error_410_page[] =
-"" CRLF
-"410 Gone" CRLF
-"" CRLF
-"410 Gone" CRLF
-;
+ ngx_error_page_head("410 Gone");
static char ngx_http_error_411_page[] =
-"" CRLF
-"411 Length Required" CRLF
-"" CRLF
-"411 Length Required" CRLF
-;
+ ngx_error_page_head("411 Length Required");
static char ngx_http_error_412_page[] =
-"" CRLF
-"412 Precondition Failed" CRLF
-"" CRLF
-"412 Precondition Failed" CRLF
-;

[PATCH] Added dark mode support to error pages - request in ticket #2434

2023-01-28 Thread J Carter

Hello,

An example of the dark mode and light mode pages is here 
https://imgur.com/a/n9QJpp4


# HG changeset patch
# User jordanc.car...@outlook.com
# Date 1674945436 0
# Sat Jan 28 22:37:16 2023 +0000
# Branch dark-error
# Node ID 0dcba21038765f6f03098cbdf23f401e89e3648f
# Parent 07b0bee87f32be91a33210bc06973e07c4c1dac9
Added dark mode support to error pages - request in ticket #2434

diff -r 07b0bee87f32 -r 0dcba2103876 src/http/ngx_http_special_response.c
--- a/src/http/ngx_http_special_response.c Wed Dec 21 14:53:27 2022 +0300
+++ b/src/http/ngx_http_special_response.c Sat Jan 28 22:37:16 2023 +
@@ -18,6 +18,27 @@
static ngx_int_t ngx_http_send_refresh(ngx_http_request_t *r);
+#define ngx_error_page_head(MESSAGE) \
+"" CRLF \
+"" CRLF \
+"" MESSAGE "" CRLF \
+"" CRLF \
+"" CRLF \
+"" CRLF \
+"" MESSAGE "" CRLF
+
+
+#define ngx_error_page_head_br(MESSAGE) \
+"" CRLF \
+"" CRLF \
+"400" MESSAGE "" CRLF \
+"" CRLF \
+"" CRLF \
+"" CRLF \
+"400 Bad Request" CRLF \
+"" MESSAGE "" CRLF
+
+
static u_char ngx_http_error_full_tail[] =
"" NGINX_VER "" CRLF
"" CRLF
@@ -57,284 +78,140 @@
"\">" CRLF;
-static char ngx_http_error_301_page[] =
-"" CRLF
-"301 Moved Permanently" CRLF
-"" CRLF
-"301 Moved Permanently" CRLF
-;
+static char ngx_http_error_301_page[] = + ngx_error_page_head("301 
Moved Permanently");

static char ngx_http_error_302_page[] =
-"" CRLF
-"302 Found" CRLF
-"" CRLF
-"302 Found" CRLF
-;
+ ngx_error_page_head("302 Found");
static char ngx_http_error_303_page[] =
-"" CRLF
-"303 See Other" CRLF
-"" CRLF
-"303 See Other" CRLF
-;
+ ngx_error_page_head("303 See Other");
static char ngx_http_error_307_page[] =
-"" CRLF
-"307 Temporary Redirect" CRLF
-"" CRLF
-"307 Temporary Redirect" CRLF
-;
+ ngx_error_page_head("307 Temporary Redirect");
static char ngx_http_error_308_page[] =
-"" CRLF
-"308 Permanent Redirect" CRLF
-"" CRLF
-"308 Permanent Redirect" CRLF
-;
+ ngx_error_page_head("308 Permanent Redirect");
static char ngx_http_error_400_page[] =
-"" CRLF
-"400 Bad Request" CRLF
-"" CRLF
-"400 Bad Request" CRLF
-;
+ ngx_error_page_head("400 Bad Request");
static char ngx_http_error_401_page[] =
-"" CRLF
-"401 Authorization Required" CRLF
-"" CRLF
-"401 Authorization Required" CRLF
-;
+ ngx_error_page_head("401 Authorization Required");
static char ngx_http_error_402_page[] =
-"" CRLF
-"402 Payment Required" CRLF
-"" CRLF
-"402 Payment Required" CRLF
-;
+ ngx_error_page_head("402 Payment Required");
static char ngx_http_error_403_page[] =
-"" CRLF
-"403 Forbidden" CRLF
-"" CRLF
-"403 Forbidden" CRLF
-;
+ ngx_error_page_head("403 Forbidden");
static char ngx_http_error_404_page[] =
-"" CRLF
-"404 Not Found" CRLF
-"" CRLF
-"404 Not Found" CRLF
-;
+ ngx_error_page_head("404 Not Found");
static char ngx_http_error_405_page[] =
-"" CRLF
-"405 Not Allowed" CRLF
-"" CRLF
-"405 Not Allowed" CRLF
-;
+ ngx_error_page_head("405 Not Allowed");
static char ngx_http_error_406_page[] =
-"" CRLF
-"406 Not Acceptable" CRLF
-"" CRLF
-"406 Not Acceptable" CRLF
-;
+ ngx_error_page_head("406 Not Acceptable");
static char ngx_http_error_408_page[] =
-"" CRLF
-"408 Request Time-out" CRLF
-"" CRLF
-"408 Request Time-out" CRLF
-;
+ ngx_error_page_head("408 Request Time-out");
static char ngx_http_error_409_page[] =
-"" CRLF
-"409 Conflict" CRLF
-"" CRLF
-"409 Conflict" CRLF
-;
+ ngx_error_page_head("409 Conflict");
static char ngx_http_error_410_page[] =
-"" CRLF
-"410 Gone" CRLF
-"" CRLF
-"410 Gone" CRLF
-;
+ ngx_error_page_head("410 Gone");
static char ngx_http_error_411_page[] =
-"" CRLF
-"411 Length Required" CRLF
-"" CRLF
-"411 Length Required" CRLF
-;
+ ngx_error_page_head("411 Length Required");
static char ngx_http_error_412_page[] =
-"" CRLF
-"412 Precondition Failed" CRLF
-"" CRLF
-"412 Precondition Failed" CRLF
-;
+ ngx_error_page_head("412 Precondition Failed");
static char ngx_http_error_413_page[] =
-"" CRLF
-"413 Request Entity Too Large" CRLF
-"" CRLF
-"413 Request Entity Too Large" CRLF
-;
+ ngx_error_page_head("413 Request Entity Too Large");
static char ngx_http_error_414_page[] =
-"" CRLF
-"414 Request-URI Too Large" CRLF
-"" CRLF
-"414 Request-URI Too Large" CRLF
-;
+ ngx_error_page_head("414 Request-URI Too Large");
static char ngx_http_error_415_page[] =
-"" CRLF
-"415 Unsupported Media Type" CRLF
-"" CRLF
-"415 Unsupported Media Type" CRLF
-;
+ ngx_error_page_head("415 Unsupported Media Type");
static char ngx_http_error_416_page[] =
-"" CRLF
-"416 Requested Range Not Satisfiable" CRLF
-"" CRLF
-"416 Requested Range Not Satisfiable" CRLF
-;
+ ngx_error_page_head("416 Requested Range Not Satisfiable");
static char ngx_http_error_421_page[] =
-"" CRLF
-"421 Misdirected Request" CRLF
-"" CRLF
-"421 Misdirected Request" CRLF
-;
+ ngx_error_page_head("421 Misdirected Request");
static char ngx_http_error_429_page[] =
-"" CRLF
-"429 Too Many Requests" CRLF
-"" CRLF
-"429 Too Many Requests" CRLF
-;
+ ngx_error_page_head("429 Too Many Requests");
-static char ngx_http_error_494_page[] =
-"" CRLF
-"400 

Re: [PATCH] limit_req_status: allow status response to be as low as 200

2023-01-12 Thread J Carter

It's trivial to do with error_page as mentioned, OP - an example:

    limit_req_status 598;
    limit_req_zone $binary_remote_addr zone=a:1m rate=1r/m;

    server {
        limit_req zone=a nodelay;

        error_page 598 = @send-204;

        location / {
            ...
        }

        ...

        location @send-204 {
            return 204;
        }
    }

error_page's '=' can also be set to '=204', and the named location can
then contain nothing at all (whatever you prefer).


On 12/01/2023 17:11, Maxim Dounin wrote:

Hello!

On Thu, Jan 12, 2023 at 05:16:21AM -0800, Christopher Liebman wrote:


Not with 204.
This works quite well with a partner that has an aversion to errors when
they run over the limit:
limit_req_status 204;

Indeed, 204 happens to be one of the two 2xx codes which can be
returned directly, as they are handled in
ngx_http_finalize_request() to facilitate simpler
code in the dav module.  This is not what your patch enables
though.  For all other codes, except 204 and 201 mentioned above,
just returning them will simply break things.

As already suggested, proper behaviour for all the codes can be
easily configured by using the "error_page" directive.

Hope this helps.


___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH] Dynamic rate limiting for limit_req module

2023-01-07 Thread J Carter
 ngx_http_limit_req_account(ngx_http_limit_req_limit_t *limits, ngx_uint_t n,

-    ngx_uint_t *ep, ngx_http_limit_req_limit_t **limit)
+    ngx_uint_t *ep, ngx_http_limit_req_limit_t **limit, ngx_uint_t rate)
 {
 ngx_int_t   excess;
 ngx_msec_t  now, delay, max_delay;
@@ -548,8 +590,7 @@
 max_delay = 0;

 } else {
-    ctx = (*limit)->shm_zone->data;
-    max_delay = (excess - (*limit)->delay) * 1000 / ctx->rate;
+    max_delay = (excess - (*limit)->delay) * 1000 / rate;
 }

 while (n--) {
@@ -572,7 +613,7 @@
 ms = 0;
 }

-    excess = lr->excess - ctx->rate * ms / 1000 + 1000;
+    excess = lr->excess - lr->rate * ms / 1000 + 1000;

 if (excess < 0) {
 excess = 0;
@@ -593,7 +634,7 @@
 continue;
 }

-    delay = (excess - limits[n].delay) * 1000 / ctx->rate;
+    delay = (excess - limits[n].delay) * 1000 / lr->rate;

 if (delay > max_delay) {
 max_delay = delay;
@@ -676,7 +717,7 @@
 return;
 }

-    excess = lr->excess - ctx->rate * ms / 1000;
+    excess = lr->excess - lr->rate * ms / 1000;

 if (excess > 0) {
 return;
@@ -860,7 +901,7 @@
 }

 size = 0;
-    rate = 1;
+    rate = 0;
 scale = 1;
 name.len = 0;

@@ -902,6 +943,20 @@

 if (ngx_strncmp(value[i].data, "rate=", 5) == 0) {

+    ngx_memzero(&ccv, sizeof(ngx_http_compile_complex_value_t));
+
+    ccv.cf = cf;
+    ccv.value = &value[i];
+    ccv.complex_value = &ctx->cvr;
+
+    if (ngx_http_compile_complex_value(&ccv) != NGX_OK) {
+    return NGX_CONF_ERROR;
+    }
+
+    if (ctx->cvr.lengths != NULL) {
+    continue;
+    }
+
 len = value[i].len;
 p = value[i].data + len - 3;


On 07/01/2023 06:04, J Carter wrote:

Hello,

Please find below my revised patch with style and code bug fixes 
included. The only change in behavior of note is that invalid rate 
values are now simply ignored in the same fashion an empty key is, and 
the use of the parsing of complex value being disabled if a non 
variable is used as the rate.


A brief overview of the issue and how I've resolved it to match the 
behavior of the current best solution for multiple rates.


> "The fundamental problem with dynamic rates in limit_req, which is
> a leaky bucket limiter, is that the rate is a property which
> applies to multiple requests as accumulated in the bucket (and
> this is basically why it is configured in the limit_req_zone, and
> not in limit_req), while the dynamically calculated rate is a
> property of the last request.
>
> This can easily result in counter-intuitive behaviour.
> For example, consider 5 requests with 1 second interval, assuming
> burst 2, and rate being 1r/m or 100r/s depending on some request
> properties:
>
> - 1st request, rate 1r/m, request is accepted
> - 2st request, rate 1r/m, request is accepted
> - 3rd request, rate 1r/m, request is rejected, since there is no
>    room in the bucket
> - 4th request, rate 100r/s, request is accepted
> - 5th request, rate 1r/m, request is accepted (unexpectedly)
>
> Note that the 5th request is accepted, while it is mostly
> equivalent to the 3rd request, and one could expect it to be
> rejected.  But it happens to to be accepted because the 4th
> request "cleared" the bucket."

No additional logic was required to fix this. Simply appending the 
$rate variable to the $key is sufficient for a user to avoid unwanted 
excess value changes with extreme rate jumps (as seen above).


Instead of:   limit_req_zone $binary_remote_addr zone=one:10m rate=$rate;

this:   limit_req_zone $binary_remote_addr$rate zone=one:10m rate=$rate;


This ensures that a change in the rate variable's value starts new 
accounting of excess (as a new state is created to reflect this new 
key) - this follows the behavior previously given in this chain for a 
static set of rates. As an additional benefit, the syntax is more 
compact and readable within the nginx configuration at the cost of 
slight overhead in memory.



(the solution previously given for multiple defined/static rates)

> ...
> map $traffic_tier $limit_free {
> free  $binary_remote_addr;
> }
>
> map $traffic_tier $limit_basic {
> basic $binary_remote_addr;
> }
>
> map $traffic_tier $limit_premium {
> premium   $binary_remote_addr;
> }
>
> limit_req_zone $limit_free zone=free:10m rate=1r/m;
> limit_req_zone $limit_basic zone=basic:10m rate=2r/m;
> limit_req_zone $limit_premium zone=premium:10m rate=1r/s;
>
> limit_req zone=free (burst=x nodelay);
> limit_req zone=basic (burst=x nod

Re: [PATCH] Dynamic rate limiting for limit_req module

2023-01-06 Thread J Carter

Hello,

Please find below my revised patch with style and code bug fixes 
included. The only change in behavior of note is that invalid rate 
values are now simply ignored in the same fashion an empty key is, and 
the use of the parsing of complex value being disabled if a non variable 
is used as the rate.


A brief overview of the issue and how I've resolved it to match the 
behavior of the current best solution for multiple rates.


> "The fundamental problem with dynamic rates in limit_req, which is
> a leaky bucket limiter, is that the rate is a property which
> applies to multiple requests as accumulated in the bucket (and
> this is basically why it is configured in the limit_req_zone, and
> not in limit_req), while the dynamically calculated rate is a
> property of the last request.
>
> This can easily result in counter-intuitive behaviour.
> For example, consider 5 requests with 1 second interval, assuming
> burst 2, and rate being 1r/m or 100r/s depending on some request
> properties:
>
> - 1st request, rate 1r/m, request is accepted
> - 2st request, rate 1r/m, request is accepted
> - 3rd request, rate 1r/m, request is rejected, since there is no
>    room in the bucket
> - 4th request, rate 100r/s, request is accepted
> - 5th request, rate 1r/m, request is accepted (unexpectedly)
>
> Note that the 5th request is accepted, while it is mostly
> equivalent to the 3rd request, and one could expect it to be
> rejected.  But it happens to to be accepted because the 4th
> request "cleared" the bucket."

No additional logic was required to fix this. Simply appending the $rate 
variable to the $key is sufficient for a user to avoid unwanted excess 
value changes with extreme rate jumps (as seen above).


Instead of:   limit_req_zone $binary_remote_addr zone=one:10m rate=$rate;

this:   limit_req_zone $binary_remote_addr$rate zone=one:10m rate=$rate;


This ensures that a change in the rate variable's value starts new 
accounting of excess (as a new state is created to reflect this new key) 
- this follows the behavior previously given in this chain for a static 
set of rates. As an additional benefit, the syntax is more compact and 
readable within the nginx configuration at the cost of slight overhead 
in memory.



(the solution previously given for multiple defined/static rates)

> ...
> map $traffic_tier $limit_free {
> free  $binary_remote_addr;
> }
>
> map $traffic_tier $limit_basic {
> basic $binary_remote_addr;
> }
>
> map $traffic_tier $limit_premium {
> premium   $binary_remote_addr;
> }
>
> limit_req_zone $limit_free zone=free:10m rate=1r/m;
> limit_req_zone $limit_basic zone=basic:10m rate=2r/m;
> limit_req_zone $limit_premium zone=premium:10m rate=1r/s;
>
> limit_req zone=free (burst=x nodelay);
> limit_req zone=basic (burst=x nodelay);
> limit_req zone=premium (burst=x nodelay);

to

...

map $traffic_tier $rate {
    free 1r/m;
    basic   2r/m;
    premium    1r/s;
}

limit_req_zone $binary_remote_addr$rate zone=one:10m rate=$rate;
limit_req zone=one burst=x nodelay;

(With the appended $rate after $key being optional - if your rates don't
flip-flop to extremes, omitting it will save memory and allow more states.)


This is merely a side benefit - the main purpose of the patch is being
able to obtain a rate limit in the form of a string (of some description)
from a request, or from any variable that is populated in time for the
limit_req module to use directly as a rate limit value. The approach
detailed above works for that too.
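
For instance (purely illustrative - the header name is made up, and in
reality the rate string would come from something trusted rather than
the client):

    map $http_x_rate_limit $rate {
        default         1r/m;
        ~^\d+r/[sm]$    $http_x_rate_limit;
    }

    limit_req_zone $binary_remote_addr$rate zone=one:10m rate=$rate;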



debian@debian:~/projects/nginx-merc/nginx-tests$ prove limit_req*
limit_req2.t . ok
limit_req_delay.t  ok
limit_req_dry_run.t .. ok
limit_req.t .. ok
All tests successful.
Files=4, Tests=40,  5 wallclock secs ( 0.05 usr  0.01 sys +  0.48 cusr  
0.12 csys =  0.66 CPU)

Result: PASS

# HG changeset patch
# User jordanc.car...@outlook.com
# Date 1673064770 0
#  Sat Jan 07 04:12:50 2023 +0000
# Branch dynamic-rate-limiting
# Node ID f80741fb734e0a4f83f2f96436ff300c4f3125aa
# Parent  07b0bee87f32be91a33210bc06973e07c4c1dac9
Variables support for limit_req_zone's rate

diff -r 07b0bee87f32 -r f80741fb734e 
src/http/modules/ngx_http_limit_req_module.c
--- a/src/http/modules/ngx_http_limit_req_module.c  Wed Dec 21 
14:53:27 2022 +0300
+++ b/src/http/modules/ngx_http_limit_req_module.c  Sat Jan 07 
04:12:50 2023 +0000

@@ -26,6 +26,7 @@
 /* integer value, 1 corresponds to 0.001 r/s */
 ngx_uint_t   excess;
 ngx_uint_t   count;
+    ngx_uint_t   rate;
 u_char   data[1];
 } ngx_http_limit_req_node_t;

@@ -41,6 +42,7 @@
 ngx_http_limit_req_shctx_t  *sh;
 ngx_slab_pool_t *shpool;
 /* integer value, 1 corresponds to 0.001 r/s */
+    ngx_http_complex_value_t cvr;
 ngx_uint_t   rate;
 

Re: [PATCH] Dynamic rate limiting for limit_req module

2023-01-02 Thread J Carter

Hello,

Thanks again for your reply and thoughts.

On 02/01/2023 23:47, Maxim Dounin wrote:

Hello!

On Mon, Jan 02, 2023 at 06:21:03AM +, J Carter wrote:


"The fundamental problem with dynamic rates in limit_req, which is
a leaky bucket limiter, is that the rate is a property which
applies to multiple requests as accumulated in the bucket (and
this is basically why it is configured in the limit_req_zone, and
not in limit_req), while the dynamically calculated rate is a
property of the last request.

This can easily result in counter-intuitive behaviour.
For example, consider 5 requests with 1 second interval, assuming
burst 2, and rate being 1r/m or 100r/s depending on some request
properties:

- 1st request, rate 1r/m, request is accepted
- 2nd request, rate 1r/m, request is accepted
- 3rd request, rate 1r/m, request is rejected, since there is no
   room in the bucket
- 4th request, rate 100r/s, request is accepted
- 5th request, rate 1r/m, request is accepted (unexpectedly)

Note that the 5th request is accepted, while it is mostly
equivalent to the 3rd request, and one could expect it to be
rejected.  But it happens to be accepted because the 4th
request "cleared" the bucket."

This is true - however, I do wonder if the behavior (clearing the bucket) 
is truly counterintuitive in that scenario.

Well, certainly it is.  And it is even more so, if you'll assume
that 4th and 5th requests are processed essentially at the same
time: it will be a race condition with different outcomes
depending on the actual order of the requests during limit_req
zone updates.  For additional fun, this order is not the same as
order of the requests in logs.

You're right, it's a good point. Given a short duration between 
requests, the behavior during transitions would not be deterministic and 
could flip-flop with regard to the excess value. This behavior could 
be harsh in the extreme rate jump scenario presented.



Should it not be expected that this user/key/state that has been 
assigned 100r/s (even if just for one request) would have its 
outstanding excess requests recalculated (and in this case absolved) by 
that rate increase? You have, after all, assigned it more capacity.

I'm not sure how to elegantly avoid this if it is an issue, since there
is no 'standard' rate to reference (it could interpolate over
time/requests, but that might be even more confusing).

I don't think there is a good solution, and that's one of the
reasons why there is no such functionality in limit_req.  Instead,
it provides a fixed rate, and an ability to provide different
burst limits for different requests (which is checked on a
per-request basis), as well as multiple different limits to cover
cases when multiple rates are needed.

I have a couple of ideas to solve this issue, which I plan to 
investigate further:


Interpolation - Instead of transitioning 'rate' in a single request 
straight to a new value, do it smoothly over time/requests.


In practical terms this would be achieved by interpolating between the 
state's previous rate and the new rate, using the time between requests 
for that state (in ms) as the scale factor. A new configuration 
parameter would be added to control the smoothness (the slope angle) of 
the interpolation. Since interpolation is an additive process that 
considers time, it would smooth changes in 'excess' and also mask the 
order of concurrent operations/requests (so no race).
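
A minimal sketch of that interpolation, assuming a hypothetical 
smoothing parameter in milliseconds (neither this function nor the 
"smooth_ms" parameter exist in the patch as posted):

    /* Sketch only: rates are in the module's internal units
     * (1 == 0.001 r/s); ms is the time since the state's last update
     * and is assumed to be non-negative. */
    static ngx_uint_t
    ngx_http_limit_req_interpolate_rate(ngx_uint_t old_rate,
        ngx_uint_t new_rate, ngx_msec_int_t ms, ngx_msec_int_t smooth_ms)
    {
        if (ms >= smooth_ms) {
            /* the smoothing window has elapsed: adopt the new rate fully */
            return new_rate;
        }

        if (new_rate > old_rate) {
            return old_rate + (new_rate - old_rate) * ms / smooth_ms;
        }

        return old_rate - (old_rate - new_rate) * ms / smooth_ms;
    }

The larger smooth_ms is, the gentler the slope, and a concurrent pair of 
requests carrying different rates can then each only move the effective 
rate by a small amount, regardless of their ordering.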


Another option - Create a new state per unique 'key and rate' 
combination. This is effectively the same logical process (and memory 
requirement) as the manual approach from your example, just determined 
dynamically at runtime in a single zone.


This means that when a user (a given key) changes rate, a new state is 
created. The old state remains, to be cleaned up later via expire; 
conversely, if the user switches back to their original rate before 
expiry, the old state becomes active once again. This would mean 
concatenating the key's and rate's values for the hash used to look up 
the state (as it must be unique).


This would increase memory usage briefly during transition periods due 
to the additional states, and would not carry over the excess value from 
the other key+rate combination's state when moving between rates (again, 
much the same as the manual approach).


The first option is most suitable for 'fine-grained' (per-user) rate 
limiting, whereas the second is more suitable for a 'broad-based' key 
(one with lots of different rates).



"Note well that the same limits now can be configured separately,
in distinct shared memory zones.  This makes it possible to limit
them consistently, without any unexpected deviations from
the configured behaviour if requests with different rates
interfere.

Just in case, the very same behaviour can be implemented with
multiple limit_req_zones, with something like this:

 map $traffic_tier $limit_free {
 free  $binary_remote_addr;
 }

 map $

Re: [PATCH] Dynamic rate limiting for limit_req module

2023-01-01 Thread J Carter
variables, and 
complex values if there are variables (e.g., "imcf->angle" and 
"imcf->acv")."


Thanks, that makes perfect sense. I'll update accordingly.


Note that this is a style change, and it is wrong.

The same here.

Note that this is a style change, since the max_delay is set in 
all possible code paths, and there are no reasons to additionally 
initialize it.


See above.


Understood, I'll work to fix the style of my changes and I will keep it 
more consistent with the guidelines in future.


This seems to be separate change, irrelevant to what the patch 
does.  If at all, it should be submitted as a separate patch, see 
http://nginx.org/en/docs/contributing_changes.html.


Note though that this changes looks wrong to me, as it will break 
request accounting on large time intervals with large burst, such 
as with rate=1r/m burst=100.


Note well that it fails to use an empty line before "} else ", 
which is a style bug.


You are correct, it will break for that scenario. I will rework this and 
the related portions into more generic overflow detection and handling.


I did include it in this patch as there is a danger (which is already 
present) of rate * ms overflowing. At present, rate is only limited by 
atoi's limits (and it is multiplied by 1000 after that). This is more of 
a worry when arbitrary values (including values derived from user 
requests) are in use in place of static values from the nginx 
configuration.
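
For illustration, the kind of generic guard I have in mind (a sketch 
only - the exact cap and the constant name are assumptions, not 
necessarily what the final patch will use):

    #define NGX_HTTP_LIMIT_REQ_MAX_MS  60000

    /* clamp the parsed rate so that rate * ms cannot exceed 32 bits;
     * the excess line mirrors the module's existing calculation */
    if (rate > NGX_MAX_UINT32_VALUE / NGX_HTTP_LIMIT_REQ_MAX_MS) {
        rate = NGX_MAX_UINT32_VALUE / NGX_HTTP_LIMIT_REQ_MAX_MS;
    }

    excess = lr->excess - rate * ngx_abs(ms) / 1000 + 1000;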


I will make those changes into a separate patch as advised.

The "lr->rate", which is saved in the shared memory, seems to be 
never used except during processing of the same request.  Any 
reasons to keep it in the shared memory, reducing the number of 
states which can be stored there?


A persistent record of the rate that was last assigned to a given 
node/state is needed exclusively for expiring records (in the expire 
function). I don't believe this can be avoided, as the rate of the 
current request is unrelated to that of the 'to be removed' record.
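
Concretely, the expire path has to decay each stale node by the rate 
that node was last charged at, not by the current request's rate - 
roughly (a sketch adapted from the module's existing expire logic):

    /* inside ngx_http_limit_req_expire(): use the rate stored in the
     * node (lr->rate), since ctx->rate is now evaluated per request
     * and may differ from the rate this node was accounted under */
    ms = (ngx_msec_int_t) (now - lr->last);

    excess = lr->excess - lr->rate * ms / 1000;

    if (excess > 0) {
        return;  /* the node still carries excess; keep it for now */
    }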


Thanks again.

On 01/01/2023 17:35, Maxim Dounin wrote:

Hello!

On Fri, Dec 30, 2022 at 10:23:12PM +0000, J Carter wrote:


Please find below a patch to enable dynamic rate limiting for
limit_req module.

Thanks for the patch, and Happy New Year.

The fundamental problem with dynamic rates in limit_req, which is
a leaky bucket limiter, is that the rate is a property which
applies to multiple requests as accumulated in the bucket (and
this is basically why it is configured in the limit_req_zone, and
not in limit_req), while the dynamically calculated rate is a
property of the last request.  This can easily result in
counter-intuitive behaviour.

For example, consider 5 requests with 1 second interval, assuming
burst 2, and rate being 1r/m or 100r/s depending on some request
properties:

- 1st request, rate 1r/m, request is accepted
- 2nd request, rate 1r/m, request is accepted
- 3rd request, rate 1r/m, request is rejected, since there is no
   room in the bucket
- 4th request, rate 100r/s, request is accepted
- 5th request, rate 1r/m, request is accepted (unexpectedly)

Note that the 5th request is accepted, while it is mostly
equivalent to the 3rd request, and one could expect it to be
rejected.  But it happens to be accepted because the 4th
request "cleared" the bucket.

Could you please clarify how this problem is addressed in your
patch?

Note well that the same limits now can be configured separately,
in distinct shared memory zones.  This makes it possible to limit
them consistently, without any unexpected deviations from
the configured behaviour if requests with different rates
interfere.


/* EXAMPLE-*/

  geo $traffic_tier {
 defaultfree;
 127.0.1.0/24   basic;
 127.0.2.0/24   premium;
 }

 map $traffic_tier $rate {
 free   1r/m;
 basic  2r/m;
 premium1r/s;
 }

 limit_req_zone $binary_remote_addr zone=one:10m rate=$rate;

 server {
 limit_req zone=one;
 listen   80;
 server_name  localhost;
 return 200;
 }

curl --interface 127.0.X.X localhost

Just in case, the very same behaviour can be implemented with
multiple limit_req_zones, with something like this:

 map $traffic_tier $limit_free {
 free  $binary_remote_addr;
 }

 map $traffic_tier $limit_basic {
 basic $binary_remote_addr;
 }

 map $traffic_tier $limit_premium {
 premium   $binary_remote_addr;
 }

 limit_req_zone $limit_free zone=free:10m rate=1r/m;
 limit_req_zone $limit_basic zone=basic:10m rate=2r/m;
 limit_req_zone $limit_premium zone=premium:10m rate=1r/s;

 limit_re

[PATCH] Dynamic rate limiting for limit_req module

2022-12-30 Thread J Carter
Hello,

Please find below a patch to enable dynamic rate limiting for the limit_req module.


/* EXAMPLE-*/

 geo $traffic_tier {
defaultfree;
127.0.1.0/24   basic;
127.0.2.0/24   premium;
}

map $traffic_tier $rate {
free   1r/m;
basic  2r/m;
premium1r/s;
}

limit_req_zone $binary_remote_addr zone=one:10m rate=$rate;

server {
limit_req zone=one;
listen   80;
server_name  localhost;
return 200;
}

curl --interface 127.0.X.X localhost


/* NGINX-TESTS-*/

debian@debian:~/projects/nginx-merc/nginx-tests$ prove limit_req*
limit_req2.t . ok
limit_req_delay.t  ok
limit_req_dry_run.t .. ok
limit_req.t .. ok
All tests successful.
Files=4, Tests=40,  4 wallclock secs ( 0.04 usr  0.00 sys +  0.28 cusr  0.11 csys =  0.43 CPU)
Result: PASS


/* CHANGES OF 
BEHAVIOUR-*/

It is backwards compatible with the syntax of existing configurations: either 
rate=$variable or the existing rate=Xr/s syntax can be used.

- 'rate=' can be assigned an empty value, which results in an unlimited 
(maximum) rate limit.
- 'rate=' set to an invalid value also results in an unlimited (maximum) rate 
limit.
- The value of rate is now limited to prevent integer overflow in certain 
operations.
- The maximum time between consecutive requests that is considered is now 
limited to 60s (60000 ms) to prevent integer overflow/underflow.
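
As a worked example of the rate bound (assuming a platform where 
ngx_uint_t is 32 bits; the exact cap chosen may differ):

    rate * ms  <  2^32                           must hold for accounting
    rate       <  4294967295 / 60000  ~=  71582

i.e. roughly 71 r/s in the module's internal units (1 == 0.001 r/s). On 
64-bit platforms the equivalent bound is far beyond any practical rate.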

/* USE-CASES-*/

Allow rate limits for a given user to be determined by mapping trusted request 
values to a rate, such as:
- Source IP CIDR.
- Client Certificate identifiers.
- JWT claims.

This could also be performed dynamically at runtime with a key_val zone to 
alter rate limits on the fly without a reload, as sketched below.
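
As a sketch of that key_val-based approach (the keyval module is an 
NGINX Plus feature, and the zone and variable names here are made up):

    keyval_zone zone=rates:64k;
    keyval $remote_addr $kv_rate zone=rates;

    map $kv_rate $rate {
        default   1r/m;      # no entry stored for this client yet
        ~.        $kv_rate;  # any non-empty stored value, e.g. "10r/s"
    }

    limit_req_zone $binary_remote_addr$rate zone=one:10m rate=$rate;

Entries in the "rates" zone could then be added or changed through the 
API at runtime, taking effect without a reload.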

/* PATCHBOMB-*/

# HG changeset patch
# User jordanc.car...@outlook.com
# Date 1672437935 0
#  Fri Dec 30 22:05:35 2022 +0000
# Branch dynamic-rate-limiting
# Node ID b2bd50efa81e5aeeb9b8f84ee0af34463add07fa
# Parent  07b0bee87f32be91a33210bc06973e07c4c1dac9
Changed 'rate=' to complex value and added limits to the rate value to prevent integer overflow/underflow

diff -r 07b0bee87f32 -r b2bd50efa81e src/http/modules/ngx_http_limit_req_module.c
--- a/src/http/modules/ngx_http_limit_req_module.c  Wed Dec 21 14:53:27 2022 +0300
+++ b/src/http/modules/ngx_http_limit_req_module.c  Fri Dec 30 22:05:35 2022 +0000
@@ -26,6 +26,7 @@
     /* integer value, 1 corresponds to 0.001 r/s */
     ngx_uint_t                   excess;
     ngx_uint_t                   count;
+    ngx_uint_t                   rate;
     u_char                       data[1];
 } ngx_http_limit_req_node_t;

@@ -41,7 +42,7 @@
     ngx_http_limit_req_shctx_t  *sh;
     ngx_slab_pool_t             *shpool;
     /* integer value, 1 corresponds to 0.001 r/s */
-    ngx_uint_t                   rate;
+    ngx_http_complex_value_t     rate;
     ngx_http_complex_value_t     key;
     ngx_http_limit_req_node_t   *node;
 } ngx_http_limit_req_ctx_t;
@@ -66,9 +67,9 @@

 static void ngx_http_limit_req_delay(ngx_http_request_t *r);
 static ngx_int_t ngx_http_limit_req_lookup(ngx_http_limit_req_limit_t *limit,
-    ngx_uint_t hash, ngx_str_t *key, ngx_uint_t *ep, ngx_uint_t account);
+    ngx_uint_t hash, ngx_str_t *key, ngx_uint_t *ep, ngx_uint_t account, ngx_uint_t rate);
 static ngx_msec_t ngx_http_limit_req_account(ngx_http_limit_req_limit_t *limits,
-    ngx_uint_t n, ngx_uint_t *ep, ngx_http_limit_req_limit_t **limit);
+    ngx_uint_t n, ngx_uint_t *ep, ngx_http_limit_req_limit_t **limit, ngx_uint_t rate);
 static void ngx_http_limit_req_unlock(ngx_http_limit_req_limit_t *limits,
     ngx_uint_t n);
 static void ngx_http_limit_req_expire(ngx_http_limit_req_ctx_t *ctx,
@@ -195,10 +196,13 @@
 ngx_http_limit_req_handler(ngx_http_request_t *r)
 {
     uint32_t                     hash;
-    ngx_str_t                    key;
+    ngx_str_t                    key, rate_s;
     ngx_int_t                    rc;
     ngx_uint_t                   n, excess;
+    ngx_uint_t                   scale;
+    ngx_uint_t                   rate;
     ngx_msec_t                   delay;
+    u_char                      *p;
     ngx_http_limit_req_ctx_t    *ctx;
     ngx_http_limit_req_conf_t   *lrcf;
     ngx_http_limit_req_limit_t  *limit, *limits;
@@ -243,10 +247,34 @@

     hash = ngx_crc32_short(key.data, key.len);

+    if (ngx_http_complex_value(r, &ctx->rate, &rate_s) != NGX_OK) {
+        ngx_http_limit_req_unlock(limits, n);
+        return NGX_HTTP_INTERNAL_SERVER_ERROR;
+    }
+
+    scale = 1;
+    rate = NGX_ERROR;
+
+    if (rate_s.len > 8) {
+
+