Re: Allow response with AD bit in resolver

2024-06-15 Thread Maxim Dounin
Hello!

On Sat, Jun 15, 2024 at 12:02:28PM +0100, Kirill A. Korinsky wrote:

> Greetings,
> 
> Here a trivial patch which allows DNS responses with enabled AD bit
> from used resolver.
> 
> Index: src/core/ngx_resolver.c
> --- src/core/ngx_resolver.c.orig
> +++ src/core/ngx_resolver.c
> @@ -1774,7 +1774,7 @@ ngx_resolver_process_response(ngx_resolver_t *r, u_cha
> (response->nar_hi << 8) + response->nar_lo);
>  
>  /* response to a standard query */
> -if ((flags & 0xf870) != 0x8000 || (trunc && tcp)) {
> +if ((flags & 0xf850) != 0x8000 || (trunc && tcp)) {
>  ngx_log_error(r->log_level, r->log, 0,
>"invalid %s DNS response %ui fl:%04Xi",
>tcp ? "TCP" : "UDP", ident, flags);
> 

Looks good to me, pushed with an appropriate commit log, thanks.
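For readers following along, the effect of the mask change can be 
decoded like this (a sketch; flag bit positions per RFC 1035, AD/CD 
per RFC 4035 -- the constant names are illustrative, not from the 
nginx source):

```python
# Decode the 16-bit DNS header flags word to show what the mask change
# does.  Flag bit positions per RFC 1035 (AD/CD per RFC 4035).
QR     = 0x8000  # bit 15: this is a response
OPCODE = 0x7800  # bits 11-14: must be 0 for a standard query
AA     = 0x0400  # authoritative answer
TC     = 0x0200  # truncation
RD     = 0x0100  # recursion desired
RA     = 0x0080  # recursion available
Z      = 0x0040  # reserved, must be zero
AD     = 0x0020  # authentic data (DNSSEC-validated)
CD     = 0x0010  # checking disabled
RCODE  = 0x000f  # response code

OLD_MASK = 0xf870  # QR | OPCODE | Z | AD | CD -- AD must be 0
NEW_MASK = 0xf850  # QR | OPCODE | Z | CD      -- AD is now ignored

def valid(flags, mask):
    # the check in ngx_resolver_process_response(): the masked bits
    # must equal exactly "response to a standard query"
    return (flags & mask) == 0x8000

resp = QR | RD | RA | AD  # 0x81a0: a validated response with AD set
print(valid(resp, OLD_MASK))  # False: rejected before the patch
print(valid(resp, NEW_MASK))  # True: accepted after the patch
```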

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: Status code 0

2024-05-04 Thread Maxim Dounin
Hello!

On Sat, May 04, 2024 at 07:31:43PM +0200, Sébastien Rebecchi wrote:

> Hello
> 
> What does it mean when nginx returns an http status code 0?
> 
> We see that because we monitor nginx response status codes; nginx is used
> as a front for our apps. The apps themselves can not return 0, only 200 or
> 500. So it seems the issue here comes from nginx itself, which can not
> process the connection under a high peak of load, but why 0? Is that expected?

Status code 0 as seen in nginx http access logs means that nginx 
wasn't able to generate any reasonable status code, even some 
generic failure like 500 (Internal Server Error), yet the request 
was closed.

This usually happens due to some fatal issues, like unexpected 
conditions (which might indicate a bug somewhere), unexpected 
errors, or memory allocation errors if it wasn't possible to 
return 500.

In most cases there should be additional details in the error log 
explaining the reasons.  If there aren't any, or reasons aren't 
clear, it might be a good idea to dig further.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Nginx ignores proxy_no_cache

2024-04-07 Thread Maxim Dounin
Hello!

On Sun, Apr 07, 2024 at 01:36:21PM +0200, Kirill A. Korinsky wrote:

> Greetings,
> 
> Let's assume that I would like to control caching behavior on the LB from
> the backend and force it to cache only responses that have an X-No-Cache
> header with value NO.
> 
> Nginx should cache a response with any code, if it has such headers.
> 
> This works well until the backend is unavailable and nginx returns a
> hardcoded 502 that doesn't have a control header, but such a response is
> cached anyway.
> 
> Here is the config that allows to reproduce the issue:
> 
>   http {
>   default_type  application/octet-stream;
> 
>   proxy_cache_path/tmp/nginx_cache keys_zone=the_zone:1m;
>   proxy_cache the_zone;
>   proxy_cache_valid   any 15m;
>   proxy_cache_methods GET HEAD POST;
> 
>   add_header  X-Cache-Status $upstream_cache_status always;
> 
>   map $upstream_http_x_no_cache $no_cache {
>   default 1;
>   "NO"0;
>   }
> 
>   proxy_no_cache  $no_cache;
> 
>   upstream echo {
>   server 127.127.127.127:80;
>   }
> 
>   server {
>   listen   1234;
>   server_name  localhost;
> 
>   location / {
>   proxy_pass http://echo;
>   }
>   }
>   }
> 
> when I run:
> 
>   curl -D - http://127.0.0.1:1234/
> 
> it returns MISS on the first request, and HIT on the second one.
> 
> Here I expect both requests to return MISS.

Thanks for the report.

Indeed, proxy_no_cache is only checked for proper upstream 
responses, but not when caching errors, including internally 
generated 502/504 in ngx_http_upstream_finalize_request(), and 
intercepted errors in ngx_http_upstream_intercept_errors().

Quick look suggests there will be also issues with caching errors 
after proxy_cache_bypass (errors won't be cached even if they 
should), as well as issues with proxy_cache_max_range_offset after 
proxy_cache_bypass (it will be ignored).

This needs cleanup / fixes, added to my TODO list.
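Until these fixes land, one possible workaround (untested, and only 
suitable if caching error statuses isn't essential for you) is to 
avoid "proxy_cache_valid any" and enumerate the statuses that may be 
cached, so the internally generated 502 never matches:

```nginx
# instead of "proxy_cache_valid any 15m;": cache only successful
# responses and redirects, so internally generated 502/504 errors
# never match a proxy_cache_valid rule and are not stored
proxy_cache_valid 200 301 302 15m;
```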

-- 
Maxim Dounin
http://mdounin.ru/


Re: announcing freenginx.org

2024-02-15 Thread Maxim Dounin
Hello!

On Thu, Feb 15, 2024 at 04:31:49PM +0400, Roman Arutyunyan wrote:

> Hello,
> 
> On Wed, Feb 14, 2024 at 08:59:10PM +0300, Maxim Dounin wrote:
> > Hello!
> > 
> > As you probably know, F5 closed Moscow office in 2022, and I no
> > longer work for F5 since then.  Still, we’ve reached an agreement
> > that I will maintain my role in nginx development as a volunteer.
> > And for almost two years I was working on improving nginx and
> > making it better for everyone, for free.
> > 
> > Unfortunately, some new non-technical management at F5 recently
> > decided that they know better how to run open source projects.  In
> > particular, they decided to interfere with the security policy nginx
> > has used for years, ignoring both the policy and the developers’ position.
> > 
> > That’s quite understandable: they own the project, and can do
> > anything with it, including doing marketing-motivated actions,
> > ignoring the developers’ position and the community.  Still, this
> > contradicts our agreement.  And, more importantly, I am no longer able
> > to control which changes are made in nginx within F5, and no longer
> > see nginx as a free and open source project developed and
> > maintained for the public good.
> > 
> > As such, starting from today, I will no longer participate in nginx
> > development as run by F5.  Instead, I’m starting an alternative
> > project, which is going to be run by developers, and not corporate
> > entities:
> > 
> > http://freenginx.org/
> > 
> > The goal is to keep nginx development free from arbitrary corporate
> > actions.  Help and contributions are welcome.  Hope it will be
> > beneficial for everyone.
> 
> All the best to you, Maxim.  On behalf of everyone that has been involved
> in the nginx project, I would like to take this opportunity to recognize
> your incredible contribution, not only to nginx, but to open source
> software and the Internet itself.
> 
> We wish you the best of luck and would be pleased to work with you again
> in future.

Thank you for your words, appreciated.  You are welcome to 
contribute to freenginx.

-- 
Maxim Dounin
http://mdounin.ru/


Re: announcing freenginx.org

2024-02-15 Thread Maxim Dounin
Hello!

On Thu, Feb 15, 2024 at 02:33:51AM +0100, Manuel wrote:

> Good Evening Maxim,
> 
> thank you for the work.
> 
> I am speechless. My personal opinion:
> @F5 get an advisor for open source
> and maybe read something about enshittification m(
> 
> TT
> 
> Will follow freenginx then.
> Thx.

Thanks.

Interesting term, never heard it before.

-- 
Maxim Dounin
http://mdounin.ru/


Re: announcing freenginx.org

2024-02-14 Thread Maxim Dounin
Hello!

On Wed, Feb 14, 2024 at 07:32:48PM +, agill--- via nginx wrote:

> Is there a way for us to also financially support the project or 
> is it going under the umbrella of a funder?

Thanks for the suggestion.  For now, I have enough resources to 
support the project and myself.

-- 
Maxim Dounin
http://mdounin.ru/


Re: announcing freenginx.org

2024-02-14 Thread Maxim Dounin
Hello!

On Thu, Feb 15, 2024 at 03:24:59AM +0800, Jeffrey 'jf' Lim wrote:

> On Thu, Feb 15, 2024 at 1:59 AM Maxim Dounin  wrote:
> 
> > Hello!
> >
> > As you probably know, F5 closed Moscow office in 2022, and I no
> > longer work for F5 since then.  Still, we’ve reached an agreement
> > that I will maintain my role in nginx development as a volunteer.
> > And for almost two years I was working on improving nginx and
> > making it better for everyone, for free.
> 
> wow, I did not know that. Thank you for your work and contribution over
> these 2 years!
> 
> > Unfortunately, some new non-technical management at F5 recently
> > decided that they know better how to run open source projects.  In
> > particular, they decided to interfere with the security policy nginx
> > has used for years, ignoring both the policy and the developers’ position.
> >
> > That’s quite understandable: they own the project, and can do
> > anything with it, including doing marketing-motivated actions,
> > ignoring the developers’ position and the community.
> 
> would you be able to (within reason) give any examples of these?

The most recent "security advisory" was released despite the fact 
that the particular bug in the experimental HTTP/3 code is 
expected to be fixed as a normal bug as per the existing security 
policy, and all the developers, including me, agree on this.

And, while the particular action isn't exactly very bad, the 
approach in general is quite problematic.

> > Still, this
> > contradicts our agreement.  And, more importantly, I am no longer able
> > to control which changes are made in nginx within F5, and no longer
> > see nginx as a free and open source project developed and
> > maintained for the public good.
> >
> > As such, starting from today, I will no longer participate in nginx
> > development as run by F5.  Instead, I’m starting an alternative
> > project, which is going to be run by developers, and not corporate
> > entities:
> >
> > http://freenginx.org/
> >
> > The goal is to keep nginx development free from arbitrary corporate
> > actions.  Help and contributions are welcome.  Hope it will be
> > beneficial for everyone.
> >
> >
> thank you for continuing on with your efforts, and for this new effort!

Thanks, appreciated.

-- 
Maxim Dounin
http://mdounin.ru/


announcing freenginx.org

2024-02-14 Thread Maxim Dounin
Hello!

As you probably know, F5 closed Moscow office in 2022, and I no
longer work for F5 since then.  Still, we’ve reached an agreement
that I will maintain my role in nginx development as a volunteer.
And for almost two years I was working on improving nginx and
making it better for everyone, for free.

Unfortunately, some new non-technical management at F5 recently
decided that they know better how to run open source projects.  In
particular, they decided to interfere with the security policy nginx
has used for years, ignoring both the policy and the developers’ position.

That’s quite understandable: they own the project, and can do
anything with it, including doing marketing-motivated actions,
ignoring the developers’ position and the community.  Still, this
contradicts our agreement.  And, more importantly, I am no longer able
to control which changes are made in nginx within F5, and no longer
see nginx as a free and open source project developed and
maintained for the public good.

As such, starting from today, I will no longer participate in nginx
development as run by F5.  Instead, I’m starting an alternative
project, which is going to be run by developers, and not corporate
entities:

http://freenginx.org/

The goal is to keep nginx development free from arbitrary corporate
actions.  Help and contributions are welcome.  Hope it will be
beneficial for everyone.


-- 
Maxim Dounin
http://freenginx.org/


Re: International characters and serving files

2024-02-10 Thread Maxim Dounin
Hello!

On Sat, Feb 10, 2024 at 03:14:02PM +1000, David Connors wrote:

> Hi All,
> 
> I moved off IIS/Windows onto nginx on Ubuntu a while back. Since doing
> so I receive 404s for files with international characters in their names.
> I've added the charset utf-8 directive to the nginx config. Looking at the
> request:
> 
> https://www.davidconnors.com/wp-content/uploads/2022/08/Aliinale-Für-Alina.pdf
> 
> Confirming that the file exists on the filesystem:
> 
> -rwx--  1 www-data www-data 10560787 Aug 21  2022 Aliinale-Für-Alina.pdf
> 
> If I copy that file to a.pdf and request that, it serves fine.
> 
> Access log shows the character with the diacritic mark is escaped:
> 172.68.210.38 - - [10/Feb/2024:05:11:27 +] "GET
> /wp-content/uploads/2022/08/Aliinale-F%C3%BCr-Alina.pdf HTTP/1.1" 404 27524
> "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15
> (KHTML, like Gecko) Version/17.2.1 Safari/605.1.15"
> 
> What configuration directive am I missing?

File names on Unix systems are typically stored as bytes, and it 
is the user's responsibility to interpret them according to a 
particular character set.

Since nginx returns 404, this suggests that you don't have a 
file whose name contains the C3 BC UTF-8 bytes: instead, there is 
something different.  My best guess is that you are using Latin-1 
as the charset for your terminal, and there is an FC byte instead.  To 
see what's there in fact, consider looking at the raw bytes in the 
file name with something like "ls | hd".

Also, you can use nginx autoindex module - it will generate a page 
with properly escaped links, so it will be possible to access 
files regardless of the charset used in the file names.
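To illustrate the byte-level mismatch described above (a sketch 
using the Python standard library; nginx matches file names 
byte-for-byte):

```python
# Compare what the escaped request name decodes to against the two
# plausible on-disk spellings of the same visible name: C3 BC (UTF-8)
# versus a lone FC byte (Latin-1).
from urllib.parse import unquote_to_bytes

requested = unquote_to_bytes("Aliinale-F%C3%BCr-Alina.pdf")

utf8_name   = "Aliinale-Für-Alina.pdf".encode("utf-8")    # ... 46 c3 bc 72 ...
latin1_name = "Aliinale-Für-Alina.pdf".encode("latin-1")  # ... 46 fc 72 ...

print(requested == utf8_name)    # True: this is the name nginx looks for
print(requested == latin1_name)  # False: a Latin-1 file name would 404
```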

-- 
Maxim Dounin
http://mdounin.ru/


Re: Serving pdf configuration issue

2024-02-08 Thread Maxim Dounin
Hello!

On Wed, Feb 07, 2024 at 02:21:09PM -0500, Victor Oppenheimer wrote:

> I am running nginx 1.24.0 on a computer with
> Windows Server 2016 Server standard OS.
> 
> I am trying to use nginx to function as a reverse proxy
> server when it receives requests for some URLs and
> to directly serve web pages for others.  I was successful
> configuring nginx to do the reverse proxy serving.
> 
> However, when I tried to directly serve some .pdf pages I kept
> getting errors.  To isolate the problem I edited nginx.config
> to only function as a direct server for a .pdf file.
> 
> I have a file named camprental.pdf in the directory
>     C:\Users\victor\My
> Drive\Properties\rental_manuals_documents_and_email_templates\Clearwater
> 
> I want users to be able to browse to:
>     www.clearwaterescapes.com/camp/camprental.pdf to see the file.
> 
> The nginx.conf file I am using to try to accomplish this is copied below.
> 
> When I start nginx with this config file and browse to:
>     http://clearwaterescapes.com/camp/camprrental.pdf
> 
> I receive the following error message:
>     This page isn’t working right now: clearwaterescapes.com redirected you
> too many times.
> 
> I assume that I am probably making a simple error.  But I do not see it.
> 
> Hopefully somebody will point it out.

[...]

>  server {
>     listen 80;
>     server_name clearwaterescapes.com;
> 
>    # Make incoming URLs lowercase
>     rewrite ^(.*)$ /$1 permanent;

This is going to be an infinite redirection loop, as you redirect 
any request to the same URI.  Likely this is what causes the 
"redirected too many times" error you are seeing.

> 
>   # Rewrite rule for PDF files
>   rewrite ^clearwaterescapes.com/camp/camprental.pdf$
> http://www.clearwaterescapes.com/documentation/camprental.pdf break;

This is not going to work, as rewrite works with the URI as seen in 
the HTTP request, that is, the URI path, which doesn't contain the domain.

(And it does not seem to be needed, given the following 
locations.)

> 
>     # avoid errors when favicon.ico file is missing
>     location = /favicon.ico {
>     access_log off;
>     log_not_found off;
>     return 204;
>     }
> 
>     location ~* ^/documentation/ {
>     # Convert URL to lowercase and capture filename
>     rewrite ^/documentation/(.*)$ /documentation/$1 break;
> 
>     # Serve documentation files directly
>     root "C:/Users/victor/My
> Drive/Properties/rental_manuals_documents_and_email_templates/clearwater";
>     try_files /$1 =404;

Just

location /documentation/ {
alias "C:/.../clearwater/";
}

should do the trick.

>     }
> 
>     # Add location block for /camp/ requests
>   location ~* ^/camp/ {
>     root "C:/Users/victor/My
> Drive/Properties/rental_manuals_documents_and_email_templates/clearwater";
>     try_files $uri =404;
> }

There is no need to use regex location here, just a prefix one 
will be equivalent on Windows (since on Windows location matching 
is caseless).  Similarly, in most cases there is no need to use 
try_files, as simply serving static files is equivalent (unless 
you specifically want to return 404 for directories).

That is, just

 location /camp/ {
 root "C:/.../clearwater";
 }

would be (mostly) equivalent.

But, given that you want "/camp/camprental.pdf" to access the file 
in the "C:/.../Clearwater" directory, correct approach would be to 
use "alias" instead of "root", similarly to the above.

 location /camp/ {
 alias "C:/.../clearwater/";
 }

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Occasional "400 Bad Request" Responses from Nginx

2024-02-04 Thread Maxim Dounin
123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef


Since the first 680 bytes of this "request" are actually the request 
body of the previous request, the request line actually starts 
somewhere in "0123456789abcdef0123..." bytes, and the method is 
clearly incorrect, as indicated by nginx.

Looks like an ATS bug to me.

-- 
Maxim Dounin
http://mdounin.ru/


Re: nginx looking in wrong dir for module

2024-02-02 Thread Maxim Dounin
Hello!

On Fri, Feb 02, 2024 at 01:07:40PM -0500, Larry Martell wrote:

> I built nginx from source and it fails to start with:
> 
> 2024/02/02 13:00:59 [emerg] 1961284#1961284: dlopen()
> "/usr/modules/ndk_http_module.so" failed
> (/usr/modules/ndk_http_module.so: cannot open shared object file: No
> such file or directory) in
> /etc/nginx/modules-enabled/10-mod-http-ndk.conf:1
> 
> -V shows:
> 
> $ sudo /usr/sbin/nginx -V

[...]

> --prefix=/usr

[...]

> --modules-path=/usr/lib/nginx/modules

[...]

> ndk_http_module.so does exist in /usr/lib/nginx/modules/:
> 
> $ ls -l /usr/lib/nginx/modules/ndk_http_module.so
> -rw-r--r-- 1 root root 18688 Nov 10  2022
> /usr/lib/nginx/modules/ndk_http_module.so
> 
> Why is it looking in the wrong dir for this?

Paths in nginx configuration file are resolved from prefix (with 
the exception of various included configuration files or similar 
resources, which are resolved from configuration prefix).

Your configuration seems to contain

load_module modules/ndk_http_module.so;

which is resolved from /usr to /usr/modules/ndk_http_module.so.  
There is no such file, hence the error.

Note that the --modules-path configure option defines where to 
install modules.  It is, however, your responsibility to provide 
proper paths to modules in load_module directives.  The 
"modules/foo.so" construct will work with the default 
--modules-path, which is "modules" under prefix, but you'll have 
to use something different if you've modified --modules-path to a 
custom value.
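That is, with the configure options shown, the directive would need 
the full path (or a path relative to the /usr prefix), for example:

```nginx
# absolute path, independent of --prefix
load_module /usr/lib/nginx/modules/ndk_http_module.so;

# or, equivalently, resolved relative to the /usr prefix
load_module lib/nginx/modules/ndk_http_module.so;
```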

-- 
Maxim Dounin
http://mdounin.ru/


Re: ngx_http_find_virtual_server ngx_http_regex_exec DOS

2024-01-30 Thread Maxim Dounin
Hello!

On Tue, Jan 30, 2024 at 10:28:23AM +0200, Clima Gabriel wrote:

> Greetings fellow nginx-devs,
> It looks to me as if an attacker could force the server to use up a large
> amount of resources doing ngx_http_regex_exec if the server were to be
> configured with a relatively large number of regex server_names.
> I would appreciate any ideas on the topic, especially suggestions as to how
> some form of caching could be implemented for the responses, so that the
> server didn't have to execute the ngx_http_regex_exec on subsequent
> requests.

Not using "large number of regex server_names" might be the best 
solution available here.  Requests are not required to be to the 
same virtual server, and caching won't generally work.
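For context, exact and wildcard server names are looked up via hash 
tables, while regex names are tested sequentially on every request, 
so converting regexes to exact names where possible removes the PCRE 
cost entirely (names below are illustrative):

```nginx
# matched via a hash lookup, no regex execution per request
server {
    server_name example.com www.example.com;
}

# a regex form of the same names would instead be tried in order
# on every request:
# server_name ~^(www\.)?example\.com$;
```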

-- 
Maxim Dounin
http://mdounin.ru/


Re: Managing Static Files

2024-01-30 Thread Maxim Dounin
Hello!

On Tue, Jan 30, 2024 at 07:36:49AM +, bittn...@gmail.com wrote:

> Hi...I read a few articles about managing static files and I'm a 
> bit confused!
> I use Nginx as the main server to host my website
> I enabled gzip and brotli
> I have also enabled gzip_static and brotli_static
> And I have pre-compressed all static files with gzip and brotli
> I read in an article that after compressing all files, I should 
> delete all uncompressed files to save memory and only gzip and 
> Brotli files remain.
> (Of course, I need to create an empty file called index.html for 
> it to work properly)
> Everything works fine now but my problem is when the browser 
> doesn't support compression and requires uncompressed files.
> In another article it was written that if gunzip is enabled for 
> browsers that do not support the compressed format, it 
> decompresses the gzip then sends it to the client.
> But after doing some testing, I found (I think) that gunzip only 
> works if nginx is used as the proxy (between main server and 
> client) (due to the content encoding header requirement).
> Now, if I want to support gzip, brotli and non-compressed files, 
> do I have to have all three types of files? Is this method 
> correct? What method do you use? What method is suggested?Thanks

The gunzip module works perfectly fine without proxying, though 
you'll need to ensure that appropriate Content-Encoding is 
properly set on the response.  In particular, if you only have 
gzipped files, you can do:

   gzip_static always;
   gunzip on;

In this configuration gzip_static will respond with the compressed 
version of the file to all requests, and gunzip will uncompress it 
for clients which do not support gzip (see 
http://nginx.org/r/gzip_static for the documentation).

Not sure about brotli_static, but if the 3rd party module is 
implemented properly, it should be possible to do "brotli_static on;" 
in the same configuration to return brotli-compressed files 
to clients which support brotli.

It is not required to delete uncompressed files though.  While 
the gunzip module makes it possible, this might not be the best 
approach available: uncompressing files on the fly certainly 
consumes additional CPU resources, and having no uncompressed 
files on disk might be suboptimal for other tasks.  Removing 
uncompressed files usually makes sense only if the amount of 
static files is huge.

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/


Re: This is a question about the "$status" log value when "proxy_read_timeout" occurs.

2024-01-16 Thread Maxim Dounin
Hello!

On Tue, Jan 16, 2024 at 01:15:09PM +0900, 박규철 wrote:

> This is a question about the "$status" log value when "proxy_read_timeout"
> occurs.
> Nginx version in use: v1.25.3
> 
> Contents of 1Mbyte size were requested to [Origin Server].
> A response up to approximately 500Kbytes in size, including the header, was
> received without delay.
> However, after 500Kbytes, no response was received from Origin for 3
> seconds and the connection timed out.
> Since the message "upstream timed out...while reading upstream" was logged
> in the error log, I think the connection was lost due to the
> "proxy_read_timeout 3s" setting.
> 
> While checking the log, I noticed that the "$status" value in the access
> log was different from what I thought.
> In my opinion, if the connection was terminated by "proxy_read_timeout",
> the "$status" value would be 5xx, but the "$status" value in the saved
> access log was 200.
> 
> A normal response was not completed due to "proxy_read_timeout", so I would
> like to know why the "$status" value is stored as 200 instead of 5xx.
> Should I check a variable other than "$status" for responses to abnormal
> timeouts such as "proxy_read_timeout"?

The $status variable shows the status as sent to the client in the 
response headers.  When proxy_read_timeout happens, the response 
headers are already sent, so $status contains 200 as sent to the 
client.

For errors happened during sending the response body, consider 
looking into the error log.  Some generic information about 
successful request completion might be found in the 
$request_completion variable 
(http://nginx.org/r/$request_completion).  Note though that it 
might not be set for a variety of reasons.
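For example, logging $request_completion alongside $status makes 
truncated responses visible in the access log (a sketch; the format 
name and log path are arbitrary):

```nginx
# "completion" is "OK" when the request completed normally,
# and empty when it did not (e.g. after proxy_read_timeout)
log_format timing '$remote_addr "$request" $status '
                  'completion=$request_completion';

access_log /var/log/nginx/access.log timing;
```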

-- 
Maxim Dounin
http://mdounin.ru/


Re: Bug in handling POST then sending a file back

2024-01-08 Thread Maxim Dounin
Hello!

On Sun, Jan 07, 2024 at 09:41:33PM -0500, Saint Michael wrote:

> I am using openresty and nginx.
> I send a file to the server, which is a POST operation.  Then the
> server processes the file and needs to send back a different file. I
> try to send a file with ngx.exec("/static/file_name") and I get error
> 405 Not Allowed.
> But if I do a 302 redirect, it works.
> I imagine that Nginx feels bad about sending a file in a POST
> operation, but http does not have such a limitation.
> Is there a workaround for this?

As far as I can see from the Lua module docs, ngx.exec() you are 
using in your script does an internal redirect.  As the result, 
nginx ends up with a POST request to a static file, which is not 
something nginx can handle: it does not know what to do with data 
POSTed to a static file, hence the error.

If you've already processed POSTed data, and want to show some 
static message to a client, consider returning a redirect to the 
static file to the user, such as 303 (See Other), see 
https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/303 for a 
good description.

If you are sure you want to return the file as a response to the 
POST request itself (this is generally a bad practice, since it 
will break page refresh and browser history navigation), consider 
returning the file directly from your script instead of trying to 
do an internal redirect.
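In OpenResty terms, the 303 approach from the previous paragraph 
might look like this (an untested sketch; the location and file 
names are hypothetical):

```nginx
location /upload {
    content_by_lua_block {
        -- read and process the POSTed file here ...
        -- then point the client at the static result; 303 makes the
        -- browser follow up with a GET, so the static file is served
        return ngx.redirect("/static/file_name", 303)
    }
}
```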

-- 
Maxim Dounin
http://mdounin.ru/


Re: IMAP Proxy with TLS Upstream Configuration

2024-01-06 Thread Maxim Dounin
Hello!

On Sat, Jan 06, 2024 at 11:03:47AM -0800, Jeff Kletsky wrote:

> I believe I have properly configured nginx v1.24.0 (open source) for 
> IMAP proxy on FreeBSD 14.0. I am, however, unable to establish a TLS 
> connection to the upstream server.
> 
> I have confirmed that I can connect to the proxy with TLS and that the 
> auth server is called. The auth server returns the expected Auth-Server 
> and Auth-Port. The upstream server is on a remote host with Dovecot 
> running TLS on the standard port of 993. I can see the TCP handshake 
> between the proxy and Dovecot on both machines, but nginx does not proceed.
> 
> It eventually returns "* BAD internal server error" with the error log 
> indicating a timeout

[...]

> I have confirmed using openssl s_client that the connection can be made 
> from the host running nginx to the host at the expected IP address and port.
> 
> Looking at the source, I did not see an option in the auth-header 
> parsing related to using TLS upstream.
> 
> Is there a way to use TLS for the IMAP upstream natively (without 
> needing to configure a port with STARTTLS)?

Backend IMAP servers are expected to be plain text, not SSL/TLS.  
Neither IMAPS nor IMAP with STARTTLS are supported for upstream 
connections.

If you want to use SSL/TLS connections between nginx and backend 
servers, consider configuring stream{} proxying on the same nginx 
instance with "proxy_ssl on;" to handle SSL/TLS with the backend 
servers for you, see http://nginx.org/r/proxy_ssl for details.
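As a rough illustration of that setup (untested; the backend name 
and local port are made up), the auth server would return a local 
address served by a stream{} block which adds TLS towards Dovecot:

```nginx
stream {
    server {
        # plain-text listener for the mail proxy to connect to;
        # the auth server returns 127.0.0.1 / 1993 as
        # Auth-Server / Auth-Port instead of the remote host
        listen 127.0.0.1:1993;

        proxy_pass mail.example.com:993;  # hypothetical Dovecot backend
        proxy_ssl on;                     # speak TLS towards the backend
    }
}
```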

-- 
Maxim Dounin
http://mdounin.ru/


Re: Windows ARM64

2024-01-02 Thread Maxim Dounin
Hello!

On Tue, Jan 02, 2024 at 11:03:03AM +, Anthony Roberts wrote:

> A small introduction - I work on Linaro's Windows on Arm enablement team,
> and we work on porting/enabling various open-source projects for the
> platform.
> 
> We have recently done a small investigation, and it turns out nginx can be
> compiled OOB on Windows ARM64 platforms with VS2022 - an example run from
> our internal nightlies can be seen here:
> https://gitlab.com/Linaro/windowsonarm/packages/nginx/-/jobs/5742208111

Yep, there shouldn't be any problems with building, at least when 
building nginx itself and/or when building OpenSSL with "no-asm".  
In more sophisticated cases, some adjustment might be needed, see 
https://hg.nginx.org/nginx/rev/3c4d81ea1338 for an example.

If you'll find any issues and/or need any help, don't hesitate to 
write here or in the nginx-devel@ mailing list.

> With the advent of things like Microsoft's Azure Windows ARM64
> instances[0], and various client devices, it is a growing platform. Our
> partners (Microsoft and Qualcomm) would be interested in seeing a release!
> 
> Is an official Windows ARM64 build something you have considered? Would you
> consider it?

As of now, there are no plans to publish additional official nginx 
for Windows builds.

Note well that nginx for Windows is in beta and unlikely to be 
considered production ready in the foreseeable future 
(https://nginx.org/en/docs/windows.html).  Its main purpose is to 
facilitate web development directly on Windows devices.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Calculating requests per second, per IP address

2023-12-29 Thread Maxim Dounin
Hello!

On Fri, Dec 29, 2023 at 09:54:30AM -0300, Rejaine Monteiro wrote:

> Hi all,
> 
> I´m running Nginx community edition and need to implement rate limiting
> 
> There's plenty of guides out there on how to do this, but no guides on how
> to get real values/stats from the access logs
> 
> 
> What I need to get from the NGINX access logs is:
> 
> - Requests per minute, per IP address
> 
> - Average requests per minute, derived from all IP addresses
> 
> - Max requests per minute, per IP address
> 
> We have a few endpoints with different functionalities and we simply cannot
> have a common rule that works for everyone.
> 
> Any tips on a tool or script that could help generate this type of
> information (if possible in real time or collect this data for future
> analysis)?

Note that nginx's limit_req module implements limiting based on 
the "leaky bucket" algorithm, which permits traffic bursts when 
properly configured, and therefore there is no need to calculate 
exact rates of requests per minute and so on.

Rather, I would recommend to focus on the number of requests a 
typical legitimate user can generate to the protected resources 
within a short period of time, and set large enough burst= value 
to ensure legitimate users are not limited.  Then use some 
reasonable long-term rate - this does not need to be exact, but 
rather somewhat reasonable, for example, to cover typical number 
of requests per day from a legitimate user.

Note well that using "nodelay" (or "delay=N") is recommended with 
such approach, see http://nginx.org/r/limit_req for details.
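
Putting the advice above together, a minimal sketch might look like this (the rate, burst value, and backend address are illustrative, not recommendations):

```nginx
http {
    # long-term rate: a rough ceiling for a legitimate user, not an exact figure
    limit_req_zone $binary_remote_addr zone=perip:10m rate=30r/m;

    server {
        location /protected/ {
            # burst large enough that normal short spikes are never limited
            limit_req zone=perip burst=200 nodelay;
            proxy_pass http://127.0.0.1:8080;   # backend is illustrative
        }
    }
}
```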

-- 
Maxim Dounin
http://mdounin.ru/


Re: ngx_quic_create_stream segfault

2023-12-22 Thread Maxim Dounin
Hello!

On Fri, Dec 22, 2023 at 11:59:22AM +0200, Clima Gabriel wrote:

> Thanks.
> I ended up using valgrind and got much closer to the answer.
> Arguments I used:
> valgrind --leak-check=full --show-leak-kinds=all --trace-children=yes
> --track-origins=yes --verbose --log-file=valgrind-output.txt
> /root/nginx/objs/nginx -c /etc/nginx/nginx.conf
> That led me to find that the HTTP and QUIC requests somehow end up using the
> same pointer to http_connection_t, which seems obviously wrong.

This might be perfectly correct as long as the original connection 
was closed.

> #0  ngx_SSL_early_cb_fn (s=0x55ae3ad5cae0, al=0x7fff1cf4c8f4,
> arg=0x7f3417bf4b00) at src/event/ngx_event_openssl.c:1949

There is no such function in nginx, so it looks like you are using 
some 3rd party modifications.

You may want to start with compiling vanilla nginx as available 
from nginx.org without any 3rd party modules and/or patches and 
testing if you are able to reproduce the problem.

[...]

-- 
Maxim Dounin
http://mdounin.ru/


Re: Disable http_dav_module in Nginx Web server (version nginx/1.24.0)

2023-11-27 Thread Maxim Dounin
Hello!

On Tue, Nov 28, 2023 at 12:39:47AM +0530, Kaushal Shriyan wrote:

> I am running nginx version: nginx/1.24.0 on Red Hat Enterprise Linux
> release 8.8 (Ootpa). Is there a way to disable http_dav_module in Nginx Web
> server?

The DAV module is disabled by default, unless you've explicitly 
enabled it in nginx configuration with the dav_methods directive 
(http://nginx.org/r/dav_methods).

If you additionally want nginx without the DAV module compiled in, 
recompile nginx without the "--with-http_dav_module" configure 
option.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Limiting number of client TLS connections

2023-11-21 Thread Maxim Dounin
Hello!

On Mon, Nov 20, 2023 at 11:29:39PM +0800, Zero King wrote:

> In our case, layer-4 firewall is difficult to introduce in the request 
> path. Would you consider rate limiting in Nginx a valid feature request?

Firewall is expected to be much more effective solution compared 
to nginx (which has to work with already established connections 
at the application level).  It might be a better idea to actually 
introduce a firewall if you need such limits (or, rather, make it 
possible to configure the one most likely already present).

-- 
Maxim Dounin
http://mdounin.ru/


Re: Nginx as reverse proxy - proxy_ssl_x questions

2023-11-19 Thread Maxim Dounin
Hello!

On Sun, Nov 19, 2023 at 12:41:11PM +0300, Mark wrote:

> Hello Mr. Maxim, thank you very much for your reply.
> 
> Things are much clearer now, thanks!
> 
> One, last question;
> 
> I have implemented nginx as a reverse proxy with TLS termination in my
> FreeBSD host machine, and another nginx instance running in my jail, in;
> 10.10.10.2.
> 
> So, the host machine does the reverse proxying and SSL.
> 
> Before I open my website to public and production (a Wordpress website),
> could you please kindly have a look at my reverse proxy configuration here;
> 
> http://paste.nginx.org/b8
> 
> So that you might wish to add some suggestions, or perhaps I still have a
> misconfigured/unneeded directive there?

Here are some comments:

> proxy_cache_bypass $http_upgrade;

You don't need proxy_cache_bypass if you aren't using cache.

> proxy_buffering off;

I don't really recommend switching off buffering unless you have 
reasons to.  And if the reason is to avoid disk buffering, 
consider "proxy_max_temp_file_size 0;" instead, see 
http://nginx.org/r/proxy_max_temp_file_size for details.

> proxy_set_header Referer $scheme://$host;

This looks simply wrong.

> proxy_set_header X-Scheme https;
> proxy_set_header X-Forwarded-Proto https;
> proxy_set_header X-Scheme https;
> proxy_set_header X-Forwarded-Ssl on;

This looks like too many custom headers just to let the backend 
know that https is being used.

> proxy_set_header Upgrade $http_upgrade;
> proxy_set_header Connection "upgrade";

This shouldn't be used unless you are intentionally configuring 
WebSocket proxying.

> proxy_set_header Early-Data $ssl_early_data;

This is certainly not needed unless you are using TLSv1.3 Early 
Data (http://nginx.org/r/ssl_early_data), and you aren't.
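
For what it's worth, a trimmed-down location reflecting the comments above might look like this (the backend address and header set are illustrative, not a drop-in replacement for the pasted config):

```nginx
location / {
    proxy_pass http://10.10.10.2:80;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # one header is enough to tell the backend that https is in use
    proxy_set_header X-Forwarded-Proto $scheme;

    # keep buffering, but avoid buffering to disk
    proxy_max_temp_file_size 0;
}
```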

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Limiting number of client TLS connections

2023-11-18 Thread Maxim Dounin
Hello!

On Sat, Nov 18, 2023 at 02:44:20PM +0800, Zero King wrote:

> I want Nginx to limit the rate of new TLS connections and the total (or 
> per-worker) number of all client-facing connections, so that under a 
> sudden surge of requests, existing connections can get enough share of 
> CPU to be served properly, while excessive connections are rejected and 
> retried against other servers in the cluster.
> 
> I am running Nginx on a managed Kubernetes cluster, so tuning kernel 
> parameters or configuring layer 4 firewall is not an option.
> 
> To serve existing connections well, worker_connections can not be used, 
> because it also affects connections with proxied servers.
> 
> Is there a way to implement these measures in Nginx configuration?

No, nginx does not provide a way to limit rate of new connections 
and/or total number of established connections.  Instead, firewall is 
expected to be used for such tasks.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Nginx as reverse proxy - proxy_ssl_x questions

2023-11-18 Thread Maxim Dounin
Hello!

On Sat, Nov 18, 2023 at 01:54:21PM +0300, Mark wrote:

> Hello there.
> 
> Having a proxy directive like;
> 
> location / {
> proxy_pass http://10.10.10.4:4020;
> ...
> 
> I wonder when using proxy_pass http://... (not httpS),
> are these directives effective, under the proxy_pass?
> 
> proxy_ssl_name $host;
> proxy_ssl_server_name on;
> proxy_ssl_session_reuse off;
> 
> Or they would work ONLY if proxy_pass is pointed to an "https://";?

The "proxy_ssl_*" directives define configuration for SSL 
proxying.  That is, corresponding values are only used when 
proxy_pass is used with the "https" scheme.
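
To illustrate with the addresses from the question (a sketch only, not a suggestion to switch the backend to SSL):

```nginx
location / {
    # the proxy_ssl_* directives below are ignored with an "http"
    # scheme; they only take effect when proxying with "https":
    proxy_pass https://10.10.10.4:4020;

    proxy_ssl_name $host;
    proxy_ssl_server_name on;
    proxy_ssl_session_reuse off;
}
```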

-- 
Maxim Dounin
http://mdounin.ru/


Re: control proxy_buffering with variables

2023-11-17 Thread Maxim Dounin
Hello!

On Fri, Nov 17, 2023 at 03:57:23AM +, public1020 via nginx wrote:

> I'm trying to control buffering with variables, but nginx complains about it,
> 
> nginx: [emerg] invalid value "$val" in "proxy_request_buffering" directive, 
> it must be "on" or "off" in /etc/nginx/sites-enabled/test.conf:9
> 
> Is there any way to resolve this? Attached the configuration in question.

Much like most of the nginx configuration directives, 
"proxy_request_buffering" does not support variables.  Note that 
if variables are supported by a particular directive, this is 
explicitly documented in the directive description at nginx.org.

If you want to use different buffering options for different 
requests, consider using distinct locations instead.  Something 
like

location / {
proxy_pass http://127.0.0.1:;
}

location ~* enable {
proxy_pass http://127.0.0.1:;
proxy_request_buffering off;
proxy_buffering off;
}

would be close to the configuration you've tried to use, and 
mostly arbitrary conditions, including the exact equivalent to 
your configuration, can be implemented using internal 
redirections, such as with "rewrite".
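
For example, a hypothetical internal-redirection variant that switches buffering off based on a client-supplied header (the header name, prefix, and backend port are all illustrative):

```nginx
location / {
    # hypothetical condition: an X-No-Buffering request header
    if ($http_x_no_buffering) {
        rewrite ^ /unbuffered$uri last;
    }
    proxy_pass http://127.0.0.1:8080;
}

location /unbuffered/ {
    internal;
    # strip the prefix again before proxying
    rewrite ^/unbuffered(.*)$ $1 break;
    proxy_pass http://127.0.0.1:8080;
    proxy_request_buffering off;
    proxy_buffering off;
}
```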

Note well that the proxy_buffering can also be controlled from the 
backend via the X-Accel-Buffering response header.

-- 
Maxim Dounin
http://mdounin.ru/


Re: location ~* \.(...) access_log off; prevents access to files instead of logs

2023-11-14 Thread Maxim Dounin
Hello!

On Tue, Nov 14, 2023 at 02:51:05PM +0100, Roberto D. Maggi wrote:

> Hi you all,
> I'm having a problem with these two stanzas, in writing down a virtual
> host and can't figure out what's wrong with them.
> They look correct but the first doesn't simply work and the second blocks
> 
> --> here I'm trying to add this header only to cgi|shtml|phtml|php 
> extensions
> 
> location ~* \.(?:cgi|shtml|phtml|php)$ {
>    add_header Cache-Control "public";
>    client_max_body_size 0;
>    chunked_transfer_encoding on;
>    }
> 
> --> here I don't want to log accesses to to 
> woff|woff2|ico|pdf|flv|jpg|jpeg|png|gif|js|css|gz|swf|txt files
> location ~* 
> \.(?:woff|woff2|ico|pdf|flv|jpg|jpeg|png|gif|js|css|gz|swf|txt)$ {
>    access_log off;
>    }
> 
> 
> Does anybody can guess what's wrong with them?
> Thanks in advance.

When processing a particular request, nginx selects a 
location and handles a request according to the configuration in 
this location, see http://nginx.org/r/location.

As such, the first location, which tries to alter processing of 
php files, does not seem to be correct: in particular, it lacks 
any fastcgi_pass / proxy_pass directives, and hence such files 
will be simply returned to the client as static files.  While it 
might be what you indeed tried to setup, the "doesn't simply work" 
suggests it isn't.  You may want to re-consider your configuration 
to ensure that requests to php files are properly proxied to 
appropriate backend servers.
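
Assuming the php files are meant to be handled by a FastCGI backend such as php-fpm (the socket path is illustrative), the location would normally look along these lines:

```nginx
location ~* \.(?:cgi|shtml|phtml|php)$ {
    add_header Cache-Control "public";

    # pass requests to the backend instead of serving the files as-is
    fastcgi_pass unix:/run/php-fpm.sock;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```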

The second location, which disables logging to some static files, 
looks correct, but it might have the same problem: as long as 
requests are handled in this location, some essential handling 
which was previously present might be missing, and this breaks 
things.  For example, a common error is to write something like 
this:

location / {
root /path/to/site;
}

location ~ \.css$ {
# no root here
}

Note that the root path is configured in "location /", but not in 
"location ~ \.css$", hence all css files will use some default 
root path (or the one inherited from previous configuration 
levels), which is likely incorrect.  An obvious fix would be to 
configure root at the server level instead, so it will be used for 
both locations.
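
That is, a fixed variant of the example above would be:

```nginx
server {
    root /path/to/site;   # inherited by both locations below

    location / {
    }

    location ~ \.css$ {
        access_log off;
    }
}
```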

Just in case, looking into error log usually makes such issues 
trivial to identify - nginx will complain if it cannot find a file 
requested, and will show full path it tried to use.

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/


nginx-1.25.3

2023-10-24 Thread Maxim Dounin
Changes with nginx 1.25.3                                        24 Oct 2023

*) Change: improved detection of misbehaving clients when using HTTP/2.

*) Feature: startup speedup when using a large number of locations.
   Thanks to Yusuke Nojima.

*) Bugfix: a segmentation fault might occur in a worker process when
   using HTTP/2 without SSL; the bug had appeared in 1.25.1.

*) Bugfix: the "Status" backend response header line with an empty
   reason phrase was handled incorrectly.

*) Bugfix: memory leak during reconfiguration when using the PCRE2
   library.
   Thanks to ZhenZhong Wu.

*) Bugfixes and improvements in HTTP/3.


-- 
Maxim Dounin
http://nginx.org/


Re: trying to disable gzip

2023-10-18 Thread Maxim Dounin
Hello!

On Wed, Oct 18, 2023 at 04:13:39PM +, alienmega via nginx wrote:

> Hello,
> I am trying to disable gzip to mitigate the BREACH attack (I use 
> a service to check for vulnerabilities and it came up with 
> that). I added gzip off to nginx.conf file and then check the 
> configuration with nginx -t, and then reloaded with systemctl 
> reload nginx.
> 
> When I visit the site, I still have
> Accept-Encoding: gzip, deflate, br

The "Accept-Encoding" is a _request_ header, sent by your browser.  
You have to look at the response headers instead, notably 
Content-Encoding.

> I checked that I don't have gzip on anywhere else in /etc/nginx/*
> grep -Ri "gzip off" /etc/nginx

As long as you don't have "gzip on" (or "gzip_static", but it is 
certainly not affected by BREACH) in your nginx configuration, 
nginx won't use gzip.  Note though that if you are using some 
backend server to return dynamic responses, you might need to 
disable gzip there as well.

Note well that completely disabling gzip might not be the best 
solution.  The BREACH attack only affects response body 
compression if the resource being returned 1) contains some secret 
information and 2) it reflects some user input.  That is, it 
certainly does not affect static files, and can be easily avoided 
by masking secrets in dynamic pages, see 
https://www.breachattack.com/ for details.

-- 
Maxim Dounin
http://mdounin.ru/


Re: OT: Rapid Reset attacks on HTTP/2

2023-10-10 Thread Maxim Dounin
Hello!

On Tue, Oct 10, 2023 at 05:30:52PM -0400, Rick Gutierrez wrote:

> Will the correction be applied in the open source versions 1.24 and 1.25, or
> in the new release?

To re-iterate:

We do not consider nginx to be affected by this issue.  In the 
default configuration, nginx is sufficiently protected by the 
limit of allowed requests per connection (see 
http://nginx.org/r/keepalive_requests for details), so an attacker 
will be required to reconnect very often, making the attack 
obvious and easy to stop at the network level.  And it is not 
possible to circumvent the max concurrent streams limit in nginx, 
as nginx only allows additional streams when previous streams are 
completely closed.

Further, additional protection can be implemented in nginx by 
using the "limit_req" directive, which limits the rate of requests 
and rejects excessive requests.
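
As a sketch, such additional protection might look like this (the rate, burst, and backend values are illustrative):

```nginx
http {
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        # 1000 is already the default in current versions,
        # shown here only for clarity
        keepalive_requests 1000;

        location / {
            # excessive requests beyond the burst are rejected
            limit_req zone=perip burst=100 nodelay;
            proxy_pass http://127.0.0.1:8080;   # backend is illustrative
        }
    }
}
```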

Overall, with the handling as implemented in nginx, the impact 
of streams being reset does not seem to be significantly different 
from the impact of other workloads with a large number of requests 
being sent by the client, such as handling of multiple HTTP/2 
requests or HTTP/1.x pipelined requests.

Nevertheless, we've decided to implement some additional 
mitigations which will help nginx to detect such attacks and drop 
connections with misbehaving clients faster.  The patch to do so 
was committed (http://hg.nginx.org/nginx/rev/cdda286c0f1b) and 
will be available in the next nginx release.

-- 
Maxim Dounin
http://mdounin.ru/


Re: OT: Rapid Reset attacks on HTTP/2

2023-10-10 Thread Maxim Dounin
Hello!

On Tue, Oct 10, 2023 at 02:50:37PM -0400, Jeffrey Walton wrote:

> Hi Everyone,
> 
> This just made my radar:
> <https://thehackernews.com/2023/10/http2-rapid-reset-zero-day.html>.
> 
> From the article:
> 
> F5, in an independent advisory of its own, said the attack impacts the
> NGINX HTTP/2 module and has urged its customers to update their NGINX
> configuration to limit the number of concurrent streams to a default of
> 128 and persist HTTP connections for up to 1000 requests.

The "the attack impacts the NGINX HTTP/2 module" claim is 
incorrect, see here:

https://mailman.nginx.org/pipermail/nginx-devel/2023-October/S36Q5HBXR7CAIMPLLPRSSSYR4PCMWILK.html

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Compatibility of X25519Kyber768 ClientHello

2023-10-09 Thread Maxim Dounin
Hello!

On Mon, Oct 09, 2023 at 11:55:15PM +0800, Gentry Deng via nginx wrote:

> I recently encountered a compatibility issue with X25519Kyber768 
> <https://bugs.chromium.org/p/chromium/issues/detail?id=1484074>: I was 
> unable to access the site via X25519Kyber768-enabled Google Chrome on a 
> server with only TLS 1.2 enabled, but not TLS 1.3.
> 
> The Chromium team replied:
> 
> 
> > Regarding TLS 1.2 vs TLS 1.3, a TLS ClientHello is generally good for 
> > all the parameters we support. So though we include TLS 1.3 with Kyber 
> > in there, we also include parameters for TLS 1.3 without Kyber and TLS 
> > 1.2. So if the server and network well behaving correctly, it's 
> > perfectly fine if the server only supports TLS 1.2.
> >
> > I'm able to reproduce the problem. It looks like a bug in 
> > www.paypal.cn's server. They didn't implement TLS 1.2 correctly. 
> > Specifically, they do not correctly handle when the ClientHello comes 
> > in in two reads. Before Kyber, this wasn't very common because 
> > ClientHellos usually fit in a packet. But Kyber makes ClientHellos 
> > larger, so it is possible to get only a partial ClientHello in the 
> > first read, and require a second read to try again. This is something 
> > that any TCP-based application needs to handle; you may not have 
> > gotten the whole message on a given read and need to keep on reading.
> >
> > www.paypal.cn will need to fix their server to correctly handle this case.
> 
> 
> So the Chromium team isn't considering making a change, so I'm wondering 
> how compatible nginx is with this? Or what version is needed to make it 
> error free?

There are no known issues in nginx with ClientHello split between 
packets (with all supported SSL libraries).  And I would be very 
much surprised if there are any, as this is indeed a very basic 
thing TCP-based applications used to handle.

Such issues are more likely to be seen in various packet-based 
filtering solutions, and I would assume this is most likely the 
case for the site in question.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Requesting a Nginx Variable - "client_time_taken" (similar to request_time & upstream_response_time)

2023-10-02 Thread Maxim Dounin
Hello!

On Mon, Oct 02, 2023 at 03:25:15PM +0530, Devarajan D via nginx wrote:

> > In general, $request_time minus $upstream_response_time is the 
> > slowness introduced by the client. 
> 
> 1. It's true most of the time. But clients are not willing to 
> accept unless they see a log from server side. (Say the client 
> server itself is running in another hosing services like amazon 
> EC2 instance)

Well, $request_time and $upstream_response_time are logs from 
server side.  Introducing yet another variable which will 
calculate the difference just to convince your clients is not 
something I would reasonably expect to happen.

> > Further, $request_time can be saved at various request 
> > processing stages, such as after reading request headers via 
> > the "set"  directive, or via a map when sending the response 
> > headers. This provides mostly arbitrary time measurements if 
> > you need it. 
> 
> 2. How do we get control in nginx configuration when the last 
> byte of request body is received from the client

In simple proxying configurations, nginx starts to read the 
request body when control reaches the proxy module (so you can 
save start time with a simple "set" in the relevant location), and 
when the request body is completely read, nginx will create the 
request to the upstream server (so you can save this time by 
accessing a map in proxy_set_header).
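
A minimal sketch of this approach (the header names and backend address are made up): the value stored via "set" is evaluated during the rewrite phase, right after the request headers are read, while proxy_set_header values are expanded when the upstream request is created, i.e. once the body has been read completely.

```nginx
location /upload {
    # evaluated during the rewrite phase: headers read, body not yet
    set $headers_done $request_time;

    # expanded when the request to the upstream is created,
    # i.e. after the request body has been fully read
    proxy_set_header X-Headers-Done $headers_done;
    proxy_set_header X-Body-Done    $request_time;

    proxy_pass http://127.0.0.1:8080;   # backend is illustrative
}
```

The backend (or offline log processing) can subtract the two values to obtain the body upload time.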

> > For detailed investigation on what happens with the particular 
> > client, debugging log is the most efficient instrument, 
> > notably the "debug_connection" directive which makes it 
> > possible to activate debug logging only for a particular client 
> 
> This debug log would definitely help to check the last byte of 
> the request body !
> 
> 3. But is it recommended to use nginx built with --with-debug 
> in production environments

The "--with-debug" is designed to be used in production 
environments.  It incurs some extra costs, and therefore not the 
default, and on loaded servers it might be a good idea to use 
nginx compiled without "--with-debug" unless you are debugging 
something.  But unless debugging is actually activated in the 
configuration, the difference is negligible.

> 4. We receive such slow requests infrequently. Enabling debug 
> log is producing a huge amount of logs per request (2MB of log 
> file per 10 MB request body upload) and it becomes hard to 
> identify the slow request in that. That's why it is mentioned as 
> no straightforward way to measure the time taken by client to 
> send the request body completely. 

As previously suggested, using $request_time minus 
$upstream_response_time (or even just $request_time) makes it 
trivial to identify requests to look into.
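
For example, a log format along these lines (the name and path are illustrative) makes slow clients easy to spot with simple log grepping:

```nginx
# in the http{} block
log_format timing '$remote_addr "$request" '
                  'rt=$request_time urt=$upstream_response_time';

access_log /var/log/nginx/timing.log timing;
```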

> > > Is there a timeout for the whole request? 
> 
> 5. How to prevent attacks like slow-loris DDos from exhausting 
> the client connections when using the open-source version. 
> Timeouts such as client_body_timeout are not much helpful for 
> such attacks.

Stopping DDoS attacks is generally a hard problem, and timeouts 
are not an effective solution either.  Not to mention that in many 
practical cases total timeout on the request body reading cannot 
be less than several hours, making such timeouts irrelevant.

For trivial in-nginx protection from Slowloris-like attacks 
involving request body, consider using limit_conn 
(http://nginx.org/r/limit_conn).
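
A minimal sketch (the zone size and connection limit are illustrative):

```nginx
http {
    limit_conn_zone $binary_remote_addr zone=peraddr:10m;

    server {
        # cap concurrent connections per client address
        limit_conn peraddr 10;
    }
}
```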

[...]

-- 
Maxim Dounin
http://mdounin.ru/


Re: Requesting a Nginx Variable - "client_time_taken" (similar to request_time & upstream_response_time)

2023-10-01 Thread Maxim Dounin
Hello!

On Sun, Oct 01, 2023 at 08:20:23PM +0530, Devarajan D via nginx wrote:

> Currently, there is no straightforward way to measure the time 
> taken by client to upload the request body. 
> 
> 1. A variable similar to request_time, upstream_response_time 
> can be helpful to easily log this time taken by client.
>     So it will be easy to prove to the client where the slowness 
> is.

In general, $request_time minus $upstream_response_time is the 
slowness introduced by the client.  (In some cases, 
$upstream_response_time might also depend on the client behaviour, 
such as with "proxy_request_buffering off;" or with 
"proxy_buffering off;" and/or due to proxy_max_temp_file_size 
reached.)

Further, $request_time can be saved at various request processing 
stages, such as after reading request headers via the "set" 
directive, or via a map when sending the response headers.  This 
provides mostly arbitrary time measurements if you need it.

For detailed investigation on what happens with the particular 
client, debugging log is the most efficient instrument, notably 
the "debug_connection" directive which makes it possible to 
activate debug logging only for a particular client 
(http://nginx.org/r/debug_connection).
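
For example (the client address is illustrative; nginx must be built with --with-debug):

```nginx
error_log /var/log/nginx/error.log info;

events {
    # debug-level logging for this one client only
    debug_connection 192.0.2.10;
}
```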

> 2. Also, is there a timeout for the whole request? 
> 
>     (say request should be timed out if it is more than 15 
> minutes)

No.

-- 
Maxim Dounin
http://mdounin.ru/


Re: SSL Reuse not happening in s3 presigned urls

2023-10-01 Thread Maxim Dounin
0 18:07:19 [debug] 36#36: *9 SSL_get_error: 2
> 2023/09/30 18:07:19 [debug] 36#36: *9 SSL handshake handler: 1
> 2023/09/30 18:07:19 [debug] 36#36: *9 SSL_do_handshake: -1
> 2023/09/30 18:07:19 [debug] 36#36: *9 SSL_get_error: 2
> 2023/09/30 18:07:19 [debug] 36#36: *9 SSL handshake handler: 0
> 2023/09/30 18:07:19 [debug] 36#36: *9 save session: 55ED330FBAC0

Note: here nginx saves the SSL session which was established 
during the handshake.  This SSL session is expected to be used 
during following handshakes in the same worker process.

> 2023/09/30 18:07:19 [debug] 36#36: *9 SSL_do_handshake: 1
> 2023/09/30 18:07:19 [debug] 36#36: *9 SSL: TLSv1.2, cipher:
> "ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128)
> Mac=AEAD"

Note: here nginx logs handshake details.  This handshake does not 
reuse an SSL session, since there were none.  If there was an SSL 
session and it was correctly reused during the SSL handshake, the 
next log line would be:

2023/09/30 18:07:19 [debug] 36#36: *9 SSL reused session

Check the following SSL handshakes in the same worker process to 
see if sessions are actually reused or not.

Most likely, these sessions are properly reused, and everything 
already works as it should.

> 2023/09/30 18:07:19 [debug] 36#36: *9 *http upstream ssl handshake*:
> "/blob/zte3odk1ymnl@CIBC-2mb
> /singleurl0?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIASQOYQRM4CTRY6I54%2F202309>
> 2023/09/30 18:07:19 [debug] 36#36: *9 http upstream send request
> 2023/09/30 18:07:19 [debug] 36#36: *9 http upstream send request body
> 
> If I run 4K clients using a simulator, I will see 100% CPU in the nginx
> container. I believe if we cache SSL sessions then SSL handshake for every
> request will be avoided hence we may not have high CPU at nginx container.
> 
> Can you please help how to achieve SSL Cache? how to make sure the CPU is
> not high? Is there any reason why the CPU is high other than SSL Handshake.

As outlined above, most likely SSL session reuse to upstream 
servers is already working properly in your setup.

Note though that SSL is generally costly, and you are using it for 
both client connections and upstream connections.  Depending on 
the certificates being used, ciphers being used and so on costs 
might vary, and there might be a room for improvement.

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Debugging Nginx Memory Spikes on Production Servers

2023-09-20 Thread Maxim Dounin
Hello!

On Wed, Sep 20, 2023 at 11:55:39AM -0500, Lance Dockins wrote:

> Are there any best practices or processes for debugging sudden memory
> spikes in Nginx on production servers?  We have a few very high-traffic
> servers that are encountering events where the Nginx process memory
> suddenly spikes from around 300mb to 12gb of memory before being shut down
> by an out-of-memory termination script.  We don't have Nginx compiled with
> debug mode and even if we did, I'm not sure that we could enable that
> without overly taxing the server due to the constant high traffic load that
> the server is under.  Since it's a server with public websites on it, I
> don't know that we could filter the debug log to a single IP either.
> 
> Access, error, and info logs all seem to be pretty normal.  Internal
> monitoring of the Nginx process doesn't suggest that there are major
> connection spikes either.  Theoretically, it is possible that there is just
> a very large sudden burst of traffic coming in that is hitting our rate
> limits very hard and bumping the memory that Nginx is using until the OOM
> termination process closes Nginx (which would prevent Nginx from logging
> the traffic).  We just don't have a good way to see where the memory in
> Nginx is being allocated when these sorts of spikes occur and are looking
> for any good insight into how to go about debugging that sort of thing on a
> production server.
> 
> Any insights into how to go about troubleshooting it?

In no particular order:

- Make sure you are monitoring connection and request numbers as 
  reported by the stub_status module as well as memory usage.

- Check 3rd party modules you are using, if there are any - try 
  disabling them.

- If you are using subrequests, such as with SSI, make sure these 
  won't generate enormous number of subrequests.

- Check your configuration for buffer sizes and connection limits, 
  and make sure that your server can handle maximum memory 
  allocation without invoking the OOM Killer, that is: 
  worker_processes * worker_connections * (total amount of various 
  buffers as allocated per connection).  If not, consider reducing 
  various parts of the equation.
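
For the first and last points, a sketch (the path, address, and numbers are illustrative):

```nginx
# stub_status endpoint for monitoring connection/request counters
location = /basic_status {
    stub_status;
    allow 127.0.0.1;
    deny  all;
}

# worst-case memory estimate, e.g.:
#   worker_processes 4, worker_connections 10240,
#   ~68k of per-connection buffers
#   => 4 * 10240 * 68k ~= 2.7 GB
```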

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Unnecessary Log Entry?

2023-09-20 Thread Maxim Dounin
Hello!

On Wed, Sep 20, 2023 at 11:47:48AM -0500, Lance Dockins wrote:

> Is there a reason that Nginx continuously logs the sort of message below in
> the info log when client_body_in_file_only is set to either "clean" or
> "on"?  Regardless of which of those two settings you're using for that
> directive, the request body is always going to be buffered to a temporary
> file (that is the whole point of that setting).
> 
> "[notice] 3951130#3951130: *769735 a client request body is buffered to a
> temporary file"
> 
> Seems like even the info log should be suppressing that notice if you're
> using settings that force the request body to a temp file.

That's exactly the reason why it is logged at the "notice" level, 
and not "warn" as it used to be under normal conditions.  If you 
don't want to see these messages, consider configuring error_log 
logging level to a higher one, see http://nginx.org/r/error_log 
for details.
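
For example (the path is illustrative):

```nginx
# "warn" is above "notice", so the message is no longer logged
error_log /var/log/nginx/error.log warn;
```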

-- 
Maxim Dounin
http://mdounin.ru/


Re: module which handles POST request help

2023-09-18 Thread Maxim Dounin
Hello!

On Mon, Sep 18, 2023 at 09:27:36AM -0600, Ben Mesander via nginx wrote:

> I'm working on an nginx module which will handle POST requests. I've never
> written an nginx module before, but I have worked on apache modules. My
> goal is to have it do unbuffered reads and process the data being posted as
> it arrives. Initially, I'm doing just a simple blocking read of the POST
> body. I've simplified things down almost to the "hello world" level.
> 
> When I do a POST, my module gets called, but I get a 404 not found, I am
> not sure why. I do see the file I post being uploaded and stored in a file
> in the nginx temporary file area. I do not see an entry in the access log
> showing the POST, again I am not sure why.  How can I fix these things?
> 
> Source to my module:
> https://github.com/benmesander/ngx-dashll-module/tree/main

In no particular order:

Your module registers an access phase handler, which means it is 
to either accept or reject requests for later processing with 
other handlers.  Unless you do something to handle the request, 
the 404 response code is the most expected one, and likely the 
reason why you are getting 404 response.

Your module fails to properly account for request reference 
counting as needed during request body reading: you fail to call 
ngx_http_finalize_request() after calling 
ngx_http_read_client_request_body() during phase processing before 
the content phase.  Due to reference count leak the request will 
hang during termination, waiting for the 
ngx_http_finalize_request() call (which will never happen).  
That's why you are not seeing the request in the access log - the 
request is still running.

Also, your module fails to properly restore phase processing after 
you've stopped it to read the request body: in addition to calling 
ngx_http_core_run_phases() you have to restore 
r->write_event_handler, since it can be overwritten during request 
body reading.

In general, if you want to read the request body during the 
access phase, consider looking into the mirror module request body  
handling:

http://hg.nginx.org/nginx/file/tip/src/http/modules/ngx_http_mirror_module.c#l104

If you actually want to handle such requests yourself, 
consider instead using a content handler.  In a content handler 
request finalization happens automatically, and there is no need 
to recover phase processing, so reading the request body is more 
or less trivial.  An example code can be seen in the development 
guide:

http://nginx.org/en/docs/dev/development_guide.html#http_request_body

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: status-line trailing SP is missing ?

2023-08-31 Thread Maxim Dounin
Hello!

On Thu, Aug 31, 2023 at 03:45:18PM +0400, Sergey Kandaurov wrote:

> > On 29 Aug 2023, at 08:14, Maxim Dounin  wrote:
> > 
> > Hello!
> > 
> > On Sat, Aug 26, 2023 at 04:21:07PM +0200, Jérémy Lal wrote:
> > 
> >> Hi,
> >> 
> >> https://bugs.debian.org/1050571
> >> reports that the Status line doesn't always end with space, which seems
> >> contradictory to RFC9112 which states:
> >> "A server MUST send the space that separates the status-code from the
> >> reason-phrase even when the reason-phrase is absent (i.e., the status-line
> >> would end with the space)."
> >> 
> >> Is it a documented nginx 1.24.0 behavior ?
> > 
> > Interesting.
> > 
> > As you can see from the report referenced, nginx returns in the 
> > HTTP status what is sent by the FastCGI application in the 
> > "Status" response header.
> > 
> > [..]
> > 
> > Summing the above, I tend to think that it is generally a bad idea 
> > to use Status header without a reason-phrase, as it is going to 
> > result in missing SP sooner or later.  At least if you do care 
> > about missing SP in the status line (AFAIK, it causes no practical 
> > issues, though I haven't really tested).
> 
> Agree.
> 
> > 
> > As for the nginx behaviour, I don't think we want to try to 
> > implement custom parsing for the Status header to preserve 
> > trailing SP if it's present.  We can, however, consider using 
> > only the status code from such Status headers, so nginx will 
> > provide reason phrase by itself.  
> > 
> > Something like this should do the trick:
> > 
> > # HG changeset patch
> > # User Maxim Dounin 
> > # Date 1693282407 -10800
> > #  Tue Aug 29 07:13:27 2023 +0300
> > # Node ID 10aec7047ed8c8e429e8e9b9d676a83751899bc6
> > # Parent  44536076405cf79ebdd82a6a0ab27bb3aed86b04
> > Upstream: fixed handling of Status headers without reason-phrase.
> > 
> > Status header with an empty reason-phrase, such as "Status: 404 ", is
> > valid per CGI specification, but loses the trailing space during parsing.
> > Currently, this results in "HTTP/1.1 404" HTTP status line in the response,
> > which violates HTTP specification due to missing trailing space.
> > 
> > With this change, only the status code is used from such short Status
> > header lines, so nginx will generate status line itself, with the space
> > and appropriate reason phrase if available.
> > 
> > Reported at:
> > https://mailman.nginx.org/pipermail/nginx/2023-August/EX7G4JUUHJWJE5UOAZMO5UD6OJILCYGX.html
> > 
> > diff --git a/src/http/modules/ngx_http_fastcgi_module.c b/src/http/modules/ngx_http_fastcgi_module.c
> > --- a/src/http/modules/ngx_http_fastcgi_module.c
> > +++ b/src/http/modules/ngx_http_fastcgi_module.c
> > @@ -2048,7 +2048,10 @@ ngx_http_fastcgi_process_header(ngx_http
> > }
> > 
> > u->headers_in.status_n = status;
> > -u->headers_in.status_line = *status_line;
> > +
> > +if (status_line->len > 3) {
> > +u->headers_in.status_line = *status_line;
> > +}
> > 
> > } else if (u->headers_in.location) {
> > u->headers_in.status_n = 302;
> > diff --git a/src/http/modules/ngx_http_scgi_module.c b/src/http/modules/ngx_http_scgi_module.c
> > --- a/src/http/modules/ngx_http_scgi_module.c
> > +++ b/src/http/modules/ngx_http_scgi_module.c
> > @@ -1153,7 +1153,10 @@ ngx_http_scgi_process_header(ngx_http_re
> > }
> > 
> > u->headers_in.status_n = status;
> > -u->headers_in.status_line = *status_line;
> > +
> > +if (status_line->len > 3) {
> > +u->headers_in.status_line = *status_line;
> > +}
> > 
> > } else if (u->headers_in.location) {
> > u->headers_in.status_n = 302;
> > diff --git a/src/http/modules/ngx_http_uwsgi_module.c b/src/http/modules/ngx_http_uwsgi_module.c
> > --- a/src/http/modules/ngx_http_uwsgi_module.c
> > +++ b/src/http/modules/ngx_http_uwsgi_module.c
> > @@ -1381,7 +1381,10 @@ ngx_http_uwsgi_process_header(ngx_http_r
> > }
> > 
> > u->headers_in.status_n = status;
> > -u->headers_in.status_line = *status_line;
> > +
> > +if (status_line->len > 3) {
> > +u->headers_in.status_line = *status_line;
> > +}
> > 
> > } else if (u->headers_in.location) {
> > u->headers_in.status_n = 302;
> > 
> > 
> 
> After discussion in the adjacent thread,
> I think the change is fine.

Pushed to http://mdounin.ru/hg/nginx, thanks for the review.

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: status-line trailing SP is missing ?

2023-08-31 Thread Maxim Dounin
Hello!

On Thu, Aug 31, 2023 at 03:45:06PM +0400, Sergey Kandaurov wrote:

> > On 31 Aug 2023, at 14:28, Maxim Dounin  wrote:
> > 
> > Hello!
> > 
> > On Wed, Aug 30, 2023 at 04:20:15PM +0400, Sergey Kandaurov wrote:
> > 
> >>> On 29 Aug 2023, at 08:33, Maxim Dounin  wrote:
> >>> 
> >>> On Mon, Aug 28, 2023 at 08:59:28PM +0400, Sergey Kandaurov wrote:
> >>> 
> >>>>> On 26 Aug 2023, at 18:21, Jérémy Lal  wrote:
> >>>>> 
> >>>>> Hi,
> >>>>> 
> >>>>> https://bugs.debian.org/1050571
> >>>>> reports that the Status line doesn't always end with space, which seems 
> >>>>> contradictory to RFC9112 which states:
> >>>>> "A server MUST send the space that separates the status-code from the 
> >>>>> reason-phrase even when the reason-phrase is absent (i.e., the 
> >>>>> status-line would end with the space)."
> >>>>> 
> >>>>> Is it a documented nginx 1.24.0 behavior ?
> >>>>> 
> >>>> 
> >>>> Note that the response line with empty reason phrase
> >>>> is properly generated since nginx 1.5.6.
> >> The exception that remains is proxying FastCGI responses
> >>>> as there is no distinguished response line in CGI syntax.
> >>>> 
> >>>> The reason is that Status is a CGI header field, and hence
> >>>> it is parsed by a generic routine that cuts trailing spaces.
> >>>> But it can have a trailing space per RFC 3875, section 6.3.3.
> >>>> So it needs a special treatment to preserve SP before empty reason
> >>>> phrase.  The below patch should help; although it doesn't look
> >>>> efficient and can be polished, I think this is quite enough for
> >>>> valid use cases.
> >>> 
> >>> I very much doubt that RFC 3875 properly defines whitespace 
> >>> handling, see my response to the original report.  In this 
> >>> particular case, it seems to define a header which cannot be 
> >>> correctly parsed if reason-phrase is empty.
> >>> 
> >> 
> >> Yes, it is quite dubious how this can be parsed correctly.
> >> Although it is valid to have a trailing space in Status,
> >> this contradicts to header field value syntax per RFC 3875:
> >>  field-content   = *( token | separator | quoted-string )
> > 
> > Note that field-value syntax does not apply to the "Status" header, 
> > its syntax is defined separately.
> 
> Well, per RFC 3875 BNF, Status is CGI-field, which if generalized
> to other-field, consists of field-content.

Syntax is as follows
(https://datatracker.ietf.org/doc/html/rfc3875#section-6.3):

  header-field= CGI-field | other-field
  CGI-field   = Content-Type | Location | Status
  other-field = protocol-field | extension-field
  protocol-field  = generic-field
  extension-field = generic-field
      generic-field   = field-name ":" [ field-value ] NL

CGI-field and other-field are mutually exclusive variants of 
header-field, and there is no generalization.

Generalization was present in the original RFC draft 
(https://datatracker.ietf.org/doc/html/draft-robinson-www-interface-00#section-9.2),
 
but was removed during work on RFC 3875 (in 
https://datatracker.ietf.org/doc/html/draft-coar-cgi-v11-03).  Not 
sure about the reasons, but might be because Status indeed does not 
conform to the generic-field syntax, and this was an attempt to 
fix it.

Either way, the fact is: RFC 3875 does not correctly specify 
whitespace handling, and shouldn't be relied upon.

[...]

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: status-line trailing SP is missing ?

2023-08-31 Thread Maxim Dounin
Hello!

On Wed, Aug 30, 2023 at 04:20:15PM +0400, Sergey Kandaurov wrote:

> > On 29 Aug 2023, at 08:33, Maxim Dounin  wrote:
> > 
> > On Mon, Aug 28, 2023 at 08:59:28PM +0400, Sergey Kandaurov wrote:
> > 
> >>> On 26 Aug 2023, at 18:21, Jérémy Lal  wrote:
> >>> 
> >>> Hi,
> >>> 
> >>> https://bugs.debian.org/1050571
> >>> reports that the Status line doesn't always end with space, which seems 
> >>> contradictory to RFC9112 which states:
> >>> "A server MUST send the space that separates the status-code from the 
> >>> reason-phrase even when the reason-phrase is absent (i.e., the 
> >>> status-line would end with the space)."
> >>> 
> >>> Is it a documented nginx 1.24.0 behavior ?
> >>> 
> >> 
> >> Note that the response line with empty reason phrase
> >> is properly generated since nginx 1.5.6.
> >> The exception that remains is proxying FastCGI responses
> >> as there is no distinguished response line in CGI syntax.
> >> 
> >> The reason is that Status is a CGI header field, and hence
> >> it is parsed by a generic routine that cuts trailing spaces.
> >> But it can have a trailing space per RFC 3875, section 6.3.3.
> >> So it needs a special treatment to preserve SP before empty reason
> >> phrase.  The below patch should help; although it doesn't look
> >> efficient and can be polished, I think this is quite enough for
> >> valid use cases.
> > 
> > I very much doubt that RFC 3875 properly defines whitespace 
> > handling, see my response to the original report.  In this 
> > particular case, it seems to define a header which cannot be 
> > correctly parsed if reason-phrase is empty.
> > 
> 
> Yes, it is quite dubious how this can be parsed correctly.
> Although it is valid to have a trailing space in Status,
> this contradicts to header field value syntax per RFC 3875:
>   field-content   = *( token | separator | quoted-string )

Note that field-value syntax does not apply to the "Status" header, 
its syntax is defined separately.

On the other hand, the "Status" header syntax does not allow any 
spaces after the colon, which rules out headers like "Status: 200 OK",
and makes the "Status" header syntax highly questionable.

As already suggested in my response to the original report, I tend 
to think that the best available option is to ignore RFC 3875 idea 
about headers syntax, and use HTTP instead.

> 
> >> 
> >> # HG changeset patch
> >> # User Sergey Kandaurov 
> >> # Date 1693238094 -14400
> >> #  Mon Aug 28 19:54:54 2023 +0400
> >> # Node ID f621c60dfa24ab5fadb3c8b805957ca3f281
> >> # Parent  2880f60a80c3e2706151dc7b48dc1267e39c47a9
> >> FastCGI: preserved SP for empty Status header reason phrase.
> >> 
> >> diff --git a/src/http/modules/ngx_http_fastcgi_module.c b/src/http/modules/ngx_http_fastcgi_module.c
> >> --- a/src/http/modules/ngx_http_fastcgi_module.c
> >> +++ b/src/http/modules/ngx_http_fastcgi_module.c
> >> @@ -2048,7 +2048,25 @@ ngx_http_fastcgi_process_header(ngx_http
> >> }
> >> 
> >> u->headers_in.status_n = status;
> >> -u->headers_in.status_line = *status_line;
> >> +
> >> +if (status_line->len == 3) {
> >> +/* preserve SP for empty reason phrase */
> >> +
> >> +u->headers_in.status_line.data = ngx_pnalloc(r->pool, 5);
> >> +if (u->headers_in.status_line.data == NULL) {
> >> +return NGX_ERROR;
> >> +}
> >> +
> >> +ngx_memcpy(u->headers_in.status_line.data,
> >> +   status_line->data, 3);
> >> +u->headers_in.status_line.data[3] = ' ';
> >> +u->headers_in.status_line.data[4] = '\0';
> >> +u->headers_in.status_line.len = 4;
> >> +
> >> +} else {
> >> +u->headers_in.status_line = *status_line;
> >> +}
> >> 
> >> } else if (u->headers_in.location) {
> >>

Re: status-line trailing SP is missing ?

2023-08-28 Thread Maxim Dounin
> --- a/src/http/modules/ngx_http_fastcgi_module.c
> +++ b/src/http/modules/ngx_http_fastcgi_module.c
> @@ -2048,7 +2048,6 @@ ngx_http_fastcgi_process_header(ngx_http
>  }
>  
>  u->headers_in.status_n = status;
> -    u->headers_in.status_line = *status_line;
>  
>  } else if (u->headers_in.location) {
>  u->headers_in.status_n = 302;

I don't think it's a good idea, since this always drops the 
reason phrase provided by the upstream server.  It can contain 
some meaningful information which will be lost as a result, most 
notably for non-standard error codes.

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: status-line trailing SP is missing ?

2023-08-28 Thread Maxim Dounin
Hello!

On Sat, Aug 26, 2023 at 04:21:07PM +0200, Jérémy Lal wrote:

> Hi,
> 
> https://bugs.debian.org/1050571
> reports that the Status line doesn't always end with space, which seems
> contradictory to RFC9112 which states:
> "A server MUST send the space that separates the status-code from the
> reason-phrase even when the reason-phrase is absent (i.e., the status-line
> would end with the space)."
> 
> Is it a documented nginx 1.24.0 behavior ?

Interesting.

As you can see from the report referenced, nginx returns in the 
HTTP status what is sent by the FastCGI application in the 
"Status" response header.

The tricky part is an attempt to use the "Status" header to send a 
status with a trailing space, "404 ".  HTTP header values cannot 
contain trailing spaces, see here:

https://www.rfc-editor.org/rfc/rfc9112.html#name-field-syntax
https://www.rfc-editor.org/rfc/rfc9112.html#section-5.1-3

: A field line value might be preceded and/or followed by optional 
: whitespace (OWS); a single SP preceding the field line value is 
: preferred for consistent readability by humans. The field line 
: value does not include that leading or trailing whitespace: OWS 
: occurring before the first non-whitespace octet of the field line 
: value, or after the last non-whitespace octet of the field line 
: value, is excluded by parsers when extracting the field line value 
: from a field line.

As such, both "Status: 404" and "Status: 404 " are equivalent and 
both contain value "404", without any trailing spaces.  And this 
is what nginx uses in the response, and this is what is seen in 
the original report.

Following the original CGI specification, which uses HTTP headers 
syntax in the response[1], this basically means that an 
application is not allowed to send Status with an empty reason 
phrase.

[1] 
https://web.archive.org/web/20100212084036/http://hoohoo.ncsa.illinois.edu/cgi/out.html

: The output of scripts begins with a small header. This header 
: consists of text lines, in the same format as an HTTP header, 
: terminated by a blank line (a line with only a linefeed or CR/LF).

Similarly, original CGI specification RFC draft, 
draft-robinson-www-interface-00 explicitly defines a generic 
syntax for CGI response headers, which does not use the implied LWS 
rule (as in HTTP), but explicitly allows whitespace between 
tokens in the headers:

https://datatracker.ietf.org/doc/html/draft-robinson-www-interface-00#section-9.2

   The CGI headers have the generic syntax:

  generic-header = field-name ":" [ field-value ] NL
  field-name = 1*
  field-value= *( field-content | LWSP )
  field-content  = *( token | tspecial | quoted-string)

While it does not explicitly state, like HTTP specification, that 
leading and trailing LWSPs are not part of the field value, this 
is the only approach which makes the resulting specification 
usable.

Similarly, RFC 3875 uses the same generic syntax for headers, and 
also tries to clarify whitespace usage:

https://datatracker.ietf.org/doc/html/rfc3875#section-6.3

   ... Whitespace is permitted between the ":"
   and the field-value (but not between the field-name and the ":"), and
   also between tokens in the field-value.

Though in contrast to draft-robinson-www-interface-00, it only 
defines generic syntax for non-CGI headers.  CGI headers, that is, 
"Content-Type", "Location", and "Status", use their own syntax 
defined separately.  In particular, the Status header is defined 
as follows:

https://datatracker.ietf.org/doc/html/rfc3875#section-6.3.3

  Status = "Status:" status-code SP reason-phrase NL
  status-code= "200" | "302" | "400" | "501" | extension-code
  extension-code = 3digit
  reason-phrase  = *TEXT

Note that whitespaces are _not_ allowed by the syntax.  This, 
however, rules out responses like "Status: 200 OK" due to the 
space between "Status:" and "200 OK".  As such, it looks 
reasonable to assume this is a specification bug, and at least 
some whitespaces are expected to be allowed.

Overall, it does not look like RFC 3875 provides a consistent 
approach to whitespace handling, and using the same approach as 
for HTTP seems to be the best available option.


Summing the above, I tend to think that it is generally a bad idea 
to use Status header without a reason-phrase, as it is going to 
result in missing SP sooner or later.  At least if you do care 
about missing SP in the status line (AFAIK, it causes no practical 
issues, though I haven't really tested).

As for the nginx behaviour, I don't think we want to try to 
implement custom parsing for the Status header to preserve 
trailing SP if it's present.  We can, however, consider using 
only the status code from such Status headers, so nginx will 
provide reason phrase by itself.

nginx-1.25.2

2023-08-15 Thread Maxim Dounin
Changes with nginx 1.25.2                                        15 Aug 2023

*) Feature: path MTU discovery when using HTTP/3.

*) Feature: TLS_AES_128_CCM_SHA256 cipher suite support when using
   HTTP/3.

*) Change: now nginx uses appname "nginx" when loading OpenSSL
   configuration.

*) Change: now nginx does not try to load OpenSSL configuration if the
   --with-openssl option was used to build OpenSSL and the OPENSSL_CONF
   environment variable is not set.

*) Bugfix: in the $body_bytes_sent variable when using HTTP/3.

*) Bugfix: in HTTP/3.


-- 
Maxim Dounin
http://nginx.org/
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: Limit upload time

2023-07-30 Thread Maxim Dounin
Hello!

On Fri, Jul 28, 2023 at 01:57:56PM +, Crosnier, Laurent via nginx wrote:

> Hello team,
> 
> I am trying to block uploads when they reach 10 seconds.
> I tried many directives, just like this:
> 
>  +
> 
> location / {
> proxy_set_headerX-Real-IP  $remote_addr;
> proxy_set_headerX-Forwarded-For $remote_addr;
> proxy_set_headerX-Forwarded-Proto HTTPS;
> proxy_set_headerHost $host;
> proxy_pass  https://10.0.1.48:443;
> keepalive_timeout10;
> proxy_read_timeout   10s;
> proxy_connect_timeout10s;
> proxy_send_timeout   10s;
> send_timeout 10s;
> keepalive_time   10s;
> client_body_timeout  10s;
> }
> 
> But none of these options seem to work, as in my logs, I'm still seeing my 
> uploads exceeding 10 seconds:
> 
> ++
> * - - [28/Jul/2023:15:22:03 +0200] "POST 
> /api/knowledge-base/media/e9702102-f1ec-458c-8b49-6b3394bee29a HTTP/1.1" 201 
> 775 
> "https://**/knowledge-base/upload/?sortBy=name&sortDirection=asc&unit=19647654-acc7-493c-9b0f-d17f4542f339";
>  "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) 
> Chrome/115.0.0.0 Safari/537.36" T="32.099" P="6967" XF="-"
> ++
> 
> I could not find any other directive to limit times for my upload request.
> Is there another right way to do it ?

For client request body, the only available timeout option is 
client_body_timeout, which defines a timeout between two read 
operations, but not a total upload time (see 
http://nginx.org/r/client_body_timeout for details).  It can be 
accompanied by limit_conn (http://nginx.org/r/limit_conn) to limit 
the total number of connections from each client.

If you want something more sophisticated, an option to consider 
might be "proxy_request_buffering off;" and appropriate limits on 
the backend side (see https://nginx.org/r/proxy_request_buffering 
for details).  This might not be a good solution though, since in 
most cases keeping connections on nginx side is way cheaper than 
passing them to backends.
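Putting this together, a configuration sketch (directive values are illustrative, not recommendations):

```nginx
# per-client connection limit shared across workers
limit_conn_zone $binary_remote_addr zone=peruser:10m;

server {
    location / {
        # at most 5 concurrent connections per client address
        limit_conn peruser 5;

        # abort the upload if the client sends nothing for 10 seconds
        # (timeout between two reads, not a total upload time limit)
        client_body_timeout 10s;

        proxy_pass https://10.0.1.48:443;
    }
}
```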

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: load_module with "./configure ... --modules-path= ..."

2023-07-14 Thread Maxim Dounin
Hello!

On Thu, Jul 13, 2023 at 03:08:19PM -0300, Fabiano Furtado Pessoa Coelho wrote:

> Hello...
> 
> On Wed, Jul 12, 2023 at 9:37 PM Maxim Dounin wrote:
> >
> > Hello!
> >
> > On Wed, Jul 12, 2023 at 06:40:21PM -0300, Fabiano Furtado Pessoa Coelho 
> > wrote:
> >
> > > Hello...
> > >
> > > I have compiled NGINX 1.24.0 with "./configure ... --modules-path= 
> > > ...".
> > >
> ...
> > > //ngx_http_module1.so;", it works fine.
> >
> > The "--modules-path" configure option defines a directory where
> > dynamic modules will be installed by "make install" and only used
> > during installation.
> >
> > The "load_module" configuration directive, if used with relative
> > paths, resolves such paths from "--prefix", much like most of the
> > configuration directives.  Note the example in the documentation
> > (http://nginx.org/r/load_module):
> >
> > load_module modules/ngx_mail_module.so;
> >
> > This example assumes the default "--modules-path", so modules are
> > installed into the "modules" directory under prefix.
> 
> First of all, thanks for the help.
> 
> Is it a good idea to modify the "load_module" directive to read the
> "--modules-path" parameter, if defined by "./configure", to use a
> custom relative path instead of default "--prefix" relative path?

No, adding yet another option to resolve relative paths in 
configuration looks like a bad idea to me.  Resolving these from 
prefix looks perfectly correct and expected.

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: load_module with "./configure ... --modules-path= ..."

2023-07-12 Thread Maxim Dounin
Hello!

On Wed, Jul 12, 2023 at 06:40:21PM -0300, Fabiano Furtado Pessoa Coelho wrote:

> Hello...
> 
> I have compiled NGINX 1.24.0 with "./configure ... --modules-path= ...".
> 
> Therefore, the "load_module" directive is not reading the path/config
> specified in "--modules-path" parameter. For instance, in nginx.conf,
> I have declared "load_module ngx_http_module1.so;" and when I test it,
> the following message appears:
> # nginx -t
> nginx: [emerg] dlopen() "/etc/nginx/ngx_http_module1.so" failed
> (/etc/nginx/ngx_http_module1.so: cannot open shared object file: No
> such file or directory) in /etc/nginx/nginx.conf:12
> nginx: configuration file /etc/nginx/nginx.conf test failed
> 
> Why? Should the "load_module" directive read "--modules-path" value,
> if specified in "./configure"? From the "load_module" official
> documentation (http://nginx.org/en/docs/ngx_core_module.html#load_module),
> there is no default value defined, but, once "--modules-path" is
> configured, the "load_module" directive should read from
> "--modules-path", right? If not, what is the purpose of "./configure
> ... --modules-path= ...", to compile NGINX?
> 
> Of course, if I specify the full path "load_module
> //ngx_http_module1.so;", it works fine.

The "--modules-path" configure option defines a directory where 
dynamic modules will be installed by "make install" and only used 
during installation.

The "load_module" configuration directive, if used with relative 
paths, resolves such paths from "--prefix", much like most of the 
configuration directives.  Note the example in the documentation 
(http://nginx.org/r/load_module):

load_module modules/ngx_mail_module.so;

This example assumes the default "--modules-path", so modules are 
installed into the "modules" directory under prefix.
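For illustration, assuming a build with --prefix=/usr/local/nginx and the default --modules-path (paths here are hypothetical, adjust to your build):

```nginx
# relative path: resolved from --prefix, i.e.
# /usr/local/nginx/modules/ngx_http_module1.so
load_module modules/ngx_http_module1.so;

# absolute path: used as-is, regardless of --prefix or --modules-path
load_module /opt/nginx-modules/ngx_http_module1.so;
```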

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: Possible bug with "proxy_intercept_errors on; " + "error_page 301 302"?

2023-06-30 Thread Maxim Dounin
Hello!

On Fri, Jun 30, 2023 at 01:24:18PM -0300, Fabiano Furtado Pessoa Coelho wrote:

> Hello Maxim...
> 
> On Thu, Jun 29, 2023 at 7:00 PM Maxim Dounin  wrote:
> >
> > Hello!
> >
> > On Thu, Jun 29, 2023 at 04:29:39PM -0300, Fabiano Furtado Pessoa Coelho 
> > wrote:
> >
> > > Hi...
> ...
> > > "proxy_intercept_errors on;"?
> >
> > The "proxy_intercept_errors" handling does not copy any response
> > headers from the original response (the only exception is
> > WWW-Authenticate for 403 responses).
> >
> > If you want nginx to copy some headers, consider doing it yourself
> > with the $upstream_http_* variables and the add_header directive.
> > Something like this should work:
> >
> > location /30x.html {
> > add_header Location $upstream_http_location;
> > ...
> > }
> >
> > Note though that you'll have to manually rewrite location if
> > needed (as proxy_redirect handling won't be used).
> 
> Your approach worked fine for me! Thanks for the help.
> 
> However, without this "add_header" modification and with
> "proxy_intercept_errors on;", I have two situations:
> * without "error_page 301 302 /30x.html;" directive configured: I
> receive the HTTP "location" header from my NGINX;
> * with "error_page 301 302 /30x.html;" directive configured: I don't
> receive the HTTP "location" header from my NGINX;
> 
> If "proxy_intercept_errors" handling does not copy any response
> headers from the original response, why is this HTTP "location" header
> present with "error_page 301 302 /30x.html;" directive configured in
> my system?
> 
> I really don't understand why it happens.

The "proxy_intercept_errors" directive only works for error 
responses for which you have an error_page explicitly configured.  
That is:

- If you don't have an error_page configured for the particular 
  response code, nginx returns the original response as obtained 
  from the upstream server, with all the response headers, much like 
  it does with "proxy_intercept_errors off;".

- If you do have an error_page configured, nginx instead 
  intercepts the error as per "proxy_intercept_errors on;", and 
  returns the error page to the client.  This response does not contain
  headers from the original response.

This seems to be exactly what you are seeing (assuming "with" in 
your final question is a typo, and you mean "without" instead).  
Hope this helps.
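Putting the two points together, a sketch based on the configuration from the original question (the add_header line is the workaround for the intercepted response carrying no upstream headers):

```nginx
location /mysystem {
    rewrite ^(.*)$ / break;
    proxy_pass http://php_server;

    # only intercepts codes that have an error_page configured
    proxy_intercept_errors on;
    error_page 301 302 /30x.html;
}

location /30x.html {
    root /etc/nginx/custom_error_pages;

    # copy the Location header from the original upstream response;
    # note that proxy_redirect rewriting does not apply here
    add_header Location $upstream_http_location;
    internal;
}
```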

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: Possible bug with "proxy_intercept_errors on; " + "error_page 301 302"?

2023-06-29 Thread Maxim Dounin
Hello!

On Thu, Jun 29, 2023 at 04:29:39PM -0300, Fabiano Furtado Pessoa Coelho wrote:

> Hi...
> 
> I have one peculiar issue with NGINX 1.22.0 + "proxy_intercept_errors
> on;" + custom 302 "error_page".
> 
> Here is my simplified NGINX config:
> 
> http {
>   error_page 301 302 /30x.html;
>   server {
> location /30x.html {
>   root /etc/nginx/custom_error_pages;
>   sub_filter_once off;
>   sub_filter '*HTTP_STATUS_CODE*' '$status';
>   internal;
> }
> location /mysystem {
>   rewrite ^(.*)$ / break; # remove "/mysystem" and only send "/" to 
> backend
>   proxy_intercept_errors on;
>   proxy_pass http://php_server;
> }
>   }
> }
> 
> Using both "proxy_intercept_errors on;" and "error_page" directive, I
> have HTTP 302 response from my php backend server (OK!), but without
> HTTP new "location" header:
> 
> $ curl -v https://foo/mysystem
> ...
> < HTTP/2 302
> < date: Thu, 29 Jun 2023 17:48:31 GMT
> < content-type: text/html
> < strict-transport-security: max-age=63072000
> ...
> 
> Why? If I turn off the "proxy_intercept_errors" directive or remove
> the line "error_page 301 302 /30x.html;", it works:
> 
> $ curl -v https://foo/mysystem
> ...
> < HTTP/2 302
> < date: Thu, 29 Jun 2023 18:05:42 GMT
> < content-type: text/html; charset=UTF-8
> < location: https://bar
> < strict-transport-security: max-age=63072000
> ...
> 
> Is this normal? Why can't I have a 302 custom error page using
> "proxy_intercept_errors on;"?

The "proxy_intercept_errors" handling does not copy any response 
headers from the original response (the only exception is 
WWW-Authenticate for 403 responses).

If you want nginx to copy some headers, consider doing it yourself 
with the $upstream_http_* variables and the add_header directive.  
Something like this should work:

location /30x.html {
add_header Location $upstream_http_location;
...
}

Note though that you'll have to manually rewrite location if 
needed (as proxy_redirect handling won't be used).

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: chunk body in response

2023-06-27 Thread Maxim Dounin
Hello!

On Tue, Jun 27, 2023 at 10:57:37AM +, Yuval Abadi via nginx wrote:

> Hi
> 
> Nginx 1.23.2
> 
> When the HTTP server replies with a chunked body, nginx reports an error
> 0 upstream sent invalid chunked response while reading upstream, client: 
> 10.1.0.14, server: , request:
> 
> My nginx.conf have these attribute under the server section:
> 
> + proxy_http_version 1.1;
> + proxy_set_header Connection "";
> 
> What am I Missing?
> Is new/other Nginx version solving this issue?

The error message suggests your upstream server returned an 
invalid response.  Check nginx debug logs and/or tcpdump for more 
details (and probably some hints on how to fix your upstream 
server).

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: nginx_http_write_filter_module.c

2023-06-21 Thread Maxim Dounin
Hello!

On Wed, Jun 21, 2023 at 11:36:54AM +, Yuval Abadi via nginx wrote:

> Nginx 1.23.2
> 
> The call ngx_http_write_filter(ngx_http_request_t *r, ngx_chain_t *in)
> 
> loops over the request out buffers and the in chain,
> 
> and calculates the size of the combined buffer to write.
> 
> I add a body replacement on the chain.
> It has a value, but ngx_buf_size(cl->buf)
> returns a size of 0.
> 
> (cl->buf->last - cl->buf->pos) gives the correct size
> 
> So the buffer is not written.
> 
> Only the header part is sent back to the client
> 
> Any idea?
> 
> 2023/06/21 10:45:31 [debug] 22962#22962: *3 write old buf t:1 f:0 
> 5600970D81E0, pos 5600970D81E0, size: 179 file: 0, size: 0
> 2023/06/21 10:45:31 [debug] 22962#22962: *3 write new buf t:0 f:0 
> 5600970D8098, pos 5600970D8098, size: 247 file: 0, size: 0
> 2023/06/21 10:45:31 [debug] 22962#22962: *3 http write filter: l:1 f:0 s:179
> 2023/06/21 10:45:31 [debug] 22962#22962: *3 http write filter limit 2097152
> 2023/06/21 10:45:31 [debug] 22962#22962: *3 writev: 179 of 179

Note the "t:0" part in the relevant debug log line: it suggests 
that the cl->buf->temporary flag is not set.

If the cl->buf->memory flag isn't set as well, this basically 
means that buffer does not reference any memory at all (even if 
cl->buf->pos / cl->buf->last pointers are set).  For such a 
buffer ngx_buf_size() will return the file size.  And, since the 
buffer does not reference file data as well (note "f:0"), it is 
expected to be 0 (and it is, as per "file: 0, size: 0").

So it looks like the behaviour you observe is the result of 
forgotten cl->buf->memory (or cl->buf->temporary) flag.  Check 
your code to see if the buffer flags are set correctly, it looks 
like they aren't.

See

http://nginx.org/en/docs/dev/development_guide.html#buffer

for basic information about memory buffers and some basic 
examples.  For additional information, check nginx code.

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


nginx-1.25.1

2023-06-13 Thread Maxim Dounin
Changes with nginx 1.25.1                                        13 Jun 2023

*) Feature: the "http2" directive, which enables HTTP/2 on a per-server
   basis; the "http2" parameter of the "listen" directive is now
   deprecated.

*) Change: HTTP/2 server push support has been removed.

*) Change: the deprecated "ssl" directive is not supported anymore.

*) Bugfix: in HTTP/3 when using OpenSSL.


-- 
Maxim Dounin
http://nginx.org/


Re: nginx mail auth_http response support ssl?

2023-06-13 Thread Maxim Dounin
Hello!

On Tue, Jun 13, 2023 at 10:18:44AM +0800, l...@cndns.com wrote:

> I want the response to support ssl, is there a way to achieve it? 
> 
> HTTP/1.0 200 OK 
> 
> Auth-Status: OK 
> 
> Auth-Server: 198.51.100.1 
> 
> Auth-Port: 143 
> 
> add parameter Auth-Ssl: on ? 

The mail proxy module in nginx is designed to work with backend 
servers in trusted environment, and does not provide a direct way 
to establish SSL connections to backend servers.

If you want nginx to use SSL when connecting to backend servers, 
consider using an SSL tunnel to do so.  For example, you can make 
one with the stream module, see here for details:

http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html
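
As a sketch (hypothetical addresses, ports, and certificate path; adjust
to your environment), such a tunnel might look like:

```nginx
stream {
    server {
        # local plain-text port that auth_http can point to via
        # "Auth-Server: 127.0.0.1" / "Auth-Port: 10143"
        listen 127.0.0.1:10143;

        # SSL connection to the actual IMAP backend
        proxy_pass backend.example.com:993;
        proxy_ssl on;
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
        proxy_ssl_name backend.example.com;
    }
}
```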

-- 
Maxim Dounin
http://mdounin.ru/


Re: Regex match the middle of a URL and also the ending?

2023-06-03 Thread Maxim Dounin
Hello!

On Sun, Jun 04, 2023 at 07:30:40AM +1000, Jore wrote:

> Hi there,
> 
> Thanks for getting back.
> 
> On 4/6/23 3:16 am, Maxim Dounin wrote:
> 
> > Hello!
> 
> […]
> 
> > The "^~" location modifier is for prefix-match locations to prevent 
> > further checking of regular expressions, see 
> > http://nginx.org/r/location for details. If you want to use a regular 
> > expression, you have to use the "~" modifier instead.
> 
> Thank you for that. Apologies, I should’ve mentioned that I did review 
> that documentation on how nginx selects a location. Unfortunately I 
> didn’t find it particularly clear or helpful.
> 
> I especially thought this rule in question would match and take 
> precedence over the latter /browser rule, because of this line on that page:
> 
> "If the longest matching prefix location has the “^~” modifier then
> regular expressions are not checked."
> 
> i.e. because this rule in question comes first and it is longer than the 
> latter /browser rule, a match would occur here and not later (because 
> processing stops here)?

The most important part is in the following paragraph:

  A location can either be defined by a prefix string, or by a 
  regular expression. Regular expressions are specified with the 
  preceding “~*” modifier (for case-insensitive matching), or the 
  “~” modifier (for case-sensitive matching). To find location 
  matching a given request, nginx first checks locations defined 
  using the prefix strings (prefix locations). Among them, the 
  location with the longest matching prefix is selected and 
  remembered. Then regular expressions are checked, in the order of 
  their appearance in the configuration file. The search of regular 
  expressions terminates on the first match, and the corresponding 
  configuration is used. If no match with a regular expression is 
  found then the configuration of the prefix location remembered 
  earlier is used.

In other words:

- Regular expressions are those with "~*" and "~" modifiers.  
  Everything else is a prefix string.

- For prefix strings, longest matching prefix is used (note that 
  order of prefix locations is not important).

- If the longest prefix found does not disable regular expression 
  matching (with the "^~" modifier, as per the quote you've 
  provided), regular expressions are checked in order.

As long as a regular expression is matched, nginx will use the 
corresponding location.  If no regular expressions matched, nginx 
will use the longest matching prefix location.
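
Under these rules, in a hypothetical configuration like the following, a
request for "/doc/x.php" hits the regex location (the "/doc/" prefix does
not disable regex matching), while "/images/x.php" stays in the prefix
one:

```nginx
location /doc/       { }   # prefix; used only if no regex matches
location ^~ /images/ { }   # prefix; regex matching disabled under /images/
location ~ \.php$    { }   # regex; wins over /doc/ for /doc/x.php
```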

The "location" directive description additionally provides some 
examples explaining how this all works.  Reading the 
https://nginx.org/en/docs/http/request_processing.html article 
might be also helpful.

> And because I couldn’t find much on how nginx handles regex, I ended up 
> checking this question/answer 
> <https://stackoverflow.com/questions/59846238> on Stackoverflow. It 
> cleared things up a little, but still made me wonder why my approach 
> didn’t work.
> 
> Nevertheless, your suggestion to remove the priority prefix "^~" for 
> the second rule fixed the problem, but I still wonder why my approach 
> didn’t work. ;)

In your configuration,

location ^~ "/browser/.*/welcome/welcome.html" { ... }

is a location defined by a prefix string.

It will work for requests with the given prefix, such as 
"/browser/.*/welcome/welcome.html" or 
"/browser/.*/welcome/welcome.html.foobar".  But since it is a 
prefix string, and not a regular expression, the ".*" characters 
do not have any special meaning, and matched literally.  That 
is, this location won't match requests to resources like 
"/browser/foo123/welcome/welcome.html", since these use a 
different prefix.

To make it match requests to 
"/browser/foo123/welcome/welcome.html", you have to change the 
location to a location defined by a regular expression.  That is, you 
have to change the "^~" modifier to "~" modifier (and it is also a 
good idea to change the regular expression to a slightly more 
explicit one, see my initial response).  But it is not enough, see 
below.

Similarly,

location ^~ /browser { ... }

is also a location defined by a prefix string.  Further, due to 
the "^~" modifier, it disables matching of regular expressions, so 
any request which starts with "/browser" won't be checked against 
regular expressions.  So you have to remove the "^~" modifier if 
you want nginx to check regular expressions, notably the one in 
the first location (assuming "^~" is changed to "~").

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Regex match the middle of a URL and also the ending?

2023-06-03 Thread Maxim Dounin
Hello!

On Sun, Jun 04, 2023 at 12:26:55AM +1000, Jore wrote:

> Hi all,
> 
> I have an app on a domain that is set by a developer to proxy at certain 
> URLs:
> 
> example.com/browser/123foo0/stuff.js
> 
> for example, where 123foo0 is some random key. The key may also change 
> length in future.
> 
> That’s all fine.
> 
> But I’d like to interrupt specific requests and not proxy them: I don’t 
> want to serve anything after the key that is in the path |/welcome| for 
> example, i.e. not proxy any of these:
> 
> example.com/browser/123foo0/welcome/welcome.html
> example.com/browser/foo456b/welcome/welcome.css
> example.com/browser/bar123f/welcome/welcome.js
> example.com/browser/456foob/welcome/other.stuff
> example.com/browser/foo789b/welcome/
> 
> So I tried simple stuff first like:
> location ^~ /browser/.*/welcome/welcome.html {...
> but couldn’t even get that working, before moving on to try capturing 
> groups like css files and scripts and so on.
> 
> I also tried putting regex in quotes, but that didn’t seem to work either.
> 
> What am I doing wrong?
> 
> Here’s a truncated version of the conf, with the location blocks only:
> 
> location ^~ "/browser/.*/welcome/welcome.html" {
> return 200 'Not proxied.\n';
> add_header Content-Type text/plain;
> }
> 
> location ^~ /browser {
> proxy_pass http://127.0.0.1:1234;
> proxy_set_header Host $http_host;
> }
> 
> # landing page
> location / {
> root /var/www/foobar;
> index index.html;
> try_files $uri $uri/ /index.html;
> }

The "^~" location modifier is for prefix-match locations to 
prevent further checking of regular expressions, see 
http://nginx.org/r/location for details.  If you want to use a 
regular expression, you have to use the "~" modifier instead.

That is, proper configuration will look like:

location ~ ^/browser/.*/welcome/welcome.html$ {
# URI matches given regular expression
...
}

location /browser/ {
# URI starts with /browser/
...
}

location / {
# anything else
...
}

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/


nginx-1.25.0

2023-05-23 Thread Maxim Dounin
Changes with nginx 1.25.0                                        23 May 2023

*) Feature: experimental HTTP/3 support.


-- 
Maxim Dounin
http://nginx.org/


Re: http2 error: Invalid HTTP header field was received:

2023-05-23 Thread Maxim Dounin
Hello!

On Tue, May 23, 2023 at 02:51:02PM +, Johnatan Hallman via nginx wrote:

> Hello List,
> 
> I have a setup where I put an ancient host running a perl-cgi 
> app behind an nginx reverse proxy.
> 
> The http reverse proxy works fine however if I try:
> 
> curl -v --anyauth -k  https://app.test.lan/cgi-bin/page.pl
> 
> I get:
> 
> http2 error: Invalid HTTP header field was received: frame type: 
> 1, stream: 3, name: [defined(%hash) is deprecated at page.pl 
> line 14.], value: []
> 
> However if I define http1.1 it works fine:
> 
> curl -v --http1.1 --anyauth -k  
> https://app.test.lan/cgi-bin/page.pl
> 
> I guess all modern browsers default to http2 these days so that 
> I can specify it for curl is not a big help. I need this to work 
> out of the box.
> 
> I have read that Nginx with the listen   443 ssl http2; 
> directive will downgrade the connection to http1.1 automatically 
> if it's negotiated... otherwise my previous test wouldn't work 
> at all.
> 
> What I have tried is adding this version directive into the 
> location:
> 
> location / {
> proxy_pass http://10.9.0.90/;
> proxy_http_version 1.1;
> }
> 
> Makes no difference. Any idea how to fix this without touching 
> the original app or the original webserver.

It looks like the original app returns an invalid HTTP header (or, 
more specifically, returns Perl warnings among the HTTP headers).  
This used to work, but with stricter header checking requirements 
in HTTP/2 it is rejected by the clients.

Further, nginx itself will reject such invalid upstream responses 
starting with nginx 1.21.1, quoting CHANGES:

*) Change: now nginx always returns an error if spaces or control
   characters are used in a header name.

That is, with newer nginx version the only option is to actually 
fix the upstream server.

With older nginx versions, you may try disabling HTTP/2 (that is, 
on the listen socket) as a temporary workaround, this will likely 
prevent (at least some) clients from rejecting such invalid 
responses.  Note though that this is, uhm, a workaround, and 
proper solution is to fix the upstream server.
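
A minimal sketch of that workaround (server name and backend address
taken from the question; the rest is hypothetical):

```nginx
server {
    # "http2" parameter omitted so clients negotiate HTTP/1.1,
    # which is more lenient about the invalid upstream header
    listen 443 ssl;
    server_name app.test.lan;

    location / {
        proxy_pass http://10.9.0.90/;
    }
}
```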

-- 
Maxim Dounin
http://mdounin.ru/


Re: QUIC Stream and Server Push support

2023-05-19 Thread Maxim Dounin
Hello!

On Fri, May 19, 2023 at 06:57:04PM -0500, Dustin Marquess wrote:

> I noticed that QUIC Stream and QUIC Server Push support were 
> removed in
> 
> https://hg.nginx.org/nginx-quic/rev/113e2438dbd4
> 
> And
> 
> https://hg.nginx.org/nginx-quic/rev/f742b1b46901
> 
> I'm curious on what the reasoning was, given that none was given 
> in the commit messages?

These features weren't considered to be usable enough to be merged 
into the mainline branch, hence were removed before the merge.

For QUIC in the stream module, see this message, which provides 
detailed explanation:

https://mailman.nginx.org/pipermail/nginx/2023-April/CZ5L6KMLLZNDP2R6TGAX5MN4VLQA3U3R.html

For HTTP/3 Server Push, the main reason is that Server Push is not 
generally usable, and, in particular, not supported by browsers 
(notably, removed from Chrome, see 
https://developer.chrome.com/blog/removing-push/).  Removing 
HTTP/2 Server Push support is also planned sometime during nginx 1.25.x 
development cycle.

-- 
Maxim Dounin
http://mdounin.ru/


Re: ssl preread for postgres connection

2023-05-14 Thread Maxim Dounin
Hello!

On Sun, May 14, 2023 at 09:55:54AM +0400, Roman Arutyunyan wrote:

> Hi Eduard,
> 
> On Sat, May 13, 2023 at 10:43:59PM -0600, Eduard Vercaemer wrote:
> > for some context, I recently I tried configuring nginx as a tcp proxy that
> > routes
> > connections based on sni to multiple upstream services
> > 
> > the server only exposes one tcp port, and receives all connections there,
> > for example
> > a connection to redis.example.com:1234 would be proxy_pass'ed to some port
> > in the
> > machine, a connection to www.example.com:1234 to another, etc.
> > 
> > i used nginx itself to terminate the tls for all services for convenience
> > 
> > the problem:
> > now here is the issue, 1: postgres does some weird custom ssl stuff, which
> > means I
> > cannot terminate the ssl from within nginx
> 
> In this case there must be an SSL error logged in nginx error log.
> Can you post it?

Postgres uses their own protocol with STARTTLS-like interface to 
initiate SSL handshake, see here:

https://www.postgresql.org/docs/current/protocol-flow.html#id-1.10.6.7.12

That is, it's not going to work with neither SSL termination, nor 
SSL preread, and needs an implementation of the Postgres protocol.

[...]

-- 
Maxim Dounin
http://mdounin.ru/


Re: Duplicate Content-Length header with same value, recent change in behavior intentional?

2023-04-19 Thread Maxim Dounin
Hello!

On Wed, Apr 19, 2023 at 08:57:28AM -0400, Jesse Stimpson via nginx wrote:

> I've noticed a recent change in behavior that occurred between releases
> 1.22.1 and 1.23.4, and I'm unable to find this particular change documented.
> 
> If an upstream proxy sends a response with duplicate Content-Length
> headers, both of the same value, nginx 1.22.1 allows the response to be
> sent to the client, removing the duplicate. However, nginx 1.23.4 (and
> 1.24.0) responds to the client with a 502. I did not explicitly test with
> any other versions.
> 
> I won't try to make any claims on the correctness of either behavior, but
> it is a change that may affect some workloads.

[...]

> Is this change intentional? Did I overlook it in the Changelog?

Yes, it is.  It is part of a large work to implement proper 
handling of multiple header lines with the same name, as done in 
nginx 1.23.0.  The particular commit is:

http://hg.nginx.org/nginx/rev/2bf7792c262e

The particular modification isn't in the change log as the 
upstream response in question is obviously invalid and should 
never be accepted in the first place, and the change is more or 
less a minor cleanup work.

-- 
Maxim Dounin
http://mdounin.ru/


Re: resolver does not work

2023-04-18 Thread Maxim Dounin
Hello!

On Tue, Apr 18, 2023 at 10:25:09PM +0200, basti wrote:

> Hello Maxim,
> 
> that does not work for me.
> 
> root@tower:~# resolvectl query icinga2
> icinga2: 192.168.145.217   -- link: ve-icinga2
> 
> -- Information acquired via protocol LLMNR/IPv4 in 105.7ms.
> -- Data is authenticated: no
> 
> root@tower:~# getent hosts icinga2
> 192.168.145.217 icinga2
> 
> root@tower:~# cat /etc/resolv.conf
> nameserver 127.0.0.53
> options edns0
> root@tower:~#
> 

And what "nslookup icinga2 127.0.0.53" shows?

> location / {
> 
>  proxy_redirect off;
>  proxy_set_header Host $remote_addr;
>  proxy_set_header X-Forwarded-For  $remote_addr;
>  proxy_set_header X-Forwarded-Host $remote_addr;
>  proxy_set_header X-Real-IP $remote_addr;
> #proxy_pass https://icinga2;
>  resolver 127.0.0.53 valid=10s;
>  set $upstream "https://icinga2";;
>  proxy_pass $upstream;
> 
>  # WebSocket support
>  proxy_http_version 1.1;
>  proxy_set_header Upgrade $http_upgrade;
>  proxy_set_header Connection "upgrade";
> }
> 
> 2023/04/18 22:16:28 [error] 779163#779163: *26641 icinga2 could not be 
> resolved (3: Host not found),

As previously mentioned, the message suggests that the name is not 
available via DNS.  This is to be resolved on the systemd side.

Unfortunately, I'm not familiar with systemd-nspawn and have no 
idea how to do it properly and if it's at all possible.

-- 
Maxim Dounin
http://mdounin.ru/


Re: resolver does not work

2023-04-18 Thread Maxim Dounin
Hello!

On Tue, Apr 18, 2023 at 02:58:08PM +0200, mailingl...@unix-solution.de wrote:

> Hello,
> I have a nginx proxy in front of systemd-nspawn containers.
> The IP's of the containers are dynamic.
> 
> When I start nginx *after* the  containers it works.
> When the IP of the container is changed while nginx is running i get a 
> "Bad Gateway" error.
> 
> The config looks like:
> 
> server {
>      server_name foobar.example.com
> ...
>     location / {
>    resolver 127.0.0.53 valid=10s;
>    ...
>    proxy_pass http://container;
>     }
> }
> 
> nginx is 1.1.18 so it should work as documented in 
> http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver
> The workaround there 
> https://stackoverflow.com/questions/42720618/docker-nginx-stopped-emerg-11-host-not-found-in-upstream/52319161#52319161
>  
> doesn't work.
> 
> I have also try to config a upstream backend and the resolver in the 
> server part or in the http part.
> The errors are: "upstream timed out" or "container could not be resolved 
> (3: Host not found)"
> 
> Whats wrong there?

The "resolver" directive is only used when the "proxy_pass" 
directive contain variables, and therefore nginx isn't able to 
determine upstream addresses when parsing configuration.  Quoting 
http://nginx.org/r/proxy_pass:

: Parameter value can contain variables. In this case, if an 
: address is specified as a domain name, the name is searched among 
: the described server groups, and, if not found, is determined 
: using a resolver.

The "workaround", however, should work fine.  Something like this 
should be enough:

    resolver 127.0.0.53 valid=10s;
    set $upstream "http://container";
    proxy_pass $upstream;

Note though that there should be an actual DNS name which can be 
resolved with the DNS server configured with the "resolver" 
directive.  The "container could not be resolved (3: Host not 
found)" error suggests that your DNS server does not return 
anything for the name - you may want to dig further into systemd 
to find out why it doesn't return anything.

-- 
Maxim Dounin
http://mdounin.ru/


Re: ngx_list_free

2023-04-18 Thread Maxim Dounin
Hello!

On Tue, Apr 18, 2023 at 03:12:08PM +, Yuval Abadi via nginx wrote:

> Thanks
> 
> I just need a list, not a queue,

Queue is a doubly linked list.

> And I want to free the whole list, not one node.

In nginx, memory allocations use pools, and in most cases you 
cannot free individual allocations.  The only option is to 
destroy the whole pool.

For example, this naturally works with requests: all allocations 
related to a request are done from the request pool, and are 
automatically freed when the request is freed and its pool is 
destroyed.

See here for more details about memory pools:

http://nginx.org/en/docs/dev/development_guide.html#pool

-- 
Maxim Dounin
http://mdounin.ru/


Re: ngx_list_free

2023-04-18 Thread Maxim Dounin
Hello!

On Tue, Apr 18, 2023 at 06:14:46AM +, Yuval Abadi via nginx wrote:

> Ngx_list_t has a create API but no free_list API.
> 
> Why?
> 
> I wrote my own free_list; has anyone else faced this issue?

Much like many things in nginx, ngx_list_t structure is an 
append-only structure, and it is expected to be used as a list of 
items being added.  Most notably, ngx_list_t is used for request 
headers (these are added to the list when they are parsed), and 
response headers (these are added when they are generated 
internally).

If you have to remove an item for some reason (which shouldn't be 
the common case for legitimate usage, but might happen to be needed 
occasionally), the common approach is to use a "skip" flag in the 
structures being listed.  For example, response headers are 
cleared by setting the "hash" field to 0, so such headers are 
ignored by the code iterating over the list.

If you need something more dynamic in your code, ngx_list_t might 
not be the right structure to use.  Something like ngx_queue_t or 
ngx_rbtree_t might be a better choice.

-- 
Maxim Dounin
http://mdounin.ru/


nginx-1.24.0

2023-04-11 Thread Maxim Dounin
Changes with nginx 1.24.0                                        11 Apr 2023

*) 1.24.x stable branch.


-- 
Maxim Dounin
http://nginx.org/


Re: client certificate to upstream via nginx

2023-03-29 Thread Maxim Dounin
Hello!

On Wed, Mar 29, 2023 at 01:13:10PM +0530, Vishwas Bm wrote:

> We have a client certificate in pem format and it is having below content
> and all these certificates are present in a single file.
> serverCertificate -> intermediate CA -> rootCA
> 
> Now in the nginx.conf have below directives set:

[...]

> *  proxy_set_header ssl-client-cert $ssl_client_escaped_cert; *

[...]

> So based on above only the client certificate is sent to the upstream
> server but our expectation was that the complete pem contents including all
> the 3 certs (client, intermediate and root) will be sent to the backend.
> 
> Is our expectation correct or wrong ?
> Any change has to be done to handle this ?

I've already replied in the ticket #2476, which I assume is 
created by you (https://trac.nginx.org/nginx/ticket/2476).  For 
the record:

The $ssl_client_escaped_cert variable is expected to contain only 
the client certificate, and not the additional/extra certificates 
sent by the client.  This matches the behaviour you are seeing.

Further, it is not generally possible to provide a variable with 
extra certificates sent by the client: these certificates are not 
saved into the session data and therefore not available in resumed 
SSL sessions, see ticket #2297 
(https://trac.nginx.org/nginx/ticket/2297) and here:

https://mailman.nginx.org/pipermail/nginx-devel/2022-January/L3RBOEOUD5OFU23DYJAJG775ZJDASNEF.html

Summing the above, it might be a good idea to reconsider the setup 
you are using.

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/


nginx-1.23.4

2023-03-28 Thread Maxim Dounin
Changes with nginx 1.23.4                                        28 Mar 2023

*) Change: now TLSv1.3 protocol is enabled by default.

*) Change: now nginx issues a warning if protocol parameters of a
   listening socket are redefined.

*) Change: now nginx closes connections with lingering if pipelining was
   used by the client.

*) Feature: byte ranges support in the ngx_http_gzip_static_module.

*) Bugfix: port ranges in the "listen" directive did not work; the bug
   had appeared in 1.23.3.
   Thanks to Valentin Bartenev.

*) Bugfix: incorrect location might be chosen to process a request if a
   prefix location longer than 255 characters was used in the
   configuration.

*) Bugfix: non-ASCII characters in file names on Windows were not
   supported by the ngx_http_autoindex_module, the ngx_http_dav_module,
   and the "include" directive.

*) Change: the logging level of the "data length too long", "length too
   short", "bad legacy version", "no shared signature algorithms", "bad
   digest length", "missing sigalgs extension", "encrypted length too
   long", "bad length", "bad key update", "mixed handshake and non
   handshake data", "ccs received early", "data between ccs and
   finished", "packet length too long", "too many warn alerts", "record
   too small", and "got a fin before a ccs" SSL errors has been lowered
   from "crit" to "info".

*) Bugfix: a socket leak might occur when using HTTP/2 and the
   "error_page" directive to redirect errors with code 400.

*) Bugfix: messages about logging to syslog errors did not contain
   information that the errors happened while logging to syslog.
   Thanks to Safar Safarly.

*) Workaround: "gzip filter failed to use preallocated memory" alerts
   appeared in logs when using zlib-ng.

*) Bugfix: in the mail proxy server.


-- 
Maxim Dounin
http://nginx.org/


Re: How to compile Nginx with zlib-ng

2023-03-27 Thread Maxim Dounin
Hello!

On Mon, Mar 27, 2023 at 09:57:34PM +0400, Sergey Kandaurov wrote:

> > On 24 Mar 2023, at 06:07, Maxim Dounin  wrote:
> > 
> > Hello!
> > 
> > On Thu, Mar 23, 2023 at 09:33:19PM +0100, Richard Stanway via nginx wrote:
> > 
> >> Yes, when using the latest zlib-ng on nginx-1.21.6 I received the
> >> alerts. Previous versions of zlib-ng have worked great after the 2021
> >> patch. I tried to update it myself as follows based on advice of
> >> zlib-ng GitHub issues, while it reduced the number of alerts logged it
> >> did not completely solve the issue so it seems the memory requirements
> >> may have further changed. While I would appreciate a proper patch
> >> making it into nginx, the seemingly-frequent upstream changes may make
> >> this difficult to maintain.
> >> 
> >> -ctx->allocated = 8192 + 16 + (1 << (wbits + 2))
> >> +ctx->allocated = 8192 + 288 + 16 + (1 << (wbits + 2))
> >>  + 131072 + (1 << (memlevel + 8));
> > 
> > It looks like there are at least two changes in zlib-ng since I 
> > looked into it:
> > 
> > - Window bits are no longer forced to 13 on compression level 1.
> > 
> > - All allocations use custom alloc_aligned() wrapper, and 
> >  therefore all allocations are larger than expected by (64 + 
> >  sizeof(void*)).
> > 
> > Further, due to the wrapper nginx sees all allocations as an 
> > allocation of 1 element of a given size, so it misinterprets 
> > some allocations as the state allocation.
> > 
> > [..]
> > 
> > Please try the following patch, it should help with recent versions:
> > 
> > # HG changeset patch
> > # User Maxim Dounin 
> > # Date 1679622670 -10800
> > #  Fri Mar 24 04:51:10 2023 +0300
> > # Node ID 67a0999550c3622e51639acb8bde57d199826f7e
> > # Parent  d1cf09451ae84b930ce66fa6d63ae3f7eeeac5a5
> > Gzip: compatibility with recent zlib-ng versions.
> > 
> > It now uses custom alloc_aligned() wrapper for all allocations,
> > therefore all allocations are larger than expected by (64 + sizeof(void*)).
> > Further, they are seen as allocations of 1 element.  Relevant calculations
> > were adjusted to reflect this, and state allocation is now protected
> > with a flag to avoid misinterpreting other allocations as the zlib
> > deflate_state allocation.
> > 
> > Further, it no longer forces window bits to 13 on compression level 1,
> > so the comment was adjusted to reflect this.
> 
> For the record, the corresponding zlib-ng git commits:
> ce6789c7e093e8e6bb6fc591bbdf0f805999bdb9
> a39e323a4db80a57feecf2ae212c08070234050c
> 
> > 
> > diff --git a/src/http/modules/ngx_http_gzip_filter_module.c 
> > b/src/http/modules/ngx_http_gzip_filter_module.c
> > --- a/src/http/modules/ngx_http_gzip_filter_module.c
> > +++ b/src/http/modules/ngx_http_gzip_filter_module.c
> > @@ -57,6 +57,7 @@ typedef struct {
> > unsigned nomem:1;
> > unsigned buffering:1;
> > unsigned zlib_ng:1;
> > +unsigned state_allocated:1;
> > 
> > size_t   zin;
> > size_t   zout;
> > @@ -514,9 +515,10 @@ ngx_http_gzip_filter_memory(ngx_http_req
> > } else {
> > /*
> >  * Another zlib variant, https://github.com/zlib-ng/zlib-ng.
> > - * It forces window bits to 13 for fast compression level,
> > - * uses 16-byte padding in one of window-sized buffers, and
> > - * uses 128K hash.
> > + * It used to force window bits to 13 for fast compression level,
> 
> BTW, it makes sense to peel off this extra allocation somewhere in future,
> similar to how it was done for updated handling of zlib variant from Intel.

Sure, but it's not significant compared to the 128k hash anyway.

> > + * uses (64 + sizeof(void*)) additional space on all allocations
> > + * for alignment, 16-byte padding in one of window-sized buffers,
> > + * and 128K hash.
> >  */
> > 
> > if (conf->level == 1) {
> > @@ -524,7 +526,8 @@ ngx_http_gzip_filter_memory(ngx_http_req
> > }
> > 
> > ctx->allocated = 8192 + 16 + (1 << (wbits + 2))
> > - + 131072 + (1 << (memlevel + 8));
> > + + 131072 + (1 << (memlevel + 8))
> > + + 4 * (64 + sizeof(void*));
> > ctx->zlib_ng = 1;
> > }
> > 

Re: nginx configuration variable interpolation

2023-03-27 Thread Maxim Dounin
Hello!

On Mon, Mar 27, 2023 at 03:24:06PM +, Vilius Šumskas wrote:

> I have the following directive in Nginx configuration:
> 
> location /docs {
> add_header Access-Control-Allow-Origin $cors_origin;
> }
> 
> $cors_origin comes from the map {} where the value is set to * or something 
> else.
> 
> I have noticed that if $cors_origin is set to 0, add_header 
> doesn’t add Access-Control-Allow-Origin header at all. Is this 
> expected? I don’t see anything about such behaviour in the 
> add_header documentation.

The header will not be added if the value is an empty string.  If 
it is not empty but "0", the header will be added just fine.  For 
example, consider the following configuration:

map $uri $cors_origin {
default 0;
}

server {
    ...
    location / {
        add_header Access-Control-Allow-Origin $cors_origin;
        return 204;
    }
}

Test:

$ telnet 127.0.0.1 8080
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.0

HTTP/1.1 204 No Content
Server: nginx/1.22.1
Date: Mon, 27 Mar 2023 16:01:05 GMT
Connection: close
Access-Control-Allow-Origin: 0

Connection closed by foreign host.


As you can see from the telnet output, the 
Access-Control-Allow-Origin is properly added.  If you are seeing 
a different behaviour, you may want to recheck the value actually 
being used.  If it is indeed "0", and not an empty string, please 
provide more details.

-- 
Maxim Dounin
http://mdounin.ru/


Re: How to compile Nginx with zlib-ng

2023-03-23 Thread Maxim Dounin
Hello!

On Thu, Mar 23, 2023 at 09:33:19PM +0100, Richard Stanway via nginx wrote:

> Yes, when using the latest zlib-ng on nginx-1.21.6 I received the
> alerts. Previous versions of zlib-ng have worked great after the 2021
> patch. I tried to update it myself as follows based on advice of
> zlib-ng GitHub issues, while it reduced the number of alerts logged it
> did not completely solve the issue so it seems the memory requirements
> may have further changed. While I would appreciate a proper patch
> making it into nginx, the seemingly-frequent upstream changes may make
> this difficult to maintain.
> 
> -ctx->allocated = 8192 + 16 + (1 << (wbits + 2))
> +ctx->allocated = 8192 + 288 + 16 + (1 << (wbits + 2))
>   + 131072 + (1 << (memlevel + 8));

It looks like there are at least two changes in zlib-ng since I 
looked into it:

- Window bits are no longer forced to 13 on compression level 1.

- All allocations use custom alloc_aligned() wrapper, and 
  therefore all allocations are larger than expected by (64 + 
  sizeof(void*)).

Further, due to the wrapper nginx sees all allocations as an 
allocation of 1 element of a given size, so it misinterprets 
some allocations as the state allocation.

For example, allocations for a 1k responses are as follows (note 
"a:8192" in most of the lines, that is, nginx thinks these are 
state allocations):

2023/03/24 03:26:10 [debug] 36809#100069: *2 http gzip filter
2023/03/24 03:26:10 [debug] 36809#100069: *2 malloc: 21DEE5C0:176144
2023/03/24 03:26:10 [debug] 36809#100069: *2 gzip alloc: n:1 s:6036 a:8192 p:21DEE5C0
2023/03/24 03:26:10 [debug] 36809#100069: *2 gzip alloc: n:1 s:4180 a:8192 p:21DF05C0
2023/03/24 03:26:10 [debug] 36809#100069: *2 gzip alloc: n:1 s:4164 a:8192 p:21DF25C0
2023/03/24 03:26:10 [debug] 36809#100069: *2 gzip alloc: n:1 s:131140 a:131140 p:21DF45C0
2023/03/24 03:26:10 [debug] 36809#100069: *2 gzip alloc: n:1 s:4164 a:8192 p:21E14604
2023/03/24 03:26:10 [debug] 36809#100069: *2 gzip in: 21C31D84

Allocations for 4k response are as follows (and generate an 
alert):

2023/03/24 03:44:29 [debug] 36863#100652: *2 http gzip filter
2023/03/24 03:44:29 [debug] 36863#100652: *2 malloc: 21DEE5C0:188432
2023/03/24 03:44:29 [debug] 36863#100652: *2 gzip alloc: n:1 s:6036 a:8192 p:21DEE5C0
2023/03/24 03:44:29 [debug] 36863#100652: *2 gzip alloc: n:1 s:16468 a:16468 p:21DF05C0
2023/03/24 03:44:29 [debug] 36863#100652: *2 gzip alloc: n:1 s:16452 a:16452 p:21DF4614
2023/03/24 03:44:29 [debug] 36863#100652: *2 gzip alloc: n:1 s:131140 a:131140 p:21DF8658
2023/03/24 03:44:29 [alert] 36863#100652: *2 gzip filter failed to use preallocated memory: 16452 of 16180 while sending response to client, client: 127.0.0.1, server: one, request: "GET /t/4k HTTP/1.1", host: "127.0.0.1:8080"
2023/03/24 03:44:29 [debug] 36863#100652: *2 malloc: 21DC58C0:16452
2023/03/24 03:44:29 [debug] 36863#100652: *2 gzip in: 21C31D98

The "+ 288" you are using should be enough to cover additional 
memory used for alignment, but it is not enough to account 
for misinterpretation when using gzip_comp_level above 1 (so nginx 
won't allocate additional memory assuming window bits will be 
adjusted to 13).

Please try the following patch, it should help with recent versions:

# HG changeset patch
# User Maxim Dounin 
# Date 1679622670 -10800
#  Fri Mar 24 04:51:10 2023 +0300
# Node ID 67a0999550c3622e51639acb8bde57d199826f7e
# Parent  d1cf09451ae84b930ce66fa6d63ae3f7eeeac5a5
Gzip: compatibility with recent zlib-ng versions.

It now uses custom alloc_aligned() wrapper for all allocations,
therefore all allocations are larger than expected by (64 + sizeof(void*)).
Further, they are seen as allocations of 1 element.  Relevant calculations
were adjusted to reflect this, and state allocation is now protected
with a flag to avoid misinterpreting other allocations as the zlib
deflate_state allocation.

Further, it no longer forces window bits to 13 on compression level 1,
so the comment was adjusted to reflect this.

diff --git a/src/http/modules/ngx_http_gzip_filter_module.c b/src/http/modules/ngx_http_gzip_filter_module.c
--- a/src/http/modules/ngx_http_gzip_filter_module.c
+++ b/src/http/modules/ngx_http_gzip_filter_module.c
@@ -57,6 +57,7 @@ typedef struct {
 unsigned nomem:1;
 unsigned buffering:1;
 unsigned zlib_ng:1;
+unsigned state_allocated:1;
 
 size_t   zin;
 size_t   zout;
@@ -514,9 +515,10 @@ ngx_http_gzip_filter_memory(ngx_http_req
 } else {
 /*
  * Another zlib variant, https://github.com/zlib-ng/zlib-ng.
- * It forces window bits to 13 for fast compression level,
- * uses 16-byte padding in one of window-sized buffers, and
- * uses 128K hash.
+ * It used to force window bits to

Re: ngx_mail: untagged IMAP responses

2023-03-11 Thread Maxim Dounin
Hello!

On Sat, Mar 11, 2023 at 06:15:24PM +, Anton Yuzhaninov via nginx wrote:

> Trying to set up an ngx_mail proxy I've discovered that nginx doesn't like 
> untagged responses which can be sent by IMAP servers. There was a similar 
> question in 2013:
> 
> https://mailman.nginx.org/pipermail/nginx-devel/2013-March/003477.html
> 
> Given that it is a known issue and it has not been addressed I don't expect 
> it 
> to be supported by nginx out of the box but may be there are 3rd party 
> patches 
> for this I can try?

I'm not aware of any.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Reverse Proxy 502 Bad Gateway

2023-03-03 Thread Maxim Dounin
Hello!

On Fri, Mar 03, 2023 at 04:33:25PM +0100, Hans Müller wrote:

>Hello,
>I am working in a proxmox environment, setting up an nginx reverse proxy
>(192.168.178.103) forwarding requests via https to a nginx backend
>server (192.168.178.105). On the backend server shellinabox is
>installed. Requests from the internet are secured via a Letsencrypt
>certificate. For the encryption to the backend server I use a
>self-signed certificate.
>When I want to open next-shell.example.com I get a 502 Bad Gateway
>error
>On the reverse proxy are the following configs
>HttpGateway

[...]

>location / {
>proxy_set_header Host $host;
>proxy_set_header X-Real-IP $remote_addr;
>proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>proxy_ssl_certificate /etc/selfcerts/stern-example-cert-chain.pem;
>proxy_ssl_certificate_key /etc/selfcerts/stern-example-key.pem;
>proxy_ssl_verify off;
>proxy_pass https://192.168.178.105:4200;

Note you are proxying using the "https" protocol.

[...]

>On the backend server there is the following config
>next-shell.example.com
>server {
>listen 192.168.178.105:4200;
>server_name next-shell.example.com;

But the backend is not configured to use SSL on the socket (or at 
least this is not something visible from the configuration 
provided).  Note no "ssl" parameter on the listen directive.

[...]

>When I try to open the page there is this error in the nginx error log
>{{{
>[error] 1103#1103: *1 SSL_do_handshake() failed (SSL: error:0A00010B:SSL routines::wrong version number) while SSL handshaking to upstream, client: 95.116.52.151, server: next-shell.example.com, request: "GET /favicon.ico HTTP/2.0", upstream: "https://192.168.178.105:4200/favicon.ico", host: "next-shell.example.com"
>}}}
> 
>Any idea, what I can do here?

The error is somewhat expected: you are trying to connect to 
non-SSL port using SSL, and this is expected to fail.

You should either configure SSL on the backend, or change proxying 
to use the "http" protocol instead.
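As an illustration of the first option, the backend could enable SSL on its listen socket along the lines of the following sketch (certificate paths are placeholders for the self-signed pair already in use):

```nginx
# Sketch only: enable SSL on the backend's listen socket so the
# proxy's "https" upstream connections can complete the handshake.
server {
    listen 192.168.178.105:4200 ssl;
    server_name next-shell.example.com;

    # placeholder paths for the existing self-signed certificate
    ssl_certificate     /etc/selfcerts/backend-cert.pem;
    ssl_certificate_key /etc/selfcerts/backend-key.pem;
}
```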

-- 
Maxim Dounin
http://mdounin.ru/


Re: nginx serving corrupt images

2023-02-23 Thread Maxim Dounin
Hello!

On Thu, Feb 23, 2023 at 10:35:16PM -0500, Dan Swaney wrote:

> Hi Maxim,
> 
> Here is the version details from my full recompile of NGINX 64-bit on
> Windows.  My code base is 2 months old, but it reproduced Saint's issue.
> 
> nginx version: nginx/1.23.3
> > built by cl 19.34.31937 for x64
> > *built with OpenSSL 3.1.0-beta1-dev*
> > TLS SNI support enabled
> > configure arguments: --with-cc=cl --builddir=objs --with-debug --prefix=.
> > --conf-path=conf/nginx.conf --pid-path=logs/nginx.pid
> > --http-log-path=logs/access.log --error-log-path=logs/error.log
> > --sbin-path=nginx.exe --http-client-body-temp-path=temp/client_body_temp
> > --http-proxy-temp-path=temp/proxy_temp
> > --http-fastcgi-temp-path=temp/fastcgi_temp
> > --http-scgi-temp-path=temp/scgi_temp --http-uwsgi-temp-path=temp/uwsgi_temp
> > --with-cc-opt=-DFD_SETSIZE=1024 --with-pcre=objs/lib/pcre2
> > --with-zlib=objs/lib/zlib --with-select_module --with-http_v2_module
> > --with-http_realip_module --with-http_addition_module
> > --with-http_sub_module --with-http_dav_module
> > --with-http_stub_status_module --with-http_flv_module
> > --with-http_mp4_module --with-http_gunzip_module
> > --with-http_gzip_static_module --with-http_auth_request_module
> > --with-http_random_index_module --with-http_secure_link_module
> > --with-http_slice_module --with-mail --with-stream --with-http_ssl_module
> > --with-mail_ssl_module --with-stream_ssl_module
> > --with-openssl=objs/lib/openssl
> > --add-module=objs/lib/spnego-http-auth-nginx-module --with-cc-opt='-I
> > objs/lib/krb5/objs/include'
> >
> 
> I'm using a OpenSSL beta build from earlier, but I was able to reproduce

Thanks for the details, OpenSSL 3.1.0-beta1 perfectly explains the 
issue you are seeing.  Avoid using it for anything but testing 
OpenSSL itself.

> Saint's issue and discovered the work-around with lowering the
> ssl_buffer_size to 4k,  Something for Saint to try out.

I don't think it's related.  The issue you are seeing is very 
specific to some broken OpenSSL development builds, and shouldn't 
appear anywhere else.

-- 
Maxim Dounin
http://mdounin.ru/


Re: nginx serving corrupt images

2023-02-23 Thread Maxim Dounin
Hello!

On Thu, Feb 23, 2023 at 09:42:29PM -0500, Dan Swaney wrote:

> Ah-ah...I caught the NGINX failure in the SSL response:

[...]

> > 2023/02/23 21:24:49 [debug] 4768#4528: *1 malloc: 02DC83A8F350:16384
> > 2023/02/23 21:24:49 [debug] 4768#4528: *1 SSL buf copy: 626
> > 2023/02/23 21:24:49 [debug] 4768#4528: *1 SSL buf copy: 15758
> > 2023/02/23 21:24:49 [debug] 4768#4528: *1 SSL to write: 16384
> > 2023/02/23 21:24:49 [debug] 4768#4528: ssl remove session: B87DD7B9:32
> > 2023/02/23 21:24:49 [debug] 4768#4528: shmtx lock
> > 2023/02/23 21:24:49 [debug] 4768#4528: shmtx unlock
> > 2023/02/23 21:24:49 [debug] 4768#4528: *1 SSL_write: -1
> > 2023/02/23 21:24:49 [debug] 4768#4528: *1 SSL_get_error: 1
> >
> > 2023/02/23 21:24:49 [crit] 4768#4528: *1 SSL_write() failed (SSL: error:0A0C0103:SSL routines::internal error) while sending response to client, client: 192.168.80.130, server: win10-web-svr.dreamstone.com, request: "GET /images/image001.jpg HTTP/1.1", host: "win10-web-svr.dreamstone.com", referrer: "https://win10-web-svr.dreamstone.com/"

The error suggests there is a bug in the SSL library you are 
using.  What does "nginx -V" show?

(IIRC, there was something like this in the OpenSSL development 
recently, though I believe it doesn't affect any of the released 
versions.  I may be wrong though.)

-- 
Maxim Dounin
http://mdounin.ru/


Re: nginx serving corrupt images

2023-02-23 Thread Maxim Dounin
Hello!

On Thu, Feb 23, 2023 at 06:32:11PM -0500, Saint Michael wrote:

> if I do this,
> root /static/duc/;
> 
> location / {
> 
> rewrite ^ /index.html break;
> #try_files $uri $uri/ /index.html;
> # try_files /index.html =404;
> 
> }
> and open https://x3x.us/
> the images cannot be seen, it does not work.

You've asked nginx to return index.html to all requests, and 
that's what it does.  Obviously enough, index.html is not an 
image, so no images are expected to be accessible.

As previously suggested, just remove everything from "location /", 
and let nginx do its job.

[...]

-- 
Maxim Dounin
http://mdounin.ru/


Re: nginx serving corrupt images

2023-02-23 Thread Maxim Dounin
Hello!

On Thu, Feb 23, 2023 at 08:13:54AM -0500, Saint Michael wrote:

> Suppose that my default html file forn a location is xyz.html, but there
> are hundreds. What is the canonical way to specify that and only that file
> for a / {}.

Locations are for a given URI prefix, not for a particular file.  
Locations define a configuration to be used for matching requests.  
See http://nginx.org/r/location for the detailed description of 
the location directive.

When mapping requests to the files, nginx simply concatenates the 
document root, as defined by the "root" directive, and URI from 
the requests.  For example, given "/path/to/html" root and a request 
to "/xyz.html", nginx will return the "/path/to/html/xyz.html" file.

If you want nginx to return only the particular file to all 
requests matched by the location, the most simple approach would 
be to use a rewrite (http://nginx.org/r/rewrite) to change the 
URI:

location / {
rewrite ^ /xyz.html break;
}

Note though that this is not going to be a good solution to the 
issues with likely non-existing files you are facing.  Rather, 
this will make such issues impossible to debug, and is generally 
not recommended - unless you have good reasons to implement 
something like this.  Instead, I would recommend using normal 
request processing and let nginx normally return files under 
the document root and reject requests to non-existing files.
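A minimal sketch of that normal setup (paths are illustrative):

```nginx
# Serve files directly from the document root; requests for files
# that do not exist get a 404 and an entry in the error log.
server {
    listen 80;
    root /path/to/html;

    location / {
        # intentionally empty: no rewrite/try_files masking errors
    }
}
```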

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/


Re: nginx serving corrupt images

2023-02-23 Thread Maxim Dounin
Hello!

On Thu, Feb 23, 2023 at 01:32:06PM +0200, Reinis Rozitis wrote:

> > Question:
> > if I don't have any like inside de 
> > location / {}
> > how does nginx delivers an html file out of many possibly found in the root 
> > for the location?
> 
> There is generally no need to specify location / { .. } if you don't provide 
> any specific configuration or behavior for it.
> 
> Having just:
> 
> server { 
> root /some/path;
> }
> 
> is enough to serve (static) files.  

I would rather recommend to always specify at least "location /", 
even if it's empty.

While not specifying any locations mostly works, this causes nginx 
to process requests in the "server location", which was originally 
introduced as a purely technical one.  Processing requests in it 
might trigger various associated bugs.

-- 
Maxim Dounin
http://mdounin.ru/


Re: nginx serving corrupt images

2023-02-22 Thread Maxim Dounin
Hello!

On Wed, Feb 22, 2023 at 09:31:43PM -0500, Saint Michael wrote:

> how would this line look?
> try_files $uri $uri/ /index.html;

You have to remove this line completely (or comment it out by 
using the "#" character at the start of the line).

> and how do set I the error log to the max level for this particular server
> {}?
> I am not getting any errors in /var/log/nginx/error.log

I would recommend configuring error_log at the global level only, 
not in individual servers.  Just keep error_log at the global 
level, outside of the http{} block, and remove any other error_log 
directives.

Using the level "info" is the most verbose mode you'll be able to 
obtain from non-debug nginx builds.  It is usually ok to run this 
in production for a long time, especially when you are trying to 
investigate some issues.  It also immediately shows that 
logging is properly configured, as nginx will log some 
notice-level messages at start.

-- 
Maxim Dounin
http://mdounin.ru/


Re: nginx serving corrupt images

2023-02-22 Thread Maxim Dounin
Hello!

On Wed, Feb 22, 2023 at 07:35:54PM -0500, Saint Michael wrote:

> a) The error does not have a single line.
> b) restarting does not fix it
> c) my nginx is not acting as a proxy

So it's certainly unrelated to the article about proxying and 
error propagation you've previously linked.

> d) it happened twice and both times I fixed it by turning gzip off,
> restarting, and back on.
> e) I also noticed that I requested the image file with wget, get a full
> HTML file for the whole document, but named as if it were the image file.
> 
> wget https://x3x.us/index_files/image001.jpg
> but `stat image001.jpg' showed it was the entire text HTML file.

Your configuration contains:

  try_files $uri $uri/ /index.html;

so returning "/index.html" for all non-existing files is what 
nginx is configured to do.

Are you sure the file you are requesting actually exists?  Is the 
file system with these files local, or it's something distributed 
and/or some network filesystem?  Are files static, or generated 
and/or synced from somewhere?

You may want to try removing the "try_files" directive, so nginx 
will be able to properly respond with 404 on non-existing files, 
and will log errors with details about attempts to request them.  
Such errors will look like:

2023/02/23 05:16:25 [error] 80125#100117: *1 open() "/path/to/file" failed (2: No such file or directory), client: ...

This should help to identify exact files being requested, so 
you'll be able to check them manually, as will contain exact error 
code, so it will be possible to identify various permission 
issues.

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Log Of Attempted Files

2023-02-22 Thread Maxim Dounin
Hello!

On Wed, Feb 22, 2023 at 02:39:36PM -0800, Ivo Welch wrote:

> dear nginx experts --- If I use an alias in my location, then
> $realpath shows in my log file what file was actually served up.
> Great.

There is no $realpath variable in nginx.

The $request_filename variable identifies the file being served 
(or going to be served) based on URI and root/alias directives in 
effect, see http://nginx.org/r/$request_filename.

> How can I show in the access log file what file(s) nginx tried?  I do
> not want the full rewrite engine in the debug log --- it contains way
> too many things I do not understand and way too few things I do [or,
> equivalently, I am just much too clueless].  I would ideally want just
> to see something like
> 
> /var/www/html/index.html not found
> /var/www/html/index.htm not found
> /var/www/html/index-debian.htm OK
> 
> Short of this, what is the $variable name for the last file for the
> log format in nginx?  pointer(s) appreciated.

I don't think it's possible to obtain such a list of files in a 
variable.
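What can be logged is the single path nginx ended up mapping the request to, via a custom log format using $request_filename (names and paths here are illustrative):

```nginx
# Log the resolved file for each request, not the full list of
# candidates tried by try_files/index.
log_format files '$request -> "$request_filename" ($status)';
access_log /var/log/nginx/files.log files;
```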

-- 
Maxim Dounin
http://mdounin.ru/


Re: nginx serving corrupt images

2023-02-22 Thread Maxim Dounin
Hello!

On Wed, Feb 22, 2023 at 02:46:29PM -0500, Saint Michael wrote:

> It's not a misconfiguration, it's a huge bug.
> I wasted two days of sleep for something that is 100% a bug.
> Please read here:
> https://laracasts.com/discuss/channels/general-discussion/homestead-nginx-serving-wrong-images-and-only-cut-in-the-middle
> He mentions the same exact problem and also he points to
> https://tech.blog.aknin.name/2011/11/04/nginxgzip-module-might-silently-corrupt-data-upon-backend-failure/
> where the author says that Niginx will not fix it.
> So he already tried, and he was rebuffed.

The fun fact is that the referenced article doesn't state "will 
not fix", but rather "not a top priority".  Further, proper error 
propagation is available in nginx for about 10 years now, since 
2013 (http://hg.nginx.org/nginx/rev/d3eab5e2df5f, nginx 1.5.3).  
Quoting CHANGES:

*) Change: now after receiving an incomplete response from a backend
   server nginx tries to send an available part of the response to a
   client, and then closes client connection.

As long as nginx have an information about an error, it will 
preserve this information and propagate it to the client.

Also note that it is only expected to make a difference if you are 
using nginx as a proxy, not to directly serve files.  And only in 
case of errors.  That is, if you are seeing the behaviour 
described, it might be a good idea to focus on the errors in the 
first place.

I don't think it's anyhow related though, as switching gzip off 
and back on, as seems to be "the fix" described in the first link, 
is not going to help with anything.  The important part is likely 
"restarted the server", so I would rather assume that "the server" 
(not sure if it refers to nginx or the whole server) was using an 
incorrect configuration and/or was out of some resources, and 
restart fixed it.

Summing the above, if you want to find out what goes wrong in your 
case - you may want to provide more details.  If you don't, nobody 
will be able to do it, unfortunately.

The most basic thing I would recommend in the first place is to 
look into nginx error log, it is likely to contain important 
information if something goes wrong.

-- 
Maxim Dounin
http://mdounin.ru/


Re: nginx does not start automatically, what have I done?

2023-02-17 Thread Maxim Dounin
Hello!

On Fri, Feb 17, 2023 at 12:34:23PM +, ng...@onekarlo.com wrote:

> How will I be able to fix it and avoid the use of this method?
> 
> Please do help me, thanks!

As suggested in the original response, first thing to check is the 
systemd service you are using to start nginx.  It should contain
"After=" and "Wants=" clauses with "network-online.target", see 
http://hg.nginx.org/pkg-oss/file/tip/debian/debian/nginx.service 
for an example.

-- 
Maxim Dounin
http://mdounin.ru/


Re: nginx does not start automatically, what have I done?

2023-02-17 Thread Maxim Dounin
Hello!

On Thu, Feb 16, 2023 at 03:13:59PM +, ng...@onekarlo.com wrote:

> Thanks for replying. I appreciate the thorough response!
> I found the solution online.
> And it worked!
> 
> nano /etc/sysctl.conf # allow processes to bind to the non-local address 
> net.ipv4.ip_nonlocal_bind = 1
> sysctl -p /etc/sysctl.conf
> 
> What do you think about it?

It should work around the issue, though the obvious downside is 
that the whole system will not be able to detect real errors, such 
as mistyped local addresses.

A better solution might be to ensure proper startup order.  This 
should also fix other potential issues, such as non-working DNS 
resolution during nginx startup.

On the other hand, if it works for you, it's probably good enough.

-- 
Maxim Dounin
http://mdounin.ru/


Re: nginx does not start automatically, what have I done?

2023-02-16 Thread Maxim Dounin
Hello!

On Thu, Feb 16, 2023 at 02:48:23PM +0800, JK wrote:

>Hello, it is my first time posting here, and I appreciate the
>opportunity to share my experience with nginx.
> 
> 
>Just to give you background, I am using a VPS from Racknerd, and I use
>CWP (Control Web Panel), which is pretty similar to cPanel.
> 
> 
>I tried to update the repo and use dnf upgrade -y, and what happened
>was, after restarting the server, nginx did not start automatically.
> 
> 
>Here is the log of nginx:
> 
> 
>● nginx.service - The nginx HTTP and reverse proxy server
>   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
>   Active: failed (Result: exit-code) since Wed 2023-02-15 17:29:03 EST; 7h ago
>  Process: 779 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE)
>  Process: 776 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
>Feb 15 17:29:02 racknerd-f5 systemd[1]: Starting The nginx HTTP and reverse proxy server...
>Feb 15 17:29:02 racknerd-f5 nginx[779]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
>Feb 15 17:29:03 racknerd-f5 nginx[779]: nginx: [emerg] bind() to 100.xx.xx.xx:80 failed (99: Cannot assign requested address)
>Feb 15 17:29:03 racknerd-f5 nginx[779]: nginx: configuration file /etc/nginx/nginx.conf test failed
>Feb 15 17:29:03 racknerd-f5 systemd[1]: nginx.service: Control process exited, code=exited status=1
>Feb 15 17:29:03 racknerd-f5 systemd[1]: nginx.service: Failed with result 'exit-code'.
>Feb 15 17:29:03 racknerd-f5 systemd[1]: Failed to start The nginx HTTP and reverse proxy server.
> 
> 
>The solution is to manually start nginx after a minute or two.

The error message "bind() to 100.xx.xx.xx:80 failed (99: Cannot 
assign requested address)" suggests the IP address you are trying 
to use is not available when nginx starts.

Possible options to fix this include:

- Make sure nginx systemd service properly depends on the network 
  being available (After=/Wants= network-online.target is usually 
  good enough, though you may need to also enable proper wait 
  service, see [1]).

- Reconfigure nginx to listen on the wildcard address instead, so 
  it won't try to bind on IP addresses not yet available when it 
  starts.
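The first option boils down to unit-file lines like these (compare with the pkg-oss nginx.service referenced in [1]):

```ini
# Sketch of the relevant [Unit] dependencies for nginx.service
[Unit]
After=network-online.target nss-lookup.target
Wants=network-online.target
```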

Hope this helps.

[1] https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/

-- 
Maxim Dounin
http://mdounin.ru/


Re: "ssl_stapling" ignored warning on boot with LE certs?

2023-02-16 Thread Maxim Dounin
Hello!

On Wed, Feb 15, 2023 at 12:25:09PM -0500, PGNet Dev wrote:

> i run
> 
>   nginx -v
>   nginx version: nginx/1.23.3 (Local Build)
> 
> nginx is launched on boot with a systemd service
> 
> my site's ssl enabled, using letsencrypt certs
> 
> in my boot logs, i see
> 
>   Feb 15 11:54:03 svr017 nginx[912]: nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "r3.o.lencr.org/" in the certificate "/sec/svr017/fullchain.ec.crt.pem"

[...]

> is this warning due to a nginx misconfig?  or a cert issue?

The error message suggests there is something wrong with DNS on 
your host.

If this happens only on boot but not when you restart/reload nginx 
after boot, this might indicate that DNS is not yet properly 
available when nginx starts.  One possible reason is that nginx 
systemd service is not properly configured to depend on DNS being 
available: for nginx to start properly you may want to ensure that 
there is Wants= and After= dependency on network-online.target, 
and After= dependency on nss-lookup.target, see nginx.service as 
shipped by nginx.org nginx packages[1] for an example.

[1] http://hg.nginx.org/pkg-oss/file/tip/debian/debian/nginx.service

-- 
Maxim Dounin
http://mdounin.ru/


Re: Nginx won't build with xslt into static linked binary

2023-02-12 Thread Maxim Dounin
Hello!

On Sun, Feb 12, 2023 at 05:31:30PM +0100, Kevin Weis via nginx wrote:

> Hi,
> I aim to build nginx from source, statically linked, and run into issues 
> when `http_xslt_module` is enabled or I'm building with NJS 0.7.10 
> (which also needs xslt).
> 
> I've created a simplified docker build to debug this issue at 
> https://git.compilenix.org/CompileNix/docker-nginx-static
> 
> The error occures while running `./auto/configure` if 
> `--with-ld-opt="-static"` is added. The build succeeds without it 
> (remove line 176 from Dockerfile).
> 
> The configure ends with:
> ```
> checking for libxslt ... not found
> checking for libxslt in /usr/local/ ... not found
> checking for libxslt in /usr/pkg/ ... not found
> checking for libxslt in /opt/local/ ... not found
> 
> ./auto/configure: error: the HTTP XSLT module requires the libxml2/libxslt
> libraries. You can either do not enable the module or install the libraries.
> ```
> 
> The issue is not that libxml2 and libxslt aren't installed, they are.
> 
> I've looked into why libxslt is reported to be "not found" by inspecting 
> the configure log file `objs/autoconf.err`, which shows the following error:
> ```
> checking for libxslt
> 
> /usr/lib/gcc/x86_64-alpine-linux-musl/12.2.1/../../../../x86_64-alpine-linux-musl/bin/ld: /usr/lib/gcc/x86_64-alpine-linux-musl/12.2.1/../../../../lib/libxml2.a(libxml2_la-xmlIO.o): in function `xmlFreeZMemBuff':
> /home/buildozer/aports/main/libxml2/src/libxml2-2.10.3/xmlIO.c:1482: undefined reference to `deflateEnd'
> /usr/lib/gcc/x86_64-alpine-linux-musl/12.2.1/../../../../x86_64-alpine-linux-musl/bin/ld: /usr/lib/gcc/x86_64-alpine-linux-musl/12.2.1/../../../../lib/libxml2.a(libxml2_la-xmlIO.o): in function `xmlGzfileOpenW':
> /home/buildozer/aports/main/libxml2/src/libxml2-2.10.3/xmlIO.c:1172: undefined reference to `gzdopen'
> 
> 
> ```
> 
> Here is the full build log: 
> https://compilenix.org/static/95Yeyaxv/build-static.log
> 
> If you are wondering why I also build libxslt from source instead of 
> using the alpine `libxslt-dev` package.. `libxslt-dev` does not contain 
> "libxslt.a", which is required for static linking.
> 
> Additionally: I've also looked into other distros like ubuntu, fedora or 
> gentoo. None of them install "libxslt.a" and show the same configure 
> error messages.
> 
> Steps to reproduce:
> ```
> git clone https://git.compilenix.org/CompileNix/docker-nginx-static.git
> cd docker-nginx-static
> ./build.sh -t docker-nginx-static --no-cache | tee build.log
> ```
> 
> Any help, suggestions or hints are very welcome :)

Static linking requires more than just "-static".  In particular, 
for all the libraries being used you'll have to manually provide 
library dependencies.

The particular error suggests that libxml2 depends on zlib, so 
something like "--with-ld-opt="-static -lz"" should help.  
Likely there will be other similar errors and you'll have to 
include other libraries as well.  For example, on FreeBSD static 
linking with libxslt requires "-static -lz -lm -llzma -lmd".

Something like

pkg-config --static libxslt

might be a good starting point for full list of required 
libraries, assuming libxslt package information is properly 
installed.  Alternatively, ldd on the dynamic library might be a 
good start.
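As a sketch, the dependency list can be queried and passed through to the build like this (the fallback list mirrors the FreeBSD example above and will differ per system):

```shell
# Print the static link flags libxslt needs; fall back to a typical
# list if pkg-config metadata for libxslt is not installed.
pkg-config --static --libs libxslt 2>/dev/null \
    || echo "-lxslt -lxml2 -lz -llzma -lm"
# These flags would then go into nginx's configure, e.g.:
#   ./auto/configure --with-http_xslt_module \
#       --with-ld-opt="-static $(pkg-config --static --libs libxslt)"
```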

Note well that in general it might not be a good idea to compile 
things statically unless you have some good reasons to and 
understand possible consequences.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Use of upstream keepalive_time

2023-02-02 Thread Maxim Dounin
Hello!

On Thu, Feb 02, 2023 at 01:03:13PM +0530, Vishwas Bm wrote:

> Can this keepalive_time be made infinite ?
> So that connection is not broken after default 1hr

There is no specific infinite value, though you can use a large 
enough keepalive_time instead.  Something like "keepalive_time 10d;" 
should be a good replacement for "infinite".

Note though that this might not be a good idea for the reasons 
outlined in the previous response.

[...]

> > The "keepalive_time" is a directive to limit total lifetime of the
> > connection, making it possible to free any resources associated
> > with the connection, notably allocated memory.  Further, in some
> > setups it might be important to periodically redo connection
> > authentication, notably re-validate peer certificates.
> >
> > The "keepalive_time" directive is mostly equivalent to
> > keepalive_requests, which is documented as follows:
> >
> > : Closing connections periodically is necessary to free
> > : per-connection memory allocations. Therefore, using too high
> > : maximum number of requests could result in excessive memory usage
> > : and not recommended.
> >
> > Note though that keepalive_time is 1 hour by default, and reopening
> > connections once per hour is not expected to have any performance
> > impact.  Rather, it is expected to be a hard limit on the total
> > connection lifetime on connections which are mostly idle and
> > therefore do not reach the "keepalive_requests" limit in a
> > reasonable time.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Use of upstream keepalive_time

2023-01-20 Thread Maxim Dounin
Hello!

On Fri, Jan 20, 2023 at 11:37:15AM +0530, Vishwas Bm wrote:

> I see that from 1.19.10 keepalive_time has been added.
> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_time
> 
> Also keepalive_timeout is present for idle connection
> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout
> 
> On checking the behaviour between these two, I see that keepalive_time is
> having higher precedence over keepalive_timeout.
> Even if the connection is not idle based on keepalive_timeout, the connection
> is still getting closed because of keepalive_time.
> 
> Is this expected behaviour?
> Also can I set keepalive_time to higher value say 24hours ? Any drawbacks
> with this ?
> Can this keepalive_time be disabled and priority given only to
> keepalive_timeout ?

The "keepalive_time" is a directive to limit total lifetime of the 
connection, making it possible to free any resources associated 
with the connection, notably allocated memory.  Further, in some 
setups it might be important to periodically redo connection 
authentication, notably re-validate peer certificates.

The "keepalive_time" directive is mostly equivalent to 
keepalive_requests, which is documented as follows:

: Closing connections periodically is necessary to free 
: per-connection memory allocations. Therefore, using too high 
: maximum number of requests could result in excessive memory usage 
: and not recommended.

Note though that keepalive_time is 1 hour by default, and reopening 
connections once per hour is not expected to have any performance 
impact.  Rather, it is expected to be a hard limit on the total 
connection lifetime on connections which are mostly idle and 
therefore do not reach the "keepalive_requests" limit in a 
reasonable time.
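
For reference, these limits can be set together on an upstream block; the values below are illustrative, not recommendations:

```nginx
upstream backend {
    server 10.0.0.2:8080;

    # Cache up to 16 idle connections per worker process.
    keepalive 16;

    # Close an idle cached connection after 60s of inactivity.
    keepalive_timeout 60s;

    # Close a cached connection after it has served 1000 requests.
    keepalive_requests 1000;

    # Hard cap on total connection lifetime, applied even to busy
    # connections; takes precedence over the idle timeout.
    keepalive_time 1h;
}

server {
    location / {
        proxy_pass http://backend;
        # Keepalive to upstreams requires HTTP/1.1 and a cleared
        # Connection header.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```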

-- 
Maxim Dounin
http://mdounin.ru/


Re: AW: nginx-1.23.3 on Win Server wth HTTPS

2023-01-18 Thread Maxim Dounin
Hello!

On Wed, Jan 18, 2023 at 12:41:32PM +, Kappes, Michael wrote:

> My "correct certificate information" is the Problem. Nginx tells me:
> 
> invalid number of arguments in "ssl_certificate" directive in 
> C:\nginx\nginx-1.23.3/conf/nginx.conf:102
> (please note: "\" and "/" in the same path?!)

This works fine, since Windows accepts both "\" and "/" as path 
separators (including in the functions nginx uses).

> C:\nginx\nginx-1.23.3\cert\ => here a my cert and key files
> At my nginx.conf file (the syntax) =>
> 
> ssl_certificate  C:\nginx\nginx-1.23.3\cert\1-Servername.cert.pem;
> ssl_certificate_key  C:\nginx\nginx-1.23.3\cert\1-Servername.cert.key;

When using "\" in nginx configuration, you have to be careful to 
properly escape it, since "\" is also an escape character, and, 
for example, "\n" will be interpreted as a newline character.  As 
such, using "/" is usually easier.
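
A minimal sketch of the same directives with forward slashes, which avoids the escaping pitfalls entirely (paths taken from the original message):

```nginx
ssl_certificate      C:/nginx/nginx-1.23.3/cert/1-Servername.cert.pem;
ssl_certificate_key  C:/nginx/nginx-1.23.3/cert/1-Servername.cert.key;
```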

On the other hand, this particular issue does not explain why you 
are seeing the "invalid number of arguments" error, it should be 
"cannot load certificate" with a garbled certificate path instead.

The "invalid number of arguments" error suggests you've typed 
multiple arguments in the directive (instead of just one it 
accepts).  This usually happens if a space character is 
accidentally used where it shouldn't, but in the directives as 
shown certainly there are no extra space characters.

Most likely, there is an issue with the text editor you use, and 
it somehow inserts additional non-printable characters, 
such as byte order mark or something like that, and this confuses 
nginx.

What editor do you use?  Does using another one and re-typing the 
directives make a difference?  E.g., Notepad is usually 
available on Windows and does not seem to corrupt text files.

[...]

-- 
Maxim Dounin
http://mdounin.ru/


Re: Unsafe AIO under FreeBSD?

2023-01-17 Thread Maxim Dounin
Hello!

On Tue, Jan 17, 2023 at 12:04:31PM -0600, Mahmoud Al-Qudsi wrote:

> Hello all,
> 
> By default, FreeBSD restricts potentially unsafe AIO operations (as
> determined by the target fd type) and operations like aio_read(2) will
> “falsely” return EOPNOTSUPP to avoid a potentially dangerous operation
> that can result in blocking the aio threadpool hanging the system or the
> process, per aio(4).
> 
> I’ve observed in production with an nginx/1.23.3 instance (compiled with
> --with-file-aio) running on FreeBSD 13.1-RELEASE-p5, configured with
> `aio on;` (and `use kqueue;` though I suspect that is not relevant),
> the following syslog entry:
> 
> pid 1125 (nginx) is attempting to use unsafe AIO requests - not
> logging anymore
> 
> My curiosity got the best of me and I decided to allow unsafe aio
> requests to see what would happen (`sysctl vfs.aio.enable_unsafe=1`).
> It’s been about 24 hours and I haven’t noticed any ill or adverse
> effects, at least judging by my scrutiny of the logs, though I intend to
> continue to closely monitor this server and see what happens.
> 
> My question is whether or not nginx does anything “advanced” with aio
> under FreeBSD, beyond using aio for operations on “sockets, raw disk
> devices, and regular files on local filesystems,” which is the “safe”
> list, again per aio(4), while other types of fds are blocked unless
> unsafe aio is enabled.
> 
> On this server, nginx is serving static files from various zfs datasets
> and is functioning as a reverse proxy to http and fastcgi upstreams. I
> do have a few 3rd party modules statically compiled into nginx, so I'm
> naturally limiting my question to core/stock nginx behavior to the best
> of its developers’ knowledge :)
> 
> I don't have all the system logs but in a sample of the logs preserved
> going back to November 2022 the "unsafe AIO" is not repeated anywhere,
> leading me to _suspect_ that this isn't "normal" nginx behavior and that
> I probably should *not* be enabling unsafe AIO - but curiosity is a hell
> of a drug!

The only aio operation nginx uses is aio_read(), and it does 
nothing "advanced" - just reads normal files which are being 
served by nginx.

Further, nginx explicitly checks files being served, and rejects 
non-regular files.  As such, the "unsafe AIO" checks shouldn't be 
triggered unless you are trying to serve something from non-local 
file systems (well, you indeed shouldn't).

In general, if an aio_read() error happens, you should be able to 
see corresponding error in nginx error log at the "crit" level.  
The error will look like "[crit] ... aio_read("/path/to/file") 
failed (45: Operation not supported)".  It should make it possible 
to find out what actually causes the error.
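
For reference, a typical setup that stays within the "safe" AIO cases on FreeBSD, i.e. regular files on a local filesystem; the paths are illustrative:

```nginx
location /downloads/ {
    root /var/www;          # regular files on a local (e.g. ZFS) filesystem
    aio on;                 # reads go through aio_read() on FreeBSD
    output_buffers 2 512k;  # buffers used for file reads
}
```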

-- 
Maxim Dounin
http://mdounin.ru/


Re: load balancer the best way to solve

2023-01-13 Thread Maxim Dounin
Hello!

On Thu, Jan 12, 2023 at 09:30:12PM -0600, Rick Gutierrez wrote:

> Hi list, I have a situation where I am looking for the best way to
> solve it, I have a load balancer with nginx 1.22.1 and behind it I
> have three backend servers:
> 
>/  -> App1
>   /
> load balancer.  / --> App2
> /
>/---> App3
> 
> if shutdown app1, the load balancer keeps sending traffic to app1 ,
> and the clients are in a lag waiting for app1 to respond, I think the
> load balancer should send all those clients to app2 and app3, but it
> doesn't.
> 
> it put me in research mode :)  and the nginx version can't do that,
> it's only in the plus version, correct me if I'm wrong, but "Voilà"
> https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/#hc_active
> , It gave me hope when reading that this option could help me, that by
> adding it to the nginx configuration it would not be able to continue
> sending requests, but it is not like that.

Certainly nginx can do that.  By default, all user requests are 
used by nginx to detect any upstream server failures, and re-route 
requests to other available servers.

Active health checks, which are indeed only available in the 
commercial version, are only different in that they also use requests 
generated periodically by nginx-plus itself.  This might improve 
service to some real clients in some specific cases, but is not 
generally required.
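
Passive failure detection is tuned per server in the upstream block; the values below are illustrative:

```nginx
upstream apps {
    # After 3 failed attempts within 30s, the server is considered
    # unavailable for the next 30s and requests go to the others.
    server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.13:8080 max_fails=3 fail_timeout=30s;
}

server {
    location / {
        proxy_pass http://apps;
        # Which conditions count as failures and trigger re-routing:
        proxy_next_upstream error timeout http_502 http_503;
        # Don't wait long for a dead backend:
        proxy_connect_timeout 5s;
    }
}
```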

> logs:
> 
> 2023/01/12 18:07:38 [error] 26895#26895: *834 no live upstreams while
> connecting to upstream, client: 44.210.106.130, server: demo.app.com,
> request: "GET /aqr HTTP/1.1", upstream:
> "http://paginaweb/aqr";, host: "demo.app.com",
> referrer: "http://173.255.X.X:80/aqr";
> 2023/01/12 18:07:38 [error] 26895#26895: *832 no live upstreams while
> connecting to upstream, client: 44.210.106.130, server: demo.app.com,
> request: "GET /99vt HTTP/1.1", upstream: "http://paginaweb/99vt";,
> host: "demo.app.com", referrer: "http://173.255.X.X:80/99vt";

The errors indicate that all your upstream servers were not 
responding properly, and were either all tried for the particular 
request, or were disabled based on "fail_timeout=30s max_fails=3" 
in your configuration.

Usually looking into other errors in the logs makes it immediately 
obvious what actually happened.  Alternatively, you may want to 
further dig into what happened with the requests by logging the 
$upstream_addr and $upstream_status variables (see 
https://nginx.org/r/$upstream_addr and 
https://nginx.org/r/$upstream_status for details).
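
A log format sketch that records the variables mentioned above, so each request shows which servers were tried and with what result:

```nginx
http {
    log_format upstreamlog
        '$remote_addr [$time_local] "$request" $status '
        'upstream=$upstream_addr upstream_status=$upstream_status '
        'request_time=$request_time upstream_time=$upstream_response_time';

    access_log /var/log/nginx/access.log upstreamlog;
}
```

When a request is retried on several servers, $upstream_addr and $upstream_status list all of them, comma-separated.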

[...]

-- 
Maxim Dounin
http://mdounin.ru/


Re: nginx ssl stream termination for MySQL backends

2023-01-07 Thread Maxim Dounin
Hello!

On Sat, Jan 07, 2023 at 10:48:31PM +0530, Vishwas Bm wrote:

> Below is the use case which I am trying:
> 
> client--->nginx stream(ssl termination) ---> MySQL Db
> 
> Connection between nginx and MySQL db is unencrypted.
> 
> When I send ssl request using MySQL client, I am getting ssl handshake
> timeout error. I do not see client hello from client in tcpdump capture.
> 
> Is the above usecase valid with nginx?
> Has someone tried this configuration ?

The MySQL protocol uses an internal SSL handshake establishment, 
which only happens if both client and server agree to use it.  
That is, it works similarly to STARTTLS in SMTP.  See here for 
details:

https://dev.mysql.com/doc/dev/mysql-server/latest/page_protocol_basic_tls.html

As such, it is not possible to do simple SSL offloading, 
something that nginx stream module can do for you, but rather a 
protocol-specific implementation is needed.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Where to compress text files and filter access

2022-12-29 Thread Maxim Dounin
Hello!

On Wed, Dec 28, 2022 at 11:05:01PM +, Mik J via nginx wrote:

> What is the best practice for these two situations:
> 1. Compress text files, should I make the compression on the 
> reverse proxy or on the backend server ?

In most cases, it makes sense to compress things on the frontend 
server.

In particular, this is because of at least the following factors:

1) Frontend servers are usually not just reverse proxies, but also 
serve some static resources.  As such, compression needs to be 
configured on frontend servers anyway.

2) Frontend servers are often used with multiple different backends.  
Further, in some cases they are used to generate responses based 
on subrequests to different resources, such as with SSI.  This 
makes compression on frontend servers easier or even the only 
possible solution.

3) Frontend servers are often used to cache backend responses, and 
proper caching of compressed responses might be problematic and/or 
inefficient (in particular, because the only mechanism available 
is Vary).

Note well that by default nginx uses HTTP/1.0 when connecting to 
upstream servers, and this in turn will disable gzip with default 
settings.  This naturally results in compression being done on 
frontend servers when nginx with default settings is used both as 
a backend and a frontend.
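
A frontend-side sketch: compress proxied responses on the frontend, leaving the backend connection on the default HTTP/1.0 (where a backend nginx with default settings won't gzip responses itself); the address is hypothetical:

```nginx
server {
    listen 443 ssl;

    # Compress text-like responses on the frontend.
    gzip on;
    gzip_types text/plain text/css application/json application/javascript;
    gzip_min_length 1024;
    # Also compress responses received from proxied servers
    # (by default, proxied responses are not compressed).
    gzip_proxied any;

    location / {
        # proxy_http_version defaults to 1.0.
        proxy_pass http://10.0.0.2:8080;
    }
}
```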

In some cases, it might make sense to compress on the backend 
servers, for example, to ensure that CPU usage for compression is 
balanced among multiple backend servers, or to minimize traffic 
between frontends and backends.  These are mostly about specific 
configurations though.

> 2. Deny access to specific files for example, files starting 
> with a dot .file, should I write the rule on the reverse proxy 
> or on the backend server ?

I would recommend both.  In particular, rules on the backend 
server will ensure that the access is denied where the file 
resides, making things safe even if the frontend server is 
somehow bypassed.  Rules on the frontend server ensure that 
requests are denied efficiently.
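
A common rule for both servers; a regex location like this takes precedence over prefix locations for matching URIs:

```nginx
# Deny any URI with a path component starting with a dot
# (.git, .env, .htaccess, ...); use on both proxy and backend.
location ~ /\. {
    deny all;   # returns 403; use "return 404;" instead to hide existence
}
```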

-- 
Maxim Dounin
http://mdounin.ru/


Re: website/admin behind my reverse proxy doesn't work

2022-12-29 Thread Maxim Dounin
Hello!

On Wed, Dec 28, 2022 at 11:01:11PM +, Mik J via nginx wrote:

> Hello,
> I have a website hosted on a server using nginx behind a nginx 
> reverse proxy but things don't work properly.
> https://mywebsite.org => works
> https://mywebsite.org/admin => doestn't work it redirects to 
> https://mywebsite.org
> 
> On my backend serverserver {
>     listen 80;
>     server_name mywebsite.org ;
>     index index.php;
>     root /var/www/htdocs/sites/mywebsite;...
>     location / {
>   try_files $uri $uri/ /index.php$is_args$args;
> 
>   location ~ \.php$ {
>   root   /var/www/htdocs/sites/mywebsite;
>   try_files $uri =404;
>   fastcgi_pass   unix:/run/php-fpm.mywebsite.org.sock;
>   fastcgi_split_path_info ^(.+\.php)(/.+)$;
>   fastcgi_index  index.php;
>   fastcgi_param  SCRIPT_FILENAME 
> $document_root$fastcgi_script_name;
>   include    fastcgi_params;
>   }
>     }}
> On my reverse proxyserver {
> #    listen 80;
> #    listen [::]:80;
>     listen 443 ssl;
>     listen [::]:443 ssl;
>     server_name http://mywebsite.org;...
>     root /var/www/htdocs/mywebsite;
>     location ^~ / {
>     proxy_pass  http://10.12.255.23:80;
>     proxy_redirect  off;
>     proxy_set_header    Host    $host;
>     proxy_http_version 1.1;
>     proxy_set_header  X-Real-IP    $remote_addr;
>     proxy_set_header  X-Forwarded-Host $host;
>     proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
>     proxy_set_header    Referer "http://mywebsite.org/";;
>     proxy_pass_header Set-Cookie;
>     proxy_set_header  X-Forwarded-Proto $scheme;
>     }
> }
> 
> 
> So I can't access 
> 
> In the backend server logs I see
> [28/Dec/2022:23:54:33 +0100] "GET /admin/ HTTP/1.1" 302 5 
> "http://mywebsite.org/"; ...
> [28/Dec/2022:23:54:33 +0100] "GET / HTTP/1.1" 499 0 "http://mywebsite.org/"; 
> ...

In your nginx configurations no redirects are returned.  
Accordingly, it looks like redirects you are seeing are returned 
by the backend's PHP code.  To find out why these are returned 
you'll probably have to look into the PHP code.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Upstream service php-fpm is up and running but reports {"errors": {"status_code": 502,"status": "php-fpm server is down"}}

2022-12-21 Thread Maxim Dounin
Hello!

On Tue, Dec 20, 2022 at 11:44:05PM +0530, Kaushal Shriyan wrote:

> On Sat, Dec 17, 2022 at 3:48 AM Maxim Dounin  wrote:
> 
> > On Fri, Dec 16, 2022 at 11:53:40PM +0530, Kaushal Shriyan wrote:
> >
> > > I have a follow up question regarding the settings below in nginx.conf
> > > where the php-fpm upstream server is processing all php files for Drupal
> > > CMS.
> > >
> > > fastcgi_intercept_errors off
> > > proxy_intercept_errors off
> > >
> > > User -> Nginx -> php-fpm -> MySQL DB.
> > >
> > > For example if the php-fpm upstream server is down then nginx should
> > render
> > > 502 bad gateway
> > >  if MySQL DB service is down then nginx should render
> > > 500 ISE.
> > >
> > > Is there a way to render any of the messages or any custom messages to
> > the
> > > User from the php-fpm upstream server that should be passed to a client
> > > without being intercepted by the Nginx web server. Any examples? I have
> > > attached the file for your reference. Please guide me. Thanks in advance.
> >
> > Not sure I understand what are you asking about.
> >
> > With fastcgi_intercept_errors turned off (the default) nginx does
> > not intercept any of the errors returned by php-fpm.
> >
> > That is, when MySQL is down and php-fpm returns 500 (Internal
> > Server Error), it is returned directory to the client.  When
> > php-fpm is down, nginx generates 502 (Bad Gateway) itself and
> > returns it to the client.
> >
> >
> Hi Maxim,
> 
> Apologies for the delay in responding. I am still not able to get it. The
> below settings will be hardcoded in nginx.conf. Is there a way to
> dynamically render the different errors to the client when the client hits
> http://mydomain.com/apis
> 
> error_page 502 /502.json;
> 
> location = /502.json {
> return 200 '{"errors": {"status_code": 502, "status": "php-fpm
> server is down"}}';
> }
> 
> Please guide me. Thanks in advance.

You can pass these error pages to a backend server by using 
proxy_pass or fastcgi_pass in the location, much like any other 
resource in nginx.

Note though that in most cases it's a bad idea, at least unless 
you have a dedicated backend to generate error pages: if a request 
to an upstream server failed, there is a good chance that another 
request to generate an error page will fail as well.

As such, it is usually recommended to keep error pages served by 
nginx itself, either as static files, or directly returned with 
"return".
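
A sketch of that recommendation; the JSON body mirrors the one from this thread, while the location and socket path are hypothetical:

```nginx
location /apis {
    fastcgi_pass unix:/run/php-fpm.sock;   # hypothetical socket path
    include      fastcgi_params;
}

# Served by nginx itself; does not depend on the backend being up.
error_page 502 /502.json;

location = /502.json {
    internal;   # not reachable by a direct client request
    default_type application/json;
    return 200 '{"errors": {"status_code": 502, "status": "php-fpm server is down"}}';
}
```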

-- 
Maxim Dounin
http://mdounin.ru/


Re: Upstream service php-fpm is up and running but reports {"errors": {"status_code": 502,"status": "php-fpm server is down"}}

2022-12-16 Thread Maxim Dounin
Hello!

On Fri, Dec 16, 2022 at 11:53:40PM +0530, Kaushal Shriyan wrote:

> I have a follow up question regarding the settings below in nginx.conf
> where the php-fpm upstream server is processing all php files for Drupal
> CMS.
> 
> fastcgi_intercept_errors off
> proxy_intercept_errors off
> 
> User -> Nginx -> php-fpm -> MySQL DB.
> 
> For example if the php-fpm upstream server is down then nginx should render
> 502 bad gateway
>  if MySQL DB service is down then nginx should render
> 500 ISE.
> 
> Is there a way to render any of the messages or any custom messages to the
> User from the php-fpm upstream server that should be passed to a client
> without being intercepted by the Nginx web server. Any examples? I have
> attached the file for your reference. Please guide me. Thanks in advance.

Not sure I understand what you are asking about.

With fastcgi_intercept_errors turned off (the default) nginx does 
not intercept any of the errors returned by php-fpm.

That is, when MySQL is down and php-fpm returns 500 (Internal 
Server Error), it is returned directly to the client.  When 
php-fpm is down, nginx generates 502 (Bad Gateway) itself and 
returns it to the client.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Upstream service php-fpm is up and running but reports {"errors": {"status_code": 502,"status": "php-fpm server is down"}}

2022-12-15 Thread Maxim Dounin
Hello!

On Thu, Dec 15, 2022 at 09:53:11PM +0530, Kaushal Shriyan wrote:

> 
> I am running the nginx version: nginx/1.22 as a reverse proxy server on
> CentOS Linux release 7.9.2009 (Core). When I hit http://mydomain.com/apis I
> see the below message on the browser even if the upstream server php-fpm
> server is up and running.
> 
> *{"errors": {"status_code": 502,"status": "php-fpm server is down"}}*
> 
> I have set the below in the nginx.conf file and attached the file for your
> reference.
> 
> if ($upstream_http_content_type = "") {
>  add_header 'Content-Type' 'application/json' always;
>  add_header 'Content-Type-3'
> $upstream_http_content_type$isdatatypejson"OK" always;
>   return 502 '{"errors": {"status_code": 502,"status":
> "php-fpm server is down"}}';
>   }

The "if" directive makes it possible to conditionally select 
configuration to handle a request, and therefore can only use 
information available before the request is handled.  In your 
case, before the request is sent to the upstream server.  See 
http://nginx.org/en/docs/http/ngx_http_rewrite_module.html for 
more details.

As such, $upstream_http_content_type will always be empty, since 
there is no upstream response yet, and therefore the 
configuration will always return 502.  This matches your 
observations.

An obvious fix would be to remove the configuration chunk in 
question.

Instead, you probably need something like:

error_page 502 /502.json;

location = /502.json {
    return 200 '{"errors": {"status_code": 502, "status": "php-fpm server is down"}}';
}

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/


Re: lmit_req with differents rules

2022-12-15 Thread Maxim Dounin
Hello!

On Thu, Dec 15, 2022 at 09:50:46AM -0300, Rejaine Monteiro wrote:

> Hello!!
> 
> I need apply different limit_req rules with different rules, like
> 
> limit_req_zone $binary_remote_addr zone=ipsrc:10m rate=1r/s;
> limit_req_zone $arg_token zone=apitoken:10m rate=5r/m;
> limit_req_zone $http_autorization zone=httpauth:10m rate=5r/s;
> 
> server {
> listen   443;
> server_name  api.domain.com;
>   }
> 
> location / {
> limit_req zone=ipsrc;
> limit_req zone=apitoken;
>  limit_req zone=httpauth;
> proxy_pass http://internal.api.com;
>}
> }
> 
> Would this be correct and should it work as expected?

This is certainly supported, see http://nginx.org/r/limit_req for 
details.

Note that it might be a good idea to add some meaningful "burst" 
to the configuration, as well as "nodelay".
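
For example, allowing short bursts without queueing delay, using the zones from the configuration above (burst values are illustrative):

```nginx
location / {
    # Allow up to "burst" requests above the configured rate;
    # with "nodelay", burst requests are served immediately
    # instead of being queued to pace the rate.
    limit_req zone=ipsrc    burst=10 nodelay;
    limit_req zone=apitoken burst=5  nodelay;
    limit_req zone=httpauth burst=10 nodelay;
    proxy_pass http://internal.api.com;
}
```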

-- 
Maxim Dounin
http://mdounin.ru/


Re: Certificate Error

2022-12-14 Thread Maxim Dounin
Hello!

On Wed, Dec 14, 2022 at 10:02:04PM -0500, Software Info wrote:

> Hi All,
> I would really appreciate some help with this sticky problem. I am
> using nginx as a reverse proxy. I have version 1.20.1 running on
> FreeBSD 13.1. Today I set up for a new domain. I got a wildcard
> certificate for mydomain.com from GoDaddy. I put the paths in
> nginx.conf but when I run nginx -t
> I get the following error: nginx:
> [emerg] SSL_CTX_use_PrivateKey("/usr/local/etc/nginx/ssl/domain.com.key")
> failed (SSL: error:0B080074:x509 certificate
> routines:X509_check_private_key:key values mismatch)
> nginx: configuration file /usr/local/etc/nginx /nginx.conf test failed
> 
> When I ran the test below to check the Public and Private keys, I get
> back the same checksum so I guess the Certs must be ok.
> # openssl rsa -modulus -in domain.com.key -noout | md5sum
> # openssl x509 -modulus -in domain.com.crt -noout | md5sum
> 
> This is the relevant section in my nginx.conf

[...]

>  ssl_certificate ssl/gd_bundle-g2-g1.crt;
>  ssl_certificate_key ssl/domain.com.key;

You are using "gd_bundle-g2-g1.crt" instead of "domain.com.crt"; 
this looks like the culprit.

See 

http://nginx.org/en/docs/http/configuring_https_servers.html

for some basic tips about configuring HTTPS servers.

[...]

>  ssl_trusted_certificate ssl/domain.com.crt;

And this also looks incorrect.
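
A corrected sketch, assuming domain.com.crt holds the server certificate and the GoDaddy bundle holds the intermediates (the concatenated filename is hypothetical):

```nginx
# Server certificate first, then intermediate(s), concatenated:
#   cat domain.com.crt gd_bundle-g2-g1.crt > domain.com.fullchain.crt
ssl_certificate     ssl/domain.com.fullchain.crt;
ssl_certificate_key ssl/domain.com.key;

# ssl_trusted_certificate is only needed for things like OCSP
# stapling, and should point at CA certificates, not at the
# server certificate:
# ssl_trusted_certificate ssl/gd_bundle-g2-g1.crt;
```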

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/


Re: nginx 400 bad request

2022-12-14 Thread Maxim Dounin
Hello!

On Wed, Dec 14, 2022 at 03:32:00PM +0530, Vishwas Bm wrote:

> Thanks for the response.
> What happens when content-length  provided in request is greater than the
> post body size?
> How does nginx handle this case ?
> Does it fail with 400 ?
> 
> Also how is the truncation/padding done in case of lesser or higher content
> length ?

In HTTP/1.x, Content-Length specifies the exact size of the 
request body.  Anything sent after it on the same persistent 
connection is expected to be the next request, and it is 
interpreted as an HTTP request.  As long as it is not a valid HTTP 
request, like in the case you are seeing in your logs, the 400 
(Bad request) error is generated by nginx.

For further information about the HTTP protocol consider reading 
relevant RFCs, notably RFC 9112 
(https://www.rfc-editor.org/rfc/rfc9112).
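
A wire-level illustration (not nginx configuration): if the declared Content-Length is 5 but more bytes follow, the trailing bytes are parsed as the start of the next request:

```http
POST /upload HTTP/1.1
Host: example.com
Content-Length: 5

helloEXTRA-GARBAGE
```

Here "hello" is consumed as the body; "EXTRA-GARBAGE" is then read as a request line and, being invalid, draws a 400 (Bad Request).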

-- 
Maxim Dounin
http://mdounin.ru/

