Re: announcing freenginx.org

2024-02-14 Thread Sergey Brester via nginx-devel

Hi Maxim,

it is a pity to hear such news...

I have a few comments and questions, which I have enclosed inline
below...


Regards,
Serg.

14.02.2024 19:03, Maxim Dounin wrote:


Hello!

As you probably know, F5 closed the Moscow office in 2022, and I no
longer work for F5 since then. Still, we've reached an agreement
that I will maintain my role in nginx development as a volunteer.
And for almost two years I was working on improving nginx and
making it better for everyone, for free.


And you did a very good job!


Unfortunately, some new non-technical management at F5 recently
decided that they know better how to run open source projects. In
particular, they decided to interfere with the security policy nginx
has used for years, ignoring both the policy and the developers' position.


Can you explain a bit more about that (or provide some examples
or a link to a public discussion of it, if one exists)?


That's quite understandable: they own the project, and can do
anything with it, including taking marketing-motivated actions,
ignoring the developers' position and the community. Still, this
contradicts our agreement. And, more importantly, I am no longer able
to control which changes are made to nginx within F5, and I no longer
see nginx as a free and open source project developed and
maintained for the public good.


Are you speaking only for yourself, or are there other developers who
share your point of view? Just for the record...
What about R. Arutyunyan, V. Bartenev and the others?
Could one expect a statement from Igor (Sysoev) on the subject?


As such, starting from today, I will no longer participate in nginx
development as run by F5. Instead, I'm starting an alternative
project, which is going to be run by developers, and not corporate
entities:

http://freenginx.org/ [1]


Why yet another fork? I mean, why not just "angie", for instance?

Additionally, I'd like to ask whether the name "freenginx" is really well
thought out. I mean:
  - it can be easily confused with free nginx (as opposed to nginx plus);
  - searching for it will be horrible: even with an exact search for
freenginx (within quotes, with plus, etc.), many internet search engines
would definitely include free nginx in the results;
  - possible copyright or trademark problems, etc.


The goal is to keep nginx development free from arbitrary corporate
actions. Help and contributions are welcome. Hope it will be
beneficial for everyone.


Just as an idea: switch the primary development to GH (GitHub)... (and
generally from hg to git).
I'm sure it would boost the development drastically, as well as bring
many new developers and let the community grow.

Links:
--
[1] http://freenginx.org/

___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: Core: Avoid memcpy from NULL

2023-12-15 Thread Dipl. Ing. Sergey Brester via nginx-devel

Enclosed are a few thoughts on the subject:

- since it is a very rare situation that one needs only a memcpy without
knowing whether a previous allocation may have failed (e. g. some of the
pointers were NULL), I too think that the caller should be responsible
for the check.

  So I would not extend ngx_memcpy or ngx_cpymem in that way.

- a rewrite of the `ngx_memcpy` define like here:
  ```
  + #define ngx_memcpy(dst, src, n) (void) ((n) == 0 ? (dst) : memcpy(dst, src, n))
  ```
  may introduce a regression or compatibility issues, e. g. fully
functioning code like this may become broken hereafter:
  ```
  ngx_memcpy(dst, src, ++len); // n is now evaluated twice in the macro, so len is incremented twice
  ```
  Sure, `ngx_cpymem` has the same issue, but it had it already before the
"fix"...
  Anyway, I'm always against such macros (with or without the check, it
would be better as an inline function instead; see the sketch below).

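A minimal sketch of that inline alternative (hypothetical name `ngx_memcpy_inline`; not part of nginx), which evaluates its arguments exactly once and also covers the `n == 0` case:

```c
/* hypothetical sketch: an inline replacement for the ngx_memcpy macro;
 * arguments are evaluated once, so ngx_memcpy_inline(dst, src, ++len)
 * behaves as expected, and n == 0 never reaches memcpy() */
static ngx_inline void *
ngx_memcpy_inline(void *dst, const void *src, size_t n)
{
    if (n == 0) {
        return dst;
    }

    return memcpy(dst, src, n);
}
```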

My conclusion:
  a fix of the affected places invoking `ngx_memcpy` and `ngx_cpymem`, and
possibly an assert in `ngx_memcpy` and `ngx_cpymem`, would be fully
enough, in my opinion.

Regards,
Sergey.

On 15.12.2023 03:41, Maxim Dounin wrote:


Hello!

On Wed, Dec 13, 2023 at 11:09:28AM -0500, Ben Kallus wrote:

Nginx executes numerous `memcpy`s from NULL during normal execution.
`memcpy`ing to or from NULL is undefined behavior. Accordingly, some
compilers (gcc -O2) make optimizations that assume `memcpy` arguments
are not NULL. Nginx with UBSan crashes during startup due to this
issue.

Consider the following function:
```C
#include <string.h>

int f(int i) {
char a[] = {'a'};
void *src = i ? a : NULL;
char dst[1];
memcpy(dst, src, 0);
return src == NULL;
}
```
Here's what gcc13.2 -O2 -fno-builtin will do to it:
```asm
f:
        sub     rsp, 24
        xor     eax, eax
        test    edi, edi
        lea     rsi, [rsp+14]
        lea     rdi, [rsp+15]
        mov     BYTE PTR [rsp+14], 97
        cmove   rsi, rax
        xor     edx, edx
        call    memcpy
        xor     eax, eax
        add     rsp, 24
        ret
```
Note that `f` always returns 0, regardless of the value of `i`.

Feel free to try for yourself at 
https://gcc.godbolt.org/z/zfvnMMsds


The reasoning here is that since memcpy from NULL is UB, the optimizer
is free to assume that `src` is non-null. You might consider this to
be a problem with the compiler, or the C standard, and I might agree.

Regardless, relying on UB is inherently un-portable, and requires
maintenance to ensure that new compiler releases don't break existing
assumptions about the behavior of undefined operations.

The following patch adds a check to `ngx_memcpy` and `ngx_cpymem` that
makes 0-length memcpy explicitly a noop. Since all memcpying from NULL
in Nginx uses n==0, this should be sufficient to avoid UB.

It would be more efficient to instead add a check to every call to
ngx_memcpy and ngx_cpymem that might be used with src==NULL, but in
the discussion of a previous patch that proposed such a change, a more
straightforward and tidy solution was desired.
It may also be worth considering adding checks for NULL memset,
memmove, etc. I think this is not necessary unless it is demonstrated
that Nginx actually executes such undefined calls.

# HG changeset patch
# User Ben Kallus 
# Date 1702406466 18000
#  Tue Dec 12 13:41:06 2023 -0500
# Node ID d270203d4ecf77cc14a2652c727e236afc659f4a
# Parent  a6f79f044de58b594563ac03139cd5e2e6a81bdb
Add NULL check to ngx_memcpy and ngx_cpymem to satisfy UBSan.

diff -r a6f79f044de5 -r d270203d4ecf src/core/ngx_string.c
--- a/src/core/ngx_string.c Wed Nov 29 10:58:21 2023 +0400
+++ b/src/core/ngx_string.c Tue Dec 12 13:41:06 2023 -0500
@@ -2098,6 +2098,10 @@
 ngx_debug_point();
 }

+if (n == 0) {
+return dst;
+}
+
 return memcpy(dst, src, n);
 }

diff -r a6f79f044de5 -r d270203d4ecf src/core/ngx_string.h
--- a/src/core/ngx_string.h Wed Nov 29 10:58:21 2023 +0400
+++ b/src/core/ngx_string.h Tue Dec 12 13:41:06 2023 -0500
@@ -103,8 +103,9 @@
  * gcc3 compiles memcpy(d, s, 4) to the inline "mov"es.
  * icc8 compile memcpy(d, s, 4) to the inline "mov"es or XMM moves.
  */
-#define ngx_memcpy(dst, src, n)   (void) memcpy(dst, src, n)
-#define ngx_cpymem(dst, src, n)   (((u_char *) memcpy(dst, src, n)) + (n))
+#define ngx_memcpy(dst, src, n) (void) ((n) == 0 ? (dst) : memcpy(dst, src, n))
+#define ngx_cpymem(dst, src, n)                                              \
+    ((u_char *) ((n) == 0 ? (dst) : memcpy(dst, src, n)) + (n))

 #endif

diff -r a6f79f044de5 -r d270203d4ecf src/http/v2/ngx_http_v2.c
--- 

Re: Non blocking delay in header filters

2023-04-21 Thread Dipl. Ing. Sergey Brester via nginx-devel
 

Well, it is impossible if you use memory blocks allocated by nginx
within the main request: memory allocated inside the request is released
when the request ends.

An example of how one can implement a non-blocking delay can be seen in
https://github.com/openresty/echo-nginx-module#echo_sleep [2].

But again, ensure you have not stored references to main request
structures (request-related memory). If you need some of them
(e. g. headers, args, etc.), duplicate them, and release them in the event
handler when the timer fires, or after processing your sub-requests.

However, if I were you, I'd rather implement it on the backend side (not
in nginx), e. g. using background sub-requests, either with post_action
(although unofficial and undocumented) or with mirror [3], configured in a
corresponding location.
That holds especially if one expects some transaction safety (e. g. for a
save operation in corner cases like an nginx restart/reload/shutdown/etc.
during the delay between the main response and all the sub-requests), as
in a pipeline-like procedure.
One could then register each step (your delayed request) in some queue,
for instance storing the request chain in a database or file, to make it
safe against shutdown.
Even without the transaction safety, your approach of implementing it
completely in nginx is questionable for many reasons (particularly if the
delay is not something artificial, but rather a real timing event). 

Regards,
Serg. 

20.04.2023 18:46, Ava Hahn wrote via nginx-devel: 

> Hello All, 
> 
> I am currently implementing a response header filter that triggers one or 
> more subrequests conditionally based on the status of the parent response.
> 
> I want to introduce some delay between the checking of the response and the 
> triggering of the subrequest, but I do not want to block the worker thread.
> 
> So far I have allocated an event within the main request's pool, added a 
> timer with my desired delay, and attached a callback that does two things. 
> 
> * triggers a subrequest as desired
> * proceeds to call the next response header filter
> 
> In the meantime, after posting the event my handler returns NGX_OK.
> 
> This is not working at all. Shortly after my filter returns NGX_OK the 
> response is finalized, and the pool is deallocated. When the timer wakes up a 
> segfault occurs in the worker process (in ngx_event_del_timer). Even if I 
> allocate the event in a separate pool that outlives the request it is still 
> not defined what happens when I try to trigger a subrequest on a request that 
> has been finalized.
> 
> My question is how can I make the finalization of the request contingent on 
> the associated/posted event being handled first?
> 
> OR, is there another facility for implementing non blocking delay that I can 
> use?
> 
> Thanks,
> Ava 
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx-devel [1]
 

Links:
--
[1] https://mailman.nginx.org/mailman/listinfo/nginx-devel
[2] https://github.com/openresty/echo-nginx-module#echo_sleep
[3] http://nginx.org/en/docs/http/ngx_http_mirror_module.html
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: Special character # in url

2022-12-12 Thread Sergey Brester via nginx-devel
 

Hi, 

firstly, please don't use nginx-devel for such questions; this is a
developer mailing list, reserved for development purposes only. 

Furthermore, it is not an nginx-related question at all. Browsers handle
the #-char as an internal jump to the anchor with that element ID on the
page, so nginx (or whatever web server) cannot and will not receive the
part of the URI after the #-char from your agent. If you need such a
character in a URI, you have to URL-encode it (# ==> %23). 

Regards,
sebres 

12.12.2022 11:43, Solanellas Llobet, Xavier wrote: 

> Hi everybody, 
> 
> I'm trying to redirect an url like /text1/text2#/text3 to url 
> /text1/text2/text3 
> 
> The problem seems to be that the # character cuts the url in the navigator, and 
> nginx receives /text1/text2 
> 
> Does anybody have a workaround on the nginx side? 
> 
> The solution seems to be javascript, but how can I implement it? 
> 
> Thanks! 
> Xavier 
> 
> ___
> nginx-devel mailing list -- nginx-devel@nginx.org
> To unsubscribe send an email to nginx-devel-le...@nginx.org
 ___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: ngx_log_error, ngx_slprintf signal safe

2022-11-07 Thread Sergey Brester via nginx-devel
 

Hi, 

The function ngx_slprintf is conditionally async-signal-safe (unless you
use the same buffer, supplied as the first argument, concurrently, or free
that buffer or some of the arguments in the signal handler; the function
is not atomic and can be interrupted by a signal). 

However, the function ngx_log_error (or rather ngx_log_error_core,
invoked internally) is a bit more complex (there are still structures like
log, log->connection etc. used internally, which must remain unchanged
until the function ends), but you can safely call the function inside the
handler, because it fills a buffer on the stack and then logs it in one
piece with the function write (which is async-signal-safe according to
POSIX) on the stderr handle.

In my opinion it is safe to call both functions inside an async-signal
handler, but you should avoid "touching" (e. g. releasing or modifying)
the structures and arguments that may be used by invocations of these
functions outside the handler. 

Regards,
Sergey. 

05.11.2022 21:32, Nikhil Singhal wrote: 

> Hi All,
> 
> Wanted to know if ngx_log_error and ngx_slprintf functions are async signal 
> safe or not. Any help would be appreciated.
> 
> Regards, 
> Nikhil Singhal 
> 
> ___
> nginx-devel mailing list -- nginx-devel@nginx.org
> To unsubscribe send an email to nginx-devel-le...@nginx.org
 ___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [PATCH] fix weakness by logging of broken header by incorect proxy protocol (IDS/IPS/LOG-analysis)

2022-09-28 Thread Dipl. Ing. Sergey Brester via nginx-devel
 

Sure, this was also my first intention. It's just that, after all, I
thought the whole buffer could be better, in order to give someone
searching for a bug a possibility to debug. But there are other aids that
would help, so indeed let it be so.

As for the rest, well, it is surely a subject for a different discussion.
However, I think someone who monitors logs containing foreign data is able
to ignore wrong chars or unsupported surrogates using appropriate encoding
facilities, but...
Still, I would suggest at least replacing the quote char, to provide the
opportunity to skip over the buffer in quotes with something similar to
`... header: "[^"]*" ...` if parsing of such lines is needed. 

So maybe this one either: 

+ for (p = buf; p != last; p++) {
+     if (*p == '"') {
+         *p = '\''; continue;
+     }
+     if (*p == CR || *p == LF) {
+         break;
+     }
+ }

Although I don't believe that safe data (like the IP, etc.) should come
after "unsafe" (foreign) input (especially of variable length); but that
is rather a matter of the common logging format for the error log.
I mean, normally one would rather expect something like this: 

- [error] 13868#6132: *1 broken header: "...unsafe..." while reading
PROXY protocol, client: 127.0.0.1, server: 0.0.0.0:80

+ [error] 13868#6132: while reading PROXY protocol, client: 127.0.0.1,
server: 0.0.0.0:80 - *1 broken header: "...unsafe..."

Unfortunately, the error log format is not configurable at the moment at all. 

28.09.2022 12:02, Roman Arutyunyan wrote: 

> Hi Sergey,
> 
> Thanks for reporting this. The issue indeed needs to be fixed. Attached is 
> a patch similar to yours that does this. I don't think we need to do anything
> beyond just cutting the first line since there's another similar place in
> nginx - ngx_http_log_error_handler(), where exactly that is implemented.
> 
> Whether we need to skip special characters when logging to nginx log is
> a topic for a bigger discussion and this will require a much bigger patch.
> I suggest that we only limit user data to the first line now.
> 
> [..]
> 
> --
> Roman Arutyunyan
 ___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


[PATCH] fix weakness by logging of broken header by incorect proxy protocol (IDS/IPS/LOG-analysis)

2022-09-26 Thread Dipl. Ing. Sergey Brester via nginx-devel
 

Hi, 

below is a patch to fix a weakness in the logging of a broken header from
an incorrect PROXY protocol exchange. 

If some service (IDS/IPS) is analyzing or monitoring the log file,
regularly formatted lines may simply be confused with lines written
unescaped, directly from a buffer supplied by a foreign source.
Not to mention it may open a certain vector allowing "injection" of user
input in order to avoid detection of failures, or even to simulate
malicious traffic from a legitimate service. 

How to reproduce: 

- enable proxy_protocol for a listener and start nginx (here localhost on
port 80);
- echo 'set s [socket localhost 80]; puts $s "test\ntest\ntest"; close $s'
| tclsh

Error-log before fix: 

2022/09/26 19:29:58 [error] 10104#17144: *3 broken header: "test
test
test
" while reading PROXY protocol, client: 127.0.0.1, server: 0.0.0.0:80 

Error-log after fix: 

2022/09/26 22:48:50 [error] 13868#6132: *1 broken header:
"test→→test→→test→→" while reading PROXY protocol, client: 127.0.0.1,
server: 0.0.0.0:80 

It is not advisable to log such foreign user input unescaped to the
formatted log file: instead of "...\ntest\n..." the attacker can write a
correctly formatted line simulating a 401 or 403 failure or a rate-limit
overrun, so an IDS could block an innocent service or mistakenly ban a
legitimate user. 

The patch proposes the simplest escaping (the LF/CR chars with a
substitute character (0x1a, rendered as → above), the double quote with a
single quote, and additionally every char greater than or equal to 0x80
with '?', to avoid possible logging of "broken" utf-8 sequences or
unsupported surrogates, just as the safest variant for a non-valid foreign
buffer), done in place in the malicious buffer directly (without
mem-alloc, etc.). 

Real life example -
https://github.com/fail2ban/fail2ban/issues/3303#issuecomment-1148691902


Regards,
Sergey. 

diff --git "a/src/core/ngx_proxy_protocol.c"
"b/src/core/ngx_proxy_protocol.c"
--- "a/src/core/ngx_proxy_protocol.c"
+++ "b/src/core/ngx_proxy_protocol.c"
@@ -139,6 +139,20 @@ skip:

 invalid:

+    p = buf;
+    while (p < last) {
+        const u_char c = *p;
+        switch (c) {
+        case LF:
+        case CR:
+            *p = (u_char) '\x1a';
+            break;
+        case '"':
+            *p = (u_char) '\'';
+            break;
+        default:
+            if (c >= 0x80) { *p = '?'; }
+            break;
+        }
+        p++;
+    }
 ngx_log_error(NGX_LOG_ERR, c->log, 0,
               "broken header: \"%*s\"", (size_t) (last - buf), buf);

 ___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: Precedence return directive and nested locations

2022-08-24 Thread Dipl. Ing. Sergey Brester via nginx-devel
 

OK, 

regarding the "fallback" location, this one can be used (an empty prefix,
i.e. the shortest possible match): 

location "" {
 return 444;
}
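
For the record, the conventional prefix fallback also works here, since `location /` matches any request URI that no more specific location claims:

location / {
    return 444;
}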

Regards,
Serg. 

24.08.2022 19:38, Sergey Brester via nginx-devel wrote: 

> Hi, 
> 
> it seems that the question of the precedence of a non-conditional _return_ 
> directive vs. nested _location_s is not really clear,
> or rather some constellations (like a fallback) are impossible, or else the 
> configuration may look weird.
> 
> For instance: 
> 
> server {
> server_name ...;
> 
> location ~ ^/(some-uri|other-uri) {
> return 200 ...;
> }
> 
> # fallback for any unmatched uri:
> return 444;
> } 
> 
> will ALWAYS return 444 for that server, no matter whether it matches the 
> location(s) or not.
> Basically the locations are totally superfluous here (despite being 
> specified), considering the return is non-conditional. 
> 
> Sure, the documentation says "Stops processing and returns the specified 
> _code_ to a client.",
> but normally nested locations (if matched) always have higher precedence 
> over anything else in the parent location. 
> 
> Furthermore the docu of ngx_http_rewrite_module [1] says at the beginning: 
> 
> * the directives of this module specified on the server [2] level are 
> executed sequentially;
> 
> * repeatedly: 
> 
> * a location [3] is searched based on a request URI;
> * the directives of this module specified inside the found location are 
> executed sequentially;
> 
> which is a bit ambiguous. But if it is to be understood literally, it is 
> clear that a return directive at server level
> bypasses all other locations (the sequence at the current level runs before 
> its locations).
> It just makes it hardly possible (up to impossible, depending on the 
> location tree) to use the return directive for a fallback case. 
> 
> To implement the fallback return, one can surely pack this into a location 
> like: 
> 
> location ~ .* { return 444; } 
> 
> It's just a bit ugly in my opinion, let alone that it would quasi disallow 
> the usage of longest-match prefix locations, 
> because they only apply if no match with a regular expression is found (then 
> the configuration of the 
> prefix location remembered earlier is used). 
> 
> So I assume:
> 
> * either it is a lack of documentation (and it must get a hint about the 
> precedence), 
> and/or, better still, a description of how one could achieve such a 
> "fallback" return (location or whatever).
> (But do we have a possibility at all to specify such a proper fallback 
> location, matching at the end, if nothing else matched?)
> 
> * or (improbably) this is a flaw and must be "fixed" or enhanced for the 
> non-conditional return case
> (unsure it wouldn't introduce some compat issue); 
> 
> Any thoughts? 
> 
> Regards,
> Sergey. 
> 
> ___
> nginx-devel mailing list -- nginx-devel@nginx.org
> To unsubscribe send an email to nginx-devel-le...@nginx.org
 

Links:
--
[1] http://nginx.org/en/docs/http/ngx_http_rewrite_module.html
[2] http://nginx.org/en/docs/http/ngx_http_core_module.html#server
[3] http://nginx.org/en/docs/http/ngx_http_core_module.html#location
___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Precedence return directive and nested locations

2022-08-24 Thread Dipl. Ing. Sergey Brester via nginx-devel
 

Hi, 

it seems that the question of the precedence of a non-conditional _return_
directive vs. nested _location_s is not really clear,
or rather some constellations (like a fallback) are impossible, or else
the configuration may look weird.

For instance: 

server {
 server_name ...;

 location ~ ^/(some-uri|other-uri) {
 return 200 ...;
 }

 # fallback for any unmatched uri:
 return 444;
} 

will ALWAYS return 444 for that server, no matter whether it matches
the location(s) or not.
Basically the locations are totally superfluous here (despite being
specified), considering the return is non-conditional. 

Sure, the documentation says "Stops processing and returns the specified
_code_ to a client.",
but normally nested locations (if matched) always have higher
precedence over anything else
in the parent location. 

Furthermore the docu of ngx_http_rewrite_module [1] says at the beginning: 

* the directives of this module specified on the server [2] level are
executed sequentially;

* repeatedly: 

* a location [3] is searched based on a request URI;
* the directives of this module specified inside the found location
are executed sequentially;

which is a bit ambiguous. But if it is to be understood literally, it is
clear that a return directive at server level
bypasses all other locations (the sequence at the current level runs
before its locations).
It just makes it hardly possible (up to impossible, depending on the
location tree) to use the return directive for a fallback case. 

To implement the fallback return, one can surely pack this into a
location like: 

location ~ .* { return 444; } 

It's just a bit ugly in my opinion, let alone that it would quasi
disallow the usage of longest-match prefix locations, 
because they only apply if no match with a regular expression is found
(then the configuration of the 
prefix location remembered earlier is used). 

So I assume:

* either it is a lack of documentation (and it must get a hint about
the precedence), 
and/or, better still, a description of how one could achieve such a
"fallback" return (location or whatever).
(But do we have a possibility at all to specify such a proper fallback
location, matching at the end, 
if nothing else matched?)

* or (improbably) this is a flaw and must be "fixed" or enhanced for
the non-conditional return case
(unsure it wouldn't introduce some compat issue); 

Any thoughts? 

Regards,
Sergey. 

Links:
--
[1] http://nginx.org/en/docs/http/ngx_http_rewrite_module.html
[2] http://nginx.org/en/docs/http/ngx_http_core_module.html#server
[3] http://nginx.org/en/docs/http/ngx_http_core_module.html#location
___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: Unsubscribe from Nginx Project

2022-02-24 Thread Sergey Brester via nginx-devel
 

To unsubscribe send an email to nginx-devel-le...@nginx.org

Sergey Brester
--Stop the pathetic Hypocrisy 

24.02.2022 13:51, Ranier Vilela wrote: 

> Hi, 
> 
> Please unsubscribe me from the Nginx mail list. 
> 
> Ranier Vilela 
> --Stop the War. 
> 
> ___
> nginx-devel mailing list -- nginx-devel@nginx.org
> To unsubscribe send an email to nginx-devel-le...@nginx.org
 ___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: PCRE2 support?

2021-10-18 Thread Dipl. Ing. Sergey Brester
 

Just for the record (and probably to reopen this discussion). 

https://github.com/PhilipHazel/pcre2/issues/26 [3] shows a serious bug in
the PCRE library (it is not safe to use anymore, at least without JIT), as
well as a statement from the PCRE developer regarding the end of life of
PCRE. 

Regards,
Serg. 

25.01.2019 01:12, Maxim Dounin wrote: 

> Hello!
> 
> On Thu, Jan 24, 2019 at 10:47:48AM -0800, PGNet Dev wrote:
> Well, this depends on your point of view. If a project which actually 
> developed the library fails to introduce support to the new version of the 
> library - for an external observer this suggests that there is something 
> wrong with the new version. FUD 'suggestions' simply aren't needed.

Sure, they aren't. What is wrong with PCRE2 is clear from the 
very start: it's a different library with different API. And 
supporting PCRE2 is a question of advantages of PCRE2 over PCRE.

> The Exim project didn't develop the pcre2 library ... Philip Hazel did 
> (https://www.pcre.org/current/doc/html/pcre2.html#SEC4 [1]), as a separate 
> project.

Philip Hazel developed both Exim and the PCRE library, "originally 
written for the Exim MTA". And PCRE2 claims to be a "major 
version" of the PCRE library.

> Exim's last (? something newer out there?) rationale for not adopting it was 
> simply, https://bugs.exim.org/show_bug.cgi?id=1878 [2] "The original PCRE 
> support is not broken. If it is going to go away, then adding PCRE2 support 
> becomes much more important, but I've seen nobody saying that yet."

I've posted this link in my first response in this thread 4 months 
ago. The same rationale applies to any project already using 
the PCRE library.

 

Links:
--
[1] https://www.pcre.org/current/doc/html/pcre2.html#SEC4
[2] https://bugs.exim.org/show_bug.cgi?id=1878
[3] https://github.com/PhilipHazel/pcre2/issues/26
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: Subrequest without returning to nginx

2021-05-04 Thread Dipl. Ing. Sergey Brester
 

What do you mean by "return to nginx"? Or, in other words, why should it
not? And how do you need to guarantee that a subrequest would take place
at all and would be considered on the other side? 

As for njs and "it also returns to nginx": either I don't really
understand your approach, or you simply missed the word "detached" in my
answer (and in the njs subrequest documentation). 

04.05.2021 16:49, Alfred Sawaya wrote: 

> mirror and post_action both return to nginx to complete the subrequest. 
> 
> njs also does an event-driven subrequest (ie gives back a promise and set a 
> callback), so it also returns to nginx to complete the subrequest. 
> 
> On 04/05/2021 16:32, Dipl. Ing. Sergey Brester wrote: 
> 
> Hi, 
> 
> see how the directive mirror [2] or post_action doing this. 
> 
> Also take a look at njs [3], how it can make a detached subrequest. 
> 
> Regards,
> Serg. 
> 
> 04.05.2021 16:11, Alfred Sawaya wrote: 
> 
> Hello,
> 
> I am currently converting an Apache module to Nginx. This module uses
> subrequests and needs (for now) to execute the subrequest without
> unwinding the stack (ie without returning to nginx).
> 
> I tried to call ngx_http_run_posted_requests by hand, but it does not
> work as the upstream socket needs to get polled some time.
> 
> So I wonder, is there any way to do this ?
> 
> Of course I know that I shouldn't do it like this, but the current
> module is not reentrant and poorly architectured. I will eventually
> refactor it but later.
> 
> Thanks,
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel [1]
 

Links:
--
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel
[2] https://nginx.org/en/docs/http/ngx_http_mirror_module.html
[3] http://nginx.org/en/docs/njs/reference.html
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: Subrequest without returning to nginx

2021-05-04 Thread Dipl. Ing. Sergey Brester
 

Hi, 

see how the mirror [2] directive or post_action does this. 

Also take a look at njs [3] and how it can make a detached subrequest. 
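
A minimal sketch of the mirror variant (hypothetical upstream names), which fires an in-background copy of each request at a second backend, without the main request waiting on the result:

location / {
    mirror /mirror;
    proxy_pass http://backend;
}

location = /mirror {
    internal;
    proxy_pass http://side_backend$request_uri;
}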

Regards,
Serg. 

04.05.2021 16:11, Alfred Sawaya wrote: 

> Hello,
> 
> I am currently converting an Apache module to Nginx. This module uses
> subrequests and needs (for now) to execute the subrequest without
> unwinding the stack (ie without returning to nginx).
> 
> I tried to call ngx_http_run_posted_requests by hand, but it does not
> work as the upstream socket needs to get polled some time.
> 
> So I wonder, is there any way to do this ?
> 
> Of course I know that I shouldn't do it like this, but the current
> module is not reentrant and poorly architectured. I will eventually
> refactor it but later.
> 
> Thanks,
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel [1]
 

Links:
--
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel
[2] https://nginx.org/en/docs/http/ngx_http_mirror_module.html
[3] http://nginx.org/en/docs/njs/reference.html
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

"detach" upstream from request (or allow send from keepalive saved connection) / FastCGI multiplexing

2021-02-10 Thread Dipl. Ing. Sergey Brester
 

Hi, 

I have a question: how can an upstream be properly "detached" from a
request in case the request gets closed by the client? 

Some time ago I implemented FastCGI multiplexing for nginx, which works
pretty well except for the case where a request gets closed by the client
side. In such a case _ngx_http_upstream_finalize_request_ would close the
upstream connection.
This may not be so bad with a single connection per request, but it is
very annoying in the multiplexing case, since closing such an upstream
connection means a simultaneous close for N requests not involved with
that client, but served through the same upstream pipe. 

So I was trying to implement an abortive "close", using a send of the
ABORT_REQUEST (2) packet, as per the FastCGI protocol.
Since the _upstream->abort_request_ handler is not yet implemented in
nginx, my first attempt was just to extend
_ngx_http_fastcgi_finalize_request_ in order to create a new send buffer
there and restore _r->upstream->keepalive_, so that _u->peer.free_ in
_ngx_http_upstream_finalize_request_ would retain the upstream connection. 
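
For reference, the ABORT_REQUEST record itself is trivial to build; a minimal sketch per the FastCGI spec (hypothetical helper name, not nginx code):

```c
/* hypothetical sketch: an 8-byte FastCGI ABORT_REQUEST record
 * (type 2, empty body) for request id `id` */
static void
fcgi_build_abort_record(u_char out[8], ngx_uint_t id)
{
    out[0] = 1;                            /* FCGI_VERSION_1     */
    out[1] = 2;                            /* FCGI_ABORT_REQUEST */
    out[2] = (u_char) ((id >> 8) & 0xff);  /* requestIdB1        */
    out[3] = (u_char) (id & 0xff);         /* requestIdB0        */
    out[4] = 0;                            /* contentLengthB1    */
    out[5] = 0;                            /* contentLengthB0    */
    out[6] = 0;                            /* paddingLength      */
    out[7] = 0;                            /* reserved           */
}
```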

So I see "free keepalive peer: saving connection" logged (and the
connection is not closed), but probably because
_ngx_http_upstream_free_keepalive_peer_ moved it to the cached queue, it
does not send the ABORT_REQUEST packet to the fastcgi side.

Is there some proper way to retain an upstream connection (still able to
send) which is detached from the request at or before close, so that such
an abortive "disconnect" can be handled through the upstream pipe? In
other words, to avoid an upstream close, or to save the keepalive
connection, in _ngx_http_fastcgi_finalize_request_. 

Regards,
Sergey.
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: One last try - large long-running worker tasks

2020-11-10 Thread Dipl. Ing. Sergey Brester
 

You could do it similarly to how the proxy module buffers the response;
for instance, see the proxy_buffering [2] directive: 

_When buffering is enabled, nginx receives a response from the proxied
server as soon as possible, saving it into the buffers set by the
proxy_buffer_size [3] and proxy_buffers [4] directives. If the whole
response does not fit into memory, a part of it can be saved to a
temporary file [5] on the disk. Writing to temporary files is controlled
by the proxy_max_temp_file_size [6] and proxy_temp_file_write_size [7]
directives._ 

This and the other communicating modules (like fastcgi, scgi or uwsgi)
use upstream buffering of the response. The handling around upstream
buffering is almost the same in all modules.
It is already event-driven: the handler is called on readable, per
incoming response chunk (or on writable of the downstream). 

Basically, depending on how your module architecture is built, you could:

* either use the default upstream buffering mechanism (if you have
something like an upstream or can simulate that). In this case you have to
set certain properties of r->upstream: buffering, buffer_size, bufs.num
and bufs.size, temp_file_write_size and max_temp_file_size, and of course
register the handler reading the upstream pipe (see the sketch after this
list);
* or organize your own response buffering as it is implemented in
ngx_event_pipe.c and ngx_http_upstream.c; take a look there for
implementation details.
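
A minimal sketch of the first option (hypothetical function name; the fields are those of ngx_http_upstream_t and its conf structure, the values purely illustrative):

```c
/* hypothetical sketch: enabling default upstream response buffering */
static void
my_setup_upstream_buffering(ngx_http_request_t *r)
{
    ngx_http_upstream_t  *u = r->upstream;

    u->buffering = 1;

    /* buffer geometry, normally taken from the module configuration */
    u->conf->buffer_size = ngx_pagesize;
    u->conf->bufs.num = 8;
    u->conf->bufs.size = ngx_pagesize;

    /* on-demand fallback to a temporary file under load */
    u->conf->max_temp_file_size = 1024 * 1024 * 1024;
    u->conf->temp_file_write_size = 2 * ngx_pagesize;
}
```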

As for performance (disk I/O, etc.) - it depends (buffer size, system
cache, mount type of the temp storage, speed of the clients downstream,
etc.). But if you configure the buffers large enough, nginx can use them
as long as possible, and storing in a temp file can be considered a safe
on-demand fallback to smooth out load peaks and avoid an OOM situation.
Using kernel pipe buffers could surely be faster, but indirectly you'd
just relocate the potential OOM issue from the nginx process to the
system. 

Regards,
Sergey 

10.11.2020 02:54, Jeff Heisz wrote: 

> Hi all, I've asked this before with no response, trying one last time
> before I just make something work.
> 
> I'm making a custom module for nginx that does a number of things but
> one of the actions is a long-running (in the nginx sense) task that
> could produce a large response. I've already got proper processing
> around using worker tasks for the other long-running operations that
> have small datasets, but worry about accumulating a large amount of
> memory in a buffer chain for the response. Ideally it would drain as
> fast as the client can consume it and throttle appropriately, there
> could conceivably be gigabytes of content.
> 
> My choices (besides blowing all of the memory in the system) are:
> 
> - write to a temporary file and attach a file buffer as the response,
> less than ideal as it's essentially translating a file to begin with,
> so it's a lot of disk I/O and performance will be less than stellar.
> From what I can tell, this is one of the models for the various CGI
> systems for caching, although in my case caching is not of use
> 
> - somehow hook into the eventing system of nginx to detect the write
> transitions and implement flow control directly using threading
> conditionals. I've tried this for a few weeks but can't figure out
> the 'right' thing to make the hooks work in a separate module without
> changing the core nginx code, which I'm loathe to do (unless you are
> looking for someone to contribute such a solution, but I'd probably
> need some initial guidance)
> 
> - attach a kernel pipe object (yah yah, won't work on Windows, don't
> care) to each of my worker instances and somehow 'connect' that as an
> upstream-like resource, so that the nginx event loop handles the
> read/write consumption and the thread automatically blocks when full
> on the kernel pipe. Would need some jiggery to handle reuse and
> start/end markers. Also not clear if I can override the connection
> model for the upstream without again changing core nginx server code
> 
> Any thoughts? Not looking for code here (although telling me to look
> at the blah-blah-blah that does exactly this would be awesome), but if
> someone who is more familiar with the inner workings of the nginx data
> flow could just say which solution is a non-starter (so I don't waste
> time trying to make it work) or even which would be a suitable
> solution would be awesome!
> 
> jmh
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel [1]
 

Links:
--
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel
[2]
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering
[3]
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size
[4]
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers
[5]
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_temp_path
[6]

Re: nginx for Windows - WSASend() socket error 10057

2020-02-27 Thread Dipl. Ing. Sergey Brester
 

On 27.02.2020 12:47, Maxim Dounin wrote: 

> Further, I don't see what you are trying to debug here. As I see 
> from the messages in this thread, the issue was lack of IPv6 
> listener while using an IPv6 address in auth_http, and it is 
> already resolved.

Well, as for the initial issue - sure. :) But it "continues" with the new
question "how could I delay", and one of my suggestions was "use
limit_req, Luke" (which for some reason did not work against the location
of `auth_http`). 

I don't know why people put several questions in one thread, but you
should ask them, not me. 

So he can indeed try "Auth-Wait" now (I totally forgot about that, thanks
for the reminder). 

Anyway, the suggestion to debug came after the "strange" behavior of
limit_req on "auth_http" requests, 
especially as you say that this location is called each time the client
authenticates: 

> There is no keep-alive nor any cache for established connections in auth_http.
> 
> The auth_http server is called for each authentication attempt.

Regards,
Sergey
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: nginx for Windows - WSASend() socket error 10057

2020-02-26 Thread Sergey Brester
 

There are several possibilities to introduce a latency in nginx: 

- limit_req -
https://www.nginx.com/blog/rate-limiting-nginx/#Two-Stage-Rate-Limiting
[14] 

- Maxim's ngx_http_delay (I used it more for development purposes, like
test or simulation of load etc); 

- some "slow" upstream backend that does nothing, just waits
(preferably asynchronously). 

You seem to have some upstream (php?) serving the auth_http requests, so
you could, for example, implement some delay in case of a failed attempt
within php (or whatever you use there as a backend).
Note that it is always good if the latency is implemented asynchronously
(without a real "sleep"), in order to avoid possible overload under
DDoS-like circumstances. 
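
A minimal sketch of the limit_req variant (hypothetical zone name and rate), throttling the auth_http location from this thread; the limit_req_zone directive belongs in the http {} block:

# one request per second per client address (hypothetical values)
limit_req_zone $binary_remote_addr zone=auth:10m rate=1r/s;

server {
    listen 9000;

    location /cgi-bin/nginxauth.cgi {
        limit_req zone=auth burst=5;   # delay excess requests
        add_header Auth-Status OK;
        add_header Auth-Server 127.0.0.1;
        add_header Auth-Port 25;
        return 204;
    }
}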

Regards,
Sergey. 

On 26.02.2020 01:59, Yury Shpakov wrote: 

> Hi Sergey, 
> 
> You mentioned that you can set up some delays in responses. 
> How can I do it? 
> Adding this module during compilation? 
> https://github.com/openresty/echo-nginx-module [2] 
> 
> I tried but it didn't want to compile. I got many compilation errors. 
> 
> Maybe I can set up delays somehow else? 
> 
> Thank you, 
> Yury 
> 
> -
> 
> FROM: nginx-devel  on behalf of Yury Shpakov 
> 
> SENT: Friday, February 14, 2020 6:08 PM
> TO: Sergey Brester 
> CC: nginx-devel@nginx.org 
> SUBJECT: Re: nginx for Windows - WSASend() socket error 10057 
> 
> So what is the meaning of Auth-Server and Auth-Port headers? So it's relevant 
> only when nginx works as SMTP Proxy (not SMTP Server)? And these are 
> host/port where to redirect SMTP requests? 
> Yeah, I was all the time surprised -- how come, it's set as Proxy but there 
> is no setting where it redirects SMTP communication to. A little bit 
> unexpected place for those setting. 
> 
> Well, let me try... 
> 
> I ran Fake SMTP Server on port 25.(I found on Internet some fake SMTP 
> Server). I configured my test SMTP client to localhost:25 (later to 
> 127.0.0.1:25). They send/receive successfully. So both SMTP Client and (fake) 
> SMTP Server work fine. 
> 127.0.0.1 works fine too. 
> 
> I re-configured my test SMTP client to localhost:8025 (tried 127.0.0.1:8025 
> too). As well, I changed this section of config as follows: 
> http {
> 
> server { 
> listen 9000; 
> 
> location /cgi-bin/nginxauth.cgi { 
> add_header Auth-Status OK; 
> add_header Auth-Server 127.0.0.1; # backend ip 
> add_header Auth-Port 25; # backend port 
> return 204; 
> } 
> } 
> } 
> The same error: 
> 2020/02/14 17:37:18 [error] 15260#3328: *5 WSASend() failed (10057: A request 
> to send or receive data was disallowed because the socket is not connected 
> and (when sending on a datagram socket using a sendto call) no address was 
> supplied) while in http auth state, client: 127.0.0.1, server: 0.0.0.0:8025 
> 
> UPDATE: 
> Detailed logging with debug information helped a lot. 
> This is what I noticed in there: 
> 
> 2020/02/14 17:40:28 [debug] 3940#22096: *1 smtp auth state
> 
> 2020/02/14 17:40:28 [debug] 3940#22096: *1 WSARecv: fd:584 rc:0 24 of 4096 
> 2020/02/14 17:40:28 [debug] 3940#22096: *1 smtp rcpt to:"RCPT 
> TO:" 
> 2020/02/14 17:40:28 [debug] 3940#22096: *1 event timer del: 584: 1172123084 
> 2020/02/14 17:40:28 [debug] 3940#22096: *1 malloc: 02F8C260:2048 
> 2020/02/14 17:40:28 [debug] 3940#22096: *1 stream socket 588 
> 2020/02/14 17:40:28 [debug] 3940#22096: *1 connect to [::1]:9000, fd:588 #2 
> 2020/02/14 17:40:28 [debug] 3940#22096: *1 select add event fd:588 ev:768 
> 2020/02/14 17:40:28 [debug] 3940#22096: *1 select add event fd:588 ev:16 
> 2020/02/14 17:40:28 [debug] 3940#22096: *1 event timer add: 588: 
> 6:1172123084 
> 2020/02/14 17:40:28 [debug] 3940#22096: *1 event timer add: 588: 
> 6:1172123084 
> So it's trying to use IPv6 rather than IPv4. 
> And below: 
> 
> 2020/02/14 17:40:29 [debug] 3940#22096: *1 delete posted event 03171170
> 
> 2020/02/14 17:40:29 [debug] 3940#22096: *1 mail auth http write handler 
> 2020/02/14 17:40:29 [debug] 3940#22096: *1 WSASend: fd:588, -1, 0 of 306 
> 2020/02/14 17:40:29 [error] 3940#22096: *1 WSASend() failed (10057: A request 
> to send or receive data was disallowed because the socket is not connected 
> and (when sending on a datagram socket using a sendto call) no address was 
> supplied) while in http auth state, client: 127.0.0.1, server: 0.0.0.0:8025 
> 2020/02/14 17:40:29 [debug] 3940#22096: *1 event timer del: 588: 1172123084 
> 2020/02/14 17:40:29 [debug] 3940#22096: *1 event timer del: 588: 1172123084 
> So, I replaced localhost with 127.0.0.1 like this: 
> auth_http 127.0.0.1:9000/cgi-bin/nginxauth.cgi;
> 
> And it worked, since I forced it to use IPv4. 
> Any idea how to use ho

Re: nginx for Windows - WSASend() socket error 10057

2020-02-14 Thread Sergey Brester

Regards,
Sergey 

13.02.2020 22:45, Yury Shpakov wrote: 

> Hi Sergey, 
> 
> I reconfigured the config file as follows: 
> 
> === === === 
> 
> #user nobody; 
> worker_processes 1; 
> 
> #error_log logs/error.log; 
> #error_log logs/error.log notice; 
> #error_log logs/error.log info; 
> 
> #pid logs/nginx.pid; 
> 
> events { 
> worker_connections 1024; 
> } 
> 
> mail { 
> server_name localhost; 
> auth_http localhost:9000/cgi-bin/nginxauth.cgi; 
> # auth_http none; 
> 
> smtp_auth none; 
> # smtp_auth login plain cram-md5; 
> # smtp_capabilities "SIZE 10485760" ENHANCEDSTATUSCODES 8BITMIME DSN; 
> xclient off; 
> 
> server { 
> listen 8025; 
> protocol smtp; 
> proxy on; 
> proxy_pass_error_message on; 
> } 
> } 
> 
> http { 
> server { 
> listen 9000; 
> 
> location /cgi-bin/nginxauth.cgi { 
> add_header Auth-Status OK; 
> add_header Auth-Server 127.0.0.2; # backend ip 
> add_header Auth-Port 143; # backend port 
> return 204; 
> } 
> } 
> } 
> === === === 
> 
> And now it's responding on port 9000 as expected: 
> 
> === === === 
> C:WINDOWSsystem32>curl -H "Auth-Method: plain" -H "Auth-User: user" -H 
> "Auth-Pass: pwd" -H "Auth-Protocol: imap" -H "Auth-Login-Attempt: 1" -i 
> http://127.0.0.1:9000/cgi-bin/nginxauth.cgi
> 
> HTTP/1.1 204 No Content 
> Server: nginx/1.17.9 
> Date: Thu, 13 Feb 2020 21:30:54 GMT 
> Connection: keep-alive 
> Auth-Status: OK 
> Auth-Server: 127.0.0.2 Auth-Port: 143 
> === === === 
> 
> However I'm still experiencing the same issue (in log file): 
> 
> === === === 
> 2020/02/13 16:29:24 [notice] 35048#26192: signal process started
> 
> 2020/02/13 16:29:34 [error] 31732#22720: *1 WSASend() failed (10057: A 
> request to send or receive data was disallowed because the socket is not 
> connected and (when sending on a datagram socket using a sendto call) no 
> address was supplied) while in http auth state, client: 127.0.0.1, server: 
> 0.0.0.0:8025 === === === 
> 
> Tried under both admin and regular user. 
> 
> Any further ideas how to get it fixed please? 
> 
> Thank you, 
> Yury 
> 
> -
> 
> FROM: Sergey Brester 
> SENT: Wednesday, February 12, 2020 1:51 PM
> TO: Yury Shpakov 
> CC: nginx-devel@nginx.org 
> SUBJECT: Re: nginx for Windows - WSASend() socket error 10057 
> 
> I answered inline... 
> 
> 12.02.2020 18:59, Yury Shpakov wrote: 
> 
>> Hi Sergey, 
>> 
>> Thank you for you response. 
>> 
>> I tried netstat /nabo and I don't see any reference to port 9000 at all. 
>> So a problem is to make nginx to listen on port 9000 (as server)? 
>> Or nginx is not listening on port 9000 but rather sending requests to port 
>> 9000 (as client)?
> 
> With the `auth_http` setting, you are defining a URL to the service 
> responsible for authentication (and upstream choice). 
> Of course, you then need something that responds to the 
> auth requests (your own upstream, some nginx location, or some "foreign" 
> http-server). 
> 
> See https://docs.nginx.com/nginx/admin-guide/mail-proxy/mail-proxy/ [2] for 
> more examples. 
> 
>> Maybe it's easier not to use auth_http at all? I was trying to remove it 
>> from configuration file but nginx was not happy.
> 
> I have my own auth-module so I don't know how it can be solved in stock-nginx 
> without this directive. 
> 
> Take a look here - 
> https://serverfault.com/questions/594962/nginx-understanding-the-purpose-of-auth-http-imap-proxy
>  [3] - you can use some nginx location (and internal URL to same nginx 
> instance) to specify that.
> 
> Anyway it is recommended to use some auth (on the nginx side), because it 
> preserves the resources of the mail servers, allows you to authenticate email 
> clients with the same user/password for all mail servers (smtp, imap, pop3, 
> etc.) as well as with the same user/pwd as for some other http services. And it 
> is used to choose an upstream server (if there are multiple) for the email processing. 
> 
>> At this point I don't need any authentication. I was told by my boss to use 
>> nginx for load testing of our service sending emails (SMTP client). I've got 
>> some SMTP Server and nginx would be used as SMTP proxy because it allows to 
>> set up delays.
> 
> Well, an auth request to some nginx-location would allow you to set up delays 
> even on authentication phase. 
> 
>> And take into account that I REMOVED "--with-http_ssl_module" from 
>> parameters when I was building n

Re: nginx for Windows - WSASend() socket error 10057

2020-02-12 Thread Sergey Brester
 

It looks like the service defined in auth_http doesn't answer (or there is
no listener on 127.0.0.1 port 9000?)... 

try netstat (in cmd as admin): 

netstat /nabo
netstat /nabo | grep -A 1 ":9000\b" 

and check whether the listener on port 9000 is bound to 127.0.0.1 (or only
to 0.0.0.0?) and whether it is the process you expect to see there (the
port can be "reserved" by some other windows service). 

additionally try to telnet or curl it: 

curl -H "Auth-Method: plain" -H "Auth-User: user" -H "Auth-Pass: pwd" -H
"Auth-Protocol: imap" -H "Auth-Login-Attempt: 1" -i
http://127.0.0.1:9000/cgi-bin/nginxauth.cgi 

if it does not answer, make another attempt, replacing 127.0.0.1 with
0.0.0.0 (or a host name). 

If it answers - see whether it is the expected response (some examples
of good and bad responses are described in
http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html [7]). 

But I guess if WSASend fails, the connection was probably (unexpectedly)
rejected during the send (or even connect) process.
It can also be an invalid (unexpected) content length on a keep-alive
connection to the auth upstream - so a send happens where a receive is
still expected (or vice versa). 

Also follow this forum topic addressing similar issue:
https://forum.nginx.org/read.php?2,257206,257207#msg-257207 [8] 

Anyway, it doesn't look to me like an nginx issue (Windows or not), but
you can also try some other ready-made build (for example on my GH [9] -
nginx.zip [10] - where it works well). 

Regards,
Sergey 

12.02.2020 03:01, Yury Shpakov wrote: 

> Hi there, 
> 
> Trying to make nginx work as SMTP server and/or SMTP proxy. Done everything 
> according to: 
> http://nginx.org/en/docs/howto_build_on_win32.html [2] 
> 
> But excluded (don't care about SSL at this point so don't want to 
> install/configure Perl now): 
> --with-openssl=objs/lib/openssl-master 
> 
> --with-openssl-opt=no-asm 
> --with-http_ssl_module 
> And added: 
> --with-mail
> 
> nmake was successful and nginx.exe was created. 
> 
> However nginx.exe keeps failing with the error: 
> WSASend() failed (10057: A request to send or receive data was disallowed 
> because the socket is not connected and (when sending on a datagram socket 
> using a sendto call) no address was supplied) while in http auth state, 
> client: 127.0.0.1, server: 0.0.0.0:8025 
> 
> Windows API says the following about this error: 
> 
> WSAENOTCONN10057
> Socket is not connected.A request to send or receive data was disallowed 
> because the socket is not connected and (when sending on a datagram socket 
> using SENDTO [3]) no address was supplied. Any other type of operation might 
> also return this error--for example, SETSOCKOPT [4] setting SO_KEEPALIVE [5] 
> if the connection has been reset.
> 
> https://docs.microsoft.com/en-us/windows/win32/winsock/windows-sockets-error-codes-2
>  [6]
> 
> Windows Sockets Error Codes (Winsock2.h) - Win32 apps | Microsoft Docs [6] 
> Return code/value Description; WSA_INVALID_HANDLE 6: Specified event object 
> handle is invalid. An application attempts to use an event object, but the 
> specified handle is not valid. 
> docs.microsoft.com 
> 
> Managed to debug your code in VS 2010 a little bit but it's brutal C so it's 
> hard to figure your code out. And this debugger doesn't show you any local 
> variables values. 
> 
> Any recommendation for me to make it work? 
> 
> Tried to play with config (commenting/uncommenting): 
> 
> #user nobody; 
> worker_processes 1; 
> 
> #error_log logs/error.log; 
> #error_log logs/error.log notice; 
> #error_log logs/error.log info; 
> 
> #pid logs/nginx.pid; 
> 
> events { 
> worker_connections 1024; 
> } 
> 
> mail { 
> server_name localhost; 
> auth_http localhost:9000/cgi-bin/nginxauth.cgi; 
> # auth_http none; 
> 
> smtp_auth none; 
> # smtp_auth login plain cram-md5; 
> # smtp_capabilities "SIZE 10485760" ENHANCEDSTATUSCODES 8BITMIME DSN; 
> xclient off; 
> 
> server { 
> listen 8025; 
> protocol smtp; 
> proxy on; 
> proxy_pass_error_message on; 
> } 
> } 
> Tried both under a regular user and under admin. Tried on 25, 1025 and 8025 
> ports. 
> 
> Thank you, 
> Yury 
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel [1]
 

Links:
--
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel
[2] http://nginx.org/en/docs/howto_build_on_win32.html
[3]
https://docs.microsoft.com/en-us/windows/desktop/api/winsock/nf-winsock-sendto
[4]
https://docs.microsoft.com/en-us/windows/desktop/api/winsock/nf-winsock-setsockopt
[5]
https://docs.microsoft.com/en-us/windows/desktop/winsock/so-keepalive
[6]
https://docs.microsoft.com/en-us/windows/win32/winsock/windows-sockets-error-codes-2
[7] http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html
[8] https://forum.nginx.org/read.php?2,257206,257207#msg-257207
[9] https://github.com/sebres/nginx/releases/tag/release-1.13.0
[10] https://github.com/sebres/nginx/files/2246440/nginx.zip

Re: Seg fault in http read event handler caused by rouge event call without context

2019-11-18 Thread Sergey Brester
 

Looks like [efd71d49bde0 [2]] could indeed be responsible for that:

I see at least one state where rev->ready could remain 1 (after
rev->available reaches 0), e. g. a deviation between the blocks
[efd71d49bde0#l10.8 [3]] and [efd71d49bde0#l11.8 [4]], where the first
does not reset rev->ready; for example, if ngx_socket_nread in
[efd71d49bde0#l10.38 [5]] writes 0 into rev->available, rev->ready still
remains 1.

Maybe it should be changed like this:

-    if (rev->available == 0 && !rev->pending_eof) {
+    if (rev->available <= 0 && !rev->pending_eof) {

Also, rev->available could remain negative if n != size and
ngx_readv_chain or ngx_unix_recv doesn't enter these blocks, or if
ngx_socket_nread fails (returns -1). 
And there are some code pieces where nginx expects a positive
ev->available.

So I guess either one of these blocks is not fully correct, or perhaps
the block [efd71d49bde0#l10.28 [6]] could be moved to the end of the #if
(NGX_HAVE_FIONREAD) block (before the #endif, at least in the case of
!rev->pending_eof).

Regards,
Sergey.

18.11.2019 15:03, Dave Brennan wrote: 

> For the last few years we have been using the "nginx_upload" module to 
> streamline result posting within our environment. 
> 
> With the introduction of nginx 1.17.5 we saw a large number of segmentation 
> faults, causing us to revert to 1.17.4 on our development system. 
> 
> While isolating the fault we added an increase in debug messages to monitor 
> the request and context variables being passed to event handlers. 
> 
> So a good response in 1.17.4 looks like this:- 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 Upload handle pre alloc 
> Request address = 563E9FE451F0 Context =  
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 Upload Handler post alloc 
> Request address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 Upload_eval_path Request 
> address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 Upload eval state path Request 
> address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 Upload client read Request 
> address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 do read upload client Request 
> address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 process request body Request 
> address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 Upload variable Request 
> address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 Upload variable Request 
> address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 Upload variable Request 
> address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 Upload variable Request 
> address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 Upload variable Request 
> address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 Upload variable Request 
> address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 Upload variable Request 
> address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 upload md5 variable Request 
> address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 Upload variable Request 
> address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 Upload File size variable 
> Request address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> 2019/11/14 10:24:21 [debug] 12398#12398: *9770 Upload Body Handler Request 
> address = 563E9FE451F0 Context = 563E9FE81CD8 
> 
> In 1.17.5 the event stream looks like this:- 
> 
> 2019/11/13 14:21:52 [debug] 28086#28086: *3703 Upload handle pre alloc 
> Request address = 558ADDD4F780 Context =  
> 
> 2019/11/13 14:21:52 [debug] 28086#28086: *3703 Upload Handler post alloc 
> Request address = 558ADDD4F780 Context = 558ADDD49FF8 
> 
> 2019/11/13 14:21:52 [debug] 28086#28086: *3703 Upload_eval_path Request 
> address = 558ADDD4F780 Context = 558ADDD49FF8 
> 
> 2019/11/13 14:21:52 [debug] 28086#28086: *3703 Upload eval state path Request 
> address = 558ADDD4F780 Context = 558ADDD49FF8 
> 
> 2019/11/13 14:21:52 [debug] 28086#28086: *3703 Upload client read Request 
> address = 558ADDD4F780 Context = 558ADDD49FF8 
> 
> 2019/11/13 14:21:52 [debug] 28086#28086: *3703 do read upload client Request 
> address = 558ADDD4F780 Context = 558ADDD49FF8 
> 
> 2019/11/13 14:21:52 [debug] 28086#28086: *3703 

Re: Default log file locations

2019-06-27 Thread Sergey Brester
 

Hmm... 

From _The Linux Programming Interface: A Linux and UNIX System
Programming Handbook_ [2]: 

Nonblocking mode can be used with devices (e.g., terminals and
pseudoterminals), pipes, FIFOs, and sockets. (Because file descriptors
for pipes and sockets are not obtained using open(), we must enable this
flag using the fcntl() F_SETFL operation described in Section 5.3.) 

So basically, if a "sharing" of standard handles is properly implemented
between master/workers and line buffering is good enough for the
logging, you can "write to terminal in non-blocking manner." 
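For illustration, a minimal sketch (plain POSIX C, not nginx code; the
program itself is hypothetical) of enabling non-blocking mode on stdout via
the fcntl() F_SETFL operation described in the quote above:

```
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* fetch the current descriptor flags of stdout */
    int flags = fcntl(STDOUT_FILENO, F_GETFL, 0);
    if (flags == -1) {
        perror("fcntl(F_GETFL)");
        return 1;
    }

    /* add O_NONBLOCK: subsequent write()s may fail with EAGAIN
       (e.g. on a full pipe) instead of blocking the process */
    if (fcntl(STDOUT_FILENO, F_SETFL, flags | O_NONBLOCK) == -1) {
        perror("fcntl(F_SETFL)");
        return 1;
    }

    return 0;
}
```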

Additionally, note that stdout can be mapped to the systemd journal (which
is backed by files, so the channel will be a pipe) if nginx is running as a
systemd service unit.

Regards,
Serg. 

27.06.2019 19:35, Valentin V. Bartenev wrote: 

> Afaik, there's no way in Linux systems to write to terminal in
> non-blocking manner. As the result, writing log can block the
> whole nginx worker process and cause DoS.
> 
> IMHO, it's not a good idea to make your web-application depend
> on logging capabilities.
> 
> wbr, Valentin V. Bartenev
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx-devel [1]
 

Links:
--
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel
[2]
https://books.google.de/books?id=Ps2SH727eCIClpg=PA103ots=kMHcB5EQyadq=From%20The%20Linux%20Programming%20Interface%3A%20A%20Linux%20and%20UNIX%20System%20Programming%20Handbook%3A%20Nonblocking%20mode%20can%20be%20used%20with%20devices%20(e.g.%2C%20terminals%20and%20pseudoterminals)%2C%20pipes%2C%20FIFOs%2C%20and%20sockets.%20(Because%20file%20descriptors%20for%20pipes%20and%20sockets%20are%20not%20obtained%20using%20open()%2C%20we%20must%20enable%20The%20Linux%20Programming%20Interface%3A%20A%20Linux%20and%20UNIX%20System%20Programming%20Handbookhl=depg=PA103#v=onepageqf=false
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: [nginx]empty gif

2018-07-19 Thread Sergey Brester
 

So you want something like this?

    location = /healthcheck.gif {
        empty_gif;
    }

And it does not work?

1. If you've installed nginx via your distribution and your nginx version is
newer than 1.9 (which supports dynamic modules), you can just install the
modules via yum. If not, you should use the nginx-extras package or compile
nginx with the modules you need (or install from another repository).
2. To load a module, use the load_module [2] directive. To list which modules
are compiled in statically, use "nginx -V".

But IMHO, for a simple health check, this would be enough:

    location = /healthcheck {
        default_type text/plain;
        return 200 "OK";
    }


Regards,

sebres.

On 19.07.2018 10:24, 桐山 健太郎 wrote: 

> Hello, 
> 
> I have installed the nginx by making yum repository to the RHEL 7.4 server on 
> EC2(AWS) and the architecture is like below. 
> 
> CloudFront → WAF → ELB(ALB) → EC2 Nginx server (as a reverse proxy) → ELB(ALB) → goes to 
> 2 different backend web servers A and B. 
> 
> Also, I have configured the conf files as two conf. file other than 
> nginx.conf. 
> 
> The above two conf. file is named RP.conf and ALBHealthcheck.conf and placed 
> those under /etc/nginx/ directory, so that nginx conf. could load those two 
> conf. files. (include /etc/nginx/conf.d/*.conf) 
> 
> As for the ELB health check I would like to use the empty gif, however I 
> couldn't find whether the default module 'ngx_http_empty_gif_module' is
> on the system. 
> 
> I have checked the "/usr/lib64/nginx/modules/", but shows there is nothing in 
> it. 
> 
> 1. Do I have to configure the module one by one? Or nginx has the default 
> module? 
> 
> 2. How can I define the list of module for nginx? What command can list all 
> the module with nginx? 
> 
> My goal is to pass the health check for ELB with a path of 
> "/healthcheck.html" 
> 
> Regards 
> 
> Kentaro 
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel [1]
 

Links:
--
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel
[2] http://nginx.org/en/docs/ngx_core_module.html#load_module
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: Permanent structure to accumulate data

2018-02-09 Thread Sergey Brester
 

Unfortunately, the question is ambiguous. 

You can put it into the structure you initialized with `mycf =
ngx_pcalloc(cf->pool, size)`, 
where cf is `ngx_conf_t *cf`, if you want to save it per location.
Then you can get it in the handler using `mycf =
ngx_http_get_module_loc_conf(r, ngx_http_your_module);`...
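A minimal sketch of this (assuming the usual module boilerplate - the
ngx_http_your_module declaration, the create_loc_conf registration, and the
handler wiring - is in place; all names are hypothetical):

```
typedef struct {
    ngx_uint_t  counter;
} ngx_http_your_loc_conf_t;

static void *
ngx_http_your_create_loc_conf(ngx_conf_t *cf)
{
    ngx_http_your_loc_conf_t  *mycf;

    /* allocated from the configuration pool: it lives as long as the
       configuration itself, so it is not reset per request */
    mycf = ngx_pcalloc(cf->pool, sizeof(ngx_http_your_loc_conf_t));
    if (mycf == NULL) {
        return NULL;
    }

    return mycf;
}

static ngx_int_t
ngx_http_your_handler(ngx_http_request_t *r)
{
    ngx_http_your_loc_conf_t  *mycf;

    mycf = ngx_http_get_module_loc_conf(r, ngx_http_your_module);
    mycf->counter++;    /* accumulates across requests (per worker!) */

    return NGX_DECLINED;
}
```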

There are many other places where you can put it. 
E.g. just as a static variable like `static int var`, which is then valid per
module scope.

But you should provide more info about what exactly you want to do.

For example, another issue may be multiple workers (which in nginx are
processes, not threads): if it should be exactly one counter across all
workers, then you should put it into shared memory (or somewhere in the
shared pools, using the ngx_slab_alloc functions). A good example is the
bundled nginx module "ngx_http_limit_conn_module".
Because otherwise, each worker will just get its own counter.

Regards,
Serg.

On 09.02.2018 17:31, Antonio Nappa wrote: 

> Hello,
> 
> I am looking for a structure that I can use to store a counter in the module, 
> this structure should not be reset every time there is a new request.
> 
> I have made tests with the module custom ctx and the module custom srv 
> configuration without success, which means every time there is a new request 
> the field becomes zero.
> 
> Thanks, Antonio 
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel [1]
 

Links:
--
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: Fwd: nginx.conf + Location + regular expression

2018-01-24 Thread Sergey Brester
 

Although you've used the wrong list for your question (this is a development
mailing list, not a forum for howtos),
since I have a free minute at the moment, here you go: 

You CANNOT use variables in an nginx location [2] URI pattern (no matter
whether regexp or static URI), 

so the following code:

```
 location ~ ^.../$reg_exp/...$ {...

```

does not substitute the variable (the char $ is just a dollar
character here).

The same is valid for the directive if [3], so `if ($uri ~*
".../$reg_exp/...") { ...` does not interpolate `$reg_exp`
as a variable.

So what you are trying is impossible in nginx.

Just write it directly in the location regex (without a variable), or use some
macro preprocessing resp. some custom module 
that could do it. 
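For example, a sketch with the GUID pattern written directly into the
location (the regex must be quoted because of the curly braces; the capture
name `docpath` and the return body are just illustrative):

```
location ~ "^/locale/Project/Documents/([0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12})/(?<docpath>.*)$" {
    # $1 holds the GUID, $docpath the remainder of the URI
    return 200 "GUID: $1, path: $docpath\n";
}
```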

In addition please note this. [4]

Regards,
sebres.

On 24.01.2018 14:22, fx.ju...@free.fr wrote: 

> Hello nginx DEV TEAM ! How are you ?
> 
> I would like to know how can I do to detect this type of directory name with 
> regexp : (Guid 8-4-4-4-12) => "be93d4d0-b25b-de94-fcbb-463e6c0fe9cc"
> 
> How can I use $reg_exp in location
> I do this : 
> 
> set $reg_exp 
> "[0-9a-fA-F]{8}?-[0-9a-fA-F]{4}?-[0-9a-fA-F]{4}?-[0-9a-fA-F]{4}?-[0-9a-fA-F]{12}";
>  
> 
> location ~ ^/locale/Project/Documents/(reg_exp)/(?.*)$ {
> 
> }
> 
> Thank you very well for your reply 
> 
> François - Xavier JUHEL
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel [1]
 

Links:
--
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel
[2] http://nginx.org/en/docs/http/ngx_http_core_module.html#location
[3] http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if
[4] http://nginx.org/en/docs/faq/variables_in_config.html
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: Process http request before send it to the proxy

2017-05-02 Thread Sergey Brester
 

Hi, 

If you want to parse/modify request data before you pass it to an upstream,
take a look at the NGINX-UPLOAD-MODULE, which does something similar to what
you may need - it reads the request (POST/multipart), saves the file data
into temp file(s), and rewrites the arguments to point to these new file(s).

If you rather want to do something like external authorization, handling
like `auth_request` from `NGX_HTTP_AUTH_REQUEST_MODULE` may be
interesting for you. 

> Should i use a handler or a filter in order to work with proxy_pass? 

Filters mostly modify the response, resp. do replacements in the response
body/headers (which at that point have already been received from the
upstream via proxy_pass), so it would be too late in your case. 
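So a handler running in one of the earlier request phases is the usual
choice. A minimal sketch of registering such a handler from a module's
postconfiguration hook (ngx_http_my_handler and the surrounding module
boilerplate are hypothetical):

```
static ngx_int_t
ngx_http_my_init(ngx_conf_t *cf)
{
    ngx_http_handler_pt        *h;
    ngx_http_core_main_conf_t  *cmcf;

    cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module);

    /* runs before the content phase, i.e. before proxy_pass
       sends the request to the upstream */
    h = ngx_array_push(&cmcf->phases[NGX_HTTP_ACCESS_PHASE].handlers);
    if (h == NULL) {
        return NGX_ERROR;
    }

    *h = ngx_http_my_handler;

    return NGX_OK;
}
```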

Regards,
sebres 

On 02.05.2017 00:17, Aris Striglis wrote: 

> Hello! 
> 
> I'm new to nginx, playing using the api and making some handlers and filters. 
> Now what i would like to do is to 
> 
> process the request headers, call an external c library, do some work and 
> pass the request to an upstream using proxy_pass. 
> 
> I read about http request processing phases and handler priority, but i am a 
> little bit confused. 
> 
> Should i use a handler or a filter in order to work with proxy_pass? 
> 
> Thanks. 
> 
> aris 
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel [1]
 

Links:
--
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: Authentication/access control module for reverse proxy NGINX

2017-02-22 Thread Sergey Brester

Hi,

you can use "auth_request" (see 
http://nginx.org/en/docs/http/ngx_http_auth_request_module.html)


which can take over the full authorization control you wanted, e.g. 
authorize via your own internal location against some backend (ruby, etc.). 
Additionally you can use all the features nginx supports there; for example 
you can use nginx caching (with the credential as key) in this location to 
control the time expiration, etc.
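A minimal configuration sketch (the auth backend address and the header
name are just assumptions):

```
location / {
    auth_request /auth;
    proxy_pass http://elasticsearch;
}

location = /auth {
    internal;
    # the auth backend (e.g. ruby/rails) answers 2xx to allow,
    # 401/403 to deny
    proxy_pass http://127.0.0.1:8081;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```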


Regards,
sebres.

On 22.02.2017 23:16, Jun Chen via nginx-devel wrote:


Hi everyone,

I am looking for a module which does the authentication/access control 
for reverse proxy (preferable `nginx`). This module should do:


1. user authentication using credential stored in database (such as 
postgres)
2. Monitoring the ongoing connection and take action if certain access 
credential is met. For example, time is expired

3. open source (allow customization) and nginx, ruby(rails) preferable.

It seems that [`OpenResty`][1] with `nginx` can do the job. Here is an 
[article][2] talking about access control with `Lua` on `nginx`. Here 
is an example (`nginx and Lua`) giving me impression that a snippet of 
file could executed for access (`access_by_lua_file`):


server {
listen 8080;

location / {
auth_basic "Protected Elasticsearch";
auth_basic_user_file passwords;

access_by_lua_file '../authorize.lua'; #<<<=

proxy_pass http://elasticsearch;
proxy_redirect off;
}

}

I am new to access control with reverse proxy. Any thought is 
appreciated.


[1]: https://github.com/openresty/lua-nginx-module
[2]: Playing HTTP Tricks with Nginx [1]



Links:
--
[1] https://www.elastic.co/blog/playing-http-tricks-nginx
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [module dev] PCRE compiled code lost at reload

2016-06-22 Thread Sergey Brester
 

A little bit off-topic, but which benefits do you think you will get from a
cross-process compiled regexp? 

Compiling a regex is normally a fast operation that will be done
only once (even with jit), and can be done in each worker.
What I could rather imagine being shared is the result of a regexp
execution, but not the regexp itself.

Regards, sebres.

22.06.2016 11:31, MAGNIEN, Thierry wrote: 

> Hi,
> 
> I'm experiencing a strange behavior and I wonder if I'm missing something 
> obvious...
> 
> I've developed a module and I use shared memory and slab allocations to keep 
> data unique across workers and have data survive a reload.
> 
> Everything works fine except one single thing: PCRE compiled codes 
> (ngx_regex_compile_t->regex->code).
> 
> To be more precise, at reload, in my module init function, I recompile some 
> of the PCRE if they have changed, still using shared memory. What I notice is 
> that, just after init module function has returned, all dying workers lose 
> PCRE compiled code (regex->code = 0), where all new created workers correctly 
> get new compiled code.
> 
> I tried to use my own pcre_malloc function in order to be sure memory is 
> allocated in shared memory (and this *is* the case), but without success.
> 
> So any help is welcome: does anyone have a clue about why only those data are 
> "lost" by dying workers ?
> 
> Thanks a lot for your help,
> Thierry Magnien
> 
> P.S.: I can't exhibit code for confidentiality reasons but if no one has a 
> clue, I'll try to write a very simple module, only to exhibit this behavior.
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel [1]
 

Links:
--
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: [BF] slab init + http file cache fixes

2016-06-16 Thread Sergey Brester

The NGX_HTTP_CACHE_SCARCE is intended to signal to the caller that
min_uses was not yet reached, and that's why the request will not
use cache. And that's the only case where NGX_HTTP_CACHE_SCARCE
is used. Semantics of the error case your are trying to add here
is quite different. This will be an immediate problem if/when
we'll add appropriate $upstream_cache_status value for SCARCE.

(Moreover, I'm not really sure this needs fixing at all, as since
version 1.9.13 the cache manager monitors the number of elements in the
cache and tries to avoid cache zone overflows. That is, overflows
are not expected to happen often.)


I think I had clearly described my position on that: it is always better to 
process the request than to fail it, just because the cache is sometimes 
exceeded. And I have had such a situation several times, so the cache 
manager apparently could not keep up under very intense load.



Race condition here is intended: we are dropping the lock to
prevent lock contention during a potentially long syscall. It is
expected that in some cases another worker will be able to grab
the node, and this will prevent its complete removal. But there
is more than one case possible: e.g., another worker can grab the
node and put an updated file into it before the file will be
deleted, and as a result we'll delete different file here. The
question is: what is the case you are trying to improve, and what
it means for other possible cases.


As I already wrote, it happened without this fix.


The "cache->files" is used to count number of files processed by
the cache loader. When it reaches loader_files, cache loader will
sleep for a while. There is nothing wrong that it is incremented -
as the file was processed, it should be incremented. With the
change in question cache loader won't be able to properly respect
loader_files and loader_threshold, making it use all available
resources in case of errors.


Well, I do not insist here.


... win32 ...

If you think this problem can be seen on other platforms - feel
free to point them out as well.


For which purposes? So that the message contains "win32"? Really?
Just so as not to spoil some statistics that say nginx (and nginx+) has 
no bugs (under *nix)?

No, sorry...


If you can suggest a better name - please do so.


If I had a better name, I would have used it right away.


We certainly don't want to try to preserve compatibility with
someone who in theory can use slab code outside of nginx. If
somebody really does so, it can add appropriate initialization.


Agreed, I do not insist here.


You may find this article interesting:
http://blog.flaper87.org/post/opensource-owes-you-everything-and-nothing/


My foot! It's just embarrassing if that was a response to critique (imho 
well-deserved critique).
I have been a developer for more than a quarter of a century, have 
contributed to several projects for just as long, and for more than a decade 
I have been a lead or co-owner of many projects. Never, never have I read 
such nonsense.

But one shouldn't argue about taste...

Regards,
sebres.

___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [BF] wrong value of cache max-size in workers

2016-06-15 Thread Sergey Brester

Thanks, looks like a valid win32-related problem. See below for
comments about the patch.


And why do you come to this conclusion?



As this is a win32-only problem, please clearly describe this in
commit log. E.g., use "Win32: " prefix in the summary line.
Please also describe that this is win32-specific. Alternatively,
please use "Cache: " prefix. Also, please use full sentences,
including dots.


Well, I think that when merging you can modify the commit message as you 
like (I renounce the authorship of the message ;))




More hints can be found here:
http://nginx.org/en/docs/contributing_changes.html [1]


I know this document (it's not my first time here) ...

BTW, with this kind of treatment of contributors I understand the 
people who rarely or never want to post anything to the 
nginx developers anymore.




This empty line looks unneeded to me. YMMV.



If you look at the source code, you will see this was done exactly as in the 
2 other blocks containing this code piece (one above and one below). I did 
it just so that it looks like the rest of the code.


Regards, sebres

___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


[BF] slab init + http file cache fixes

2016-06-15 Thread Sergey Brester
The story with the shared zone (slab) resp. http file cache has a 
continuation.


I've additionally found and fixed 2 bugs; the fixes are in the 
changesets enclosed as attachments:


- _sb-slab-init-fix.patch - slab (zone pool) initialization bug: many 
static stubs were not initialized in workers (because only the master calls 
ngx_slab_init).
So stub initialization was moved to a new "ngx_slab_core_init" called from 
"main" directly after "ngx_os_init" (it requires "ngx_pagesize").


As a consequence of this bug, the pool always allocated a full page (4K - 
8K) instead of small slots inside a page, so for example a 1MB zone could 
store no more than 250 keys.
BTW, according to the documentation: one megabyte zone can store about 8 
thousand keys.


- _sb-scarce-cache-fix.patch - fixes for the http file cache:
  * prevent failure of requests for which a cache node cannot be allocated 
just because the cache is scarce
(NGX_HTTP_CACHE_SCARCE) - alert and send the request data 
nevertheless;
  * wrong accounting of the cache size (too many decrements of 
"cache->sh->size", because of an unlocked delete

and because the value of "fcn->exists" was not reset);


For the people using github - here is my PR as fix for all 3 issues (3 
commits), merged in my mod-branch:

  https://github.com/sebres/nginx/pull/8

Regards,
sebres

___

14.06.2016 16:50, Sergey Brester wrote:


Hi,

enclosed you'll find a changeset with a fix for the wrong max_size in the 
http file cache:


max_size is still in bytes in the child workers, because the cache init is 
called in the master (and cache->max_size is not corrected from the child if 
it already exists, and it is not in shared mem),
so this large size will "never" be reached in comparisons like 
`if (size < cache->max_size) ...`.


Regards,
Sergey Brester (aka sebres).

___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel [1]



Links:
--
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel

# HG changeset patch
# User Serg G. Brester (sebres) <serg.bres...@sebres.de>
# Date 1465990539 -7200
#  Wed Jun 15 13:35:39 2016 +0200
# Node ID ed82931de91e4eb335cc2a094896e6f4f10ac536
# Parent  0bfc68ad1b7af8c3a7dea24d479ed18bfd024028
* fixes http file cache:
- prevent failure of requests, for that cache cannot be allocated, just because of the cache scarce (NGX_HTTP_CACHE_SCARCE) - alert and send the request data nevertheless;
- wrong counting size of cache (too many decrease "cache->sh->size", because unlocked delete and "fcn->exists" was not reset);

diff -r 0bfc68ad1b7a -r ed82931de91e src/http/ngx_http_file_cache.c
--- a/src/http/ngx_http_file_cache.c	Wed Jun 15 11:53:55 2016 +0200
+++ b/src/http/ngx_http_file_cache.c	Wed Jun 15 13:35:39 2016 +0200
@@ -879,7 +879,8 @@ ngx_http_file_cache_exists(ngx_http_file
 if (fcn == NULL) {
 ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, 0,
   "could not allocate node%s", cache->shpool->log_ctx);
-rc = NGX_ERROR;
+/* cannot be cached (NGX_HTTP_CACHE_SCARCE), just use it without cache */
+rc = NGX_AGAIN;
 goto failed;
 }
 }
@@ -1870,24 +1871,27 @@ ngx_http_file_cache_delete(ngx_http_file
 p = ngx_hex_dump(p, fcn->key, len);
 *p = '\0';
 
-fcn->count++;
-fcn->deleting = 1;
-ngx_shmtx_unlock(&cache->shpool->mutex);
-
-len = path->name.len + 1 + path->len + 2 * NGX_HTTP_CACHE_KEY_LEN;
-ngx_create_hashed_filename(path, name, len);
-
-ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0,
-   "http file cache expire: \"%s\"", name);
-
-if (ngx_delete_file(name) == NGX_FILE_ERROR) {
-ngx_log_error(NGX_LOG_CRIT, ngx_cycle->log, ngx_errno,
-  ngx_delete_file_n " \"%s\" failed", name);
+if (!fcn->deleting) {
+fcn->count++;
+fcn->deleting = 1;
+fcn->exists = 0;
+ngx_shmtx_unlock(&cache->shpool->mutex);
+
+len = path->name.len + 1 + path->len + 2 * NGX_HTTP_CACHE_KEY_LEN;
+ngx_create_hashed_filename(path, name, len);
+
+ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0,
+   "http file cache expire: \"%s\"", name);
+
+if (ngx_delete_file(name) == NGX_FILE_ERROR) {
+ngx_log_error(NGX_LOG_CRIT, ngx_cycle->log, ngx_errno,
+  ngx_delete_file_n " \"%s\" failed", name);
+}
+
+ngx_shmtx_lock(&cache->shpool->mutex);
+fcn->count--;
+fcn->deleting = 0;
 }
-
-ngx_shmtx_loc

Re: Reading body during the REWRITE phase ?

2016-01-15 Thread Sergey Brester
 

Hi, 

There is normally no proper way to read a body within the REWRITE phase (I
think you mean the body of something like a POST request). 

The request body will typically be read, resp. upstreamed, first when the
location is processed (after all of the phases, including rewrite). 

But you can do that inside another location using some modules that
may read a body (and even manipulate it, for example inside a filter) and
then upstream it further to another location.
For example, see the nginx-upload-module [5], which reads
multipart/form-data, disaggregates all parts with file data (writes such
parts into files), and thereafter passes the modified request body (even
without the file data) to another location... The lua-module and others do
similar things.

You can also overwrite some default filter handlers to read the body
earlier, but that would not really be the nginx way (you know, everything
should be asynchronous =). 

BTW, if you use "auth_request" (and other things internally using
sub_request), you can disable passing of the body inside the location that
auth_request goes to (not in the main location), with directives like
"scgi_pass_request_body off;", "fastcgi_pass_request_body off;", etc. 

Hope it helps reasonably...

Regards,
sebres. 

On 15.01.2016 15:01, Thibault Koechlin wrote: 

> Hi,
> 
> I have a module (naxsi) that reads the body during the REWRITE phase, in
> the exact same way that ngx_form_input does :
> https://github.com/calio/form-input-nginx-module [1].
> 
> When used with auth_request (or maybe other modules, but that's the
> first time I encounter this issue within a few years of usage), there is
> no request made to the upstream if the request is made using POST/PUT
> and the body is bigger than client_body_buffer_size.
> 
> For the simplicity of the example, we'll assume I'm talking about
> ngx_form_input (behaviour is the same, except code is way shorter).
> 
> The user reporting me the bug opened a ticket :
> https://trac.nginx.org/nginx/ticket/801 [2]. It is possible to replace naxsi
> with ngx_for_input and obtain the same results.
> 
> From Maxim's reply, it seems I failed to properly restore request
> handlers after reading body.
> 
> What would be (if there is any) the proper way to read body within
> REWRITE phase ? Is there any example/existing module that does such so I
> can understand what am I doing wrong ? (In the past, I always thought
> ngx_form_input was the reference one).
> 
> PS: You can find a bit more details here :
> https://github.com/nbs-system/naxsi/issues/226 [3] (including sample config
> & commands to reproduce bug)
> 
> Thanks,
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel [4]
 

Links:
--
[1] https://github.com/calio/form-input-nginx-module
[2] https://trac.nginx.org/nginx/ticket/801
[3] https://github.com/nbs-system/naxsi/issues/226
[4] http://mailman.nginx.org/mailman/listinfo/nginx-devel
[5] https://github.com/vkholodkov/nginx-upload-module/tree/2.2
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

RE: Tracking sent responses

2015-11-11 Thread Sergey Brester
 

11.11.2015 19:17, Julien FROMENT wrote: 

> Thanks for the reply,

Welcome :) 

> Using post_action could work, if we can send to the @after_request_location 
> enough reliable information. 
> 
> Can we use the all the variable documented in the ngx_http_core_module 
> (http://nginx.org/en/docs /http/ngx_http_core_module.html#variables [1]) ? 
> Are there any other variables that we could use?

Yes, and also your own custom-defined ones (or some from custom modules). 

Here is a more large list of all variables -
http://nginx.org/en/docs/varindex.html 

And if you want to use some values returned from the upstream, you should
use the variables beginning with "upstream_...".
For example, if you need the http header "X_MY_VAR", you should use
$upstream_http_x_my_var.
If you need a cookie value "example", you can use
$upstream_cookie_example, etc. For the http status of the response use
$upstream_status. Here is the list of them all -
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_addr
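E.g. a sketch of how such variables could be handed over in the
post_action location mentioned earlier (the tracking endpoint and upstream
name are hypothetical; $bytes_sent and $upstream_status are standard nginx
variables):

```
location / {
    proxy_pass http://backend;
    post_action @after_request_location;
}

location @after_request_location {
    # report request metrics to an external tracking service
    proxy_pass http://127.0.0.1:8090/track?bytes=$bytes_sent&status=$upstream_status;
}
```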

> Although, I am a bit concerned by your comment "possibly not recommended to 
> use", could we clarify what you mean or what lead you to think it is not 
> recommended?

Well, you can read my small discussion, with an unambiguous answer about
this from an nginx developer -
https://www.mail-archive.com/nginx-devel@nginx.org/msg03680.html 

I will keep this feature in my own bundles (and in my own forks) - no
matter what some nginx developers say about this. 
But ... it is my decision about. 

In any case, I believe it is not very complex to create similar
functionality as a (replacement) module, if "post_action" is later removed
from the standard nginx bundle. 

> Regards, 
> 
> Julien

Regards,
Serg G. Brester (sebres)

> FROM: Sergey Brester [mailto:serg.bres...@sebres.de] 
> SENT: Tuesday, November 10, 2015 2:30 PM
> TO: nginx-devel@nginx.org
> CC: Julien FROMENT
> SUBJECT: Re: Tracking sent responses 
> 
> Hi, 
> 
> I'm sure you can do that using the on-board "equipment" of nginx, without deep 
> integration into nginx (without writing your own module). 
> 
> You can use for this a "post_action", something like:
> 
> post_action @after_request_location; 
> 
> But (there is always a "but" :), according to my last known state: 
> 
> - the feature "post_action" is asynchronous;
> - the feature is not documented (and possibly not recommended to use);
> - if the location "executed" in post_action uses upstreams (fcgi, proxy_pass, 
> etc.), it will always break a keepalive connection to the upstream channel 
> (possibly fixed meanwhile, but I may have missed that). 
> 
> Regards,
> sebres. 
> 
> On 10.11.2015 19:51, Julien FROMENT wrote: 
> 
>> Hello, 
>> 
>> We would like to use Nginx to keep track of exactly what part of an 
>> upstream's server response was sent over a socket. Nginx could call an API 
>> asynchronously with the number of bytes sent over the socket for a given 
>> request. 
>> 
>> 
>> Here is the pseudo code: 
>> 
>> -- Client send a request 
>> 
>> -- Nginx processes the request and send it to the upstream 
>> 
>> ... 
>> 
>> -- The upstream returns the response 
>> 
>> -- Nginx sends the response to the client 
>> 
>> -- Nginx calls Async API with the number of bytes sent 
>> 
>> I read a little bit of "Emiller's Guide To Nginx Module Development", and I 
>> think we could write a Handler that provide some tracking information. But I 
>> am unsure if it is possible to hook it at a low enough level for our needs. 
>> 
>> Are there any expert on this mailing list that could provide us consulting 
>> services and guide us through the development of such functionality? 
>> 
>> Thanks in advance! 
>> 
>> Julien 
>> 

Re: How does Nginx look-up cached resource?

2015-09-10 Thread Sergey Brester
The patch doesn't sound bad at all, but I would have also removed the 
calculation and verification of crc32... It makes no sense if either way 
the keys are compared.


___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: How does Nginx look-up cached resource?

2015-09-10 Thread Sergey Brester
 

Enclosed you will find an attached changeset that contains the suggested
fix with key comparison, and with the additional protection via crc32
completely removed. 

Also tested on keys known to me with md5 collisions (see below) - it
works. 

If someone needs a git version of it: 

https://github.com/sebres/nginx/pull/2 [2] 

Below you can find TCL code to test strings (hex) that produce an md5
collision (with an example with one collision): 

https://github.com/sebres/misc/blob/tcl-test-hash-collision/tcl/hash-collision.tcl
[3] 

Regards, 
sebres. 

On 10.09.2015 11:57, Sergey Brester wrote: 

> The patch doesn't sound bad at all, but I would have also removed the 
> calculation and verification of crc32... It makes no sense if either way the 
> keys are compared.
> 
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel [1]
 

Links:
--
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel
[2] https://github.com/sebres/nginx/pull/2
[3]
https://github.com/sebres/misc/blob/tcl-test-hash-collision/tcl/hash-collision.tcl

# HG changeset patch
# User sebres <serg.bres...@sebres.de>
# Date 1441885669 -7200
#  Thu Sep 10 13:47:49 2015 +0200
# Node ID 4293b50a7977e86c98bd9ae245dc03156a0a
# Parent  f829cfb5364c0f32eae7a608c18cab1dc9b06a87
always compares keys of cache entry (hash is insufficient secure, so protects in case of hash collision),
  see "http://forum.nginx.org/read.php?29,261413,261529; for a discussion about;
additional protection via crc32 is obsolete and removed now;

diff -r f829cfb5364c -r 4293b50a7977 src/http/ngx_http_cache.h
--- a/src/http/ngx_http_cache.h	Thu Sep 03 15:09:21 2015 +0300
+++ b/src/http/ngx_http_cache.h	Thu Sep 10 13:47:49 2015 +0200
@@ -64,7 +64,6 @@ typedef struct {
 struct ngx_http_cache_s {
 ngx_file_t   file;
 ngx_array_t  keys;
-uint32_t crc32;
 u_char   key[NGX_HTTP_CACHE_KEY_LEN];
 u_char   main[NGX_HTTP_CACHE_KEY_LEN];
 
@@ -119,7 +118,6 @@ typedef struct {
 time_t   valid_sec;
 time_t   last_modified;
 time_t   date;
-uint32_t crc32;
 u_short  valid_msec;
 u_short  header_start;
 u_short  body_start;
diff -r f829cfb5364c -r 4293b50a7977 src/http/ngx_http_file_cache.c
--- a/src/http/ngx_http_file_cache.c	Thu Sep 03 15:09:21 2015 +0300
+++ b/src/http/ngx_http_file_cache.c	Thu Sep 10 13:47:49 2015 +0200
@@ -233,7 +233,6 @@ ngx_http_file_cache_create_key(ngx_http_
 
 len = 0;
 
-ngx_crc32_init(c->crc32);
ngx_md5_init(&md5);
 
 key = c->keys.elts;
@@ -243,14 +242,12 @@ ngx_http_file_cache_create_key(ngx_http_
 
 len += key[i].len;
 
-ngx_crc32_update(&c->crc32, key[i].data, key[i].len);
ngx_md5_update(&md5, key[i].data, key[i].len);
 }
 
 c->header_start = sizeof(ngx_http_file_cache_header_t)
   + sizeof(ngx_http_file_cache_key) + len + 1;
 
-ngx_crc32_final(c->crc32);
ngx_md5_final(c->key, &md5);
 
 ngx_memcpy(c->main, c->key, NGX_HTTP_CACHE_KEY_LEN);
@@ -521,9 +518,12 @@ wakeup:
 static ngx_int_t
 ngx_http_file_cache_read(ngx_http_request_t *r, ngx_http_cache_t *c)
 {
+u_char*p;
 time_t now;
 ssize_tn;
+ngx_str_t *key;
 ngx_int_t  rc;
+ngx_uint_t i;
 ngx_http_file_cache_t *cache;
 ngx_http_file_cache_header_t  *h;
 
@@ -547,12 +547,28 @@ ngx_http_file_cache_read(ngx_http_reques
 return NGX_DECLINED;
 }
 
-if (h->crc32 != c->crc32) {
+if (h->header_start != c->header_start) {
 ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0,
-  "cache file \"%s\" has md5 collision", c->file.name.data);
+  "cache file \"%s\" has hash collision", c->file.name.data);
 return NGX_DECLINED;
 }
 
+p = c->buf->pos + sizeof(ngx_http_file_cache_header_t)
++ sizeof(ngx_http_file_cache_key);
+
+/* because any hash is insufficient, check keys are equal also */
+key = c->keys.elts;
+for (i = 0; i < c->keys.nelts; i++) {
+if (ngx_memcmp(p, key[i].data, key[i].len) != 0) {
+ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0,
+  "cache file \"%s\" has hash collision",
+  c->file.name.data);
+return NGX_DECLINED;
+}
+
+p += key[i].len;
+}
+
 if ((size_t) h->

Re: How does Nginx look-up cached resource?

2015-09-10 Thread Sergey Brester

On 10.09.2015 17:33, Maxim Dounin wrote:


Hello!

On Thu, Sep 10, 2015 at 05:07:36PM +0200, Sergey Brester wrote:

Leave the header format unchanged (I mean the changes in the header file 
'src/http/ngx_http_cache.h'), but do not calculate and do not compare crc32 
(unused / reserved until "the cache header format changes next time")?


Even that way resulting cache files will not be compatible with
older versions which do the check, thus breaking upgrades (when
cache items can be used by different versions simultaneously for a
short time) and, more importantly, downgrades (which sometimes
happen due to various reasons).


In this case (someone downgrades back to a previous nginx version) the 
cache entries will be invalidated on first use (because the crc32 will not 
match) - well, I think that's not a great problem.


Please find enclosed a (2nd) changeset that should restore backwards (and 
forwards) compatibility of the cache header file.


Regards,
sebres.

# HG changeset patch
# User sebres <serg.bres...@sebres.de>
# Date 1441885669 -7200
#  Thu Sep 10 13:47:49 2015 +0200
# Node ID 4293b50a7977e86c98bd9ae245dc03156a0a
# Parent  f829cfb5364c0f32eae7a608c18cab1dc9b06a87
always compares keys of cache entry (hash is insufficient secure, so protects in case of hash collision),
  see "http://forum.nginx.org/read.php?29,261413,261529; for a discussion about;
additional protection via crc32 is obsolete and removed now;

diff -r f829cfb5364c -r 4293b50a7977 src/http/ngx_http_cache.h
--- a/src/http/ngx_http_cache.h	Thu Sep 03 15:09:21 2015 +0300
+++ b/src/http/ngx_http_cache.h	Thu Sep 10 13:47:49 2015 +0200
@@ -64,7 +64,6 @@ typedef struct {
 struct ngx_http_cache_s {
 ngx_file_t   file;
 ngx_array_t  keys;
-uint32_t crc32;
 u_char   key[NGX_HTTP_CACHE_KEY_LEN];
 u_char   main[NGX_HTTP_CACHE_KEY_LEN];
 
@@ -119,7 +118,6 @@ typedef struct {
 time_t   valid_sec;
 time_t   last_modified;
 time_t   date;
-uint32_t crc32;
 u_short  valid_msec;
 u_short  header_start;
 u_short  body_start;
diff -r f829cfb5364c -r 4293b50a7977 src/http/ngx_http_file_cache.c
--- a/src/http/ngx_http_file_cache.c	Thu Sep 03 15:09:21 2015 +0300
+++ b/src/http/ngx_http_file_cache.c	Thu Sep 10 13:47:49 2015 +0200
@@ -233,7 +233,6 @@ ngx_http_file_cache_create_key(ngx_http_
 
 len = 0;
 
-ngx_crc32_init(c->crc32);
ngx_md5_init(&md5);
 
 key = c->keys.elts;
@@ -243,14 +242,12 @@ ngx_http_file_cache_create_key(ngx_http_
 
 len += key[i].len;
 
-ngx_crc32_update(&c->crc32, key[i].data, key[i].len);
ngx_md5_update(&md5, key[i].data, key[i].len);
 }
 
 c->header_start = sizeof(ngx_http_file_cache_header_t)
   + sizeof(ngx_http_file_cache_key) + len + 1;
 
-ngx_crc32_final(c->crc32);
ngx_md5_final(c->key, &md5);
 
 ngx_memcpy(c->main, c->key, NGX_HTTP_CACHE_KEY_LEN);
@@ -521,9 +518,12 @@ wakeup:
 static ngx_int_t
 ngx_http_file_cache_read(ngx_http_request_t *r, ngx_http_cache_t *c)
 {
+u_char*p;
 time_t now;
 ssize_tn;
+ngx_str_t *key;
 ngx_int_t  rc;
+ngx_uint_t i;
 ngx_http_file_cache_t *cache;
 ngx_http_file_cache_header_t  *h;
 
@@ -547,12 +547,28 @@ ngx_http_file_cache_read(ngx_http_reques
 return NGX_DECLINED;
 }
 
-if (h->crc32 != c->crc32) {
+if (h->header_start != c->header_start) {
 ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0,
-  "cache file \"%s\" has md5 collision", c->file.name.data);
+  "cache file \"%s\" has hash collision", c->file.name.data);
 return NGX_DECLINED;
 }
 
+p = c->buf->pos + sizeof(ngx_http_file_cache_header_t)
++ sizeof(ngx_http_file_cache_key);
+
+/* because any hash is insufficient, check keys are equal also */
+key = c->keys.elts;
+for (i = 0; i < c->keys.nelts; i++) {
+if (ngx_memcmp(p, key[i].data, key[i].len) != 0) {
+ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0,
+  "cache file \"%s\" has hash collision",
+  c->file.name.data);
+return NGX_DECLINED;
+}
+
+p += key[i].len;
+}
+
 if ((size_t) h->body_start > c->body_start) {
 ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0,
   "cache file \"%s\" has too long header",
@@ -583,7 +599,6 @@ ngx_http_file

Re: How does Nginx look-up cached resource?

2015-09-10 Thread Sergey Brester

On 10.09.2015 18:59, Maxim Dounin wrote:


unexpected alerts are certainly a bad thing and shouldn't happen.


But only in the case of a downgrade to a previous version...

___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: How does Nginx look-up cached resource?

2015-09-07 Thread Sergey Brester

I have tried - I give up (it makes no sense).

I have my own fork (to make everything right there).

___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: How does Nginx look-up cached resource?

2015-09-07 Thread Sergey Brester

On 06.09.2015 02:08, Maxim Dounin wrote:


Well, not, I don't confuse anything. For sure, brute force attack
on a 128 bit hash requires approximately 2^64 attempts.



That is, a single nginx instance with 2^64 cached resources will
likely show up a collision. But that's not a number of resources
you'll be able to store on a single node - in particular, because
64-bit address space wouldn't be enough to address that many
cached items.



To obtain a collision of a 128-bit hash with at least 1%
probability, you'll need more than 10^18 resources cached on a
single node, which is not even close to a something possible as
well.



Assuming 1 billion of keys (which is way more than a single nginx
node can handle now, and will require about 125G of memory for a
cache keys zone), probability of a collision is less than 10^(-20).



Quoting https://en.wikipedia.org/wiki/Birthday_attack [2]:



For comparison, 10^(-18) to 10^(-15) is the uncorrectable bit
error rate of a typical hard disk.


1) I will try to explain to you why that is not quite true, with a small 
approximation: let our hash value be exactly one byte large (8 bits); it 
can take 2^8 = 256 different hash values (and let it be perfectly 
distributed).
The relative frequency of encountering a collision - the same hash for any 
random other sequence inside this interval (Hv0 - Hv255) - will also be 256, 
because roughly every 256th char sequence will obtain the same hash value 
(Hv0).
Will such a hash be safe? Of course not, never.
But if we hash arbitrary character sequences with a max length of 16 bytes, 
we will have 256^16 (~= 10^38) different variations of binary strings 
(keys). The relation (and the probability (Pc) of a collision for 
any two random strings) would be only (10^38/256 - 1)/10^38 * (10^38/256 
- 2)/(10^38 - 1) ~= 0.15.
Small, right? But the relative frequency (Hrc) is still 256! This can be 
explained by the really large count of different sequences, and thus the 
large count of hash values (Hv1-Hv255) that are not equal to (Hv0).


But let us scale the approximation up: the hash value can take 2^128 (~= 
10^38) values, but how many values at most will be hashed? It's unknown. Let 
our keys contain at most 100 bytes; the count of variations of all 
possible strings will be 256^100 (~= 10^240). The probability of 
encountering a collision and the relative frequency of encountering a 
collision will be several orders of magnitude smaller (1e-78), but the 
relation between Hrc and Pc is comparable to the example above (in the sense 
of the relation between the two). And in this relation it is similarly 
(un)safe. Yes, less likely (well, 8 vs 128 bits), but still "unsafe".


And we can have keys with the length of 500 bytes...

And don't compare the probability of the error rate in hard disks with the 
probability of a collision of hashes of any two *pseudo-random* strings 
(stress mark on "pseudo-random"). That is about the same as comparing 
warm with soft.


I could write two pages of formulas here to prove it. But... forget 
the probabilities and this approximation... we come to point 2.


2) For *caching* it is not at all required to have such a "safe" hash 
function:
  - The hash function should create reasonably perfectly distributed 
values;
  - The hash function should be as fast as possible (we can take MurmurHash 
or something similar, significantly faster than md5);
  - We should always compare the keys after a cache entry with the hash 
value was found, to be sure exactly the same key was found; but that does 
not make our cache slower, because the generation of the hash value can be 
made faster through algorithms faster than md5 (for example MMH3 is up to 
90% faster than MD5);
  - In the very unlikely case of a collision we will just forget the 
cached entry with the previous key, or save it as an array for serial access 
(not really expected with caching and a large hash value, because it is 
rare, and this is a cache - not a database that should always hold an entry).
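A sketch of the resulting lookup pattern (lookup_by_hash and
next_same_hash are hypothetical helpers; only the idea matters): the hash
narrows the search, the full key comparison decides:

```
node = lookup_by_hash(tree, hash);

while (node != NULL && node->hash == hash) {

    if (node->key_len == key_len
        && ngx_memcmp(node->key, key, key_len) == 0)
    {
        return node;    /* exact match: same key, not merely same hash */
    }

    /* rare case: a real hash collision, check the next candidate */
    node = next_same_hash(node);
}

return NULL;
```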


I want to implement that and post a PR, and I can make it configurable 
(something like `fastcgi_cache_safe = on`); then you can compare its 
performance - it would be faster than your md5/crc32 implementation.


Regards,
sebres.

___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: How does Nginx look-up cached resource?

2015-09-07 Thread Sergey Brester

On 08.09.2015 01:17, Gena Makhomed wrote:


There is no obscurity here. Value of proxy_cache_key is known,
hash function is known, nginx sources is open and available.


If the value of proxy_cache_key is known and attackers can generate it, what 
do you want to protect with some hash value?
If an attacker can use any key - it doesn't matter which hash algorithm you 
have used (the attacker can get the entry).
If an attacker can't use any key (it's protected with some internal nginx 
variable) - it also doesn't matter which hash is used (the attacker cannot 
get the entry, because the keys will be compared as well).


The hash value should be used only for fast searching of the hash key, not 
to identify the cached resources!

You remember the proposed solution from your message?
http://mailman.nginx.org/pipermail/nginx-devel/2015-September/007286.html 
[1]

Attacker easily can provide DDoS attack against nginx in this case:
http://www.securityweek.com/hash-table-collision-attacks-could-trigger-ddos-massive-scale 
[2]

Hash Table Vulnerability Enables Wide-Scale DDoS Attacks


And what's stopping him from doing the same with a much safer hash 
function? On the contrary - don't forget that generating such hash values is 
also CPU-hungry.


If your entry should be secure, the key (not its hash) should contain 
part of a security token, authentication, salt, etc.

This is "security through obscurity",
and you say, what this is bad thing.


Wrong! Because if these secure parts of the key are internal nginx 
values/variables (authenticated user name, salt etc.), the attacker 
can never use them!
He can theoretically use a variable part of the key to generate or 
brute-force some expected hash, equal to the searched one, but the key 
comparison makes all his attempts void.
If that is not so, the key contains nothing internal and the attacker can 
use any key (for example a uri part) - see above - it doesn't matter which 
hash algorithm you have used (the attacker can get the entry because he can 
use the key). The cache is then insecure.


So again: the hash bears no security function, and if the whole key is 
always compared - it is not at all important which hash function 
is used and how secure it is. And to "crack", resp. return, the 
cache entry you would always have to "crack" the complete key, 
not its hash.



If site is under high load, and, for example contains many pages,
which are very popular (for example, 100 req/sec of each) and backend
need many time for generating such page, for example, 2-3 seconds -
attacker can using MurmurHash create requestst to non-existend
pages, with same MurmurHash hash value. And 404 responces
from backend will replace very popular pages with different uri
but with same MurmurHash hash value. And backend will be DDoS`ed
by many requests for different popular pages. So, attacker easily
can disable nginx cache for any known uri. And if backend can't
process all client requests without cache - it will be overloaded
and access to site will be denied for all new users.


1) big error - non-existent pages need not be placed in the cache (no 
caching for 404);
2) the attacker does not use MurmurHash - nginx does, over a valid cache 
key generated on the nginx side, just to find the entry faster (for nothing 
else)!
3) the attacker almost always cannot control the key used for caching 
(except if it is only a part of the uri - an insecure cache);
4) what do you want to do against it? If I understood correctly, 
instead of intensive load from page generation, you want to shift the load 
into hash generation? Well, very clever. :)
5) DDoS prevention is in any event not the role of caching and can be 
done with other instruments (also using nginx);



So, many thousands of nginx configurations will be in vulnerable state.


Wrong. Not a single one, and even then not vulnerable - because it 
would be exactly as "vulnerable" with any other hash function - the 
configuration of the key or something else is to blame, not the hashing 
itself.
Either the cache is insecure (any key can be used) - no matter which 
hash function,
or the cache is secure (protected with some internal nginx variable as 
part of the cache key, which the attacker can't use) - it fails on the key 
comparison - no matter which hash function, either.



Hash table implementations vulnerable to algorithmic complexity attacks


That refers to pure hashing, without key comparison - it is safe if the 
keys are compared.

Which is what I have already said and written 100 times.

___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: How does Nginx look-up cached resource?

2015-09-07 Thread Sergey Brester

On 07.09.2015 21:29, Gena Makhomed wrote:


Using MurmurHash is not good idea, because attacker
can easy make collisions and invalidate popular entries
from cache, and this technology can be used for DDoS attacks.
(even in case if only one site exists on server with nginx cache)

Using secure hash function for nginx cache is strong requirement,
even in case then full proxy_cache_key value check will be added.


It's not correct, because something like that would be called "security 
through obscurity"!


The hash value should be used only for fast searching of the hash key, not 
to identify the cached resources!
If your entry should be secure, the key (not its hash) should contain 
part of a security token, authentication, salt, etc.


So again: the hash bears no security function, and if the whole key is 
always compared - it is not at all important which hash function is used and 
how secure it is. And to "crack", resp. return, the cache entry you would 
always have to "crack" the complete key, not its hash.


As I already said, the hash value is only a way of fast searching, 
resp. even direct access, to the entry with the hashed key.


I know systems where the hash values are 32 bits and use the simplest 
algorithms like Ci << 3 + Ci+1. But as already said, the whole keys 
are compared afterwards, and that is very safe.
So, for example, if you need to cache pages per user, put the 
authenticated user name into the cache key (the value given with 
proxy_cache_key). Then the attacker would have to crack the nginx 
authentication as well.
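E.g. a sketch (assuming basic auth, so that $remote_user is set):

```
proxy_cache_key "$scheme$proxy_host$request_uri$remote_user";
```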


Everything else would be as already stated - security through obscurity.

Regards,
sebres.

___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: How does Nginx look-up cached resource?

2015-09-04 Thread Sergey Brester

On 04.09.2015 20:10, Maxim Dounin wrote:


For sure this is something that can be done. The question remains
though: how often collisions are observed in practice, is it make
sense to do anything additional to protect from collisions and
spend resources on it? Even considering only md5, without the
crc32 check, no practical cases were reported so far.


What?
That SHOULD be done! Once is already too much!

nginx can cache pages from different users (the key contains the username),

so imagine, in the case of such a collision:
  - user 1 will suddenly receive info of user 2;
  - if authorization uses "auth_request" (via fastcgi) and it is 
cached (for performance, resp. persistent handshake-like 
authorization), then user 1 will even act as user 2 (with his 
rights and authority), etc.


I can write here a hundred situations that should never ever occur! 
Never.


___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: How does Nginx look-up cached resource?

2015-09-04 Thread Sergey Brester

On 04.09.2015 21:43, Maxim Dounin wrote:


No one yet happened. And likely won't ever happen, as md5 is a
good hash function 128 bits wide, and it took many years to find
even a single collision of md5.


You confuse "good" for collision-search algorithms with "good" in the 
sense of the probability that a collision can occur. An estimation of 
collisions in the sense of a "collision-search algorithm" and co. implies 
that the hashed string is unknown, and for example it estimates attacks to 
find it (like brute force, chosen-prefix, etc.).


I'm talking about the probability of incidence of the same hash for two 
different cache keys.
In addition, because of the so-called birthday problem 
(https://en.wikipedia.org/wiki/Birthday_problem) we can increase this 
probability to something at least comparable to 64 bits for real random data 
(of different lengths).
Don't forget that our keys that will be hashed are not really "random" 
data - most of the time they contain only specified characters and/or have a 
specified length.


So the probability that a collision will occur is still significantly 
larger (a billion billion times larger).
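For reference, the usual birthday-problem approximation these estimates
revolve around, for n keys and a b-bit hash (a sketch in LaTeX):

```
P_{\mathrm{coll}}(n) \approx 1 - e^{-\frac{n(n-1)}{2\cdot 2^{b}}}
                     \approx \frac{n^{2}}{2^{\,b+1}},
\qquad
P_{\mathrm{coll}} \ge \tfrac{1}{2}
\quad\text{once}\quad
n \approx 1.18\cdot 2^{b/2}.
```

With b = 128 the 50% point lies near 2^64 keys (the figure quoted above),
and for n = 10^9 keys the approximation gives roughly 10^-21.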



And even if it'll happen, we have
crc32 check in place to protect us.


Very funny... You draw such conclusions based on what?
So, last but not least, if you still haven't seen a collision in the sense 
of md5 "protected" by crc32, how can you be sure that it has still not 
occurred?


For example, how large would you estimate the probability that a 
collision will occur if my keys contain exactly 32 characters in the range 
[0-9A-Za-z]? And its frequency? Just the approximate order of magnitude...


Regards,
sebres.

___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: Making external http calls from nginx module

2015-08-04 Thread Sergey Brester
 

Hi, 

You can try to use `ngx_http_subrequest` (I don't know how well it works
for something that is not an nginx location (or a named location)). 

For example, see a module that uses it (e.g.:
https://github.com/sebres/nginx/blob/hg-mirror/src/http/modules/ngx_http_auth_request_module.c#L189).

You can also try to use the on-board directive `post_action` as a pure
nginx-config solution, without writing your own module - adding it to your
location should call the external service specified within (but, if I'm not
mistaken, it is still an unsupported/undocumented feature). 

Regards,
sebres 

04.08.2015 16:34, sunil mallya wrote: 

 Hey folks, 
 
 Can someone point a code snippet on how I could make a http call to an 
 external service within my module. I believe I should be able to reuse some 
 of the code in ngx_proxy but unclear about the pitfalls. 
 
 Muchos Gracias, 
 Sunil Mallya
 
 @sunilmallya 
 
 ___
 nginx-devel mailing list
 nginx-devel@nginx.org
 http://mailman.nginx.org/mailman/listinfo/nginx-devel [1]
 

Links:
--
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: Satisfy directive behaviour

2015-07-01 Thread Sergey Brester

Hi,

Look at module auth_request 
(http://nginx.org/en/docs/http/ngx_http_auth_request_module.html).
A well-working solution at the moment is to use the auth_request module 
together with some external auth daemon.

You can avoid many problems, e.g. with async/sync handling etc.

Using that, I have already successfully implemented many authentication 
methods (including NTLM/Negotiate for Windows).
If you have to implement anything doing a handshake, you can use the 
variable $connection, or the combination 
$connection:$remote_addr:$remote_port, as an identifier for your connection 
with persistent authentication.
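E.g. a sketch of such an auth location (the daemon address and the header
name are assumptions):

```
location = /auth {
    internal;
    proxy_pass http://127.0.0.1:8082;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    # identifier for handshake-style (e.g. NTLM) persistent authentication
    proxy_set_header X-Conn-Id "$connection:$remote_addr:$remote_port";
}
```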


Regards,
sebres.


01.07.2015 15:36, Petra Kamenickova wrote:


Hi!

I'm working on custom PAM module which could be used as an 
authorization support for authentication modules (e.g. 
ngx_http_auth_spnego_module) and I ran into few problems. I'm not sure 
I fully get the interactions between and within
phases in nginx. My background is Apache HTTP Server so that might have 
twisted my expectations.


I have noticed that satisfy directive behaves slightly different than 
Apache's satisfy - nginx checks every module in access phase and the 
first successful invocation stops any subsequent checks whereas 
Apache's satisfy checks host based access vs. other access modules. It 
has some implications especially for authentication and authorization 
implications. What would be the best way to make sure that 
authorization phases that need authentication to be run gets that 
authentication executed, even with satisfy any?


The post access phase looks like a good place for authorization but it 
seems custom modules cannot really be added to this phase. So... is it 
possible to add somehow my module handler into post access phase 
without changing the core module? Or is there any way how to keep my 
module in access phase but skip the satisfy check for that module?


I would be grateful for any help!

--
Petra Kamenickova

___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel [1]



Links:
--
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel

___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: Fix windows issue with multiple workers

2015-06-22 Thread Sergey Brester

Hi,

enclosed you will find an amended fix as a replacement for 
_sb-win-multi-worker-add-3.patch (I just forgot to save after 
renaming NGX_SINGLE_WORKER to NGX_CONF_UNSET_PTR, before it was 
committed).


18.06.2015 21:55, Maxim Dounin:


As I already tried to explain, the approach with inherited sockets
is basically identical to what nginx does on UNIX with fork().
There are no reasons to keep things different if you can do it
similarly on both platforms.


But it's not even roughly similar; see win32/ngx_process.c + 
win32/ngx_process_cycle.c in comparison to unix...

This is already quite a big difference from unix.


The goal is to minimize code bloat, not maximize it.


Let me think a bit about how I can make it a little smaller, or unify some 
code pieces with the unix version.


Regards, sebres.# HG changeset patch
# User sebres serg.bres...@sebres.de
# Date 1434103966 -7200
#  Fri Jun 12 12:12:46 2015 +0200
# Node ID b72d1091430e8899ee7d23c60924c100cba1c3ab
# Parent  76ee2fe9300bdcf0dbf4a05e3ed7a1136b324eb7
backwards compatibility for single worker (without inheritance of socket) + code review / amend fix

diff -r 76ee2fe9300b -r b72d1091430e src/core/ngx_connection.c
--- a/src/core/ngx_connection.c	Thu Jun 11 14:51:59 2015 +0200
+++ b/src/core/ngx_connection.c	Fri Jun 12 12:12:46 2015 +0200
@@ -427,33 +427,33 @@ ngx_open_listening_sockets(ngx_cycle_t *
 /* try to use shared sockets of master */
 if (ngx_process > NGX_PROCESS_MASTER) {
 
-if (!shinfo) {
-shinfo = ngx_get_listening_share_info(cycle, ngx_getpid());
-
-if (!shinfo) {
-failed = 1;
-break;
-}
+if (!shinfo && ngx_get_listening_share_info(cycle, &shinfo,
+ngx_getpid()) != NGX_OK) {
+failed = 1;
+break;
 }
 
-s = ngx_shared_socket(ls[i].sockaddr->sa_family, ls[i].type, 0,
-shinfo+i);
+if (shinfo != NGX_CONF_UNSET_PTR) {
 
-if (s == (ngx_socket_t) -1) {
-ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno,
-  ngx_socket_n " for inherited socket %V failed", 
-  ls[i].addr_text);
-return NGX_ERROR;
+s = ngx_shared_socket(ls[i].sockaddr->sa_family, ls[i].type, 0,
+shinfo+i);
+
+if (s == (ngx_socket_t) -1) {
+ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno,
+  ngx_socket_n " for inherited socket %V failed", 
+  ls[i].addr_text);
+return NGX_ERROR;
+}
+
+ngx_log_debug4(NGX_LOG_DEBUG_CORE, log, 0, "[%d] shared socket %d %V: %d",
+ngx_process, i, ls[i].addr_text, s);
+
+ls[i].fd = s;
+
+ls[i].listen = 1;
+
+continue;
 }
-
-ngx_log_debug4(NGX_LOG_DEBUG_CORE, log, 0, "[%d] shared socket %d %V: %d",
-ngx_process, i, ls[i].addr_text, s);
-
-ls[i].fd = s;
-
-ls[i].listen = 1;
-
-continue;
 }
 #endif
 
diff -r 76ee2fe9300b -r b72d1091430e src/os/win32/ngx_process.c
--- a/src/os/win32/ngx_process.c	Thu Jun 11 14:51:59 2015 +0200
+++ b/src/os/win32/ngx_process.c	Fri Jun 12 12:12:46 2015 +0200
@@ -70,9 +70,7 @@ ngx_spawn_process(ngx_cycle_t *cycle, ch
 return pid;
 }
 
-#if (NGX_WIN32)
 ngx_share_listening_sockets(cycle, pid);
-#endif
 
 ngx_memzero(ngx_processes[s], sizeof(ngx_process_t));
 
diff -r 76ee2fe9300b -r b72d1091430e src/os/win32/ngx_process_cycle.c
--- a/src/os/win32/ngx_process_cycle.c	Thu Jun 11 14:51:59 2015 +0200
+++ b/src/os/win32/ngx_process_cycle.c	Fri Jun 12 12:12:46 2015 +0200
@@ -113,11 +113,6 @@ ngx_master_process_cycle(ngx_cycle_t *cy
 events[2] = ngx_reopen_event;
 events[3] = ngx_reload_event;
 
-/* does not close listener for win32, will be shared */
-#if (!NGX_WIN32)
-ngx_close_listening_sockets(cycle);
-#endif
-
 if (ngx_start_worker_processes(cycle, NGX_PROCESS_RESPAWN) == 0) {
 exit(2);
 }
@@ -210,11 +205,6 @@ ngx_master_process_cycle(ngx_cycle_t *cy
 
 ngx_cycle = cycle;
 
-/* does not close listener for win32, will be shared */
-#if (!NGX_WIN32)
-ngx_close_listening_sockets(cycle);
-#endif
-
 if (ngx_start_worker_processes(cycle, NGX_PROCESS_JUST_RESPAWN)) {
 ngx_quit_worker_processes(cycle, 1);
 }
diff -r 76ee2fe9300b -r b72d1091430e src/os/win32/ngx_socket.c
--- a/src/os/win32/ngx_socket.c	Thu Jun 11 14:51:59 2015 +0200
+++ 

Re: Fix windows issue with multiple workers

2015-06-18 Thread Sergey Brester

So, in a VM it works for me also.

I'm assuming that something on my windows work PC has prevented the 
listener from being inherited this way (a driver, installed LSPs (Layered 
Service Providers), an antivirus or something else)...



But why don't you want to use my suggested solution?

If I implement the way with simple inheritance (bInheritHandle through 
CreateProcess), it will not really be easier, because:


- we have several listeners to share, so we would have to pass all these 
handles to the child process;


- bInheritHandle=TRUE in CreateProcess is a potential risk of unclosed 
handles if a process crashes, and that concerns not only sockets - thus 
arise so-called zombie handles, half-open (dropped) or half-closed. For 
listening sockets it is extreme. Here is an example of this situation 
(* marks a listener whose process no longer exists):


netstat /ano | grep 0.0:80
 * TCP    0.0.0.0:80        0.0.0.0:0        LISTENING        3824
   TCP    0.0.0.0:80        0.0.0.0:0        LISTENING        4378


taskkill /F /PID 3824
ERROR: The process 3824 not found.

Unfortunately, it is not guaranteed that the new process 4378 accepts 
connections (because the zombie listener of 3824 can block it).
Other zombies are not much better either, like unclosed temp files, 
lock files, pipes etc.


One can argue at length that these are Windows bugs, but these are the 
facts, and thus the approach is unstable. Apart from that, it does not work 
at all on some machines (like my work PC).
And the way with WSADuplicateSocket is what Microsoft itself recommends 
in various articles.
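
For reference, a minimal sketch of that recommended pattern (illustrative
only, not the patch code; worker_pid and ls are assumed variables):

    /* master side: clone the listener for the worker with pid worker_pid */
    WSAPROTOCOL_INFO pi;

    if (WSADuplicateSocket(ls, (DWORD) worker_pid, &pi) != 0) {
        /* WSAGetLastError(), abort spawning this worker */
    }

    /* ... pass pi to the worker, e.g. via shared memory ... */

    /* worker side: recreate a descriptor for the shared socket */
    SOCKET s = WSASocket(FROM_PROTOCOL_INFO, FROM_PROTOCOL_INFO,
                         FROM_PROTOCOL_INFO, &pi, 0, WSA_FLAG_OVERLAPPED);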


If you still want to use the solution with bInheritHandle, I suggest a 
compromise: I will make it a selectable option (i.e. defines like 
NGX_WIN32_DUPLICATE_LISTEN and NGX_WIN32_INHERIT_LISTEN).


Please tell me your decision.

Regards,
sebres.



On 17.06.2015 16:52, Maxim Dounin wrote: 


Hello!

On Wed, Jun 17, 2015 at 04:01:17PM +0200, Sergey Brester wrote:

Hmm, strange - almost same code, but it does not work... only first 
child can accept connections.


Have you tried exactly the code I provided? "Almost the same"
is a usual difference between working and non-working code.


Which version of windows are you using for test?


Works fine at least in Windows 7 and Windows 8.1 VMs here, both
32-bit. I have no 64-bit Windows on hand to test, but if it
doesn't work for you specifically on 64-bit Windows, this may be
some minor bug in the test code related to type casting.




Re: Fix windows issue with multiple workers

2015-06-17 Thread Sergey Brester

Hi,

Yes, for exactly one child process...

For example, the same code (modified for multiprocessing) does not work on 
my machine (win7 x64) for the 2nd (3rd etc.) process (with error 10022 = 
WSAEINVAL). I think the handle should then be duplicated instead; see MSDN 
(https://msdn.microsoft.com/en-us/library/windows/desktop/ms724251.aspx)


Sockets. No error is returned, but the duplicate handle may not be 
recognized by Winsock at the target process. Also, using DuplicateHandle 
interferes with internal reference counting on the underlying object. To 
duplicate a socket handle, use the WSADuplicateSocket function.


Regards,
sebres.

.

On 17.06.2015 04:27, Maxim Dounin wrote: 


Hello!

On Wed, Jun 10, 2015 at 09:48:28PM +0200, Sergey Brester wrote:

[...]

@Maxim Dounin: 1) your suggested way with shared handle and 
bInheritHandle does not work, because of: [quote] Sockets. No error is 
returned, but the duplicate handle may not be recognized by Winsock at 
the target process. Also, using DuplicateHandle interferes with 
internal reference counting on the underlying object. To duplicate a 
socket handle, use the WSADuplicateSocket function. [/quote]


The quote is from DuplicateHandle() description, which is
irrelevant to the approach suggested. Sockets are inheritable
between processes, including listen ones.

Simple test code below, as adapted from the accept() MSDN example,
demonstrates that the approach is working.

#include <winsock2.h>
#include <stdio.h>
#include <windows.h>

#pragma comment(lib, "Ws2_32.lib")

int
main(int argc, char *argv[])
{
    int rc;
    u_long code;
    SOCKET listen_socket, s;
    WSADATA wsaData;
    struct sockaddr_in sin;
    STARTUPINFO si;
    PROCESS_INFORMATION pi;
    char command[256];

    rc = WSAStartup(MAKEWORD(2, 2), &wsaData);
    if (rc != NO_ERROR) {
        printf("WSAStartup() failed: %d\n", rc);
        return 2;
    }

    if (argc == 2) {
        listen_socket = atoi(argv[1]);
        printf("Inherited socket: %d\n", listen_socket);
        goto accept;
    }

    listen_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (listen_socket == INVALID_SOCKET) {
        printf("socket failed with error: %ld\n", WSAGetLastError());
        return 1;
    }

    printf("Listen socket: %d\n", listen_socket);

    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = inet_addr("127.0.0.1");
    sin.sin_port = htons(8080);

    if (bind(listen_socket, (SOCKADDR *) &sin, sizeof(sin)) == SOCKET_ERROR) {
        printf("bind() failed: %ld\n", WSAGetLastError());
        return 1;
    }

    if (listen(listen_socket, 1) == SOCKET_ERROR) {
        printf("listen() failed: %ld\n", WSAGetLastError());
        return 1;
    }

    if (argc == 1) {
        ZeroMemory(&si, sizeof(si));
        si.cb = sizeof(si);
        ZeroMemory(&pi, sizeof(pi));

        _snprintf(command, sizeof(command), "%s %d", argv[0], listen_socket);

        if (CreateProcess(NULL, command,
                          NULL, NULL, 1, 0, NULL, NULL,
                          &si, &pi)
            == 0)
        {
            printf("CreateProcess() failed: %ld\n", GetLastError());
            return 1;
        }

        WaitForSingleObject(pi.hProcess, INFINITE);

        if (GetExitCodeProcess(pi.hProcess, &code) == 0) {
            printf("GetExitCodeProcess() failed: %ld\n", GetLastError());
            return 1;
        }

        printf("Child process exited: %d\n", code);

        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
    }

accept:

    printf("Waiting for client to connect...\n");

    s = accept(listen_socket, NULL, NULL);
    if (s == INVALID_SOCKET) {
        printf("accept() failed: %ld\n", WSAGetLastError());
        return 1;
    }

    printf("Client connected\n");

    return 0;
}




Re: Fix windows issue with multiple workers

2015-06-12 Thread Sergey Brester
 

Hi, 

enclosed is a further changeset with backwards compatibility for 1-worker
processing (without inheritance, as before the fix) if a single worker is
configured, plus a little bit of code review. 

P.S. github updated also. 

Regards,
sebres. 

11.06.2015 15:03, Sergey Brester: 

 Hi, 
 
 I've forgotten to free the shmem, thus enclosed an amendment with clean-up, 
 relative last changeset. 
 
 Regards,
 sebres.
 # HG changeset patch
# User sebres serg.bres...@sebres.de
# Date 1434103966 -7200
#  Fri Jun 12 12:12:46 2015 +0200
# Node ID 3499ecc86ec00deeef93869fc52a5ac02fd2e49f
# Parent  76ee2fe9300bdcf0dbf4a05e3ed7a1136b324eb7
backwards compatibility for single worker (without inheritance of socket) + code review

diff -r 76ee2fe9300b -r 3499ecc86ec0 src/core/ngx_connection.c
--- a/src/core/ngx_connection.c	Thu Jun 11 14:51:59 2015 +0200
+++ b/src/core/ngx_connection.c	Fri Jun 12 12:12:46 2015 +0200
@@ -427,33 +427,33 @@ ngx_open_listening_sockets(ngx_cycle_t *
 /* try to use shared sockets of master */
 if (ngx_process > NGX_PROCESS_MASTER) {
 
-if (!shinfo) {
-shinfo = ngx_get_listening_share_info(cycle, ngx_getpid());
-
-if (!shinfo) {
-failed = 1;
-break;
-}
+if (!shinfo && ngx_get_listening_share_info(cycle, &shinfo,
+ngx_getpid()) != NGX_OK) {
+failed = 1;
+break;
 }
 
-s = ngx_shared_socket(ls[i].sockaddr->sa_family, ls[i].type, 0,
-shinfo+i);
+if (shinfo != NGX_SINGLE_WORKER) {
 
-if (s == (ngx_socket_t) -1) {
-ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno,
-  ngx_socket_n " for inherited socket %V failed", 
-  ls[i].addr_text);
-return NGX_ERROR;
+s = ngx_shared_socket(ls[i].sockaddr->sa_family, ls[i].type, 0,
+shinfo+i);
+
+if (s == (ngx_socket_t) -1) {
+ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno,
+  ngx_socket_n " for inherited socket %V failed", 
+  ls[i].addr_text);
+return NGX_ERROR;
+}
+
+ngx_log_debug4(NGX_LOG_DEBUG_CORE, log, 0, "[%d] shared socket %d %V: %d",
+ngx_process, i, ls[i].addr_text, s);
+
+ls[i].fd = s;
+
+ls[i].listen = 1;
+
+continue;
 }
-
-ngx_log_debug4(NGX_LOG_DEBUG_CORE, log, 0, "[%d] shared socket %d %V: %d",
-ngx_process, i, ls[i].addr_text, s);
-
-ls[i].fd = s;
-
-ls[i].listen = 1;
-
-continue;
 }
 #endif
 
diff -r 76ee2fe9300b -r 3499ecc86ec0 src/os/win32/ngx_process.c
--- a/src/os/win32/ngx_process.c	Thu Jun 11 14:51:59 2015 +0200
+++ b/src/os/win32/ngx_process.c	Fri Jun 12 12:12:46 2015 +0200
@@ -70,9 +70,7 @@ ngx_spawn_process(ngx_cycle_t *cycle, ch
 return pid;
 }
 
-#if (NGX_WIN32)
 ngx_share_listening_sockets(cycle, pid);
-#endif
 
 ngx_memzero(ngx_processes[s], sizeof(ngx_process_t));
 
diff -r 76ee2fe9300b -r 3499ecc86ec0 src/os/win32/ngx_process_cycle.c
--- a/src/os/win32/ngx_process_cycle.c	Thu Jun 11 14:51:59 2015 +0200
+++ b/src/os/win32/ngx_process_cycle.c	Fri Jun 12 12:12:46 2015 +0200
@@ -113,11 +113,6 @@ ngx_master_process_cycle(ngx_cycle_t *cy
 events[2] = ngx_reopen_event;
 events[3] = ngx_reload_event;
 
-/* does not close listener for win32, will be shared */
-#if (!NGX_WIN32)
-ngx_close_listening_sockets(cycle);
-#endif
-
 if (ngx_start_worker_processes(cycle, NGX_PROCESS_RESPAWN) == 0) {
 exit(2);
 }
@@ -210,11 +205,6 @@ ngx_master_process_cycle(ngx_cycle_t *cy
 
 ngx_cycle = cycle;
 
-/* does not close listener for win32, will be shared */
-#if (!NGX_WIN32)
-ngx_close_listening_sockets(cycle);
-#endif
-
 if (ngx_start_worker_processes(cycle, NGX_PROCESS_JUST_RESPAWN)) {
 ngx_quit_worker_processes(cycle, 1);
 }
diff -r 76ee2fe9300b -r 3499ecc86ec0 src/os/win32/ngx_socket.c
--- a/src/os/win32/ngx_socket.c	Thu Jun 11 14:51:59 2015 +0200
+++ b/src/os/win32/ngx_socket.c	Fri Jun 12 12:12:46 2015 +0200
@@ -13,6 +13,7 @@ typedef struct {
 
 ngx_pid_t  pid;
 ngx_uint_t nelts;
+ngx_int_t  worker_processes;
 
 /* WSAPROTOCOL_INFO * [listening.nelts] */
 
@@ -90,25 +91,30 @@ void ngx_free_listening_share(ngx_cycle_
 }
 
 
-ngx_shared_socket_info 
-ngx_get_listening_share_info(ngx_cycle_t *cycle, ngx_pid_t pid)
+ngx_int_t

Re: Fix windows issue with multiple workers

2015-06-11 Thread Sergey Brester
 

Hi, 

I forgot to free the shmem, thus enclosed is an amendment with the
clean-up, relative to the last changeset. 

Regards,
sebres. 

10.06.2015 21:48, Sergey Brester: 

 [...]

# HG changeset patch
# User sebres serg.bres...@sebres.de
# Date 1434027119 -7200
#  Thu Jun 11 14:51:59 2015 +0200
# Node ID 76ee2fe9300bdcf0dbf4a05e3ed7a1136b324eb7
# Parent  e40ee60150e47616d86fdee90f62f0f88c4b1e80
clean-up amendment for windows issue with multiple workers: free shared memory for inherited protocol info;

diff -r e40ee60150e4 -r 76ee2fe9300b src/core/ngx_connection.c
--- a/src/core/ngx_connection.c	Wed Jun 10 19:39:18 2015 +0200
+++ b/src/core/ngx_connection.c	Thu Jun 11 14:51:59 2015 +0200
@@ -641,6 +641,12 @@ ngx_open_listening_sockets(ngx_cycle_t *
 return NGX_ERROR;
 }
 
+#if (NGX_WIN32)
+if (ngx_process > NGX_PROCESS_MASTER) {
+ngx_free_listening_share(cycle);
+}
+#endif
+
 return NGX_OK;
 }
 
@@ -906,6 +912,10 @@ ngx_close_listening_sockets(ngx_cycle_t 
ngx_log_debug2(NGX_LOG_DEBUG_CORE, cycle->log, 0, "[%d] close %d listener(s)",
ngx_process, cycle->listening.nelts);
 
+#if (NGX_WIN32)
+ngx_free_listening_share(cycle);
+#endif
+
 ngx_accept_mutex_held = 0;
 ngx_use_accept_mutex = 0;
 
diff -r e40ee60150e4 -r 76ee2fe9300b src/os/win32/ngx_socket.c
--- a/src/os/win32/ngx_socket.c	Wed Jun 10 19:39:18 2015 +0200
+++ b/src/os/win32/ngx_socket.c	Thu Jun 11 14:51:59 2015 +0200
@@ -79,6 +79,17 @@ ngx_int_t ngx_get_listening_share(ngx_cy
 }
 
 
+void ngx_free_listening_share(ngx_cycle_t *cycle)
+{
+if (shm_listener.addr) {
+
+ngx_shm_free(shm_listener);
+shm_listener.addr = NULL;
+
+}
+}
+
+
 ngx_shared_socket_info 
 ngx_get_listening_share_info(ngx_cycle_t *cycle, ngx_pid_t pid)
 {
diff -r e40ee60150e4 -r 76ee2fe9300b src/os/win32/ngx_socket.h
--- a/src/os/win32/ngx_socket.h	Wed Jun 10 19:39:18 2015 +0200
+++ b/src/os/win32/ngx_socket.h	Thu Jun 11 14:51:59 2015 +0200
@@ -211,6 +211,7 @@ typedef WSAPROTOCOL_INFO * ngx_shared_so
 
 ngx_shared_socket_info ngx_get_listening_share_info(ngx_cycle_t *cycle, 
 ngx_pid_t pid);
+void

Fix windows issue with multiple workers

2015-06-10 Thread Sergey Brester
 

Hi, 

enclosed you will find an attached changeset that contains a fix for the
windows issue with multiple workers (once listening, only one of them ever
did any work). 

If someone needs a git version of it: 

https://github.com/sebres/nginx/pull/1/files [1] 

Here [2] you may find a benchmark comparison for it (the '1 worker' column
was measured before the fix). 

-- 

Briefly, the fix algorithm (related changes are marked [*], unchanged
steps [-]): 

- the master process creates all listeners; 

- [cycle] the master process creates a worker; 

* [win32] the master calls `ngx_share_listening_sockets`: for each listener,
share (inheritance) info for this worker's pid is cloned via
WSADuplicateSocket and saved in shmem (see the sketch after this list); 

- the master process waits until the worker sends an event worker_nnn; 

* [win32] the worker process executes `ngx_get_listening_share_info` to
obtain the shared info - a protocol structure that can be used to create a
new socket descriptor for the shared socket; 

* [win32] the worker process creates all listening sockets using the given
shared info from the master; 

- the worker process sets the event worker_nnn; 

- the master process creates the next worker; repeat [cycle]. 
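
For reference, a sketch of the shared record the master fills per worker
(simplified from the attached patch; the struct name here is illustrative):

    typedef struct {
        ngx_pid_t    pid;      /* worker the sockets were duplicated for */
        ngx_uint_t   nelts;    /* number of listening sockets */

        /* WSAPROTOCOL_INFO info[nelts] follows in the shared memory,
           one entry per listener, produced by WSADuplicateSocket() */
    } ngx_shared_socket_share_t;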

-- 

@Maxim Dounin:
1) your suggested way with shared handle and bInheritHandle does not
work, because of:
[quote]
Sockets. No error is returned, but the duplicate handle may not be
recognized by Winsock at the target process. Also, using DuplicateHandle
interferes with internal reference counting on the underlying object. 
To duplicate a socket handle, use the WSADuplicateSocket function.
[/quote] 

2) the proposal to use the environment instead of shared memory cannot work
either, because sharing via WSADuplicateSocket must already know the pid
of the target process that will use this handle - it is shared specifically
for each worker. 

BTW, use of `accept_mutex` was disallowed for win32 because of a possible
deadlock if it is grabbed by a process which can't accept connections.
Since this is fixed now, I have removed this restriction in a
separate commit.
But I think accept_mutex is not needed on win32; with accept_mutex
it is much slower than without it. So what about setting the default of
`accept_mutex` to `off` on the windows platform? 

BTW[2], I have run extensive tests of this fix, including reloading
(increasing/decreasing `worker_processes`), restarting, as well as
auto-restart of a worker after a crash (e.g. I sporadically
killed some workers). 

Regards,
Serg G. Brester (aka sebres). 

 

Links:
--
[1] https://github.com/sebres/nginx/pull/1/files
[2] https://github.com/sebres/nginx/pull/1
# HG changeset patch
# User sebres serg.bres...@sebres.de
# Date 1433956047 -7200
#  Wed Jun 10 19:07:27 2015 +0200
# Node ID 7f0a48e380944d476063419db7e99122e2f17d92
# Parent  1729d8d3eb3acbb79b1b0c1d60b411aacc4f8461
prevent extreme growth of log file if select failed (possible nowait inside)

diff -r 1729d8d3eb3a -r 7f0a48e38094 src/event/modules/ngx_win32_select_module.c
--- a/src/event/modules/ngx_win32_select_module.c	Mon Jun 08 23:13:56 2015 +0300
+++ b/src/event/modules/ngx_win32_select_module.c	Wed Jun 10 19:07:27 2015 +0200
@@ -214,6 +214,8 @@ ngx_select_del_event(ngx_event_t *ev, ng
 }
 
 
+static ngx_uint_t ngx_select_err_cntr = 0;
+
 static ngx_int_t
 ngx_select_process_events(ngx_cycle_t *cycle, ngx_msec_t timer,
 ngx_uint_t flags)
@@ -278,7 +280,16 @@ ngx_select_process_events(ngx_cycle_t *c
"select ready %d", ready);
 
 if (err) {
-ngx_log_error(NGX_LOG_ALERT, cycle->log, err, "select() failed");
+/* because select failed (possible nowait) - prevent extreme growth of log file */
+if (++ngx_select_err_cntr < 10) {
+ngx_log_error(NGX_LOG_ALERT, cycle->log, err, "select() failed");
+} else if (ngx_select_err_cntr == 10) {
+ngx_log_error(NGX_LOG_ALERT, cycle->log, err, "select() failed, "
+ "%d times - no more log", 10);
+} else {
+/* prevent 100% cpu usage if nowait - WSAEINVAL etc. */
+ngx_msleep(500);
+}
 
 if (err == WSAENOTSOCK) {
 ngx_select_repair_fd_sets(cycle);
@@ -286,6 +297,7 @@ ngx_select_process_events(ngx_cycle_t *c
 
 return NGX_ERROR;
 }
+ngx_select_err_cntr = 0;
 
 if (ready == 0) {
 if (timer != NGX_TIMER_INFINITE) {
# HG changeset patch
# User sebres serg.bres...@sebres.de
# Date 1433956100 -7200
#  Wed Jun 10 19:08:20 2015 +0200
# Node ID 80e20498d99e0a4065dfd5ec5c1633cdcfc94541
# Parent  7f0a48e380944d476063419db7e99122e2f17d92
Fix windows issue with multiple workers (once listening - only one made any work),
using shared socket handles for each worker process (WSADuplicateHandle).

diff -r 7f0a48e38094 -r 80e20498d99e src/core/ngx_connection.c
--- a/src/core/ngx_connection.c	Wed Jun 10 19:07:27 2015 +0200
+++ b/src/core/ngx_connection.c	Wed Jun 10 19:08:20 2015 +0200
@@ -369,6 +369,9 @@ ngx_open_listening_sockets(ngx_cycle_t *
ngx_log_t    *log;
 ngx_socket_t  s;
 ngx_listening_t  

Re: Fwd: Windows shmem fix: makes shared memory fully ASLR and DEP compliant (ea. cache zone, limit zone, etc.)

2015-06-09 Thread Sergey Brester

09.06.2015 15:43, Sergey Brester:


09.06.2015 14:44, Maxim Dounin:

I don't see how CreateProcess() bInheritHandles affects handles 
created by worker processes. It is documented to only control whether 
inheritable handles will be inherited by a new process or not. Either 
way, worker processes are not expected to start other processes, so 
you probably shouldn't care at all.


The problem is that some handles are inheritable by default in 
windows. And if any process of a parent/children combination exited 
(e.g. crashed) without closing such a handle, it will not be closed as 
long as the last process of this group is still alive (and can 
potentially inherit this leaked handle).


This bInheritHandles is a very bad thing; I have very bad experience 
with it.


Additionally, I have in the meantime tested the solution with 
CreateProcess/bInheritHandles=1.
select() in each child fails with WSAEINVAL "select() failed (10022: 
An invalid argument was supplied)", even though the flag 
WSA_FLAG_NO_HANDLE_INHERIT was not specified in the master when creating 
the listening socket.


Don't forget LSPs (Layered Service Providers) - although deprecated, 
when certain LSPs are installed, the inherited handles can't be used in 
the child.


But I try to dig deeper...



Re: Fwd: Windows shmem fix: makes shared memory fully ASLR and DEP compliant (ea. cache zone, limit zone, etc.)

2015-06-08 Thread Sergey Brester

Hi,

Back to my wish to fix a problem with multiple workers under windows...

Since we successfully implemented shared memory on windows, it can be used 
to properly share a socket descriptor among multiple workers.


A possible scenario can be found in this MSDN article:

  
https://msdn.microsoft.com/en-en/library/windows/desktop/ms741565%28v=vs.85%29.aspx


I need some help to make the changes more nginx-conformant:

I have not yet decided which process will create/share a socket 
(duplicate the handle and put it into shared memory). The idea was that the 
master creates, in ngx_event_process_init, an initial shared memory area 
with empty socket descriptors. The first worker, inside 
ngx_open_listening_sockets, may then create a socket and duplicate its 
handle to share it with all other workers (saving it to shared memory).


The first problem would be the way windows does it - the (source) 
process must already know each target process (its pid) to duplicate a 
socket handle for it. So whenever a new worker is created, it must receive 
a new shared handle, duplicated specifically for it.


So if it is not possible to do that somehow, only the master process can 
create a socket and duplicate the handle for each worker before creating 
it. But this will make the whole solution more complicated than now.


Another way is to create the child with CreateProcess and 
bInheritHandles=1, and then save the first created socket handle in 
shared memory. But the problem would be that all handles created by the 
children will be shared as well (files, channels, etc.), so if some handle 
is not properly closed before the end of some process, it becomes a leak 
as long as not all processes, including the master, have terminated.



Any ideas are welcome.

Regards,
sebres.



22.04.2015 18:29, Maxim Dounin:

BTW(1): I have interest to fix a problem with multiple workers under 
windows (ex.: 
http://forum.nginx.org/read.php?2,188714,188714#msg-188714 [1]). 
@Maxim Dounin: can you possibly give me more information, what you 
mean here, resp. what it currently depends on (see 
http://forum.nginx.org/read.php?2,188714,212568#msg-212568 [2]).


The most obvious problem with multiple workers on Windows is
listening sockets. As of now, if you start multiple workers on
windows, nginx will open multiple listening sockets - one in each
worker process. These are independent sockets, each opened with
SO_REUSEADDR, and only one of these sockets will be able to accept
connections.

Possible solution to the problem would be to pass listening sockets
from a master process to worker processes via handle inheritance
as available in Windows:

http://msdn.microsoft.com/en-us/library/windows/desktop/ms683463(v=vs.85).aspx 
[3]





Links:
--
[1] http://forum.nginx.org/read.php?2,188714,188714#msg-188714
[2] http://forum.nginx.org/read.php?2,188714,212568#msg-212568
[3] 
http://msdn.microsoft.com/en-us/library/windows/desktop/ms683463(v=vs.85).aspx




Re: Reply: problems when use fastcgi_pass to deliver request to backend

2015-05-29 Thread Sergey Brester
 

Hi, 

It's called fastcgi multiplexing, and nginx currently does not implement
that (and I don't know ...). 

There were already several discussions about that, so please read here.
[22] 

In short, very fast fastcgi processing may be implemented without
multiplexing (it should also be event-driven). 

Regards,
sebres. 

On 29.05.2015 09:58, 林谡 wrote: 

 /* we support the single request per connection */

 case ngx_http_fastcgi_st_request_id_hi:
     if (ch != 0) {
         ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                       "upstream sent unexpected FastCGI "
                       "request id high byte: %d", ch);
         return NGX_ERROR;
     }
     state = ngx_http_fastcgi_st_request_id_lo;
     break;

 case ngx_http_fastcgi_st_request_id_lo:
     if (ch != 1) {
         ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                       "upstream sent unexpected FastCGI "
                       "request id low byte: %d", ch);
         return NGX_ERROR;
     }
     state = ngx_http_fastcgi_st_content_length_hi;
     break;
 
 By reading the source code, I saw the reason - so can nginx support multiple 
 requests per connection in the future? 
 
 From: 林谡 
 Sent: 2015-05-29 11:37
 To: 'nginx-devel@nginx.org'
 Subject: problems when use fastcgi_pass to deliver request to backend 
 
 Hi, 
 
 I wrote a fastcgi server and use nginx to pass requests to my server. It has 
 worked so far. 
 
 But I found a problem: nginx always sets requestId = 1 when sending a fastcgi 
 record. 
 
 I was a little upset by this because, according to the fastcgi protocol, a 
 web server can send fastcgi records belonging to different requests 
 simultaneously, with requestIds that differ and stay unique. I really need 
 this feature, because requests could then be handled simultaneously over just 
 one connection. 
 
 Can I find a way out? 
 
Links:
--
[22] http://forum.nginx.org/read.php?2,237158

execution of post_action each time breaks a keepalive connection to upstream

2015-05-07 Thread Sergey Brester

Hi all,

I've found that use of post_action @named_post always (each time) 
closes the upstream connection (despite keepalive).


I've been using fastcgi in @named_post. I think it is somehow related to 
r->header_only=1,
because the fastcgi request does not wait for the end-request record from 
fastcgi,
so the request ends and the connection is closed immediately after 
"http fastcgi header done" is logged.


Facts are (as debug shows):

 - r->keepalive == 1, but in ngx_http_upstream_free_keepalive_peer 
u->keepalive == 0, so it does goto invalid;

   so, the connection will not be saved.

 - I think that in fastcgi u->keepalive will be set to 1 only while 
processing the end-request record, and
   possibly neither ngx_http_fastcgi_input_filter nor 
ngx_http_fastcgi_non_buffered_filter will
   be executed; but what is sure is that the line u->keepalive = 1 is never 
executed for a post_action request:


if (f->state == ngx_http_fastcgi_st_padding) {
    if (f->type == NGX_HTTP_FASTCGI_END_REQUEST) {
        ...
        if (f->pos + f->padding == f->last) {
            ...
            u->keepalive = 1;

 - I don't think this is fastcgi-only, nor that it never works, because a 
similar
   execution plan is used by auth_request (header_only also, but all of that 
via a subrequest),
   and there u->keepalive is 1, so it holds resp. saves the connection and 
uses it again later.
   But as I said, it uses a subrequest, while post_action uses 
ngx_http_named_location.


Keep-alive is very, very important for me; unfortunately I cannot give 
it up.


I could rewrite post_action on top of a subrequest, but imho that's not a 
correct solution for this.


In which direction should I dig to fix this issue?

Possibly it is not r->header_only, but one of the "if (r->post_action) 
..." checks or anything else that prevents execution of the input_filter...


Any suggestions are welcome...

Regards,
sebres.



Re: New feature request: Docker files for Power platform (SLES, RHEL, Ubuntu)

2015-05-07 Thread Sergey Brester

Hi,

It is a mercurial (hg) repo; for contributions to it please read here:

   http://nginx.org/en/docs/contributing_changes.html

In short, it should be a changeset (created with hg export)...
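
For example (a minimal sketch; the commit message and file name are
illustrative):

    hg commit -m "Dockerfile for building nginx on ppc64le"
    hg export tip > nginx-dockerfile.patch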

BTW: I don't know whether the nginx developers will want it, but even if 
not (and you possibly have a github account), please make a pull request 
at:

   https://github.com/sebres/nginx

Or, if you have your own repository on github, please let me know.

Thx,
sebres.

07.05.2015 12:46, Abhishek Kumar:


Hi,
I have written a dockerfile for building nginx from source. I have built 
and tested the source code available on git successfully through the 
dockerfile for the PPC64LE architecture. The dockerfile runs successfully 
on the following platforms:

Ubuntu 14.10
SUSE Linux 12.0
RHEL 7.1

Kindly suggest where (i.e. to which repository) I can contribute this 
dockerfile for nginx.


Regards,
Abhishek Kumar



Re: execution of post_action each time breaks a keepalive connection to upstream

2015-05-07 Thread Sergey Brester

Hello!

On Thu, May 07, 2015 at 12:51:33PM +0200, Sergey Brester wrote:

Hi all, I've found that use of post_action @named_post always (each 
time) closes a upstream connection (despite of keepalive).


In short:

- post_action is a dirty hack and undocumented on purpose, avoid
using it;


Undocumented permanently, or just not yet?
Because meanwhile half the world is using it... I know at least a 
dozen companies using it.




- as long as an upstream response is complete after receiving a
header, upstream keepalive should work even with post_action; it
might be tricky to ensure this with fastcgi though.


What confuses me is that a header_only subrequest to fastcgi works fine!
And what I meant was: it may be not post_action itself, but possibly all 
header_only requests via ngx_http_named_location etc.; in that case many 
things are affected (including third-party modules).


Thx, Serg.



Re: execution of post_action each time breaks a keepalive connection to upstream

2015-05-07 Thread Sergey Brester

It was never documented, and will never be documented. Well, may
be we'll add something like "post_action: don't use it unless you
understand what are you doing" to let people know that this
directive should not be used.


It would be a proper pity if that some day gets the chop :(
Because I see no real options to easily implement it (with its current 
advantages) as a module without deep integration into the nginx core.



A connections to an upstream server can be only kept alive if it
is in some consistent state and no outstanding data are expected
in it. On the other hand, nginx doesn't try to read anything in
addition to what it already read during normal upstream response
parsing.



As a result, if sending of a response is stopped once nginx got a
header (this happens in case of post_action and in some cases with
r->header_only), nginx will only be able to cache a connection if
it's already in a consistent state. This may be the case with
HTTP if Content-Length is explicitly set to 0 in the response
headers and in some other cases (see ngx_http_proxy_process_header()
for details).



Quick look suggests that with FastCGI it doesn't seem to be
possible at all, at least with current code, as nginx parses
headers from FCGI_STDOUT records, but at least a FCGI_END_REQUEST
record is additionally expected.


I know all that... And both fcgi upstreams work properly (header-only 
also) and send the end-request record afterwards. (Just checked again.)
The problem is that it does not wait (I know it doesn't really wait) for 
the proper end-request and does not set u->keepalive to 1, so the worker 
closes the connection.


But I will find it out and fix anyway :)
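
For reference, what remains to be drained here is the FCGI_END_REQUEST
record; its fixed 8-byte record header looks like this (a sketch per the
FastCGI spec, the struct name is mine):

    typedef struct {
        u_char  version;
        u_char  type;               /* FCGI_END_REQUEST == 3 */
        u_char  request_id_hi;
        u_char  request_id_lo;
        u_char  content_length_hi;
        u_char  content_length_lo;
        u_char  padding_length;
        u_char  reserved;
    } fcgi_record_header_t;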



Re: nginx http-core enhancement: named location in subrequests + directive use_location

2015-04-30 Thread Sergey Brester
 

On 30.04.2015 15:55, Maxim Dounin wrote: 

 Hello!
 
 On Wed, Apr 29, 2015 at 07:22:51PM +0200, Sergey Brester wrote:
 
 [...]
 And how it's expected to be processed in a named location if r->uri is 
 @...? The function ngx_http_core_find_named_location, if the location was 
 found, sets r->loc_conf = (*clcfp)->loc_conf and returns NGX_OK.

The problem is that the r->uri will be illegal.

I think not for an internal request?! 

if (r->internal && r->uri... 

 Note that named locations are ones where requests are handled with their 
 original URIs unmodified. This, in particular, allows to use the original URI 
 to select a file, or in a request to the upstream server, etc. With what you 
 are trying to do it doesn't look different from a static location. As I wrote 
 already (of course it is not a static location): for example, instead of 
 # auth_request /auth_loc/; you can use the named location @auth_loc and 
 write now: auth_request @auth_loc;

So what will happen if the location is:

 location @auth_loc {
 proxy_pass http://auth.backend;
 }

Which URI will be used in a HTTP request to auth.backend? As far 
as I see, with your code it will result in a

GET @auth_loc HTTP/1.0

request to the upstream server, and immediate 400 response is what 
will happen.

Yes, in this case the backend upstream should support such URIs, or more
likely it should be overwritten in the named location... 

But for subrequests (e.g. for auth_request) the original document_uri
resp. script_name is more interesting anyway...
And if proxy_pass should use a real URI with /, you can still use the
normal naming convention for a non-named internal location. 

Here is my config for example: 

 location @cgisvc_auth1 {
 fastcgi_keep_conn on;
 include fastcgi_params;
 fastcgi_pass_request_body off;
 fastcgi_pass http_cgi_backend;
 } 

 location @cgisvc_auth2 {
 proxy_pass http://localhost:8088/;
 } 

 location /test1 {
 auth_request @cgisvc_auth1;
 } 

 location /test2 {
 auth_request @cgisvc_auth2;
 } 


Allow more than one challenge - multiple authenticate response-header [rfc2616 sec14.47]

2015-04-29 Thread Sergey Brester
 

Hi, 

enclosed you will find an attached changeset that allows more than one
authentication challenge - multiple WWW-Authenticate response headers
[rfc2616 sec14.47].

Implemented for auth_request and http upstream (e.g. backends).

If you want to support it in your own authentication module, just call
`ngx_http_upstream_transmit_headers` after setting the request's
`headers_out.www_authenticate`, like in both modules implemented here (see
the attached changeset).
For upstreams, simply add multiple header entries with the WWW-Authenticate
challenges supported by your module.
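
For illustration, a minimal sketch of such a call in a third-party module
(mirroring the auth_request change in the attached changeset; u stands for
the module's upstream and is an assumption here):

    r->headers_out.www_authenticate = ho;

    /* forward any additional WWW-Authenticate challenges, filtering out
       the header already copied above: */
    if (ngx_http_upstream_transmit_headers(r, &u->headers_in.headers.part,
            ho) != NGX_OK)
    {
        return NGX_ERROR;
    }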

PS. If someone needs a git version of it:
https://github.com/sebres/nginx/commits/hg-sb-mod [1] 

Regards, 

sebres. 

Links:
--
[1] https://github.com/sebres/nginx/commits/hg-sb-mod
# HG changeset patch
# User Serg G. Brester (sebres) serg.bres...@sebres.de
# Date 1430300326 -7200
#  Wed Apr 29 11:38:46 2015 +0200
# Node ID 10c2a7b3420bc03fc31b969a07daa3f354676a0e
# Parent  43135346275c76add5bf953024a3d244f04184ba
Allow more than one challenge - multiple authenticate response-header [rfc2616 sec14.47]: currently implemented for auth_request (and upstream) only;

Header example:
  WWW-Authenticate: NTLM
  WWW-Authenticate: Digest realm="...", qop="...", nonce="...", opaque="..."
  WWW-Authenticate: Basic realm="..."

diff -r 43135346275c -r 10c2a7b3420b src/http/modules/ngx_http_auth_request_module.c
--- a/src/http/modules/ngx_http_auth_request_module.c	Tue Apr 28 15:34:33 2015 +0200
+++ b/src/http/modules/ngx_http_auth_request_module.c	Wed Apr 29 11:38:46 2015 +0200
@@ -156,6 +156,16 @@ ngx_http_auth_request_handler(ngx_http_r
 *ho = *h;
 
 r-headers_out.www_authenticate = ho;
+
+/* transmit all authenticate headers (ex: multiple authenticate
+ * challenges [rfc2616 sec14.47]): */
+if (ngx_http_upstream_transmit_headers(r, !sr->upstream ?
+&r->headers_in.headers.part : 
+&sr->upstream->headers_in.headers.part,
+ho) != NGX_OK)
+{
+return NGX_ERROR;
+}
 }
 
 return ctx-status;
diff -r 43135346275c -r 10c2a7b3420b src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c	Tue Apr 28 15:34:33 2015 +0200
+++ b/src/http/ngx_http_upstream.c	Wed Apr 29 11:38:46 2015 +0200
@@ -2293,6 +2293,57 @@ ngx_http_upstream_test_next(ngx_http_req
 }
 
 
+ngx_int_t
+ngx_http_upstream_transmit_headers(ngx_http_request_t *r, 
+ngx_list_part_t *part, ngx_table_elt_t *filter)
+{
+ngx_table_elt_t   *h, *ho;
+ngx_uint_t        i;
+h = part->elts;
+
+for (i = 0; /* void */; i++) {
+
+if (i >= part->nelts) {
+if (part->next == NULL) {
+break;
+}
+part = part->next;
+h = part->elts;
+i = 0;
+}
+if (h[i].hash == 0) {
+continue;
+}
+/* filter same header: */
+if (&h[i] == filter) {
+continue;
+}
+/* filter by key name: */
+if (h[i].key.len != filter->key.len || 
+ngx_strncasecmp(h[i].key.data, filter->key.data,
+filter->key.len) != 0)
+{
+continue;
+}
+/* filter same value: */
+if (h[i].value.len == filter->value.len && 
+ngx_strncasecmp(h[i].value.data, filter->value.data,
+filter->value.len) == 0)
+{
+continue;
+}
+ho = ngx_list_push(&r->headers_out.headers);
+if (ho == NULL) {
+return NGX_ERROR;
+}
+
+*ho = h[i];
+}
+
+return NGX_OK;
+}
+
+
 static ngx_int_t
 ngx_http_upstream_intercept_errors(ngx_http_request_t *r,
 ngx_http_upstream_t *u)
@@ -2339,6 +2390,16 @@ ngx_http_upstream_intercept_errors(ngx_h
 *h = *u->headers_in.www_authenticate;
 
 r->headers_out.www_authenticate = h;
+
+/* transmit all authenticate headers (ex: multiple authenticate
+ * challenges [rfc2616 sec14.47]): */
+if (ngx_http_upstream_transmit_headers(r, 
+&u->headers_in.headers.part, h) != NGX_OK)
+{
+ngx_http_upstream_finalize_request(r, u,
+   NGX_HTTP_INTERNAL_SERVER_ERROR);
+return NGX_OK;
+}
 }
 
 #if (NGX_HTTP_CACHE)
diff -r 43135346275c -r 10c2a7b3420b src/http/ngx_http_upstream.h
--- a/src/http/ngx_http_upstream.h	Tue Apr 28 15:34:33 2015 +0200
+++ b/src/http/ngx_http_upstream.h	Wed Apr 29 11:38:46 2015 +0200
@@ -404,6 +404,8 @@ ngx_int_t ngx_http_upstream_hide_headers
 ngx_http_upstream_conf_t *conf, ngx_http_upstream_conf_t *prev,
 ngx_str_t *default_hide_headers, ngx_hash_init_t *hash);
 
+ngx_int_t ngx_http_upstream_transmit_headers(ngx_http_request_t *r,
+ngx_list_part_t *part, 

Re: nginx http-core enhancement: named location in subrequests + directive use_location

2015-04-29 Thread Sergey Brester
 

On 29.04.2015 15:48, Maxim Dounin wrote: 

 Hello!
 
 On Wed, Apr 29, 2015 at 09:18:11AM +0200, Sergey Brester wrote:
 
 Hi, enclosed you will find an attached changeset, that: - allows to fast use 
 of named location in sub requests, such as auth_request, etc. Currently no 
 named location was possible in any sub requests, real (or internal) 
 locations only. # now you can use named location in sub requests: # 
 auth_request /auth_loc/; auth_request @auth_loc; - in addition, a second 
 mini-commit (37d7786e7015) with new directive use_location as alias or 
 replacement for try_files with no file argument and without checking the 
 existence of file(s): # try_files  @loc use_location @loc It was allready 
 more times discussed (goto location, etc.). PS. If someone needs a git 
 version of it: https://github.com/sebres/nginx/commits/hg-sb-mod [1] [1 [1]] 
 Regards, sebres. Links: -- [1] 
 https://github.com/sebres/nginx/commits/hg-sb-mod [1]
 
 # HG changeset patch # User Serg G. Brester (sebres) 
 serg.bres...@sebres.de # Date 1430227790 -7200 # Tue Apr 28 15:29:50 2015 
 +0200 # Node ID 37d7786e7015f8a784e6a4dc3f88f8a7573a4c08 # Parent 
 96e22e4f1b03ff15a774c6ed34d74b897af32c55 http-core: new directive 
 use_location as replacement or alias for try_files with no file argument 
 and without checking the existence of file(s): `use_location @loc` replaces 
 `try_files "" @loc`
 
 Something like this was previously discussed more than once, and 
 the short answer is no.
 
 # HG changeset patch # User Serg G. Brester (sebres) 
 serg.bres...@sebres.de # Date 1430228073 -7200 # Tue Apr 28 15:34:33 2015 
 +0200 # Node ID 43135346275c76add5bf953024a3d244f04184ba # Parent 
 37d7786e7015f8a784e6a4dc3f88f8a7573a4c08 http-core: allow to fast use of 
 named location in (internal) sub requests, ex.: auth_request, etc.; diff -r 
 37d7786e7015 -r 43135346275c src/http/ngx_http_core_module.c --- 
 a/src/http/ngx_http_core_module.c Tue Apr 28 15:29:50 2015 +0200 +++ 
 b/src/http/ngx_http_core_module.c Tue Apr 28 15:34:33 2015 +0200 @@ -22,6 
 +22,7 @@ typedef struct { static ngx_int_t 
 ngx_http_core_find_location(ngx_http_request_t *r); +static ngx_int_t 
 ngx_http_core_find_named_location(ngx_http_request_t *r, ngx_str_t *name); 
 static ngx_int_t ngx_http_core_find_static_location(ngx_http_request_t *r, 
 ngx_http_location_tree_node_t *node); @@ -1542,6 +1543,16 @@ 
 ngx_http_core_find_location(ngx_http_req noregex = 0; #endif + /* already 
 internal - check is resp. can be named
location - search it */ + + if (r->internal && r->uri.len >= 1 && 
r->uri.data[0] == '@') { + + if (ngx_http_core_find_named_location(r, &r->uri) 
== NGX_OK) { + + return NGX_OK; + } + } +
 
 And how it's expected to be processed in a named location if 
 r->uri is @...?

The function ngx_http_core_find_named_location, if the location was found, sets
r->loc_conf = (*clcfp)->loc_conf and returns NGX_OK.

 Note that named locations are ones where requests are handled with their 
 original URIs unmodified. This, in particular, allows to use the original URI 
 to select a file, or in a request to the upstream server, etc. With what you 
 are trying to do it doesn't look different from a static location

As I wrote it already (of course it is not a static location):

Example, instead of : 

# auth_request /auth_loc/; 

You can use named location @auth_loc and write now: 

auth_request @auth_loc; 

Regards,
sebres. 

Links:
--
[1] https://github.com/sebres/nginx/commits/hg-sb-mod

Re: Fwd: Windows shmem fix: makes shared memory fully ASLR and DEP compliant (ea. cache zone, limit zone, etc.)

2015-04-27 Thread Sergey Brester
 

Hi, 

Your patch looks very interesting. 

But small comment by the way: 

 So the answer is no, right? ...

I had no problem with reload at all... What do I do so differently?
The last test was repeated 2000 times: added and removed up to 10 zones
and reloaded with '-s reload'.
I see no problem. ??? 

On 27.04.2015 03:25, Maxim Dounin wrote: 

 Hello!
 
 On Fri, Apr 24, 2015 at 01:21:41AM +0200, Sergey Brester wrote:
 Hello, There are lots of style problems which need cleanup. The newer, 
 nginx-style compliant version of changeset (shmem fix2.patch) was already 
 posted to nginx-devel (Thx, Filipe DA SILVA). You will find it also on under 
 https://github.com/sebres/nginx/commit/e7c149f1ad76b9d850fb59ecc479d4a658c13e04
  [1] [4].

It still needs cleanup, but I don't think it worth detailed 
comments as there are more important things to be addressed.

 Such things belong to ngx_win32_init.c. It may be also good enough to use 
 ngx_pagesize, which is already set there.
 Agree, but some (little) things can reside in same module, where these are 
 used (noted as todo). 
 
 This probably should be somewhere at ngx_win32_config.h.
 Imho, belong definitelly in shmem module. But don't want to fight about it :)

The question here is a balance between code locality and 
visibility of platform-dependent constants for modifications. E.g., 
we've recently added 64-bit constants into ngx_win32_config.h 
to allow compilation of 64-bit windows binaries - and I'm 
perfectly sure shmem code would have been forgotten assuming it 
used just magic addresses as in your patch.

May be some constants with appropriate comments at the top of the 
ngx_shmem.c file will be a good compromise solution.

 It might be better idea to save the address at the end of the memory block 
 allocated...
 And for example stop to work of other new worker (when unfortunately 
 overwriting of it occurs). That brings at once many problem and restrictions 
 later (for example more difficult implementing of shared area resizing, etc). 
 And for what? To save 2 line code for decrement addr pointer in ngx_shm_free?

It won't be overwritten as long as you allocate additional bytes 
for it (as of now) and shm->size is not incremented. And yes, 
saving 2 lines of code is important - especially if it happens 
because the whole approach becomes simplier.

Anyway, I think that we should try to use the address which is 
already present, see other comments. This will also avoid 
unneeded restrictions on mappings where pointers are not stored.
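
A minimal sketch of the variant under discussion (assuming the mapping was
created with sizeof(void *) extra bytes that are not counted into
shm->size):

    /* master, after a successful initial mapping: remember the base
       address in the extra slot behind the user-visible zone, instead
       of shifting shm->addr itself */
    *(void **) (shm->addr + shm->size) = shm->addr;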

 Have you tried using an arbitrary addresses (with subsequent remap using 
 proper base address for already existing mappings)? Does is work noticeably 
 worse?
 I've fought against ASLR since 2008 resp. x64 was released and it was 
 essential. This specific solution is extensively tested and runs in 
 production also without any problems (not only in nginx).

So the answer is no, right? Either way, I tried it myself, and 
it seems to be noticeably worse - in my tests in most cases it 
can't use the address from the master process in workers due to 
conflicts with other allocations. When combined with base address 
selection, it seems to be much more stable.

Below is a patch which takes into account above comments, and 
also fixes a problem observed with your patch on configuration 
reloads, e.g., when something like:

 proxy_cache_path cache1 keys_zone=cache1:10m;

is changed to

 proxy_cache_path cache0 keys_zone=cache0:10m;
 proxy_cache_path cache1 keys_zone=cache1:10m;

(i.e., when mappings are created in master and worker processes in 
a different order, resulting in conflicts between a mapping we are 
trying to create at some base address with a mapping already 
remapped to this address).

Review, comments and testing appreciated.

# HG changeset patch
# User Maxim Dounin mdou...@mdounin.ru
# Date 1430097438 -10800
# Mon Apr 27 04:17:18 2015 +0300
# Node ID 89cd9c63c58e5d7c474d317f83584779e2342ee3
# Parent 859ce1c41f642c39d2db9741bdd305f3ee6507f5
Win32: shared memory base addresses and remapping.

Two mechanisms are implemented to make it possible to store pointers
in shared memory on Windows, in particular on Windows Vista and later
versions with ASLR:

- The ngx_shm_remap() function added to allow remapping of a shared memory
  zone to the address originally used for it in the master process. While
  important, it doesn't solve the problem by itself as in many cases it's
  not possible to use the address because of conflicts with other
  allocations.

- We now create mappings at the same address in all processes by starting
  mappings at predefined addresses normally unused by newborn processes.

These two mechanisms combined allow to use shared memory on Windows
almost without problems, including reloads.

Based on the patch by Sergey Brester:
http://mailman.nginx.org/pipermail/nginx-devel/2015-April/006836.html
[2]
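
A minimal sketch of the remap idea (illustrative only, not the patch's
exact code):

    /* worker: try to map the view at the base address the master used,
       so that pointers stored inside the zone stay valid */
    addr = MapViewOfFileEx(shm->handle, FILE_MAP_WRITE, 0, 0, 0, shm->addr);

    if (addr == NULL) {
        /* the range is occupied in this process - remapping failed */
    }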

diff -r 859ce1c41f64 -r 89cd9c63c58e src/core/ngx_cycle.c
--- a/src/core/ngx_cycle.c Mon Apr 27 03

Re: Fwd: Windows shmem fix: makes shared memory fully ASLR and DEP compliant (ea. cache zone, limit zone, etc.)

2015-04-27 Thread Sergey Brester
 

Hi, 

I have little bit tested your changeset. 
Looks good: 1 successful reloads with randomly adding/removing zones
up to 1GB (Win7 x64 and 2K8-R2 x64).

Thx and regards, 
sebres. 

P.S. If someone needs a git version of it:
https://github.com/sebres/nginx/commit/2d549c958cf4fa53eeacec13b410946bbe053544
[3] 

-- 

On 27.04.2015 03:25, Maxim Dounin wrote: 

 Hello!
 
 On Fri, Apr 24, 2015 at 01:21:41AM +0200, Sergey Brester wrote:
 Hello, There are lots of style problems which need cleanup. The newer, 
 nginx-style compliant version of changeset (shmem fix2.patch) was already 
 posted to nginx-devel (Thx, Filipe DA SILVA). You will find it also on under 
 https://github.com/sebres/nginx/commit/e7c149f1ad76b9d850fb59ecc479d4a658c13e04
  [1] [4].

It still needs cleanup, but I don't think it worth detailed 
comments as there are more important things to be addressed.

 Such things belong to ngx_win32_init.c. It may be also good enough to use 
 ngx_pagesize, which is already set there.
 Agree, but some (little) things can reside in same module, where these are 
 used (noted as todo). 
 
 This probably should be somewhere at ngx_win32_config.h.
 Imho, belong definitelly in shmem module. But don't want to fight about it :)

The question here is a balance between code locality and 
visibility of platform-dependent constants for modifications. E.g., 
we've recently added 64-bit constants into ngx_win32_config.h 
to allow compilation of 64-bit windows binaries - and I'm 
perfectly sure shmem code would have been forgotten assuming it 
used just magic addresses as in your patch.

May be some constants with appropriate comments at the top of the 
ngx_shmem.c file will be a good compromise solution.

 It might be better idea to save the address at the end of the memory block 
 allocated...
 And for example stop to work of other new worker (when unfortunately 
 overwriting of it occurs). That brings at once many problem and restrictions 
 later (for example more difficult implementing of shared area resizing, etc). 
 And for what? To save 2 line code for decrement addr pointer in ngx_shm_free?

It won't be overwritten as long as you allocate additional bytes 
for it (as of now) and shm-size is not incremented. And yes, 
saving 2 lines of code is important - especially if it happens 
because the whole approach becomes simplier.

Anyway, I think that we should try to use the address which is 
already present, see other comments. This will also avoid 
unneeded restrictions on mappings where pointers are not stored.

 Have you tried using arbitrary addresses (with a subsequent remap using 
 the proper base address for already existing mappings)? Does it work 
 noticeably worse?
 I've been fighting ASLR since 2008, when x64 was released and this 
 became essential. This specific solution is extensively tested and has 
 run in production without any problems (not only in nginx).

So the answer is no, right? Either way, I tried it myself, and 
it seems to be noticeably worse - in my tests in most cases it 
can't use the address from the master process in workers due to 
conflicts with other allocations. When combined with base address 
selection, it seems to be much more stable.

Below is a patch which takes into account the above comments, and 
also fixes a problem observed with your patch on configuration 
reloads, e.g., when something like:

 proxy_cache_path cache1 keys_zone=cache1:10m;

is changed to

 proxy_cache_path cache0 keys_zone=cache0:10m;
 proxy_cache_path cache1 keys_zone=cache1:10m;

(i.e., when mappings are created in master and worker processes in 
a different order, resulting in conflicts between a mapping we are 
trying to create at some base address with a mapping already 
remapped to this address).

Review, comments and testing appreciated.

# HG changeset patch
# User Maxim Dounin mdou...@mdounin.ru
# Date 1430097438 -10800
# Mon Apr 27 04:17:18 2015 +0300
# Node ID 89cd9c63c58e5d7c474d317f83584779e2342ee3
# Parent 859ce1c41f642c39d2db9741bdd305f3ee6507f5
Win32: shared memory base addresses and remapping.

Two mechanisms are implemented to make it possible to store pointers
in shared memory on Windows, in particular on Windows Vista and later
versions with ASLR:

- The ngx_shm_remap() function added to allow remapping of a shared memory
  zone to the address originally used for it in the master process. While
  important, it doesn't solve the problem by itself as in many cases it's
  not possible to use the address because of conflicts with other
  allocations.

- We now create mappings at the same address in all processes by starting
  mappings at predefined addresses normally unused by newborn processes.

These two mechanisms combined allow using shared memory on Windows
almost without problems, including reloads.

Based on the patch by Sergey Brester:
http://mailman.nginx.org/pipermail/nginx-devel/2015-April/006836.html
[2]

diff -r 859ce1c41f64 -r 89cd9c63c58e src/core/ngx_cycle.c
--- a/src/core

Re: Fwd: Windows shmem fix: makes shared memory fully ASLR and DEP compliant (ea. cache zone, limit zone, etc.)

2015-04-23 Thread Sergey Brester
 

Hello, 

 There are lots of style problems which need cleanup.

The newer, nginx-style-compliant version of the changeset (shmem
fix2.patch) was already posted to nginx-devel (Thx, Filipe DA SILVA).
You will find it also under
https://github.com/sebres/nginx/commit/e7c149f1ad76b9d850fb59ecc479d4a658c13e04
[4]. 

 Such things belong to ngx_win32_init.c. It may also be good 
 enough to use ngx_pagesize, which is already set there.

 Agree, but some (little) things can reside in the same module where they
are used (noted as todo). 

 This probably should be somewhere at ngx_win32_config.h.

Imho, it definitely belongs in the shmem module. But I don't want to fight
about it :) 

 It might be better idea to save the address at the end of the 
 memory block allocated...

And, for example, have another new worker stop working when that address
is accidentally overwritten? That immediately brings many problems and
restrictions later (for example, it makes implementing resizing of the
shared area more difficult, etc.). And for what? To save 2 lines of code
decrementing the addr pointer in ngx_shm_free? 

 Have you tried using arbitrary addresses (with a subsequent 
 remap using the proper base address for already existing mappings)? 
 Does it work noticeably worse?

I've been fighting ASLR since 2008, when x64 was released and this
became essential. This specific solution is extensively tested and has
run in production without any problems (not only in nginx). 

Regards,
sebres. 

On 22.04.2015 18:29, Maxim Dounin wrote: 

 Hello!
 
 On Wed, Apr 22, 2015 at 09:45:45AM +0200, Sergey Brester wrote:
 
 enclosed you will find an attached changeset that fixes an ASLR/DEP problem 
 on windows platforms (example Win 7/2008 x64). To find the shared addr offset 
 with ASLR, we have successfully used the same or a similar solution in 
 various open source projects (ex.: postgresql etc.). Also, nginx with the 
 suggested patch has worked fine for over 5 months in production on several 
 machines (Win-2k8-R2).
 
 Interesting, thanks, though the patch certainly needs some 
 cleanup. See below for some comments.
 
 BTW(1): I am interested in fixing a problem with multiple workers under windows 
 (ex.: http://forum.nginx.org/read.php?2,188714,188714#msg-188714 [1] [1]). 
 @Maxim Dounin: can you possibly give me more information on what you mean 
 here, i.e. what it currently depends on (see 
 http://forum.nginx.org/read.php?2,188714,212568#msg-212568 [2] [2])?
 
 The most obvious problem with multiple workers on Windows is 
 listening sockets. As of now, if you start multiple workers on 
 windows, nginx will open multiple listening sockets - one in each 
 worker process. These are independent sockets, each opened with 
 SO_REUSEADDR, and only one of these sockets will be able to accept 
 connections.
 
 A possible solution to the problem would be to pass listening sockets 
 from a master process to worker processes via handle inheritance 
 as available in Windows:
 
 http://msdn.microsoft.com/en-us/library/windows/desktop/ms683463(v=vs.85).aspx
  [3]
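A bare-bones sketch of that approach (hypothetical code, not what nginx
does; the --listen command-line convention is made up for illustration):

```c
#include <winsock2.h>
#include <windows.h>
#include <stdio.h>

/* Mark the listening socket inheritable and spawn a worker with
 * bInheritHandles = TRUE, so the child sees the same socket handle. */
static BOOL
spawn_worker(SOCKET ls, const char *exe)
{
    STARTUPINFO          si;
    PROCESS_INFORMATION  pi;
    char                 cmd[MAX_PATH + 64];

    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);

    if (!SetHandleInformation((HANDLE) ls, HANDLE_FLAG_INHERIT,
                              HANDLE_FLAG_INHERIT))
    {
        return FALSE;
    }

    /* pass the handle value to the child, e.g. on the command line */
    sprintf_s(cmd, sizeof(cmd), "\"%s\" --listen=%Iu", exe, (size_t) ls);

    if (!CreateProcess(NULL, cmd, NULL, NULL, TRUE, 0, NULL, NULL,
                       &si, &pi))
    {
        return FALSE;
    }

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);

    return TRUE;
}
```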
 
 [...]
 
 # HG changeset patch
 # User Serg G. Brester (sebres) serg.bres...@sebres.de
 # Date 1429643760 -7200
 #  Tue Apr 21 21:16:00 2015 +0200
 # Branch sb-mod
 # Node ID f759f7084204bb1f3c2b0a8023a8f03a81da9a0b
 # Parent 1bdfceda86a99a4dc99934181d2f9e2632003ca8
 Windows shmem fix: makes shared memory fully ASLR and DEP compliant (ea. cache zone, limit zone, etc.)
 diff -r 1bdfceda86a9 -r f759f7084204 src/os/win32/ngx_shmem.c
 --- a/src/os/win32/ngx_shmem.c  Mon Apr 20 17:36:51 2015 +0300
 +++ b/src/os/win32/ngx_shmem.c  Tue Apr 21 21:16:00 2015 +0200
 @@ -9,11 +9,30 @@
  #include <ngx_core.h>
 +static volatile size_t g_addrbaseoffs = 0;
 +
 +static volatile int g_SystemInfo_init = 0;
 +SYSTEM_INFO g_SystemInfo;
 +
 +int ngx_shm_init_once() {
 +  if (g_SystemInfo_init)
 +    return 1;
 +  g_SystemInfo_init = 1;
 +  GetSystemInfo(&g_SystemInfo);
 +  return 1;
 +}
 
 Such things belong to ngx_win32_init.c. It may also be good 
 enough to use ngx_pagesize, which is already set there.
 
 +
 +/* --- */
 +
 
 There are lots of style problems which need cleanup.
 
 ngx_int_t
 ngx_shm_alloc(ngx_shm_t *shm)
 {
     u_char    *name;
     uint64_t   size;
 +   u_char    *addr;
 +   u_char    *addrbase;
 +
 +   ngx_shm_init_once();
     name = ngx_alloc(shm->name.len + 2 + NGX_INT32_LEN, shm->log);
     if (name == NULL) {
 @@ -26,6 +45,9 @@
     size = shm->size;
 +   // increase for base address, will be saved inside shared mem:
 +   size += sizeof(addr);
 +
 
 As of now, uses of ngx_shm_alloc() already contain an address in the 
 shared memory region where addresses are important. It may be worth 
 trying to use it, if possible (maybe with some additional call to 
 redo the mapping with the address specified).
 
     shm->handle = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                     (u_long) (size >> 32),
                                     (u_long) (size & 0xffffffff),
 @@ -34,7 +56,7 @@
     if (shm->handle == NULL) {
         ngx_log_error(NGX_LOG_ALERT, shm->log, ngx_errno,
                       "CreateFileMapping(%uz, %s) failed",
 -                     shm->size, name);
 +                     size, name

Fwd: Windows shmem fix: makes shared memory fully ASLR and DEP compliant (ea. cache zone, limit zone, etc.)

2015-04-22 Thread Sergey Brester
 

Hi, 

enclosed you will find an attached changeset that fixes an ASLR/DEP
problem on windows platforms (example Win 7/2008 x64). 

To find the shared addr offset with ASLR, we have successfully used the
same or a similar solution in various open source projects (ex.: postgresql
etc.). Also, nginx with the suggested patch has worked fine for over 5
months in production on several machines (Win-2k8-R2). 

BTW(1): I am interested in fixing a problem with multiple workers under
windows (ex.: http://forum.nginx.org/read.php?2,188714,188714#msg-188714
[1]).
@Maxim Dounin: could you possibly give me more information on what you
mean here, i.e. what it currently depends on (see
http://forum.nginx.org/read.php?2,188714,212568#msg-212568 [2])? 

BTW(2): I would like to fix iocp under windows as well. Thanks in advance
for any information about it.

P.S. I speak Russian and German fluently...

Regards, 

Serg G. Brester (sebres) 
-
 

Links:
--
[1] http://forum.nginx.org/read.php?2,188714,188714#msg-188714
[2] http://forum.nginx.org/read.php?2,188714,212568#msg-212568
# HG changeset patch
# User Serg G. Brester (sebres) serg.bres...@sebres.de
# Date 1429643760 -7200
#  Tue Apr 21 21:16:00 2015 +0200
# Branch sb-mod
# Node ID f759f7084204bb1f3c2b0a8023a8f03a81da9a0b
# Parent  1bdfceda86a99a4dc99934181d2f9e2632003ca8
Windows shmem fix: makes shared memory fully ASLR and DEP compliant (ea. cache zone, limit zone, etc.)

diff -r 1bdfceda86a9 -r f759f7084204 src/os/win32/ngx_shmem.c
--- a/src/os/win32/ngx_shmem.c	Mon Apr 20 17:36:51 2015 +0300
+++ b/src/os/win32/ngx_shmem.c	Tue Apr 21 21:16:00 2015 +0200
@@ -9,11 +9,30 @@
 #include <ngx_core.h>
 
 
+static volatile size_t g_addrbaseoffs = 0;
+
+static volatile int g_SystemInfo_init = 0;
+SYSTEM_INFO g_SystemInfo;
+
+int ngx_shm_init_once() {
+  if (g_SystemInfo_init)
+return 1;
+  g_SystemInfo_init = 1;
+  GetSystemInfo(&g_SystemInfo);
+  return 1;
+}
+
+/* --- */
+
 ngx_int_t
 ngx_shm_alloc(ngx_shm_t *shm)
 {
     u_char    *name;
     uint64_t   size;
+    u_char    *addr;
+    u_char    *addrbase;
+
+    ngx_shm_init_once();
 
     name = ngx_alloc(shm->name.len + 2 + NGX_INT32_LEN, shm->log);
 if (name == NULL) {
@@ -26,6 +45,9 @@
 
     size = shm->size;
 
+    // increase for base address, will be saved inside shared mem:
+    size += sizeof(addr);
+
     shm->handle = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                     (u_long) (size >> 32),
                                     (u_long) (size & 0xffffffff),
@@ -34,7 +56,7 @@
     if (shm->handle == NULL) {
         ngx_log_error(NGX_LOG_ALERT, shm->log, ngx_errno,
                       "CreateFileMapping(%uz, %s) failed",
-                      shm->size, name);
+                      size, name);
 ngx_free(name);
 
 return NGX_ERROR;
@@ -42,14 +64,79 @@
 
 ngx_free(name);
 
+    shm->exists = 0;
     if (ngx_errno == ERROR_ALREADY_EXISTS) {
         shm->exists = 1;
     }
 
-    shm->addr = MapViewOfFile(shm->handle, FILE_MAP_WRITE, 0, 0, 0);
+/*
+// Because of Win x64 ASLR since Vista/Win7 (always on in kernel32), and because nginx uses pointers
+// inside these shared areas, we should use the same address for the shared memory (Windows IPC).
+
+// Now we set the preferred base to a hardcoded address that newborn processes never seem to be using (in available versions of Windows).
+// The addresses were selected somewhat randomly in order to minimize the probability that some other library doing something similar
+// conflicts with us. That is, using conspicuous addresses like 0x20000000 might not be good if someone else does it.
+*/
+#ifdef _WIN64
+ /* 
+  * There is typically a giant hole (almost 8TB):
+  * 00000000 7fff0000
+  * ...
+  * 000007f6 8e8b0000
+  */
+  addrbase = (u_char*)0x0000047047e00000ULL;
+#else
+ /* 
+  * This is more dicey.  However, even with ASLR there still
+  * seems to be a big hole:
+  * 10000000
+  * ...
+  * 70000000
+  */
+  addrbase = (u_char*)0x2efe0000;
+#endif
 
-    if (shm->addr != NULL) {
+    // add offset (corresponding to all shared mem used so far) to the preferred base:
+    addrbase += g_addrbaseoffs;
+
+    addr = MapViewOfFileEx(shm->handle, FILE_MAP_WRITE, 0, 0, 0, addrbase);
+
+    ngx_log_debug4(NGX_LOG_DEBUG_CORE, shm->log, 0, "map shared \"%V\" -> %p (base %p), size: %uz", &shm->name, addr, addrbase, size);
+
+    if (addr != NULL) {
+
+  // get allocated address if the zone already exists:
+  if (shm->exists) {
+    // get the stored base and remap using it if different:
+    addrbase = *(u_char **)addr;
+    ngx_log_debug3(NGX_LOG_DEBUG_CORE, shm->log, 0, "shared \"%V\" - %p -> %p", &shm->name, addr, addrbase);
+    if (addrbase != addr) {
+      // free:
+      if (UnmapViewOfFile(addr) == 0) {
+        ngx_log_error(NGX_LOG_ALERT, shm->log, ngx_errno,
+          "UnmapViewOfFile(%p) of file mapping \"%V\" failed,
+