Websockets - recommended settings question

2016-09-12 Thread Cain
Hi,

In the nginx documentation (https://www.nginx.com/blog/websocket-nginx), it
is recommended to set the 'Connection' header to 'close' if there is no
Upgrade header. From my understanding, this disables keep-alive from
nginx to the upstream. Is there a reason for this?

Additionally, is keep-alive the default behaviour when connecting to
upstreams?
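For context, the configuration the blog post recommends looks roughly like this (a sketch based on the linked article; the upstream name and location path are placeholders):

```nginx
http {
    # If the client asked for an Upgrade (WebSocket), pass "upgrade" on;
    # otherwise send "close", which disables keepalive to the upstream.
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        location /ws/ {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
```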

Thanks
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: --with-openssl and OPENSSL_OPT

2016-09-12 Thread Maxim Dounin
Hello!

On Mon, Sep 12, 2016 at 09:55:32PM +0200, Ondřej Nový wrote:

> I want to use OpenSSL 1.0.2 statically linked with nginx, so I'm using the
> --with-openssl option. But I also want to set OpenSSL configure options, and
> the OPENSSL_OPT variable looks like the correct way to do that.
> 
> If I set this variable:
> export OPENSSL_OPT=no-idea
> 
> After OpenSSL's configure runs, I get this message:
> *** Because of configuration changes, you MUST do the following before
> *** building:
> 
> make depend
> 
> And the build fails:
> make[5]: *** No rule to make target '../../include/openssl/idea.h', needed
> by 'e_idea.o'.  Stop.
> 
> I think you are not calling "make depend" after configuration of OpenSSL
> (auto/lib/openssl/make*).

If you need to use OpenSSL configure options which require 
"make depend", you can run "make depend" yourself before making 
nginx (and thus OpenSSL).

Alternatively, you can build OpenSSL yourself instead of asking 
nginx to do it for you.  Note that the --with-openssl option of 
nginx's configure is not required for static linking with 
OpenSSL; rather, it's a convenient shortcut to make things easier 
in common cases.
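A rough sketch of that workflow (not meant to be run verbatim; the source paths, OpenSSL version, and options below are placeholders):

```sh
# Build OpenSSL separately, with whatever configure options are needed.
cd /path/to/openssl-1.0.2
./config no-idea no-shared
make depend   # required after changing configure options
make

# Then point nginx's configure at the prebuilt tree
# instead of using --with-openssl.
cd /path/to/nginx
./configure --with-cc-opt="-I /path/to/openssl-1.0.2/include" \
            --with-ld-opt="-L /path/to/openssl-1.0.2"
make
```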

-- 
Maxim Dounin
http://nginx.org/

___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: Don't process requests containing folders

2016-09-12 Thread Grant
>> location ~ (^/[^/]*|.html)$ {}
>
> Yes, that should do what you describe.


I realize now that I didn't define the requirement properly.  I said:
"match requests with a single / or ending in .html" but what I need
is: "match requests with a single / *and* ending in .html, also match
/".  Will this do it:

location ~ ^(/[^/]*\.html|/)$ {}
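As a quick sanity check outside nginx (a sketch; nginx uses PCRE, but Python's re behaves the same for this simple pattern):

```python
import re

# Grant's proposed pattern: a bare "/", or a single path
# segment ending in ".html".
pattern = re.compile(r"^(/[^/]*\.html|/)$")

assert pattern.search("/") is not None             # bare / matches
assert pattern.search("/page.html") is not None    # one segment ending in .html
assert pattern.search("/dir/page.html") is None    # second slash: no match
assert pattern.search("/page") is None             # no .html suffix: no match
```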


> Note that the . is a metacharacter for "any one character"; if you really want
> the five-character string ".html" at the end of the request, you should
> escape the . to \.


Fixed.  Do I ever need to escape / in location blocks?


>> And let everything else match the following, most of which will 404 
>> (cheaply):
>>
>> location / { internal; }
>
> Testing and measuring might show that "return 404;" is even cheaper than
> "internal;" in the cases where they have the same output. But if there
> are cases where the difference in output matters, or if the difference
> is not measurable, then leaving it as-is is fine.


I'm sure you're right.  I'll switch to:

location / { return 404; }

- Grant



Re: Don't process requests containing folders

2016-09-12 Thread Francis Daly
On Mon, Sep 12, 2016 at 01:55:35PM -0700, Grant wrote:

Hi there,

> > If you want to match "requests with a second slash", do just that:
> >
> >   location ~ ^/.*/ {}
> >
> > (the "^" is not necessary there, but I guess-without-testing that
> > it helps.)
> 
> When you say it helps, you mean for performance?

Yes - I guess that anchoring this regex at a point where it will always
match anyway, will do no harm.

> > If you want to match "requests without a second slash", you could do
> >
> >   location ~ ^/[^/]*$ {}
> >
> > but I suspect you'll be better off with the positive match, plus a
> > "location /" for "all the rest".
> 
> 
> I want to keep my location blocks to a minimum so I think I should use
> the following as my last location block which will send all remaining
> good requests to my backend:
> 
> location ~ (^/[^/]*|.html)$ {}

Yes, that should do what you describe.

Note that the . is a metacharacter for "any one character"; if you really want
the five-character string ".html" at the end of the request, you should
escape the . to \.

> And let everything else match the following, most of which will 404 (cheaply):
> 
> location / { internal; }

Testing and measuring might show that "return 404;" is even cheaper than
"internal;" in the cases where they have the same output. But if there
are cases where the difference in output matters, or if the difference
is not measurable, then leaving it as-is is fine.

Cheers,

f
-- 
Francis Daly        fran...@daoine.org



Re: limit-req and greedy UAs

2016-09-12 Thread Richard Stanway
limit_req works across multiple connections; it is usually configured per IP
using $binary_remote_addr. See
http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone
- you can use variables to set the key to whatever you like.

limit_req generally helps protect e.g. your backend against request floods
from a single IP over any number of connections. limit_conn protects against
excessive connections tying up resources on the webserver itself.
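A minimal sketch combining the two (zone names, sizes, and limits below are illustrative placeholders, as is the backend):

```nginx
# Key both zones on the client IP.
limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=5r/s;
limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

server {
    location / {
        limit_req  zone=req_per_ip burst=10 nodelay;  # request rate per IP
        limit_conn conn_per_ip 10;       # concurrent connections per IP
        proxy_pass http://backend;
    }
}
```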

On Mon, Sep 12, 2016 at 10:23 PM, Grant  wrote:

> > https://www.nginx.com/blog/tuning-nginx/
> >
> > I have far more faith in this write-up regarding tuning than in the
> > anti-ddos one, though both have similarities.
> >
> > My interpretation is that user bandwidth is connections times rate. But
> > you can't limit the connections to one because (again, my interpretation)
> > there can be multiple users behind one IP. Think of a university reading
> > your website. Thus I am more comfortable limiting bandwidth than I am
> > limiting the number of connections. The 512k rate limit is fine. I
> > wouldn't go any higher.
>
>
> If I understand correctly, limit_req only works if the same connection
> is used for each request.  My goal with limit_conn and limit_conn_zone
> would be to prevent someone from circumventing limit_req by opening a
> new connection for each request.  Given that, why would my
> limit_conn/limit_conn_zone config be any different from my
> limit_req/limit_req_zone config?
>
> - Grant
>
>
> > Should I basically duplicate my limit_req and limit_req_zone
> > directives into limit_conn and limit_conn_zone? In what sort of
> > situation would someone not do that?
> >
> > - Grant
>

Re: Don't process requests containing folders

2016-09-12 Thread Grant
>> My site doesn't have any folders in its URL structure so I'd like to
>> have nginx process any request which includes a folder (cheap 404)
>> instead of sending the request to my backend (expensive 404).
>
>> Currently I'm using a series of location blocks to check for a valid
>> request.  Here's the last one before nginx internal takes over:
>>
>> location ~ (^/|.html)$ {
>> }
>
> I think that says "is exactly /, or ends in html".


Yes that is my intention.


> I'm actually not sure whether this is intended to be the "good"
> request, or the "bad" request. If it is the "bad" one, then "return
> 404;" can easily be copied into each. If it is the "good" one, with a
> complicated config, then you may need to have many duplicate lines in
> the two locations; or just "include" a file with the "good" configuration.


That's the good request.  I do need it in multiple locations but an
include is working well for that.


>> Can I expand that to only match requests with a single / or ending in
>> .html like this:
>>
>> location ~ (^[^/]+/?[^/]+$|.html$) {
>
> Since every real request starts with a /, I think that that pattern
> effectively says "ends in html", which matches fewer requests than the
> earlier one.


That is not what I intended.


> If you want to match "requests with a second slash", do just that:
>
>   location ~ ^/.*/ {}
>
> (the "^" is not necessary there, but I guess-without-testing that
> it helps.)


When you say it helps, you mean for performance?


> If you want to match "requests without a second slash", you could do
>
>   location ~ ^/[^/]*$ {}
>
> but I suspect you'll be better off with the positive match, plus a
> "location /" for "all the rest".


I want to keep my location blocks to a minimum so I think I should use
the following as my last location block which will send all remaining
good requests to my backend:

location ~ (^/[^/]*|.html)$ {}

And let everything else match the following, most of which will 404 (cheaply):

location / { internal; }

- Grant



Re: Don't process requests containing folders

2016-09-12 Thread Francis Daly
On Mon, Sep 12, 2016 at 10:17:06AM -0700, Grant wrote:

Hi there,

> My site doesn't have any folders in its URL structure so I'd like to
> have nginx process any request which includes a folder (cheap 404)
> instead of sending the request to my backend (expensive 404).

The location-matching rules are at http://nginx.org/r/location

At the point of location-matching, nginx does not know anything about
folders; it only knows about the incoming request and the defined
"location" patterns.

That probably sounds like it is being pedantic; but once you know what the
rules are, it may be clearer how to configure nginx to do what you want.

"doesn't have any folders" might mean "no valid url has a second
slash". (Unless you are using something like a fastcgi service which
makes use of PATH_INFO.)

> Currently I'm using a series of location blocks to check for a valid
> request.  Here's the last one before nginx internal takes over:
> 
> location ~ (^/|.html)$ {
> }

I think that says "is exactly /, or ends in html".

It might be simpler to understand if you write it as two locations:

  location = / {}
  location ~ html$ {}

partly because if that is *not* what you want, that should be obvious
from the simpler expression.

I'm actually not sure whether this is intended to be the "good"
request, or the "bad" request. If it is the "bad" one, then "return
404;" can easily be copied into each. If it is the "good" one, with a
complicated config, then you may need to have many duplicate lines in
the two locations; or just "include" a file with the "good" configuration.

> Can I expand that to only match requests with a single / or ending in
> .html like this:
> 
> location ~ (^[^/]+/?[^/]+$|.html$) {

Since every real request starts with a /, I think that that pattern
effectively says "ends in html", which matches fewer requests than the
earlier one.

> Should that work as expected?

Only if you expect it to be the same as "location ~ html$ {}". So:
probably "no".


If you want to match "requests with a second slash", do just that:

  location ~ ^/.*/ {}

(the "^" is not necessary there, but I guess-without-testing that
it helps.)

If you want to match "requests without a second slash", you could do

  location ~ ^/[^/]*$ {}

but I suspect you'll be better off with the positive match, plus a
"location /" for "all the rest".
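Putting those suggestions together, one possible shape for the whole thing (a sketch; handler directives such as proxy_pass go in the empty blocks, and regex locations are checked in order, so the second-slash rule must come before the .html rule):

```nginx
location = / { }                   # exactly "/": good request
location ~ ^/.*/ { return 404; }   # any second slash: cheap 404
location ~ \.html$ { }             # single-segment *.html: good request
location / { return 404; }         # everything else: cheap 404
```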

Good luck with it,

f
-- 
Francis Daly        fran...@daoine.org



Re: limit-req and greedy UAs

2016-09-12 Thread Grant
> https://www.nginx.com/blog/tuning-nginx/
>
> I have far more faith in this write-up regarding tuning than in the anti-ddos
> one, though both have similarities.
>
> My interpretation is that user bandwidth is connections times rate. But you
> can't limit the connections to one because (again, my interpretation) there can
> be multiple users behind one IP. Think of a university reading your website.
> Thus I am more comfortable limiting bandwidth than I am limiting the number
> of connections. The 512k rate limit is fine. I wouldn't go any higher.


If I understand correctly, limit_req only works if the same connection
is used for each request.  My goal with limit_conn and limit_conn_zone
would be to prevent someone from circumventing limit_req by opening a
new connection for each request.  Given that, why would my
limit_conn/limit_conn_zone config be any different from my
limit_req/limit_req_zone config?

- Grant


> Should I basically duplicate my limit_req and limit_req_zone
> directives into limit_conn and limit_conn_zone? In what sort of
> situation would someone not do that?
>
> - Grant


--with-openssl and OPENSSL_OPT

2016-09-12 Thread Ondřej Nový
Hi,

I want to use OpenSSL 1.0.2 statically linked with nginx, so I'm using the
--with-openssl option. But I also want to set OpenSSL configure options, and
the OPENSSL_OPT variable looks like the correct way to do that.

If I set this variable:
export OPENSSL_OPT=no-idea

After OpenSSL's configure runs, I get this message:
*** Because of configuration changes, you MUST do the following before
*** building:

make depend

And the build fails:
make[5]: *** No rule to make target '../../include/openssl/idea.h', needed
by 'e_idea.o'.  Stop.

I think you are not calling "make depend" after configuration of OpenSSL
(auto/lib/openssl/make*).

Thanks for help.

-- 
Best regards
 Ondřej Nový

Re: Connecting Nginx to LDAP/Kerberos

2016-09-12 Thread Joshua Schaeffer
On Mon, Sep 12, 2016 at 1:37 PM, A. Schulze  wrote:

>
>
> Am 12.09.2016 um 21:33 schrieb Joshua Schaeffer:
>
>> Any chance anybody has played around with Kerberos auth? Currently my SSO
>> environment uses GSSAPI for most authentication.
>>
>
> I also compiled the module
> https://github.com/stnoonan/spnego-http-auth-nginx-module
> but I've had no time to configure it / learn how to configure it,
> unfortunately.


I did actually see this module as well, but didn't look into it too much.
Perhaps it would be best for me to take a closer look and then report back
on what I find.
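If it helps anyone later, the spnego-http-auth-nginx-module README describes directives along these lines (an untested sketch; the realm, keytab path, and service name are placeholders, and the directive names should be checked against the module's current docs):

```nginx
server {
    location / {
        auth_gss on;
        auth_gss_realm EXAMPLE.ORG;
        auth_gss_keytab /etc/nginx/krb5.keytab;
        auth_gss_service_name HTTP/www.example.org;
    }
}
```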

Thanks,
Joshua Schaeffer

Re: Connecting Nginx to LDAP/Kerberos

2016-09-12 Thread A. Schulze



Am 12.09.2016 um 21:33 schrieb Joshua Schaeffer:

Any chance anybody has played around with Kerberos auth? Currently my SSO
environment uses GSSAPI for most authentication.


I also compiled the module
https://github.com/stnoonan/spnego-http-auth-nginx-module
but I've had no time to configure it / learn how to configure it,
unfortunately.

Andreas



Re: Connecting Nginx to LDAP/Kerberos

2016-09-12 Thread Joshua Schaeffer
>
>
>> I'm using that one to authenticate my users.
>
> auth_ldap_cache_enabled on;
> ldap_server my_ldap_server {
>     url             ldaps://ldap.example.org/dc=users,dc=mybase?uid?sub;
>     binddn          cn=nginx,dc=mybase;
>     binddn_passwd   ...;
>     require         valid_user;
> }
>
> server {
>   ...
>   location / {
>     auth_ldap           "foobar";
>     auth_ldap_servers   "my_ldap_server";
>
>     root                /srv/www/...;
>   }
> }
>

Thanks, having a config to compare against is always helpful for me.


>
> this is as documented at https://github.com/kvspb/nginx-auth-ldap, except
> my auth_ldap statements are inside the location block while the docs
> suggest placing them outside.
> Q: does that matter?
>

From my understanding of Nginx, no; since location is lower in the
hierarchy it will just override any auth_ldap directives outside of it.


>
> I found it useful to explicitly set "auth_ldap_cache_enabled on" but cannot
> remember the detailed reasons.
> Finally: it's working as expected for me (basic auth, no Kerberos)
>

Any chance anybody has played around with Kerberos auth? Currently my SSO
environment uses GSSAPI for most authentication.

Thanks,
Joshua Schaeffer

Re: Connecting Nginx to LDAP/Kerberos

2016-09-12 Thread A. Schulze



Am 12.09.2016 um 21:04 schrieb Joshua Schaeffer:

- https://github.com/kvspb/nginx-auth-ldap


I'm using that one to authenticate my users.

auth_ldap_cache_enabled on;
ldap_server my_ldap_server {
    url             ldaps://ldap.example.org/dc=users,dc=mybase?uid?sub;
    binddn          cn=nginx,dc=mybase;
    binddn_passwd   ...;
    require         valid_user;
}

server {
  ...
  location / {
    auth_ldap           "foobar";
    auth_ldap_servers   "my_ldap_server";

    root                /srv/www/...;
  }
}

This is as documented at https://github.com/kvspb/nginx-auth-ldap, except my
auth_ldap statements are inside the location block while the docs suggest
placing them outside.
Q: does that matter?

I found it useful to explicitly set "auth_ldap_cache_enabled on" but cannot
remember the detailed reasons.
Finally: it's working as expected for me (basic auth, no Kerberos).

BUT: I fail to compile this module with openssl-1.1.0.
I sent a message to https://github.com/kvspb some days ago but have had no
response so far.

the problem (nginx-1.11.3 + openssl-1.1.0 + nginx-auth-ldap):

cc -c -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wall \
    -I src/core -I src/event -I src/event/modules -I src/os/unix \
    -I /opt/local/include -I objs -I src/http -I src/http/modules -I src/http/v2 \
    -o objs/addon/nginx-auth-ldap-20160428/ngx_http_auth_ldap_module.o \
    ./nginx-auth-ldap-20160428//ngx_http_auth_ldap_module.c
./nginx-auth-ldap-20160428//ngx_http_auth_ldap_module.c: In function 'ngx_http_auth_ldap_ssl_handshake':
./nginx-auth-ldap-20160428//ngx_http_auth_ldap_module.c:1325:79: error: dereferencing pointer to incomplete type
     int setcode = SSL_CTX_load_verify_locations(transport->ssl->connection->ctx,
./nginx-auth-ldap-20160428//ngx_http_auth_ldap_module.c:1335:80: error: dereferencing pointer to incomplete type
     int setcode = SSL_CTX_set_default_verify_paths(transport->ssl->connection->ctx);
make[2]: *** [objs/addon/nginx-auth-ldap-20160428/ngx_http_auth_ldap_module.o] Error 1
objs/Makefile:1343: recipe for target 'objs/addon/nginx-auth-ldap-20160428/ngx_http_auth_ldap_module.o' failed

Maybe the list has a suggestion...



Connecting Nginx to LDAP/Kerberos

2016-09-12 Thread Joshua Schaeffer
Greetings Nginx list,

I've set up git-http-backend on a sandbox nginx server to host my git
projects inside my network. I'm trying to set everything up so that I
can require auth to that server block using SSO, which I have set up and
working with LDAP and Kerberos.

I have all my accounts in Kerberos, which is stored in OpenLDAP, and
authentication works via GSSAPI. How do I get my git server block to use my
central authentication? Does anybody have any experience in setting this up?

I've found a couple of git projects which enhance Nginx to support LDAP
authentication:

- https://github.com/kvspb/nginx-auth-ldap
- https://github.com/nginxinc/nginx-ldap-auth

I've gone through the reference implementation (nginx-ldap-auth), but found
that this won't work for me as I use GSSAPI for my authentication.

Looking to see if anybody has done something like this and what their
experience was. Let me know if you'd like to see any of my nginx
configuration files.

Thanks,
Joshua Schaeffer

[nginx] OCSP stapling: fixed using wrong responder with multiple certs.

2016-09-12 Thread Maxim Dounin
details:   http://hg.nginx.org/nginx/rev/6acbe9964ceb
branches:  
changeset: 6688:6acbe9964ceb
user:  Maxim Dounin 
date:  Mon Sep 12 20:11:06 2016 +0300
description:
OCSP stapling: fixed using wrong responder with multiple certs.

diffstat:

 src/event/ngx_event_openssl_stapling.c |  3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diffs (20 lines):

diff --git a/src/event/ngx_event_openssl_stapling.c 
b/src/event/ngx_event_openssl_stapling.c
--- a/src/event/ngx_event_openssl_stapling.c
+++ b/src/event/ngx_event_openssl_stapling.c
@@ -376,6 +376,7 @@ ngx_ssl_stapling_responder(ngx_conf_t *c
 {
 ngx_url_t  u;
 char  *s;
+ngx_str_t  rsp;
 STACK_OF(OPENSSL_STRING)  *aia;
 
 if (responder->len == 0) {
@@ -403,6 +404,8 @@ ngx_ssl_stapling_responder(ngx_conf_t *c
 return NGX_DECLINED;
 }
 
+responder = &rsp;
+
 responder->len = ngx_strlen(s);
 responder->data = ngx_palloc(cf->pool, responder->len);
 if (responder->data == NULL) {



Don't process requests containing folders

2016-09-12 Thread Grant
My site doesn't have any folders in its URL structure so I'd like to
have nginx process any request which includes a folder (cheap 404)
instead of sending the request to my backend (expensive 404).
Currently I'm using a series of location blocks to check for a valid
request.  Here's the last one before nginx internal takes over:

location ~ (^/|.html)$ {
}

Can I expand that to only match requests with a single / or ending in
.html like this:

location ~ (^[^/]+/?[^/]+$|.html$) {
}

Should that work as expected?

- Grant

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


[nginx] SSL: improved session ticket callback error handling.

2016-09-12 Thread Sergey Kandaurov
details:   http://hg.nginx.org/nginx/rev/dfa626cdde6b
branches:  
changeset: 6687:dfa626cdde6b
user:  Sergey Kandaurov 
date:  Mon Sep 12 18:57:42 2016 +0300
description:
SSL: improved session ticket callback error handling.

Prodded by Guido Vranken.

diffstat:

 src/event/ngx_event_openssl.c |  35 ++++++++++++++++++++++++++++++++---
 1 files changed, 32 insertions(+), 3 deletions(-)

diffs (54 lines):

diff -r f28e74f02c88 -r dfa626cdde6b src/event/ngx_event_openssl.c
--- a/src/event/ngx_event_openssl.c Mon Sep 12 18:57:42 2016 +0300
+++ b/src/event/ngx_event_openssl.c Mon Sep 12 18:57:42 2016 +0300
@@ -2982,9 +2982,26 @@ ngx_ssl_session_ticket_key_callback(ngx_
ngx_hex_dump(buf, key[0].name, 16) - buf, buf,
SSL_session_reused(ssl_conn) ? "reused" : "new");
 
-RAND_bytes(iv, EVP_CIPHER_iv_length(cipher));
-EVP_EncryptInit_ex(ectx, cipher, NULL, key[0].aes_key, iv);
+if (RAND_bytes(iv, EVP_CIPHER_iv_length(cipher)) != 1) {
+ngx_ssl_error(NGX_LOG_ALERT, c->log, 0, "RAND_bytes() failed");
+return -1;
+}
+
+if (EVP_EncryptInit_ex(ectx, cipher, NULL, key[0].aes_key, iv) != 1) {
+ngx_ssl_error(NGX_LOG_ALERT, c->log, 0,
+  "EVP_EncryptInit_ex() failed");
+return -1;
+}
+
+#if OPENSSL_VERSION_NUMBER >= 0x10000000L
+if (HMAC_Init_ex(hctx, key[0].hmac_key, 16, digest, NULL) != 1) {
+ngx_ssl_error(NGX_LOG_ALERT, c->log, 0, "HMAC_Init_ex() failed");
+return -1;
+}
+#else
 HMAC_Init_ex(hctx, key[0].hmac_key, 16, digest, NULL);
+#endif
+
 ngx_memcpy(name, key[0].name, 16);
 
 return 1;
@@ -3011,8 +3028,20 @@ ngx_ssl_session_ticket_key_callback(ngx_
ngx_hex_dump(buf, key[i].name, 16) - buf, buf,
(i == 0) ? " (default)" : "");
 
+#if OPENSSL_VERSION_NUMBER >= 0x10000000L
+if (HMAC_Init_ex(hctx, key[i].hmac_key, 16, digest, NULL) != 1) {
+ngx_ssl_error(NGX_LOG_ALERT, c->log, 0, "HMAC_Init_ex() failed");
+return -1;
+}
+#else
 HMAC_Init_ex(hctx, key[i].hmac_key, 16, digest, NULL);
-EVP_DecryptInit_ex(ectx, cipher, NULL, key[i].aes_key, iv);
+#endif
+
+if (EVP_DecryptInit_ex(ectx, cipher, NULL, key[i].aes_key, iv) != 1) {
+ngx_ssl_error(NGX_LOG_ALERT, c->log, 0,
+  "EVP_DecryptInit_ex() failed");
+return -1;
+}
 
 return (i == 0) ? 1 : 2 /* renew */;
 }



[nginx] SSL: factored out digest and cipher in session ticket callback.

2016-09-12 Thread Sergey Kandaurov
details:   http://hg.nginx.org/nginx/rev/f28e74f02c88
branches:  
changeset: 6686:f28e74f02c88
user:  Sergey Kandaurov 
date:  Mon Sep 12 18:57:42 2016 +0300
description:
SSL: factored out digest and cipher in session ticket callback.

No functional changes.

diffstat:

 src/event/ngx_event_openssl.c |  28 ++++++++++++++--------------
 1 files changed, 14 insertions(+), 14 deletions(-)

diffs (66 lines):

diff -r 4a16fceea03b -r f28e74f02c88 src/event/ngx_event_openssl.c
--- a/src/event/ngx_event_openssl.c Thu Sep 08 15:51:36 2016 +0300
+++ b/src/event/ngx_event_openssl.c Mon Sep 12 18:57:42 2016 +0300
@@ -2941,13 +2941,6 @@ failed:
 }
 
 
-#ifdef OPENSSL_NO_SHA256
-#define ngx_ssl_session_ticket_md  EVP_sha1
-#else
-#define ngx_ssl_session_ticket_md  EVP_sha256
-#endif
-
-
 static int
 ngx_ssl_session_ticket_key_callback(ngx_ssl_conn_t *ssl_conn,
 unsigned char *name, unsigned char *iv, EVP_CIPHER_CTX *ectx,
@@ -2958,6 +2951,8 @@ ngx_ssl_session_ticket_key_callback(ngx_
 ngx_array_t   *keys;
 ngx_connection_t  *c;
 ngx_ssl_session_ticket_key_t  *key;
+const EVP_MD  *digest;
+const EVP_CIPHER  *cipher;
 #if (NGX_DEBUG)
 u_char buf[32];
 #endif
@@ -2965,6 +2960,13 @@ ngx_ssl_session_ticket_key_callback(ngx_
 c = ngx_ssl_get_connection(ssl_conn);
 ssl_ctx = c->ssl->session_ctx;
 
+cipher = EVP_aes_128_cbc();
+#ifdef OPENSSL_NO_SHA256
+digest = EVP_sha1();
+#else
+digest = EVP_sha256();
+#endif
+
 keys = SSL_CTX_get_ex_data(ssl_ctx, ngx_ssl_session_ticket_keys_index);
 if (keys == NULL) {
 return -1;
@@ -2980,10 +2982,9 @@ ngx_ssl_session_ticket_key_callback(ngx_
ngx_hex_dump(buf, key[0].name, 16) - buf, buf,
SSL_session_reused(ssl_conn) ? "reused" : "new");
 
-RAND_bytes(iv, 16);
-EVP_EncryptInit_ex(ectx, EVP_aes_128_cbc(), NULL, key[0].aes_key, iv);
-HMAC_Init_ex(hctx, key[0].hmac_key, 16,
- ngx_ssl_session_ticket_md(), NULL);
+RAND_bytes(iv, EVP_CIPHER_iv_length(cipher));
+EVP_EncryptInit_ex(ectx, cipher, NULL, key[0].aes_key, iv);
+HMAC_Init_ex(hctx, key[0].hmac_key, 16, digest, NULL);
 ngx_memcpy(name, key[0].name, 16);
 
 return 1;
@@ -3010,9 +3011,8 @@ ngx_ssl_session_ticket_key_callback(ngx_
ngx_hex_dump(buf, key[i].name, 16) - buf, buf,
(i == 0) ? " (default)" : "");
 
-HMAC_Init_ex(hctx, key[i].hmac_key, 16,
- ngx_ssl_session_ticket_md(), NULL);
-EVP_DecryptInit_ex(ectx, EVP_aes_128_cbc(), NULL, key[i].aes_key, iv);
+HMAC_Init_ex(hctx, key[i].hmac_key, 16, digest, NULL);
+EVP_DecryptInit_ex(ectx, cipher, NULL, key[i].aes_key, iv);
 
 return (i == 0) ? 1 : 2 /* renew */;
 }



Re: [PATCH 2 of 2] Core: add ngx_atomic_store() and ngx_atomic_load()

2016-09-12 Thread Maxim Dounin
Hello!

On Wed, Aug 17, 2016 at 05:29:32PM -0700, Piotr Sikora wrote:

> # HG changeset patch
> # User Piotr Sikora 
> # Date 1471265532 25200
> #  Mon Aug 15 05:52:12 2016 -0700
> # Node ID 40765d8ee4dd29089b0e60ed5b6099ac624e804e
> # Parent  2f2ec92c3af93c11e195fb6d805df57518fede7c
> Core: add ngx_atomic_store() and ngx_atomic_load().
> 
> Those functions must be used to prevent data races between
> threads operating concurrently on the same variables.
> 
> No performance loss measured in microbenchmarks on x86_64.
> 
> No binary changes when compiled without __atomic intrinsics.
> 
> Found with ThreadSanitizer.
> 
> Signed-off-by: Piotr Sikora 

[...]

>  #define ngx_trylock(lock, value) 
>  \
> -(*(lock) == 0 && ngx_atomic_cmp_set(lock, 0, value))
> +(ngx_atomic_load(lock) == 0 && ngx_atomic_cmp_set(lock, 0, value))

The "*(lock) == 0" check here is just an optimization: it only 
ensures that the lock is likely to succeed.  Atomicity is provided 
by the ngx_atomic_cmp_set() operation following the check.  If the 
check returns a wrong result due to a non-atomic load, this won't 
do any harm.  The idea is that a quick-and-dirty non-atomic 
read can be used to optimize things when the lock is already 
held by another process.  This is especially important in 
spinlocks like the one in ngx_shmtx_lock().

The same is believed to apply to the other places changed as well.  
If you think there are places where atomic reading is critical - 
please highlight these particular places.
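The pattern described above can be illustrated outside nginx with a toy model (a sketch only; nginx's real ngx_atomic_cmp_set() is a CPU compare-and-swap instruction, modeled here with a mutex):

```python
import threading

class AtomicUint:
    """Toy stand-in for ngx_atomic_t."""
    def __init__(self):
        self._value = 0
        self._mtx = threading.Lock()

    def load_dirty(self):
        # Unsynchronized read, like the plain *(lock) == 0 check.
        return self._value

    def cmp_set(self, old, new):
        # Models the atomic compare-and-swap that provides correctness.
        with self._mtx:
            if self._value == old:
                self._value = new
                return True
            return False

def trylock(lock, value):
    # ngx_trylock: a cheap dirty read filters out visibly-busy locks;
    # only the compare-and-swap needs to be atomic.
    return lock.load_dirty() == 0 and lock.cmp_set(0, value)

lock = AtomicUint()
assert trylock(lock, 42)       # first acquisition succeeds
assert not trylock(lock, 7)    # second attempt sees the lock held
```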

-- 
Maxim Dounin
http://nginx.org/



Re: nginx not returning updated headers from origin server on conditional GET

2016-09-12 Thread Maxim Dounin
Hello!

On Sun, Sep 11, 2016 at 06:56:17AM -0400, jchannon wrote:

> I have nginx and its cache working as expected apart from one minor issue.
> When a request is made for the first time it hits the origin server, returns
> a 200 and nginx caches that response. If I make another request I can see
> from the X-Cache-Status header that the cache has been hit. When I wait a
> while knowing the cache will have expired I can see nginx hit my origin
> server doing a conditional GET because I have proxy_cache_revalidate on;
> defined.
> 
> When I check whether the resource has changed in my app on the origin server,
> I see it hasn't and return a 304 with a new Expires header. Some may ask why
> you would return a new Expires header if the origin server says nothing has
> changed and you are returning 304. The answer is that the HTTP RFC says this
> can be done: https://tools.ietf.org/html/rfc7234#section-4.3.4
> 
> One thing I have noticed: no matter what headers I add or modify, when
> the origin server returns 304, nginx will give a response with the first set
> of response headers it saw for that resource.

Conditional revalidation as available with 
"proxy_cache_revalidate on" doesn't try to merge any new/updated 
headers into the stored response.  This is by design: merging and 
updating headers would be just too costly.

This is normally not an issue as you can (and should) use 
"Cache-Control: max-age=..." instead of Expires, and with max-age 
you don't need to update anything in the response.
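A sketch of that setup (the cache path, zone name, and backend below are placeholders): the origin sends "Cache-Control: max-age=..." and nginx revalidates with a conditional request when the cached entry expires:

```nginx
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

server {
    location / {
        proxy_pass http://backend;
        proxy_cache app_cache;
        proxy_cache_revalidate on;    # conditional GET on expiry
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```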

If you can't afford this behaviour for some reason, the only 
solution is to avoid using proxy_cache_revalidate.

-- 
Maxim Dounin
http://nginx.org/



Re: limit-req and greedy UAs

2016-09-12 Thread c0nw0nk
gariac Wrote:
---
> This page has all the secret sauce, including how to limit the number
> of connections. 
> 
> https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/
> 
> I set up the firewall with a higher number as a "just in case." Also
> note if you do streaming outside nginx, then you have to limit
> connections for that service in the program providing it. 
> 
> Mind you while I think this page has good advice, what is listed here
> won't stop a real ddos attack. The first D is for distributed, meaning
> the attack come from many IP addresses. You probably have to pay for
> one of those reverse proxy services to avoid a real ddos, but I
> personally find them them a bit creepy since I have seen hacking
> attempts come from behind them. 
> 
> The tips on this nginx page will limit the teenage boy in his parents
> basement, which is a more real life scenario to be attacked. But note
> that every photo you load is a request, so I wouldn't make the limit
> any lower than 5 to 10 per second. You can play with the limits and
> watch the results on your own system. Just remember to:
> service nginx reload
> service nginx restart
> 
> If you do fancy caching, you may have to clear your browser cache.
> 
> In theory, Google page ranking takes speed into account.  There are
> many websites that will evaluate your nginx set up. 
> https://www.webpagetest.org/
> 
> One thing to remember is nginx limits are in bytes per second, not
> bits per second. So the 512k limit in this example is really quite
> generous.
> http://www.webhostingtalk.com/showthread.php?t=1433413
> 
> There are programs you can run on your server to flog nginx.
> https://www.howtoforge.com/how-to-benchmark-your-system-cpu-file-io-mysql-with-sysbench
> 
> I did this with httperf, but sysbench is supposed to be better. Nginx
> is very efficient. Your limiting factor will probably be your server
> network connection. If you sftp files from your server, it will be at
> the maximum rate you can deliver, and this depends on time of day
> since you are sharing the pipe. I'm using a VPS that does 40mbps on a
> good day. Figure 10 users at a time and the 512kbyes per second put me
> at the limit. 
> 
> If you use the nginx map module, you can block download managers if
> they are honest with their user agents. 
> 
> http://nginx.org/en/docs/http/ngx_http_map_module.html
> http://ask.xmodulo.com/block-specific-user-agents-nginx-web-server.html
> 
> Beware of creating false positives with such rules. When developing
> code, I return a 444, then search the access.log for what it found,
> just to ensure I wrote the rule correctly.
> 
> 
>   Original Message  
> From: Grant
> Sent: Sunday, September 11, 2016 5:30 AM
> To: nginx@nginx.org
> Reply To: nginx@nginx.org
> Subject: Re: limit-req and greedy UAs
> 
> > What looks to me to be a real resource hog that quite frankly you
> can't do much about are download managers. They open up multiple
> connections, but the rate limits apply to each individual connection.
> (This is why you want to limit the number of connections.)
> 
> 
> Does this mean an attacker (for example) could get around rate limits
> by opening a new connection for each request? How are the number of
> connections limited?
> 
> - Grant
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx


The following is also a good resource if you are having issues with slow DoS
attacks that try to keep connections open for long periods of time.

OWASP : https://www.owasp.org/index.php/SCG_WS_nginx

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,269435,269473#msg-269473

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
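
The map-based user-agent blocking mentioned in the quoted message might be
sketched as below. This is only an illustration of the technique, not a vetted
blocklist; the agent strings and variable name are my own examples:

```nginx
# http {} context: map the User-Agent header to a flag.
# Only honest clients are caught, since the header is self-reported.
map $http_user_agent $block_ua {
    default    0;
    ~*wget     1;
    ~*curl     1;
    ~*python   1;
}

server {
    # Close the connection without a response for flagged agents.
    if ($block_ua) {
        return 444;
    }
}
```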

Re: [PATCH] Added the $upstream_connection variable

2016-09-12 Thread Alexey Ivanov
+1 to that.

Connection reuse to an upstream is a very important metric for Edge->DC 
communication.
In our production, since we have nginx on both sides, we are gathering that 
metric from the other side of the connection. I assume not 
everybody has that luxury, therefore that stat would be useful.

> On Aug 31, 2016, at 6:38 AM, Jason Stangroome  wrote:
> 
> Hello,
> 
> I am using nginx primarily as a proxy and I am looking to improve the
> visibility and control over keepalive connections to upstreams.
> 
> I have several patches I would like to submit for your consideration
> but I am relatively unfamiliar with the code base so I've started
> simple and I appreciate your feedback. I must say the code base has
> been very approachable to work with.
> 
> To begin I am adding support for an $upstream_connection variable for
> use with the log_format directive. This is essentially the same as the
> $connection variable but applies to upstream connections instead of
> downstream.
> 
> The intent of this variable is to help understand which requests are
> being serviced by the same upstream keepalive connection and which are
> using different connections.
> 
> I think I have followed the Contributing Changes page at nginx.org.
> I've honoured the existing code formatting and my `hg export` output
> follows my signature. I have also executed the tests from the
> nginx-tests repository in a Ubuntu Trusty environment but I did not
> have many nginx modules included in my build.
> 
> Regards,
> 
> Jason
> --
> # HG changeset patch
> # User Jason Stangroome 
> # Date 1472649436 0
> #  Wed Aug 31 13:17:16 2016 +
> # Node ID f06c8a934e3f3ceac2ff393a391234e225cbfcf1
> # Parent  c6372a40c2a731d8816160bf8f55a7a50050c2ac
> Added the $upstream_connection variable
> 
> Allows the connection identifier of the upstream connection used to service a
> proxied request to be logged in the access.log to understand which requests
> are using which upstream keepalive connections.
> 
> diff -r c6372a40c2a7 -r f06c8a934e3f src/http/ngx_http_upstream.c
> --- a/src/http/ngx_http_upstream.c Fri Aug 26 15:33:07 2016 +0300
> +++ b/src/http/ngx_http_upstream.c Wed Aug 31 13:17:16 2016 +
> @@ -161,6 +161,9 @@
>  static ngx_int_t ngx_http_upstream_response_length_variable(
>      ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data);
> 
> +static ngx_int_t ngx_http_upstream_connection_variable(
> +    ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data);
> +
>  static char *ngx_http_upstream(ngx_conf_t *cf, ngx_command_t *cmd,
>      void *dummy);
>  static char *ngx_http_upstream_server(ngx_conf_t *cf, ngx_command_t *cmd,
>      void *conf);
> @@ -395,6 +398,10 @@
>        ngx_http_upstream_response_length_variable, 1,
>        NGX_HTTP_VAR_NOCACHEABLE, 0 },
> 
> +    { ngx_string("upstream_connection"), NULL,
> +      ngx_http_upstream_connection_variable, 0,
> +      NGX_HTTP_VAR_NOCACHEABLE, 0 },
> +
>  #if (NGX_HTTP_CACHE)
> 
>      { ngx_string("upstream_cache_status"), NULL,
> @@ -1804,6 +1811,7 @@
> 
>      if (u->state->connect_time == (ngx_msec_t) -1) {
>          u->state->connect_time = ngx_current_msec - u->state->response_time;
> +        u->state->connection_number = c->number;
>      }
> 
>      if (!u->request_sent && ngx_http_upstream_test_connect(c) != NGX_OK) {
> @@ -5291,6 +5299,67 @@
>  }
> 
> 
> +static ngx_int_t
> +ngx_http_upstream_connection_variable(ngx_http_request_t *r,
> +    ngx_http_variable_value_t *v, uintptr_t data)
> +{
> +    u_char                     *p;
> +    size_t                      len;
> +    ngx_uint_t                  i;
> +    ngx_http_upstream_state_t  *state;
> +
> +    v->valid = 1;
> +    v->no_cacheable = 0;
> +    v->not_found = 0;
> +
> +    if (r->upstream_states == NULL || r->upstream_states->nelts == 0) {
> +        v->not_found = 1;
> +        return NGX_OK;
> +    }
> +
> +    len = r->upstream_states->nelts * (NGX_ATOMIC_T_LEN + 2);
> +
> +    p = ngx_pnalloc(r->pool, len);
> +    if (p == NULL) {
> +        return NGX_ERROR;
> +    }
> +
> +    v->data = p;
> +
> +    i = 0;
> +    state = r->upstream_states->elts;
> +
> +    for ( ;; ) {
> +
> +        p = ngx_sprintf(p, "%uA", state[i].connection_number);
> +
> +        if (++i == r->upstream_states->nelts) {
> +            break;
> +        }
> +
> +        if (state[i].peer) {
> +            *p++ = ',';
> +            *p++ = ' ';
> +
> +        } else {
> +            *p++ = ' ';
> +            *p++ = ':';
> +            *p++ = ' ';
> +
> +            if (++i == r->upstream_states->nelts) {
> +                break;
> +            }
> +
> +            continue;
> +        }
> +    }
> +
> +    v->len = p - v->data;
> +
> +    return NGX_OK;
> +}
> +
> +
>  ngx_int_t
>  ngx_http_upstream_header_variable(ngx_http_request_t *r,
>      ngx_http_variable_value_t *v, uintptr_t data)
> diff -r c6372a40c2a7 -r f06c8a934e3f 
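
If this patch were applied, the proposed variable could be used in logging
along these lines. The format name and layout here are hypothetical examples
of mine, not part of the patch itself:

```nginx
# http {} context: log which upstream keepalive connection served each request.
log_format upstream_log '$remote_addr "$request" '
                        'upstream=$upstream_addr '
                        'upstream_conn=$upstream_connection';

access_log /var/log/nginx/upstream.log upstream_log;
```

Requests sharing the same `upstream_conn` value would then be identifiable as
reusing one keepalive connection.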

Re: limit-req and greedy UAs

2016-09-12 Thread lists
I picked 444 based on the following, though I see your point in that it is a 
non-standard code. I guess from a multiplier standpoint, returning nothing is 
as minimal as it gets, but the hacker often sends the message twice due to lack 
of response. A 304 return to an attempt to log into WordPress would seem a bit 
weird. All I really need is a unique code to find in the log file. 

444 CONNECTION CLOSED WITHOUT RESPONSE
A non-standard status code used to instruct nginx to close the connection 
without sending a response to the client, most commonly used to deny malicious 
or malformed requests.

This status code is not seen by the client; it only appears in nginx log files.
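
As a concrete sketch of that approach (the WordPress paths here are just
examples of requests a non-WordPress site would never serve legitimately):

```nginx
# Close the connection with no response for probes against
# WordPress endpoints this site does not actually run.
location ~ ^/(wp-login\.php|xmlrpc\.php) {
    return 444;
}
```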
  Original Message  
From: B.R.
Sent: Monday, September 12, 2016 1:08 AM
To: nginx ML
Reply To: nginx@nginx.org
Subject: Re: limit-req and greedy UAs

You could also generate 304 responses for content you won't provide (cf. 
return).
nginx is good at dealing with loads of requests, no problem on that side. And 
since return generates an in-memory answer by default, you won't be hammering 
your resources. If you are CPU or RAM-limited because of those requests, then I 
would suggest you evaluate the sizing of your server(s).
You might wish to separate logging for these requests from the standard flow to 
improve their readability, or deactivate them altogether if you consider they 
add little-to-no value.

My 2¢,
---
B. R.

On Sun, Sep 11, 2016 at 9:16 PM,  wrote:
https://www.nginx.com/blog/tuning-nginx/

I have far more faith in this write-up regarding tuning than the anti-DDoS one, 
though both have similarities.

My interpretation is the user bandwidth is connections times rate. But you 
can't limit the connection to one because (again my interpretation) there can 
be multiple users behind one IP. Think of a university reading your website. 
Thus I am more comfortable limiting bandwidth than I am limiting the number of 
connections. The 512k rate limit is fine. I wouldn't go any higher.

I don't believe there is one answer here because it depends on how the user 
interacts with the website. I only serve static content. In fact, I only allow 
the verbs "head" and "get" to limit the attack surface. A page of text and 
photos itself can be many things. Think of a photo gallery versus a forum page. 
The forum has mostly text sprinkled with avatar photos, while the gallery can 
be mostly images with just a line of text each. 

Basically you need to experiment. Even then, your setup may be better or worse 
than the typical user. That said, if you limited the rate to 512k bytes per 
second, most users could achieve that rate.

I just don't see evidence of download managers. I see plenty of wget, curl, and 
python. Those people get my 444 treatment. I use the map module as indicated in 
my other post to do this. 

What I haven't mentioned is filtering out machines. If you are really concerned 
about your system being overloaded, think about the search engines you want to 
support. Assuming you want Google, you need to set up your website in a manner 
so that Google knows you own it, then you can throttle it back. Google is maybe 
20% of my referrals.

If you have a lot of photos, you can set up nginx to block hot-linking. This is 
significant because Google Images will hot-link everything you have. What you 
want is for Google itself to see your images, which it will present in reduced 
resolution, but block the Google hot link. If someone really wants to see your 
image, Google supplies the referral page.

http://nginx.org/en/docs/http/ngx_http_referer_module.html

I make my own domain a valid referer, but maybe that is assumed. If you want to 
place a link to an image on your website in a forum, you need to make that 
forum a valid referer too.

Facebook will steal your images.
http://badbots.vps.tips/info/facebookexternalhit-bot

I would use the nginx map module since you will probably be blocking many bots. 

Finally, you may want to block "the cloud" using your firewall. Only block the 
browser ports since mail servers will be on the cloud. I block all of AWS for 
example. My nginx.conf also flags certain requests such as logging into 
WordPress since I'm not using WordPress! Clearly that IP is a hacker. I have 
plenty more signatures in the map. I have a script that pulls the IP addresses 
out of the access.log. I get maybe 20 addresses a day. I feed them to 
ip2location. Any address that goes to a cloud, VPS, colo, hosting company gets 
added to the firewall blocking list. I don't just block the IP, but I use the 
Hurricane Electric BGP tool to get the entire IP space to block. As a rule, I 
don't block schools, libraries, or ISPs. The idea here is to allow eyeballs but 
not machines. 

You can also use commercial blocking services if you trust them. (I don't.)


  Original Message  
From: Grant
Sent: Sunday, September 11, 2016 10:28 AM
To: nginx@nginx.org
Reply To: nginx@nginx.org
Subject: Re: limit-req and greedy UAs

> 
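
The referer-based hot-link blocking described in the message above might look
like this minimal sketch; the domain names and extensions are placeholders:

```nginx
# Serve images only to direct visitors, our own pages, and Google;
# everything else is treated as a hot-link attempt.
location ~* \.(jpg|jpeg|png|gif)$ {
    valid_referers none blocked example.com *.example.com www.google.com;

    if ($invalid_referer) {
        return 403;
    }
}
```

`none` allows requests with no Referer header at all, and `blocked` allows
headers stripped by proxies, so direct visitors are not accidentally denied.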

Re: nginx not returning updated headers from origin server on conditional GET

2016-09-12 Thread B.R.
From what I understand, 304 answers should not try to modify headers, as
the cache having made the conditional request to check the correctness of
its entry will not necessarily update it:
https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.5.
The last sentence sums it all: '*If* a cache uses a received 304 response
to update a cache entry, [...]'
---
*B. R.*

On Sun, Sep 11, 2016 at 12:56 PM, jchannon 
wrote:

> I have nginx and its cache working as expected apart from one minor issue.
> When a request is made for the first time it hits the origin server,
> returns
> a 200 and nginx caches that response. If I make another request I can see
> from the X-Cache-Status header that the cache has been hit. When I wait a
> while knowing the cache will have expired I can see nginx hit my origin
> server doing a conditional GET because I have proxy_cache_revalidate on;
> defined.
>
> When I check if the resource has changed in my app on the origin server I
> see it hasn't and return a 304 with a new Expires header. Some may argue
> why
> are you returning a new Expires header if the origin server says nothing
> has
> changed and you are returning 304. The answer is, the HTTP RFC says that
> this can be done https://tools.ietf.org/html/rfc7234#section-4.3.4
>
> One thing I have noticed, no matter what headers I add or modify, when
> the origin server returns 304 nginx will give a response with the first set
> of response headers it saw for that resource.
>
> Also if I change the Cache-Control:max-age header value from the first
> request when I return the 304 response it appears nginx obeys the new value
> as my resource is cached for that time however the response header value is
> that of what was given on the first request not the value that I modified
> on
> the 304 response.   This applies to all subsequent requests if the origin
> server issues a 304.
>
> I am running nginx version: nginx/1.10.1
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269457,269457#msg-269457
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
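
For reference, a minimal revalidating-cache setup along the lines the poster
describes might look like the following; the cache path, zone name, and
`backend` upstream are placeholders:

```nginx
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

server {
    location / {
        proxy_pass              http://backend;
        proxy_cache             app_cache;
        # Send If-Modified-Since / If-None-Match when refreshing
        # expired entries, so the origin can answer 304.
        proxy_cache_revalidate  on;
        # Expose HIT / MISS / REVALIDATED for debugging.
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```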

Re: limit-req and greedy UAs

2016-09-12 Thread B.R.
You could also generate 304 responses for content you won't provide (cf.
return).
nginx is good at dealing with loads of requests, no problem on that side.
And since return generates an in-memory answer by default, you won't be
hammering your resources. If you are CPU or RAM-limited because of those
requests, then I would suggest you evaluate the sizing of your server(s).
You might wish to separate logging for these requests from the standard
flow to improve their readability, or deactivate them altogether if you
consider they add little-to-no value.

My 2¢,
---
*B. R.*

On Sun, Sep 11, 2016 at 9:16 PM,  wrote:

> https://www.nginx.com/blog/tuning-nginx/
>
> I have far more faith in this write-up regarding tuning than the
> anti-DDoS one, though both have similarities.
>
> My interpretation is the user bandwidth is connections times rate. But you
> can't limit the connection to one because (again my interpretation) there
> can be multiple users behind one IP. Think of a university reading your
> website. Thus I am more comfortable limiting bandwidth than I am limiting
> the number of connections. The 512k rate limit is fine. I wouldn't go any
> higher.
>
> I don't believe there is one answer here because it depends on how the
> user interacts with the website. I only serve static content. In fact, I
> only allow the verbs "head" and "get" to limit the attack surface. A page
> of text and photos itself can be many things. Think of a photo gallery
> versus a forum page. The forum has mostly text sprinkled with avatar
> photos, while the gallery can be mostly images with just a line of text
> each.
>
> Basically you need to experiment. Even then, your setup may be better or
> worse than the typical user. That said, if you limited the rate to 512k
> bytes per second, most users could achieve that rate.
>
> I just don't see evidence of download managers. I see plenty of wget,
> curl, and python. Those people get my 444 treatment. I use the map module
> as indicated in my other post to do this.
>
> What I haven't mentioned is filtering out machines. If you are really
> concerned about your system being overloaded, think about the search
> engines you want to support. Assuming you want Google, you need to set up
> your website in a manner so that Google knows you own it, then you can
> throttle it back. Google is maybe 20% of my referrals.
>
> If you have a lot of photos, you can set up nginx to block hot-linking.
> This is significant because Google Images will hot-link everything you
> have. What you want is for Google itself to see your images, which it will
> present in reduced resolution, but block the Google hot link. If someone
> really wants to see your image, Google supplies the referral page.
>
> http://nginx.org/en/docs/http/ngx_http_referer_module.html
>
> I make my own domain a valid referer, but maybe that is assumed. If you want
> to place a link to an image on your website in a forum, you need to make that
> forum a valid referer too.
>
> Facebook will steal your images.
> http://badbots.vps.tips/info/facebookexternalhit-bot
>
> I would use the nginx map module since you will probably be blocking many
> bots.
>
> Finally, you may want to block "the cloud" using your firewall. Only
> block the browser ports since mail servers will be on the cloud. I block
> all of AWS for example. My nginx.conf also flags certain requests such as
> logging into WordPress since I'm not using WordPress! Clearly that IP is a
> hacker. I have plenty more signatures in the map. I have a script that
> pulls the IP addresses out of the access.log. I get maybe 20 addresses a
> day. I feed them to ip2location. Any address that goes to a cloud, VPS,
> colo, hosting company gets added to the firewall blocking list. I don't
> just block the IP, but I use the Hurricane Electric BGP tool to get the
> entire IP space to block. As a rule, I don't block schools, libraries, or
> ISPs. The idea here is to allow eyeballs but not machines.
>
> You can also use commercial blocking services if you trust them. (I don't.
> )
>
>
>   Original Message
> From: Grant
> Sent: Sunday, September 11, 2016 10:28 AM
> To: nginx@nginx.org
> Reply To: nginx@nginx.org
> Subject: Re: limit-req and greedy UAs
>
> > This page has all the secret sauce, including how to limit the number
> of connections.
> >
> > https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/
> >
> > I set up the firewall with a higher number as a "just in case."
>
>
> Should I basically duplicate my limit_req and limit_req_zone
> directives into limit_conn and limit_conn_zone? In what sort of
> situation would someone not do that?
>
> - Grant
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
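
Answering the question in the quoted message: yes, the two limits are
typically declared side by side, each with its own zone keyed on the client
address. A rough sketch, with zone names and numbers that are illustrative
only:

```nginx
# http {} context: one zone per limit type, both keyed on client IP.
limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

server {
    location / {
        limit_req  zone=req_per_ip burst=20;  # requests per second, with a burst
        limit_conn conn_per_ip 10;            # cap simultaneous connections
        limit_rate 512k;                      # per-connection bandwidth cap
    }
}
```

Because `limit_rate` applies per connection, the connection cap is what keeps
a download manager from multiplying its effective bandwidth, which is the
scenario discussed earlier in the thread.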