quic listen on multiple vhost

2023-07-24 Thread Anoop Alias
Hi,

I am trying to set up nginx with multiple vhosts, with QUIC support for all
of them, using the sample config from
https://www.nginx.com/blog/binary-packages-for-preview-nginx-quic-http3-implementation/

server {
listen 65.109.175.140:443 ssl;
listen 65.109.175.140:443 quic reuseport;
server_name a.com;
.
.
}
server {
listen 65.109.175.140:443 ssl;
listen 65.109.175.140:443 quic reuseport;
server_name b.com;
.
.
}

This however is throwing an error

# nginx -t
nginx: [emerg] duplicate listen options for 65.109.175.140:443 in
/etc/nginx/sites-enabled/b.com.conf:105
nginx: configuration file /etc/nginx/nginx.conf test failed

What am I doing wrong?
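
Is the fix perhaps that reuseport (being a socket-level listen option) may
only be specified once per address:port pair, i.e. something like the sketch
below, with it dropped from the second vhost?

server {
    listen 65.109.175.140:443 ssl;
    listen 65.109.175.140:443 quic reuseport;
    server_name a.com;
    ...
}
server {
    listen 65.109.175.140:443 ssl;
    listen 65.109.175.140:443 quic;
    server_name b.com;
    ...
}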


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: Installing two versions of PHP-FPM?

2022-10-04 Thread Anoop Alias
This should help:

https://tinyurl.com/2mrps4a4


-- 
*Anoop P Alias*
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org


Re: Reg: Gradual buildup of nginx memory

2022-03-21 Thread Anoop Alias
I have seen similar behavior in the mod_sec2 module

On Mon, Mar 21, 2022 at 1:29 PM bkannadassan 
wrote:

> Hi All,
>
>   We are seeing a gradual buildup of NGINX memory to the tune of 1-2 MB
> every 15 mins or so. This memory doesn't comedown, please let us know how
> can we know what is the reason for the same. Please note this is a free
> version of NGINX.
>
> rgds
> Balaji
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,293859,293859#msg-293859
>
> ___
> nginx mailing list -- nginx@nginx.org
> To unsubscribe send an email to nginx-le...@nginx.org
>


-- 
*Anoop P Alias*
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org


Re: ktls nginx not working

2022-01-27 Thread Anoop Alias
It works now.

But I have a strange situation:

If I download the file using a text client like wget or curl,
BIO_get_ktls_send and SSL_sendfile do not show up in the debug log, but they
do show up if I use a browser like Chrome or Firefox.
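
For reference, this is roughly how I am checking for it (log path as set in
my build):

grep -c -E 'BIO_get_ktls_send|SSL_sendfile' /var/log/nginx/error_log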

-- 
*Anoop P Alias*
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org


Re: ktls nginx not working

2022-01-27 Thread Anoop Alias
sendfile on;

is present in the http context.

I tested with

# TLS Settings
ssl_protocols TLSv1.2;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;

which should cover CentOS 8 as mentioned in the blog post, but it still did
not work.
##

It's a KVM VPS from Hetzner, and the tls kernel module appears to be loaded:

[root@65-108-156-104 nginx-1.21.6]# lsmod|grep tls
tls   102400  0
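
For completeness, the test vhost boils down to this minimal sketch
(certificate paths, server name and document root are placeholders); if I
read the blog post right, the ssl_conf_command line is what actually asks
OpenSSL 3.x for kernel TLS:

server {
    listen 443 ssl http2;
    server_name example.com;                          # placeholder

    ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder
    ssl_certificate_key /etc/nginx/ssl/example.key;   # placeholder

    ssl_protocols TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_conf_command Options KTLS;   # needs nginx 1.19.4+ and OpenSSL built with enable-ktls

    sendfile on;

    location / {
        root /var/www/html;                           # placeholder
    }
}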





-- 
*Anoop P Alias*
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org


ktls nginx not working

2022-01-27 Thread Anoop Alias
Hi,

I am trying to implement/test ktls as per the blog article

https://www.nginx.com/blog/improving-nginx-performance-with-kernel-tls/#tls-protocol

###
This is done on CentOS8 VM

# uname -r
4.18.0-348.7.1.el8_5.x86_64
###
# openssl-3.0.1/.openssl/bin/openssl ciphers
TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:RSA-PSK-AES256-GCM-SHA384:DHE-PSK-AES256-GCM-SHA384:RSA-PSK-CHACHA20-POLY1305:DHE-PSK-CHACHA20-POLY1305:ECDHE-PSK-CHACHA20-POLY1305:AES256-GCM-SHA384:PSK-AES256-GCM-SHA384:PSK-CHACHA20-POLY1305:RSA-PSK-AES128-GCM-SHA256:DHE-PSK-AES128-GCM-SHA256:AES128-GCM-SHA256:PSK-AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:ECDHE-PSK-AES256-CBC-SHA384:ECDHE-PSK-AES256-CBC-SHA:SRP-RSA-AES-256-CBC-SHA:SRP-AES-256-CBC-SHA:RSA-PSK-AES256-CBC-SHA384:DHE-PSK-AES256-CBC-SHA384:RSA-PSK-AES256-CBC-SHA:DHE-PSK-AES256-CBC-SHA:AES256-SHA:PSK-AES256-CBC-SHA384:PSK-AES256-CBC-SHA:ECDHE-PSK-AES128-CBC-SHA256:ECDHE-PSK-AES128-CBC-SHA:SRP-RSA-AES-128-CBC-SHA:SRP-AES-128-CBC-SHA:RSA-PSK-AES128-CBC-SHA256:DHE-PSK-AES128-CBC-SHA256:RSA-PSK-AES128-CBC-SHA:DHE-PSK-AES128-CBC-SHA:AES128-SHA:PSK-AES128-CBC-SHA256:PSK-AES128-CBC-SHA

###
# /usr/sbin/nginx-debug -V
nginx version: nginx/1.21.6
built by gcc 8.5.0 20210514 (Red Hat 8.5.0-4) (GCC)
built with OpenSSL 3.0.1 14 Dec 2021
TLS SNI support enabled
configure arguments: --with-debug --prefix=/etc/nginx
--sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules
--with-pcre=./pcre2-10.39 --with-pcre-jit --with-zlib=./zlib-1.2.11
--with-openssl=./openssl-3.0.1 --with-openssl-opt=enable-ktls
--with-openssl-opt=enable-tls1_3 --conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error_log

The debug log does not show any signs of kTLS being in use
(a snippet from the log, taken while downloading a 1 GB file, is provided below):

2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame
02077A08 was sent
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent:
02077A08 sid:1 bl:0 len:8192
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame
02077D30 was sent
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent:
02077D30 sid:1 bl:0 len:8192
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame
02075E58 was sent
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent:
02075E58 sid:1 bl:0 len:8192
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame
02075F60 was sent
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent:
02075F60 sid:1 bl:0 len:8192
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame
02077BA8 was sent
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent:
02077BA8 sid:1 bl:0 len:8192
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame
02077AA0 was sent
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent:
02077AA0 sid:1 bl:0 len:8192
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame
02077890 was sent
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent:
02077890 sid:1 bl:0 len:8192
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame
02075BC8 was sent
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent:
02075BC8 sid:1 bl:0 len:8192
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http write filter

2022/01/27 13:41:33 [debug] 1843564#1843564: *140 read: 15,
02791FC0, 32768, 21168128
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 read: 15,
02791FC0, 32768, 21168128
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 read: 15,
02799FD0, 32768, 21200896
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http postpone filter
"/1G?" 02075DD8
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 write new buf t:1 f:1
02791FC0, pos 02791FC0, size: 32768 file: 21168128, size:
32768
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 write new buf t:1 f:1
02799FD0, pos 02799FD0, size: 32768 file: 21200896, size:
32768
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http write filter: l:0
f:1 s:65536
2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http write filter limit
2097152
2022/01/27 13:41:33 [debug] 

Re: Nginx performance data

2022-01-07 Thread Anoop Alias
https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/

On Fri, Jan 7, 2022 at 6:33 PM James Read  wrote:

>
>
> On Fri, Jan 7, 2022 at 11:56 AM Anoop Alias 
> wrote:
>
>> This basically depends on your hardware and network speed etc
>>
>> Nginx is event-driven and does not fork a separate process for handling
>> new connections which basically makes it different from Apache httpd
>>
>
> Just to be clear Nginx is entirely single threaded?
>
> James Read
>
>
>>
>> On Wed, Jan 5, 2022 at 5:48 AM James Read 
>> wrote:
>>
>>> Hi,
>>>
>>> I have some questions about Nginx performance. How many concurrent
>>> connections can Nginx handle? What throughput can Nginx achieve when
>>> serving a large number of small pages to a large number of clients (the
>>> maximum number supported)? How does Nginx achieve its performance? Is the
>>> epoll event loop all done in a single thread or are multiple threads used
>>> to split the work of serving so many different clients?
>>>
>>> thanks in advance
>>> James Read
>>> ___
>>> nginx mailing list
>>> nginx@nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>>
>>
>> --
>> *Anoop P Alias*
>>
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Nginx performance data

2022-01-07 Thread Anoop Alias
This basically depends on your hardware, network speed, etc.

Nginx is event-driven and does not fork a separate process to handle each new
connection, which is what fundamentally sets it apart from Apache httpd.
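
As a rough illustration, the concurrency ceiling is set by this kind of
configuration: one event loop per worker, and the total is approximately
worker_processes x worker_connections (the numbers here are only examples):

worker_processes auto;            # usually one worker per CPU core

events {
    use epoll;                    # on Linux
    worker_connections 10240;     # per worker
}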

On Wed, Jan 5, 2022 at 5:48 AM James Read  wrote:

> Hi,
>
> I have some questions about Nginx performance. How many concurrent
> connections can Nginx handle? What throughput can Nginx achieve when
> serving a large number of small pages to a large number of clients (the
> maximum number supported)? How does Nginx achieve its performance? Is the
> epoll event loop all done in a single thread or are multiple threads used
> to split the work of serving so many different clients?
>
> thanks in advance
> James Read
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: failed (104: Connection reset by peer) while proxying connection

2021-05-21 Thread Anoop Alias
The upstream at private_ip:5044 is closing the connection before the request
completes.

You should check the logs on the upstream server to see why it is doing this.
Perhaps a security module, or something else that drops connections immediately.
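
For context, I am assuming a stream{} proxy roughly like the sketch below
sits in front of Logstash (the upstream address and timeout are illustrative):

stream {
    server {
        listen 5044;
        proxy_pass 10.0.0.5:5044;    # your private_ip
        proxy_timeout 10m;           # connection is closed if no data flows for this long
    }
}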

On Fri, May 21, 2021 at 2:46 PM Mauro Tridici  wrote:

>
> Dear Users,
>
> I’m noticing a these error messages in /var/log/nginx/error.log.
>
> 021/05/21 10:57:25 [error] 21145#0: *7 recv() failed (104: Connection
> reset by peer) while proxying connection, client: public_ip, server:
> 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709,
> bytes from/to upstream:7709/321
> 2021/05/21 10:58:07 [error] 21145#0: *9 recv() failed (104: Connection
> reset by peer) while proxying connection, client: public_ip, server:
> 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709,
> bytes from/to upstream:7709/321
> 2021/05/21 10:58:46 [error] 21145#0: *11 recv() failed (104: Connection
> reset by peer) while proxying connection, client: public_ip, server:
> 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709,
> bytes from/to upstream:7709/321
> 2021/05/21 10:59:19 [error] 21145#0: *13 recv() failed (104: Connection
> reset by peer) while proxying connection, client: public_ip, server:
> 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709,
> bytes from/to upstream:7709/321
> 2021/05/21 10:59:57 [error] 21145#0: *15 recv() failed (104: Connection
> reset by peer) while proxying connection, client: public_ip, server:
> 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709,
> bytes from/to upstream:7709/321
> 2021/05/21 11:00:55 [error] 21145#0: *17 recv() failed (104: Connection
> reset by peer) while proxying connection, client: public_ip, server:
> 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709,
> bytes from/to upstream:7709/321
> 2021/05/21 11:01:38 [error] 21145#0: *19 recv() failed (104: Connection
> reset by peer) while proxying connection, client: public_ip, server:
> 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709,
> bytes from/to upstream:7709/321
> 2021/05/21 11:02:33 [error] 21145#0: *21 recv() failed (104: Connection
> reset by peer) while proxying connection, client: public_ip, server:
> 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709,
> bytes from/to upstream:7709/321
> 2021/05/21 11:03:06 [error] 21145#0: *23 recv() failed (104: Connection
> reset by peer) while proxying connection, client: public_ip, server:
> 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709,
> bytes from/to upstream:7709/321
> 2021/05/21 11:03:39 [error] 21145#0: *25 recv() failed (104: Connection
> reset by peer) while proxying connection, client: public_ip, server:
> 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709,
> bytes from/to upstream:7709/321
> 2021/05/21 11:04:33 [error] 21145#0: *27 recv() failed (104: Connection
> reset by peer) while proxying connection, client: public_ip, server:
> 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709,
> bytes from/to upstream:7709/321
>
> Basically, it seems that the error is related to a particular (authorized)
> IP address.
> This remote FILEBEAT client is sending data to LOGSTASH server via NGINX.
>
> Do you some suggestion to fix this annoying issue?
>
> Thank you in advancee,
> Mauro
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Compile Nginx

2020-04-28 Thread Anoop Alias
The Nginx binary compiled on one system can be run on another system with a
similar architecture, since it is just a native binary.

The ones you download from a repo are likewise compiled into a binary on the
repo maintainer's build machine.

You can ship the binary in a package format such as RPM or DEB.
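
A minimal sketch of that workflow (host names and paths are illustrative):

ldd /usr/sbin/nginx                        # on the build host: note the shared libraries it needs
scp buildhost:/usr/sbin/nginx /tmp/nginx   # copy to the target (same distro/architecture, same libraries)
/tmp/nginx -t -c /etc/nginx/nginx.conf     # confirm it runs against the target's config before installing it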

On Tue, Apr 28, 2020 at 7:13 PM Praveen Kumar K S 
wrote:

> I usually install from the official nginx apt repo. But since I want to
> use modules like more_set_headers which requires building nginx from
> source, I'm looking for best practices.
>
> On Tue, Apr 28, 2020 at 6:50 PM Reinis Rozitis  wrote:
>
>> > Can I compile nginx on Ubuntu 16.04 and reuse it on other deployments?
>> Or do I need to compile every time ? Please advise.
>>
>> As far as the hosts have all the shared libraries like openssl/pcre etc
>> (you can check with 'ldd /path/to/nginx') there is no need to compile every
>> time and you can just copy the nginx binary.
>>
>> rr
>>
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>
>
> --
>
>
> *Regards,*
>
>
> *K S Praveen KumarM: +91-9986855625 *
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io?

2020-01-30 Thread Anoop Alias
GET https://localhost/sockjs-node/info?t=1580228998416
net::ERR_CONNECTION_REFUSED"

means it is connecting to localhost:443 (the default HTTPS port) and not to
port 8080.
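
If the goal is to reach the dev server on port 8080 through nginx on 443, a
location along these lines is usually needed (the port and path are taken
from your report; the rest is a sketch):

location /sockjs-node/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;    # allow the websocket upgrade
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}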

On Thu, Jan 30, 2020 at 6:41 PM MarcoI  wrote:

> Hi Francis,
> thanks for helping.
>
> curl on PC-Server (Ubuntu 18.04.03 Server Edition):
>
> (base) marco@pc:~/vueMatters/testproject$ curl -Iki
> http://localhost:8080/
> HTTP/1.1 200 OK
> X-Powered-By: Express
> Accept-Ranges: bytes
> Content-Type: text/html; charset=UTF-8
> Content-Length: 774
> ETag: W/"306-TZR5skx9okrXHMJbxwuiUem3Jkk"
> Date: Thu, 30 Jan 2020 09:32:30 GMT
> Connection: keep-alive
>
> But from a laptop (Ubuntu 18.04.03 Desktop):
> - https://drive.google.com/open?id=1r56ZApxg3gQLRakKGCwI7CriQbbmfrLh
> - https://drive.google.com/open?id=1Dm-PC85pjGfqIeMOS45k3hvV9PANgOH5
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,286850,286862#msg-286862
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: slow connection on SSL port (TTFB)

2019-08-07 Thread Anoop Alias
Do you see a large TTFB on a static HTML page? If an upstream like
proxy/FastCGI is involved and it is slow to respond, the TTFB will also be high.

17K open/TIME_WAIT ports -- investigate this, as it does not seem normal.
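
Something like this against a static file splits the time spent on TCP/TLS
setup from the time to first byte (the URL is illustrative; the -w fields are
curl's built-in timers):

curl -o /dev/null -sk \
  -w 'connect=%{time_connect}  tls=%{time_appconnect}  ttfb=%{time_starttransfer}\n' \
  https://yourserver/static.html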

On Wed, Aug 7, 2019 at 3:46 PM neomaq  wrote:

> Hello
> there is a problem:
> slow connection to nginx server
>
> telnet server 443
> 1-8 random sec before TTFB
>
> all possible network stack tunings are applied, similar problems are not
> observed on other(non nginx) ports
>
> 32 vCPU   Intel(R) Xeon(R) CPU E5-2630 v4
> 96 GB RAM
> avg CPU load -20%
> 1 GB network (tested on local internal network)
>
> there are over 1400 virtual hosts with SSL
> the problem is observed during busy hours
>
> nginx:
> user www-data;
> worker_processes 64;
> pid /run/nginx.pid;
> worker_rlimit_nofile 16384;
> events {
> use epoll;
> worker_connections 16384;
> multi_accept on;}
> http {
> sendfile on;
> tcp_nopush on;
> tcp_nodelay on;
> keepalive_timeout 65;
> types_hash_max_size 2048;
> server_names_hash_max_size 524280;
> ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
> ssl_prefer_server_ciphers on;
> }
> 
> there are 5-15K  ESTANLISHED connections and over 17K open/TIME_WAIT ports
>
> What can be done to reduce the connection time to the server?
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,285142,285142#msg-285142
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: reverse proxy nextcloud / owncloud

2019-06-02 Thread Anoop Alias
"Not enough space" does not seem to be a standard Nginx error message.

It is probably something in the application itself (Nextcloud). Since you say
Docker, make sure the container can actually store the files (the Nextcloud
data directory).



On Sun, Jun 2, 2019 at 6:51 PM babaz  wrote:

> Hi guys,
> sorry to bother you with this topic but I've tried for two days without
> finding solution.
> Basically I have a letsencrypt installation and a nextcloud in a docker.
> I'm able to make the reverse proxy working loading the pages but I cannot
> upload any kind of file is always giving "not enough space message".
> This is my configuration on nginx
>
> location /nextcloud/ {
> include /config/nginx/proxy.conf;
> proxy_pass http://172.17.0.2:80/;
> #   proxy_max_temp_file_size 2048m;
> client_max_body_size 0;
> #   proxy_http_version 1.1;
> proxy_request_buffering off;
> #   proxy_set_header Host $host;
> #   proxy_set_header X-Real-IP $remote_addr;
> #   proxy_set_header X-Forwarded-For
> $proxy_add_x_forwarded_for;
> #   proxy_set_header X-Forwarded-Proto $scheme;
> #   add_header Strict-Transport-Security "max-age=31536000;
> includeSubDomains; preload";
> }
>
> Please help me I-m getting crazy.
> Thanks
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,284407,284407#msg-284407
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: More than one host

2019-05-06 Thread Anoop Alias
Try

proxy_set_header   Host   $host;
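
i.e. in your location block, something like this (everything else as you
already have it):

location / {
    proxy_set_header X-Real-IP       $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host            $host;    # pass on whatever name the client requested
    proxy_pass https://INTERNAL_IP/;
}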

On Mon, May 6, 2019 at 5:15 PM Julian Brown  wrote:

> I am having a problem and not sure which side of the ocean it is on (Nginx
> or Apache).
>
> I am internally setting up an Nginx reverse proxy that will eventually go
> public.
>
> I have two domains I want Nginx to proxy for, both go to different
> machines.
>
> The second domain is for a bugzilla host, bugzilla.conf:
>
> server {
> server_name bugzilla.example.com;
>
> listen *:80;
>
> access_log /var/log/nginx/bugzilla.access.log;
> error_log /var/log/nginx/bugzilla.error.log debug;
>
> location / {
> proxy_set_header X-Real-IP  $remote_addr;
> proxy_set_header X-Forwarded-For $remote_addr;
> proxy_set_header Host bugzilla.example.com;
> proxy_pass https://INTERNAL_IP /;
> }
> }
>
> It does send the request to the correct machine, but I do not know if it
> is sending the correct hostname or not.
>
> On the machine I am sending to is an Apache instance with multiple
> development versions of our server and bugzilla.   The request is getting
> handled by what is apparently the default vhost of the Apache server, not
> the bugzilla vhost.  In other words the wrong data is being sent out
> because it is going to the wrong end point on Apache.
>
> In the log for that vhost on Apache I see:
>
>   1 192.168.1.249 - - [05/May/2019:14:43:28 -0500] "GET /bugzilla/
> HTTP/1.0" 200 4250 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)
> AppleWebKit/537.36 (KHT
>   2 Execution Time 8579
>
> the dash after 200 4250 is the 'host" I believe it is seeing or defaulting
> to "-" and not http://bugzilla.example.com.
>
> In my Nginx config I set proxy_set_header Host to what I want it to send
> as bugzilla.example.com, but I am not sure what is getting sent.
>
> Is proxy_set_header Host, the proper way to send it as "
> bugzilla.example.com" so that Apache sees it coming on that server name
> to activate the correct vhost?
>
> It could be a problem in the Apache vhost config, but if I direct my
> browser with /etc/hosts directly at Apache it works correctly it is only
> with proxying from Nginx that I see this behavior.
>
> Any comments?
>
> Thanx
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Nginx POST requests are slow

2019-04-12 Thread Anoop Alias
Most likely this is an issue with your PHP application. Try a simple PHP
script, or a ready-made app like WordPress, and see if you can reproduce the
problem.



On Fri, Apr 12, 2019 at 3:41 PM sharvadze 
wrote:

> For some reason all the POST request are delayed for about 1 min. Here is
> my
> configuration:
>
> /etc/nginx/nginx.conf
>
> sendfile on;
> tcp_nopush on;
> tcp_nodelay off;
> keepalive_timeout 65;
> types_hash_max_size 2048;
> proxy_buffering off;
> proxy_http_version 1.1;
> proxy_set_header Connection "";
>
> /etc/nginx/sites-available/default
>
> client_max_body_size 0;
> send_timeout 300;
> proxy_set_header   X-Real-IP $remote_addr;
> proxy_set_header   Host  $http_host;
>
> location / {
> # First attempt to serve request as file, then
> # as directory, then fall back to displaying a 404.
> try_files $uri $uri/ /index.php?$query_string;
> }
>
> /etc/php/7.2/fpm/pool.d/www.conf
>
> pm = ondemand
> pm.max_children = 60
> pm.start_servers = 20
> pm.min_spare_servers = 20
> pm.max_spare_servers = 60
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,283736,283736#msg-283736
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: nginx-1.15.10

2019-03-26 Thread Anoop Alias
On Tue, Mar 26, 2019 at 7:55 PM Maxim Dounin  wrote:

> Changes with nginx 1.15.10   26 Mar
> 2019
>
>
> *) Feature: loading of SSL certificates and secret keys from variables
>

The doc says:

Since version 1.15.9, variables can be used in the file name when using
OpenSSL 1.0.2 or higher:

So what's new in 1.15.10?



> --
> Maxim Dounin
> http://nginx.org/
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Possible memory leak?

2019-03-12 Thread Anoop Alias
An nginx restart can take the web server offline for 30 seconds or more,
depending on the number of server{} blocks and the configuration. It may be
fine for a few vhosts, though.




On Wed, Mar 13, 2019 at 11:14 AM Peter Booth via nginx 
wrote:

> Perhaps I’m naive or just lucky, but I have used nginx on many contracts
> and permanent jobs for over ten years and have never attempted to reload
> canfigurations. I have always stopped then restarted nginx instances one at
> a time. Am I not recognizing a constraint that affects other people?
>
> Curious ,
>
> Peter
>
> Sent from my iPhone
>
> > On Mar 12, 2019, at 9:57 PM, Maxim Dounin  wrote:
> >
> > Hello!
> >
> >> On Tue, Mar 12, 2019 at 02:09:06PM -0400, wkbrad wrote:
> >>
> >> First of all, thanks so much for your insights into this and being
> patient
> >> with me.  :)  I'm just trying to understand the issue and what can be
> done
> >> about it.
> >>
> >> Can you explain to me what you mean by this?
> >>> you can configure system allocator to use mmap()
> >>
> >> I'm not a C programmer so correct me if I'm wrong, but doesn't the Nginx
> >> code determine which memory allocator it uses?
> >
> > Normally C programs use malloc() / free() functions as provided by
> > system libc library to allocate memory.  While it is possible for
> > an application to provide its own implementation of these
> > functions, this is something rarely used in practice.
> >
> >> If not can you point me to an article that describes how to do that as I
> >> would like to test it?
> >
> > For details on how to control system allocator on Linux, please
> > refer to the mallopt(3) manpage, notably the
> > MALLOC_MMAP_THRESHOLD_ environment variable.  Web version is
> > available here:
> >
> > http://man7.org/linux/man-pages/man3/mallopt.3.html
> >
> > Please refer to the M_MMAP_THRESHOLD description in the same man
> > page for details on what it does and various implications.
> >
> > Using a values less than NGX_CYCLE_POOL_SIZE (16k by default)
> > should help to move all configuration-related allocations into
> > mmap(), so these can be freed independently.  Alternatively,
> > recompiling nginx with NGX_CYCLE_POOL_SIZE set to a value larger
> > than 128k (default mmap() threshold) should have similar
> > effect.
> >
> > Note though that there may be other limiting factors,
> > such as MALLOC_MMAP_MAX_, which limits maximum number of mmap()
> > allocations to 65536 by default.
> >
> > You can also play with different allocators by using the
> > LD_PRELOAD environment variable, see for example jemalloc's wiki
> > here:
> >
> > https://github.com/jemalloc/jemalloc/wiki/Getting-Started
> >
> >> Also, you seem to be saying that Nginx IS attempting to free the memory
> but
> >> is not able to due to the way the OS is allocating memory or refusing to
> >> release the memory.  I've tested this in several Linux distros,
> kernels, and
> >> Nginx versions and I see the same behavior in all of them.  Do you know
> of
> >> an OS or specific distro where Nginx can release the old memory
> allocations
> >> correctly?  I would like to test that too.  :)
> >
> > Any Linux distro can be tuned so freed memory will be returned to
> > the system, see above.  And for example on FreeBSD, which uses
> > jemalloc as a system allocator, unused memory is properly returned
> > to the system out of the box (though can be seen in virtual
> > address space occupied by the process, since the allocator uses
> > madvise() to make the memory as unused instead of unmapping a
> > mapping).
> >
> > --
> > Maxim Dounin
> > http://mdounin.ru/
> > ___
> > nginx mailing list
> > nginx@nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Possible memory leak?

2019-03-12 Thread Anoop Alias
Limiting the number of server blocks may not be practical when each domain
has a different TLS config,

unless we use the Lua modules provided by OpenResty.
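
e.g. with the ngx.ssl API from lua-resty-core, a single catch-all server
block can pick the certificate per SNI name at handshake time -- a rough,
untested sketch, where load_cert_for() stands in for whatever lookup (disk,
redis, ...) is used:

server {
    listen 443 ssl;
    server_name _;

    # fallback certificate; replaced per-handshake below
    ssl_certificate     /etc/nginx/ssl/fallback.crt;
    ssl_certificate_key /etc/nginx/ssl/fallback.key;

    ssl_certificate_by_lua_block {
        local ssl = require "ngx.ssl"
        local name = ssl.server_name()                 -- SNI hostname sent by the client
        local cert_pem, key_pem = load_cert_for(name)  -- hypothetical lookup function
        if cert_pem and key_pem then
            ssl.clear_certs()
            ssl.set_der_cert(ssl.cert_pem_to_der(cert_pem))
            ssl.set_der_priv_key(ssl.priv_key_pem_to_der(key_pem))
        end
    }
}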

Correct me if I am wrong



On Tue, Mar 12, 2019 at 8:09 PM Maxim Dounin  wrote:

> Hello!
>
> On Mon, Mar 11, 2019 at 04:37:50PM -0400, wkbrad wrote:
>
> > I think I haven't been clear in what I'm seeing so let's start over.
> :)  I
> > set up a very simple test on Centos 7 with a default install of Nginx
> > 1.12.2.  Below is exactly what I did to produce the result and it's
> clear to
> > me that Nginx is using 2x the ram than it should be using after the first
> > reload.  Can anyone explain why the ram usage would double after doing a
> > config reload?
>
> As I already tried to explained earlier in this thread, this is a
> result of two things:
>
> 1) How nginx allocates memory when doing a configuration reload:
> it creates a new configuration first, and then frees the old one.
>
> 2) How system memory allocator works.  Usually it cannot return
> memory to the system if there are any remaining allocations above
> the freed memory regions.  In some cases you can configure system
> allocator to use mmap(), so it will be possible to free such
> allocations, but it may a be a bad idea for other reasons.
>
> As a result, if large amount of memory is used solely for the
> configuration structures, memory occupied by the nginx master
> process from the system point of view is roughly doubled after a
> configuration reload.
>
> Note that the memory in question is not leaked.  It is properly
> freed by nginx, and it is available for future allocations within
> nginx.  In worker processes, this memory will be used for various
> run-time allocations, such as request buffers and so on.  In the
> master process, this memory will be used on further configuration
> reloads, so the master process will not grow any further.
>
> If the amount of memory used for configuration structures is a
> problem, you may want to re-think your configuration approach.  In
> particular, large virtual hosting providers are known to use nginx
> with small number of server{} blocks serving many different
> domains.  Alternatively, you may want to build nginx with less
> modules compiled in, as each module usually allocates at least
> basic configuration structures in each server{} / location{} even
> if not used.
>
> --
> Maxim Dounin
> http://mdounin.ru/
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Possible memory leak?

2019-03-12 Thread Anoop Alias
I am able to reproduce the issue @wkbrad is reporting

[root@server1 ~]# ps_mem|head -1 && ps_mem|grep nginx
 Private  +   Shared  =  RAM used   Program
 25.3 MiB + 119.5 MiB = 144.9 MiB   nginx (3)
[root@server1 ~]# systemctl restart nginx
[root@server1 ~]# ps_mem|head -1 && ps_mem|grep nginx
 Private  +   Shared  =  RAM used   Program
 24.2 MiB +  58.1 MiB =  82.2 MiB   nginx (4)
 -->  notice the shared memory usage is half of what is used before the restart
[root@server1 ~]# ps_mem|head -1 && ps_mem|grep nginx
 Private  +   Shared  =  RAM used   Program
 23.1 MiB +  57.9 MiB =  81.0 MiB   nginx (3)
 ---> the cache loader process exits and the RAM usage remains the same
[root@server1 ~]# nginx -s reload
   ---> A graceful reload is performed on Nginx
[root@server1 ~]# ps_mem|head -1 && ps_mem|grep nginx
 Private  +   Shared  =  RAM used   Program
 15.8 MiB + 118.8 MiB = 134.5 MiB   nginx (3)
 ---> the shared RAM size doubles and stays at this value until another restart is performed



##

I think this is because pmap shows two heaps after a reload, whereas there is
only one right after a restart; an additional heap appears after the reload.

[root@server1 ~]# systemctl restart nginx
[root@server1 ~]# ps aux|grep nginx
root 22392  0.0  0.7 510316 62184 ?Ss   13:49   0:00 nginx:
master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
[root@server1 ~]# pmap -X 22392|head -2 && pmap -X 22392|grep heap
22392:   nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
 Address Perm   Offset Device  Inode   Size   Rss   Pss
Referenced Anonymous Swap Locked Mapping
01b1 rw-p   00:00  0  61224 58688 17187
 80 586880  0 [heap]


Now after the reload


[root@server1 ~]# pmap -X 20983|head -2 && pmap -X 20983|grep heap
20983:   nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
 Address Perm   Offset Device  Inode   SizeRss   Pss
Referenced Anonymous Swap Locked Mapping
0278 rw-p   00:00  0  61224  61220 23118
51540 612200  0 [heap]
0634a000 rw-p   00:00  0  57856  55360 19138
55360 553600  0 [heap]
###


On Tue, Mar 12, 2019 at 2:07 AM wkbrad  wrote:

> Hi All,
>
> I think I haven't been clear in what I'm seeing so let's start over.  :)  I
> set up a very simple test on Centos 7 with a default install of Nginx
> 1.12.2.  Below is exactly what I did to produce the result and it's clear
> to
> me that Nginx is using 2x the ram than it should be using after the first
> reload.  Can anyone explain why the ram usage would double after doing a
> config reload?
>
> yum update
> reboot
> yum install epel-release
> yum install nginx
> systemctl enable nginx
> systemctl start nginx
> yum install ps_mem vim
> cd /etc/nginx/
> vim vhost.template
>
> 
> server {
> listen 80;
> listen [::]:80;
>
> server_name {{DOMAIN}};
>
> root /var/www/html;
> index index.html;
>
> location / {
> try_files $uri $uri/ =404;
> }
> }
>
> 
> cd conf.d
> for i in $(seq -w 1 5); do sed 's/{{DOMAIN}}/dom'${i}'.com/'
> ../vhost.template > dom${i}.conf; done
> systemctl restart nginx
> ps_mem|grep nginx
>
> 
>  13.8 MiB + 750.7 MiB = 764.5 MiB   nginx (3)
>
> 
> systemctl reload nginx; sleep 60; ps_mem |grep nginx
>
> 
>  27.2 MiB +   1.4 GiB =   1.5 GiB   nginx (3)
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,283216,283344#msg-283344
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Possible memory leak?

2019-03-08 Thread Anoop Alias
Sorry OT -  @wkbrad -  Please contact me off-the-list and we can discuss
this further

On Fri, Mar 8, 2019 at 9:09 PM wkbrad  wrote:

> Hi Anoop!
>
> I thought you might have been the nDeploy guy and I've been planning on
> bringing this up with you too.  We actually have several servers licensed
> with you.  :)
>
> And they do have the same issue but you're still misunderstanding what the
> problem is.
>
> I completely understand that when the reload happens it should use 2x the
> ram.  That's expected.  What is not expected is that the ram stays at that
> level AFTER the reload is complete.
>
> Let's look at an example from a live Xtendweb server.  Here is the ram
> usage
> after a restart.
>  30.5 MiB +   1.4 GiB =   1.5 GiB   nginx (4)
>
> And here is the ram usage after a reload.
>  28.4 MiB +   2.8 GiB =   2.9 GiB   nginx (4)
>
> The reload is completely finished at that point with no workers in shutting
> down state and it's now using 2x the ram.  Now if I use the binary reload
> process next it goes back down.
>  26.1 MiB +   1.5 GiB =   1.5 GiB   nginx (4)
>
> Again, I'm not talking about what SHOULD be happening.  It's totally normal
> and expected for it to use 2x the ram DURING the reload.  It's not expected
> for it to continue using 2x the ram AFTER the reload is finished.
>
> Thanks!
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,283216,283317#msg-283317
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Possible memory leak?

2019-03-07 Thread Anoop Alias
It's simple: Nginx has a master process and the number of worker processes
you configure in nginx.conf. It's the workers that handle connections, and
each one handles them asynchronously.

When you send a HUP, all the master process does is spawn n new workers; all
new connections on port 80/443 are then handled by the new workers. But
remember that the old workers may still be doing some work, and terminating
them on the spot would close those connections in a non-graceful way, so the
master process keeps the old workers alive for a while to let them finish
gracefully.

So if the worker count is n, during a reload it becomes 2n, and then n
workers are gracefully shut down. Which means that if n workers use x memory,
during the reload the memory usage becomes 2x.

You can set workers to a low value, say 1 worker process, if the system is
limited in memory, but the possibility of having 2n workers during a reload
cannot be avoided, as it is more of a feature, and the 2x memory usage is an
unwanted side effect of that feature.
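
Concretely, the only knobs on that side are along these lines (the values are
illustrative):

worker_processes 1;              # fewer workers -> smaller 2n footprint during a reload
worker_shutdown_timeout 10s;     # bound how long the old workers linger while finishing up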

Having said that, the Nginx devs could still look into why defining more
vhosts consumes so much memory while Apache does not have this problem. I
develop an automation script for a popular web control panel, and most
servers using the script have up to 10k vhosts defined; nginx's memory usage
is about 4x that of Apache with this many vhosts. And with the SSL
definitions etc. needed for each vhost, we cannot reduce the number of
vhosts either.


On Fri, Mar 8, 2019 at 8:05 AM wkbrad  wrote:

> Thanks, Anoop!  But I don't think you understood the point I was trying to
> get across.  I was definitely not trying to compare nginx and apache memory
> usage. Let's just ignore that part was ever said.  :)
>
> I'm trying to understand why Nginx is using 2x the memory usage when the
> HUP
> signal is sent, i.e. the normal reload process.
>
> When you use the USR2/QUIT method, i.e. the binary upgrade process, it
> doesn't do this.
>
> It's a big problem on high vhost servers when you go from normally using 1G
> of ram to using 2G and then 4G during subsequent reloads.
>
> It's that brief 4G spike that initially caught my attention.  But then I
> noticed that it was always using 2x more ram.  Whoa!
>
> This is super easy to reproduce so I invite you to test it yourself.
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,283216,283312#msg-283312
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Possible memory leak?

2019-03-07 Thread Anoop Alias
Nginx does use more RAM per vhost than Apache does.

http://nginx.org/en/docs/control.html

USR2 is for in-place binary upgrades; normally you should just send a SIGHUP.
I have sometimes seen USR2 lead to multiple master processes and other weird
behaviour.

You can probably use worker_shutdown_timeout 10s; or similar to get the old
workers to shut down in a more time-bound manner.



On Fri, Mar 8, 2019 at 12:03 AM wkbrad  wrote:

> Hi all,
>
> I just wanted to share the details of what I've found about this issue.
> Also thanks to Maxim Dounin and Reinis Rozitis who gave some really great
> answers!
>
> The more I look into this the more I'm convinced this is an issue with
> Nginx
> itself.  I've tested this with 3 different builds now and all have the
> exact
> same issue.
>
> The first 2 types of servers I tested were both running Nginx 1.15.8 on
> Centos 7 ( with 1 of them being on 6 ).  I tested about 10 of our over 100
> servers.  This time I tested in a default install of Debian 9 with Nginix
> version 1.10.3 and the issue exists there too.  I just wanted to test on
> something completely different.
>
> For the test, I created 50k very simple vhosts which used about 1G of RAM.
> Here is the ps_mem output.
>  94.3 MiB +   1.0 GiB =   1.1 GiB   nginx (3)
>
> After a normal reload it then uses 2x the ram:
> 186.3 MiB +   1.9 GiB =   2.1 GiB   nginx (3)
>
> And if I reload it again it briefly jumps up to about 4G during the reload
> and then goes back down to 2G.
>
> If I instead use the "upgrade" option.  In the case of Debian, service
> nginx
> upgrade, then it reloads gracefully and goes back to using 1G again.
> 100.8 MiB +   1.0 GiB =   1.1 GiB   nginx (3)
>
> The difference between the "reload" and "upgrade" process is basically only
> that reload sends a HUP signal to Nginx and upgrade sends a USR2 and then
> QUIT signal.  What happens with all of those signals is entirely up to
> Nginx.  It could even ignore them if chose too.
>
> Additionally, I ran the same test with Apache.  Not because I want to
> compare Nginx to Apache, they are different for a reason.  I just wanted to
> test if this was a system issue.  So I did the same thing on Debian 9,
> installed Apache and created 50k simple vhosts.  It used about 800M of ram
> and reloading did not cause that to increase at all.
>
> All of that leads me to these questions.
>
> Why would anyone want to use the normal reload process to reload the Nginx
> configuration?
> Shouldn't we always be using the upgrade process instead?
> Are there any downsides to doing that?
> Has anyone else noticed these issues and have you found another fix?
>
> Look forward to hearing back and thanks in advance!
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,283216,283309#msg-283309
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: I'm about to embark on creating 12000 vhosts

2019-02-11 Thread Anoop Alias
I maintain an Nginx config generation plugin for a web hosting control panel,
where people routinely put this many domains on a server, and the things I
notice are:

1. Memory consumption per worker process goes up as the vhost count goes up,
so you may need to reduce the worker count.

2. As already mentioned, a reload can take a lot of time, so run nginx -t
first (a quick check is sketched below).

3. Even startup takes time, as most package maintainers put an nginx -t in
ExecStartPre (or the non-systemd equivalent), which takes a long time with
many vhosts.

I have read that Nginx is not great at handling this many vhost definitions,
which is why CloudFlare use a dynamic setup (like the one in OpenResty) for
SSL on their edge servers.
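
A quick way to get a feel for points 2 and 3 on an existing box with the full
vhost set deployed:

time nginx -t            # config parse/validation time grows with the vhost count
ps_mem | grep nginx      # per-worker memory with all the vhosts loaded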

On Tue, Feb 12, 2019 at 1:25 AM Peter Booth via nginx 
wrote:

> +1 to the openresty suggestion
>
> I’ve found that whenever I want to do something gnarly or perverse with
> nginx, openresty helps me do it in a way that’s maintainable and with any
> ugliness minimized.
>
> It’s like nginx with super-powers!
>
> Sent from my iPhone
>
> On Feb 11, 2019, at 1:34 PM, Robert Paprocki <
> rpapro...@fearnothingproductions.net> wrote:
>
> FWIW, this kind of large installation is why solutions like OpenResty
> exist (providing for dynamic config/cert service/hostname registration
> without having to worry about the time/expense of re-parsing the Nginx
> config).
>
> On Mon, Feb 11, 2019 at 7:59 AM Richard Paul 
> wrote:
>
>> Hi Ben,
>>
>> Thanks for the quick response. That's great to hear, as we'd only get to
>> find this out after putting rather a lot of effort into the process.
>> We'll be hosting these on cloud instances but since those aren't the
>> fastest machines around I'll take the reloading as a word of caution (we're
>> probably going to have to make another bit of application functionality
>> which will handle this so that we're only reloading when we have domain
>> changes rather than on a regular schedule that'd I'd thought would be the
>> simplest method.)
>>
>> I have a plan for the rate limits, but thank you for mentioning it. SANs
>> would reduce the number of vhosts, but I'm not sure about the added
>> complexity of managing the vhost templates and the key/cert naming.
>>
>> Kind regards,
>> Richard
>>
>>
>> On Mon, 2019-02-11 at 16:35 +0100, Ben Schmidt wrote:
>>
>> Hi Richard,
>>
>> we have experience with around 1/4th the vhosts on a single Server, no
>> Issues at all.
>> Reloading can take up to a minute but the Hardware isn't what I would
>> call recent.
>>
>> The only thing that you'll have to watch out are Letsencrypt rate
>> Limits > https://letsencrypt.org/docs/rate-limits/
>> #
>> /etc/letsencrypt/renewal $ ls | wc -l
>> 1647
>> #
>> We switched to using SAN Certs whenever possible.
>>
>> Around 8 years ago I managed a 8000 vHosts Webfarm with a apache. No
>> Issues ether.
>>
>> Cheers,
>> Ben
>>
>> On Mon, Feb 11, 2019 at 4:16 PM rick_pri 
>> wrote:
>>
>> Our current setup is pretty simple, we have a regex capture to ensure that
>> the incoming request is a valid ascii domain name and we serve all our
>> traffic from that.  Great ... for us.
>>
>> However, our customers, with about 12000 domain names at present have
>> started to become quite vocal about having HTTPS on their websites, to
>> which
>> we provide a custom CMS and website package, which means we're about to
>> create a new Nginx layer in front of our current servers to terminate
>> TLS.
>> This will require us to set up vhosts for each certificate issued with
>> server names which match what's in the certificate's SAN.
>>
>> To keep this simple we're currently thinking about just having each
>> domain,
>> and www subdomain, on its own certificate (LetsEncrypt) and vhost but that
>> is going to lead, approximately, to the number of vhosts mentioned in the
>> subject line.  As such I wanted to put the feelers out to see if anyone
>> else
>> had tried to work with large numbers of vhosts and any issues which they
>> may
>> have come across.
>>
>> Kind regards,
>>
>> Richard
>>
>> Posted at Nginx Forum:
>> https://forum.nginx.org/read.php?2,282986,282986#msg-282986
>>
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>> ___
>>
>> nginx mailing list
>>
>> nginx@nginx.org
>>
>>
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: How to avoid nginx failure in case of bad certificate ?

2019-01-13 Thread Anoop Alias
You can reload instead of restarting, and nginx will continue running on the
old config if the new one is invalid.
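
i.e. in the renewal hook/cron, something like this instead of a restart (a
sketch -- adapt it to however the renewal is wired up):

nginx -t && nginx -s reload    # if the new config is broken the test fails, and the running nginx keeps serving the old config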

On Sun, Jan 13, 2019 at 2:31 PM Pierre Couderc  wrote:

> In case of bad certificate (certificate file missing for exemple), nginx
> fails to restart.
>
> Is there a way to avoid that ?
>
> There may be an error on one site without stopping all other correct sites.
>
> This occurs particularly in case we remove an old site, and make an
> error in configuration files : a few months later, let's encrypt tries
> to renew the certificate in the night and fails, then restarts nginx
> which fails because missing a no more used certificate, and the full
> site is stoped
>
> Thanks.
>
> PC
>
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Nginx hang and do not respond with large number of network connection in FIN_WAIT state

2019-01-10 Thread Anoop Alias
This server is not using network drives, and the only thing I can think of is
the temp paths being set to /dev/shm:

--http-client-body-temp-path=/dev/shm/client_temp
--http-proxy-temp-path=/dev/shm/proxy_temp
--http-fastcgi-temp-path=/dev/shm/fastcgi_temp
--http-uwsgi-temp-path=/dev/shm/uwsgi_temp
--http-scgi-temp-path=/dev/shm/scgi_temp


Could this be causing an issue? The domain under attack is set to proxy to
httpd and would certainly be using http-client-body-temp-path and
http-proxy-temp-path.

The system is quite beefy in terms of CPU and RAM, though:

# df -h|grep shm
tmpfs 63G  7.2M   63G   1% /dev/shm






On Thu, Jan 10, 2019 at 11:34 PM Anoop Alias  wrote:

> The issue was identified to be an enormous number of http request (
> attack) to one of the hosted domains that was using cloudflare. The traffic
> is coming in from cloudflare and this was causing nginx to be exhausted in
> terms of the TCP stack
>
> #
> # netstat -tn|awk '{print $6}'|sort|uniq -c
>   1
>   19922 CLOSE_WAIT
>   2 CLOSING
>   23528 ESTABLISHED
>   17785 FIN_WAIT1
>   4 FIN_WAIT2
>   1 Foreign
>  17 LAST_ACK
> 904 SYN_RECV
>  14 SYN_SENT
> 142 TIME_WAIT
> 
>
> Interestingly with the same attack, removing Nginx from the picture and
> exposing httpd cause the connections to be fine
>
> 
> ]# netstat -tn|awk '{print $6}'|sort|uniq -c
>   1
>  39 CLOSE_WAIT
>   9 CLOSING
> 664 ESTABLISHED
>  13 FIN_WAIT1
>  48 FIN_WAIT2
>   1 Foreign
>  24 LAST_ACK
>   8 SYN_RECV
>  12 SYN_SENT
>1137 TIME_WAIT
> ##
>
> Although the load is a bit high than usual.
>
> It looks like the TCP connections in the established state is somehow
> piling up with Nginx
>
> Number of established connections over time with nginx
> ##
> 535 ESTABLISHED
> 1195 ESTABLISHED
> 23437 ESTABLISHED
> 23490 ESTABLISHED
> 23482 ESTABLISHED
> 389 ESTABLISHED
> ##
>
> I think this could be a misconfiguration in Nginx?. Would be great if
> someone points out what is wrong with the config
>
> Thanks,
>
>
> On Thu, Jan 10, 2019 at 8:27 AM Anoop Alias 
> wrote:
>
>> Hi,
>>
>> Have had a really strange issue on a Nginx server configured as a reverse
>> proxy wherein the server stops responding when the network connections in
>> ESTABLISHED state and FIN_WAIT state in very high compared to normal
>> working
>>
>> If you see the below network graph, at around 00:30 hours there is a big
>> spike in network connections in FIN_WAIT state, to around 12000 from the
>> normal value of  ~20
>>
>> https://i.imgur.com/wb6VMWo.png
>>
>> At this state, Nginx stops responding fully and does not work even after
>> a full restart of the service.
>>
>> Switching off Nginx and bring Apache service to the frontend (removing
>> the reverse proxy) fix this and the connections drop
>>
>> Nginx config & build setting
>> ##
>>  nginx -V
>> nginx version: nginx/1.15.8
>> built by gcc 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC)
>> built with LibreSSL 2.8.3
>> TLS SNI support enabled
>> configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
>> --modules-path=/etc/nginx/modules --with-pcre=./pcre-8.42 --with-pcre-jit
>> --with-zlib=./zlib-1.2.11 --with-openssl=./libressl-2.8.3
>> --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log
>> --http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid
>> --lock-path=/var/run/nginx.lock
>> --http-client-body-temp-path=/dev/shm/client_temp
>> --http-proxy-temp-path=/dev/shm/proxy_temp
>> --http-fastcgi-temp-path=/dev/shm/fastcgi_temp
>> --http-uwsgi-temp-path=/dev/shm/uwsgi_temp
>> --http-scgi-temp-path=/dev/shm/scgi_temp --user=nobody --group=nobody
>> --with-http_ssl_module --with-http_realip_module
>> --with-http_addition_module --with-http_sub_module --with-http_dav_module
>> --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module
>> --with-http_gzip_static_module --with-http_random_index_module
>> --with-http_secure_link_module --with-http_stub_status_module
>> --with-http_auth_request_module --with-file-aio --with-threads
>> --with-stream --with-stream_ssl_module --with-http_slice_module
>> --with-compat --with-http_v2_module
>> --add-dynamic-module=incubator-pagespeed-ngx-1.13.35.2-stable
>> --add-dynamic-module=/usr/local/

Re: Nginx hang and do not respond with large number of network connection in FIN_WAIT state

2019-01-10 Thread Anoop Alias
The issue was identified as an enormous number of HTTP requests (an attack)
against one of the hosted domains, which uses Cloudflare. The traffic comes
in via Cloudflare, and this was exhausting nginx's TCP stack.

#
# netstat -tn|awk '{print $6}'|sort|uniq -c
  1
  19922 CLOSE_WAIT
  2 CLOSING
  23528 ESTABLISHED
  17785 FIN_WAIT1
  4 FIN_WAIT2
  1 Foreign
 17 LAST_ACK
904 SYN_RECV
 14 SYN_SENT
142 TIME_WAIT


Interestingly, under the same attack, removing Nginx from the picture and
exposing httpd directly brings the connection counts back to normal:


]# netstat -tn|awk '{print $6}'|sort|uniq -c
  1
 39 CLOSE_WAIT
  9 CLOSING
664 ESTABLISHED
 13 FIN_WAIT1
 48 FIN_WAIT2
  1 Foreign
 24 LAST_ACK
  8 SYN_RECV
 12 SYN_SENT
   1137 TIME_WAIT
##

Although the load is a bit higher than usual.

It looks like TCP connections in the ESTABLISHED state are somehow piling up
with Nginx.

Number of established connections over time with nginx
##
535 ESTABLISHED
1195 ESTABLISHED
23437 ESTABLISHED
23490 ESTABLISHED
23482 ESTABLISHED
389 ESTABLISHED
##

Could this be a misconfiguration in Nginx? It would be great if someone could
point out what is wrong with the config.
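
One thing I am wondering about (an assumption on my side, only relevant
because the traffic arrives via Cloudflare): if the limit_req/limit_conn
zones are keyed on the client address, the visitor IP has to be restored
first, roughly like this in the http context:

# restore the real visitor IP so per-IP limits don't lump all Cloudflare edges together
set_real_ip_from 173.245.48.0/20;    # one of Cloudflare's published ranges; list all of them
real_ip_header CF-Connecting-IP;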

Thanks,


On Thu, Jan 10, 2019 at 8:27 AM Anoop Alias  wrote:

> Hi,
>
> Have had a really strange issue on a Nginx server configured as a reverse
> proxy wherein the server stops responding when the network connections in
> ESTABLISHED state and FIN_WAIT state in very high compared to normal
> working
>
> If you see the below network graph, at around 00:30 hours there is a big
> spike in network connections in FIN_WAIT state, to around 12000 from the
> normal value of  ~20
>
> https://i.imgur.com/wb6VMWo.png
>
> At this state, Nginx stops responding fully and does not work even after a
> full restart of the service.
>
> Switching off Nginx and bring Apache service to the frontend (removing the
> reverse proxy) fix this and the connections drop
>
> Nginx config & build setting
> ##
>  nginx -V
> nginx version: nginx/1.15.8
> built by gcc 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC)
> built with LibreSSL 2.8.3
> TLS SNI support enabled
> configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
> --modules-path=/etc/nginx/modules --with-pcre=./pcre-8.42 --with-pcre-jit
> --with-zlib=./zlib-1.2.11 --with-openssl=./libressl-2.8.3
> --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log
> --http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid
> --lock-path=/var/run/nginx.lock
> --http-client-body-temp-path=/dev/shm/client_temp
> --http-proxy-temp-path=/dev/shm/proxy_temp
> --http-fastcgi-temp-path=/dev/shm/fastcgi_temp
> --http-uwsgi-temp-path=/dev/shm/uwsgi_temp
> --http-scgi-temp-path=/dev/shm/scgi_temp --user=nobody --group=nobody
> --with-http_ssl_module --with-http_realip_module
> --with-http_addition_module --with-http_sub_module --with-http_dav_module
> --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module
> --with-http_gzip_static_module --with-http_random_index_module
> --with-http_secure_link_module --with-http_stub_status_module
> --with-http_auth_request_module --with-file-aio --with-threads
> --with-stream --with-stream_ssl_module --with-http_slice_module
> --with-compat --with-http_v2_module
> --add-dynamic-module=incubator-pagespeed-ngx-1.13.35.2-stable
> --add-dynamic-module=/usr/local/rvm/gems/ruby-2.5.3/gems/passenger-6.0.0/src/nginx_module
> --add-dynamic-module=ngx_brotli --add-dynamic-module=echo-nginx-module-0.61
> --add-dynamic-module=headers-more-nginx-module-0.32
> --add-dynamic-module=ngx_http_redis-0.3.8
> --add-dynamic-module=redis2-nginx-module
> --add-dynamic-module=srcache-nginx-module-0.31
> --add-dynamic-module=ngx_devel_kit-0.3.0
> --add-dynamic-module=set-misc-nginx-module-0.31
> --add-dynamic-module=ngx_http_geoip2_module
> --add-dynamic-module=testcookie-nginx-module
> --add-dynamic-module=ModSecurity-nginx --with-cc-opt='-O2 -g -pipe -Wall
> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'
> --with-ld-opt=-Wl,-E
>
> #
> # worker_processes  auto;  #Set to auto for a powerful server
> worker_processes  1;
> worker_rlimit_nofile 69152;
> worker_shutdown_timeout 10s;
> # worker_cpu_affinity auto;
> timer_resolution 1s;
> thread_pool iopool threads=32 max_queue=65536;
> pcre_jit on;
> pid/var/run/nginx.pid;
> erro

Nginx hang and do not respond with large number of network connection in FIN_WAIT state

2019-01-09 Thread Anoop Alias
Hi,

I have had a really strange issue on an Nginx server configured as a reverse
proxy, where the server stops responding when the number of network
connections in the ESTABLISHED and FIN_WAIT states is very high compared to
normal operation.

In the network graph below, at around 00:30 there is a big spike in network
connections in the FIN_WAIT state, to around 12000 from the normal value of
~20:

https://i.imgur.com/wb6VMWo.png

In this state, Nginx stops responding entirely and does not work even after a
full restart of the service.

Switching off Nginx and bring Apache service to the frontend (removing the
reverse proxy) fix this and the connections drop

Nginx config & build setting
##
 nginx -V
nginx version: nginx/1.15.8
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC)
built with LibreSSL 2.8.3
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--modules-path=/etc/nginx/modules --with-pcre=./pcre-8.42 --with-pcre-jit
--with-zlib=./zlib-1.2.11 --with-openssl=./libressl-2.8.3
--conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log
--http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid
--lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/dev/shm/client_temp
--http-proxy-temp-path=/dev/shm/proxy_temp
--http-fastcgi-temp-path=/dev/shm/fastcgi_temp
--http-uwsgi-temp-path=/dev/shm/uwsgi_temp
--http-scgi-temp-path=/dev/shm/scgi_temp --user=nobody --group=nobody
--with-http_ssl_module --with-http_realip_module
--with-http_addition_module --with-http_sub_module --with-http_dav_module
--with-http_flv_module --with-http_mp4_module --with-http_gunzip_module
--with-http_gzip_static_module --with-http_random_index_module
--with-http_secure_link_module --with-http_stub_status_module
--with-http_auth_request_module --with-file-aio --with-threads
--with-stream --with-stream_ssl_module --with-http_slice_module
--with-compat --with-http_v2_module
--add-dynamic-module=incubator-pagespeed-ngx-1.13.35.2-stable
--add-dynamic-module=/usr/local/rvm/gems/ruby-2.5.3/gems/passenger-6.0.0/src/nginx_module
--add-dynamic-module=ngx_brotli --add-dynamic-module=echo-nginx-module-0.61
--add-dynamic-module=headers-more-nginx-module-0.32
--add-dynamic-module=ngx_http_redis-0.3.8
--add-dynamic-module=redis2-nginx-module
--add-dynamic-module=srcache-nginx-module-0.31
--add-dynamic-module=ngx_devel_kit-0.3.0
--add-dynamic-module=set-misc-nginx-module-0.31
--add-dynamic-module=ngx_http_geoip2_module
--add-dynamic-module=testcookie-nginx-module
--add-dynamic-module=ModSecurity-nginx --with-cc-opt='-O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
--param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'
--with-ld-opt=-Wl,-E

#
# worker_processes  auto;  #Set to auto for a powerful server
worker_processes  1;
worker_rlimit_nofile 69152;
worker_shutdown_timeout 10s;
# worker_cpu_affinity auto;
timer_resolution 1s;
thread_pool iopool threads=32 max_queue=65536;
pcre_jit on;
pid    /var/run/nginx.pid;
error_log /var/log/nginx/error_log;

#Load Dynamic Modules
include /etc/nginx/modules.d/*.load;


events {
worker_connections  20480;
use epoll;
multi_accept on;
accept_mutex off;
}

lingering_close off;
limit_req zone=FLOODVHOST burst=200;
limit_req zone=FLOODPROTECT burst=200;
limit_conn PERSERVER 60;
client_header_timeout  5s;
client_body_timeout 5s;
send_timeout 5s;
keepalive_timeout 0;
http2_idle_timeout 20s;
http2_recv_timeout 20s;


aio threads=iopool;
aio_write on;
directio 64m;
output_buffers 2 512k;

tcp_nodelay on;

types_hash_max_size 4096;
server_tokens off;
client_max_body_size 2048m;
reset_timedout_connection on;

#Proxy
proxy_read_timeout 300;
proxy_send_timeout 300;
proxy_connect_timeout 30s;

#FastCGI
fastcgi_read_timeout 300;
fastcgi_send_timeout 300;
fastcgi_connect_timeout 30s;

#Proxy Buffer
proxy_buffering on;
proxy_buffer_size  128k;
proxy_buffers  8 128k;
proxy_busy_buffers_size    256k;

#FastCGI Buffer
fastcgi_buffer_size  128k;
fastcgi_buffers  8 128k;
fastcgi_busy_buffers_size    256k;

server_names_hash_max_size 2097152;
server_names_hash_bucket_size 128;
##



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

high memory usage

2018-10-25 Thread Anoop Alias
Hi,

On a shared server with a large number of accounts


sites-enabled]# grep  "server {"  *|wc -l
11877


The memory usage of nginx is very high
--

 Private  +   Shared  =  RAM used   Program
  1.6 GiB +   4.9 GiB =   6.5 GiB   nginx (3)
-

# cat /proc/2068600/maps
0040-006d6000 r-xp  09:7d 105657122
/usr/sbin/nginx
008d5000-008d6000 r--p 002d5000 09:7d 105657122
/usr/sbin/nginx
008d6000-008fe000 rw-p 002d6000 09:7d 105657122
/usr/sbin/nginx
008fe000-00921000 rw-p  00:00 0
0218b000-a2217000 rw-p  00:00 0
[heap]
a2217000-13fbc7000 rw-p  00:00 0
 [heap]



pmap 2068600

2068600:   nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
0040   2904K r-x-- nginx
008d5000  4K r nginx
008d6000160K rw--- nginx
008fe000140K rw---   [ anon ]
0218b000 2622000K rw---   [ anon ]
a2217000 2582208K rw---   [ anon ]
---

It looks like the heap is 2.6GB in size.

Is there a way to reduce this?

The configuration is not the problem (which is why I am not attaching it), as systems
with a smaller number of vhosts using the same config consume less RAM.




-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Nginx compile with OpenSSL 1.1.1 and DESTDIR=

2018-09-19 Thread Anoop Alias
Thanks Sergey

On Wed, Sep 19, 2018 at 4:24 PM Sergey Kandaurov  wrote:

>
> > On 19 Sep 2018, at 05:41, Anoop Alias  wrote:
> >
> > Hi,
> >
> > ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
> --with-openssl=./openssl-1.1.1
> > make DESTDIR=/opt/test install
> >
> > Did not create the .openssl directory inside the openssl source , but
> instead, this created the .openssl directory in the DESTDIR
> >
>
> As expected.
>
> > I found out that if we use an explicit make command
> > ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
> --with-openssl=./openssl-1.1.1
> > make
> > make DESTDIR=/opt/test install
> >
> > This works
> >
>
> And this is expected too.
> Just use a separate ``make'' command to not mix
> nginx's DESTDIR and openssl's DESTDIR means.
>
> > But the former command without the explicit make used to work on openssl
> 1.0.xx releases
> >
> > Starting from OpenSSL 1.1.0, it is used there as install prefix.  ==>
> This may be an after effect of this
>
> As previously noted.  You can also find this note in CHANGES:
>   *) The INSTALL_PREFIX Makefile variable has been renamed to
>  DESTDIR.  That makes for less confusion on what this variable
>  is for.  Also, the configuration option --install_prefix is
>  removed.
>  [Richard Levitte]
>
>
> --
> Sergey Kandaurov
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Understood Diretive Location and Regex (concept question).

2018-09-19 Thread Anoop Alias
location ~ /\.

This is a regex location matching any URI that contains "/." (a slash followed by a
literal dot, i.e. hidden dot-files such as .htaccess). The backslash before the dot is
just an escape character, since an unescaped dot has a special meaning in a regex (it
matches any single character).
---

location ~ \.php$

This is a regex location matching anything ending in ".php".
Here again the backslash before the dot serves as an escape.
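
A small illustration (mine, not from the original config) of what each form would match:

# escaped: "\." is a literal dot, so this only matches hidden files,
# e.g. /.env or /blog/.htaccess
location ~ /\. {
    deny all;
}

# unescaped, "." would mean "any single character", so "location ~ /."
# would match practically every URI - usually not what is intended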

On Wed, Sep 19, 2018 at 4:15 PM Labs Ocozzi  wrote:

> Dears, in me Lab i have nginx work fine, but i dont understood the
> diretive location with regex "~ /\. " "~*  \." and
> "~ \.php$" bellow examples in me enviroment.
>
>
>
>location ~ /\. {
> deny all;
> access_log off;
> log_not_found off;
>}
>
>location ~ \.php$ {
> try_files $uri =404;
> include /etc/nginx/fastcgi_params;
> fastcgi_pass 127.0.0.1:9000;
> fastcgi_param SCRIPT_FILENAME 
> $document_root$fastcgi_script_name;
>}
>
>
>
>location ~ /\. {
> deny all;
> access_log off;
> log_not_found off;
>}
>
> --
> Att,
> BR-RJ.
> Togy Silva Ocozzy
> e-mail: rjtogy1...@gmail.com
> LABS OCOZZI PE.
>
>
>
> --
> [image: Avast logo] 
>
> Este email foi escaneado pelo Avast antivírus.
> www.avast.com 
>
> <#m_577209082744095808_DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Nginx compile with OpenSSL 1.1.1 and DESTDIR=

2018-09-18 Thread Anoop Alias
Hi,

./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--with-openssl=./openssl-1.1.1
make DESTDIR=/opt/test install

This did not create the .openssl directory inside the OpenSSL source, but instead
created the .openssl directory under the DESTDIR.

I found out that if we use an explicit make command
./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--with-openssl=./openssl-1.1.1
make
make DESTDIR=/opt/test install

This works

But the former command, without the explicit make, used to work with OpenSSL 1.0.x
releases.

"Starting from OpenSSL 1.1.0, it is used there as install prefix." ==> this may be an
after-effect of that change.



On Tue, Sep 18, 2018 at 9:47 PM Sergey Kandaurov  wrote:

>
> > On 18 Sep 2018, at 10:55, Anoop Alias  wrote:
> >
> >
> > Hi,
> >
> > I am trying to compile nginx 1.15.3 (mainline) with OpenSSL 1.1.1
> >
> > # ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
> --with-openssl=./openssl-1.1.1
> >
> > # make DESTDIR=/opt/test install
> >
> > But this error out with
> > ---
> > cc: error: ./openssl-1.1.1/.openssl/lib/libssl.a: No such file or
> directory
> > cc: error: ./openssl-1.1.1/.openssl/lib/libcrypto.a: No such file or
> directory
> > make[1]: *** [objs/nginx] Error 1
> > --
> >
> > I could find that the openssl-1.1.1/.openssl directory is not created
> but instead
> >
> > /opt/test/$nginxsrcpath/openssl-1.1.1/.openssl
> >
> > That is if the nginx src is in /root/nginx-1.15.3/
> >
> > The directory .openssl will be
> /opt/test/root/nginx-1.15.3/openssl-1.1.1/.openssl/
> >
> > The make DESTDIR=/opt/test install works fine in nginx 1.13.x with
> OpenSSL 1.0.2p
> > I am not sure the change is caused by nginx 1.15.3 or openssl-1.1.1 to
> be honest
>
> What effect do you expect from DESTDIR?
> Starting from OpenSSL 1.1.0, it is used there as install prefix.
>
> --
> Sergey Kandaurov
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Nginx compile with OpenSSL 1.1.1 and DESTDIR=

2018-09-18 Thread Anoop Alias
Hi,

I am trying to compile nginx 1.15.3 (mainline) with OpenSSL 1.1.1

# ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--with-openssl=./openssl-1.1.1

# make DESTDIR=/opt/test install

But this error out with
---
cc: error: ./openssl-1.1.1/.openssl/lib/libssl.a: No such file or directory
cc: error: ./openssl-1.1.1/.openssl/lib/libcrypto.a: No such file or
directory
make[1]: *** [objs/nginx] Error 1
--

I could find that the openssl-1.1.1/.openssl directory is not created but
instead

/opt/test/$nginxsrcpath/openssl-1.1.1/.openssl

That is if the nginx src is in /root/nginx-1.15.3/

The directory .openssl will be
/opt/test/root/nginx-1.15.3/openssl-1.1.1/.openssl/

The "make DESTDIR=/opt/test install" works fine with nginx 1.13.x and OpenSSL 1.0.2p.
I am not sure whether the change is caused by nginx 1.15.3 or OpenSSL 1.1.1, to be
honest.



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: posix_memalign error

2018-08-10 Thread Anoop Alias
I may have found the root cause of this issue. Many thanks to Igor for the
valuable inputs

The issue had something to do with the fact that I was calling "nginx -s reload" from
the Python subprocess module,

and I believe the error was coming from the fork() in Python:


https://stackoverflow.com/questions/1367373/python-subprocess-popen-oserror-errno-12-cannot-allocate-memory
https://stackoverflow.com/questions/5306075/python-memory-allocation-error-using-subprocess-popen

As Igor suggested, I changed the subprocess call in Python to a signal (SIGHUP to the
master process), and the logs don't have the ENOMEM error anymore, at least in the past
12+ hours.

The memory usage for ~10k virtual hosts is a bit high though, but things do work as
expected and there are no more errors.
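
For reference, a minimal sketch of the change (the pid path is an assumption taken from
nginx.conf; from Python, the same thing can be done with os.kill() and signal.SIGHUP from
the standard library instead of spawning a subprocess):

# old: the wrapper script forked a second nginx binary just to signal the master
#   /usr/sbin/nginx -s reload
# new: signal the running master directly
kill -HUP "$(cat /var/run/nginx.pid)"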

Thank you all again

On Wed, Aug 8, 2018 at 9:03 AM Anoop Alias  wrote:

> Hi Igor,
>
> Yes the server runs other software including httpd with a similar number
> of vhost
>
> # grep " 5168
>
> I haven't found issue with the other softwares in the logs relating to
> memory
>
> Infact httpd (event mpm) use lesser memory to load similar config
>
> # ps_mem| head -1 && ps_mem |grep httpd
>  Private  +   Shared  =  RAM used   Program
> 585.6 MiB + 392.0 MiB = 977.6 MiB   httpd (63)
>
> # ps_mem| head -1 && ps_mem |grep nginx
>  Private  +   Shared  =  RAM used   Program
> 999.8 MiB +   2.5 GiB =   3.5 GiB   nginx (3)
>
> The server is a shared hosting one and runs CloudLinux , but as far as I
> know ,CloudLinux applies limits to only user level process and not nginx
>
> The nginx HUP is needed as this is triggered by changes in apache
> configuration and nginx need to reload the new config . For log file reload
> SIGUSR1 is used
>
>
>
>
>
> On Tue, Aug 7, 2018 at 5:50 PM Igor A. Ippolitov 
> wrote:
>
>> Anoop,
>>
>> I don't see any troubles with your configuration.
>> Also, if you have 120G of RAM and a single worker - the problem is not in
>> nginx.
>> Do you have other software running on the host?
>>
>> Basically, you just run out of memory.
>>
>> You can optimize your reload though: use "service nginx reload" (or "kill
>> -SIGHUP") to reload nginx configuration.
>> When you do nginx -s reload - you make nginx parse configuration (and it
>> requires memory) and then send a signal to the running master. You can
>> avoid this overhead with 'service' command as it uses 'kill' documented in
>> the manual page.
>>
>> On 06.08.2018 22:55, Anoop Alias wrote:
>>
>> Hi Igor,
>>
>> Config is reloaded using
>>
>> /usr/sbin/nginx -s reload
>>
>> this is invoked from a python/shell script ( Nginx is installed on a web
>> control panel )
>>
>> The top-level Nginx config is in the gist below
>>
>> https://gist.github.com/AnoopAlias/ba5ad6749a586c7e267672ee65b32b3a
>>
>> It further includes ~8k server blocks or more in some servers. Out of
>> this 2/3 are server {} blocks with TLS config and 1/3 non-TLS ones
>>
>> ]# pwd
>> /etc/nginx/sites-enabled
>> # grep "server {" *|wc -l
>> 7886
>>
>> And yes most of them are very similar and mostly proxy to upstream httpd
>>
>> I have tried removing all the loadable modules and even tried an older
>> version of nginx and all produce the error
>>
>>
>> # numastat -m
>>
>> Per-node system memory usage (in MBs):
>>   Node 0  Node 1   Total
>>  --- --- ---
>> MemTotal65430.8465536.00   130966.84
>> MemFree  5491.26   40.89 5532.15
>> MemUsed 59939.5865495.11   125434.69
>> Active  22295.6121016.0943311.70
>> Inactive 8742.76 4662.4813405.24
>> Active(anon)16717.1016572.1933289.29
>> Inactive(anon)   2931.94 1388.14 4320.08
>> Active(file) 5578.50 4443.9110022.41
>> Inactive(file)   5810.82 3274.34 9085.16
>> Unevictable 0.000.000.00
>> Mlocked 0.000.000.00
>> Dirty   7.041.648.67
>> Writeback   0.000.000.00
>> FilePages   18458.9310413.9728872.90
>> Mapped862.14  413.38 1275.52
>> AnonPages   125

Re: posix_memalign error

2018-08-07 Thread Anoop Alias
Hi Igor,

Yes, the server runs other software, including httpd with a similar number of vhosts.

# grep "
wrote:

> Anoop,
>
> I don't see any troubles with your configuration.
> Also, if you have 120G of RAM and a single worker - the problem is not in
> nginx.
> Do you have other software running on the host?
>
> Basically, you just run out of memory.
>
> You can optimize your reload though: use "service nginx reload" (or "kill
> -SIGHUP") to reload nginx configuration.
> When you do nginx -s reload - you make nginx parse configuration (and it
> requires memory) and then send a signal to the running master. You can
> avoid this overhead with 'service' command as it uses 'kill' documented in
> the manual page.
>
> On 06.08.2018 22:55, Anoop Alias wrote:
>
> Hi Igor,
>
> Config is reloaded using
>
> /usr/sbin/nginx -s reload
>
> this is invoked from a python/shell script ( Nginx is installed on a web
> control panel )
>
> The top-level Nginx config is in the gist below
>
> https://gist.github.com/AnoopAlias/ba5ad6749a586c7e267672ee65b32b3a
>
> It further includes ~8k server blocks or more in some servers. Out of this
> 2/3 are server {} blocks with TLS config and 1/3 non-TLS ones
>
> ]# pwd
> /etc/nginx/sites-enabled
> # grep "server {" *|wc -l
> 7886
>
> And yes most of them are very similar and mostly proxy to upstream httpd
>
> I have tried removing all the loadable modules and even tried an older
> version of nginx and all produce the error
>
>
> # numastat -m
>
> Per-node system memory usage (in MBs):
>   Node 0  Node 1   Total
>  --- --- ---
> MemTotal65430.8465536.00   130966.84
> MemFree  5491.26   40.89 5532.15
> MemUsed 59939.5865495.11   125434.69
> Active  22295.6121016.0943311.70
> Inactive 8742.76 4662.4813405.24
> Active(anon)16717.1016572.1933289.29
> Inactive(anon)   2931.94 1388.14 4320.08
> Active(file) 5578.50 4443.9110022.41
> Inactive(file)   5810.82 3274.34 9085.16
> Unevictable 0.000.000.00
> Mlocked 0.000.000.00
> Dirty   7.041.648.67
> Writeback   0.000.000.00
> FilePages   18458.9310413.9728872.90
> Mapped862.14  413.38 1275.52
> AnonPages   12579.4915264.3727843.86
> Shmem7069.52 2695.71 9765.23
> KernelStack18.343.03   21.38
> PageTables153.14  107.77  260.90
> NFS_Unstable0.000.000.00
> Bounce  0.000.000.00
> WritebackTmp0.000.000.00
> Slab 4830.68 2254.55 7085.22
> SReclaimable 2061.05  921.72 2982.77
> SUnreclaim   2769.62 1332.83 4102.45
> AnonHugePages   4.002.006.00
> HugePages_Total 0.000.000.00
> HugePages_Free  0.000.000.00
> HugePages_Surp  0.000.000.00
>
>
> Thanks,
>
>
>
>
>
> On Mon, Aug 6, 2018 at 6:33 PM Igor A. Ippolitov 
> wrote:
>
>> Anoop,
>>
>> I suppose, most of your 10k servers are very similar, right?
>> Please, post top level configuration and a typical server{}, please.
>>
>> Also, how do you reload configuration? With 'service nginx reload' or may
>> be other commands?
>>
>> It looks like you have a lot of fragmented memory and only 4gb free in
>> the second numa node.
>> So, I'd say this is OK that you are getting errors from allocating a 16k
>> stripes.
>>
>> Could you please post numastat -m output additionally. Just to make sure
>> you have half of the memory for the second CPU.
>> And we'll have a look if memory utilization may be optimized based on
>> your configuration.
>>
>> Regards,
>> Igor.
>>
>> On 04.08.2018 07:54, Anoop Alias wrote:
>>
>> Hi Igor,
>>
>> Setting vm.max_map_count to 20x the normal value did not help
>>
>> Th

Re: posix_memalign error

2018-08-06 Thread Anoop Alias
Hi Igor,

Config is reloaded using

/usr/sbin/nginx -s reload

this is invoked from a python/shell script ( Nginx is installed on a web
control panel )

The top-level Nginx config is in the gist below

https://gist.github.com/AnoopAlias/ba5ad6749a586c7e267672ee65b32b3a

It further includes ~8k server blocks, or more on some servers. Out of these, 2/3 are
server {} blocks with TLS config and 1/3 are non-TLS ones.

]# pwd
/etc/nginx/sites-enabled
# grep "server {" *|wc -l
7886

And yes, most of them are very similar and mostly proxy to the upstream httpd.

I have tried removing all the loadable modules and even tried an older
version of nginx and all produce the error


# numastat -m

Per-node system memory usage (in MBs):
  Node 0  Node 1   Total
 --- --- ---
MemTotal65430.8465536.00   130966.84
MemFree  5491.26   40.89 5532.15
MemUsed 59939.5865495.11   125434.69
Active  22295.6121016.0943311.70
Inactive 8742.76 4662.4813405.24
Active(anon)16717.1016572.1933289.29
Inactive(anon)   2931.94 1388.14 4320.08
Active(file) 5578.50 4443.9110022.41
Inactive(file)   5810.82 3274.34 9085.16
Unevictable 0.000.000.00
Mlocked 0.000.000.00
Dirty   7.041.648.67
Writeback   0.000.000.00
FilePages   18458.9310413.9728872.90
Mapped862.14  413.38 1275.52
AnonPages   12579.4915264.3727843.86
Shmem7069.52 2695.71 9765.23
KernelStack18.343.03   21.38
PageTables153.14  107.77  260.90
NFS_Unstable0.000.000.00
Bounce  0.000.000.00
WritebackTmp0.000.000.00
Slab 4830.68 2254.55 7085.22
SReclaimable 2061.05  921.72 2982.77
SUnreclaim   2769.62 1332.83 4102.45
AnonHugePages   4.002.006.00
HugePages_Total 0.000.000.00
HugePages_Free  0.000.000.00
HugePages_Surp  0.000.000.00


Thanks,





On Mon, Aug 6, 2018 at 6:33 PM Igor A. Ippolitov 
wrote:

> Anoop,
>
> I suppose, most of your 10k servers are very similar, right?
> Please, post top level configuration and a typical server{}, please.
>
> Also, how do you reload configuration? With 'service nginx reload' or may
> be other commands?
>
> It looks like you have a lot of fragmented memory and only 4gb free in the
> second numa node.
> So, I'd say this is OK that you are getting errors from allocating a 16k
> stripes.
>
> Could you please post numastat -m output additionally. Just to make sure
> you have half of the memory for the second CPU.
> And we'll have a look if memory utilization may be optimized based on your
> configuration.
>
> Regards,
> Igor.
>
> On 04.08.2018 07:54, Anoop Alias wrote:
>
> Hi Igor,
>
> Setting vm.max_map_count to 20x the normal value did not help
>
> The issue happens on a group of servers and among the group, it shows up
> only in servers which have ~10k  server{} blocks
>
> On servers that have lower number of server{} blocks , the ENOMEM issue is
> not there
>
> Also, I can find that the RAM usage of the Nginx process is directly
> proportional to the number of server {} blocks
>
> For example on a server having the problem
>
> # ps_mem| head -1 && ps_mem |grep nginx
>  Private  +   Shared  =  RAM used   Program
>   1.0 GiB +   2.8 GiB =   3.8 GiB   nginx (3)
>
>
> That is for a single worker process with 4 threads in thread_pool
> # pstree|grep nginx
> |-nginx-+-nginx---4*[{nginx}]
> |   `-nginx
>
> Whatever config change I try the memory usage seem to mostly depend on the
> number of server contexts defined
>
> Now the issue mostly happen in nginx reload ,when one more worker process
> will be active in shutting down mode
>
> I believe the memalign error is thrown by the worker being shutdown, this
> is because the sites work after the error and also the pid mentioned in the
> error would have gone when I check ps
>
>
> # pmap 948965|grep 16K
> 7f2923ff2000 16K r-x-- ngx_http_redis

Re: posix_memalign error

2018-08-03 Thread Anoop Alias
Hi Igor,

Setting vm.max_map_count to 20x the normal value did not help
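
(For reference, that change was along these lines -- the exact figure is an assumption,
the kernel default being 65530:)

sysctl -w vm.max_map_count=1310600
echo 'vm.max_map_count = 1310600' >> /etc/sysctl.conf   # persist across reboots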

The issue happens on a group of servers, and among the group it shows up only on servers
which have ~10k server{} blocks.

On servers that have a lower number of server{} blocks, the ENOMEM issue is not there.

Also, I can find that the RAM usage of the Nginx process is directly
proportional to the number of server {} blocks

For example on a server having the problem

# ps_mem| head -1 && ps_mem |grep nginx
 Private  +   Shared  =  RAM used   Program
  1.0 GiB +   2.8 GiB =   3.8 GiB   nginx (3)


That is for a single worker process with 4 threads in thread_pool
# pstree|grep nginx
|-nginx-+-nginx---4*[{nginx}]
|   `-nginx

Whatever config change I try, the memory usage seems to mostly depend on the number of
server contexts defined.

Now, the issue mostly happens on an nginx reload, when one more worker process is active
in shutting-down mode.

I believe the memalign error is thrown by the worker being shut down; this is because the
sites work after the error, and also the PID mentioned in the error is already gone when
I check ps.


# pmap 948965|grep 16K
7f2923ff2000 16K r-x-- ngx_http_redis2_module.so
7f2924fd7000 16K r libc-2.17.so
7f2925431000 16K rw---   [ anon ]
7f292584a000 16K rw---   [ anon ]

Aug  4 05:50:00 b kernel: SysRq : Show Memory
Aug  4 05:50:00 b kernel: Mem-Info:
Aug  4 05:50:00 b kernel: active_anon:7757394 inactive_anon:1021319
isolated_anon:0#012 active_file:3733324 inactive_file:2136476
isolated_file:0#012 unevictable:0 dirty:1766 writeback:6 wbtmp:0
unstable:0#012 slab_reclaimable:2003687 slab_unreclaimable:901391#012
mapped:316734 shmem:2381810 pagetables:63163 bounce:0#012 free:4851283
free_pcp:11332 free_cma:0
Aug  4 05:50:00 bravo kernel: Node 0 DMA free:15888kB min:8kB low:8kB
high:12kB active_anon:0kB inactive_anon:0kB active_file:0kB
inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB
present:15972kB managed:15888kB mlocked:0kB dirty:0kB writeback:0kB
mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB
kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB
local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0
all_unreclaimable? yes
Aug  4 05:50:00 b kernel: lowmem_reserve[]: 0 1679 64139 64139

# cat /proc/buddyinfo
Node 0, zone  DMA  0  0  1  0  2  1  1
0  1  1  3
Node 0, zoneDMA32   5284   6753   6677   1083410 59  1
0  0  0  0
Node 0, zone   Normal 500327 638958 406737  14690872106 11
0  0  0  0
Node 1, zone   Normal 584840 291640188  0  0  0  0
0  0  0  0


The only correlation I see with the error is the number of server {} blocks (close to
10k), which makes the nginx process consume ~4GB of memory with a single worker process,
and then a reload is done.




On Thu, Aug 2, 2018 at 6:02 PM Igor A. Ippolitov 
wrote:

> Anoop,
>
> There are two guesses: either mmap allocations limit is hit or memory is
> way too fragmented.
> Could you please track amount of mapped regions for a worker with pmap and
> amount of 16k areas in Normal zones (it is the third number)?
>
> You can also set vm.max_map_count to a higher number (like 20 times higher
> than default) and look if the error is gone.
>
> Please, let me know if increasing vm.max_map_count helps you.
>
> On 02.08.2018 13:06, Anoop Alias wrote:
>
> Hi Igor,
>
> The error happens randomly
>
> 2018/08/02 06:52:42 [emerg] 874514#874514: posix_memalign(16, 16384)
> failed (12: Cannot allocate memory)
> 2018/08/02 09:42:53 [emerg] 872996#872996: posix_memalign(16, 16384)
> failed (12: Cannot allocate memory)
> 2018/08/02 10:16:14 [emerg] 877611#877611: posix_memalign(16, 16384)
> failed (12: Cannot allocate memory)
> 2018/08/02 10:16:48 [emerg] 879410#879410: posix_memalign(16, 16384)
> failed (12: Cannot allocate memory)
> 2018/08/02 10:17:55 [emerg] 876563#876563: posix_memalign(16, 16384)
> failed (12: Cannot allocate memory)
> 2018/08/02 10:20:21 [emerg] 879263#879263: posix_memalign(16, 16384)
> failed (12: Cannot allocate memory)
> 2018/08/02 10:20:51 [emerg] 878991#878991: posix_memalign(16, 16384)
> failed (12: Cannot allocate memory)
>
> # date
> Thu Aug  2 10:58:48 BST 2018
>
> --
> # cat /proc/buddyinfo
> Node 0, zone  DMA  0  0  1  0  2  1  1
>   0  1  1  3
> Node 0, zoneDMA32  11722  11057   4663   1647609 72 10
>   7  1  0  0
> Node 0, zone   Normal 755026 710760 398136  21462   1114 18  1
>   0  0  0  0
> Node 1, zone   Normal 341295 801810 179604256  0  0  0
>   0  0  0  0
> 

Re: posix_memalign error

2018-08-02 Thread Anoop Alias
---

Thank you very much for looking into this


On Thu, Aug 2, 2018 at 12:37 PM Igor A. Ippolitov 
wrote:

> Anoop,
>
> I doubt this will be the solution, but may we have a look at
> /proc/buddyinfo and /proc/slabinfo the moment when nginx can't allocate
> memory?
>
> On 02.08.2018 08:15, Anoop Alias wrote:
>
> Hi Maxim,
>
> I enabled debug and the memalign call is happening on nginx reloads and
> the ENOMEM happen sometimes on the reload(not on all reloads)
>
> 2018/08/02 05:59:08 [notice] 872052#872052: signal process started
> 2018/08/02 05:59:23 [notice] 871570#871570: signal 1 (SIGHUP) received
> from 872052, reconfiguring
> 2018/08/02 05:59:23 [debug] 871570#871570: wake up, sigio 0
> 2018/08/02 05:59:23 [notice] 871570#871570: reconfiguring
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> 02B0DA00:16384 @16  === > the memalign call on reload
> 2018/08/02 05:59:23 [debug] 871570#871570: malloc: 087924D0:4560
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> 0E442E00:16384 @16
> 2018/08/02 05:59:23 [debug] 871570#871570: malloc: 05650850:4096
> 20
>
>
>
>
> 2018/08/02 05:48:49 [debug] 871275#871275: bind() :443 #71
> 2018/08/02 05:48:49 [debug] 871275#871275: bind() :443 #72
> 2018/08/02 05:48:49 [debug] 871275#871275: bind() :443 #73
> 2018/08/02 05:48:49 [debug] 871275#871275: bind() :443 #74
> 2018/08/02 05:48:49 [debug] 871275#871275: add cleanup: 5340D728
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 024D3260:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 517BAF10:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 53854FC0:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 53855FD0:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 53856FE0:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 53857FF0:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: posix_memalign:
> 53859000:16384 @16
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 5385D010:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 5385E020:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 5385F030:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 536CD160:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 536CE170:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 536CF180:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 536D0190:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 536D11A0:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 536D21B0:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 536D31C0:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 536D41D0:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 536D51E0:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 536D61F0:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 536D7200:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 536D8210:4096
> 2018/08/02 05:48:49 [debug] 871275#871275: malloc: 536D9220:4096
>
>
> Infact there are lot of such calls during a reload
>
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> BA17ED00:16384 @16
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> BA1B0FF0:16384 @16
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> BA1E12C0:16384 @16
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> BA211590:16384 @16
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> BA243880:16384 @16
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> BA271B30:16384 @16
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> BA2A3E20:16384 @16
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> BA2D20D0:16384 @16
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> BA3063E0:16384 @16
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> BA334690:16384 @16
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> BA366980:16384 @16
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> BA396C50:16384 @16
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> BA3C8F40:16384 @16
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> BA3F9210:16384 @16
> 2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
> BA4294E0:16384 @16
> 2018/08/02 05:59:23 [debug] 871570#871570: po

Re: posix_memalign error

2018-08-01 Thread Anoop Alias
ign:
BA611160:16384 @16
2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
BA641430:16384 @16
2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
BA671700:16384 @16
2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
BA6A29E0:16384 @16
2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
BA6D5CE0:16384 @16
2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
BA707FD0:16384 @16
2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
BA736280:16384 @16
2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
BA768570:16384 @16
2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
BA796820:16384 @16
2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
BA7CAB30:16384 @16
2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
BA7F8DE0:16384 @16
2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
BA82B0D0:16384 @16
2018/08/02 05:59:23 [debug] 871570#871570: posix_memalign:
BA85B3A0:16384 @16



What is perplexing is that the system has enough free (available) RAM:
#
# free -g
  totalusedfree  shared  buff/cache
 available
Mem:125  54  24   8  46
  58
Swap: 0   0   0
#

# ulimit -a
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 514579
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 514579
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited

#

There is nothing else limiting memory allocation.

Is there any way to prevent this, or at least to identify what is causing it?


On Tue, Jul 31, 2018 at 7:08 PM Maxim Dounin  wrote:

> Hello!
>
> On Tue, Jul 31, 2018 at 09:52:29AM +0530, Anoop Alias wrote:
>
> > I am repeatedly seeing errors like
> >
> > ##
> > 2018/07/31 03:46:33 [emerg] 2854560#2854560: posix_memalign(16, 16384)
> > failed (12: Cannot allocate memory)
> > 2018/07/31 03:54:09 [emerg] 2890190#2890190: posix_memalign(16, 16384)
> > failed (12: Cannot allocate memory)
> > 2018/07/31 04:08:36 [emerg] 2939230#2939230: posix_memalign(16, 16384)
> > failed (12: Cannot allocate memory)
> > 2018/07/31 04:24:48 [emerg] 2992650#2992650: posix_memalign(16, 16384)
> > failed (12: Cannot allocate memory)
> > 2018/07/31 04:42:09 [emerg] 3053092#3053092: posix_memalign(16, 16384)
> > failed (12: Cannot allocate memory)
> > 2018/07/31 04:42:17 [emerg] 3053335#3053335: posix_memalign(16, 16384)
> > failed (12: Cannot allocate memory)
> > 2018/07/31 04:42:28 [emerg] 3053937#3053937: posix_memalign(16, 16384)
> > failed (12: Cannot allocate memory)
> > 2018/07/31 04:47:54 [emerg] 3070638#3070638: posix_memalign(16, 16384)
> > failed (12: Cannot allocate memory)
> > 
> >
> > on a few servers
> >
> > The servers have enough memory free and the swap usage is 0, yet somehow
> > the kernel denies the posix_memalign with ENOMEM ( this is what I think
> is
> > happening!)
> >
> > The numbers requested are always 16, 16k . This makes me suspicious
> >
> > I have no setting in nginx.conf that reference a 16k
> >
> > Is there any chance of finding out what requests this and why this is not
> > fulfilled
>
> There are at least some buffers which default to 16k - for
> example, ssl_buffer_size (http://nginx.org/r/ssl_buffer_size).
>
> You may try debugging log to futher find out where the particular
> allocation happens, see here for details:
>
> http://nginx.org/en/docs/debugging_log.html
>
> But I don't really think it worth the effort.  The error is pretty
> clear, and it's better to focus on why these allocations are
> denied.  Likely you are hitting some limit.
>
> --
> Maxim Dounin
> http://mdounin.ru/
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

posix_memalign error

2018-07-30 Thread Anoop Alias
I am repeatedly seeing errors like

##
2018/07/31 03:46:33 [emerg] 2854560#2854560: posix_memalign(16, 16384)
failed (12: Cannot allocate memory)
2018/07/31 03:54:09 [emerg] 2890190#2890190: posix_memalign(16, 16384)
failed (12: Cannot allocate memory)
2018/07/31 04:08:36 [emerg] 2939230#2939230: posix_memalign(16, 16384)
failed (12: Cannot allocate memory)
2018/07/31 04:24:48 [emerg] 2992650#2992650: posix_memalign(16, 16384)
failed (12: Cannot allocate memory)
2018/07/31 04:42:09 [emerg] 3053092#3053092: posix_memalign(16, 16384)
failed (12: Cannot allocate memory)
2018/07/31 04:42:17 [emerg] 3053335#3053335: posix_memalign(16, 16384)
failed (12: Cannot allocate memory)
2018/07/31 04:42:28 [emerg] 3053937#3053937: posix_memalign(16, 16384)
failed (12: Cannot allocate memory)
2018/07/31 04:47:54 [emerg] 3070638#3070638: posix_memalign(16, 16384)
failed (12: Cannot allocate memory)


on a few servers

The servers have enough memory free and the swap usage is 0, yet somehow the kernel
denies the posix_memalign() with ENOMEM (this is what I think is happening!).

The numbers requested are always 16 and 16k. This makes me suspicious.

I have no setting in nginx.conf that references a 16k value.

Is there any chance of finding out what requests this and why it is not fulfilled?


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Nginx Directory Listing - Restrict by IP Address

2018-05-18 Thread Anoop Alias
Since this requires more logic, I think you can implement it in an application server /
server-side scripting like PHP/Python: the application must verify the IP address and
generate the file listing itself, rather than having the web server do it.
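
If it helps, a rough sketch of that split (untested; the script path, socket and the
LISTING_ROOT parameter are hypothetical): nginx keeps serving the files directly, and
only directory requests (URIs ending in "/") go to a backend script, which can check the
client IP -- the standard fastcgi_params file already passes REMOTE_ADDR to it:

# plain files: downloadable by everyone, served by nginx itself
location / {
    root /downloads;
}

# directory requests: the script decides, based on REMOTE_ADDR,
# whether to print a listing or return 403
location ~ /$ {
    include fastcgi_params;                                # includes REMOTE_ADDR
    fastcgi_param SCRIPT_FILENAME /var/www/listing.php;    # hypothetical script
    fastcgi_param LISTING_ROOT    /downloads;              # hypothetical custom param
    fastcgi_pass unix:/var/run/php-fpm.sock;               # hypothetical socket
}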

On Fri, May 18, 2018 at 6:25 PM, Sathish Kumar  wrote:

> Hi,
>
> I tried this option but it says autoindex need to be on or off and it's
> not accepting a variable.
>
>
> [emerg] invalid value "$allowed" in "autoindex" directive, it must be "on"
> or "off" in domain.conf
>
>
> On Fri, May 18, 2018, 7:18 PM Friscia, Michael 
> wrote:
>
>> I think you need to change this a little
>>
>>
>>
>> map $remote_addr $allowed {
>> default “off”;
>> 1.1.1.1 “on”;
>> 2.2.2.2 “on:;
>> }
>>
>> and then in in the download location block
>>
>>  autoindex $allowed;
>>
>> I use similar logic on different variables and try at all costs to avoid
>> IF statements anywhere in the configs.
>>
>>
>>
>> ___
>>
>> Michael Friscia
>>
>> Office of Communications
>>
>> Yale School of Medicine
>>
>> (203) 737-7932 - office
>>
>> (203) 931-5381 - mobile
>>
>> http://web.yale.edu
>>
>>
>>
>> *From: *nginx  on behalf of PRAJITH <
>> prajithpalakk...@gmail.com>
>> *Reply-To: *"nginx@nginx.org" 
>> *Date: *Friday, May 18, 2018 at 2:16 AM
>> *To: *"nginx@nginx.org" 
>> *Subject: *Re: Nginx Directory Listing - Restrict by IP Address
>>
>>
>>
>> Hi Satish,
>>
>> There are "if" constructs in nginx, please check http://nginx.org/r/if
>> .
>> if you want to allow multiple IP addresses, it might be better idea to use
>> map. eg:
>>
>> map $remote_addr $allowed {
>> default 0;
>> 1.1.1.1 1;
>> 2.2.2.2 1;
>> }
>>
>> and then in in the download location block
>>
>>  if ($allowed = 1) {
>> autoindex on;
>> }
>>
>> Thanks,
>>
>> Prajith
>>
>>
>>
>> On 18 May 2018 at 05:35, Sathish Kumar  wrote:
>>
>> Hi Team,
>>
>> We have a requirement to allow directory listing from few servers and
>> disallow from other ip addresses and all IP addresses should be able to
>> download all files inside the directory.
>>
>> Can somebody provide the correct nginx config for the same.
>>
>> location / {
>>
>> root /downloads;
>>
>> autoindex on;
>>
>> allow 1.1.1.1;
>>
>> deny all;
>>
>> }
>>
>> If I use the above config, only on 1.1.1.1 IP address can directory list
>> from this server and can file download but from other IP addresses download
>> shows forbidden, due to IP address restriction
>>
>> Is there a way to overcome this issue, thanks.
>>
>>
>> Thanks & Regards
>> Sathish.V
>>
>>
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>> 
>>
>>
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Handling URL with the percentage character

2018-03-13 Thread Anoop Alias
Hi,

Is there a way for a URL like

http://domain.com/%product_cat%/myproduct

to be passed as-is to an Apache proxy backend?

Currently, Nginx is throwing a 400 Bad Request error (which is correct), but the Apache
httpd, using a PHP script, can handle it. So is there a way to say "hey, this will be
handled someplace else, so just pass on whatever you get to the upstream"?

Also, if I encode the URL as

http://domain.com/%25product_cat%25/myproduct


that works too. So, if the first is not possible, is there a way to rewrite all % to %25?


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Convert .htaccess to nginx rules

2018-01-09 Thread Anoop Alias
try_files $uri $uri/ /index.php;

should work
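
A minimal sketch of where that goes (the docroot and the php-fpm socket are assumptions,
not from the original post):

server {
    listen 80;
    root /var/www/example;              # assumed docroot
    index index.php;

    location / {
        # same effect as the RewriteCond !-f / !-d  +  RewriteRule . index.php [L,QSA]
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm.sock;    # assumed socket
    }

    # the Expires/Cache-Control sections of the .htaccess map onto "expires" and
    # "add_header Cache-Control ..." in the matching locations, and the DEFLATE
    # section onto the gzip* directives.
}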

On Tue, Jan 9, 2018 at 7:10 PM, ThanksDude 
wrote:

> hey guys
>
> I tried the tools and it didn't worked for me.
> can u guys pls help me convert this to a nginx rules?
>
>
> RewriteEngine On
>
> #RewriteCond %{HTTPS} off
> #RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
>
> #RewriteCond %{HTTP_HOST} !^www\.
> #RewriteRule .* http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
>
> Options +FollowSymLinks
> Options -Indexes
>
> RewriteCond %{SCRIPT_FILENAME} !-d
> RewriteCond %{SCRIPT_FILENAME} !-f
> RewriteRule . index.php [L,QSA]
>
>
>
> # Performace optimization
>
> # BEGIN Compress text files
> 
>   AddOutputFilterByType DEFLATE text/html text/xml text/css text/plain
>   AddOutputFilterByType DEFLATE image/svg+xml application/xhtml+xml
> application/xml
>   AddOutputFilterByType DEFLATE application/rdf+xml application/rss+xml
> application/atom+xml
>   AddOutputFilterByType DEFLATE text/javascript application/javascript
> application/x-javascript application/json
>   AddOutputFilterByType DEFLATE application/x-font-ttf
> application/x-font-otf
>   AddOutputFilterByType DEFLATE font/truetype font/opentype
> 
> # END Compress text files
>
> # BEGIN Expire headers
> 
>   ExpiresActive On
>   ExpiresDefault "access plus 5 seconds"
>   ExpiresByType image/x-icon "access plus 31536000 seconds"
>   ExpiresByType image/jpeg "access plus 31536000 seconds"
>   ExpiresByType image/png "access plus 31536000 seconds"
>   ExpiresByType image/gif "access plus 31536000 seconds"
>   ExpiresByType application/x-shockwave-flash "access plus 31536000
> seconds"
>   ExpiresByType text/css "access plus 31536000 seconds"
>   ExpiresByType text/javascript "access plus 31536000 seconds"
>   ExpiresByType application/javascript "access plus 31536000 seconds"
>   ExpiresByType application/x-javascript "access plus 31536000 seconds"
> 
> # END Expire headers
>
> # BEGIN Cache-Control Headers
> 
>   
> Header set Cache-Control "public"
>   
>   
> Header set Cache-Control "public"
>   
>   
> Header set Cache-Control "private"
>   
>   
> Header set Cache-Control "private, must-revalidate"
>   
>
>   
> Header set Cache-Control "max-age=31536000 private, must-revalidate"
>   
> 
> # END Cache-Control Headers
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,278046,278046#msg-278046
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Moving SSL termination to the edge increased the instance of 502 errors

2017-11-30 Thread Anoop Alias
Since the upstream has now changed TCP ports, do check whether it is a firewall/network
buffer issue on the new port as well.
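
A few quick checks along those lines (a sketch; assumes Linux with the conntrack module
loaded on the box doing the filtering):

ss -s                                                        # overall socket counts
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
dmesg | grep -i conntrack              # "table full, dropping packet" => firewall limit
netstat -s | grep -i "listen queue"    # listen-queue overflows on the upstream port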

On Wed, Nov 29, 2017 at 11:42 PM, Peter Booth  wrote:

> There are many things that *could* cause what you’re seeing - say at least
> eight. You might be lucky and guess the right one- but probably smarter to
> see exactly what the issue is.
>
> Presumably you changed your upstream webservers to do this work, replacing
> ssl with unencrypted connections? Do you have sar data showing #tcp
> connections before and after the change? Perhaps every request is
> negotiating SSL now?
> What if you add another nginx instance that doesn’t use ssl at all (just
> as a test) - does that also have 502s?. You probably have data you need to
> isolate
>
> Sent from my iPhone
>
> > On Nov 29, 2017, at 8:05 AM, Michael Ottoson 
> wrote:
> >
> > Thanks, Maxim.
> >
> > That makes a lot of sense.  However, the problem started at exactly the
> same time we moved SSL termination.  There were no changes to the
> application.  It is unlikely to be a mere coincidence - but it could be.
> >
> > We were previously using HAPROXY for load balancing (well, the company
> we inherited this from did) and the same happened when they tried moving
> SSL termination.
> >
> > There is a reply to my question on serverfault, suggesting increasing
> keepalives (https://www.nginx.com/blog/load-balancing-with-nginx-
> plus-part2/#keepalive).  This is because moving SSL increases the number
> of TCP connects.  I'll give that a try and report back.
> >
> > -Original Message-
> > From: nginx [mailto:nginx-boun...@nginx.org] On Behalf Of Maxim Dounin
> > Sent: Wednesday, November 29, 2017 7:43 AM
> > To: nginx@nginx.org
> > Subject: Re: Moving SSL termination to the edge increased the instance
> of 502 errors
> >
> > Hello!
> >
> >> On Wed, Nov 29, 2017 at 04:27:37AM +, Michael Ottoson wrote:
> >>
> >> Hi All,
> >>
> >> We installed nginx as load balancer/failover in front of two upstream
> web servers.
> >>
> >> At first SSL terminated at the web servers and nginx was configured as
> TCP passthrough on 443.
> >>
> >> We rarely experiences 502s and when it did it was likely due to
> tuning/tweaking.
> >>
> >> About a week ago we moved SSL termination to the edge.  Since then
> we've been getting daily 502s.  A small percentage - never reaching 1%.
> But with ½ million requests per day, we are starting to get complaints.
> >>
> >> Stranger: the percentage seems to be rising.
> >>
> >> I have more details and a pretty picture here:
> >>
> >> https://serverfault.com/questions/885638/moving-ssl-termination-to-the
> >> -edge-increased-the-instance-of-502-errors
> >>
> >>
> >> Any advice how to squash those 502s?  Should I be worried nginx is
> leaking?
> >
> > First of all, you have to find the reason for these 502 errors.
> > Looking into the error log is a good start.
> >
> > As per provided serverfault question, you see "no live upstreams"
> > errors in logs.  These errors mean that all configured upstream servers
> were disabled due to previous errors (see http://nginx.org/en/docs/http/
> ngx_http_upstream_module.html#max_fails),
> > that is, these errors are just a result of previous errors.  You have to
> find out real errors, they should be in the error log too.
> >
> > --
> > Maxim Dounin
> > http://mdounin.ru/
> > ___
> > nginx mailing list
> > nginx@nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
> > ___
> > nginx mailing list
> > nginx@nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: How to force php-fpm to display slow log?

2017-10-24 Thread Anoop Alias
/index.php -- that is the URL, as stated by the log you provided.

On Tue, Oct 24, 2017 at 8:39 PM, agriz  wrote:

> slowlog = /var/log/php-fpm/slow.log
> request_slowlog_timeout = 1s
> This following two lines are added in the php config file. Once it is added
> and restart php-fpm, the slow.log file is created.
>
> (request: "GET /index.php") executing too slow (1.072177 sec), logging
> This error is displayed in error.log file of php-fpm. But there is no
> additional details.
>
> I am not able to trace which url is causing the trouble. It is codeigniter
> framework. So I used the framework's benchmarking tool on all the methods
> and found the execution is fast 0.004 to 0.01 for multiple test
>
> What could be the possible reason for slow.log being empty? Is there a way
> to get complete url in the error log for slow process?
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,277048,277048#msg-277048
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: limit_conn is dropping valid connections and causing memory leaks on nginx reload

2017-09-30 Thread Anoop Alias
What is the change (workaround) you made? I don't see a difference.

On Sat, Sep 30, 2017 at 3:35 PM, Dejan Grofelnik Pelzel <
nginx-fo...@forum.nginx.org> wrote:

> Hello,
>
> We are running the nginx 1.13.5  with HTTP/2 in a proxy_pass proxy_cache
> configuration with clients having relatively long open connections. Our
> system does automatic reloads for any new configuration and we recently
> introduced a limit_conn to some of the config files. After that, I've
> started noticing a rapid drop in connections and outgoing network
> every-time
> the system would perform a configuration reload. Even stranger, on every
> reload the memory usage would go up for about 1-2GB until ultimately
> everything crashed if the reloads were too frequent. The memory usage did
> go
> down after old workers were released, but that could take up to 30 minutes,
> while the configuration could get reloaded up to twice per minute.
>
> We used the following configuration as recommended by pretty much any
> example:
> limit_conn_zone $binary_remote_addr zone=1234con:10m;
> limit_conn zone1234con 10;
>
> I was able to verify the connection drop by doing a simple ab test, for
> example, I would run ab -c 100 -n -k 1000 https://127.0.0.1/file.bin
> 990 of the connections went through, however, 10 would still be active.
> Immediately after the reload, those would get dropped as well. Adding -r
> option would help the problem, but that doesn't fix our problem.
>
> Finally, after I tried to create a workaround, I've configured the limit
> zone to:
> limit_conn_zone "v$binary_remote_addr" zone=1234con:10m;
>
> Suddenly everything magically started to work. The connections were not
> being dropped, the limit worked as expected and even more surprisingly the
> memory usage was not going up anymore. I've been tearing my hair out almost
> all day yesterday trying to figure this out. While I was very happy to see
> this resolved, I am now confused as to why nginx behaves in such a way.
>
> I'm thinking this might likely be a bug, so I'm just wondering if anyone
> could explain why it is happening or has a similar problem.
>
> Thank you!
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,276633,276633#msg-276633
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: (111: Connection refused) while connecting to upstream NGINX Uwsgi

2017-09-18 Thread Anoop Alias
What is the output of

ls -l /home/workspace/project/tmp/uwsgi.sock

and

ps aux|grep nginx



On Mon, Sep 18, 2017 at 12:13 PM, sandyman 
wrote:

> after struggling for almost a week on this I feel I need to reach out for
> assistance . Please Help
>
> I am trying to deploy my project to a production environment
>
>
> In terms of permissions I have read all the forums changed all permissions
> i.e. 777 etc etc
> srwxrwxrwx  uwsgi.sock  (owned by me full access ) !!
> I've checked over and over all the directory structures etc. Ive swtiched
> from unix sockets to http but still
> no joy .
>
> exact error
>
>
> 2017/09/18 06:32:56 [error] 15451#0: *1 connect() to
> unix:home/workspace/project//
> tmp/uwsgi.sock failed (111: Connection refused) while connecting to
> upstream, client: 1933
> .247.239.160, server: website.xyz, request: "GET / HTTP/1.0", upstream:
> "uwsgi://unix:
> /home/workspace/project/tmp/uwsgi.sock:", host: "www.website.xyz"
>
>
> Nginx configuration:
>
> upstream _django {
> server unix:home/workspace/project/tmp/uwsgi.sock;
> }
>
> server {
>listen 62032;
>server_name website.xyz www.website.xyz ;
>
>location = /favicon.ico { access_log off; log_not_found off; }
>location = /test-this { return 200 "Yes, this is correct\n"; }
>location /foo { return 200 "YIKES  what a load of codswollop";}
>root  /home/workspace/project;
>
>location /static {
>   alias   /home/workspace/project/testsite/assets;
> }
>
>location /assets {
>   root   /home/workspace/project/testsite/assets;
> }
>
> location / {
> include /home/workspace/project/uwsgi_params;
> #include uwsgi parameters.
> uwsgi_pass _django;
> #tell nginx to communicate with uwsgi though unix socket
> /run/uwsgi/sock.
> }
>
>
>
>
> uwsgi ini file
>
> # project.ini file
> [uwsgi]
> chdir =  /home/workspace/project/testsite
> module=testsite.wsgi:application
> socket  = /home/workspace/project/uwsgi.sock
> chmod-socket= 666
> daemonize = /home/workspace/project/tmp/uwsgi.log
> protocol = http
> master = true
> vacuum=true
> max-requests=5000
> processes = 10
>
>
> start script
>
> #! /bin/bash
> PIDFILE=/home/workspace/project/startselvacura.pid
>
> source /home/workspace/project/venv/bin/activate
> uwsgi --ini /home/workspace/project/uwsgi-prod.ini --venv
> /home/workspace/project/venv  --pidfile $PIDFILE
> ~
>
>
>
> running https://www.asandhu.xyz/foo
>
> does return the  expected result :
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,276427,276427#msg-276427
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: mediawiki, php-fpm, and nginx

2017-09-17 Thread Anoop Alias
t/html; charset=UTF-8
> Transfer-Encoding: chunked
> Connection: close
>
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 write new buf t:1 f:0 0998F010, 
> pos 0998F010, size: 165 file: 0, size: 0
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http write filter: l:0 f:0 s:165
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http cacheable: 0
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream process upstream
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe read upstream: 1
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe preread: 22
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 readv: eof:1, avail:0
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 readv: 1, last:4024
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe recv chain: 0
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe buf free s:0 t:1 f:0 
> 0E18, pos 0E4A, size: 22 file: 0, size: 0
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe length: -1
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 01
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 03
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 01
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 08
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record byte: 00
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi record length: 8
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http fastcgi sent end request
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 0E18
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe write downstream: 1
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 pipe write downstream done
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 event timer: 6, old: 2415705830, 
> new: 2415705831
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream exit: 
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 finalize http upstream request: 0
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 finalize http fastcgi request
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 free rr peer 1 0
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 close http upstream connection: 6
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 0998AA00, unused: 88
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 event timer del: 6: 2415705830
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 reusable connection: 0
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http upstream temp fd: -1
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http output filter 
> "/wiki/index.php?"
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http copy filter: 
> "/wiki/index.php?"
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 image filter
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http postpone filter 
> "/wiki/index.php?" BFEBBC94
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http chunk: 0
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 write old buf t:1 f:0 0998F010, 
> pos 0998F010, size: 165 file: 0, size: 0
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 write new buf t:0 f:0 , 
> pos 080F0A8B, size: 5 file: 0, size: 0
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http write filter: l:1 f:0 s:170
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http write filter limit 0
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 writev: 170 of 170
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http write filter 
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http copy filter: 0 
> "/wiki/index.php?"
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http finalize request: 0, 
> "/wiki/index.php?" a:1, c:1
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http request count:1 blk:0
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http close request
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 http log handler
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 09992DB0, unused: 4
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 0998F000, unused: 3418
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 close http connection: 4
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 reusable connection: 0
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 099AEEE8
> 2017/09/17 09:16:09 [debug] 21873#21873: *7 free: 0998ACC0, unused: 24
>
>
> Le 2017-09-17 à 08:28, Anoop Alias a écrit :
>
> try changing
>
> ##
>
> location = /wiki {
>   root /home/www/isotoperesearch.ca/wiki;
>   fastcgi_index index.php;
>   index index.php;
>   

Re: mediawiki, php-fpm, and nginx

2017-09-17 Thread Anoop Alias
try changing

##

location = /wiki {
  root /home/www/isotoperesearch.ca/wiki;
  fastcgi_index index.php;
  index index.php;
  include fastcgi_params;
  fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;

}

##
to

#

location /wiki/ {
  # root /home/www/isotoperesearch.ca/wiki;
  fastcgi_index index.php;
  index /wiki/index.php;
  include fastcgi_params;
  fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;

}

##3

On Sun, Sep 17, 2017 at 5:48 PM, Etienne Robillard 
wrote:

> Hi,
>
> I'm trying to configure nginx with php-fpm to run mediawiki in a distinct
> location (/wiki).
>
> Here's my config:
>
> # configuration file /etc/nginx/nginx.conf:
> user www-data;
> worker_processes 4;
> pid /run/nginx.pid;
>
> events {
> worker_connections 512;
> multi_accept on;
> use epoll;
> }
>
> http {
>
> ##
> # Basic Settings
> ##
>
> sendfile on;
> tcp_nopush on;
> tcp_nodelay on;
> keepalive_timeout 80;
> types_hash_max_size 2048;
> # server_tokens off;
>
> # server_names_hash_bucket_size 64;
> # server_name_in_redirect off;
>
> include /etc/nginx/mime.types;
> default_type application/octet-stream;
>
> ##
> # SSL Settings
> ##
>
> ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
> ssl_prefer_server_ciphers on;
>
> ##
> # Logging Settings
> ##
>
> access_log /var/log/nginx/access.log;
> error_log /var/log/nginx/error.log;
>
> ##
> # Gzip Settings
> ##
>
> gzip off;
> gzip_disable "msie6";
>
> # gzip_vary on;
> # gzip_proxied any;
> # gzip_comp_level 6;
> # gzip_buffers 16 8k;
> # gzip_http_version 1.1;
> # gzip_types text/plain text/css application/json
> application/javascript text/xml application/xml application/xml+rss
> text/javascript;
>
> ##
> # Virtual Host Configs
> ##
>
> #isotopesoftware.ca:
> #include /etc/nginx/conf.d/development.conf;
> include /etc/nginx/conf.d/isotoperesearch.conf;
> #include /etc/nginx/sites-enabled/*;
> }
>
> server {
>
> # static medias web server configuration, for development
> # and testing purposes.
>
> listen   80;
> server_name  localhost;
> error_log /var/log/nginx/error_log; #debug
> root /home/www/isotoperesearch.ca;
> #autoindex on;
> client_max_body_size 5m;
> client_body_timeout 60;
>
> location / {
> ## host and port to fastcgi server
> #uwsgi_pass django; # 8808=gthc.org; 8801=tm
> #include uwsgi_params;
> fastcgi_pass 127.0.0.1:8808;
> include fastcgi_params;
> }
>
>
> # debug url rewriting to the error log
> rewrite_log on;
>
> location /media {
> autoindex on;
> gzip on;
> }
>
> location /pub {
> autoindex on;
> gzip on;
> }
>
> location /webalizer {
> autoindex on;
> gzip on;
> #auth_basic "Private Property";
> #auth_basic_user_file /etc/nginx/.htpasswd;
> allow 67.68.76.70;
> deny all;
> }
>
> location /documentation {
> autoindex on;
> gzip on;
> }
>
> location /moin_static184 {
> autoindex on;
> gzip on;
> }
> location /favicon.ico {
> empty_gif;
> }
> location /robots.txt {
>  root /home/www/isotopesoftware.ca;
> }
> location /sitemap.xml {
> root /home/www/isotopesoftware.ca;
> }
>
> #location /public_html {
> # root /home/www/;
> # autoindex on;
> #}
> # redirect server error pages to the static page /50x.html
> #error_page 404 /404.html;
> #error_page 403/403.html;
> #error_page 500 502 503 504  /50x.html;
> #location = /50x.html {
> #root   /var/www/nginx-default;
> #}
>
> include conf.d/mediawiki.conf;
> #include conf.d/livestore.conf;
> }
>
>
> # configuration file /etc/nginx/fastcgi_params:
> fastcgi_param  PATH_INFO  $fastcgi_script_name;
> fastcgi_param  QUERY_STRING   $query_string;
> fastcgi_param  REQUEST_METHOD $request_method;
> fastcgi_param  CONTENT_TYPE   $content_type;
> fastcgi_param  CONTENT_LENGTH $content_length;
>
> fastcgi_param  SCRIPT_NAME$fastcgi_script_name;
> fastcgi_param  REQUEST_URI$request_uri;
> fastcgi_param  DOCUMENT_URI   $document_uri;
> fastcgi_param  DOCUMENT_ROOT  $document_root;
> fastcgi_param  SERVER_PROTOCOL$server_protocol;
>
> fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
> fastcgi_param  SERVER_SOFTWAREnginx;
>
> fastcgi_param  REMOTE_ADDR$remote_addr;
> fastcgi_param  REMOTE_PORT$remote_port;
> #fastcgi_param  REMOTE_USER  $remote_user;
> fastcgi_param  SERVER_ADDR$server_addr;
> fastcgi_param  SERVER_PORT$server_port;
> fastcgi_param  

Re: Too many connections in waiting state

2017-09-07 Thread Anoop Alias
Doing strace on an nginx child in the shutdown state, I get:

##
 strace -p 23846
strace: Process 23846 attached
restart_syscall(<... resuming interrupted futex ...>

) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5395,
{1504781553, 30288000}, 

) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5397,
{1504781554, 30408000}, ) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5399,
{1504781555, 30535000}, ) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5401,
{1504781556, 30675000}, ) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5403,
{1504781557, 30767000}, ) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5405,
{1504781558, 30889000}, ) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5407,
{1504781559, 3098}, ) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5409,
{1504781560, 31099000}, ) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5411,
{1504781561, 3121}, ) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5413,
{1504781562, 31317000}, ) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5415,
{1504781563, 31428000}, ) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5417,
{1504781564, 31575000}, ) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5419,
{1504781565, 31678000}, ) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5421,
{1504781566, 31828000}, ) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5423,
{1504781567, 31941000}, ) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5425,
{1504781568, 32085000}, ) = -1 ETIMEDOUT (Connection timed out)
###



On Thu, Sep 7, 2017 at 3:59 PM, Lucas Rolff <lu...@lucasrolff.com> wrote:

> Check if any of the sites you run on the server gets crawled by any
> crawlers around the time you see an increase – I know that a crawler such
> as Screaming Frog doesn’t handle servers that are capable of http2
> connections and have it activated for sites that are getting crawled, and
> will result in connections with a “waiting” state in nginx.
>
>
>
> It might be there’s other tools that behave the same way, but I’d
> personally look into what kind of traffic/requests happened that increased
> the waiting state a lot.
>
>
>
> Best Regards,
>
>
>
> *From: *nginx <nginx-boun...@nginx.org> on behalf of Anoop Alias <
> anoopalia...@gmail.com>
> *Reply-To: *"nginx@nginx.org" <nginx@nginx.org>
> *Date: *Thursday, 7 September 2017 at 11.52
> *To: *Nginx <nginx@nginx.org>
> *Subject: *Too many connections in waiting state
>
>
>
> Hi,
>
>
>
> I see sometimes too many waiting connections on nginx .
>
>
>
> This often gets cleared on a restart , but otherwise pileup
>
>
>
> ###
>
> Active connections: 4930
>
>
>
> server accepts handled requests
>
>
>
>  442071 442071 584163
>
>
>
> Reading: 2 Writing: 539 Waiting: 4420
>
>
>
> ###
>
> [root@web1 ~]# grep keep /etc/nginx/conf.d/http_settings_custom.conf
>
> keepalive_timeout   10s;
>
> keepalive_requests  200;
>
> keepalive_disable   m

Too many connections in waiting state

2017-09-07 Thread Anoop Alias
Hi,

I sometimes see too many connections in the waiting state on nginx.

This often gets cleared by a restart, but otherwise they pile up.

###
Active connections: 4930


server accepts handled requests


 442071 442071 584163


Reading: 2 Writing: 539 Waiting: 4420


###
[root@web1 ~]# grep keep /etc/nginx/conf.d/http_settings_custom.conf
keepalive_timeout   10s;
keepalive_requests  200;
keepalive_disable   msie6 safari;


[root@web1 ~]# nginx -V
nginx version: nginx/1.13.3
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
built with LibreSSL 2.5.5
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--modules-path=/etc/nginx/modules --with-pcre=./pcre-8.41 --with-pcre-jit
--with-zlib=./zlib-1.2.11 --with-openssl=./libressl-2.5.5
--conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log
--http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid
--lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody
--group=nobody --with-http_ssl_module --with-http_realip_module
--with-http_addition_module --with-http_sub_module --with-http_dav_module
--with-http_flv_module --with-http_mp4_module --with-http_gunzip_module
--with-http_gzip_static_module --with-http_random_index_module
--with-http_secure_link_module --with-http_stub_status_module
--with-http_auth_request_module --add-dynamic-module=naxsi-http2/naxsi_src
--with-file-aio --with-threads --with-stream --with-stream_ssl_module
--with-http_slice_module --with-compat --with-http_v2_module
--with-http_geoip_module=dynamic
--add-dynamic-module=ngx_pagespeed-1.12.34.2-stable
--add-dynamic-module=/usr/local/rvm/gems/ruby-2.4.1/gems/passenger-5.1.8/src/nginx_module
--add-dynamic-module=ngx_brotli --add-dynamic-module=echo-nginx-module-0.60
--add-dynamic-module=headers-more-nginx-module-0.32
--add-dynamic-module=ngx_http_redis-0.3.8
--add-dynamic-module=redis2-nginx-module
--add-dynamic-module=srcache-nginx-module-0.31
--add-dynamic-module=ngx_devel_kit-0.3.0
--add-dynamic-module=set-misc-nginx-module-0.31
--add-dynamic-module=testcookie-nginx-module
--add-dynamic-module=ModSecurity-nginx --with-cc-opt='-O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
--param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'
--with-ld-opt=-Wl,-E
###


What could be causing this? The server is quite capable and this happens
only rarely


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: nginx limit_req and limit_conn not working to prevent DoS attack

2017-08-01 Thread Anoop Alias
You can use an external tool to parse the Nginx error log and block the
offending IP in iptables/netfilter.
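
fail2ban is one such tool -- a rough sketch of a jail, assuming its bundled
nginx-limit-req filter and the usual error log path (adjust both to your
setup):

##
# /etc/fail2ban/jail.local  (sketch)
[nginx-limit-req]
enabled   = true
filter    = nginx-limit-req
logpath   = /var/log/nginx/error.log
findtime  = 600
maxretry  = 10
bantime   = 3600
banaction = iptables-multiport
port      = http,https
##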

On Wed, Aug 2, 2017 at 7:43 AM, Phani Sreenivasa Prasad <
nginx-fo...@forum.nginx.org> wrote:

> I assume it would help dropping connections . since we are setting rate
> limit per ip and any client IP which is suspicious by sending requests in
> bulk(lets say 1 connections/requests), it makes sense to not to accept
> connections/requests from that IP.
>
> Thoughts ??
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,275796,275798#msg-275798
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Strange issue after nginx update

2017-06-28 Thread Anoop Alias
Try changing the nameservers in /etc/resolv.conf.

On Thu, Jun 29, 2017 at 3:51 AM, Andrea Soracchi 
wrote:

> Hi,
>
> I have attached part of the ettercap log.
>
> I have posted a test file of 40MB.
>
> The delay  is 29 second:
>
>  from the last file's chunk at 23:56:06
>
>  to the response of  index2.php at 23:56:35
>
> The nginx's log show:
>
> 192.168.18.18 - - [28/Jun/2017:23:56:35 +0200] "POST /index2.php HTTP/1.1"
> 200 37 "-" "Generic Client"
>
> Nothing retransmits, SElinux isn't installed and apparmor is stopped.
>
> Nothing in dmesg...
>
> Thanks a lot,
>
>
> *ANDREA SORACCHI*
> *+39 329 0512704 <+393290512702>*
> System Engineer
>
> +39 0521 24 77 91
> sorac...@netbuilder.it
>
> --
> *Da: *"Payam Chychi" 
> *A: *"nginx" 
> *Inviato: *Mercoledì, 28 giugno 2017 19:56:04
> *Oggetto: *Re: Strange issue after nginx update
>
>
> On Wed, Jun 28, 2017 at 8:41 AM Andrea Soracchi 
> wrote:
>
>> Hi,
>> could you please help me solve this issue? I'm getting crazy!
>>
>> Before the nginx update my client worked perfectly: it posted files to my
>> website without any delay.
>>
>> How, after nginx update (ubuntu 16.04 LTS) I've got this issue:
>>
>> - the client posts files successfully but the answer of the post is
>> delayed. The more the file is bigger, the more the answer is delayed.
>>
>> I put a sniffer into the website' server and I noticed that the nginx
>> receives the post but it waits to transfer the file to php-fpm process, so
>> also the answer to the client is delayed
>>
>> The nginx server is:
>>
>> nginx/1.10.0 (Ubuntu) and its conf is:
>>
>> -
>> user www-data;
>> worker_processes auto;
>> pid /run/nginx.pid;
>>
>> events {
>> worker_connections 768;
>> # multi_accept on;
>> }
>>
>> http {
>> sendfile on;
>> tcp_nodelay on;
>> keepalive_timeout 65;
>> types_hash_max_size 2048;
>> client_max_body_size 0;
>> log_not_found off;
>> server_name_in_redirect off;
>> client_body_timeout 120s;
>> autoindex off;
>> include /etc/nginx/mime.types;
>> default_type application/octet-stream;
>> ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
>> ssl_prefer_server_ciphers on;
>> access_log /var/log/nginx/access.log;
>> error_log /var/log/nginx/error.log info;
>> gzip on;
>> gzip_disable "msie6";
>> gzip_types text/plain text/css application/json
>> application/javascript text/xml application/xml application/xml+rss
>> text/javascript;
>> include /etc/nginx/conf.d/*.conf;
>> include /etc/nginx/sites-enabled/*;
>> ---
>>
>> and website's php-fpm conf is:
>>
>> server {
>>   listen80;
>>   server_name   test.it;
>>   server_name_in_redirect   off;
>>   autoindex off;
>>   client_max_body_size  500m;
>>   index index.html;
>>   root  /home/test/test;
>>   location ~ \.(php|html|htm|php3)$ {
>> try_files $uri 404;
>> fastcgi_pass  unix:/run/php/mdtest-fpm.sock;
>> include   fastcgi_params;
>>   }
>> }
>>
>> fastcgi_params config:
>>
>> fastcgi_param  QUERY_STRING   $query_string;
>> fastcgi_param  REQUEST_METHOD $request_method;
>> fastcgi_param  CONTENT_TYPE   $content_type;
>> fastcgi_param  CONTENT_LENGTH $content_length;
>>
>> fastcgi_param  SCRIPT_NAME$fastcgi_script_name;
>> fastcgi_param  REQUEST_URI$request_uri;
>> fastcgi_param  DOCUMENT_URI   $document_uri;
>> fastcgi_param  DOCUMENT_ROOT  $document_root;
>> fastcgi_param  SERVER_PROTOCOL$server_protocol;
>> fastcgi_param  REQUEST_SCHEME $scheme;
>> fastcgi_param  HTTPS  $https if_not_empty;
>>
>> fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
>> fastcgi_param  SERVER_SOFTWAREnginx/$nginx_version;
>>
>> fastcgi_param  REMOTE_ADDR$remote_addr;
>> fastcgi_param  REMOTE_PORT$remote_port;
>> fastcgi_param  SERVER_ADDR$server_addr;
>> fastcgi_param  SERVER_PORT$server_port;
>> #fastcgi_param  SERVER_NAME$server_name;
>> fastcgi_param  SERVER_NAME   $http_host;
>>
>> # PHP only, required if PHP was built with --enable-force-cgi-redirect
>> fastcgi_param  REDIRECT_STATUS200;
>>
>> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
>>
>>
>> Thanks a lot,
>> Andrea
>>
>>
>> *ANDREA SORACCHI*
>> *+39 329 0512704 <+393290512702>*
>> System Engineer
>>
>> +39 0521 24 77 91
>> sorac...@netbuilder.it
>>
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/
>> nginx
>
>
> hi,
>
> can you show the related wireshark data, how long is the response delayed
> by? and anything else like 

Re: Unable to start php-fpm

2017-06-06 Thread Anoop Alias
grep apache /etc/passwd

should return something.

FYI this has nothing to do with nginx

On Wed, Jun 7, 2017 at 7:11 AM, marcospaulo877 
wrote:

> /etc/init.d/php-fpm restart
> Stopping php-fpm:  [FAILED]
> Starting php-fpm: [07-Jun-2017 01:35:37] ERROR: [pool www] cannot get uid
> for user 'apache'
> [07-Jun-2017 01:35:37] ERROR: FPM initialization failed
>[FAILED]
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,225788,274718#msg-274718
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: "server" directive is not allowed here error

2017-06-04 Thread Anoop Alias
You can do

nginx -T > mynginx.conf

to have it in a single file.

On Sun, Jun 4, 2017 at 6:28 PM, Peter Booth  wrote:

> FWIWI have never understood the desire to have nginx configuration spread
> across multiple files.
> It just seems to invite error and make it harder to see what is going on.
>
> Perhaps if I worked for a hosting company I’d feel differently but on the
> sites that I have worked on,
> even with quite complicated, subtle caching logic the entire nginx.conf
> has been under 600 lines - not
> that different from a default Apache httpd.conf but with all configuration
> not 90% comments
>
>
> > On 4 Jun 2017, at 7:41 AM, Reinis Rozitis  wrote:
> >
> >> That can't be right, because before I used the multiple location
> directives, I
> >> didn't have http and it worked fine. Regardless, I followed your advice
> and I got
> >> the following now:
> >
> > As people have already pointed you probably have something like main
> config nginx.conf  with:
> >
> > http {
> > ..
> > include sites-enabled/*;
> > ..
> > }
> >
> > where each separate config file indeed doesn't need an extra http {} but
> the different server{} blocks still end up being within a (single) http {}.
> >
> >
> >> nginx: [emerg] "http" directive is not allowed here in
> >> /usr/local/nginx/conf/sites-enabled/ server.domain.tld -ssl:1
> >
> > Nginx includes/parses the files in the order they appear in the
> directory (sites-enabled/) - as it was stated you might try to check if the
> server file before " server.domain.tld -ssl" has a correct configuration
> (all the braces {} are closed etc).
> >
> > rr
> >
> > ___
> > nginx mailing list
> > nginx@nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: "server" directive is not allowed here error

2017-06-04 Thread Anoop Alias
Hi Dino,

I believe you have an unbalanced curly brace somewhere causing the error.

You should check this in a text editor that can highlight syntax.



On Sun, Jun 4, 2017 at 3:58 PM, Dino Edwards 
wrote:

>
> > You can't have server {} block outside http {} (
> http://nginx.org/en/docs/http/ngx_http_core_module.html#server )
>
> > So it has to be:
>
> > http {
> > server {
> >   // whatever goes here
> >  }
> > }
>
>
> That can't be right, because before I used the multiple location
> directives, I didn't have http and it worked fine. Regardless, I followed
> your advice and I got the following now:
>
> nginx: [emerg] "http" directive is not allowed here in
> /usr/local/nginx/conf/sites-enabled/ server.domain.tld -ssl:1
>
> Thanks in advance
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: execution error - pcre limits exceeded (-8)

2017-04-22 Thread Anoop Alias
From the docs:

yum install gcc-c++ flex bison yajl yajl-devel curl-devel curl GeoIP-devel
doxygen zlib-devel pcre-devel
git clone https://github.com/SpiderLabs/ModSecurity
cd ModSecurity
git checkout -b v3/master origin/v3/master
sh build.sh
git submodule init
git submodule update

On Sat, Apr 22, 2017 at 5:43 PM, Dino Edwards  wrote:

>
> > It's worth to try libmodsecurity (aka ModSecurity 3.x) + nginx connector
> instead:
>
> > https://github.com/SpiderLabs/ModSecurity/tree/v3/master
> > https://github.com/SpiderLabs/ModSecurity-nginx
>
> I'm trying to download/compile libmodsecurity and everything I read
> concerning Ubuntu, it instructs me to use build.sh (./build.sh), however
> when I clone https://github.com/SpiderLabs/ModSecurity/tree/v3/master
> build.sh file is not there. I'm not that familiar with git so I'm sure I'm
> doing something wrong.
>
>
>
>
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: URL-Rewriting not working

2017-04-08 Thread Anoop Alias
The 404 is thrown by whatever is running on port 2000, so you can check its
access log and see.


On Sat, Apr 8, 2017 at 6:54 PM, Ajay Garg <ajaygargn...@gmail.com> wrote:

> Hi Anoop.
>
> As per http://serverfault.com/questions/379675/nginx-
> reverse-proxy-url-rewrite, the rewrite should be automatic.
> But it does not work for me :(
>
> On Sat, Apr 8, 2017 at 6:49 PM, Anoop Alias <anoopalia...@gmail.com>
> wrote:
>
>> I think you are confusing between url-rewrite and location
>>
>> On Sat, Apr 8, 2017 at 6:39 PM, Ajay Garg <ajaygargn...@gmail.com> wrote:
>>
>>> Hi All.
>>>
>>> When I setup the following, the authentication+proxying works perfect,
>>> with the url changing from http://1.2.3.4:2001 to
>>> http://1.2.3.4:2001/cgi-bin/webproc, and the proxied0server opening up
>>> perfectly.
>>>
>>> 
>>> 
>>> server {
>>> listen 2001;
>>> location / {
>>>
>>> auth_basic 'Restricted';
>>> auth_basic_user_file
>>> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd;
>>> proxy_pass http://127.0.0.1:2000;
>>> }
>>> }
>>> 
>>> #
>>>
>>>
>>>
>>> However, I am not able to do the proxying if I perform url-rewriting.
>>> Nothing of the following works ::
>>>
>>> a)
>>> 
>>> 
>>> server {
>>> listen 2001;
>>> location /78 {
>>>
>>> auth_basic 'Restricted';
>>> auth_basic_user_file
>>> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd;
>>> proxy_pass http://127.0.0.1:2000;
>>> }
>>> }
>>> 
>>> 
>>>
>>> No URL change happens, and 404 (illegal-file-access) is obtained.
>>>
>>>
>>> b)
>>> 
>>> 
>>> server {
>>> listen 2001;
>>> location /78 {
>>>
>>> auth_basic 'Restricted';
>>> auth_basic_user_file
>>> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd;
>>> proxy_pass http://127.0.0.1:2000/;
>>> }
>>> }
>>> 
>>> 
>>>
>>> No URL change happens, and 404 (illegal-file-access) is obtained.
>>>
>>>
>>> c)
>>> 
>>> 
>>> server {
>>> listen 2001;
>>> location /78/ {
>>>
>>> auth_basic 'Restricted';
>>> auth_basic_user_file
>>> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd;
>>> proxy_pass http://127.0.0.1:2000/;
>>> }
>>> }
>>> 
>>> 
>>>
>>> The URL does changes from http://1.2.3.4:2001/78 to
>>> http://1.2.3.4:2001/cgi-bin/webproc, but a 404 is obtained.
>>>
>>>
>>> d)
>>> 
>>> 
>>> server {
>>> listen 2001;
>>> location /78/ {
>>>
>>> auth_basic 'Restricted';
>>> auth_basic_user_file
>>> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd;
>>> proxy_pass http://127.0.0.1:2000;
>>> }
>>> }
>>> 
>>> 
>>>
>>> No URL change happens, and 404 (illegal-file-access) is obtained.
>>>
>>>
>>> So, I guess c) is the closest to doing a url-rewrite, but I wonder why
>>> am I getting a 404, even though the URL-change is perfect.
>>>
>>>
>>> Any ideas please?
>>>
>>>
>>> Thanks and Regards,
>>> Ajay
>>>
>>> ___
>>> nginx mailing list
>>> nginx@nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>>
>>
>>
>>
>> --
>> *Anoop P Alias*
>>
>>
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>
>
>
> --
> Regards,
> Ajay
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: URL-Rewriting not working

2017-04-08 Thread Anoop Alias
I think you are confusing URL rewriting with location matching.

On Sat, Apr 8, 2017 at 6:39 PM, Ajay Garg  wrote:

> Hi All.
>
> When I setup the following, the authentication+proxying works perfect,
> with the url changing from http://1.2.3.4:2001 to
> http://1.2.3.4:2001/cgi-bin/webproc, and the proxied0server opening up
> perfectly.
>
> 
> 
> server {
> listen 2001;
> location / {
>
> auth_basic 'Restricted';
> auth_basic_user_file /home/
> 2819163155b64c4c81f8608aa23c9faa/.htpasswd;
> proxy_pass http://127.0.0.1:2000;
> }
> }
> 
> #
>
>
>
> However, I am not able to do the proxying if I perform url-rewriting.
> Nothing of the following works ::
>
> a)
> 
> 
> server {
> listen 2001;
> location /78 {
>
> auth_basic 'Restricted';
> auth_basic_user_file /home/
> 2819163155b64c4c81f8608aa23c9faa/.htpasswd;
> proxy_pass http://127.0.0.1:2000;
> }
> }
> 
> 
>
> No URL change happens, and 404 (illegal-file-access) is obtained.
>
>
> b)
> 
> 
> server {
> listen 2001;
> location /78 {
>
> auth_basic 'Restricted';
> auth_basic_user_file /home/
> 2819163155b64c4c81f8608aa23c9faa/.htpasswd;
> proxy_pass http://127.0.0.1:2000/;
> }
> }
> 
> 
>
> No URL change happens, and 404 (illegal-file-access) is obtained.
>
>
> c)
> 
> 
> server {
> listen 2001;
> location /78/ {
>
> auth_basic 'Restricted';
> auth_basic_user_file /home/
> 2819163155b64c4c81f8608aa23c9faa/.htpasswd;
> proxy_pass http://127.0.0.1:2000/;
> }
> }
> 
> 
>
> The URL does changes from http://1.2.3.4:2001/78 to
> http://1.2.3.4:2001/cgi-bin/webproc, but a 404 is obtained.
>
>
> d)
> 
> 
> server {
> listen 2001;
> location /78/ {
>
> auth_basic 'Restricted';
> auth_basic_user_file /home/
> 2819163155b64c4c81f8608aa23c9faa/.htpasswd;
> proxy_pass http://127.0.0.1:2000;
> }
> }
> 
> 
>
> No URL change happens, and 404 (illegal-file-access) is obtained.
>
>
> So, I guess c) is the closest to doing a url-rewrite, but I wonder why am
> I getting a 404, even though the URL-change is perfect.
>
>
> Any ideas please?
>
>
> Thanks and Regards,
> Ajay
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Memory issue

2017-04-06 Thread Anoop Alias
If a dynamically loadable module has an issue but we do not load the module,
will it still cause the error?

In the case above, ModSecurity-nginx was compiled as a dynamic module but
not loaded.


On Thu, Apr 6, 2017 at 3:31 PM, JohnCarne 
wrote:

> I let dev Anoop answer to you... he has a clue about the issue :
>
> https://github.com/SpiderLabs/ModSecurity-nginx/issues/45
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,273274,273444#msg-273444
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Allow /.well-known/acme-challenge but deny dot files

2017-04-04 Thread Anoop Alias
You can put it above the other deny location
# Allow "Well-Known URIs" as per RFC 5785
location ~* ^/.well-known/ {
allow all;
}



On Tue, Apr 4, 2017 at 2:06 PM, Martin Wolfert 
wrote:

> Hi,
>
> try this:
>
> # Allow access to the letsencrypt ACME Challenge
> location ~ /\.well-known\/acme-challenge {
> allow all;
> }
>
> Best,
> Martin
>
>
>
> Am 04.04.2017 um 10:33 schrieb basti:
>
>> Hello,
>>
>> at the Moment I use this config
>>
>> # Deny access to all .invisible files.
>> location ~ /\. { deny  all; access_log off; log_not_found off; }
>>
>>
>> Now I need access to Let's Encrypt acme-challenge and add this to my
>> config before deny all .invisible files, now it looks like
>>
>> ...
>> # Allow Let's Encrypt acme-challenge
>> location /.well-known/acme-challenge { allow all; access_log on; }
>>
>> # Deny access to all .invisible files.
>> location ~ /\. { deny  all; access_log off; log_not_found off; }
>> ...
>>
>> I have reload nginx but I have no access to
>> http://example.com/.well-known/acme-challenge
>>
>> Log say "access forbidden by rule."
>> Is there a way to allow /.well-known/ and deny all other?
>>
>> Best Regards,
>> basti
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Error with nginx reverse proxy setup

2017-03-18 Thread Anoop Alias
Are you sure you don't have a default vhost?

Try adding a server name and enter that server name in the browser, so you
are sure you are hitting the correct server {} block in the config.
Or, if you want to use the IP, make sure the vhost you add is the default.
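
Something along these lines -- a rough sketch only, with example.com standing
in for the real hostname and the proxy settings taken from the config quoted
below:

####
server {
    listen 80 default_server;   # catch-all for requests hitting the bare IP
    return 444;
}

server {
    listen 80;
    server_name example.com;    # placeholder name

    location /my_rev {
        include /etc/nginx/proxy_params;
        # without a URI part the /my_rev prefix is passed to the backend
        # unchanged; use "proxy_pass http://192.168.1.65:5000/;" if the
        # backend expects requests at /
        proxy_pass http://192.168.1.65:5000;
    }
}
####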

On 19-Mar-2017 8:35 AM, "Jun Chen via nginx"  wrote:

> Hi All,
>
> I am setting my first reverse proxy by following online posts. The problem
> is that when I type the http://my_ip_address/my_rev and it returns an 404
> error:
>
> Not Found
> The requested URL was not found on the server.
> If you entered the URL manually please check your spelling and try again.
>
> Here is what I did:
>
> 1. installed nginx 1.10.0 on ubuntu 16.04
> 2. created file my_nx.conf under /etc/sites-available with following:
>
> server {
> listen 80;
> server_name my_ip_address;
>
> location /my_rev {
> proxy_pass http://192.168.1.65:5000;
> include /etc/nginx/proxy_params;
> }
> }
> 3. Under /etc/sites-enabled, a symlink my_nx.conf was generated pointing
> to /etc/sites-available/my_nx.conf
> 4. restart nginx
> 5. On browser, type http://my_ip_address/my_rev and, the error
>
> The configuration seems very straightforward. Where have I missed? Many
> thanks.
>
> -Jun C
>
>
>
>
>
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: combining map

2017-03-09 Thread Anoop Alias
Hi Igor,

I need to use this with

##
srcache_fetch_skip $skip_cache;
srcache_store_skip $skip_cache;
##

As per the srcache docs, the value must be 0 for not skipping, and anything
other than 0 will be treated as a skip.

Will combining the variables work here too?
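
For reference, the full map syntax needs a result variable as the second
argument -- a sketch of the combined map wired to the skip variable above:

##
map "$requestnocache$querystringnc$mccookienocache" $skip_cache {
    default 0;
    ~1      1;
}
##

The ~1 entry is a regex, so it matches whenever any of the three
concatenated flags is 1.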

Thanks,

On Thu, Mar 9, 2017 at 1:39 PM, Igor A. Ippolitov <iippoli...@nginx.com>
wrote:

> If you are going to use it inside proxy_no_cache directive, you can
> combine proxy_cache_method (POST is not included by default) and
> 'proxy_no_cache $query_string$cookie__mcnc'
> The latter will not cache the request until there is query string or a
> cookie with a value set.
> So basically, it looks like you can avoid using maps in this case.
>
>
> On 09.03.2017 10:01, Anoop Alias wrote:
>
> Hi,
>
> I have 3 maps defined
> 
> map $request_method $requestnocache {
> default 0;
> POST1;
> }
>
> map $query_string $querystringnc {
> default 1;
> ""0;
> }
>
> map $http_cookie $mccookienocache {
> default 0;
> _mcnc   1;
> }
> ###
>
> I need to create a single variable that is 1 if either of the 3 above is 1
> and 0 if all are 0. Will the following be enough
>
> map "$requestnocache$querystringnc$mccookienocache" {
> default  0;
> ~1 1;
> }
>
>
>
> Thanks,
> --
> *Anoop P Alias*
>
>
>
> ___
> nginx mailing 
> listnginx@nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx
>
>
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

combining map

2017-03-08 Thread Anoop Alias
Hi,

I have 3 maps defined

map $request_method $requestnocache {
default 0;
POST    1;
}

map $query_string $querystringnc {
default 1;
""0;
}

map $http_cookie $mccookienocache {
default 0;
_mcnc   1;
}
###

I need to create a single variable that is 1 if any of the 3 above is 1,
and 0 if all are 0. Will the following be enough?

map "$requestnocache$querystringnc$mccookienocache" {
default  0;
~1 1;
}



Thanks,
-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

map directive doubt

2017-03-08 Thread Anoop Alias
Hi,

Just have a doubt about the map directive.

map $http_user_agent $upstreamname {
default desktop;
~(iPhone|Android) mobile;
}

Is this correct?



or does the regex need to fully match the variable?

map $http_user_agent $upstreamname {
default desktop;
~*.*Android.*   mobile;
~*.*iPhone.* mobile;
}
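
(For what it is worth, map regexes are search matches, not anchored full
matches, so the first form should already work; adding ~* just makes it
case-insensitive -- a sketch:)

map $http_user_agent $upstreamname {
    default             desktop;
    ~*(iphone|android)  mobile;
}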





-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: nginx stopping abruptly at fix time (2:00 am) repeatedly on Cent OS 7.2

2017-02-27 Thread Anoop Alias
What does the error log say when it is stopping?



On Mon, Feb 27, 2017 at 11:58 AM, omkar_jadhav_20 <
nginx-fo...@forum.nginx.org> wrote:

> Hi ,
>
> Please note that we are using nginx v 1.10.2 and on one of our webserver
> (centos 7.2) we are observing below error and sudden stopping of nginx
> service repeatedly at fix time i.e. at 2:00 am. Below are error lines for
> your reference :
>
> 2017/02/26 02:00:01 [alert] 57550#57550: *131331605 open socket #97 left in
> connection 453
> 2017/02/26 02:00:01 [alert] 57550#57550: *131334225 open socket #126 left
> in
> connection 510
> 2017/02/26 02:00:01 [alert] 57550#57550: *131334479 open socket #160 left
> in
> connection 532
> 2017/02/26 02:00:01 [alert] 57550#57550: *131334797 open socket #121 left
> in
> connection 542
> 2017/02/26 02:00:01 [alert] 57550#57550: *131334478 open socket #159 left
> in
> connection 552
> 2017/02/26 02:00:01 [alert] 57550#57550: *131334802 open socket #194 left
> in
> connection 633
> 2017/02/26 02:00:01 [alert] 57570#57570: aborting
> 2017/02/26 02:00:01 [alert] 57553#57553: aborting
> 2017/02/26 02:00:01 [alert] 57539#57539: aborting
> 2017/02/26 02:00:01 [alert] 57550#57550: aborting
>
> Also find below nginx conf files for your reference :
>
> worker_processes  auto;
> events {
>  worker_connections  4096;
>  use epoll;
>  multi_accept on;
> }
> worker_rlimit_nofile   11;
>
> http {
>  include   mime.types;
>  default_type  video/mp4;
>  proxy_buffering   on;
>  proxy_buffer_size 4096k;
>  proxy_buffers 5 4096k;
>  sendfile  on;
>  keepalive_timeout 30;
>  tcp_nodelay   on;
>  tcp_nopush   on;
>  reset_timedout_connection on;
>  gzip  off;
>  server_tokens  off;
> log_format access '$remote_addr $http_x_forwarded_for $host [$time_local] '
> '$upstream_cache_status ' '"$request" $status $body_bytes_sent '
> '"$http_referer" "$http_user_agent" $request_time'
>
> Also note that we have similar servers with exact same nginx config running
> but those servers are not giving any such errors. Also we are not running
> any script or cron at this point of time.
> Kindly help us to resolve this issue.  Also let me know in case any other
> details are required from my end.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,272633,272633#msg-272633
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: nginx-1.11.10

2017-02-15 Thread Anoop Alias
*_cache_background_update - What does it do?
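
(As far as I understand it, this lets nginx serve a stale cached response
while a background subrequest refreshes the cache entry -- a minimal
proxy-side sketch:)

proxy_cache_use_stale updating error timeout;
proxy_cache_background_update on;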

On Tue, Feb 14, 2017 at 9:22 PM, Maxim Dounin  wrote:

> Changes with nginx 1.11.10   14 Feb
> 2017
>
> *) Change: cache header format has been changed, previously cached
>responses will be invalidated.
>
> *) Feature: support of "stale-while-revalidate" and "stale-if-error"
>extensions in the "Cache-Control" backend response header line.
>
> *) Feature: the "proxy_cache_background_update",
>"fastcgi_cache_background_update", "scgi_cache_background_update",
>and "uwsgi_cache_background_update" directives.
>
> *) Feature: nginx is now able to cache responses with the "Vary" header
>line up to 128 characters long (instead of 42 characters in previous
>versions).
>
> *) Feature: the "build" parameter of the "server_tokens" directive.
>Thanks to Tom Thorogood.
>
> *) Bugfix: "[crit] SSL_write() failed" messages might appear in logs
>when handling requests with the "Expect: 100-continue" request
> header
>line.
>
> *) Bugfix: the ngx_http_slice_module did not work in named locations.
>
> *) Bugfix: a segmentation fault might occur in a worker process when
>using AIO after an "X-Accel-Redirect" redirection.
>
> *) Bugfix: reduced memory consumption for long-lived requests using
>gzipping.
>
>
> --
> Maxim Dounin
> http://nginx.org/
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: need help reverse-proxy config

2017-01-09 Thread Anoop Alias
main context means the directive goes directly in nginx.conf, at the top
level

http context means it should be put inside http { }
server context means it should be in server { }

and so on..

You can search the directive like http://nginx.org/r/x_

For eg: http://nginx.org/r/stream

Check the context where that directive is applicable. Since stream says
main, if you put something like

http{
stream
..
..
}

it will be invalid syntax .
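
A sketch of the corrected layout, reusing the upstream from the config
quoted below -- stream {} sits next to http {}, directly in nginx.conf:

events {
    worker_connections 1024;
}

http {
    # regular web server / vhost configuration goes here
}

stream {
    upstream backend {
        server email.domain.tld:448;
    }

    server {
        listen 448;
        proxy_pass backend;
    }
}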

On Tue, Jan 10, 2017 at 11:34 AM, Thierry 
wrote:

> error_log /var/log/nginx/error.log info;
>
> events {
> worker_connections  1024;
> }
>
> stream {
> upstream backend {
> hash xxx.xxx.xxx.xxx consistent;
>
> server email.domain.tld:448;
> }
>
>
> server {
> listen 448;
> proxy_connect_timeout 1s;
> proxy_timeout 3s;
> proxy_pass backend;
> }
> }
>
> I have difficulties to understand the "main context" idea  With this
> exemple, is my "stream" in the right context ?? Seems not.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,271891,271899#msg-271899
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: need help reverse-proxy config

2017-01-09 Thread Anoop Alias
http://nginx.org/en/docs/stream/ngx_stream_core_module.html#stream

stream should be in the main context.

On Tue, Jan 10, 2017 at 10:17 AM, Thierry 
wrote:

> proxy nginx[20076]: nginx: [emerg] "stream" directive is not allowed here
> in
> /etc/nginx/conf.d/reverse-proxy.conf:47
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,271891,271897#msg-271897
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Trouble in configuring fir REST support

2016-11-25 Thread Anoop Alias
Yes, nothing to do with the problem at hand.

You can also try executing index.php using the php CLI and see if there is
an error, etc.

Probably turn on display_errors in php.ini.

It's strange that nginx is not logging any errors.

Since the phpinfo page is working, this is more or less a problem with PHP
rather than nginx, I think.

Good luck.

On Fri, Nov 25, 2016 at 5:35 PM, <hemanthn...@yahoo.com> wrote:

> Hi Anoop,
>
> Phpinfo() is working fine … is there something to look  for specifically?
>
>
>
> I need to move from APACHE to NGINX .. so as a back-up, APACHE has been
> configured to work on 9080 port. Once NGINX works, APACHE will be removed.
>
>
>
> Thanks for pointer on “in the server {} block instead of location /  and
> repeating in php . Read nginx pitfalls for a better understanding of why
> this is good.” – will look into this. I guess this is nothing to do with
> the problem
>
>
>
> ----
> -Best
> Hemanth
>
>
>
> *From: *Anoop Alias <anoopalia...@gmail.com>
> *Sent: *Friday, November 25, 2016 5:23 PM
> *To: *hemanthn...@yahoo.com
> *Cc: *Nginx <nginx@nginx.org>
>
> *Subject: *Re: Trouble in configuring fir REST support
>
>
>
> You can put a phpinfo page and see if that works.
>
> I am not sure why you mention apache as you are not proxy passing
>
>
>
> Also while not related to the error  Try
>
>
>
> root /opt/riversilica/pixflex/install/app_server/pixflex/public;
>
>
>
> in the server {} block instead of location /  and repeating in php . Read
> nginx pitfalls for a better understanding of why this is good.
>
>
>
>
>
>
>
>
>
> On Fri, Nov 25, 2016 at 5:17 PM, <hemanthn...@yahoo.com> wrote:
>
> Hi Anoop,
>
> The /var/log/nginx/error.log file is empty …
>
>
>
> 
> -Best
> Hemanth
>
>
>
> *From: *Anoop Alias <anoopalia...@gmail.com>
> *Sent: *Friday, November 25, 2016 5:15 PM
> *To: *Nginx <nginx@nginx.org>
> *Cc: *hemanthn...@yahoo.com
> *Subject: *Re: Trouble in configuring fir REST support
>
>
>
> What does the error log say?
>
>
>
> On Fri, Nov 25, 2016 at 5:03 PM, Hemanthnews via nginx <nginx@nginx.org>
> wrote:
>
> Hi,
>
> Following is the environment
>
> OS: CentOS 7 (64 bit)
>
> NGINX: 1.10.1
>
> PHP/PHP-FPM:  5.6
>
> ZF2
>
> Apache 2.4
>
>
>
> Nginx configured on port-80 and apache on port-9080
>
>
>
> I am having trouble in configuring for REST support using nginx + php +
> zf2
>
> When I enter /user-rest while using APACHE (Port: 9080),
> I get all the user data
>
> When I try with /user-rest  (nginx on port:80), I get a blank
> screen  with Firefox and Chrome reports “HTTP ERROR 500”
>
>
>
>
>
> Following is my configuration file under /etc/nginx/conf.d/pixflex_
> nginx.conf
>
> Would appreciate feedback / fix to support REST
>
>
>
> server {
>
>   2 listen   80 default;
>
>   3 listen   443 ssl;
>
>   4 server_name  $hostname;
>
>   5 client_max_body_size 8192M;
>
>   6 client_header_timeout 300s;
>
>   7 client_body_timeout 300s;
>
>   8 fastcgi_read_timeout 300s;
>
>   9 fastcgi_buffers 16 128k;
>
>  10 fastcgi_buffer_size 256k;
>
>  11
>
>  12 #SSL Support - key & certificate location;
>
>  13 ssl_certificate /etc/pki/tls/certs/ca.crt;
>
>  14 ssl_certificate_key /etc/pki/tls/private/ca.key;
>
>  15
>
>  16 #VirtualHost for HTML Support
>
>  17 location / {
>
>  18 #root   /usr/share/nginx/html;
>
>  19 limit_rate 512k;
>
>  20 limit_conn pfs 100;
>
>  21 add_header 'Access-Control-Allow-Origin' "*";
>
>  22 add_header 'Access-Control-Allow-Credentials'
> 'true';
>
>  23 add_header 'Access-Control-Allow-Headers'
> 'Content-Type,accept,x-wsse,origin';
>
>  24 add_header 'Access-Control-Allow-Methods' 'GET,
> POST, OPTIONS, PUT, DELETE';
>
>  25
>
>  26 root /opt/riversilica/pixflex/
> install/app_server/pixflex/public;
>
>  27 index  index.php index.phtml index.html index.htm;
>
>  28 try_files $uri $uri/ /index.php$is_args$args;
>
>  29  }
>
>  30
>
>  31 #error_page  404  /404.html;
>
>

Re: Trouble in configuring fir REST support

2016-11-25 Thread Anoop Alias
You can put up a phpinfo page and see if that works.
I am not sure why you mention Apache, as you are not proxy passing to it.

Also, while not related to the error: try

root /opt/riversilica/pixflex/install/app_server/pixflex/public;

in the server {} block instead of repeating it in location / and in the php
location. Read the nginx pitfalls page for a better understanding of why
this is good.




On Fri, Nov 25, 2016 at 5:17 PM, <hemanthn...@yahoo.com> wrote:

> Hi Anoop,
>
> The /var/log/nginx/error.log file is empty …
>
>
>
> 
> -Best
> Hemanth
>
>
>
> *From: *Anoop Alias <anoopalia...@gmail.com>
> *Sent: *Friday, November 25, 2016 5:15 PM
> *To: *Nginx <nginx@nginx.org>
> *Cc: *hemanthn...@yahoo.com
> *Subject: *Re: Trouble in configuring fir REST support
>
>
>
> What does the error log say?
>
>
>
> On Fri, Nov 25, 2016 at 5:03 PM, Hemanthnews via nginx <nginx@nginx.org>
> wrote:
>
> Hi,
>
> Following is the environment
>
> OS: CentOS 7 (64 bit)
>
> NGINX: 1.10.1
>
> PHP/PHP-FPM:  5.6
>
> ZF2
>
> Apache 2.4
>
>
>
> Nginx configured on port-80 and apache on port-9080
>
>
>
> I am having trouble in configuring for REST support using nginx + php +
> zf2
>
> When I enter /user-rest while using APACHE (Port: 9080),
> I get all the user data
>
> When I try with /user-rest  (nginx on port:80), I get a blank
> screen  with Firefox and Chrome reports “HTTP ERROR 500”
>
>
>
>
>
> Following is my configuration file under /etc/nginx/conf.d/pixflex_
> nginx.conf
>
> Would appreciate feedback / fix to support REST
>
>
>
> server {
>
>   2 listen   80 default;
>
>   3 listen   443 ssl;
>
>   4 server_name  $hostname;
>
>   5 client_max_body_size 8192M;
>
>   6 client_header_timeout 300s;
>
>   7 client_body_timeout 300s;
>
>   8 fastcgi_read_timeout 300s;
>
>   9 fastcgi_buffers 16 128k;
>
>  10 fastcgi_buffer_size 256k;
>
>  11
>
>  12 #SSL Support - key & certificate location;
>
>  13 ssl_certificate /etc/pki/tls/certs/ca.crt;
>
>  14 ssl_certificate_key /etc/pki/tls/private/ca.key;
>
>  15
>
>  16 #VirtualHost for HTML Support
>
>  17 location / {
>
>  18 #root   /usr/share/nginx/html;
>
>  19 limit_rate 512k;
>
>  20 limit_conn pfs 100;
>
>  21 add_header 'Access-Control-Allow-Origin' "*";
>
>  22 add_header 'Access-Control-Allow-Credentials'
> 'true';
>
>  23 add_header 'Access-Control-Allow-Headers'
> 'Content-Type,accept,x-wsse,origin';
>
>  24 add_header 'Access-Control-Allow-Methods' 'GET,
> POST, OPTIONS, PUT, DELETE';
>
>  25
>
>  26 root /opt/riversilica/pixflex/
> install/app_server/pixflex/public;
>
>  27 index  index.php index.phtml index.html index.htm;
>
>  28 try_files $uri $uri/ /index.php$is_args$args;
>
>  29  }
>
>  30
>
>  31 #error_page  404  /404.html;
>
>  32 #redirect server error pages to the static page /50x.html
>
>  33
>
>  34 #error_page   500 502 503 504  /50x.html;
>
>  35 #location = /50x.html {
>
>  36 #   root   /usr/share/nginx/html;
>
>  37 #}
>
>  38
>
>  39 #proxy the PHP scripts to Apache listening on 127.0.0.1:80
>
>  40 #location ~ \.php$ {
>
>  41 #proxy_pass   http://127.0.0.1;
>
>  42 #}
>
>  43
>
>  44 #pass the PHP scripts to FastCGI server listening on
> 127.0.0.1:9000
>
>  45 location ~ \.php$ {
>
>  46 #root   /usr/share/nginx/html;
>
>  47 limit_rate 512k;
>
>  48 limit_conn pfs 100;
>
>  49
>
>  50 root /opt/riversilica/pixflex/
> install/app_server/pixflex/public;
>
>  51 try_files $uri =404;
>
>  52 fastcgi_pass   127.0.0.1:9000;
>
>  53 fastcgi_index  index.php;
>
>  54 fastcgi_param SCRIPT_FILENAME
> $document_root$fastcgi_script_name;
>
>  55 fastcgi_split_path_info ^(.+\.php)(/.+)$;
>
>  56 fastcgi_intercept_errors o

Re: Trouble in configuring fir REST support

2016-11-25 Thread Anoop Alias
What does the error log say?

On Fri, Nov 25, 2016 at 5:03 PM, Hemanthnews via nginx 
wrote:

> Hi,
>
> Following is the environment
>
> OS: CentOS 7 (64 bit)
>
> NGINX: 1.10.1
>
> PHP/PHP-FPM:  5.6
>
> ZF2
>
> Apache 2.4
>
>
>
> Nginx configured on port-80 and apache on port-9080
>
>
>
> I am having trouble in configuring for REST support using nginx + php +
> zf2
>
> When I enter /user-rest while using APACHE (Port: 9080),
> I get all the user data
>
> When I try with /user-rest  (nginx on port:80), I get a blank
> screen  with Firefox and Chrome reports “HTTP ERROR 500”
>
>
>
>
>
> Following is my configuration file under /etc/nginx/conf.d/pixflex_
> nginx.conf
>
> Would appreciate feedback / fix to support REST
>
>
>
> server {
>
>   2 listen   80 default;
>
>   3 listen   443 ssl;
>
>   4 server_name  $hostname;
>
>   5 client_max_body_size 8192M;
>
>   6 client_header_timeout 300s;
>
>   7 client_body_timeout 300s;
>
>   8 fastcgi_read_timeout 300s;
>
>   9 fastcgi_buffers 16 128k;
>
>  10 fastcgi_buffer_size 256k;
>
>  11
>
>  12 #SSL Support - key & certificate location;
>
>  13 ssl_certificate /etc/pki/tls/certs/ca.crt;
>
>  14 ssl_certificate_key /etc/pki/tls/private/ca.key;
>
>  15
>
>  16 #VirtualHost for HTML Support
>
>  17 location / {
>
>  18 #root   /usr/share/nginx/html;
>
>  19 limit_rate 512k;
>
>  20 limit_conn pfs 100;
>
>  21 add_header 'Access-Control-Allow-Origin' "*";
>
>  22 add_header 'Access-Control-Allow-Credentials'
> 'true';
>
>  23 add_header 'Access-Control-Allow-Headers'
> 'Content-Type,accept,x-wsse,origin';
>
>  24 add_header 'Access-Control-Allow-Methods' 'GET,
> POST, OPTIONS, PUT, DELETE';
>
>  25
>
>  26 root /opt/riversilica/pixflex/
> install/app_server/pixflex/public;
>
>  27 index  index.php index.phtml index.html index.htm;
>
>  28 try_files $uri $uri/ /index.php$is_args$args;
>
>  29  }
>
>  30
>
>  31 #error_page  404  /404.html;
>
>  32 #redirect server error pages to the static page /50x.html
>
>  33
>
>  34 #error_page   500 502 503 504  /50x.html;
>
>  35 #location = /50x.html {
>
>  36 #   root   /usr/share/nginx/html;
>
>  37 #}
>
>  38
>
>  39 #proxy the PHP scripts to Apache listening on 127.0.0.1:80
>
>  40 #location ~ \.php$ {
>
>  41 #proxy_pass   http://127.0.0.1;
>
>  42 #}
>
>  43
>
>  44 #pass the PHP scripts to FastCGI server listening on
> 127.0.0.1:9000
>
>  45 location ~ \.php$ {
>
>  46 #root   /usr/share/nginx/html;
>
>  47 limit_rate 512k;
>
>  48 limit_conn pfs 100;
>
>  49
>
>  50 root /opt/riversilica/pixflex/
> install/app_server/pixflex/public;
>
>  51 try_files $uri =404;
>
>  52 fastcgi_pass   127.0.0.1:9000;
>
>  53 fastcgi_index  index.php;
>
>  54 fastcgi_param SCRIPT_FILENAME
> $document_root$fastcgi_script_name;
>
>  55 fastcgi_split_path_info ^(.+\.php)(/.+)$;
>
>  56 fastcgi_intercept_errors on;
>
>  57 fastcgi_read_timeout 300;
>
>  58 include fastcgi_params;
>
>  59 }
>
>  60
>
>  61 # deny access to .htaccess files, if Apache's document root
>
>  62 # concurs with nginx's one
>
>  63 location ~ /\.ht {
>
>  64 deny  all;
>
>  65 }
>
>  66 }
>
>
>
>
>
>
>
>
>
>
>
> 
> -Best
> Hemanth
>
>
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Bloking Bad bots

2016-11-14 Thread Anoop Alias
I had asked the same question once and never got a to-the-point response.

So here is what I infer:

The if causes nginx to check the User-Agent header of each request against
the list of patterns you have configured and return a 403 if one matches.

So processing slows down slightly on every request because of the if
evaluation.

If you look at mod_security etc., it is doing something similar -- a check
on each request -- so in that sense (if you are willing to trade some speed
for the user-agent checking) this is fine. But you are definitely making
nginx slower and consuming more resources by adding the if there, and more
so as the list grows.
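
A map-based variant (as mentioned in the reply below) keeps the whole list
in one place and leaves a single if per server block -- a sketch using the
bot names from the original post; whether it is measurably faster than one
combined if is worth benchmarking:

##
# in the http {} context
map $http_user_agent $bad_bot {
    default 0;
    ~*(zealbot|MJ12bot|AhrefsBot|sogou|PaperLiBot|uipbot|DotBot|GetIntent|Cliqzbot|YandexBot|Nutch|TurnitinBot|IndeedBot) 1;
}

server {
    ...
    if ($bad_bot) {
        return 403;
    }
}
##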



On Mon, Nov 14, 2016 at 9:00 PM,  wrote:

> You can block some of those bots at the firewall permanently.
>
> I use the nginx map feature in a similar manner, but I don't know if map
> is more efficient than your code. ‎I started out blocking similar to your
> scheme, but the map feature looks clear to me in the conf file.
>
> Majestic and Sogou sure are annoying. For what I block, I use 444 rather
> than 403. (And yes, I know that destroys the matter/anti-matter mix of the
> universe, so don't lecture me.) I then eyeball the 444 hits periodically,
> using a script to pull the 444 requests out of the access.log file. I have
> another script to get just the IP addresses from access.log.
>
> For the search engines like Majestic and Sogou, which don't seem to have
> an IP space you can look up via BGP tools, I take the IP used and add it to
> my firewall blocking table. I can go weeks before a new IP gets used.
>
>   Original Message
> From: debilish99
> Sent: Monday, November 14, 2016 7:04 AM
> To: nginx@nginx.org
> Reply To: nginx@nginx.org
> Subject: Bloking Bad bots
>
> Hello,
>
> I have a server with several domains, in the configuration file of each
> domain I have a line like this to block bad bots.
>
> If ($ http_user_agent ~ *
> (zealbot|MJ12bot|AhrefsBot|sogou|PaperLiBot|uipbot|
> DotBot|GetIntent|Cliqzbot|YandexBot|Nutch|TurnitinBot|IndeedBot)
> Return 403;
> }
>
> This works fine.
>
> The question is, if I increase the list of bad bots to 1000, for example,
> this would be a speed problem when nginx manages every request that
> arrives.
>
> I have domains that can have 500,000 hits daily and up to 20,000 hits.
>
> Thank you all.
>
> Greetings.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,270930,270930#msg-270930
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Rewrite help

2016-11-08 Thread Anoop Alias
try

rewrite ^/(.*)\.gifv /vid.php?id=$1 last;

On Tue, Nov 8, 2016 at 8:20 PM, khav  wrote:

> Suppose i have a url as `http://somesite.com/ekjkASDs.gifv` , i want to
> rewrite it as `http://somesite.com/vid.php?id=ekjkASDs`
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,270815,270815#msg-270815
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: [nginx-announce] nginx-1.11.5

2016-10-12 Thread Anoop Alias
*) Feature: the --with-compat configure option.

What does this do actually?

On Tue, Oct 11, 2016 at 10:53 PM, Kevin Worthington 
wrote:

> Hello Nginx users,
>
> Now available: Nginx 1.11.5 for Windows https://kevinworthington.com/
> nginxwin1115 (32-bit and 64-bit versions)
>
> These versions are to support legacy users who are already using Cygwin
> based builds of Nginx. Officially supported native Windows binaries are
> at nginx.org.
>
> Announcements are also available here:
> Twitter http://twitter.com/kworthington
> Google+ https://plus.google.com/+KevinWorthington/
>
> Thank you,
> Kevin
> --
> Kevin Worthington
> kworthington *@* (gmail]  [dot} {com)
> http://kevinworthington.com/
> http://twitter.com/kworthington
> https://plus.google.com/+KevinWorthington/
>
> On Tue, Oct 11, 2016 at 11:32 AM, Maxim Dounin  wrote:
>
>> Changes with nginx 1.11.511 Oct
>> 2016
>>
>> *) Change: the --with-ipv6 configure option was removed, now IPv6
>>support is configured automatically.
>>
>> *) Change: now if there are no available servers in an upstream, nginx
>>will not reset number of failures of all servers as it previously
>>did, but will wait for fail_timeout to expire.
>>
>> *) Feature: the ngx_stream_ssl_preread_module.
>>
>> *) Feature: the "server" directive in the "upstream" context supports
>>the "max_conns" parameter.
>>
>> *) Feature: the --with-compat configure option.
>>
>> *) Feature: "manager_files", "manager_threshold", and "manager_sleep"
>>parameters of the "proxy_cache_path", "fastcgi_cache_path",
>>"scgi_cache_path", and "uwsgi_cache_path" directives.
>>
>> *) Bugfix: flags passed by the --with-ld-opt configure option were not
>>used while building perl module.
>>
>> *) Bugfix: in the "add_after_body" directive when used with the
>>"sub_filter" directive.
>>
>> *) Bugfix: in the $realip_remote_addr variable.
>>
>> *) Bugfix: the "dav_access", "proxy_store_access",
>>"fastcgi_store_access", "scgi_store_access", and
>> "uwsgi_store_access"
>>directives ignored permissions specified for user.
>>
>> *) Bugfix: unix domain listen sockets might not be inherited during
>>binary upgrade on Linux.
>>
>> *) Bugfix: nginx returned the 400 response on requests with the "-"
>>character in the HTTP method.
>>
>>
>> --
>> Maxim Dounin
>> http://nginx.org/
>>
>> ___
>> nginx-announce mailing list
>> nginx-annou...@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx-announce
>>
>
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

proxying to upstream port based on scheme

2016-10-05 Thread Anoop Alias
I have an httpd upstream server that listens on both http and https on
different ports, and I want to send all http => http_upstream and
https => https_upstream.

The following does the trick

#
if ( $scheme = https ) {
set $port 4430;
}
if ( $scheme = http ) {
set $port ;
}

location / {

proxy_pass   $scheme://127.0.0.1:$port;
}
#

Just wanted to know if this is much less efficient (if being evil) than
hard-coding the port and having two different server{} blocks for http and
https.
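
For comparison, a map-based version of the same idea - only a sketch; 4430
is the HTTPS backend port from the snippet above, and 8080 is a placeholder
for the plain-HTTP backend port:

#
# http{} context
map $scheme $backend_port {
    https   4430;
    default 8080;  # placeholder for the plain-HTTP backend port
}

# inside the server{} block
location / {
    proxy_pass $scheme://127.0.0.1:$backend_port;
}
#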

Thanks in advance.



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: performance hit in using too many if's

2016-09-26 Thread Anoop Alias
OK, reiterating my original question.

Is the usage of if/map in the nginx config more efficient than, say, naxsi
(or libmodsecurity) for something like blocking SQL injection?

For example,
https://github.com/nbs-system/naxsi/blob/master/naxsi_config/naxsi_core.rules
rules 1000-1099 block SQL injection attempts.

So do the following (to a limited extent):

## Block SQL injections
set $block_sql_injections 0;
if ($query_string ~ "union.*select.*\(") {
set $block_sql_injections 1;
   
   .
if ($block_file_injections = 1) {
return 403;
}



From the point of view of application performance, which one is better?
This is for a shared hosting server with around 500 vhosts.
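
For reference, the rule shown above rewritten with map instead of an if
chain - a sketch only, untested:

#
# http{} context
map $query_string $block_sql_injections {
    default 0;
    "~*union.*select.*\(" 1;
}

# in each server{} (or an included snippet)
if ($block_sql_injections) {
    return 403;
}
#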


On Mon, Sep 26, 2016 at 3:39 PM,  wrote:

> For one thing, I have trouble making fail2ban work. ;-)  I run sshguard,
> so the major port 22 hacking is covered. And that is continous.
>
> I don't know if fail2ban can read nginx logs. I thought you need to run
> swatch, which requires actual perl skill to set up.
>
> In any event, my 444 is harmless other than someone not getting a reply. I
> find hackers try to log into WordPress. I find Google trys to log into
> WordPress. My guess is maybe Google is trying to figure out if you run
> WordPress, while the hackers would dictionary search if you were actually
> running WordPress. In my case, I am not running WordPress, but anyone
> trying to log into it is suspicious. Blocking Google is bad.
>
> So I examine the IP addresses. If from a colo, VPS, etc. , they get a
> lifetime ban of the entire IP space. No eyeballs there, or if a VPN, they
> can just drop it. If the IP goes back to some ISP or occasionally Google, I
> figure who cares.
>
> WordPress isn't my only trigger. I've learned the words like the Chinese
> use for backup, which they search for. Of course "backup" is searched as
> well. I have maybe 30 triggers in the map. I also limit my verbs to "get"
> and "head" since I only serve static pages. Ask for php, you get 444. Use
> wget, curl, nutch, etc., get a 444. The bad referrals get a 404.
>
> Since whatever I consider to be hacking is blocked in real time, no
> problem to the server. I then use the scripts to look at the IPs I deem
> shady and see who they are. The list is like four or so unique IP addresses
> a day. Most go to ISPs, often mobile. So I just live with it. If I find a
> commercial site, I block the hosting company associated with that
> commercial site.
>
> When I ran Naxsi, it would trigger on words like update. I had to change
> all URLs with the word update in them to a non reserved word. Some triggers
> I couldn't even figure out. Thus I determined using the map modules and my
> own triggers to be a better plan.
>
>   Original Message
> From: Alt
> Sent: Monday, September 26, 2016 1:43 AM
> To: nginx@nginx.org
> Reply To: nginx@nginx.org
> Subject: Re: performance hit in using too many if's
>
> Hello,
>
> I don't agree with Robert Paprocki: adding modules like naxsi or
> modsecurity
> to nginx is not a solution. They have bugs, performance hits, need patch
> when there's new versions of nginx,...
>
> gariac, you say you send 444 to hackers then use a script to display those.
> Why not use fail2ban to scan the logs and ban them for some time. But of
> course, fail2ban could also be a performance hit if you have tons of logs
> to
> scan :-(
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,269808,269848#msg-269848
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: performance hit in using too many if's

2016-09-24 Thread Anoop Alias
I understand that the map may look cleaner in the config, as each vhost
doesn't need the if matching. But isn't the variable evaluation, and
therefore the pattern matching against all possible values, still
happening when the mapped variable is encountered? And therefore isn't
there still a huge performance penalty?

I am mainly asking this as the above type of security config is
mostly not seen on nginx official blogs/documentation etc.
Just wanted to know if people who know the internals have purposefully
omitted these settings even though they serve the purpose of
security.
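
To make the question concrete, this is the kind of setup I mean - a short
sketch with only a few of the patterns:

#
# http{} context; the regexes below are only run when $bad_bot is first
# read during a request, and the result is cached for that request
map $http_user_agent $bad_bot {
    default 0;
    "~*(zealbot|MJ12bot|AhrefsBot|sogou)" 1;
}

# in each vhost (or an included snippet)
if ($bad_bot) {
    return 403;
}
#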



On Sat, Sep 24, 2016 at 2:45 PM,  <li...@lazygranch.com> wrote:
> ‎I suspect the map module can do that more efficiently. There is an example 
> of how to use the map module in this post:
>
> http://ask.xmodulo.com/block-specific-user-agents-nginx-web-server.html
>
> The code is certainly cleaner using map. I use three maps, specifically for  
> bad user agent, bad request, and bad referrer.
>
>
>
>   Original Message
> From: Anoop Alias
> Sent: Saturday, September 24, 2016 1:58 AM
> To: Nginx
> Reply To: nginx@nginx.org
> Subject: performance hit in using too many if's
>
> Hi,
>
> I was following some suggestions on blocking user agents,sql
> injections etc as in the following URL
>
> https://www.howtoforge.com/nginx-how-to-block-exploits-sql-injections-file-injections-spam-user-agents-etc
>
> Just wanted to know what is the performance hit when using so many of
> these if's ( in light of the if-is-evil policy ). Especially if the
> server is having a lot of virtual hosts and the rules are matched for
> each of them.
>
> Is it like:
>
> If the server is capable (beefy) it should be able to handle these URL ?
>
> or
>
> There is a huge performance penalty .Significantly more than
> apache+mod_security as an example
>
> or
>
> The is a performance penalty but not as much as other security tools
> or WAF's like naxsi or mod_security
>
>
> Thanks in advance,
>
> --
> Anoop P Alias
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



-- 
Anoop P Alias

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

performance hit in using too many if's

2016-09-24 Thread Anoop Alias
Hi,

I was following some suggestions on blocking user agents, SQL
injections, etc. as in the following URL:

https://www.howtoforge.com/nginx-how-to-block-exploits-sql-injections-file-injections-spam-user-agents-etc

Just wanted to know what the performance hit is when using so many of
these if's (in light of the if-is-evil policy), especially if the
server has a lot of virtual hosts and the rules are matched for
each of them.

Is it like:

If the server is capable (beefy), it should be able to handle these URLs?

or

There is a huge performance penalty - significantly more than
apache+mod_security, as an example

or

There is a performance penalty, but not as much as with other security
tools or WAFs like naxsi or mod_security


Thanks in advance,

-- 
Anoop P Alias

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: "502 Bad Gateway" on first request in a setup with Apache 2.4-servers as upstreams

2016-09-13 Thread Anoop Alias
Check the logs of the Apache server.

You might need to tweak the proxy_*_timeout settings in nginx, but
usually it is a problem with the upstream server that is causing this.
Try connecting to the upstream via http://domain:port directly and you
should see the same error.
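
The timeouts I mean are along these lines (the values are only examples):

location / {
    proxy_connect_timeout 60s;
    proxy_send_timeout    60s;
    proxy_read_timeout    60s;
    proxy_pass http://backend;   # "backend" = an upstream{} defined elsewhere
}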



On Tue, Sep 13, 2016 at 3:22 PM, hheiko  wrote:
> I don't think there is an OS relation on the frontend, the same problem
> occurs with an Centos Nginx as Reverse proxy in front of 3 Apache backends
> on Centos - but it never occurs on windows based Apache backends...
>
> But we´re on version 1.11.4.1 Lion (http://nginx-win.ecsds.eu)
>
> Posted at Nginx Forum: 
> https://forum.nginx.org/read.php?2,268306,269511#msg-269511
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



-- 
Anoop P Alias

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: pcre.org down?

2016-09-02 Thread Anoop Alias
; Received 682 bytes from 192.228.79.201#53(b.root-servers.net) in 405 ms

pcre.org. 86400 IN NS ns.figure1.net.
pcre.org. 86400 IN NS monid01.nebcorp.com.
pcre.org. 86400 IN NS meow.raye.com.
pcre.org. 86400 IN NS koffing.ivysaur.com.
h9p7u7tr2u91d0v0ljs9l1gidnp90u3h.org. 86400 IN NSEC3 1 1 1 D399EAAB
H9PARR669T6U8O1GSG9E1LMITK4DEM0T NS SOA RRSIG DNSKEY NSEC3PARAM
h9p7u7tr2u91d0v0ljs9l1gidnp90u3h.org. 86400 IN RRSIG NSEC3 7 2 86400
20160923150233 20160902140233 48497 org.
EBTmSR2rCyGj0HzJr5zL5uMIWD6K7inbPUctZ4iWRKfpQjOy02jW+ETu
psvQCa3dtWGGWUfTM820sMbsG7Uue3BX+/2Utrq0lB0XAcL/Z/p9Fwra
h2W8fKHOMyy+6TimoR45A7PnLwqLdLLhY03ISp9pcd7WTGJQ/V/0M5nO Ss8=
jnqfik42o561r7a65jpdqln7gouvgjbs.org. 86400 IN NSEC3 1 1 1 D399EAAB
JNRF2EBH2M0FOJG163S5KVHSBO31O5RF NS DS RRSIG
jnqfik42o561r7a65jpdqln7gouvgjbs.org. 86400 IN RRSIG NSEC3 7 2 86400
20160923095353 20160902085353 48497 org.
Zt8KcXmYsykQQV1hnF3X012jXqorxh8Hj4X12HzQftD/U/CmH03x925I
rvRSY4wYXzlNaHyJ5vDTeYzAG9TIdxG66RDHeOwn3HRGqht2u14oc+sE
pNbYm/cE2ozbf4ohQ0VBT3ma5UInu6ATU9pkJ1nOldYW+LtmPY4/MYFJ DVs=
couldn't get address for 'monid01.nebcorp.com': failure
;; Received 645 bytes from 199.19.57.1#53(d0.org.afilias-nst.org) in 435 ms

;; Received 37 bytes from 66.93.34.236#53(ns.figure1.net) in 329 ms

On Fri, Sep 2, 2016 at 6:19 PM, itpp2012  wrote:
> Anyone any idea what happened to www.pcre.org ?
>
> Posted at Nginx Forum: 
> https://forum.nginx.org/read.php?2,269359,269359#msg-269359
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



-- 
Anoop P Alias

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: nginx a NIGHTMARE for me

2016-08-24 Thread Anoop Alias
A bad workman always blames his tools !

On Wed, Aug 24, 2016 at 11:24 PM, Amanat  wrote:
> i was using Apache from last 3 years. never faced a single problem. Few days
> ago i think to try Nginx. As i heard from mny people. Its very fast memory
> efficient webserver. Trust me guys. Nginx can never be memory efficient.
> Rather it consumes memory same like my mozilla. I never understand for what
> reason it creates a garbage of disk cache for every request.
> look at the pic
>
> http://i.imgur.com/pD7rEDe.png
>
> Though i like some features of Nginx like header modification and error
> 444.
>
> Apart from that Nginx is worst.
>
> My Server configuration:
> Architecture:  x86_64
> CPU op-mode(s):32-bit, 64-bit
> Byte Order:Little Endian
> CPU(s):8
> On-line CPU(s) list:   0-7
> Thread(s) per core:2
> Core(s) per socket:4
> Socket(s): 1
> NUMA node(s):  1
> Vendor ID: GenuineIntel
> CPU family:6
> Model: 58
> Model name:Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz
> Stepping:  9
> CPU MHz:   1601.453
> CPU max MHz:   3800.
> CPU min MHz:   1600.
> BogoMIPS:  6784.50
> Virtualization:VT-x
> L1d cache: 32K
> L1i cache: 32K
> L2 cache:  256K
> L3 cache:  8192K
> NUMA node0 CPU(s): 0-7
>
>
> I can use 2 Gb ram whole day with apache and with Nginx even 32 gb ram works
> only for 5 min.
>
> If any of you have any suggestion to tweak nginx config. Please tell me.
>
> Posted at Nginx Forum: 
> https://forum.nginx.org/read.php?2,269159,269159#msg-269159
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



-- 
Anoop P Alias

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: CPU load monitoring / dynamically limit number of connections to server

2016-05-20 Thread Anoop Alias
http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html  - not
system load based though
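
A minimal limit_conn sketch (the zone size and per-IP limit are arbitrary
examples):

# http{} context
limit_conn_zone $binary_remote_addr zone=addr:10m;

# server{} or location{} context
limit_conn addr 10;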



-- 
Anoop P Alias

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Rewrite before regex location

2016-05-04 Thread Anoop Alias
Hi ,

Can you try putting the rewrite in the server{} block, outside of all
location{} blocks, like:

rewrite "^/test/([a-z]+).php$" /test/test.php?q=$1 last;



On Thu, May 5, 2016 at 5:13 AM, Joyce Babu  wrote:
>> If you've got a messy config with no common patterns, you've got a messy
>> config with no common patterns, and there's not much you can do about it.
>>
>> If you can find common patterns, maybe you can make the config more
>> maintainable (read: no top-level regex locations); but you don't want
>> to break previously-working urls.
>
>
> The site was initially using Apache + mod_php. Hence these ere not an issue.
> It was only when
> I tried to migrate to PHP-FPM, I realized the mistakes. Now the urls cannot
> be chanced due to
> SEO implications.
>
>>
>>
>> > I tried using ^~ as you suggested. Now the rewrite is working correctly,
>> > but the files are not executed. The request is returning the actual PHP
>> > source file, not the HTML generated by executing the script.
>>
>> Can you show one configuration that leads to the php content being
>> returned?
>>
>> If you rewrite /test/x.php to /test.php, /test.php should be handled in
>> the "~ php" location.
>
>
> I am sorry, I did not rewrite it to a location outside /test/, which was why
> the file content was being returned.
>
> Is it possible to do something like this?
>
> location /test/ {
> rewrite "^/test/([a-z]+).php$" /php-fpm/test/test.php?q=$1 last;
> }
>
> location ~ ^/php-fpm/ {
> location ~ [^/]\.php(/|$) {
> fastcgi_split_path_info ^/php-fpm(.+?\.php)(/.*)$;
>
> fastcgi_pass 127.0.0.1:9000;
> fastcgi_index index.php;
> include fastcgi_params;
> }
> }
>
>
> What I have tried to do here is rewrite to add a special prefix (/php-fpm)
> to the rewritten urls. and nest the php location block within it. Then use
> fastcgi_split_path_info to create new $fastcgi_script_name without the
> special prefix. I tried the above code, but it is not working.
> fastcgi_split_path_info is not generating $fastcgi_script_name without the
> /php-fpm prefix.
>
>
>>
>> An alternative possibility could be to put these rewrites at server
>> level rather than inside location blocks. That is unlikely to be great
>> for efficiency; but only you can judge whether it could be adequate.
>>
>> > > > location ~ [^/]\.php(/|$) {
>> > > > fastcgi_split_path_info ^(.+?\.php)(/.*)$;
>> > > >
>> > > > set $fastcgi_script_name_custom $fastcgi_script_name;
>> > > > if (!-f $document_root$fastcgi_script_name) {
>> > > > set $fastcgi_script_name_custom "/cms/index.php";
>> > > > }
>> > >
>> > > I suspect that it should be possible to do what you want to do there,
>> > > with a "try_files". But I do not know the details.
>> >
>> > There is a CMS engine which will intercept all unmatched requests and
>> > check
>> > the database to see if there is an article with that URI. Some times it
>> > has
>> > to match existing directories without index.php. If I use try_files, it
>> > will either lead to a 403 error (if no index is specified), or would
>> > internally redirect the request to the index file (if it is specified),
>> > leading to 404 error. The if condition correctly handles all the
>> > non-existing files.
>>
>> There is more than one possible try_files configuration; but that does not
>> matter: if you have a system that works for you, you can keep using it.
>>
>> Good luck with it,
>>
>> f
>> --
>> Francis Dalyfran...@daoine.org
>>
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
>
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



-- 
Anoop P Alias

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: naxsi as a dynamic module error on nginx 1.10.0

2016-04-28 Thread Anoop Alias
Hi Andrew,

As an update, the Passenger 5.0.28 version they just released seems to
work fine and does not cause any issues.

Here are the config args:

# nginx -V
nginx version: nginx/1.10.0
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--modules-path=/etc/nginx/modules --conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error_log
--http-log-path=/var/log/nginx/access_log
--pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody
--group=nobody --with-http_ssl_module --with-http_realip_module
--with-http_addition_module --with-http_sub_module
--with-http_dav_module --with-http_flv_module --with-http_mp4_module
--with-http_gunzip_module --with-http_gzip_static_module
--with-http_random_index_module --with-http_secure_link_module
--with-http_stub_status_module --with-http_auth_request_module
--add-dynamic-module=naxsi-0.55rc1/naxsi_src --with-file-aio
--with-threads --with-stream --with-stream_ssl_module
--with-http_slice_module --with-ipv6 --with-http_v2_module
--add-dynamic-module=ngx_pagespeed-release-1.11.33.0-beta
--add-dynamic-module=/usr/local/rvm/gems/ruby-2.3.0/gems/passenger-5.0.28/src/nginx_module
--add-module=ngx_cache_purge-2.3 --with-cc-opt='-O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
--param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'
--with-ld-opt=-Wl,-E
###

While this may still be an issue with 5.0.27, since we have a new
working version I would consider this issue closed.

Thanks a bunch for your time.

Thank you,
Anoop

On Thu, Apr 28, 2016 at 1:22 PM, Anoop Alias <anoopalia...@gmail.com> wrote:
> Hi Andrew ,
>
> Thank you. Here are some more from strace and whats shown in stdout
> while compiling . Not sure if its gonna help .
>
> ##
>
> relevant portion of strace nginx -t
>
> open("/etc/group", O_RDONLY|O_CLOEXEC)  = 5
> fstat(5, {st_mode=S_IFREG|0644, st_size=1122, ...}) = 0
> mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
> 0) = 0x7efdf5621000
> read(5, "root:x:0:\nbin:x:1:\ndaemon:x:2:\ns"..., 4096) = 1122
> close(5)= 0
> munmap(0x7efdf5621000, 4096)= 0
> open("/etc/nginx/conf.d/dynamic_modules.conf", O_RDONLY) = 5
> fstat(5, {st_mode=S_IFREG|0644, st_size=110, ...}) = 0
> pread(5, "load_module \"/etc/nginx/modules/"..., 110, 0) = 110
> open("/etc/nginx/modules/ngx_http_naxsi_module.so", O_RDONLY|O_CLOEXEC) = 6
> read(6, 
> "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320\344\3\0\0\0\0\0"...,
> 832) = 832
> fstat(6, {st_mode=S_IFREG|0755, st_size=1499305, ...}) = 0
> mmap(NULL, 2705464, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 6,
> 0) = 0x7efdf1a5d000
> mprotect(0x7efdf1aca000, 2097152, PROT_NONE) = 0
> mmap(0x7efdf1cca000, 163840, PROT_READ|PROT_WRITE,
> MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 6, 0x6d000) = 0x7efdf1cca000
> close(6)= 0
> munmap(0x7efdf1a5d000, 2705464) = 0
> gettid()= 30492
> write(3, "2016/04/28 03:42:08 [emerg] 3049"..., 232) = 232
> write(2, "nginx: [emerg] dlopen() \"/etc/ng"..., 206nginx: [emerg]
> dlopen() "/etc/nginx/modules/ngx_http_naxsi_module.so" failed
> (/etc/nginx/modules/ngx_http_naxsi_module.so: undefined symbol:
> psg_variant_map_new) in /etc/nginx/conf.d/dynamic_modules.conf:1
> ) = 206
> close(5)= 0
> close(4)= 0
> write(2, "nginx: configuration file /etc/n"..., 60nginx: configuration
> file /etc/nginx/nginx.conf test failed
> ) = 60
> exit_group(1)   = ?
> +++ exited with 1 +++
> ##
> output of ltrace nginx -t
>
> __errno_location()
>   = 0x7ff9024927c0
> getpwnam("nobody")
>   = 0x7ff900739260
> getgrnam("nobody")
>   = 0x7ff900739100
> strcmp("worker_processes", "timer_resolution")
>   = 3
> strcmp("worker_processes", "worker_processes")
>   = 0
> strcmp("thread_pool", "load_module")
>   = 8

Re: naxsi as a dynamic module error on nginx 1.10.0

2016-04-28 Thread Anoop Alias
get
`objs/addon/nginx_module/Configuration.o'
objs/Makefile:1645: warning: overriding recipe for target
`objs/addon/nginx_module/ContentHandler.o'
objs/Makefile:1565: warning: ignoring old recipe for target
`objs/addon/nginx_module/ContentHandler.o'
objs/Makefile:1652: warning: overriding recipe for target
`objs/addon/nginx_module/StaticContentHandler.o'
objs/Makefile:1572: warning: ignoring old recipe for target
`objs/addon/nginx_module/StaticContentHandler.o'
objs/Makefile:1659: warning: overriding recipe for target
`objs/addon/ngx_cache_purge-2.3/ngx_cache_purge_module.o'
objs/Makefile:1579: warning: ignoring old recipe for target
`objs/addon/ngx_cache_purge-2.3/ngx_cache_purge_module.o'
cc -c -pipe  -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror
-g -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protect
##


Thanks,
Anoop

On Thu, Apr 28, 2016 at 11:29 AM, Andrew Hutchings <ahutchi...@nginx.com> wrote:
> Hi Anoop,
>
> The "config" file that comes with the source of a module is a shell script
> that is executed by our build system. If it modifies things inside our build
> system then there isn't a lot we can do for that.
>
> Things have improved with the way you define dynamic modules in that file
> but it is still possible to break the build with it.
>
> I am out of the office today but I'll see if I can reproduce the issue
> tomorrow and pin down the exact cause.
>
> Kind Regards
> Andrew
>
>
> On 28/04/16 06:42, Anoop Alias wrote:
>>
>> the passenger community is not aware of any issues where passenger
>> breaks other modules.
>>
>> Pardon me if I am wrong - I am not a c programmer so my knowledge here
>> is limited. But shouldn't nginx offer a mechanism by which one module
>> should not be interfering with loading of another module .
>>
>> I have not seen similar issues in the apache world and the apxs  seem
>> to be facilitating loading of multiple modules from various developers
>> without any issue.
>>
>>
>>
>> On Wed, Apr 27, 2016 at 9:24 PM, Andrew Hutchings <ahutchi...@nginx.com>
>> wrote:
>>>
>>> Hi Anoop,
>>>
>>> Yes, it would probably be better to contact their community. I would also
>>> recommend trying the latest GitHub checkout of their 5.0 branch as the
>>> changes there may have already fixed it.
>>>
>>> Kind Regards
>>> Andrew
>>>
>>>
>>> On 27/04/16 16:52, Anoop Alias wrote:
>>>>
>>>>
>>>> Hi Andrew,
>>>>
>>>> Yes you are correct . Without passenger naxsi is loading and working
>>>> fine.
>>>>
>>>> So I should be contacting passenger list with the error right?
>>>>
>>>> Thank you,
>>>> Anoop
>>>>
>>>>
>>>>
>>>> On Wed, Apr 27, 2016 at 8:03 PM, Andrew Hutchings <ahutchi...@nginx.com>
>>>> wrote:
>>>>>
>>>>>
>>>>> Hi Anoop,
>>>>>
>>>>> This looks to me like another module has broken the linking a bit.
>>>>> Possibly
>>>>> Passenger given the symbols triggering the error and the fact they
>>>>> released
>>>>> a fix for their module linking 8 days ago.
>>>>>
>>>>> Can you try compiling without Passenger and then starting NGINX to see
>>>>> if
>>>>> this fixes it?
>>>>>
>>>>> Kind Regards
>>>>> Andrew
>>>>>
>>>>>
>>>>> On 27/04/16 14:59, Anoop Alias wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> I build naxsi-0.55rc1 as a module for nginx 1.10.0 and getting the
>>>>>> following 2 different error on centos7 and centos6
>>>>>>
>>>>>> Error on Centos6
>>>>>> nginx: [emerg] dlopen() "/etc/nginx/modules/ngx_http_naxsi_module.so"
>>>>>> failed (/etc/nginx/modules/ngx_http_naxsi_module.so: undefined symbol:
>>>>>> pp_get_app_type_name) in /etc/nginx/conf.d/dynamic_modules.conf:1
>>>>>>
>>>>>> # nginx -V nginx version: nginx/1.10.0 built by gcc 4.8.2 20140120
>>>>>> (Red Hat 4.8.2-15) (GCC)built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS
>>>>>> SNI support enabled configure arguments: --prefix=/etc/nginx
>>>>>> --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules
>>>>>> --conf-path=/etc/nginx/ngin

Re: naxsi as a dynamic module error on nginx 1.10.0

2016-04-27 Thread Anoop Alias
The Passenger community is not aware of any issues where Passenger
breaks other modules.

Pardon me if I am wrong - I am not a C programmer, so my knowledge here
is limited. But shouldn't nginx offer a mechanism by which one module
cannot interfere with the loading of another module?

I have not seen similar issues in the Apache world, and apxs seems
to facilitate loading multiple modules from various developers
without any issue.



On Wed, Apr 27, 2016 at 9:24 PM, Andrew Hutchings <ahutchi...@nginx.com> wrote:
> Hi Anoop,
>
> Yes, it would probably be better to contact their community. I would also
> recommend trying the latest GitHub checkout of their 5.0 branch as the
> changes there may have already fixed it.
>
> Kind Regards
> Andrew
>
>
> On 27/04/16 16:52, Anoop Alias wrote:
>>
>> Hi Andrew,
>>
>> Yes you are correct . Without passenger naxsi is loading and working fine.
>>
>> So I should be contacting passenger list with the error right?
>>
>> Thank you,
>> Anoop
>>
>>
>>
>> On Wed, Apr 27, 2016 at 8:03 PM, Andrew Hutchings <ahutchi...@nginx.com>
>> wrote:
>>>
>>> Hi Anoop,
>>>
>>> This looks to me like another module has broken the linking a bit.
>>> Possibly
>>> Passenger given the symbols triggering the error and the fact they
>>> released
>>> a fix for their module linking 8 days ago.
>>>
>>> Can you try compiling without Passenger and then starting NGINX to see if
>>> this fixes it?
>>>
>>> Kind Regards
>>> Andrew
>>>
>>>
>>> On 27/04/16 14:59, Anoop Alias wrote:
>>>>
>>>>
>>>> I build naxsi-0.55rc1 as a module for nginx 1.10.0 and getting the
>>>> following 2 different error on centos7 and centos6
>>>>
>>>> Error on Centos6
>>>> nginx: [emerg] dlopen() "/etc/nginx/modules/ngx_http_naxsi_module.so"
>>>> failed (/etc/nginx/modules/ngx_http_naxsi_module.so: undefined symbol:
>>>> pp_get_app_type_name) in /etc/nginx/conf.d/dynamic_modules.conf:1
>>>>
>>>> # nginx -V nginx version: nginx/1.10.0 built by gcc 4.8.2 20140120
>>>> (Red Hat 4.8.2-15) (GCC)built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS
>>>> SNI support enabled configure arguments: --prefix=/etc/nginx
>>>> --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules
>>>> --conf-path=/etc/nginx/nginx.conf
>>>> --error-log-path=/var/log/nginx/error_log
>>>> --http-log-path=/var/log/nginx/access_log
>>>> --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock
>>>> --http-client-body-temp-path=/var/cache/nginx/client_temp
>>>> --http-proxy-temp-path=/var/cache/nginx/proxy_temp
>>>> --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
>>>> --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
>>>> --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody
>>>> --group=nobody --with-http_ssl_module --with-http_realip_module
>>>> --with-http_addition_module --with-http_sub_module
>>>> --with-http_dav_module --with-http_flv_module --with-http_mp4_module
>>>> --with-http_gunzip_module --with-http_gzip_static_module
>>>> --with-http_random_index_module --with-http_secure_link_module
>>>> --with-http_stub_status_module --with-http_auth_request_module
>>>> --add-dynamic-module=naxsi-0.55rc1/naxsi_src --with-file-aio
>>>> --with-threads --with-stream --with-stream_ssl_module
>>>> --with-http_slice_module --with-ipv6 --with-http_v2_module
>>>> --add-dynamic-module=ngx_pagespeed-release-1.11.33.0-beta
>>>> --with-cc=/opt/rh/devtoolset-2/root/usr/bin/gcc
>>>>
>>>>
>>>> --add-module=/usr/local/rvm/gems/ruby-2.3.0/gems/passenger-5.0.27/src/nginx_module
>>>> --add-module=ngx_cache_purge-2.3 --with-cc-opt='-O2 -g -pipe -Wall
>>>> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
>>>> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'
>>>> --with-ld-opt=-Wl,-E
>>>>
>>>> Error on Centos7
>>>>
>>>> nginx -t nginx: [emerg] dlopen()
>>>> "/etc/nginx/modules/ngx_http_naxsi_module.so" failed
>>>> (/etc/nginx/modules/ngx_http_naxsi_module.so: undefined symbol:
>>>> psg_variant_map_new) in /etc/nginx/conf.d/dynamic_modules.conf:1
>>>>
>>>> # nginx -V nginx version: nginx/1.10.0 built by gcc 4.8.5 20150623
>>

Re: naxsi as a dynamic module error on nginx 1.10.0

2016-04-27 Thread Anoop Alias
Hi Andrew,

Yes, you are correct. Without Passenger, naxsi is loading and working fine.

So I should be contacting the Passenger list with the error, right?

Thank you,
Anoop



On Wed, Apr 27, 2016 at 8:03 PM, Andrew Hutchings <ahutchi...@nginx.com> wrote:
> Hi Anoop,
>
> This looks to me like another module has broken the linking a bit. Possibly
> Passenger given the symbols triggering the error and the fact they released
> a fix for their module linking 8 days ago.
>
> Can you try compiling without Passenger and then starting NGINX to see if
> this fixes it?
>
> Kind Regards
> Andrew
>
>
> On 27/04/16 14:59, Anoop Alias wrote:
>>
>> I build naxsi-0.55rc1 as a module for nginx 1.10.0 and getting the
>> following 2 different error on centos7 and centos6
>>
>> Error on Centos6
>> nginx: [emerg] dlopen() "/etc/nginx/modules/ngx_http_naxsi_module.so"
>> failed (/etc/nginx/modules/ngx_http_naxsi_module.so: undefined symbol:
>> pp_get_app_type_name) in /etc/nginx/conf.d/dynamic_modules.conf:1
>>
>> # nginx -V nginx version: nginx/1.10.0 built by gcc 4.8.2 20140120
>> (Red Hat 4.8.2-15) (GCC)built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS
>> SNI support enabled configure arguments: --prefix=/etc/nginx
>> --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules
>> --conf-path=/etc/nginx/nginx.conf
>> --error-log-path=/var/log/nginx/error_log
>> --http-log-path=/var/log/nginx/access_log
>> --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock
>> --http-client-body-temp-path=/var/cache/nginx/client_temp
>> --http-proxy-temp-path=/var/cache/nginx/proxy_temp
>> --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
>> --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
>> --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody
>> --group=nobody --with-http_ssl_module --with-http_realip_module
>> --with-http_addition_module --with-http_sub_module
>> --with-http_dav_module --with-http_flv_module --with-http_mp4_module
>> --with-http_gunzip_module --with-http_gzip_static_module
>> --with-http_random_index_module --with-http_secure_link_module
>> --with-http_stub_status_module --with-http_auth_request_module
>> --add-dynamic-module=naxsi-0.55rc1/naxsi_src --with-file-aio
>> --with-threads --with-stream --with-stream_ssl_module
>> --with-http_slice_module --with-ipv6 --with-http_v2_module
>> --add-dynamic-module=ngx_pagespeed-release-1.11.33.0-beta
>> --with-cc=/opt/rh/devtoolset-2/root/usr/bin/gcc
>>
>> --add-module=/usr/local/rvm/gems/ruby-2.3.0/gems/passenger-5.0.27/src/nginx_module
>> --add-module=ngx_cache_purge-2.3 --with-cc-opt='-O2 -g -pipe -Wall
>> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
>> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'
>> --with-ld-opt=-Wl,-E
>>
>> Error on Centos7
>>
>> nginx -t nginx: [emerg] dlopen()
>> "/etc/nginx/modules/ngx_http_naxsi_module.so" failed
>> (/etc/nginx/modules/ngx_http_naxsi_module.so: undefined symbol:
>> psg_variant_map_new) in /etc/nginx/conf.d/dynamic_modules.conf:1
>>
>> # nginx -V nginx version: nginx/1.10.0 built by gcc 4.8.5 20150623
>> (Red Hat 4.8.5-4) (GCC)built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS
>> SNI support enabled configure arguments: --prefix=/etc/nginx
>> --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules
>> --conf-path=/etc/nginx/nginx.conf
>> --error-log-path=/var/log/nginx/error_log
>> --http-log-path=/var/log/nginx/access_log
>> --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock
>> --http-client-body-temp-path=/var/cache/nginx/client_temp
>> --http-proxy-temp-path=/var/cache/nginx/proxy_temp
>> --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
>> --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
>> --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody
>> --group=nobody --with-http_ssl_module --with-http_realip_module
>> --with-http_addition_module --with-http_sub_module
>> --with-http_dav_module --with-http_flv_module --with-http_mp4_module
>> --with-http_gunzip_module --with-http_gzip_static_module
>> --with-http_random_index_module --with-http_secure_link_module
>> --with-http_stub_status_module --with-http_auth_request_module
>> --add-dynamic-module=naxsi-0.55rc1/naxsi_src --with-file-aio
>> --with-threads --with-stream --with-stream_ssl_module
>> --with-http_slice_module --with-ipv6 --with-http_v2_module
>> --add-dynamic-module=ngx_pagespeed-release-1.11.33.0-beta
>>
>> --add-module=/usr/local/rvm/gems/ruby-2.3.0/gems/passenger-5.0.27/src/nginx_module
&

naxsi as a dynamic module error on nginx 1.10.0

2016-04-27 Thread Anoop Alias
I built naxsi-0.55rc1 as a dynamic module for nginx 1.10.0 and am getting the
following two different errors on CentOS 7 and CentOS 6.

Error on Centos6
nginx: [emerg] dlopen() "/etc/nginx/modules/ngx_http_naxsi_module.so"
failed (/etc/nginx/modules/ngx_http_naxsi_module.so: undefined symbol:
pp_get_app_type_name) in /etc/nginx/conf.d/dynamic_modules.conf:1

# nginx -V nginx version: nginx/1.10.0 built by gcc 4.8.2 20140120
(Red Hat 4.8.2-15) (GCC)built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS
SNI support enabled configure arguments: --prefix=/etc/nginx
--sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules
--conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error_log
--http-log-path=/var/log/nginx/access_log
--pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody
--group=nobody --with-http_ssl_module --with-http_realip_module
--with-http_addition_module --with-http_sub_module
--with-http_dav_module --with-http_flv_module --with-http_mp4_module
--with-http_gunzip_module --with-http_gzip_static_module
--with-http_random_index_module --with-http_secure_link_module
--with-http_stub_status_module --with-http_auth_request_module
--add-dynamic-module=naxsi-0.55rc1/naxsi_src --with-file-aio
--with-threads --with-stream --with-stream_ssl_module
--with-http_slice_module --with-ipv6 --with-http_v2_module
--add-dynamic-module=ngx_pagespeed-release-1.11.33.0-beta
--with-cc=/opt/rh/devtoolset-2/root/usr/bin/gcc
--add-module=/usr/local/rvm/gems/ruby-2.3.0/gems/passenger-5.0.27/src/nginx_module
--add-module=ngx_cache_purge-2.3 --with-cc-opt='-O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
--param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'
--with-ld-opt=-Wl,-E

Error on Centos7

nginx -t nginx: [emerg] dlopen()
"/etc/nginx/modules/ngx_http_naxsi_module.so" failed
(/etc/nginx/modules/ngx_http_naxsi_module.so: undefined symbol:
psg_variant_map_new) in /etc/nginx/conf.d/dynamic_modules.conf:1

# nginx -V nginx version: nginx/1.10.0 built by gcc 4.8.5 20150623
(Red Hat 4.8.5-4) (GCC)built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS
SNI support enabled configure arguments: --prefix=/etc/nginx
--sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules
--conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error_log
--http-log-path=/var/log/nginx/access_log
--pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody
--group=nobody --with-http_ssl_module --with-http_realip_module
--with-http_addition_module --with-http_sub_module
--with-http_dav_module --with-http_flv_module --with-http_mp4_module
--with-http_gunzip_module --with-http_gzip_static_module
--with-http_random_index_module --with-http_secure_link_module
--with-http_stub_status_module --with-http_auth_request_module
--add-dynamic-module=naxsi-0.55rc1/naxsi_src --with-file-aio
--with-threads --with-stream --with-stream_ssl_module
--with-http_slice_module --with-ipv6 --with-http_v2_module
--add-dynamic-module=ngx_pagespeed-release-1.11.33.0-beta
--add-module=/usr/local/rvm/gems/ruby-2.3.0/gems/passenger-5.0.27/src/nginx_module
--add-module=ngx_cache_purge-2.3 --with-cc-opt='-O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
--param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'
--with-ld-opt=-Wl,-E

If naxsi loading is disabled, everything works.

The NAXSI changelog for 0.55rc1 at https://github.com/nbs-system/naxsi/releases

states

"Confirmed support as a dynamic module (introduced in nginx 1.9.11)"

Just wanted to know if this is an issue with NAXSI itself or something
to do with my configure args for nginx.

Thank you,

-- 
Anoop P Alias

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Help for nginx proxy pass rule

2016-04-06 Thread Anoop Alias
Are you sure the location block is being matched?

The error "abc.com/static/js/widget.js" failed (2: No such file or directory)
suggests nginx is trying to access that file locally and not via the proxy.
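
One thing worth trying - assuming some other (regex) location for static
files is winning over this prefix match - is the ^~ modifier so the proxied
location takes precedence; a sketch:

location ^~ /static/ {
    proxy_pass https://www.xys.org/static/;
    proxy_set_header Host www.xys.org;
}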

On Wed, Apr 6, 2016 at 4:14 PM, Roni Baby  wrote:
> HI,
>
>
>
> We have a WordPress site named it as https://www.abc.com; We are using Nginx
> as the web server for this site. I wanted to create a proxy pass rule like
> this
>
>
>
> https://www.abc.com/static/js/widget.js will be load from
> https://www.xys.org/static/js/widget.js without changing URL
>
>
>
> Here is the configuration that I configured for this requirement in the
> www.abc.com Nginx site configuration file
>
>
>
> location /static {
>
> proxy_pass  https://www.xys.org/static;
>
> proxy_set_header Host  www.xys.org;
>
> proxy_set_header X-Real-IP $remote_addr;
>
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>
> proxy_set_header X-Forwarded-Proto $scheme;
>
> }
>
>
>
> When I access https://www.abc.com/static/js/widget.js,  I am getting Nginx
> 404 Not Found error with the following error log in its log file
>
> 2016/04/06 06:39:14 [error] 20107#0: *907 open()
> "/usr/share/nginx/html/www.abc.com/static/js/widget.js" failed (2: No such
> file or directory), client: 118.102.223.138, server: www.abc.com, request:
> "GET /static/js/widget.js HTTP/1.1", host: www.abc.com
>
> I have tried different Nginx proxy pass configuration but not success yet.
> It will be good to get your thought to fix this issues
>
> Thanks
>
> Roni
>
>
>
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



-- 
Anoop P Alias

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: convert $msec to sec

2016-04-03 Thread Anoop Alias
Thanks Valentin. That works.

Can you explain
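
For the record, wiring it up looks roughly like this - a sketch, the log
format name and fields are arbitrary:

# http{} context: capture everything in $msec before the dot into $sec
map $msec $sec {
    ~^(?P<_sec>.+)\. $_sec;
}

log_format epoch '$remote_addr [$sec] "$request" $status';
access_log /var/log/nginx/access.log epoch;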

On Sun, Apr 3, 2016 at 6:10 PM, Валентин Бартенев <vb...@nginx.com> wrote:

> On Sunday 03 April 2016 11:31:08 Anoop Alias wrote:
> > I need to log the seconds since epoch (without the millisecond
> resolution)
> > in the access_log file
> >
> > is there is a way to convert the $msec to seconds or drop the exponential
> > part of the time . Probably using the map function?.
> >
>
> It's easy with the map directive:
>
> map $msec $sec {
> ~^(?P<_sec>.+)\. $_sec;
> }
>
>
>   wbr, Valentin V. Bartenev
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

convert $msec to sec

2016-04-03 Thread Anoop Alias
I need to log the seconds since epoch (without the millisecond resolution)
in the access_log file.

Is there a way to convert $msec to seconds, or to drop the fractional
part of the time? Probably using the map function?

Thank you,
-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Nginx Tuning.

2016-02-24 Thread Anoop Alias
You should check the nginx error log as it may have vital clues to resolve
this.



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: directio sendfile aio

2016-01-22 Thread Anoop Alias
I think the weird issue I mentioned had something to do with ngx_pagespeed
using a memcached backend while memcached was not running. It is working fine
now that memcached is running.

Somehow the sendfile and directio settings were affecting that. As I
mentioned, the issue went away when only one of sendfile or directio was
enabled while memcached was not running (I think pagespeed falls back to a
file-based cache if memcached is not running).

Right now with

sendfile on;
sendfile_max_chunk 512k;
aio threads=iopool;
directio 4m;

and memcached running, I don't see any issues.

If memcached is not running (it is used by pagespeed), the above settings
produce weird errors that go away if directio and sendfile are used in a
mutually exclusive fashion.



#
The book is NGINX High Performance
by Rahul Sharma.

You can check the exact section on page 53, available in Google Books as a
sample.
#

So the setting

sendfile on;
sendfile_max_chunk 512k;
aio threads=iopool;   #thread_pool iopool is defined in the main context
directio 4m;


is good ?
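
For completeness, the iopool referenced above is defined in the main
(top-level) context along these lines - the thread and queue counts here are
just examples:

thread_pool iopool threads=32 max_queue=65536;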


On Fri, Jan 22, 2016 at 3:35 PM, Valentin V. Bartenev <vb...@nginx.com>
wrote:

> On Friday 22 January 2016 14:38:13 Anoop Alias wrote:
> > From an nginx book i read setting
> >
>
> What's the name of the book?
>
>
> > ###
> > http {
> >
> > sendfile on;
> > sendfile_max_chunk 512k;
> > aio threads=default;
> > directio 4m;
> > 
> >
> > is good as it use (if i understand it correctly)
>
> In some specific use case scenarios these settings can be good.
>
>
> >
> > sendfile for files less than 4m and directio for files larger than 4m
> >
> > But the above config is causing issues like static css files images etc
> not
> > being served. I am not sure what exactly is the issue But commenting out
> >
> > directio from the above fix it or commenting out sendfile fix it .
> >
> > But adding them both creates a mess.
> >
> > The question is is the above combination valid and if yes what might be
> > causing  the issue .
> >
>
> Could you provide the full configuration and a debug log
> (see http://nginx.org/en/docs/debugging_log.html)?
>
> I'm unable to reproduce any issues on a simple configuration
> example with the settings above.
>
>   wbr, Valentin V. Bartenev
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

directio sendfile aio

2016-01-22 Thread Anoop Alias
From an nginx book I read, the following setting

###
http {

sendfile on;
sendfile_max_chunk 512k;
aio threads=default;
directio 4m;


is good, as it uses (if I understand it correctly)

sendfile for files smaller than 4m and directio for files larger than 4m.

But the above config is causing issues like static CSS files, images, etc. not
being served. I am not sure what exactly the issue is, but commenting out

directio from the above fixes it, and so does commenting out sendfile.

But adding them both creates a mess.

The question is: is the above combination valid, and if yes, what might be
causing the issue?


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: directio sendfile aio

2016-01-22 Thread Anoop Alias
My use case is a mixed mass-hosting environment where some vhosts may be
serving large files and some may be serving small files, and where adding
something like a location /video with directio enabled is not practical, as
I, being the web host, may not know whether a vhost user is serving video,
etc.

In such cases, do you recommend using something like

sendfile on;
sendfile_max_chunk 512k;
aio threads=default;
directio 100m;

in the http context? The logic being:

files of 100m or less are served with sendfile, and anything larger than
100m (in which case it has a high chance of being a multimedia file) is
served via directio.

Part of these settings is derived from what I understood to be good from
https://www.nginx.com/blog/thread-pools-boost-performance-9x/


-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: config reverse proxy nginx to apache2

2016-01-02 Thread Anoop Alias
Is this Apache behind nginx, or nginx behind Apache?

Whichever is the case, the rule is that the frontend (the server
terminating port 443) needs to have the certificate configured, as web
browsers need to talk to it over SSL. So, in short, if nginx is the frontend
it must have the SSL certificate even though Apache (if it is the proxy
backend) also has SSL on it.

All your individual vhosts need individual SSL entries. If two vhosts use the
same cert, the only advantage is that you can reuse the same filenames.
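
A minimal sketch of one such SSL-terminating vhost in front of Apache (the
names, certificate paths and backend port are placeholders):

server {
    listen 443 ssl;
    server_name a.domain.org;

    ssl_certificate     /etc/ssl/certs/a.domain.org.crt;
    ssl_certificate_key /etc/ssl/private/a.domain.org.key;

    location / {
        proxy_pass http://127.0.0.1:8080;  # Apache backend port (placeholder)
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}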



On Sat, Jan 2, 2016 at 3:27 PM, Thierry  wrote:

> Bonjour,
>
>  I  have  made  some  modification on my  nginx reverse proxy
>  server.
>
>  I have add these lines:
>
>  listen 445;
>  server_name *.domain.org;
>  ssl on;
>  ssl_certificate /etc/ssl/certs/file.crt; (same as apache)
>  ssl_certificate_key   /etc/ssl/private/file.key;  (same  as
>  apache)
>  ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
>
> I  have  access to my web server from outside, but I do not
> understand how
> the ssl certificate is managed.
>
> Why  do  I  need  to add on nginx those certificates ? This is
> already handled by my apache server through his vhosts.
>
> How  to  deal  when  I have three vhosts, 2 have the same ssl
> certificate but the third one his using a different one.
>
> Thx
>
>
>
> --
> Cordialement,
>  Thierry  e-mail : lenai...@maelenn.org
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: nginx modsecurity on Debian 8

2015-12-23 Thread Anoop Alias
nginx -V will show the configure arguments. You need to add the ModSecurity
module to whatever is in there and rebuild.
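
Roughly, the rebuild goes like this - a sketch; replace <existing configure
arguments> with exactly what nginx -V prints on your system:

nginx -V                  # note the "configure arguments:" line
cd nginx-1.6.2            # same source version as the installed binary
./configure <existing configure arguments> --add-module=/opt/ModSecurity-nginx
make
make install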


On Wed, Dec 23, 2015 at 5:51 PM, Thierry  wrote:

> Hi,
>
> A bit lost ...
> I know nothing concerning nginx, I am more confortable with Apache2.
> I am using an email server who is using nginx on debian 8.
> I would need to install modsecurity as module.
> I have understood that I need to compile from the working directory of
> nginx 
>
> ./configure --add-module=/opt/ModSecurity-nginx
>
> But how to deal with it if nginx as been installed from binary (debian
> package) ?
>
> I have followed these instructions:
>
>  $ sudo dnf install gcc-c++ flex bison curl-devel curl yajl yajl-devel
> GeoIP-devel doxygen
> $ cd /opt/
> $ git clone https://github.com/SpiderLabs/ModSecurity
> $ cd ModSecurity
> $ git checkout libmodsecurity
> $ sh build.sh
> $ ./configure
> $ make
> $ make install
> $ cd /opt/
> $ git clone https://github.com/SpiderLabs/ModSecurity-nginx
> $ cd /opt/Modsecurity-nginx
> $ git checkout experimental
> $ cd /opt/
> ***
> $ wget http://nginx.org/download/nginx-1.9.2.tar.gz
> $ tar -xvzf nginx-1.9.2.tar.gz
> $ yum install zlib-devel
> ***
> $ ./configure --add-module=/opt/ModSecurity-nginx
>
>
>
> Everything went fine until the last ./configure 
> I  didn't  apply  what's  between  " *** " because my nginx server is
> already installed and working.
>
> Any ideas ?
>
> Thx
> --
> Cordialement,
>  Thierry  e-mail : lenai...@maelenn.org
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: nginx modsecurity on Debian 8

2015-12-23 Thread Anoop Alias
Append to the configure command you already mentioned (./configure
--add-module=/opt/ModSecurity-nginx) the following:

--with-cc-opt='-g -O2 -fstack-protector-strong -Wformat
-Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt=-Wl,-z,relro
--prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf
--http-log-path=/var/log/nginx/access.log
--error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock
--pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body
--http-fastcgi-temp-path=/var/lib/nginx/fastcgi
--http-proxy-temp-path=/var/lib/nginx/proxy
--http-scgi-temp-path=/var/lib/nginx/scgi
--http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit
--with-ipv6 --with-http_ssl_module --with-http_stub_status_module
--with-http_realip_module --with-http_auth_request_module
--with-http_addition_module --with-http_dav_module --with-http_geoip_module
--with-http_gzip_static_module --with-http_image_filter_module
--with-http_spdy_module --with-http_sub_module --with-http_xslt_module
--with-mail --with-mail_ssl_module
--add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-auth-pam
--add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-dav-ext-module
--add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-echo
--add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-upstream-fair
--add-module=/tmp/buildd/nginx-1.6.2/debian/modules/ngx_http_substitutions_filter_module


##

One problem I see here is that you need to place the added modules at
their exact paths, for
example /tmp/buildd/nginx-1.6.2/debian/modules/nginx-upstream-fair.
Otherwise you will have to modify those paths accordingly. You need to
install the build dependencies for nginx too.

Also, you might be able to use the 1.8.0 stable version.

Follow
https://www.digitalocean.com/community/tutorials/how-to-add-ngx_pagespeed-module-to-nginx-in-debian-wheezy
- the difference is that you are adding mod_security instead of pagespeed.



On Wed, Dec 23, 2015 at 6:14 PM, Thierry  wrote:

> What I have ...
> Could you please explain to me what do I have to do ? I do not understand
> ...
> Sorry
>
> nginx version: nginx/1.6.2
> TLS SNI support enabled
> configure arguments: --with-cc-opt='-g -O2 -fstack-protector-strong
> -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2'
> --with-ld-opt=-Wl,-z,relro --prefix=/usr/share/nginx
> --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log
> --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock
> --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body
> --http-fastcgi-temp-path=/var/lib/nginx/fastcgi
> --http-proxy-temp-path=/var/lib/nginx/proxy
> --http-scgi-temp-path=/var/lib/nginx/scgi
> --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit
> --with-ipv6 --with-http_ssl_module --with-http_stub_status_module
> --with-http_realip_module --with-http_auth_request_module
> --with-http_addition_module --with-http_dav_module --with-http_geoip_module
> --with-http_gzip_static_module --with-http_image_filter_module
> --with-http_spdy_module --with-http_sub_module --with-http_xslt_module
> --with-mail --with-mail_ssl_module
> --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-auth-pam
> --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-dav-ext-module
> --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-echo
> --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-upstream-fair
> --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/ngx_http_substitutions_filter_module
>
> > nginx -V will show configure arguments. You need to add mod_sec at
> > the beginning of whatever is in there.
>
>
>
>
> > On Wed, Dec 23, 2015 at 5:51 PM, Thierry  wrote:
>
> > Hi,
> >
> >  A bit lost ...
> >  I know nothing concerning nginx, I am more confortable with Apache2.
> >  I am using an email server who is using nginx on debian 8.
> >  I would need to install modsecurity as module.
> >  I have understood that I need to compile from the working directory of
> >  nginx 
> >
> >  ./configure --add-module=/opt/ModSecurity-nginx
> >
> >  But how to deal with it if nginx as been installed from binary (debian
> >  package) ?
> >
> >  I have followed these instructions:
> >
> >   $ sudo dnf install gcc-c++ flex bison curl-devel curl yajl yajl-devel
> GeoIP-devel doxygen
> >  $ cd /opt/
> >  $ git clone https://github.com/SpiderLabs/ModSecurity
> >  $ cd ModSecurity
> >  $ git checkout libmodsecurity
> >  $ sh build.sh
> >  $ ./configure
> >  $ make
> >  $ make install
> >  $ cd /opt/
> >  $ git clone https://github.com/SpiderLabs/ModSecurity-nginx
> >  $ cd /opt/Modsecurity-nginx
> >  $ git checkout experimental
> >  $ cd /opt/
> >  ***
> >  $ wget http://nginx.org/download/nginx-1.9.2.tar.gz
> >  $ tar -xvzf nginx-1.9.2.tar.gz
> >  $ yum install zlib-devel
> >  

Re: PHP and CGI on UserDir

2015-11-29 Thread Anoop Alias
What does the nginx error log (  /var/log/nginx/error.log) say when you
access a php page?
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

NAXSI directive for fixing this error

2015-11-07 Thread Anoop Alias
Hi,

I had an issue with nginx compiled with the NAXSI WAF.

#
nginx: [emerg] could not build the wlr_url_hash, you should increase
wlr_url_hash_bucket_size: 512
nginx: [emerg] $URL hashtable init failed in /etc/nginx/nginx.conf:87
nginx: [emerg] WhiteList Hash building failed in /etc/nginx/nginx.conf:87
nginx: configuration file /etc/nginx/nginx.conf test failed
###

I can't seem to find the directive to increase this wlr_url_hash_bucket_size.

tried

wlr_url_hash_bucket_size 1024;

but it says the directive is invalid.

The naxsi docs don't seem to include this.

Thanks in advance.

-- 
*Anoop P Alias*
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
