Re: how to completely disable request body buffering

2016-09-01 Thread Phani Sreenivasa Prasad
Hi B.R

Please find the nginx configuration below that we are using; any help
would be greatly appreciated.

nginx -V
=
nginx version: nginx/1.8.0
built with OpenSSL 1.0.2h-fips  3 May 2016
TLS SNI support enabled
configure arguments: --crossbuild=Linux::arm
--with-cc=arm-linux-gnueabihf-gcc --with-cpp=arm-linux-gnueabihf-gcc
--with-cc-opt='-pipe -Os -gdwarf-4 -mfpu=neon
--sysroot=/work/autobuild/project_hub_release/nginx/service/001.1635A/sol_aux_build/sbq_sysroot'
--with-ld-opt=--sysroot=/work/autobuild/project_hub_release/nginx/service/001.1635A/sol_aux_build/sbq_sysroot
--prefix=/usr --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/sbin/nginx
--pid-path=/var/run/nginx.pid --lock-path=/var/run/lock/nginx.lock
--error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log
--http-client-body-temp-path=/var/tmp/nginx/client-body
--http-proxy-temp-path=/var/tmp/nginx/proxy
--http-fastcgi-temp-path=/var/tmp/nginx/fastcgi
--http-scgi-temp-path=/var/tmp/nginx/scgi
--http-uwsgi-temp-path=/var/tmp/nginx/uwsgi --user=www-data --group=www-data
--with-ipv6 --with-http_ssl_module --with-http_gzip_static_module
--with-debug

nginx.conf
=
worker_processes  1;

events {
worker_connections  1024;
}

http {
include   mime.types;
default_type  application/octet-stream;

sendfile on;

keepalive_timeout  65;

server {
listen   80;
listen   [::]:80;
listen   8080;
listen   [::]:8080;
listen   127.0.0.1:14200;   # USB port
listen   443 ssl;
listen   [::]:443 ssl;
listen   127.0.0.1:14199;   # for internal LEDM requests to bypass authentication check
listen   127.0.0.1:6015;    # websocket internal port to talk to nginx

server_name  localhost;

include /project/ram/secutils/*.conf;

include /project/rom/httpmgr_nginx/*.conf;

fastcgi_param PATH_INFO $fastcgi_path_info;
include fastcgi_params;

error_page   500 502 503 504  /50x.html;
location = /50x.html {
root   html;
}
}

}

proj_server.conf:
=

server {
  listen [::]:5678 ssl ipv6only=off;

  ssl_certificate /project/rw/cert_svc/dev_cert.pem;
  ssl_certificate_key /mnt/encfs/cert_svc/dev_key.pem;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

  gzip on;
  gzip_types *;
  gzip_min_length 0;

  # If the incoming request body is greater than client_max_body_size,
  # NGINX will return 413 Request Entity Too Large.
  # Setting to 0 will disable this size check.
  client_max_body_size 0;

  # By default, NGINX will try to buffer up the entire request body
  # before sending it to the backend server.
  # Turning it off should stop this behavior and pass the request on
  # immediately.
  fastcgi_request_buffering off;

  # By default, NGINX will try to buffer up the entire response before
  # sending it to the client.
  # Turning it off should stop this behavior and pass the response on
  # immediately.
  fastcgi_buffering off;

  # Default timeout is 60s and there is no way to disable the read
  # timeout.
  # If a read has not been performed in the specified interval
  # a 504 response is sent from NGINX to the client.
  # This could happen if there is a flow stoppage in the upstream.
  fastcgi_read_timeout 7d;

  # Default timeout is 60s and there is no way to disable the send
  # timeout.
  # If NGINX has not sent data to the FastCGI server in the specified
  # interval,
  # a 504 response is sent from NGINX to the client.
  # This could happen if there is a flow stoppage in the upstream.
  fastcgi_send_timeout 7d;

  # This server's listen directive says to use SSL on port 5678.
  # When HTTP requests come to an SSL port, NGINX throws a 497 "HTTP
  # Request Sent to HTTPS Port" error.
  # Since our requests will be HTTP on port 5678, NGINX will throw
  # error code 497.
  # To fix this, when NGINX throws 497, we tell it to use the status
  # code from the upstream server.
  error_page 497 = $request_uri;

  fastcgi_param PATH_INFO $fastcgi_path_info;
  fastcgi_param HOST $host;
  include fastcgi_params;

  location = /path/to/resource1 {
  fastcgi_pass 127.0.0.1:14052;
  }

  location = /path/to/resource2 {
  fastcgi_pass 127.0.0.1:14052;
  }

}
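
For clarity, the buffering-related directives scattered through this file,
taken in isolation, are:

  client_max_body_size 0;         # no request-body size limit
  fastcgi_request_buffering off;  # stream the request body to the upstream
  fastcgi_buffering off;          # stream the response body to the client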

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,269196,269353#msg-269353

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: how to completely disable request body buffering

2016-08-26 Thread B.R.
fastcgi_request_buffering does deactivate request buffering from what I
understand from the docs.
client_body_buffer_size is said to be useful/used only when the previous
directive is activated.
From what I read, it seems your configuration attempts failed to load or to
be activated where needed.

Could you provide us with a minimal loaded configuration reproducing the
problem (i.e. buffering while you configured it not to do so), through the
use of nginx -V?
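
For instance, stripped down to the relevant part, such a minimal
configuration could be as small as (port and upstream address here are
placeholders, not your real values):

  server {
      listen 5678;
      location / {
          fastcgi_request_buffering off;
          fastcgi_pass 127.0.0.1:9000;
          include fastcgi_params;
      }
  }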
---
*B. R.*

On Fri, Aug 26, 2016 at 7:19 AM, phani prasad  wrote:

> Hi all,
>
> for one of our products we have chosen nginx as our webserver and are
> using FastCGI to talk to the upstream (application) layer. We have a
> usecase where the client sends a huge payload, typically in MB, and nginx
> is quick enough reading all the data and buffering it, whereas our
> upstream server is quite slow in consuming the data. This resulted in a
> timeout on the client side, since the upstream can't respond with a
> status code until it finishes reading the complete payload. Additional
> information: the request is chunked.
>
> To address this we have tried several options, but nothing worked out so
> far.
>
> 1. We turned off fastcgi_request_buffering by setting it to off.
>
> This only prevents nginx from buffering the request payload into a temp
> file before forwarding it to the application/upstream. But it still uses
> some buffer chains to write to the upstream.
>
> 2. Setting client_body_buffer_size.
>
> This only checks whether the request body size is larger than
> client_body_buffer_size; if it is, nginx writes the whole or part of the
> body to a file.
> How does this work in case of chunked request body?
> What is the max chunk size that nginx can allocate?
> What if the upstream is slow in consuming the data? Does nginx still try
> to writev the chain of buffers to the pipe?
> What is the maximum number of chain buffers nginx maintains to buffer the
> request body, and is it configurable?
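>
> As a reduced illustration of the two options above (the upload path,
> upstream address, and buffer value are examples only, not our real
> config), the relevant directives sit together in a location block:
>
>   location /upload {
>       fastcgi_request_buffering off;  # option 1: no temp-file buffering
>       client_body_buffer_size 16k;    # option 2: in-memory buffer size
>       fastcgi_pass 127.0.0.1:9000;    # placeholder upstream address
>       include fastcgi_params;
>   }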
>
>
> What other options can we try out? We want to completely disable request
> body buffering and stream the data just as it arrives from the client,
> and if the upstream is busy, *nginx should be able to tune itself, in the
> sense that it should wait to read further data from the client until the
> upstream is ready to be written to.*
>
> Any help is much appreciated as this is blocking one of our product
> certifications.
>
>
> Thanks
> Prasad.
>