Thanks for the reply.
I have read all the discussions you suggested.
The main argument there is that multiplexing seems useless when the "keep
alive" feature is used and the backend is fast enough.
That is true! But the real world is more complicated.

Our system is very large: over 5k machines provide services. In our system,
nginx proxies HTTP requests to the HTTP applications using "keep alive", and
it works well: over 10k requests are processed per second, while the number of
TCP connections between nginx and the backends stays below 100. But sometimes
response times rise to 1-10 s or more for a while, perhaps because a DB server
fails over or the network degrades. Since each in-flight request then holds
its own backend connection, over 10k TCP connections need to be set up, as we
have seen. Our backend is written in Java: that many connections cannot be set
up all of a sudden, the memory they need is large, and GC became the
bottleneck. GC kept running even after the DB server or network returned to
normal, and the backend servers no longer worked properly. I have observed
this several times.

      With multiplexing, no additional connections would be needed in such a
situation, and far less memory. We use multiplexing everywhere in our Java
applications, and that experience supports my point.

      Nginx is certainly needed for client HTTP access, so I studied FastCGI
to solve the problem above, but nginx does not support FastCGI multiplexing,
so the same problem can still be triggered.

      In conclusion, a big production system really needs nginx to pass
requests to the backend with multiplexing. Could you ask the nginx development
team to work on it?



From: Sergey Brester [mailto:serg.bres...@sebres.de]
Sent: May 29, 2015 16:40
To: nginx-devel@nginx.org
Cc: 林谡
Subject: Re: Re: problems when use fastcgi_pass to deliver request to backend


Hi,

It's called FastCGI multiplexing, and nginx currently does not implement it
(and I don't know of any plans to).

There have already been several discussions about that, so please read here:
<http://forum.nginx.org/read.php?2,237158>

In short, very fast FastCGI processing may be implemented without multiplexing
(it should also be event-driven).

Regards,
sebres.



On 29.05.2015 09:58, 林谡 wrote:

From ngx_http_fastcgi_module.c, lines 2573-2592
<http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2573>:

        /* we support the single request per connection */

        case ngx_http_fastcgi_st_request_id_hi:
            if (ch != 0) {
                ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                              "upstream sent unexpected FastCGI "
                              "request id high byte: %d", ch);
                return NGX_ERROR;
            }
            state = ngx_http_fastcgi_st_request_id_lo;
            break;

        case ngx_http_fastcgi_st_request_id_lo:
            if (ch != 1) {
                ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                              "upstream sent unexpected FastCGI "
                              "request id low byte: %d", ch);
                return NGX_ERROR;
            }
            state = ngx_http_fastcgi_st_content_length_hi;
            break;

By reading the source code I saw the reason. Can nginx support multiple
requests per connection in the future?
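Just to illustrate what I am asking for (a hypothetical sketch, not a patch;
f->request_id is an assumed field that does not exist in nginx today), the
parser could record the request id instead of rejecting everything but 1:

        /* hypothetical: accept any request id and remember it, so that
         * response records could later be demultiplexed per request */

        case ngx_http_fastcgi_st_request_id_hi:
            f->request_id = ch << 8;              /* assumed field */
            state = ngx_http_fastcgi_st_request_id_lo;
            break;

        case ngx_http_fastcgi_st_request_id_lo:
            f->request_id |= ch;                  /* full 16-bit id */
            state = ngx_http_fastcgi_st_content_length_hi;
            break;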

From: 林谡
Sent: May 29, 2015 11:37
To: 'nginx-devel@nginx.org'
Subject: problems when use fastcgi_pass to deliver request to backend

Hi,
         I wrote a FastCGI server and use nginx to pass requests to it. It
has worked well so far.
         But I found a problem: nginx always sets requestId = 1 when sending
FastCGI records.
         This upset me a little, because according to the FastCGI protocol
the web server may send FastCGI records belonging to different requests
simultaneously, with requestIds that differ and stay unique. I really need
this feature, because then requests can be handled simultaneously over just
one connection.
         Can I find a way out?
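For reference, this is the record header from the FastCGI 1.0 specification;
every record carries a 16-bit requestId (two bytes), which is exactly what
allows records of different requests to be interleaved on a single connection:

        /* FCGI_Header, as defined in the FastCGI 1.0 specification */
        typedef struct {
            unsigned char version;
            unsigned char type;
            unsigned char requestIdB1;       /* request id, high byte */
            unsigned char requestIdB0;       /* request id, low byte  */
            unsigned char contentLengthB1;
            unsigned char contentLengthB0;
            unsigned char paddingLength;
            unsigned char reserved;
        } FCGI_Header;

        /* requestId = (requestIdB1 << 8) + requestIdB0 */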



_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel