Re: Dropped https client connection doesn't drop backend proxy_pass connection

2013-03-18 Thread Robert Mueller

> > When an https client drops it's connection, the upstream http proxy
> > connection is not dropped. If nginx can't detect an https client
> > disconnect properly, that must mean it's leaking connection information
> > internally doesn't it?
> 
> No.  It just can't say if a connection was closed or not as there 
> are pending data in the connection, and it can't read data (there 
> may be a pipelined request).  Therefore in this case, being on the 
> safe side, it assumes the connection isn't closed and doesn't try 
> to abort upstream request.

Oh right, I see now.

So the underlying problem is that the nginx stream-layer abstraction
isn't clean enough to take low-level OS events and map them through the
SSL layer into conceptual read/write/eof events as needed. Instead you
need an OS-level "eof" event, which you then assume maps through the
SSL abstraction layer to an SSL stream eof event.

OK, so I had a look at the kqueue eof handling, and at what's needed for
epoll eof handling, and created a quick patch that seems to work.

Can you check it out and see if it looks right? If so, any chance you
can incorporate it upstream?

http://robm.fastmail.fm/downloads/nginx-epoll-eof.patch

If there's anything you want changed, let me know and I'll try and fix
it up.

Rob



Need to proxypass to different servers.

2013-03-18 Thread Geo P.C.
We have 3 servers with Nginx as the web server. The setup is as follows:

Server1: Proxy server
Server2: App server 1
Server3: App server 2

On both app servers, port 80 is accessible only from the proxy server.

We need to set things up so that requests for geotest.com go to the proxy
server and are then proxy_passed to app server 1, while requests for
geotest.com/cms go to the proxy server and then to app server 2.

In other words, on the proxy server, geotest.com and all its subdirectories
(geotest.com/*) should go to app server 1, except geotest.com/cms and its
subdirectories, which should go to app server 2.

Please let us know how we can configure this.

On the proxy server we set it up as follows, but it is not working:

server {
    listen 80;
    server_name geotest.com;

    location / {
        proxy_pass http://app1.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /ui {
        proxy_pass http://app2.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
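
For comparison, a minimal sketch of the routing described above, matching /cms
rather than /ui (app1.com and app2.com are kept as placeholder upstream names;
untested):

location /cms {
    proxy_pass http://app2.com;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}

location / {
    proxy_pass http://app1.com;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}

With prefix locations the longest match wins, so /cms and everything under it
go to app server 2, and all other requests fall through to app server 1.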



Can anyone please help us with it?

Thanks

Geo

Re: securing access to a folder - 404 error

2013-03-18 Thread mottwsc
I was able to get partway through the problem with some help. The basic
problem was that I had been missing a root directive in one of the location
blocks. I was advised to (and did) move the root statement up to the server
block and comment it out of the sub-blocks. This now works as it should to
protect the /secure folder when viewing HTML files, but it does not when
viewing PHP files in the /secure folder (it just bypasses authentication and
displays the file). I must be missing something in the PHP block, I guess,
but I'm not sure what that would be.

Any suggestions?

Here's the entire nginx config file:

CODE
--
server {
listen   80;
server_name  mm201.myserver.com;

root  /var/www/html;

#charset koi8-r;
#access_log  /var/log/nginx/log/host.access.log  main;

location / {
#   root   /var/www/html;
# this statement allows static content to be served first
try_files $uri $uri/ /index.php;

index  index.php  index.html index.htm;
}

# protect the "secure" folder  ( /var/www/html/secure )
location /secure/ {
#   root  /var/www/html;
auth_basic "Restricted";
auth_basic_user_file /var/www/protected/.htpasswd;
#   auth_basic_user_file /var/www/html/secure/.htpasswd;
}

# protect the "munin" folder  ( /var/www/html/munin ) and subfolders
location ^~ /munin/ {
auth_basic "Restricted";
auth_basic_user_file /var/www/protected/.htpasswd;
}

error_page  404  /404.html;
location = /404.html {
#   root   /var/www/html;
}

# redirect server error pages to the static page /50x.html
#
error_page   500 502 503 504  /50x.html;
location = /50x.html {
#   root   /var/www/html;
}

# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
#proxy_pass   http://127.0.0.1;
#}

# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
#   root/var/www/html;
try_files $uri =404;
# the above was inserted to block malicious code uploads, but nginx and
# the php-fcgi workers must be on the same physical server

fastcgi_pass   127.0.0.1:9000;
fastcgi_index  index.php;
fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
include        fastcgi_params;
}

# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
location ~ /\.ht {
deny  all;
}
}
--
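
For anyone hitting the same symptom: the regex PHP location matches
/secure/*.php before the /secure/ prefix location is considered, so the auth
there never applies to PHP requests. One possible fix, sketched and untested
against this exact setup, is to handle PHP inside a ^~ /secure/ block so the
outer regex location is skipped:

location ^~ /secure/ {
    auth_basic "Restricted";
    auth_basic_user_file /var/www/protected/.htpasswd;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }
}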

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,237196,237518#msg-237518



How to change cookie header in a filter?

2013-03-18 Thread Cool
Hi,

What's the right way to change the incoming Cookie header so that the
upstream gets it just as if it came from the user's original request
header? For example, the user's browser sends:

Cookie: PHPSESSID=34406134e25e5e07727f5de6d5ff7aa3; __utmc=78548747

and I want it to be:

Cookie: PHPSESSID=34406134e25e5e07727f5de6d5ff7aa3; __utmc=78548747;
mycookie=something

when upstream processes the request.

I'm trying to migrate an Apache HTTPd module to nginx. It's more or less
like mod_usertrack
(http://httpd.apache.org/docs/2.2/mod/mod_usertrack.html), but I need to
implement my own logic to enforce compatibility among Apache, Nginx,
IIS, and Jetty.

The question is: for a first-time visitor, the incoming request does
not have mycookie in the header. I can detect this and generate the
cookie and a Set-Cookie header in the response; however, I also need to
change the incoming Cookie header so that the upstream (php-fpm now, but
I'm guessing it should be the same for all other upstreams) can get this
generated "mycookie" as well.

I tried adding a new entry to r->headers_in.cookies but it does not
work; I also tried r->headers_in.headers, but no luck either.
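
In case a pure configuration workaround is acceptable instead of touching
headers_in, something along these lines may get the same effect
($generated_cookie is a placeholder for whatever value your logic computes;
untested):

# for proxied upstreams:
proxy_set_header Cookie "$http_cookie; mycookie=$generated_cookie";

# for FastCGI upstreams such as php-fpm (assumption: an explicitly set
# HTTP_COOKIE parameter takes precedence over the header copied from the
# request):
fastcgi_param HTTP_COOKIE "$http_cookie; mycookie=$generated_cookie";

# note: if the client sent no Cookie header at all, $http_cookie is empty
# and the value above starts with "; ", which may need handling.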

Thanks,

-C

Re: securing access to a folder - 404 error

2013-03-18 Thread Francis Daly
On Sun, Mar 10, 2013 at 04:07:23PM -0400, mottwsc wrote:

Hi there,

> I'm trying to secure a directory on a CentOS 6.3 64 server running NGINX
> 1.2.7.  I think I've set this up correctly, but it keeps giving me a 404 Not
> Found error when I try to access a file in that folder in the browser using
> domainName/secure/hello2.html.

A 404 error from nginx for a local file should usually show something in
the error log. Is there anything there?

> I even moved the .htpasswd
> file into the /secure/ folder and changed the config file to reflect that
> change (just to see what would happen), but I still get the 404 Not Found
> error.  
> 
> Can anyone tell me what I'm missing?

I get 401 if I don't give the right credentials, and 403 if the passwd
file is missing or if the requested file is not readable. But the only
way I get 404 is if the file requested does not exist.

What "root" directive is effective in this location{}?

f
-- 
Francis Daly        fran...@daoine.org



Re: Strange $upstream_response_time latency spikes with reverse proxy

2013-03-18 Thread Jay Oster
Hi Maxim,

On Sun, Mar 17, 2013 at 4:42 AM, Maxim Dounin  wrote:

> Hello!
>
> On "these hosts"?  Note that listen queue aka backlog size is
> configured in _applications_ which call listen().  At a host level
> you may only configure somaxconn, which is maximum allowed listen
> queue size (but an application may still use anything lower, even
> just 1).
>

"These hosts" means we have a lot of servers in production right now, and
they all exhibit the same issue. It hasn't been a showstopper, but it's
been occurring for as long as anyone can remember. The total number of
upstream servers on a typical day is 6 machines (each running 3 service
processes), and hosts running nginx account for another 4 machines. All of
these are Ubuntu 12.04 64-bit VMs running on AWS EC2 m3.xlarge instance
types.

I was under the impression that /proc/sys/net/ipv4/tcp_max_syn_backlog was
for configuring the maximum queue size on the host. It's set to 1024 here,
and increasing the number doesn't change the frequency of the missed
packets.

/proc/sys/net/core/somaxconn is set to 500,000

> Make sure to check actual listen queue sizes used on listen
> sockets involved.  On Linux (you are using Linux, right?) this
> should be possible with "ss -nlt" (or "netstat -nlt").


According to `ss -nlt`, the send-q on these ports is 128 and the recv-q on
all ports is 0. I don't know what this means for recv-q; does it use a
default? And would the default be 1024?

But according to `netstat -nlt`, both queues are 0?
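
(For what it's worth, for a listening socket `ss -lnt` reports the configured
backlog in the Send-Q column, so the 128 is most likely whatever the listening
process passed to listen(). On the nginx side the backlog can be raised per
listen directive, as in the sketch below with an illustrative value; the
upstream services would need the equivalent change in their own listen()
calls.)

server {
    listen 80 backlog=1024;   # accept-queue size for this address:port
    server_name example.com;  # placeholder
}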


> > > 2) Some other queue in the network stack is exhausted.  This
> > > might be nontrivial to track (but usually possible too).
> >
> > This is interesting, and could very well be it! Do you have any
> > suggestions on where to start looking?
>
> I'm not a Linux expert, but quick search suggests it should be
> possible with dropwatch, see e.g. here:
>
>
> http://prefetch.net/blog/index.php/2011/07/11/using-netstat-and-dropwatch-to-observe-packet-loss-on-linux-servers/


Thanks for the tip! I'll take some time to explore this some more. And
before anyone asks, I'm not using iptables or netfilter. That appears to be
a common cause for TCP overhead when investigating similar issues.

Jay

Re: 502 Bad Gateway- Nginx and thin

2013-03-18 Thread Francis Daly
On Mon, Mar 18, 2013 at 08:53:10PM +0100, Yunior Miguel A. wrote:

Hi there,

I see that I was wrong in assuming how thin works.

> thin configuration:

> servers: 1
> socket: /tmp/thin.sock

It looks like the *actual* file name used for the socket can add a
".integer" before the final "." in the configured socket file name,
with the integer going from 0 up to one below the number of servers.

So this bit:

> ngin configuration:
> 
> upstream thin_cluster {
> server unix:/tmp/thin.0.sock;
># server unix:/tmp/thin.1.sock;
># server unix:/tmp/thin.2.sock;
> }

is correct, despite looking odd to my non-thin eyes.

It's good that you found a configuration that works for you.

From the initial error message reported, I can only assume that thin
wasn't running in that mode at the time you tested.
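
In other words, with that naming the pieces are expected to line up roughly
like this (a sketch for servers: 3; paths taken from the thread):

# thin side (YAML config):
#   servers: 3
#   socket: /tmp/thin.sock   -> thin creates /tmp/thin.0.sock .. /tmp/thin.2.sock

# nginx side:
upstream thin_cluster {
    server unix:/tmp/thin.0.sock;
    server unix:/tmp/thin.1.sock;
    server unix:/tmp/thin.2.sock;
}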

f
-- 
Francis Daly        fran...@daoine.org



Re: 502 Bad Gateway- Nginx and thin

2013-03-18 Thread basti

thin => socket: /tmp/thin.sock
nginx => server unix:/tmp/thin.0.sock;

would be the problem, I think.

On 18.03.2013 20:53, Yunior Miguel A. wrote:

Thanks for all. The final configuration:

thin configuration:

chdir: /var/www/redmine/
environment: production
address: 127.0.0.1
port: 3000
timeout: 30
log: /var/log/thin/gespro.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 512
require: []
wait: 30
servers: 1
daemonize: true
socket: /tmp/thin.sock
group: www-data
user: www-data

nginx configuration:

upstream thin_cluster {
    server unix:/tmp/thin.0.sock;
    # server unix:/tmp/thin.1.sock;
    # server unix:/tmp/thin.2.sock;
}

server {

    listen 80; ## listen for ipv4

    # Set appropriately for virtual hosting and to use server_name_in_redirect
    server_name redmine.ipp.uci.cu;
    server_name_in_redirect off;

    access_log /var/log/nginx/localhost.access.log;
    error_log /var/log/nginx/localhost.error.log;

    # Note: Documentation says proxy_set_header should work in a location
    #   block, but testing did not support this statement, so it has
    #   been placed here in the server block
    include /etc/nginx/proxy_opts;
    proxy_redirect off;

    # Note: Must match the prefix used in the Thin configuration for Redmine,
    #   or / if no prefix is configured
    location / {
        root /var/www/redmine/public;

        error_page 404 404.html;
        error_page 500 502 503 504 500.html;
        try_files $uri/index.html $uri.html $uri @redmine_thin_servers;
    }

    location @redmine_thin_servers {
        proxy_pass http://thin_cluster;
    }
}

thanks for all.





Re: 502 Bad Gateway- Nginx and thin

2013-03-18 Thread Yunior Miguel A.
Thanks for all. The final configuration:

thin configuration:

chdir: /var/www/redmine/
environment: production
address: 127.0.0.1
port: 3000
timeout: 30
log: /var/log/thin/gespro.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 512
require: []
wait: 30
servers: 1
daemonize: true
socket: /tmp/thin.sock
group: www-data
user: www-data

nginx configuration:

upstream thin_cluster {
    server unix:/tmp/thin.0.sock;
    # server unix:/tmp/thin.1.sock;
    # server unix:/tmp/thin.2.sock;
}

server {

    listen 80; ## listen for ipv4

    # Set appropriately for virtual hosting and to use server_name_in_redirect
    server_name redmine.ipp.uci.cu;
    server_name_in_redirect off;

    access_log /var/log/nginx/localhost.access.log;
    error_log /var/log/nginx/localhost.error.log;

    # Note: Documentation says proxy_set_header should work in a location
    #   block, but testing did not support this statement, so it has
    #   been placed here in the server block
    include /etc/nginx/proxy_opts;
    proxy_redirect off;

    # Note: Must match the prefix used in the Thin configuration for Redmine,
    #   or / if no prefix is configured
    location / {
        root /var/www/redmine/public;

        error_page 404 404.html;
        error_page 500 502 503 504 500.html;
        try_files $uri/index.html $uri.html $uri @redmine_thin_servers;
    }

    location @redmine_thin_servers {
        proxy_pass http://thin_cluster;
    }
}

thanks for all.

-- 
Posted via http://www.ruby-forum.com/.



Surge 2013 CFP Open

2013-03-18 Thread Katherine Jeschke
The Surge 2013 CFP is open. For details or to submit a paper, please visit
http://surge.omniti.com/2013

-- 
Katherine Jeschke
Director of Marketing and Creative Services
OmniTI Computer Consulting, Inc.
11830 West Market Place, Suite F
Fulton, MD 20759
O: 240-646-0770, 222
F: 301-497-2001
C: 443/643-6140
omniti.com
Surge 2013 

The information contained in this electronic message and any attached
documents is privileged, confidential, and protected from disclosure.  If
you are not the intended recipient, note that any review, disclosure,
copying, distribution, or use of the contents of this electronic message or
any attached documents is prohibited. If you have received this
communication in error, please destroy it and notify us immediately by
telephone (1-443-325-1360) or by electronic mail (i...@omniti.com). Thank
you.

Re: 502 Bad Gateway- Nginx and thin

2013-03-18 Thread Francis Daly
On Mon, Mar 18, 2013 at 05:13:12PM +0100, Yunior Miguel A. wrote:

Hi there,

> I am put the same port and change  server unix:/tmp/thin.sock;
> 
> and the log is the same.

I'm not sure what exact configuration and log you are looking at in this
test, but if your nginx.conf says "unix:/tmp/thin.sock" and your nginx
error log says "connect() to unix:/tmp/thin.0.sock failed", then your
actually-running nginx is not using that nginx.conf.

Alternatively, if your nginx.conf says "unix:/tmp/thin.sock" and your
nginx error log says "connect() to unix:/tmp/thin.sock failed", then
that strongly suggests that your thin is not listening on that socket.

So either make sure that thin is doing what you expect, or make sure
that nginx is doing what you expect, depending on what your test showed.

f
-- 
Francis Daly        fran...@daoine.org



custom error page with errors in it

2013-03-18 Thread RedFoxy
Hi all!
I set up some rules for various web errors; those rules show a custom error
page:

error_page  500 502 503 504 /50x.html;
location = /50x.html
{
    root /etc/nginx/_conf/error-page;
}

error_page  404 /404.html;
location = /404.html
{
    root /etc/nginx/_conf/error-page;
}

That's OK, but is there a way to add some information about the error to the
pages? In Apache there was some "Apache code" to include in the HTML page to
add the web server's error output to the page; is there something like that
in nginx?
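
One possibility, sketched and untested: enable SSI for the error page
location and let the page itself print nginx variables.

error_page  500 502 503 504 /50x.html;
location = /50x.html
{
    root /etc/nginx/_conf/error-page;
    ssi on;
    # inside 50x.html you could then use, for example:
    #   <!--# echo var="status" -->  or  <!--# echo var="request_uri" -->
}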

Thanks

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,237503,237503#msg-237503



rewrite for missing images

2013-03-18 Thread RedFoxy
Hello!
I want to do a specific rewrite for missing images. I already have a 404 page
for all other missing pages; now I want a rewrite that shows a placeholder
image when the missing resource is an image.

Is that possible?
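
A sketch of one way to do it (the extension list and the fallback path are
placeholders):

location ~* \.(png|jpe?g|gif|ico)$ {
    try_files $uri /images/missing.png;
}

Requests for image paths that don't exist would then get /images/missing.png
instead of the generic 404 page.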

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,237502,237502#msg-237502



Re: 502 Bad Gateway- Nginx and thin

2013-03-18 Thread Yunior Miguel A.
When I go to /tmp/thin.0.sock, that file exists. Sometimes when I restart
thin, it prints:
/var/lib/gems/1.9.1/gems/activesupport-3.2.12/lib/active_support/dependencies.rb:251:in
`block in require': iconv will be deprecated in the future, use
String#encode instead.

I put the same port and changed to server unix:/tmp/thin.sock;

and the log is the same.

-- 
Posted via http://www.ruby-forum.com/.



Re: 502 Bad Gateway- Nginx and thin

2013-03-18 Thread Maxim Dounin
Hello!

On Mon, Mar 18, 2013 at 04:12:58PM +0100, Yunior Miguel A. wrote:

> I have install nginx 1.1.19 and thin 1.5.0  in Ubuntu 12.04 and I am
> install redmine, when I try to access the page of redmine gives me the
> following error: 502 Bad Gateway. In Nginx log reads:
> 2013/03/18 10:45:09 [crit] 13886#0: *11 connect() to
> unix:/tmp/thin.0.sock failed (2: No such file or directory) while
> connecting to upstream, client: 127.0.0.1, server:
> redmine_nginx.ipp.com, request: "GET / HTTP/1.1", upstream:
> "http://unix:/tmp/thin.0.sock:/";, host: "redmine_nginx.ipp.com"
> thin configuration:

[...]

This:

> socket: /tmp/thin.sock

doesn't match this:

> server unix:/tmp/thin.0.sock;

and the mismatch explains the problem.

-- 
Maxim Dounin
http://nginx.org/en/donation.html



Re: 502 Bad Gateway- Nginx and thin

2013-03-18 Thread Francis Daly
On Mon, Mar 18, 2013 at 04:12:58PM +0100, Yunior Miguel A. wrote:

Hi there,

> I have install nginx 1.1.19 and thin 1.5.0  in Ubuntu 12.04 and I am
> install redmine, when I try to access the page of redmine gives me the
> following error: 502 Bad Gateway. In Nginx log reads:
> 2013/03/18 10:45:09 [crit] 13886#0: *11 connect() to
> unix:/tmp/thin.0.sock failed (2: No such file or directory) while

Where did you tell thin to listen?

Where did you tell nginx that thin would be listening?

Are they the same?

f
-- 
Francis Daly        fran...@daoine.org



502 Bad Gateway- Nginx and thin

2013-03-18 Thread Yunior Miguel A.
I have installed nginx 1.1.19 and thin 1.5.0 on Ubuntu 12.04 and installed
Redmine. When I try to access the Redmine page it gives me the
following error: 502 Bad Gateway. The nginx log reads:
2013/03/18 10:45:09 [crit] 13886#0: *11 connect() to
unix:/tmp/thin.0.sock failed (2: No such file or directory) while
connecting to upstream, client: 127.0.0.1, server:
redmine_nginx.ipp.com, request: "GET / HTTP/1.1", upstream:
"http://unix:/tmp/thin.0.sock:/", host: "redmine_nginx.ipp.com"
thin configuration:

chdir: /var/www/redmine_nginx/
environment: production
address: 0.0.0.0
port: 3000
timeout: 30
log: /var/log/thin/redmine.log
#pid: /var/run/thin/redmine.pid
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 512
require: []
wait: 30
servers: 1
daemonize: true
socket: /tmp/thin.sock
group: www-data
user: www-data

Nginx server block:
upstream thin_cluster {
server unix:/tmp/thin.0.sock;

}

server {

  listen   80; ## listen for ipv4

  # Set appropriately for virtual hosting and to use server_name_in_redirect
  server_name  redmine_nginx.ipp.com;
  server_name_in_redirect off;

  access_log  /var/log/nginx/localhost.access.log;
  error_log  /var/log/nginx/localhost.error.log;

  # Note: Documentation says proxy_set_header should work in location
  #   block, but testing did not support this statement so it has
  #   been placed here in server block
  include /etc/nginx/proxy_opts;
  proxy_redirect off;

  # Note:  Must match the prefix used in Thin configuration for Redmine
  #or / if no prefix configured
  location / {
root   /var/www/redmine_nginx/public;

error_page 404  404.html;
error_page 500 502 503 504  500.html;
  try_files $uri/index.html $uri.html $uri @redmine_thin_servers;
  }

  location @redmine_thin_servers {
proxy_pass http://thin_cluster;
  }
}

The included /etc/nginx/proxy_opts:

# Shared options used by all proxies
proxy_set_header   Host $http_host;

# Following headers are not used by Redmine but may be useful for
# plugins and other web applications
proxy_set_header   X-Real-IP $remote_addr;
proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header   X-Forwarded-Proto $scheme;

# Any other options for all proxies here
client_max_body_size   10m;
client_body_buffer_size128k;

proxy_connect_timeout  90;
proxy_send_timeout 90;
proxy_read_timeout 90;

proxy_buffer_size  4k;
proxy_buffers  4 32k;
proxy_busy_buffers_size64k;
proxy_temp_file_write_size 64k;

-- 
Posted via http://www.ruby-forum.com/.



Re: Reverse Proxy Data Diode

2013-03-18 Thread WBrown
From: "Camayoc" 

> I'd argue the commercial solutions are value for money, given the
> complexities. 

Not to mention most organizations that would need such a device like 
having someone to hold accountable (usually via lawsuit) when it fails.



Confidentiality Notice: 
This electronic message and any attachments may contain confidential or 
privileged information, and is intended only for the individual or entity 
identified above as the addressee. If you are not the addressee (or the 
employee or agent responsible to deliver it to the addressee), or if this 
message has been addressed to you in error, you are hereby notified that 
you may not copy, forward, disclose or use any part of this message or any 
attachments. Please notify the sender immediately by return e-mail or 
telephone and delete this message from your system.



Re: upstream keepalive with upstream hash

2013-03-18 Thread beaufour
Maxim Dounin Wrote:
> > I'm suspecting that the upstream hash module "takes over" the upstream
> > handling, and thus ignores the 'keepalive' keyword, but I'm randomly
> > guessing.
> 
> Quoting http://nginx.org/r/keepalive:
> 
> : When using load balancer methods other than the default 
> : round-robin, it is necessary to activate them before the keepalive 
> : directive.

Doh! That seems to do the trick. Thanks!
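
For the archives, a minimal sketch of that ordering (the hash key and the
server addresses are illustrative):

upstream backend {
    hash $request_uri;    # activate the balancing method first
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    keepalive 32;         # then enable the keepalive connection cache
}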

Allan

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,237417,237489#msg-237489



Re: nginx + my module crashes only when ignore client abort = on

2013-03-18 Thread gadh
I changed to pcalloc as you told me and the crash seems to be solved!
Thanks a lot.
Gad

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,237362,237488#msg-237488



Re: nginx + my module crashes only when ignore client abort = on

2013-03-18 Thread gadh
> Note though, that by changing headers_in structure you are 
> responsible for it's consistency.  It's usually much better idea 
> to use upstream functionality to create needed request to an 
> upstream instead (proxy_set_body, proxy_pass_headers and so on).
> 

But can I wait for the upstream to return and delay the request from passing
on to the backend, as I do in my subrequest?
When I use your suggested proxy directives I have no control over that.

Gad

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,237362,237485#msg-237485



Re: Simple question about proxy cache

2013-03-18 Thread John Moore
On 17/03/13 23:08, Maxim Dounin wrote:
> Hello!
>
> On Sun, Mar 17, 2013 at 08:08:39PM +, John Moore wrote:
>
>> I've used nginx as a reverse proxy server for a long while but I've not
>> tried out the proxy cache until today, and I have to say I'm a little
>> bit confused by what I'm seeing in the cache log, and I'm wondering
>> whether I've set things up correctly. My requirements are actually
>> pretty simple. I have a couple of locations which I want to proxy to
>> another server and cache the results. Thus:
>>
>> location /media/house_images/{
>>   proxy_pass http://backend;
>>   proxy_cache one;
>>}
>>
>>location /media/boat_images/{
>>   proxy_pass http://backend;
>>   proxy_cache one;
>>}
>>
>>
>> Apart from this, I don't want any cacheing of responses to be done. I am
>> assuming that the default is NOT to cache unless a cache zone is
>> specified (at the server or location level, presumably), so either
>> omitting a proxy_cache or specifying 'proxy_cache off' should be
>> sufficient to achieve this, should it not?
> Yes, without proxy_cache (or with "proxy_cache off") configured in
> a location cache won't be used.
>
>> Two things are puzzling me, though. Firstly, in the cache log, I'm
>> seeing the URLs of all kinds of requests which SHOULD NOT be cached, and
>> I'm wondering whether all requests are logged whether they're cached or
>> not - I certainly hope this is the case and it's not actually cacheing
>> these responses. I would definitely prefer to only see entries in the
>> log for requests matching locations for which a cache has been
>> specified. I presume this is possible?
> You can configure logs for a specific location, see
> http://nginx.org/r/access_log.
>
>> Secondly, the very requests which I would expect to be cached are all
>> showing up in the log with the word 'MISS' in the $upstream_cache_status
>> column.
> This usually happens if your backend doesn't specify allowed cache
> time (in this case, proxy_cache_valid should be used to set one,
> see http://nginx.org/r/proxy_cache_valid) or if backend responses
> doesn't allow cache to be used (either directly with
> Cache-Control/Expires headers, or indirectly with Set-Cookie
> header, see http://nginx.org/r/proxy_ignore_headers).
>

Excellent - thanks, Maxim! That's got me sorted now, it all seems to be 
working as planned.
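
For anyone who lands on this thread with the same MISS symptom, a minimal
sketch pulling the points above together (the 10m lifetime is illustrative,
and the "one" zone is assumed to be declared with proxy_cache_path elsewhere):

location /media/house_images/ {
    proxy_pass        http://backend;
    proxy_cache       one;
    # needed when the backend sends no Cache-Control/Expires:
    proxy_cache_valid 200 302 10m;
    # optionally, ignore backend headers that would otherwise prevent caching:
    # proxy_ignore_headers Cache-Control Expires Set-Cookie;
}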



Re: nginx + my module crashes only when ignore client abort = on

2013-03-18 Thread Maxim Dounin
Hello!

On Mon, Mar 18, 2013 at 01:40:24AM -0400, gadh wrote:

> thanks Maxim ! i very appreciate your help on this.
> about the temp file - i protect from a response to be written to a file by
> knowing the max size that can be sent by the server and enlarging the proxy
> buffers accordingly.

You are not initializing subrequest's request_body->temp_file 
pointer (among other request_body members).  It might point 
anywhere, and will cause problems.

> i know i ruin the original request header - its the main purpose for my code
> ! i want to issue an independant subrequest to another server, no to to the
> original. but the r->main->... is not ruined and acting ok afterwards.

Yes, indeed.

Note though, that by changing the headers_in structure you are
responsible for its consistency.  It's usually a much better idea
to use upstream functionality to create the needed request to an
upstream instead (proxy_set_body, proxy_pass_headers and so on).

> in any case, i ask you to support this subrequest mechasnim, its obviously
> needed to send a subrequest to any server, not just to the original one, and
> also to control its response instead of just adding it to the start/end of
> page, its alot more flexible.
> can i use another mechanism in order to achive those goals ? to create a new
> upstream module ?

What is supported is subrequest in memory functionality, which 
allows you to get the response in memory instead of appending it 
to the response.  It only works with certain upstream protocols 
though.  And it wasn't supposed to work at arbitrary request 
processing phases, so it might be non-trivial to do things 
properly, in particular - ensure subrequest consistency at early 
phases of request processing and to rerun the main request once 
subrequest is complete.

-- 
Maxim Dounin
http://nginx.org/en/donation.html



Find Duplicate Files and Free Up Disk Space

2013-03-18 Thread shajalalmia2
I faced the same problem: my computer has many duplicate files,
so I need help. Please tell me your idea.

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,237481,237481#msg-237481
