Hi,
On Sat, Nov 16, 2013 at 04:07:19AM +0400, Valentin V. Bartenev wrote:
> The "set" directive isn't something essential, and actually it is just a
> directive from the rewrite module.
> See here how it works:
> http://nginx.org/en/docs/http/ngx_http_rewrite_module.html
> It is evaluated on the rewrite phase.
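A minimal sketch of that behaviour (the names are invented for the example):

    server {
        set $tag "server-level";           # evaluated at the server rewrite phase
        location / {
            set $tag "location-level";     # re-evaluated when this location's rewrite directives run
            return 200 "tag=$tag\n";       # responds with "tag=location-level"
        }
    }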
On Fri, Nov 15, 2013 at 05:58:41PM -0500, justin wrote:
> Can you link to the stackoverflow posts? I wish php-fpm told me what
> happened.
See, my memory. =8-((
It was this link that helped me most:
http://forum.nginx.org/read.php?11,215606,235395
Cheers,
--Toni++
On Thu, Nov 14, 2013 at 01:13:08PM -0500, justin wrote:
> My PHP application went down for a few hours with 502 bad gateway. In the
> nginx error log all I see is:
>
> 2013/11/14 10:02:16 [error] 1466#0: *57964 recv() failed (104: Connection
> reset by peer) while reading response header from upstream
Hi,
to debug my locations, I have a variable in my configuration that I
reference during logging. The log format, included from nginx.conf:
log_format mylogformat '$remote_addr - $remote_user [$time_local] $request '
                       '"$status" $body_bytes_sent "$http_referer" '
                       '"$my_location"';  # closing line reconstructed; the variable name is a guess
Hi Francis,
On Mon, Oct 14, 2013 at 03:23:03PM +0100, Francis Daly wrote:
> In your map, let $is_spider be empty if it is not a spider ("default",
> presumably), and be something else if it is a spider (possibly
> $binary_remote_addr if every client should be counted individually,
> or something else).
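A minimal sketch of that advice (the user-agent pattern, zone name, and rate are invented):

    # empty value = not rate-limited; non-empty value = key into the shared zone
    map $http_user_agent $is_spider {
        default                          "";
        ~*(googlebot|bingbot|yandexbot)  $binary_remote_addr;
    }

    limit_req_zone $is_spider zone=spiders:10m rate=30r/m;

    server {
        location / {
            limit_req zone=spiders burst=5;
        }
    }

Requests whose key evaluates to an empty string are not accounted, so regular visitors pass through unthrottled.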
Hi Nick,
On Sat, Oct 12, 2013 at 04:47:50PM +0300, Nikolaos Milas wrote:
> We'll add virtual RAM and cores. Any other suggestions?
did you investigate disk I/O?
I found this to be the limiting factor. If you have shell access and if
it is a Linux machine, you can run 'top', 'dstat' and 'htop'.
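For instance (standard invocations; watch the I/O-wait figures):

    top          # press '1' for per-CPU view; a high 'wa' column points to disk I/O
    dstat -cd 5  # CPU and disk throughput, sampled every 5 seconds
    htop         # interactive process viewer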
Hello,
On Mon, Oct 14, 2013 at 09:25:24AM -0400, Sylvia wrote:
> Doesn't the robots.txt "Crawl-Delay" directive satisfy your needs?
I already have it in there, but I don't know how long it takes for such a
directive, or any change to robots.txt for that matter, to take effect.
Observing the logs, I'd
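For reference, the directive in question (the delay value is illustrative; note that not every crawler honours Crawl-delay):

    User-agent: *
    Crawl-delay: 10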
Hi,
I would like to put a brake on spiders that are hammering a site with
dynamic content generation. They should still get to see the content,
just not generate excessive load. I therefore constructed a map to
identify spiders, which works well, and then tried to
limit_req_zone $binary_remote_addr
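Presumably the attempt looked roughly like this (zone name and rate are guesses at the truncated line):

    limit_req_zone $binary_remote_addr zone=perip:10m rate=1r/s;

Keying the zone on $binary_remote_addr throttles every client, spiders and humans alike; keying it on the map result instead, as in the reply above, leaves the key empty for non-spiders so they are never accounted.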