I notice error_page is used at the server level while fastcgi_cache_use_stale sits at the http level, so error_page takes precedence over the latter. I would try putting both at the same level and see what happens.
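
Something along these lines (untested, just to illustrate the placement; the directives and values are copied from your simplified config below, only the level they are declared at changes, plus the fastcgi_intercept_errors swap discussed in the quoted messages):

    # untested sketch: cache/stale directives moved next to error_page
    fastcgi_cache_path /dev/shm/nginx_fastcgi_cache levels=1:2 inactive=3d
                       keys_zone=mycache:100m max_size=5000m;  # allowed at http level only

    server {
        server_name domain.com;
        root /var/www/domain.com;

        # now at the same (server) level as error_page
        fastcgi_cache_use_stale error http_500 http_503 timeout updating;
        fastcgi_cache_valid 200 5m;
        fastcgi_cache_valid 404 1m;
        fastcgi_intercept_errors on;  # fastcgi_*, not proxy_*, since the backend is fastcgi_pass

        location / {
            try_files $uri @hhvm_backends;
        }

        location @hhvm_backends {
            fastcgi_pass backend-nodes;  # upstream hhvm backends
            fastcgi_cache mycache;
        }

        error_page 404 @404;
        error_page 500 @500;
        location @404 { echo "404: file not found!"; }  # echo needs the third-party echo module
        location @500 { return 500; }
    }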
I believe error_page is a kind of 'last resort' feature, handling whatever nginx has already classified as an error. Since fastcgi_cache_use_stale can trigger on errors coming from the backend, I would expect it to filter them before nginx considers them errors (and processes them as such, the way the *_intercept_errors directives do). So the only reason I can see for error_page kicking in before fastcgi_cache_use_stale is the higher precedence of the server context over the http one. Test, test, test. :o)
---
*B. R.*

On Thu, Dec 4, 2014 at 6:09 PM, Fry-kun <[email protected]> wrote:

> I am; that was a copy/paste error. The original config is around 3000 lines,
> it was easier to type & copy/paste than cleaning up unnecessary lines.
>
> ~Konstantin
>
> B.R. Wrote:
> -------------------------------------------------------
> > Quick question; I see you are using proxy_intercept_errors. Shouldn't you
> > be using fastcgi_intercept_errors
> > <http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_intercept_errors>
> > ?
> > ---
> > *B. R.*
> >
> > On Thu, Dec 4, 2014 at 2:25 AM, Fry-kun <[email protected]> wrote:
> >
> > > I'm trying to configure my sites to fail over to fastcgi_cache when backends
> > > are unavailable -- but at the same time I want to return nginx errors
> > > (hiding backend errors).
> > >
> > > Here's a simplified version of my current config:
> > >
> > > fastcgi_cache_path /dev/shm/nginx_fastcgi_cache levels=1:2 inactive=3d
> > >     keys_zone=mycache:100m max_size=5000m;
> > > fastcgi_cache_use_stale error http_500 http_503 timeout updating;
> > > fastcgi_cache_valid 200 5m;
> > > fastcgi_cache_valid 404 1m;
> > > proxy_intercept_errors on;
> > > server {
> > >     server_name domain.com;
> > >     root /var/www/domain.com;
> > >     location / {
> > >         try_files $uri @hhvm_backends;
> > >     }
> > >     location @hhvm_backends {
> > >         fastcgi_pass backend-nodes; # upstream hhvm backends
> > >         fastcgi_cache mycache;
> > >         ...
> > >     }
> > >     error_page 404 @404;
> > >     error_page 500 @500;
> > >     location @404 { echo "404: file not found!"; }
> > >     location @500 { return 500; } # default nginx error page
> > > }
> > >
> > > Right now, if the server is down and the location is stale in the cache,
> > > I get the default nginx 500 error page.
> > > According to the debug log, the problem is that error_page handling takes
> > > over before fastcgi_cache_use_stale has a chance to do its thing.
> > >
> > > Is there an easy way to fix this?
> > >
> > > Thanks
> > >
> > > Posted at Nginx Forum:
> > > http://forum.nginx.org/read.php?2,255298,255298#msg-255298
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,255298,255307#msg-255307
