Not sure if it's the heat and I'm tired...
Had access logs off for a long time. Decided to start it up again to try
and track down a bot issue.
Added access_log /var/log/nginx/access.log; to my server config.
Restarted. It created a log file. Noticed it has root:root ownership,
while the er
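On the ownership question: the nginx master process typically runs as root and is the process that creates and opens log files, so a freshly created access.log owned by root:root is normal; packaged setups often chown rotated logs to the worker user via logrotate instead. A minimal sketch of the directive in question:

```nginx
# http {} or server {} context; 'combined' is the stock predefined log format
access_log /var/log/nginx/access.log combined;
```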
Just curious if anyone out there is using grunt and how you deal with the
possibility of race conditions, i.e. people trying to access old assets
just as grunt is creating/renaming new ones.
I'd be interested to hear how people are handling this. I heard one
example might be to create a new build d
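One common shape for the "new build directory" idea: have grunt write each build into a fresh directory and atomically repoint a symlink when it finishes, so nginx never serves a half-written tree. A sketch, with hypothetical paths:

```nginx
server {
    # "current" is a symlink the build script swaps atomically once grunt
    # finishes, e.g.: ln -sfn /srv/app/builds/20131208 /srv/app/current
    root /srv/app/current;

    location /assets/ {
        # safe to cache aggressively only if filenames change per build (assumption)
        expires max;
    }
}
```

Note that with open_file_cache enabled, cached descriptors may briefly keep serving files from the old tree after the swap.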
On 2013-12-07 15:31, B.R. wrote:
Hello,
On Sat, Dec 7, 2013 at 6:31 PM, Ian Evans wrote:
Thanks. I'll give this a spin. Is there any way to still trigger the
mapping based on the existence of a maintenance.whatever file? Just
thinking of the ease of quickly touching the maintenance file to
On 2013-12-07 09:58, B.R. wrote:
I am new to the use of maps, but I suppose it would fit perfectly,
using core variables such as the binary IP address:
Maybe something like:
server {
error_page 503 /503.html; # Configuring error page
map $binary_remote_addr $target { # Configuring white-
Getting ready to convert the site to UTF-8 (finally!) and wanted to
know how I could issue error code 503 to all people and bots but still
allow my IP in so I can go 'round the site checking for glitches due to
the change.
Right now I have this implementation for 503's but that issues the
err
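For reference, a complete map-based version of what's suggested above might look like the following (using $remote_addr rather than $binary_remote_addr for readability; the whitelisted IP is a placeholder):

```nginx
# http {} context: flag every client except your own address
map $remote_addr $maintenance {
    default    1;
    192.0.2.10 0;   # placeholder for your own IP
}

server {
    error_page 503 /503.html;
    location = /503.html { internal; }

    location / {
        if ($maintenance) { return 503; }
        # ... normal request handling ...
    }
}
```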
On 2013-12-03 16:39, Francis Daly wrote:
On Tue, Dec 03, 2013 at 04:13:03PM -0500, Ian Evans wrote:
Hi there,
Yesterday, I discovered that someone had registered a site
(basically
taking our domain name and adding a word to it) and then framed our
whole site in theirs. By that I mean it
On 2013-12-03 16:32, Branden Visser wrote:
If they're using an iframe rather than a proxy then IP tricks won't
help.
Using the X-FRAME-OPTIONS header is probably your best bet [1]
Hope that helps,
Branden
[1]
http://stackoverflow.com/questions/2896623/how-to-prevent-my-site-page-to-be-loaded
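For the record, the header suggestion is a one-liner in nginx; SAMEORIGIN still lets your own pages frame each other while telling browsers to refuse third-party frames:

```nginx
# server {} context: browsers supporting the header will not render
# this site inside frames on other origins
add_header X-Frame-Options SAMEORIGIN;
```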
On 2013-12-03 16:15, Ilan Berkner wrote:
One possibility (not Nginx related directly) is to block their IP
address at the firewall level from even getting to your server.
Or add a deny ###.###.###.### to the server block?
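For completeness, the deny syntax is below, though as noted in the reply above it blocks by client address, so it only helps if the framing site proxies requests rather than letting visitors' browsers fetch directly (the address is a placeholder):

```nginx
server {
    deny 203.0.113.42;  # placeholder for the offending server's address
    allow all;
}
```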
Yesterday, I discovered that someone had registered a site (basically
taking our domain name and adding a word to it) and then framed our
whole site in theirs. By that I mean it's a full iframe job, with no
toolbars showing.
Not sure what they're up to, but I'd like to stop it. I know I can us
On 24/11/2013 9:43 AM, Francis Daly wrote:
What does "diff" say about the config on the old server and the config
on the new server?
As I moved to a new server, I split everything from one file to the whole
sites-available format so I'd have to recombine everything. However...
fastcgi_params
Okay, so rule #1 is to never think a server migration will go easy.
As I've said in another thread, I've been running nginx and php-fpm for
years on my site. But I'm moving from a CentOS to an Ubuntu server and
things aren't going as smoothly as they should be.
I've got the non-ssl server worki
On 19/11/2013 4:31 AM, Francis Daly wrote:
> *If* that php is set to always return cookies, then you might want to
> run a separate php-fpm that does not return cookies for the static
> site. But that's a side issue.
Still experimenting with this...
What setting stops the php-fpm from return
On 22/11/2013 4:44 AM, Ian Evans wrote: [snip]
> Scratching my head on this one. I created a file that has just the
> phpinfo(); and looking at the debug log I can see that it appears to hit
> the fastcgi (it returns an x-powered-by header) but returns a blank page:
>
[snip]
>
On Fri, Nov 22, 2013 at 4:06 AM, Francis Daly <fran...@daoine.org> wrote:
On Fri, Nov 22, 2013 at 02:01:52AM -0500, Ian Evans wrote:
Hi there,
> Does the package version not include debug (don't see it with -V)
and if
> so how can I get it through the packa
Been running nginx for _years_ on centos and am in the process of
migrating to an Ubuntu (raring) server.
I've always compiled from source before but figured I'd use the Ubuntu
apt-get install.
For whatever reason, testing is not working at all (I can serve static
files but not PHP) and I set the
On 19/11/2013 4:31 AM, Francis Daly wrote:
*If* that php is set to always return cookies, then you might want to
run a separate php-fpm that does not return cookies for the static
site. But that's a side issue.
Will look into that. Thanks, hadn't thought of it. Did a cursory glance
at Google a
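One way to realise the "separate php-fpm" idea from the quote above is a second pool listening on its own socket (names and paths here are hypothetical); fastcgi_hide_header can also strip cookies at the nginx layer as a belt-and-braces measure:

```nginx
server {
    server_name static.example.com;   # hypothetical cookieless subdomain
    root /srv/static;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # second pool configured not to start sessions (hypothetical socket)
        fastcgi_pass unix:/var/run/php-fpm-static.sock;
        # drop any Set-Cookie the backend emits anyway
        fastcgi_hide_header Set-Cookie;
    }
}
```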
On 18/11/2013 6:26 PM, Francis Daly wrote:
Thanks for your response.
(You can possibly avoid that if you use something like
.../dhe403.shtml/$request_uri and handle it appropriately on the far
side.)
Ok.
It would seem easier to either proxy_pass or fastcgi_pass to the old
server, in an err
On 17/11/2013 3:16 PM, Ian M. Evans wrote:
Migrating to a new server and thought I'd take the time to set up a
cookieless subdomain on it for static files.
In my current setup, 403 errors are sent to a php file which grabs the
$_SERVER["REQUEST_URI"], locates the page of the photo on the site, a
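A sketch of the setup described: route 403s to a PHP handler, which can then read the original URI from $_SERVER['REQUEST_URI'] via the standard fastcgi_params. The script name and socket path are hypothetical:

```nginx
server {
    error_page 403 = /dhe403.php;   # hypothetical PHP handler

    location = /dhe403.php {
        internal;                   # only reachable via error_page
        include fastcgi_params;     # passes REQUEST_URI through to PHP
        fastcgi_param SCRIPT_FILENAME $document_root/dhe403.php;
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }
}
```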
Will soon be migrating from a 2 gig CentOS server to a 4 Gig Ubuntu.
We'll be running nginx (of course), php-fpm, the fastcgi cache, MariaDB
and Zend Opcache.
Though I realize that 4 gigs of RAM is still modest, it is twice what
we've previously had. I'm just curious for some general suggestio
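Not a substitute for measuring, but one concrete knob: the fastcgi cache can be bounded explicitly so it can't crowd out MariaDB and the opcache. All numbers below are guesses to tune against a 4 GB host:

```nginx
# http {} context: cap the on-disk cache and its shared-memory key zone
fastcgi_cache_path /var/cache/nginx levels=1:2
                   keys_zone=SITE:64m      # ~64 MB of keys/metadata in RAM
                   max_size=1g             # disk ceiling for cached responses
                   inactive=60m;           # evict entries unused for an hour
```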
Hi everyone.
First some background. I'm trying to integrate the method used by
Pixabay to handle Google Image Search's new design which makes it very
easy (one button click) for visitors to see an image outside of the
site's context. This has greatly slammed many sites' traffic and income.
T
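Pixabay's exact recipe is in the blog post linked in the reply below; the stock nginx building block for this kind of gating is valid_referers. A sketch that redirects referer-less or foreign image requests to a watermarked copy (paths, domain, and the exact referer policy are all assumptions):

```nginx
location ~* \.(jpe?g|png|gif)$ {
    valid_referers none blocked example.com *.example.com;
    if ($invalid_referer) {
        # hypothetical watermarked variants generated ahead of time
        return 302 /watermarked$request_uri;
    }
}
```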
On 22/02/2013 5:51 AM, Namson Mon wrote:
Hi Ian, we've just published an extensive blog post about how to achieve
this kind of hotlinking protection against Google Images:
http://pixabay.com/en/blog/posts/hotlinking-protection-and-watermarking-for-google-32/
Thanks for the link. I like the i