> I checked the site, everything was normal.  I went in via psql and tried some
> queries for about half an hour and continued to monitor the site.  then I went
> to bed at 7am (EU time).
>
> Why did it shutdown so much later ?

That’s one of the things I talked about. By default PG does a “smart” shutdown: 
to be safe, it stops accepting new connections, and *that’s all*. Existing 
connections are grandfathered in until they disconnect, and only when they’ve 
all gone away does it shut down gracefully. That’s why it happened so much 
later.

pg_ctl -D /my/pg/dir stop -m fast
pg_ctl -D /my/pg/dir start

Is what you wanted.
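
If you ever want to see what a smart shutdown is waiting on, look at 
pg_stat_activity -- but do it from a session you already have open, since once 
the shutdown starts even your new connections are refused. A quick sketch (the 
column names here are the 8.x-era ones; newer releases renamed procpid to pid 
and current_query to query):

psql -d postgres -c "SELECT procpid, usename, current_query FROM pg_stat_activity;"

Anything still listed there is what’s keeping the server up.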

> I have never restarted postgres before, so this was all new to me.  I 
> apologize
> that I wasn't born innately with such knowledge.

Forget about it. But you need to learn your tools. Restarting the DB server is 
something you’ll need to do occasionally. Just like restarting your Django 
proxy or app. You need to be fully knowledgeable about every part of your 
tool-chain, or at least the parts you’re responsible for.

> I double checked opinions against different people and I searched for more 
> docs
> on that param.  Amazingly none of the ones I found commented on the shared
> memory issue and I didn't even understand the docs discussing shared memory
> because it didn't seem to apply to what I was doing.

That’s no coincidence. I’ve seen that complaint when increasing shared_buffers, 
but not for max_fsm_pages. I guess I’m so used to bumping up shmmax and shmall 
that I forget how low default systems leave those values. But you do need to 
increase them, every time; left at the defaults, they cripple your install in 
more ways than just postgres.
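
On Linux that’s two sysctl settings. A minimal sketch with illustrative 
values -- size them to your actual RAM (shmmax is the largest single segment in 
bytes, shmall is total shared memory in pages, usually 4 kB each):

sysctl -w kernel.shmmax=2147483648   # 2 GB max segment size
sysctl -w kernel.shmall=524288       # 2 GB expressed in 4 kB pages

Add the same two lines to /etc/sysctl.conf so they survive a reboot.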

As for your Django install, have you enabled the memcached cache backend? Your 
pages should be lazily cached and hit the DB as rarely as they can. You should 
also rarely be doing count(*) on a 300k-row table, even if everything is cached 
and speedy. 300k-row tables have a nasty habit of becoming 3M-row tables (or 
more) given enough time, and no amount of cache will save you from counting 
that. Eventually it’ll take a second or more every time, and then you’ll be in 
real trouble. That’s an application design issue you need to address before 
it’s too late and you’re forced into a hasty fix.
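
If the app really does need a running total, one common workaround is a counter 
table maintained by triggers, so reads never touch the big table. A rough 
sketch, assuming a hypothetical big table called items (and note that a single 
hot counter row can itself become a contention point under heavy write load):

psql your_db <<'SQL'
-- needs the plpgsql language; on 8.x run "createlang plpgsql your_db" if missing
CREATE TABLE row_counts (table_name text PRIMARY KEY, n bigint NOT NULL);
INSERT INTO row_counts SELECT 'items', count(*) FROM items;

CREATE FUNCTION items_count_trig() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    UPDATE row_counts SET n = n + 1 WHERE table_name = 'items';
  ELSE
    UPDATE row_counts SET n = n - 1 WHERE table_name = 'items';
  END IF;
  RETURN NULL;  -- return value is ignored for AFTER row triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_count AFTER INSERT OR DELETE ON items
  FOR EACH ROW EXECUTE PROCEDURE items_count_trig();
SQL

After that, SELECT n FROM row_counts WHERE table_name = 'items' is instant no 
matter how big the table grows. If an approximate number is good enough, 
stashing the count in memcached and refreshing it periodically works too.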

I suggest setting log_min_duration_statement to 1000, so every query that takes 
longer than one second to execute is logged in your postgres logs. You can use 
that to track down trouble spots before they get really bad. That’s normally 
aggressive enough to catch the real problem queries without flooding your logs 
with too much output.
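
That’s a one-line change in postgresql.conf, and a reload is enough to pick it 
up -- no restart required:

# in postgresql.conf:
#   log_min_duration_statement = 1000   # milliseconds
pg_ctl -D /my/pg/dir reload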

Being a DBA sucks sometimes. ☺

