Re: [ADMIN] Backup routine

2003-08-11 Thread Bruce Momjian
Christopher Browne wrote:
 The world rejoiced as [EMAIL PROTECTED] (Peter and Sarah Childs) wrote:
  However, there is a third way, which should be safe, though some
  people may disagree with me: freeze the disk while you take the
  backup. The backup can then be used as if the computer had crashed
  with no hard disk failure at all, i.e. the WAL will be consistent
  and the database may take longer to come up, but once it is up it
  will be safe (like paragraph 1). Now, freezing a disk for backup is
  not that difficult. You should be doing it anyway for user file
  consistency. (You don't want the first 30 pages of your document to
  disagree with the end because someone was saving it during the
  backup!)
 
 I heard D'Arcy Cain indicate that some SAN systems (I think he
 mentioned NetApp) support this sort of thing, too.  Digital's AdvFS
 also supports it.
 
 Of course, if you take this approach, you have to make _certain_ that,
 when you freeze a replica of a filesystem, _ALL_ of the database is
 contained in that one filesystem.  If you move WAL to a different
 filesystem, bets would be off again...

Also, I assume you have to stop the server just for a moment while you
do the freeze, right?
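
For reference, here is a minimal sketch of what such a snapshot-based backup
could look like, assuming an LVM setup in which the whole cluster (data
directory and WAL) lives on a single logical volume. The volume name
vg0/pgdata, the mount point, and the snapshot size below are hypothetical:

    #!/usr/bin/env python3
    # Illustrative sketch only: take an atomic LVM snapshot of the volume
    # holding the ENTIRE cluster (data directory *and* WAL), archive it,
    # then drop the snapshot. All names/paths here are placeholders.
    import os
    import subprocess

    VG = "vg0"                      # hypothetical volume group
    LV = "pgdata"                   # hypothetical volume holding the cluster
    SNAP = "pgdata_snap"
    MOUNTPOINT = "/mnt/pgsnap"
    BACKUP_TARGET = "/backup/pgdata.tar.gz"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    os.makedirs(MOUNTPOINT, exist_ok=True)

    # 1. Take a crash-consistent snapshot of the volume (atomic).
    run(["lvcreate", "--snapshot", "--size", "2G",
         "--name", SNAP, f"/dev/{VG}/{LV}"])
    try:
        # 2. Mount the snapshot read-only and archive it.
        run(["mount", "-o", "ro", f"/dev/{VG}/{SNAP}", MOUNTPOINT])
        try:
            run(["tar", "czf", BACKUP_TARGET, "-C", MOUNTPOINT, "."])
        finally:
            run(["umount", MOUNTPOINT])
    finally:
        # 3. Drop the snapshot so it does not fill up over time.
        run(["lvremove", "-f", f"/dev/{VG}/{SNAP}"])

Restoring such an archive then behaves like ordinary crash recovery: WAL
replay brings the data files back to a consistent state, which is exactly the
"crashed with no hard disk failure" case described above.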

-- 
  Bruce Momjian                       |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                   |  (610) 359-1001
  +  If your life is a hard drive,    |  13 Roberts Road
  +  Christ can be your backup.       |  Newtown Square, Pennsylvania 19073



Re: [ADMIN] syslog enabled causes random hangs?

2003-08-11 Thread Tom Lane
Arthur Ward [EMAIL PROTECTED] writes:
  It looks to me like the guy doing VACUUM is simply waiting for the other
  guy to release a page-level lock.  The other guy is running a deferred
  trigger and so I'd expect him to be holding one or two page-level locks,
  on the page or pages containing the tuple or tuples passed to the
  trigger.  Nothing evidently wrong there.

 If I remember what I was working on the other day when this whole thing
 started, I think it was a single backend and a checkpoint that collided.
 I'll trace that combination, assuming it happens again.

A checkpoint would also have reason to wait for a page-level lock, if
the stuck backend was holding one.  I am wondering though if the stuck
condition consistently happens while trying to fire a trigger?  That
would be very interesting ... not sure what it'd mean though ...
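
As a side note, on installations recent enough to have the pg_locks view you
can get a quick picture of which backend is waiting for a heavyweight lock
and which backends hold locks on the same object; a rough sketch (the
connection string is a placeholder):

    #!/usr/bin/env python3
    # Rough sketch: list ungranted lock requests from pg_locks and, for each,
    # the backends that currently hold locks on the same relation/page.
    import psycopg2  # assumes psycopg2 is installed

    conn = psycopg2.connect("dbname=postgres")  # placeholder connection string
    cur = conn.cursor()

    cur.execute("""
        SELECT pid, locktype, relation, page, mode
          FROM pg_locks
         WHERE NOT granted
    """)
    waiters = cur.fetchall()

    for pid, locktype, relation, page, mode in waiters:
        print(f"pid {pid} is waiting for a {locktype} lock ({mode})")
        cur.execute("""
            SELECT pid, mode
              FROM pg_locks
             WHERE granted
               AND relation IS NOT DISTINCT FROM %s
               AND page     IS NOT DISTINCT FROM %s
        """, (relation, page))
        for holder_pid, holder_mode in cur.fetchall():
            print(f"    held by pid {holder_pid} as {holder_mode}")

    cur.close()
    conn.close()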

  The real question is why does vsyslog() have anything to block on, when
  it's running in an unthreaded process?  Seeing that you are using
  plpython, I wonder if Python is confusing matters somehow.

 Oof. I'm using plpython all over the place; I don't think this problem has
 happened in any location that can work without it easily. :-/

It looks to me like your plpython code is all dead in the water, seeing
that your Python installation is refusing creation of rexec.  (AFAIK the
only workaround is to downgrade Python to a version that allows rexec.)
If you're using it all over the place, how come you haven't noticed
that??

regards, tom lane



Re: [ADMIN] Backup routine

2003-08-11 Thread Dani Oderbolz
Hi Enio,

Enio Schutt Junior wrote:

 Hi
 
 Here, where I work, the backups of the postgresql databases are being
 done the following way:
 There is a daily copy of nearly all of the hard disk on which the databases
 live (excluding /tmp, /proc, /dev and so on), and besides this there is also
 a script which makes a pg_dump of each one of the databases on the server.
Hmm, I don't really see what you are doing with a backup of /tmp, /proc
and /dev.
I mean, /tmp might be ok, but /proc should not be backed up in my opinion,
as /proc is NOT on your hard disk, but points directly at kernel memory.
I would not dare to restore such a backup!
And /dev as well: these are your devices, so they are completely
hardware-bound.

 This daily copy of the hard disk is made with the postmaster active
 (without stopping the daemon), so the data from /usr/local/pgsql/data
 would not be 100% consistent, I guess.
You need to stop Postgres, else forget about your backup.
The DB might not even come up again.
Here at my site, we have a nice little script, which can be configured to
do certain actions before doing a backup of a given directory,
and also after the backup.
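
For illustration, here is a rough sketch of that idea. The pg_ctl location
and the backup target below are placeholders; only the data directory is
taken from your mail:

    #!/usr/bin/env python3
    # Rough sketch: stop the postmaster, archive the (now quiescent) data
    # directory, then restart. Paths are placeholders for your installation.
    import subprocess

    PGDATA = "/usr/local/pgsql/data"
    PG_CTL = "/usr/local/pgsql/bin/pg_ctl"      # placeholder location
    BACKUP = "/backup/pgdata.tar.gz"            # placeholder target

    def run(cmd):
        subprocess.run(cmd, check=True)

    # "before" action: shut down cleanly so the files on disk are consistent
    run([PG_CTL, "stop", "-D", PGDATA, "-m", "fast"])
    try:
        # the actual backup of the data directory
        run(["tar", "czf", BACKUP, "-C", PGDATA, "."])
    finally:
        # "after" action: bring the database back up even if tar failed
        run([PG_CTL, "start", "-D", PGDATA, "-l", "/tmp/pgstart.log"])

The point is simply that the filesystem copy is taken while the postmaster
is down; with it running, the copy is not something you can rely on.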
 
 There are some questions I have about this backup routine:
 If I recover data from that inconsistent backup of the hard disk, I know
 that the binary files (psql, pg_dump and so on) will remain ok. The data may
 have some inconsistencies. Would these inconsistencies still let the
 postmaster start and work properly (that is, even with the possible presence
 of inconsistent data)? Would it start and be able to work normally, and keep
 the information about users and groups? I am talking about users and groups
 because this information is not dumped by pg_dump. I was thinking about
 using pg_dump -g to generate this information.
I would really not go down this road.
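
If you rely on dumps instead, note that the global objects (users and groups)
are produced by pg_dumpall -g rather than pg_dump -g. A rough sketch of such
a routine, with placeholder database names and output directory:

    #!/usr/bin/env python3
    # Rough sketch: pg_dumpall -g captures the users/groups that per-database
    # pg_dump does not include; then dump each database separately.
    import subprocess

    OUTDIR = "/backup/dumps"                 # placeholder
    DATABASES = ["mydb1", "mydb2"]           # placeholder list of databases

    def dump(cmd, outfile):
        with open(outfile, "w") as f:
            subprocess.run(cmd, stdout=f, check=True)

    # 1. Users, groups and other global objects.
    dump(["pg_dumpall", "-g"], f"{OUTDIR}/globals.sql")

    # 2. One plain-text dump per database.
    for db in DATABASES:
        dump(["pg_dump", db], f"{OUTDIR}/{db}.sql")

On a restore, the globals file has to be loaded with psql before the
individual database dumps are restored.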

Regards,
Dani