Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)

2003-09-22 Thread Andrew Sullivan
On Thu, Sep 18, 2003 at 06:49:56PM -0300, Marc G. Fournier wrote:
> 
> Hadn't thought of it that way ... but, what would prompt someone to
> upgrade, then use something like erserver to roll back?  All I can think
> of is that the upgrade caused a lot of problems with the application
> itself, but in a case like that, would you have the time to be able to
> 're-replicate' back to the old version?

The trick is to have your former master set up as slave before you
turn your application back on.

The lack of a rollback strategy in PostgreSQL upgrades is a major
barrier for corporate use.  One can only do so much testing, and it's
always possible you've missed something.  You need to be able to go
back to some known-working state.

A

-- 

Andrew Sullivan
Liberty RMS, 204-4141 Yonge Street
Toronto, Ontario, Canada  M2P 2A8
<[EMAIL PROTECTED]>
+1 416 646 3304 x110


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)

2003-09-19 Thread Ron Johnson
On Thu, 2003-09-18 at 16:29, Andrew Sullivan wrote:
> On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:
> 
> > So instead of 1TB of 15K fiber channel disks (and the requisite 
> > controllers, shelves, RAID overhead, etc), we'd need *two* TB of
> > 15K fiber channel disks (and the requisite controllers, shelves,
> > RAID overhead, etc) just for the 1 time per year when we'd upgrade
> > PostgreSQL?
> 
> Nope.  You also need it for the time when your vendor sells
> controllers or chips or whatever with known flaws, and you end up
> having hardware that falls over 8 or 9 times in a row.



-- 
-
Ron Johnson, Jr. [EMAIL PROTECTED]
Jefferson, LA USA

"A C program is like a fast dance on a newly waxed dance floor 
by people carrying razors."
Waldi Ravens




Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)

2003-09-13 Thread Doug McNaught
Ron Johnson <[EMAIL PROTECTED]> writes:

> And I strongly dispute the notion that it would only take 3 hours
> to dump/restore a TB of data.  This seems to point to a downside
> of MVCC: the inability to do "page-level" database backups, which
> allow for "rapid" restores, since all of the index structures are
> part of the backup, and don't have to be created, in serial, as part
> of the pg_restore.

If you have a filesystem capable of atomic "snapshots" (Veritas offers
this I think), you *should* be able to do this fairly safely--take a
snapshot of the filesystem and back up the snapshot.  On a restore of
the snapshot, transactions in progress when the snapshot happened will
be rolled back, but everything that committed before then will be there
(same thing PG does when it recovers from a crash).  Of course, if you
have your database cluster split across multiple filesystems, this
might not be doable.

Note: I haven't done this, but it should work and I've seen it talked
about before.  I think Oracle does this at the storage manager level
when you put a database in backup mode; doing the same in PG would
probably be a lot of work.
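
For the curious, the sequence Doug describes might look roughly like this
with LVM (the volume and mount-point names below are made up for
illustration; Veritas has its own equivalents, and this is an untested
sketch of the idea, not a recipe):

```shell
# Assumes the ENTIRE cluster lives on one logical volume,
# /dev/vg0/pgdata, mounted at /var/lib/pgsql/data.
# All device names and paths here are illustrative.

# 1. Take an atomic snapshot of the filesystem while PostgreSQL runs.
lvcreate --snapshot --size 10G --name pgsnap /dev/vg0/pgdata

# 2. Mount the snapshot read-only and copy it off to backup media.
mount -o ro /dev/vg0/pgsnap /mnt/pgsnap
tar -C /mnt/pgsnap -czf /backup/pgdata-$(date +%Y%m%d).tar.gz .

# 3. Release the snapshot.
umount /mnt/pgsnap
lvremove -f /dev/vg0/pgsnap
```

On restore, PostgreSQL runs its normal crash recovery, rolling back
transactions that were in flight when the snapshot was taken.  The caveat
from the post stands: if the cluster spans multiple filesystems, a single
snapshot is no longer atomic across all of them.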

This doesn't help with the upgrade issue, of course...

-Doug



Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)

2003-09-13 Thread Ron Johnson
On Sat, 2003-09-13 at 10:10, Marc G. Fournier wrote:
> On Fri, 12 Sep 2003, Ron Johnson wrote:
> 
> > On Fri, 2003-09-12 at 17:48, Joshua D. Drake wrote:
> > > Hello,
> > >
> > >   The initdb is not always a bad thing. In reality the idea of just
> > > being able to "upgrade" is not a good thing. Just think about the
> > > differences between 7.2.3 and 7.3.x... The most annoying (although
> > > appropriate) one being that integers can no longer be ''.
> >
> > But that's just not going to cut it if PostgreSQL wants to be
> > a serious "player" in the enterprise space, where 24x7 systems
> > are common, and you just don't *get* 12/18/24/whatever hours to
> > dump/restore a 200GB database.
> >
> > For example, there are some rather large companies whose factories
> > are run 24x365 on rather old versions of VAX/VMS and Rdb/VMS,
> > because the DBAs can't even get the 3 hours to do in-place upgrades
> > to Rdb, much less the time the SysAdmin needs to upgrade VAX/VMS
> > to VAX/OpenVMS.
> >
> > In our case, we have systems that have multiple 300+GB databases
> > (working in concert as one big system), and dumping all of them,
> > then restoring (which includes creating indexes on tables with
> > row-counts in the low 9 digits, and one which has gone as high
> > as 2+ billion records) is just totally out of the question.
> 
> 'k, but is it out of the question to pick up a duplicate server, and use
> something like eRServer to replicate the databases between the two
> systems, with the new system having the upgraded database version running
> on it, and then cutting over once it's all in sync?

So instead of 1TB of 15K fiber channel disks (and the requisite 
controllers, shelves, RAID overhead, etc), we'd need *two* TB of
15K fiber channel disks (and the requisite controllers, shelves,
RAID overhead, etc) just for the 1 time per year when we'd upgrade
PostgreSQL?

Not a chance.

-- 
-
Ron Johnson, Jr. [EMAIL PROTECTED]
Jefferson, LA USA

Thanks to the good people in Microsoft, a great deal of the data 
that flows is dependent on one company. That is not a healthy 
ecosystem. The issue is that creativity gets filtered through 
the business plan of one company.
Mitchell Baker, "Chief Lizard Wrangler" at Mozilla




Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)

2003-09-13 Thread Ron Johnson
On Sat, 2003-09-13 at 11:21, Marc G. Fournier wrote:
> On Sat, 13 Sep 2003, Ron Johnson wrote:
> 
> > So instead of 1TB of 15K fiber channel disks (and the requisite
> > controllers, shelves, RAID overhead, etc), we'd need *two* TB of 15K
> > fiber channel disks (and the requisite controllers, shelves, RAID
> > overhead, etc) just for the 1 time per year when we'd upgrade
> > PostgreSQL?
> 
> Ah, see, the post that I was responding to dealt with 300GB of data,
> for which a disk array is relatively cheap ... :)
> 
> But even with 1TB of data, do you not have a redundant system?  If you
> can't afford 3 hours to dump/reload, can you really afford the cost
> of the server itself going poof?

We've survived all h/w issues so far w/ minimal downtime, running
in degraded mode (i.e., having to yank out a CPU or RAM board) until
HP could come out and install a new one.  We also have dual-redundant
disk and storage controllers, even though it's been a good long time
since I've seen one of them die.

And I strongly dispute the notion that it would only take 3 hours
to dump/restore a TB of data.  This seems to point to a downside
of MVCC: the inability to do "page-level" database backups, which
allow for "rapid" restores, since all of the index structures are
part of the backup, and don't have to be created, in serial, as part
of the pg_restore.
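
(Ron's skepticism is easy to sanity-check with back-of-envelope
arithmetic.  The throughput figure below is an assumption -- a generous
sustained sequential rate for high-end storage of the era, not a
measurement -- and it ignores pg_dump's own CPU cost entirely:)

```python
# Back-of-envelope: hours of raw I/O to move 1 TB at a sustained rate.
# 100 MB/s is an assumed, optimistic figure, not a benchmark result.
def transfer_hours(size_bytes: float, rate_bytes_per_sec: float) -> float:
    """Hours needed to read or write size_bytes at a sustained rate."""
    return size_bytes / rate_bytes_per_sec / 3600.0

TB = 1e12
dump_hours = transfer_hours(TB, 100e6)     # writing the dump out
restore_hours = transfer_hours(TB, 100e6)  # reloading the data

print(f"dump:    {dump_hours:.1f} h")      # ~2.8 h of raw I/O alone
print(f"restore: {restore_hours:.1f} h")   # ~2.8 h, before any index builds
```

So even under friendly assumptions, the raw I/O alone approaches 3 hours
each way -- and the serial index rebuilds Ron mentions come on top of that.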

-- 
-
Ron Johnson, Jr. [EMAIL PROTECTED]
Jefferson, LA USA

"...always eager to extend a friendly claw"




Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)

2003-09-13 Thread Dennis Gearon

> 'k, but is it out of the question to pick up a duplicate server, and use
> something like eRServer to replicate the databases between the two
> systems, with the new system having the upgraded database version running
> on it, and then cutting over once it's all in sync?

That's just what I was thinking. It might be an easy way around the
whole problem, for a while, to set up the replication to be as version
independent as possible.



Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)

2003-09-13 Thread Marc G. Fournier


On Sat, 13 Sep 2003, Ron Johnson wrote:

> So instead of 1TB of 15K fiber channel disks (and the requisite
> controllers, shelves, RAID overhead, etc), we'd need *two* TB of 15K
> fiber channel disks (and the requisite controllers, shelves, RAID
> overhead, etc) just for the 1 time per year when we'd upgrade
> PostgreSQL?

Ah, see, the post that I was responding to dealt with 300GB of data,
for which a disk array is relatively cheap ... :)

But even with 1TB of data, do you not have a redundant system?  If you
can't afford 3 hours to dump/reload, can you really afford the cost
of the server itself going poof?




Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)

2003-09-13 Thread Marc G. Fournier

On Fri, 12 Sep 2003, Ron Johnson wrote:

> On Fri, 2003-09-12 at 17:48, Joshua D. Drake wrote:
> > Hello,
> >
> >   The initdb is not always a bad thing. In reality the idea of just
> > being able to "upgrade" is not a good thing. Just think about the
> > differences between 7.2.3 and 7.3.x... The most annoying (although
> > appropriate) one being that integers can no longer be ''.
>
> But that's just not going to cut it if PostgreSQL wants to be
> a serious "player" in the enterprise space, where 24x7 systems
> are common, and you just don't *get* 12/18/24/whatever hours to
> dump/restore a 200GB database.
>
> For example, there are some rather large companies whose factories
> are run 24x365 on rather old versions of VAX/VMS and Rdb/VMS,
> because the DBAs can't even get the 3 hours to do in-place upgrades
> to Rdb, much less the time the SysAdmin needs to upgrade VAX/VMS
> to VAX/OpenVMS.
>
> In our case, we have systems that have multiple 300+GB databases
> (working in concert as one big system), and dumping all of them,
> then restoring (which includes creating indexes on tables with
> row-counts in the low 9 digits, and one which has gone as high
> as 2+ billion records) is just totally out of the question.

'k, but is it out of the question to pick up a duplicate server, and use
something like eRServer to replicate the databases between the two
systems, with the new system having the upgraded database version running
on it, and then cutting over once it's all in sync?
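
(The shape of that procedure, sketched as shell steps.  The hostnames are
placeholders, and the eRServer-specific setup is deliberately left as a
comment -- its actual invocations vary by version and aren't shown here:)

```shell
# 0. Stand up new-host with the upgraded PostgreSQL version and an
#    empty, initdb'ed cluster.  Hostnames below are hypothetical.

# 1. Take an initial copy of schema + data onto the new server.
pg_dumpall -h old-host | psql -h new-host

# 2. Start replication old -> new to pick up changes made since the
#    dump (eRServer-specific configuration goes here).

# 3. Once the replica is in sync: stop the application, let the last
#    changes replicate, then point the application at new-host.

# 4. Optionally reverse replication new -> old, so the old server
#    becomes the rollback target Andrew describes earlier in the thread.
```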


