Re: [GENERAL] minimizing downtime when upgrading

2006-06-23 Thread Kenneth Downs

Jim Nasby wrote:


On Jun 21, 2006, at 7:42 AM, H.J. Sanders wrote:

The last 15 years we also used Informix and we never, never had to
unload/load the database because of an upgrade.

Perhaps somebody knows how they do the trick?



Do they provide a migration/upgrade utility?


In the case of MS SQL Server the issue doesn't come up.  Here are some 
user experiences:


Scenario A (in-place upgrade):

1) Create a database in MS SS 6
2) Upgrade the software to MS SS 7
3) Use the software


Scenario B (backup and restore onto a new machine):

1) Create a database in MS SS 6
2) Back up the database (the closest analog is the -Fc option of pg_dump)
3) Move the backup to a machine running MS SS 7
4) Attempt to restore.  The program warns that it will upgrade the
database and that you can't go back; you confirm, and it proceeds.
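
For reference, the closest PostgreSQL equivalent of that second flow is a
custom-format dump moved to the new machine and restored there; a minimal
sketch, with host, user and database names as placeholders:

# On the old server (ideally using the newer release's pg_dump binary):
pg_dump -Fc -h oldhost -U postgres mydb > mydb.dump

# Move the dump to the machine running the new PostgreSQL version,
# then create the database there and restore into it.
scp mydb.dump newhost:/tmp/
createdb -h newhost -U postgres mydb
pg_restore -h newhost -U postgres -d mydb /tmp/mydb.dump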






Re: [GENERAL] minimizing downtime when upgrading

2006-06-22 Thread Jim Nasby

On Jun 21, 2006, at 7:42 AM, H.J. Sanders wrote:
The last 15 years we also used Informix and we never, never had to
unload/load the database because of an upgrade.

Perhaps somebody knows how they do the trick?


Do they provide a migration/upgrade utility?
--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com      work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461





Re: [GENERAL] minimizing downtime when upgrading

2006-06-21 Thread H.J. Sanders


> Is anybody over at the dev team considering what an onerous burden
> this is?  Is anyone considering doing away with it?

Just my 2 cents:

More and more databases have to run 24*7, so something has to be done.

The last 15 years we also used Informix and we never, never had to unload/load
the database because of an upgrade.

Perhaps somebody knows how they do the trick?

Regards

Henk Sanders




Re: [GENERAL] minimizing downtime when upgrading

2006-06-21 Thread Alban Hertroys

Kenneth Downs wrote:

Richard Huxton wrote:


Kenneth Downs wrote:

AFAIK it has always been the case that you should expect to have to 
dump out your databases and reload them for version upgrades.


Is anybody over at the dev team considering what an onerous burden 
this is?  Is anyone considering doing away with it?


Is there any good reason not to invest in having a second database 
server? That way you could upgrade the slave server, switch that to be 
the master server and replicate the data (using Slony-I, for example) to 
the slave (formerly master) server.


It provides other benefits as well, like the ability to stay up during 
system maintenance, load balancing, etc.
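
For what it's worth, the switchover step in that scenario might look roughly
like the following with Slony-I's slonik; the cluster name, node and set IDs,
and connection strings are placeholders, and the usual node/set/subscription
setup is assumed to already be in place and caught up:

# Assumed layout: node 1 = current master (old version),
# node 2 = upgraded server already subscribed to set 1.
slonik <<'EOF'
cluster name = upgrade_cluster;
node 1 admin conninfo = 'dbname=mydb host=oldhost user=slony';
node 2 admin conninfo = 'dbname=mydb host=newhost user=slony';

lock set (id = 1, origin = 1);
wait for event (origin = 1, confirmed = 2, wait on = 1);
move set (id = 1, old origin = 1, new origin = 2);
EOF

After the move, applications point at the new server and the old node can be
dropped or kept around as a slave.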



Kind of gets to the heart of things, though, doesn't it.

It's the non-trivial stuff where we look to the machine to help us out.
As a user of PostgreSQL, I benefit from a lot of things.  I gain a total
advantage of "X" units of time/money.  Then it's time to upgrade and I
have to give a lot of it back.  The more I use the package, the more
non-trivial my upgrade becomes, and the more I give back.

Regardless of whether a package is commercial or free, it strikes me as
counter to the very soul of programming to build in a burden that
increases with the user's use of the program, threatening even to tip
the balance away from using it altogether.  This seems to be the very
kind of feature that you want to handle programmatically precisely
because it is non-trivial.


Which is why you have to use the pg_dump from the new version to dump 
your data, so it will be compatible with the new database on restore. 
That's a good example of this already being the case.
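
In concrete terms that means pointing the new release's pg_dump binary at the
old cluster; a minimal sketch, where the install paths and port numbers are
just an assumed side-by-side layout:

# Dump the old cluster (port 5432) with the new version's pg_dump and
# load the result directly into the new cluster (port 5433).
/usr/local/pgsql-new/bin/pg_dump -p 5432 mydb \
    | /usr/local/pgsql-new/bin/psql -p 5433 mydb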


Your real burden isn't the (possible!) data incompatibility between 
major versions, but the fact that your data grows. The more data you 
have, the more time a dump/restore will take.


You could attempt to just upgrade in place and hope your data can be
interpreted by the newer major version (you should dump first, of
course). You'll want some kind of checksums over your data to check
that everything went well.

This method can't be expected to always work; that would be nearly
impossible to guarantee. There will be changes to on-disk data
structures (for the better), for example. I suppose the developers
could give some estimate of your chances...
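
One simple form of such a check is to compare per-table row counts (or
checksums of sorted dumps) between the old and the upgraded cluster; a
minimal sketch, with host and database names as placeholders:

# Compare row counts for every user table between the old and new clusters.
for tbl in $(psql -At -h oldhost mydb -c "
    SELECT schemaname || '.' || tablename FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema')")
do
  old=$(psql -At -h oldhost mydb -c "SELECT count(*) FROM $tbl")
  new=$(psql -At -h newhost mydb -c "SELECT count(*) FROM $tbl")
  [ "$old" = "$new" ] || echo "MISMATCH in $tbl: $old rows vs $new rows"
done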


As mentioned, with a replicated setup your trouble should be minimal.

P.S. We don't use replication as of yet, but we probably will soon.
--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] minimizing downtime when upgrading

2006-06-21 Thread Martijn van Oosterhout
On Wed, Jun 21, 2006 at 08:10:20AM -0400, Kenneth Downs wrote:
> Regardless of whether a package is commercial or free, it strikes me as
> counter to the very soul of programming to build in a burden that
> increases with the user's use of the program, threatening even to tip
> the balance away from using it altogether.  This seems to be the very
> kind of feature that you want to handle programmatically precisely
> because it is non-trivial.

That doesn't change the fact that it's a really hard problem. In-place
upgrades would require lots of safety checks because otherwise you
might end up with a cluster that's not readable by any version.

OTOH, you have something like slony which you can use to upgrade to
newer versions without any downtime at all. With a solution like that
working right now, why would people spend effort on making in-place
upgrades work?

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] minimizing downtime when upgrading

2006-06-21 Thread Kenneth Downs

Richard Huxton wrote:


Kenneth Downs wrote:

AFAIK it has always been the case that you should expect to have to 
dump out your databases and reload them for version upgrades.


Is anybody over at the dev team considering what an onerous burden 
this is?  Is anyone considering doing away with it?



Far from trivial. 


Kind of gets to the heart of things, though, doesn't it.

It's the non-trivial stuff where we look to the machine to help us out. 

As a user of PostgreSQL, I benefit from a lot of things.  I gain a total
advantage of "X" units of time/money.  Then it's time to upgrade and I
have to give a lot of it back.  The more I use the package, the more
non-trivial my upgrade becomes, and the more I give back.

Regardless of whether a package is commercial or free, it strikes me as
counter to the very soul of programming to build in a burden that
increases with the user's use of the program, threatening even to tip
the balance away from using it altogether.  This seems to be the very
kind of feature that you want to handle programmatically precisely
because it is non-trivial.



You have changes in on-disk formats and actual functionality between 
major version numbers. For instance - what would you do to deal with 
the recent changes in unicode validation?








Re: [GENERAL] minimizing downtime when upgrading

2006-06-16 Thread Bill Moran
In response to snacktime <[EMAIL PROTECTED]>:

> On 6/16/06, Richard Huxton  wrote:
> 
> > The other option would be to run replication, e.g. slony to migrate from
> > one version to another. I've done it and it works fine, but it will mean
> > slony adding its own tables to each database. I'd still do it one
> > merchant at a time, but that should reduce your downtime to seconds.
> 
> I'll have to take another look at slony, it's been a while.  Our
> database structure is a bit non-standard.  Being a payment gateway, we
> are required to have a separation of data between merchants, which
> means not mixing data from different merchants in the same table.
> So what we do is every user has their own schema, with their own set
> of tables.  Yes, I know that's not considered best practice
> design-wise, but separate databases would have caused even more
> issues, and as it turns out there are some advantages to the separate
> schema approach that we never thought of.  Last time I looked at slony
> you have to configure it for each individual table you want replicated.
> We have around 50,000 tables, and more are added on a daily basis.

We've got a script here that takes a pg_dump and automatically generates
a slony config that adds all tables and sequences.  I've got to check with
the Powers That Be, but I suspect we'll be opening up the code.
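
As a guess at the general approach (not the actual script): the slonik
statements can be generated straight from the system catalogs with a few
lines of shell, for example:

#!/bin/sh
# Emit a slonik "set add table" line for every user table; sequences would
# be handled the same way with relkind = 'S' and "set add sequence".
# Set/node/table IDs are placeholders; tables also need a primary key
# (or an explicitly named candidate key) for Slony to replicate them.
id=1
psql -At mydb -c "
  SELECT n.nspname || '.' || c.relname
  FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE c.relkind = 'r'
    AND n.nspname NOT LIKE 'pg_%'
    AND n.nspname <> 'information_schema'" |
while read tbl; do
  echo "set add table (set id = 1, origin = 1, id = $id, fully qualified name = '$tbl');"
  id=$((id + 1))
done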

Does this duplicate any work that anyone else is doing?

-- 
Bill Moran
Collaborative Fusion Inc.






Re: [GENERAL] minimizing downtime when upgrading

2006-06-16 Thread snacktime

On 6/16/06, Richard Huxton  wrote:


The other option would be to run replication, e.g. slony to migrate from
one version to another. I've done it and it works fine, but it will mean
slony adding its own tables to each database. I'd still do it one
merchant at a time, but that should reduce your downtime to seconds.



I'll have to take another look at slony, it's been a while.  Our
database structure is a bit non-standard.  Being a payment gateway, we
are required to have a separation of data between merchants, which
means not mixing data from different merchants in the same table.
So what we do is every user has their own schema, with their own set
of tables.  Yes, I know that's not considered best practice
design-wise, but separate databases would have caused even more issues,
and as it turns out there are some advantages to the separate schema
approach that we never thought of.  Last time I looked at slony you
have to configure it for each individual table you want replicated.
We have around 50,000 tables, and more are added on a daily basis.
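
Given that one-schema-per-merchant layout, moving a single merchant is
essentially a schema-scoped dump and restore; a minimal sketch, with schema,
host and database names as placeholders:

# Dump one merchant's schema from the old server and restore it on the
# new one; that merchant's downtime is limited to this window.
pg_dump -Fc -n merchant_1234 -h oldhost gateway_db > merchant_1234.dump
pg_restore -h newhost -d gateway_db merchant_1234.dump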



Re: [GENERAL] minimizing downtime when upgrading

2006-06-16 Thread Richard Huxton

Kenneth Downs wrote:
AFAIK it has always been the case that you should expect to have to dump 
out your databases and reload them for version upgrades.


Is anybody over at the dev team considering what an onerous burden this 
is?  Is anyone considering doing away with it?


Far from trivial. You have changes in on-disk formats and actual 
functionality between major version numbers. For instance - what would 
you do to deal with the recent changes in unicode validation?


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] minimizing downtime when upgrading

2006-06-16 Thread Oleg Bartunov

On Thu, 15 Jun 2006, snacktime wrote:


Anyone have any tips for minimizing downtime when upgrading?  So far
we have done upgrades during scheduled downtimes.  Now we are getting
to the point where the time required for a standard dump/restore is
just too long.  What have others done when downtime is critical?  The
only solution we have been able to come up with is to migrate the data
on a per user basis to a new database server.  Each user is a
merchant, and the data in the database is order data.  Migrating one
merchant at a time will keep the downtime per merchant limited to just
the time it takes to migrate the data for that merchant, which is
acceptable.

Any other ideas?


We use the replication package Slony for upgrading.






Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



Re: [GENERAL] minimizing downtime when upgrading

2006-06-16 Thread Kenneth Downs

snacktime wrote:


Anyone have any tips for minimizing downtime when upgrading?  So far
we have done upgrades during scheduled downtimes.  Now we are getting
to the point where the time required for a standard dump/restore is
just too long.  What have others done when downtime is critical?  The
only solution we have been able to come up with is to migrate the data
on a per user basis to a new database server.  Each user is a
merchant, and the data in the database is order data.  Migrating one
merchant at a time will keep the downtime per merchant limited to just
the time it takes to migrate the data for that merchant, which is
acceptable.



AFAIK it has always been the case that you should expect to have to dump 
out your databases and reload them for version upgrades.


Is anybody over at the dev team considering what an onerous burden this 
is?  Is anyone considering doing away with it?







Re: [GENERAL] minimizing downtime when upgrading

2006-06-16 Thread Richard Huxton

snacktime wrote:

Anyone have any tips for minimizing downtime when upgrading?  So far
we have done upgrades during scheduled downtimes.  Now we are getting
to the point where the time required for a standard dump/restore is
just too long.  What have others done when downtime is critical?  The
only solution we have been able to come up with is to migrate the data
on a per user basis to a new database server.  Each user is a
merchant, and the data in the database is order data.  Migrating one
merchant at a time will keep the downtime per merchant limited to just
the time it takes to migrate the data for that merchant, which is
acceptable.


The other option would be to run replication, e.g. slony to migrate from 
one version to another. I've done it and it works fine, but it will mean 
slony adding its own tables to each database. I'd still do it one 
merchant at a time, but that should reduce your downtime to seconds.


--
  Richard Huxton
  Archonet Ltd



[GENERAL] minimizing downtime when upgrading

2006-06-15 Thread snacktime

Anyone have any tips for minimizing downtime when upgrading?  So far
we have done upgrades during scheduled downtimes.  Now we are getting
to the point where the time required for a standard dump/restore is
just too long.  What have others done when downtime is critical?  The
only solution we have been able to come up with is to migrate the data
on a per user basis to a new database server.  Each user is a
merchant, and the data in the database is order data.  Migrating one
merchant at a time will keep the downtime per merchant limited to just
the time it takes to migrate the data for that merchant, which is
acceptable.

Any other ideas?
