On Oct 11, 2006, at 5:06 PM, Josh Berkus wrote:
>> What type of help did you envision? The answer is likely yes.
>
> I don't know, whatever you have available. Design advice, at the very
> least.

Absolutely. I might be able to contribute some coding time as well.
Testing time too.

// Theo Schlossnagle
Theo,
> What type of help did you envision? The answer is likely yes.
I don't know, whatever you have available. Design advice, at the very
least.
--
--Josh
Josh Berkus
PostgreSQL @ Sun
San Francisco
What type of help did you envision? The answer is likely yes.

On Oct 11, 2006, at 5:02 PM, Josh Berkus wrote:
> Theo,
>
> Would you be able to help me, Zdenek & Gavin in working on a new
> pg_upgrade?
>
> --
> --Josh
> Josh Berkus
> PostgreSQL @ Sun
> San Francisco

// Theo Schlossnagle
// CTO -- http://www.o
Theo,
Would you be able to help me, Zdenek & Gavin in working on a new pg_upgrade?
--
--Josh
Josh Berkus
PostgreSQL @ Sun
San Francisco
On Oct 11, 2006, at 9:36 AM, Tom Lane wrote:
> Theo Schlossnagle <[EMAIL PROTECTED]> writes:
>> The real problem with a "dump" of the database is that you want to be
>> able to quickly switch back to a known working copy in the event of a
>> failure. A dump is the furthest possible thing from a working copy [...]
Theo Schlossnagle <[EMAIL PROTECTED]> writes:
> The real problem with a "dump" of the database is that you want to be
> able to quickly switch back to a known working copy in the event of a
> failure. A dump is the furthest possible thing from a working copy
> as one has to rebuild the database [...]
On Oct 11, 2006, at 7:57 AM, Markus Schaber wrote:
> Hi, Mark,
>
> Mark Woodward wrote:
>>> People are working on it; someone even got as far as dealing with most
>>> catalog upgrades. The hard part is going to be making sure that even if
>>> the power fails halfway through an upgrade, your data will still be
>>> readable...
Hi, Mark,

Mark Woodward wrote:
>> People are working on it; someone even got as far as dealing with most
>> catalog upgrades. The hard part is going to be making sure that even if
>> the power fails halfway through an upgrade, your data will still be
>> readable...
>
> Well, I think that any *real [...]
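
The power-failure concern has a standard answer: never touch the old tree.
Build the upgraded cluster alongside it, flush it to disk, then swap
directories with rename(2), which is atomic within a single filesystem.
A minimal C sketch of such a cutover; the directory names (data, data.new,
data.old) are made up for the example, data.new is assumed to be fully
written and fsync'ed already, and a real tool would need far more careful
error recovery:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Rename, then fsync the containing directory so the rename itself
     * survives a power failure. */
    static void durable_rename(const char *from, const char *to)
    {
        int fd;

        if (rename(from, to) != 0)      /* atomic within one filesystem */
        {
            perror(from);
            exit(1);
        }
        fd = open(".", O_RDONLY);
        if (fd >= 0)
        {
            fsync(fd);
            close(fd);
        }
    }

    int main(void)
    {
        /* After the first rename a crash leaves the old cluster intact
         * under "data.old"; after the second, the new cluster is live.
         * At no point does a half-modified tree exist on disk. */
        durable_rename("data", "data.old");
        durable_rename("data.new", "data");
        return 0;
    }

Between the two renames there is briefly no live data directory, so startup
code would have to detect and finish an interrupted swap; the point is only
that every crash state leaves one intact, unambiguous tree.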
Benny Amorsen wrote:
> >>>>> "TL" == Tom Lane <[EMAIL PROTECTED]> writes:
>
> TL> (I suppose it wouldn't work in Windows for lack of hard links, but
> TL> anyone trying to run a terabyte database on Windows deserves to
> TL> lose anyway.)
>
> Windows has hard links on NTFS, they are just rarely used.
> -Original Message-
> From: Magnus Hagander [mailto:[EMAIL PROTECTED]]
> Sent: 10 October 2006 13:23
> To: Dave Page; Benny Amorsen; pgsql-hackers@postgresql.org
> Subject: RE: [HACKERS] Upgrading a database dump/restore
>
> > TL> (I suppose it wouldn't work in Windows for lack of hard links,
> > TL> but anyone trying to run a terabyte database on Windows deserves
> > TL> to lose anyway.)
> >
> > Windows has hard links on NTFS, they are just rarely used.
>
> We use them in PostgreSQL to support tablespaces.

No, [...]
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]] On Behalf Of Benny Amorsen
> Sent: 10 October 2006 13:02
> To: pgsql-hackers@postgresql.org
> Subject: Re: [HACKERS] Upgrading a database dump/restore
>
> >>>>> "TL" == Tom Lane <[EMAIL PROTECTED]> writes:
>
> TL> (I suppose it wouldn't work in Windows for lack of hard links, but
> TL> anyone trying to run a terabyte database on Windows deserves to
> TL> lose anyway.)
>
> Windows has hard links on NTFS, they are just rarely used.
>
> /Benny

We use them in PostgreSQL to support tablespaces.
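
For what it's worth, the NTFS point is easy to demonstrate from C with the
Win32 API; CreateHardLink() has existed since Windows 2000. The file paths
below are invented for the example:

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        /* First argument is the new link name, second is the existing
         * file; the security-attributes parameter is reserved and must
         * be NULL. */
        if (!CreateHardLink("D:\\pgdata.new\\base\\1\\1259",
                            "D:\\pgdata\\base\\1\\1259",
                            NULL))
        {
            fprintf(stderr, "CreateHardLink failed: error %lu\n",
                    (unsigned long) GetLastError());
            return 1;
        }
        return 0;
    }

So a hard-link-based upgrade would not founder on Windows, at least not on
NTFS volumes.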
Mark Woodward wrote:
>> Mark,
>>
>>> No one could expect that this could happen by 8.2, or the release after
>>> that, but as a direction for the project, the "directors" of the
>>> PostgreSQL project must realize that the dump/restore is becoming like
>>> the old locking vacuum problem. It is a *serious* issue for PostgreSQL [...]
> Mark,
>
>> No one could expect that this could happen by 8.2, or the release after
>> that, but as a direction for the project, the "directors" of the
>> PostgreSQL project must realize that the dump/restore is becoming like
>> the old locking vacuum problem. It is a *serious* issue for PostgreSQL [...]
Josh Berkus wrote:
> Mark,
>
>> No one could expect that this could happen by 8.2, or the release after
>> that, but as a direction for the project, the "directors" of the
>> PostgreSQL project must realize that the dump/restore is becoming like
>> the old locking vacuum problem. It is a *serious* issue for PostgreSQL [...]
Mark,

> No one could expect that this could happen by 8.2, or the release after
> that, but as a direction for the project, the "directors" of the
> PostgreSQL project must realize that the dump/restore is becoming like
> the old locking vacuum problem. It is a *serious* issue for PostgreSQL
> ad [...]
Martijn van Oosterhout writes:
> The hard part is going to be making sure that even if the power fails
> halfway through an upgrade, your data will still be readable...

I think we had that problem solved too in principle: build the new
catalogs in a new $PGDATA directory alongside the old one, and hard-link
the old user table files into that directory as you go. Then pg_upgrade
never needs to change the old directory tree at all. [...]
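
To make the hard-link step concrete, here is a minimal C sketch in the
spirit of Tom's description: walk one database directory of the old cluster
and link(2) every file into the corresponding new-cluster directory. The
names and the "link every file" simplification are hypothetical; a real
pg_upgrade would link only user relation files, map old relfilenodes to new
ones, and skip the catalogs it rebuilds:

    #include <dirent.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Hard-link every file in old_db_dir into new_db_dir.  No data is
     * copied and the old tree is never written to, so aborting the
     * upgrade just means pointing the server back at the old $PGDATA. */
    static void link_relation_files(const char *old_db_dir,
                                    const char *new_db_dir)
    {
        DIR            *dir = opendir(old_db_dir);
        struct dirent  *de;
        char            oldpath[1024];
        char            newpath[1024];

        if (dir == NULL)
        {
            perror(old_db_dir);
            exit(1);
        }
        while ((de = readdir(dir)) != NULL)
        {
            if (strcmp(de->d_name, ".") == 0 ||
                strcmp(de->d_name, "..") == 0)
                continue;
            snprintf(oldpath, sizeof(oldpath), "%s/%s",
                     old_db_dir, de->d_name);
            snprintf(newpath, sizeof(newpath), "%s/%s",
                     new_db_dir, de->d_name);

            /* link(2) requires both paths to be on the same filesystem,
             * which is why the new $PGDATA must live alongside the old. */
            if (link(oldpath, newpath) != 0)
            {
                perror(newpath);
                exit(1);
            }
        }
        closedir(dir);
    }

    int main(int argc, char **argv)
    {
        if (argc != 3)
        {
            fprintf(stderr, "usage: %s old_db_dir new_db_dir\n", argv[0]);
            return 1;
        }
        link_relation_files(argv[1], argv[2]);
        return 0;
    }

Because both names refer to the same inode, no user data is copied, which
also answers the 2x-disk-space complaint raised elsewhere in the thread;
and since the old tree is never written to, falling back is just a matter
of starting the old server on the old directory.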
> On Mon, Oct 09, 2006 at 11:50:10AM -0400, Mark Woodward wrote:
>>> That one is easy: there are no rules. We already know how to deal with
>>> catalog restructurings --- you do the equivalent of a pg_dump -s and
>>> reload. Any proposed pg_upgrade that can't cope with this will be
>>> rejected [...]
On Mon, Oct 09, 2006 at 11:50:10AM -0400, Mark Woodward wrote:
>> That one is easy: there are no rules. We already know how to deal with
>> catalog restructurings --- you do the equivalent of a pg_dump -s and
>> reload. Any proposed pg_upgrade that can't cope with this will be
>> rejected out [...]
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>>> Whenever someone actually writes a pg_upgrade, we'll institute a policy
>>> to restrict changes it can't handle.
>
>> IMHO, *before* any such tool *can* be written, a set of rules must be
>> enacted regulating catalog changes.
>
> That one is easy: [...]
"Mark Woodward" <[EMAIL PROTECTED]> writes:
>> Whenever someone actually writes a pg_upgrade, we'll institute a policy
>> to restrict changes it can't handle.
> IMHO, *before* any such tool *can* be written, a set of rules must be
> enacted regulating catalog changes.
That one is easy: there are
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> Not to cause any arguments, but this is sort a standard discussion that
>> gets brought up periodically and I was wondering if there has been any
>> "softening" of the attitudes against an "in place" upgrade, or movement
>> to
>> not having to dump a
Martijn van Oosterhout wrote:
> On Thu, Oct 05, 2006 at 04:39:22PM -0400, Mark Woodward wrote:
>>> Indeed. The main issue for me is that the dumping and replication
>>> setups require at least 2x the space of one db. That's 2x the
>>> hardware, which equals 2x $$$. If there were some tool which modified
>>> the storage while postgres is down, that would save lots of people
>>> lots of money.
Well, there is a TODO item (somewhere only we know...).

Administration
* Allow major upgrades without dump/reload, perhaps using pg_upgrade
  http://momjian.postgresql.org/cgi-bin/pgtodo?pg_upgrade

pg_upgrade has long resisted being born, but that discussion seems to
seed *certain* fundamentals [...]
"Mark Woodward" <[EMAIL PROTECTED]> writes:
> Not to cause any arguments, but this is sort a standard discussion that
> gets brought up periodically and I was wondering if there has been any
> "softening" of the attitudes against an "in place" upgrade, or movement to
> not having to dump and restor
On Thu, Oct 05, 2006 at 04:39:22PM -0400, Mark Woodward wrote:
>> Indeed. The main issue for me is that the dumping and replication
>> setups require at least 2x the space of one db. That's 2x the
>> hardware, which equals 2x $$$. If there were some tool which modified
>> the storage while postgres is down, that would save lots of people
>> lots of money.
> Indeed. The main issue for me is that the dumping and replication
> setups require at least 2x the space of one db. That's 2x the
> hardware, which equals 2x $$$. If there were some tool which modified
> the storage while postgres is down, that would save lots of people
> lots of money.

It's time [...]
On Oct 5, 2006, at 15:46, Mark Woodward wrote:
> Not to cause any arguments, but this is sort of a standard discussion that
> gets brought up periodically and I was wondering if there has been any
> "softening" of the attitudes against an "in place" upgrade, or movement to
> not having to dump and restore for upgrades. [...]
> Mark Woodward wrote:
>> I am currently building a project that will have a huge number of
>> records, 1/2 TB of data. I can't see how I would ever be able to
>> upgrade PostgreSQL on this system.
>
> Slony will help you upgrade (and downgrade, for that matter) with no
> downtime at all [...]
Mark Woodward wrote:
> I am currently building a project that will have a huge number of records,
> 1/2 TB of data. I can't see how I would ever be able to upgrade PostgreSQL
> on this system.

Slony will help you upgrade (and downgrade, for that matter) with no
downtime at all, pretty much. Of course [...]
Not to cause any arguments, but this is sort of a standard discussion that
gets brought up periodically and I was wondering if there has been any
"softening" of the attitudes against an "in place" upgrade, or movement to
not having to dump and restore for upgrades.

I am aware that this is a difficult [...]