On 4/6/19 5:47 PM, senor wrote:
Thanks, Tom, for the explanation. I assumed it was my ignorance of how the schema 
is handled that made this look like a problem that had already been solved and 
that I was simply missing something.

I fully expected the "You're Doing It Wrong" part. That is out of my control 
but not beyond my influence.

I suspect I know the answer to this but have to ask. Using a simplified example 
where there are 100K sets of 4 tables, each set representing the output of a single 
job, are there any shortcuts to upgrading that would circumvent dumping the 
entire schema? I'm sure a different DB design would be better, but that's not 
what I'm working with.

An answer is going to depend on more information:

1) What is the time frame for moving from one version to another?
Both the setup and the actual downtime.

2) There are 500,000+ tables, but what is the amount of data involved?

3) Are all the tables active?

4) How are the tables distributed across databases in the cluster and across schemas in each database? A rough per-schema count, along the lines of the query sketched below, would help.
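
Something like the following catalog query is a minimal sketch for answering 4). It assumes plain (non-partitioned) tables are what dominate, and it has to be run once in each database of the cluster, since pg_class is per-database:

    -- Count ordinary tables per schema in the current database.
    -- relkind = 'r' counts plain tables only; run once per database.
    SELECT current_database() AS database,
           n.nspname          AS schema,
           count(*)           AS tables
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
    GROUP BY 1, 2
    ORDER BY tables DESC;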



Thanks

________________________________________
From: Ron <ronljohnso...@gmail.com>
Sent: Saturday, April 6, 2019 4:57 PM
To: pgsql-general@lists.postgresql.org
Subject: Re: pg_upgrade --jobs

On 4/6/19 6:50 PM, Tom Lane wrote:

senor <frio_cerv...@hotmail.com> writes:


[snip]

The --link option to pg_upgrade would be so much more useful if it
weren't still bound to serially dumping the schemas of half a million
tables.
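
(For reference, and making no claims about the original poster's exact setup: a link-mode run looks something like the sketch below, with placeholder paths and versions. --jobs parallelizes the per-database dump/restore and the per-tablespace file linking, but the schema dump within any single database is still one serial pg_dump pass, which is the bottleneck being described here.)

    # Hypothetical paths and versions, shown only to illustrate the flags.
    pg_upgrade \
      --old-bindir=/usr/pgsql-9.6/bin   --new-bindir=/usr/pgsql-11/bin \
      --old-datadir=/var/lib/pgsql/9.6/data \
      --new-datadir=/var/lib/pgsql/11/data \
      --link --jobs=8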



To be perfectly blunt, if you've got a database with half a million
tables, You're Doing It Wrong.

Heavy (really heavy) partitioning?

--
Angular momentum makes the world go 'round.





--
Adrian Klaver
adrian.kla...@aklaver.com

