On Sun, May 27, 2012 at 12:31:09PM -0400, Andrew Dunstan wrote:
> 
> 
> On 05/27/2012 11:31 AM, Tom Lane wrote:
> >
> >
> >Having said that, I've got to also say that I think we've fundamentally
> >blown it with the current approach to upgrading extensions.  Because we
> >dump all the extension member objects, the extension contents have got
> >to be restorable into a new database version as-is, and that throws away
> >most of the flexibility that we were trying to buy with the extension
> >mechanism.  IMO we have *got* to get to a place where both pg_dump and
> >pg_upgrade dump extensions just as "CREATE EXTENSION", and the sooner
> >the better.  Once we have that, this type of issue could be addressed by
> >having different contents of the extension creation script for different
> >major server versions --- or maybe even the same server version but
> >different python library versions, to take something on-point for this
> >discussion.  For instance, Andrew's problem could be dealt with if the
> >backport were distributed as an extension "json-backport", and then all
> >that's needed in a new installation is an empty extension script of that
> >name.
> 
> 
> 
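[To make that idea concrete: an extension is just a control file plus a versioned SQL script, so the backport case could look roughly like the sketch below. The file names and contents are illustrative, not something that ships today; the name is spelled json_backport here only so it needs no quoting.]

    # json_backport.control -- hypothetical control file, identical on every
    # server version
    comment = 'backport of the built-in json type for pre-9.2 servers'
    default_version = '1.0'
    relocatable = true

    -- json_backport--1.0.sql as shipped for 9.2 and later: intentionally
    -- empty, because the json type is already built in.  The script shipped
    -- for a 9.1 server would instead create the type and its I/O functions.

With something like that in place, a dump containing only CREATE EXTENSION json_backport restores cleanly on either server version, which is the flexibility being argued for here.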
> It sounds nice, but we'd have to make pg_upgrade drop its current
> assumption that libraries wanted in the old version will be named
> the same (one for one) as the libraries wanted in the new version.
> Currently it looks for every shared library named in probin (other
> than plpgsql.so) in the old cluster and tries to LOAD it in the new
> cluster, and errors out if it can't.
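[For reference, the check Andrew describes amounts to roughly the following; this is a sketch of the idea, not pg_upgrade's exact query, and the library name in the LOAD is purely illustrative.]

    -- Run in each database of the old cluster: find every library that
    -- C-language functions point at, skipping plpgsql.
    SELECT DISTINCT probin
      FROM pg_catalog.pg_proc
     WHERE probin IS NOT NULL
       AND prolang = (SELECT oid FROM pg_catalog.pg_language
                       WHERE lanname = 'c')
       AND probin <> '$libdir/plpgsql';

    -- Then, connected to the new cluster, each result is tried with LOAD;
    -- a failure here is what makes pg_upgrade error out today.
    LOAD '$libdir/some_library';   -- illustrative name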

I didn't fully understand this. Are you saying pg_upgrade will check
some extension config file for the library name?

-- 
  Bruce Momjian  <br...@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
