Got it.

Thank you, that's very helpful: we could delete quite a few of the rows
before we do the operation, cutting way down on the size of the table
before we issue the update.  Trimming the table down seems obvious
enough, but it's good to have confirmation that it will help
significantly.  I've also discovered quite a few indexes that are
useless, so dropping those will speed things up too.
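Per the suggestions above, a rough sketch of the pre-trim (table, column, and index names here are hypothetical placeholders, not from the thread):

```sql
-- Hypothetical names throughout; adapt to the real schema.
DELETE FROM accounts WHERE created_at < '2010-01-01';  -- prune rows no longer needed

DROP INDEX IF EXISTS accounts_unused_idx;  -- drop indexes identified as useless

-- The actual widening; this rewrites every remaining row.
ALTER TABLE accounts ALTER COLUMN id TYPE bigint;
```

Since the ALTER ... TYPE itself rewrites the whole table, there's no need for a VACUUM FULL beforehand; the rewrite compacts the table as a side effect.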

Looking online, I see that a query progress indicator is a commonly
requested feature but isn't yet implemented, so it sounds like my best
bet is to clone the db on similar hardware, take all the advice offered
here, and just see how it performs.
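A minimal sketch of that dry run on a clone (database and table names are hypothetical; assumes standard pg_dump/pg_restore and shell access to comparable hardware):

```shell
# Hypothetical names; dump the production db and restore it to a scratch database.
pg_dump -Fc -f mydb.dump mydb
createdb mydb_test
pg_restore -d mydb_test mydb.dump

# Time the column widening against the clone before touching production.
time psql -d mydb_test -c "ALTER TABLE accounts ALTER COLUMN id TYPE bigint;"
```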

Thanks to everyone for the feedback,
Carson

On Tue, Mar 13, 2012 at 6:32 PM, John R Pierce <pie...@hogranch.com> wrote:

> On 03/13/12 6:10 PM, Carson Gross wrote:
>
>> As a follow up, is the upgrade from integer to bigint violent?  I assume
>> so: it has to physically resize the column on disk, right?
>>
>>
> I think we've said several times: any ALTER TABLE ADD/ALTER COLUMN like
> that will cause every single tuple (row) of the table to be updated.
> When rows are updated, the new row is written, then the old row is flagged
> for eventual vacuuming.
>
>
>
> --
> john r pierce                            N 37, W 122
> santa cruz ca                         mid-left coast
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>