CAUTION: Updating the system catalogs directly is very dangerous and may
cause data corruption -- int4 and int8 have different on-disk widths, so
existing rows can be misread after the change.
         *** DO THIS IN A TEST DATABASE FIRST ***

-- 1. Get the oid for int8 (bigint)
SELECT t.oid
  FROM pg_type t
 WHERE t.typname = 'int8';
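As a shortcut, a regtype cast resolves the same oid without querying pg_type by hand (built-in type oids are fixed, so int8 is 20):

```sql
-- Same result as the query above
SELECT 'int8'::regtype::oid;  -- 20
```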

-- 2. Get the oid for your table and the column's current type
--    (note: "table" must be double-quoted, as TABLE is a reserved word)
SELECT c.oid,
       c.relname AS "table",
       a.attname,
       a.atttypid
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  JOIN pg_attribute a ON a.attrelid = c.oid
 WHERE c.relname = 'dogs'
   AND n.nspname = 'public'
   AND a.attname = 'good_watchdog'
   AND NOT a.attisdropped;

-- 3. Swap the column's type oid inside a transaction
BEGIN;

UPDATE pg_attribute a
   SET atttypid = <t.oid from 1>
 WHERE a.attrelid = <c.oid from 2>
   AND a.attname = '<your column to change>';

COMMIT;
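For what it's worth, the two lookups can be folded into the UPDATE itself with regclass/regtype casts, which let Postgres resolve the oids for you. Same caveats apply; the table and column names below are the ones from the example above:

```sql
-- Single-statement equivalent of steps 1-3.
-- Still a direct catalog hack: test on a throwaway database first.
BEGIN;

UPDATE pg_attribute a
   SET atttypid = 'int8'::regtype::oid       -- oid of bigint
 WHERE a.attrelid = 'public.dogs'::regclass  -- oid of the table
   AND a.attname = 'good_watchdog'
   AND NOT a.attisdropped;

COMMIT;
```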

On Fri, Jun 5, 2015 at 12:23 PM, Casey Deccio <ca...@deccio.net> wrote:

> I have a database in which one table references the primary key of
> another.  The type of the primary key was initially int, but I changed it
> to bigint.  However, I forgot to update the type of a column that
> references it.  So, I've initiated "ALTER TABLE foo ALTER COLUMN bar TYPE
> bigint", where foo/bar is the table/column referencing the primary key that
> is now of type bigint.
>
> However, with 2^31 rows, it is taking a "long" time to write the rows
> (it's been 12 hours).  Is there a more efficient way to do this?  Even
> if/when this one finishes, there are other column types that I have to
> update.  This update effectively locked me out of all access to the data
> anyway, so I don't foresee any concern of writes that might affect
> integrity.
>
> Cheers,
> Casey
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
