On 6/8/2017 5:53 PM, marcinha rocha wrote:
> Hi guys! I have the following queries, which will basically select
> data, insert it onto a new table and update a column on the original
> table.
I'm sure your example is a gross simplification of what you're really
doing, but if that's really all you're doing, why not do it all at once
instead of a row at a time?
BEGIN;
insert into tableb (id) select id from tablea;
update tablea set migrated=true;
COMMIT;
That's far more efficient than the row-at-a-time iterative solution you
showed.
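For what it's worth, the two statements can even be collapsed into one, so no explicit transaction block is needed. This is a sketch assuming PostgreSQL 9.1 or later, where data-modifying CTEs are available:

```sql
-- Mark every unmigrated row and copy its id into tableb in one statement:
-- the UPDATE's RETURNING clause feeds the ids straight to the INSERT.
WITH moved AS (
    UPDATE tablea SET migrated = true
    WHERE migrated = false
    RETURNING id
)
INSERT INTO tableb (id)
SELECT id FROM moved;
```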
You're right, that is just an example.
I'm basically using a CTE to select the data and then inserting some rows into
a new table.
I just don't know how to tell my function to process 2000 records at a time,
so that when I call it again it will "know" where to start from.
Maybe I already have everything I need?
UPDATE tablea a SET migrated = true WHERE a.id = row.id;
On my original select, the row will have migrated = false. Maybe all I need to
add is a LIMIT 2000 and the query will do the rest?
Example:
CREATE OR REPLACE FUNCTION migrate_data()
RETURNS integer AS $$
DECLARE
    row record;
    num_rows integer := 0;
BEGIN
    -- The query is static, so a plain FOR loop works; EXECUTE is only
    -- needed for dynamically built SQL.
    FOR row IN
        SELECT id
        FROM tablea
        WHERE migrated = false
    LOOP
        INSERT INTO tableb (id)
        VALUES (row.id);
        UPDATE tablea a SET migrated = true WHERE a.id = row.id;
        num_rows := num_rows + 1;
    END LOOP;
    RETURN num_rows; -- the number of processed rows
END;
$$ LANGUAGE plpgsql;
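To answer the batching question directly: since the query only ever selects rows with migrated = false, a plain LIMIT is enough — each call naturally picks up whatever the previous call left unprocessed, so the function never needs to remember a position. A sketch of that batched variant, assuming PostgreSQL 9.1+ for the data-modifying CTE (the batch_size parameter is my addition, not from the original function):

```sql
CREATE OR REPLACE FUNCTION migrate_data(batch_size integer DEFAULT 2000)
RETURNS integer AS $$
DECLARE
    num_rows integer;
BEGIN
    -- Move one batch in a single statement: pick up to batch_size
    -- unmigrated rows, mark them, and insert their ids into tableb.
    WITH batch AS (
        SELECT id FROM tablea
        WHERE migrated = false
        LIMIT batch_size
    ), marked AS (
        UPDATE tablea a SET migrated = true
        FROM batch b WHERE a.id = b.id
        RETURNING a.id
    )
    INSERT INTO tableb (id) SELECT id FROM marked;

    GET DIAGNOSTICS num_rows = ROW_COUNT;  -- rows inserted this batch
    RETURN num_rows;  -- 0 means nothing is left to migrate
END;
$$ LANGUAGE plpgsql;
```

Call it repeatedly — SELECT migrate_data(); — until it returns 0.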