So this is working great so far. It's a bit Postgres-specific, but this
is how I did it:
1. Create a "migration ids" table manually:

CREATE TABLE record_migration (
    id INT NOT NULL PRIMARY KEY,
    is_migrated BOOLEAN DEFAULT NULL
);

INSERT INTO record_migration (id) SELECT id FROM record_table;
2. In SQLAlchemy

Each task runner gets a batch of IDs and locks them:
UPDATE record_migration
SET is_migrated = False
FROM (
    SELECT id FROM record_migration
    WHERE is_migrated IS NULL
    ORDER BY id ASC
    LIMIT 100
    FOR UPDATE SKIP LOCKED  -- keeps concurrent runners from claiming the same rows
) subq
WHERE record_migration.id = subq.id
RETURNING record_migration.id;
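As a minimal, self-contained sketch of that claim-a-batch query, here it is through the stdlib sqlite3 module (which also supports UPDATE ... FROM ... RETURNING since SQLite 3.35) so it runs anywhere; the real code would issue the same SQL through SQLAlchemy against Postgres, ideally with FOR UPDATE SKIP LOCKED in the subquery. `claim_batch` and `BATCH_SIZE` are my names, not from the post.

```python
import sqlite3

BATCH_SIZE = 100

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE record_migration (
        id INTEGER NOT NULL PRIMARY KEY,
        is_migrated BOOLEAN DEFAULT NULL
    )
""")
# Pretend record_table had 250 rows.
conn.executemany(
    "INSERT INTO record_migration (id) VALUES (?)",
    [(i,) for i in range(1, 251)],
)

def claim_batch(conn, size=BATCH_SIZE):
    """Atomically flag a batch of unclaimed ids with False (0) and return them."""
    rows = conn.execute(
        """
        UPDATE record_migration
        SET is_migrated = 0
        FROM (
            SELECT id FROM record_migration
            WHERE is_migrated IS NULL
            ORDER BY id ASC
            LIMIT ?
        ) AS subq
        WHERE record_migration.id = subq.id
        RETURNING id
        """,
        (size,),
    ).fetchall()
    conn.commit()
    return [r[0] for r in rows]

first = claim_batch(conn)   # ids 1..100
second = claim_batch(conn)  # ids 101..200
```

Each call hands back a fresh batch; once every row is flagged, the query returns an empty list and the runner knows it is done.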
Then just loop over the IDs and migrate them with SQLAlchemy. When a
record is done, mark it as migrated:
UPDATE record_migration
SET is_migrated = True
WHERE id = %s;
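The per-batch loop might look like the sketch below, again using stdlib sqlite3 so it's runnable as-is; `migrate_record` is a hypothetical stand-in for the actual SQLAlchemy migration of one row.

```python
import sqlite3

def migrate_record(record_id):
    # Hypothetical stand-in for the real per-row SQLAlchemy migration work.
    pass

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE record_migration (id INTEGER PRIMARY KEY, is_migrated BOOLEAN)"
)
# Five ids already claimed by this runner (flagged False = 0).
conn.executemany(
    "INSERT INTO record_migration (id, is_migrated) VALUES (?, 0)",
    [(i,) for i in range(1, 6)],
)

# Migrate each claimed id, then flip its flag to True.
for (record_id,) in conn.execute(
        "SELECT id FROM record_migration WHERE is_migrated = 0").fetchall():
    migrate_record(record_id)
    conn.execute(
        "UPDATE record_migration SET is_migrated = 1 WHERE id = ?",
        (record_id,),
    )
conn.commit()

remaining = conn.execute(
    "SELECT COUNT(*) FROM record_migration WHERE is_migrated = 0"
).fetchone()[0]
```

Marking each row individually (rather than the whole batch at once) is what makes the cleanup step below possible: a crash mid-batch leaves only the unfinished rows flagged False.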
Each task runner has its own batch and "locks" it with a `False` flag.
After everything is done, I can go in and look for any records that are
still `False` (most likely because of an uncaught exception in that batch)
and set them to NULL for another migration run.
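That cleanup pass can be sketched the same way; rows stuck at False go back to NULL so the next run can claim them. Stdlib sqlite3 again for a self-contained demo; in practice it's a one-line Postgres UPDATE.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE record_migration (id INTEGER PRIMARY KEY, is_migrated BOOLEAN)"
)
# One row migrated (True = 1), two stuck mid-batch (False = 0), one untouched.
conn.executemany(
    "INSERT INTO record_migration (id, is_migrated) VALUES (?, ?)",
    [(1, 1), (2, 0), (3, 0), (4, None)],
)

# Reset the stuck rows so another migration run can pick them up.
conn.execute(
    "UPDATE record_migration SET is_migrated = NULL WHERE is_migrated = 0"
)
conn.commit()

claimable = conn.execute(
    "SELECT COUNT(*) FROM record_migration WHERE is_migrated IS NULL"
).fetchone()[0]
```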
--
You received this message because you are subscribed to the Google Groups
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.