Hi,
I am trying to copy a table (or several tables) from one database to
another. I found a code snippet somewhere on the internet and modified
it slightly: it reads rows sequentially from the existing table and
writes them to the new one, committing one slice at a time. The code
worked for smaller tables (up to several thousand rows). However, when
working with a table of more than a million rows, the session commit
fails more and more often as the number of copied rows grows. The rows
committed in the previous loop are still stored in the session, so when
the session fills with the rows of the next slice, the old rows are
still there even though they should already have been committed.
Is there a problem with the code, or does this depend on the RDBMS
(Postgres) configuration?
Is there any other way to copy a table from one database to another
(apart from using pg_dump)? A rough sketch of the direction I have been
considering follows the code below.
Thanks
___________________________________________________________________________________________


import time

from sqlalchemy import MetaData, Table
from sqlalchemy.ext.declarative import declarative_base


def pull_data(from_db, to_db, tables, slicesize):
    # create_meta_session_engine() is a helper (not shown here) that
    # returns a (MetaData, Session, Engine) tuple for a connection string.
    smeta, source, sengine = create_meta_session_engine(from_db)
    smeta = MetaData(bind=sengine)
    dmeta, destination, dengine = create_meta_session_engine(to_db)
    for table_name in tables:
        # Reflect the source table and create it on the destination.
        table = Table(table_name, smeta, autoload=True)
        table.metadata.create_all(dengine)
        NewRecord = quick_mapper(table)
        columns = table.columns.keys()
        rowquery = source.query(table)
        count = rowquery.count()
        loop = 0
        # Copy full slices of `slicesize` rows, committing once per slice.
        while count - (loop * slicesize) > slicesize:
            oneslice = source.query(table).slice(
                slicesize * loop, slicesize * (loop + 1)).all()
            for record in oneslice:
                data = dict((str(column), getattr(record, column))
                            for column in columns)
                destination.add(NewRecord(**data))
            try:
                destination.commit()
                loop += 1
            except Exception:
                # Retry the same slice after a pause if the commit fails.
                destination.rollback()
                time.sleep(25)
            print loop
        # Copy the remaining rows after the last full slice.
        lastslice = source.query(table).slice(slicesize * loop, count + 1).all()
        lastpending = True
        lastloop = 0
        while lastpending:
            for record in lastslice:
                data = dict((str(column), getattr(record, column))
                            for column in columns)
                destination.add(NewRecord(**data))
            try:
                destination.commit()
                lastpending = False
            except Exception:
                destination.rollback()
                time.sleep(25)
                lastloop += 1
            print lastloop
    source.close()
    destination.close()


def quick_mapper(table):
    # Build a throwaway declarative class mapped to the reflected table.
    Base = declarative_base()

    class GenericMapper(Base):
        __table__ = table

    return GenericMapper
___________________________________________________________________________________________
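
As mentioned above, here is a rough, untested sketch of the alternative I
have been considering: copying the rows with Core inserts and fetchmany()
chunks instead of going through the ORM session, so that nothing
accumulates between commits. The function name, the chunk_size value, and
the sengine/dengine engines (the same ones returned by
create_meta_session_engine()) are just placeholders.
___________________________________________________________________________________________

from sqlalchemy import MetaData, Table


def copy_table_core(table_name, sengine, dengine, chunk_size=1000):
    # Reflect the table from the source engine and create it on the destination.
    smeta = MetaData(bind=sengine)
    table = Table(table_name, smeta, autoload=True)
    table.metadata.create_all(dengine)

    sconn = sengine.connect()
    dconn = dengine.connect()
    try:
        result = sconn.execute(table.select())
        while True:
            rows = result.fetchmany(chunk_size)
            if not rows:
                break
            # executemany-style insert of one chunk of rows at a time;
            # no ORM objects are kept around between chunks.
            dconn.execute(table.insert(), [dict(row) for row in rows])
    finally:
        sconn.close()
        dconn.close()
___________________________________________________________________________________________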
