Is Firebird supported?
Hi, I am testing out Firebird because I want to use Alembic to set up my unit tests. Is Firebird supported by Alembic? I am getting the error message below. I am running Alembic 0.6.0 and SQLAlchemy 0.8.2:

    File ".../.virtualenv/local/lib/python2.7/site-packages/alembic/ddl/impl.py", line 50, in get_by_dialect
        return _impls[dialect.name]
    KeyError: 'firebird'

Thanks,
Steve
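The KeyError reflects that no Firebird implementation is registered with Alembic 0.6.0. If a minimal, pass-through implementation would be acceptable, one possible workaround is to register one yourself. This is only a sketch, and it assumes the registration convention in alembic/ddl/impl.py, where subclassing DefaultImpl with a __dialect__ attribute adds the class to the _impls registry that get_by_dialect consults:

    # Sketch only: register a pass-through Firebird implementation.
    # Assumes Alembic's DefaultImpl subclasses self-register through
    # their __dialect__ attribute (the mechanism behind get_by_dialect).
    from alembic.ddl.impl import DefaultImpl

    class FirebirdImpl(DefaultImpl):
        __dialect__ = 'firebird'

Importing this class (for example from env.py) before running migrations would make the lookup succeed, though any Firebird-specific DDL quirks would still fall back to the default behavior.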
[sqlalchemy] ORM general concept question (how to combine all stuff together?)
Hello! First of all, I'm new to databases, so please don't hang me from a tree :) I have some basic and simple questions (at least they seem simple to me ^_^). While googling and reading tutorials about ORMs and databases in general, I still cannot build up a logical picture of how it all fits together. Here is a brief example (I will use Pony ORM syntax because it is readable and simple).

Let's begin. First of all, we need to initialize a database, like this:

    db = Database('sqlite', '/path/to/the/test_db.sqlite', create_db=True)

This command says that we've created a database file named test_db.sqlite, based on SQLite. Then, let's say I create two classes (connected via the cars attribute):

    class Person(db.Entity):
        name = Required(str)
        age = Required(int)
        cars = Set(Car)
        ...

    class Car(db.Entity):
        make = Required(str)
        model = Required(str)
        owner = Required(Person)
        ...

Finally, I make a mapping between these classes and test_db.sqlite:

    db.generate_mapping(create_tables=True)

So at this moment I have some classes stored in memory (because I was working in the Python interactive shell) and a physical test_db.sqlite file on my hard drive. This file actually contains empty tables, am I right? Because I haven't created any entities yet. I can keep working in the interactive console to accomplish this task and then just issue a commit() to update the test_db.sqlite file. Up to this point everything looks fine (I hope).

The question arises when I ask myself how all this should work with my code. I mean that I need to write some functions that will update the database data. Suppose I have my_main_routine.py which, among other things, imports some data from xls files. But how do I actually put these xls values into my database? Should the layout look like this?

    my_main_routine.py
    my_db_classes.py    # contains all the classes I created before (i.e. Person and Car)
    test_db.sqlite      # suppose I've already mapped my classes to this file

How do I handle all these db classes inside my main routine? Does it look like this (roughly)?

    # my_main_routine.py
    import my_db_classes

    # db binding commands:
    db.bind('sqlite', '/path/to/the/test_db.sqlite', create_db=True)

    # ...my code for importing xls values...

    # now I need to pass these xls values to my db,
    # so should it be written like this?
    p1 = Person(name='John', age=20)
    p2 = Person(name='Mary', age=23)
    c1 = Car(make='Toyota', model='Prius', owner=p2)
    c2 = Car(make='Ford', model='Explorer', owner=p1)

    # and then just commit()?

Is that all? Do I just work directly with the classes (in SQLAlchemy in particular), or do I need decorators or other machinery?

Furthermore, as far as I understand, in order to work with the database inside my main routine I should (preferably) create separate files, i.e.:

    my_db_classes.py
    my_db_initial_mapper.py   # my db mapping functions go here;
                              # as a result, test_db.sqlite will be created

and then my main code should look like this:

    # my_main_routine.py
    import my_db_classes
    import my_db_initial_mapper

    # binding functions may be placed in the main routine file (right?)
    # ...binding code...
    # ...some xls-related code...
    # ...entity assignment code...
    commit()

Is this right? I think my problem arises because a lot of database examples use the interactive prompt, so it's kind of tricky to put it all together when talking about Python modules.
I hope my question is clear enough; in case it isn't, I'll try my best to write it more clearly next time. Any help will be highly appreciated.

Ivan.

P.S. What about raw SQL code? Should I write it directly in my_main_routine.py, or are there special ways of incorporating it?
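For reference, the same two-file split looks roughly like this in SQLAlchemy's declarative style, including the text() construct for the raw-SQL question. This is only a sketch: every file, class, and variable name here is illustrative rather than taken from the question.

    # my_db_classes.py -- sketch of the Person/Car entities in SQLAlchemy
    from sqlalchemy import Column, Integer, String, ForeignKey, create_engine, text
    from sqlalchemy.orm import relationship, sessionmaker
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Person(Base):
        __tablename__ = 'person'
        id = Column(Integer, primary_key=True)
        name = Column(String, nullable=False)
        age = Column(Integer, nullable=False)
        cars = relationship('Car', backref='owner')

    class Car(Base):
        __tablename__ = 'car'
        id = Column(Integer, primary_key=True)
        make = Column(String, nullable=False)
        model = Column(String, nullable=False)
        person_id = Column(Integer, ForeignKey('person.id'), nullable=False)

    # my_main_routine.py -- binding, table creation, and inserts in one place
    # from my_db_classes import Base, Person, Car   # when split into modules

    engine = create_engine('sqlite:////path/to/the/test_db.sqlite')
    Base.metadata.create_all(engine)          # the create_tables=True analogue
    Session = sessionmaker(bind=engine)
    session = Session()

    # ...xls-importing code would produce plain Python values here...
    p1 = Person(name='John', age=20)
    session.add(p1)
    session.add(Car(make='Ford', model='Explorer', owner=p1))
    session.commit()                          # the commit() analogue

    # raw SQL, when needed, goes through text() rather than bare strings:
    for row in session.execute(text("SELECT name, age FROM person")):
        print(row)

So yes: with the ORM you work directly with the mapped classes, and the Session tracks and flushes your changes; no decorators are required.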
[sqlalchemy] Loading of dependent objects performance issue
Hi,

After having read http://docs.sqlalchemy.org/en/latest/orm/loading_relationships.html I understand there is one case where SQL is not emitted, and I was expecting my case to be that one. I use polymorphism to store different objects in the same table (only one subtype is shown here), as follows:

    class BatchRecord(db.Model):
        __tablename__ = 'batch_record'
        id = db.Column(db.Integer, primary_key=True, nullable=False)
        batch_id = db.Column(db.Integer, db.ForeignKey('batch.id'), nullable=False)
        type = db.Column(db.String(15))
        created = db.Column(db.DateTime, default=datetime.utcnow)
        modified = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)

    class Batch(db.Model):
        __tablename__ = 'batch'
        id = db.Column(db.Integer, primary_key=True, nullable=False)
        source = db.Column(db.String(10))
        created = db.Column(db.DateTime, default=datetime.utcnow)
        modified = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
        batch_records = db.relationship('BatchRecord', cascade='all,delete-orphan')
        accounts = db.relationship('Account',
                                   primaryjoin=and_(Batch.id == BatchRecord.batch_id,
                                                    BatchRecord.type == 'account'),
                                   backref='batch')

    class Account(BatchRecord):
        __tablename__ = 'account'
        id = db.Column(db.Integer, db.ForeignKey('batch_record.id'), primary_key=True, nullable=False)
        uuid = db.Column(UUID, nullable=False)
        role = db.Column(db.String(15), nullable=False)
        first_name = db.Column(db.String(40))
        last_name = db.Column(db.String(40))
        email = db.Column(db.String(80))
        phone = db.Column(db.String(40))
        cohort = db.Column(db.String(255))

The code I use is similar to the following:

    (1) batch = Batch.query.filter(Batch.id == 50048).first()
    (2) a0 = batch.accounts[0]
    (3) a0.batch.source
    (4) a1 = batch.accounts[1]
    (5) a1.batch.source

At (1) SQLAlchemy queries the database (PostgreSQL) to get the batch object:

    SELECT batch.id AS batch_id, batch.source AS batch_source,
           batch.created AS batch_created, batch.modified AS batch_modified
    FROM batch
    WHERE batch.id = 50048 LIMIT 1

At (2) it runs the following query to get the account objects:

    SELECT account.id AS account_id, batch_record.id AS batch_record_id,
           batch_record.batch_id AS batch_record_batch_id,
           batch_record.type AS batch_record_type,
           batch_record.created AS batch_record_created,
           batch_record.modified AS batch_record_modified,
           account.uuid AS account_uuid, account.role AS account_role,
           account.first_name AS account_first_name,
           account.last_name AS account_last_name,
           account.email AS account_email, account.phone AS account_phone,
           account.cohort AS account_cohort
    FROM batch_record JOIN account ON batch_record.id = account.id
    WHERE 50048 = batch_record.batch_id AND batch_record.type = 'account'

The issue is at (3). It emits the following new query:

    SELECT batch.id AS batch_id, batch.source AS batch_source,
           batch.created AS batch_created, batch.modified AS batch_modified
    FROM batch
    WHERE batch.id = 50048 AND 'account' = 'account'

SQLAlchemy should know, using account.batch_id (which was stored on the object at (2)), that it references the batch object already loaded at (1), and should not query the database again for information it already has.
At (4) it does not query the database, but at (5) it queries the database again for the same object:

    SELECT batch.id AS batch_id, batch.source AS batch_source,
           batch.created AS batch_created, batch.modified AS batch_modified
    FROM batch
    WHERE batch.id = 50048 AND 'account' = 'account'

The problem is that we have thousands of objects, and SQLAlchemy queries the database once per object :( FYI, we're using the stable release (0.9.8), but I've tested the latest pre-release version (1.0.0b1) and the behavior is exactly the same.
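For anyone reproducing this, the per-step SQL shown above can be observed by enabling statement logging on the engine. A minimal sketch (the connection URL is a placeholder):

    # Sketch: echo every statement the Engine emits, to spot the
    # extra per-object SELECTs described above.
    from sqlalchemy import create_engine
    engine = create_engine('postgresql://user:pass@localhost/dbname', echo=True)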
Re: [sqlalchemy] unhandled data type cx_Oracle.LOB
Ah! I will keep track of it - for now, I will just ignore the AttributeError exception. I am loving SQLAlchemy, thanks for creating and maintaining it!

-GP

On Monday, March 16, 2015 at 11:37:36 PM UTC-4, Michael Bayer wrote:

GP pandit...@gmail.com wrote:

That's what I thought, and it works, but there seems to be a difference in how the result set is handled when you select a LOB column. Here is a basic script that selects records from a source table which has 36 rows. It fetches 10 records at a time.

    from sqlalchemy import Table, select, create_engine, MetaData

    engine = create_engine('oracle+cx_oracle://xxx:yyy@zzz')
    conn = engine.connect()
    metadata = MetaData()
    metadata.bind = conn
    source_table = Table('contract_cancellation_test', metadata, autoload=True)
    target_table = Table('contract_cancellation_test_s', metadata, autoload=True)

    # Query 1: without selecting the LOB: works fine
    #select_query = select([source_table.c.contract_id, source_table.c.cancel_dt])

    # Query 2: selecting the cancellation_obj LOB column: fails in the last
    # fetchmany() because query_rs is closed
    select_query = select([source_table.c.contract_id, source_table.c.cancel_dt,
                           source_table.c.cancellation_obj])

    query_rs = conn.execute(select_query)
    print("executing select")
    loop_count = 1
    while True:
        rows = query_rs.fetchmany(size=10)
        if not rows:  # we are done if the result set list is empty
            query_rs.close()
            break
        row_dict = [dict(l_row) for l_row in rows]
        insert_target_stmt = target_table.insert()
        print("inserting for loop = {}".format(loop_count))
        insert_target_stmt.execute(row_dict)
        loop_count += 1
    print("done")
    conn.close()

Query 1 does not have a LOB-type column, and it works fine. Query 2 has a LOB-type column, and it fails in the fetchmany() call after the last set is retrieved. Here is the output:

    - results query 1 -
    executing select
    inserting for loop = 1
    inserting for loop = 2
    inserting for loop = 3
    inserting for loop = 4
    done

    - results query 2 -
    executing select
    inserting for loop = 1
    inserting for loop = 2
    inserting for loop = 3
    inserting for loop = 4
    Traceback (most recent call last):
      File "/home/xxx/.local/lib/python3.4/site-packages/sqlalchemy/engine/result.py", line 733, in _fetchone_impl
        return self.cursor.fetchone()
    AttributeError: 'NoneType' object has no attribute 'fetchone'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/xxx/myprojects/python/sync/test_lob_1.py", line 23, in <module>
        rows = query_rs.fetchmany(size=10)
      ...
      File "/home/xxx/.local/lib/python3.4/site-packages/sqlalchemy/engine/result.py", line 759, in _non_result
        raise exc.ResourceClosedError("This result object is closed.")
    sqlalchemy.exc.ResourceClosedError: This result object is closed.

As long as I can check that the result set is empty and break from the loop, I am fine. Any better way of handling this?

That's a bug in the Oracle-specific result proxy. I've created https://bitbucket.org/zzzeek/sqlalchemy/issue/3329/fetchmany-fails-on-bufferedcolproxy-on for that.

Thanks GP

On Monday, March 16, 2015 at 10:08:47 PM UTC-4, Michael Bayer wrote:

GP pandit...@gmail.com wrote:

OK, is "cancellation_obj" a column object with CLOB as the datatype?

Yes, that's how it's defined in the database. Because of the dynamic nature of the code, I was using append_column without specifying a column type. I made changes to define each column in table.c.column_name format rather than just using Column('column name').
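A sketch of the "ignore the exception" interim workaround mentioned at the top of the thread, pending the fix for issue 3329. It simply treats the failing post-close fetchmany() as end-of-results; ResourceClosedError is the outer exception shown in the traceback above:

    # Sketch of the interim workaround: treat the failing post-close
    # fetchmany() as "no more rows" until the result-proxy bug is fixed.
    from sqlalchemy import exc

    while True:
        try:
            rows = query_rs.fetchmany(size=10)
        except exc.ResourceClosedError:
            break  # the buffered result proxy already closed itself
        if not rows:
            query_rs.close()
            break
        # ...insert rows into the target table as in the script above...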
This way, I can make sure column data types are included with the column definitions, without having to specify the data type explicitly for each column. It's interesting that I used the one way (out of three possible ways) that wasn't 'right', but it's all good now :) Now onto changing from fetchmany() to fetchone() - since LOBs are pretty much forcing me to use fetchone().

OK, if you were to get the CLOB types working correctly, SQLAlchemy's result proxy works around that issue too, by fetching rows in chunks and converting the LOB objects to strings while they are still readable, so you could keep the fetchmany() calls.

Thank you for your help!

GP

On Monday, March 16, 2015 at 5:54:54 PM UTC-4, Michael Bayer wrote:

GP pandit...@gmail.com wrote:

So that's what was happening. This select construct fails:

    select_query = select()
    select_query.append_column(contract_id)
    select_query.append_column(cancel_dt)
    select_query.append_column(cancellation_obj)
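A sketch of the working variant described above: building the select() from the reflected Table.c columns, so each column carries its database type, which is what lets the result proxy apply its LOB-to-string conversion. Table and column names are taken from the script earlier in the thread:

    # Sketch: pull columns from the reflected Table object so their
    # types (including the CLOB) come along with them.
    select_query = select([
        source_table.c.contract_id,
        source_table.c.cancel_dt,
        source_table.c.cancellation_obj,  # CLOB; type known via autoload
    ])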
Re: [sqlalchemy] Loading of dependent objects performance issue
Cyril Scetbon cscet...@gmail.com wrote:

Hi, after having read http://docs.sqlalchemy.org/en/latest/orm/loading_relationships.html I understand there is one case where SQL is not emitted, and I was expecting my case to be that one. I use polymorphism to store different objects in the same table (only one subtype shown here), as follows:

    class BatchRecord(db.Model):
        __tablename__ = 'batch_record'
        id = db.Column(db.Integer, primary_key=True, nullable=False)
        batch_id = db.Column(db.Integer, db.ForeignKey('batch.id'), nullable=False)
        type = db.Column(db.String(15))
        created = db.Column(db.DateTime, default=datetime.utcnow)
        modified = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)

    class Batch(db.Model):
        __tablename__ = 'batch'
        id = db.Column(db.Integer, primary_key=True, nullable=False)
        source = db.Column(db.String(10))
        created = db.Column(db.DateTime, default=datetime.utcnow)
        modified = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
        batch_records = db.relationship('BatchRecord', cascade='all,delete-orphan')
        accounts = db.relationship('Account',
                                   primaryjoin=and_(Batch.id == BatchRecord.batch_id,
                                                    BatchRecord.type == 'account'),
                                   backref='batch')

this primaryjoin is not necessary. Because you are creating a relationship to "Account", it will query out to a JOIN of the batch_record and account tables automatically, which will limit the rows to those with a type of "account". Only if you were trying to exclude rows from some other class that is a subclass of "Account" would this be at all necessary, and that would be unusual.

    class Account(BatchRecord):
        __tablename__ = 'account'
        id = db.Column(db.Integer, db.ForeignKey('batch_record.id'), primary_key=True, nullable=False)
        uuid = db.Column(UUID, nullable=False)
        role = db.Column(db.String(15), nullable=False)
        first_name = db.Column(db.String(40))
        last_name = db.Column(db.String(40))
        email = db.Column(db.String(80))
        phone = db.Column(db.String(40))
        cohort = db.Column(db.String(255))

The code I use is similar to the following:

    (1) batch = Batch.query.filter(Batch.id == 50048).first()
    (2) a0 = batch.accounts[0]
    (3) a0.batch.source
    (4) a1 = batch.accounts[1]
    (5) a1.batch.source

At (1) SQLAlchemy queries the database (PostgreSQL) to get the batch object:

    SELECT batch.id AS batch_id, batch.source AS batch_source,
           batch.created AS batch_created, batch.modified AS batch_modified
    FROM batch
    WHERE batch.id = 50048 LIMIT 1

At (2) it runs the following query to get the account objects:

    SELECT account.id AS account_id, batch_record.id AS batch_record_id,
           batch_record.batch_id AS batch_record_batch_id,
           batch_record.type AS batch_record_type,
           batch_record.created AS batch_record_created,
           batch_record.modified AS batch_record_modified,
           account.uuid AS account_uuid, account.role AS account_role,
           account.first_name AS account_first_name,
           account.last_name AS account_last_name,
           account.email AS account_email, account.phone AS account_phone,
           account.cohort AS account_cohort
    FROM batch_record JOIN account ON batch_record.id = account.id
    WHERE 50048 = batch_record.batch_id AND batch_record.type = 'account'

The issue is at (3).
It emits the following new query:

    SELECT batch.id AS batch_id, batch.source AS batch_source,
           batch.created AS batch_created, batch.modified AS batch_modified
    FROM batch
    WHERE batch.id = 50048 AND 'account' = 'account'

This is occurring because the relationship is not aware that this is a so-called "simple many-to-one", for which it can do a straight primary key lookup in the identity map. Because your custom "primaryjoin" condition is also shared on the many-to-one side, it assumes there is special SQL that must be emitted to ensure the correct results, and a simple identity lookup is not possible.

The two configurations that will solve this issue are:

    accounts = db.relationship('Account', backref='batch')

or alternatively, if you really wanted to keep that primaryjoin:

    accounts = db.relationship('Account',
                               primaryjoin=and_(Batch.id == BatchRecord.batch_id,
                                                BatchRecord.type == 'account'),
                               backref=backref('batch', primaryjoin=None))

SQLAlchemy should know, using account.batch_id (which was stored on the object at (2)), that it references the batch object already loaded at (1), and should not query the database again for information it already has.

At (4) it does not query the database, but at (5) it queries the database again for the same object:

    SELECT batch.id AS batch_id, batch.source AS batch_source,
           batch.created AS batch_created, batch.modified AS batch_modified
    FROM batch
    WHERE batch.id = 50048 AND 'account' = 'account'

The problem is that we have thousands of objects, and SQLAlchemy queries the database once per object.
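A sketch of what the simplified configuration buys, based on the identity-map behavior described in the reply above; the assert line is illustrative and not from the thread:

    # Sketch: with the plain backref, the "batch" attribute is a simple
    # many-to-one and resolves from the Session's identity map.
    batch = Batch.query.filter(Batch.id == 50048).first()  # one SELECT
    a0 = batch.accounts[0]   # one SELECT (the batch_record/account JOIN)
    a1 = batch.accounts[1]   # no SQL; loaded by the same SELECT as a0
    assert a0.batch is batch and a1.batch is batch  # no SQL: identity map hits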