Re: [sqlalchemy] Unit testing, mocking and dependency injection with SA Declarative.
Regarding the Postgres-in-production and SQLite-in-testing differences: we've previously had problems with Postgres's ARRAY type, which doesn't exist in SQLite. Also watch out for the differences in how they interpret GROUP BY and DISTINCT - I've had this problem only today, where a complicated query I wrote and tested under unittest (SQLite) doesn't work on the development localhost (Postgres).

Alex, thanks very much for this blog post - very helpful. I was just talking about this strategy with my colleague today. I've previously switched over to Postgres in test to address the problems with ARRAY - however I had to bring the project back over again because of speed issues - it was taking far too long to set up and tear down the DB for each test. It looks like your solution will be much quicker than our previous one because of your strategy with transactions.

Cheers, James

On Wednesday, September 11, 2013 6:12:07 PM UTC+1, Alex Grönholm wrote:

I wrote a blog post on this very topic recently: http://alextechrants.blogspot.fi/2013/08/unit-testing-sqlalchemy-apps.html

On Tuesday, September 10, 2013 19:43:35 UTC+3, Toph Burns wrote:

Could you use an in-memory SQLite DB for your testing? For our applications, we have an initialization function that loads the database connection strings from a config (.ini) file, passing those on to create_engine. In production it's a PostgreSQL connection string; for test, it's 'sqlite:///:memory:'.

Toph Burns | Software Engineer
5885 Hollis St. Suite 100, Emeryville, CA 94608
510-597-4797
bu...@amyris.com

*From:* sqlal...@googlegroups.com [sqlal...@googlegroups.com] on behalf of Michel Albert [exh...@gmail.com]
*Sent:* Tuesday, September 10, 2013 1:46 AM
*To:* sqlal...@googlegroups.com
*Subject:* [sqlalchemy] Unit testing, mocking and dependency injection with SA Declarative.

I am trying to wrap my head around how to do Dependency Injection with SQLAlchemy and I am walking in circles.
I want to be able to mock out SA for most of my tests. I trust SA and don't want to test serialisation into the DB; I just want to test my own code. So I was thinking to do dependency injection and mock out SA during testing. But I don't know what to mock out, and how and when to set up the session properly, without doing it at the module level (which causes unwanted side effects just by importing the module).

The only solution which comes to mind is to have one singleton which deals with that. But that feels very unpythonic to me and I am wondering if there's a better solution.

I also saw that create_engine has an optional `module` kwarg, which I could mock out. But then SA begins complaining that the return types are not correct, and I don't want to specify return values for every possible db-module call - that's way out of scope of my tests. I am not calling anything on the db-module; that's SA's job, and, as said, I already trust SA.

Whenever I work on this I always run into the sessionmaker initialisation as well. The examples do this at the module level, which really makes me feel uneasy.

Any tips? Just prodding me in the right direction might help me out enough.

-- You received this message because you are subscribed to the Google Groups sqlalchemy group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To post to this group, send email to sqlalchemy@googlegroups.com. Visit this group at http://groups.google.com/group/sqlalchemy. For more options, visit https://groups.google.com/groups/opt_out.
Re: [sqlalchemy] Mutable column_properties
Thanks Michael

a column_property() against a SQL expression by definition is not writable. Your table doesn't have a CAST trigger inside of it for when an int is written to it that would convert it back to a string. In this case, since data is already loaded fully as a single column, you might as well just use a hybrid.

I see what you mean about column_property not being writable. In my actual application I won't be loading the whole 'data'; what I am really trying to do is make a sort of pseudo-column, so that the SQL that gets generated is something like

    SELECT id, data->'foo' AS foo FROM thing

And 'foo' gets mapped as if it were a normal column, so it's updateable too. I can deal with the casting on top of that, I think. Does that make any sense? Perhaps I am trying to be too much of an Alchemist for my own good here :)

All the best, Philip
Re: [sqlalchemy] dynamic schema with postgresql
On Sep 11, 2013, at 10:36 PM, Joe Martin jandos...@gmail.com wrote:

Thank you for your reply. Then I thought the following would work:

    company_schema = 'c' + str(company_id)
    db.session.execute(CreateSchema(company_schema))
    db.session.commit()

    meta = db.MetaData(bind=db.engine)
    for table in db.metadata.tables.items():
        if table[1].name == 'customer':
            table[1].tometadata(meta, company_schema)
        elif table[1].name == 'company':
            table[1].tometadata(meta, 'app')  # or table[1].tometadata(meta)?
    print meta.tables.items()
    meta.create_all()

Now I see print meta shows 2 tables, but somehow the error is still the same:

    NoReferencedTableError: Foreign key associated with column 'customer.company_id' could not find table 'company' with which to generate a foreign key to target column 'id'

However, with my original metadata I was able to create both tables: app.company and public.customer. So I'm confused by the issue. Thanks for your time.

The way it works is this: a Table object that has a ForeignKey() object inside of it is going to want to find the target column that this ForeignKey points to. There are two ways to specify it. One is via string:

    ForeignKey("schemaname.tablename.columnname")

If you use that approach, the MetaData for this table *must have a Table with exactly that target name present*. The table[1].name == 'company' conditional you have seems to be doing this, but then the error you're showing me doesn't include 'app' inside of it, so perhaps you want to do tometadata(meta, None), not sure.

The other way, which might make this all easier, is to put the actual Column object in the ForeignKey:

    ForeignKey(my_company_table.c.id)

If you do it that way, then you can go back to your original approach where you don't copy 'company' at all - if ForeignKey is given the Column directly, then no string lookup in the MetaData is needed and you can point to any Table anywhere.
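The second suggestion - passing the Column object itself rather than a string - can be sketched like this (table and schema names are illustrative, standing in for the 'company'/'customer' setup above):

```python
# Sketch: ForeignKey given a Column object resolves directly, even across
# two different MetaData collections, with no string lookup.
from sqlalchemy import MetaData, Table, Column, Integer, String, ForeignKey

# shared metadata holding the 'app.company' table
shared_meta = MetaData()
company = Table(
    "company", shared_meta,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
    schema="app",
)

# per-tenant metadata: 'customer' points at company.c.id directly
tenant_meta = MetaData()
customer = Table(
    "customer", tenant_meta,
    Column("id", Integer, primary_key=True),
    Column("company_id", Integer, ForeignKey(company.c.id)),  # Column, not string
)

# the foreign key resolves even though 'company' was never copied into tenant_meta
fk = list(customer.c.company_id.foreign_keys)[0]
assert fk.column is company.c.id
assert fk.column.table.schema == "app"
```

With the string form, this would raise NoReferencedTableError at create_all() time unless a table named exactly "app.company" were present in tenant_meta.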
Re: [sqlalchemy] Mutable column_properties
I just spent 30 minutes with pdb; I was wrong; I think it would be way too hard to get it into the ORM. The way MutableDict seems to be currently integrated, the entire value is updated for the key, and the original value seems to be obliterated.

Outside of the ORM - do you have any references for the UPDATE / INSERT? Do you think it would be possible to implement this in SELECT?

Our HSTORE data is small now, but growing - we're using it to store revision history. I've been experimenting with using `deferred` columns to keep them from being loaded on the main query, then accessing them only when needed. If we could pull out and update only the relevant KV pairs, there would be even less overhead.

Sorry if I'm hijacking this thread, Philip. From your example usage, though, I think we have the same concerns/issues; I've just been approaching it from a slightly different standpoint.
Re: [sqlalchemy] Mutable column_properties
On Sep 12, 2013, at 2:35 PM, Jonathan Vanasco jonat...@findmeon.com wrote:

I just spent 30 minutes with pdb; I was wrong; I think it would be way too hard to get it into the ORM. The way MutableDict seems to be currently integrated, the entire value is updated for the key, and the original value seems to be obliterated. Outside of the ORM - do you have any references for the UPDATE / INSERT? Do you think it would be possible to implement this in SELECT?

this stuff should work completely in Core. If you look at the examples in the 0.8 migration notes for ARRAY, that should give a feel for it; the same types of things should all work for HSTORE: http://docs.sqlalchemy.org/en/rel_0_8/changelog/migration_08.html#enhanced-postgresql-array-type
Re: [sqlalchemy] Mutable column_properties
On Sep 12, 2013, at 12:02 PM, Philip Scott safetyfirstp...@gmail.com wrote:

Thanks Michael

a column_property() against a SQL expression by definition is not writable. Your table doesn't have a CAST trigger inside of it for when an int is written to it that would convert it back to a string. In this case, since data is already loaded fully as a single column, you might as well just use a hybrid.

I see what you mean about column_property not being writable. In my actual application I won't be loading the whole 'data'; what I am really trying to do is make a sort of pseudo-column, so that the SQL that gets generated is something like

    SELECT id, data->'foo' AS foo FROM thing

And 'foo' gets mapped as if it were a normal column, so it's updateable too. I can deal with the casting on top of that, I think. Does that make any sense? Perhaps I am trying to be too much of an Alchemist for my own good here :)

I can see the appeal of column_property() actually being writable, such that you'd provide some SQL expression that allows the operation to go back the other way when you assign. Postgresql's special types like ARRAY and HSTORE actually introduce the concept of this being useful, e.g. assignment to an expression within INSERT or UPDATE, which I don't think applies at all to any other database (I could be wrong though). So currently column_property() doesn't do that. But you can handle the mutations with a simple event, though a mutation does require loading the data. To really get data->'foo' = '5' into the INSERT statement, the Core does support that but the ORM doesn't; the best you could do there at the moment would be a second INSERT statement.
    from sqlalchemy import *
    from sqlalchemy.orm import *
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.dialects.postgresql import HSTORE
    from sqlalchemy.ext.mutable import MutableDict

    Base = declarative_base()

    class A(Base):
        __tablename__ = 'a'

        id = Column(Integer, primary_key=True)

        # deferred() since it seems like you don't normally
        # want to load this field
        data = deferred(Column(MutableDict.as_mutable(HSTORE)))

        # data.expression is to unwrap deferred()
        foo = column_property(cast(data.expression['foo'], Integer))

        @validates('foo')
        def _set_foo(self, key, value):
            if not self.data:
                self.data = {}
            self.data['foo'] = str(value)
            # returning the integer version for local access, isn't flushed
            return value

    e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
    Base.metadata.drop_all(e)
    Base.metadata.create_all(e)

    sess = Session(e)

    a1 = A(foo=5)
    sess.add(a1)

    # pre flush
    assert a1.foo == 5

    sess.commit()

    assert a1.foo == 5

    a1.foo = 6
    sess.commit()

    a1 = sess.query(A).first()

    # .foo is in terms of the column property
    assert a1.foo == 6

    # data is still deferred (usually)
    assert 'data' not in a1.__dict__

    assert a1.data == {"foo": "6"}

All the best, Philip
[sqlalchemy] Re: Mutable column_properties
I might be interpreting all this wrong, but I don't think the column_property needs to be writable. I think the situation is this: under Postgres, with HSTORE it's possible to INSERT/UPDATE/DELETE only certain values from within the store; under SQLAlchemy, the entire object is retrieved/replaced.

I think I know what you're talking about. Under Postgres, with HSTORE it's possible to INSERT/UPDATE/DELETE only certain values from within the store... Given:

    class TestClass(Base):
        __tablename__ = 'test_class'
        id = Column(Integer, primary_key=True)
        kv = Column(sa_mutable.MutableDict.as_mutable(sqlalchemy.dialects.postgresql.HSTORE), nullable=True)

which creates...

    ## BEGIN
    ## CREATE TABLE test_class (
    ##     id SERIAL NOT NULL,
    ##     kv HSTORE,
    ##     PRIMARY KEY (id)
    ## )

if we create a few hstore entries...

    import string
    for x in range(0, 5):
        sampledict = dict([(i, i) for i in string.ascii_lowercase])
        expected_pass = TestClass(kv=sampledict)
        dbSession.add(expected_pass)
        dbSession.flush()
        generated_id = expected_pass.id
        dbSession.commit()

    ## BEGIN (implicit)
    ## INSERT INTO test_class (kv) VALUES (%(kv)s) RETURNING test_class.id
    ## {'kv': {'a': 'a', 'c': 'c', 'b': 'b', 'e': 'e', 'd': 'd', 'g': 'g', 'f': 'f', 'i': 'i', 'h': 'h', 'k': 'k', 'j': 'j', 'm': 'm', 'l': 'l', 'o': 'o', 'n': 'n', 'q': 'q', 'p': 'p', 's': 's', 'r': 'r', 'u': 'u', 't': 't', 'w': 'w', 'v': 'v', 'y': 'y', 'x': 'x', 'z': 'z'}}
    ## COMMIT

We can operate on k/v pairs within Postgres, but we replace the entire column in SQLAlchemy:

    SELECT id, kv->'y', kv->'z' AS foo FROM test_class;
    UPDATE test_class SET kv = delete(kv, 'z');
    SELECT id, kv->'y', kv->'z' AS foo FROM test_class;
    UPDATE test_class SET kv = kv || ('z => z');
    SELECT id, kv->'y', kv->'z' AS foo FROM test_class;

IIRC, SQLAlchemy already tracks the changed keys via MutableDict; the necessary work would be to generate the SQL for a specific UPDATE/DELETE on only the mutated keys. Not sure about the SELECT.
Sidenote - I found the Postgres docs to have a bunch of errors in the syntax. The above works on 9.2, but is not what the 9.x docs specify.
[sqlalchemy] Re: Mutable column_properties
Actually, this is more correct for multi-key updates:

    -- select before update
    SELECT id, kv->'x' AS kv_x, kv->'y' AS kv_y, kv->'z' AS kv_z FROM test_class;

    -- update 2 keys; these 3 are identical
    -- k,v,k,v
    UPDATE test_class SET kv = kv || hstore(ARRAY['z','zz','x','xx']);
    -- [k,v],[k,v]
    UPDATE test_class SET kv = kv || hstore(ARRAY[['z','zz'],['x','xx']]);
    -- keys[], values[]
    UPDATE test_class SET kv = kv || hstore(ARRAY['z','x'], ARRAY['zz','xx']);

    -- confirm update
    SELECT id, kv->'x' AS kv_x, kv->'y' AS kv_y, kv->'z' AS kv_z FROM test_class;

The selector syntax annoyingly flips between `=>` for inserts/operations and `->` for reading.
Re: [sqlalchemy] Mutable column_properties
the update() and insert() constructs support this, but this usage isn't integrated in the ORM (and would seem like a pretty low priority feature in any case).

On Sep 12, 2013, at 1:45 PM, Jonathan Vanasco jonat...@findmeon.com wrote:

Actually, this is more correct for multi-key updates:

    -- select before update
    SELECT id, kv->'x' AS kv_x, kv->'y' AS kv_y, kv->'z' AS kv_z FROM test_class;

    -- update 2 keys; these 3 are identical
    -- k,v,k,v
    UPDATE test_class SET kv = kv || hstore(ARRAY['z','zz','x','xx']);
    -- [k,v],[k,v]
    UPDATE test_class SET kv = kv || hstore(ARRAY[['z','zz'],['x','xx']]);
    -- keys[], values[]
    UPDATE test_class SET kv = kv || hstore(ARRAY['z','x'], ARRAY['zz','xx']);

    -- confirm update
    SELECT id, kv->'x' AS kv_x, kv->'y' AS kv_y, kv->'z' AS kv_z FROM test_class;

The selector syntax annoyingly flips between `=>` for inserts/operations and `->` for reading.
Re: [sqlalchemy] Mutable column_properties
Sweet. This works:

    results = dbSession.execute(
        TestClass.__table__
        .update()
        .values(
            kv=TestClass.__table__.c.kv +
                sqlalchemy.dialects.postgresql.hstore(
                    sqlalchemy.dialects.postgresql.array(['zz123', 'zz123'])
                )
        )
    )

    stmt = select([TestClass.__table__.c.kv['a']])
    print dbSession.execute(stmt).fetchall()

    stmt = select([TestClass.__table__.c.kv['a'], TestClass.__table__.c.kv['b']])
    print dbSession.execute(stmt).fetchall()

Philip - does this help you at all? I used the ORM to access the connection and table data in the underlying engine.
[sqlalchemy] overriding lazy loading?
Is it possible to override the default loading strategy of a relationship at run-time? For example, I have a relationship that I almost always want to load with lazy='subquery' -- and so I set that as the default loading strategy in the relationship definition -- but in one instance, when I know I won't be accessing the related objects, I'd much rather load with lazy='select' (so as not to load the related objects). Is this possible? Thanks, Seth
Re: [sqlalchemy] overriding lazy loading?
have you looked at http://docs.sqlalchemy.org/en/rel_0_8/orm/loading.html ?

On Sep 12, 2013, at 9:18 PM, Seth P spadow...@gmail.com wrote:

Is it possible to override the default loading strategy of a relationship at run-time? For example, I have a relationship that I almost always want to load with lazy='subquery' -- and so I set that as the default loading strategy in the relationship definition -- but in one instance, when I know I won't be accessing the related objects, I'd much rather load with lazy='select' (so as not to load the related objects). Is this possible? Thanks, Seth
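The per-query override described on that page can be sketched like this (illustrative Line/Point models; `lazyload()` switches off the relationship's default `lazy='subquery'` for one query only):

```python
# Sketch: a relationship configured with lazy="subquery" by default,
# overridden per-query with the lazyload() option.
from sqlalchemy import Column, Integer, ForeignKey, create_engine
from sqlalchemy.orm import declarative_base, relationship, Session, lazyload

Base = declarative_base()

class Line(Base):
    __tablename__ = "line"
    id = Column(Integer, primary_key=True)
    # subquery-load by default, as in the setup described above
    points = relationship("Point", lazy="subquery")

class Point(Base):
    __tablename__ = "point"
    id = Column(Integer, primary_key=True)
    line_id = Column(Integer, ForeignKey("line.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add(Line(points=[Point(), Point()]))
    s.commit()

with Session(engine) as s:
    # per-query override: don't eagerly load 'points' this time
    line = s.query(Line).options(lazyload(Line.points)).first()
    assert "points" not in line.__dict__   # not loaded by the query
    assert len(line.points) == 2           # loads lazily on first access
```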
[sqlalchemy] how to count function
Dear all, I have the following code:

    query = self.dbObj.session.query(MaterialsTable)
    rowCounter = self.dbObj.session.query(MaterialsTable)
    for attr, val in interfaceCodesObject.filterNameDict.items():
        query = query.filter(and_(getattr(MaterialsTable, attr).like("%%%s%%" % val)))
        rowCounter = rowCounter.filter(and_(getattr(MaterialsTable, attr).like("%%%s%%" % val))).count()
    query.all()

When I use multiple fields, I get the following traceback:

    Traceback (most recent call last):
      File "/home/mohsen/codes/amlak/amlak/src/ui/materialsFindFrame.py", line 202, in lambda
        QtCore.QObject.connect(self.pushButtonSearch, QtCore.SIGNAL(_fromUtf8("clicked()")), lambda: self.interfaceCodesConstructor.responseToRequestForData(self))
      File "/home/mohsen/codes/amlak/amlak/src/ui/interface/interface.py", line 109, in responseToRequestForData
        self.materialsObejct.findData(self.objectSearchMaterials, self)
      File "/home/mohsen/codes/amlak/amlak/src/materials/materials.py", line 133, in findData
        rowCounter = rowCounter.filter(and_(getattr(MaterialsTable, attr).like("%%%s%%" % val))).count()
    AttributeError: 'long' object has no attribute 'filter'

But when I use just one field, it uses rowCounter as an integer and returns correctly. My question is: how can I adapt it to multiple fields?
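For what it's worth, the traceback arises because `.count()` executes the query and returns a number, so on the second loop iteration `rowCounter` is no longer a Query object. The usual pattern is to keep chaining filters and call `.count()` once, after the loop. A sketch with an illustrative model (not the poster's schema):

```python
# Sketch: build up filters in a loop, then count once at the end.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class MaterialsTable(Base):
    __tablename__ = "materials"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    grade = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([
        MaterialsTable(name="steel rod", grade="a1"),
        MaterialsTable(name="steel bar", grade="b2"),
    ])
    session.commit()

    filters = {"name": "steel", "grade": "a"}
    query = session.query(MaterialsTable)
    for attr, val in filters.items():
        # keep chaining; do NOT call .count() inside the loop
        query = query.filter(getattr(MaterialsTable, attr).like("%%%s%%" % val))
    row_count = query.count()   # execute once, after all filters are applied
    assert row_count == 1       # only "steel rod" matches both filters
```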
Re: [sqlalchemy] overriding lazy loading?
D'oh! I did, though for some reason it didn't occur to me that I could specify .options(lazyload('points')) to override the relationship's default lazy='subquery'. Works like a charm. Thank you.
[sqlalchemy] Expire SQLAlchemy session if Oracle node failover
Hi all,

We are using SQLAlchemy + cx_Oracle to connect to one of two Oracle nodes (set up to mirror the databases). Connection string as follows:

    oracle+cx_oracle://%s:%s@(DESCRIPTION=(LOAD_BALANCE=off)(FAILOVER=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=%s)(PORT=%s))(ADDRESS=(PROTOCOL=TCP)(HOST=%s)(PORT=%s))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=%s)))

The engine is constructed as follows:

    engine = create_engine(connect_string, echo_pool=False, pool_size=60, pool_recycle=True)

This works well, except that a user's DB session originally connected to the primary node doesn't fail over to the secondary node if the primary node disappears. We are using version 0.8.0, installed via easy_install on a Red Hat 6 box.

Is anyone able to point me in the right direction for properly configuring failover and being able to expire SQLAlchemy sessions if the database node is offline?

Thanks for your time, Devraj
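One SQLAlchemy-side piece of this (separate from any Oracle TAF client configuration) is "pessimistic" disconnect handling: ping each connection as it's checked out of the pool, and raise DisconnectionError so the pool discards dead connections and retries with fresh ones. A sketch after the recipe in the SQLAlchemy pooling docs; SQLite is used here only so the example runs anywhere:

```python
# Sketch of pessimistic disconnect handling via the pool "checkout" event.
# A dead connection fails the ping; DisconnectionError tells the pool to
# invalidate it and check out a fresh connection instead.
from sqlalchemy import create_engine, event, exc, text

engine = create_engine("sqlite://")

@event.listens_for(engine, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except Exception:
        # connection is dead: have the pool drop it and retry
        raise exc.DisconnectionError()
    cursor.close()

with engine.connect() as conn:
    assert conn.scalar(text("SELECT 1")) == 1
```

On Oracle the ping would typically be "SELECT 1 FROM DUAL". This adds a round trip per checkout; pool_recycle (as a number of seconds, not True) is the cheaper optimistic alternative when stale connections are only an occasional problem.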