[sqlalchemy] SQLite and Foreign keys using schema name, again
Hi! On our production server our product uses PostgreSQL as the db engine, which supports schema names via SQLAlchemy's schema feature. But we have unit tests that must pass without a PostgreSQL server, and the code uses schema names and foreign keys. There are a lot of topics about SQLite not supporting foreign keys to an external database, and I read that Michael Bayer said not to use ForeignKey() with SQLite, but we must use it for PostgreSQL. The question is: could an argument be added to the SQLite dialect that suppresses the foreign key constraint? See the patch for example:

diff -ru sqlalchemy.orig/dialects/sqlite/base.py sqlalchemy/dialects/sqlite/base.py
--- sqlalchemy.orig/dialects/sqlite/base.py	2010-11-12 17:13:23.0 +0300
+++ sqlalchemy/dialects/sqlite/base.py	2010-11-12 17:20:02.0 +0300
@@ -272,6 +272,13 @@
             visit_primary_key_constraint(constraint)

+    def visit_foreign_key_constraint(self, constraint):
+        # Suppress foreign key constraint if configured
+        if getattr(self.dialect, 'suppress_fk_constraint', False):
+            return None
+        return super(SQLiteDDLCompiler, self).\
+            visit_foreign_key_constraint(constraint)
+
     def visit_create_index(self, create):
         index = create.element
         preparer = self.preparer
@@ -342,7 +349,8 @@
     supports_cast = True
     supports_default_values = True

-    def __init__(self, isolation_level=None, native_datetime=False, **kwargs):
+    def __init__(self, isolation_level=None, native_datetime=False,
+                 suppress_fk_constraint=False, **kwargs):
         default.DefaultDialect.__init__(self, **kwargs)
         if isolation_level and isolation_level not in ('SERIALIZABLE',
                 'READ UNCOMMITTED'):
@@ -350,6 +358,8 @@
                 "Valid isolation levels for sqlite are 'SERIALIZABLE' "
                 "and 'READ UNCOMMITTED'.")
         self.isolation_level = isolation_level
+
+        self.suppress_fk_constraint = suppress_fk_constraint

         # this flag used by pysqlite dialect, and perhaps others in the
         # future, to indicate the driver is handling date/timestamp

--
You received this message because you are subscribed to the Google Groups sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com. To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
Re: [sqlalchemy] Re: Odd many-to-one problem
On Nov 11, 2010, at 7:50 PM, Sergey V. wrote:

> > relationship() expects a class or a mapper instance, not a string. I got this error:
> >
> >     ArgumentError: relationship 'available_deals' expects a class or
> >     a mapper argument (received: type 'str')
>
> Hmm... I'm not sure what I'm doing wrong, but passing strings to relation() definitely works for me:

The difference here is that usage of declarative adds an extra handler to relationship() calls, such that strings passed for most of its arguments are converted to callables which are later evaluated when the mapper structure is initialized. You should be able to use relationship() in a deferred-argument style without declarative by using lambdas, i.e.:

    relationship(lambda: MyClass,
                 primaryjoin=lambda: MyClass.foo == OtherClass.bar)

But generally, usage of mapper() wasn't intended to provide any tricks around import issues. You can always use mapper.add_property(...) to attach things as needed, and class_mapper(Class) to call up any mapper anywhere.

>     class Host(Base):
>         __tablename__ = 'hosts'
>         id = sa.Column(sa.Integer, primary_key=True)
>         ...
>         datacentre_id = sa.Column(sa.Integer, sa.ForeignKey('datacentres.id'))
>         datacentre = sa.orm.relation('Datacentre', backref='hosts')
>
> Can it be because I'm using declarative? In my case I don't even need to import the Datacentre class before I declare the Host class.
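Michael's mapper.add_property() / class_mapper() suggestion can be sketched as follows. This is a minimal, illustrative example, not code from the thread: the column set is abbreviated, and it assumes a reasonably recent SQLAlchemy with the modern import locations (declarative_base and Session in sqlalchemy.orm).

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, class_mapper, declarative_base, relationship

Base = declarative_base()

class Datacentre(Base):
    __tablename__ = "datacentres"
    id = Column(Integer, primary_key=True)

class Host(Base):
    __tablename__ = "hosts"
    id = Column(Integer, primary_key=True)
    datacentre_id = Column(Integer, ForeignKey("datacentres.id"))

# Attach the relationship after both classes exist, so neither class
# body needs to import (or name) the other.
class_mapper(Host).add_property(
    "datacentre", relationship(Datacentre, backref="hosts")
)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
dc = Datacentre()
session.add(Host(datacentre=dc))
session.commit()
print(dc.hosts[0].datacentre is dc)  # True
```

The same add_property() call works from any module once the mappers are importable, which is the "no tricks needed" point being made above.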
[sqlalchemy] Inheriting custom collection to create another custom collection. Issues with the appenders/parents
Hello everyone. I was wondering if it's possible to inherit a custom collection to create another custom collection. A few days ago I was trying to use my own class as a custom collection (http://groups.google.com/group/sqlalchemy/msg/ba1c64c3d227f586). Thanks to Michael Bayer I was able to do it, but now I would like to go one step further and inherit my custom collection to create another custom collection. To simplify a little what I asked in the other message, let's say I have:

    class ClassA(declarativeBase):
        __tablename__ = "aes"
        id = Column("id", Integer, primary_key=True)
        _whatever = Column("type", String(64))

        def __init__(self):
            self._whatever = "whatever"

Then I have my custom collection for instances of ClassA:

    class ContainerOfA(dict):
        __emulates__ = set

        def __init__(self):
            self._field = "I'm a great... awesome! container"

        # I also defined the appender, remover and iterator
        @collection.iterator
        def __iter__(self):
            return self.itervalues()

        @collection.appender
        def append(self, item):
            self[item.getUniqueHash()] = item

        @collection.remover
        def remove(self, item):
            if item.getUniqueHash() in self.keys():
                del self[item.getUniqueHash()]

And then I was happily able to use it in any relationships:

    class YetAnotherClass(declarativeBase):
        id = Column("id", Integer, primary_key=True)
        classesA = relationship(ClassA,
                                uselist=True,
                                secondary=intermediate_table,
                                collection_class=lambda: ContainerOfA(),
                                cascade="all, delete, delete-orphan",
                                single_parent=True)

Now I needed to extend ClassA in a ClassB, and ContainerOfA in ContainerOfB. I added the polymorphic stuff to ClassA and ClassB to create a joined table inheritance, as detailed in http://www.sqlalchemy.org/docs/orm/inheritance.html#joined-table-inheritance (it seems to be working fine; that's why I am not completely detailing it here).

    class ClassB(ClassA):
        __tablename__ = "bs"  # Sorry for that
        __mapper_args__ = {'polymorphic_identity': 'ClassB'}
        id = Column("id", Integer, ForeignKey('aes.id'), primary_key=True)

        def __init__(self):
            self._anotherWhatever = "another whatever"

    class ContainerOfB(ContainerOfA):
        def __init__(self):
            super(ContainerOfB, self).__init__()

        def anotherMethodOnlyForBInstances(self):
            # do interesting stuff for B classes
            pass

Then I tried to use it in a relationship:

    class YetYetAnotherClass(declarativeBase):
        id = Column("id", Integer, primary_key=True)
        classesB = relationship(ClassB,
                                uselist=True,
                                secondary=another_intermediate_table,
                                collection_class=lambda: ContainerOfB(),
                                cascade="all, delete, delete-orphan",
                                single_parent=True)

But when I tried to append a ClassB instance through the relationship detailed above, I got this exception:

    Type ContainerOfB must elect an appender method to be a collection class

I thought... ok, ok... let's just explicitly add the 'appender' to the ContainerOfB class. The only thing I need to do is call the appender of the super class anyway... no biggie. And so I did:

    class ContainerOfB(ContainerOfA):
        # [ . . . ]
        @collection.appender
        def append(self, classBInstance):
            return super(ContainerOfB, self).append(classBInstance)

But then... another exception when I tried to add an instance of ClassB():

    InvalidRequestError: Instance ClassB at 0xba9726c is already associated
    with an instance of class 'mylibraries.classesTest.YetYetAnotherClass'
    via its YetYetAnotherClass.classesB attribute, and is only allowed a
    single parent.

Well... I need the cascade to properly delete the items (http://www.sqlalchemy.org/docs/orm/session.html#deleting), and in order to use that, I need single_parent=True. The funny thing is that if I totally rewrite the appender method in ContainerOfB:

    class ContainerOfB(ContainerOfA):
        # [ . . . ]
        @collection.appender
        def append(self, classBInstance):
            # write here the exact same code as in ContainerOfA's append,
            # changing the reference to the 'item' parameter to
            # 'classBInstance' (that's the only difference)

then everything works fine. I have made some more tests, and the inheritance ClassA - ClassB seems to be working fine. In those tests I removed the cascade and single_parent parameters of the classesB relationship. By doing that, I was able to insert instances of ClassB in the classesB container and all the information was properly stored in the database (the polymorphic identity was added properly, the foreign key of the ClassB() instance was
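The "re-elect the appender in the subclass" pattern the poster ends up with can be sketched standalone, outside any mapping. Here Item and its key() method are hypothetical stand-ins for the poster's classes and getUniqueHash(), which aren't shown in full in the thread.

```python
from sqlalchemy.orm.collections import collection

class Item(object):
    """Stand-in for ClassA/ClassB instances; key() plays the role of
    the poster's getUniqueHash() (a hypothetical helper)."""
    def key(self):
        return id(self)

class ContainerOfA(dict):
    __emulates__ = set

    @collection.iterator
    def __iter__(self):
        return iter(self.values())

    @collection.appender
    def append(self, item):
        self[item.key()] = item

    @collection.remover
    def remove(self, item):
        self.pop(item.key(), None)

class ContainerOfB(ContainerOfA):
    # The @collection decorators mark the decorated function objects
    # themselves, so a subclass that overrides append() must re-elect
    # an appender of its own; per the thread, the body is written out
    # in full rather than delegating to super().
    @collection.appender
    def append(self, item):
        self[item.key()] = item

container = ContainerOfB()
thing = Item()
container.append(thing)
print(list(container) == [thing])  # True
```

This only exercises the container itself; wiring it into a relationship() with cascade and single_parent is where the thread's remaining question lies.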
Re: [sqlalchemy] SQLite and Foreign keys using schema name, again
On Nov 12, 2010, at 9:37 AM, Gennady Kovalev wrote:

> Hi! On our production server our product uses PostgreSQL as the db engine, which supports schema names via SQLAlchemy's schema feature. But we have unit tests that must pass without a PostgreSQL server, and the code uses schema names and foreign keys. There are a lot of topics about SQLite not supporting foreign keys to an external database, and I read that Michael Bayer said not to use ForeignKey() with SQLite. But we must use it for PostgreSQL. The question is: could an argument be added to the SQLite dialect that suppresses the foreign key constraint?

I don't think I would have ever said that ForeignKey() should not be used with SQLite. ForeignKey() should be used liberally, since SQLAlchemy assigns importance to this token as defined in table metadata. SQLite by default ignores REFERENCES clauses when CREATE TABLE is issued, so there is usually no issue there.

However, you're likely referring here to ticket 1851, http://www.sqlalchemy.org/trac/ticket/1851, which is specific to the REFERENCES clause when rendered with a table that has a schema name. The only issue is that we had no documentation on how to generate the REFERENCES clause to a remote table that has a schema name. I've installed sqlite3 version 3.6.20 so that I could test its foreign key support, and the correct syntax is stated in that ticket, i.e. you omit the schema name in the REFERENCES clause and the remote table is assumed to be in the same schema. I implemented that fix, as well as an additional change so that the REFERENCES clause is omitted entirely only if the two schemas of the tables are different, as of 8cc53b0afb99.
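The point that SQLite by default does not enforce REFERENCES clauses can be checked with the standard library's sqlite3 module alone (the table names here are illustrative; PRAGMA foreign_keys is off unless explicitly enabled):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE child (id INTEGER PRIMARY KEY, "
    "parent_id INTEGER REFERENCES parent(id))"
)
# Foreign key enforcement is off by default in SQLite, so an orphan
# row inserts without error even though the REFERENCES clause parsed.
conn.execute("INSERT INTO child (id, parent_id) VALUES (1, 999)")
orphans = conn.execute("SELECT parent_id FROM child").fetchall()
print(orphans)  # [(999,)]
```

So the constraint is accepted in the DDL but acts as documentation only, which is why emitting ForeignKey() against SQLite is normally harmless.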
[sqlalchemy] Re: SQLite and Foreign keys using schema name, again
> I implemented that fix, as well as an additional change so that the REFERENCES clause is omitted entirely only if the two schemas of the tables are different, that is as of 8cc53b0afb99.

It is a good solution. Thank you.
[sqlalchemy] Re: Odd many-to-one problem
On Nov 12, 6:42 am, Michael Bayer mike...@zzzcomputing.com wrote:

> But generally, usage of mapper() wasn't intended to provide any tricks around import issues. You can always use mapper.add_property(...) to attach things as needed, and class_mapper(Class) to call up any mapper anywhere.

This is what I was really looking for. I knew there had to be a way to modify the mappings after the declaration. This is going to solve a ton of problems I've had to code around in interesting ways, and make everything much, much cleaner and clearer. As always, SQLAlchemy delivers.
[sqlalchemy] Re: Two relationships with a same backref name. Is that (even) possible or I got everything wrong?
You're welcome, and I hope that works for you. I went through the same process a few years ago when picking up SQLAlchemy... the backref facility is so cool that it's easy to forget that it's optional, and that most relationship backrefs /could/ be handled as just another relationship on the opposite mapper.

On Nov 12, 7:31 am, Hector Blanco white.li...@gmail.com wrote:

> 2010/11/12 Eric Ongerth ericonge...@gmail.com:
>
> > Hi Hector, if I'm not mistaken, everywhere you wrote (MyObject.id==MyObject.containerId), you meant to write: (Container.id==MyObject.containerId).
>
> Oops... yeah... great eye.
>
> > Instead of the backref technique, why not just create the MyObject -> Container relationship a single time in your MyObject class? That should be able to coexist with your first code example (with no backrefs).
>
> Oh, right!! That's a great approach... I was so blinded with the backref thing that I didn't think it could be the other way around! I'll do that! Thank you Eric!
Re: [sqlalchemy] Re: Two relationships with a same backref name. Is that (even) possible or I got everything wrong?
Yeah... I'm pretty much a newbie myself with SQLAlchemy, so when I discovered that I could specify primary joins, secondaries... and all that juicy stuff in the backref, I got so into writing it from A --to--> B that I forgot it can be done B --from-- A :) Thanks again!

2010/11/12 Eric Ongerth ericonge...@gmail.com:

> You're welcome, and I hope that works for you. I went through the same process a few years ago when picking up SqlAlchemy... the backref facility is so cool that it's easy to forget that it's optional and that most relationship backrefs /could/ be handled as just another relationship on the opposite mapper.
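Eric's suggestion, declaring the relationship once on the MyObject side instead of through a backref, might look like the following minimal sketch. The Container/MyObject skeletons and the modern imports are assumptions, since the thread's original classes aren't shown in full here.

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Container(Base):
    __tablename__ = "containers"
    id = Column(Integer, primary_key=True)

class MyObject(Base):
    __tablename__ = "my_objects"
    id = Column(Integer, primary_key=True)
    container_id = Column(Integer, ForeignKey("containers.id"))
    # An ordinary many-to-one declared on this side only; no backref
    # needed, so it cannot collide with another backref's name.
    container = relationship(Container)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
c = Container()
obj = MyObject(container=c)
session.add(obj)
session.commit()
print(obj.container is c)  # True
```

If Container also needs to see its objects, that can be declared as a second, independent relationship on Container rather than generated via backref.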
[sqlalchemy] searching for new objects not yet in database
Hi, I just want to check on something. Let's say I've got a script that's populating a database and will commit the transaction at the end. It looks for a particular object (let's call it A), and on NoResultFound it creates a new object and does a session.add(A). What if, in a later iteration of the script (before the commit), I look up A again? The query doesn't find the object in the database (of course), but it doesn't find it in the session either. The 'solution' is to keep track of the new objects of that type I create and look in that list before attempting to create a new one. This doesn't seem elegant. Ideally I would have thought that a session.query()... would have found the object newly added into the session. Am I missing something, or is there a more elegant way to handle this? In this case, I'm OK creating a nested session and saving the object directly to the database so it's available for future queries. What's the best-practice method to do this?

Cheers, Demitri
Re: [sqlalchemy] searching for new objects not yet in database
On Nov 12, 2010, at 7:32 PM, thatsanicehatyouh...@mac.com wrote:

> Hi, I just want to check on something. Let's say I've got a script that's populating a database and will commit the transaction at the end. It looks for a particular object (let's call it A), and on NoResultFound it creates a new object and does a session.add(A). What if, in a later iteration of the script (before the commit), I look up A again? The query doesn't find the object in the database (of course),

mm, right there that's not the default behavior. If you did an add(A), the next query() you do will autoflush. A is now in the database within the scope of the current transaction, so query() will find it.

> but it doesn't find it in the session either. The 'solution' is to keep track of the new objects of that type I create and look in that list before attempting to create a new one. This doesn't seem elegant. Ideally I would have thought that a session.query()... would have found the object newly added into the session. Am I missing something, or is there a more elegant way to handle this?

so... if you happen to have autoflush=False, turning that on would be your elegant switch. Otherwise we would need more detail.
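A minimal sketch of the autoflush behavior described above (the model and names are illustrative, assuming a recent SQLAlchemy):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Widget(Base):
    __tablename__ = "widgets"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)  # autoflush=True is the default
session.add(Widget(name="pending"))
# The query autoflushes the pending Widget into the current
# transaction, so it is found before any commit is issued.
found = session.query(Widget).filter_by(name="pending").one()
print(found.name)  # pending
```

With autoflush=False the same query() would raise NoResultFound, which matches the behavior the original poster is seeing.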
Re: [sqlalchemy] searching for new objects not yet in database
Thanks for the quick reply!

On Nov 12, 2010, at 7:41 PM, Michael Bayer wrote:

> mm, right there that's not the default behavior. If you did an add(A), the next query() you do will autoflush. A is now in the database within the scope of the current transaction, so query() will find it.

Yes, I have autoflush=False. A typical script for me loads a batch of files into our database. If there is any problem with processing any of the files, I want the commit to fail - I don't want a partial import. I'd rather fix the problem and do another batch import (because I don't want to write a bunch of code checking how much was written and where to pick up from). I definitely don't want a query() to be a write operation. Or is this a case where a nested transaction would be appropriate?

Cheers, Demitri
Re: [sqlalchemy] searching for new objects not yet in database
On Nov 12, 2010, at 7:49 PM, thatsanicehatyouh...@mac.com wrote:

> Yes, I have autoflush=False. A typical script for me loads a batch of files into our database. If there is any problem with processing any of the files, I want the commit to fail - I don't want a partial import. I'd rather fix the problem and do another batch import (because I don't want to write a bunch of code checking how much was written and where to pick up from). I definitely don't want a query() to be a write operation. Or is this a case where a nested transaction would be appropriate?

If the commit fails, then nothing is written to the database. The whole point of transactions is that partial operations are not possible, even though you get to send your data to the database as it's constructed, and you get the full advantage of SQL querying on that data without anything being permanent. It's truly the best of both worlds. Unless you're using MySQL + MyISAM, this should give you the behavior it seems like you're looking for.

Nested transactions would be useful if you wanted to roll back partially within the same transaction and then try some operation again, but this is a more exotic need - your description, that you'd want to fix the problem and then do another batch import, is what people normally do, and it shouldn't require anything beyond default Session behavior.
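The transactional guarantee described here doesn't depend on SQLAlchemy at all; a bare sqlite3 sketch of the same idea (table and file names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT)")
conn.commit()

try:
    conn.execute("INSERT INTO files VALUES ('batch1.dat')")
    # Inside the open transaction the new row is fully queryable...
    assert conn.execute("SELECT COUNT(*) FROM files").fetchone()[0] == 1
    raise ValueError("simulated processing failure")
except ValueError:
    conn.rollback()

# ...but after the rollback nothing was permanently written.
count = conn.execute("SELECT COUNT(*) FROM files").fetchone()[0]
print(count)  # 0
```

This is the "best of both worlds" point: pending data is visible to queries within the transaction, yet a failure leaves the database untouched.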
[sqlalchemy] Found an old This will raise an error in 0.6 warning still included in 0.6.5
Just a heads-up: I was experimenting with various cascade options on mappers and came across the following warning: SAWarning: The 'delete-orphan' cascade option requires 'delete'. This will raise an error in 0.6. But I'm running 0.6.5. Maybe this warning message just never got updated since the 0.6.x releases. No complaint here, just mentioning it in case it helps bring things up to date. -- You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalch...@googlegroups.com. To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.