[sqlalchemy] Batch Delete with ORM
I have a table described with:

    BASE = declarative_base(name=APP_ID)

    class Period(BASE):
        __tablename__ = 'period'
        realperiod = Column(Integer, primary_key=True)
        cycle = Column(Integer)
        [more columns]

I want to delete all records with a given value for the cycle column. With the SQLAlchemy session in the variable session and the desired cycle in y, I execute:

    q = session.query(Period).filter(Period.cycle==y).delete()

This correctly deletes the desired rows and returns the number of rows deleted; however, if I turn echo on I see the following SQL commands:

    SELECT period.realperiod AS period_realperiod FROM period WHERE period.cycle = ?
    DELETE FROM period WHERE period.cycle = ?

If I were doing this in SQL, I would write the first command as

    SELECT count(*) FROM period WHERE period.cycle = ?

to get the count of rows to be deleted, so I am wondering if I am doing things correctly.

Thanks,
Mark

--
You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalch...@googlegroups.com. To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
[sqlalchemy] (IntegrityError) null value in column post_id violates not-null constraint
Hello all,

So, here's a simple script that produces the error: http://pastie.org/1197851 -- please take a look at it; the script isn't big at all, and there's a traceback for the error as well.

In short, it issues a query to get a simple object (Post) with a one-to-many relation tag_links (i.e. an array of PostTagLink instances), then it does post.tag_links = [] and tries to Session.commit() it. And fails terribly.

My gut tells me I may be doing something wrong, but I don't know what exactly. I'll greatly appreciate your help.

Best regards,
Anton Shestakov
Re: [sqlalchemy] (IntegrityError) null value in column post_id violates not-null constraint
On Oct 4, 2010, at 1:34 AM, Anton Shestakov wrote:

> So, here's a simple script that produces the error: http://pastie.org/1197851
>
> In short, it issues a query to get a simple object (Post) with a one-to-many relation tag_links (i.e. an array of PostTagLink instances), then it does post.tag_links = [] and tries to Session.commit() it. And fails terribly.

you need delete-orphan cascade.

http://www.sqlalchemy.org/docs/orm/tutorial.html#configuring-delete-delete-orphan-cascade
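The suggested delete-orphan cascade can be sketched as a runnable example. This is a minimal reconstruction, not the original pastie script: the class names and columns are assumed, and it uses the modern declarative imports rather than the 0.6-era ones.

```python
# Sketch of the "delete-orphan" fix: children removed from the parent's
# collection are DELETEd instead of having their foreign key NULLed out
# (the NULL is what raised the IntegrityError in the original script).
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Post(Base):
    __tablename__ = 'posts'
    id = Column(Integer, primary_key=True)
    # Without "delete-orphan", post.tag_links = [] tries to UPDATE the
    # link rows with post_id = NULL and trips the NOT NULL constraint.
    tag_links = relationship('PostTagLink', cascade='all, delete-orphan')

class PostTagLink(Base):
    __tablename__ = 'post_tag_links'
    id = Column(Integer, primary_key=True)
    post_id = Column(Integer, ForeignKey('posts.id'), nullable=False)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

post = Post(tag_links=[PostTagLink()])
session.add(post)
session.commit()

post.tag_links = []   # orphans the link row...
session.commit()      # ...which is now deleted rather than NULLed
```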
[sqlalchemy] Cannot deepcopy() model instance with attribute mapped collection on it
Hey,

I had trouble deepcopying model instances that use attribute mapped collection classes. I'm aware that copying an instance implies a certain use case with a better solution; I accidentally hit this error when I had an instance as an attribute on an object I was deepcopy()ing.

The following example fails with TypeError: attrgetter expected 1 arguments, got 0. Is there a way to remedy the situation a bit so the deepcopy doesn't fail so spectacularly?

Thanks,
N

    from sqlalchemy import create_engine, Column, ForeignKey
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import scoped_session, sessionmaker, relationship
    from sqlalchemy.orm.collections import attribute_mapped_collection
    from sqlalchemy.types import Integer, String
    import copy

    engine = create_engine('sqlite:///:memory:', echo=True)
    Base = declarative_base(bind=engine)
    Session = scoped_session(sessionmaker(bind=engine))

    class Note(Base):
        __tablename__ = 'notes'
        keyword = Column(String, primary_key=True)
        item_name = Column(String, ForeignKey('items.name'))

    class Item(Base):
        __tablename__ = 'items'
        name = Column(String, primary_key=True)
        notes = relationship(Note, collection_class=attribute_mapped_collection('keyword'))

    Base.metadata.create_all()

    item = Item(name='Foo')
    note = Note(keyword='Bar')
    item.notes.set(note)
    Session.commit()

    assert item.notes['Bar'] is note
    copy.deepcopy(note)  # Passes
    copy.deepcopy(item)  # Fails
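One hedged workaround along the lines asked about: since the failure comes from deepcopying SQLAlchemy's instrumentation (the attribute_mapped_collection carries an attrgetter-based key function), an escape hatch is to clone only the plain column attributes you name explicitly and let the ORM re-instrument the clone through the normal constructor. `clone_columns` is a hypothetical helper, not SQLAlchemy API.

```python
import copy

def clone_columns(obj, attrs):
    """Hypothetical helper: build a fresh instance of type(obj) carrying
    deep copies of only the named attributes, sidestepping the
    instrumented state that makes copy.deepcopy() blow up."""
    cls = type(obj)
    return cls(**{name: copy.deepcopy(getattr(obj, name)) for name in attrs})

# Plain stand-in for a mapped class, for illustration; a declarative
# class accepts keyword arguments in its default constructor the same way.
class Item(object):
    def __init__(self, name=None):
        self.name = name

original = Item(name='Foo')
clone = clone_columns(original, ['name'])
```

Relationship collections are deliberately left out; they would need to be rebuilt (or re-queried) on the clone.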
[sqlalchemy] collection_class = set
I'm trying to use a relationship that uses a set as the collection_class, by setting the collection_class parameter to set. I chose a set since there should only be one instance corresponding to a given table row, as that is enforced by a foreign key on the many side.

When initially populating the data, I found that I can add two objects that were instantiated with the same data. This makes sense: by default, when Python compares two instances it looks at their object id, and since these are two separate objects, no duplication is detected. I added __hash__, __cmp__ and __eq__ methods to the class so that instances with the same data compare equal.

Is this the correct way to use a set as a collection_class?

Thanks,
Mark
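The contract the set relies on can be shown in plain Python, independent of SQLAlchemy: __hash__ and __eq__ must agree and must be based on the identifying data, after which the set deduplicates by value. The class name here is illustrative; note also that hashing on a mutable attribute is fragile if it changes while the object sits in the set.

```python
class Tag(object):
    """Value-style equality: equal data means equal hash, so a set
    treats two separately instantiated objects as the same member."""
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return isinstance(other, Tag) and self.name == other.name

    def __ne__(self, other):   # needed on Python 2; derived automatically on Python 3
        return not self.__eq__(other)

    def __hash__(self):
        return hash(self.name)

tags = set()
tags.add(Tag('red'))
tags.add(Tag('red'))    # same data: the set keeps only one member
tags.add(Tag('blue'))
```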
[sqlalchemy] Re: Unexpected behavior of is_modified()
Hi,

On Sep 27, 11:53 pm, Michael Bayer mike...@zzzcomputing.com wrote:

> This is a limitation of is_modified(), and it is definitely a bug here that this limitation is not documented (ticket 1928).

Thanks. I could probably be prevailed upon to submit a documentation patch once I'm sure I understand everything and know what to write.

The explanation makes sense, and is what I had inferred was going on. The reasons for why it is the way it is are OK by me. It was more my confusion and surprise at the way the behavior was, especially since it wasn't mentioned in the documentation, and because at least one of the sample extensions on versioning would be affected by this behavior in principle. Let me ask after some workarounds:

> A workaround is to enable full history tracking for the attribute, so that the old value is loaded upon a set event. This is not public API at the moment but it would look like:
>
>     Spam.cans.impl.active_history = True

Since I want to know about changes to just about all attributes, this doesn't really scale well (aside from not wanting to trust something not *yet* in the public API :)

> or, set expire_on_commit=False on your session.

Let me see if I understand the implications of this. The downside risk of expire_on_commit=False is concurrency; if there are changes being written to the same record elsewhere, there is a risk of unwittingly carrying around stale data. Is there any other downside to expire_on_commit?

It seems a third suggestion would be simply not to re-use instances across transactions, and instead always start fresh by pulling rows into objects via queries in each new transaction. In my case, that ought to work just fine, and would require just about no changes to my existing code.

Ted

> The active_history feature should become public API at some point. The alternative notion of modifying attributes.get_history() to get the old value to load from a SELECT at the point of history, rather than upon attribute set, would be doable but would require more intensive changes, as we'd have to break out the "load the old value" behavior into a separate function that calls upon loader callables in such a way that they know not to populate the current instance dictionary. The feature would still not be ideal as a default within the ORM since you'd see many more SELECTs being emitted within a typical flush() against a set of persistent, modified objects, most of which are unnecessary.
>
>> import sqlalchemy
>> import sqlalchemy.orm
>> from sqlalchemy import Table, Column, Integer
>>
>> class Spam(object):
>>     def __init__(self, cans):
>>         self.cans = cans
>>
>> engine = sqlalchemy.create_engine('sqlite:///:memory:')
>> Session = sqlalchemy.orm.sessionmaker(autocommit=False, bind=engine)
>> session = Session()
>>
>> meta = sqlalchemy.MetaData()
>> meta.bind = engine
>> spam_table = Table('spam', meta,
>>     Column('id', Integer, primary_key=True),
>>     Column('cans', Integer)
>> )
>> meta.create_all(engine)
>> sqlalchemy.orm.mapper(Spam, spam_table)
>>
>> # Setup finished. Now test things out
>> instance = Spam(42)
>> session.add(instance)
>> session.commit()
>> print "Cans of spam =", instance.cans
>>
>> instance.cans = 42
>> print "Has instance been modified? (Should be False):", session.is_modified(instance)
>> session.commit()
>> instance.cans = 42
>> print "We have made no net change to the instance..."
>> print "Has instance been modified? (Should be False):", session.is_modified(instance)
>>
>> When I run this, I get:
>>
>>     Cans of spam = 42
>>     Has instance been modified? (Should be False): False
>>     We have made no net change to the instance...
>>     Has instance been modified? (Should be False): True
>>
>> That's surprising to me. In both cases, no net change is being made to the instance.
>>
>> Noodling around further, it appears that this is triggered by re-using the same instance object, and doing another __setattr__ on the column without having done a __getattr__ on it earlier. I suspect, without having dug around, that after the commit, the instance's attribute values are marked as needing to be re-loaded, but they are not being re-loaded first in order to check whether the new value is in fact different from the existing one.
>>
>> I've also noticed that if I re-obtain the instance via a query using .get() on the primary key, the behavior is the expected one, again presumably because the query operation is loading in all the column values.
>>
>> Am I missing something here?
>>
>> Ted
Re: [sqlalchemy] Re: Unexpected behavior of is_modified()
On Oct 4, 2010, at 5:32 PM, Ted Turocy wrote:

> Since I want to know about changes to just about all attributes, this doesn't really scale well (aside from not wanting to trust something not *yet* in the public API :)

It's not public because the public use case would have to be nailed down very well first. We like to identify use cases first so that we can come up with the most useful version of a feature, rather than exposing internals. However, this particular attribute is already semi-public in that you can create an attribute listener that turns it on.

> Let me see if I understand the implications of this. The downside risk of expire_on_commit=False is concurrency; if there are changes being written to the same record elsewhere, there is a risk of unwittingly carrying around stale data. Is there any other downside to expire_on_commit?

There's no other downside to turning off that flag. If your app tends to finish what it's doing after a commit then there's usually no reason to have it on.
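The expire_on_commit behavior discussed above can be sketched as follows. Modern declarative imports are assumed (the thread predates them), and `inspect(obj).expired` is used to observe whether the commit expired the instance.

```python
# With expire_on_commit=False the session does not expire instances at
# commit time, so attribute access afterwards reads the values already
# in memory instead of emitting a fresh SELECT.
from sqlalchemy import Column, Integer, create_engine, inspect
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Spam(Base):
    __tablename__ = 'spam'
    id = Column(Integer, primary_key=True)
    cans = Column(Integer)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

session = sessionmaker(bind=engine, expire_on_commit=False)()
spam = Spam(cans=42)
session.add(spam)
session.commit()

state = inspect(spam)   # InstanceState; .expired stays False here
```

With the default (expire_on_commit=True), the same inspection right after commit would show the instance expired, and reading spam.cans would hit the database again.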
Re: [sqlalchemy] Batch Delete with ORM
On Oct 4, 2010, at 8:16 AM, Mark Erbaugh wrote:

> I want to delete all records with a given value for the cycle column. With the SQLAlchemy session in the variable session and the desired cycle in y, I execute:
>
>     q = session.query(Period).filter(Period.cycle==y).delete()
>
> This correctly deletes the desired rows and returns the number of rows deleted; however, if I turn echo on I see the following SQL commands:
>
>     SELECT period.realperiod AS period_realperiod FROM period WHERE period.cycle = ?
>     DELETE FROM period WHERE period.cycle = ?
>
> If I were doing this in SQL, I would write the first command as SELECT count(*) FROM period WHERE period.cycle = ? to get the count of rows to be deleted, so I am wondering if I am doing things correctly.

Most relational databases report the number of rows matched by any UPDATE or DELETE statement that just executed, and SQLA acquires this value via cursor.rowcount on any such statement, so a separate count() call is not required.
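The cursor.rowcount behavior described here can be seen with nothing but the standard library's sqlite3 driver, no ORM involved:

```python
import sqlite3

# The DB-API cursor reports how many rows the DELETE matched; this is
# the same value SQLAlchemy's Query.delete() hands back, so no separate
# count(*) is needed.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE period (realperiod INTEGER PRIMARY KEY, cycle INTEGER)')
conn.executemany('INSERT INTO period (cycle) VALUES (?)', [(1,), (1,), (2,)])

cur = conn.execute('DELETE FROM period WHERE cycle = ?', (1,))
deleted = cur.rowcount
```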
[sqlalchemy] MySQL DATE_ADD function
I'm having trouble converting this SQL into an ORM statement:

    DATE_ADD(datecol, INTERVAL(1 - DAYOFWEEK(datecol)) DAY)

This is as far as I can get, which is basically nowhere. The second argument to date_add requires the literal strings INTERVAL and DAY, but I also need to insert a function in the middle there. Any help is appreciated.

    func.date_add(Class.dateAttr, INTERVAL(1 - DAYOFWEEK(Class.dateAttr)) DAY)
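The thread leaves this one unanswered here, but one possible approach (an assumption, not something confirmed in the thread) is to hand the INTERVAL clause to func as a text() fragment, accepting that the column reference inside the text() is raw SQL and therefore bypasses quoting and bind parameters:

```python
from sqlalchemy import column, func, text

# Hedged sketch: compose DATE_ADD with a literal INTERVAL clause.
# 'datecol' inside the text() is spelled out by hand, so it must match
# the real column name exactly.
datecol = column('datecol')
expr = func.date_add(
    datecol,
    text('INTERVAL (1 - DAYOFWEEK(datecol)) DAY'),
)
sql = str(expr)   # compiles with the default dialect
```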
Re: [sqlalchemy] (IntegrityError) null value in column post_id violates not-null constraint
2010/10/4 Michael Bayer mike...@zzzcomputing.com:

> you need delete-orphan cascade.
>
> http://www.sqlalchemy.org/docs/orm/tutorial.html#configuring-delete-delete-orphan-cascade

Oh. Thanks for your answer. I feel stupid for asking a question that's answered in the tutorial. Well, serves me right.
Re: [sqlalchemy] Batch Delete with ORM
On Oct 4, 2010, at 5:46 PM, Michael Bayer wrote:

> Most relational databases report the number of rows matched by any UPDATE or DELETE statement that just executed, and SQLA acquires this value via cursor.rowcount on any such statement, so a separate count() call is not required.

Thanks. What is the purpose of

    SELECT period.realperiod AS period_realperiod FROM period WHERE period.cycle = ?

that appears to be generated by the session.query ... call?

Mark
[sqlalchemy] reconstruct_instance not being called when adding an extension dynamically
Hello,

This has been making me pull my hair out for quite some time now :)

I am trying to use MapperExtension and I have code like: http://bpaste.net/show/9936/

Basically, I share the model across two apps, so I want one with the extension and one without. Thus the need to use one of the internal methods: __mapper__.extension.push()

Unfortunately, the whole thing works when I use __mapper_args__ but does not if I add the extension dynamically using the above method. Other methods like before_update and translate_row get called with both ways of setting it up; just not reconstruct_instance, which is what I wanted.

I will now start up a debugger, but I was wondering if anybody knew about this, whether it's a bug, or whether it's just not possible?

-- Ritesh
Re: [sqlalchemy] Batch Delete with ORM
On Oct 4, 2010, at 8:06 PM, Mark Erbaugh wrote:

> Thanks. What is the purpose of SELECT period.realperiod AS period_realperiod FROM period WHERE period.cycle = ? that appears to be generated by the session.query ... call?

the delete() and update() methods on Query want to invalidate all objects currently in the session which match the criterion. There are three options for this behavior, and in 0.5 the default is 'fetch', which is what that appears to be doing. I would set it to 'evaluate' so that it only scans through memory instead of hitting the DB, or None so that it doesn't invalidate anything, if you aren't concerned about coming across those objects in the same transaction.
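The three invalidation options described here map onto the synchronize_session argument of Query.delete(). A sketch, with modern declarative imports assumed and the keyword spellings of recent SQLAlchemy, where "don't invalidate" is written False:

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Period(Base):
    __tablename__ = 'period'
    realperiod = Column(Integer, primary_key=True)
    cycle = Column(Integer)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Period(cycle=1), Period(cycle=1), Period(cycle=2)])
session.commit()

# 'fetch'    -> SELECT the matching rows first (the extra query observed)
# 'evaluate' -> apply the criterion to in-session objects in memory
# False      -> skip invalidating session state entirely
deleted = (session.query(Period)
           .filter(Period.cycle == 1)
           .delete(synchronize_session='evaluate'))
```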
Re: [sqlalchemy] Batch Delete with ORM
On Oct 4, 2010, at 8:30 PM, Michael Bayer wrote:

> the delete() and update() methods on Query want to invalidate all objects currently in the session which match the criterion. There are three options for this behavior, and in 0.5 the default is 'fetch', which is what that appears to be doing. I would set it to 'evaluate' so that it only scans through memory instead of hitting the DB, or None so that it doesn't invalidate anything, if you aren't concerned about coming across those objects in the same transaction.

That makes sense.

Thanks,
Mark
Re: [sqlalchemy] reconstruct_instance not being called when adding an extension dynamically
On Mon, Oct 4, 2010 at 5:45 PM, Michael Bayer mike...@zzzcomputing.com wrote:

> On Oct 4, 2010, at 8:26 PM, Ritesh Nadhani wrote:
>
>> Basically, I share the model across two apps, so I want one with the extension and one without. Thus the need to use one of the internal methods: __mapper__.extension.push() [...] Other methods like before_update and translate_row get called with both ways of setting it up; just not reconstruct_instance, which is what I wanted.
>
> It's not a bug, since ad-hoc addition of extensions is not public API. The mapper checks its extensions for this particular hook when it's created and sets an extra event listener on the class if present. 0.7 will allow ad-hoc listeners in a more flexible way.
>
> There is actually a listener interface for this in 0.6 which is semi-private. It will be changing a little bit in 0.7 (whereas MapperExtension will remain backwards compatible at least through 0.7/0.8). If you wanted to try it, it's:
>
>     from sqlalchemy.orm.attributes import manager_of_class
>
>     def reconstruct(instance):
>         print "reconstruct!"
>
>     manager_of_class(SomeTable).events.add_listener('on_load', reconstruct)
>
> In this case it's likely much easier to set up __mapper_args__ as a callable on your class instead (see the examples in declarative regarding mixins; the same technique works on the class itself), or to use the @reconstructor decorator inside a conditional:
>
>     class MyClass(Base):
>         if my_app_config_foo_bar:
>             @reconstructor
>             def reconstruct(self):
>                 print "reconstruct!"

Aha. Thanks. I guess I will wait till 0.7 to use the feature. I understood the reconstructor way, but I'm not sure about the mixin way. I will go now and reread the docs on the subject.

-- Ritesh
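The @reconstructor route mentioned above can be sketched end to end. Modern declarative imports are assumed, and the class is illustrative; the point is that the decorated method runs when an instance is loaded from a row, a path on which __init__ is never called.

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, reconstructor, sessionmaker

Base = declarative_base()

class Widget(Base):
    __tablename__ = 'widgets'
    id = Column(Integer, primary_key=True)

    def __init__(self):
        self.loaded_from_db = False

    @reconstructor
    def init_on_load(self):
        # Runs when an instance is rebuilt from a database row;
        # __init__ is not called on that path.
        self.loaded_from_db = True

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session = Session()
session.add(Widget())
session.commit()
session.close()                 # force the next query to load from the DB

fresh = Session().query(Widget).first()
```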