Re: [sqlalchemy]Close database connection in SQLAlchemy
On Sunday 22 Jan 2012 12:57:33 AM Sana wrote: On page load I'm displaying data from the database. Each time the page is refreshed, the query hits the database to retrieve the data, thereby making the page slow to load. Is there any way by which the query hits the table only for a new request, and for the same request the data is served from a cache? There's a beaker caching example in SQLA: http://docs.sqlalchemy.org/en/latest/orm/examples.html#beaker-caching I myself am looking to implement this pattern using memcached in my own app. -- Fayaz Yusuf Khan Cloud developer and architect Dexetra SS, Bangalore, India fayaz.yusuf.khan_AT_gmail_DOT_com fayaz_AT_dexetra_DOT_com +91-9746-830-823
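[Editor's note: the Beaker example linked above wires caching into Query itself. Stripped of the SQLAlchemy integration, the underlying idea is just memoizing query results by key. A minimal stand-alone sketch of that idea follows; the dict stands in for memcached, and `get_or_create` and `load_user` are hypothetical names for illustration, not part of any library.]

```python
# Cache-on-first-use: run the expensive loader only on a cache miss.
# A plain dict stands in for memcached here; a real client would
# replace it (and add expiration) in production.
cache = {}

def get_or_create(key, creator):
    """Return the cached value for key, invoking creator() only on a miss."""
    if key not in cache:
        cache[key] = creator()
    return cache[key]

calls = []  # records how many times the "database" was actually hit

def load_user(user_id):
    def creator():
        calls.append(user_id)  # stands in for the real database query
        return {'id': user_id, 'name': 'user%d' % user_id}
    return get_or_create(('user', user_id), creator)

first = load_user(1)
second = load_user(1)  # served from cache; the "query" does not run again
```

With a real memcached client the shape is the same: compute a key from the query and its parameters, try the cache, and fall through to the database only on a miss.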
[sqlalchemy] session.query().get() is unsupported during flush for getting an object that was just added?
I think I understand why, during a flush(), if I use session.query().get() for an item that was just added during this flush, I don't get the persistent object I might expect: the session still has it as pending even though, logically, it is already persistent. I don't suppose you have any desire to support that, huh? The use case would be related to the future ticket http://www.sqlalchemy.org/trac/ticket/1939 (and http://www.sqlalchemy.org/trac/ticket/2350). Attached is a script demonstrating the issue I've hit. I can work around it with some difficulty, but I wanted your input and thoughts. Thanks, Kent -- You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.

[attached script]

# Show what happens when we use session.query().get()
# during a flush to load an object that is being inserted during the same flush.
# Instead of getting what was the pending object, we get a new copy of what
# the orm thinks is persistent, and then it is detached after the flush finishes.

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy import event
from sqlalchemy.orm.util import has_identity

engine = create_engine('sqlite:///', echo=True)
metadata = MetaData(engine)
Session = sessionmaker(bind=engine)

rocks_table = Table('rocks', metadata,
    Column('id', Integer, primary_key=True),
)
bugs_table = Table('bugs', metadata,
    Column('id', Integer, primary_key=True),
    Column('rockid', Integer, ForeignKey('rocks.id')),
)

class Rock(object):
    def __repr__(self):
        return 'Rock@%d: id=[%s] in session:[%s] has_identity[%s]' % (
            id(self), self.__dict__.get('id'),
            self in session, has_identity(self))

class Bug(object):
    def __repr__(self):
        return 'Bug@%d: id=[%s] rockid[%s] with rock[%s]' % (
            id(self), self.__dict__.get('id'),
            self.__dict__.get('rockid'), self.__dict__.get('rock', 'not set'))

class BugAgent(MapperExtension):
    def before_update(self, mapper, connection, instance):
        assert 'rock' not in instance.__dict__
        print '\n\nduring flush'
        # after http://www.sqlalchemy.org/trac/ticket/2350, we could just
        # reference it like this:
        #instance.rock
        instance.rock = session.query(Rock).get(instance.rockid)
        #
        # we just loaded a Rock that was just inserted during this flush, so
        # it looks persistent to the orm, but the orm also has this object
        # already (still pending).  After the flush is done,
        # the pending object will be the only one in the session and the
        # object we just loaded here will be removed from the session (detached)
        #
        print '\n\n*before_update: %r\n' % instance
        assert 'rock' in instance.__dict__

mapper(Rock, rocks_table, properties={
    'bugs': relationship(Bug, cascade='all,delete-orphan', backref='rock')
})
mapper(Bug, bugs_table, extension=BugAgent())

@event.listens_for(Bug.rockid, 'set')
def autoexpire_rock_attribute(instance, value, oldvalue, initiator):
    if value != oldvalue:
        if instance in session and has_identity(instance):
            assert 'rock' in instance.__dict__
            print '\n\nBug.rockid changing from [%s] to [%s]...' % (oldvalue, value)
            print '**about to expire rock for %r' % instance
            session.expire(instance, ['rock'])
            print '**expired rock for %r\n' % instance
            assert 'rock' not in instance.__dict__

metadata.create_all()
session = Session()

# add a rock and bug
rock = Rock()
rock.id = 0
bug = Bug()
bug.id = 0
rock.bugs.append(bug)
session.add(rock)
session.commit()

# later... new session
session = Session()
b1 = Bug()
b1.id = 0
rock = Rock()
rock.id = 1
rock.bugs.append(b1)
print '\n\nmerge start\n'
merged = session.merge(rock)
print '\n\nmerge end\n'
print 'flush\n'
session.flush()
assert 'rock' in merged.bugs[0].__dict__

# show that the pending object has become persistent
print "\n\nsession's pending obj turned persistent: %r" % session.query(Rock).get(1)
# show that the object we loaded has been detached from the session
print 'merged.bugs[0].rock (copy of same object, no longer in session): %r' % merged.bugs[0].rock
Re: [sqlalchemy] session.query().get() is unsupported during flush for getting an object that was just added?
On Jan 26, 2012, at 11:28 AM, Kent Bower wrote: [...] I don't suppose you have any desire to support that, huh? [...] No, there's no plans to support this case at all; you're using the Session inside of a mapper event, which is just not supported, and can never be due to the nature of the unit of work. The most recent docstrings try to be very explicit about this: http://docs.sqlalchemy.org/en/latest/orm/events.html#sqlalchemy.orm.events.MapperEvents.before_update I guess I have to add session.query() and get() in there as well. The way the flush works is not as straightforward as "persist object A; persist object B; persist object C" - that is, these are not atomic operations inside the flush. It's more like "perform step X for objects A, B, and C; perform step Y for objects A, B, and C". This is basically batching, and is necessary since it is vastly more efficient than atomically completing each object one at a time. Also, some decisions needed by Y can't always be made until X has completed for objects involved in dependencies. A side effect of batching is that if we provide a hook that emits after X and before Y, you're being exposed to the objects in an unusual state.
Hence, the hooks that are in the middle like that are only intended to emit SQL on the given Connection, not to do anything ORM-level beyond assigning column-based values on the immediate object. As always, before_flush() is where ORM-level manipulations are intended to be placed.
Re: [sqlalchemy] session.query().get() is unsupported during flush for getting an object that was just added?
Fair enough. I had enough understanding of what must be going on to know flush isn't straightforward, but I'm still glad I asked. Sorry for not having read the documents very well, and thanks for your answer: from it, I surmise that before_flush() *is* safe for session operations, which is very good to understand more clearly. Thanks. On 1/26/2012 12:06 PM, Michael Bayer wrote: [...]
Re: [sqlalchemy] session.query().get() is unsupported during flush for getting an object that was just added?
yup, before_flush is made for that, and I've for some time had some vague plans to add some more helpers there so you could get events local to certain kinds of objects in certain kinds of states, meaning it would look a lot like before_update. But looping through .new, .dirty, and .deleted is how to do it for now. On Jan 26, 2012, at 12:12 PM, Kent wrote: [...]
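[Editor's note: a sketch of the "loop through .new in before_flush" pattern described above. The `Item` class and the label default are made up for illustration; this assumes a reasonably recent SQLAlchemy where the event API and declarative mapping are available.]

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import sessionmaker
try:
    from sqlalchemy.orm import declarative_base      # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    label = Column(String, nullable=False)

Session = sessionmaker()

@event.listens_for(Session, 'before_flush')
def default_labels(session, flush_context, instances):
    # ORM-level work is safe here: the flush plan has not been built yet,
    # so we may freely assign attributes on pending objects.
    for obj in session.new:
        if isinstance(obj, Item) and obj.label is None:
            obj.label = 'item-%d' % id(obj)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(bind=engine)

item = Item()
session.add(item)
session.commit()     # before_flush fills in label before the INSERT runs
```

The same loop can inspect `session.dirty` and `session.deleted` to approximate before_update/before_delete behavior at the session level, which is the helper shape hinted at above.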
Re: [sqlalchemy] session.query().get() is unsupported during flush for getting an object that was just added?
So, as a typical example of where it seems very natural to use a mapper event: suppose you need to automatically maintain the NOT NULL sequence column of a related table, but to compute the sequence you need to loop over the parent table's collection. You want the sequence to be human friendly (a natural primary key), and you also want to be able to sort by sequence, guaranteed in order, without the possibility of a database sequence wrap-around. So you want the sequence 1, 2, 3... This seems extremely well fit for before_insert, like this:

==
parents_table = Table('parents', metadata,
    Column('id', Integer, primary_key=True),
)
children_table = Table('children', metadata,
    Column('parentid', Integer, ForeignKey('parents.id')),
    Column('sequence', Integer, primary_key=True),
)

class Parent(object): pass
class Child(object): pass

mapper(Parent, parents_table, properties={
    'children': relationship(Child, cascade='all,delete-orphan', backref='parent')
})
mapper(Child, children_table)

@event.listens_for(Child, 'before_insert')
def set_sequence(mapper, connection, instance):
    if instance.sequence is None:
        # max() of an empty sequence raises, so guard for the first child
        taken = [c.sequence for c in instance.parent.children
                 if c.sequence is not None]
        instance.sequence = (max(taken) if taken else 0) + 1
==

But this reaches across relationships, so that is actually not desired here, is that correct? For this, you would loop over session.new in before_flush, is that how you would approach this requirement? On 1/26/2012 12:34 PM, Michael Bayer wrote: [...]
Re: [sqlalchemy] session.query().get() is unsupported during flush for getting an object that was just added?
On Jan 26, 2012, at 1:21 PM, Kent wrote: [...] But this reaches across relationships, so that is actually not desired here, is that correct?

that is correct.

For this, you would loop over session.new in before_flush, is that how you would approach this requirement?

If the value is based on what's already been INSERTed for previous rows, I'd emit a SQL statement to get at the value. If it's based on some kind of natural consideration that isn't dependent on the outcome of an INSERT statement, then you can do the looping above within the before_flush() event and assign everything at once. Basically you need to batch the same way the UOW itself does.
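[Editor's note: "emit a SQL statement to get at the value" against the hook's Connection might look like the sketch below. The `children` table and `next_sequence` helper are illustrative, borrowing the schema from the earlier example; inside a before_insert listener the call would be something like `instance.sequence = next_sequence(connection, instance.parentid)`.]

```python
from sqlalchemy import create_engine, text

def next_sequence(connection, parent_id):
    """Next child sequence value for a parent, computed with plain SQL.

    Uses only the Connection, which is what mid-flush hooks like
    before_insert hand you -- no Session access involved.
    """
    current = connection.execute(
        text("SELECT max(sequence) FROM children WHERE parentid = :p"),
        {"p": parent_id},
    ).scalar()
    return (current or 0) + 1

# tiny demonstration against an in-memory database
engine = create_engine('sqlite://')
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE children (parentid INTEGER, sequence INTEGER)"))
    conn.execute(text("INSERT INTO children VALUES (1, 1)"))
    conn.execute(text("INSERT INTO children VALUES (1, 2)"))
    nxt = next_sequence(conn, 1)      # rows already INSERTed are visible
    empty = next_sequence(conn, 99)   # no rows yet: starts at 1
```

Because the query runs on the flush's own Connection, it sees rows the flush has already INSERTed in the current transaction, which is exactly the "based on what's already been INSERTed" case above.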
[sqlalchemy] backrefs
Is there a straightforward way to determine if a RelationshipProperty has a corresponding reverse (backref)?
[sqlalchemy] Re: backrefs
I assume the non-public property._reverse_property is just what I'm looking for. :) On Jan 26, 2:06 pm, Kent jkentbo...@gmail.com wrote: Is there a straightforward way to determine if a RelationshipProperty has a corresponding reverse (backref)?
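[Editor's note: a sketch of that check. `_reverse_property` is a private attribute, populated when mappers are configured, and could change between releases; the `has_reverse` helper and the mapped classes here are made up for illustration.]

```python
from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.orm import relationship, configure_mappers
try:
    from sqlalchemy.orm import declarative_base      # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parents'
    id = Column(Integer, primary_key=True)
    children = relationship('Child', backref='parent')   # has a reverse
    loners = relationship('Loner')                       # one-way, no backref

class Child(Base):
    __tablename__ = 'children'
    id = Column(Integer, primary_key=True)
    parentid = Column(Integer, ForeignKey('parents.id'))

class Loner(Base):
    __tablename__ = 'loners'
    id = Column(Integer, primary_key=True)
    parentid = Column(Integer, ForeignKey('parents.id'))

configure_mappers()   # _reverse_property is filled in during configuration

def has_reverse(prop):
    # private API, as noted above: a set of the paired properties, if any
    return bool(prop._reverse_property)

paired = has_reverse(Parent.children.property)   # True
one_way = has_reverse(Parent.loners.property)    # False
```

Note that `back_populates` pairs show up in `_reverse_property` the same way backrefs do, since a backref is just a shorthand for two properties pointing at each other.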
Re: [sqlalchemy] a list as a named argument for an in clause
Another possible approach is using the sql module to build the query:

from sqlalchemy import sql
ids = [1, 2, 3]
query = sql.select([Table.col1, Table.col2], Table.id.in_(ids))
session.execute(query)

I'm not sure how that fits into the larger context of what you're doing, but it's flexible and pretty easy to build the queries dynamically. -Eric

On Tue, Jan 24, 2012 at 7:40 AM, Conor conor.edward.da...@gmail.com wrote: On 01/22/2012 01:49 PM, alex bodnaru wrote: hello friends, i'm using SA at a quite low level, with session.execute(text, dict). is it possible to do something in the spirit of: session.execute("select * from tablename where id in (:ids)", dict(ids=[1,2,3,4])) ? thanks in advance, alex

I'm not aware of a general way to do this. If you are using PostgreSQL+psycopg2, you can use the "= ANY(...)" operator instead of the IN operator: session.execute("select * from tablename where id = ANY(:ids)", dict(ids=[1,2,3,4])) -Conor
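[Editor's note: for the raw-text route in the original question, another common workaround is to expand the list into numbered bind parameters by hand. `expand_in` below is a hypothetical helper, not a SQLAlchemy API; later SQLAlchemy versions added built-in "expanding" bind parameters for this.]

```python
def expand_in(name, values):
    """Expand a list into numbered bind params for a textual IN clause.

    Returns the placeholder fragment and the matching params dict,
    suitable for session.execute(text_sql, params).
    """
    params = {}
    placeholders = []
    for i, value in enumerate(values):
        key = '%s_%d' % (name, i)
        params[key] = value
        placeholders.append(':' + key)
    return ', '.join(placeholders), params

clause, params = expand_in('ids', [1, 2, 3])
sql = "select * from tablename where id in (%s)" % clause
# sql is now "select * from tablename where id in (:ids_0, :ids_1, :ids_2)"
```

Each value gets its own bind parameter, so the driver still handles quoting and escaping; only the placeholder list is built with string formatting.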
[sqlalchemy] ArgumentError: Only update via a single table query is currently supported
Hi, I have some problem understanding this error. From googling around, this is about modifying data in more than one table, so the ORM wouldn't do it. However, my query is

self.session.query(e).filter(e.src_id == src_id).filter(e.tar_id == tar_id).update({e.status: self._DELETE})

and it only involves one table. From debugging, without the update part, the select is translated to

SELECT events_1.event_id AS events_1_event_id, events_1.src_id AS events_1_src_id, events_1.tar_id AS events_1_tar_id, events_1.type AS events_1_type, events_1.event_ts AS events_1_event_ts, events_1.status AS events_1_status, events_1.media_id AS events_1_media_id FROM events AS events_1 WHERE events_1.src_id = :src_id_1 AND events_1.tar_id = :tar_id_1

It looks very simple; what am I missing? Thanks, Mason
Re: [sqlalchemy] ArgumentError: Only update via a single table query is currently supported
On Jan 26, 2012, at 8:26 PM, Mason wrote: [...] It looks very simple, what am I missing? The message there suggests e is mapped to more than one table, typically the child table in a joined table inheritance setup. If both e.src_id and e.tar_id are local to the events table, 0.8 has plans to support considering just local_table within query.update(), in ticket #2365: http://www.sqlalchemy.org/trac/ticket/2365. There's a one-line patch there now that opens things up for update(), but further testing is needed to roll it out.
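[Editor's note: pending that ticket, one general workaround is to drop to the Core Table for the UPDATE, which by construction touches exactly one table. The schema below is a stand-in reduced to a few of the columns visible in the SELECT above; the real mapping presumably spans a join.]

```python
from sqlalchemy import (MetaData, Table, Column, Integer, String,
                        create_engine)

metadata = MetaData()
events = Table('events', metadata,
    Column('event_id', Integer, primary_key=True),
    Column('src_id', Integer),
    Column('tar_id', Integer),
    Column('status', String),
)

engine = create_engine('sqlite://')
metadata.create_all(engine)

DELETE = 'D'   # stand-in for self._DELETE
with engine.begin() as conn:
    conn.execute(events.insert().values(
        event_id=1, src_id=10, tar_id=20, status='A'))
    # The Core UPDATE names a single table, so the ORM's
    # multi-table restriction never comes into play.
    conn.execute(
        events.update()
              .where(events.c.src_id == 10)
              .where(events.c.tar_id == 20)
              .values(status=DELETE)
    )
    row = conn.execute(
        events.select().where(events.c.event_id == 1)
    ).first()
    status = row.status
```

In an ORM application the same statement can be run via `session.execute(...)`; note that, unlike Query.update(), this does not synchronize already-loaded objects in the session, so affected instances should be expired or refreshed afterwards.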