Re: [sqlalchemy] session lifecycle and wsgi
Michael Bayer wrote:
> if your application keeps a handle on objects after the request is complete, and then passes them somewhere else, like a background thread or something, then the subsequent request is potentially going to be touching those objects at the same time. This would all be pretty poor practice, as individual threads should always have their own sessions.

Right, so you should either be .close()'ing or .remove()'ing the session here, or manually expunging the objects you want to shift to the other thread, correct? (I'm guessing session.merge() will whine if handed an object that is already in another session?)

> Or maybe you loaded those objects into a globally-scoped in-memory cache of some kind - you probably don't want the next request touching them directly, as once they're in a global cache of some kind, other threads would be calling upon them to copy their state locally.

So putting non-expunged objects in something like a Beaker cache would be a no-no, correct? (Would .close() or .remove() fix the problem if the objects are already in the cache by the time the .close() or .remove() is called?)

> On the other hand, you might load the objects into a session-local cache of some kind that you've created. Now, when the next request comes in and calls upon those same rows, the ORM doesn't need to re-instantiate the objects; they are already present in the cache.

Does the ORM check whether the attributes of the cached object are correct, or could you end up in a situation where you do a query but end up using the cached attributes rather than the ones just returned from the db?

> This is a use case where you'd want to keep the same session from one request to the next.

It looks like this is what zope.sqlalchemy does. The last thing it does in a logical transaction (ie: a http://pypi.python.org/pypi/transaction transaction) is call .close(). Should it be calling .remove()?
http://www.sqlalchemy.org/docs/session.html#lifespan-of-a-contextual-session would seem to imply that the .close() alone is fine?

> the bulletpoints at http://www.sqlalchemy.org/docs/session.html#lifespan-of-a-contextual-session are the best I can do here, I'm really just repeating myself over and over in these threads

Well, except the info you've given in this thread, at least, goes way above what's in that list ;-) A few comments on the bullets:

- They don't explain what happens to transactions and connections. The points for both remove() and close() say "all of its transactional/connection resources are closed out"; does this mean database connections are closed, or just returned to the pool? (I hope the latter!)

- The point for .commit() states "The full state of the session is expired, so that when the next web request is started, all data will be reloaded", but your last reply implied this wouldn't always be the case.

Also, is it fair game to assume that session.close() rolls back any open database transaction? Is there any difference between that rollback and calling session.rollback() explicitly?

Finally, in nosing around session.py, I notice that SessionTransactions can be used as context managers. Where can I find good examples of this?

Chris

-- 
Simplistix - Content Management, Batch Processing & Python Consulting - http://www.simplistix.co.uk

-- You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalch...@googlegroups.com. To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
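The per-request pattern being debated above can be sketched with a scoped_session. This is a minimal illustration, not the zope.sqlalchemy code itself; the in-memory SQLite engine and the `handle_request` function are stand-ins for a real WSGI setup.

```python
# Sketch of the per-request session lifecycle: a scoped_session gives each
# thread its own Session, and calling .remove() at the end of a request
# closes out transactional/connection resources AND discards the Session
# from the registry, so the next request starts with a fresh one.
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")  # stand-in engine for illustration
Session = scoped_session(sessionmaker(bind=engine))

def handle_request():
    session = Session()      # the registry returns this thread's session
    try:
        pass                 # ... do ORM work here ...
    finally:
        Session.remove()     # close() plus dropping the session itself

handle_request()

s1 = Session()
Session.remove()
s2 = Session()
assert s1 is not s2          # after .remove(), a brand-new Session
```

Note the contrast with .close() alone, which resets the same Session object for reuse rather than discarding it.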
[sqlalchemy] Re: Select.compare()
Thank you for the reply, I was thinking of using something similar.

On Apr 28, 11:25 pm, Michael Bayer mike...@zzzcomputing.com wrote:
> dimazest wrote:
>> Hi all, I faced a problem comparing Selects. It seems that Select.compare() works incorrectly. Here is the code that shows the problem:
>>
>> from sqlalchemy import MetaData
>> from sqlalchemy import Table, Column
>> from sqlalchemy import Integer, String
>> from sqlalchemy import select
>>
>> metadata = MetaData()
>> table1 = Table('table1', metadata,
>>     Column('col1', Integer, primary_key=True),
>> )
>> s1 = select([table1])
>> s2 = select([table1])
>> assert s1.compare(s2)
>> Traceback (most recent call last):
>>   File "<input>", line 1, in <module>
>> AssertionError
>>
>> Do I expect correctly that s1.compare(s2) should return True? If not, how can I compare two Selects?
>
> compare() is documented as producing an identity compare by default, i.e. s1.compare(s1) would be True. compare() is currently only used by the ORM for comparing the structure of column expressions and is not generally useful on a FromClause at this time. For a simple comparison of any two elements, just do:
>
> def compare(x, y):
>     x = x.compile()
>     y = y.compile()
>     return unicode(x) == unicode(y) and x.params == y.params
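The compile-and-compare workaround above can be made runnable as follows. This is a sketch updated from the Python 2 / 0.6-era spelling in the thread (`unicode()`, `select([table1])`) to current SQLAlchemy; the idea is unchanged: two constructs are "equal" if they compile to the same SQL string with the same bound parameters.

```python
# Compare two SQL constructs by their compiled string + bound parameters,
# as suggested in the reply above (modernized from the Py2 original).
from sqlalchemy import MetaData, Table, Column, Integer, select

metadata = MetaData()
table1 = Table("table1", metadata,
               Column("col1", Integer, primary_key=True))

def compare(x, y):
    cx, cy = x.compile(), y.compile()
    return str(cx) == str(cy) and cx.params == cy.params

s1 = select(table1)
s2 = select(table1)
s3 = select(table1).where(table1.c.col1 == 5)

assert compare(s1, s2)       # identical constructs compile identically
assert not compare(s1, s3)   # the WHERE clause changes the compiled SQL
```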
[sqlalchemy] Re: session lifecycle and wsgi
On Apr 28, 4:38 pm, Chris Withers ch...@simplistix.co.uk wrote:
> Laurence Rowe wrote:
>> Chris, this is what the combination of repoze.tm2/transaction and zope.sqlalchemy does for you. You don't have to do anything special other than that.
> It doesn't do the .remove(). BFG currently has a bit of horribleness to make that work. I'd like to get rid of it or make it less horrible...

Can you point me at where it does that please?

Laurence
[sqlalchemy] making a transient copy of instances recursively at merge() time
Before saving objects to the database, we need to inspect the changes. I am aware of the attributes.get_history() functionality, which is helpful to a point. attributes.get_history() falls short of what I need in two places:

*** after a session.flush(), it is gone. There are times we need to flush, but the transaction has still not been committed and more processing is required, so we still need access to the history of changes for this object in this transaction despite the call to flush().

*** if there are calculation methods I've written that objects have, I need to be able to call these for the old object. For example, say I write a method on the Order class such as def calculate_total_volume(self). When an order is saved I need to compare the old volume to the new. On the merged order, I can call merged_obj.calculate_total_volume(), but I would need to use the information in attributes.get_history() to *recreate* the old version of the object in order to call old_order.calculate_total_volume().

I've realized that if merge() attached a python attribute, such as _original (a transient clone of the object fetched from the database), to each object as it merges it, then this:
* would be accessible regardless of session.flush()
* could be acted on with object methods like any other object
* is probably more intuitive than attributes.get_history(), especially to those familiar with Oracle triggers and :OLD.orderid vs :NEW.orderid. This would be very similar: merged.orderid vs merged._original.orderid

Disadvantage:
* it goes against the concept of trying to be very hands-off on the actual object. Maybe it could instead be stored in the instance's state or something.

Does this conceptually make sense? Does it make enough sense to make this a feature? In the meantime, can you recommend an approach for me? (Extend the Session class?)

Thanks much,
Kent
Re: [sqlalchemy] Re: session lifecycle and wsgi
Laurence Rowe wrote:
> On Apr 28, 4:38 pm, Chris Withers ch...@simplistix.co.uk wrote:
>> It doesn't do the .remove(). BFG currently has a bit of horribleness to make that work. I'd like to get rid of it or make it less horrible... Can you point me at where it does that please.
> http://svn.repoze.org/repoze.bfg/trunk/repoze/bfg/paster_templates/routesalchemy/+package+/run.py_tmpl ...the handle_teardown function. I no longer believe this is necessary, pending absolute confirmation from Michael in the "Re: [sqlalchemy] session lifecycle and wsgi" thread.

It's certainly not necessary to do the close() *and* the remove() ;-)

cheers,

Chris
Re: [sqlalchemy] Mapper can't map primary key
On 4/28/2010 11:31 PM, Mark wrote:
> Hi guys, I have the following Table construction:
>
> ADMIN_TABLE = Table('admin', bound_meta_data,
>     Column('username', types.VARCHAR(100), primary_key=True),
>     autoload=True, schema=schema)
>
> and a mapper as such:
>
> mapper(Admin, TABLES.ADMIN_TABLE, properties={
>     'employee': relation(Employee,
>         primaryjoin=TABLES.ALL_EMPLOYEES_TABLE.c.employee_id==\
>             TABLES.ADMIN_TABLE.c.employee_id,
>         foreign_keys=[TABLES.ADMIN_TABLE.c.employee_id],
>         backref=backref('user',
>             foreign_keys=[TABLES.ADMIN_TABLE.c.employee_id],
>             lazy='dynamic'))
>     },
>     extension=VerificationMapper())
>
> When I run "paster serve --reload development.ini" in my Pylons app, I get an irritating error complaining the following:
>
> sqlalchemy.exc.ArgumentError: Mapper Mapper|Admin|admin could not assemble any primary key columns for mapped table 'admin'
>
> As you can see above, I have already set primary_key=True, so why is it still complaining that it can't find the primary key? With this error, I tried out something else, adding the code below to my mapper configuration:
>
> primary_key=[TABLES.ADMIN_TABLE.columns.username]
>
> Adding this allowed me to run the server properly; however, when I query the database, it claims that it is unable to locate the username column. I am very sure my database is correct and this is definitely an issue with my SQLAlchemy code. Can someone please explain what's going on? Why do I get the exception? Thanks.

This is just a guess I'm afraid, but could it be the autoload=True you're passing to the admin Table constructor? From the docs: "Usually there will be no Column objects in the constructor if this property is set." Do you really mean to reflect everything _except_ the username column?

Lance
Re: [sqlalchemy] further restricting a query provided as raw sql
Chris Withers wrote:
> Michael Bayer wrote:
>> we have the in_() construct. It should be in the ORM and SQL expression tutorials:
>>
>> t1 = Table('mytable', metadata, Column('foo', String))
>> select([t1]).where(t1.c.foo.in_(['a', 'b', 'c']))
>
> However, that requires table/column objects which I don't have. Are the innards of in_() exposed anywhere for public consumption, or should I avoid them?
>
> Chris

from sqlalchemy.sql import column

column(anything_you_want).in_(['a', 'b', 'c'])
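The column() trick above can be sketched end-to-end. A lightweight "lexical" column needs no Table or MetaData, so you can build a bind-parameterized IN clause to append onto raw SQL you didn't construct yourself:

```python
# Build an IN clause against a bare column name -- no Table object required.
from sqlalchemy.sql import column

clause = column("foo").in_(["a", "b", "c"])

# str() compiles with a default dialect; the values travel as bound
# parameters rather than being interpolated into the SQL string.
assert "foo IN" in str(clause)
```

How the bind parameters render (one placeholder per value, or a single expanding parameter) varies between SQLAlchemy versions, but the clause itself composes with text() or string SQL either way.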
Re: [sqlalchemy] Mapper can't map primary key
Mark wrote:
> sqlalchemy.exc.ArgumentError: Mapper Mapper|Admin|admin could not assemble any primary key columns for mapped table 'admin'
>
> As you can see above, I have already set primary_key=True, so why is it still complaining that it can't find the primary key? With this error, I tried out something else, adding the code below to my mapper configuration:
>
> primary_key=[TABLES.ADMIN_TABLE.columns.username]
>
> Adding this allowed me to run the server properly; however, when I query the database, it claims that it is unable to locate the username column. I am very sure my database is correct and this is definitely an issue with my SQLAlchemy code.

here is a script that imitates the patterns you are using, and it runs fine, both for the reflected/overridden primary key column as well as the manual mapping of the pk column. Assuming this script works fine for you, you'd need to identify what is different about your application/environment versus this test script.

from sqlalchemy import *
from sqlalchemy.orm import *

engine = create_engine('sqlite://', echo=True)

# note no pk in the table
engine.execute("create table foo (id integer, bar integer)")
engine.execute("insert into foo (id, bar) values (?, ?)", [(1, 1), (2, 5), (3, 7)])

m = MetaData(engine)
table = Table('foo', m,
    Column('id', Integer, primary_key=True),
    autoload=True
)
assert list(table.primary_key) == [table.c.id]

class Foo(object):
    pass

mapper(Foo, table)
print create_session().query(Foo).all()

clear_mappers()

mapper(Foo, table, primary_key=[table.c.id])
print create_session().query(Foo).all()
Re: [sqlalchemy] making a transient copy of instances recursively at merge() time
Kent wrote:
> Does this conceptually make sense? Does it make enough sense to make this a feature? In the meantime, can you recommend an approach for me? (Extend the Session class?)
you want to be using a SessionExtension here, and the after_flush() hook is provided precisely for the use case where you want rules to execute after SQL has been flushed but you still want a full accounting of everything that has changed - the attribute history on all objects has not yet been reset at this point. The information available in after_flush() is unique in that it also includes any foreign key values that have been synchronized from parent to child over relationships.
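SessionExtension was the hook mechanism of that era; later SQLAlchemy releases expose the same after_flush point through the event API. A sketch of the idea, with a hypothetical Order class and an in-memory SQLite database:

```python
# Capture what a flush just wrote, at the after_flush point where
# sess.new / sess.dirty / sess.deleted are still populated and
# attribute history has not yet been reset.
from sqlalchemy import Column, Integer, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Order(Base):                      # illustrative model, not from the thread
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    qty = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

seen = []

@event.listens_for(session, "after_flush")
def after_flush(sess, flush_context):
    seen.extend(sess.new)               # objects just INSERTed by this flush

session.add(Order(id=1, qty=5))
session.flush()                         # flush has run, transaction still open
assert len(seen) == 1 and seen[0].qty == 5
```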
[sqlalchemy] Re: making a transient copy of instances recursively at merge() time
That sounds like it could be very useful for me, thank you for pointing me there. That could solve one of the two issues I'm facing that I listed... what about the other?

On Apr 29, 11:02 am, Michael Bayer mike...@zzzcomputing.com wrote:
> you want to be using a SessionExtension here, and the after_flush() hook is provided precisely for the use case where you want rules to execute after SQL has been flushed but you still want a full accounting of everything that has changed - the attribute history on all objects has not yet been reset at this point.
Re: [sqlalchemy] Re: making a transient copy of instances recursively at merge() time
if you have a method like 'calculate_total_volume()', that is a business method. you should not be relying upon the internals of the ORM to figure that out for you, and you should have two distinct fields on your object to represent the two values you need to make that calculation.

Kent wrote:
> That sounds like it could be very useful for me, thank you for pointing me there. That could solve one of the two issues I'm facing that I listed... what about the other?
[sqlalchemy] Re: making a transient copy of instances recursively at merge() time
There might be a communication problem, which an example could help clarify. I'm constrained by a legacy database that I have no control over changing. When an order is changed, I need to calculate the change in volume (or points) so I can update a table that records this information. Here is an example of a method you might want to run on the original, unchanged object (Order, in this case):

def calc_points(self):
    return sum(l.product.points * l.qtyordered
               for l in self.orderdetails
               if l.product.points is not None)

Notice that this calculation relies on several relations: orderdetails, and orderdetails[].product. From an MVC viewpoint, my argument is that certainly this business logic *should* reside in the Order class. That is to say, an Order should know how to calculate its own volume. Unfortunately, this is not a matter of two or three *fields* that I can easily extract from attributes.get_history(). This computation, again, relies on several relations. Any further insight or advice?

On Apr 29, 12:57 pm, Michael Bayer mike...@zzzcomputing.com wrote:
> if you have a method like 'calculate_total_volume()', that is a business method. you should not be relying upon the internals of the ORM to figure that out for you, and you should have two distinct fields on your object to represent the two values you need to make that calculation.
Re: [sqlalchemy] Re: making a transient copy of instances recursively at merge() time
Kent wrote:
> Notice that this calculation relies on several relations: orderdetails, and orderdetails[].product. From an MVC viewpoint, my argument is that certainly this business logic *should* reside in the Order class. Unfortunately, this is not a matter of two or three *fields* that I can easily extract from attributes.get_history(). Any further insight or advice?

the general idea with this type of thing is to listen for events using descriptors, custom collections, and/or AttributeListeners, and to trigger the desired calculations before things change. SQLAlchemy internals are only intended to implement as much as is needed to persist changes to the database. For example here, if you need to run calc_points() every time the contents of the orderdetails collection changes, a simple AttributeListener implementing append/remove can achieve that. But also, a custom list class which references the parent and overrides append() and __setitem__() would do it too; i.e. it's not as though any of this would be impossible if you weren't using an ORM. The ORM's attribute logic is designed only for the purposes of persisting changes to a database. That we expose some of its details for the purpose of business logic is only intended to be to the degree that this business logic is explicitly concerned with the details of the object's persistence. Things that aren't related to the mechanisms of persistence aren't within the domain of SQLA and can be implemented using regular Python techniques.
On the merged order, I can call merged_obj.calculate_total_volume(), but I would need to use the information in attributes.get_history() to *recreate* the old version of the object in order to call old_order.calculate_total_volume().

I've realized that if merge() attached a Python attribute, such as _original (a transient clone of the object as fetched from the database), to each object as it merges it, then this:

* would be accessible regardless of session.flush()
* could be acted on with object methods like any other object
* is probably more intuitive than attributes.get_history(), especially to those familiar with Oracle triggers and :OLD.orderid vs :NEW.orderid. This would be very similar: merged.orderid vs merged._original.orderid

Disadvantage:

* it goes against the concept of trying to be very hands-off on the actual object. Maybe it could instead be stored in the instance's state or something.

Does this conceptually make sense? Does it make enough sense to make this a feature? In the meantime, can you recommend an approach for me? (Extend the Session class?)

you want to be using a SessionExtension
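Michael's point that a custom list class would work even without the ORM can be made concrete. The sketch below is plain Python with no SQLAlchemy at all; NotifyingList, Product, OrderDetail, and on_details_changed are invented names for illustration (in real SQLAlchemy the collection would be wired in via relationship(..., collection_class=...), and a complete implementation would also cover extend(), pop(), and the other mutators):

```python
# A plain-Python sketch of "a custom list class which references the
# parent and overrides append() and __setitem__()": the collection
# notifies its owning Order whenever its contents change, so derived
# values can be recalculated eagerly, before anything is persisted.

class NotifyingList(list):
    def __init__(self, parent, *args):
        super().__init__(*args)
        self._parent = parent

    def append(self, item):
        super().append(item)
        self._parent.on_details_changed()

    def remove(self, item):
        super().remove(item)
        self._parent.on_details_changed()

    def __setitem__(self, index, item):
        super().__setitem__(index, item)
        self._parent.on_details_changed()


class Product:
    def __init__(self, points):
        self.points = points


class OrderDetail:
    def __init__(self, product, qtyordered):
        self.product = product
        self.qtyordered = qtyordered


class Order:
    def __init__(self):
        self.orderdetails = NotifyingList(self)
        self.total_points = 0

    def on_details_changed(self):
        # the same computation as calc_points() in the thread
        self.total_points = sum(
            l.product.points * l.qtyordered
            for l in self.orderdetails
            if l.product.points is not None
        )
```

With this in place, order.orderdetails.append(OrderDetail(Product(5), 3)) immediately leaves order.total_points at 15 — no ORM machinery involved.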
Re: [sqlalchemy] Re: making a transient copy of instances recursively at merge() time
Michael Bayer wrote:

Kent wrote: There might be a communication problem, which an example could help clarify. [...] Notice that this calculation relies on several relations: orderdetails, and orderdetails[].product. [...] Any further insight or advice?

also in case this is not apparent, this is exactly the kind of thing I do with SessionExtensions. For example, if you look at examples/versioning/history_meta.py in the distribution you'd see exactly this technique, using SessionExtension.before_flush() to create new version entries for objects being modified or deleted. Here's another example from a Pycon tutorial in 2009: http://bitbucket.org/zzzeek/pycon2009/src/tip/chap5/sessionextension.py - this one modifies a count attribute on a parent table in response to changes in a child collection, using after_flush().

we've hopefully implemented enough hooks into mapper and session to allow any possible change-based operation. there should be no need to add esoteric hacks into methods like merge().

-- You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
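The before_flush() pattern referenced in examples/versioning/history_meta.py is easier to picture with code. The following is a schematic, SQLAlchemy-free sketch of the shape of such a hook; VersioningExtension and FakeSession are stand-ins invented here (a real implementation would subclass sqlalchemy.orm.interfaces.SessionExtension and receive the actual Session, flush context, and instances arguments):

```python
# SQLAlchemy-free sketch of a before_flush() hook: before changes are
# persisted, inspect the session's dirty/deleted sets and record a
# version entry for each object. This is when pre-change state (and
# calculations over it) is still fully available.

class VersioningExtension:
    def __init__(self):
        self.history = []  # stand-in for rows in a history table

    def before_flush(self, session, flush_context=None, instances=None):
        # snapshot each pending object's state before the flush writes it
        for obj in session.dirty:
            self.history.append(("update", dict(vars(obj))))
        for obj in session.deleted:
            self.history.append(("delete", dict(vars(obj))))


class FakeSession:
    """Minimal stand-in exposing the dirty/deleted sets a real Session tracks."""

    def __init__(self, extension):
        self.extension = extension
        self.dirty = set()
        self.deleted = set()

    def flush(self):
        self.extension.before_flush(self)  # the hook fires first...
        self.dirty.clear()                 # ...then changes are "written"
        self.deleted.clear()
```

In real code the extension is handed to sessionmaker(); the point is only that the hook runs with the full pre-flush state in hand, which is exactly the moment Kent's old-vs-new volume comparison could run.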
Re: [sqlalchemy] Re: making a transient copy of instances recursively at merge() time
I'm exactly trying to avoid adding esoteric hacks (which is why I posted in the first place), so thanks for the information. I'll look into the examples you've provided; I'm hopeful that the versioned objects have intact relations (such that I could say old_object.orderdetails and get that version of the 'orderdetails' relation).

On 4/29/2010 3:59 PM, Michael Bayer wrote: [...] this is exactly the kind of thing I do with SessionExtensions. For example if you look at examples/versioning/history_meta.py in the distribution you'd see exactly this technique using SessionExtension.before_flush() to create new version entries for objects being modified or deleted. [...]
Re: [sqlalchemy] Re: making a transient copy of instances recursively at merge() time
It is helpful to know what SQLA was designed for. Also, you may be interested to know about our project, as we are apparently stretching SQLA's use case/design. We are implementing a RESTful web app (using TurboGears) on an already-existing legacy database. Since our webservice calls (and TurboGears' in general, I believe) are completely stateless, I think that precludes my being able to harness "descriptors, custom collections, and/or AttributeListeners" to trigger the desired calculations before things change, because I'm just handed the 'new' version of the object. This is precisely why so much of my attention has been on merge(): it, for the most part, works that out magically. In this case, though, it would be nice to have merge() package up some extra information, namely the object as it looks in the database prior to its magic. Thanks again.

On 4/29/2010 3:47 PM, Michael Bayer wrote: Kent wrote: There might be a communication problem, which an example could help clarify. [...] Any further insight or advice?
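Kent's wish for "the object as it looks in the database prior to its magic" can be approximated from attribute history. As a hedged illustration — no SQLAlchemy imports here; the (added, unchanged, deleted) triples stand in for what sqlalchemy.orm.attributes.get_history(obj, key) returns, and this only works before flush() discards the history — the pre-change values can be reassembled like this:

```python
# Reassemble the pre-change ("_original") state of an object from
# per-attribute history triples of the form (added, unchanged, deleted).
# The triples here are hand-built stand-ins for get_history() results.

def reconstruct_original(history_by_attr):
    """history_by_attr maps attribute name -> (added, unchanged, deleted)."""
    original = {}
    for name, (added, unchanged, deleted) in history_by_attr.items():
        if deleted:
            # attribute was changed: its old value is in 'deleted'
            original[name] = deleted[0]
        elif unchanged:
            # attribute untouched: old value equals the current one
            original[name] = unchanged[0]
        # if only 'added' is populated, the attribute had no prior value
    return original
```

A dict like this could then be used to rebuild a transient clone on which methods such as calculate_total_volume() are callable, which is essentially what Michael suggests the versioned example does for scalar attributes.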
Re: [sqlalchemy] Re: making a transient copy of instances recursively at merge() time
Kent Bower wrote: It is helpful to know what SQLA was designed for. [...] This is precisely why so much of my attention has been on merge() because it, for the most part, works that out magically.

If you are using merge(), and you're trying to use it to figure out what's changed, that implies you are starting with a business object containing the old data and populating it with the new data. Custom collection classes configured on relationship() and AttributeListeners on any attribute are both invoked during the merge() process (external descriptors, notably, are not). merge() is nothing more than a hierarchical attribute-copying procedure which can be implemented externally. The only advantages merge() itself offers are the optimized load=False option, which you aren't using here; support for objects that aren't hashable on identity (a rare use case); and an overall slightly more inlined approach that removes a small amount of public method overhead.

You also should be able to recreate the _original object from attribute history before a flush occurs. This is what the versioned example does for scalar attributes.
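Michael's description of merge() as "a hierarchical attribute-copying procedure which can be implemented externally" can be sketched in a few lines. This is a deliberately naive, SQLAlchemy-free illustration (naive_merge is an invented name; real merge() also consults the identity map, matches children by primary key rather than by position, and handles cascades and load=False, none of which is modeled here):

```python
# Naive external "merge": copy scalar attributes from a source object
# onto a target, recursing into named list-valued relations. The
# target's own child objects are kept and updated in place, loosely
# mirroring how merge() copies state onto persistent instances.

def naive_merge(source, target, relations=()):
    for name, value in vars(source).items():
        if name in relations:
            # recurse pairwise into children (a real merge would match
            # each child by primary key instead of by list position)
            for src_child, tgt_child in zip(value, getattr(target, name)):
                naive_merge(src_child, tgt_child)
        else:
            setattr(target, name, value)
    return target
```

The recursion is the "hierarchical" part; everything else merge() adds on top of this is bookkeeping against the Session's identity map.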