[sqlalchemy] Change relationship order_by at runtime
I need to change a collection's order_by when a mapped attribute on the parent instance changes. Something like this:

    parents = Table('parents', metadata,
        Column('id', Integer, primary_key=True),
        Column('reverse', Boolean, primary_key=True),
    )

    items = Table('items', metadata,
        Column('id', Integer, primary_key=True),
    )

    mapper(Parent, parents, properties={
        'items': relationship(Item, backref='parent',
            collection_class=ItemsList,
            order_by=items.c.id,
        )
    })

Desired behavior:

    >>> parent = session.query(Parent).first()
    >>> parent.reverse
    False
    >>> parent.items
    [Item 1, Item 2, Item 3]
    >>> parent.reverse = True
    >>> parent.items
    [Item 3, Item 2, Item 1]

I already use a custom collection, so I could write a custom __iter__ that checks `self.parent.reverse` via the first item (if any), but that seems a bit too tricky to me. Any advice?

Thank you for your support

--
You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
Re: [sqlalchemy] Change relationship order_by at runtime
Thank you very much

On Tue, Oct 25, 2011 at 3:46 PM, Michael Bayer mike...@zzzcomputing.com wrote:

You can use a dynamic relationship, which allows you to say:

    some_parent.items.order_by(items.c.id.desc())

http://www.sqlalchemy.org/docs/orm/collections.html#dynamic-relationship-loaders
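The dynamic-relationship suggestion can be sketched as follows. This is a minimal, self-contained example, not code from the thread: the Parent/Item classes, the `parent_id` column, and the in-memory SQLite database are illustrative. With lazy='dynamic', the collection attribute returns a Query, so the ordering can be chosen at each access instead of being fixed in the mapping.

```python
# Sketch: a "dynamic" relationship yields a Query, so order_by is
# decided at access time rather than in the relationship() call.
from sqlalchemy import create_engine, Column, Integer, ForeignKey
try:
    from sqlalchemy.orm import declarative_base
except ImportError:  # older SQLAlchemy versions
    from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, Session

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parents'
    id = Column(Integer, primary_key=True)
    # lazy='dynamic' makes .items a query object, not a loaded list
    items = relationship('Item', lazy='dynamic')

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parents.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(bind=engine)

p = Parent(id=1)
session.add(p)
session.add_all([Item(id=i, parent_id=1) for i in (1, 2, 3)])
session.commit()

# same collection, two orderings, chosen at runtime
ascending = [i.id for i in p.items.order_by(Item.id)]
descending = [i.id for i in p.items.order_by(Item.id.desc())]
```

An application could then pick the ordering based on `parent.reverse` in a plain Python property, without touching the mapping.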
[sqlalchemy] Self-populating a new instance's relationships, best practice
I have a mapped class with a one-to-many relationship to another class. I need a parent instance, when added to the session, to be populated automatically with predefined children too. I used to hook into the `after_insert` event, but now I read it is not suitable for calling Session.add() from (has it always been like this, or did it change recently?). So, while this worked with sqlite, now I get no children at all on mysql / postgresql:

    event.listen(UserStratigraphy, 'after_insert',
                 user_stratigraphy_after_insert)

    def user_stratigraphy_after_insert(mapper, connection, user_strat):
        for layer in DBSession.query(BaseLayer) \
                .filter(BaseLayer.id_str == user_strat.id_str):
            user_layer = UserLayer()
            user_layer.user_stratigraphy = user_strat
            user_layer.layer = layer
            DBSession.add(user_layer)

What is the best practice to avoid adding all children manually every time I instantiate a new parent?

Thanks for your support
neurino
Re: [sqlalchemy] Self-populating a new instance's relationships, best practice
Hi Michael, the reason is that Parent does not know its children until another relationship attribute is set (`id_str` in the code above). Consider this form-validation scenario:

    if form.validate():
        user_strat = form.bind(UserStratigraphy())

I cannot bind before creating an instance, and this is quite a common situation. I can always fall back to manual adds, of course, but...

To give more detail: I have a UserStratigraphy -> UserLayer structure, containing user data, which lies over a fixed Stratigraphy -> Layer structure. When a user creates a new UserStratigraphy and tells me which Stratigraphy it belongs to, I wish the needed UserLayers could also be created accordingly, to hold the related data.

Little mind map: http://postimage.org/image/286v46m10/

Thanks for your support
neurino

On Tue, Oct 4, 2011 at 3:17 PM, Michael Bayer mike...@zzzcomputing.com wrote:

Why not just give Parent a constructor (i.e. __init__) that adds the children it needs?
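The constructor suggestion above can be sketched with generic names (Parent/Child and the fixed list of "kinds" stand in for the thread's UserStratigraphy/UserLayer and the BaseLayer query, which need a session to reproduce): children created in __init__ are persisted automatically by the save-update cascade when the parent is added.

```python
# Sketch: populate predefined children in the parent's constructor;
# the save-update cascade persists them along with the parent.
from sqlalchemy import create_engine, Column, Integer, ForeignKey
try:
    from sqlalchemy.orm import declarative_base
except ImportError:  # older SQLAlchemy versions
    from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, Session

Base = declarative_base()

class Child(Base):
    __tablename__ = 'children'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parents.id'))
    kind = Column(Integer)

    def __init__(self, kind):
        self.kind = kind

class Parent(Base):
    __tablename__ = 'parents'
    id = Column(Integer, primary_key=True)
    children = relationship(Child, cascade='all, delete-orphan')

    def __init__(self, kinds):
        # build the predefined children up front; no event hook needed
        self.children = [Child(k) for k in kinds]

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(bind=engine)
session.add(Parent([1, 2, 3]))  # children go along via cascade
session.commit()
count = session.query(Child).count()
```

When the children depend on data set after construction (as with `form.bind` above), the same population logic can instead be called explicitly once that attribute is known.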
Re: [sqlalchemy] Relationship spanning multiple tables
Sorry to bother you again, but while adding some other relationships, like this one spanning five tables, which I got working at the first attempt:

    mapper(UserLayer, inherits=base_user_layer_mapper,
        polymorphic_identity=CLASS_LAYER,
        properties={
            ...
            'lca_transport': relationship(LCATransport,
                primaryjoin=and_(
                    user_layers.c.id_mat==lca_transport.c.id_mat,
                    user_layers.c.id_user_str==user_stratigraphies.c.id,
                    user_stratigraphies.c.id_prov==provinces.c.id,
                    provinces.c.id_reg==regions.c.id,
                    regions.c.id_mr==lca_transport.c.id_mr),
                foreign_keys=(user_layers.c.id_mat,
                              user_layers.c.id_user_str)),
        })

I still can't get `transm_limit` to work here: http://pastebin.com/mjfgPrcB

As a side note, if I add `viewonly=True` to the relationship above, I get this error:

    sqlalchemy.exc.ArgumentError: Remote column 'user_stratigraphies.id'
    is not part of mapping Mapper|LCATransport|lca_transport. Specify
    remote_side argument to indicate which column lazy join condition
    should bind.

Thanks for your support
neurino
Re: [sqlalchemy] Relationship spanning multiple tables
Sorry Michael, there was a typo in my code, due to some wrong copy/paste or search/replace, I'm afraid. The error I reported comes up when mapping to `TransmLimit`, not `SurfaceRes` (which is the one already mapped above). I edited the pastebin: http://pastebin.com/mjfgPrcB, now with:

    'transm_limit': relationship(TransmLimit,
        single_parent=True,
        uselist=False,
        primaryjoin=and_(
            user_stratigraphies.c.id_prov==provinces.c.id,
            provinces.c.id_cz==transm_limits.c.id_cz,
            user_stratigraphies.c.id_str==stratigraphies.c.id,
            stratigraphies.c.id_tec==tech_elements_classes.c.id,
            tech_elements_classes.c.id_tu==transm_limits.c.id_tu,
        ),
        foreign_keys=(user_stratigraphies.c.id_prov,),
    ),

I get the error I reported:

    sqlalchemy.exc.ArgumentError: Could not locate any
    foreign-key-equated, locally mapped column pairs for primaryjoin
    condition 'user_stratigraphies.id_prov = provinces.id AND
    provinces.id_cz = transm_limits.id_cz AND
    user_stratigraphies.id_str = stratigraphies.id AND
    stratigraphies.id_tec = tech_elements_classes.id AND
    tech_elements_classes.id_tu = transm_limits.id_tu' on relationship
    ustratatigraphy.transm_limit. For more relaxed rules on join
    conditions, the relationship may be marked as viewonly=True.

Also, there is the strange `viewonly=True` behavior that breaks the `UserStratigraphy.surface_res` relationship if uncommented.

Thanks for your patience.
neurino

On Mon, Aug 1, 2011 at 11:14 PM, Michael Bayer mike...@zzzcomputing.com wrote:

    'transm_limit': relationship(SurfaceRes,
        single_parent=True,
        #uselist=False,
        #primaryjoin=and_(
        #    user_stratigraphies.c.id_prov==provinces.c.id,
        #    provinces.c.id_cz==transm_limits.c.id_cz,
        #    user_stratigraphies.c.id_str==stratigraphies.c.id,
        #    stratigraphies.c.id_tec==tech_elements_classes.c.id,
        #    tech_elements_classes.c.id_tu==transm_limits.c.id_tu,
        #),

This fails because you aren't joining to the table to which SurfaceRes is mapped, surface_res.
[sqlalchemy] Relationship spanning multiple tables
I'm trying to get some relationships spanning multiple tables (4 or 5). While I got the 4-table one working at the first attempt (I was surprised I could...), I can't get the 5-table one to work, even though the code is almost the same. Moreover, with the first relationship, if I add `viewonly=True`, initialization fails. I just need these relationships to get (not even set) some values with the convenience of SA attributes. I published a working example on pastebin: http://pastebin.com/RsZ6GCRq

I hope someone can sort this out, thank you.

Greetings
neurino
Re: [sqlalchemy] orm mapper polymorphic_identity as collection
Thanks Michael, until now I was using a plain row[u'layers_id_type'], found by inspecting row.keys() with the debugger (quick and dirty, I know...); now I've switched to your more orthodox row[mytable.c.id_type] (ahem...). I shouldn't have to deal with subqueries, so it can stay this way for the time being.

I'm now reading in the docs, for the first time, what TypeDecorator is for, and I understand your suggestion (approximately; please allow me some time to dive into it). My doubt at the moment is: using TypeDecorator, will I still be able to use `mytable.c.id_type` as a foreign key to the types table (holding the type name and so on), or not? Probably, reading the docs more carefully, I'd get the answer by myself, so never mind... I'll post results when I reach some good point.

Thanks again
neurino

On Fri, Jul 29, 2011 at 4:12 PM, Michael Bayer mike...@zzzcomputing.com wrote:

Yeah, that's one of the issues: those old extension interfaces were made before we had the aliased row in place, which happens with the more elaborate subquery/join scenarios. For the simple case, you'd run the Column object into the row:

    row[mytable.c.type]

If you start dealing with subqueries and such, you might have to make it look for a column that proxies the type column, which is entirely a workaround for the bad interface:

    for key in row:
        if key.shares_lineage(mytable.c.type):
            value = row[key]
            break

But even that isn't going to work if you have two different Layer objects in the same result row. Another workaround might be to establish the type of the mytable.c.type column using a TypeDecorator, where process_result_value() performs the rules you're looking for, returning "is air" or not. Then you'd use regular polymorphic_on. Maybe give that a try?
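The TypeDecorator idea might look like the sketch below. The table layout, the AIR_ID_TYPE value, and the 'air'/'other' labels are assumptions for illustration: process_result_value() collapses the many id_type values into two discriminator values at load time, and a regular polymorphic_on mapping does the rest. No process_bind_param is defined here, so the sample rows are inserted through Core rather than the ORM.

```python
# Sketch: a TypeDecorator as single-table-inheritance discriminator,
# mapping one id_type value to Air and everything else to Layer.
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.types import TypeDecorator
try:
    from sqlalchemy.orm import declarative_base
except ImportError:  # older SQLAlchemy versions
    from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()
AIR_ID_TYPE = 1  # hypothetical id_type value marking an "air" layer

class AirDiscriminator(TypeDecorator):
    """Collapse the integer id_type into two identities on load."""
    impl = Integer
    cache_ok = True

    def process_result_value(self, value, dialect):
        return 'air' if value == AIR_ID_TYPE else 'other'

class Layer(Base):
    __tablename__ = 'layers'
    id = Column(Integer, primary_key=True)
    id_type = Column(AirDiscriminator)
    __mapper_args__ = {'polymorphic_on': id_type,
                       'polymorphic_identity': 'other'}

class Air(Layer):
    __mapper_args__ = {'polymorphic_identity': 'air'}

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
with engine.begin() as conn:  # insert via Core, bypassing the ORM
    conn.execute(Layer.__table__.insert(),
                 [{'id': 1, 'id_type': AIR_ID_TYPE},
                  {'id': 2, 'id_type': 7}])

session = Session(bind=engine)
kinds = [type(o).__name__
         for o in session.query(Layer).order_by(Layer.id)]
```

Note that the loaded instances see the processed value ('air'/'other') on `id_type`, so keeping the raw integer usable as a foreign key would need extra handling (e.g. a separate plain column).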
[sqlalchemy] orm mapper polymorphic_identity as collection
I need Single Table Inheritance where the "type" column already exists and is a foreign key, `id_type`, to a `types` table. The catch is that I only need to map to two different classes:

- Foo for `polymorphic_identity=FOO_ID_TYPE`
- Bar for all other `id_type`s

Is there a way I can accomplish this with just two mappings? Any advice?

Thanks for your support
neurino
Re: [sqlalchemy] orm mapper polymorphic_identity as collection
Thanks Michael, my need is quite simple, with no complex querying involved. My class represents a Layer and, while almost all layer types (concrete, wood, bricks, etc.) act the same, only one (air) acts in a completely different way. By "act" I mean performing calculations on float attributes, nothing more. So I can simply branch in every calc function:

    if self.type == air:
        # air calcs
    else:
        # all other calcs

or, more elegantly, use a Layer subclass named Air. I can't create subclasses for all the other layer types since I don't know them in advance. As far as I understand, I have to go with the first solution for the moment, right?

On Thu, Jul 28, 2011 at 11:38 PM, Michael Bayer mike...@zzzcomputing.com wrote:

column_property()
Re: [sqlalchemy] orm mapper polymorphic_identity as collection
I tried the create_instance event and it fires; now having:

    def see_what_type(mapper, context, row, class_):
        if **is_air**:
            return Air()
        else:
            return EXT_CONTINUE

    def initialize_sql(engine):
        ...
        layer_mapper = mapper(Layer, layers)
        mapper(Air, inherits=layer_mapper)
        ...
        event.listen(Layer, 'create_instance', see_what_type,
                     retval=True)

and setting **is_air** to True, I get Air instances, with attributes and relationships filled in, when querying for Layer. I don't know about other caveats... Now I have to find a robust way to check id_type (one of the `row` items) inside see_what_type.

Any advice? Thanks for your support

On Fri, Jul 29, 2011 at 12:15 AM, Michael Bayer mike...@zzzcomputing.com wrote:

Probably. There's an old event for this called create_instance [1] that was meant for this a long time ago, but I don't know what kinds of caveats it has with modern usage. polymorphic_on=callable is ticket #1131 [2]; note it is very old and the code examples there are out of date.

1: http://www.sqlalchemy.org/docs/orm/events.html#sqlalchemy.orm.events.MapperEvents.create_instance
2: http://www.sqlalchemy.org/trac/ticket/1131
[sqlalchemy] Help with tricky relationship
I have a (deep) tree structure:

    main
      area
        category
          criteria
            inputs

all tables linked by one-to-many relationships, except criteria - inputs, which is many-to-many. I need to set up a relationship on `main` giving the list of all distinct inputs of its sub-sub-sub-criteria, as in this query:

    SELECT DISTINCT inputs.*
    FROM main, areas, categories, criteria, inputs, criterion_input
    WHERE main.id = areas.id_main
      AND areas.id = categories.id_area
      AND categories.id = criteria.id_cat
      AND criteria.id = criterion_input.id_crit
      AND inputs.id = criterion_input.id_input

but I can't even reach criteria... :( In fact, with:

    orm.mapper(Main, main, properties={
        'criteria': orm.relationship(Criterion,
            foreign_keys=[areas.c.id_main, categories.c.id_area,
                          criteria.c.id_cat],
            primaryjoin=and_(main.c.id==areas.c.id_main,
                             areas.c.id==categories.c.id_area,
                             categories.c.id==criteria.c.id_cat),
            viewonly=True)
    })

I get this error:

    ArgumentError: Local column 'areas.id' is not part of mapping
    Mapper|Main|main. Specify remote_side argument to indicate which
    column lazy join condition should compare against.

Thank you for your support
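One possible approach, sketched below as an assumption rather than an answer from the thread, is a viewonly relationship whose "secondary" is a join of all the intermediate tables (SQLAlchemy's docs call this a composite secondary join). Table and column names follow the question; note that plain collection loading does not apply DISTINCT, so an input reachable through several criteria of the same main would appear more than once.

```python
# Sketch: Main.inputs spanning areas -> categories -> criteria ->
# criterion_input via a composite "secondary" join, viewonly.
from sqlalchemy import (create_engine, Column, Integer, ForeignKey,
                        Table)
try:
    from sqlalchemy.orm import declarative_base
except ImportError:  # older SQLAlchemy versions
    from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, Session

Base = declarative_base()

class Main(Base):
    __tablename__ = 'main'
    id = Column(Integer, primary_key=True)

class Area(Base):
    __tablename__ = 'areas'
    id = Column(Integer, primary_key=True)
    id_main = Column(Integer, ForeignKey('main.id'))

class Category(Base):
    __tablename__ = 'categories'
    id = Column(Integer, primary_key=True)
    id_area = Column(Integer, ForeignKey('areas.id'))

class Criterion(Base):
    __tablename__ = 'criteria'
    id = Column(Integer, primary_key=True)
    id_cat = Column(Integer, ForeignKey('categories.id'))

class Input(Base):
    __tablename__ = 'inputs'
    id = Column(Integer, primary_key=True)

criterion_input = Table('criterion_input', Base.metadata,
    Column('id_crit', ForeignKey('criteria.id'), primary_key=True),
    Column('id_input', ForeignKey('inputs.id'), primary_key=True))

# chain the intermediate tables into one selectable used as "secondary"
secondary = (Area.__table__
    .join(Category.__table__, Category.id_area == Area.id)
    .join(Criterion.__table__, Criterion.id_cat == Category.id)
    .join(criterion_input, criterion_input.c.id_crit == Criterion.id))

Main.inputs = relationship(Input, secondary=secondary,
    primaryjoin=Main.id == Area.id_main,
    secondaryjoin=Input.id == criterion_input.c.id_input,
    viewonly=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(bind=engine)
session.add_all([Main(id=1), Area(id=1, id_main=1),
                 Category(id=1, id_area=1), Criterion(id=1, id_cat=1),
                 Input(id=1), Input(id=2)])
session.flush()
session.execute(criterion_input.insert(),
                [{'id_crit': 1, 'id_input': 1},
                 {'id_crit': 1, 'id_input': 2}])
session.commit()

ids = sorted(i.id for i in session.query(Main).first().inputs)
```

viewonly=True is essential here: the relationship is read-only, matching the "get, not set" requirement.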
[sqlalchemy] Composite Foreign Key with ondelete='CASCADE' does not work on mysql
I have a composite primary key in one table and a corresponding composite foreign key in another:

    sensors = Table('sensors', metadata,
        Column('id_cu', Integer,
               ForeignKey('ctrl_units.id', ondelete='CASCADE'),
               primary_key=True, autoincrement=False),
        Column('id_meas', Integer, primary_key=True,
               autoincrement=False),
        ...
    )

    view_opts = Table('view_opts', metadata,
        Column('id', Integer, primary_key=True),
        Column('id_view', Integer,
               ForeignKey('views.id', ondelete='CASCADE'),
               nullable=False),
        Column('id_cu', Integer, ForeignKey('ctrl_units.id'),
               nullable=False),
        Column('id_meas', Integer, nullable=False),
        ForeignKeyConstraint(('id_cu', 'id_meas'),
                             ('sensors.id_cu', 'sensors.id_meas'),
                             ondelete='CASCADE'),
        ...
    )

mapped like this:

    orm.mapper(Sensor, sensors, properties={
        'view_opts': orm.relationship(ViewOpt, backref='sensor',
            cascade='all, delete-orphan', passive_deletes=True,
            single_parent=True)
    })

Now, when I delete a row from sensors, the related view_opts rows are not removed. I can't tell whether this depends on the DDL, is a MySQL bug, is something sqlalchemy-related, or what. I set `passive_deletes=True` to let MySQL's ON DELETE CASCADE take care of it more quickly.

Any help appreciated, thanks for your support
neurino
[sqlalchemy] Re: Composite Foreign Key with ondelete='CASCADE' does not work on mysql
Sorry, I did not specify: yes, it's InnoDB. So do I HAVE to put `mysql_engine='InnoDB'` in every Table using ON DELETE CASCADE? Is there a link to the docs with some info on this?

Thanks for your support

On May 30, 7:04 pm, virhilo virh...@gmail.com wrote:

You need to use the InnoDB engine, so your table definitions will look like:

    sensors = Table('sensors', metadata,
        ...
        mysql_engine='InnoDB'
    )

    view_opts = Table('view_opts', metadata,
        ...
        mysql_engine='InnoDB'
    )
[sqlalchemy] setting passive_deletes=True on an already existing database
This is (part of) my mapping:

    data = Table('data', metadata,
        Column('id', Integer, primary_key=True),
        Column('id_acq', Integer, ForeignKey('acquisitions.id'),
               index=True, nullable=False),
        Column('value', Float, nullable=True),
    )

    acquisitions = Table('acquisitions', metadata,
        Column('id', Integer, primary_key=True),
        Column('datetime', DateTime, index=True, nullable=False),
    )

    orm.mapper(Data, data)
    orm.mapper(Acquisition, acquisitions, properties={
        'data': orm.relationship(Data, backref='acquisition',
            cascade='all, delete-orphan', single_parent=True)
    })

Now that my app is deployed using MySQL (InnoDB), I need to add a feature for deleting multiple acquisitions. As far as I understand, I need to add `passive_deletes=True` to the relationship above, so I can perform bulk deletions without getting IntegrityError: 'Cannot delete or update a parent row: a foreign key constraint fails', using:

    Session.query(model.Acquisition).filter(...) \
        .delete(synchronize_session=False)

My questions are:

* how do I apply the change to the foreign key 'id_acq' on my current MySQL db? Is there an SA way, or do I simply have to use mysql tools?
* when I eventually run `paster setup-app` again on a new db, will the acquisitions table be built with ON DELETE CASCADE by default?
* since MySQL (InnoDB) supports ON DELETE CASCADE, is it recommended to switch on `passive_deletes=True` for all my relationships to get better performance?

Thanks for your support
neurino
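The interaction between `ondelete='CASCADE'` and `passive_deletes=True` can be sketched as follows. SQLite in-memory stands in for MySQL/InnoDB here (hence the PRAGMA, which InnoDB does not need), and the declarative classes mirror the tables above: the FK carries ON DELETE CASCADE in the DDL, and passive_deletes=True tells the ORM not to load or delete the children itself, leaving the work to the database.

```python
# Sketch: DB-side ON DELETE CASCADE plus passive_deletes=True,
# so deleting a parent never loads its children into Python.
from sqlalchemy import (create_engine, event, Column, Integer, Float,
                        ForeignKey)
try:
    from sqlalchemy.orm import declarative_base
except ImportError:  # older SQLAlchemy versions
    from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, Session

Base = declarative_base()

class Acquisition(Base):
    __tablename__ = 'acquisitions'
    id = Column(Integer, primary_key=True)
    data = relationship('Data', cascade='all, delete-orphan',
                        passive_deletes=True)

class Data(Base):
    __tablename__ = 'data'
    id = Column(Integer, primary_key=True)
    id_acq = Column(Integer,
                    ForeignKey('acquisitions.id', ondelete='CASCADE'),
                    nullable=False)
    value = Column(Float)

engine = create_engine('sqlite://')

@event.listens_for(engine, 'connect')
def _enable_fks(dbapi_conn, record):
    # SQLite enforces FK constraints (and thus ON DELETE CASCADE)
    # only with this pragma; MySQL/InnoDB needs no equivalent
    dbapi_conn.execute('PRAGMA foreign_keys=ON')

Base.metadata.create_all(engine)
session = Session(bind=engine)
acq = Acquisition(id=1)
acq.data = [Data(id=i, value=float(i)) for i in (1, 2)]
session.add(acq)
session.commit()

session.delete(acq)   # only DELETE FROM acquisitions is emitted
session.commit()
remaining = session.query(Data).count()
```

On an existing MySQL schema, the FK itself has to be altered to add ON DELETE CASCADE (an `ALTER TABLE` outside SQLAlchemy's metadata creation); `passive_deletes=True` alone only changes the ORM's flush behavior.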
[sqlalchemy] Re: spanning relationship on 3 tables fails
I think it was missing a key in foreign_keys: mapper(Sensor, sensors, properties={ 'data': relationship(Data, backref='sensor', foreign_keys=[data.c.id_meas, acquisitions.c.id_cu], primaryjoin=and_(sensors.c.id_meas==data.c.id_meas, data.c.id_acq==acquisitions.c.id, acquisitions.c.id_cu==sensors.c.id_cu), cascade='all, delete-orphan', single_parent=True) }) Now it seems work as expected, thanks. On Feb 28, 1:03 pm, neurino neur...@gmail.com wrote: Sorry if I resume this after two months but I think there's a bug in cascade deletion of the relationship you suggested me: mapper(Sensor, sensors, properties={ 'data': relationship(Data, backref='sensor', foreign_keys=[data.c.id_meas], primaryjoin=and_(sensors.c.id_meas==data.c.id_meas, data.c.id_acq==acquisitions.c.id, acquisitions.c.id_cu==sensors.c.id_cu), cascade='all, delete-orphan', single_parent=True) }) since, on a cascade delete of a Sensor sqlalchemy issues this query: SELECT data.id AS data_id, data.id_acq AS data_id_acq, data.id_meas AS data_id_meas, data.id_elab AS data_id_elab, data.value AS data_value FROM data, acquisitions, sensors WHERE ? = data.id_meas AND data.id_acq = acquisitions.id AND acquisitions.id_cu = sensors.id_cu (1,) DELETE FROM data WHERE data.id = ? that's going to delete all data with id_meas = 1 while it should be SELECT data.id AS data_id, data.id_acq AS data_id_acq, data.id_meas AS data_id_meas, data.id_elab AS data_id_elab, data.value AS data_value FROM data, acquisitions, sensors WHERE ? = data.id_meas AND ? = acquisitions.id_cu AND data.id_acq = acquisitions.id AND acquisitions.id_cu = sensors.id_cu (1, 3) DELETE FROM data WHERE data.id = ? with the `AND ? = acquisitions.id_cu` part added because Sensor has a composite primary key (id_cu, id_meas). I know it's a rare situation so I have no problems in removing cascade and doing deletions on my own but I'd like to be sure it's not a fault of mine but a bug. Thanks for your support. 
On Dec 30 2010, 5:45 pm, Michael Bayer mike...@zzzcomputing.com wrote:

this is again my error messages not telling the whole story; I'll see if I can get the term foreign_keys back in there:

    mapper(Sensor, sensors, properties={
        'data': relationship(Data, backref='sensor',
            foreign_keys=[data.c.id_meas],
            primaryjoin=and_(
                sensors.c.id_meas==data.c.id_meas,
                data.c.id_acq==acquisitions.c.id,
                acquisitions.c.id_cu==sensors.c.id_cu),
            cascade='all, delete-orphan', single_parent=True)
    })

or

    mapper(Sensor, sensors, properties={
        'data': relationship(Data, backref='sensor',
            foreign_keys=[sensors.c.id_meas],
            primaryjoin=and_(
                sensors.c.id_meas==data.c.id_meas,
                data.c.id_acq==acquisitions.c.id,
                acquisitions.c.id_cu==sensors.c.id_cu),
            cascade='all, delete-orphan', single_parent=True)
    })

depending on whether this is one-to-many or many-to-one. A relationship like this is really better off as viewonly=True, since populating it is not going to add rows to the acquisitions table.

On Dec 30, 2010, at 10:15 AM, neurino wrote:

    data = Table('data', metadata,
        Column('id', Integer, primary_key=True),
        Column('id_acq', Integer, ForeignKey('acquisitions.id'), nullable=False),
        Column('id_meas', Integer, nullable=False),
        Column('value', Float, nullable=True),
    )

    acquisitions = Table('acquisitions', metadata,
        Column('id', Integer, primary_key=True),
        Column('id_cu', Integer, ForeignKey('ctrl_units.id'), nullable=False),
        Column('datetime', DateTime, nullable=False),
    )

    sensors = Table('sensors', metadata,
        Column('id_cu', Integer, ForeignKey('ctrl_units.id'),
               primary_key=True, autoincrement=False),
        Column('id_meas', Integer, primary_key=True, autoincrement=False),
        Column('name', Unicode(20), nullable=False),
        Column('desc', Unicode(40), nullable=False),
    )

    ctrl_units = Table('ctrl_units', metadata,
        Column('id', Integer, primary_key=True, autoincrement=False),
        Column('desc', Unicode(40), nullable=False)
    )

and this mapping: ...
    orm.mapper(Sensor, sensors, properties={
        'data': orm.relationship(Data, backref='sensor',
            primaryjoin=and_(
                sensors.c.id_meas==data.c.id_meas,
                data.c.id_acq==acquisitions.c.id,
                acquisitions.c.id_cu==sensors.c.id_cu),
            cascade='all, delete-orphan', single_parent=True)
    })

...

--
You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com.
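[Editor's note] The over-deletion described above can be reproduced without SQLAlchemy at all. The following is a minimal sketch with hypothetical toy rows: it emulates the SELECT the cascade issues, once correlating on id_meas alone (the buggy behaviour) and once on the full composite key (id_meas, id_cu).

```python
# Toy rows standing in for the data / acquisitions tables (hypothetical values).
data = [
    {"id": 1, "id_acq": 10, "id_meas": 1},  # acquisition 10 belongs to cu 3
    {"id": 2, "id_acq": 20, "id_meas": 1},  # same id_meas, but cu 4
]
acquisitions = [
    {"id": 10, "id_cu": 3},
    {"id": 20, "id_cu": 4},
]

def matched_ids(id_meas, id_cu=None):
    """Emulate the SELECT the cascade issues: join data to acquisitions
    and filter; the id_cu half of the key is only applied when given."""
    acq_cu = {a["id"]: a["id_cu"] for a in acquisitions}
    return [d["id"] for d in data
            if d["id_meas"] == id_meas
            and (id_cu is None or acq_cu[d["id_acq"]] == id_cu)]

# Correlating on id_meas alone catches rows of *both* control units:
print(matched_ids(1))      # [1, 2]  -> over-deletes
# Adding the id_cu half of the composite key scopes it correctly:
print(matched_ids(1, 3))   # [1]
```

This is why adding acquisitions.c.id_cu to foreign_keys, so the second bind parameter is rendered, fixes the cascade.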
[sqlalchemy] Re: Operational Error raised by except_
I guess, since I learn it now, EXCEPT is not supported by MySQL... I guess I'll have to change my query altogether...

On Feb 22, 12:57 pm, neurino neur...@gmail.com wrote:

I have now problems with except_ in MySQL: the code that worked flawlessly in sqlite now causes an error, seemingly right after EXCEPT in the query:

    ProgrammingError: (ProgrammingError) (1064, "You have an error in your SQL syntax;
    check the manual that corresponds to your MySQL server version for the right syntax
    to use near 'SELECT sensors.id_cu AS sensors_id_cu, sensors.id_meas AS
    sensors_id_meas, senso' at line 3")

    'SELECT anon_1.sensors_id_cu AS anon_1_sensors_id_cu,
            anon_1.sensors_id_meas AS anon_1_sensors_id_meas,
            anon_1.sensors_id_elab AS anon_1_sensors_id_elab,
            anon_1.sensors_name AS anon_1_sensors_name,
            anon_1.sensors_desc AS anon_1_sensors_desc
     FROM (SELECT sensors.id_cu AS sensors_id_cu, sensors.id_meas AS sensors_id_meas,
                  sensors.id_elab AS sensors_id_elab, sensors.name AS sensors_name,
                  sensors.`desc` AS sensors_desc
           FROM sensors
           EXCEPT
           SELECT sensors.id_cu AS sensors_id_cu, sensors.id_meas AS sensors_id_meas,
                  sensors.id_elab AS sensors_id_elab, sensors.name AS sensors_name,
                  sensors.`desc` AS sensors_desc
           FROM sensors, view_opts
           WHERE sensors.id_cu = view_opts.id_cu
             AND sensors.id_meas = view_opts.id_meas
             AND view_opts.id_view = %s
             AND view_opts.id IS NOT NULL
           ORDER BY sensors.id_cu, sensors.id_meas) AS anon_1
     ORDER BY anon_1.sensors_id_cu, anon_1.sensors_id_meas'
    (1L,)

I'm using:

* sqlalchemy 0.6.6
* MySQLdb 1.2.3
* MySQL Ver 14.14 Distrib 5.1.41

Thanks for your support
Re: [sqlalchemy] Re: Operational Error raised by except_
Something like this:

    stmt = Session.query(model.ViewOpt.id_cu, model.ViewOpt.id_meas) \
        .filter(model.ViewOpt.id_view==1).subquery()

    query = Session.query(model.Sensor) \
        .outerjoin((stmt, and_(model.Sensor.id_cu==stmt.c.id_cu,
                               model.Sensor.id_meas==stmt.c.id_meas))) \
        .filter(and_(stmt.c.id_cu==None, stmt.c.id_meas==None))

Cheers

2011/2/22 neurino neur...@gmail.com:

I guess, since I learn it now, EXCEPT is not supported by MySQL... I guess I'll have to change my query altogether...
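[Editor's note] The outerjoin above is the classic anti-join: left-join the already-chosen keys and keep the rows that found no match. A self-contained sketch with plain sqlite3 and hypothetical toy tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sensors (id_cu INTEGER, id_meas INTEGER, name TEXT,
                          PRIMARY KEY (id_cu, id_meas));
    CREATE TABLE view_opts (id INTEGER PRIMARY KEY, id_view INTEGER,
                            id_cu INTEGER, id_meas INTEGER);
    INSERT INTO sensors VALUES (1, 1, 'a'), (1, 2, 'b'), (2, 1, 'c');
    -- view 1 already shows sensor (1, 1)
    INSERT INTO view_opts VALUES (1, 1, 1, 1);
""")

# LEFT OUTER JOIN against the chosen options; sensors with no match come
# back with NULLs, so the IS NULL filter keeps exactly the unchosen ones.
rows = con.execute("""
    SELECT s.id_cu, s.id_meas
    FROM sensors AS s
    LEFT OUTER JOIN (SELECT id_cu, id_meas FROM view_opts
                     WHERE id_view = 1) AS v
      ON s.id_cu = v.id_cu AND s.id_meas = v.id_meas
    WHERE v.id_cu IS NULL
    ORDER BY s.id_cu, s.id_meas
""").fetchall()
print(rows)  # [(1, 2), (2, 1)]
```

Unlike EXCEPT, this renders as a plain JOIN + WHERE, so it is portable to MySQL and it composes cleanly with any trailing ORDER BY.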
[sqlalchemy] VARCHAR requires a length when rendered on MySQL
I'm switching to MySQL from SQLite in my Pylons app and I get this error during setup-app:

    sqlalchemy.exc.InvalidRequestError: VARCHAR requires a length when rendered on MySQL

I noticed some tables had simply Unicode without a length, but even after adding a length to all Unicode columns the error keeps coming.

    CREATE TABLE views (
        id INTEGER NOT NULL AUTO_INCREMENT,
        name VARCHAR(40) NOT NULL,
        `desc` VARCHAR(80),
        PRIMARY KEY (id)
    )

    21-02-11 12:18:30,086 INFO [sqlalchemy.engine.base.Engine.0x...ff6c] [MainThread] ()
    21-02-11 12:18:30,170 INFO [sqlalchemy.engine.base.Engine.0x...ff6c] [MainThread] COMMIT
    Traceback (most recent call last):
      ...
      File "...python2.6/site-packages/SQLAlchemy-0.6.6-py2.6.egg/sqlalchemy/dialects/mysql/base.py",
        line 1520, in visit_VARCHAR
      raise exc.InvalidRequestError("VARCHAR requires a length when rendered on MySQL")
    sqlalchemy.exc.InvalidRequestError: VARCHAR requires a length when rendered on MySQL

Maybe it's the `desc` column that's nullable?

I'm using:

* sqlalchemy 0.6.6
* MySQLdb 1.2.3
* MySQL Ver 14.14 Distrib 5.1.41

Thanks for your support
[sqlalchemy] Re: VARCHAR requires a length when rendered on MySQL
Please ignore this issue: I searched for all Unicode columns but forgot I had also used a String for one column... It works flawlessly now, I apologize.

On Feb 21, 12:40 pm, neurino neur...@gmail.com wrote:

> I'm switching to MySQL from SQLite in my Pylons app and I get this error during setup-app:
>
>     sqlalchemy.exc.InvalidRequestError: VARCHAR requires a length when rendered on MySQL
[sqlalchemy] Re: Is it possible to know in advance elements that will be cascade-deleted?
With a bit of duck typing I came out with an easier solution (for my needs). The only risky thing is that I had to use the private _props dictionary:

    from sqlalchemy.orm import object_mapper

    for name, prop in object_mapper(item_to_be_deleted)._props.iteritems():
        if 'delete' in getattr(prop, 'cascade', ()):
            print name

Thank you again for your support

On Feb 3, 6:04 pm, Michael Bayer mike...@zzzcomputing.com wrote:

On Feb 3, 2011, at 11:58 AM, neurino wrote:
> Anyway, is there some way to stop the recursion after a given level (or just 1)? I'm looking at the cascade_iterator source; the halt_on parameter is unused, as far as I understand.

halt_on works, you use that, it's a callable.
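[Editor's note] For readers wondering what a depth-limited cascade walk looks like: the following is a sketch on a hypothetical toy object graph (Node standing in for CtrlUnit -> Acquisition -> Data), not real mapper code, showing the effect the poster wanted from halt_on — stop descending after one level so thousands of Data rows are never loaded just to print a warning.

```python
# Toy object graph; real code would walk mapper relationships instead.
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def cascade(obj, max_depth=None, _depth=1):
    """Yield objects that would be cascade-deleted, optionally stopping
    after max_depth levels (the role halt_on plays in cascade_iterator)."""
    for child in obj.children:
        yield child
        if max_depth is None or _depth < max_depth:
            yield from cascade(child, max_depth, _depth + 1)

cu = Node("cu", [Node("acq1", [Node("d1"), Node("d2")]), Node("acq2")])
print([n.name for n in cascade(cu)])               # ['acq1', 'd1', 'd2', 'acq2']
print([n.name for n in cascade(cu, max_depth=1)])  # ['acq1', 'acq2']
```

With max_depth=1 only the direct children are reported, which is enough for a "these items will also be deleted" warning.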
[sqlalchemy] Is it possible to know in advance elements that will be cascade-deleted?
Can I show the user a warning like "if you delete this item, also [list of other items] will be removed", whichever the item is?

I was using something like this:

    import inspect

    def get_items(item_to_be_deleted):
        """get_items(item_to_be_deleted) -> [(child_item_name, number_of_child_items)]"""
        return [(name, len(inst))
                for (name, inst) in inspect.getmembers(item_to_be_deleted)
                if isinstance(inst, orm.collections.InstrumentedList)]

and it worked until all relationships had cascade delete, but now I have one without it and it shows up in the list too, while it shouldn't...

Any tips? Thank you for your support

neurino
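[Editor's note] The pattern above — scan an instance's attributes and keep those of a marker collection type — works on any object. A self-contained sketch with a hypothetical TrackedList standing in for InstrumentedList:

```python
import inspect

# Stand-in for SQLAlchemy's InstrumentedList in this sketch.
class TrackedList(list):
    pass

class Parent:
    def __init__(self):
        self.acquisitions = TrackedList([1, 2, 3])
        self.notes = TrackedList(['a'])
        self.name = "cu-1"   # plain attribute, should be skipped

def get_items(obj):
    """Return [(collection_name, item_count)] for every TrackedList attribute."""
    return [(name, len(val))
            for name, val in inspect.getmembers(obj)
            if isinstance(val, TrackedList)]

print(get_items(Parent()))  # [('acquisitions', 3), ('notes', 1)]
```

The limitation the poster hit follows directly: the type check cannot know whether a given collection's relationship actually has delete cascade configured, which is why inspecting the mapper (as in the replies above) is the better route.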
[sqlalchemy] Re: Is it possible to know in advance elements that will be cascade-deleted?
That is great! Just for future readers, I fixed the imports:

    from sqlalchemy.orm import object_mapper
    from sqlalchemy.orm.attributes import instance_state

    m = object_mapper(item_to_be_deleted)
    for rec in m.cascade_iterator('delete', instance_state(item_to_be_deleted)):
        obj = rec[0]
        print "item will be deleted!", obj

Anyway, is there some way to stop the recursion after a given level (or just 1)? For me, deleting one ctrl_unit means deleting hundreds of `Acquisition`s with thousands of `Data` each, which means a **lot** of queries, and I could assume the user is smart enough to know that if he deletes an Acquisition he deletes its data too...

I'm looking at the cascade_iterator source; I could adapt it directly in my code, since the halt_on parameter is unused, as far as I understand.

Thanks for your support

neurino

On Feb 3, 5:19 pm, Michael Bayer mike...@zzzcomputing.com wrote:

you could use the mapper's cascade function:

    from sqlalchemy.orm import object_mapper, instance_state

    m = object_mapper(item_to_be_deleted)
    for rec in m.cascade_iterator('delete', instance_state(item_to_be_deleted)):
        obj = rec[0]
        print "item will be deleted!", obj
[sqlalchemy] Re: Operational Error raised by except_
Thanks Michael, just for following readers I point out that the ORDER BY clause causing the OperationalError is the one coming *before* the EXCEPT, so I had to add .order_by(None) to the first query. Now it looks like:

    Session.query(model.Sensor) \
        .order_by(None) \
        .except_(
            Session.query(model.Sensor) \
                .filter(model.Sensor.id_cu==model.ViewOpt.id_cu) \
                .filter(model.Sensor.id_meas==model.ViewOpt.id_meas) \
                .filter(model.ViewOpt.id_view==1)
        )

and it works perfectly, thanks again!

Cheers

neurino

On Jan 12, 5:28 pm, Michael Bayer mike...@zzzcomputing.com wrote:

Oh you know what, I completely forgot the best solution. It *is* documented on query.order_by() though, which is an argument for checking! Pass None to query.order_by(): that disables all order_bys for that query. So go back to your except_() and use except_(q.order_by(None)).
[sqlalchemy] Operational Error raised by except_
I have this model: ``I organize views with many view_options, each one showing a sensor. A sensor can appear just once per view.``

    sensors = Table('sensors', metadata,
        Column('id_cu', Integer, ForeignKey('ctrl_units.id'),
               primary_key=True, autoincrement=False),
        Column('id_meas', Integer, primary_key=True, autoincrement=False),
        Column('id_elab', Integer, nullable=False),
        Column('name', Unicode(40), nullable=False),
        Column('desc', Unicode(80), nullable=True),
    )

    ctrl_units = Table('ctrl_units', metadata,
        Column('id', Integer, primary_key=True, autoincrement=False),
        Column('name', Unicode(40), nullable=False)
    )

    views = Table('views', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', Unicode(40), nullable=False),
        Column('desc', Unicode(80), nullable=True),
    )

    view_opts = Table('view_opts', metadata,
        Column('id', Integer, primary_key=True),
        Column('id_view', Integer, ForeignKey('views.id'), nullable=False),
        Column('id_cu', Integer, ForeignKey('ctrl_units.id'), nullable=False),
        Column('id_meas', Integer, nullable=False),
        Column('ord', Integer, nullable=False),
        ForeignKeyConstraint(('id_cu', 'id_meas'),
                             ('sensors.id_cu', 'sensors.id_meas')),
        # sensor can appear just once per view
        UniqueConstraint('id_view', 'id_cu', 'id_meas'),
    )

Now I let the user add view_options, letting him select the sensor.
I'd like to show him only the sensors not already selected in other options of the same parent view, so I tried to use except_ this way:

    q = Session.query(model.Sensor) \
        .except_(
            Session.query(model.Sensor) \
                .filter(model.Sensor.id_cu==model.ViewOpt.id_cu) \
                .filter(model.Sensor.id_meas==model.ViewOpt.id_meas) \
                .filter(model.ViewOpt.id_view==1)
        )

The Sensor mapping has an order_by:

    orm.mapper(Sensor, sensors, order_by=[sensors.c.id_cu, sensors.c.id_meas])

I get this SQL and this error, probably due to the mapping order_by on Sensor:

    (OperationalError) ORDER BY clause should come after EXCEPT not before

    u'SELECT anon_1.sensors_id_cu AS anon_1_sensors_id_cu,
             anon_1.sensors_id_meas AS anon_1_sensors_id_meas,
             anon_1.sensors_id_elab AS anon_1_sensors_id_elab,
             anon_1.sensors_name AS anon_1_sensors_name,
             anon_1.sensors_desc AS anon_1_sensors_desc
      FROM (SELECT sensors.id_cu AS sensors_id_cu, sensors.id_meas AS sensors_id_meas,
                   sensors.id_elab AS sensors_id_elab, sensors.name AS sensors_name,
                   sensors.desc AS sensors_desc
            FROM sensors
            ORDER BY sensors.id_cu, sensors.id_meas
            EXCEPT
            SELECT sensors.id_cu AS sensors_id_cu, sensors.id_meas AS sensors_id_meas,
                   sensors.id_elab AS sensors_id_elab, sensors.name AS sensors_name,
                   sensors.desc AS sensors_desc
            FROM sensors, view_opts
            WHERE sensors.id_cu = view_opts.id_cu
              AND sensors.id_meas = view_opts.id_meas
              AND view_opts.id_view = ?
            ORDER BY sensors.id_cu, sensors.id_meas) AS anon_1
      ORDER BY anon_1.sensors_id_cu, anon_1.sensors_id_meas'

Is this supposed to be a bug? Any alternative (and maybe simpler :) ) solution to get what I need?

I'm using SqlAlchemy 0.6.4
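[Editor's note] The database-side restriction reported here is easy to reproduce with the stdlib sqlite3 module and a hypothetical one-column table: an ORDER BY on the left arm of a compound SELECT is rejected with exactly this kind of error, while a single trailing ORDER BY over the whole compound is accepted.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# An ORDER BY before the EXCEPT is rejected, as in the ORM query above:
err = None
try:
    con.execute("SELECT x FROM t ORDER BY x EXCEPT SELECT x FROM t WHERE x = 2")
except sqlite3.OperationalError as e:
    err = str(e)
print(err)

# A single trailing ORDER BY applies to the whole compound and is accepted:
rows = con.execute(
    "SELECT x FROM t EXCEPT SELECT x FROM t WHERE x = 2 ORDER BY x"
).fetchall()
print(rows)  # [(1,), (3,)]
```

This is why disabling the mapper-level ordering on the first query (the .order_by(None) fix discussed later in the thread) resolves the error: SQLAlchemy then emits only the trailing ORDER BY.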
[sqlalchemy] Re: Operational Error raised by except_
I need always the same order_by in all app and it could be subject of modification and / or integration in the near future so which better place than mapper to define it once instead of any time I do a query? Anyway do you think there are alternate paths to get `all sensors but already choosen` which are order_by compatible? Thanks for your support On Jan 12, 2:38 pm, Michael Bayer mike...@zzzcomputing.com wrote: On Jan 12, 2011, at 7:28 AM, neurino wrote: I have this model: ``I organize views with many view_options each one showing a sensor. A sensor can appear just once per view.`` sensors = Table('sensors', metadata, Column('id_cu', Integer, ForeignKey('ctrl_units.id'), primary_key=True, autoincrement=False), Column('id_meas', Integer, primary_key=True, autoincrement=False), Column('id_elab', Integer, nullable=False), Column('name', Unicode(40), nullable=False), Column('desc', Unicode(80), nullable=True), ) ctrl_units = Table('ctrl_units', metadata, Column('id', Integer, primary_key=True, autoincrement=False), Column('name', Unicode(40), nullable=False) ) views = Table('views', metadata, Column('id', Integer, primary_key=True), Column('name', Unicode(40), nullable=False), Column('desc', Unicode(80), nullable=True), ) view_opts = Table('view_opts', metadata, Column('id', Integer, primary_key=True), Column('id_view', Integer, ForeignKey('views.id'), nullable=False), Column('id_cu', Integer, ForeignKey('ctrl_units.id'), nullable=False), Column('id_meas', Integer, nullable=False), Column('ord', Integer, nullable=False), ForeignKeyConstraint(('id_cu', 'id_meas'), ('sensors.id_cu', 'sensors.id_meas')), #sensor can appear just once per view UniqueConstraint('id_view', 'id_cu', 'id_meas'), ) Now I let the user add view_options letting him select the sensor. 
I'd like to show him only the sensors not already selected in other options of the same parent view so I tried to use except_ this way: q = Session.query(model.Sensor) \ .except_( Session.query(model.Sensor) \ .filter(model.Sensor.id_cu==model.ViewOpt.id_cu) \ .filter(model.Sensor.id_meas==model.ViewOpt.id_meas) \ .filter(model.ViewOpt.id_view==1) ) Sensor mapping has a order_by: orm.mapper(Sensor, sensors, order_by=[sensors.c.id_cu, sensors.c.id_meas ]) I get this SQL and this error, probably due to mapping order_by in Sensor: (OperationalError) ORDER BY clause should come after EXCEPT not before u'SELECT anon_1.sensors_id_cu AS anon_1_sensors_id_cu, anon_1.sensors_id_meas AS anon_1_sensors_id_meas, anon_1.sensors_id_elab AS anon_1_sensors_id_elab, anon_1.sensors_name AS anon_1_sensors_name, anon_1.sensors_desc AS anon_1_sensors_desc FROM (SELECT sensors.id_cu AS sensors_id_cu, sensors.id_meas AS sensors_id_meas, sensors.id_elab AS sensors_id_elab, sensors.name AS sensors_name, sensors.desc AS sensors_desc FROM sensors ORDER BY sensors.id_cu, sensors.id_meas EXCEPT SELECT sensors.id_cu AS sensors_id_cu, sensors.id_meas AS sensors_id_meas, sensors.id_elab AS sensors_id_elab, sensors.name AS sensors_name, sensors.desc AS sensors_desc FROM sensors, view_opts WHERE sensors.id_cu = view_opts.id_cu AND sensors.id_meas = view_opts.id_meas AND view_opts.id_view = ? ORDER BY sensors.id_cu, sensors.id_meas) AS anon_1 ORDER BY anon_1.sensors_id_cu, anon_1.sensors_id_meas' is this supposed to be a bug? Any alternative solution (and maybe simpler :) ) to get what I need? I suppose its a bug, though I'm not a huge fan of order_by on mapper though, so my recommendation would be to not rely upon that. A solution in SQLA would be if some flag were passed through to not render built-in order bys. I've added 2022 targeted for 0.7.xx for that. -- You received this message because you are subscribed to the Google Groups sqlalchemy group. 
To post to this group, send email to sqlalch...@googlegroups.com. To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
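A minimal sketch of the EXCEPT / ORDER BY placement rule discussed above, using the stdlib sqlite3 module; the two-column schema and sample rows are invented for illustration only:

```python
import sqlite3

# Made-up stand-ins for the "sensors" / "view_opts" tables from the thread.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sensors (id_cu INTEGER, id_meas INTEGER);
    INSERT INTO sensors VALUES (1, 10), (1, 20), (2, 10);
    CREATE TABLE view_opts (id_view INTEGER, id_cu INTEGER, id_meas INTEGER);
    INSERT INTO view_opts VALUES (1, 1, 10);
""")

# An ORDER BY inside the first arm of a compound SELECT is rejected,
# which is exactly the error the mapper-level order_by produced:
failed = False
try:
    conn.execute("""
        SELECT id_cu, id_meas FROM sensors ORDER BY id_cu, id_meas
        EXCEPT
        SELECT id_cu, id_meas FROM view_opts WHERE id_view = 1
    """)
except sqlite3.OperationalError:
    failed = True

# Moving the ORDER BY after the whole compound statement works:
rows = conn.execute("""
    SELECT id_cu, id_meas FROM sensors
    EXCEPT
    SELECT id_cu, id_meas FROM view_opts WHERE id_view = 1
    ORDER BY id_cu, id_meas
""").fetchall()
print(failed, rows)
```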
[sqlalchemy] Re: Operational Error raised by except_
Well, as I wrote, the ordering involves everything, including form creation with FormAlchemy (building a select where all sensors are ordered that way, etc.); anyway, I understand your point of view.

quickest is a where sensor id not in (query), as a simple WHERE clause

The problem comes when the Sensor primary key is composite (id_cu + id_meas)... Good ol' Python comes in handy anyway:

>>> all = Session.query(model.Sensor).all()
>>> selected = Session.query(model.Sensor).filter(
...     model.Sensor.id_cu==model.ViewOpt.id_cu).filter(
...     model.Sensor.id_meas==model.ViewOpt.id_meas).filter(
...     model.ViewOpt.id_view==1).all()
>>> diff = [sens for sens in all if sens not in selected]
>>> len(all), len(selected), len(diff)
(154, 6, 148)

We're talking about a maximum of 200/300 sensors in total. The OR way did not filter anything (maybe I did something wrong). Greetings

On Jan 12, 4:04 pm, Michael Bayer mike...@zzzcomputing.com wrote:

On Jan 12, 2011, at 8:46 AM, neurino wrote:

I always need the same order_by throughout the app, and it could be subject to modification and/or integration in the near future, so what better place than the mapper to define it once instead of every time I run a query?

It sounds like the ordering here is for the purposes of view logic, so I'd have view logic that is factored down to receive Query objects that return Sensor rows; the view logic then applies the .order_by() to the Query. I.e. in a web app I use a Paginator object of some kind that does this, given a Query. This is probably a reason I don't like order_by to be within mapper(): it doesn't define persistence, but rather a view.

Anyway, do you think there are alternate paths to get `all sensors but those already chosen` which are order_by compatible?

quickest is a WHERE sensor id NOT IN (query), as a simple WHERE clause, or use OR: query sensor where sensor.cu != cu OR sensor.meas != meas OR sensor.view != view.
Except is not as widely used, and I think it's not even supported by all backends; even though it is a nice logical set operator, it's got annoying quirks like this one.

[...]
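Michael's "NOT IN as a plain WHERE clause" suggestion can be adapted to a composite key with an anti-join; NOT EXISTS is my substitution here, not something from the thread. A sketch with stdlib sqlite3 and invented sample data:

```python
import sqlite3

# Made-up rows: sensor (1, 10) is already chosen in view 1.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sensors (id_cu INTEGER, id_meas INTEGER, name TEXT);
    INSERT INTO sensors VALUES (1, 10, 'a'), (1, 20, 'b'), (2, 10, 'c');
    CREATE TABLE view_opts (id_view INTEGER, id_cu INTEGER, id_meas INTEGER);
    INSERT INTO view_opts VALUES (1, 1, 10), (2, 1, 20);
""")

# All sensors not already chosen in view 1, with the ordering applied
# at the query level rather than on the mapper:
free = conn.execute("""
    SELECT s.id_cu, s.id_meas FROM sensors AS s
    WHERE NOT EXISTS (
        SELECT 1 FROM view_opts AS vo
        WHERE vo.id_view = 1
          AND vo.id_cu = s.id_cu AND vo.id_meas = s.id_meas
    )
    ORDER BY s.id_cu, s.id_meas
""").fetchall()
print(free)
```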
[sqlalchemy] Re: spanning relationship on 3 tables fails
Thank you Michael, I will try it... next year... ^^ Good 2011 again! neurino

On Dec 30, 5:45 pm, Michael Bayer mike...@zzzcomputing.com wrote:

this is again my error messages not telling the whole story; I'll see if I can get the term foreign_keys back in there:

mapper(Sensor, sensors, properties={
    'data': relationship(Data, backref='sensor',
        foreign_keys=[data.c.id_meas],
        primaryjoin=and_(sensors.c.id_meas==data.c.id_meas,
                         data.c.id_acq==acquisitions.c.id,
                         acquisitions.c.id_cu==sensors.c.id_cu),
        cascade='all, delete-orphan', single_parent=True)
})

or

mapper(Sensor, sensors, properties={
    'data': relationship(Data, backref='sensor',
        foreign_keys=[sensors.c.id_meas],
        primaryjoin=and_(sensors.c.id_meas==data.c.id_meas,
                         data.c.id_acq==acquisitions.c.id,
                         acquisitions.c.id_cu==sensors.c.id_cu),
        cascade='all, delete-orphan', single_parent=True)
})

depending on whether this is one-to-many or many-to-one. A relationship like this is really better off as viewonly=True, since populating it is not going to add rows to the acquisitions table.

On Dec 30, 2010, at 10:15 AM, neurino wrote:

[...]
[sqlalchemy] spanning relationship on 3 tables fails
I have these 4 tables; the base concept is: `sensor data comes from its id_meas / id_cu pair; I can find id_meas directly in data, and id_cu in the data row's parent acquisition.`

data = Table('data', metadata,
    Column('id', Integer, primary_key=True),
    Column('id_acq', Integer, ForeignKey('acquisitions.id'), nullable=False),
    Column('id_meas', Integer, nullable=False),
    Column('value', Float, nullable=True),
)

acquisitions = Table('acquisitions', metadata,
    Column('id', Integer, primary_key=True),
    Column('id_cu', Integer, ForeignKey('ctrl_units.id'), nullable=False),
    Column('datetime', DateTime, nullable=False),
)

sensors = Table('sensors', metadata,
    Column('id_cu', Integer, ForeignKey('ctrl_units.id'), primary_key=True, autoincrement=False),
    Column('id_meas', Integer, primary_key=True, autoincrement=False),
    Column('name', Unicode(20), nullable=False),
    Column('desc', Unicode(40), nullable=False),
)

ctrl_units = Table('ctrl_units', metadata,
    Column('id', Integer, primary_key=True, autoincrement=False),
    Column('desc', Unicode(40), nullable=False)
)

and this mapping:

...
orm.mapper(Sensor, sensors, properties={
    'data': orm.relationship(Data, backref='sensor',
        primaryjoin=and_(sensors.c.id_meas==data.c.id_meas,
                         data.c.id_acq==acquisitions.c.id,
                         acquisitions.c.id_cu==sensors.c.id_cu),
        cascade='all, delete-orphan', single_parent=True)
})
...

and I can't get the 'data' relationship to work; I get this error:

sqlalchemy.exc.ArgumentError: Could not locate any equated, locally mapped column pairs for primaryjoin condition 'sensors.id_meas = data.id_meas AND data.id_acq = acquisitions.id AND acquisitions.id_cu = ctrl_units.id AND ctrl_units.id = sensors.id_cu' on relationship Sensor.data. For more relaxed rules on join conditions, the relationship may be marked as viewonly=True.

I also tried involving ctrl_units in the primaryjoin but had the same error. Thanks for your support
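The condition the primaryjoin above expresses ("data belongs to a sensor when id_meas matches and the parent acquisition's id_cu matches") can be checked at plain-SQL level; a sketch with stdlib sqlite3, the schema trimmed to just the columns involved and the rows invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE acquisitions (id INTEGER PRIMARY KEY, id_cu INTEGER);
    CREATE TABLE data (id INTEGER PRIMARY KEY, id_acq INTEGER,
                       id_meas INTEGER, value REAL);
    INSERT INTO acquisitions VALUES (1, 100), (2, 200);
    INSERT INTO data VALUES (1, 1, 10, 1.5), (2, 1, 20, 2.5), (3, 2, 10, 3.5);
""")

def sensor_data(id_cu, id_meas):
    # All values for the sensor identified by the (id_cu, id_meas) pair,
    # reaching id_cu through the parent acquisition row.
    return conn.execute("""
        SELECT d.value FROM data AS d
        JOIN acquisitions AS a ON d.id_acq = a.id
        WHERE a.id_cu = ? AND d.id_meas = ?
    """, (id_cu, id_meas)).fetchall()

print(sensor_data(100, 10))  # [(1.5,)]
print(sensor_data(200, 10))  # [(3.5,)]
```

Since this join only reads through acquisitions, it matches Michael's remark that the relationship is better off viewonly.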
[sqlalchemy] execute a sql string passing params as tuple like in python DB API
How can I execute a query as if I were using the plain Python DB API, passing params as a sequence and not as a mapping? Something like:

myParams = (1, 1, 2, 51, 3, 101, 4, 151, 6, 201, 7, 251)

Session.execute("""
    SELECT T.datetime, MAX(T.v0), MAX(T.v1), MAX(T.v2), MAX(T.v3), MAX(T.v4), MAX(T.v5)
    FROM (
        SELECT data.value AS v0, NULL AS v1, NULL AS v2, NULL AS v3, NULL AS v4, NULL AS v5
            FROM acquisitions INNER JOIN data ON acquisitions.id = data.id_acq
            WHERE acquisitions.id_centr=? and dati.id_mis=?
        UNION
        SELECT NULL AS v0, data.value AS v1, NULL AS v2, NULL AS v3, NULL AS v4, NULL AS v5
            FROM acquisitions INNER JOIN data ON acquisitions.id = data.id_acq
            WHERE acquisitions.id_centr=? and dati.id_mis=?
        UNION
        SELECT NULL AS v0, NULL AS v1, data.value AS v2, NULL AS v3, NULL AS v4, NULL AS v5
            FROM acquisitions INNER JOIN data ON acquisitions.id = data.id_acq
            WHERE acquisitions.id_centr=? and dati.id_mis=?
        UNION
        SELECT NULL AS v0, NULL AS v1, NULL AS v2, data.value AS v3, NULL AS v4, NULL AS v5
            FROM acquisitions INNER JOIN data ON acquisitions.id = data.id_acq
            WHERE acquisitions.id_centr=? and dati.id_mis=?
        UNION
        SELECT NULL AS v0, NULL AS v1, NULL AS v2, NULL AS v3, data.value AS v4, NULL AS v5
            FROM acquisitions INNER JOIN data ON acquisitions.id = data.id_acq
            WHERE acquisitions.id_centr=? and dati.id_mis=?
        UNION
        SELECT NULL AS v0, NULL AS v1, NULL AS v2, NULL AS v3, NULL AS v4, data.value AS v5
            FROM acquisitions INNER JOIN data ON acquisitions.id = data.id_acq
            WHERE acquisitions.id_centr=? and dati.id_mis=?
    ) AS T
    GROUP BY T.datetime
""", myParams)

Of course the number of UNIONs is not fixed; it is built from the length of the given tuple. I searched around but I see Session.execute only accepts bind params as a mapping. Thank you for your support
[sqlalchemy] Re: execute a sql string passing params as tuple like in python DB API
Crystal clear, thank you very much!

On Dec 20, 3:56 pm, Michael Bayer mike...@zzzcomputing.com wrote:

The execute() method of Connection and Engine passes parameters through unchanged to cursor.execute() or cursor.executemany(), depending on their structure, if you pass the method a plain SQL string followed by the parameters:

connection.execute("SELECT * FROM table WHERE id=? AND foo=?", 25, 'bar')

The usage of ? is dependent on the accepted paramstyles of the underlying DBAPI. A DBAPI may accept qmark (?), format (%s), numeric (:1, :2, :3), or no positional style at all; you'd need to consult its documentation for details. When using Session, acquire the current connection using conn = Session.connection().

On Dec 20, 2010, at 8:10 AM, neurino wrote:

[...]
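Michael's point can be reproduced with the stdlib sqlite3 driver, which is a qmark-paramstyle DBAPI: at cursor level the parameters go in as one flat sequence, matched positionally to the ? placeholders. The table and values below are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, foo TEXT)")
conn.execute("INSERT INTO t VALUES (?, ?)", (25, 'bar'))

# The number of ? placeholders can be built at runtime to match the
# parameter tuple, like the variable number of UNION arms in the post.
params = (25, 'bar')
sql = "SELECT * FROM t WHERE id=? AND foo=?"
rows = conn.execute(sql, params).fetchall()
print(rows)  # [(25, 'bar')]
```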
[sqlalchemy] Re: Use of table aliases
Thanks for the links; I found `aliased` in the docs but not in the tutorials. I got errors using it, but I was probably using it the wrong way; now it's clearer.

On Dec 16, 2:39 am, Michael Bayer mike...@zzzcomputing.com wrote:

On Dec 15, 2010, at 9:04 AM, neurino wrote:

[...]

to create aliases during an ORM query you use the aliased() construct. There are examples at: http://www.sqlalchemy.org/docs/orm/tutorial.html#using-aliases

you'd also be using sqlalchemy.and_() to formulate those outerjoin() conditions.
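Before translating the two-alias query to aliased(), its SQL form can be sanity-checked with stdlib sqlite3. A sketch with one invented acquisition row, and the per-alias id_centr filters simplified away (an assumption; the original query also filters a.id_centr per sensor):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE acquisitions (id_acq INTEGER PRIMARY KEY,
                               id_centr INTEGER, datetime TEXT);
    CREATE TABLE data (id_data INTEGER PRIMARY KEY, id_meas INTEGER,
                       id_acq INTEGER, value REAL);
    INSERT INTO acquisitions VALUES (1, 159, '2010-09-02 12:05:00');
    INSERT INTO data VALUES (1, 1501, 1, 23.98), (2, 1551, 1, 25.67);
""")

# Two aliases of the same data table, one per sensor, each LEFT JOINed
# on its own id_meas; this is what the two aliased() objects must express.
rows = conn.execute("""
    SELECT a.datetime, d1.value, d2.value
    FROM acquisitions AS a
    LEFT JOIN data AS d1 ON a.id_acq = d1.id_acq AND d1.id_meas = 1501
    LEFT JOIN data AS d2 ON a.id_acq = d2.id_acq AND d2.id_meas = 1551
""").fetchall()
print(rows)  # [('2010-09-02 12:05:00', 23.98, 25.67)]
```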
[sqlalchemy] Use of table aliases
Hello, I have 2 tables, data and acquisitions:

- each Acquisition has many Data
- each Data comes from a different sensor
- a single sensor is identified by the couple Acquisition.id_centr, Data.id_meas

Now I need a query with one column for each sensor and a row for each Acquisition.datetime. This is how I get it (in the case of two sensors) with SQL:

q = curs.execute("""
    SELECT a.datetime, d1.value, d2.value
    FROM acquisitions AS a
    LEFT JOIN data AS d1
        ON a.id_acq=d1.id_acq AND a.id_centr=159 AND d1.id_meas=1501
    LEFT JOIN data AS d2
        ON a.id_acq=d2.id_acq AND a.id_centr=320 AND d2.id_meas=1551
""")

>>> for n, row in enumerate(q):
...     print n, row

0 (u'2010-09-02 12:05:00', 23.98, 25.67)
1 (u'2010-09-02 12:10:00', 23.77, 25.57)
2 (u'2010-09-02 12:15:00', 23.96, 25.57)
3 (u'2010-09-02 12:20:00', 24.78, 25.94)
4 (u'2010-09-02 12:25:00', 25.48, 26.27)
5 (u'2010-09-02 12:30:00', 25.91, 26.46)
6 (u'2010-09-02 12:35:00', 26.14, 26.62)
7 (u'2010-09-02 12:40:00', 26.32, 26.73)
8 (u'2010-09-02 12:45:00', 26.44, 26.80)
9 (u'2010-09-02 12:50:00', 26.55, 26.87)
10 (u'2010-09-02 12:55:00', 26.62, 26.92)
11 (u'2010-09-02 13:00:00', 26.67, 26.94)
12 (u'2010-09-02 13:05:00', 26.69, 26.94)
13 (u'2010-09-02 13:10:00', 26.71, 26.96)
14 (u'2010-09-02 13:15:00', 26.73, 26.98)

But I can't get the same result with sqlalchemy; here's my mapping:

data = Table('data', metadata,
    Column('id_data', Integer, primary_key=True),
    Column('id_meas', Integer, nullable=False),
    Column('id_acq', Integer, ForeignKey('acquisitions.id_acq'), nullable=False),
    Column('value', Float, nullable=False),
)

acquisitions = Table('acquisitions', metadata,
    Column('id_acq', Integer, primary_key=True),
    Column('id_centr', Integer, nullable=False),
    Column('datetime', DateTime, nullable=False),
    # acquisitions with same id_centr and datetime are duplicates
    UniqueConstraint('id_centr', 'datetime'),
)

orm.mapper(Data, data, properties={
    'acquisitions': orm.relationship(Acquisition, backref='data'),
})
orm.mapper(Acquisition, acquisitions)

Any advice? Thanks for your support

neurino
[sqlalchemy] syntax error using in_ and tuple_ in Sqlite
I get an OperationalError trying to use tuple_ and in_ in a query against a SQLite db. This is the query (select the distinct id_acq / id_centr couples not already present in another table):

query = model.Session.query(
        Dato.id_elab, Acquisizione.id_centr)\
    .filter(Dato.id_acq==Acquisizione.id_acq)\
    .distinct()\
    .filter(~tuple_(Dato.id_elab, Acquisizione.id_centr)\
    .in_(select([Sensore.id_elab, Sensore.id_centr])))

I get this error:

OperationalError: (OperationalError) near ",": syntax error u'SELECT DISTINCT dati.id_elab AS dati_id_elab, acquisizioni.id_centr AS acquisizioni_id_centr FROM dati, acquisizioni WHERE dati.id_acq = acquisizioni.id_acq AND (dati.id_elab, acquisizioni.id_centr) NOT IN (SELECT sensori.id_elab, sensori.id_centr FROM sensori)' ()

I guess the ',' cited in the error is the one inside the tuple on the 3rd row of the SQL. Now I wonder if it's a SQLite limitation; the SQL string seems correct to me. Any advice? Thanks for your support
[sqlalchemy] Re: syntax error using in_ and tuple_ in Sqlite
Ok, I found that the multi-column IN clause is unsupported in SQLite: http://www.sqlite.org/cvstrac/wiki?p=UnsupportedSql

Is it supposed to work in MySQL instead, or is there something wrong in my sqlalchemy query? I can't test it in MySQL now, but it will be used in the production environment. In SQLite I guess I need to retrieve the data from both queries into 2 sets and take the difference() between them. Thanks

On Dec 9, 3:28 pm, neurino neur...@gmail.com wrote:

[...]
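The two-sets-and-difference() fallback sketched above, with stdlib sqlite3; the `coppie` table is a made-up stand-in for the distinct pairs produced by the dati/acquisizioni join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sensori (id_elab INTEGER, id_centr INTEGER);
    INSERT INTO sensori VALUES (1, 159);
    CREATE TABLE coppie (id_elab INTEGER, id_centr INTEGER);
    INSERT INTO coppie VALUES (1, 159), (2, 320), (3, 101);
""")

# Fetch both result sets as sets of (id_elab, id_centr) tuples and
# subtract in Python instead of using a multi-column NOT IN.
all_pairs = set(conn.execute("SELECT DISTINCT id_elab, id_centr FROM coppie"))
known = set(conn.execute("SELECT id_elab, id_centr FROM sensori"))
new_pairs = all_pairs - known
print(sorted(new_pairs))  # [(2, 320), (3, 101)]
```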
[sqlalchemy] Re: Selective relationship cascade
Thanks Michael. And what about the `type` relationship? Or would it be much simpler to have no relationship at all, since there must be no actions on children on parent update/delete? If I had to do this with plain SQL I'd simply do a `measures LEFT JOIN types ON measures.type_id = types.id` to get the type description, if any, so I realize the relationship is probably not really needed... Any advice welcome anyway... Thank you

On Dec 2, 3:55 pm, Michael Bayer mike...@zzzcomputing.com wrote:

On Dec 2, 2010, at 6:48 AM, neurino wrote:

[...]

the cascade to delete child items when the parent is deleted here would be:

'acquisition': orm.relationship(Acquisition,
    backref=backref('measures', cascade='all, delete-orphan'))
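A SQL-level analogue of the wanted behavior, using stdlib sqlite3 (this is an assumption for illustration: the thread solves it with the ORM cascade above, not schema DDL). The acquisition foreign key cascades on delete, while type_id is kept as a plain column so that editing or deleting a type never touches measures:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite FKs are off by default
conn.executescript("""
    CREATE TABLE acquisitions (id INTEGER PRIMARY KEY, datetime TEXT);
    CREATE TABLE measures (
        id INTEGER PRIMARY KEY,
        acq_id INTEGER NOT NULL
            REFERENCES acquisitions(id) ON DELETE CASCADE,
        type_id INTEGER NOT NULL,  -- deliberately no FK: types may change freely
        value REAL NOT NULL
    );
    INSERT INTO acquisitions VALUES (1, '2010-12-02'), (2, '2010-12-03');
    INSERT INTO measures VALUES (1, 1, 1, 20.5), (2, 1, 2, 55.0), (3, 2, 1, 21.0);
""")

# Deleting an acquisition removes its child measures only.
conn.execute("DELETE FROM acquisitions WHERE id = 1")
left = conn.execute("SELECT id FROM measures").fetchall()
print(left)  # [(3,)]
```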
[sqlalchemy] Selective relationship cascade
Let's say I have acquisitions at regular intervals, and at each acquisition I get different measures. Each measure has a value and a numeric type id (1 = temperature, 2 = humidity, etc...). I want the user to be able to insert a text description for the type ids he knows, if he wants. I end up having these 3 tables:

measures = Table('measure', metadata,
    Column('id', Integer, primary_key=True),
    Column('acq_id', Integer, ForeignKey('acquisitions.id'), nullable=False),
    Column('type_id', Integer, ForeignKey('types.id'), nullable=False),
    Column('value', Float, nullable=False),
)

acquisitions = Table('acquisitions', metadata,
    Column('id', Integer, primary_key=True),
    Column('datetime', DateTime, nullable=False),
)

types = Table('types', metadata,
    Column('id', Integer, primary_key=True, autoincrement=False),
    Column('desc', String, nullable=False),
)

[class definitions...]

orm.mapper(Acquisition, acquisitions)
orm.mapper(Type, types)
orm.mapper(Measures, measures, properties={
    'type': orm.relationship(Type, backref='measures'),
    'acquisition': orm.relationship(Acquisition, backref='measures'),
})

Now, while I want all child measures deleted when an acquisition is deleted, I DON'T want any modification of the data when the user edits types. For example:

- if he deletes a type, the related measures are not deleted and their type_id is not set to null
- if he edits a type.id, the previously associated measures are not updated to the new type_id but keep the old one

How can I achieve this? I guess I should use cascade in the relationship but can't figure out how. Thanks for your support!
[sqlalchemy] Re: relationships for no-table-related Class
Thanks Michael, this solved most of my doubts. Greetings neurino

On Nov 16, 5:51 pm, Michael Bayer mike...@zzzcomputing.com wrote:

On Nov 16, 2010, at 4:19 AM, neurino wrote:

[...]

right so I'd just make the attribute live:

Session = scoped_session(sessionmaker())

class Root(object):
    @property
    def areas(self):
        return Session.query(Area).all()

singleton_root = Root()

class Area(object):
    parent = singleton_root

This is no different than the example above - Session.delete(someobject.collection[someindex]) does not remove the item from the collection - it's only because of the call to commit() that someobject.collection is expired and then reloaded. If you'd like to later add caching to Root.areas such that the collection is pulled from memory until Session.commit() is called, you could enhance Root.areas to maintain values in a cache, such as a WeakKeyDictionary which uses Session().transaction as the key.

Thanks for your help neurino

On Nov 15, 9:57 pm, Michael Bayer mike...@zzzcomputing.com wrote:

On Nov 15, 2010, at 10:46 AM, neurino wrote:

Thanks for your answer first. Root is a singleton; its class is not mapped to a table.
What I mean is that I could add a table roots to the database with a single row, add to areas a foreign key root_id, and create a relationship (as from subareas to their parent area) and get what I'm talking about.

sure, then you're mapping Root to a table, and having just one row. That would make Root.areas act exactly like a relationship(), though it's a little strange to have a row in the database just to make your application work a certain way (where certain way here is not clear).

This relationship, between root and areas, as well as between areas and subareas, would come in handy, for example, to traverse the tree to extract an XML simply, or to make recursive calculations. Before sqlalchemy I was used to adding all the areas, subareas, items, and parent attributes to classes by myself, but now I'm in the situation where 80% of the work is done by sqlalchemy automatically and I'm not sure how to fill in the remaining, possibly having both areas and subareas behave the same way to avoid confusion (just as an example, lazy loading). Thanks for your support neurino

On Nov 15, 3:49 pm, Michael Bayer mike...@zzzcomputing.com wrote:

On Nov 15, 2010, at 8:06 AM, neurino wrote:

So no advice? Are relationships and backref something more than attributes I can set up with a query? Thank you for your support.

what's not stated clearly here is what Root is. If that's not a class mapped to a table, then you'd just need to use regular Python attributes and descriptors to establish the in-python behavior you're looking for. Seems like it's essentially some kind of query object, so your query.all()/.parent = some_root approach is what you'd go with, though it would appear that Root is a singleton anyway, meaning this could be established on Area at the class level instead of assigning to each instance. It's not clear what other behavior of relationship() would apply here, since Root has no database identity.
On Nov 11, 9:45 am, neurino neur...@gmail.com wrote: I have a tree structure Root | +--Area | | | +--SubArea | | | | | +--Item | | | | | +--Item | | | +--SubArea | | | +--Item | | | +--Item | +--Area | +--SubArea | | | +--Item | | | +--Item | +--SubArea | +--Item | +--Item The tree structure corresponds to slqalchemy db tables `areas`, `subareas` and `items`. Something like this: mapper(Area, areas_table, properties={ 'subareas': relationship(SubArea, backref='parent'), }) mapper(SubArea, subareas__table, properties={ 'items': relationship(Item, backref='parent'), }) mapper(Item, items_table) so each Area instance will have a `subareas` list and each SubArea will have a `items` list
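Michael's closing suggestion, caching Root.areas per transaction with a WeakKeyDictionary keyed on Session().transaction, can be sketched in plain Python. FakeSession and FakeTransaction below are stand-ins (not SQLAlchemy objects) used only to show the cache-invalidation mechanics: entries keyed on the old transaction simply stop matching once commit() begins a new one.

```python
import weakref

class FakeTransaction:
    """Stand-in for the object returned by Session().transaction."""

class FakeSession:
    """Stand-in for a SQLAlchemy Session; commit() starts a new transaction."""
    def __init__(self):
        self.transaction = FakeTransaction()
        self._loads = 0

    def load_areas(self):
        # stand-in for Session.query(Area).all(); counts database round trips
        self._loads += 1
        return ['area-1', 'area-2']

    def commit(self):
        # old transaction key dies here, and its cache entry with it
        self.transaction = FakeTransaction()

session = FakeSession()

class Root(object):
    _cache = weakref.WeakKeyDictionary()

    @property
    def areas(self):
        txn = session.transaction
        if txn not in self._cache:
            self._cache[txn] = session.load_areas()
        return self._cache[txn]

root = Root()
a = root.areas
b = root.areas
assert a is b                 # second access served from the cache
assert session._loads == 1    # only one "query" so far
session.commit()
assert root.areas == ['area-1', 'area-2']
assert session._loads == 2    # reloaded in the new transaction
```

The WeakKeyDictionary means no explicit eviction is needed: once nothing references the old transaction object, its cache entry disappears automatically.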
[sqlalchemy] Re: relationships for no-table-related Class
I didn't mean that mapping Root to a Table (if not necessary) is my intent; what I'd like to know is how to get the same behavior without the bloat of an extra table. Let me make an example, maybe I'm wrong though: let's say I delete an item, I'd expect not to find this item anymore in its (ex) parent's items:

    subarea.items
    [item]
    session.delete(subarea.items[0])
    session.commit()
    subarea.items
    []

This would not be the same for root's areas if I just use a query:

    root.areas = query(Area).all()
    root.areas
    [area]
    session.delete(root.areas[0])
    session.commit()
    root.areas
    [area]

I hope I have been able to focus my question now.

Thanks for your help
neurino
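The expire-on-commit behavior Michael describes in his reply can be seen with a minimal two-table mapping. This is a sketch under assumptions not in the thread: SQLAlchemy 1.4+ with the declarative style (the thread uses classical mapper() calls, but relationships behave the same) and an in-memory SQLite engine.

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class SubArea(Base):
    __tablename__ = 'subareas'
    id = Column(Integer, primary_key=True)
    items = relationship('Item', backref='parent')

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    subarea_id = Column(Integer, ForeignKey('subareas.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

subarea = SubArea(items=[Item(), Item()])
session.add(subarea)
session.commit()

session.delete(subarea.items[0])
# delete() by itself does not touch the in-memory collection
assert len(subarea.items) == 2
session.commit()
# commit() expires subarea, so the next access reloads without the row
assert len(subarea.items) == 1
```

This is exactly why the relationship-backed collection appears to "shrink" after commit() while a plain `root.areas = query(Area).all()` list does not: the list is an ordinary attribute and nothing ever expires it.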
[sqlalchemy] Re: relationships for no-table-related Class
So no advice? Are relationships and backrefs something more than attributes I can set up with a query?

Thank you for your support.
[sqlalchemy] Re: relationships for no-table-related Class
Thanks for your answer first. Root is a singleton; its class is not mapped to a table. What I mean is that I could add a table roots to the database with a single row, give areas a foreign key root_id, and create a relationship just like the one from subareas to their parent area, and get what I'm talking about.

This relationship between root and area, as well as the ones between areas and subareas, would come in handy, for example, to traverse the tree to extract XML, or to make recursive calculations. Before SQLAlchemy I used to add all the areas, subareas, items and parent attributes to my classes myself, but now I'm in the situation where 80% of the work is done by SQLAlchemy automatically and I'm not sure how to fill in the rest, possibly having both areas and subareas behave the same way to avoid confusion (just as an example, lazy loading).

Thanks for your support
neurino

On Nov 15, 3:49 pm, Michael Bayer mike...@zzzcomputing.com wrote:

On Nov 15, 2010, at 8:06 AM, neurino wrote: So no advice? Are relationships and backrefs something more than attributes I can set up with a query? Thank you for your support.

What's not stated clearly here is what Root is. If that's not a class mapped to a table, then you'd just need to use regular Python attributes and descriptors to establish the in-Python behavior you're looking for. It seems like it's essentially some kind of query object, so your query.all() / .parent = some_root approach is what you'd go with, though it would appear that Root is a singleton anyway, meaning this could be established on Area at the class level instead of assigning to each instance. It's not clear what other behavior of relationship() would apply here, since Root has no database identity.
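Michael's point about establishing the parent at the class level (possible because Root is a singleton with no database identity) might look like the following in plain Python. This is a sketch: load_areas() is a hypothetical stand-in for Session.query(Area).all(), and the singleton is enforced via __new__.

```python
class Root(object):
    """Singleton; never mapped to a table."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super(Root, cls).__new__(cls)
        return cls._instance

    @property
    def areas(self):
        # hypothetical stand-in for Session.query(Area).all()
        return load_areas()

def load_areas():
    return ['area-1', 'area-2']

class Area(object):
    # one shared parent on the class itself, instead of assigning
    # area.parent = root on every instance after each query
    parent = Root()

assert Area.parent is Root()                      # every Area sees the same Root
assert Area().parent.areas == ['area-1', 'area-2']
```

Because `parent` lives on the Area class, it costs nothing per instance and never needs to be re-established after a query, unlike the `for area in self.areas: area.parent = self` loop from the original post.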
[sqlalchemy] relationships for no-table-related Class
I have a tree structure

    Root
    |
    +--Area
    |  |
    |  +--SubArea
    |  |  |
    |  |  +--Item
    |  |  |
    |  |  +--Item
    |  |
    |  +--SubArea
    |     |
    |     +--Item
    |     |
    |     +--Item
    |
    +--Area
       |
       +--SubArea
       |  |
       |  +--Item
       |  |
       |  +--Item
       |
       +--SubArea
          |
          +--Item
          |
          +--Item

The tree structure corresponds to SQLAlchemy db tables `areas`, `subareas` and `items`. Something like this:

    mapper(Area, areas_table, properties={
        'subareas': relationship(SubArea, backref='parent'),
    })
    mapper(SubArea, subareas_table, properties={
        'items': relationship(Item, backref='parent'),
    })
    mapper(Item, items_table)

so each Area instance will have a `subareas` list and each SubArea will have an `items` list; also, I easily get a backref `parent` from each Item to its parent SubArea and from each SubArea to its parent Area. But this won't be so for Root: there will be no `areas` list on Root, nor will its areas have a `parent` reference to Root.

The quick-and-dirty solution is to do this in Root:

    self.areas = query(Area).all()
    for area in self.areas:
        area.parent = self

But it won't be the same thing as SQLAlchemy `relationship` attributes, so: are there alternative, more SQLAlchemy-like solutions? Any tip appreciated!

Thank you for your support
Greetings neurino

-- You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalch...@googlegroups.com. To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
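For reference, the mapped part of the tree above can be written as a self-contained sketch. The declarative style and in-memory SQLite engine are assumptions of mine (the post uses classical mapper() calls against pre-defined Table objects), but the resulting `subareas`, `items` and `parent` attributes behave identically.

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Area(Base):
    __tablename__ = 'areas'
    id = Column(Integer, primary_key=True)
    # gives Area.subareas and the SubArea.parent backref
    subareas = relationship('SubArea', backref='parent')

class SubArea(Base):
    __tablename__ = 'subareas'
    id = Column(Integer, primary_key=True)
    area_id = Column(Integer, ForeignKey('areas.id'))
    # gives SubArea.items and the Item.parent backref
    items = relationship('Item', backref='parent')

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    subarea_id = Column(Integer, ForeignKey('subareas.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

area = Area(subareas=[SubArea(items=[Item(), Item()])])
session.add(area)
session.commit()

# collections and backrefs work in both directions
assert area.subareas[0].parent is area
assert area.subareas[0].items[0].parent is area.subareas[0]
```

Root is the only level missing from the mapping, which is exactly the gap the quick-and-dirty `query(Area).all()` assignment tries to fill.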