Re: [sqlalchemy] (Hopefully) simple problem with backrefs not being loaded when eagerloading.
On 13/09/10 18:21, Michael Bayer wrote:
> On Sep 13, 2010, at 12:26 PM, Jon Siddle wrote: [...]
> This relationship is satisfied as you request it, and it works by looking in the current Session's identity map for the primary key stored by the many-to-one. The operation falls under the realm of lazy loading even though no SQL is emitted. If you consider that Child may have many related many-to-ones, all of which may already be in the Session, it would be quite wasteful for the ORM to assume that you're going to be working with the object in a detached state and that you need all of them.

I'm not sure I see what you're saying here. I've explicitly asked for all children relating to parent, and these are correctly queried and loaded. While they are being added to the parent.children list, why not also set each child.parent, since this is known?

> because you didn't specify it, and it takes a palpable amount of additional overhead to do so

I don't see why it's more overhead than an assignment child.parent = ... at the same time as the list append parent.children.append(...). There's obviously something more complex going on behind the scenes.

> as well as a palpable amount of complexity to decide if it should do so based on the logic you'd apply here, when in 99% of the cases it is not needed.

I just don't see the complexity of the logic here. I've specified that I want to join parent to each child, and it's already doing so in one direction. I realise this is only a problem for detached objects, but it leads to quite confusing behaviour, I think. I don't see how this is wasteful, but I may be missing something.

> Child may have parent, foo, bar, bat attached to it, all many-to-ones. Which ones should it assume the user wants to load?

parent. Because I have explicitly asked it to, using joinedload or eagerload.
> If you are loading 10,000 rows, and each Child object has three many-to-ones on it, and suppose it takes 120 function calls to look at a relationship, determine the values to send to query._get(), look in the identity map, etc., that is 3 x 10,000 x 120 = 3.6 million function calls

But you don't have to look in the identity map at all, since you've just set the parent-child association in the other direction and thus have both entities to hand, right?

> ...by default, almost never needed since normally they are all just there in the current session, without the user asking to do so. There is nothing special about Child.parent just because Parent.children is present up the chain. While Hibernate may have decided that the complexity and overhead of adding this decision-making was worth it, they have many millions more function calls to burn with the Java VM in any case than we do in Python, and they also have a much higher bar to implement lazy loading since their native class instrumentation is itself a huge beast. In our case it is simply unnecessary. Any such automatic decision-making, you can be sure, quickly leads to many uncomfortable edge cases and thorny situations, causing needless surprise issues for users who don't really need such a behavior in the first place.

I would agree with all of this if I understood why a) it takes an appreciable number of function calls or b) automatic decision-making is necessary. I don't think there's any ambiguity here, but again, perhaps I'm missing something fundamental.

> As I've mentioned, you will have an option to tell it which many-to-ones you'd like it to spend time pre-populating using the upcoming immediateload option.

I still think this can be done with negligible overhead if it's done at the same time as the other side of the relation (parent-child). Perhaps I'll have to dig around in the code to see why this is such a problem.
> The Session's default assumption is that you're going to be leaving it around while you work with the objects contained, and in that way you interact with the database for as long as you deal with its objects, which represent proxies to the underlying transaction. When objects are detached, for reasons like caching and serialization to other places, normally you'd merge() them back when you want to use them again. So if it were me I'd normally be looking to not be closing the session.

I'm closing the session before I forward the objects to the view template in a web application.

>> The template has no business doing database operations,

> I disagree with this interpretation of abstraction. That's like saying that pushing the button in an elevator means you're now in charge of starting up the elevator motors and instructing them how many feet to travel.

Huh? I didn't use the word abstraction.

> The template is not doing database operations, it is working with high level objects that you've sent it, and knows nothing of a database. That these objects may be doing database calls behind the scenes to lazily fetch additional data is known as the proxy pattern. It is one of the most
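The back-population Jon is asking for — setting child.parent at the same moment the eagerly loaded collection is appended to — can be sketched in a few lines of plain Python. Parent and Child here are hypothetical plain classes standing in for mapped entities; this is not SQLAlchemy internals, just the shape of the extra assignment under discussion:

```python
# Minimal sketch of two-way population during an eager load.
# Parent/Child are plain illustrative classes, not SQLAlchemy entities.

class Parent(object):
    def __init__(self):
        self.children = []

class Child(object):
    def __init__(self):
        self.parent = None

def populate_children(parent, loaded_children):
    """Append each eagerly loaded child and, at the same time,
    set the reverse many-to-one so it survives detachment."""
    for child in loaded_children:
        parent.children.append(child)
        child.parent = parent   # the extra assignment Jon proposes

p = Parent()
kids = [Child(), Child()]
populate_children(p, kids)
assert all(c.parent is p for c in kids)
```

Michael's counterpoint, elaborated later in the thread, is that this assignment cannot actually be a bare attribute set inside the ORM: it has to go through instrumented attribute machinery and coordinate with the loader strategies on both sides.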
Re: [sqlalchemy] internationalization of content
On 13/09/2010 22:37, NiL wrote:
> Has anyone tried to implement this? A working solution? Willing to participate in an effort to provide a solution?

Isn't this something better suited to the application framework rather than the database framework (i.e. not SQLAlchemy)?

cheers,

Chris

-- You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalch...@googlegroups.com. To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
Re: [sqlalchemy] paranoia - does flush ensure a unique id?
On 13/09/2010 18:02, Chris Withers wrote:
> What ensures obj.id will be unique and will it always be unique, even in the case of high volumes of parallel writes to the database? Does it depend on the back end? Are any backends known not to work this way?

In short, the unique constraint of the primary key will make sure the id is always unique.

d'uh.

Chris
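The guarantee lives in the database, not the ORM: the primary key's unique constraint rejects a duplicate id no matter how many writers there are. A minimal stdlib-only illustration (sqlite3 here, but any backend enforcing the constraint behaves the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO t (id, name) VALUES (1, 'a')")

# A second row with the same primary key violates the unique constraint
# that the primary key implies, so the database refuses it.
try:
    conn.execute("INSERT INTO t (id, name) VALUES (1, 'b')")
    duplicate_ok = True
except sqlite3.IntegrityError:
    duplicate_ok = False

print(duplicate_ok)  # False
```

With parallel writers, each backend serializes the constraint check inside its own transaction machinery; one insert wins and the other gets an integrity error.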
[sqlalchemy] Re: [elixir] problem with cascade deletes
hey yacine, friends,

indeed the problem came partly because i haven't followed the traces of elixir closely enough: i've used backref instead of inverse for manytoone relations. the only drawback of using inverse is that it requires the inverse relation to really exist, hence it can't be implied. here comes to light an elixir extension i've made a while ago (inverse_orphans), that would create an inverse relation on the onetomany side when it is missing. for your interest, the plugin is attached. i'd extend this plugin to take kwargs too.

best regards, alex

On 09/12/2010 10:37 AM, chaouche yacine wrote:
> Yes, except I wanted the children *not* to be deleted but to raise an integrity_error exception instead, because what actually happened is that they were not deleted but their FKs (pointing to the parent) were set to NULL, and they were raising a non-null constraint related exception.

--- On Sun, 9/12/10, alex bodnaru alexbodn.gro...@gmail.com wrote:
From: alex bodnaru alexbodn.gro...@gmail.com
Subject: Re: [elixir] problem with cascade deletes
To: sqleli...@googlegroups.com
Date: Sunday, September 12, 2010, 1:21 AM

hello yacine, elixir isn't known to reinvent sa, but please point me to things you would change for a pure approach. part of the lower level stuff is needed to turn foreign keys on in sqlite. in the meantime, i did a declarative example which fails like elixir. btw, this is the same problem you have also previously reported on this list. alex

On 09/12/2010 09:58 AM, chaouche yacine wrote:
> hello alex, in your elixir program, you are mixing some imports from sqlalchemy (create_engine for example) with imports from elixir. Did you try an elixir-only approach? Y.Chaouche

--- On Sat, 9/11/10, alex bodnaru alexbodn.gro...@gmail.com wrote:
From: alex bodnaru alexbodn.gro...@gmail.com
Subject: [elixir] problem with cascade deletes
To: sqleli...@googlegroups.com
Date: Saturday, September 11, 2010, 6:31 AM

hello friends, there seems to be a flaw in elixir with cascade deletes.
i have a program that does it with sqlalchemy orm, and a similar one to do it with elixir. instead of deleting, the elixir program only nulls the keys in the child. the programs are attached. best regards, alex

-- You received this message because you are subscribed to the Google Groups SQLElixir group. To post to this group, send email to sqleli...@googlegroups.com. To unsubscribe from this group, send email to sqlelixir+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlelixir?hl=en.

inverse_orphans Elixir Statement Generator

=== inverse_orphans ===

i am using an identity model module from a third party, having, among others, a User class. this model file might well be upgraded by its supplier. in another model file, which imports the identity model entities, i have a Person class, which has a ManyToOne relationship with the User table. however, i'd also like to know the Person which references a given User, and adding OneToMany relationships to User would need to be maintained every time the supplier upgrades the identity model. to implement this, i am giving an inverse name on the ManyToOne side, and adding an after_mapper action to create the OneToMany relationship.
from elixir.statements import Statement
from elixir import OneToOne, OneToMany, ManyToOne, ManyToMany

__all__ = ['inverse_orphans']
__doc_all__ = __all__

#TODO: inherit from entity builder
class inverse_orphans_entity_builder(object):
    """An inverse_orphans Elixir Statement object"""

    def __init__(self, entity):
        self.entity = entity

    def before_table(self):
        '''
        if we name an inverse relationship which is not already defined
        on the target, here we create the inverse relationship on the
        target. should run this for each relationship property.
        '''
        for r in self.entity._descriptor.relationships:
            desc = r.target._descriptor
            if r.inverse_name and desc.find_relationship(r.inverse_name) is None:
                if type(r) == ManyToOne:
                    # should probably test uselist
                    if 'unique' in r.column_kwargs and
[sqlalchemy] Re: internationalization of content
Hi Chris, thanks for your reply. I guess it is not an application-framework oriented question. It seems to me rather a question of database design/access. I have a pointer to modify the elixir versioning extension to provide this functionality. It would be framework oriented if we were talking about the translation of templates, which is not the case. best NiL
Re: [sqlalchemy] passive_deletes/updates with sqlite
On 09/13/2010 05:49 PM, Michael Bayer wrote:
> On Sep 13, 2010, at 11:16 AM, alex bodnaru wrote:
>> hope my approach isn't too simplistic, but onetomany is usually implemented in rdbms by a manytoone column or a few of them, with or without ri clauses: thus, a foreign key or an index. conversely, a manytoone relation has an implicit onetomany one (or an explicit onetoone).
> if you read what I wrote, I was explaining that we architecturally choose not to generate the implicit reverse direction when it isn't specified by the user. And that this decision is not too controversial, since Hibernate made the same one.
>> the example i've given with elixir (look at the sql echo) shows the onetomany updates the foreign key to null, not knowing the rows wouldn't be found in the cascading delete. i'm searching for the exact point where elixir should give the passive_deletes to the sa relation, thus to force it to give it to the right side of it.
> right - Elixir has a more abstracted layer of user configuration, which is basically what you're asking SQLAlchemy to build into it. I'd rather make things simpler on the inside, not more magical.

well, i found the problem (it's in my code). i was hilariously dealing with something more magical in sa than in elixir: i've used backref from sa instead of inverse from elixir. for this specific case it's simply my error to do this, since passive_deletes is an argument to be passed to an existing relation, but the magic i usually wanted to achieve with backref was auto-creating a onetomany relation to complement a manytoone one, especially when i don't wish to touch the file of the parent entity.
btw, to achieve this same magic with elixir i've made in the past an elixir extension, which was rejected by the elixir people, who pointed me to backref. best regards and thanks again, alex

from sqlalchemy import (MetaData, Table, Column, Integer, String,
                        ForeignKey, create_engine, ForeignKeyConstraint)
from sqlalchemy.orm import (mapper, relationship, sessionmaker)
from elixir import *
from sqlalchemy.interfaces import PoolListener

class SQLiteFKListener(PoolListener):
    def connect(self, dbapi_con, con_record):
        dbapi_con.execute('PRAGMA foreign_keys = ON;')

engine = create_engine('sqlite:///:memory:', echo=True, listeners=[SQLiteFKListener()])
metadata.bind = engine

class MyClass(Entity):
    using_options(tablename='mytable')
    id = Field(Integer, primary_key=True, autoincrement=True)
    name = Field(String(20))
    children = OneToMany('MyOtherClass', passive_deletes=True)
    def __repr__(self):
        return 'MyClass %s, %s' % (None if self.id is None else str(self.id), self.name)

class MyOtherClass(Entity):
    using_options(tablename='myothertable')
    id = Field(Integer, primary_key=True, autoincrement=True)
    name = Field(String(20))
    parent = ManyToOne('MyClass', inverse='children', colname='parent_id',
                       ondelete='cascade')
    def __repr__(self):
        return 'MyOtherClass %s, %s, %s' % (None if self.parent_id is None else str(self.parent_id),
                                            None if self.id is None else str(self.id), self.name)

setup_all()
create_all()

alex = MyClass(name='alex')
pisi = MyClass(name='pisi')
print alex, pisi
#session.commit()
session.flush()
print alex, pisi

alexdagan = MyOtherClass(parent=alex, name='dagan')
alexshaked = MyOtherClass(parent=alex, name='shaked')
pisidagan = MyOtherClass(parent=pisi, name='dagan')
pisishaked = MyOtherClass(parent=pisi, name='shaked')
#session.commit()
session.flush()

shaked1 = session.query(MyOtherClass).filter_by(parent_id=1, name=u'shaked')
session.delete(alex)
#session.commit()
session.flush()

for my in session.query(MyClass).all():
    print my
for my in session.query(MyOtherClass).all():
    print my
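The behaviour the elixir script above is after can be seen at the SQL level with stdlib sqlite3 alone: SQLite ships with foreign key enforcement off, and ON DELETE CASCADE only fires once the pragma is enabled, which is exactly what the SQLiteFKListener per-connection hook arranges. Table and column names below mirror the example but are otherwise illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # what SQLiteFKListener does per connection
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE myothertable (
    id INTEGER PRIMARY KEY,
    name TEXT,
    parent_id INTEGER REFERENCES mytable(id) ON DELETE CASCADE)""")
conn.execute("INSERT INTO mytable (id, name) VALUES (1, 'alex')")
conn.execute("INSERT INTO myothertable (id, name, parent_id) VALUES (1, 'dagan', 1)")

conn.execute("DELETE FROM mytable WHERE id = 1")
# With the pragma on, the child row is cascaded away rather than
# left behind with parent_id set to NULL.
remaining = conn.execute("SELECT COUNT(*) FROM myothertable").fetchone()[0]
print(remaining)  # 0
```

Without the pragma (or without passive_deletes telling the ORM to stand back), the ORM-level cascade is what issues the UPDATE ... SET parent_id = NULL seen in the thread's SQL echo.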
[sqlalchemy] Re: Updating a detached object
Channel is just a type of object. I realize what my problem is. I don't know why my object isn't saved correctly.

for chan in channels:
    if chan.id == channel.id:
        chan = session.merge(channel)
        break

On Sep 13, 2:47 pm, Michael Bayer mike...@zzzcomputing.com wrote: On Sep 13, 2010, at 2:31 PM, Alvaro Reinoso wrote: Yes, I've done that. It doesn't work either.

for chan in channels:
    if chan.id == channel.id:
        chan = session.merge(channel)
        break

i dont understand the context of that code (what's channel)? This is how a basic merge works:

def merge_new_data(some_xml):
    my_objects = parse_xml(some_xml)
    # at this point, every object in my_objects should
    # have a primary key, as well as every child of every element,
    # all the way down. Existing primary keys must be fully populated,
    # that's your job. This is the intricate part, obviously. But you don't
    # merge anything here, just get the PKs filled in.

    # then you merge the whole thing. merge() cascades along all
    # relationships. The rule is simple - if PK is present and exists in the DB, it
    # updates. otherwise, it inserts.
    for obj in my_objects:
        Session.merge(obj)
    Session.commit()
    # done

On Sep 13, 2:27 pm, Michael Bayer mike...@zzzcomputing.com wrote: On Sep 13, 2010, at 2:13 PM, Alvaro Reinoso wrote: If I merge the updated channel like you can see in this piece of code, it's working:

def insertXML(channels, strXml):
    """Insert a new channel given XML string"""
    channel = Channel()
    session = rdb.Session()
    channel.fromXML(strXml)
    fillChannelTemplate(channel, channels)
    for item in channel.items:
        if item.id == 0:
            item.id = None
            break
    session.merge(channel)
    for chan in channels:
        if chan.id == channel.id:
            chan.items.append(item)
            break

My problem is I'm using channels, a list of channels which I save in an HTTP session object. The channels list is a detached object which I get using the joinedload option.
So in this case I update the object correctly in the database, but it isn't persistent in channels if I do this:

for chan in channels:
    if chan.id == channel.id:
        chan.items.append(item)
        break

Do you have any idea how I can solve this problem? Or another approach?

here:

session.merge(channel)

use the return value of merge():

channel = session.merge(channel)

the returned channel plus all children is the fully merged result.

Thanks!

On Sep 10, 5:09 pm, Michael Bayer mike...@zzzcomputing.com wrote: On Sep 10, 2010, at 2:57 PM, Alvaro Reinoso wrote: Hello guys, I have this table:

class Channel(rdb.Model):
    rdb.metadata(metadata)
    rdb.tablename("channels")
    id = Column("id", Integer, primary_key=True)
    title = Column("title", String(100))
    hash = Column("hash", String(50))
    runtime = Column("runtime", Float)
    items = relationship(MediaItem, secondary=channel_items,
                         order_by=MediaItem.position, backref="channels")

I have a list of channels, but they are detached objects. I get them using the joinedload option because I manipulate those objects sometimes. When I do that, I update the object. This time, I'm trying to add a new item to a detached channel object. This is the code:

def insertXML(channels, strXml):
    """Insert a new channel given XML string"""
    channel = Channel()
    session = rdb.Session()
    result = channel.fromXML(strXml)
    fillChannelTemplate(channel, channels)
    if channel.id == 0:
        session.add(channel)
        session.flush()
        channels.append(channel)
    else:
        for chan in channels:
            if chan.id == channel.id:
                chan.runtime = channel.runtime
                chan.modified = datetime.date.today()
                for item in channel.items:
                    if item.id == 0:
                        chan.items.append(item)
                session.merge(chan)

The item is inserted in the database, but it doesn't create the relation in channel_items. Besides, I get this error:
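Stepping back from the truncated traceback above, the merge() rule Michael states earlier in the thread (if the primary key is present and exists in the database, it updates; otherwise it inserts) can be mimicked for a single table with stdlib sqlite3. This is only a sketch of the rule, not of merge() itself, and the channels table name is borrowed from the example:

```python
import sqlite3

def merge_row(conn, pk, title):
    """If pk already exists, update the row; otherwise insert it -
    the same decision merge() makes per object."""
    exists = conn.execute("SELECT 1 FROM channels WHERE id = ?", (pk,)).fetchone()
    if exists:
        conn.execute("UPDATE channels SET title = ? WHERE id = ?", (title, pk))
    else:
        conn.execute("INSERT INTO channels (id, title) VALUES (?, ?)", (pk, title))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE channels (id INTEGER PRIMARY KEY, title TEXT)")
merge_row(conn, 1, "news")    # no row with id 1 yet: inserts
merge_row(conn, 1, "sports")  # id 1 now exists: updates in place
print(conn.execute("SELECT title FROM channels WHERE id = 1").fetchone()[0])  # sports
```

What merge() adds beyond this sketch is the cascade along relationships and the fact that it returns a new session-attached copy, which is why the thread's fix is to use the return value rather than the detached original.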
Re: [sqlalchemy] on padded character fields again
On Sep 14, 2010, at 11:42 AM, Victor Olex wrote:

> We have discussed one aspect of this before and it was hugely helpful (http://groups.google.com/group/sqlalchemy/browse_thread/thread/965287c91b790b68/361e0a53d4100b5d?lnk=gst&q=padding#361e0a53d4100b5d). This time I wanted to ask not about the WHERE clause but about mapped object contents, where a field is of a padded type such as CHAR. Currently SQLAlchemy populates such fields consistently with what a raw SQL query would return for the database engine. In Oracle, that would be with padding. I would like to suggest, however, that this behavior be parametrized. The reason being that the same code operating on objects retrieved from a mapped database may behave differently depending on the underlying engine. For example, a field defined as follows:
>
> description = Column(u'desciption', CHAR(length=100), nullable=False)
>
> would return padded values when run on Oracle, but on SQLite it would be trimmed to the string length. This behavior led to having to duplicate a lot of unit tests (SQLite) into functional tests (Oracle) to avoid unpleasant surprises such as myobj.description == "some value" behaving differently in each environment. One of the most important features of ORMs is abstracting away the physical database store. Unless I missed something obvious, this could be room for improvement. By the way, the mapping was reverse engineered from an existing database. In a forward engineering scenario one would probably use the generic String type instead, which would map to VARCHAR, where the issue is non-existent.

Well, the first thing I'd note is that the CHAR type is not part of the ORM; it's part of the schema definition language. The schema definition and SQL expression languages attempt to strike a balance between backend-agnosticism and literal DBAPI/database behavior.
The other thing I'd ask is: have you looked at TypeDecorator (http://www.sqlalchemy.org/docs/core/types.html?highlight=typedecorator#sqlalchemy.types.TypeDecorator)? Is that adequate here, and if not, why not? A real world ORM application generally has a whole module dedicated to custom types that are tailored to the application's needs.
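The normalization a custom type would apply here is just a result-value hook, and the trimming itself is trivial. Shown below as a plain function, roughly the shape a TypeDecorator's process_result_value() implementation could take (the function name mirrors that hook; this is a sketch, not SQLAlchemy code):

```python
def process_result_value(value):
    """Strip the trailing spaces an Oracle CHAR(n) column pads with,
    so values compare equally across backends. Plain-function sketch
    of what a TypeDecorator result hook could do."""
    if value is None:
        return None
    return value.rstrip(" ")

# Oracle-style padded value vs. SQLite-style trimmed value
# normalize to the same string:
padded = "some value".ljust(100)   # what Oracle would hand back for CHAR(100)
trimmed = "some value"             # what SQLite hands back
assert process_result_value(padded) == process_result_value(trimmed)
```

Wrapping this in a TypeDecorator around CHAR, and using that custom type in the mapping, would make myobj.description behave identically in the SQLite unit tests and the Oracle functional tests, which is the portability Victor is after.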
Re: [sqlalchemy] Re: (Hopefully) simple problem with backrefs not being loaded when eagerloading.
Thank you for such a full elaboration. I still think the end result is something a little unintuitive (albeit only for those using detached objects, who will come across it), but I can't argue against your decision here. Based on the information you've given, keeping things the way they are is undoubtedly the decision that will benefit most users. I've worked with ORM internals before, and I know only too well that architectural decisions made for sound reasons can nevertheless make some things difficult which feel like they should be simple. As always, the best you can do is maximise the value to the most people. Perhaps just a note in the eagerload docs to say "...will not load backreferences eagerly. If you are using detached objects, try the join...contains_eager pattern." or words to that effect?

Thanks, Jon

On 14/09/10 16:27, Michael Bayer wrote:
> On Sep 14, 6:03 am, Jon Siddle j...@corefiling.co.uk wrote:
>> I would agree with all of this if I understood why a) it takes an appreciable number of function calls or b) automatic decision-making is necessary. I don't think there's any ambiguity here, but again, perhaps I'm missing something fundamental.
> There's a backref with an attribute handler that receives append events in userland, and emits a corresponding set event on the other side. The Collection attribute implementation could receive extra code that would cause it to do something similar to the backref handler when the collection is populated. This is what you are looking for here. The append of the collection ultimately occurs in mapper.py line 2298 in the current tip. The function calling it, a closure called _instance() that is generated inside of _instance_process(), is always the top hit on any profiling run against a load of objects. Removing just three function calls from this method is a win.
> Months of work have gone into the workings of this method so that our load speeds remain competitive with, and sometimes even faster than, other ORMs that do not use identity maps or units of work, ORMs that are now written almost entirely in C (I leave the identification of these products as an exercise for the reader). So the implementation here would probably replace the list being appended onto with one that is additionally instrumented to catch these events and populate in the other direction. We already have the GenericBackrefExtension which does this. GBE is designed to work in userland, not during a load but when you populate things manually. GBE is handy, but is a primary place for bugs to occur, as it is an event handler that issues more events. I just fixed one the other day involving some rare endless loop where the events could not decide when to stop back-populating in each direction. I was pretty amazed that there were still some of those cases left after all these years.
>
> So GBE as it is wouldn't work here; we'd need to write something just like GBE but tailored towards loads, when there are no events firing off. It would be simpler than GBE, maybe ten lines total all inlined. But it would be a little bit redundant (now we have two back populators! one for loads, one for userland events). If we OTOH tried to make this new reverse-populator and GBE somehow work on the same system, that's probably more complicated. At the very least it would add method overhead as GBE would now be delegating down a deeper chain. So this is already complexity that really would have to be worth it in order to dig into. We don't just now have two ways of backref population, we also have two different ways that Child.parent may be populated. It might be populated by the LoaderStrategy associated with Child.parent, or it might be populated by the one associated with Parent.children.
> If the query spans across both relationships, now both LoaderStrategy objects are there, and unless we do something about it, *both* will populate Child.parent - totally redundant effort, sloppy. Do we want to prevent that? Sure, now we need more messages and flags running around trying to prevent that case - the internals become that much more intricate. Why are they loading this attribute in two different ways when the first way is totally fine? Who wrote this crap? I know you don't believe me, but I've been writing this thing for five years - everything gets much more hairy than you'd like. Even the immediateloader patch I made you the other day immediately failed when I started testing it further. I had to hack into that same _instance_process() method I told you about and modify it so that it could run certain attribute loaders after all the others. I cannot recall ever adding any feature, however small and innocuous, that did not have some unforeseen side effects that required further testing, further checks and decisions that we didn't realize would be needed. So with the effort of adding new code and possibly refactoring how backref events work, all the new tests needed
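The mutual-event hazard Michael describes (events that "could not decide when to stop back-populating in each direction") is easy to reproduce in miniature: each side's setter fires the other side's, so a naive two-way populator needs a stop condition. A minimal plain-Python sketch of the pattern, not the GenericBackrefExtension itself:

```python
class Parent(object):
    def __init__(self):
        self._children = []

    def append_child(self, child):
        if child in self._children:
            return                   # already populated: stop the echo
        self._children.append(child)
        child.set_parent(self)       # fire the reverse "set" event

class Child(object):
    def __init__(self):
        self._parent = None

    def set_parent(self, parent):
        if self._parent is parent:
            return                   # already populated: stop the echo
        self._parent = parent
        parent.append_child(self)    # fire the reverse "append" event

p, c = Parent(), Child()
p.append_child(c)  # terminates; without the guard checks it would recurse forever
assert c._parent is p and p._children == [c]
```

The guards here are the trivial case; the bugs Michael refers to live in the rare situations where "already populated" is harder to decide, which is exactly why a second, load-time back-populator adds risk.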
Re: [sqlalchemy] Re: (Hopefully) simple problem with backrefs not being loaded when eagerloading.
On Sep 14, 2010, at 12:22 PM, Jon Siddle wrote:
> Thank you for such a full elaboration. [...] Perhaps just a note in the eagerload docs to say "...will not load backreferences eagerly. If you are using detached objects, try the join...contains_eager pattern." or words to that effect?

yeah definitely. I am psyched for the immediateload option; I think this option was not so easy to implement originally, but the current architecture allows it to go in nicely with very little effort.
Re: [sqlalchemy] internationalization of content
Hi Nil,

On 13/09/2010 23:37, NiL wrote:
> Hi all, I'm looking for a good solution to internationalize the content of my application, that is, provide many translations for the database content (as opposed to the translation of the application itself with babel/gettext for template and code messages). Has anyone tried to implement this? A working solution? Willing to participate in an effort to provide a solution?

Very interested in this too. Some time ago I looked into this as well. At the time I came across the following:

- a gettext-type solution implemented in SQL stored procedures, done by Karsten Hilbert for gnumed - http://cvs.savannah.gnu.org/viewvc/gnumed/gnumed/server/sql/gmI18N.sql?root=gnumed&view=log

I also had a go at it using SA's dynamic_loader and/or query-enabled properties; see the thread in January 2010 on this list with a subject of "dynamic_loader". I got some test code on this using Firebird SQL, but never really finalized anything as I got a bit side-tracked, and the code is probably pretty ugly as I am not that good a programmer.

Just lately I also saw the following, which sounded interesting, but it uses PostgreSQL, which is not an option for me at the moment:

- http://rwec.co.uk/blog/2009/11/atomic-translations-part-1/
- http://rwec.co.uk/blog/2009/12/atomic-translations-part-2/

Hope some of this is useful to you

Werner
[sqlalchemy] Use regexp in like
Is it possible to use a regexp in a like() clause? Or some other way to achieve something similar? Thanks, Michael
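No reply is recorded in this digest, but the distinction behind the question is worth spelling out: like() emits SQL LIKE, whose only wildcards are % (any run of characters) and _ (any single character); full regular expressions need backend-specific support instead. A LIKE pattern can, however, be expressed as a regex, as this stdlib-only sketch shows (illustrative, not SQLAlchemy API):

```python
import re

def like_to_regex(pattern):
    """Translate a SQL LIKE pattern (% = any run of characters,
    _ = any single character) into an anchored Python regex.
    Case-insensitive here, mirroring LIKE's common ASCII default;
    actual LIKE case sensitivity varies by backend."""
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return re.compile("^" + "".join(parts) + "$", re.IGNORECASE)

assert like_to_regex("mich%").match("Michael")       # % spans the rest
assert not like_to_regex("mich_").match("Michael")   # _ matches one char only
```

The reverse direction (regex inside LIKE) is not possible; only the two wildcards are understood by LIKE itself.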
[sqlalchemy] Re: internationalization of content
Hi Werner, many thanks for your rich reply. I'm going to try an elixir implementation for now. If you want, follow the thread of the same title on the elixir mailing list. I'll stay tuned to any sqla development. best NiL