Hi John,

> Are all Element "trees" in the same domain?
Yes they are.

> If so then you could make Domain implement Element and use it as the
> root of an element hierarchy.

This would completely disregard my business model. A Domain is not an
Element, nor vice versa. It's analogous to making both Elephant and Corn
implement the same interface because both have ears. And as you say, "this
limits the concurrency of changes to Elements to a few writes per second
per Domain", which would be very bad in an enterprise system.

> Do you update more than one element in a transaction? If not then an
> unowned relationship between entities would be fine.

Yes, I do. A parent Element knows the order of its children and must be
updated accordingly. Even were that not the case, I would still want
operations to be transactional, because without a transaction, optimistic
locking will not function.
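To make that concrete, here is roughly what one of my destructive operations
boils down to, sketched with the low-level datastore API (a sketch only -
the kind name, the "childOrder" property and the helper are illustrative,
not my actual code):

import java.util.ArrayList;
import java.util.List;

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.Transaction;

public class ElementOps {

    // Adds a child Element under a parent and records its position on the
    // parent. Two entities change, so it must be one atomic unit - and both
    // must live in the same entity group for the datastore to allow it.
    public static void addChild(Key parentKey, Entity child)
            throws EntityNotFoundException {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        Transaction txn = ds.beginTransaction();
        try {
            Entity parent = ds.get(txn, parentKey);

            // put() returns the completed key, even inside the transaction.
            Key childKey = ds.put(txn, child);

            // The parent owns the ordering of its children, so adding a
            // child always means writing the parent as well.
            @SuppressWarnings("unchecked")
            List<Key> childOrder = (List<Key>) parent.getProperty("childOrder");
            if (childOrder == null) {
                childOrder = new ArrayList<Key>();
            }
            childOrder.add(childKey);
            parent.setProperty("childOrder", childOrder);
            ds.put(txn, parent);

            // If anything in the entity group changed since
            // beginTransaction(), this throws ConcurrentModificationException
            // - the optimistic locking I lose the moment I drop the
            // transaction.
            txn.commit();
        } finally {
            if (txn.isActive()) {
                txn.rollback();
            }
        }
    }
}

The child has to be constructed with the parent in its key path (e.g. new
Entity("Element", parentKey)); otherwise the two puts would span two entity
groups and the transaction would be rejected.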
> Are domains added and removed often? If not then is it sufficient to
> check that the domain exists before working on your elements outside
> of the transaction? Just never delete Domains - mark them as deleted
> instead.

No, they're not added and removed often, and as it happens the Domain does
have an "active" flag. Nonetheless, I think you're missing the point. If I
have to interleave a constraint check with the transactional work inside my
destructive operations, I am defeating the principle of separation of
concerns. I already have to manually verify "foreign key" relationships,
which blends concerns badly enough!

A succinct description of a transaction is that it makes different
operations involving multiple resources atomic, at the specified isolation
level. If you can only touch one resource, or one small group of very
tightly coupled resources, then transactions give you almost no benefit. At
this point the only thing I can trust the transaction layer to do is provide
an optimistic locking version. Highly unsatisfactory, this is!

> Even load operations on non-existent entities (e.g. Domain) are included
> in the transaction.

Yeah. Why on earth? You can read a record a billion times and it will not
"clobber" an update.

Let me put it this way: I'm not looking for a way to hack this into working.
I've already hacked it into working by adding an extra, very hacky layer to
my architecture. What I would like is some sort of explanation, and
hopefully some indication that at some point this is going to be fixed. I'd
even be interested in helping out with said fix. Google, can you provide me
with any answers?

On Jul 26, 10:04 am, John Patterson <[email protected]> wrote:
> On 26 Jul 2010, at 19:21, Bill wrote:
>
> > Good thought, but unfortunately Elements already have a parent key in
> > this manner. Elements are hierarchical, so that an Element's parent is
> > another Element.
>
> > Can an entity have more than one parent? I haven't tried that...
>
> An Entity can only have one parent, as it is encoded into its key as a
> path of ancestors leading back to the root entity.
>
> Are all Element "trees" in the same domain? If so then you could make
> Domain implement Element and use it as the root of an element hierarchy.
> But this limits the concurrency of changes to Elements to a few writes
> per second per Domain.
>
> Do you update more than one element in a transaction? If not then an
> unowned relationship between entities would be fine.
>
> Are domains added and removed often? If not then is it sufficient to
> check that the domain exists before working on your elements outside of
> the transaction? Just never delete Domains - mark them as deleted
> instead.
>
> Your original problem "Cannot operate on more than one...." was caused
> by trying to load an element in a different entity group. Even load
> operations on non-existent entities (e.g. Domain) are included in the
> transaction. That is how you can guarantee you don't clobber existing
> data when you do a put(...) in a transaction.
>
> You can think of starting a transaction as creating a timestamp on the
> root entity - any changes to any element in the entire entity group
> after that time and before the commit will cause a concurrency
> exception. I think of transactions on non-existing roots as making a
> kind of temporary place-holder root to which the timestamp is added.
> See this for more details:
>
> http://www.youtube.com/watch?v=tx5gdoNpcZM
>
> If none of those workarounds is sufficient then perhaps you need
> distributed transactions. The Slim3 project implements distributed
> transactions using the task queue, and there was a project with the aim
> of making a generic solution:
>
> http://www.youtube.com/watch?v=IOtBlzsx0m8
>
> Not sure what happened to that though.
>
> It is possible that you need a full RDBMS - in which case you will
> either need to patiently wait for the release of the App Engine one
> recently announced or run your own on a hosted server. Just don't
> expect the same scalability as you get from the datastore.
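P.S. For anyone else who finds this thread via the "Cannot operate on more
than one..." error John mentions above, this is the shape of code that
triggers it (again only a sketch; the kinds, key names and property names
are made up):

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;
import com.google.appengine.api.datastore.Transaction;

public class CrossGroupExample {

    public static void updateElement() throws EntityNotFoundException {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();

        // Root of an Element tree (one entity group) and a Domain, which
        // is the root of a completely separate group.
        Key elementRoot = KeyFactory.createKey("Element", "root-element");
        Key domainKey = KeyFactory.createKey("Domain", "my-domain");

        Transaction txn = ds.beginTransaction();
        try {
            Entity element = ds.get(txn, elementRoot);   // group #1 - fine

            // Even a read-only "is the Domain still active?" check counts
            // as operating on a second entity group, so this get() throws
            // an exception complaining about operating on more than one
            // entity group instead of returning the Domain.
            Entity domain = ds.get(txn, domainKey);      // group #2 - boom

            if (Boolean.TRUE.equals(domain.getProperty("active"))) {
                element.setProperty("lastTouched", System.currentTimeMillis());
                ds.put(txn, element);
            }
            txn.commit();
        } finally {
            if (txn.isActive()) {
                txn.rollback();
            }
        }
    }
}

The workaround is to do the Domain check with a separate, non-transactional
get() before beginTransaction() - which is exactly the interleaving of
concerns I'm complaining about above.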
