Re: Improvements to better support implementing optimistic concurrency control
On Thu, Aug 11, 2011 at 12:06 PM, Steven Cummings wrote:
> On Thu, Aug 11, 2011 at 11:14 AM, Simon Riggs wrote:
>> IDEA 2
>> (1) SELECT data & version
>> (2) UPDATE data & test version & COMMIT immediately
>
> This is pretty much where I started when I initiated the ticket and thread
> and still am at. The other discussion here is what happens when the test
> fails (i.e., transaction control).

Also, I was hesitant that my "version" example really covered everybody's
needs, which is why I was also leaning towards generalizing to gaining some
access to rows modified.

--
You received this message because you are subscribed to the Google Groups "Django developers" group.
To post to this group, send email to django-developers@googlegroups.com.
To unsubscribe from this group, send email to django-developers+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/django-developers?hl=en.
Re: Improvements to better support implementing optimistic concurrency control
On Thu, Aug 11, 2011 at 11:14 AM, Simon Riggs wrote:
> IDEA 2
> (1) SELECT data & version
> (2) UPDATE data & test version & COMMIT immediately

This is pretty much where I started when I initiated the ticket and thread
and still am at. The other discussion here is what happens when the test
fails (i.e., transaction control).

> As I show above, the proposed patch doesn't solve the problem, I am
> sorry to say.

It appears I need to go ahead and get started on my patch which would
pretty much be idea 2.

> Yes, not updating atomically is a problem. Doing the updates
> atomically could also create a deadlock risk. It's a hard problem.

Agreed.

> The very best way to do this from the database perspective is to use
> the SQL Standard RETURNING clause, which allows you to return the data
> after the update
>
> IDEA5
> UPDATE foo
> SET val = x
> WHERE pk = y
> RETURNING data
>
> but I can see that causing a few discussion points, so I didn't
> mention it previously.

This I'm not familiar with, so I'll need to look at that.
Re: Improvements to better support implementing optimistic concurrency control
On Wed, Aug 10, 2011 at 3:08 PM, Anssi Kääriäinen wrote:
> On 08/10/2011 03:18 PM, Simon Riggs wrote:
>> That adds additional SELECT statements, which then extends a lock
>> window between the check statement and the changes. It works, but in
>> doing so it changes an optimistic lock into a pessimistic lock.
>
> True. The issue I am trying to solve is: guard against concurrent
> modifications to some object without taking a lock when the edit page is
> loaded. I did some Googling and I see this is not what is meant by
> optimistic locking. Sorry for that.

The problem we're all trying to solve is that we don't want locks to be
held across multiple SQL statements. If we do this

IDEA 1
(0) BEGIN
(1) SELECT ... FOR UPDATE
(2) UPDATE
(3) COMMIT

then the lock is taken at (1) and held all the way until commit at (3).
Avoiding that is desirable, and is known as optimistic locking.

IDEA 2
(1) SELECT data & version
(2) UPDATE data & test version & COMMIT immediately

Here we don't take the write lock until (2) and so the lock window is much
shorter. If we do this using an additional SQL statement like this

IDEA 3
(1) SELECT data & version
(2) SELECT data & test version
(3) UPDATE data & COMMIT immediately

then we add in an extra SQL statement, though now we have a race condition
between steps (2) and (3), so this should be avoided in favour of

IDEA 4
(1) SELECT data & version
(2) SELECT data & test version & FOR UPDATE
(3) UPDATE data
(4) COMMIT

which avoids the race condition inherent in IDEA 3. But IDEA 4 now has
exactly the same problem as IDEA 1 had in the first place, so we haven't
solved the problem. So IDEA 2 is the only one that avoids holding long
locks and avoids race conditions.

> IMHO the right way to do this would be to add the OptimisticLockField
> check as an additional item on the WHERE clause of the UPDATE or
> DELETE. If that action returns 0 rows then we know that the lock check
> failed and we can handle that. This keeps the locking optimistic and
> doesn't add any additional SQL statements.
>
> e.g.
>
> UPDATE foo
> SET col = newvalue, optimistic_lock_field = optimistic_lock_field + 1
> WHERE pkcol = p_key
> AND optimistic_lock_field = p_version
>
> DELETE FROM foo
> WHERE pkcol = p_key
> AND optimistic_lock_field = p_version
>
> The problem with this is that I feel the checking should be done already
> in the data validation part, not when saving is already under way. But I
> guess that is use case specific. Doing the checking while saving will
> result in situations where half of the stuff is saved and the other half
> is not. The model.save() isn't atomic itself without a savepoint due to
> model inheritance. And if it is not atomic, then it is easy to do a
> half-update using the shell, for example.

As I show above, the proposed patch doesn't solve the problem, I am sorry
to say.

Yes, not updating atomically is a problem. Doing the updates atomically
could also create a deadlock risk. It's a hard problem.

> On the other hand, if a clean implementation is written for this,
> including it in Django would be really nice. This doesn't cost much in
> performance, and gives a nice guard against concurrent edits. BTW there
> is more discussion in the ticket #16549
> (https://code.djangoproject.com/ticket/16549).

The very best way to do this from the database perspective is to use the
SQL Standard RETURNING clause, which allows you to return the data after
the update

IDEA5
UPDATE foo
SET val = x
WHERE pk = y
RETURNING data

but I can see that causing a few discussion points, so I didn't mention it
previously.

--
Simon Riggs
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
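To make IDEA 2 concrete, here is a minimal, framework-free sketch of the version-guarded UPDATE using Python's sqlite3. The table and column names are invented for illustration; a Django implementation would presumably route this through QuerySet.update() instead of raw SQL.

```python
import sqlite3

def update_if_version(conn, pk, new_val, expected_version):
    """Compare-and-swap update (IDEA 2): succeeds only if the stored
    version still matches the one the caller read earlier."""
    cur = conn.execute(
        "UPDATE foo SET val = ?, version = version + 1 "
        "WHERE pk = ? AND version = ?",
        (new_val, pk, expected_version),
    )
    conn.commit()  # commit immediately, keeping the write-lock window short
    return cur.rowcount == 1  # 0 rows touched means a concurrent edit won

# Demo: two clients both read version 1; only the first update succeeds.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (pk INTEGER PRIMARY KEY, val TEXT, version INTEGER)")
conn.execute("INSERT INTO foo VALUES (1, 'old', 1)")

print(update_if_version(conn, 1, "edit A", 1))  # True: version matched
print(update_if_version(conn, 1, "edit B", 1))  # False: stale version
```

The key point is that the check and the write happen in one statement, so there is no window for another transaction to slip between them.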
Re: Improvements to better support implementing optimistic concurrency control
On 08/10/2011 03:18 PM, Simon Riggs wrote:
> That adds additional SELECT statements, which then extends a lock
> window between the check statement and the changes. It works, but in
> doing so it changes an optimistic lock into a pessimistic lock.

True. The issue I am trying to solve is: guard against concurrent
modifications to some object without taking a lock when the edit page is
loaded. I did some Googling and I see this is not what is meant by
optimistic locking. Sorry for that.

IMHO the right way to do this would be to add the OptimisticLockField
check as an additional item on the WHERE clause of the UPDATE or DELETE.
If that action returns 0 rows then we know that the lock check failed and
we can handle that. This keeps the locking optimistic and doesn't add any
additional SQL statements.

e.g.

UPDATE foo
SET col = newvalue, optimistic_lock_field = optimistic_lock_field + 1
WHERE pkcol = p_key
AND optimistic_lock_field = p_version

DELETE FROM foo
WHERE pkcol = p_key
AND optimistic_lock_field = p_version

The problem with this is that I feel the checking should be done already
in the data validation part, not when saving is already under way. But I
guess that is use case specific. Doing the checking while saving will
result in situations where half of the stuff is saved and the other half
is not. The model.save() isn't atomic itself without a savepoint due to
model inheritance. And if it is not atomic, then it is easy to do a
half-update using the shell, for example.

On the other hand, if a clean implementation is written for this,
including it in Django would be really nice. This doesn't cost much in
performance, and gives a nice guard against concurrent edits. BTW there is
more discussion in the ticket #16549
(https://code.djangoproject.com/ticket/16549).

- Anssi
Re: Improvements to better support implementing optimistic concurrency control
On Tue, Aug 9, 2011 at 1:33 PM, akaariai wrote:
> On Aug 9, 1:17 am, Steven Cummings wrote:
>> I don't think we're talking about new or specific fields as part of the
>> base implementation here. Just enhanced behavior around updates to:
>>
>> 1) Provide more information about the actual rows modified
>> 2) Check preconditions with the actual DB stored values; and
>> 3) Avoid firing post-update/delete signals if nothing was changed
>>
>> From there you could implement fields as you see fit for your app, e.g.,
>> version=IntegerField() that you use in a precondition.
>
> That would be useful. Especially if that can be done without too much
> code duplication.
>
> I had another idea for optimistic locking: why not use the pre_save
> signal for this? There is a proof of concept of how to do this in
> https://github.com/akaariai/django_optimistic_lock
>
> The idea is basically that if you add an OptimisticLockField to your
> model, the pre_save (and pre_delete) signal will check that there have
> been no concurrent modifications. That's it.
>
> The code is really quickly written and downright ugly. It is a proof
> of concept and nothing more. I have tested it quickly using PostgreSQL
> and it seems to work for simple usage. However, it will probably eat
> your data.

That adds additional SELECT statements, which then extends a lock window
between the check statement and the changes. It works, but in doing so it
changes an optimistic lock into a pessimistic lock.

IMHO the right way to do this would be to add the OptimisticLockField
check as an additional item on the WHERE clause of the UPDATE or DELETE.
If that action returns 0 rows then we know that the lock check failed and
we can handle that. This keeps the locking optimistic and doesn't add any
additional SQL statements.

e.g.

UPDATE foo
SET col = newvalue, optimistic_lock_field = optimistic_lock_field + 1
WHERE pkcol = p_key
AND optimistic_lock_field = p_version

DELETE FROM foo
WHERE pkcol = p_key
AND optimistic_lock_field = p_version

--
Simon Riggs
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: Improvements to better support implementing optimistic concurrency control
On Aug 9, 1:17 am, Steven Cummings wrote:
> I don't think we're talking about new or specific fields as part of the
> base implementation here. Just enhanced behavior around updates to:
>
> 1) Provide more information about the actual rows modified
> 2) Check preconditions with the actual DB stored values; and
> 3) Avoid firing post-update/delete signals if nothing was changed
>
> From there you could implement fields as you see fit for your app, e.g.,
> version=IntegerField() that you use in a precondition.

That would be useful. Especially if that can be done without too much
code duplication.

I had another idea for optimistic locking: why not use the pre_save
signal for this? There is a proof of concept of how to do this in
https://github.com/akaariai/django_optimistic_lock

The idea is basically that if you add an OptimisticLockField to your
model, the pre_save (and pre_delete) signal will check that there have
been no concurrent modifications. That's it.

The code is really quickly written and downright ugly. It is a proof
of concept and nothing more. I have tested it quickly using PostgreSQL
and it seems to work for simple usage. However, it will probably eat
your data.

- Anssi
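The linked proof of concept is not reproduced here; below is a rough, framework-free sketch of the SELECT-then-compare check such a signal handler could perform, with all table and function names invented. Note the gap between this check and any later UPDATE: as Simon points out in his reply, that gap is a race window unless the row is locked, which is what makes the signal-based approach pessimistic in practice.

```python
import sqlite3

class ConcurrentModification(Exception):
    pass

def pre_save_check(conn, pk, expected_version):
    """SELECT-then-compare guard, roughly what a pre_save handler could
    do. NOTE: the time between this check and the actual UPDATE is a
    race window unless the row is locked (SELECT ... FOR UPDATE)."""
    row = conn.execute(
        "SELECT version FROM foo WHERE pk = ?", (pk,)
    ).fetchone()
    if row is None or row[0] != expected_version:
        raise ConcurrentModification(
            f"expected version {expected_version}, found {row and row[0]}"
        )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (pk INTEGER PRIMARY KEY, val TEXT, version INTEGER)")
conn.execute("INSERT INTO foo VALUES (1, 'old', 3)")

pre_save_check(conn, 1, 3)      # passes: stored version matches
try:
    pre_save_check(conn, 1, 2)  # a stale caller is rejected
except ConcurrentModification as e:
    print("rejected:", e)
```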
Re: Improvements to better support implementing optimistic concurrency control
I don't think we're talking about new or specific fields as part of the
base implementation here. Just enhanced behavior around updates to:

1) Provide more information about the actual rows modified
2) Check preconditions with the actual DB stored values; and
3) Avoid firing post-update/delete signals if nothing was changed

From there you could implement fields as you see fit for your app, e.g.,
version=IntegerField() that you use in a precondition.

--
Steven

On Mon, Aug 8, 2011 at 3:57 PM, akaariai wrote:
> On Aug 8, 6:30 pm, Steven Cummings wrote:
>> For backward compatibility, there may be a Model sub-class that would
>> leave Model alone altogether (this was suggested on the ticket). This
>> seems fair since many seem to be getting by without better optimistic
>> concurrency control from Django's ORM today.
>
> Would the subclass-based method automatically append a field into the
> model, or would there be a need to also create the field used for
> version control? How does the subclass know which field to use?
>
> Yet another option is models.OptimisticLockField(). If there is one
> present in the model, and a save will result in an update, the save
> method will check for conflicts and set the version to version + 1 if
> there are no conflicts. There is some precedent for a somewhat similar
> field, the AutoField(). AutoField also changes how save behaves.
>
> I wonder what to do if the save does not result in an update and the
> version is set to something other than 1. This could happen if another
> user deleted the model and you are now saving it. This would result in
> a reinsert. Should this also be an error?
>
> - Anssi
Re: Improvements to better support implementing optimistic concurrency control
On Aug 8, 6:30 pm, Steven Cummings wrote:
> For backward compatibility, there may be a Model sub-class that would
> leave Model alone altogether (this was suggested on the ticket). This
> seems fair since many seem to be getting by without better optimistic
> concurrency control from Django's ORM today.

Would the subclass-based method automatically append a field into the
model, or would there be a need to also create the field used for
version control? How does the subclass know which field to use?

Yet another option is models.OptimisticLockField(). If there is one
present in the model, and a save will result in an update, the save
method will check for conflicts and set the version to version + 1 if
there are no conflicts. There is some precedent for a somewhat similar
field, the AutoField(). AutoField also changes how save behaves.

I wonder what to do if the save does not result in an update and the
version is set to something other than 1. This could happen if another
user deleted the model and you are now saving it. This would result in
a reinsert. Should this also be an error?

- Anssi
Re: Improvements to better support implementing optimistic concurrency control
For backward compatibility, there may be a Model sub-class that would
leave Model alone altogether (this was suggested on the ticket). This
seems fair since many seem to be getting by without better optimistic
concurrency control from Django's ORM today.

--
Steven

On Mon, Aug 8, 2011 at 9:40 AM, akaariai wrote:
> On Aug 8, 4:54 pm, Steven Cummings wrote:
>> Interesting feature I hadn't noticed in memcached. That does seem like
>> it would do the trick where memcached is being used. I think the
>> ability to control it in Django would generally still be desirable
>> though, as that is where the data ultimately lives and I'd be hesitant
>> to assume to control the DB's concurrency from memcached. Ideally it
>> should be the other way around.
>
> I assume the memcached implementation would be a version value stored
> in memcached. Can you really trust that memcached keeps the version
> value and doesn't discard it at will when it has been unused long
> enough?
>
> There are a couple of other things in model saving which could be
> better handled. If composite primary keys are included in Django, one
> would need the ability to update the primary key. If you have a model
> with a (first_name, last_name) primary key, and you change the
> first_name and save, the current implementation (and definition) of
> model save() would insert a new instance into the DB instead of doing
> an update. Another thing that could be handled better is updating just
> the changed fields.
>
> I wonder how to implement these things with backwards compatibility.
> Maybe a method update(condition=None, only_fields=None) which returns
> True if something was actually updated (or raises an exception if
> nothing was updated). The method would use the old pk and the
> condition (if given) in the where clause. If only_fields=None, then it
> would only update the changed fields... Seems ugly, but I can't think
> of anything better.
>
> - Anssi
Re: Improvements to better support implementing optimistic concurrency control
On Aug 8, 4:54 pm, Steven Cummings wrote:
> Interesting feature I hadn't noticed in memcached. That does seem like
> it would do the trick where memcached is being used. I think the
> ability to control it in Django would generally still be desirable
> though, as that is where the data ultimately lives and I'd be hesitant
> to assume to control the DB's concurrency from memcached. Ideally it
> should be the other way around.

I assume the memcached implementation would be a version value stored
in memcached. Can you really trust that memcached keeps the version
value and doesn't discard it at will when it has been unused long
enough?

There are a couple of other things in model saving which could be
better handled. If composite primary keys are included in Django, one
would need the ability to update the primary key. If you have a model
with a (first_name, last_name) primary key, and you change the
first_name and save, the current implementation (and definition) of
model save() would insert a new instance into the DB instead of doing
an update. Another thing that could be handled better is updating just
the changed fields.

I wonder how to implement these things with backwards compatibility.
Maybe a method update(condition=None, only_fields=None) which returns
True if something was actually updated (or raises an exception if
nothing was updated). The method would use the old pk and the
condition (if given) in the where clause. If only_fields=None, then it
would only update the changed fields... Seems ugly, but I can't think
of anything better.

- Anssi
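One way the proposed update(condition=None, only_fields=None) method might behave, sketched framework-free over sqlite3. Only the signature and the returns-True/raises behaviour come from the message above; the table name, exception name, and everything else are invented for illustration.

```python
import sqlite3

class NothingUpdated(Exception):
    pass

def update(conn, table, pk, changed_fields, condition=None):
    """Sketch of the proposed update(): write only the changed fields,
    AND an optional condition dict into the WHERE clause, and report
    whether any row was actually hit."""
    sets = ", ".join(f"{col} = ?" for col in changed_fields)
    where = "pk = ?"
    params = list(changed_fields.values()) + [pk]
    for col, expected in (condition or {}).items():
        where += f" AND {col} = ?"
        params.append(expected)
    cur = conn.execute(f"UPDATE {table} SET {sets} WHERE {where}", params)
    conn.commit()
    if cur.rowcount == 0:
        raise NothingUpdated(f"no row matched pk={pk} with {condition}")
    return True

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (pk INTEGER PRIMARY KEY, val TEXT, version INTEGER)")
conn.execute("INSERT INTO foo VALUES (1, 'old', 1)")

update(conn, "foo", 1, {"val": "new", "version": 2}, condition={"version": 1})
print(conn.execute("SELECT val, version FROM foo").fetchone())  # ('new', 2)
```

Because the condition rides in the same UPDATE statement, this doubles as the optimistic-lock check discussed later in the thread: a stale caller simply matches zero rows.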
Re: [RFC] Improvements to better support implementing optimistic concurrency control
Interesting feature I hadn't noticed in memcached. That does seem like it
would do the trick where memcached is being used. I think the ability to
control it in Django would generally still be desirable though, as that is
where the data ultimately lives and I'd be hesitant to assume to control
the DB's concurrency from memcached. Ideally it should be the other way
around.

--
Steven

On Sun, Aug 7, 2011 at 8:00 PM, Peter Portante wrote:
> Essentially, you want a compare-and-swap instruction for a database?
>
> Have you considered using memcached atomicity (add and cas) to handle
> this kind of thing? It might get pretty elaborate, but with just a
> cursory thought it seems doable.
>
> -peter
>
> On Sat, Aug 6, 2011 at 9:59 PM, Steven Cummings wrote:
>> Bad community member that I am, I jumped the gun and logged a ticket
>> [1] on this one... anyway:
>>
>> Websites generally avoid overly aggressive locking of data to deal
>> with concurrency issues. When two users allowed to edit some data are
>> simultaneously doing so, they're both allowed to do so and the last
>> .save() wins (race condition). This is largely acceptable on the web:
>> when two people are allowed to edit data, they shouldn't be prevented
>> from doing so. Further, sites often have audit history (whether or not
>> exposed to those users owning/managing the data) which tracks exactly
>> what went on.
>>
>> However, some sites may wish to detect this situation and present the
>> 2nd user with a page or the previous form stating that what they
>> edited is now obsolete. Examples of situations where we don't want
>> edits to be silently stomped are health and financial data. Bugzilla
>> mid-air collisions are an example of an implementation in the wild.
>>
>> So, we could try to detect this by always re-querying the object just
>> before update (or delete), but with sufficient traffic there is still
>> the chance that two requests *think* they know the current state of it
>> in storage. This is where knowing rows modified, and/or having a
>> precondition check, would help. I've outlined the details on the
>> ticket, but generally:
>>
>> * To ensure the object I'm saving is not stale, I might like to do
>> Model.save_if(version=version), where I have an incremented version
>> field and save_if would compare the stored value (using something like
>> an F('version')) against the given value. If the version was
>> different, it should raise something like PreconditionFailed.
>>
>> * For delete()s, it might be nice to get rows-modified to know that
>> the current request really did perform the delete. This is important
>> in cases like using an OAuth auth code, where allowing multiple uses
>> (deletes) of it would be a security problem.
>>
>> For both these cases, it would also be useful to either get
>> rows-modified (or precondition-met) in the post-save/delete signal, or
>> to somehow avoid those signals firing when no data was updated.
>>
>> I'm fully willing to hack on this and provide patches and/or use an
>> experimental branch, just wanted to get some thoughts on it.
>>
>> [1] https://code.djangoproject.com/ticket/16549
Re: [RFC] Improvements to better support implementing optimistic concurrency control
Essentially, you want a compare-and-swap instruction for a database?

Have you considered using memcached atomicity (add and cas) to handle
this kind of thing? It might get pretty elaborate, but with just a
cursory thought it seems doable.

-peter

On Sat, Aug 6, 2011 at 9:59 PM, Steven Cummings wrote:
> Bad community member that I am, I jumped the gun and logged a ticket
> [1] on this one... anyway:
>
> Websites generally avoid overly aggressive locking of data to deal
> with concurrency issues. When two users allowed to edit some data are
> simultaneously doing so, they're both allowed to do so and the last
> .save() wins (race condition). This is largely acceptable on the web:
> when two people are allowed to edit data, they shouldn't be prevented
> from doing so. Further, sites often have audit history (whether or not
> exposed to those users owning/managing the data) which tracks exactly
> what went on.
>
> However, some sites may wish to detect this situation and present the
> 2nd user with a page or the previous form stating that what they
> edited is now obsolete. Examples of situations where we don't want
> edits to be silently stomped are health and financial data. Bugzilla
> mid-air collisions are an example of an implementation in the wild.
>
> So, we could try to detect this by always re-querying the object just
> before update (or delete), but with sufficient traffic there is still
> the chance that two requests *think* they know the current state of it
> in storage. This is where knowing rows modified, and/or having a
> precondition check, would help. I've outlined the details on the
> ticket, but generally:
>
> * To ensure the object I'm saving is not stale, I might like to do
> Model.save_if(version=version), where I have an incremented version
> field and save_if would compare the stored value (using something like
> an F('version')) against the given value. If the version was
> different, it should raise something like PreconditionFailed.
>
> * For delete()s, it might be nice to get rows-modified to know that
> the current request really did perform the delete. This is important
> in cases like using an OAuth auth code, where allowing multiple uses
> (deletes) of it would be a security problem.
>
> For both these cases, it would also be useful to either get
> rows-modified (or precondition-met) in the post-save/delete signal, or
> to somehow avoid those signals firing when no data was updated.
>
> I'm fully willing to hack on this and provide patches and/or use an
> experimental branch, just wanted to get some thoughts on it.
>
> [1] https://code.djangoproject.com/ticket/16549
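For reference, the add/cas protocol Peter mentions, mimicked with an in-memory stand-in rather than a real memcached client (real clients such as python-memcached expose gets()/cas() along these lines; the class below is purely illustrative):

```python
class FakeMemcached:
    """In-memory stand-in illustrating memcached's check-and-set protocol:
    gets() returns the value plus an opaque cas token; cas() only writes
    if the token still matches, i.e. nobody wrote in between."""
    def __init__(self):
        self._data = {}   # key -> (value, cas_token)
        self._tick = 0

    def add(self, key, value):          # store only if key is absent
        if key in self._data:
            return False
        self._tick += 1
        self._data[key] = (value, self._tick)
        return True

    def gets(self, key):                # value plus its cas token
        return self._data.get(key, (None, None))

    def cas(self, key, value, token):   # write only if token unchanged
        if key not in self._data or self._data[key][1] != token:
            return False
        self._tick += 1
        self._data[key] = (value, self._tick)
        return True

mc = FakeMemcached()
mc.add("obj:1:version", 1)
value, token = mc.gets("obj:1:version")
print(mc.cas("obj:1:version", value + 1, token))  # True: nobody interfered
print(mc.cas("obj:1:version", value + 2, token))  # False: token is stale
```

This is the same compare-and-swap shape as the version-in-WHERE-clause approach, only with the version held in the cache instead of the database, which is exactly Anssi's durability objection upthread.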
[RFC] Improvements to better support implementing optimistic concurrency control
Bad community member that I am, I jumped the gun and logged a ticket
[1] on this one... anyway:

Websites generally avoid overly aggressive locking of data to deal with
concurrency issues. When two users allowed to edit some data are
simultaneously doing so, they're both allowed to do so and the last
.save() wins (race condition). This is largely acceptable on the web:
when two people are allowed to edit data, they shouldn't be prevented
from doing so. Further, sites often have audit history (whether or not
exposed to those users owning/managing the data) which tracks exactly
what went on.

However, some sites may wish to detect this situation and present the
2nd user with a page or the previous form stating that what they edited
is now obsolete. Examples of situations where we don't want edits to be
silently stomped are health and financial data. Bugzilla mid-air
collisions are an example of an implementation in the wild.

So, we could try to detect this by always re-querying the object just
before update (or delete), but with sufficient traffic there is still
the chance that two requests *think* they know the current state of it
in storage. This is where knowing rows modified, and/or having a
precondition check, would help. I've outlined the details on the ticket,
but generally:

* To ensure the object I'm saving is not stale, I might like to do
Model.save_if(version=version), where I have an incremented version
field and save_if would compare the stored value (using something like
an F('version')) against the given value. If the version was different,
it should raise something like PreconditionFailed.

* For delete()s, it might be nice to get rows-modified to know that the
current request really did perform the delete. This is important in
cases like using an OAuth auth code, where allowing multiple uses
(deletes) of it would be a security problem.

For both these cases, it would also be useful to either get
rows-modified (or precondition-met) in the post-save/delete signal, or
to somehow avoid those signals firing when no data was updated.

I'm fully willing to hack on this and provide patches and/or use an
experimental branch, just wanted to get some thoughts on it.

[1] https://code.djangoproject.com/ticket/16549
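The single-use delete case reduces to checking rows-affected on the DELETE itself: whichever request's DELETE actually removes the row wins, and every other concurrent attempt sees zero rows affected. A minimal framework-free sketch with Python's sqlite3 (the schema and names are invented for illustration):

```python
import sqlite3

def consume_auth_code(conn, code):
    """Atomically consume a single-use code: the request whose DELETE
    removes the row (rowcount == 1) wins; any concurrent duplicate
    sees rowcount == 0 and must be rejected."""
    cur = conn.execute("DELETE FROM auth_codes WHERE code = ?", (code,))
    conn.commit()
    return cur.rowcount == 1

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auth_codes (code TEXT PRIMARY KEY)")
conn.execute("INSERT INTO auth_codes VALUES ('abc123')")

print(consume_auth_code(conn, "abc123"))  # True: this request did the delete
print(consume_auth_code(conn, "abc123"))  # False: already used
```

Exposing that rows-affected count from delete() (and suppressing signals when it is zero) is essentially what the ticket asks for.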