Re: Django, initial data and custom SQL
On Feb 12, 2009, at 07:48, Russell Keith-Magee wrote:

> On Thu, Feb 12, 2009 at 2:54 PM, Ludvig Ericson wrote:
>>
>> I fail to see how "it has consequences for existing code", as Russell
>> put it.
>
> It has consequences because you are proposing to change the order in
> which indexes and custom SQL are applied. Any code that depends on the
> existing order will be affected.

Just want to note that *I'm* not proposing any changes.

>> I did discuss this with Bergström, and we came to the conclusion that
>> it won't actually break much code, if any.
>
> If it has the potential to break _any_ code, it is an unacceptable
> change. The Django core developers have stated that we will maintain
> backwards compatibility of the v1.0 interface, so any change with even
> the _potential_ to affect backwards compatibility will need to be
> checked very carefully.
>
> However, the larger point is that you don't get to make that decision
> about whether a change is acceptable or not. You can discuss a change
> and make a recommendation, but ultimately the decision needs to be
> made by a core developer. The Design Decision Required triage state
> exists for exactly this reason.

I don't think it's "the larger point." I misunderstood the triaging
system, and incorrectly changed it. Bennett reverted. I apologized. End
of that.

What I meant by "ready for checking" was that the patch applies cleanly,
runs, and does what it is intended to do. And obviously that was a
mistake, and again: I realize this now.

>> The only case which it could break, AFAICT, is if custom SQL manages
>> to depend on the absence of indexes. I guess that could break code
>> that violates indexing constraints, which are applied later, maybe? I
>> don't know.
>
> This is what needs to be confirmed, and given that it is a non-trivial
> change, your decision needs to be confirmed by a core developer. I am
> willing to be convinced, but you will need to prove to me that there
> is no backwards compatibility problem here. The discussion in this
> thread so far hasn't done that.

I realize that my opinions and decisions play a very small role in the
development of Django, but I try to express them so that core developers
may read them, and either think "what a retard", or "fair point."

I guess the question really is: is this custom SQL execution order
change really the right fix for the issue? I can't say I'm convinced
that it is, given your next paragraph. I would suggest the need for
signaling that a custom SQL file should execute post-indices. One
solution, which is entirely backwards-compatible, would be to say "if
you want your custom SQL to run after index creation, name your files
'*.post.sql'," or something like that. That has its own issues, but you
get the general idea.

> To answer the original question (why is it done in this order) -
> mostly historical reasons. Originally, Django didn't have fixtures, so
> initial data were all loaded from initial SQL. It's generally faster
> to insert data and then add indices, hence the order. When Django
> added fixtures, the order wasn't changed.

I see, so the custom SQL machinery grew out of being a crude version of
fixtures then. I think the cause of confusion here is that Bergström and
I both were expecting it to be for more than loading data -- I want to
create a view in it (hence my interest), and he wants to drop some
excessive indices.

After pondering some more, I realized the docs actually say "initial SQL
data", and so I'm not sure if this change is actually a good idea or
not.

- Ludvig

You received this message because you are subscribed to the Google
Groups "Django developers" group. To post to this group, send email to
django-developers@googlegroups.com. To unsubscribe from this group, send
email to django-developers+unsubscr...@googlegroups.com. For more
options, visit this group at
http://groups.google.com/group/django-developers?hl=en
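The "*.post.sql" naming idea from the message above could be sketched as a simple partition of an app's custom SQL files by filename suffix. Both the suffix convention and this helper are hypothetical -- nothing like this exists in Django; it only illustrates how the pre-index/post-index split might be signaled:

```python
# Hypothetical sketch of the "*.post.sql" proposal: split custom SQL
# files into a batch run before index creation (current behaviour) and
# a batch run after. Not real Django API.

def split_custom_sql(filenames):
    """Return (pre_index, post_index) lists of custom SQL file names."""
    pre, post = [], []
    for name in filenames:
        if name.endswith(".post.sql"):
            post.append(name)   # would run after index creation
        else:
            pre.append(name)    # current behaviour: runs before indexes
    return pre, post

print(split_custom_sql(["message.sql", "message.post.sql"]))
# (['message.sql'], ['message.post.sql'])
```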
Re: Django, initial data and custom SQL
On Thu, Feb 12, 2009 at 2:54 PM, Ludvig Ericson wrote:
>
> Feb 11, Johan Bergström:
>> I took the liberty of creating a ticket with attached patch at:
>> http://code.djangoproject.com/ticket/10236
>
> I fail to see how "it has consequences for existing code", as Russell
> put it.

It has consequences because you are proposing to change the order in
which indexes and custom SQL are applied. Any code that depends on the
existing order will be affected.

> I did discuss this with Bergström, and we came to the conclusion that
> it won't actually break much code, if any.

If it has the potential to break _any_ code, it is an unacceptable
change. The Django core developers have stated that we will maintain
backwards compatibility of the v1.0 interface, so any change with even
the _potential_ to affect backwards compatibility will need to be
checked very carefully.

However, the larger point is that you don't get to make that decision
about whether a change is acceptable or not. You can discuss a change
and make a recommendation, but ultimately the decision needs to be made
by a core developer. The Design Decision Required triage state exists
for exactly this reason.

> The only case which it could break, AFAICT, is if custom SQL manages
> to depend on the absence of indexes. I guess that could break code
> that violates indexing constraints, which are applied later, maybe? I
> don't know.

This is what needs to be confirmed, and given that it is a non-trivial
change, your decision needs to be confirmed by a core developer. I am
willing to be convinced, but you will need to prove to me that there is
no backwards compatibility problem here. The discussion in this thread
so far hasn't done that.

To answer the original question (why is it done in this order) - mostly
historical reasons. Originally, Django didn't have fixtures, so initial
data were all loaded from initial SQL. It's generally faster to insert
data and then add indices, hence the order. When Django added fixtures,
the order wasn't changed.

Yours,
Russ Magee %-)
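Russell's "insert first, index afterwards" ordering can be seen in miniature with the stdlib sqlite3 module. This is a toy illustration of the sequence, not Django's actual syncdb code (and the speed benefit only shows up on real data volumes):

```python
import sqlite3

# Create the table, bulk-insert the data, and only then build the index
# -- the ordering Russell describes as generally faster.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO message (body) VALUES (?)",
                 [("a",), ("b",), ("c",)])
conn.execute("CREATE INDEX message_body ON message (body)")  # index last
print(conn.execute("SELECT COUNT(*) FROM message").fetchone()[0])  # 3
```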
Re: Django, initial data and custom SQL
Feb 11, Johan Bergström:
> I took the liberty of creating a ticket with attached patch at:
> http://code.djangoproject.com/ticket/10236

I fail to see how "it has consequences for existing code", as Russell
put it. I did discuss this with Bergström, and we came to the conclusion
that it won't actually break much code, if any.

The only case which it could break, AFAICT, is if custom SQL manages to
depend on the absence of indexes. I guess that could break code that
violates indexing constraints, which are applied later, maybe? I don't
know.

- Ludvig
Re: #5903 DecimalField returns default value as unicode string
On Wed, 2009-02-11 at 22:50 +0900, Russell Keith-Magee wrote:
[...]
> However, in this case, I'm reasonably convinced it is the right thing
> to do. The list of 'ignored' types for force_unicode is essentially
> the list of data types that we use for native data representations. I
> can't think of any reason that Decimals shouldn't be on that list.
>
> If you're looking for a little more background on exactly what
> force_unicode "should" do, here's a discussion from way back, when we
> expanded the non-string types to include dates, times, etc:
>
> http://groups.google.com/group/django-developers/browse_thread/thread/c74e881a5f0dc8a6
>
> My reading of that (then, as now), is that
> force_unicode(strings_only=True) exists to catch string proxy objects,
> not non-string objects; adding Decimal to the list of non-string
> objects should be fine.

Yes, that's the idea. I agree that allowing Decimal objects to pass
through natively should be fine.

/me makes a note to add that summary to internal documentation in that
future when I have time to write some.

Regards,
Malcolm
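The strings_only behaviour being agreed on here can be sketched in a few lines. This is only an illustration of the check -- the real force_unicode lives in django.utils.encoding, and its exact tuple of pass-through types differs from the one assumed below:

```python
import datetime
from decimal import Decimal

# Illustrative list of "native" types that pass through untouched when
# strings_only=True; the real list in django.utils.encoding may differ.
NATIVE_TYPES = (int, float, Decimal,
                datetime.datetime, datetime.date, datetime.time)

def force_unicode_sketch(value, strings_only=False):
    if strings_only and isinstance(value, NATIVE_TYPES):
        return value       # native data representation: leave it alone
    return str(value)      # anything else is coerced to text

print(repr(force_unicode_sketch(Decimal("1.5"), strings_only=True)))
# Decimal('1.5')
```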
Re: App Engine port
On Wed, 2009-02-11 at 09:08 -0800, Waldemar Kornewald wrote:
> Hi Malcolm,
>
> On 10 Feb., 05:25, Malcolm Tredinnick wrote:
>> I have a reasonably fleshed out plan to make things easier here in
>> the Django 1.2 timeframe. The rough idea is that everything under
>> django/db/models/sql/ could be replaced with a module of the
>> developer's choosing (probably set via a setting). That package is
>> the only place that really cares much about SQL.
>>
>> So somebody wanting a GAE backend or an Hadoop backend or something
>> else would write something that behaved the same way as the Query
>> class (and subclasses) and could be called by the QuerySet class
>> appropriately.
>
> Is the plan somewhere on the wiki?

No, because it's only something I'm pulling together slowly in my head.

> App Engine support requires at least that Model and probably a few
> other classes can be overridden (maybe partially). For example,
> save_base() makes a few queries to check if the row already exists,
> but this would have to be done differently on App Engine (hopefully
> with a transaction).

No. Backends shouldn't require piecemeal futzing around at that level --
the Model level should be reasonably abstracted from backend specifics.
We should, instead, fix the problem more generically by moving those
tests into the django/db/models/sql/ level in some fashion. That's one
of those "we'll do it when we need it" tasks. Since this is the first
time anybody's mentioned it, we now have a reason to look at it.

Regards,
Malcolm
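Malcolm's "module of the developer's choosing (probably set via a setting)" idea amounts to resolving a dotted path to a Query-workalike at startup. A minimal sketch, assuming a hypothetical QUERY_MODULE setting (no such setting exists, and nothing like this had shipped at the time of this thread):

```python
import importlib

def load_query_class(dotted_path):
    """Resolve e.g. 'myproject.gae_backend.Query' to the class object.

    The path 'myproject.gae_backend.Query' is a made-up example.
    """
    module_path, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

# A GAE or Hadoop backend would then ship a class behaving like Query
# and point the (hypothetical) setting at it:
#   QUERY_MODULE = "myproject.gae_backend.Query"
```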
Re: Contenttype Generation Inconsistency During Serialization
On Wed, Feb 11, 2009 at 4:59 PM, jameslon...@gmail.com wrote:
>
> This is a great solution; when I wrote this post I was sure no one had
> really run into the problem. I will use this for serializing my DB in
> the future. Though, the last paragraph of your reply states that
> content type doesn't define app_label and model as unique. I believe
> that this is true now, at least mine appears to have a unique
> constraint on it right away.

Ah yes, you're right. I was looking at the model definition, but they're
declared unique in the Meta unique_together.

> This is still a legitimate issue during serialization, it's great to
> see someone has made steps in the right direction.

Glad it's been helpful. I want to get this into a more generic solution,
and hopefully get part of it into django or a real third party app.

> On Feb 11, 3:45 pm, Eric Holscher wrote:
> > On Wed, Feb 11, 2009 at 1:48 PM, jameslon...@gmail.com wrote:
> >
> > > There is a small road block that makes contenttype a little
> > > dangerous to use during application development. Especially in
> > > regards to serializing your data to different databases. During
> > > syncdb the contenttypes are generated in a way that makes
> > > regeneration at a later date inconsistent with the previously
> > > generated primary keys.
> > >
> > > The contenttype IDs can be different depending on when your syncdb
> > > was run in the development of your application. In addition,
> > > loaddata and dumpdata are prevented from working correctly if the
> > > contenttypes have already been created (integrity errors).
> > >
> > > My use pattern for this is during application development I would
> > > architect all of my data models before adding any explicit
> > > indexes. After the models are complete and data is loaded I will
> > > analyze the use patterns and index accordingly. Since django has
> > > no way of syncing indexes my approach would be to dump the data to
> > > a JSON file, drop the database and use syncdb to create the
> > > canonical copy.
> > >
> > > My experience was as follows:
> > > 1st Try:
> > > 1. Dump data from old db using management command (dumpdata)
> > > 2. Drop DB and use django to create the database via syncdb
> > > 3. Load data using management command on the new database
> > > 4. Become irritated with integrity errors while the load tries to
> > > import the contenttype table which already exists.
> > > 2nd Try:
> > > 1. Dump data from old db using management command (dumpdata),
> > > excluding the contenttype table (-e contenttypes)
> > > 2. Drop DB and use django to create the database via syncdb
> > > 3. Load data using management command on the new database
> > > 4. Realize all data is completely useless since contenttype's PK's
> > > are not connected to the same models as before.
> > > 3rd Try:
> > > 1. Dump data from old db using management command (dumpdata)
> > > 2. Drop DB and use django to create the database via syncdb
> > > 3. Truncate contenttype table
> > > 4. Load data using management command on the new database
> > >
> > > Possible solution that doesn't suck a lot:
> > > I came up with quite a few different ways to handle this, but the
> > > best so far (even though it's not stellar) is to create a new
> > > column in contenttypes that's a combined column. The combined
> > > column would contain the app_label and model_name.
> > >
> > > GenericForeignKey could use the combined column instead of the PK
> > > to keep the references pointing to the same locations. I
> > > understand there are some performance implications here, but it's
> > > the best I can come up with. I would love to hear thoughts on this
> > > topic.
> >
> > I have run into this problem as well, and have come up with a basic
> > solution (for content types). The code is here:
> > http://dpaste.com/119487/. It is implemented as a serializer, which
> > you would plug into django, and then use for serialization and
> > deserialization of models with content types.
> >
> > It is rather simple (only about 10 lines of additional code). When
> > it is dumping data, it checks to see if the field it is dumping is a
> > content type, and if so, it dumps a dictionary of app_label and
> > model. Then, when this fixture is loaded back in, it runs a query
> > against the Content Types for that object. Then plugs that in for
> > the content type.
> >
> > This fixes the problem of content types being an ID, and the ID's
> > not matching when you move across databases (Your try #2).
> >
> > I have also been working on a more generic solution to this problem.
> > I have a copy of it on github(1). The approach taken there is
> > similar. When it loads a ForeignKey field to be serialized, it
> > checks the related model (the one being pointed to) for any unique
> > constraints. If any of these exist, then the model is dumped as a
> > dictionary of kwargs containing the
Re: Contenttype Generation Inconsistency During Serialization
On Thu, Feb 12, 2009 at 4:48 AM, jameslon...@gmail.com wrote:
>
> There is a small road block that makes contenttype a little dangerous
> to use during application development. Especially in regards to
> serializing your data to different databases. During syncdb the
> contenttypes are generated in a way that makes regeneration at a later
> date inconsistent with the previously generated primary keys.
>
> The contenttype IDs can be different depending on when your syncdb was
> run in the development of your application. In addition, loaddata and
> dumpdata are prevented from working correctly if the contenttypes have
> already been created (integrity errors).

This is a well known, well understood problem with at least one solution
that has been designed, but not implemented:

http://code.djangoproject.com/ticket/7052

In short, the solution I have historically preferred is to modify the
serialization language to allow queries to take the place of literal
primary keys - that way, you can ask in a fixture for "the article
content type", rather than content type 37.

However, I am open to other suggestions. Eric Holscher has been working
in this area recently, and he has made some interesting progress with
some slightly different approaches.

> Possible solution that doesn't suck a lot:
> I came up with quite a few different ways to handle this, but the best
> so far (even though it's not stellar) is to create a new column in
> contenttypes that's a combined column. The combined column would
> contain the app_label and model_name.

This has been proposed in the past, but is problematic because it is
backwards incompatible. There is a very large existing codebase that
uses the current implementation of ContentType; changing this model
would be non-trivial.

Yours,
Russ Magee %-)
Re: enhance reverse url resolving
On Wed, Feb 11, 2009 at 5:00 PM, smcoll wrote:
> How about requiring that any urls.py file at least have a
> corresponding __init__.py before processing it?

That's not really something Django can do: if you don't have an
__init__.py, the urls.py can't be imported to make the check. In other
words, we can't tell (from Django) whether you're missing a urls file,
or missing some associated __init__.py.

However, this is something that the Python team has been working on.
Python 2.6 and newer now warn you if you try to import something from a
directory missing __init__.py. So as folks upgrade to new versions of
Python this little gotcha will go away.

Jacob
enhance reverse url resolving
I had a problem recently where a stray urls.py was choking the reverse
function. This particular urls.py was just thrown to the side in a
directory, so it took me a while to realize what was going on. How about
requiring that any urls.py file at least have a corresponding
__init__.py before processing it?
Re: Contenttype Generation Inconsistency During Serialization
This is a great solution; when I wrote this post I was sure no one had
really run into the problem. I will use this for serializing my DB in
the future. Though, the last paragraph of your reply states that content
type doesn't define app_label and model as unique. I believe that this
is true now, at least mine appears to have a unique constraint on it
right away.

This is still a legitimate issue during serialization, it's great to see
someone has made steps in the right direction.

On Feb 11, 3:45 pm, Eric Holscher wrote:
> On Wed, Feb 11, 2009 at 1:48 PM, jameslon...@gmail.com wrote:
>
> > There is a small road block that makes contenttype a little
> > dangerous to use during application development. Especially in
> > regards to serializing your data to different databases. During
> > syncdb the contenttypes are generated in a way that makes
> > regeneration at a later date inconsistent with the previously
> > generated primary keys.
> >
> > The contenttype IDs can be different depending on when your syncdb
> > was run in the development of your application. In addition,
> > loaddata and dumpdata are prevented from working correctly if the
> > contenttypes have already been created (integrity errors).
> >
> > My use pattern for this is during application development I would
> > architect all of my data models before adding any explicit indexes.
> > After the models are complete and data is loaded I will analyze the
> > use patterns and index accordingly. Since django has no way of
> > syncing indexes my approach would be to dump the data to a JSON
> > file, drop the database and use syncdb to create the canonical copy.
> >
> > My experience was as follows:
> > 1st Try:
> > 1. Dump data from old db using management command (dumpdata)
> > 2. Drop DB and use django to create the database via syncdb
> > 3. Load data using management command on the new database
> > 4. Become irritated with integrity errors while the load tries to
> > import the contenttype table which already exists.
> > 2nd Try:
> > 1. Dump data from old db using management command (dumpdata),
> > excluding the contenttype table (-e contenttypes)
> > 2. Drop DB and use django to create the database via syncdb
> > 3. Load data using management command on the new database
> > 4. Realize all data is completely useless since contenttype's PK's
> > are not connected to the same models as before.
> > 3rd Try:
> > 1. Dump data from old db using management command (dumpdata)
> > 2. Drop DB and use django to create the database via syncdb
> > 3. Truncate contenttype table
> > 4. Load data using management command on the new database
> >
> > Possible solution that doesn't suck a lot:
> > I came up with quite a few different ways to handle this, but the
> > best so far (even though it's not stellar) is to create a new column
> > in contenttypes that's a combined column. The combined column would
> > contain the app_label and model_name.
> >
> > GenericForeignKey could use the combined column instead of the PK to
> > keep the references pointing to the same locations. I understand
> > there are some performance implications here, but it's the best I
> > can come up with. I would love to hear thoughts on this topic.
>
> I have run into this problem as well, and have come up with a basic
> solution (for content types). The code is here:
> http://dpaste.com/119487/. It is implemented as a serializer, which
> you would plug into django, and then use for serialization and
> deserialization of models with content types.
>
> It is rather simple (only about 10 lines of additional code). When it
> is dumping data, it checks to see if the field it is dumping is a
> content type, and if so, it dumps a dictionary of app_label and model.
> Then, when this fixture is loaded back in, it runs a query against the
> Content Types for that object. Then plugs that in for the content
> type.
>
> This fixes the problem of content types being an ID, and the ID's not
> matching when you move across databases (Your try #2).
>
> I have also been working on a more generic solution to this problem. I
> have a copy of it on github(1). The approach taken there is similar.
> When it loads a ForeignKey field to be serialized, it checks the
> related model (the one being pointed to) for any unique constraints.
> If any of these exist, then the model is dumped as a dictionary of
> kwargs containing the key/value pair for these unique constraints.
>
> The content type model doesn't define app_label and model as unique,
> which is a problem for this approach. If this ever gets into django
> core, it's going to require a special case for content type things (or
> some other approach which I haven't thought of). Having references to
> contrib apps is frowned upon, so I think having a third party
> serializer that does this is the answer for now.
>
> Hope this helps
Re: Contenttype Generation Inconsistency During Serialization
On Wed, Feb 11, 2009 at 1:48 PM, jameslon...@gmail.com wrote:
>
> There is a small road block that makes contenttype a little dangerous
> to use during application development. Especially in regards to
> serializing your data to different databases. During syncdb the
> contenttypes are generated in a way that makes regeneration at a later
> date inconsistent with the previously generated primary keys.
>
> The contenttype IDs can be different depending on when your syncdb was
> run in the development of your application. In addition, loaddata and
> dumpdata are prevented from working correctly if the contenttypes have
> already been created (integrity errors).
>
> My use pattern for this is during application development I would
> architect all of my data models before adding any explicit indexes.
> After the models are complete and data is loaded I will analyze the
> use patterns and index accordingly. Since django has no way of syncing
> indexes my approach would be to dump the data to a JSON file, drop the
> database and use syncdb to create the canonical copy.
>
> My experience was as follows:
> 1st Try:
> 1. Dump data from old db using management command (dumpdata)
> 2. Drop DB and use django to create the database via syncdb
> 3. Load data using management command on the new database
> 4. Become irritated with integrity errors while the load tries to
> import the contenttype table which already exists.
> 2nd Try:
> 1. Dump data from old db using management command (dumpdata),
> excluding the contenttype table (-e contenttypes)
> 2. Drop DB and use django to create the database via syncdb
> 3. Load data using management command on the new database
> 4. Realize all data is completely useless since contenttype's PK's
> are not connected to the same models as before.
> 3rd Try:
> 1. Dump data from old db using management command (dumpdata)
> 2. Drop DB and use django to create the database via syncdb
> 3. Truncate contenttype table
> 4. Load data using management command on the new database
>
> Possible solution that doesn't suck a lot:
> I came up with quite a few different ways to handle this, but the best
> so far (even though it's not stellar) is to create a new column in
> contenttypes that's a combined column. The combined column would
> contain the app_label and model_name.
>
> GenericForeignKey could use the combined column instead of the PK to
> keep the references pointing to the same locations. I understand there
> are some performance implications here, but it's the best I can come
> up with. I would love to hear thoughts on this topic.

I have run into this problem as well, and have come up with a basic
solution (for content types). The code is here:
http://dpaste.com/119487/. It is implemented as a serializer, which you
would plug into django, and then use for serialization and
deserialization of models with content types.

It is rather simple (only about 10 lines of additional code). When it is
dumping data, it checks to see if the field it is dumping is a content
type, and if so, it dumps a dictionary of app_label and model. Then,
when this fixture is loaded back in, it runs a query against the Content
Types for that object. Then plugs that in for the content type.

This fixes the problem of content types being an ID, and the ID's not
matching when you move across databases (Your try #2).

I have also been working on a more generic solution to this problem. I
have a copy of it on github(1). The approach taken there is similar.
When it loads a ForeignKey field to be serialized, it checks the related
model (the one being pointed to) for any unique constraints. If any of
these exist, then the model is dumped as a dictionary of kwargs
containing the key/value pair for these unique constraints.

The content type model doesn't define app_label and model as unique,
which is a problem for this approach. If this ever gets into django
core, it's going to require a special case for content type things (or
some other approach which I haven't thought of). Having references to
contrib apps is frowned upon, so I think having a third party serializer
that does this is the answer for now.

Hope this helps

1. http://github.com/ericholscher/sandbox/blob/d32da8c36f257bb973a5c0b0fd8f9bca79062f11/serializers/yamlfk.py

--
Eric Holscher
Web Developer at The World Company in Lawrence, Ks
http://ericholscher.com
e...@ericholscher.com
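The serializer Eric describes boils down to replacing a database-local primary key with an (app_label, model) pair on dump, and looking the pair up again on load. A toy version, with plain dicts standing in for the django_content_type table (these helpers are illustrations, not the actual dpaste/github code):

```python
# contenttypes: mapping of pk -> (app_label, model), one dict per database.

def dump_contenttype(ct_pk, contenttypes):
    """Dump a content-type reference portably instead of as a raw pk."""
    app_label, model = contenttypes[ct_pk]
    return {"app_label": app_label, "model": model}

def load_contenttype(ref, contenttypes):
    """Find the local pk whose (app_label, model) matches the reference."""
    for pk, pair in contenttypes.items():
        if pair == (ref["app_label"], ref["model"]):
            return pk
    raise LookupError("unknown content type: %r" % (ref,))

# Two databases that assigned different pks still round-trip correctly:
old_db = {3: ("blog", "article")}
new_db = {7: ("blog", "article")}
ref = dump_contenttype(3, old_db)
print(load_contenttype(ref, new_db))  # 7
```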
Contenttype Generation Inconsistency During Serialization
There is a small road block that makes contenttype a little dangerous to
use during application development. Especially in regards to serializing
your data to different databases. During syncdb the contenttypes are
generated in a way that makes regeneration at a later date inconsistent
with the previously generated primary keys.

The contenttype IDs can be different depending on when your syncdb was
run in the development of your application. In addition, loaddata and
dumpdata are prevented from working correctly if the contenttypes have
already been created (integrity errors).

My use pattern for this is during application development I would
architect all of my data models before adding any explicit indexes.
After the models are complete and data is loaded I will analyze the use
patterns and index accordingly. Since django has no way of syncing
indexes my approach would be to dump the data to a JSON file, drop the
database and use syncdb to create the canonical copy.

My experience was as follows:

1st Try:
1. Dump data from old db using management command (dumpdata)
2. Drop DB and use django to create the database via syncdb
3. Load data using management command on the new database
4. Become irritated with integrity errors while the load tries to import
the contenttype table which already exists.

2nd Try:
1. Dump data from old db using management command (dumpdata), excluding
the contenttype table (-e contenttypes)
2. Drop DB and use django to create the database via syncdb
3. Load data using management command on the new database
4. Realize all data is completely useless since contenttype's PK's are
not connected to the same models as before.

3rd Try:
1. Dump data from old db using management command (dumpdata)
2. Drop DB and use django to create the database via syncdb
3. Truncate contenttype table
4. Load data using management command on the new database

Possible solution that doesn't suck a lot:
I came up with quite a few different ways to handle this, but the best
so far (even though it's not stellar) is to create a new column in
contenttypes that's a combined column. The combined column would contain
the app_label and model_name.

GenericForeignKey could use the combined column instead of the PK to
keep the references pointing to the same locations. I understand there
are some performance implications here, but it's the best I can come up
with. I would love to hear thoughts on this topic.
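The failure in the 2nd try above is easy to see in miniature: generic relations store the content type's pk, and a fresh syncdb is free to hand out different pks. Here plain dicts stand in for the two databases' django_content_type tables:

```python
# The same (app_label, model) pairs can get different pks after a
# re-sync, e.g. if apps were added to the project in a different order:
old_db = {1: ("auth", "user"), 2: ("blog", "article")}
new_db = {1: ("blog", "article"), 2: ("auth", "user")}

# A generically-related row dumped from the old database:
dumped = {"content_type_id": 2, "object_id": 42}

# It meant blog.article, but loaded as-is it now points at auth.user:
print(old_db[dumped["content_type_id"]])  # ('blog', 'article')
print(new_db[dumped["content_type_id"]])  # ('auth', 'user')
```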
Re: App Engine port
Hi Malcolm,

On 10 Feb., 05:25, Malcolm Tredinnick wrote:
> I have a reasonably fleshed out plan to make things easier here in the
> Django 1.2 timeframe. The rough idea is that everything under
> django/db/models/sql/ could be replaced with a module of the developer's
> choosing (probably set via a setting). That package is the only place
> that really cares much about SQL.
>
> So somebody wanting a GAE backend or an Hadoop backend or something else
> would write something that behaved the same way as the Query class (and
> subclasses) and could be called by the QuerySet class appropriately.

Is the plan somewhere on the wiki? App Engine support requires at least that Model and probably a few other classes can be overridden (maybe partially). For example, save_base() makes a few queries to check if the row already exists, but this would have to be done differently on App Engine (hopefully with a transaction).

BTW, Mitchell Garnaat, the creator of boto (a popular AWS library for Python), might be interested in helping with a SimpleDB port:
http://groups.google.com/group/boto-users/msg/c68e9456d2c6393e

Bye,
Waldemar Kornewald
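The plan Malcolm describes — resolving the Query implementation from a dotted path supplied by a setting — boils down to dynamic class loading. A minimal sketch of that mechanism; the function name and setting are hypothetical, not actual Django 1.2 API:

```python
# Sketch of the pluggable-backend idea: the class QuerySet delegates to
# is resolved from a dotted path (which could come from a setting), so a
# non-SQL backend can supply its own Query-compatible class.
import importlib

def load_query_class(dotted_path):
    """Resolve 'package.module.ClassName' to the class object."""
    module_path, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

# e.g. QUERY_CLASS = "myapp.gae_backend.Query" in settings, then:
#     Query = load_query_class(settings.QUERY_CLASS)
```

A stdlib class stands in here for a real backend, since any importable dotted path resolves the same way.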
Re: Django, initial data and custom SQL
On Feb 10, 5:07 pm, Johan Bergström wrote:
> Hey,
>
> On Feb 10, 4:51 pm, "ludvig.ericson" wrote:
> > On Feb 10, 1:13 pm, Johan Bergström wrote:
> > > Since Django executes my custom SQL before creating indexes, it's
> > > impossible to achieve something that hooks into initdb/syncdb. I know
> > > that it is "good custom" to create indexes after inserting data – but
> > > fixtures in Django are already executed after creating indexes, so that
> > > can't be the reason. So, without further ado, what I would like to
> > > know is whether there's a reason why custom SQL is executed before
> > > index creation.
> >
> > Isn't this doable with initial SQL?
> > http://docs.djangoproject.com/en/dev/howto/initial-data/#initial-sql
> >
> > Testing here with SQLite, it'd seem it runs the custom SQL at the very
> > last point, so you could actually add some ALTER TABLE statements, I
> > guess. Again, this is testing with SQLite, and SQLite doesn't do
> > indexing.
>
> Actually it doesn't. I think you just did a reset/sqlall instead of
> sync/initdb:
>
> # cat settings.py | grep DATABASE_E
> DATABASE_ENGINE = "sqlite3"
>
> # python manage.py syncdb
> Creating table testapp_message
> Creating table testapp_avatar
> Installing custom SQL for testapp.Message model
> Failed to install custom SQL for testapp.Message model: no such index:
> testapp_message_avatar_id
> Installing index testapp.Message model
> Installing json fixture 'initial_data' from '/fixtures'.
>
> As you most likely can tell from the above, sql/message.sql contains a
> "drop index ..." operation.
>
> (nitpick: SQLite has indexes - you could of course argue their
> effectiveness :-)
>
> > Maybe I misunderstood?
>
> Perhaps I should've been more verbose :-) Thanks for your input
> though.
>
> > - Ludvig
>
> Regards,
> Johan Bergström

I took the liberty of creating a ticket with attached patch at:
http://code.djangoproject.com/ticket/10236

Thanks,
Johan Bergström
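The failure in Johan's syncdb transcript can be reproduced outside Django with the stdlib sqlite3 module: custom SQL runs before index creation, so a "drop index" in sql/message.sql hits an index that does not exist yet. This is an illustration of the ordering, not Django's actual code:

```python
# Self-contained illustration of the ordering problem in this thread:
# custom SQL is installed before indexes, so dropping an index from
# custom SQL fails with "no such index".
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE testapp_message (id INTEGER PRIMARY KEY, avatar_id INTEGER)"
)

# Step 2 of syncdb: install custom SQL -- but the index isn't there yet.
try:
    conn.execute("DROP INDEX testapp_message_avatar_id")
except sqlite3.OperationalError as exc:
    print("Failed to install custom SQL: %s" % exc)

# Step 3 of syncdb: only now are the indexes created.
conn.execute(
    "CREATE INDEX testapp_message_avatar_id ON testapp_message (avatar_id)"
)
```

Reversing steps 2 and 3 is exactly the change under discussion, which is why its backwards-compatibility impact matters.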
Re: #5903 DecimalField returns default value as unicode string
On Wed, Feb 11, 2009 at 9:06 AM, Brian Rosner wrote:
>
> Hey all,
>
> I recently came across the issue described in #5903 [1] earlier. There
> are two distinct patches that fix the problem, but at different
> levels. My inclination is to fix this issue at the model field level
> and properly override get_default. My feeling is that allowing Decimal
> objects to pass through force_unicode (when strings_only=True) might
> cause ill effects in other parts of Django, but I am not entirely sure
> (running the test suite with the fix in force_unicode didn't cause any
> failed tests, but that doesn't make it right to me :). I don't see much
> reason to do so. Perhaps someone can shed some light on this?

Our usage of Decimals has historically been undertested, so the fact that you get no test failures doesn't necessarily mean that changing force_unicode won't cause problems :-)

However, in this case, I'm reasonably convinced it is the right thing to do. The list of 'ignored' types for force_unicode is essentially the list of data types that we use for native data representations. I can't think of any reason that Decimals shouldn't be on that list.

If you're looking for a little more background on exactly what force_unicode "should" do, here's a discussion from way back, when we expanded the non-string types to include dates, times, etc:

http://groups.google.com/group/django-developers/browse_thread/thread/c74e881a5f0dc8a6

My reading of that (then, as now) is that force_unicode(strings_only=True) exists to catch string proxy objects, not non-string objects; adding Decimal to the list of non-string types should be fine.

However, before you check anything in, I would suggest making sure that we have good test coverage for this change. In particular, I would check that we have good Decimal tests for the following:

* Populating initial values on forms. One of the few places that might be relying on get_default() returning a string is in an initial value for a form.
* Serialization, especially of an object with a decimal default value.

Yours,
Russ Magee %-)
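The effect of the proposed change can be sketched with a simplified stand-in for force_unicode (this is not Django's actual source): with strings_only=True, values of "native" non-string types pass through untouched, and the fix adds Decimal to that list.

```python
# Simplified sketch of the behaviour under discussion: strings_only=True
# leaves native non-string types alone; Decimal is the proposed addition.
import datetime
from decimal import Decimal

NATIVE_NON_STRING_TYPES = (
    int, float,
    datetime.datetime, datetime.date, datetime.time,
    Decimal,  # the proposed addition from #5903
)

def force_text(value, strings_only=False):
    """Coerce value to text, optionally passing native types through."""
    if strings_only and isinstance(value, NATIVE_NON_STRING_TYPES):
        return value
    return str(value)
```

With Decimal on the list, a DecimalField default of Decimal("9.99") would survive get_default() as a Decimal instead of coming back as a string, which is precisely what the form-initial-value and serialization tests need to cover.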
Re: Interaction of annotate() and values()
On Wed, Feb 11, 2009 at 4:43 PM, Malcolm Tredinnick wrote:
>
>> By way of example, this is what 1.0.X produces:
>>
>> >>> Book.objects.extra(select={'a':'name','b':'price','c':'pages'}).values('name','pages','a')
>> [{'a': u'Book 1', 'c': 11, 'b': Decimal("11.11"), 'name': u'Book 1',
>> 'pages': 11}, ...
>
> That looks like A Bug(tm). I wouldn't have expected 'b' and 'c' to
> appear there, since values() should describe the full set of values
> returned (sans any extra behaviour of the annotate() portion).
>
> If you don't have time or feel motivated to poke at that, assign a
> ticket to me. I've had a brain failure somewhere along the line by the
> look of it, particularly in light of the next example.

I'm in the area, so I'm happy to look at it. I should be able to bash out a solution this evening.

Thanks,
Russ %-)
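The behaviour Malcolm expects — each result dict containing only the keys named in values(), even when extra(select=...) defined more aliases — can be illustrated with a plain dict filter. This is a hypothetical sketch of the expected filtering, not Django's implementation:

```python
# Sketch: restrict a raw result row to the keys values() asked for,
# dropping the surplus extra() aliases ('b' and 'c' in the example).
def restrict_row(row, requested):
    """Keep only the keys named in the values() call."""
    return {key: value for key, value in row.items() if key in requested}

# Simplified stand-in for the row the 1.0.X query produced:
raw_row = {"a": "Book 1", "b": "11.11", "c": 11,
           "name": "Book 1", "pages": 11}

restricted = restrict_row(raw_row, {"name", "pages", "a"})
```

Under that reading, 'b' and 'c' should not appear in the output even though extra() computed them.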