Re: Deletion of related objects

2009-04-12 Thread bo

On Apr 9, 5:04 am, Russell Keith-Magee <freakboy3...@gmail.com> wrote:
> On Thu, Apr 9, 2009 at 5:02 AM, Jeremy Dunck <jdu...@gmail.com> wrote:
>
> > On Tue, Mar 31, 2009 at 5:47 PM, Malcolm Tredinnick
> > <malc...@pointy-stick.com> wrote:
>
> >> On Tue, 2009-03-31 at 14:48 -0500, Jeremy Dunck wrote:
> > ...
> >>> I'm aware of ticket #7539, but would prefer to keep the scope narrower
> >>> and ask the hopefully-useful question-- is #9308 a bug?  If so, I'd
> >>> like to close it for 1.1.
>
> >>> In summary, #9308 describes a situation where B has a nullable FK to
> >>> A, and an instance of A is being deleted.   Currently, any B's related
> >>> to A also get deleted.
> > ...
> >> In the case of #9308, specifically, I think setting the related pointers
> >> to NULL would be a better solution, but I'm not sure if it's considered
> >> a fundamental change in behaviour, inconsistent with what we're doing
> >> now.
>
> > Would any other committers chime in here?  Is #9308 a bug?
>
> Apologies - I had this marked as something to respond to, but it got
> buried in some other mail.
>
> I can see at least 2 readings of #9308. Consider the following setup
> (much the same as the ticket describes):
>
> class F(Model):
>  ...
>
> class E(Model):
>    fk = ForeignKey(F, null=True)
>
> f1 = F()
> f1.save()
> e1 = E(fk=f1)
> e1.save()
>
> What does f1.delete() do at this point:
>
> Current behaviour
>  1. Transaction is started
>  2. f1 is deleted
>  3. e1 is deleted
>  4. Transaction is closed.
>
> This relies upon deferrable constraint checking, as the deletion of f1
> makes e1 a temporarily inconsistent object.
>
> The ticket description seems to imply that this is a bad thing to do,
> but I can't see why - this is exactly what deferred checks are for.
> The ticket suggests that the problem is with MSSQL not implementing
> deferred constraints - in which case, the problem lies with MSSQL, not
> with Django.
>
> There is a secondary case, which is raised by #7778 (which was closed
> as a dupe of #9308) - dealing with legacy databases that haven't got
> foreign keys defined as DEFERRABLE.
>
> Reading 1:
>  1. Transaction is started
>  2. e1.fk is set to NULL
>  3. f1 is deleted
>  4. e1 is deleted
>  5. Transaction is closed.
>
> I can't say this suggestion impresses me. If your database doesn't
> implement deferred constraint checking properly, you have much bigger
> problems to deal with. Here's a nickel - go buy yourself a real
> database.
>
> The 'legacy database' problem is a slightly more compelling reason to
> fix the problem, but I'm still not convinced it's a problem that we
> want to tackle. Malcolm commented on #7778 that this wasn't
> necessarily a class of problem that we want to address, and I tend to
> agree. Django guarantees that it works with tables that are created
> the way Django expects them to be created. Django may also work with
> legacy tables, but I view this as a convenience, not part of our
> design contract. If you have a foreign key that isn't set up the way
> Django expects, then I would expect to see problems.
>
> Reading 2:
>  1. Transaction is started
>  2. e1.fk is set to NULL
>  3. f1 is deleted
>  4. Transaction is closed.
>
> This is the backwards incompatible interpretation, changing the
> behavior of bulk deletion from an emulation of ON DELETE CASCADE to an
> emulation of ON DELETE SET NULL. This is essentially #7539-lite.
>
> As appealing as this change may be, unfortunately, I don't think we
> have much wiggle room. The documentation for delete() [1] says
> outright that qs.delete() == ON DELETE CASCADE, and that's what the
> current implementation does. Regardless of the merits of any other
> approach, that is both the documented and implemented interface as of
> Django v1.0. Backwards incompatibility sets the rules on this
> occasion.
>
> [1]http://docs.djangoproject.com/en/dev/topics/db/queries/#deleting-objects
>
> However, I can see the appeal of having better controls over deletion
> behavior. IMHO, there is a lot more we could be doing to (1) expose
> other deletion cascade strategies, and (2) use DB native
> implementations rather than python implementations of deletion where
> they are available. This is the goal of #7539, but it's not going to
> be a simple change.
>
> In summary - for me #9308 is a wontfix ticket; by reading 1, it's a
> pointless change; by reading 2, it's a backwards incompatible change.
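
A minimal sketch of getting reading 2's behaviour by hand under the current
implementation, using the F and E models above (an illustration only, not
something Django does for you): clear the nullable FK in bulk first, so the
delete finds nothing to cascade over.

    from django.db import transaction

    @transaction.commit_on_success
    def delete_f_set_null(f1):
        # emulate ON DELETE SET NULL: one bulk UPDATE, then the delete
        E.objects.filter(fk=f1).update(fk=None)
        f1.delete()   # no related E rows left to collect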

Doesn't all of this (backwards compatibility, nullable FKs, and deletion of
related objects) get solved by something mentioned in

http:/

Re: Deletion of related objects

2009-03-31 Thread bo blanton


On Mar 31, 2009, at 12:48 PM, Jeremy Dunck wrote:

>
> Malcolm, Jacob pointed me at you, since the code in question was a
> commit around QSRF-time.
>
> I'm aware of ticket #7539, but would prefer to keep the scope narrower
> and ask the hopefully-useful question-- is #9308 a bug?  If so, I'd
> like to close it for 1.1.
>
> In summary, #9308 describes a situation where B has a nullable FK to
> A, and an instance of A is being deleted.   Currently, any B's related
> to A also get deleted.
>
> #9308 takes the position that any B.a_id's should be set to null
> rather than B instances being removed along with the desired deletion
> of A.   I'm asking explicitly whether this is a bug?



I'm feeling that most of these "delete" issues could be fixed via
something like this:

http://code.djangoproject.com/ticket/8168

I know y'all really hate 'new signals', but I'm not sure how to handle
the myriad of cases without one. Then the user can set things to null,
ignore it, change it, etc. It also has the added benefit of not breaking
the API as it stands now (where I'm sure some folks rely on delete not
setting things to null).

bo




Re: How do you handle cascading deletes in your production apps?

2008-12-11 Thread bo

I'm not sure, but this ticket

http://code.djangoproject.com/ticket/8168

"Pre Prepare For Delete" signal (i.e. do things BEFORE delete() tries
to find all the relationships). Would be able to do all the custom
stuff you'd want.

"null out others", "bulk purges in custom queries", etc, etc .. i'd
think it would have to be a signal to allow for Cross Apps to deal
with issues of say an "auth.user" being deleted where overloading
'delete' in that model can no longer be performed (unless of course
one subclasses, but then that requires other apps to use a custom
subclass for each App and things would get messy fast)

just a thought.

bo


On Dec 10, 6:25 am, AcidTonic <acidto...@gmail.com> wrote:
> Currently the cascading delete functionality requires many users to
> change how their app works slightly to accommodate this "feature".
>
> I'm curious how people are disabling or working around this aspect of
> django.
>
> I've heard many people are implementing custom delete() methods on the
> model class.. But after reading "http://docs.djangoproject.com/en/
> dev/topics/db/queries/#deleting-objects" I found a nice little gotcha.
>
> "Keep in mind that this will, whenever possible, be executed purely in
> SQL, and so the delete() methods of individual object instances will
> not necessarily be called during the process. If you've provided a
> custom delete() method on a model class and want to ensure that it is
> called, you will need to "manually" delete instances of that model
> (e.g., by iterating over a QuerySet and calling delete() on each
> object individually) rather than using the bulk delete() method of a
> QuerySet."
>
> So before we had a bulk delete, and now, to get around functionality we
> don't want, we have to lose bulk deletes and do them one at a time?!?
>
> I'm building an application to track IP addresses on many corporate
> networks with a single subnet having around 65535 rows for IP
> addresses. Now this app has thousands of those subnets which means
> I have millions of rows for IP addresses.
>
> Since they have relationships to the parent subnet, switchports,
> devices, contacts, locations, applications etc. These relationships
> need to be cleared before removing the IP, because nothing else is
> supposed to get deleted.
>
> Where before I could delete 5 subnets, removing a few hundred
> thousand rows in a couple of seconds, now deleting a single subnet takes
> upwards of 5 minutes using the "for query in queryset: query.delete()"
> method.
>
> So unless I'm missing something, it would appear Django is crippling
> any application not wanting a cascading delete. Since this is an
> inventory-style application, any missing data is extremely bad. Waiting
> 5 minutes for something that used to take a few seconds is also
> unacceptable.
>
> I'm seeking all suggestions or ideas how to keep my other models from
> getting deleted, without crippling my applications performance.
>
> Please advise,
> Zach
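
A rough workaround sketch for keeping the bulk path under the current
behaviour (the model and field names here are hypothetical): null the
foreign keys on the rows that must survive with a bulk update(), then let
the bulk delete run, all inside one transaction.

    from django.db import transaction

    @transaction.commit_on_success
    def purge_subnet(subnet):
        ips = IPAddress.objects.filter(subnet=subnet)
        # clear nullable links on rows that must survive the cascade
        SwitchPort.objects.filter(ip__in=ips).update(ip=None)
        ips.delete()      # bulk delete of the address rows
        subnet.delete()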



Re: Proposal: remove auth context processor dependency for Admin

2008-11-03 Thread bo



On Nov 3, 10:24 am, "Collin Grady" <[EMAIL PROTECTED]> wrote:

>
> I think you're confused - the context processor doesn't set a cookie,
> so it isn't causing any Vary behavior, and admin *does* currently
> depend on it - it needs the info the processor provides. Unless admin
> was rewritten to add them to context manually on every view, it will
> continue to need it.

No, it does not set a cookie, but it accesses the session, which then
causes the "Vary: Cookie" header to get set (look at the session
middleware: if the session is accessed at all, Vary: Cookie is set).

>
> > and one would need to override every other function that calls
> > "template.RequestContext' which is most of the meat of sites.py.
>
> Uh, no you wouldn't - if you tell admin not to check for that context
> processor, you could replace the auth processor with a custom one that
> only works in admin's path and as such wouldn't add any db hits to the
> rest of the site.

The goal is to REMOVE the dependency on the auth context processor
for admin, but use it if it is included. To do that effectively,
one needs to redo "template.RequestContext" inside sites.py to set the
user/messages/perms if they are not present already. The "default" behavior
of just including admin (without any alterations) basically requires a
session access for every page in a given app/project. To me this is
a lot of overhead to start with and a lot more work to remove.

Yes, I know I can hack around the issue (and currently do), but I'm not
sure just how many folks out there realize that just by including the
auth context processor, effectively all you can cache is _one_ page
per user/session id rather than caching the page for arbitrary
anonymous users.  This may indeed be the desired behavior for some
sites, but certainly not all.

So I'm proposing a change to sites.py that does not affect
backwards compatibility but will improve Django's performance in a
good number of use cases.
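
A sketch of the custom processor Collin suggests (the '/admin/' prefix is an
assumption; adjust it to wherever the admin is mounted):

    from django.core import context_processors

    def admin_only_auth(request):
        # Only the admin pages pay the session hit; everything else stays
        # cacheable because the session is never touched.
        if request.path.startswith('/admin/'):
            return context_processors.auth(request)
        return {}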





Re: Proposal: remove auth context processor dependency for Admin

2008-11-02 Thread bo

Yes, that may be true.

But why does the 'default' behavior impose these dependencies when
they are not required? That context processor has the side effect of
making an entire site "Vary" on a cookie, hitting the session and
hitting the DB.

And one would need to override every other function that calls
"template.RequestContext", which is most of the meat of sites.py.

bo

On Nov 1, 1:23 pm, "Collin Grady" <[EMAIL PROTECTED]> wrote:
> On Sat, Nov 1, 2008 at 9:45 AM, bo <[EMAIL PROTECTED]> wrote:
> > One aid in this area would be to remove the dependancy of
> > context_processor.auth in the Admin world (which most certainly needs
> > the user, messages and perms in the context)
>
> You can already do this - simply make your own subclass of AdminSite
> and override check_dependencies
>
> --
> Collin Grady



Re: Mysql Ping and performance

2008-11-02 Thread bo


Would it be wise to suggest that it be removed from the Django core,
given that the connection is dropped after every request? It's not done for
Postgres at all, and the Oracle base.py does what you suggest.

I guess this question would be posed to whoever controls that slice
of the django pie.

bo


On Nov 2, 1:39 am, Jesse Young <[EMAIL PROTECTED]> wrote:
> Oh, I forgot to mention that this is certainly a Django issue and not
> a mysqldb issue. The relevant code is in django/db/backends/mysql/
> base.py:232:
>
>     def _valid_connection(self):
>         if self.connection is not None:
>             try:
>                 self.connection.ping()
>                 return True
>             except DatabaseError:
>                 self.connection.close()
>                 self.connection = None
>         return False
>
> I replaced that in my local version of Django with :
>
>     def _valid_connection(self):
>         return self.connection is not None
>
> -Jesse
>
> On Nov 2, 1:33 am, Jesse Young <[EMAIL PROTECTED]> wrote:
>
> > Yeah, we found the ping() during performance profiling too. It's a
> > significant performance hit -- it basically doubles the number of
> > requests to the DB server. It seems that the reason is to support use
> > cases where the DB connection is long-lived, since sometimes DB
> > connections die if you leave them open for a really long time. In the
> > normal Django HTTP request use case this never happens because Django
> > closes the DB connection at the end of each request. But it could
> > potentially be useful for people who use django.db in some other
> > custom process.
>
> > We ended up commenting out the ping() stuff so that any opened
> > connection was always treated as valid.
>
> > It seems to me that the performance overhead in the 99% use case would
> > suggest that it would be beneficial for Django to let users configure
> > whether or not their DB connection constantly pings or not. Or maybe
> > just keep track of the time and don't ping unless some minimum time
> > has elapsed since the last one. Or do what we did, and just don't ping
> > at all and let the app deal with the case where the DB drops the
> > connection.
>
> > -Jesse
>
> > On Oct 31, 1:34 pm, bo <[EMAIL PROTECTED]> wrote:
>
> > > Not sure if this is a Django issue or the supporting Mysqldb (1.2.2)
> > > python package .. (i'll stop this here if its not, but it seems worthy
> > > of at least letting other know this)
>
> > > however while profiling a page i came across this seemingly
> > > performance hole.
>
> > > 
> > >    Ordered by: internal time
>
> > >    ncalls  tottime  percall  cumtime  percall
> > > filename:lineno(function)
> > >       230    0.343    0.001    0.343    0.001 {method 'query' of
> > > '_mysql.connection' objects}
> > >       228    0.116    0.001    0.116    0.001 {method 'ping' of
> > > '_mysql.connection' objects}
> > >       234    0.029    0.000    0.047    0.000 query.py:
> > > 473(get_default_columns)
> > >       972    0.021    0.000    0.048    0.000 __init__.py:
> > > 487(__init__)
> > >      1303    0.019    0.000    0.022    0.000 __init__.py:
> > > 633(__init__)
> > >      2068    0.017    0.000    0.216    0.000 __init__.py:
> > > 690(_resolve_lookup)
> > > ---
>
> > > #1 time sink is the queries themselves (i figured that would be the
> > > case) .. but #2 is "ping" and it seems to ping on every query.  This
> > > issue is probably not so bad on Localhost or Socket based connections,
> > > but on remote Mysql server, as you can see, it is not so good.
>
> > > again not sure if django can solve (or even wants to solve this) ..
>
> > > bo
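
A sketch of the "minimum time between pings" option mentioned above, as a
local patch to the backend's _valid_connection (the 30-second interval is an
arbitrary choice):

    import time

    PING_INTERVAL = 30  # seconds between pings

    def _valid_connection(self):
        if self.connection is None:
            return False
        if time.time() - getattr(self, '_last_ping', 0) < PING_INTERVAL:
            return True   # assume the connection is still good
        try:
            self.connection.ping()
            self._last_ping = time.time()
            return True
        except DatabaseError:   # same exception the original method catches
            self.connection.close()
            self.connection = None
            return False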



Proposal: remove auth context processor dependency for Admin

2008-11-01 Thread bo


In regards to
http://groups.google.com/group/django-developers/browse_thread/thread/2bad3ad84e9beb81

One aid in this area would be to remove the dependency on
context_processors.auth in the admin world (which most certainly needs
the user, messages and perms in the context).

Admin could detect whether it is present and use it; otherwise it would
simply add the three vars (user, messages, perms) to the context
itself.

I think all it requires is to move the calls to

"template.RequestContext"
to
"self.RequestContext"

inside of contrib.admin.sites.py .. making it backward compatible

bo
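
A hypothetical sketch of what that helper might look like (perms omitted for
brevity; none of this exists in Django today):

    from django import template
    from django.contrib.admin.sites import AdminSite

    class LazyAuthAdminSite(AdminSite):
        def request_context(self, request, extra_context=None):
            context = template.RequestContext(request, extra_context or {})
            if not context.has_key('user'):
                # auth context processor not installed; add the vars here
                context.update({
                    'user': request.user,
                    'messages': request.user.get_and_delete_messages(),
                })
            return context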



Mysql Ping and performance

2008-10-31 Thread bo

Not sure if this is a Django issue or the supporting MySQLdb (1.2.2)
python package .. (I'll stop this here if it's not, but it seems worthy
of at least letting others know about it)

However, while profiling a page I came across this seeming
performance hole.


   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
      230    0.343    0.001    0.343    0.001 {method 'query' of '_mysql.connection' objects}
      228    0.116    0.001    0.116    0.001 {method 'ping' of '_mysql.connection' objects}
      234    0.029    0.000    0.047    0.000 query.py:473(get_default_columns)
      972    0.021    0.000    0.048    0.000 __init__.py:487(__init__)
     1303    0.019    0.000    0.022    0.000 __init__.py:633(__init__)
     2068    0.017    0.000    0.216    0.000 __init__.py:690(_resolve_lookup)
---

#1 time sink is the queries themselves (I figured that would be the
case) .. but #2 is "ping", and it seems to ping on every query.  This
issue is probably not so bad on localhost or socket-based connections,
but on a remote MySQL server, as you can see, it is not so good.

Again, not sure if Django can solve (or even wants to solve) this ..

bo





Re: Proposal: Manage.py custom commands overloading

2008-10-30 Thread bo


On Oct 30, 11:55 am, "Jacob Kaplan-Moss" <[EMAIL PROTECTED]>
wrote:
>
> This is by design: Django looks for custom commands in INSTALLED_APPS
> only. "Projects" have no real special standing as far as Django is
> concerned -- in fact, it's entirely possible to deploy a site without
> a "project" at all. IOW, "project" is pretty much an abstraction to
> help us humans organize things; Django doesn't care.

So I suppose you don't consider "django" to be the "master project"
then? :)

>
> If you've got a command that you want to be available, factor it out
> into an app and add it to INSTALLED_APPS.
>

Yes, that's what I'm doing currently, but it feels like it's in the
wrong place.

To pose the question:
Why would "syncdb" sit at the app level and not the project level?
(and if you think of 'django' as the base project, it already sits at a
project level)

Even if you think multi-app commands should sit in an app structure of
their own ..

One still cannot overload the 'base' commands.
Sure, 'runserver' could be app-specific, but 90% of the time it's
multi-app specific, leaving no real option for a custom
handler (without writing a "runMyFancyserver" command in an "app"
that's not really an app .. ).

Then, if one 'can' overload the base commands, what should be the
precedence order? Should 2 apps define "runserver", who wins?
Usually the 'last guy in the load order', I would assume. Then, if one
allows the 'project' level (however you define it), who should take
precedence? Or should "all" of them be run, or some mechanism be in place
that determines the proper one?

Or should there be certain commands that can "concat" (run every
registered implementation) and some that cannot?

 "cleanup" .. where a "contrib" (sessions) has somehow got stuck into
the "core". could then be included in an App as a multi-run command
 "runserver" .. need some way of figuring out "who wins" (either via
an option or some guess based on position in an over all import
structure)
 "syncdb" .. can easily run over then entire project, but if a app has
its own syncdb run that one instead of the default one.

 I'm sure the commands are rarely overloaded or even too far messed
with (the documentation says to look at the code).  I'm just posing
the question/suggestion.


bo





Proposal: Manage.py custom commands overloading

2008-10-30 Thread bo

Hi Guys,

While trying out a new WSGI handler that should run fine in the usual
command-line "runserver", I noticed that get_commands() in
management.__init__ does not look for commands in the top-level
directory.

Suppose things are set up as follows:

/project/
   /app1/
   /app2/
   manage.py
   settings.py
   

get_commands() will look into each app to find commands, but not at the
'project' level. Also, if an app defines a "core" command, it does
not 'overload' the core one (say 'runserver', for instance).

So the proposal is this:
1) Add the ability to define a "project" command in the root directory
(which is _not_ app dependent)
2) Reverse the load order of get_commands to start from App -->
Project --> core

i.e.
...
/django/core/management/commands/
    blaa.py              -- load #3

/project/
    /management/
        /commands/
            blaa.py      -- load #2
    /app1/
        /management/
            /commands/
                blaa.py  -- load #1

The reason why this might be handy is that the project can control various
inter-relations between apps and also use some 'other' WSGI handlers
in runserver, or different FCGI handlers.

thoughts?
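
For comparison, the layout that works today puts the command in an app listed
in INSTALLED_APPS (a sketch; the app and command names are made up):

    # projectcommands/management/commands/runfancyserver.py
    from django.core.management.base import BaseCommand

    class Command(BaseCommand):
        help = "Run the development server with a custom WSGI handler."

        def handle(self, *args, **options):
            # custom handler wiring would go here
            print "starting custom server"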





Re: RequestContext + Auth.middleware 'Accesses the Session' (aka LazyUser ain't that lazy)

2008-10-28 Thread bo

yes, it is "Lazy" in the "not evaluated until asked for sense" but

 File "/Library/Frameworks/Python.framework/Versions/2.5/lib/
python2.5/django/core/context_processors.py", line 20, in auth
if hasattr(request, 'user'):

  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/
python2.5/django/contrib/auth/middleware.py", line 5, in __get__
request._cached_user = get_user(request)

  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/
python2.5/django/contrib/auth/__init__.py", line 83, in get_user
user_id = request.session[SESSION_KEY]
which does the first session access and then the user DB access.

Which says that, in essence, it will never be lazy when using the auth context
processor, because it is always asked for; thus LazyUser is just
overhead in this case.

and that happens before the messages are even considered.

bo
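
A minimal sketch of that idea: have the processor hand the template engine
callables, so the session is only touched when the variable is actually
rendered (perms omitted; this is not what context_processors.auth does today):

    def lazy_auth(request):
        # The template engine calls callables during variable resolution, so
        # request.user (and therefore the session) is only hit if {{ user }}
        # or {{ messages }} appears in the rendered template.
        def user():
            return request.user
        def messages():
            return request.user.get_and_delete_messages()
        return {'user': user, 'messages': messages}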



On Oct 28, 4:02 pm, SmileyChris <[EMAIL PROTECTED]> wrote:
> LazyUser is lazy and works fine. It's only triggered because of the
> messages part of the processor so that's the root of the problem
> (which is currently being talked about in another thread)
>
> On Oct 29, 4:51 am, bo <[EMAIL PROTECTED]> wrote:
>
> > well, not exactly.
>
> > the "LazyUser" is the thing that is not so lazy (yes i agree that the
> > messages are not lazy either), the Session is hit on the get_user
> > statement in auth() before the messages are even considered (see the
> > back trace).
>
> > The idea i am proposing is to make that entire
> > context_processors.auth() function a lazy entity so it only hits any
> > session/DB/messages/etc,etc when 'called' from inside a template or
> > view.
>
> > bo
>
> > On Oct 27, 6:58 pm, SmileyChris <[EMAIL PROTECTED]> wrote:
>
> > > This is exactly why my patch in the session messages ticket [1] makes
> > > the messages lazy.
>
> > > [1]http://code.djangoproject.com/ticket/4604
>
> > > On Oct 28, 1:59 pm, bo <[EMAIL PROTECTED]> wrote:
>
> > > > Actually i've found that the issue lies with the
> > > > TEMPLATE_CONTEXT_PROCESSORS
>
> > > > django.core.context_processors.auth
>
> > > > which does the get_and_delete messages bit ..
>
> > > > so i guess that is the proper behavior.
>
> > > > Sad to say that although my app can work around that issue (by using a
> > > > different messages mechanism thus i do not need
> > > > django.core.context_processors.auth) but "contrib.admin" screams if it
> > > > is not included.
>
> > > > So either this may be a documentation issue to say that "using
> > > > django.core.context_processors.auth will always insert a Vary: Cookie
> > > > header" or fix up admin to use the "request.user" instead of "user"
> > > > directly in the Context and then require
> > > > "django.core.context_processors.request" to always be included ...
>
> > > > bo



Re: RequestContext + Auth.middleware 'Accesses the Session' (aka LazyUser ain't that lazy)

2008-10-28 Thread bo

Well, not exactly.

The "LazyUser" is the thing that is not so lazy (yes, I agree that the
messages are not lazy either); the session is hit by the get_user
call in auth() before the messages are even considered (see the
backtrace).

The idea I am proposing is to make the entire
context_processors.auth() function a lazy entity, so it only hits the
session/DB/messages/etc. when 'called' from inside a template or
view.

bo

On Oct 27, 6:58 pm, SmileyChris <[EMAIL PROTECTED]> wrote:
> This is exactly why my patch in the session messages ticket [1] makes
> the messages lazy.
>
> [1]http://code.djangoproject.com/ticket/4604
>
> On Oct 28, 1:59 pm, bo <[EMAIL PROTECTED]> wrote:
>
> > Actually i've found that the issue lies with the
> > TEMPLATE_CONTEXT_PROCESSORS
>
> > django.core.context_processors.auth
>
> > which does the get_and_delete messages bit ..
>
> > so i guess that is the proper behavior.
>
> > Sad to say that although my app can work around that issue (by using a
> > different messages mechanism thus i do not need
> > django.core.context_processors.auth) but "contrib.admin" screams if it
> > is not included.
>
> > So either this may be a documentation issue to say that "using
> > django.core.context_processors.auth will always insert a Vary: Cookie
> > header" or fix up admin to use the "request.user" instead of "user"
> > directly in the Context and then require
> > "django.core.context_processors.request" to always be included ...
>
> > bo



Re: RequestContext + Auth.middleware 'Accesses the Session' (aka LazyUser ain't that lazy)

2008-10-27 Thread bo


Actually i've found that the issue lies with the
TEMPLATE_CONTEXT_PROCESSORS

django.core.context_processors.auth

which does the get_and_delete_messages bit ..

so i guess that is the proper behavior.

Sad to say, although my app can work around that issue (by using a
different messages mechanism, so I do not need
django.core.context_processors.auth), "contrib.admin" screams if it
is not included.

So either this is a documentation issue (say that "using
django.core.context_processors.auth will always insert a Vary: Cookie
header"), or admin should be fixed up to use "request.user" instead of "user"
directly in the context and then require
"django.core.context_processors.request" to always be included ...

bo






RequestContext + Auth.middleware 'Accesses the Session' (aka LazyUser ain't that lazy)

2008-10-27 Thread bo

I'm not sure if this is 'supposed' to be the case or not so i'll ask

I think I've seen a similar post before, but either a) I'm
bad at searching for it or b) it was only slightly related.

--- the issue ---

Using RequestContext(request, {}) + the auth middleware always seems to
access the session via get_user from auth/__init__.py and thus
prefill the user, hitting both the session (marked as accessed) and
the user (if present, in the form of a DB query),

even if "user" is never used in the templates or in the view
functions.

This has one major drawback (only major if your site sits behind Squid or
some other proxy goodness): it _always_ sets the Vary: Cookie header,
pretty much killing any cacheability beyond clients with no cookies
enabled (and these days what site doesn't toss some cookies into the mix
for tracking/sessions that may be used later in the site, like
shopping carts, but not on every page?).

Here is the backtrace to confirm

 File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/
django/core/servers/basehttp.py", line 278, in run
self.result = application(self.environ, self.start_response)

  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/
python2.5/django/core/servers/basehttp.py", line 635, in __call__
return self.application(environ, start_response)

  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/
python2.5/django/core/handlers/wsgi.py", line 239, in __call__
response = self.get_response(request)

  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/
python2.5/django/core/handlers/base.py", line 86, in get_response
response = callback(request, *callback_args, **callback_kwargs)

  File "/Sites/testy/views.py", line 262, in test_view
out = RequestContext(request, {})

  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/
python2.5/django/template/context.py", line 105, in __init__
self.update(processor(request))

  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/
python2.5/django/core/context_processors.py", line 20, in auth
if hasattr(request, 'user'):

  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/
python2.5/django/contrib/auth/middleware.py", line 5, in __get__
request._cached_user = get_user(request)

  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/
python2.5/django/contrib/auth/__init__.py", line 83, in get_user
user_id = request.session[SESSION_KEY]

  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/
python2.5/django/contrib/sessions/backends/base.py", line 46, in
__getitem__
return self._session[key]








Re: GET requests should not alter data?

2008-10-15 Thread bo

It seems that what you may want is something like

http://softwaremaniacs.org/soft/mysql_replicated/

(it's in Russian, which I cannot read, and one of the links has the
source :)

It's a master-slave DB engine (for MySQL).

I modified it to force a master call for everything that was not a
"SELECT" in the final query, and once you force it to the master (or
it auto-forces to the master) it will stay there for the duration of
the request, to deal with the asynchronous nature of a master-slave
pair.

bo
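
The routing rule described above, as a rough sketch (the surrounding cursor
wrapper is assumed, not the actual code from that package):

    def should_use_master(sql, request_state):
        # Anything that is not a plain SELECT goes to the master; once a
        # request has written, it stays pinned to the master so it can read
        # its own writes despite replication lag.
        if request_state.get('pinned_to_master'):
            return True
        if not sql.lstrip().upper().startswith('SELECT'):
            request_state['pinned_to_master'] = True
            return True
        return False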

On Oct 15, 12:50 am, "Amit Upadhyay" <[EMAIL PROTECTED]> wrote:
> Usecase: for scaling a website(bottlenecked on database), one of the
> first thing to do after caching and other optimizations is to split
> requests to go to master/slave replicated database servers. One way of
> doing it is based on request.METHOD[1], GET requests going to slave,
> and POSTs going to master.
>
> Problem: django has a few instances where a GET leads to database
> changes. 1. session creation(INSERT) 2.
> User.get_and_delete_messages(DELETE). And probably others.
>
> Question: 1. is the expectation that GET request should only do SELECT
> reasonable? 2. if 1, then should django enforce it? [So far using non
> db based session backend, and allowing delete for auth_messages from
> "GET machines" and living with "a message appears more than once" is
> what I am doing].
>
> [1]: For example throughhttp://www.djangosnippets.org/snippets/1141/
>
> --
> Amit Upadhyay
> Vakow!www.vakow.com
> +91-9820-295-512



readlines + InMemoryUploadedFile

2008-10-15 Thread bo

Hi Guys,

Is there a reason why InMemoryUploadedFile does not proxy readlines as
well from StringIO? It seems like it should (especially when using PIL
directly on the uploaded file).

bo
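
In the meantime, a sketch of the missing proxy as a subclass (the self.file
attribute name is an assumption about the internals):

    from django.core.files.uploadedfile import InMemoryUploadedFile

    class ReadlinesUploadedFile(InMemoryUploadedFile):
        def readlines(self, *args):
            # proxy to the wrapped file-like object, the same way read() is
            return self.file.readlines(*args)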



Re: threading, caching, and the yield in query.py

2008-09-30 Thread bo


I am doing some mildly weird things ..

All are related to the fact that the first iterator (i.e. for data in
myob.data) is _really_ database heavy, so the entire thing is a very
lazy iterator (with other lazy sub-iterators). So when caching the sub-
objects (myob) the QuerySet could have been evaluated, but sometimes
not, and when that data is finally evaluated, it is sub-cached
so that it is shared across all the other outstanding objects.  But
unlike the Memcached store, this 2-stage cacher does not need to
pickle anything to hold on to the data in its first fast (local) cache
stage (it's just a Python singleton).  And yes, if I force the
thing to get pickled into the local cache these errors do not occur
(directly related to your statement that QuerySets are flattened in
this process).  I imagine the issue is somehow outside of
Django .. I was more curious whether anyone else has come across this
before, as I am stumped.

What seems strange to me is that all these yield ValueErrors always
occur in the template rendering, never in any other part of the mix,
which means that the thread churning out the template is somehow
connected to another thread also using the same data object, churning
out another template (or even the same template at a different
stage).  Which is why I find this strange.

bo

On Sep 29, 5:45 pm, Malcolm Tredinnick <[EMAIL PROTECTED]>
wrote:
> On Mon, 2008-09-29 at 10:37 -0700, bo wrote:
>
> > This little issue is really hard to replicate .. i've yet to find out
> > how to programatically do it because it certainly revolves around a
> > threading, object caching, and the Yield in Query.py (iteritems).
>
> > I just wanted to post this here to see if anyone, with more experience
> > then i, knows howto replicate this in a test case world.
>
> > On to the description
>
> > The set up:: Apache 2.2.6 + Linux + Threaded (_not_ forked) +
> > mod_python + Django 1.X
>
> > Suppose i have a 2 level caching object, that basically overloads the
> > function to store/get the object(s) from a Memcached overlord cache
> > (say 60 second expire), and 'local' cache (with a 1-2 second expire
> > time) ..  the basic function is to keep a highly requested object
> > _per_ HTTP request local in ram w/o having to go back to the Memcached
> > cache.
>
> > because of various iterators basing themselves off of "yield"
> > statements in db/models/query.py.  Should 2 threads access the same
> > Local RAM cache object and try to iterate (yes the READS from the
> > cache are read/write locked, but this issue appears after the read
> > lock is released and the object is begin used), the ususal "Value
> > Error: Generator already running" exception is thrown.
>
> >   File "mything/models.py", line 1072, in _set_data
> >     for p in self.data:
> >   File "/usr/lib/python2.5/site-packages/django/db/models/query.py",
> > line 179, in _result_iter
> >     self._fill_cache()
> >   File "/usr/lib/python2.5/site-packages/django/db/models/query.py",
> > line 612, in _fill_cache
> >     self._result_cache.append(self._iter.next())
> > ValueError: generator already executing
>
> > So, i'm aware this may not be a bug. But my own ignorance for not
> > doing something right.
>
> > This does not happen very often in the servers i am running (about 10
> > times a day on a 100k+ Django views/per day) which is why its really
> > hard to track down
>
> I don't understand from your description what you're actually doing, but
> it sounds a lot like you're trying to read from the same QuerySet in
> multiple threads whilst it's still retrieving results from the database
> cursor. Don't do that. Firstly, database cursor result sets aren't
> necesarily safe to be shared across threads. QuerySet and Query objects
> probably are once the result set is populated, since every non-trivial
> operation on them creates a copy and parallel iteration is supported,
> but that's more by accident than design, since it's not worth the extra
> overhead: if you want to share QuerySets via caching, they contain the
> results (the result_cache is already fully primed).
>
> Nothing in Django will cache a connection to the database or a cursor
> result set, so can you break down your problem a bit more to describe
> where the simultaneous access is happening. You say "the usual
> ValueError", but I have never seen that raised by anything in Django. So
> I'm wondering if you're doing something fairly unusual here.
>
> That particular block of code is *designed* to be used in parallel
> iterators in the same thread, so it's safe in that respect. But if
> you're sharing a partia
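
A minimal sketch of the approach described above: cache fully evaluated
results rather than a live QuerySet, so no generator is ever shared between
threads mid-iteration.

    def cache_queryset(cache, key, queryset, timeout=60):
        # list() forces evaluation; what gets shared is a plain list of
        # model instances, not a half-consumed iterator.
        results = list(queryset)
        cache.set(key, results, timeout)
        return results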

threading, caching, and the yield in query.py

2008-09-29 Thread bo


This little issue is really hard to replicate .. I've yet to find out
how to do it programmatically, because it certainly revolves around
threading, object caching, and the yield in query.py (iteritems).

I just wanted to post this here to see if anyone, with more experience
than I, knows how to replicate this in a test-case world.

On to the description

The set up:: Apache 2.2.6 + Linux + Threaded (_not_ forked) +
mod_python + Django 1.X

Suppose I have a 2-level caching object that basically overloads the
function to store/get the object(s) from a Memcached overlord cache
(say a 60-second expire) and a 'local' cache (with a 1-2 second expire
time) ..  the basic function is to keep a highly requested object,
_per_ HTTP request, local in RAM w/o having to go back to the Memcached
cache.

Because various iterators base themselves off of "yield"
statements in db/models/query.py, should 2 threads access the same
local RAM cache object and try to iterate (yes, the READS from the
cache are read/write locked, but this issue appears after the read
lock is released and the object is being used), the usual
"ValueError: generator already executing" exception is thrown.

  File "mything/models.py", line 1072, in _set_data
for p in self.data:
  File "/usr/lib/python2.5/site-packages/django/db/models/query.py",
line 179, in _result_iter
self._fill_cache()
  File "/usr/lib/python2.5/site-packages/django/db/models/query.py",
line 612, in _fill_cache
self._result_cache.append(self._iter.next())
ValueError: generator already executing

So, I'm aware this may not be a bug, but my own ignorance about not
doing something right.

This does not happen very often on the servers I am running (about 10
times a day on 100k+ Django views per day), which is why it's really
hard to track down.


