It's interesting how quickly knowledge about code you wrote yourself evaporates without a trace... Somehow the point escaped me that import_module cannot require a request object, because you usually don't have access to one at the top of your code, where imports are normally done.

After this realization, I'm now more inclined to push this off until after 3.2, and for now just make sure it is well noted in the documentation that import_module does not require a request object, but as a consequence is not affected by the configuration directives. This currently isn't documented (in 3.1 at least).
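
It may be worth including a one-liner in that note showing what callers have to do instead. A rough sketch (keyword names per the 3.1 apache.import_module() signature as I remember it, module name just a placeholder):

from mod_python import apache

# import_module() never sees the request, so PythonAutoReload and
# PythonDebug have no effect here; reloading and logging have to be
# asked for explicitly.
mymodule = apache.import_module('mymodule', autoreload=1, log=1)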

Does this sound agreeable?

P.S. On the magic request caching - if it works, then why not, though I think it needs to be bounced around a little more. BTW, in the code below:

   _requestCache[thread].pop()

was this meant to be

        del _requestCache[thread]

Grisha


On Sat, 9 Jul 2005, Graham Dumpleton wrote:

One issue at a time sounds good. :-)

On 08/07/2005, at 11:38 PM, Gregory (Grisha) Trubetskoy wrote:


I think that the issue of import_module not honoring the AutoReload and Debug directives (because it was originally only used internally, a level "below" the directives) is a fairly serious one, and something should be done before 3.2.

So perhaps we rename import_module() to something else, and create a separate import_module which checks the directives, then calls the low level one?
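
To make that concrete, a rough sketch of the split (hypothetical names throughout: _import_module() stands in for whatever the renamed low level importer ends up being called, and the directive lookup via req.get_config() would need checking):

from mod_python import apache

def import_module(name, req, path=None):
    # Hypothetical wrapper: work the autoreload/log settings out from the
    # directives in effect for this request, then hand off to the renamed
    # low level importer (_import_module() here).
    autoreload, log = 1, 0
    config = req.get_config()
    if config.has_key('PythonAutoReload'):
        autoreload = int(config['PythonAutoReload'])
    if config.has_key('PythonDebug'):
        log = int(config['PythonDebug'])
    return apache._import_module(name, autoreload=autoreload, log=log, path=path)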

Because a caller will not always have access to the req object and thus can't
be made to pass it in as an explicit argument, there needs to be a way of
caching the request at a high level. The internals of the import_module()
method can then access it directly to determine the log and autoreload options.
The current arguments thus become redundant.
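
A variant of the wrapper sketched above, with the req argument replaced by a lookup in the cache (hypothetical; currentRequest() is from the code at the end of this email, and the old defaults apply when nothing is cached, e.g. for imports done at startup):

def import_module(name, path=None):
    # Hypothetical: no req, autoreload or log arguments at all.
    autoreload, log = 1, 0
    req = currentRequest()
    if req is not None:
        config = req.get_config()
        if config.has_key('PythonAutoReload'):
            autoreload = int(config['PythonAutoReload'])
        if config.has_key('PythonDebug'):
            log = int(config['PythonDebug'])
    return apache._import_module(name, autoreload=autoreload, log=log, path=path)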

The code I have been playing with for this is included at the end of the email.

Some of the issues in the implementation that came up were:

1. Must be thread safe because of multithreaded MPMs.

2. Needed to cope with the same req object being pushed into the cache more than
once by the thread handling a specific request, since cacheRequest() may be
called at the start of every handler phase for that request.

3. The req object obviously has to be discarded from the cache at the end of
the request, by way of a cleanup handler.

4. The PythonCleanupHandler is called after the registered cleanup handlers,
so if you cache req at the beginning of every handler phase, you end up caching
it again here after it has already been discarded once. Interestingly, a
PythonCleanupHandler can register its own distinct cleanup handler, which is
still called after that phase runs.

5. A stack for each thread was effectively required because of internal
redirects; i.e., the same thread can be used to serve the redirect request,
but a different req object instance is created for it.

Anyway, I think that is all. I was simply calling cacheRequest() as the first
thing wherever the top level handlers were being called. This caching may,
however, be better off in mod_python.c, as it is probably holding the req
object anyway, so access to it would simply need to be provided based on which
thread is executing.

from mod_python import apache

try:
    from threading import currentThread
except ImportError:
    # Non-threaded builds: a single dummy key is enough.
    def currentThread():
        return None

# One stack of req objects per thread; each thread only ever touches its
# own entry, and an internal redirect on the same thread pushes a new req
# on top of the existing one.
_requestCache = {}

def _discardRequest(thread):
    # Cleanup callback: drop the most recent req cached for this thread.
    try:
        _requestCache[thread].pop()
        if len(_requestCache[thread]) == 0:
            del _requestCache[thread]
    except KeyError:
        pass

def cacheRequest(req):
    thread = currentThread()
    if _requestCache.has_key(thread):
        if _requestCache[thread][-1] == req:
            # Same req pushed again (another handler phase): nothing to do.
            return
        _requestCache[thread].append(req)
    else:
        _requestCache[thread] = [req]

    # register_cleanup() hands its data argument straight to the callable,
    # so pass the thread itself rather than a tuple.
    req.register_cleanup(_discardRequest, thread)

def currentRequest():
    try:
        thread = currentThread()
        return _requestCache[thread][-1]
    except KeyError:
        pass
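
For what it's worth, a small walk-through of how the pieces are meant to behave, covering points 2 and 5 above (FakeRequest is just a stand-in with enough of the req interface to exercise the cache outside Apache):

class FakeRequest:
    # Minimal stand-in for req: the cache only uses register_cleanup().
    def register_cleanup(self, callable, data=None):
        self._cleanup = (callable, data)

def dispatch(req):
    # Roughly what the top level handler code does first.
    cacheRequest(req)

outer = FakeRequest()
dispatch(outer)                     # first handler phase: req pushed
dispatch(outer)                     # later phase, same req: no-op (point 2)

inner = FakeRequest()
dispatch(inner)                     # internal redirect, same thread: stacked (point 5)
assert currentRequest() is inner

_discardRequest(currentThread())    # cleanup runs for the redirect request...
assert currentRequest() is outer    # ...leaving the original req visible again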

Hope I didn't delete any important bits when I took out my debug code. :-)

Graham
