Thread-local variables would be perfect for storing the request object. Unfortunately, they are only available as of Python 2.4:

class local
A class that represents thread-local data. Thread-local data are data whose values are thread specific. To manage thread-local data, just create an instance of local (or a subclass) and store attributes on it:

mydata = threading.local()
mydata.x = 1

The instance's values will be different for separate threads.

For more details and extensive examples, see the documentation string of the _threading_local module.

New in version 2.4.
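For illustration, here is a minimal sketch of how threading.local could hold the current request per thread; the set_request/get_request helper names are invented for this example and are not part of mod_python:

```python
import threading

# One threading.local instance shared by all threads; each thread sees
# only the attributes it stored itself.
_local = threading.local()

def set_request(req):
    # Store the current request for this thread only.
    _local.request = req

def get_request():
    # Return this thread's request, or None if it never stored one.
    return getattr(_local, "request", None)
```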



2005/7/8, Graham Dumpleton <[EMAIL PROTECTED]>:
One issue at a time sounds good. :-)

On 08/07/2005, at 11:38 PM, Gregory (Grisha) Trubetskoy wrote:

>
> I think that the issue of import_module not honoring the AutoReload
> and Debug directives (because it was originally only used internally
> and was used a level "below" the directives) is a fairly serious one
> and something should be done before 3.2.
>
> So perhaps we rename import_module() to something else, and create a
> separate import_module which checks the directives, then calls the low
> level one?
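For context, the wrapper Grisha describes might look roughly like this; the name _import_module_lowlevel and the directive lookup via req.get_config() are illustrative sketches, not the actual mod_python code:

```python
def _import_module_lowlevel(name, autoreload, log):
    # Stand-in for the existing directive-unaware importer.
    return (name, autoreload, log)

def import_module(name, req=None, autoreload=1, log=0):
    # Consult the PythonAutoReload/PythonDebug directives when a request
    # object is available, then delegate to the low level importer.
    if req is not None:
        config = req.get_config()
        if "PythonAutoReload" in config:
            autoreload = int(config["PythonAutoReload"])
        if "PythonDebug" in config:
            log = int(config["PythonDebug"])
    return _import_module_lowlevel(name, autoreload, log)
```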

Because a caller will not always have access to the req object, and thus
can't be made to pass it in as an explicit argument, there needs to be a way
of caching the request at a high level. The internals of the import_module()
method can then access it directly to determine the log and autoreload
options. The current arguments thus become redundant.

The code I have been playing with for this is included at the end of
the email.

Some of the issues in the implementation that came up were:

1. It must be thread safe because of multithreaded MPMs.

2. It needs to cope with the same req object being pushed into the cache
more than once by the thread handling a specific request, in case it is
being called at the start of every handler phase for a request.

3. The req object obviously has to be discarded from the cache at the end
of the request by way of a cleanup handler.

4. The PythonCleanupHandler is called after registered cleanup handlers, so
if you cache req at the beginning of every handler, you cache it again here
after having already discarded it once. It is interesting that a
PythonCleanupHandler can have its own distinct registered cleanup handler
which is called after that phase runs.

5. A stack for each thread was effectively required because of internal
redirects, i.e. the same thread could be used to serve the redirect
request, but a different req object instance is created for it.

Anyway, I think that is all. I was simply calling cacheRequest() as the
first thing wherever top level handlers were being called. This caching may
however be better off in mod_python.c, as it is probably holding the
request anyway, and so access to it simply needs to be provided based on
looking at the thread which is executing.

from mod_python import apache

try:
    from threading import currentThread
except ImportError:
    # No thread support compiled in: single-threaded MPM.
    def currentThread():
        return None

# Maps each thread to a stack of active request objects. A stack is
# needed because an internal redirect is served on the same thread but
# with a new req object.
_requestCache = {}

def _discardRequest(thread):
    # Registered as a cleanup handler; runs at end of request.
    try:
        _requestCache[thread].pop()
        if len(_requestCache[thread]) == 0:
            del _requestCache[thread]
    except (KeyError, IndexError):
        pass

def cacheRequest(req):
    thread = currentThread()
    if thread in _requestCache:
        if _requestCache[thread][-1] is req:
            # Same request pushed again at the start of a later
            # handler phase: nothing to do.
            return
        _requestCache[thread].append(req)
    else:
        _requestCache[thread] = [req]

    req.register_cleanup(_discardRequest, (thread,))

def currentRequest():
    try:
        thread = currentThread()
        return _requestCache[thread][-1]
    except (KeyError, IndexError):
        pass
Hope I didn't delete any important bits when I took out my debug code. :-)

Graham



