On May 25, 7:40 pm, "Diez B. Roggisch" <de...@web.de> wrote:
> On Monday 25 May 2009 11:20:48 Antoine Pitrou wrote:
>
> > On May 25, 2:53 am, Jorge Vargas <jorge.var...@gmail.com> wrote:
> > > Now here is the fun fact. I'm sure TG+ all it's dependencies is many
> > > more LOC than django, someone should count those.
>
> > First, let me say I'm sorry for my original bogus reasoning about
> > mod_wsgi's daemon mode.
>
> > In any case, besides the LOC count, there seems to be another issue
> > which is the unconditional loading of all dependencies. Here is the
> > list of loaded modules (taken from sys.modules) after a simple "import
> > tg":
>
> <snip/>
>
>
>
> > I'm not sure why all of PIL, genshi, mako, tempita, pygments (!), the
> > whole paste and tw.core subhierarchies, among others, have to be
> > loaded like that.
>
> And the gain of lazyfying that so that it happens when the first request hits
> the system would be what?
>
> I still don't see the issue - do you actually have *problems* serving a TG app
> with reasonable speed and scalability?
>
> There are costs at initialization, we suffer through these ourselves at our
> TG2 production site- but as daemon-processes work for a long time, it's
> neglectable over the course of time.

There are still issues to consider though.

When using Apache/mod_wsgi, by default it will lazily load the WSGI
script file the first time it is required. This means the startup cost
is borne by the first requests to hit the process. This is why
mod_wsgi benchmarks are often inaccurate: people don't realise this,
and their results are skewed by startup costs.

If the processes are truly long lived then this possibly isn't really
distinguishable from Apache startup delay when under load. If however
you are using daemon mode with maximum-requests or inactivity-timeout,
or are running in embedded mode, then it can be an issue. This is
because processes can be recycled on a periodic basis, so every so
often some number of requests will incur a delay while the application
loads.

In other words, the blocking load of the application only occurs after
the process has already accepted the request for processing. As such,
the request cannot be handed off to another process which is already
in a state to handle it without delay.

If this is a presenting problem and you want to avoid it, you need to
configure Apache/mod_wsgi to preload the WSGI application on process
startup.

In mod_wsgi 2.X you can do this using:

  WSGIDaemonProcess myapp

  WSGIScriptAlias / /some/path/myapp.wsgi
  WSGIProcessGroup myapp
  WSGIApplicationGroup %{GLOBAL}

  WSGIImportScript /some/path/myapp.wsgi process-group=myapp application-group=%{GLOBAL}

The WSGIImportScript here will cause the script file to be loaded on
process start. It is necessary for the process group and application
group to match, which is why WSGIApplicationGroup is used to set the
application group to something known. Otherwise you would need to work
out what the application group defaults to and use that with
WSGIImportScript.

In mod_wsgi 3.X, this is simplified and you can also just say:

  WSGIDaemonProcess myapp

  WSGIScriptAlias / /some/path/myapp.wsgi process-group=myapp application-group=%{GLOBAL}

That is, using the process-group and application-group options with
WSGIScriptAlias has the side effect of preloading the WSGI script
file, given that mod_wsgi then knows what context it will be running
in.

To avoid request delays when using maximum-requests with
WSGIDaemonProcess, it is also advisable to run with multiple
processes. That way, when one process is restarting, the chance is
that another will still be available to handle requests, and so the
restart will not be noticed.

Graham



