On Thu, Apr 5, 2012 at 2:45 PM, Lars Ruoff <lars.ru...@gmail.com> wrote:
> Thanks a lot, Tom.
> That magnificently answers my question. :-)
>
> One last thing:
>
> "... a function which loads, parses and returns
> the data, and memoize the result."
>
> That (and the first option also) means that we stay within the same
> Python interpreter environment even for different requests to the
> site?
No; that is highly unlikely, unless you are running Django in a threaded model*. You will have a separate cache per interpreter process.

Ideally, you would have a server_init hook and a child_init hook, just as Apache httpd provides, so that you could load the data before forking. I believe something like this is planned, or at least desired!

Cheers,
Tom

* Whether you run threaded or non-threaded is entirely the deployer's choice. See [1] for a detailed explanation of how mod_wsgi will run your site, depending on how you configure it. So if deployment is not under your control, you shouldn't use methods that rely on a threaded model.

[1] http://code.google.com/p/modwsgi/wiki/ProcessesAndThreading

--
You received this message because you are subscribed to the Google Groups "Django users" group.
To post to this group, send email to django-users@googlegroups.com.
To unsubscribe from this group, send email to django-users+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/django-users?hl=en.