Seth Gottlieb wrote:
Hello list,
I am working on an evaluation of Lenya and I was wondering about common
strategies for scaling to high traffic volumes. I noticed some posts
recommending the use of reverse proxies and I saw this article:
http://wiki.apache.org/cocoon-data/attachments/GT2006Notes/attachments/10-caching.pdf
Does anyone know what high-traffic sites like NZZ and Wired do? Do they
deploy static HTML to plain web servers, or do they deploy the
publication to a cluster of read-only Lenya instances? Any information
would be greatly appreciated.
we have just removed a naive caching mechanism from the trunk that you
might want to re-insert for your purposes. basically, the publication
sitemap checks whether a cached version of the requested page is present
on disk. if it is, it is served via <map:read>; otherwise the page is
generated and written to disk.
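the idea can be sketched roughly as a sitemap fragment like this (pipeline structure, the cache directory, and the stylesheet name are made up for illustration, not the actual code that was removed; cocoon's "resource-exists" selector does exist, while the write-to-disk step would need e.g. the SourceWritingTransformer or a custom action):

```xml
<map:match pattern="**.html">
  <map:select type="resource-exists">
    <!-- serve the cached copy if one is on disk (hypothetical cache dir) -->
    <map:when test="cache/{1}.html">
      <map:read src="cache/{1}.html" mime-type="text/html"/>
    </map:when>
    <map:otherwise>
      <!-- otherwise generate the page; writing the result into
           cache/ is elided here (SourceWritingTransformer or similar) -->
      <map:generate src="content/{1}.xml"/>
      <map:transform src="xslt/page2html.xsl"/>
      <map:serialize type="html"/>
    </map:otherwise>
  </map:select>
</map:match>
```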
the reason it was removed is that there is no cache invalidation
whatsoever, i.e. you would need external tools to purge the cache (e.g.
a nightly rm -r of the cache directory, effectively giving you a daily
updated site). if that's sufficient, you should be fine. then again, if
you can live with such long update cycles, a cleaner approach imho would
be a scripted wget job that mirrors the site to a static webserver.
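such a job could be as simple as a crontab entry (schedule, docroot, and the internal live-site host are placeholders for illustration):

```
# mirror the live publication into a static docroot every night at 03:00
0 3 * * *  wget --quiet --mirror --no-host-directories --convert-links --directory-prefix=/var/www/static http://lenya-live.example.org/default/live/
```

--mirror turns on recursion and timestamping, and --convert-links rewrites links so the mirrored pages work when served from the static docroot.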
i know there was a mod_lenya apache httpd module developed by wyona
(it's open source and available via the lenya svn sandbox, iirc). i
don't know what it does or whether it can be adapted to 2.0 - others
might be able to comment.
thanks for considering lenya!
--
Jörn Nettingsmeier
"One of my most productive days was throwing away 1000 lines of code."
- Ken Thompson.
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]