While you're talking about the question of performance, may I suggest
that JSPWiki could gain substantial performance benefits by using
RESTful principles? (See http://en.wikipedia.org/wiki/REST). I hasten
to add that JSPWiki's performance is plenty good enough for me; I have a
closed membership of only 2,000 or so people to serve. But if the
JSPWiki development community decides that it needs to handle really big
volumes, then you could refactor the URLs so that they conform to the
REST approach. So, for example, where the URL might currently look like
http://bigwiki.org/Wiki.jsp?page=SomePage to read SomePage, or
http://bigwiki.org/Edit.jsp?page=SomePage to get an editable copy, or
http://bigwiki.org/Delete.jsp?page=SomePage to delete the page, the URLs
could be changed to something like http://bigwiki.org/page/SomePage/GET
to get a read-only copy, http://bigwiki.org/page/SomePage/EDIT to get an
editable copy, http://bigwiki.org/page/SomePage/POST to send an updated
copy, http://bigwiki.org/page/SomePage/DELETE to delete the page, and so
forth.
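To make the mapping concrete, here is a minimal sketch (illustrative only, not JSPWiki code; the class and method names are my own invention) of how such RESTful paths could be translated back onto the existing JSP endpoints:

```java
// Sketch only: a hypothetical mapping from RESTful paths to JSPWiki's
// existing JSP endpoints. "RestUrlMapper" and "mapRestPath" are
// illustrative names, not part of JSPWiki.
public class RestUrlMapper {

    /**
     * Translate a path such as "/page/SomePage/EDIT" into the
     * equivalent legacy URL, e.g. "/Edit.jsp?page=SomePage".
     * Returns null if the path does not match the expected pattern.
     * (A POST of an updated copy would be routed similarly.)
     */
    public static String mapRestPath(String path) {
        String[] parts = path.split("/");
        // Expected shape: ["", "page", "<PageName>", "<ACTION>"]
        if (parts.length != 4 || !"page".equals(parts[1])) {
            return null;
        }
        String page = parts[2];
        switch (parts[3]) {
            case "GET":    return "/Wiki.jsp?page=" + page;
            case "EDIT":   return "/Edit.jsp?page=" + page;
            case "DELETE": return "/Delete.jsp?page=" + page;
            default:       return null;
        }
    }
}
```

In practice a servlet filter (or the container's URL rewriting) would apply this translation before dispatching the request.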
Together with this change JSPWiki could generate an ETag (see
http://en.wikipedia.org/wiki/HTTP_ETag) for each wiki page that is
changed only when the source wiki page changes. If a browser needs to
get a wiki page (in HTML format of course), it will check whether it has
a cached copy, and if it does, it will include its ETag in the HTTP
request that it issues. Most HTTP requests go via a caching proxy
server somewhere along the line, and these will check whether they have a
cached copy and whether it is still valid. If so, and if its ETag
matches the one in the browser's request, then the caching proxy will return
a 304, telling the browser that its copy of the page is still current.
Only if the caching proxy servers don't have a copy of the page, or they
do but it has expired, will the HTTP request go to the originating web
server. The server can still issue a 304 return code if the browser's (or
caching proxy server's) copy of the page is still current; otherwise it
will send back the new page (converted into HTML) with the page's latest
ETag. Most web accesses are read-only, and this surely applies to most
wikis as well. RESTful principles would allow the source wiki server to
offload most read-only accesses to the rich web infrastructure that is
already in place for just that purpose.
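As a rough illustration of the ETag side (again a sketch, not JSPWiki code; the helper names and the version-based tag scheme are my own assumptions), the server-side logic might look like:

```java
// Sketch only: helpers a wiki servlet might use to honour conditional
// GETs. Deriving the tag from the page name and version number is an
// assumption; any value that changes exactly when the source page
// changes would do.
public class ETagSupport {

    /** Derive a strong ETag from a page's name and version. */
    public static String etagFor(String pageName, int version) {
        return "\"" + pageName + "-v" + version + "\"";
    }

    /**
     * True if the client's cached copy is still current, i.e. the
     * If-None-Match header it sent matches the page's current ETag.
     */
    public static boolean notModified(String ifNoneMatch, String currentEtag) {
        return ifNoneMatch != null && ifNoneMatch.equals(currentEtag);
    }
}
```

A servlet would compare the request's If-None-Match header against the current tag, send a 304 with no body when notModified returns true, and otherwise send the rendered HTML with the ETag response header set.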
Trevor Turton
Janne Jalkanen wrote:
Nope, you're quite right. This is one of the reasons for the backend
change in 3.0.
What I would do is reserve one machine for updates,
and have the rest just work as read-only machines. JSPWiki *should* be
able to detect any changes, as it would treat them as "someone changed
the page manually outside JSPWiki" -events. You may need to tweak the
CachingProvider parameters though.
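A minimal sketch of the kind of external-change check this implies (the class name and the polling-by-modification-time approach are illustrative assumptions, not JSPWiki internals):

```java
// Sketch only: how a read-only node might notice that a page in the
// shared store was changed by the write node, treating it as a
// "someone changed the page outside JSPWiki" event.
// "ExternalChangeDetector" is an illustrative name, not JSPWiki API.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ExternalChangeDetector {

    // Last modification time observed per page, e.g. taken from
    // File.lastModified() on the shared page store.
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

    /**
     * Returns true if the page's modification time differs from the
     * one this node last recorded; the caller would then evict the
     * stale cache entry and re-read the page.
     */
    public boolean hasChanged(String pageName, long modificationTime) {
        Long previous = lastSeen.put(pageName, modificationTime);
        return previous != null && previous != modificationTime;
    }
}
```

The caching layer would run such a check periodically (or on access), which is where the CachingProvider's expiry parameters come in.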
/Janne
On 7 Apr 2009, at 18:06, Alexey Kakunin wrote:
Hi!
Does anybody have experience of deploying JspWiki into a clustered
environment?
Actually, we are preparing EmForge to be installed into a cluster of two
Tomcats using one shared DB.
JspWiki plays a key role in our project (it is used for storing all
textual information), and much of the functionality in EmForge is
related to JspWiki.
The question is: does anybody have experience of deploying JspWiki into
a clustered environment?
For example, looking into ReferenceManager, there is some info cached
in a map and updated by events.
But when some page is changed, the request may be sent to any one of the
computers in the cluster. As a result, the event will be fired only on
that node, and the ReferenceManager information will be updated only on
that node. So the other node will contain outdated information in its
ReferenceManager.
The same problem (it looks like) may happen in CachePageProvider (OK,
it is possible to disable caching).
Or am I wrong, and everything will work well?
--
With Best Regards,
Alexey Kakunin, EmDev Limited
Professional Software Development:
http://www.emdev.ru