In the CMS that I designed for my last company, we had a "read-only 
cluster" solution: a single edit server that allowed people to change 
content, and a number of other nodes that were notified when content 
changed.
This is a simple solution if you want to achieve high-performance read 
access (we used it for a large customer with very high web traffic). If 
you want a fail-safe edit scenario where content can be edited at every 
node, everything gets much more complicated (and much slower).
So I think it would make no big difference in performance whether the 
clustering is achieved by using one shared backend or by 
transaction-aware notifications. But it would be nice to have a real 
edit cluster without the need of a db...
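As a rough illustration of the single-edit-node pattern described above (all class and method names here are hypothetical, not any real Slide or CMS API): one edit node accepts writes and fans out invalidation messages to the read-only nodes, which drop the affected entry from their local cache and reload it from shared storage on the next read. In a real deployment the notification would of course travel over the network rather than via direct method calls.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a "read-only cluster": one edit node accepts
// writes and notifies read-only nodes, which invalidate their caches.
class ReadOnlyCluster {

    /** A read-only node with a local content cache backed by shared storage. */
    static class ReadNode {
        private final Map<String, String> cache = new HashMap<>();
        private final Map<String, String> sharedStore;

        ReadNode(Map<String, String> sharedStore) {
            this.sharedStore = sharedStore;
        }

        String read(String uri) {
            // Serve from the local cache; fall back to shared storage on a miss.
            return cache.computeIfAbsent(uri, sharedStore::get);
        }

        void invalidate(String uri) {
            cache.remove(uri); // the next read reloads fresh content
        }
    }

    /** The single node that accepts edits and fans out invalidations. */
    static class EditNode {
        private final Map<String, String> sharedStore;
        private final List<ReadNode> readers = new ArrayList<>();

        EditNode(Map<String, String> sharedStore) {
            this.sharedStore = sharedStore;
        }

        void register(ReadNode node) { readers.add(node); }

        void write(String uri, String content) {
            sharedStore.put(uri, content);
            for (ReadNode r : readers) {
                r.invalidate(uri); // notify read nodes that content changed
            }
        }
    }
}
```

Because only one node ever writes, no lock negotiation between nodes is needed, which is exactly why this design stays simple and fast compared with an edit-anywhere cluster.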
Cheers,
Daniel



"Slide Users Mailing List" <[EMAIL PROTECTED]> wrote on 24.11.04 
19:05:22:
> 
> 
> Can anyone speculate as to how any of these distributed caches 
> actually perform, though? I'm wondering how much overhead the network 
> communication between cluster participants introduces, and whether a 
> many-to-one (Slide-to-filesystem) design using an NFS-mounted 
> filesystem wouldn't be just as fast as a many-to-many design where 
> each Slide server has its own filesystem.
> 
> Warwick
> 
> 
> > -----Original Message-----
> > From: Richard Emberson [mailto:[EMAIL PROTECTED] 
> > Sent: Wednesday, November 24, 2004 11:40 AM
> > To: Slide Users Mailing List
> > Subject: Re: slide clustering support question
> > 
> > 
> > Build your own or use what's out there. For Slide to have 
> > top-grade clustering, either Slide must invest the time and 
> > effort to develop such a system itself, or it can use existing 
> > libraries that provide those capabilities. I would think the 
> > correct approach is to build on what's out there, specifically 
> > JGroups, which has a distinguished academic pedigree, a rich 
> > computer-science heritage, and significant usage. JGroups has 
> > distributed hashtables and distributed lock management, and, 
> > very appealingly, group membership can be determined 
> > dynamically - one does not have to know every member of the 
> > group when the whole cluster starts.
> > 
> > Slide might have to be refactored somewhat so that the 
> > clustering implementation is an add-on and the main Slide code 
> > can compile and run without the JGroups jar file. But for those 
> > who wish to use Slide where clustering is a must, whether for 
> > performance or for failover, the clustering implementation with 
> > its dependence on JGroups would have to be compiled and loaded.
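The optional-add-on idea above can be sketched with plain reflection (the add-on class name below is hypothetical, not a real Slide or JGroups class): the core code compiles against a small interface only, tries to load the clustering implementation by name at runtime, and falls back to a no-op when the extra jar is absent.

```java
// Hedged sketch: the core runs without the clustering jar and only
// loads the clustering implementation reflectively when configured.
class ClusterSupport {

    interface ChangeNotifier {
        void notifyChanged(String uri);
    }

    /** No-op notifier used when no clustering add-on is on the classpath. */
    static class LocalNotifier implements ChangeNotifier {
        public void notifyChanged(String uri) { /* single node: nothing to do */ }
    }

    /**
     * Load the clustering add-on if present; otherwise fall back to the
     * no-op implementation so the core compiles and runs without the jar.
     */
    static ChangeNotifier create(String addonClassName) {
        try {
            Class<?> cls = Class.forName(addonClassName);
            return (ChangeNotifier) cls.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            return new LocalNotifier();
        }
    }
}
```

With this shape, only deployments that actually configure the add-on class need the JGroups-dependent code on the classpath.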
> > 
> > Example usages:
> > JBoss uses JGroups.
> > CJDBC uses JGroups.
> > 
> > Richard
> > 
> > 
> > Warwick Burrows wrote:
> > > We've implemented that configuration: a jdbc nodestore with a tx 
> > > filesystem store, much like you've outlined, with an HTTP load 
> > > balancer between our DAV clients and Slide servers. It is 
> > > untested in this target (load-balanced) configuration, but we 
> > > have tested a simpler configuration that shares the jdbc store 
> > > and the content (using NFS) between two Slide servers.
> > > 
> > > Unfortunately the clustering implementation is untested in 
> > > terms of how locking will work. I.e., when a lock is taken by 
> > > one client, a notification is sent to the other servers in the 
> > > cluster to let them know that this object has changed. But it's 
> > > not certain what will happen if two requests for the lock come 
> > > in at exactly the same time, as they would both take the lock 
> > > and send a notification off to the other clustered servers. I 
> > > believe that there's no code to resolve this issue.
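The race described above is the classic check-then-act problem: "take the lock, then notify" lets two nodes both succeed. It disappears only when acquisition itself is atomic at a single authority (for example, a unique-keyed row insert in the shared database). A minimal sketch, with an AtomicReference standing in for that single authority (the class is illustrative, not Slide's lock code):

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of an atomic lock authority: at most one requester can win,
// even if two requests arrive at exactly the same time, because the
// compare-and-set either transitions null -> owner or fails.
class LockAuthority {
    private final AtomicReference<String> owner = new AtomicReference<>(null);

    /** Atomically grant the lock to at most one requester. */
    boolean tryAcquire(String node) {
        return owner.compareAndSet(null, node);
    }

    /** Release only if the caller actually holds the lock. */
    void release(String node) {
        owner.compareAndSet(node, null);
    }
}
```

Notification to the other servers can then happen after a successful acquire; since only the winner sends it, the duplicate-notification problem never arises.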
> > > 
> > > So for our deployment we've disabled the cache for the 
> > > nodestore altogether, so that updates for locks and other 
> > > metadata are written directly to the db. The content store also 
> > > has its cache disabled right now, as it seems that caching for 
> > > both node and content stores is controlled from the 
> > > encompassing <store> definition.
> > > 
> > > So far we think this will meet our particular performance 
> > > requirements even with the caches disabled. A fully distributed 
> > > (and so lock-safe) cache would be great, but the question is 
> > > whether it would be more performant than just writing directly 
> > > to the db... particularly when you consider that any 
> > > negotiation for the lock between cluster caches would go over 
> > > the network and be subject to network latency. Anyone have any 
> > > ideas as to how distributed caches actually perform in the real 
> > > world?
> > > 
> > > Warwick
> > > 
> > > 
> > > 
> > >>-----Original Message-----
> > >>From: Alessandro Apostoli [mailto:[EMAIL PROTECTED]
> > >>Sent: Wednesday, November 24, 2004 5:28 AM
> > >>To: Slide Users Mailing List
> > >>Subject: slide clustering support question
> > >>
> > >>
> > >>I have a couple of questions about the clustering features of 
> > >>Slide. Suppose a scenario where you have a distributed, 
> > >>replicated filesystem such as Coda and a replicated db running 
> > >>on each node. Each node has the same data both on the 
> > >>filesystem and in the db, and the nodes are part of a big WAN 
> > >>with link speeds around 2 Mbit/s. The idea would be to use the 
> > >>Slide tx store for resources and revisions whilst using a jdbc 
> > >>store for properties, users, groups, roles and security.
> > >>1) In such a scenario, how would the locks on the filesystem 
> > >>behave? I guess that the transaction support in the 
> > >>commons.transaction.file package would be broken, for there 
> > >>would be two or more instances of FileResourceManager 
> > >>accessing the same directory - or am I missing something?
> > >>2) For ideal clustering support, would I be confined to the 
> > >>JDBC store?
> > >>3) If the tx store still works in this configuration, how does 
> > >>Slide solve the above distributed transaction problem?
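On question (1), the core of the problem is that commons-transaction's FileResourceManager keeps its lock table in JVM memory, so two instances running on different nodes of a replicated filesystem cannot see each other's locks. The toy manager below (hypothetical, not the real commons-transaction API) shows why isolation breaks: each instance only consults its own table.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of in-JVM locking: each instance has a private lock
// table, so two instances "protecting" the same replicated directory
// can both grant a lock on the same path.
class InMemoryLockManager {
    private final Map<String, Boolean> locks = new HashMap<>();

    boolean lock(String path) {
        // Only checks *this* instance's table - another JVM has its own.
        if (Boolean.TRUE.equals(locks.get(path))) return false;
        locks.put(path, true);
        return true;
    }
}
```

This is the same reason a shared single authority (such as the database) keeps coming up in this thread: cross-JVM isolation needs a lock store that all nodes actually share.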
> > >>
> > >>Alessandro
> > >>
> > >>
> > >>
> > >>---------------------------------------------------------------------
> > >>To unsubscribe, e-mail: [EMAIL PROTECTED]
> > >>For additional commands, e-mail: [EMAIL PROTECTED]
> > >>
> > > 
> > > 
> > > 
> > > 
> > 
> > 
> > 
> > 
> 
> 




