Hi Qian,

I think your solution should work.

My suggestion would be to use multicast to propagate the changes directly 
to all the management nodes: instead of writing the change to a local file 
and then distributing that file, why not send the .sf file directly to 
every management node?
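To make the idea concrete, here is a minimal sketch of "push the change directly to every node". The management nodes are simulated in-process (a real deployment would use multicast sockets or SmartFrog's own messaging rather than this loop), and the node and attribute names are invented for illustration:

```python
# Sketch: propagate a configuration change directly to all management
# nodes instead of writing it to a shared file first. The nodes are
# simulated in-process; real code would send over multicast instead.

class ManagementNode:
    def __init__(self, name):
        self.name = name
        self.config = {}            # cached attribute-value pairs

    def receive(self, change):
        self.config.update(change)  # apply/cache the change locally

def propagate(change, nodes):
    """Deliver the change directly to every management node."""
    for node in nodes:
        node.receive(change)

nodes = [ManagementNode(n) for n in ("master", "mgmt1", "mgmt2")]
propagate({"heartbeat.interval": "5s"}, nodes)
```

The point is only that every node ends up with the same cached attribute-value pairs without any shared file in the path.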


Another thing you could do is use the console to send the new .sf file to 
all the nodes (either in text form or in ComponentDescription form, if the 
recipients are SmartFrog components). The master node would act on the new 
configuration, and the other nodes would simply store it locally. That way, 
each management node would keep a cached copy of the cluster configuration.
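For illustration only, the kind of small .sf description the console might distribute could look roughly like this. The attribute names are invented and the notation is a sketch from memory, not authoritative; check the SmartFrog language reference for the exact syntax:

```
// hypothetical cluster configuration: small attribute-value pairs only
ClusterConfig extends DATA {
    masterHost "node1";
    heartbeatInterval 5;
    replicaCount 3;
}
```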

You will also need to decide how a management node recovers from a failure: 
should a recovered node fetch a full copy of the entire configuration from 
the master node (or a peer), or should it try to re-synchronise its existing 
config data? The first option is probably easier.
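Here is a sketch of that first (full-copy) recovery option, again with the nodes simulated in-process and all names invented for illustration:

```python
# Sketch: on recovery, a management node discards whatever local state it
# has and asks the master for a full copy of the configuration. Simpler
# than re-synchronising, at the cost of transferring the whole config.

class Node:
    def __init__(self, name, config=None):
        self.name = name
        self.config = dict(config or {})

    def recover_from(self, master):
        # Full-copy strategy: replace local state wholesale.
        self.config = dict(master.config)

master = Node("master", {"masterHost": "node1", "heartbeatInterval": "5"})
recovered = Node("mgmt2")          # came back empty after a failure
recovered.recover_from(master)
```

Since the configuration is just a small set of attribute-value pairs, copying it wholesale should be cheap, which is why this option is probably the easier one.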

This is quite easy to do with Anubis, because its protocol guarantees that 
what you receive is the same as what the other nodes in your "partition" 
see, which simplifies what you have to do to avoid errors when synchronising 
the configuration data across the cluster. But, as I said, it could work 
equally well with your own protocol, or with some other form of multicast 
plus extra programming.

Regards,

Julio Guijarro


-----Original Message-----
From: Zhang Qian [mailto:[EMAIL PROTECTED]
Sent: 09 December 2007 02:15
To: Guijarro, Julio
Cc: Steve Loughran; smartfrog-developer; [email protected]
Subject: Re: [Smartfrog-developer] Questions about SmartFrog

Hi Julio,

The configuration data of my cluster are small sets of attribute-value
pairs, not lots of data. The volume is not large, but we really need
the reliability.

Usually, I make a config change in the management console of my cluster;
the console then communicates with the daemon on the master node and
sends the config change to it. The daemon activates the change and
writes it to the config file stored on NFS, so the other management
nodes also see the changes. But obviously, NFS could be a single point
of failure for my cluster.

Now I am trying to change this flow. The config change I make in the
management console will be saved as a .sf file, and I will run my own
SmartFrog component, which extends some of SmartFrog's built-in services.
This component will get the config change by parsing the .sf file and
send it to the daemon on the master node. The daemon will activate the
change, and then my component will write the change to a local file and
propagate this file to all the management nodes.

Any suggestions about this approach? :-)
Thanks!



Regards,
Qian

_______________________________________________
Smartfrog-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/smartfrog-users
