Author: azeez
Date: Wed Jan 28 05:44:34 2009
New Revision: 29873
URL: http://wso2.org/svn/browse/wso2?view=rev&revision=29873

Log:
updated

Modified:
   trunk/wsas/java/modules/distribution/src/site/xdoc/wsas-clustering.xml

Modified: trunk/wsas/java/modules/distribution/src/site/xdoc/wsas-clustering.xml
URL: http://wso2.org/svn/browse/wso2/trunk/wsas/java/modules/distribution/src/site/xdoc/wsas-clustering.xml?rev=29873&r1=29872&r2=29873&view=diff
==============================================================================
--- trunk/wsas/java/modules/distribution/src/site/xdoc/wsas-clustering.xml      (original)
+++ trunk/wsas/java/modules/distribution/src/site/xdoc/wsas-clustering.xml      Wed Jan 28 05:44:34 2009
@@ -34,11 +34,12 @@
   <li><a href="#WhyClustering">Why Web Services Clustering?</a></li>
   <li><a href="#WSASClustering">WSAS Clustering</a>
     <ul>
-      <li><a href="#NodeManagement">Node Management</a></li>
-      <li><a href="#SessionStateReplication">Session State
-      Replication</a></li>
-      <li><a href="#WSASClusteringConfiguration">WSAS Clustering
-        Configuration</a></li>
+      <li><a href="#SessionStateReplication">
+          Session State Replication</a>
+      </li>
+      <li><a href="#WSASClusteringConfiguration">
+          WSAS Clustering Configuration</a>
+      </li>
     </ul>
   </li>
   <li><a href="#WSASClusteringinAction">WSAS Clustering in Action</a>
@@ -75,11 +76,7 @@
 and concurrency. In addition, almost all the transactions happening in such
 enterprise deployments are critical to the business of the organization. This
 imposes another requirement for production ready Web services servers. That
-is to maintain very low downtime. Another booming trend in the software
-industry is the AJAX based user interfaces. Such Web interfaces usually
-depend on a Web services based back-end. Therefore, Web services servers that
-provide the business logic for AJAX interfaces should exhibit the scalability
-of pure web servers used for serving pure HTTP requests.</p>
+is to maintain very low downtime. </p>
 
 <p>It is impossible to support that level of scalability and high
 availability from a single server despite how powerful the server hardware or
@@ -102,192 +99,8 @@
 instances. Although WSAS ships with this built-in Tribes-based
 implementation, other clustering implementations based on different group
 management systems can be plugged in easily. The only thing you have to do is
-to implement a set of clustering interfaces provided by WSAS. The WSAS
-clustering functionality can be divided into two categories as described
-below.</p>
-
-<h3><a name="NodeManagement">Node Management</a></h3>
-
-<p>Node management provides functionality to configure and maintain multiple
-WSAS instances running on different servers identically. There can be a large
-number of WSAS instances in clustered deployments. We have to make sure that
-all of those instances have the same run time parameters and the same set of
-services deployed. If we want to make a change in the configuration or deploy
-a new service, we should apply that change to all the WSAS instances at the
-same time. Failure to do so would result in unpredictable behaviours. For
-example, assume that we want to deploy a new version of a service, and we
-couldn't deploy the new version in all clustered instances at the same time.
-Then a client using that service may be directed to the new version and to
-the old version of the service in successive requests. WSAS handles this by
-using a URL based central repository and a command line admin tool to issue
-cluster wide commands. Figure 1 depicts the basic node management behaviour
-of WSAS.</p>
+to implement a set of clustering interfaces provided by WSAS.</p>
 
-<p></p>
-
-<p>
-       <br/><br/>
-    <img alt="Node management in WSAS" src="images/ClusterNodeManagement.jpg"/>
-    <br/><br/>
-</p>
-
-<p><i>Figure 1: WSAS Node management</i></p>
-
-<p></p>
-
-<p>As shown in figure 1, all the clustered WSAS instances access
-configuration files and service archives from a central repository. Each WSAS
-instance has to be configured to point to this repository at deployment time.
-In a production environment, this repository should be hosted in a web server
-(e.g., Apache HTTPD) and WSAS instances can access it via HTTP by using the
-URL of the repository. It is also possible to use a local file system
-repository for testing purposes. In that case all the WSAS instances should
-run on the same computer and they can access the repository by using the file
-system path of the repository. In either case, we can maintain a single
-repository for the entire cluster. Therefore, configuration and services in
-the repository are immediately applied to all WSAS instances of the cluster
-upon start up.</p>
-
-<p></p>
-
-<p>We should also be able to make changes to the cluster at runtime. For
-example, we may want to add a new service, replace an old version of a
-service with a new version, or make changes to the configuration parameters
-of WSAS instances. WSAS provides an admin tool for performing such
-operations. It is a command line tool, where clustering commands can be
-issued as command line parameters. Administrators can connect to one node in
-the cluster and issue the required commands using this tool. Then the
-connected node replicates the command to all other nodes in the cluster. WSAS
-clustering layer takes care of applying such commands consistently and
-transactionally. The two phase commit protocol is used for achieving the
-above properties.</p>
-
-<p>Let's consider the deployment shown in figure 1 and let's assume the
-administrator wants to deploy a new service named service A in the cluster.
-First, the administrator copies the service archive containing service A to
-the shared repository. Then he connects to the node management web service
-named NodeManagerService of Node 1 and issues the service deployment command.
-Then Node 1 becomes the coordinator for this transaction and executes the two
-phase commit protocol as listed below:</p>
-
-<p></p>
-
-<h4>Commit-request phase</h4>
-<ul>
-  <li>Node 1 sends a command to load service A from the shared
-  repository.</li>
-  <li>All nodes try to load service A. Each node sends a success message to
-    Node 1 upon successful loading of the service. If a node has failed to
-    load the service, it sends a failure message to Node 1.</li>
-  <li>If Node 1 receives a failure message from any node, it sends a message
-    to all the nodes to abort the action.</li>
-  <li>If Node 1 receives success messages from all the nodes, it sends the
-    prepare message to all the nodes in the cluster.</li>
-  <li>In response to that, all the nodes try to block client requests
-    temporarily. All client requests will be answered with a notification
-    message describing the server reloading state. This is required to allow
-    clients to adjust existing transactions with the server's state
-  change.</li>
-  <li>As in the previous step, each node sends a success message upon
-    successful blocking of client requests. Any failure is notified to Node
-  1.</li>
-  <li>If Node 1 receives a failure message from any node, it sends a message
-    to all the nodes to abort the action.</li>
-  <li>If Node 1 receives success messages from all the nodes, Node 1 sends a
-    message indicating successful completion of phase 1 to the admin
-  tool.</li>
-  <li>The admin tool reports this to the user.
-    <p></p>
-  </li>
-</ul>
-
-<h4>Commit phase</h4>
-<ul>
-  <li>The user should issue the commit command to start the second phase.
-    Node 1 sends this commit message to all the nodes in the cluster.</li>
-  <li>All nodes try to deploy the service A, loaded in the first phase.</li>
-  <li>Each node sends a success message to Node 1 upon successful deployment
-    of service A. Any failure is also notified to Node 1.</li>
-  <li>If Node 1 receives a failure message from any node, it sends a message
-    to all the nodes to rollback the action.</li>
-  <li>If Node 1 receives success messages from all the nodes, Node 1 sends a
-    message to all nodes to start serving clients.</li>
-</ul>
-
-<p>The commands supported by the admin tool are listed below:</p>
-
-<p></p>
-
-<p><i>Reload configuration:</i></p>
-<pre>Linux: admin.sh --username admin --password admin --epr https://&lt;ip&gt;:9443/services/Axis2NodeManager --operation reloadconfig
-Windows: admin.bat --username admin --password admin --epr https://&lt;ip&gt;:9443/services/Axis2NodeManager --operation reloadconfig</pre>
-
-<p>Reloads the configuration from the axis2.xml in the shared repository.
-This includes the changes in global parameters and global module engagement
-details.</p>
-
-<p></p>
-
-<p><i>Loading new service groups</i></p>
-<pre>Linux: admin.sh --username admin --password admin --epr https://&lt;ip&gt;:9443/services/Axis2NodeManager
-                                     --operation loadsgs --service-groups &lt;service-group1&gt;,&lt;service-group2&gt;,...
-Windows: admin.bat --username admin --password admin --epr https://&lt;ip&gt;:9443/services/Axis2NodeManager
-                                     --operation loadsgs --service-groups &lt;service-group1&gt;,&lt;service-group2&gt;,...</pre>
-
-<p>Loads and deploys the specified service groups in all the clustered nodes.
-Service archives containing required service groups have to be available in
-the shared repository prior to issuing this command.</p>
-
-<p></p>
-
-<p><i>Unloading service groups</i></p>
-<pre>Linux: admin.sh --username admin --password admin --epr https://&lt;ip&gt;:9443/services/Axis2NodeManager
-                                     --operation unloadsgs --service-groups &lt;service-group1&gt;,&lt;service-group2&gt;,...
-Windows: admin.bat --username admin --password admin --epr https://&lt;ip&gt;:9443/services/Axis2NodeManager
-                                     --operation unloadsgs --service-groups &lt;service-group1&gt;,&lt;service-group2&gt;,...</pre>
-
-<p>Unloads previously deployed service groups from all the nodes in the
-cluster.</p>
-
-<p></p>
-
-<p><i>Apply service policy</i></p>
-<pre>Linux: admin.sh --username admin --password admin --epr https://&lt;ip&gt;:9443/services/Axis2NodeManager
-                                     --operation applypolicy --service &lt;service&gt; --policy-file &lt;policy.xml&gt;
-Windows: admin.bat --username admin --password admin --epr https://&lt;ip&gt;:9443/services/Axis2NodeManager
-                                     --operation applypolicy --service &lt;service&gt; --policy-file &lt;policy.xml&gt;</pre>
-
-<p>Applies the policy defined in the policy.xml file to the specified
-service.</p>
-
-<p></p>
-
-<p>All the above commands only execute the first phase of the two phase
-commit protocol. Once that phase is complete, the administrator should issue
-the commit command using the admin tool to start the second phase. The syntax
-of the commit command is given below:</p>
-<pre>Linux: admin.sh --username admin --password admin --epr https://&lt;ip&gt;:9443/services/Axis2NodeManager --operation commit
-Windows: admin.bat --username admin --password admin --epr https://&lt;ip&gt;:9443/services/Axis2NodeManager --operation commit</pre>
-
-<p></p>
-
-<p>The commit phase is executed separately as another command so that it is
-possible to write command line scripts by combining multiple admin commands
-as well as operating system specific commands. Such scripts can be written to
-issue the commit command after completing all the other commands. Thus all
-those admin commands belong to a single transaction, which can be rolled back
-if any of those failed. For example, we can write a script to load service
-groups A and B, then apply policy X to service C and apply policy Y to
-service D.</p>
-
-<p></p>
-
-<p>As we can deploy and maintain a cluster of identical WSAS instances using
-the above features, it is possible to support high availability and
-scalability for stateless services by only using the node management
-functionality of WSAS clustering.</p>
-
-<p></p>
 
 <h3><a name="SessionStateReplication">Session State Replication</a></h3>
 
@@ -378,116 +191,14 @@
 <p>WSAS clustering is configured using the axis2.xml file. As all instances
 of a WSAS cluster can be configured to load this file from the shared
 repository, initial clustering configuration can be done by editing a single
-file. The default clustering configuration that ships with WSAS is listed
-below:</p>
+file.</p>
 
-<p></p>
-<pre>&lt;cluster class="org.apache.axis2.clustering.tribes.TribesClusterManager"&gt;
-    &lt;parameter name="AvoidInitiation"&gt;true&lt;/parameter&gt;
-    &lt;parameter name="domain"&gt;wso2wsas.domain&lt;/parameter&gt;
-    &lt;configurationManager
-            class="org.wso2.wsas.clustering.configuration.WSASConfigurationManager"&gt;
-        &lt;parameter name="CommitTimeout"&gt;20000&lt;/parameter&gt;
-        &lt;parameter name="NotificationWaitTime"&gt;2000&lt;/parameter&gt;
-        &lt;listener class="org.wso2.wsas.clustering.configuration.WSASConfigurationManagerListener"/&gt;
-    &lt;/configurationManager&gt;
-    &lt;contextManager class="org.apache.axis2.clustering.context.DefaultContextManager"&gt;
-        &lt;listener class="org.apache.axis2.clustering.context.DefaultContextManagerListener"/&gt;
-        &lt;replication&gt;
-            &lt;defaults&gt;
-                &lt;exclude name="local_*"/&gt;
-                &lt;exclude name="LOCAL_*"/&gt;
-                &lt;exclude name="wso2tracer.msg.seq.buff"/&gt;
-                &lt;exclude name="wso2tracer.trace.persister.impl"/&gt;
-                &lt;exclude name="wso2tracer.trace.filter.impl"/&gt;
-            &lt;/defaults&gt;
-            &lt;context class="org.apache.axis2.context.ConfigurationContext"&gt;
-                &lt;exclude name="SequencePropertyBeanMap"/&gt;
-                &lt;exclude name="WORK_DIR"/&gt;
-                &lt;exclude name="NextMsgBeanMap"/&gt;
-                &lt;exclude name="RetransmitterBeanMap"/&gt;
-                &lt;exclude name="StorageMapBeanMap"/&gt;
-                &lt;exclude name="CreateSequenceBeanMap"/&gt;
-                &lt;exclude name="WSO2 WSAS"/&gt;
-                &lt;exclude name="wso2wsas.generated.pages"/&gt;
-                &lt;exclude name="ConfigContextTimeoutInterval"/&gt;
-                &lt;exclude name="ContainerManaged"/&gt;
-            &lt;/context&gt;
-            &lt;context class="org.apache.axis2.context.ServiceGroupContext"&gt;
-                &lt;exclude name="my.sandesha.*"/&gt;
-            &lt;/context&gt;
-            &lt;context class="org.apache.axis2.context.ServiceContext"&gt;
-                &lt;exclude name="my.sandesha.*"/&gt;
-            &lt;/context&gt;
-        &lt;/replication&gt;
-    &lt;/contextManager&gt;
-&lt;/cluster&gt;</pre>
-
-<p>The class attribute of the cluster element specifies the main class of the
-clustering implementation. This class should implement the
-org.apache.axis2.clustering.ClusterManager interface. As mentioned earlier,
-the WSAS built-in clustering implementation is based on Tribes. Therefore,
-the Tribes based ClusterManager implementation is specified by default. There
-are two top level parameters in the configuration. The AvoidInitiation
-parameter specifies whether the clustering should be initialized
-automatically on start up. By default this is set to True, which will not
-initialize the clustering on start up. WSAS will call the initialization
-mechanism appropriately and users are not supposed to change the value of
-this parameter. The domain parameter defines the domain of the cluster. All
-the nodes with the same domain name belong to the same cluster. This allows
-us to create multiple clusters in the same network by specifying different
-domain names. Apart from these, there are two major sections in the
-configuration. They are the configurationManager and the contextManager.</p>
-
-<p>The configurationManager section configures the node management activities
-of the cluster. The configurationManager element's class attribute specifies
-the class implementing the
-org.apache.axis2.clustering.configuration.ConfigurationManager interface. It
-should support all node management activities described earlier. There is an
-associated listener implementation for the configurationManager, which
-implements the
-org.apache.axis2.clustering.configuration.ConfigurationManagerListener
-interface. It should listen for node management events and take appropriate
-actions. Default implementations of these classes are based on Tribes.
-Configuration commands are applied using the two phase commit protocol as
-mentioned in the node management section. According to that, all nodes block
-client requests in the "prepare" step and wait for the "commit" command to
-apply the new configuration. But, if for some reason the "commit" command is
-not issued, all nodes block the client requests forever, making the entire
-cluster useless.</p>
-
-<p>The CommitTimeout parameter is introduced to handle this scenario. Nodes
-wait for the "commit" command only for the time specified in the
-CommitTimeout parameter. If the "commit" command is not issued during that
-time, all nodes rollback to the old configuration. The NotificationWaitTime
-parameter specifies the time for the coordinator node to wait for
-success/failure messages from other nodes. If any node fails to send a
-success or a failure message for a particular command within this time, the
-coordinator node assumes that the node has failed to perform the command, and
-the coordinator node sends an error message to the admin tool describing the
-failure to execute the command.</p>
-
-<p>Session data replication is configured in the contextManager section. The
-class attribute of the contextManager element specifies the implementing
-class of the org.apache.axis2.clustering.context.ContextManager interface.
-There is an associated listener class implementing the
-org.apache.axis2.clustering.context.ContextManagerListener interface. This
-class is specified in the class attribute of the listener element. As in
-other implementations, these two classes are also based on Tribes by default.
-Data to exclude in the replication process can be specified in the
-replication element. As mentioned in the session replication section, session
-data is replicated for three context types. We can specify which data to
-exclude in each of these context types by listing them under the appropriate
-context element. Each context element can have one or more exclude elements.
-The name attribute of the exclude element specifies the name of the property
-to exclude. The defaults element of the replication section contains the data
-to exclude from all context types. It is possible to specify complete
-property names or the prefix or suffix of property names. Prefixes and
-suffixes are specified using the asterisk ( * ) character. For example,
-according to the above configuration, all session data beginning with the
-name my.sandesha. in service contexts and service group contexts will not be
-replicated. All the session data beginning with names local_ and LOCAL_ will
-not be replicated in all three contexts.</p>
+<p>
+    For more details about WSAS clustering, please see
+    <a href="http://wso2.org/library/articles/wso2-carbon-cluster-configuration-language">
+        WSO2 Carbon Clustering Configuration Language
+    </a>.
+</p>
 
 <p></p>
 
@@ -776,7 +487,13 @@
 WSAS clustering functionality, please feel free to ask them on WSAS user
 list: [email protected].</p>
 <br/>
-<h3></h3>
+
+<p>
+    Also see
+    <a href="http://wso2.org/library/articles/introduction-wso2-carbon-clustering">
+        Introduction to WSO2 Carbon Clustering
+    </a>.
+</p>
 </body>
 </html>
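The diff above removes the Node Management section, including its walk-through of the two-phase commit protocol used to apply cluster-wide commands. As a rough illustration only (the `Node` and `Coordinator` classes and their method names below are hypothetical, not the WSAS or Axis2 API), the commit-request and commit phases described in that removed text can be sketched as:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the two-phase commit flow described in the
// removed Node Management section. Names are illustrative only.
public class TwoPhaseCommitSketch {

    static class Node {
        final String name;
        boolean serviceLoaded;   // phase 1: service loaded from the shared repository
        boolean requestsBlocked; // phase 1: client requests temporarily blocked
        boolean deployed;        // phase 2: service deployed and serving

        Node(String name) { this.name = name; }

        boolean loadService(String serviceGroup) { serviceLoaded = true; return true; }
        boolean blockClientRequests() { requestsBlocked = true; return true; }
        boolean deployLoadedService() { deployed = serviceLoaded; return deployed; }
        void resumeServingClients() { requestsBlocked = false; }
    }

    static class Coordinator {
        private final List<Node> cluster;
        Coordinator(List<Node> cluster) { this.cluster = cluster; }

        // Commit-request phase: every node loads the service, then blocks
        // client requests; any failure aborts the whole transaction.
        boolean commitRequest(String serviceGroup) {
            for (Node n : cluster) if (!n.loadService(serviceGroup)) return abort();
            for (Node n : cluster) if (!n.blockClientRequests()) return abort();
            return true; // the admin tool is told phase 1 succeeded
        }

        // Commit phase: triggered by the admin tool's "commit" command.
        boolean commit() {
            for (Node n : cluster) if (!n.deployLoadedService()) return abort();
            for (Node n : cluster) n.resumeServingClients();
            return true;
        }

        // Abort/rollback: all nodes resume serving with the old configuration.
        private boolean abort() {
            for (Node n : cluster) n.resumeServingClients();
            return false;
        }
    }

    public static void main(String[] args) {
        List<Node> cluster = Arrays.asList(new Node("node1"), new Node("node2"));
        Coordinator coordinator = new Coordinator(cluster);
        if (coordinator.commitRequest("ServiceA") && coordinator.commit()) {
            System.out.println("ServiceA deployed cluster-wide");
        }
    }
}
```

The removed text also notes that blocked nodes roll back if the "commit" command never arrives; a real implementation would wrap the blocked state in a timer driven by the CommitTimeout parameter.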
 

_______________________________________________
Wsas-java-dev mailing list
[email protected]
https://wso2.org/cgi-bin/mailman/listinfo/wsas-java-dev
