[
https://issues.apache.org/jira/browse/WHIRR-117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tom White updated WHIRR-117:
----------------------------
Attachment: WHIRR-117.patch
I've fleshed out an implementation for this, and am looking for some review
comments, especially on general direction. The basic idea is to implement
service roles with a callback interface:
{code}
public abstract class ClusterActionHandler {
  public abstract String getRole();
  public Cluster beforeAction(String action, ClusterSpec clusterSpec,
      Cluster cluster, RunUrlBuilder runUrlBuilder) {
    return cluster;
  }
  public Cluster afterAction(String action, ClusterSpec clusterSpec,
      Cluster cluster) {
    return cluster;
  }
}
{code}
There is an implementation for each role: e.g. one for ZooKeeper (zk), one each
for the Hadoop namenode (nn), jobtracker (jt), etc. (although in the current
patch it was simpler to do hadoop-master and hadoop-worker). In effect, this
turns the Service implementation inside out: the roles no longer start the
cluster themselves; instead they hook into the lifecycle of a cluster being
started, configured, and so on.
The two pre-defined actions are bootstrap and configure (like Pallet). (This
change will make it easier to implement WHIRR-88, amongst other things.) So the
beforeAction method for the bootstrap action for zk would use the passed-in
runUrlBuilder to add the runurls of the install scripts:
{code}
runUrlBuilder.addRunUrl("sun/java/install");
runUrlBuilder.addRunUrl("apache/zookeeper/install");
{code}
The configure step is more complicated: it uses the passed-in cluster object to
work out the ZooKeeper ensemble, which is passed to the runurl as a parameter.
It also sets up the firewall.
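As a rough illustration of that configure step, the handler might derive the ensemble string from the cluster's instance addresses. This is a hedged sketch with stand-in names ({{ensembleString}}, the port 2181, and the address list are illustrative, not the patch's actual API):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch only: the real handler would read addresses from the Cluster object
// and pass the result to the runurl as a parameter.
public class ZooKeeperConfigureSketch {

  // Build "host1:2181,host2:2181,..." from the instance addresses.
  static String ensembleString(List<String> addresses) {
    return addresses.stream()
        .map(addr -> addr + ":2181")  // 2181 is ZooKeeper's default client port
        .collect(Collectors.joining(","));
  }

  public static void main(String[] args) {
    String ensemble =
        ensembleString(List.of("10.0.0.1", "10.0.0.2", "10.0.0.3"));
    // The handler would then do something like:
    //   runUrlBuilder.addRunUrl("apache/zookeeper/post-configure " + ensemble);
    System.out.println(ensemble);
  }
}
```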
More complex services which have interacting roles can coordinate via the
cluster object. For example, the handler for a Hadoop datanode can find the
namenode from the cluster object, and use it to configure the datanode
instances.
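The lookup could look something like the following sketch. The map-of-role-to-instances shape is a stand-in for the real Cluster API, and {{findMasterAddress}} is a hypothetical helper:

```java
import java.util.List;
import java.util.Map;

// Sketch only: the datanode handler asks the cluster for the instance holding
// the hadoop-master role and uses its address to configure the workers.
public class DataNodeConfigureSketch {

  static String findMasterAddress(Map<String, List<String>> instancesByRole) {
    List<String> masters = instancesByRole.get("hadoop-master");
    if (masters == null || masters.isEmpty()) {
      throw new IllegalStateException("no hadoop-master instance in cluster");
    }
    return masters.get(0);
  }

  public static void main(String[] args) {
    Map<String, List<String>> cluster = Map.of(
        "hadoop-master", List.of("10.0.0.1"),
        "hadoop-worker", List.of("10.0.0.2", "10.0.0.3"));
    // Each worker's configure runurl would receive this address as a parameter.
    System.out.println(findMasterAddress(cluster));
  }
}
```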
With this approach there is a single way to launch clusters: one Service object
that knows which handler to use for each of the roles in the template. Handlers
register themselves using the java.util.ServiceLoader pattern, just like Service
and ServiceFactory do today (both of which will be replaced by this new
approach). Look at the ClusterAction classes to see how this works in more
detail.
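The registration pattern amounts to indexing the discovered handlers by role. A minimal sketch (the interface and method names here are illustrative; each provider jar would list its implementation classes in a {{META-INF/services}} file, as java.util.ServiceLoader requires):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

public class HandlerRegistrySketch {

  // Stand-in for the real ClusterActionHandler.
  public interface ClusterActionHandlerSketch {
    String getRole();
  }

  // Index handlers by role so one Service object can dispatch each role in
  // the template to its handler.
  static Map<String, ClusterActionHandlerSketch> indexByRole(
      Iterable<ClusterActionHandlerSketch> handlers) {
    Map<String, ClusterActionHandlerSketch> byRole = new HashMap<>();
    for (ClusterActionHandlerSketch h : handlers) {
      byRole.put(h.getRole(), h);
    }
    return byRole;
  }

  public static void main(String[] args) {
    // In real code the handlers come off the classpath via ServiceLoader;
    // with no providers registered, this loader is simply empty.
    ServiceLoader<ClusterActionHandlerSketch> loader =
        ServiceLoader.load(ClusterActionHandlerSketch.class);
    System.out.println(indexByRole(loader).keySet());
  }
}
```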
The composability comes about because the RunUrlBuilder can compose runurls
from different roles into a single script, de-duping as necessary. So if a
cluster had zk and hadoop instances, then the previous snippet plus
{code}
runUrlBuilder.addRunUrl("sun/java/install");
runUrlBuilder.addRunUrl("apache/hadoop/install");
{code}
would be equivalent to
{code}
runUrlBuilder.addRunUrl("sun/java/install");
runUrlBuilder.addRunUrl("apache/zookeeper/install");
runUrlBuilder.addRunUrl("apache/hadoop/install");
{code}
since the {{sun/java/install}} script is only run once.
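The de-duping behaviour can be sketched with an insertion-ordered set. This is not the actual RunUrlBuilder implementation, just a minimal model of the semantics described above:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

// Sketch: a LinkedHashSet keeps runurls in first-seen order while silently
// dropping duplicates, so scripts shared between roles run only once.
public class RunUrlBuilderSketch {
  private final LinkedHashSet<String> runUrls = new LinkedHashSet<>();

  public RunUrlBuilderSketch addRunUrl(String runUrl) {
    runUrls.add(runUrl);  // a repeated add is a no-op
    return this;
  }

  public List<String> build() {
    return new ArrayList<>(runUrls);
  }

  public static void main(String[] args) {
    RunUrlBuilderSketch b = new RunUrlBuilderSketch();
    // zk roles:
    b.addRunUrl("sun/java/install").addRunUrl("apache/zookeeper/install");
    // hadoop roles add the java install again; it is kept only once:
    b.addRunUrl("sun/java/install").addRunUrl("apache/hadoop/install");
    System.out.println(b.build());
    // [sun/java/install, apache/zookeeper/install, apache/hadoop/install]
  }
}
```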
This assumes that the runurls can co-exist. Currently the zookeeper install
clobbers /etc/rc.local, which needs to be fixed.
Overall, this is quite a radical change, but it actually makes the code for
each service shorter, since the boilerplate for launching instances is moved
into a common class. I've managed to get the integration tests passing, but I
haven't updated the CLI to use the new classes.
> Composable services
> -------------------
>
> Key: WHIRR-117
> URL: https://issues.apache.org/jira/browse/WHIRR-117
> Project: Whirr
> Issue Type: New Feature
> Components: core
> Reporter: Tom White
> Fix For: 0.3.0
>
> Attachments: WHIRR-117.patch
>
>
> The current design does not support composable services, so you can't, for
> example, run ZooKeeper on the same cluster as Hadoop (a better example would
> be running Flume agents on a Hadoop cluster). We should make it easy to do
> this.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.