Frank Wilson wrote:
>> If you like your testing, it's very good for automating bringing stuff
>> up, running tests, and tearing them down, all of which you can then do
>> directly from your JUnit test run.
>
> Yes, we plan to do system testing using SmartFrog. You seem to say that I
> can bring stuff up and tear stuff down from a JUnit test run. I was
> thinking of embedding SmartFrog in my JUnit tests, although I am not too
> sure how to do it. My first thought was to look at the Ant "deploy" task
> for guidance on how to control SmartFrog from within Java. Is this a
> sensible approach? Do you have any tips or pointers on how to do this?
Yes. You need to grab the testharness JAR, which I think we distribute,
but whose source you should grab from SVN anyway. There is a class,
core/testharness/src/org/smartfrog/test/DeployingTestBase.java, which
extends TestCase and provides everything you need to talk to a
SmartFrog daemon, deploy a test, then wait for it to finish, all while
fielding lifecycle events sent back over RMI.
There is more in
core/smartfrog/src/org/smartfrog/services/assertions/events/TestEventSink.java,
where something can deploy an app, wait for events, and field failures
back up to the caller, so that you get the remote stack trace in your
JUnit-formatted results -something Hudson likes too.
In your JUnit code, every test method can deploy something; what the
deployed things do is up to them:
package org.smartfrog.extras.hadoop.cluster.test.system.health;

import org.smartfrog.test.DeployingTestBase;

public class SystemHealthTest extends DeployingTestBase {

    private static final String PACKAGE =
            "/org/smartfrog/extras/hadoop/cluster/test/system/health";

    public SystemHealthTest(String name) {
        super(name);
    }

    public void testJasperHealth() throws Throwable {
        expectSuccessfulTestRunOrSkip(PACKAGE, "testJasperHealth");
    }

    public void testHealth() throws Throwable {
        expectSuccessfulTestRunOrSkip(PACKAGE, "testHealth");
    }

    public void testLiveHealth() throws Throwable {
        expectSuccessfulTestRunOrSkip(PACKAGE, "testLiveHealth");
    }

    public void testHadoopSiteResources() throws Throwable {
        expectSuccessfulTestRunOrSkip(PACKAGE, "testHadoopSiteResources");
    }

    public void testHadoopDefaultResources() throws Throwable {
        expectSuccessfulTestRunOrSkip(PACKAGE, "testHadoopDefaultResources");
    }

    public void testNamenodeJspClasses() throws Throwable {
        expectSuccessfulTestRunOrSkip(PACKAGE, "testNamenodeJspClasses");
    }
}
>
> In the documentation and presentations that I have read about SmartFrog,
> I've seen tests regarded by SmartFrog as "another thing to deploy".
> However, I have not seen much reference to the idea of tests deploying
> certain tasks, which would require calling SmartFrog from Java test code.
> This makes me wonder very slightly whether I have some misconception
> about what SmartFrog should be used for.
You can do a fair amount from Java code: just use RMI to bind to a
service and you can deploy anything you want. The Ant tasks fork off
their own JVM to do the work, as if they were command-line tools,
because that lets us set up the classpaths more easily and isolates Ant
from any trouble. In JUnit runs you want to get the failures back, so
there you do want the tight coupling.
Now, what do you deploy? The answer is the TestBlock and
TestCompound components, which are defined in
/org/smartfrog/services/assertions/testcomponents.sf. These components,
and especially TestCompound, can host complex workflows in which there
is a condition which must be met or else the test is skipped, an action
which is then run, and, if the action starts successfully, a tests
component which contains a workflow of actions. Here is what a test to
bring up a Hadoop cluster and run some filesystem actions looks like:
testCluster extends ExpectDeploy {
    action extends HadoopFilesystem;
    tests extends FileSystemTestSequence {
        namenode LAZY action:namenode:namenode;
        datanode LAZY action:datanode:datanode;
    }
}
Starting things on multiple machines follows on from there, or, as you
say, you can deploy a process killer.
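The remote action of such a process killer can be as small as forking the
platform kill command. Here is a minimal plain-Java sketch; the class name,
the method, and the choice of SIGKILL are all my assumptions, and in
SmartFrog you would wrap this in a deployable component rather than run it
standalone:

```java
// Hypothetical sketch of the core action a "process killer" deployed to
// the remote host might perform. Assumes a POSIX host with a kill
// command on the PATH; this is NOT a SmartFrog component as-is.
public class ProcessKiller {

    /** Send SIGKILL to the given pid; returns the kill command's exit code. */
    public static int kill(long pid) throws Exception {
        Process p = new ProcessBuilder("kill", "-9", Long.toString(pid))
                .start();
        return p.waitFor();
    }

    public static void main(String[] args) throws Exception {
        // a pid that cannot exist on a default Linux host: kill exits nonzero
        int exit = kill(999999999L);
        System.out.println(exit != 0);
    }
}
```

The deployed component's sfStart() would call something like this with the
pid it has looked up, and report the exit code back through its lifecycle
events.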
One thing to bear in mind: on the command line, in the Ant tasks, and in
the JUnit test runner, the .sf file is expanded locally, picking up any
JVM properties you set first by way of the PROPERTY reference; you can
get at JVM values in the target VM with a LAZY PROPERTY reference. That's
an easy way to add some late-binding information to a .sf file, such as
hostnames -just set the JVM properties first.
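As a hypothetical fragment (the attribute and property names here are made
up), the difference looks like this: the plain PROPERTY reference is
resolved on the machine expanding the .sf file, while the LAZY one is
deferred to the JVM the component is deployed into:

```
// expanded locally when the .sf file is parsed;
// set beforehand with e.g. -Dtarget.host=node2
hostname PROPERTY target.host;

// resolved later, in the JVM the component ends up deployed in
remoteUser LAZY PROPERTY user.name;
```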
We have some really complicated stuff lurking in components/cloudfarmer
that expands a .sf file locally, then uses SCP to copy over the expanded
file, then SSHes over to execute the sfDeploy command. It's designed to
work with VMs that may not be up, or may not even have resolvable DNS
names, at the start of the operation, so it does a lot of probing and
spinning, waiting for the VM and then SmartFrog to go live before
executing the commands. That is probably overkill for your tests, but it
is a handy trick if you want to manage things over long distances.
-steve
>
> I think I should give an example of why I am thinking of using SmartFrog
> in this way. In my case I am looking at a test where I need to kill a
> process on either one of a pair of machines. Since both machines are
> simultaneously under the control of the tests, I cannot simply use a
> shell command to kill the process, because this would mean running the
> tests on one particular machine, denying me access to the other. Hence
> my plan was to deploy a "process killer" task on the remote host using
> SmartFrog from my JUnit code.
>
>
> Thanks,
>
> Frank
>
>
--
-----------------------
Hewlett-Packard Limited
Registered Office: Cain Road, Bracknell, Berks RG12 1HN
Registered No: 690597 England
------------------------------------------------------------------------------
_______________________________________________
Smartfrog-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/smartfrog-users