Now, regardless of how you start JUnit, the next problem: how to deploy 
from JUnit. We have a special TestCase subclass in the 
smartfrog-testharness jar called SmartFrogTestBase. This abstract class 
provides methods to deploy SF applications to a remote SF Daemon, and to 
undeploy them in the tearDown method. It adds extra assertions to check 
that something deployed, or, if it failed to deploy, what the errors 
were:

http://smartfrog.svn.sourceforge.net/viewvc/smartfrog/trunk/core/testharness/src/org/smartfrog/test/SmartFrogTestBase.java?view=markup

This lets you deploy an application, make RMI calls against it, and 
check that it actually worked.
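The pattern is: deploy in the test method, assert the deployment came up, and undeploy in tearDown whatever the test left running. Here is a self-contained sketch of that lifecycle; the Deployment class and method names below are stand-ins for illustration, not the real SmartFrogTestBase API (which talks RMI to the daemon):

```java
// Sketch of the deploy-in-test, undeploy-in-tearDown pattern that
// SmartFrogTestBase provides. Deployment is a stand-in for a real
// SmartFrog application handle; the real class deploys over RMI.
public class DeployLifecycleSketch {

    /** Stand-in for a deployed SmartFrog application. */
    static class Deployment {
        final String name;
        boolean running = true;
        Deployment(String name) { this.name = name; }
        void terminate() { running = false; }
    }

    private Deployment deployed;

    /** Deploy a descriptor and assert it actually started. */
    Deployment deployExpectingSuccess(String descriptor, String appName) {
        Deployment d = new Deployment(appName);  // real code deploys to the daemon
        if (!d.running) {
            throw new AssertionError("Failed to deploy " + descriptor);
        }
        deployed = d;
        return d;
    }

    /** tearDown undeploys whatever the test left running. */
    void tearDown() {
        if (deployed != null && deployed.running) {
            deployed.terminate();
        }
    }

    public static void main(String[] args) {
        DeployLifecycleSketch test = new DeployLifecycleSketch();
        Deployment app = test.deployExpectingSuccess("example.sf", "example");
        System.out.println("deployed=" + app.running);
        test.tearDown();
        System.out.println("running-after-teardown=" + app.running);
    }
}
```

The point of pushing undeploy into tearDown is that it runs whether the test passed or failed, so a failing assertion doesn't leave an application deployed on the daemon.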

Going one step beyond that, we have a set of components in 
org/smartfrog/services/assertions, which
can deploy a child component, and then deploy a test sequence against 
it. You can declare in the descriptor that you expect a deployment to 
fail (and what the exception classes/error strings will be). You can 
also add a condition to only deploy a test if the condition is true; 
this will skip tests that aren't appropriate for the target system.
http://smartfrog.svn.sourceforge.net/viewvc/smartfrog/trunk/core/smartfrog/src/org/smartfrog/services/assertions/
http://smartfrog.svn.sourceforge.net/viewvc/smartfrog/trunk/core/smartfrog/src/org/smartfrog/services/assertions/testcomponents.sf?view=markup
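A test descriptor built from these components looks something like the following sketch. Every name here -- the templates, the attributes, the condition -- is illustrative only; the real templates and their attribute names are the ones defined in testcomponents.sf:

```
// illustrative sketch only -- see testcomponents.sf for the real templates
sfConfig extends TestCompound {
    // the child component under test
    action extends MyService;

    // declare that the deployment is expected to fail, and how
    expectTerminate true;
    exitText "connection refused";

    // only deploy the test when this condition holds;
    // otherwise the test is skipped on this target system
    condition extends IsLinuxCondition;
}
```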


You can still run these deployments under JUnit, though now JUnit is 
just a way to start a test from the IDE, CI server or build tool. The 
DeployingTestBase class extends SmartFrogTestBase to run a test 
deployment and wait for it to finish -or time out:

http://smartfrog.svn.sourceforge.net/viewvc/smartfrog/trunk/core/testharness/src/org/smartfrog/test/DeployingTestBase.java?view=markup

This is subclassed, and then you use the method 
expectSuccessfulTestRunOrSkip(package, test file name without the .sf 
extension) to run a file. Here, for example, is us testing the Hadoop 
cluster configuration:


public class ClusterconfTest extends HadoopTestBase {

    /** Package containing the .sf test descriptors. */
    public static final String PACKAGE =
            "/org/smartfrog/services/hadoop/test/system/local/clusterconf/";

    public ClusterconfTest(String name) {
        super(name);
    }

    public void testFilesystemOverride() throws Throwable {
        expectSuccessfulTestRunOrSkip(PACKAGE, "testFilesystemOverride");
    }

    public void testFilesystemOverrideValue() throws Throwable {
        expectSuccessfulTestRunOrSkip(PACKAGE, "testFilesystemOverrideValue");
    }

    public void testClusteredFilesystem() throws Throwable {
        // register the namenode and datanode ports for checking
        checkNameNode();
        checkDataNode();
        expectSuccessfulTestRunOrSkip(PACKAGE, "testClusteredFilesystem");
        // fail if any registered port is left open after the run
        enableFailOnPortCheck();
    }
}

Every test has a matching .sf file in the given package, such as
/org/smartfrog/services/hadoop/test/system/local/clusterconf/testFilesystemOverride.sf

The test runner will deploy this file as the application 
testFilesystemOverride, and run any tests inside it.

The final test, testClusteredFilesystem, checks that ports are closed 
after the application is terminated. This is because HadoopTestBase 
extends PortCheckingTestBase, which lets you build a list of ports that 
must be closed after the test run has finished -the test will fail with 
the port number and description text if one of the ports is left open. 
I've been using this to make sure that all of Hadoop shuts down cleanly.
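The check itself is simple: after termination, try to connect to each registered port and fail if anything answers. A self-contained sketch of that probe (method names are illustrative, not the PortCheckingTestBase API; it assumes nothing on the test machine is listening on the probed port):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch of the kind of probe a post-test port check performs: every
// registered port must refuse connections once the application has
// terminated. Method names are illustrative only.
public class PortCheckSketch {

    /** Return true if nothing is listening on host:port. */
    static boolean portIsClosed(String host, int port) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 500);
            return false;   // connected => something is still listening
        } catch (IOException expected) {
            return true;    // refused or timed out => port is closed
        }
    }

    public static void main(String[] args) {
        // e.g. probe a port the terminated deployment was using;
        // 59999 here is an arbitrary port assumed to be unused
        int port = 59999;
        if (!portIsClosed("localhost", port)) {
            throw new AssertionError("port " + port + " left open");
        }
        System.out.println("port " + port + " closed");
    }
}
```

Failing with the port number and a description, rather than a bare boolean, is what makes the report useful when several daemons share a box.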

------------------------------------------------------------------------------
_______________________________________________
Smartfrog-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/smartfrog-users