On Thu, 2012-05-03 at 22:05 +0400, Roman Grigoryev wrote:
> Hi,
> 
> I am working on a distributed file system (Lustre) and evaluating our
> test framework (we may switch from our current framework to another).
> 
> Could you please say how complex (or maybe pretty simple) it is to use
> autotest for distributed testing?

You mean distributed testing - I didn't know this as a formal term, and
the best I could find is:

http://tetworks.opengroup.org/Wpapers/distributed_whitepaper.htm

Which pretty much boils down to tests executed on different boxes
(physical or virtual), with tests interacting with one another.

So yes, autotest does have a mechanism for executing 'distributed'
tests, and 'kvm autotest' has tests that use such an arrangement. In
autotest, one of the mechanisms used to coordinate test execution among
different machines is called a barrier.

A barrier is a class that blocks test execution until all 'members' have
'checked in' at the barrier. So, consider this example from the client
version of the netperf2 test:

            if role == 'server':
                self.server_start(cpu_affinity)
                try:
                    # Wait up to ten minutes for the client to reach this
                    # point.
                    self.job.barrier(server_tag, 'start_%d' % num_streams,
                                     600).rendezvous(*all)
                    # Wait up to test_time + 5 minutes for the test to
                    # complete
                    self.job.barrier(server_tag, 'stop_%d' % num_streams,
                                     test_time+300).rendezvous(*all)
                finally:
                    self.server_stop()

            elif role == 'client':
                # Wait up to ten minutes for the server to start
                self.job.barrier(client_tag, 'start_%d' % num_streams,
                                 600).rendezvous(*all)
                self.client(server_ip, test, test_time, num_streams,
                            test_specific_args, cpu_affinity)
                # Wait up to 5 minutes for the server to also reach this point
                self.job.barrier(client_tag, 'stop_%d' % num_streams,
                                 300).rendezvous(*all)

You can see above that the client will only start the client code once
the server is active and has checked in at the barrier.
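If it helps to see the rendezvous idea in isolation, here is a minimal
standalone sketch using Python's stdlib threading.Barrier rather than
the autotest job.barrier API (so the roles run as threads in one
process instead of on separate machines; the role names and messages
are made up for illustration):

```python
import threading

results = []

def server(barrier):
    # e.g. start netserver, then wait at the 'start' rendezvous
    results.append('server: started')
    barrier.wait(timeout=10)
    # wait at the 'stop' rendezvous until the client finishes
    barrier.wait(timeout=10)
    results.append('server: stopped')

def client(barrier):
    # block until the server has checked in at the 'start' barrier
    barrier.wait(timeout=10)
    # e.g. run the netperf workload against the server
    results.append('client: ran workload')
    # check in at the 'stop' barrier so the server may shut down
    barrier.wait(timeout=10)

barrier = threading.Barrier(2)
threads = [threading.Thread(target=server, args=(barrier,)),
           threading.Thread(target=client, args=(barrier,))]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)
# the barriers force the order: server starts, client runs, server stops
```

The autotest barrier does the same thing across machines (each member
connects over the network and the barrier releases everyone once all
members, or a timeout, arrive), which is why both code paths above name
the same barrier tags.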

The 'non virt' autotest version is:

client version
https://github.com/autotest/autotest/blob/master/client/tests/netperf2/netperf2.py

server version
https://github.com/autotest/autotest/blob/master/server/tests/netperf2/netperf2.py

The version currently used on kvm autotest is:

https://github.com/autotest/autotest/blob/master/client/virt/tests/netperf.py

> E.g. I want to have 4 nodes (all VMs, real h/w, or mixed): set up the
> environment before testing, run the test, and harvest logs and other
> data. The test is initiated on one node but all nodes are used for it.
> A simple analogy: do something on an nfs server from a few clients in
> a fixed order.

So, I'd suggest you take a look at the netperf2 samples and
configuration files, and see how the same mechanism can be used for
your test.

_______________________________________________
Autotest mailing list
[email protected]
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest