Hi Lucas,

On 05/04/2012 12:22 AM, Lucas Meneghel Rodrigues wrote:
> On Thu, 2012-05-03 at 22:05 +0400, Roman Grigoryev wrote:
>> Hi,
>>
>> I am working on a distributed file system (Lustre) and evaluating our
>> test framework options (we may switch from our current framework to another).
>>
>> Could you please tell me how complex (or maybe pretty simple) it is to
>> use autotest for distributed testing?
> 
> You meant distributed testing - I didn't know this as a formal
> definition, and the best I could find is:
> 
> http://tetworks.opengroup.org/Wpapers/distributed_whitepaper.htm
> 
> Which pretty much boils down to tests executed on different boxes
> (physical or virtual), with tests interacting with one another.

Yes, that is correct.
> 
> So yes, autotest does have a mechanism for 'distributed' testing, and
> 'kvm autotest' has tests that use such an arrangement. In autotest, one
> of the mechanisms used to coordinate execution of tests among different
> machines is called a barrier.
> 
> A barrier is a class that blocks test execution until all 'members' have
> 'checked in' at the barrier. So, consider this example from the client
> version of netperf:
> 
>             if role == 'server':
>                 self.server_start(cpu_affinity)
>                 try:
>                     # Wait up to ten minutes for the client to reach this
>                     # point.
>                     self.job.barrier(server_tag, 'start_%d' % num_streams,
>                                      600).rendezvous(*all)
>                     # Wait up to test_time + 5 minutes for the test to
>                     # complete
>                     self.job.barrier(server_tag, 'stop_%d' % num_streams,
>                                      test_time+300).rendezvous(*all)
>                 finally:
>                     self.server_stop()
> 
>             elif role == 'client':
>                 # Wait up to ten minutes for the server to start
>                 self.job.barrier(client_tag, 'start_%d' % num_streams,
>                                  600).rendezvous(*all)
>                 self.client(server_ip, test, test_time, num_streams,
>                             test_specific_args, cpu_affinity)
>                 # Wait up to 5 minutes for the server to also reach this point
>                 self.job.barrier(client_tag, 'stop_%d' % num_streams,
>                                  300).rendezvous(*all)
> 
> You can see above that the client will only start the client code once
> the server is active and has checked in at the barrier.

From what I see, this code is enough for syncing remote nodes.
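
If I understand correctly, the rendezvous works like a standard barrier. A minimal conceptual sketch with Python 3's `threading.Barrier` (threads standing in for the server/client nodes here; autotest's barrier does the same rendezvous across real machines over the network):

```python
import threading

# Conceptual sketch only: two threads stand in for the two nodes.
barrier = threading.Barrier(2, timeout=600)  # two members, 10-minute timeout
results = []

def node(role):
    # Each side runs its setup, then blocks until the other checks in.
    barrier.wait()
    results.append(role)

threads = [threading.Thread(target=node, args=(r,))
           for r in ('server', 'client')]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # -> ['client', 'server']: both passed the barrier
```

If one member never reaches `wait()`, the timeout trips and the barrier breaks, which is the same failure mode as a node dropping out of a real run.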

> 
> The 'non virt' autotest version is:
> 
> client version
> https://github.com/autotest/autotest/blob/master/client/tests/netperf2/netperf2.py
> 
> server version
> https://github.com/autotest/autotest/blob/master/server/tests/netperf2/netperf2.py
> 
> The version currently used on kvm autotest is:
> 
> https://github.com/autotest/autotest/blob/master/client/virt/tests/netperf.py
> 
>> E.g. I want to have 4 nodes (all VMs, real h/w, or mixed): set up the
>> environment before testing, run the test, and harvest logs and other data.
>> The test is initiated on one node but all nodes are used for it. A simple
>> analogy: doing something on an NFS server from a few clients in a fixed order.
> 
> So, I'd suggest you take a look at the netperf2 samples, and
> configuration files, and see how the same mechanism can be used for your
> test.
> 
I'm trying to prepare a configuration for running netperf but haven't
managed to get it working. Could you please tell me where I can find a
sample configuration for running netperf?

I successfully executed qemu_kvm_f16_quick. After that I found this
discussion:
http://kerneltrap.org/mailarchive/linux-kvm/2009/7/6/6143513/thread and
tried to use the information there.
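
For reference, the server-side control file I am experimenting with follows the two-machine netperf2 pattern from that thread. This is only a rough sketch: the machine names are placeholders, and I may have some helper names slightly wrong:

```python
# Rough sketch of a two-machine netperf2 server control file, adapted
# from the examples in that thread. Machine names are placeholders.
def run(pair):
    server_host = hosts.create_host(pair[0])
    client_host = hosts.create_host(pair[1])
    at = autotest.Autotest()
    template = ("job.run_test('netperf2', server_ip='%s', "
                "client_ip='%s', role='%s')")
    server_control = template % (server_host.ip, client_host.ip, 'server')
    client_control = template % (server_host.ip, client_host.ip, 'client')
    # Run both sides in parallel; the barriers inside netperf2 keep
    # the two roles in step.
    parallel([subcommand(at.run, [server_control, server_host]),
              subcommand(at.run, [client_control, client_host])])

job.parallel_simple(run, [('server.example.com', 'client.example.com')])
```
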

My result with the netperf configuration is pretty short:
---------------------------
09:28:18 INFO | Writing results to /home/.../autotest/client/results/default
09:28:18 DEBUG| Initializing the state engine
09:28:18 DEBUG| Persistent state client.steps now set to []
09:28:18 DEBUG| Persistent option harness now set to None
09:28:18 DEBUG| Persistent option harness_args now set to None
09:28:18 DEBUG| Selected harness: standalone
09:28:18 DEBUG| Detected OS vendor: Ubuntu
09:28:18 INFO | START   ----    ----    timestamp=1336109298    localtime=May 04 09:28:18
09:28:18 DEBUG| Persistent state client._record_indent now set to 1
09:28:18 INFO | END GOOD        ----    ----    timestamp=1336109298    localtime=May 04 09:28:18
09:28:18 DEBUG| Persistent state client._record_indent now set to 0
---------------------------

Thanks,
        Roman
_______________________________________________
Autotest mailing list
[email protected]
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest