Hi guys. For a while we have been discussing ways to make the virtualization tests written on top of autotest useful for development level testing.

One of our main goals is to provide useful tools for the qemu community, since we have a good number of tests and libraries written to perform integration/QA testing for qemu, and they are successfully used by a number of QA teams that work on it. We also recently provided a subset of that infrastructure to test libvirt, another of our virtualization projects of interest.

We realized that some (admittedly not very radical) changes have to be made to autotest itself, so we're inviting other autotest users to give this a good read. This same document lives in the autotest wiki:

https://github.com/autotest/autotest/wiki/FuturePlans

Please note that splitting the virt tests out of autotest has not been ruled out at this point, and it's not incompatible with the plan outlined below.

====================================
Virt tests and autotest future goals
====================================

In order to make the autotest infrastructure, and the virt tests developed on top of it, more useful for people working on developing linux, virt and other platform software, as well as for QA teams, we are working on a number of goals to streamline and simplify the available tools and make them appropriate for *development level testing*.

Executing tests appropriate for *QA level testing* will continue to be supported, as it's one of autotest's biggest strengths.

The problem
-----------

As of today, autotest provides a local engine, used to run tests on your local machine (your laptop or a remote server). Currently, it runs tests that are properly wrapped and packaged under the autotest client/tests/ folder, following specific autotest rules.

For the virt tests that live inside autotest, there are even more rules to follow, which causes a lot of frustration for people who are not used to how things are structured and how to select and execute tests.

The proposed solution
---------------------

A solution is needed for both scenarios (virt and other general purpose tests). The idea is to create specialized tools that can run simple tests without packaging; that is, code that:

 * Knows nothing about the underlying infrastructure
 * Can be written in any language (shell script, C, perl, you name it)

It'll be up to the test runner to make sense of the results, provided that the test writer follows some simple common sense principles while writing the code:

 1) Make the program return 0 on success and != 0 on failure
 2) Make the program use a test output format, mainly TAP (Test Anything Protocol)

For simple tests, we believe that option 1) will be far more popular. Autotest will harness the execution of the test and put the results under the test results directory, with all the sysinfo collection and other instrumentation transparently available to the user.
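
For those who choose option 2), a trivial test emitting TAP output could look like the sketch below; the file name and the checks it performs are just placeholders:

::

    #!/usr/bin/python
    # tap-example.py - hypothetical simple test emitting TAP output
    import os

    print("1..2")                                  # plan: two test points
    if os.path.exists("/proc/cpuinfo"):
        print("ok 1 - /proc/cpuinfo present")
    else:
        print("not ok 1 - /proc/cpuinfo missing")
    print("ok 2 - trivial check passed")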

At some point, the test writer might want to use framework features that need to be enabled explicitly; then he/she might want to learn how to use the python API to do so, but it will not be a requirement.

More about the test runner
--------------------------

The test runner for both general and virt cases should have very simple and minimal output:

::

    Results stored in /path/to/autotest/results/default
    my-full-test.py -- PASS
    my-other-test.py -- PASS
    look-mom-i-can-use-shell.sh -- PASS
    look-mom-i-can-use-perl.pl -- FAIL
    test-name-is-the-description.sh -- PASS
    my-yet-another-test.sh -- SKIPPED
    i-like-python.py -- PASS
    whatever-test.pl -- PASS

Both will be specialized tools that use the infrastructure of client/bin/autotest-local, but with special code to conform to the output spec above. They will know how to handle dependencies and skip tests if needed.
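
Just to illustrate the idea (this is not the actual implementation), the core loop of such a runner could look something like the sketch below, where exit status 0 maps to PASS and anything else to FAIL; the results path and the SKIP condition are only placeholders:

::

    #!/usr/bin/python
    # run-simple-tests.py - illustrative sketch of a minimal runner loop.
    # The results directory and the SKIP rule are placeholders, not the
    # real autotest behavior.
    import os
    import subprocess
    import sys

    results_dir = "/path/to/autotest/results/default"
    print("Results stored in %s" % results_dir)

    exit_status = 0
    for path in sys.argv[1:]:
        name = os.path.basename(path)
        if not os.access(path, os.X_OK):
            print("%s -- SKIPPED" % name)
            continue
        if subprocess.call([os.path.abspath(path)]) == 0:
            print("%s -- PASS" % name)
        else:
            print("%s -- FAIL" % name)
            exit_status = 1
    sys.exit(exit_status)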

Directory structure
-------------------

This is just to give a rough idea of how we won't require the tests to be in the autotest source code folder:

::

    /path/to/autotest -> top level dir that makes the autotest libs available
        - client/bin -> Contains the test runners and auxiliary scripts
        - client/virt/tests -> Contains the virt tests that still live in autotest
        - client/tests/kvm/tests -> Contains the qemu tests that still live in autotest

    /any/path/test1 -> Contains tests for software foo
    /any/path/test2 -> Contains tests for software bar

    /any/path/images -> Contains minimal guest images for virtualization tests

Bootstrap procedure
-------------------

In order to comfortably use the framework features, some bootstrap steps will be needed, along the following lines:

::

    git clone git://github.com/autotest/autotest.git /path/to/autotest
    export PATH='/path/to/autotest/client/bin':$PATH
    export PYTHONPATH='/path/to/autotest':$PYTHONPATH
    export AUTOTEST_DATA='/path/to/images'
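
A quick sanity check of the bootstrap could be a small script along the lines of the sketch below; it only assumes the variables above were exported and that the planned autotest package is importable, as in the instrumented examples further down:

::

    #!/usr/bin/python
    # check-bootstrap.py - hypothetical sanity check for the bootstrap steps
    import os
    import sys

    problems = []
    if "AUTOTEST_DATA" not in os.environ:
        problems.append("AUTOTEST_DATA is not set")
    try:
        import autotest
    except ImportError:
        problems.append("autotest libs not importable (check PYTHONPATH)")

    if problems:
        for problem in problems:
            print("BOOTSTRAP PROBLEM: %s" % problem)
        sys.exit(1)
    print("Bootstrap looks fine")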

Writing tests
-------------

Simple tests, general case
~~~~~~~~~~~~~~~~~~~~~~~~~~

As previously mentioned, writing a trivial test is as simple as writing a program that exits with status 0 (success) or any other value (failure). Autotest reports PASS on success and FAIL on failure.

Simple tests, virt case
~~~~~~~~~~~~~~~~~~~~~~~

The difference is that the program might be executed on the guest or on the host, so a command line flag or environment variable might be set to indicate where the program should be executed (host, guest or both). Autotest reports PASS on success and FAIL on failure. This functionality is inspired by qemu-test, thanks to Anthony Liguori.
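
As a sketch of what this could look like, a simple test could inspect such a variable and adjust its behavior; the variable name TEST_TARGET and its values are only placeholders, not a settled interface:

::

    #!/usr/bin/python
    # where-am-i.py - hypothetical simple virt test. TEST_TARGET stands in
    # for whatever flag/variable the runner would set to "host", "guest"
    # or "both".
    import os
    import sys

    target = os.environ.get("TEST_TARGET", "host")
    print("Running on: %s" % target)
    # The same program runs unmodified on the host or inside the guest;
    # here it just reports the uptime of whatever system it landed on.
    sys.exit(os.system("uptime"))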

Instrumented tests, general case
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The test author can learn how to create an autotest wrapper for the test suite and use the specialized tool to run it.
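
For reference, a minimal wrapper in the classic client test style could look like the sketch below; the import paths may differ between autotest versions, and "mysuite" and the command it runs are placeholders:

::

    # client/tests/mysuite/mysuite.py - sketch of a wrapper around an
    # external test suite; names and paths are placeholders.
    from autotest.client import test, utils


    class mysuite(test.test):
        version = 1

        def run_once(self):
            # utils.system() raises on non-zero exit status, which the
            # framework turns into a test failure.
            utils.system("/path/to/mysuite/run-all-tests")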

Instrumented tests, virt case
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The test author can learn how to create an instrumented test for virtualization using the python APIs, or in any other language using auxiliary scripts that encapsulate high level functionality for use in shell scripts or other languages. Ideas for auxiliary scripts:

::

    virt_run_migration [params] [options]
    virt_run_timedrift [params] [options]
    virt_run_nic_hotplug [params] [options]
    virt_run_block_hotplug [params] [options]

Where tests live
----------------

The tests won't need to be in the autotest tree; they can live anywhere. The reason for this is that projects need in-tree tests that can be maintained by the project maintainers.

The standard use case for virt is to have both trivial and instrumented tests living in the respective project's tree (qemu and libvirt). Trivial tests don't need the autotest libs, while instrumented tests will, but that's OK provided that the appropriate bootstrap procedure was followed.

Test Examples
-------------

simple, non instrumented
~~~~~~~~~~~~~~~~~~~~~~~~

::

    uptime.sh:
        #!/bin/sh
        exec uptime

    uptime.py:
        #!/usr/bin/python
        import os, sys
        sys.exit(os.system("uptime"))

    uptime.pl:
        #!/usr/bin/perl
        system("uptime");
        exit($?);

    qemu-img-convert.sh:
        #!/bin/bash
        qemu-img convert -O qcow2 $DATA/qemu-imgs/reference.vdi $TEMPDIR/output.qcow2
        diff -b $TEMPDIR/output.qcow2 $DATA/qemu-imgs/reference.qcow2 > /dev/null
        ...

uptime.py - instrumented using libautotest
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

::

    #!/usr/bin/python

    from autotest import utils, logging

    def run_uptime_host(test, params, env):
        uptime = utils.system_output("uptime")
        logging.info("Host uptime result is: %s", uptime)


uptime.py - host/guest mode, instrumented using libautotest
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

::

    #!/usr/bin/python

    from autotest import utils, logging

    def run_uptime_host_and_guest(test, params, env):
        vm = env.get_vm(params["main_vm"])
        vm.verify_alive()
        session = vm.wait_for_login()

        uptime_guest = session.cmd("uptime")
        logging.info("Guest uptime result is: %s", uptime_guest)

        uptime_host = utils.system_output("uptime")
        logging.info("Host uptime result is: %s", uptime_host)


Virt/qemu tests: Minimal guest images
-------------------------------------

In order to make development level testing possible, we need the tests to run fast. To that end, a set of minimal guest images is being developed, and we already have a functional version for x86_64:

https://github.com/autotest/buildroot-autotest

This is a repo based on the buildroot project; it tracks the upstream project and contains branches to generate minimal images for different architectures (so far x86_64), so people can reproduce the images available here:

http://lmr.fedorapeople.org/jeos_images/

For now, we have an x86_64 image already built:

http://lmr.fedorapeople.org/jeos_images/jeos_x86_64.tar.bz2

This is an 18 MB (bz2 tarball) image that expands to about 50 MB, with the latest stable linux, busybox, python, ssh and networking. It is fairly capable for its size, being able to run a fair amount of testing with short boot times. This functionality is also inspired by qemu-test, by Anthony Liguori.

The specialized virt/qemu tool will use these images, downloading them if needed to run its tests.
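
A rough sketch of what that download step could look like follows; the URL is the one listed above, while the destination directory (reusing the AUTOTEST_DATA example from the bootstrap section) and the local file name are only examples:

::

    #!/usr/bin/python
    # fetch-jeos.py - illustrative sketch of fetching the minimal guest image
    import os
    import tarfile
    import urllib

    url = "http://lmr.fedorapeople.org/jeos_images/jeos_x86_64.tar.bz2"
    data_dir = os.environ.get("AUTOTEST_DATA", "/path/to/images")
    tarball = os.path.join(data_dir, "jeos_x86_64.tar.bz2")

    if not os.path.isdir(data_dir):
        os.makedirs(data_dir)
    if not os.path.isfile(tarball):
        # Download only when the tarball is not already cached locally
        urllib.urlretrieve(url, tarball)
    tarfile.open(tarball, "r:bz2").extractall(data_dir)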

Where we are, how to help
-------------------------

We have a prototype version of the virt specialized tool (which still does not implement the output spec), as well as a functional x86_64 guest image and a recipe to re-create it. There's still a lot of work to do, and we'd like your input to help us. The work items are being tracked under the future-vision label on the autotest issue tracker:

https://github.com/autotest/autotest/issues?labels=future-vision&sort=created&direction=desc&state=open&page=1

You can also help us by giving feedback about this plan. Thank you!
