Re: [Autotest] [PATCH] IOzone preprocessing: Fix wrong column mapping on graph generation

2010-05-06 Thread Martin Bligh
LGTM

On Thu, May 6, 2010 at 6:24 AM, Lucas Meneghel Rodrigues  
wrote:
> Fix a silly bug on graph generation: it was mapping the wrong
> columns when plotting the 2D throughput graphs. Sorry for the
> mistake.
>
> Signed-off-by: Lucas Meneghel Rodrigues 
> ---
>  client/tests/iozone/postprocessing.py |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/client/tests/iozone/postprocessing.py 
> b/client/tests/iozone/postprocessing.py
> index c995aea..3a77c83 100755
> --- a/client/tests/iozone/postprocessing.py
> +++ b/client/tests/iozone/postprocessing.py
> @@ -384,7 +384,7 @@ class IOzonePlotter(object):
>         record size vs. throughput.
>         """
>         datasource_2d = os.path.join(self.output_dir, '2d-datasource-file')
> -        for index, label in zip(range(1, 14), _LABELS[2:]):
> +        for index, label in zip(range(2, 15), _LABELS[2:]):
>             commands_path = os.path.join(self.output_dir, '2d-%s.do' % label)
>             commands = ""
>             commands += "set title 'Iozone performance: %s'\n" % label
> --
> 1.7.0.1
>
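For anyone following the fix: gnuplot's `using` columns are 1-based, and the throughput labels start at index 2 of _LABELS, so the old range(1, 14) paired each label with the column one to its left. A small standalone sketch of the corrected mapping (assuming the 2d-datasource-file carries record size in column 1 and the throughput results in columns 2 onwards; this is an illustration, not part of the patch):

    # Sketch of the corrected label-to-column pairing (assumption: the
    # 2d-datasource-file has record_size in column 1 and the throughput
    # results in columns 2..14, matching _LABELS[2:] from postprocessing.py).
    _LABELS = ('file_size', 'record_size', 'write', 'rewrite', 'read', 'reread',
               'randread', 'randwrite', 'bkwdread', 'recordrewrite', 'strideread',
               'fwrite', 'frewrite', 'fread', 'freread')

    # Before the fix, zip(range(1, 14), _LABELS[2:]) paired 'write' with
    # column 1 (record_size); range(2, 15) pairs every label with its own column.
    for index, label in zip(range(2, 15), _LABELS[2:]):
        print("column %d -> %s" % (index, label))   # 2 -> write ... 14 -> freread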


Re: [PATCH] IOzone test: Introduce postprocessing module v2

2010-05-03 Thread Martin Bligh
Yup, fair enough. Go ahead and check it in. If we end up doing
this in another test, we should make an abstraction.

On Mon, May 3, 2010 at 5:39 PM, Lucas Meneghel Rodrigues  
wrote:
> On Mon, 2010-05-03 at 16:52 -0700, Martin Bligh wrote:
>> only thing that strikes me is whether the gnuplot support
>> should be abstracted out a bit. See tko/plotgraph.py ?
>
> I thought about it. Ideally, we would do all the plotting using a python
> library, such as matplotlib, which has a decent API. However, I spent
> quite some time trying to figure out how to draw the surface graphs
> using matplotlib and in the end, I gave up (3D support in that lib is
> just starting). There are some other libs, such as mayavi
> (http://mayavi.sourceforge.net) that I would like to try out in the near
> future. Your code in plotgraph.py is aimed at 2D graphs, a good
> candidate for replacement using matplotlib (its 2D support is
> excellent).
>
> So, instead of spending much time encapsulating gnuplot in a nice API,
> I'd prefer to keep this intermediate work (it does the job anyway) and
> get back to this subject when possible. What do you think?
>
>> On Mon, May 3, 2010 at 2:52 PM, Lucas Meneghel Rodrigues  
>> wrote:
>> > This module contains code to postprocess IOzone data
>> > in a convenient way so we can generate performance graphs
>> > and condensed data. The graph generation part depends
>> > on gnuplot, but if the utility is not present,
>> > functionality will gracefully degrade.
>> >
>> > Use the postprocessing module introduced in the previous
>> > patch to analyze results and write performance
>> > graphs and performance tables.
>> >
>> > Also, in order for other tests to be able to use the
>> > postprocessing code, added the right __init__.py
>> > files, so a simple
>> >
>> > from autotest_lib.client.tests.iozone import postprocessing
>> >
>> > will work
>> >
>> > Note: Martin, as patch will ignore and not create the
>> > zero-sized files (high time we move to git), if the changes
>> > look good to you I can commit them all at once, making sure
>> > all files are created.
>> >
>> > Signed-off-by: Lucas Meneghel Rodrigues 
>> > ---
>> >  client/tests/iozone/common.py         |    8 +
>> >  client/tests/iozone/iozone.py         |   25 ++-
>> >  client/tests/iozone/postprocessing.py |  487 
>> > +
>> >  3 files changed, 515 insertions(+), 5 deletions(-)
>> >  create mode 100644 client/tests/__init__.py
>> >  create mode 100644 client/tests/iozone/__init__.py
>> >  create mode 100644 client/tests/iozone/common.py
>> >  create mode 100755 client/tests/iozone/postprocessing.py
>> >
>> > diff --git a/client/tests/__init__.py b/client/tests/__init__.py
>> > new file mode 100644
>> > index 000..e69de29
>> > diff --git a/client/tests/iozone/__init__.py 
>> > b/client/tests/iozone/__init__.py
>> > new file mode 100644
>> > index 000..e69de29
>> > diff --git a/client/tests/iozone/common.py b/client/tests/iozone/common.py
>> > new file mode 100644
>> > index 000..ce78b85
>> > --- /dev/null
>> > +++ b/client/tests/iozone/common.py
>> > @@ -0,0 +1,8 @@
>> > +import os, sys
>> > +dirname = os.path.dirname(sys.modules[__name__].__file__)
>> > +client_dir = os.path.abspath(os.path.join(dirname, "..", ".."))
>> > +sys.path.insert(0, client_dir)
>> > +import setup_modules
>> > +sys.path.pop(0)
>> > +setup_modules.setup(base_path=client_dir,
>> > +                    root_module_name="autotest_lib.client")
>> > diff --git a/client/tests/iozone/iozone.py b/client/tests/iozone/iozone.py
>> > index fa3fba4..03c2c04 100755
>> > --- a/client/tests/iozone/iozone.py
>> > +++ b/client/tests/iozone/iozone.py
>> > @@ -1,5 +1,6 @@
>> >  import os, re
>> >  from autotest_lib.client.bin import test, utils
>> > +import postprocessing
>> >
>> >
>> >  class iozone(test.test):
>> > @@ -63,17 +64,19 @@ class iozone(test.test):
>> >         self.results = utils.system_output('%s %s' % (cmd, args))
>> >         self.auto_mode = ("-a" in args)
>> >
>> > -        path = os.path.join(self.resultsdir, 'raw_output_%s' % 
>> > self.iteration)
>> > -        raw_output_file = open(path, 'w')
>>
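The hunk is truncated here by the archive; the complete diff (quoted again in the next message) replaces those three raw_output_file lines with a call to utils.open_write_close(self.results_path, self.results). A minimal equivalent of that helper, inferred only from its call site (a hedged sketch, not the actual autotest implementation):

    def open_write_close(path, data):
        # Does what the removed lines above did: open, write, close.
        f = open(path, 'w')
        try:
            f.write(data)
        finally:
            f.close()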

Re: [PATCH] IOzone test: Introduce postprocessing module v2

2010-05-03 Thread Martin Bligh
only thing that strikes me is whether the gnuplot support
should be abstracted out a bit. See tko/plotgraph.py ?

On Mon, May 3, 2010 at 2:52 PM, Lucas Meneghel Rodrigues  
wrote:
> This module contains code to postprocess IOzone data
> in a convenient way so we can generate performance graphs
> and condensed data. The graph generation part depends
> on gnuplot, but if the utility is not present,
> functionality will gracefully degrade.
>
> Use the postprocessing module introduced in the previous
> patch to analyze results and write performance
> graphs and performance tables.
>
> Also, in order for other tests to be able to use the
> postprocessing code, added the right __init__.py
> files, so a simple
>
> from autotest_lib.client.tests.iozone import postprocessing
>
> will work
>
> Note: Martin, as patch will ignore and not create the
> zero-sized files (high time we move to git), if the changes
> look good to you I can commit them all at once, making sure
> all files are created.
>
> Signed-off-by: Lucas Meneghel Rodrigues 
> ---
>  client/tests/iozone/common.py         |    8 +
>  client/tests/iozone/iozone.py         |   25 ++-
>  client/tests/iozone/postprocessing.py |  487 
> +
>  3 files changed, 515 insertions(+), 5 deletions(-)
>  create mode 100644 client/tests/__init__.py
>  create mode 100644 client/tests/iozone/__init__.py
>  create mode 100644 client/tests/iozone/common.py
>  create mode 100755 client/tests/iozone/postprocessing.py
>
> diff --git a/client/tests/__init__.py b/client/tests/__init__.py
> new file mode 100644
> index 000..e69de29
> diff --git a/client/tests/iozone/__init__.py b/client/tests/iozone/__init__.py
> new file mode 100644
> index 000..e69de29
> diff --git a/client/tests/iozone/common.py b/client/tests/iozone/common.py
> new file mode 100644
> index 000..ce78b85
> --- /dev/null
> +++ b/client/tests/iozone/common.py
> @@ -0,0 +1,8 @@
> +import os, sys
> +dirname = os.path.dirname(sys.modules[__name__].__file__)
> +client_dir = os.path.abspath(os.path.join(dirname, "..", ".."))
> +sys.path.insert(0, client_dir)
> +import setup_modules
> +sys.path.pop(0)
> +setup_modules.setup(base_path=client_dir,
> +                    root_module_name="autotest_lib.client")
> diff --git a/client/tests/iozone/iozone.py b/client/tests/iozone/iozone.py
> index fa3fba4..03c2c04 100755
> --- a/client/tests/iozone/iozone.py
> +++ b/client/tests/iozone/iozone.py
> @@ -1,5 +1,6 @@
>  import os, re
>  from autotest_lib.client.bin import test, utils
> +import postprocessing
>
>
>  class iozone(test.test):
> @@ -63,17 +64,19 @@ class iozone(test.test):
>         self.results = utils.system_output('%s %s' % (cmd, args))
>         self.auto_mode = ("-a" in args)
>
> -        path = os.path.join(self.resultsdir, 'raw_output_%s' % 
> self.iteration)
> -        raw_output_file = open(path, 'w')
> -        raw_output_file.write(self.results)
> -        raw_output_file.close()
> +        self.results_path = os.path.join(self.resultsdir,
> +                                         'raw_output_%s' % self.iteration)
> +        self.analysisdir = os.path.join(self.resultsdir,
> +                                        'analysis_%s' % self.iteration)
> +
> +        utils.open_write_close(self.results_path, self.results)
>
>
>     def __get_section_name(self, desc):
>         return desc.strip().replace(' ', '_')
>
>
> -    def postprocess_iteration(self):
> +    def generate_keyval(self):
>         keylist = {}
>
>         if self.auto_mode:
> @@ -150,3 +153,15 @@ class iozone(test.test):
>                             keylist[key_name] = result
>
>         self.write_perf_keyval(keylist)
> +
> +
> +    def postprocess_iteration(self):
> +        self.generate_keyval()
> +        if self.auto_mode:
> +            a = postprocessing.IOzoneAnalyzer(list_files=[self.results_path],
> +                                              output_dir=self.analysisdir)
> +            a.analyze()
> +            p = postprocessing.IOzonePlotter(results_file=self.results_path,
> +                                             output_dir=self.analysisdir)
> +            p.plot_all()
> +
> diff --git a/client/tests/iozone/postprocessing.py 
> b/client/tests/iozone/postprocessing.py
> new file mode 100755
> index 000..c995aea
> --- /dev/null
> +++ b/client/tests/iozone/postprocessing.py
> @@ -0,0 +1,487 @@
> +#!/usr/bin/python
> +"""
> +Postprocessing module for IOzone. It is capable of picking results from an
> +IOzone run, calculating the geometric mean of all throughput results for
> +a given file size or record size, and then generating a series of 2D and 3D
> +graphs. The graph generation functionality depends on gnuplot, and if it
> +is not present, functionality degrades gracefully.
> +
> +@copyright: Red Hat 2010
> +"""
> +import os, sys, optparse, logging, math, time
> +import common
> +from autotest_lib.client.common_lib import logging_config, logging_manager
> +from aut
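The archive cuts the postprocessing module off here, but the iozone.py hunk above already shows how it is meant to be driven. The same two classes can be used outside the test as well; a sketch (the paths below are illustrative, and the import relies on the __init__.py files added by this patch):

    from autotest_lib.client.tests.iozone import postprocessing

    results_path = '/tmp/raw_output_1'   # raw IOzone output captured by the test
    analysisdir = '/tmp/analysis_1'      # where tables and graphs will be written

    analyzer = postprocessing.IOzoneAnalyzer(list_files=[results_path],
                                             output_dir=analysisdir)
    analyzer.analyze()
    plotter = postprocessing.IOzonePlotter(results_file=results_path,
                                           output_dir=analysisdir)
    plotter.plot_all()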

Re: [Autotest] [PATCH 1/2] IOzone test: Introduce postprocessing module

2010-04-30 Thread Martin Bligh
On Fri, Apr 30, 2010 at 2:37 PM, Lucas Meneghel Rodrigues
 wrote:
> On Fri, 2010-04-30 at 14:23 -0700, Martin Bligh wrote:
>> I'm slightly surprised this isn't called from postprocess
>> in the test? Any downside to doing that?
>
> In the second patch I do the change to make the test to use the
> postprocessing module.

Ah, OK, missed that. Will go look. This one looks good.

>
>> On Fri, Apr 30, 2010 at 2:20 PM, Lucas Meneghel Rodrigues
>>  wrote:
>> > This module contains code to postprocess IOzone data
>> > in a convenient way so we can generate performance graphs
>> > and condensed data. The graph generation part depends
>> > on gnuplot, but if the utility is not present,
>> > functionality will gracefully degrade.
>> >
>> > The reason why this was created as a separate module is:
>> >  * It doesn't pollute the main test class.
>> >  * Allows us to use the postprocess module as a stand alone program,
>> >   that can even do performance comparison between 2 IOzone runs.
>> >
>> > Signed-off-by: Lucas Meneghel Rodrigues 
>> > ---
>> >  client/tests/iozone/postprocessing.py |  487 
>> > +
>> >  1 files changed, 487 insertions(+), 0 deletions(-)
>> >  create mode 100755 client/tests/iozone/postprocessing.py
>> >
>> > diff --git a/client/tests/iozone/postprocessing.py 
>> > b/client/tests/iozone/postprocessing.py
>> > new file mode 100755
>> > index 000..b495502
>> > --- /dev/null
>> > +++ b/client/tests/iozone/postprocessing.py
>> > @@ -0,0 +1,487 @@
>> > +#!/usr/bin/python
>> > +"""
>> > +Postprocessing module for IOzone. It is capable of picking results from an
>> > +IOzone run, calculating the geometric mean of all throughput results for
>> > +a given file size or record size, and then generating a series of 2D and 3D
>> > +graphs. The graph generation functionality depends on gnuplot, and if it
>> > +is not present, functionality degrades gracefully.
>> > +
>> > +@copyright: Red Hat 2010
>> > +"""
>> > +import os, sys, optparse, logging, math, time
>> > +import common
>> > +from autotest_lib.client.common_lib import logging_config, logging_manager
>> > +from autotest_lib.client.common_lib import error
>> > +from autotest_lib.client.bin import utils, os_dep
>> > +
>> > +
>> > +_LABELS = ('file_size', 'record_size', 'write', 'rewrite', 'read', 
>> > 'reread',
>> > +           'randread', 'randwrite', 'bkwdread', 'recordrewrite', 
>> > 'strideread',
>> > +           'fwrite', 'frewrite', 'fread', 'freread')
>> > +
>> > +
>> > +def unique(list):
>> > +    """
>> > +    Return a list of the elements in list, but without duplicates.
>> > +
>> > +    @param list: List with values.
>> > +    @return: List with non-duplicate elements.
>> > +    """
>> > +    n = len(list)
>> > +    if n == 0:
>> > +        return []
>> > +    u = {}
>> > +    try:
>> > +        for x in list:
>> > +            u[x] = 1
>> > +    except TypeError:
>> > +        return None
>> > +    else:
>> > +        return u.keys()
>> > +
>> > +
>> > +def geometric_mean(values):
>> > +    """
>> > +    Evaluates the geometric mean for a list of numeric values.
>> > +
>> > +    @param values: List with values.
>> > +    @return: Single value representing the geometric mean for the list values.
>> > +    @see: http://en.wikipedia.org/wiki/Geometric_mean
>> > +    """
>> > +    try:
>> > +        values = [int(value) for value in values]
>> > +    except ValueError:
>> > +        return None
>> > +    product = 1
>> > +    n = len(values)
>> > +    if n == 0:
>> > +        return None
>> > +    return math.exp(sum([math.log(x) for x in values])/n)
>> > +
>> > +
>> > +def compare_matrices(matrix1, matrix2, treshold=0.05):
>> > +    """
>> > +    Compare 2 matrices nxm and return a matrix nxm with comparison data
>> > +
>> > +    @param mat

Re: [Autotest] [PATCH 1/2] IOzone test: Introduce postprocessing module

2010-04-30 Thread Martin Bligh
I'm slightly surprised this isn't called from postprocess
in the test? Any downside to doing that?

On Fri, Apr 30, 2010 at 2:20 PM, Lucas Meneghel Rodrigues
 wrote:
> This module contains code to postprocess IOzone data
> in a convenient way so we can generate performance graphs
> and condensed data. The graph generation part depends
> on gnuplot, but if the utility is not present,
> functionality will gracefully degrade.
>
> The reason why this was created as a separate module is:
>  * It doesn't pollute the main test class.
>  * Allows us to use the postprocess module as a stand alone program,
>   that can even do performance comparison between 2 IOzone runs.
>
> Signed-off-by: Lucas Meneghel Rodrigues 
> ---
>  client/tests/iozone/postprocessing.py |  487 
> +
>  1 files changed, 487 insertions(+), 0 deletions(-)
>  create mode 100755 client/tests/iozone/postprocessing.py
>
> diff --git a/client/tests/iozone/postprocessing.py 
> b/client/tests/iozone/postprocessing.py
> new file mode 100755
> index 000..b495502
> --- /dev/null
> +++ b/client/tests/iozone/postprocessing.py
> @@ -0,0 +1,487 @@
> +#!/usr/bin/python
> +"""
> +Postprocessing module for IOzone. It is capable of picking results from an
> +IOzone run, calculating the geometric mean of all throughput results for
> +a given file size or record size, and then generating a series of 2D and 3D
> +graphs. The graph generation functionality depends on gnuplot, and if it
> +is not present, functionality degrades gracefully.
> +
> +@copyright: Red Hat 2010
> +"""
> +import os, sys, optparse, logging, math, time
> +import common
> +from autotest_lib.client.common_lib import logging_config, logging_manager
> +from autotest_lib.client.common_lib import error
> +from autotest_lib.client.bin import utils, os_dep
> +
> +
> +_LABELS = ('file_size', 'record_size', 'write', 'rewrite', 'read', 'reread',
> +           'randread', 'randwrite', 'bkwdread', 'recordrewrite', 
> 'strideread',
> +           'fwrite', 'frewrite', 'fread', 'freread')
> +
> +
> +def unique(list):
> +    """
> +    Return a list of the elements in list, but without duplicates.
> +
> +    @param list: List with values.
> +    @return: List with non-duplicate elements.
> +    """
> +    n = len(list)
> +    if n == 0:
> +        return []
> +    u = {}
> +    try:
> +        for x in list:
> +            u[x] = 1
> +    except TypeError:
> +        return None
> +    else:
> +        return u.keys()
> +
> +
> +def geometric_mean(values):
> +    """
> +    Evaluates the geometric mean for a list of numeric values.
> +
> +    @param values: List with values.
> +    @return: Single value representing the geometric mean for the list values.
> +    @see: http://en.wikipedia.org/wiki/Geometric_mean
> +    """
> +    try:
> +        values = [int(value) for value in values]
> +    except ValueError:
> +        return None
> +    product = 1
> +    n = len(values)
> +    if n == 0:
> +        return None
> +    return math.exp(sum([math.log(x) for x in values])/n)
> +
> +
> +def compare_matrices(matrix1, matrix2, treshold=0.05):
> +    """
> +    Compare 2 matrices nxm and return a matrix nxm with comparison data
> +
> +    @param matrix1: Reference Matrix with numeric data
> +    @param matrix2: Matrix that will be compared
> +    @param treshold: Any difference bigger than this percent treshold will be
> +            reported.
> +    """
> +    improvements = 0
> +    regressions = 0
> +    same = 0
> +    comparison_matrix = []
> +
> +    new_matrix = []
> +    for line1, line2 in zip(matrix1, matrix2):
> +        new_line = []
> +        for element1, element2 in zip(line1, line2):
> +            ratio = float(element2) / float(element1)
> +            if ratio < (1 - treshold):
> +                regressions += 1
> +                new_line.append((100 * ratio - 1) - 100)
> +            elif ratio > (1 + treshold):
> +                improvements += 1
> +                new_line.append("+" + str((100 * ratio - 1) - 100))
> +            else:
> +                same += 1
> +                if line1.index(element1) == 0:
> +                    new_line.append(element1)
> +                else:
> +                    new_line.append(".")
> +        new_matrix.append(new_line)
> +
> +    total = improvements + regressions + same
> +
> +    return (new_matrix, improvements, regressions, total)
> +
> +
> +class IOzoneAnalyzer(object):
> +    """
> +    Analyze an unprocessed IOzone file, and generate the following types of
> +    report:
> +
> +    * Summary of throughput for all file and record sizes combined
> +    * Summary of throughput for all file sizes
> +    * Summary of throughput for all record sizes
> +
> +    If more than one file is provided to the analyzer object, a comparison
> +    between the two runs is made, searching for regressions in performance.
> +    """
> +    def __init__(self, list_files, output_dir):
> +       
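Since the patch is truncated at this point, a quick sanity check of the helper quoted above may help reviewers: geometric_mean() parses its inputs as integers and returns exp of the mean of the logs. A short sketch, assuming the module is importable (the v2 of this patch, earlier in this digest, adds the __init__.py files for exactly that):

    from autotest_lib.client.tests.iozone import postprocessing

    print(postprocessing.geometric_mean(['2', '8']))    # ~4.0, i.e. sqrt(2 * 8)
    print(postprocessing.geometric_mean([]))            # None: empty input
    print(postprocessing.geometric_mean(['x', 'y']))    # None: values must parse as ints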

Re: [Autotest] [PATCH] Monotonic time test: Don't force static compilation of time_test

2010-03-23 Thread Martin Bligh
On Tue, Mar 23, 2010 at 1:56 PM, Lucas Meneghel Rodrigues
 wrote:
> On Tue, Mar 23, 2010 at 3:25 PM, Martin Bligh  wrote:
>> +cc:md (he wrote the test).
>>
>> On Tue, Mar 23, 2010 at 11:13 AM, Lucas Meneghel Rodrigues
>>  wrote:
>>> The Makefile for the monotonic_test C program forces static
>>> compilation of the object files. Since we are compiling the
>>> code already, not having a static binary doesn't make much
>>> of a difference on the systems we are running this test.
>>>
>>> As the static compilation might fail in some boxes, just remove
>>> this constraint from the Makefile.
>>
>> I presume this was to fix some Google interdependency.
>> Is it actually breaking something? If not, seems safer to leave it?
>> If so, we'll have to fix one end or the other ;-)
>
> Yes, I can't get a static build on a Fedora 13 box by any means; that's
> why I looked into what was going wrong and cooked up this patch. If
> someone has any suggestions of what I need to do to work around this,
> let me know.

OK, sounds like Michael is happy - and there's a real problem to fix.
LGTM - go ahead and apply it.

Thanks,

M.


Re: [Autotest] [PATCH] Monotonic time test: Don't force static compilation of time_test

2010-03-23 Thread Martin Bligh
+cc:md (he wrote the test).

On Tue, Mar 23, 2010 at 11:13 AM, Lucas Meneghel Rodrigues
 wrote:
> The Makefile for the monotonic_test C program forces static
> compilation of the object files. Since we are compiling the
> code already, not having a static binary doesn't make much
> of a difference on the systems we are running this test.
>
> As the static compilation might fail in some boxes, just remove
> this constraint from the Makefile.

I presume this was to fix some Google interdependency.
Is it actually breaking something? If not, seems safer to leave it?
If so, we'll have to fix one end or the other ;-)

> Signed-off-by: Lucas Meneghel Rodrigues 
> ---
>  client/tests/monotonic_time/src/Makefile |    1 -
>  1 files changed, 0 insertions(+), 1 deletions(-)
>
> diff --git a/client/tests/monotonic_time/src/Makefile 
> b/client/tests/monotonic_time/src/Makefile
> index 56aa7b6..2121ec4 100644
> --- a/client/tests/monotonic_time/src/Makefile
> +++ b/client/tests/monotonic_time/src/Makefile
> @@ -1,7 +1,6 @@
>  CC=    cc
>
>  CFLAGS=        -O -std=gnu99 -Wall
> -LDFLAGS=-static
>  LIBS=  -lpthread -lrt
>
>  PROG=  time_test
> --
> 1.6.6.1
>


Re: [Autotest] [PATCH] Fix autotest client when checking only client from svn

2009-12-01 Thread Martin Bligh
Yup, seems important - lmr, do you want to go ahead and apply this? I'm stuck
in a meeting for a while.

On Tue, Dec 1, 2009 at 2:39 PM, John Admanski  wrote:
> This looks good to me.
>
> -- John
>
> On Tue, Dec 1, 2009 at 2:37 PM, Lucas Meneghel Rodrigues  
> wrote:
>> When the client was made configurable through
>> global_config.ini, the scenario "developer
>> checking out client directory only" wasn't
>> considered, and an exception would be thrown
>> due to the lack of a global_config.ini.
>>
>> In order to fix this, instead of throwing an
>> exception, just print a warning in the cases that
>> matter, and give all the values that are
>> supposed to be read from the config file
>> a sensible default.
>>
>> 2nd try: Now a warning is printed only when
>> autotest is being run through autoserv, since
>> that's probably the only case worth printing
>> a warning for.
>>
>> Signed-off-by: Lucas Meneghel Rodrigues 
>> ---
>>  client/bin/autotest                   |    3 ++-
>>  client/bin/harness_autoserv.py        |   15 ++-
>>  client/bin/job.py                     |    2 +-
>>  client/common_lib/global_config.py    |   17 -
>>  client/common_lib/host_protections.py |    9 -
>>  5 files changed, 29 insertions(+), 17 deletions(-)
>>
>> diff --git a/client/bin/autotest b/client/bin/autotest
>> index 285be4e..c83e755 100755
>> --- a/client/bin/autotest
>> +++ b/client/bin/autotest
>> @@ -61,7 +61,8 @@ if len(args) != 1:
>>
>>  drop_caches = global_config.global_config.get_config_value('CLIENT',
>>                                                            'drop_caches',
>> -                                                           type=bool)
>> +                                                           type=bool,
>> +                                                           default=True)
>>
>>  # JOB: run the specified job control file.
>>  job.runjob(os.path.realpath(args[0]), drop_caches, options)
>> diff --git a/client/bin/harness_autoserv.py b/client/bin/harness_autoserv.py
>> index 4ea16e4..0bfbcdd 100644
>> --- a/client/bin/harness_autoserv.py
>> +++ b/client/bin/harness_autoserv.py
>> @@ -1,5 +1,6 @@
>> -import os, logging
>> +import os, logging, ConfigParser
>>  from autotest_lib.client.common_lib import autotemp, base_packages, error
>> +from autotest_lib.client.common_lib import global_config
>>  from autotest_lib.client.bin import harness
>>
>>
>> @@ -20,6 +21,18 @@ class harness_autoserv(harness.harness):
>>         super(harness_autoserv, self).__init__(job)
>>         self.status = os.fdopen(3, 'w', 0)
>>
>> +        # If a bug on the client run code prevents global_config.ini
>> +        # from being copied to the client machine, the client will run
>> +        # without a global config, relying only on the defaults of the
>> +        # config items. To avoid that happening silently, the check below
>> +        # was written.
>> +        try:
>> +            cfg = global_config.global_config.get_section_values("CLIENT")
>> +        except ConfigParser.NoSectionError:
>> +            logging.error("Empty CLIENT configuration section. "
>> +                          "global_config.ini missing. This probably means "
>> +                          "a bug in the server code. Please verify.")
>> +
>>
>>     def run_start(self):
>>         # set up the package fetcher for direct-from-autoserv fetches
>> diff --git a/client/bin/job.py b/client/bin/job.py
>> index 7021105..f879100 100755
>> --- a/client/bin/job.py
>> +++ b/client/bin/job.py
>> @@ -233,7 +233,7 @@ class base_client_job(base_job.base_job):
>>         self.drop_caches_between_iterations = (
>>                        global_config.global_config.get_config_value('CLIENT',
>>                                             'drop_caches_between_iterations',
>> -                                            type=bool))
>> +                                            type=bool, default=True))
>>         self.drop_caches = drop_caches
>>         if self.drop_caches:
>>             logging.debug("Dropping caches")
>> diff --git a/client/common_lib/global_config.py 
>> b/client/common_lib/global_config.py
>> index 04ab7ff..24a93ea 100644
>> --- a/client/common_lib/global_config.py
>> +++ b/client/common_lib/global_config.py
>> @@ -5,7 +5,7 @@ provides access to global configuration file
>>
>>  __author__ = 'raph...@google.com (Travis Miller)'
>>
>> -import os, sys, ConfigParser
>> +import os, sys, ConfigParser, logging
>>  from autotest_lib.client.common_lib import error
>>
>>
>> @@ -44,11 +44,9 @@ elif config_in_client:
>>     DEFAULT_SHADOW_FILE = None
>>     RUNNING_STAND_ALONE_CLIENT = True
>>  else:
>> -    raise ConfigError("Could not find configuration files "
>> -                      "needed for this program to function. Please refer to 
>> "
>> -                      "http://autotest.kernel.org/wiki/GlobalConfig "
>> -                      "for more info.")
>> -
>> +    DEFAULT_CONFIG_FILE = None
>> 
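The pattern the patch applies throughout is already visible in the client/bin/autotest hunk above: pass default= to get_config_value so a client-only checkout without global_config.ini keeps working. Condensed from that hunk (nothing new introduced, only the call the patch itself modifies):

    from autotest_lib.client.common_lib import global_config

    # With default= supplied, a missing global_config.ini yields the default
    # instead of raising ConfigError on a client-only checkout.
    drop_caches = global_config.global_config.get_config_value(
            'CLIENT', 'drop_caches', type=bool, default=True)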

Re: [Autotest] [PATCH] Move global configuration files to client dir

2009-11-11 Thread Martin Bligh
> I thought about it a bit more:
>
> Maybe a better approach would be to have the global_config module find
> the ini file in job.autodir (so on a client it would show up in the
> client/ dir, and on the server in the "true" top-level dir) and then
> add support to Autotest.run so that it copies over the server's copy
> of the config to the client before launching a client job?
>
> So that way it would "just work", and changes to the server config
> would automatically get pushed out to client jobs. All without moving
> the file that users running a server need to edit. And it's not too
> complex of a design; the Autotest.run code already needs to copy over
> a few files by hand like control files so copying over the config too
> isn't too much of a burden.

Sounds good to me.

> The only concern I have is that this still might not play well with a
> multi-server setup. If the servers have different configs I'm not sure
> that it works all that well (although I still don't know that this
> introduces any "new" problems, so I don't think it makes things any
> messier in that case then they already are). I cc'ed Scott and Steve
> in case they can comment on that.

By multi-server setup, do you mean multiple copies of the autotest
server code on the same tree? Or a master with drones?


Re: [Autotest] [RFC] KVM test: Refactoring the kvm control file and the config file

2009-07-23 Thread Martin Bligh
>> If any of those tests fails (with some built-in fault tolerance for a small
>> hardware fallout rate), we stop the testing. All of that control flow
>> is governed by a control file. It sounds complex, but it's really not
>> if you build your "building blocks" carefully, and it's extremely powerful.
>
> +1
>
> The highly flexible config file currently serves client mode tests.
> We need to slowly shift functionality into the server while keeping the
> current advantages and simplicity of the client.
>
> Martin, can you give some links to the above meta control?

The control files themselves aren't published externally to Google, but
nearly all of the logic they use is - in server/frontend.py.

I really need to refactor that file; it has both flow logic in it and
the basics of how to submit jobs to the frontend. The basic idea is
to create pairings of (test, machine label), then kick off a job to all
machines within that pairing and poll for the result. We typically use
3 to 5 of each machine type (platform) - if more than one of any
platform fails, we call that a failure.

The main entry point is run_test_suites(), which takes a list of
such pairings, along with a kernel to test, etc. We will probably need
to do some work to generalize it, but the concept is all there, and it
works well for us.

>> Are all of your tests exclusive to KVM? I would think you'd want to be
>> able to run any "normal" test inside a KVM environment too?
>
> There are several autotest tests that run inside the guest today too.
> Today the config file controls their execution. It would be nice if we
> created a dependency using the server tests, which first install the VM, boot it
> and then run various 'normal' tests inside it.

I'm assuming you want to be able to both run tests inside the guests
and on the raw host itself, at the same time? To that end, what we've
planned to do (but not completed yet) is to merge the client and server
code - so you can have autotest running on the scheduler, kicking off
jobs to the host. That host can then autonomously control its own
guests, creating and destroying them, and kicking off tests inside them.
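Since the internal control files Martin mentions aren't published, the flow he describes can only be sketched. The snippet below is purely illustrative pseudo-Python, not the server/frontend.py API; submit_job and poll_until_complete are hypothetical stand-ins:

    # Illustrative only -- not the real server/frontend.py interface.
    # Pairings of (test, platform label) become jobs; a platform fails the
    # suite if more than one of its machines fails, and the waterfall stops.
    def submit_job(test, machine_label, kernel):       # hypothetical stand-in
        return {'test': test, 'label': machine_label, 'kernel': kernel}

    def poll_until_complete(job):                       # hypothetical stand-in
        return ['GOOD', 'GOOD', 'GOOD']                 # one status per machine

    def run_suite(pairings, kernel):
        for test, label in pairings:
            job = submit_job(test, label, kernel)
            statuses = poll_until_complete(job)
            failures = [s for s in statuses if s != 'GOOD']
            if len(failures) > 1:       # tolerate a single hardware fallout
                return False            # stop the waterfall at this step
        return True

    print(run_suite([('smoketest', 'platform-a')], kernel='2.6.30-rc1'))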


Re: [Autotest] [RFC] KVM test: Refactoring the kvm control file and the config file

2009-07-21 Thread Martin Bligh
> The advantages I see are: 1. it more closely follows the current
> autotest structure/layout, 2. it solves the problem of separating each test
> out of the ever-growing kvm_test.py and gives each test a subdir for
> better structure (something we have been talking about) and 3. it addresses
> the config vs. control file question that this thread originally brought up.
>
> I think the issue is in how the "kvm test" is viewed.  Is it one test
> that gets run against several configurations, or is it several different
> tests with different configurations?  I have been looking at it as the
> latter, though I do also see it the other way as well.

I think if you try to force everything you do into one test, you'll lose
a lot of the power and flexibility of the system. I can't claim to have
entirely figured out what you're doing, but it seems somewhat like
you're reinventing some stuff with the current approach?

Some of the general design premises:
   1) Anything the user might want to configure should be in the control file.
   2) Anything in the test itself should be pretty static.
   3) The way we get around a lot of the conflicts is by passing parameters
   to run_test, though leaving sensible defaults in for them makes things
   much easier to use (see the short example after this list).
   4) The frontend and CLI are designed to allow you to edit control files,
   and/or save custom versions - that's the single object we throw
   at machines under test ... there's no passing of cfg files to clients?
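Premise 3 in practice: a control file is plain Python executed by the client harness, so user-tunable knobs are just keyword arguments to run_test. A minimal sketch (the job object is provided by the harness; the parameter values are illustrative, and the dbench/sleeptest control files quoted later in this digest follow the same pattern):

    # A control file is ordinary Python run by the autotest client; 'job' is
    # injected by the harness. User-configurable values live here, not in the test.
    job.run_test('dbench', seconds=60)      # same call as the dbench control file below
    job.run_test('sleeptest', seconds=30)   # illustrative value; sleeptest defaults to 1 second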

We often end up with longer control files that contain a pre-canned set of
tests, and even "meta-control files" that kick off a multitude of jobs across
thousands of machines, using frontend.py. That can include control flow -
for example our internal kernel testing uses a waterfall model with several
steps:

1. Compile the kernel from source
2. Test on a bunch of single machines with a smoketest that takes an
hour or so.
3. Test on small groups of machines with cut down simulations of
cluster tests
4. Test on full clusters.

If any of those tests fails (with some built-in fault tolerance for a small
hardware fallout rate), we stop the testing. All of that control flow
is governed by a control file. It sounds complex, but it's really not
if you build your "building blocks" carefully, and it's extremely powerful.

> So maybe the solution is a little different than my first thought:
>
> - all kvm tests are in $AUTOTEST/client/kvm_tests/
> - all kvm tests inherit from $AUTOTEST/client/common_lib/kvm_test.py
> - common functionality is in $AUTOTEST/client/common_lib/kvm_test_utils/
>  - does *not* include a generic kvm_test.cfg
> - we keep the $AUTOTEST/client/kvm/ test dir which defines the test runs
> and houses the kvm_test.cfg file and a master control.
>  - we could then define a couple of sample test runs: full, quick, and
> others, or implement something like your kvm_tests.common file that
> other test runs can build on.

Are all of your tests exclusive to KVM? I would think you'd want to be able
to run any "normal" test inside a KVM environment too?


Re: [autotest] [PATCH 1/6] add ebizzy in autotest

2009-07-13 Thread Martin Bligh
On Sun, Jul 12, 2009 at 8:20 PM, Lucas Meneghel Rodrigues 
wrote:
> On Sun, Jul 12, 2009 at 7:08 AM, sudhir kumar wrote:
>> On Sat, Jul 11, 2009 at 6:05 AM, Martin Bligh wrote:
>>> On Fri, Jul 10, 2009 at 4:29 AM, sudhir kumar wrote:
>>>> So is there any plan for adding this patch set in the patch queue? I
>>>> would love to incorporate all the comments if any.
>>>
>>> Yup, just was behind on patches.
>>>
>>> I added it now - the mailer you are using seems to chew patches fairly
>>> thoroughly though ... if it's gmail, it does that ... might want to just
>>> attach as text ?
>> Thanks!
>> Ah! I have been using gmail only in text mode. I was unable to
>> subscribe to the list using my IMAP id (and I use a mutt client for that)
>> though.
>> Is this problem of gmail known to all? Any workaround?
>
> Yes, gmail wraps stuff automagically, I don't know any workarounds for
> that. The best workaround I'd suggest is git-format-patch and
> git-send-email :)

Yeah, send from commandline or use attachments - either is fine.


Re: [autotest] [PATCH 1/6] add ebizzy in autotest

2009-07-10 Thread Martin Bligh
On Fri, Jul 10, 2009 at 4:29 AM, sudhir kumar wrote:
> So is there any plan for adding this patch set in the patch queue? I
> would love to incorporate all the comments if any.

Yup, just was behind on patches.

I added it now - the mailer you are using seems to chew patches fairly
thoroughly though ... if it's gmail, it does that ... might want to just
attach as text ?

> On Wed, Jul 8, 2009 at 1:47 PM, sudhir kumar wrote:
>> This patch adds the wrapper for ebizzy into autotest. Here is the link
>> to get a copy of the test tarball.
>> http://sourceforge.net/project/platformdownload.php?group_id=202378&sel_platform=3809
>>
>> Please review the patch and provide your comments.
>>
>>
>> Signed-off-by: Sudhir Kumar 
>>
>> Index: autotest/client/tests/ebizzy/control
>> ===
>> --- /dev/null
>> +++ autotest/client/tests/ebizzy/control
>> @@ -0,0 +1,11 @@
>> +NAME = "ebizzy"
>> +AUTHOR = "Sudhir Kumar "
>> +TIME = "MEDIUM, VARIABLE"
>> +TEST_CATEGORY = "FUNCTIONAL"
>> +TEST_CLASS = "SYSTEM STRESS"
>> +TEST_TYPE = "CLIENT"
>> +DOC = """
>> +http://sourceforge.net/project/platformdownload.php?group_id=202378&sel_platform=3809
>> +"""
>> +
>> +job.run_test('ebizzy', args = '-vv')
>> Index: autotest/client/tests/ebizzy/ebizzy.py
>> ===
>> --- /dev/null
>> +++ autotest/client/tests/ebizzy/ebizzy.py
>> @@ -0,0 +1,32 @@
>> +import os
>> +from autotest_lib.client.bin import utils, test
>> +from autotest_lib.client.common_lib import error
>> +
>> +class ebizzy(test.test):
>> +    version = 3
>> +
>> +    def initialize(self):
>> +        self.job.require_gcc()
>> +
>> +
>> +    # http://sourceforge.net/project/downloading.php?group_id=202378&filename=ebizzy-0.3.tar.gz
>> +    def setup(self, tarball = 'ebizzy-0.3.tar.gz'):
>> +        tarball = utils.unmap_url(self.bindir, tarball, self.tmpdir)
>> +        utils.extract_tarball_to_dir(tarball, self.srcdir)
>> +        os.chdir(self.srcdir)
>> +
>> +        utils.system('[ -x configure ] && ./configure')
>> +        utils.system('make')
>> +
>> +
>> +    # Note: by default we always use mmap()
>> +    def run_once(self, args = '', num_chunks = 1000, chunk_size = 512000, seconds = 100, num_threads = 100):
>> +
>> +        #TODO: Write small functions which will choose many of the above
>> +        # variables dynamically looking at the guest's total resources
>> +        logfile = os.path.join(self.resultsdir, 'ebizzy.log')
>> +        args2 = '-m -n %s -P -R -s %s -S %s -t %s' % (num_chunks, chunk_size, seconds, num_threads)
>> +        args = args + ' ' + args2
>> +
>> +        cmd = os.path.join(self.srcdir, 'ebizzy') + ' ' + args
>> +        utils.system(cmd)
>>
>>
>> --
>> Sudhir Kumar
>>
>
>
>
> --
> Sudhir Kumar
>


Re: [PATCH 1/2] Add latest LTP test in autotest

2009-07-08 Thread Martin Bligh
>> Yup, we can pass an excluded test list. I really wish they'd fix their
>> tests, but I've been saying that for 6 years now, and it hasn't happened
>> yet ;-(
>
>> I would slightly disagree with that. 6 years is history. But have you
>> recently checked with LTP?

I hate to be completely cynical about this, but that's exactly the same
message I get every year.

Yes, absolutely, the best thing would be for someone to run all the tests,
work through all the problems, categorize them as kernel / library / distro,
and get each of them fixed. However, it's a fair chunk of work that I don't
have time to do.

So all I'm saying is that I know which of the current tests we have issues
with, and I don't want to upgrade LTP without a new set of data, and that
work being done. From previous experience, I would be extremely
surprised if there's not at least one new problem, and I'm not just going
to dump that on users.

Does the LTP project do this itself on a regular basis ... i.e. are you running
LTP against the latest kernel (or even some known stable kernel) and
seeing which tests are broken? If you can point me to that, I'd have much
more faith about picking this up ...

Up until this point we've not even managed to agree that PASS means
"ran as expected" and FAIL means "something is wrong". LTP has always
had "expected failures", which seems like a completely broken model
to me.

M.


Re: [Autotest] [AUTOTEST] [PATCH 1/2] Add latest LTP test in autotest

2009-07-07 Thread Martin Bligh
> ATM I suggest merging the patches in and letting them get tested so that
> we can collect failures/breakages, if any.

I am not keen on causing regressions, which we've risked doing every
time we change LTP. I think we at least need to get a run on a non-virtualized
machine with some recent kernel, and exclude the tests that fail every time.


Re: [Autotest] [AUTOTEST] [PATCH 1/2] Add latest LTP test in autotest

2009-07-07 Thread Martin Bligh
On Tue, Jul 7, 2009 at 12:24 AM, sudhir kumar wrote:
> On Tue, Jul 7, 2009 at 12:07 AM, Martin Bligh wrote:
>>>> Issues: LTP has a history of some of the testcases getting broken.
>>
>> Right, that's always the concern with doing this.
>>
>>>> Anyway,
>>>> that is nothing to worry about with respect to autotest. One of the known
>>>> issues
>>>> is the broken memory controller issue with latest kernels (cgroups and memory
>>>> resource controller enabled kernels). The workaround I use for them is to
>>>> disable or delete those tests from the ltp source and tar it again with the
>>>> same
>>>> name. Though people might use different workarounds for it.
>>
>> OK, can we encapsulate this in the wrapper though, rather than making
>> people do it manually? In the existing ltp.patch or something?
>>
> Definitely we can do that, but that needs knowledge of all the corner
> cases of failure. So maybe we can continue enhancing the patch as per
> the failure reports on different OSes.
>
> One more thing I wanted to start a discussion about on the LTP mailing list is
> making the testcase aware of whether it is running on a physical host or on a
> guest (say a KVM guest). Testcases like power management, group
> scheduling fairness etc. do not make much sense to run on a guest (as
> they will fail or break). So it is better for the test to recognise
> the environment and not execute if it is under virtualization and is
> expected to fail or break under that environment. Does that make
> sense to you as well?

Yup, we can pass an excluded test list. I really wish they'd fix their
tests, but I've been saying that for 6 years now, and it hasn't happened
yet ;-(


Re: [Autotest] [AUTOTEST] [PATCH 1/2] Add latest LTP test in autotest

2009-07-06 Thread Martin Bligh
>> Issues: LTP has a history of some of the testcases getting broken.

Right, that's always the concern with doing this.

>> Anyway,
>> that is nothing to worry about with respect to autotest. One of the known
>> issues
>> is the broken memory controller issue with latest kernels (cgroups and memory
>> resource controller enabled kernels). The workaround I use for them is to
>> disable or delete those tests from the ltp source and tar it again with the same
>> name. Though people might use different workarounds for it.

OK, can we encapsulate this in the wrapper though, rather than making
people do it manually? In the existing ltp.patch or something?


Re: netperf in autotest

2009-07-06 Thread Martin Bligh
On Mon, Jul 6, 2009 at 4:14 AM, sudhir kumar wrote:
> Hi,
> In order to include netperf tests in KVM guest runs I have been
> trying to run the netperf testsuite in autotest, but I am getting barrier
> failures. I want to understand the philosophy of implementation of the
> autotest wrappers, so I thought of quickly asking on the list.
> Here are my questions:
>
> 1. There are 3 control files control.server, control.client and
> control.parallel. What is the scenario for which file ? Will
> ../../autotest/bin control.client with proper configuration on machine
> (say, 9.126.89.168) be able to  run netperf completely automatically
> or do I need to run the netserver on the other machine(say,
> 9.124.124.82)?
> # cat control.client | grep ip
>             server_ip='9.124.124.82',
>             client_ip='9.126.89.168',
>
> 2. What is the purpose of control.parallel ?

Testing on a single machine.

Normally you're going to run server/tests/netperf2, which will invoke
the separate sections of the client tests on a pair of machines for you.

> 3. What is the use of barriers in netperf2.py? Is it mandatory ? I
> tried to understand it by going through the code but still I want to
> double check.

To ensure that the server is started and ready before the client
starts throwing queries at it, and stays up and running until
the client has finished. It is then torn down.

> The execution of this test using autotest is so far failing for me.
> (though a minimal manual execution from the command line passes for me).
> It mainly fails on barrier due to timeouts "timeout waiting for
> barrier: start_1". I tried by running
> ../../bin/autotest client.control on machineA with server_ip set to
> remote machine(B) and client ip set to this machine's ip(A).
> ../../bin/autotest server.control on one machineB with server_ip set
> to machineB and client ip set to the remote machine's ip(A).
>
> I want to ensure that I am not doing anything wrong.
> Thanks in advance!!

Take a look at how the server side test does it?
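For readers hitting the same "timeout waiting for barrier: start_1" message, the idea is only synchronization: both sides block at a named rendezvous point until everyone has arrived, or the timeout expires. A purely conceptual sketch follows; it deliberately uses Python's own threading.Barrier rather than the autotest barrier API, and start_netserver/run_netperf are hypothetical placeholders:

    import threading

    start = threading.Barrier(2)        # one netserver side, one netperf side

    def start_netserver():              # hypothetical placeholder
        pass

    def run_netperf():                  # hypothetical placeholder
        pass

    def server_side():
        start_netserver()
        start.wait(timeout=600)         # announce "ready", wait for the client
        # ...stays up until a matching 'stop' rendezvous...

    def client_side():
        start.wait(timeout=600)         # block until the server side is ready
        run_netperf()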


Re: [Autotest] [PATCH] Add a client-side test qemu_iotests

2009-07-01 Thread Martin Bligh
From: root 


Signed-off-by: root 
---


;-)
Can we get these signed off by a person please? Preferably with a real email
address (see the DCO in the top-level directory).


Re: [Autotest] [KVM-AUTOTEST PATCH] Adding iperf test

2009-07-01 Thread Martin Bligh
 LMR: me too, hate putting binaries in source tree, but the alternative
 option is to provide separate *.tar.bz2 for all the binary utils, and
 I'm not sure which way is better.

>>>
>>> Yes, I don't have a clear idea either. It's currently under
>>> discussion...
>>>
>>
>> Is KVM x86_64 only?
>>
>
> It's x86-64, i386, ia64, s390, and powerpc 44x/e500 only.

OK, then it's difficult to see us using binaries. Can we not
compile these on the system at use time (see the client/deps
directory for other stuff we do this for)?

M.


Re: [Autotest] [KVM-AUTOTEST PATCH] Adding iperf test

2009-07-01 Thread Martin Bligh
On Wed, Jul 1, 2009 at 8:57 AM, Lucas Meneghel Rodrigues wrote:
> On Wed, 2009-07-01 at 14:43 +0300, Alexey Eremenko wrote:
>> LMR: me too, hate putting binaries in source tree, but the alternative
>> option is to provide separate *.tar.bz2 for all the binary utils, and
>> I'm not sure which way is better.
>
> Yes, I don't have a clear idea either. It's currently under
> discussion...

Is KVM x86_64 only?


Re: [Autotest] [KVM-AUTOTEST PATCH 1/4] Make all programs on kvm test use /usr/bin/python

2009-06-15 Thread Martin Bligh
On Mon, Jun 15, 2009 at 6:35 AM, Alexey Eromenko wrote:
>
> - "Martin Bligh"  wrote:
>
>> On Wed, Jun 10, 2009 at 4:01 AM, Alexey Eromenko
>> wrote:
>> >
>> > Even better would be to use "/usr/bin/python2".
>>
>> That doesn't seem to exist, on Ubuntu at least.
>>
>
> Red Hat systems have it. "/usr/bin/python2" is a symlink to "/usr/bin/python" 
> (which is the python2 executable).
>
> Is there any Ubuntu-compatible way of achieving this?

Not that I can see, other than explicit Python code, which we have already.
I think this is a solved issue?


Re: [Autotest] [KVM-AUTOTEST PATCH 4/4] Adding control files dir to kvm test

2009-06-12 Thread Martin Bligh
That seems very strange ... why do you have to duplicate the existing
control files?

On Fri, Jun 12, 2009 at 9:13 AM, Lucas Meneghel Rodrigues
 wrote:
> Adding an autotest_control dir with control files that will be
> used by the 'autotest' kvm test, containing the original control files
> used in the old kvm_runtest_2 directory.
>
> Signed-off-by: Lucas Meneghel Rodrigues 
> ---
>  client/tests/kvm/autotest_control/bonnie.control   |   21 
> 
>  client/tests/kvm/autotest_control/dbench.control   |   20 +++
>  .../tests/kvm/autotest_control/sleeptest.control   |   15 ++
>  3 files changed, 56 insertions(+), 0 deletions(-)
>  create mode 100644 client/tests/kvm/autotest_control/bonnie.control
>  create mode 100644 client/tests/kvm/autotest_control/dbench.control
>  create mode 100644 client/tests/kvm/autotest_control/sleeptest.control
>
> diff --git a/client/tests/kvm/autotest_control/bonnie.control 
> b/client/tests/kvm/autotest_control/bonnie.control
> new file mode 100644
> index 000..2717a80
> --- /dev/null
> +++ b/client/tests/kvm/autotest_control/bonnie.control
> @@ -0,0 +1,21 @@
> +AUTHOR = "Martin Bligh "
> +NAME = "bonnie"
> +TIME = "MEDIUM"
> +TEST_CLASS = "Kernel"
> +TEST_CATEGORY = "Functional"
> +TEST_TYPE = "client"
> +DOC = """\
> +Bonnie is a benchmark which measures the performance of Unix file system
> +operations. Bonnie is concerned with identifying bottlenecks; the name is a
> +tribute to Bonnie Raitt, who knows how to use one.
> +
> +For more info, see http://www.textuality.com/bonnie/
> +
> +This benchmark configuration run generates sustained write traffic
> +of 35-50MB/s of .1MB writes to just one disk.  It appears to have a
> +sequential and a random workload. It gives profile measurements for:
> +throughput, %CPU, and rand seeks per second. Not sure if the CPU numbers
> +are trustworthy.
> +"""
> +
> +job.run_test('bonnie')
> diff --git a/client/tests/kvm/autotest_control/dbench.control 
> b/client/tests/kvm/autotest_control/dbench.control
> new file mode 100644
> index 000..7fb8a37
> --- /dev/null
> +++ b/client/tests/kvm/autotest_control/dbench.control
> @@ -0,0 +1,20 @@
> +TIME="SHORT"
> +AUTHOR = "Martin Bligh "
> +DOC = """
> +dbench is one of our standard kernel stress tests.  It produces filesystem
> +load like netbench originally did, but involves no network system calls.
> +Its results include throughput rates, which can be used for performance
> +analysis.
> +
> +More information on dbench can be found here:
> +http://samba.org/ftp/tridge/dbench/README
> +
> +Currently it needs to be updated in its configuration. It is a great test for
> +the higher level I/O systems but barely touches the disk right now.
> +"""
> +NAME = 'dbench'
> +TEST_CLASS = 'kernel'
> +TEST_CATEGORY = 'Functional'
> +TEST_TYPE = 'client'
> +
> +job.run_test('dbench', seconds=60)
> diff --git a/client/tests/kvm/autotest_control/sleeptest.control 
> b/client/tests/kvm/autotest_control/sleeptest.control
> new file mode 100644
> index 000..725ae81
> --- /dev/null
> +++ b/client/tests/kvm/autotest_control/sleeptest.control
> @@ -0,0 +1,15 @@
> +AUTHOR = "Autotest Team"
> +NAME = "Sleeptest"
> +TIME = "SHORT"
> +TEST_CATEGORY = "Functional"
> +TEST_CLASS = "General"
> +TEST_TYPE = "client"
> +
> +DOC = """
> +This test simply sleeps for 1 second by default.  It's a good way to test
> +profilers and double check that autotest is working.
> +The seconds argument can also be modified to make the machine sleep for as
> +long as needed.
> +"""
> +
> +job.run_test('sleeptest', seconds = 1)
> --
> 1.6.2.2
>


Re: [Autotest] [KVM-AUTOTEST PATCH] Make all programs on kvm test use /usr/bin/python - take 2

2009-06-10 Thread Martin Bligh
Looks good.

On Tue, Jun 9, 2009 at 5:57 PM, Lucas Meneghel Rodrigues wrote:
> All kvm modules that can be used as stand alone programs were
> updated to use #!/usr/bin/python instead of #!/usr/bin/env python,
> complying with the rest of the autotest code base. As suggested
> by Martin, common.py was added. With this, the stand alone
> programs will be able to use the autotest library namespace and
> choose the best python interpreter available in the system.
>
> Signed-off-by: Lucas Meneghel Rodrigues 
> ---
>  client/tests/kvm/common.py           |    8 
>  client/tests/kvm/fix_cdkeys.py       |    3 ++-
>  client/tests/kvm/kvm_config.py       |    4 +++-
>  client/tests/kvm/make_html_report.py |    5 +++--
>  client/tests/kvm/stepeditor.py       |    4 ++--
>  client/tests/kvm/stepmaker.py        |    4 +++-
>  6 files changed, 21 insertions(+), 7 deletions(-)
>  create mode 100644 client/tests/kvm/common.py
>
> diff --git a/client/tests/kvm/common.py b/client/tests/kvm/common.py
> new file mode 100644
> index 000..ce78b85
> --- /dev/null
> +++ b/client/tests/kvm/common.py
> @@ -0,0 +1,8 @@
> +import os, sys
> +dirname = os.path.dirname(sys.modules[__name__].__file__)
> +client_dir = os.path.abspath(os.path.join(dirname, "..", ".."))
> +sys.path.insert(0, client_dir)
> +import setup_modules
> +sys.path.pop(0)
> +setup_modules.setup(base_path=client_dir,
> +                    root_module_name="autotest_lib.client")
> diff --git a/client/tests/kvm/fix_cdkeys.py b/client/tests/kvm/fix_cdkeys.py
> index 4f7a824..7a821fa 100755
> --- a/client/tests/kvm/fix_cdkeys.py
> +++ b/client/tests/kvm/fix_cdkeys.py
> @@ -1,5 +1,6 @@
> -#!/usr/bin/env python
> +#!/usr/bin/python
>  import shutil, os, sys
> +import common
>
>  """
>  Program that replaces the CD keys present on a KVM autotest configuration 
> file.
> diff --git a/client/tests/kvm/kvm_config.py b/client/tests/kvm/kvm_config.py
> index 40f16f1..13fdac2 100755
> --- a/client/tests/kvm/kvm_config.py
> +++ b/client/tests/kvm/kvm_config.py
> @@ -1,4 +1,6 @@
> +#!/usr/bin/python
>  import re, os, sys, StringIO
> +import common
>  from autotest_lib.client.common_lib import error
>
>  """
> @@ -356,7 +358,7 @@ class config:
>                 # (inside an exception or inside subvariants)
>                 if restricted:
>                     e_msg = "Using variants in this context is not allowed"
> -                    raise error.AutotestError()
> +                    raise error.AutotestError(e_msg)
>                 if self.debug and not restricted:
>                     self.__debug_print(indented_line,
>                                      "Entering variants block (%d dicts in"
> diff --git a/client/tests/kvm/make_html_report.py 
> b/client/tests/kvm/make_html_report.py
> index 6aed39e..e69367b 100755
> --- a/client/tests/kvm/make_html_report.py
> +++ b/client/tests/kvm/make_html_report.py
> @@ -1,4 +1,7 @@
>  #!/usr/bin/python
> +import os, sys, re, getopt, time, datetime, commands
> +import common
> +
>  """
>  Script used to parse the test results and generate an HTML report.
>
> @@ -7,8 +10,6 @@ Script used to parse the test results and generate an HTML 
> report.
> @author: Dror Russo (dru...@redhat.com)
>  """
>
> -import os, sys, re, getopt, time, datetime, commands
> -
>
>  format_css="""
>  html,body {
> diff --git a/client/tests/kvm/stepeditor.py b/client/tests/kvm/stepeditor.py
> index 9669200..e7794ac 100755
> --- a/client/tests/kvm/stepeditor.py
> +++ b/client/tests/kvm/stepeditor.py
> @@ -1,6 +1,6 @@
> -#!/usr/bin/env python
> +#!/usr/bin/python
>  import pygtk, gtk, os, glob, shutil, sys, logging
> -import ppm_utils
> +import common, ppm_utils
>  pygtk.require('2.0')
>
>  """
> diff --git a/client/tests/kvm/stepmaker.py b/client/tests/kvm/stepmaker.py
> index 2b7fd54..8f16ffd 100644
> --- a/client/tests/kvm/stepmaker.py
> +++ b/client/tests/kvm/stepmaker.py
> @@ -1,8 +1,10 @@
> -#!/usr/bin/env python
> +#!/usr/bin/python
>  import pygtk, gtk, gobject, time, os, commands
> +import common
>  from autotest_lib.client.common_lib import error
>  import kvm_utils, logging, ppm_utils, stepeditor
>  pygtk.require('2.0')
> +
>  """
>  Step file creator/editor.
>
> --
> 1.6.2.2
>
> ___
> Autotest mailing list
> autot...@test.kernel.org
> http://test.kernel.org/cgi-bin/mailman/listinfo/autotest
>
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Autotest] [KVM-AUTOTEST PATCH 1/4] Make all programs on kvm test use /usr/bin/python

2009-06-10 Thread Martin Bligh
On Wed, Jun 10, 2009 at 4:01 AM, Alexey Eromenko wrote:
>
> Even better would be to use "/usr/bin/python2".

That doesn't seem to exist, on Ubuntu at least.

> This is because future distros will include python3, which is incompatible
> with python2 code.
>
> "python" will be a symlink to "python3".
>
> -Alexey
>
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Autotest] [KVM-AUTOTEST PATCH 1/4] Make all programs on kvm test use /usr/bin/python

2009-06-09 Thread Martin Bligh
I'd suggest you use the same mechanism as the other entry points,
and override the python version where necessary - some distros
have ancient or bleeding edge default Python versions.

see common.py -> setup_modules.py -> check_version.check_python_version
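
The idea of that mechanism, very roughly (a minimal sketch with illustrative
paths and version numbers, not the actual check_version code), is:

    import glob, os, sys

    def check_python_version(min_version=(2, 4)):
        # If the running interpreter is too old, re-exec the script under a
        # newer one found on the system.  The real code is more careful and
        # also verifies each candidate's version before re-executing.
        if sys.version_info[:2] >= min_version:
            return
        for candidate in sorted(glob.glob('/usr/bin/python2.[0-9]'),
                                reverse=True):
            if candidate != sys.executable:
                os.execv(candidate, [candidate] + sys.argv)
        raise SystemExit('No suitable python interpreter found')

so each entry point only needs to import common.py and the interpreter gets
fixed up transparently.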

On Tue, Jun 9, 2009 at 9:33 AM, Lucas Meneghel Rodrigues wrote:
> All kvm modules that can be used as standalone programs were
> updated to use #!/usr/bin/python instead of #!/usr/bin/env python,
> complying with the rest of the autotest code base.
>
> Signed-off-by: Lucas Meneghel Rodrigues 
> ---
>  client/tests/kvm/fix_cdkeys.py   |    2 +-
>  client/tests/kvm/kvm_config.py   |    1 +
>  client/tests/kvm/scan_results.py |    2 +-
>  client/tests/kvm/stepeditor.py   |    2 +-
>  client/tests/kvm/stepmaker.py    |    2 +-
>  5 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/client/tests/kvm/fix_cdkeys.py b/client/tests/kvm/fix_cdkeys.py
> index 4f7a824..7f52c44 100755
> --- a/client/tests/kvm/fix_cdkeys.py
> +++ b/client/tests/kvm/fix_cdkeys.py
> @@ -1,4 +1,4 @@
> -#!/usr/bin/env python
> +#!/usr/bin/python
>  import shutil, os, sys
>
>  """
> diff --git a/client/tests/kvm/kvm_config.py b/client/tests/kvm/kvm_config.py
> index 40f16f1..a3467a0 100755
> --- a/client/tests/kvm/kvm_config.py
> +++ b/client/tests/kvm/kvm_config.py
> @@ -1,3 +1,4 @@
> +#!/usr/bin/python
>  import re, os, sys, StringIO
>  from autotest_lib.client.common_lib import error
>
> diff --git a/client/tests/kvm/scan_results.py 
> b/client/tests/kvm/scan_results.py
> index 156b7d4..a92c867 100755
> --- a/client/tests/kvm/scan_results.py
> +++ b/client/tests/kvm/scan_results.py
> @@ -1,4 +1,4 @@
> -#!/usr/bin/env python
> +#!/usr/bin/python
>  """
>  Program that parses the autotest results and return a nicely printed final 
> test
>  result.
> diff --git a/client/tests/kvm/stepeditor.py b/client/tests/kvm/stepeditor.py
> index 9669200..6fb371b 100755
> --- a/client/tests/kvm/stepeditor.py
> +++ b/client/tests/kvm/stepeditor.py
> @@ -1,4 +1,4 @@
> -#!/usr/bin/env python
> +#!/usr/bin/python
>  import pygtk, gtk, os, glob, shutil, sys, logging
>  import ppm_utils
>  pygtk.require('2.0')
> diff --git a/client/tests/kvm/stepmaker.py b/client/tests/kvm/stepmaker.py
> index 2b7fd54..a9ddf25 100644
> --- a/client/tests/kvm/stepmaker.py
> +++ b/client/tests/kvm/stepmaker.py
> @@ -1,4 +1,4 @@
> -#!/usr/bin/env python
> +#!/usr/bin/python
>  import pygtk, gtk, gobject, time, os, commands
>  from autotest_lib.client.common_lib import error
>  import kvm_utils, logging, ppm_utils, stepeditor
> --
> 1.6.2.2
>
> ___
> Autotest mailing list
> autot...@test.kernel.org
> http://test.kernel.org/cgi-bin/mailman/listinfo/autotest
>
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Autotest] Log message format in KVM-Autotest

2009-06-08 Thread Martin Bligh
If it's specific to one test or whatever, you could also just put it
inside the message itself, possibly with your own wrapper function
around the logging?
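
Something along these lines, for instance (just a sketch; none of this is
existing autotest API, and the names are made up):

    import inspect, logging

    def debug(msg, *args):
        # Hypothetical wrapper: prefix each message with the name of the
        # calling function, like the old kvm_log.py output did.
        caller = inspect.stack()[1][3]
        logging.debug('%s: ' + msg, caller, *args)

A call like debug("Trying to login...") made from remote_login() would then
come out as "remote_login: Trying to login...".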

On Mon, Jun 8, 2009 at 6:03 AM, Lucas Meneghel Rodrigues wrote:
> On Mon, 2009-06-08 at 06:35 -0400, Michael Goldish wrote:
>> Hi Lucas,
>>
>> Before the merge with Autotest we used kvm_log.py to log formatted messages.
>> Each message contained the current test's 'shortname' (e.g. 
>> Fedora.8.32.install), the current date and time (down to a 1 sec resolution) 
>> and the message itself. In addition, debug messages contained the name of 
>> the calling function, e.g.
>> remote_login: Trying to login...
>>
>> What is the preferred way of obtaining this functionality using the new 
>> logging system inside Autotest? Should we define our own logging Handler for 
>> the KVM test in kvm.py, along with our own Formatter, or should we use 
>> logging.config.fileConfig(), or is there another preferred way?
>> I'm particularly interested in printing the name of the caller in debug 
>> messages. This feature makes debugging easier and improves overall 
>> readability of the logs. (Obviously we can manually hardcode the name of the 
>> current function into every debug message, but that doesn't seem like a good 
>> solution.)
>
> The logging system can be configured to display several LogRecord
> attributes; they are documented at
>
> http://docs.python.org/library/logging.html#formatter-objects
>
> %(funcName)s is the name of the function issuing the logging call, so
> that's the attribute we would use to mirror the functionality you want.
>
> Creating a logging Handler with our own formatter is a possibility,
> although we could also propose adding the name of the function to the
> file format being used by autotest. There's always the concern that an
> excess of information may clutter the logs.
>
> Right now, for files we use the following format:
>
> http://autotest.kernel.org/browser/trunk/client/debug_client.ini
>
> [formatter_file_formatter]
> format=[%(asctime)s %(levelname)-5.5s %(module)s] %(message)s
> datefmt=%m/%d %H:%M:%S
>
> The formatter we're using for the logs contains a timestamp, the debug
> level name and the module (the source file name without the .py
> extension). I believe this makes debugging easy enough without
> cluttering the logs too much. Do you think the caller name would be an
> interesting addition even considering the above?
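>
> (Concretely, and just as a sketch of where %(funcName)s would go, that
> means changing the format line above to something like:
>
> [formatter_file_formatter]
> format=[%(asctime)s %(levelname)-5.5s %(module)s %(funcName)s] %(message)s
> datefmt=%m/%d %H:%M:%S
>
> with no changes needed in the test code itself.)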
>
>
>> Thanks,
>> Michael
> --
> Lucas Meneghel Rodrigues
> Software Engineer (QE)
> Red Hat - Emerging Technologies
>
> ___
> Autotest mailing list
> autot...@test.kernel.org
> http://test.kernel.org/cgi-bin/mailman/listinfo/autotest
>
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Autotest] [KVM-AUTOTEST PATCH 8/8] kvm_runtest_2.py: use pickle instead of shelve when loading/saving env

2009-06-08 Thread Martin Bligh
On Fri, Jun 5, 2009 at 1:46 PM, Lucas Meneghel Rodrigues wrote:
> pickle allows more control over the load/save process. Specifically, it
> enables us to dump the contents of an object to disk without having to
> unpickle it.
>
> shelve, which uses pickle, seems to pickle and unpickle every time sync()
> is called. This is bad for classes that need to be unpickled only once
> per test (such a class will be introduced in a future patch).
>

> +def dump_env(obj, filename):
> +    file = open(filename, "w")
> +    cPickle.dump(obj, file)
> +    file.close()

This seems like a strange function name - it's really pickling any
object, nothing specific to do with the environment?

> +
> +def load_env(filename, default=None):
> +    try:
> +        file = open(filename, "r")
> +    except:
> +        return default
> +    obj = cPickle.load(file)
> +    file.close()
> +    return obj
> +
> +
>  class kvm(test.test):
>     """
>     Suite of KVM virtualization functional tests.
> @@ -62,12 +78,12 @@ class kvm(test.test):
>         keys = params.keys()
>         keys.sort()
>         for key in keys:
> -            logging.debug("    %s = %s" % (key, params[key]))
> +            logging.debug("    %s = %s", key, params[key])
>             self.write_test_keyval({key: params[key]})
>
>         # Open the environment file
>         env_filename = os.path.join(self.bindir, "env")
> -        env = shelve.open(env_filename, writeback=True)
> +        env = load_env(env_filename, {})
>         logging.debug("Contents of environment: %s" % str(env))
>
>         try:
> @@ -90,21 +106,20 @@ class kvm(test.test):
>
>                 # Preprocess
>                 kvm_preprocessing.preprocess(self, params, env)
> -                env.sync()
> +                dump_env(env, env_filename)
>                 # Run the test function
>                 routine_obj.routine(self, params, env)
> -                env.sync()
> +                dump_env(env, env_filename)
>
>             except Exception, e:
> -                logging.error("Test failed: %s" % e)
> +                logging.error("Test failed: %s", e)
>                 logging.debug("Postprocessing on error...")
>                 kvm_preprocessing.postprocess_on_error(self, params, env)
> -                env.sync()
> +                dump_env(env, env_filename)
>                 raise
>
>         finally:
>             # Postprocess
>             kvm_preprocessing.postprocess(self, params, env)
> -            logging.debug("Contents of environment: %s" % str(env))
> -            env.sync()
> -            env.close()
> +            logging.debug("Contents of environment: %s", str(env))
> +            dump_env(env, env_filename)
> --
> 1.6.2.2
>
> ___
> Autotest mailing list
> autot...@test.kernel.org
> http://test.kernel.org/cgi-bin/mailman/listinfo/autotest
>
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html