In perl.git, the branch smoke-me/davem/perf has been created

<http://perl5.git.perl.org/perl.git/commitdiff/383bfbb9aebedcd1d6df013b6ab8c73542b49a8c?hp=0000000000000000000000000000000000000000>

        at  383bfbb9aebedcd1d6df013b6ab8c73542b49a8c (commit)

- Log -----------------------------------------------------------------
commit 383bfbb9aebedcd1d6df013b6ab8c73542b49a8c
Author: David Mitchell <da...@iabyn.com>
Date:   Tue Oct 21 15:44:44 2014 +0100

    fix 't/TEST -benchmark'
    
    This has never worked, as it would look for t/benchmark/*.t files in
    the wrong place.

M       t/TEST
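
For reference, the fixed option would be invoked from the t/ directory
(a minimal sketch, assuming t/TEST's usual flag conventions):

    cd t
    ./TEST -benchmark    # now correctly finds the t/benchmark/*.t files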

commit c35c48fbbaaf145992082a3e56041b28e8af37ad
Author: David Mitchell <da...@iabyn.com>
Date:   Tue Oct 21 15:43:01 2014 +0100

    add note about t/perf/ to t/README

M       t/README

commit 9248b22daa167ac54b57839a2e2ac0a9bd9d84a8
Author: David Mitchell <da...@iabyn.com>
Date:   Tue Oct 21 15:26:08 2014 +0100

    add t/perf/benchmarks, t/perf/benchmarks.t
    
    t/perf/benchmarks is a file intended to contain snippets of code
    that can be usefully benchmarked or otherwise profiled.
    
    The basic idea is that any time you add an optimisation intended to
    make a particular construct faster, you should add that construct to
    this file.
    
    Under the normal test suite, the test file benchmarks.t compiles and
    runs each of these snippets, not to test performance, but just to
    ensure that the code is free of errors.
    
    Over time, it is intended that various measurement and profiling tools
    will be written that can run selected (or all) snippets in various
    environments. These will not be run as part of a normal test suite run.

M       MANIFEST
A       t/perf/benchmarks
A       t/perf/benchmarks.t
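
For illustration, an entry in t/perf/benchmarks might look something like
the following Perl (a hedged sketch: the entry name and the desc/setup/code
fields are assumptions about the file's format, not taken from the commits
above):

    # t/perf/benchmarks: a list of name => specification pairs
    [
        'expr::assign::scalar_lexical' => {
            desc  => 'assignment to a lexical scalar',
            setup => 'my $x',      # run once, outside the timed code
            code  => '$x = 1',     # the construct to benchmark/profile
        },
    ];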

commit 603b284e346eda4f420ba972c68abaf997c3308f
Author: David Mitchell <da...@iabyn.com>
Date:   Tue Oct 21 14:03:21 2014 +0100

    add t/perf/speed.t
    
    This test file is similar to t/re/speed.t, but is intended to test
    general-purpose optimisations.
    
    The idea is to run snippets of code that are 100s or 1000s of times
    slower if a particular optimisation is broken. We are not so much
    interested in the individual tests passing as in the whole file
    failing with a watchdog timeout (or simply observing that it runs
    more slowly).

M       MANIFEST
A       t/perf/speed.t
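
Such a test might take roughly the following shape (a hedged sketch: it
assumes the watchdog() and plan() helpers from t/test.pl, and the loop
body is invented purely for illustration):

    #!./perl
    # If the optimisation being guarded is broken, this file should be
    # slow enough to trip the watchdog rather than merely fail a test.

    BEGIN {
        chdir 't' if -d 't';
        @INC = '../lib';
        require './test.pl';
    }

    use strict;
    use warnings;

    plan 1;
    watchdog(60);    # kill the whole test file if it runs over a minute

    # a construct that is cheap only while the optimisation holds
    my $s = '';
    $s .= 'x' for 1 .. 1_000_000;

    pass("repeated string append completed within the watchdog limit");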

commit d80ec390998915bb6dbabaa2746d9061d2345d59
Author: David Mitchell <da...@iabyn.com>
Date:   Tue Oct 21 13:49:10 2014 +0100

    t/perf/optree.t: expand blurb
    
    explain (kind of) why this file is called optree.t

M       t/perf/optree.t

commit b14deab5e896a464b56ae74248bcd21d104a380d
Author: David Mitchell <da...@iabyn.com>
Date:   Tue Oct 21 13:41:16 2014 +0100

    rename t/op/opt.t -> t/perf/optree.t
    
    Now that we have a directory, t/perf/, for performance/optimisation
    tests, move this test file there and rename it to something slightly
    clearer.

M       MANIFEST
D       t/op/opt.t
A       t/perf/optree.t

commit fbeed680945f422c36804f0939392b30cbe2a6d9
Author: David Mitchell <da...@iabyn.com>
Date:   Tue Oct 21 13:25:25 2014 +0100

    add t/perf/, t/perf/opcount.t
    
    Add a new directory designed to hold performance/optimisation tests
    and infrastructure, and add the first test file, opcount.t, which
    checks that a sub has the right number of particular op types.

M       MANIFEST
M       Makefile.SH
M       t/TEST
M       t/harness
A       t/perf/opcount.t
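
As a rough illustration of the kind of check described above, the op
types in a sub can be counted by walking its optree with the B module
(a hedged sketch: count_ops() is an invented helper, not the actual
interface of t/perf/opcount.t, and the traversal is simplified):

    use strict;
    use warnings;
    use B ();

    # Walk a sub's optree breadth-first, tallying each op by name.
    sub count_ops {
        my ($coderef) = @_;
        my %counts;
        my @todo = (B::svref_2object($coderef)->ROOT);
        while (my $op = shift @todo) {
            $counts{ $op->name }++;
            next unless $op->can('first');    # leaf op: no kids
            my $kid = $op->first;
            while ($$kid) {                   # null op ends the chain
                push @todo, $kid;
                $kid = $kid->sibling;
            }
        }
        return \%counts;
    }

    my $counts = count_ops(sub { my @a = (1, 2, 3) });
    printf "%-12s %d\n", $_, $counts->{$_} for sort keys %$counts;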
-----------------------------------------------------------------------

--
Perl5 Master Repository
