I think for status-benchmark.cc we should just reduce the unrolling - I
don't see a valid reason to unroll a loop that many times unless you're
just testing the compiler. There's no reason we can't unroll the loop, say,
10 times, run that 100 times, and get an equally valid result.
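
Something like this (just a rough sketch with made-up names, not the actual
status-benchmark.cc code):

  #include <string>

  // Stand-in Status so the sketch is self-contained.
  struct Status {
    std::string msg;
    static Status OK() { return Status{}; }
  };

  static Status ReturnsStatus() { return Status::OK(); }

  void BenchmarkStatusCalls() {
    constexpr int kBatches = 100;
    for (int i = 0; i < kBatches; ++i) {
      Status s;
      // 10 manually unrolled calls per batch -> 1000 calls total, the same
      // work as unrolling 1000 times but far less code for the compiler.
      s = ReturnsStatus(); s = ReturnsStatus(); s = ReturnsStatus();
      s = ReturnsStatus(); s = ReturnsStatus(); s = ReturnsStatus();
      s = ReturnsStatus(); s = ReturnsStatus(); s = ReturnsStatus();
      s = ReturnsStatus();
    }
  }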

Todd's suggestion of just running the benchmark for a couple of iterations
is a reasonable one, although I think it depends on whether the benchmarks
are once-off experiments (in which case it seems OK to let them bit-rot) or
whether they are actually likely to be reused.
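
For reference, the gtest + gflags pattern Todd describes would look roughly
like this (the flag name, default, and test name are made up, not actual Kudu
or Impala code):

  #include <gflags/gflags.h>
  #include <gtest/gtest.h>

  DEFINE_int32(benchmark_num_iterations, 1000,
               "Iterations to run; the default is small so the normal test "
               "suite finishes in a second or two.");

  TEST(StatusBenchmark, StillRuns) {
    for (int i = 0; i < FLAGS_benchmark_num_iterations; ++i) {
      // ... the code being benchmarked ...
    }
    // No assertions needed; the point is just to keep the code path working.
  }

  int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    gflags::ParseCommandLineFlags(&argc, &argv, true);
    return RUN_ALL_TESTS();
  }

A developer who wants real numbers then runs the binary by hand with
something like --benchmark_num_iterations=<large number>.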

I think if we're going to maintain benchmarks more actively, we should also
consider proactively disabling or removing once-off benchmarks that aren't
likely to be reused.

On Thu, Feb 23, 2017 at 10:26 AM, Henry Robinson <he...@cloudera.com> wrote:

> I think the main problem I want to avoid is paying the cost of linking,
> which is expensive for Impala as it often generates multi-hundred-MB
> binaries per benchmark or test.
>
> Building the benchmarks during GVO seems the best solution to that to me.
>
> On 23 February 2017 at 10:23, Todd Lipcon <t...@cloudera.com> wrote:
>
> > One thing we've found useful in Kudu to prevent bitrot of benchmarks is to
> > actually use gtest and gflags for the benchmark programs.
> >
> > We set some flag like --benchmark_num_rows or --benchmark_num_iterations
> > with a default that's low enough to only run for a second or two, and run
> > it as part of our normal test suite. Rarely catches any bugs, but serves to
> > make sure that the code keeps working. Then, when a developer wants to
> > actually test a change for performance, they can run it with
> > --num_iterations=<high number>.
> >
> > Doesn't help the weird case of status-benchmark where *compiling* takes 10
> > minutes... but I think the manual unrolling of 1000 status calls in there
> > is probably unrealistic anyway regarding how the different options perform
> > in a whole-program setting.
> >
> > -Todd
> >
> > On Thu, Feb 23, 2017 at 10:20 AM, Zachary Amsden <zams...@cloudera.com>
> > wrote:
> >
> > > Yes. If you take a look at the benchmark, you'll notice the JNI call to
> > > initialize the frontend doesn't even have the right signature anymore.
> > > That's one easy way to bitrot while still compiling.
> > >
> > > Even fixing that isn't enough to get it off the ground.
> > >
> > >  - Zach
> > >
> > > On Tue, Feb 21, 2017 at 11:44 AM, Henry Robinson <he...@cloudera.com>
> > > wrote:
> > >
> > > > Did you run . bin/set-classpath.sh before running expr-benchmark?
> > > >
> > > > On 21 February 2017 at 11:30, Zachary Amsden <zams...@cloudera.com>
> > > > wrote:
> > > >
> > > > > Unfortunately some of the benchmarks have actually bit-rotted.  For
> > > > > example, expr-benchmark compiles but immediately throws JNI exceptions.
> > > > >
> > > > > On Tue, Feb 21, 2017 at 10:55 AM, Marcel Kornacker <mar...@cloudera.com>
> > > > > wrote:
> > > > >
> > > > > > I'm also in favor of not compiling it on the standard command line.
> > > > > >
> > > > > > However, I'm very much against allowing the benchmarks to bitrot. As
> > > > > > was pointed out, those benchmarks can be valuable tools during
> > > > > > development, and keeping them in working order shouldn't really
> > > > > > impact the development process.
> > > > > >
> > > > > > In other words, let's compile them as part of gvo.
> > > > > >
> > > > > > On Tue, Feb 21, 2017 at 10:50 AM, Alex Behm <alex.b...@cloudera.com>
> > > > > > wrote:
> > > > > > > +1 for not compiling the benchmarks in -notests
> > > > > > >
> > > > > > > On Mon, Feb 20, 2017 at 7:55 PM, Jim Apple <jbap...@cloudera.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > >> > On which note, would anyone object if we disabled benchmark
> > > > > > >> > compilation by default when building the BE tests? I mean
> > > > > > >> > separating out -notests into -notests and -build_benchmarks
> > > > > > >> > (the latter false by default).
> > > > > > >>
> > > > > > >> I think this is a great idea.
> > > > > > >>
> > > > > > >> > I don't mind if the benchmarks bitrot as a result, because we
> > > > > > >> > don't run them regularly or pay attention to their output
> > > > > > >> > except when developing a feature. Of course, maybe an
> > > > > > >> > 'exhaustive' run should build the benchmarks as well just to
> > > > > > >> > keep us honest, but I'd be happy if 95% of Jenkins builds
> > > > > > >> > didn't bother.
> > > > > > >>
> > > > > > >> The pre-merge (aka GVM aka GVO) testing builds
> > > > > > >> http://jenkins.impala.io:8080/job/all-build-options, which builds
> > > > > > >> without the "-notests" flag.
> > > > > > >>
> > > > > >
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Henry Robinson
> > > > Software Engineer
> > > > Cloudera
> > > > 415-994-6679
> > > >
> > >
> >
> >
> >
> > --
> > Todd Lipcon
> > Software Engineer, Cloudera
> >
>
>
>
> --
> Henry Robinson
> Software Engineer
> Cloudera
> 415-994-6679
>
