Thanks a lot, Diana, for offering to help. Please see my replies inline below.

On Sat, Sep 11, 2021 at 8:37 AM Diana Clarke
<diana.joan.cla...@gmail.com> wrote:
> If you point me to the existing benchmarks for each project and
> instructions on how to execute them, I can let you know the easiest
> integration path.
>

For arrow-datafusion, you just need to install the Rust toolchain
using `rustup` [1], then run `cargo bench` from the project root.
Benchmark results will be saved under the
`target/criterion/BENCH_NAME/new` folder as a raw.csv file. You can
read more about the convention at
https://bheisler.github.io/criterion.rs/book/user_guide/csv_output.html.
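
For example, a minimal end-to-end run might look like this (assuming a
fresh checkout of arrow-datafusion and using the standard rustup
install one-liner from [1]):

    # install the Rust toolchain via rustup
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

    # run all criterion benchmarks from the project root
    cd arrow-datafusion
    cargo bench

    # inspect the raw results for one benchmark (BENCH_NAME is a placeholder)
    cat target/criterion/BENCH_NAME/new/raw.csv

Each benchmark gets its own BENCH_NAME folder under target/criterion/,
so a conbench runner could simply walk that directory after `cargo
bench` finishes.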

For arrow-rs, it's the exact same setup.

We have some extra TPCH integration benchmarks in datafusion, but I
think we can work on integrating them later. Getting the basic
criterion benchmarks into conbench would already be a huge win for us.

[1]: https://rustup.rs

> If the arrow-rs benchmarks are executable from command line and return
> parsable results (like json), it should be pretty easy to publish the
> results.
>

By default, results are saved as CSV files, but you can pass a
`--message-format=json` argument to save the results as JSON instead.
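
I haven't wired this up for conbench myself, so treat the exact
invocation below as an assumption on my part, but via cargo-criterion
it would look roughly like:

    # cargo-criterion is a separate helper crate that wraps the criterion harness
    cargo install cargo-criterion

    # emit machine-readable JSON benchmark records
    cargo criterion --message-format=json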

> - The arrow-rs and arrow-datafusion GitHub repositories must use
> squash merges (or Conbench would have to be extended to understand the
> 2 other GitHub merge methods).

Yes, we are using squash merge for both repos.

> - I'm not sure what the security implications are with respect to
> adding our ursabot integration and buildkite hooks to other
> repositories.
>

Are you concerned about security from the ursabot and Buildkite points
of view? If so, who should we contact to discuss this?

Thanks,
QP
