Re: Benchmarking mailing list thread [was Fwd: [Discuss] Benchmarking infrastructure]

2019-04-25 Thread Wes McKinney
On Thu, Apr 25, 2019 at 1:28 AM Melik-Adamyan, Areg wrote: > Hi, > We are talking about the same thing actually, but you do not want to use 3rd-party tools. > For 3 and 4 - you run the first version and store in 1.out, then the second version and store in 2.out, and run the compare tool. Your tool does two steps automatically, that is fine. …

RE: Benchmarking mailing list thread [was Fwd: [Discuss] Benchmarking infrastructure]

2019-04-24 Thread Melik-Adamyan, Areg
Hi, We are talking about the same thing actually, but you do not want to use 3rd-party tools. For 3 and 4 - you run the first version and store in 1.out, then the second version and store in 2.out, and run the compare tool. Your tool does two steps automatically, that is fine. > Various reasons why I think th…

Re: Benchmarking mailing list thread [was Fwd: [Discuss] Benchmarking infrastructure]

2019-04-24 Thread Francois Saint-Jacques
Hello, archery is the "shim" script that glues together some of the steps (2-4) that you described. It builds Arrow (C++ for now), finds the multiple benchmark binaries, runs them, and collects the outputs. I encourage you to check the implementation, notably [1] and [2] (and generally [3]). Think of it as …
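The glue steps described above (find the benchmark binaries, run them, collect the outputs) can be sketched roughly as follows. This is not archery's actual implementation or API; the directory layout, the `*-benchmark` naming pattern, and the runner loop are assumptions for illustration. The `--benchmark_format=json` flag is real Google Benchmark behavior.

```python
# Hypothetical sketch of the "glue" archery provides: discover Google
# Benchmark binaries in a build tree, run each one with JSON output,
# and collect the per-benchmark timings into one dict.
import glob
import json
import subprocess

def parse_gbench_json(text):
    """Map benchmark name -> real_time from Google Benchmark JSON output."""
    doc = json.loads(text)
    return {b["name"]: b["real_time"] for b in doc["benchmarks"]}

def run_benchmarks(build_dir):
    """Run every *-benchmark binary under build_dir and merge the results.

    The glob pattern is an assumption about the build layout, not how
    archery actually locates benchmark executables.
    """
    results = {}
    for binary in sorted(glob.glob(f"{build_dir}/release/*-benchmark")):
        proc = subprocess.run(
            [binary, "--benchmark_format=json"],  # real Google Benchmark flag
            capture_output=True, text=True, check=True,
        )
        results.update(parse_gbench_json(proc.stdout))
    return results
```

Keeping the parsing step (`parse_gbench_json`) separate from the process-running step makes the collection logic easy to test without any compiled benchmark binaries.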

RE: Benchmarking mailing list thread [was Fwd: [Discuss] Benchmarking infrastructure]

2019-04-24 Thread Melik-Adamyan, Areg
Wes, the process as I see it should be the following. 1. A commit triggers a build in TeamCity. I have set up TeamCity, but we can use whatever CI we would like. 2. TeamCity uses a pool of identical machines to run the predefined (or all) performance benchmarks on one of the build machines from …

RE: Benchmarking mailing list thread [was Fwd: [Discuss] Benchmarking infrastructure]

2019-04-24 Thread Melik-Adamyan, Areg
On Wed, Apr 24, 2019 at 11:22 AM Antoine Pitrou wrote: > Hi Areg, > On 23/04/2019 at 23:43, Melik-Adamyan, Areg wrote: > Because we are using Google Benchmark, which has a specific format …

Re: Benchmarking mailing list thread [was Fwd: [Discuss] Benchmarking infrastructure]

2019-04-24 Thread Wes McKinney
In benchmarking, one of the hardest parts (IMHO) is the process/workflow automation. I'm in support of the development of a "meta-benchmarking" framework that offers automation, extensibility, and the possibility of customization. One of the reasons that people don't do more benchmarking as part of…

Re: Benchmarking mailing list thread [was Fwd: [Discuss] Benchmarking infrastructure]

2019-04-24 Thread Sebastien Binet
On Wed, Apr 24, 2019 at 11:22 AM Antoine Pitrou wrote: > Hi Areg, > On 23/04/2019 at 23:43, Melik-Adamyan, Areg wrote: > Because we are using Google Benchmark, which has a specific format, there is a tool called benchcmp which compares two runs: > $ benchcmp old.txt new.txt > bench…

Re: Benchmarking mailing list thread [was Fwd: [Discuss] Benchmarking infrastructure]

2019-04-24 Thread Antoine Pitrou
Hi Areg, On 23/04/2019 at 23:43, Melik-Adamyan, Areg wrote: > Because we are using Google Benchmark, which has a specific format, there is a tool called benchcmp which compares two runs: > $ benchcmp old.txt new.txt > benchmark old ns/op new ns/op delta > BenchmarkConcat…

RE: Benchmarking mailing list thread [was Fwd: [Discuss] Benchmarking infrastructure]

2019-04-23 Thread Melik-Adamyan, Areg
Because we are using Google Benchmark, which has a specific output format, there is a tool called benchcmp which compares two runs:

$ benchcmp old.txt new.txt
benchmark         old ns/op   new ns/op   delta
BenchmarkConcat   523         68.6        -86.88%

So the comparison part is done and th…
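The comparison benchcmp performs can be sketched in a few lines: read two runs, match benchmarks by name, and report the relative change. This is a simplified illustration, not benchcmp itself; the two-column "name, ns/op" input layout is an assumption based on the example quoted in the thread.

```python
# Minimal sketch of a benchcmp-style comparison: per-benchmark delta
# between an old run and a new run. Input format (name followed by
# ns/op) mirrors the example quoted above; it is an assumption, not
# benchcmp's actual parser.
def parse_run(text):
    """Map benchmark name -> ns/op from lines like 'BenchmarkConcat 523'."""
    results = {}
    for line in text.strip().splitlines():
        name, ns_per_op = line.split()[:2]
        results[name] = float(ns_per_op)
    return results

def compare(old_text, new_text):
    """Yield (name, old ns/op, new ns/op, delta %) for shared benchmarks."""
    old, new = parse_run(old_text), parse_run(new_text)
    for name in old:
        if name in new:
            delta = (new[name] - old[name]) / old[name] * 100
            yield name, old[name], new[name], round(delta, 2)

# Reproducing the thread's example: 523 ns/op -> 68.6 ns/op is -86.88%.
rows = list(compare("BenchmarkConcat 523", "BenchmarkConcat 68.6"))
print(rows)
```

Matching by benchmark name rather than by line position is what lets the tool cope with benchmarks being added, removed, or reordered between the two runs.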