These dashboards look great!

Can we publish the links to the dashboards somewhere for better visibility?
E.g. on the Jenkins website, in emails, or on the wiki.

Regards,
Anton

On Wed, Jul 18, 2018 at 10:08 AM Andrew Pilloud <apill...@google.com> wrote:

> Hi Etienne,
>
> I've been asking around and it sounds like we should be able to get a
> dedicated Jenkins node for performance tests. Another thing that might help
> is making the runs a few times longer. They are currently running around 2
> seconds each, so the build and setup overhead probably exceeds the time spent
> actually testing.
> Internally at Google we are running them with 2000x as many events on
> Dataflow, but a job of that size won't even complete on the Direct Runner.
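>
> As a rough sketch of what a longer run could look like (the exact Gradle task
> path and flag names here are assumptions and may differ between Beam versions,
> so treat this as illustrative rather than the canonical invocation):
>
>     ./gradlew :beam-sdks-java-nexmark:run \
>         -Pnexmark.runner=":beam-runners-direct-java" \
>         -Pnexmark.args="--runner=DirectRunner --suite=SMOKE --streaming=false \
>             --manageResources=false --monitorJobs=true --numEvents=1000000"
>
> The point is simply to make each query run long enough that scheduling noise
> on a shared executor is small relative to the measured runtime.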
>
> I didn't see the query 3 issues, but now that you point it out it looks
> like a bug to me too.
>
> Andrew
>
> On Wed, Jul 18, 2018 at 1:13 AM Etienne Chauchot <echauc...@apache.org>
> wrote:
>
>> Hi Andrew,
>>
>> Yes, I saw that. Apart from dedicating Jenkins nodes to Nexmark, I see no
>> other way.
>>
>> Also, did you see the query 3 output size on the direct runner? It should be
>> a straight line and it is not; I'm wondering if there is a problem with the
>> state and timers implementation in the direct runner.
>>
>> Etienne
>>
>> On Tue, Jul 17, 2018 at 11:38 -0700, Andrew Pilloud wrote:
>>
>> I'm noticing the graphs are really noisy. It looks like we are running
>> these on shared Jenkins executors, so our perf tests are fighting with
>> other builds for CPU. I've opened an issue
>> https://issues.apache.org/jira/browse/BEAM-4804 and am wondering if
>> anyone knows an easy fix to isolate these jobs.
>>
>> Andrew
>>
>> On Fri, Jul 13, 2018 at 2:39 AM Łukasz Gajowy <lgaj...@apache.org> wrote:
>>
>> @Etienne: Nice to see the graphs! :)
>>
>> @Ismael: Good idea, there's no document yet. I think we could create a
>> small Google doc with instructions on how to do this.
>>
>> On Fri, Jul 13, 2018 at 10:46, Etienne Chauchot <echauc...@apache.org>
>> wrote:
>>
>> Hi,
>>
>> @Andrew, this is because I did not find a way to set 2 scales on the Y
>> axis on the perfkit graphs. Indeed, numResults varies from 1 to 100,000 while
>> runtimeSec is usually below 10s.
>>
>> Etienne
>>
>> On Thu, Jul 12, 2018 at 12:04 -0700, Andrew Pilloud wrote:
>>
>> This is great; it should make performance work much easier! I'm going to get
>> the Beam SQL Nexmark jobs publishing as well. (Opened
>> https://issues.apache.org/jira/browse/BEAM-4774 to track.) I might take
>> on the Dataflow runner as well if no one else volunteers.
>>
>> I am curious as to why you have two separate graphs for runtime and count
>> rather than graphing count/runtime to get the throughput rate for each run?
>> Or should that be a third graph? It looks like it would just be a small tweak
>> to the query in perfkit.
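>>
>> For illustration, the tweak might be as small as dividing one metric by the
>> other in the dashboard's SQL. The table and column names below are
>> placeholders I'm assuming, not the actual perfkit schema:
>>
>>     SELECT
>>       run_timestamp,
>>       num_results / runtime_sec AS throughput_per_sec
>>     FROM nexmark_results
>>     ORDER BY run_timestamp;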
>>
>>
>>
>> Andrew
>>
>> On Thu, Jul 12, 2018 at 11:40 AM Pablo Estrada <pabl...@google.com>
>> wrote:
>>
>> This is really cool, Etienne :) Thanks for working on this.
>> Out of curiosity, do you know how often the tests run on each runner?
>>
>> Best
>> -P.
>>
>> On Thu, Jul 12, 2018 at 2:15 AM Romain Manni-Bucau <rmannibu...@gmail.com>
>> wrote:
>>
>> Awesome, Etienne! This is really important for the (user) community to
>> have that visibility, since it is one of the most important aspects of
>> Beam's quality. Kudos!
>>
>>
>> Romain Manni-Bucau
>> @rmannibucau <https://twitter.com/rmannibucau> |  Blog
>> <https://rmannibucau.metawerx.net/> | Old Blog
>> <http://rmannibucau.wordpress.com> | Github
>> <https://github.com/rmannibucau> | LinkedIn
>> <https://www.linkedin.com/in/rmannibucau> | Book
>> <https://www.packtpub.com/application-development/java-ee-8-high-performance>
>>
>>
>> On Thu, Jul 12, 2018 at 10:59, Jean-Baptiste Onofré <j...@nanthrax.net>
>> wrote:
>>
>> It's really great to have these dashboards and integration in Jenkins!
>>
>> Thanks Etienne for driving this!
>>
>> Regards
>> JB
>>
>> On 11/07/2018 15:13, Etienne Chauchot wrote:
>> >
>> > Hi guys,
>> >
>> > I'm glad to announce that the CI of Beam has improved a lot! Nexmark
>> > is now included in the perfkit dashboards.
>> >
>> > On each commit to master, the Nexmark suites are run and the results are
>> > plotted on the graphs.
>> >
>> > I've created 2 kinds of dashboards:
>> > - one for performance (run times of the queries)
>> > - one for the size of the output PCollection (which should be constant)
>> >
>> > There are dashboards for these runners:
>> > - spark
>> > - flink
>> > - direct runner
>> >
>> > Each dashboard contains:
>> > - graphs in batch mode
>> > - graphs in streaming mode
>> > - graphs for the 13 queries.
>> >
>> > That gives more than a hundred graphs (my right finger hurts after so
>> > many mouse clicks :) ). It is that detailed so that anyone
>> > can focus on the area they are interested in.
>> > Feel free to also create new dashboards with more aggregated data.
>> >
>> > Thanks to Lukasz and Cham for reviewing my PRs and showing how to use
>> > perfkit dashboards.
>> >
>> > The dashboards are here:
>> >
>> > https://apache-beam-testing.appspot.com/explore?dashboard=5084698770407424
>> > https://apache-beam-testing.appspot.com/explore?dashboard=5699257587728384
>> > https://apache-beam-testing.appspot.com/explore?dashboard=5138380291571712
>> > https://apache-beam-testing.appspot.com/explore?dashboard=5099379773931520
>> > https://apache-beam-testing.appspot.com/explore?dashboard=5731568492478464
>> > https://apache-beam-testing.appspot.com/explore?dashboard=5163657986048000
>> >
>> > Enjoy,
>> >
>> > Etienne
>> >
>> >
>>
>>
