RE: [SparkScore]Performance portal for Apache Spark - WW26

2015-06-26 Thread Huang, Jie
Thanks. In general, we can see a stable trend in both the Spark master branch and the latest 
release.

We are also considering adding more benchmarks/workloads to this automated perf tool. Any 
comments and feedback are warmly welcomed.

Thank you && Best Regards,
Grace (Huang Jie)




Re: [SparkScore]Performance portal for Apache Spark - WW26

2015-06-26 Thread Nan Zhu
Thank you, Jie! Very nice work!

--  
Nan Zhu
http://codingcat.me






RE: [SparkScore]Performance portal for Apache Spark - WW26

2015-06-26 Thread Huang, Jie
Correct. Your calculation is right!

We have also noticed the kmeans performance drop. According to our observation, it is caused 
by unbalanced execution times among tasks, even though we used the same test data across the 
different versions (i.e., it is not caused by data skew).

The corresponding runtime information has been shared with Xiangrui, and he is now helping to 
identify the root cause.
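
(For reference, one way to surface this kind of per-task imbalance is a custom SparkListener 
that compares task durations within each stage. The sketch below is illustrative only, with 
assumed names, and is not part of our test harness.)

import org.apache.spark.scheduler.{SparkListener, SparkListenerStageCompleted, SparkListenerTaskEnd}
import scala.collection.mutable

// Records per-stage task durations and prints a max/median duration ratio per stage,
// a rough indicator of how unevenly work is spread across tasks.
class TaskSkewListener extends SparkListener {
  private val durations = mutable.Map.empty[Int, mutable.ArrayBuffer[Long]]

  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = synchronized {
    durations.getOrElseUpdate(taskEnd.stageId, mutable.ArrayBuffer.empty[Long]) +=
      taskEnd.taskInfo.duration   // task wall-clock time, in milliseconds
  }

  override def onStageCompleted(stage: SparkListenerStageCompleted): Unit = synchronized {
    for (times <- durations.remove(stage.stageInfo.stageId) if times.nonEmpty) {
      val sorted = times.sorted
      val median = math.max(sorted(sorted.size / 2), 1L)
      println(f"Stage ${stage.stageInfo.stageId}: ${times.size} tasks, " +
        f"max/median task duration = ${sorted.last.toDouble / median}%.2f")
    }
  }
}

// Register it on an existing SparkContext before running the workload:
// sc.addSparkListener(new TaskSkewListener)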

Thank you && Best Regards,
Grace (Huang Jie)




Re: [SparkScore]Performance portal for Apache Spark - WW26

2015-06-26 Thread Nan Zhu
Hi, Jie,  

Thank you very much for this work! Very helpful!

I would just like to confirm that I understand the numbers correctly: if we 
take the running time of the 1.2 release as 100 s,

9.1% - means the running time is 109.1 s?

-4% - means it becomes 96 s?

If that’s the true meaning of the numbers, what happened to k-means in HiBench?

Best,  

--  
Nan Zhu
http://codingcat.me






RE: [SparkScore] Performance portal for Apache Spark

2015-06-17 Thread Duan, Jiangang
We are looking for more workloads – if you guys have any suggestions, let us 
know.

-jiangang

From: Sandy Ryza [mailto:sandy.r...@cloudera.com]
Sent: Wednesday, June 17, 2015 5:51 PM
To: Huang, Jie
Cc: u...@spark.apache.org; dev@spark.apache.org
Subject: Re: [SparkScore] Performance portal for Apache Spark

This looks really awesome.

On Tue, Jun 16, 2015 at 10:27 AM, Huang, Jie <jie.hu...@intel.com> wrote:
Hi All

We are happy to announce the Performance Portal for Apache Spark, 
http://01org.github.io/sparkscore/ !
The Performance Portal for Apache Spark provides performance data on Spark upstream to the 
community, to help identify issues, better understand performance differentials between 
versions, and help Spark customers get across the finish line faster. The Performance Portal 
generates two reports: a regular (weekly) report and a release-based regression test report. 
We are currently using two benchmark suites, HiBench 
(http://github.com/intel-bigdata/HiBench) and Spark-perf 
(https://github.com/databricks/spark-perf). We welcome and look forward to 
your suggestions and feedback. More information and details are provided below.
About the Performance Portal for Apache Spark
Our goal is to work with the Apache Spark community to further enhance the scalability and 
reliability of Apache Spark. The data available on this site allows community members and 
potential Spark customers to closely track the performance trend of Apache Spark. Ultimately, 
we hope that this project will help the community fix performance issues quickly, thus 
providing better Apache Spark code to end customers. The current workloads used in the 
benchmarking include HiBench (a benchmark suite from Intel for evaluating big data frameworks 
such as Hadoop MR and Spark) and Spark-perf (a performance testing framework for Apache Spark 
from Databricks). Additional benchmarks will be added as they become available.
Description

Each data point represents a workload's runtime change, in percent, compared with the 
previous week. Different lines represent different workloads running in Spark 
yarn-client mode.
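
As a rough illustration of how these percentages map back to raw run times (the helper below 
is hypothetical and not part of the portal's tooling):

// Relative runtime versus a baseline run, expressed in percent; lower is better.
def runtimePercent(currentSeconds: Double, baselineSeconds: Double): Double =
  (currentSeconds - baselineSeconds) / baselineSeconds * 100.0

// e.g. runtimePercent(109.1, 100.0) is about  9.1  (slower than the baseline)
//      runtimePercent(96.0,  100.0) is about -4.0  (faster than the baseline)
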
Hardware

CPU type: Intel® Xeon® CPU E5-2697 v2 @ 2.70GHz
Memory: 128GB
NIC: 10GbE
Disk(s): 8 x 1TB SATA HDD
Software

Java version: 1.8.0_25
Hadoop version: 2.5.0-CDH5.3.2
HiBench version: 4.0
Spark on yarn-client mode (see the sketch below)
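
A minimal sketch of the yarn-client setup assumed above (illustrative only; the application 
name is hypothetical and the actual harness configuration is not shown here):

import org.apache.spark.{SparkConf, SparkContext}

// yarn-client mode: the driver runs on the submitting node, while executors
// are launched in YARN containers on the slave nodes.
val conf = new SparkConf()
  .setAppName("sparkscore-workload")   // hypothetical application name
  .setMaster("yarn-client")
val sc = new SparkContext(conf)
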
Cluster

1 node for Master
10 nodes for Slave
Summary
The lower the percentage, the better the performance.

Group        ww19    ww20    ww22    ww23    ww24    ww25
HiBench      9.1%    6.6%    6.0%    7.9%   -6.5%   -3.1%
spark-perf   4.1%    4.4%   -1.8%    4.1%   -4.7%   -4.6%


Y-Axis: normalized completion time; X-Axis: Work Week.
The commit number can be found in the result table.
The performance score for each workload is normalized based on the elapsed time 
for the 1.2 release. The lower, the better.

HiBench

JOB          ww19       ww20       ww22       ww23       ww24       ww25
commit       489700c8   8e3822a0   530efe3e   90c60692   db81b9d8   4eb48ed1
sleep        %          %          -2.1%      -2.9%      -4.1%      12.8%
wordcount    17.6%      11.4%      8.0%       8.3%       -18.6%     -10.9%
kmeans       92.1%      61.5%      72.1%      92.9%      86.9%      95.8%
scan         -4.9%      -7.2%      %          -1.1%      -25.5%     -21.0%
bayes        -24.3%     -20.1%     -18.3%     -11.1%     -29.7%     -31.3%
aggregation  5.6%       10.5%      %          9.2%       -15.3%     -15.0%
join         4.5%       1.2%       %          1.0%       -12.7%     -13.9%
sort         -3.3%      -0.5%      -11.9%     -12.5%     -17.5%     -17.3%
pagerank     2.2%       3.2%       4.0%       2.9%       -11.4%     -13.0%
terasort     -7.1%      -0.2%      -9.5%      -7.3%      -16.7%     -17.0%


Comments: an empty value (shown as %) means the workload did not run or failed in that week.

Y-Axis: normalized completion time; X-Axis: Work Week.
The commit number can be found in the result table.
The performance score for each workload is normalized based on the elapsed time 
for the 1.2 release. The lower, the better.
spark-perf

JOB           ww19       ww20       ww22       ww23       ww24       ww25
commit        489700c8   8e3822a0   530efe3e   90c60692   db81b9d8   4eb48ed1
agg           13.2%      7.0%       %          18.3%      5.2%       2.5%
agg-int       16.4%      21.2%      %          9.6%       4.0%       8.2%
agg-naive     4.3%       -2.4%      %          -0.8%      -6.7%      -6.8%
scheduling    -6.1%      -8.9%      -14.5%     -2.1%      -6.4%      -6.5%
count-filter  4.1%       1.0%       6.6%       6.8%       -10.2%     -10.4%
count         4.8%       4.6%       6.7%       8.0%       -7.3%      -7.0%
sort          -8.1%      -2.5%      -6.2%      -7.0%      -14.6%     -14.4%
sort-int      4.5%       15.3%      -1.6%      -0.1%      -1.5%      -2.2%


Comments: an empty value (shown as %) means the workload did not run or failed in that week.

Y-Axis: normalized completion time; X-Axis: Work Week.
The commit number can be found in the result table.
The performance score for each workload is normalized based on the elapsed 
time for the 1.2 release. The lower, the better.
Release
Summary
The lower the percentage, the better the performance.

Group        1.2.1   1.3.0   1.3.1   1.4.0
HiBench      -1.0%   10.5%   8.4%    8.6%
spark-perf   3.2%    0.9%    1.9%    1.3%


Y-Axis: normalized completion time.

Re: [SparkScore] Performance portal for Apache Spark

2015-06-17 Thread Sandy Ryza
This looks really awesome.
