Hi,

From my experience, it is possible to cut quite a lot by reducing
spark.sql.shuffle.partitions to some reasonable value (say, comparable
to the number of cores). The default of 200 is serious overkill for most
of the test cases anyway.
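As a minimal sketch of the idea: a test SparkSession configured with a low shuffle-partition count. The partition count of 4 and the app name are illustrative assumptions, not values from this thread; `spark.sql.shuffle.partitions` and the builder API are standard Spark.

```scala
import org.apache.spark.sql.SparkSession

// Sketch: a local SparkSession for unit tests with far fewer shuffle
// partitions than the default of 200. "4" here is an illustrative
// choice, roughly matching the number of local cores requested.
val spark = SparkSession.builder()
  .master("local[4]")
  .appName("unit-tests") // hypothetical app name
  .config("spark.sql.shuffle.partitions", "4")
  .getOrCreate()
```

With this setting, every shuffle in a test produces 4 tasks instead of 200, which cuts per-query scheduling overhead considerably on small test datasets.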


Best,
Maciej



On 21 August 2017 at 03:00, Dong Joon Hyun <dh...@hortonworks.com> wrote:

> +1 for any efforts to recover Jenkins!
>
>
>
> Thank you for the direction.
>
>
>
> Bests,
>
> Dongjoon.
>
>
>
> *From: *Reynold Xin <r...@databricks.com>
> *Date: *Sunday, August 20, 2017 at 5:53 PM
> *To: *Dong Joon Hyun <dh...@hortonworks.com>
> *Cc: *"dev@spark.apache.org" <dev@spark.apache.org>
> *Subject: *Re: Increase Timeout or optimize Spark UT?
>
>
>
> It seems like it's time to look into how to cut down some of the test
> runtimes. Test runtimes will slowly go up given the way development
> happens. 3 hr is already a very long time for tests to run.
>
>
>
>
>
> On Sun, Aug 20, 2017 at 5:45 PM, Dong Joon Hyun <dh...@hortonworks.com>
> wrote:
>
> Hi, All.
>
>
>
> Recently, the Apache Spark master branch tests (SBT with hadoop-2.7 / 2.6)
> have been hitting the build timeout.
>
>
>
> Please see the build time trend.
>
>
>
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-2.7/buildTimeTrend
>
>
>
> All of the 22 most recent builds have failed due to the timeout, directly or
> indirectly. The last success (SBT with Hadoop-2.7) was on 15th August.
>
>
>
> We may do one of the following.
>
>
>
>    1. Increase Build Timeout (3 hr 30 min)
>    2. Optimize UTs (Scala/Java/Python/UT)
>
>
>
> But Option 1 would be the immediate solution for now. Could you update
> the Jenkins setup?
>
>
>
> Bests,
>
> Dongjoon.
>
>
>
