[ https://issues.apache.org/jira/browse/SPARK-19181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcelo Vanzin resolved SPARK-19181.
------------------------------------
       Resolution: Fixed
    Fix Version/s: 2.3.1
                   2.4.0

Issue resolved by pull request 21280
[https://github.com/apache/spark/pull/21280]

> SparkListenerSuite.local metrics fails when average executorDeserializeTime 
> is too short.
> -----------------------------------------------------------------------------------------
>
>                 Key: SPARK-19181
>                 URL: https://issues.apache.org/jira/browse/SPARK-19181
>             Project: Spark
>          Issue Type: Bug
>          Components: Tests
>    Affects Versions: 2.1.0
>            Reporter: Jose Soltren
>            Assignee: Attila Zsolt Piros
>            Priority: Minor
>             Fix For: 2.4.0, 2.3.1
>
>
> https://github.com/apache/spark/blob/master/core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala#L249
> The "local metrics" test asserts that tasks should take more than 1ms on 
> average to complete, even though a code comment notes that this is a small 
> test and tasks may finish faster. I've been seeing some "failures" here on 
> fast systems that finish these tasks quite quickly.
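> Roughly the shape of the check involved (a paraphrase; the helper and object 
> names below are assumed, not copied from the suite): the average of a 
> millisecond-granularity metric such as executorDeserializeTime must be 
> non-zero, which cannot hold when every sampled task records 0 ms.
>
>     object NonZeroAvgSketch {
>       // Paraphrased helper: fails when the average of the collected metric is zero.
>       def checkNonZeroAvg(values: Iterable[Long], msg: String): Unit = {
>         assert(values.sum / values.size.toDouble > 0.0, msg)
>       }
>
>       def main(args: Array[String]): Unit = {
>         // On a fast machine every per-task executorDeserializeTime can round
>         // down to 0 ms, so the assertion below fails.
>         val deserializeTimesMs = Seq(0L, 0L, 0L, 0L)
>         checkNonZeroAvg(deserializeTimesMs, "executorDeserializeTime average was zero")
>       }
>     }
>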
> There are a few ways forward here:
> 1. Disable this test.
> 2. Relax this check.
> 3. Implement sub-millisecond granularity for task times throughout Spark.
> 4. (Imran Rashid's suggestion) Add buffer time by, say, having the task 
> reference a partition that implements a custom Externalizable.readExternal, 
> which always waits 1ms before returning (see the sketch after this list).
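>
> A minimal sketch of option 4 (the class name is hypothetical; only 
> org.apache.spark.Partition and java.io.Externalizable are existing APIs): a 
> Partition whose deserialization always sleeps for 1ms, so the task's 
> executorDeserializeTime cannot be recorded as zero.
>
>     import java.io.{Externalizable, ObjectInput, ObjectOutput}
>
>     import org.apache.spark.Partition
>
>     // Hypothetical test-only partition: deserializing it always takes >= 1ms.
>     class SlowDeserializingPartition(var index: Int)
>       extends Partition with Externalizable {
>
>       // Externalizable requires a public no-arg constructor.
>       def this() = this(0)
>
>       override def writeExternal(out: ObjectOutput): Unit = {
>         out.writeInt(index)
>       }
>
>       override def readExternal(in: ObjectInput): Unit = {
>         index = in.readInt()
>         // Guarantee a measurable executorDeserializeTime even on fast machines.
>         Thread.sleep(1)
>       }
>     }
>
> An RDD used by the test could return partitions of this type, so every 
> task's deserialization contributes at least 1ms to the averaged metric.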


