I took a look at the memory management pull request, and I agree with the
existing assessment that the patch is too big and risky to port into an
existing maintenance branch. Backports should be low-risk patches that
won't break existing applications on 1.6.x. This patch is large and
invasive, and it hooks directly into the very internals of Spark. The
chance of it breaking an existing, working 1.6.x application is not low.

On Fri, Oct 14, 2016 at 1:57 PM, Alexander Pivovarov <apivova...@gmail.com>
wrote:

> Hi Reynold
>
> Spark 1.6.x has a serious bug related to shuffle functionality:
> https://issues.apache.org/jira/browse/SPARK-14560
> https://issues.apache.org/jira/browse/SPARK-4452
>
> Shuffle throws an OutOfMemoryError under serious load. I've seen this
> error several times in my heavy jobs:
>
> java.lang.OutOfMemoryError: Unable to acquire 75 bytes of memory, got 0
>         at org.apache.spark.memory.MemoryConsumer.allocatePage(MemoryConsumer.java:120)
>         at org.apache.spark.shuffle.sort.ShuffleExternalSorter.acquireNewPageIfNecessary(ShuffleExternalSorter.java:346)
>
>
> It was fixed in spark-2.0.0, and a fix was also prepared for spark-1.6,
> BUT that fix was NOT merged - https://github.com/apache/spark/pull/13027
>
> Is it possible to include the fix in spark-1.6.3?
>
>
> Thank you
> Alex
>
>
> On Fri, Oct 14, 2016 at 1:39 PM, Reynold Xin <r...@databricks.com> wrote:
>
>> It's been a while and we have fixed a few bugs in branch-1.6. I plan to
>> cut rc1 for 1.6.3 next week (just in time for Spark Summit Europe). Let me
>> know if there are specific issues that should be addressed before that.
>> Thanks.
>>
>
>
