[ https://issues.apache.org/jira/browse/SPARK-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell resolved SPARK-6405.
------------------------------------
    Resolution: Fixed
      Assignee: Matthew Cheah

> Spark Kryo buffer should be forced to be max. 2GB
> -------------------------------------------------
>
>                 Key: SPARK-6405
>                 URL: https://issues.apache.org/jira/browse/SPARK-6405
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.3.0
>            Reporter: Matt Cheah
>            Assignee: Matthew Cheah
>             Fix For: 1.4.0
>
>
> Kryo buffers used in serialization are backed by Java byte arrays, which 
> have a maximum size of 2GB. However, we currently set the buffer size 
> without guarding against numeric overflow or the maximum array size. We 
> should enforce a maximum buffer size of 2GB and warn the user when they 
> exceed it.
> I'm open to the idea of flat-out failing Spark Context initialization if 
> the buffer size is over 2GB, but I'm afraid that could break backwards 
> compatibility... although one can argue that such a user had an invalid 
> buffer size in the first place.
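> A minimal sketch of the overflow and the proposed 2GB guard (Scala; the 
> object and method names here are hypothetical, and I'm assuming the 
> 1.3-era spark.kryoserializer.buffer.max.mb setting):
>
>     object KryoBufferCheck {
>       // Java arrays are indexed by Int, so a byte-array-backed Kryo
>       // buffer can hold at most Int.MaxValue (~2GB) bytes.
>       val MaxBufferBytes: Long = Int.MaxValue.toLong
>
>       def bufferSizeBytes(maxMb: Int): Int = {
>         val bytes = maxMb.toLong * 1024 * 1024  // widen to Long first
>         require(bytes <= MaxBufferBytes,
>           s"spark.kryoserializer.buffer.max.mb ($maxMb MB) exceeds the " +
>           "2GB limit of the Java byte arrays backing Kryo buffers")
>         bytes.toInt
>       }
>
>       def main(args: Array[String]): Unit = {
>         println(bufferSizeBytes(64))   // 67108864 bytes: fine
>         println(4096 * 1024 * 1024)    // naive Int math wraps to 0
>         println(bufferSizeBytes(4096)) // throws IllegalArgumentException
>       }
>     }
>
> With plain Int arithmetic, 4096 * 1024 * 1024 silently wraps to 0, which 
> is exactly the kind of overflow the check above is meant to catch before 
> Kryo ever allocates a buffer.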


