[ https://issues.apache.org/jira/browse/HADOOP-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102329#comment-17102329 ]

Srinivasu Majeti commented on HADOOP-15524:
-------------------------------------------

Hi [~nanda], [~arpaga], can we commit this Jira? Please let me know whether there 
are any concerns with this fix or whether it is safe to commit. We have a 
customer request on this and might need a backport.

> BytesWritable causes OOME when array size reaches Integer.MAX_VALUE
> -------------------------------------------------------------------
>
>                 Key: HADOOP-15524
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15524
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: io
>            Reporter: Joseph Smith
>            Priority: Major
>
> BytesWritable.setSize uses Integer.MAX_VALUE to initialize the internal 
> array.  In my environment, this causes an OOME:
> {code:java}
> Exception in thread "main" java.lang.OutOfMemoryError: Requested array size 
> exceeds VM limit
> {code}
> An array of at most byte[Integer.MAX_VALUE - 2] must be used to prevent this 
> error. Tested on OSX and CentOS 7 using Java version 1.8.0_131.
> I noticed that java.util.ArrayList contains the following
> {code:java}
> /**
>  * The maximum size of array to allocate.
>  * Some VMs reserve some header words in an array.
>  * Attempts to allocate larger arrays may result in
>  * OutOfMemoryError: Requested array size exceeds VM limit
>  */
> private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
> {code}
>  
> BytesWritable.setSize should use something similar to prevent an OOME from 
> occurring.
>  
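For illustration, the ArrayList-style cap could be applied to BytesWritable's growth logic roughly as follows. This is only a sketch, not the actual HADOOP-15524 patch: the class name, the {{newCapacity}} helper, and the 1.5x growth policy are hypothetical.

{code:java}
public class BytesWritableCapSketch {
    // Hypothetical cap mirroring java.util.ArrayList's MAX_ARRAY_SIZE:
    // some VMs reserve header words in an array, so allocating exactly
    // Integer.MAX_VALUE elements throws "Requested array size exceeds VM limit".
    private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

    // Sketch of the growth computation setSize/setCapacity could use:
    // grow by 1.5x, but never request more than MAX_ARRAY_SIZE.
    static int newCapacity(int current, int required) {
        if (required < 0 || required > MAX_ARRAY_SIZE) {
            throw new IllegalArgumentException("requested size " + required
                + " exceeds maximum array size " + MAX_ARRAY_SIZE);
        }
        // Do the arithmetic in long so 1.5x growth cannot overflow int.
        long grown = (long) current + (current >> 1);
        if (grown < required) {
            grown = required;
        }
        return (int) Math.min(grown, MAX_ARRAY_SIZE);
    }
}
{code}

With this, a request near Integer.MAX_VALUE is clamped to MAX_ARRAY_SIZE instead of triggering the OOME, while smaller requests still grow by the usual 1.5x factor.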



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
