[ https://issues.apache.org/jira/browse/HIVE-4157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13617651#comment-13617651 ]

Hudson commented on HIVE-4157:
------------------------------

Integrated in Hive-trunk-h0.21 #2035 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2035/])
    HIVE-4157: ORC runs out of heap when writing (Kevin Wilfong via Gang Tim Liu) (Revision 1462363)

     Result = FAILURE
gangtimliu : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1462363
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OutStream.java

                
> ORC runs out of heap when writing
> ---------------------------------
>
>                 Key: HIVE-4157
>                 URL: https://issues.apache.org/jira/browse/HIVE-4157
>             Project: Hive
>          Issue Type: Improvement
>          Components: Serializers/Deserializers
>    Affects Versions: 0.11.0
>            Reporter: Kevin Wilfong
>            Assignee: Kevin Wilfong
>             Fix For: 0.11.0
>
>         Attachments: HIVE-4157.1.patch.txt
>
>
> The OutStream class used by the ORC file format aggressively allocates 
> memory for ByteBuffers and is slow to release it. This causes heap space 
> issues, particularly when wide tables or dynamic partitions are involved.
> As a first step toward resolving this problem, the OutStream class can be 
> modified to allocate memory lazily and to make it available for garbage 
> collection more actively.
> Follow-ups could include checking the amount of free memory as part of 
> determining whether a spill is needed.
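
For context, here is a minimal sketch (in Java) of the lazy-allocation idea described above. It is illustrative only, not the HIVE-4157 patch: the class and method names are invented for the example, and the real OutStream also handles compression codecs and multiple internal buffers.

import java.nio.ByteBuffer;

// Illustrative sketch only -- not the actual HIVE-4157 change. The class and
// method names are invented; the real OutStream also deals with compression.
public class LazyBufferedStream {

    private final int bufferSize;
    private ByteBuffer current;   // stays null until the first byte is written

    public LazyBufferedStream(int bufferSize) {
        this.bufferSize = bufferSize;
        // no allocation here: a stream that never receives data uses no heap
    }

    public void write(byte b) {
        if (current == null) {
            current = ByteBuffer.allocate(bufferSize);   // lazy allocation
        }
        current.put(b);
        if (!current.hasRemaining()) {
            spill();
        }
    }

    // Hand the buffered bytes off (e.g. to a codec or the file) and drop the
    // reference so the buffer becomes eligible for garbage collection.
    private void spill() {
        current.flip();
        byte[] out = new byte[current.remaining()];
        current.get(out);
        // ... write 'out' to the underlying stream ...
        current = null;   // release for GC instead of holding the buffer
    }

    public void flush() {
        if (current != null && current.position() > 0) {
            spill();
        }
    }
}

The point is simply that a stream costs no heap until data actually arrives, and the buffer reference is dropped as soon as the bytes are handed off, so writers spanning many columns or partitions do not pin large ByteBuffers for their whole lifetime.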

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
