[
https://issues.apache.org/jira/browse/HIVE-7685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14266632#comment-14266632
]
Brock Noland commented on HIVE-7685:
------------------------------------
bq. But should it be documented in Hive's wiki even though it's a Parquet
parameter, since it's in HiveConf.java?
Yes, this was implemented specifically for Hive users, who cannot easily control
the number of partitions being written, so I think it makes sense to document it
in the Hive Parquet docs...
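For reference, here is a hedged sketch of what tuning the knob could look like from
Java client code (in a Hive session the same property could be set with a SET
command). The property name parquet.memory.pool.ratio comes from parquet-hadoop's
ParquetOutputFormat; the 0.5 value below is purely illustrative, not a documented
Hive default.

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative only: cap the total memory used by all open Parquet writers
// at 50% of the JVM heap via the parquet-hadoop pool-ratio property.
public class ParquetMemoryPoolExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setFloat("parquet.memory.pool.ratio", 0.5f);
    System.out.println("pool ratio = " + conf.get("parquet.memory.pool.ratio"));
  }
}
{code}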
> Parquet memory manager
> ----------------------
>
> Key: HIVE-7685
> URL: https://issues.apache.org/jira/browse/HIVE-7685
> Project: Hive
> Issue Type: Improvement
> Components: Serializers/Deserializers
> Reporter: Brock Noland
> Assignee: Dong Chen
> Fix For: 0.15.0
>
> Attachments: HIVE-7685.1.patch, HIVE-7685.1.patch.ready,
> HIVE-7685.patch, HIVE-7685.patch.ready
>
>
> Similar to HIVE-4248, Parquet tries to write very large "row groups".
> This causes Hive to run out of memory during dynamic partition inserts, when a
> reducer may have many Parquet files open at a given time.
> As such, we should implement a memory manager which ensures that we don't run
> out of memory due to buffering too many row groups within a single JVM.
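> As an illustration of the approach only (not the actual patch; the class and
> method names below are hypothetical), a minimal Java sketch: a per-JVM pool is
> divided across all registered writers, and each writer's row-group buffer is
> scaled down once the pool is oversubscribed.
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
>
> // Hypothetical sketch of a per-JVM row-group memory manager.
> public class RowGroupMemoryManager {
>   private final long poolSize;                            // bytes shared by all writers
>   private final List<Long> requested = new ArrayList<>(); // per-writer row-group size
>
>   public RowGroupMemoryManager(double poolRatio) {
>     this.poolSize = (long) (Runtime.getRuntime().maxMemory() * poolRatio);
>   }
>
>   /** Register a writer that wants the given row-group size; returns its index. */
>   public synchronized int register(long rowGroupSize) {
>     requested.add(rowGroupSize);
>     return requested.size() - 1;
>   }
>
>   /** Size a writer should actually buffer, shrunk when the pool is oversubscribed. */
>   public synchronized long allocation(int writerIndex) {
>     long total = 0;
>     for (long size : requested) {
>       total += size;
>     }
>     double scale = total <= poolSize ? 1.0 : (double) poolSize / total;
>     return (long) (requested.get(writerIndex) * scale);
>   }
>
>   public static void main(String[] args) {
>     RowGroupMemoryManager mm = new RowGroupMemoryManager(0.5);
>     // e.g. a dynamic-partition reducer holding ten 128 MB writers open at once
>     for (int i = 0; i < 10; i++) {
>       mm.register(128L * 1024 * 1024);
>     }
>     System.out.println("per-writer row-group bytes: " + mm.allocation(0));
>   }
> }
> {code}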
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)