[
https://issues.apache.org/jira/browse/PARQUET-340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632343#comment-14632343
]
Chris Bannister commented on PARQUET-340:
-----------------------------------------
Resolved by pull request https://github.com/apache/parquet-mr/pull/246
A better solution would be to change the ratio's type to a double, but that
requires an API change, and the version of Hadoop in use doesn't have getDouble.
> totalMemoryPool is truncated to 32 bits
> ---------------------------------------
>
> Key: PARQUET-340
> URL: https://issues.apache.org/jira/browse/PARQUET-340
> Project: Parquet
> Issue Type: Bug
> Reporter: Chris Bannister
>
> With the heap set to 50 GB, I'm seeing lots of errors like this:
> Jul 17, 2015 3:18:14 PM WARNING: org.apache.parquet.hadoop.MemoryManager:
> Total allocation exceeds 95.00% (2,147,483,647 bytes) of heap memory
> This is because the ratio passed in to calculate the total memory is a
> float, which when used causes the long value to be truncated to max int32.
> Fix this by making it a double instead.
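> One common way a long ends up truncated to exactly 2,147,483,647 in Java is
> the Math.round overloads: Math.round(float) returns an int (saturating at
> Integer.MAX_VALUE), while Math.round(double) returns a long. A minimal
> sketch of that pitfall, assuming a 50 GB heap as in the report above (this
> is an illustration, not Parquet's actual MemoryManager code):
>
> public class RoundTruncation {
>     public static void main(String[] args) {
>         // Hypothetical max heap: 50 GB, matching the reported setup.
>         long maxHeap = 50L * 1024 * 1024 * 1024;
>
>         // With a float ratio, the long operand is promoted to float and
>         // Math.round(float) returns an int, saturating at 2^31 - 1.
>         float floatRatio = 0.95f;
>         long truncated = Math.round(maxHeap * floatRatio);
>         System.out.println(truncated); // prints 2147483647
>
>         // With a double ratio, Math.round(double) returns a long,
>         // so the full ~51 GB value survives.
>         double doubleRatio = 0.95;
>         long correct = Math.round(maxHeap * doubleRatio);
>         System.out.println(correct);
>     }
> }
>
> This matches the warning's "(2,147,483,647 bytes)" figure exactly, which is
> why changing the ratio to a double fixes the truncation.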
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)