[
https://issues.apache.org/jira/browse/HDFS-6151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Junping Du updated HDFS-6151:
-
Target Version/s: (was: 2.8.0)
> HDFS should refuse to cache blocks >=2GB
>
>
> Key: HDFS-6151
> URL: https://issues.apache.org/jira/browse/HDFS-6151
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: caching, datanode
> Affects Versions: 2.4.0
> Reporter: Andrew Wang
> Assignee: Andrew Wang
>
> If you try to cache a block that's >=2GB, the DN will silently fail to cache
> it since {{MappedByteBuffer}} uses a signed int to represent size. Blocks
> this large are rare, but we should log or alert the user somehow.
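The limit described above comes from `java.nio`: `FileChannel.map` rejects any requested size greater than `Integer.MAX_VALUE` (2 GiB − 1 bytes), because a `MappedByteBuffer` is indexed with a signed int. A minimal sketch of the kind of guard the report asks for (this is illustrative only, not actual DataNode code; `canCache` is a hypothetical helper):

```java
// Sketch, not HDFS code: MappedByteBuffer (via FileChannel.map) cannot map
// more than Integer.MAX_VALUE bytes, so a DN caching path should reject
// oversized blocks up front and tell the user, rather than failing silently.
public class CacheSizeCheck {
    // Largest size FileChannel.map accepts: 2^31 - 1 bytes.
    static final long MAX_MAPPABLE = Integer.MAX_VALUE;

    // Hypothetical helper: true if a block of this size is mmap-cacheable.
    static boolean canCache(long blockSizeBytes) {
        return blockSizeBytes <= MAX_MAPPABLE;
    }

    public static void main(String[] args) {
        long twoGiB = 2L * 1024 * 1024 * 1024;
        System.out.println(canCache(twoGiB - 1)); // just under the limit: true
        System.out.println(canCache(twoGiB));     // 2 GiB or more: false
    }
}
```

With a check like this, the DN could log a warning (or reject the cache directive) instead of silently skipping the block.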
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org