[ https://issues.apache.org/jira/browse/HDFS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Harsh J updated HDFS-583:
-------------------------

    Component/s:     (was: data-node)
                     name-node
        Summary:     HDFS should enforce a max block size  (was: DataNode should enforce a max block size)

> HDFS should enforce a max block size
> ------------------------------------
>
>                 Key: HDFS-583
>                 URL: https://issues.apache.org/jira/browse/HDFS-583
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: Hairong Kuang
>
> When a DataNode creates a replica, it should enforce a max block size so clients can't go crazy. One way of enforcing this is to make BlockWriteStreams filter streams that check the block size.
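The filter-stream approach suggested in the description can be sketched as follows. This is a minimal illustration, not the actual HDFS code: the class and field names (`BoundedBlockOutputStream`, `maxBlockSize`) are hypothetical, and the real `BlockWriteStreams` internals differ.

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class MaxBlockSizeDemo {

    /**
     * Hypothetical filter stream that wraps a block's output stream and
     * rejects any write that would push the block past maxBlockSize.
     */
    static class BoundedBlockOutputStream extends FilterOutputStream {
        private final long maxBlockSize;
        private long written;

        BoundedBlockOutputStream(OutputStream out, long maxBlockSize) {
            super(out);
            this.maxBlockSize = maxBlockSize;
        }

        @Override
        public void write(int b) throws IOException {
            checkLimit(1);
            out.write(b);
            written += 1;
        }

        @Override
        public void write(byte[] b, int off, int len) throws IOException {
            checkLimit(len);
            out.write(b, off, len);
            written += len;
        }

        private void checkLimit(long extra) throws IOException {
            if (written + extra > maxBlockSize) {
                throw new IOException(
                        "Write would exceed max block size of " + maxBlockSize + " bytes");
            }
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // Tiny 8-byte limit for demonstration only.
        BoundedBlockOutputStream block = new BoundedBlockOutputStream(sink, 8);
        block.write(new byte[8], 0, 8); // fills the block exactly
        try {
            block.write(42); // one byte too many
            System.out.println("no error");
        } catch (IOException e) {
            System.out.println("rejected oversized write");
        }
    }
}
```

Because the size check happens inside the stream itself, every write path through the wrapper is covered and a misbehaving client cannot grow a replica past the configured limit.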