[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Colin Patrick McCabe updated HDFS-3134:
---------------------------------------

    Attachment: HDFS-3134.006.patch

* rebase on trunk

> harden edit log loader against malformed or malicious input
> ------------------------------------------------------------
>
>                 Key: HDFS-3134
>                 URL: https://issues.apache.org/jira/browse/HDFS-3134
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.23.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-3134.001.patch, HDFS-3134.002.patch, HDFS-3134.003.patch, HDFS-3134.004.patch, HDFS-3134.005.patch, HDFS-3134.006.patch
>
> Currently, the edit log loader does not handle bad or malicious input sensibly.
> We can often cause OutOfMemory exceptions, null pointer exceptions, or other unchecked exceptions to be thrown by feeding the edit log loader bad input. In some environments, an out-of-memory error can cause the JVM process to be terminated.
> It is clear that we want these failures to surface as IOException rather than as unchecked exceptions, and we also want to avoid out-of-memory situations. The main task here is to put a sensible upper limit on the lengths of the arrays and strings we allocate in response to values read from the input. The other task is to avoid triggering unchecked exceptions (by dereferencing potentially null pointers, for example); instead, we should validate ahead of time and report an error message that reflects the actual problem with the input.
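For illustration only, the minimal sketch below shows the two defensive patterns the description asks for: validating a length field against an upper bound before allocating, and checking a required field up front so bad input surfaces as an IOException with a meaningful message rather than an OutOfMemoryError or NullPointerException. The class, method, and constant names (BoundedEditLogReads, readBoundedBytes, MAX_OP_FIELD_LENGTH) are hypothetical and are not taken from the attached patches.

import java.io.DataInput;
import java.io.IOException;

class BoundedEditLogReads {
  // Hypothetical cap; a real limit would be derived from protocol constraints.
  private static final int MAX_OP_FIELD_LENGTH = 1024 * 1024; // 1 MB

  /** Read a length-prefixed byte array, rejecting negative or absurd lengths. */
  static byte[] readBoundedBytes(DataInput in) throws IOException {
    int length = in.readInt();
    if (length < 0 || length > MAX_OP_FIELD_LENGTH) {
      throw new IOException("edit log field length " + length
          + " is outside the valid range [0, " + MAX_OP_FIELD_LENGTH + "]");
    }
    byte[] data = new byte[length]; // safe: length was validated above
    in.readFully(data);
    return data;
  }

  /** Check a field before use instead of letting a NullPointerException escape. */
  static String requirePresent(String value, String fieldName) throws IOException {
    if (value == null) {
      throw new IOException("edit log op is missing required field: " + fieldName);
    }
    return value;
  }
}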