[jira] [Commented] (HDFS-3134) Harden edit log loader against malformed or malicious input
[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273280#comment-13273280 ]

Hudson commented on HDFS-3134:
------------------------------

Integrated in Hadoop-Mapreduce-trunk #1076 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1076/])
HDFS-3134. harden edit log loader against malformed or malicious input. Contributed by Colin Patrick McCabe (Revision 1336943)

Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1336943
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenIdentifier.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java

> Harden edit log loader against malformed or malicious input
> -----------------------------------------------------------
>
>                 Key: HDFS-3134
>                 URL: https://issues.apache.org/jira/browse/HDFS-3134
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.23.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>             Fix For: 2.0.0
>
>         Attachments: HDFS-3134.001.patch, HDFS-3134.002.patch, HDFS-3134.003.patch, HDFS-3134.004.patch, HDFS-3134.005.patch, HDFS-3134.006.patch, HDFS-3134.007.patch, HDFS-3134.009.patch
>
> Currently, the edit log loader does not handle bad or malicious input sensibly. We can often cause OutOfMemory exceptions, null pointer exceptions, or other unchecked exceptions to be thrown by feeding the edit log loader bad input. In some environments, an out-of-memory error can cause the JVM process to be terminated.
> It's clear that we want these exceptions to be thrown as IOException instead of as unchecked exceptions. We also want to avoid out-of-memory situations.
> The main task here is to put a sensible upper limit on the lengths of the arrays and strings we allocate on command. The other task is to avoid creating unchecked exceptions (by dereferencing potentially-null pointers, for example). Instead, we should verify ahead of time and give a sensible error message that reflects the problem with the input.

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
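The hardening strategy in the description — cap every attacker-controlled length before allocating, and turn bad input into a checked IOException with a useful message — can be sketched as below. This is an illustrative sketch only: the class name `BoundedReader`, the method `readBoundedBytes`, and the 1 MiB cap are assumptions for the example, not names or values from the HDFS-3134 patch.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Hypothetical sketch of the bounded-read pattern described above.
public class BoundedReader {
    // Sanity cap chosen for illustration; assumed to be far above any
    // legitimate field length in the stream being parsed.
    static final int MAX_FIELD_LENGTH = 1 << 20; // 1 MiB

    // Validate the length up front and fail with a checked IOException whose
    // message reflects the bad input, instead of letting "new byte[len]"
    // throw NegativeArraySizeException or OutOfMemoryError later.
    public static byte[] readBoundedBytes(DataInputStream in) throws IOException {
        int len = in.readInt();
        if (len < 0 || len > MAX_FIELD_LENGTH) {
            throw new IOException("invalid field length " + len
                + " (expected 0.." + MAX_FIELD_LENGTH + ")");
        }
        byte[] buf = new byte[len];
        in.readFully(buf);
        return buf;
    }
}
```

The key point is that the check happens before the allocation, so a corrupt or hostile length field can never drive the heap request.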
[jira] [Commented] (HDFS-3134) Harden edit log loader against malformed or malicious input
[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273230#comment-13273230 ]

Hudson commented on HDFS-3134:
------------------------------

Integrated in Hadoop-Hdfs-trunk #1040 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1040/])
HDFS-3134. harden edit log loader against malformed or malicious input. Contributed by Colin Patrick McCabe (Revision 1336943)

Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1336943
[jira] [Commented] (HDFS-3134) Harden edit log loader against malformed or malicious input
[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13272924#comment-13272924 ]

Hudson commented on HDFS-3134:
------------------------------

Integrated in Hadoop-Mapreduce-trunk-Commit #2242 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2242/])
HDFS-3134. harden edit log loader against malformed or malicious input. Contributed by Colin Patrick McCabe (Revision 1336943)

Result = ABORTED
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1336943
[jira] [Commented] (HDFS-3134) Harden edit log loader against malformed or malicious input
[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13272902#comment-13272902 ]

Hudson commented on HDFS-3134:
------------------------------

Integrated in Hadoop-Common-trunk-Commit #2224 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2224/])
HDFS-3134. harden edit log loader against malformed or malicious input. Contributed by Colin Patrick McCabe (Revision 1336943)

Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1336943
[jira] [Commented] (HDFS-3134) Harden edit log loader against malformed or malicious input
[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13272895#comment-13272895 ]

Hudson commented on HDFS-3134:
------------------------------

Integrated in Hadoop-Hdfs-trunk-Commit #2299 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2299/])
HDFS-3134. harden edit log loader against malformed or malicious input. Contributed by Colin Patrick McCabe (Revision 1336943)

Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1336943
[jira] [Commented] (HDFS-3134) harden edit log loader against malformed or malicious input
[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13272879#comment-13272879 ]

Eli Collins commented on HDFS-3134:
-----------------------------------

+1 looks good
[jira] [Commented] (HDFS-3134) harden edit log loader against malformed or malicious input
[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271971#comment-13271971 ]

Hadoop QA commented on HDFS-3134:
---------------------------------

-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12526231/HDFS-3134.009.patch
against trunk revision .

+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 1 new or modified test files.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:
  org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
+1 contrib tests. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2397//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2397//console

This message is automatically generated.
[jira] [Commented] (HDFS-3134) harden edit log loader against malformed or malicious input
[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266728#comment-13266728 ]

Colin Patrick McCabe commented on HDFS-3134:
--------------------------------------------

I looked at https://builds.apache.org/job/PreCommit-HDFS-Build/2359//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html and it contains this:

bq. org.apache.hadoop.hdfs.server.namenode.INodeFileUnderConstruction doesn't override INodeFile.equals(Object)

Since this patch doesn't change INodeFileUnderConstruction at all, I can only assume this is a pre-existing problem where someone introduced a findbugs warning without adding a suppression. So this patch should be ready to submit.
[jira] [Commented] (HDFS-3134) harden edit log loader against malformed or malicious input
[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266308#comment-13266308 ]

Hadoop QA commented on HDFS-3134:
---------------------------------

-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12525248/HDFS-3134.007.patch
against trunk revision .

+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 1 new or modified test files.
-1 javadoc. The javadoc tool appears to have generated 2 warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
-1 findbugs. The patch appears to introduce 1 new Findbugs (version 1.3.9) warning.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
+1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.
+1 contrib tests. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2359//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/2359//artifact/trunk/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2359//console

This message is automatically generated.
[jira] [Commented] (HDFS-3134) harden edit log loader against malformed or malicious input
[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266189#comment-13266189 ]

Hadoop QA commented on HDFS-3134:
---------------------------------

-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12525230/HDFS-3134.006.patch
against trunk revision .

+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 1 new or modified test files.
-1 patch. The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2357//console

This message is automatically generated.
[jira] [Commented] (HDFS-3134) harden edit log loader against malformed or malicious input
[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13252834#comment-13252834 ]

Hadoop QA commented on HDFS-3134:
---------------------------------

-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12522464/HDFS-3134.004.patch
against trunk revision .

+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 1 new or modified test files.
-1 patch. The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2267//console

This message is automatically generated.
[jira] [Commented] (HDFS-3134) harden edit log loader against malformed or malicious input
[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13252144#comment-13252144 ]

Hadoop QA commented on HDFS-3134:
---------------------------------

-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12522365/HDFS-3134.003.patch
against trunk revision .

+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
-1 patch. The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2255//console

This message is automatically generated.
[jira] [Commented] (HDFS-3134) harden edit log loader against malformed or malicious input
[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13248722#comment-13248722 ] Hadoop QA commented on HDFS-3134:
----------------------------------

-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12521726/HDFS-3134.002.patch
against trunk revision.

+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 1 new or modified test files.
-1 patch. The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2218//console

This message is automatically generated.

> harden edit log loader against malformed or malicious input
> -----------------------------------------------------------
>
>                 Key: HDFS-3134
>                 URL: https://issues.apache.org/jira/browse/HDFS-3134
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-3134.001.patch, HDFS-3134.002.patch
>
> Currently, the edit log loader does not handle bad or malicious input sensibly.
> We can often cause OutOfMemory exceptions, null pointer exceptions, or other unchecked exceptions to be thrown by feeding the edit log loader bad input. In some environments, an out of memory error can cause the JVM process to be terminated.
> It's clear that we want these exceptions to be thrown as IOException instead of as unchecked exceptions. We also want to avoid out of memory situations.
> The main task here is to put a sensible upper limit on the lengths of arrays and strings we allocate on command. The other task is to try to avoid creating unchecked exceptions (by dereferencing potentially-NULL pointers, for example). Instead, we should verify ahead of time and give a more sensible error message that reflects the problem with the input.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
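The main hardening task the description names — capping a length field before allocating — can be sketched as follows. This is an illustrative helper, not the actual FSEditLogOp code; the method name and the 1 MiB limit are assumptions for the example.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class BoundedRead {
    // Illustrative cap on a single allocation; the real limit would be tuned.
    static final int MAX_LEN = 1 << 20; // 1 MiB

    static byte[] readLengthPrefixedBytes(DataInputStream in) throws IOException {
        int len = in.readInt();
        if (len < 0 || len > MAX_LEN) {
            // Reject malformed input with a checked IOException instead of
            // letting 'new byte[len]' throw OutOfMemoryError or
            // NegativeArraySizeException on attacker-controlled lengths.
            throw new IOException("invalid length " + len +
                " (must be between 0 and " + MAX_LEN + ")");
        }
        byte[] buf = new byte[len];
        in.readFully(buf);
        return buf;
    }

    public static void main(String[] args) throws IOException {
        // Well-formed record: big-endian length 3, then 3 bytes of payload.
        byte[] good = {0, 0, 0, 3, 'a', 'b', 'c'};
        byte[] out = readLengthPrefixedBytes(
            new DataInputStream(new ByteArrayInputStream(good)));
        System.out.println(new String(out)); // abc

        // Malformed record: length field of Integer.MAX_VALUE.
        byte[] bad = {0x7f, -1, -1, -1};
        try {
            readLengthPrefixedBytes(
                new DataInputStream(new ByteArrayInputStream(bad)));
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The point is that the bound is checked before allocation, so the loader fails with a descriptive IOException rather than an unchecked error.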
[jira] [Commented] (HDFS-3134) harden edit log loader against malformed or malicious input
[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13248624#comment-13248624 ] Colin Patrick McCabe commented on HDFS-3134:
--------------------------------------------

> Strictly speaking, "readPositiveVInt" should be "readNonNegativeVInt". See if
> you want to change it.

Ha! You know, I was actually thinking that. 0 is not positive, but it is non-negative. I suppose I should just bite the bullet and call it readNonNegativeVInt, despite the fact that it's a longer name.
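The naming point above is that the reader accepts zero, so it validates non-negativity, not positivity. A rough sketch of such a reader is below; note it uses a simplified base-128 varint encoding for illustration, not Hadoop's actual WritableUtils vint format.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class VIntReader {
    // Decode a base-128 varint, rejecting negative results with IOException.
    static int readNonNegativeVInt(DataInputStream in) throws IOException {
        int result = 0;
        for (int shift = 0; shift < 32; shift += 7) {
            int b = in.readUnsignedByte();
            result |= (b & 0x7f) << shift;
            if ((b & 0x80) == 0) {
                if (result < 0) {
                    // Malformed or malicious input: surface a checked
                    // exception with a message describing the problem.
                    throw new IOException("decoded value is negative: " + result);
                }
                return result; // zero is accepted: non-negative, not positive
            }
        }
        throw new IOException("varint too long");
    }

    public static void main(String[] args) throws IOException {
        // 300 encodes as the two bytes 0xAC 0x02; zero as a single 0x00 byte.
        byte[] data = {(byte) 0xAC, 0x02, 0x00};
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        System.out.println(readNonNegativeVInt(in)); // 300
        System.out.println(readNonNegativeVInt(in)); // 0
    }
}
```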
[jira] [Commented] (HDFS-3134) harden edit log loader against malformed or malicious input
[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240565#comment-13240565 ] Colin Patrick McCabe commented on HDFS-3134:
--------------------------------------------

Hi Suresh,

I'm sorry if my description was unclear. I am not talking about blindly translating unchecked exceptions into something else. I'm talking about fixing the code so that it doesn't generate those unchecked exceptions in the first place.

Hope this helps.
Colin
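The approach described above — validating ahead of time rather than letting a NullPointerException escape from deep inside the loader — might look roughly like this. The opcode names and parse method are hypothetical stand-ins, not the real FSEditLogOp API.

```java
import java.io.IOException;

public class Prevalidate {
    // Hypothetical opcodes standing in for the real edit log op types.
    enum OpCode { OP_ADD, OP_DELETE }

    static OpCode parseOpCode(String token) throws IOException {
        if (token == null) {
            // Check the potentially-null value up front instead of
            // dereferencing it and crashing with an NPE later.
            throw new IOException("edit log record is missing an opcode");
        }
        try {
            return OpCode.valueOf(token);
        } catch (IllegalArgumentException e) {
            // Translate the unchecked exception at the parse boundary into
            // an IOException whose message reflects the bad input.
            throw new IOException("unknown opcode: " + token, e);
        }
    }

    public static void main(String[] args) {
        try {
            System.out.println(parseOpCode("OP_ADD")); // OP_ADD
            parseOpCode("OP_BOGUS");
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Note the translation happens only at the input-parsing boundary, where the unchecked exception really does indicate bad input rather than a programming error; elsewhere in the code, an NPE would still be a bug.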
[jira] [Commented] (HDFS-3134) harden edit log loader against malformed or malicious input
[ https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240181#comment-13240181 ] Suresh Srinivas commented on HDFS-3134:
---------------------------------------

bq. It's clear that we want these exceptions to be thrown as IOException instead of as unchecked exceptions. We also want to avoid out of memory situations.

From which methods? Unchecked exceptions indicate programming errors. Blindly turning them into checked exceptions is not a good idea (as you say so in some of your comments). I am not sure which part of the code you are talking about.