[ https://issues.apache.org/jira/browse/HDFS-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14720699#comment-14720699 ]
Colin Patrick McCabe commented on HDFS-8965:
--------------------------------------------

bq. Do we also already have tests for invalid op lengths (e.g. greater than max op size)? I see testFuzzSequences but that's not explicit.

{{TestNameNodeRecovery#testNonDefaultMaxOpSize}} tests maximum op sizes.

The latest patch fixes the test failure in {{TestJournal}}. The issue was that we need to ensure that {{scanOp}} works even when the edit log version is newer than the latest version the reader supports.

> Harden edit log reading code against out of memory errors
> ---------------------------------------------------------
>
>                 Key: HDFS-8965
>                 URL: https://issues.apache.org/jira/browse/HDFS-8965
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-8965.001.patch, HDFS-8965.002.patch, HDFS-8965.003.patch, HDFS-8965.004.patch, HDFS-8965.005.patch
>
>
> We should harden the edit log reading code against out of memory errors. Now that each op has a length prefix and a checksum, we can validate the checksum before trying to load the Op data. This should avoid out of memory errors when trying to load garbage data as Op data.
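
To make the hardening approach described above concrete, here is a minimal sketch of the idea. This is not the actual {{FSEditLogOp}} reader code; the record layout it assumes (an opcode byte, a 4-byte length prefix, the op body, and a trailing CRC32) and the class and method names are illustrative only. The point is that the scan validates the length against the configured maximum and verifies the checksum before anything in the body is interpreted, which is also why it keeps working when the layout version is newer than the reader:

{code:java}
import java.io.DataInputStream;
import java.io.IOException;
import java.util.zip.CRC32;

// Illustrative sketch only -- this class and its assumed record layout
// (opcode byte, 4-byte length prefix, body bytes, 4-byte CRC32 of the
// body) are hypothetical, not the real FSEditLogOp.Reader.
class OpScanner {
  private final int maxOpSize;  // cap on a single op, e.g. from configuration

  OpScanner(int maxOpSize) {
    this.maxOpSize = maxOpSize;
  }

  // Scan past one op, verifying its checksum, without interpreting the
  // body. Relying only on the length prefix and checksum is what lets
  // this work even for layout versions newer than the reader understands.
  void scanOp(DataInputStream in) throws IOException {
    int opCode = in.read();
    if (opCode < 0) {
      throw new IOException("unexpected end of edit log");
    }
    int length = in.readInt();
    if (length < 0 || length > maxOpSize) {
      // Reject absurd lengths *before* allocating a buffer for the body,
      // so garbage data cannot trigger an out-of-memory error.
      throw new IOException("op length " + length
          + " is outside the valid range [0, " + maxOpSize + "]");
    }
    byte[] body = new byte[length];  // bounded by maxOpSize
    in.readFully(body);

    CRC32 crc = new CRC32();
    crc.update(body, 0, body.length);
    int storedChecksum = in.readInt();
    if ((int) crc.getValue() != storedChecksum) {
      throw new IOException("checksum mismatch on op of length " + length);
    }
    // Only after the checksum verifies would a full reader go on to
    // deserialize the body as an Op.
  }
}
{code}

Because the scan allocates at most {{maxOpSize}} bytes and never deserializes the body, a corrupt or garbage length prefix produces a clean {{IOException}} rather than a huge allocation.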
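
And a quick demonstration of why the up-front length check matters (again hypothetical, reusing the {{OpScanner}} sketch above): an op whose length prefix claims roughly 2 GB is rejected immediately instead of provoking a 2 GB allocation.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class OpScannerDemo {
  public static void main(String[] args) throws IOException {
    // Forge a record with a garbage length prefix.
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    out.writeByte(42);                 // arbitrary opcode
    out.writeInt(Integer.MAX_VALUE);   // absurd length prefix
    out.flush();

    OpScanner scanner = new OpScanner(1024 * 1024);  // 1 MB cap
    try {
      scanner.scanOp(new DataInputStream(
          new ByteArrayInputStream(bytes.toByteArray())));
    } catch (IOException e) {
      // The bogus length is caught before any large buffer is allocated.
      System.out.println("rejected cleanly: " + e.getMessage());
    }
  }
}
{code}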