[ https://issues.apache.org/jira/browse/HDFS-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14723320#comment-14723320 ]
Brahma Reddy Battula commented on HDFS-8965:
--------------------------------------------

Thanks [~cmccabe] for working on this issue. Great work here!

*Some minor nits:*

1) {code}in.skip(MIN_OP_LENGTH - 1 - CHECKSUM_LENGTH); // skip length and txid{code}
I think the code would be easier to understand written as {{in.skip(4 + 8)}} than the current statement. What do you say?

2) {code}- * @param logVersion The version of the data coming from the stream.{code}
It seems you may have removed this by mistake?

3) {code}  FSIMAGE_CHECKSUM(-26, "Support checksum for fsimage"),
  REMOVE_REL13_DISK_LAYOUT_SUPPORT(-27, "Remove support for 0.13 disk layout"),
-  EDITS_CHESKUM(-28, "Support checksum for editlog"),
+  EDITS_CHEKSUM(-28, "Support checksum for editlog"),
{code}
I feel it should be EDITS_CHE{color:red}C{color}KSUM, which would be consistent with FSIMAGE_CHECKSUM, and as [~andrew.wang] mentioned, this is a good chance to correct the typo.

4) {code}+ * 1-byte opid{code}
How about changing this to {{opcode}} instead of "opid", which we did not use elsewhere?

5) {quote}The patch appears to introduce 1 new Findbugs (version 3.0.0) warnings.{quote}
Currently the Jenkins report does not show this Findbugs warning; we will only see it once the patch is committed. Can you take care of it? I have seen the same thing before, e.g. in HDFS-8969.

> Harden edit log reading code against out of memory errors
> ---------------------------------------------------------
>
>                 Key: HDFS-8965
>                 URL: https://issues.apache.org/jira/browse/HDFS-8965
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>        Attachments: HDFS-8965.001.patch, HDFS-8965.002.patch, HDFS-8965.003.patch, HDFS-8965.004.patch, HDFS-8965.005.patch
>
>
> We should harden the edit log reading code against out of memory errors. Now
> that each op has a length prefix and a checksum, we can validate the checksum
> before trying to load the Op data. This should avoid out of memory errors
> when trying to load garbage data as Op data.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
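The hardening described in the issue (sanity-check the length prefix, then verify the checksum before handing the bytes to any Op deserializer) can be sketched roughly as follows. This is a minimal illustration, not the actual HDFS-8965 patch code: the framing (4-byte length, 8-byte txid, payload, 4-byte CRC32), the {{MAX_OP_SIZE}} cap, and the {{EditOpReader}}/{{readOp}} names are all hypothetical.

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.util.zip.CRC32;

public class EditOpReader {
    // Hypothetical cap on op size: rejecting implausible lengths up front is
    // what prevents an OutOfMemoryError when garbage bytes are read as a length.
    static final int MAX_OP_SIZE = 1 << 20;

    /**
     * Reads one framed op: 4-byte length, 8-byte txid, payload, 4-byte CRC32
     * (hypothetical framing for illustration only). The checksum is verified
     * before the payload would be handed to any deserializer.
     */
    public static byte[] readOp(DataInputStream in) throws IOException {
        int length = in.readInt();
        if (length < 0 || length > MAX_OP_SIZE) {
            // Garbage length: fail fast instead of allocating a huge buffer.
            throw new IOException("implausible op length " + length);
        }
        long txid = in.readLong();
        byte[] payload = new byte[length];
        in.readFully(payload);
        long expected = in.readInt() & 0xffffffffL; // stored CRC32, unsigned
        CRC32 crc = new CRC32();
        crc.update(payload, 0, payload.length);
        if (crc.getValue() != expected) {
            throw new IOException("checksum mismatch for txid " + txid);
        }
        return payload; // only now is it safe to deserialize the op
    }
}
```

The key ordering is that both the length sanity check and the checksum verification happen before deserialization, so corrupt or truncated edit log bytes are rejected with an IOException rather than being parsed as an Op.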