[ 
https://issues.apache.org/jira/browse/HDFS-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14723831#comment-14723831
 ] 

Colin Patrick McCabe commented on HDFS-8965:
--------------------------------------------

bq. I think it will be easy to understand code like in.skip (4+8) compared to 
current statement. what do you say..?

ok
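To illustrate the suggestion: a single self-documenting skip over the fixed-width fields reads better than arithmetic buried in a larger expression. This is only a sketch, not the patch's actual reader code; the record layout here (a hypothetical 4-byte length prefix followed by an 8-byte transaction id, then the payload) and the class name SkipDemo are assumptions for demonstration.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class SkipDemo {
    public static void main(String[] args) throws IOException {
        // Build a hypothetical record: 4-byte length prefix, 8-byte txid,
        // then the 4-byte payload we actually want to read.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(16);        // length prefix (4 bytes)
        out.writeLong(77L);      // transaction id (8 bytes)
        out.writeInt(42);        // payload

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        // Skip the length prefix and txid in one readable call,
        // rather than folding the 12 into some other expression.
        in.skipBytes(4 + 8);
        System.out.println(in.readInt());  // prints 42
    }
}
```

Note that DataInputStream.skipBytes blocks until the requested count is skipped (or EOF), which is why it is used here instead of the weaker InputStream.skip contract.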

bq. Seems to be by mistake you might have removed [the logVersion JavaDoc]

Fixed, thanks

bq. I feel it can be EDITS_CHECKSUM, where it will be similar to 
FSIMAGE_CHECKSUM, and as Andrew Wang mentioned, it is a good chance to correct 
this typo..

Yes, it seems that I didn't actually correct the typo!  Fixed.

bq. how about change [the comment about opid to opcode]

Yes, we refer to "opcode" in other places, so why not here?  Changed.

> Harden edit log reading code against out of memory errors
> ---------------------------------------------------------
>
>                 Key: HDFS-8965
>                 URL: https://issues.apache.org/jira/browse/HDFS-8965
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-8965.001.patch, HDFS-8965.002.patch, 
> HDFS-8965.003.patch, HDFS-8965.004.patch, HDFS-8965.005.patch
>
>
> We should harden the edit log reading code against out of memory errors.  Now 
> that each op has a length prefix and a checksum, we can validate the checksum 
> before trying to load the Op data.  This should avoid out of memory errors 
> when trying to load garbage data as Op data.
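The hardening described above can be sketched as: read the length prefix, sanity-check it against a cap, read exactly that many raw bytes, verify the checksum over those bytes, and only then deserialize. This is a minimal illustration, not the HDFS patch itself; the method name readOp, the MAX_OP_SIZE cap, the CRC32 algorithm, and the exact field layout (length, body, trailing checksum) are assumptions for the sketch.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

public class ValidateBeforeLoad {
    // Hypothetical cap: reject absurd lengths before allocating anything.
    static final int MAX_OP_SIZE = 32 * 1024 * 1024;

    // Returns the op's raw bytes only after the checksum matches, so garbage
    // data fails fast instead of triggering a huge allocation or a bogus
    // deserialization (and, in the worst case, an OutOfMemoryError).
    static byte[] readOp(DataInputStream in) throws IOException {
        int len = in.readInt();
        if (len < 0 || len > MAX_OP_SIZE) {
            throw new IOException("Invalid op length " + len);
        }
        byte[] body = new byte[len];
        in.readFully(body);
        long expected = in.readLong();
        Checksum crc = new CRC32();
        crc.update(body, 0, len);
        if (crc.getValue() != expected) {
            throw new IOException("Checksum mismatch; refusing to parse op");
        }
        return body;  // safe to hand to the op deserializer now
    }

    public static void main(String[] args) throws Exception {
        // Encode one well-formed record, then read it back through readOp.
        byte[] payload = "mkdir /foo".getBytes("UTF-8");
        Checksum crc = new CRC32();
        crc.update(payload, 0, payload.length);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(payload.length);
        out.write(payload);
        out.writeLong(crc.getValue());

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        System.out.println(new String(readOp(in), "UTF-8"));  // prints mkdir /foo
    }
}
```

The key ordering is that both the length check and the checksum check happen before any attempt to interpret the bytes as an Op.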



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
