[ https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476402#comment-13476402 ]

Binglin Chang commented on HDFS-4046:
-------------------------------------

Thanks for the review.

bq. TestAuditLogs runs fine on my machine with or without your fix.

InputStream.read() returns the first byte of the file. The bytes in the file are 
generated using Random.nextBytes(), so there is a 1/256 chance that the first byte 
is 0, which means the test can occasionally fail.
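
To make the failure mode concrete, here is a minimal, self-contained sketch 
(hypothetical code, not the actual TestAuditLogs test; the "val > 0" assertion is 
an assumption about what the test checks) showing how a file filled by 
Random.nextBytes() trips such a check roughly once in 256 runs:

{code:java}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Random;

public class FirstByteFlakiness {
  public static void main(String[] args) throws IOException {
    Random rand = new Random();
    int failures = 0;
    int trials = 100000;
    for (int i = 0; i < trials; i++) {
      byte[] data = new byte[16];
      rand.nextBytes(data);          // first byte is 0 with probability 1/256
      InputStream in = new ByteArrayInputStream(data);
      int val = in.read();           // returns the first byte as 0..255, or -1 at EOF
      in.close();
      // Hypothetical assertion modeled on the flaky check: "val > 0" wrongly treats
      // a legitimate 0x00 first byte as a failed read; "val >= 0" would not.
      if (!(val > 0)) {
        failures++;
      }
    }
    System.out.printf("failed %d of %d trials (~1/256 expected)%n", failures, trials);
  }
}
{code}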

I will file another JIRA for this.
                
> ChecksumTypeProto uses NULL as enum value which is illegal in C/C++
> -------------------------------------------------------------------
>
>                 Key: HDFS-4046
>                 URL: https://issues.apache.org/jira/browse/HDFS-4046
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Binglin Chang
>            Assignee: Binglin Chang
>            Priority: Minor
>         Attachments: HDFS-4046-ChecksumType-NULL-and-TestAuditLogs-bug.patch, 
> HDFS-4046-ChecksumType-NULL.patch
>
>
> I tried to write a native HDFS client using the protobuf-based protocol. When I 
> generate C++ code from hdfs.proto, the generated file cannot compile, because 
> NULL is an already-defined macro.
> I am considering two solutions:
> 1. Refactor all DataChecksum.Type.NULL references to NONE, which should be fine 
> for all languages, but this may break compatibility.
> 2. Only change the protobuf definition ChecksumTypeProto.NULL to NONE, use the 
> enum integer value (DataChecksum.Type.id) to convert between ChecksumTypeProto 
> and DataChecksum.Type, and make sure the enum integer values match (they 
> currently already do).
> I can make a patch for solution 2.
>  
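
As a rough illustration of solution 2 from the description above, the sketch below 
uses hypothetical stand-in types and method names (not the real generated 
ChecksumTypeProto or Hadoop's DataChecksum.Type, and the ids shown are assumed): it 
maps between the wire enum and the internal enum through their shared integer ids 
rather than their names, so the protobuf value can be renamed to NONE without 
touching the Java-side NULL constant.

{code:java}
// Hypothetical stand-in for the generated protobuf enum; names and ids are assumed.
enum ChecksumTypeProto {
  CHECKSUM_NONE(0), CHECKSUM_CRC32(1), CHECKSUM_CRC32C(2);

  private final int number;
  ChecksumTypeProto(int number) { this.number = number; }
  int getNumber() { return number; }

  static ChecksumTypeProto forNumber(int number) {
    for (ChecksumTypeProto t : values()) {
      if (t.number == number) return t;
    }
    throw new IllegalArgumentException("unknown checksum type: " + number);
  }
}

// Hypothetical stand-in for the internal Java enum; it keeps the NULL name,
// only the .proto side would be renamed.
enum ChecksumType {
  NULL(0), CRC32(1), CRC32C(2);

  final int id;
  ChecksumType(int id) { this.id = id; }

  static ChecksumType fromId(int id) {
    for (ChecksumType t : values()) {
      if (t.id == id) return t;
    }
    throw new IllegalArgumentException("unknown checksum id: " + id);
  }
}

public class ChecksumTypeMapping {
  // Convert by integer value, not by name, so renaming NULL -> NONE on the
  // protobuf side stays compatible as long as the integer ids keep matching.
  static ChecksumTypeProto toProto(ChecksumType type) {
    return ChecksumTypeProto.forNumber(type.id);
  }

  static ChecksumType fromProto(ChecksumTypeProto proto) {
    return ChecksumType.fromId(proto.getNumber());
  }

  public static void main(String[] args) {
    System.out.println(toProto(ChecksumType.NULL));                    // CHECKSUM_NONE
    System.out.println(fromProto(ChecksumTypeProto.CHECKSUM_CRC32C));  // CRC32C
  }
}
{code}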
