[ https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476362#comment-13476362 ]

Kihwal Lee commented on HDFS-4046:
----------------------------------

If we do this, it can break compatibility, since the old proto util methods 
cannot handle the change.  We should get more input on the fix and on ways to 
prevent further mistakes like this.
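
For context on the compatibility concern, here is a minimal, hypothetical sketch 
of a name-based conversion between the wire enum and the Java-side enum. It is 
not the actual PBHelper code; the package and class names are assumptions based 
on the Hadoop tree. If only one side is renamed from NULL to NONE, converting by 
enum name fails at runtime, which is the kind of breakage old proto util methods 
would hit.

    import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.ChecksumTypeProto;
    import org.apache.hadoop.util.DataChecksum;

    public class NameBasedChecksumConversion {
      // Converts by enum name: only works while both enums use identical names.
      public static DataChecksum.Type fromProto(ChecksumTypeProto proto) {
        // Throws IllegalArgumentException once the names diverge (NULL vs NONE).
        return DataChecksum.Type.valueOf(proto.name());
      }

      public static ChecksumTypeProto toProto(DataChecksum.Type type) {
        return ChecksumTypeProto.valueOf(type.name());
      }
    }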
                
> ChecksumTypeProto use NULL as enum value which is illegal in C/C++
> ------------------------------------------------------------------
>
>                 Key: HDFS-4046
>                 URL: https://issues.apache.org/jira/browse/HDFS-4046
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Binglin Chang
>            Assignee: Binglin Chang
>            Priority: Minor
>         Attachments: HDFS-4046-ChecksumType-NULL-and-TestAuditLogs-bug.patch, 
> HDFS-4046-ChecksumType-NULL.patch
>
>
> I tried to write a native HDFS client using the protobuf-based protocol. When I 
> generate C++ code from hdfs.proto, the generated file does not compile, because 
> NULL is an already-defined macro.
> I am thinking of two solutions:
> 1. Refactor all DataChecksum.Type.NULL references to NONE, which should be 
> fine for all languages, but this may break compatibility.
> 2. Only change the protobuf definition ChecksumTypeProto.NULL to NONE, and use 
> the enum integer value (DataChecksum.Type.id) to convert between ChecksumTypeProto 
> and DataChecksum.Type, making sure the enum integer values match (they currently 
> already match).
> I can make a patch for solution 2.
>  
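
A minimal sketch of what solution 2 above could look like, assuming the proto 
enum value is renamed to NONE while DataChecksum.Type keeps NULL. Conversion goes 
through the shared integer values rather than enum names, so the rename never 
touches the wire format. The helper class and method names are hypothetical, not 
the actual patch; only the idea of matching ChecksumTypeProto numbers against 
DataChecksum.Type.id comes from the issue description.

    import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.ChecksumTypeProto;
    import org.apache.hadoop.util.DataChecksum;

    public class IdBasedChecksumConversion {
      public static DataChecksum.Type fromProto(ChecksumTypeProto proto) {
        // getNumber() is the protobuf wire value; it must stay aligned with
        // DataChecksum.Type.id (which the issue says is already the case).
        int id = proto.getNumber();
        for (DataChecksum.Type t : DataChecksum.Type.values()) {
          if (t.id == id) {
            return t;
          }
        }
        throw new IllegalArgumentException("Unknown checksum type id " + id);
      }

      public static ChecksumTypeProto toProto(DataChecksum.Type type) {
        // valueOf(int) on a generated protobuf-java enum looks up by wire value.
        return ChecksumTypeProto.valueOf(type.id);
      }
    }

Because only the integer values travel over the wire, an old client that still 
calls the value NULL and a new one that calls it NONE keep interoperating as long 
as the numbers stay aligned.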

