[ https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Suresh Srinivas updated HDFS-4046:
----------------------------------
       Resolution: Fixed
    Fix Version/s: 2.0.3-alpha
                   3.0.0
     Hadoop Flags: Reviewed
           Status: Resolved  (was: Patch Available)

I committed the patch to trunk and branch-2. Thank you Binglin.

> ChecksumTypeProto use NULL as enum value which is illegal in C/C++
> ------------------------------------------------------------------
>
>                 Key: HDFS-4046
>                 URL: https://issues.apache.org/jira/browse/HDFS-4046
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Binglin Chang
>            Assignee: Binglin Chang
>            Priority: Minor
>             Fix For: 3.0.0, 2.0.3-alpha
>
>         Attachments: HDFS-4046-ChecksumType-NULL-and-TestAuditLogs-bug.patch,
>                      HDFS-4046-ChecksumType-NULL.patch,
>                      HDFS-4096-ChecksumTypeProto-NULL.patch
>
> I tried to write a native HDFS client using the protobuf-based protocol. When
> I generated C++ code from hdfs.proto, the generated file did not compile,
> because NULL is an already defined macro.
> I am considering two solutions:
> 1. refactor all DataChecksum.Type.NULL references to NONE, which should be
> fine for all languages, but this may break compatibility.
> 2. change only the protobuf definition ChecksumTypeProto.NULL to NONE, use
> the enum integer value (DataChecksum.Type.id) to convert between
> ChecksumTypeProto and DataChecksum.Type, and make sure the enum integer
> values match (they currently already do).
> I can make a patch for solution 2.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira