[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13491431#comment-13491431
 ] 

Hudson commented on HDFS-4046:
--

Integrated in Hadoop-Hdfs-trunk #1218 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1218/])
Add HDFS-4046 to Release 2.0.3 section (Revision 1406019)
HDFS-4046. Adding the missed file in revision 1406011 (Revision 1406012)
HDFS-4046. Rename ChecksumTypeProto enum NULL since it is illegal in C/C++. 
Contributed by Binglin Chang. (Revision 1406011)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406019
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406012
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/TestHdfsProtoUtil.java

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406011
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsProtoUtil.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto


 ChecksumTypeProto use NULL as enum value which is illegal in C/C++
 --

 Key: HDFS-4046
 URL: https://issues.apache.org/jira/browse/HDFS-4046
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node, name-node
Affects Versions: 2.0.0-alpha
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HDFS-4046-ChecksumType-NULL-and-TestAuditLogs-bug.patch, 
 HDFS-4046-ChecksumType-NULL.patch, HDFS-4096-ChecksumTypeProto-NULL.patch


 I tried to write a native hdfs client using the protobuf based protocol. When I 
 generate C++ code from hdfs.proto, the generated file cannot compile, 
 because NULL is an already defined macro.
 I am thinking of two solutions:
 1. Refactor all DataChecksum.Type.NULL references to NONE, which should be 
 fine for all languages, but this may break compatibility.
 2. Only change the protobuf definition ChecksumTypeProto.NULL to NONE, and use 
 the enum integer value (DataChecksum.Type.id) to convert between ChecksumTypeProto 
 and DataChecksum.Type, making sure the enum integer values match (they currently 
 already match).
 I can make a patch for solution 2.
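
To make the clash concrete, here is a minimal, hypothetical C++ sketch (hand-written, 
not actual protoc output) of why an enumerator named NULL fails to compile, and how 
the contextual prefix discussed later in this thread (CHECKSUM_NULL, CHECKSUM_CRC32, 
CHECKSUM_CRC32C) sidesteps the macro:

#include <cstddef>   // defines the NULL macro (commonly 0, __null, or nullptr)

// A generated header containing "enum ChecksumTypeProto { NULL = 0, ... };" breaks
// because the preprocessor substitutes the NULL macro before compilation, e.g.
//   enum ChecksumTypeProto { 0 = 0, CRC32 = 1, CRC32C = 2 };   // syntax error
// A prefixed enumerator name has no such clash:
enum ChecksumTypeProto {
  CHECKSUM_NULL = 0,
  CHECKSUM_CRC32 = 1,
  CHECKSUM_CRC32C = 2
};

int main() {
  ChecksumTypeProto t = CHECKSUM_NULL;
  return (t == CHECKSUM_NULL) ? 0 : 1;
}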
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13491462#comment-13491462
 ] 

Hudson commented on HDFS-4046:
--

Integrated in Hadoop-Mapreduce-trunk #1248 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1248/])
Add HDFS-4046 to Release 2.0.3 section (Revision 1406019)
HDFS-4046. Adding the missed file in revision 1406011 (Revision 1406012)
HDFS-4046. Rename ChecksumTypeProto enum NULL since it is illegal in C/C++. 
Contributed by Binglin Chang. (Revision 1406011)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406019
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406012
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/TestHdfsProtoUtil.java

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406011
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsProtoUtil.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto




[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-11-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13491052#comment-13491052
 ] 

Hudson commented on HDFS-4046:
--

Integrated in Hadoop-trunk-Commit #2958 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/2958/])
HDFS-4046. Adding the missed file in revision 1406011 (Revision 1406012)
HDFS-4046. Rename ChecksumTypeProto enum NULL since it is illegal in C/C++. 
Contributed by Binglin Chang. (Revision 1406011)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406012
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/TestHdfsProtoUtil.java

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406011
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsProtoUtil.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto




[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-11-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13491091#comment-13491091
 ] 

Hudson commented on HDFS-4046:
--

Integrated in Hadoop-trunk-Commit #2959 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/2959/])
Add HDFS-4046 to Release 2.0.3 section (Revision 1406019)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406019
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt




[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-31 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487881#comment-13487881
 ] 

Kihwal Lee commented on HDFS-4046:
--

+1 (nb) Looks good to me. The namespace declaration was added in HADOOP-8985 
and HDFS-4121



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-25 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483944#comment-13483944
 ] 

Binglin Chang commented on HDFS-4046:
-

NULL is not an identifier but a macro; macros cannot be scoped by namespaces in 
C/C++.
And yes, since SUCCESS is not a predefined macro, SUCCESS in two different 
enums can be scoped by namespaces or classes.
Currently, the proto files use java_package= to specify the namespace only 
for Java, but the namespaces for all other languages are not specified.
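
A small C++ sketch (illustrative only, not Hadoop code) of the distinction: ordinary 
enumerators such as SUCCESS can be disambiguated by namespaces, while a macro like 
NULL is expanded by the preprocessor before any scoping applies:

#include <cstddef>   // brings in the NULL macro

namespace rpc  { enum RpcStatus { SUCCESS = 0, ERROR = 1, FATAL = 2 }; }
namespace data { enum Status    { SUCCESS = 0, ERROR = 1, ERROR_CHECKSUM = 2 }; }

int main() {
  // The duplicated enumerator names are fine once each enum lives in its own
  // namespace; they are referenced as rpc::SUCCESS and data::SUCCESS.
  int ok = (rpc::SUCCESS == 0) && (data::ERROR_CHECKSUM == 2);
  // By contrast, "enum ChecksumType { NULL = 0 };" would still fail here,
  // because the preprocessor rewrites NULL regardless of any enclosing namespace.
  return ok ? 0 : 1;
}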



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-25 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484122#comment-13484122
 ] 

Suresh Srinivas commented on HDFS-4046:
---

In that case let's change NULL by adding a contextual prefix. Let's also make 
changes to add namespace scope for C++ to the .proto files. 



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-25 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484144#comment-13484144
 ] 

Suresh Srinivas commented on HDFS-4046:
---

bq. In that case let's change NULL by adding a contextual prefix.
Your proposal of using NONE also works. I think the patch that you have 
attached should be fine to address NULL as an enum value. I will review it.

For the namespace scoping for C++, we could open a separate jira.



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-25 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484195#comment-13484195
 ] 

Binglin Chang commented on HDFS-4046:
-

OK, I think we agree on the prefix approach, so I will post another patch that 
uses the contextual prefix approach.




[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484313#comment-13484313
 ] 

Hadoop QA commented on HDFS-4046:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12550792/HDFS-4096-ChecksumTypeProto-NULL.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3399//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3399//console

This message is automatically generated.



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-24 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483913#comment-13483913
 ] 

Suresh Srinivas commented on HDFS-4046:
---

Given how thick the DFSClient code is and how much logic keeps getting added 
to it, a C++ client is a big undertaking. Maintaining it amidst the changes that 
continue to happen in the Java client is another pain.

Binglin, I might have misunderstood the issue - quick question. The problem you 
are highlighting is the enum name clashing with the C++ predefined identifier NULL, 
right? The same enum member name (SUCCESS in your previous comment) in two 
different enums is not an issue, right, as they can be scoped by namespaces in 
C++?



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-23 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13482172#comment-13482172
 ] 

Binglin Chang commented on HDFS-4046:
-

bq. I would suggest adding the enum name or prefix based on the context at the 
beginning like say STATUS_SUCCESS etc.
I prefer the prefix solution, so how about first changing the ChecksumTypeProto 
values to CHECKSUM_NULL, CHECKSUM_CRC32, CHECKSUM_CRC32C? This change cannot be 
split into 2 patches (common and hdfs).

bq. are you planning to contribute the C library based on this work to Apache 
Hadoop?
I have no problem contributing the code, but I think it is too early to have 
any expectation that I will finally write the library... 
I am just doing some investigation into how much effort it will be, which seems 
like a lot. And it seems someone already did it? (named libhdfs3 in HDFS-2656)




[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-23 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13482326#comment-13482326
 ] 

Todd Lipcon commented on HDFS-4046:
---

I have an early implementation of Hadoop RPC using Boost ASIO that I could 
probably throw on github if people find it a useful starting point. Like 
Binglin said, building a full HDFS client is a much larger project, so I hadn't 
bothered to push it anywhere yet. Perhaps we should start a new JIRA to discuss 
a true native C++ client.



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-22 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481431#comment-13481431
 ] 

Kihwal Lee commented on HDFS-4046:
--

bq. Suresh: You mean renaming it to NONE from the existing name NULL? Why does it 
break compatibility?
It shouldn't, as long as the enum ordering does not change. 

bq. Binglin: And there is a class named ChecksumNull that may also need to change 
given that NULL is renamed to NONE?
This is not critical, but I think it is okay to update it for consistency.



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481602#comment-13481602
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4046:
--

Would NONE be a reserved word in some other languages?  How about adding a 
prefix such as PB_NULL or HADOOP_NULL?  Then the same prefix can be used for 
other values.



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-22 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481615#comment-13481615
 ] 

Suresh Srinivas commented on HDFS-4046:
---

bq. It shouldn't as long as the enum ordering does not change.
Kihwal, my question was meant to say the change is backward compatible :-)

I would suggest adding the enum name or a contextual prefix at the beginning, 
like STATUS_SUCCESS, etc. 



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-22 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481617#comment-13481617
 ] 

Suresh Srinivas commented on HDFS-4046:
---

@Binglin, are you planning to contribute the C library based on this work to 
Apache Hadoop?



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-17 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13478423#comment-13478423
 ] 

Suresh Srinivas commented on HDFS-4046:
---

bq. 1. refactor all DataChecksum.Type.NULL references to NONE, which should be 
fine for all languages, but this may breaking compatibility.
You mean renaming it to NONE from the existing name NULL? Why does it break 
compatibility?



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-17 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13478661#comment-13478661
 ] 

Binglin Chang commented on HDFS-4046:
-

I thought DataChecksum.Type.NULL was used by many other classes, but after a quick 
search I found 6 places where NULL is used, in common and hdfs, which is not so 
many, so I think it's fine. 
And there is a class named ChecksumNull that may also need to change given that 
NULL is renamed to NONE? 




[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476226#comment-13476226
 ] 

Kihwal Lee commented on HDFS-4046:
--

Here are some comments:
* There is one more use of NULL in {{datatransfer.proto}}. {{ChecksumType}} is 
redundant, but ended up that way for compatibility reasons. I see you already 
modified {{DataTransferProtoUtil}}.
* TestAuditLogs runs fine on my machine with or without your fix. In 
{{DFSTestUtil.Builder}}, the minimum file size is set to 1 by default, so the 
existing conditions seem correct. 

BTW, if you find a bug in an unrelated unit test, it is better to file a 
separate jira and post the patch there. It is easier to track changes that way. 



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476362#comment-13476362
 ] 

Kihwal Lee commented on HDFS-4046:
--

If we do this, it can break compatibility, since the old proto util methods 
cannot deal with the changes. We should get more input on the fix and on ways to 
prevent further mistakes like this.



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-15 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476402#comment-13476402
 ] 

Binglin Chang commented on HDFS-4046:
-

Thanks for the review.

bq. TestAuditLogs runs fine on my machine with or without your fix.

InputStream.read() returns the first byte of the file; the bytes in the file are 
generated using Random.nextBytes(), so there is a 1/256 chance that the first byte 
is 0, and the test may occasionally fail.

I will file another JIRA for this.



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476408#comment-13476408
 ] 

Kihwal Lee commented on HDFS-4046:
--

You are right. The content can be 0. 



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-15 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476440#comment-13476440
 ] 

Binglin Chang commented on HDFS-4046:
-

I filed HDFS-4055 for the TestAuditLogs bug.
I think the current patch is still backward compatible, because the wire format is 
the same as before, so old versions of ProtoUtil, PBHelper, etc. can still 
communicate with the new version, because the enum integer values are still the same.



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-15 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476468#comment-13476468
 ] 

Binglin Chang commented on HDFS-4046:
-

bq. We should get more input on the fix and ways to prevent further mistakes 
like this.
BTW, it seems that the proto files are written in a Java flavor. Many enum names 
are not properly named for language compatibility and may cause future problems. 
For example:

decster:~/projects/hadoop-trunk grep SUCCESS `find . | grep \.proto$`
./hadoop-common-project/hadoop-common/src/main/proto/RpcPayloadHeader.proto:  SUCCESS = 0;  // RPC succeeded
./hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto:SUCCESS = 0;
./hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto:  SUCCESS = 0;

If we don't have namespace declarations in the proto files (which is the case for 
now), the SUCCESS enum values are redefined. In fact, many enum names are just 
too short and ambiguous, such as:

RpcPayloadHeader.proto:

enum RpcStatusProto {
  SUCCESS = 0;  // RPC succeeded
  ERROR = 1;    // RPC failed
  FATAL = 2;    // Fatal error - connection is closed
}

datatransfer.proto:

enum Status {
  SUCCESS = 0;
  ERROR = 1;
  ERROR_CHECKSUM = 2;
  ERROR_INVALID = 3;
  ERROR_EXISTS = 4;
  ERROR_ACCESS_TOKEN = 5;
  CHECKSUM_OK = 6;
}
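
For illustration, a rough sketch (hand-written approximation, not actual protoc 
output) of what a namespace declaration buys in C++: protoc maps a .proto "package" 
statement to a C++ namespace, so generated enumerators no longer land in the global 
scope. Assuming a hypothetical "package hadoop.hdfs;" were added to datatransfer.proto:

// Without a package statement, the unscoped enum's values (SUCCESS, ERROR, ...)
// are injected into the enclosing global namespace and can collide with values
// from other .proto files. With "package hadoop.hdfs;" the generated code is
// wrapped, roughly like this:
namespace hadoop {
namespace hdfs {
  enum Status {
    SUCCESS = 0,
    ERROR = 1,
    ERROR_CHECKSUM = 2
  };
}  // namespace hdfs
}  // namespace hadoop

int main() {
  // Callers refer to the scoped name, avoiding cross-file collisions.
  return (hadoop::hdfs::SUCCESS == 0) ? 0 : 1;
}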







[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475547#comment-13475547
 ] 

Hadoop QA commented on HDFS-4046:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12549006/HDFS-4046-ChecksumType-NULL-and-TestAuditLogs-bug.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3328//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3328//console

This message is automatically generated.



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-12 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475123#comment-13475123
 ] 

Kihwal Lee commented on HDFS-4046:
--

Binglin, thanks for finding this. I will review the patch once you upload one.



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475508#comment-13475508
 ] 

Hadoop QA commented on HDFS-4046:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12548992/HDFS-4046-ChecksumType-NULL.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestAuditLogs

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3326//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3326//console

This message is automatically generated.



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-12 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475537#comment-13475537
 ] 

Binglin Chang commented on HDFS-4046:
-

Oops, looks like there is a bug in 
org.apache.hadoop.hdfs.server.namenode.TestAuditLogs...
Should I fix the bug in this patch, or just file another JIRA?



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-12 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475540#comment-13475540
 ] 

Binglin Chang commented on HDFS-4046:
-

Attached a new patch fixing the TestAuditLogs bug:
InputStream.read() returns an int value >= 0, so
assertTrue("failed to read from file", val > 0);
should change to:
assertTrue("failed to read from file", val >= 0);
