[jira] [Resolved] (HDFS-5245) shouldRetry() in WebHDFSFileSystem generates excessive warnings

2013-09-26 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao resolved HDFS-5245.
-

   Resolution: Fixed
Fix Version/s: 1.3.0
 Hadoop Flags: Reviewed

Thanks [~wheat9]! I've committed this to branch-1.

 shouldRetry() in WebHDFSFileSystem generates excessive warnings
 ---

 Key: HDFS-5245
 URL: https://issues.apache.org/jira/browse/HDFS-5245
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 1.2.0, 1.2.1
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 1.3.0

 Attachments: HDFS-5245.000.patch, HDFS-5245.001.patch


 In branch-1, shouldRetry() in WebHDFSFileSystem always prints out the 
 original exception when the retry policy decides not to retry.
 While informative, this behavior quickly fills up the log.
 The problem does not exist in branch-2, since shouldRetry() returns an 
 enum, so the upper layer can decide whether to print out the exception.
 In branch-1 the code should silence these warnings.
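The branch-2 approach mentioned above can be sketched in isolation; this is an illustrative stand-alone example, not the actual WebHdfsFileSystem/RetryPolicy API (all names below are assumptions for illustration):

```java
// Sketch: shouldRetry() returns a decision enum instead of logging,
// leaving it to the caller to decide whether (and when) the exception
// is worth a log line. Names are illustrative, not Hadoop's actual API.
public class RetrySketch {
  public enum RetryAction { RETRY, FAIL }

  // Pure decision: no side effects, no logging.
  public static RetryAction shouldRetry(Exception e, int attempt, int maxAttempts) {
    return attempt < maxAttempts ? RetryAction.RETRY : RetryAction.FAIL;
  }

  public static void main(String[] args) {
    Exception e = new java.io.IOException("connection reset");
    for (int attempt = 1; ; attempt++) {
      if (shouldRetry(e, attempt, 3) == RetryAction.FAIL) {
        // Only the upper layer logs, and only once, when giving up.
        System.out.println("giving up after attempt " + attempt + ": " + e);
        break;
      }
    }
  }
}
```

Because the decision and the logging are decoupled, the upper layer can suppress the warning entirely, log it once, or rate-limit it.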

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5245) shouldRetry() in WebHDFSFileSystem generates excessive warnings

2013-09-26 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5245:


Priority: Minor  (was: Major)

 shouldRetry() in WebHDFSFileSystem generates excessive warnings
 ---

 Key: HDFS-5245
 URL: https://issues.apache.org/jira/browse/HDFS-5245
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 1.2.0, 1.2.1
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor
 Fix For: 1.3.0

 Attachments: HDFS-5245.000.patch, HDFS-5245.001.patch





[jira] [Updated] (HDFS-3970) BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead of DataStorage to read prev version file.

2013-09-26 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-3970:


Assignee: Vinay  (was: Andrew Wang)

 BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead 
 of DataStorage to read prev version file.
 ---

 Key: HDFS-3970
 URL: https://issues.apache.org/jira/browse/HDFS-3970
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Vinay
Assignee: Vinay
 Fix For: 2.0.3-alpha

 Attachments: hdfs-3970-1.patch, HDFS-3970.patch


 {code}
 // read attributes out of the VERSION file of previous directory
 DataStorage prevInfo = new DataStorage();
 prevInfo.readPreviousVersionProperties(bpSd);
 {code}
 In the above code snippet a BlockPoolSliceStorage instance should be used. 
 Otherwise rollback fails with the 'storageType' property reported as 
 missing, since that property is not present in the initial VERSION file.
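The failure mode can be illustrated outside Hadoop: a reader that insists on a property the file never contained will reject a valid file. The classes below are illustrative stand-ins for DataStorage and BlockPoolSliceStorage, not Hadoop code:

```java
import java.util.Properties;

public class VersionReaderSketch {
  // Stand-in for DataStorage: expects a datanode-level VERSION file,
  // which carries a storageType property.
  static void readAsDataStorage(Properties p) {
    if (p.getProperty("storageType") == null) {
      throw new IllegalStateException("storageType missing");
    }
  }

  // Stand-in for BlockPoolSliceStorage: block-pool VERSION files never
  // carried storageType, so this reader does not require it.
  static void readAsBlockPoolSliceStorage(Properties p) {
    // validates only block-pool level properties; storageType not needed
  }

  public static void main(String[] args) {
    Properties bpVersion = new Properties();  // a block-pool VERSION file
    bpVersion.setProperty("namespaceID", "12345");

    readAsBlockPoolSliceStorage(bpVersion);   // matches the file's schema
    try {
      readAsDataStorage(bpVersion);           // the rollback path before the fix
    } catch (IllegalStateException e) {
      System.out.println("rollback would fail: " + e.getMessage());
    }
  }
}
```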



[jira] [Commented] (HDFS-4988) Datanode must support all the volumes as individual storages

2013-09-26 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778521#comment-13778521
 ] 

Junping Du commented on HDFS-4988:
--

Hi Arpit, as I just mentioned, the following newly added code in 
setFieldsFromProperties() in DataStorage.java actually removes the 
verification of the storageID (now the DatanodeUUID) read from the VERSION 
file. Do we think this verification is unnecessary now?

{code}
+// Update the datanode UUID if present.
+if (props.getProperty("datanodeUuid") != null) {
+  String dnUuid = props.getProperty("datanodeUuid");
+  setDatanodeUuid(dnUuid);
+
+  if (getDatanodeUuid() != null &&
+      getDatanodeUuid().compareTo(dnUuid) != 0) {
+    throw new InconsistentFSStateException(sd.getRoot(),
+        "Root " + sd.getRoot() + ": DatanodeUuid=" + dnUuid +
+        ", does not match " + datanodeUuid + " from other" +
+        " StorageDirectory.");
+  }
{code}

 Datanode must support all the volumes as individual storages
 

 Key: HDFS-4988
 URL: https://issues.apache.org/jira/browse/HDFS-4988
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Suresh Srinivas
Assignee: Arpit Agarwal
 Attachments: HDFS-4988.01.patch, HDFS-4988.02.patch, 
 HDFS-4988.05.patch, HDFS-4988.06.patch, HDFS-4988.07.patch, HDFS-4988.08.patch


 Currently all the volumes on a datanode are reported as a single storage. 
 This change proposes reporting them as individual storages. This requires:
 # A unique storage ID for each storage
 #* This needs to be generated during formatting
 # An option to allow existing disks to be reported as a single storage 
 unit for backward compatibility.
 # Functionality to split existing volumes that were reported as a single 
 storage unit into individual storage units.
 # -Configuration must allow a storage type attribute for each storage unit. 
 (Now HDFS-5000)-
 # Block reports must be sent on a per-storage basis. In some cases (such as 
 a memory tier) block reports may need to be sent more frequently, so the 
 block reporting period must be configurable per storage type.
 My proposal is for new clusters to configure volumes as separate storage 
 units by default. Let's discuss.



[jira] [Commented] (HDFS-4988) Datanode must support all the volumes as individual storages

2013-09-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778525#comment-13778525
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4988:
--

Arpit, thanks for the quick response.  I have no problem if you want to fix 
some of the issues in other JIRAs.  Could you add TODO comments in the code if 
we don't have it yet?

 Datanode must support all the volumes as individual storages
 

 Key: HDFS-4988
 URL: https://issues.apache.org/jira/browse/HDFS-4988
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Suresh Srinivas
Assignee: Arpit Agarwal
 Attachments: HDFS-4988.01.patch, HDFS-4988.02.patch, 
 HDFS-4988.05.patch, HDFS-4988.06.patch, HDFS-4988.07.patch, HDFS-4988.08.patch





[jira] [Commented] (HDFS-5225) datanode keeps logging the same 'is no longer in the dataset' message over and over again

2013-09-26 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778534#comment-13778534
 ] 

Junping Du commented on HDFS-5225:
--

Hi [~ozawa], thanks for the patch! Besides fixing the test failures, it would 
be nice if you could attach the log from your reproduction (only the relevant 
part) so that Roman can judge whether it is the same bug.

 datanode keeps logging the same 'is no longer in the dataset' message over 
 and over again
 -

 Key: HDFS-5225
 URL: https://issues.apache.org/jira/browse/HDFS-5225
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.1.1-beta
Reporter: Roman Shaposhnik
Assignee: Tsuyoshi OZAWA
Priority: Blocker
 Attachments: HDFS-5225.1.patch, HDFS-5225.2.patch, 
 HDFS-5225-reproduce.1.txt


 I was running the usual Bigtop testing on 2.1.1-beta RC1 with the following 
 configuration: 4 nodes fully distributed cluster with security on.
 All of a sudden my DN ate up all of the space in /var/log logging the 
 following message repeatedly:
 {noformat}
 2013-09-18 20:51:12,046 INFO 
 org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: 
 BP-1884637155-10.224.158.152-1379524544853:blk_1073742189_1369 is no longer 
 in the dataset
 {noformat}
 It wouldn't respond to jstack, and jstack -F ended up being useless.
 Here's what I was able to find in the NameNode logs regarding this block ID:
 {noformat}
 fgrep -rI 'blk_1073742189' hadoop-hdfs-namenode-ip-10-224-158-152.log
 2013-09-18 18:03:16,972 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
 allocateBlock: 
 /user/jenkins/testAppendInputWedSep18180222UTC2013/test4.fileWedSep18180222UTC2013._COPYING_.
  BP-1884637155-10.224.158.152-1379524544853 
 blk_1073742189_1369{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
 replicas=[ReplicaUnderConstruction[10.83.107.80:1004|RBW], 
 ReplicaUnderConstruction[10.34.74.206:1004|RBW], 
 ReplicaUnderConstruction[10.224.158.152:1004|RBW]]}
 2013-09-18 18:03:17,222 INFO BlockStateChange: BLOCK* addStoredBlock: 
 blockMap updated: 10.224.158.152:1004 is added to 
 blk_1073742189_1369{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
 replicas=[ReplicaUnderConstruction[10.83.107.80:1004|RBW], 
 ReplicaUnderConstruction[10.34.74.206:1004|RBW], 
 ReplicaUnderConstruction[10.224.158.152:1004|RBW]]} size 0
 2013-09-18 18:03:17,222 INFO BlockStateChange: BLOCK* addStoredBlock: 
 blockMap updated: 10.34.74.206:1004 is added to 
 blk_1073742189_1369{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
 replicas=[ReplicaUnderConstruction[10.83.107.80:1004|RBW], 
 ReplicaUnderConstruction[10.34.74.206:1004|RBW], 
 ReplicaUnderConstruction[10.224.158.152:1004|RBW]]} size 0
 2013-09-18 18:03:17,224 INFO BlockStateChange: BLOCK* addStoredBlock: 
 blockMap updated: 10.83.107.80:1004 is added to 
 blk_1073742189_1369{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
 replicas=[ReplicaUnderConstruction[10.83.107.80:1004|RBW], 
 ReplicaUnderConstruction[10.34.74.206:1004|RBW], 
 ReplicaUnderConstruction[10.224.158.152:1004|RBW]]} size 0
 2013-09-18 18:03:17,899 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
 updatePipeline(block=BP-1884637155-10.224.158.152-1379524544853:blk_1073742189_1369,
  newGenerationStamp=1370, newLength=1048576, newNodes=[10.83.107.80:1004, 
 10.34.74.206:1004, 10.224.158.152:1004], 
 clientName=DFSClient_NONMAPREDUCE_-450304083_1)
 2013-09-18 18:03:17,904 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
 updatePipeline(BP-1884637155-10.224.158.152-1379524544853:blk_1073742189_1369)
  successfully to 
 BP-1884637155-10.224.158.152-1379524544853:blk_1073742189_1370
 2013-09-18 18:03:18,540 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
 updatePipeline(block=BP-1884637155-10.224.158.152-1379524544853:blk_1073742189_1370,
  newGenerationStamp=1371, newLength=2097152, newNodes=[10.83.107.80:1004, 
 10.34.74.206:1004, 10.224.158.152:1004], 
 clientName=DFSClient_NONMAPREDUCE_-450304083_1)
 2013-09-18 18:03:18,548 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
 updatePipeline(BP-1884637155-10.224.158.152-1379524544853:blk_1073742189_1370)
  successfully to 
 BP-1884637155-10.224.158.152-1379524544853:blk_1073742189_1371
 2013-09-18 18:03:26,150 INFO BlockStateChange: BLOCK* addToInvalidates: 
 blk_1073742189_1371 10.83.107.80:1004 10.34.74.206:1004 10.224.158.152:1004 
 2013-09-18 18:03:26,847 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
 InvalidateBlocks: ask 10.34.74.206:1004 to delete [blk_1073742178_1359, 
 blk_1073742183_1362, blk_1073742184_1363, blk_1073742186_1366, 
 blk_1073742188_1368, blk_1073742189_1371]
 2013-09-18 18:03:29,848 INFO org.apache.hadoop.hdfs.StateChange: 

[jira] [Commented] (HDFS-4988) Datanode must support all the volumes as individual storages

2013-09-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778537#comment-13778537
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4988:
--

{quote}
 In new FsVolumeSpi.getStorageID() method, do you want to call it 
 getStorageUuid()?

Yes I would like to rename it when we rename it everywhere else. I wanted it to 
be consistent as much as possible for now. I filed HDFS-5264 for this.
{quote}
Some parts of the code already use xxxUuid while other parts use xxxID, so I 
don't see how to make them consistent at the moment. Moreover, suppose xxxID 
is a bad name; I don't see the advantage of making things consistently wrong. 
So, for the newly added datanodeID/storageID, I suggest using 
datanodeUuid/storageUuid now instead of renaming them later. It would also 
reduce the size of the future patch in HDFS-5264. Reviewing a large rename 
patch is not fun.

It is not a big deal to me.  If you insist on this approach, I respect your 
decision.

 Datanode must support all the volumes as individual storages
 

 Key: HDFS-4988
 URL: https://issues.apache.org/jira/browse/HDFS-4988
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Suresh Srinivas
Assignee: Arpit Agarwal
 Attachments: HDFS-4988.01.patch, HDFS-4988.02.patch, 
 HDFS-4988.05.patch, HDFS-4988.06.patch, HDFS-4988.07.patch, HDFS-4988.08.patch





[jira] [Commented] (HDFS-5041) Add the time of last heartbeat to dead server Web UI

2013-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778650#comment-13778650
 ] 

Hudson commented on HDFS-5041:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #344 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/344/])
HDFS-5041. Add the time of last heartbeat to dead server Web UI. Contributed by 
Shinichi Yamashita (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526368)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java


 Add the time of last heartbeat to dead server Web UI
 

 Key: HDFS-5041
 URL: https://issues.apache.org/jira/browse/HDFS-5041
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Ted Yu
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-5041.patch, NameNode-dfsnodelist-dead.png


 On the Live Server page, there is a column 'Last Contact'.
 A similar column can be added to the dead server page, showing when the 
 last heartbeat came from the respective dead node.



[jira] [Commented] (HDFS-5246) Make Hadoop nfs server port and mount daemon port configurable

2013-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778654#comment-13778654
 ] 

Hudson commented on HDFS-5246:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #344 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/344/])
HDFS-5246. Make Hadoop nfs server port and mount daemon port configurable. 
Contributed by Jinghui Wang (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526316)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/core-site.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/Nfs3Base.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/resources
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/resources/core-site.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Make Hadoop nfs server port and mount daemon port configurable
 --

 Key: HDFS-5246
 URL: https://issues.apache.org/jira/browse/HDFS-5246
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.1.0-beta
 Environment: Red Hat Enterprise 6 with Sun Java 1.7 and IBM Java 1.6
Reporter: Jinghui Wang
Assignee: Jinghui Wang
 Fix For: 3.0.0, 2.1.2-beta

 Attachments: HDFS-5246-2.patch, HDFS-5246-3.patch, HDFS-5246.patch


 Hadoop nfs binds the nfs server to port 2049, which is also the default 
 port that Linux nfs uses. If Linux nfs is already running on the machine, 
 Hadoop nfs will not be able to start.
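 As a sketch, the new configuration might look roughly as follows in 
 core-site.xml; the property names here are assumptions inferred from the 
 patch description, and the core-default.xml in the committed patch is 
 authoritative:

```xml
<!-- Illustrative core-site.xml fragment; property names are assumptions,
     not confirmed against the committed patch. -->
<property>
  <name>nfs3.server.port</name>
  <!-- any free port; avoids clashing with the Linux nfsd default of 2049 -->
  <value>2079</value>
</property>
<property>
  <name>nfs3.mountd.port</name>
  <value>4272</value>
</property>
```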



[jira] [Commented] (HDFS-5246) Make Hadoop nfs server port and mount daemon port configurable

2013-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778757#comment-13778757
 ] 

Hudson commented on HDFS-5246:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1560 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1560/])
HDFS-5246. Make Hadoop nfs server port and mount daemon port configurable. 
Contributed by Jinghui Wang (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526316)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/core-site.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/Nfs3Base.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/resources
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/resources/core-site.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Make Hadoop nfs server port and mount daemon port configurable
 --

 Key: HDFS-5246
 URL: https://issues.apache.org/jira/browse/HDFS-5246
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.1.0-beta
 Environment: Red Hat Enterprise 6 with Sun Java 1.7 and IBM Java 1.6
Reporter: Jinghui Wang
Assignee: Jinghui Wang
 Fix For: 3.0.0, 2.1.2-beta

 Attachments: HDFS-5246-2.patch, HDFS-5246-3.patch, HDFS-5246.patch





[jira] [Commented] (HDFS-5041) Add the time of last heartbeat to dead server Web UI

2013-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778753#comment-13778753
 ] 

Hudson commented on HDFS-5041:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1560 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1560/])
HDFS-5041. Add the time of last heartbeat to dead server Web UI. Contributed by 
Shinichi Yamashita (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526368)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java


 Add the time of last heartbeat to dead server Web UI
 

 Key: HDFS-5041
 URL: https://issues.apache.org/jira/browse/HDFS-5041
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Ted Yu
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-5041.patch, NameNode-dfsnodelist-dead.png





[jira] [Commented] (HDFS-5041) Add the time of last heartbeat to dead server Web UI

2013-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778767#comment-13778767
 ] 

Hudson commented on HDFS-5041:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1534 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1534/])
HDFS-5041. Add the time of last heartbeat to dead server Web UI. Contributed by 
Shinichi Yamashita (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526368)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java


 Add the time of last heartbeat to dead server Web UI
 

 Key: HDFS-5041
 URL: https://issues.apache.org/jira/browse/HDFS-5041
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Ted Yu
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-5041.patch, NameNode-dfsnodelist-dead.png





[jira] [Commented] (HDFS-5246) Make Hadoop nfs server port and mount daemon port configurable

2013-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778771#comment-13778771
 ] 

Hudson commented on HDFS-5246:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1534 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1534/])
HDFS-5246. Make Hadoop nfs server port and mount daemon port configurable. 
Contributed by Jinghui Wang (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526316)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/core-site.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/Nfs3Base.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/resources
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/resources/core-site.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Make Hadoop nfs server port and mount daemon port configurable
 --

 Key: HDFS-5246
 URL: https://issues.apache.org/jira/browse/HDFS-5246
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.1.0-beta
 Environment: Red Hat Enterprise 6 with Sun Java 1.7 and IBM Java 1.6
Reporter: Jinghui Wang
Assignee: Jinghui Wang
 Fix For: 3.0.0, 2.1.2-beta

 Attachments: HDFS-5246-2.patch, HDFS-5246-3.patch, HDFS-5246.patch





[jira] [Commented] (HDFS-5225) datanode keeps logging the same 'is no longer in the dataset' message over and over again

2013-09-26 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778879#comment-13778879
 ] 

Kihwal Lee commented on HDFS-5225:
--

I am seeing cases of repeated logging of 'Verification succeeded for xxx' for 
the same block. Since it loops, the disk fills up very quickly. 

These nodes were told to write a replica, which was deleted minutes later. 
Minutes after that, the same block with an identical gen stamp was 
transferred back to the node. All of these operations were successful. In the 
next block scanner scan period, however, the thread gets into a seemingly 
infinite loop of verifying this block replica, after verifying some number of 
blocks.



 datanode keeps logging the same 'is no longer in the dataset' message over 
 and over again
 -

 Key: HDFS-5225
 URL: https://issues.apache.org/jira/browse/HDFS-5225
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.1.1-beta
Reporter: Roman Shaposhnik
Assignee: Tsuyoshi OZAWA
Priority: Blocker
 Attachments: HDFS-5225.1.patch, HDFS-5225.2.patch, 
 HDFS-5225-reproduce.1.txt



[jira] [Updated] (HDFS-5258) Skip tests in TestHDFSCLI that are not applicable on Windows.

2013-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5258:


Target Version/s: 3.0.0, 2.1.2-beta  (was: 3.0.0, 2.3.0)

 Skip tests in TestHDFSCLI that are not applicable on Windows.
 -

 Key: HDFS-5258
 URL: https://issues.apache.org/jira/browse/HDFS-5258
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chuan Liu
Priority: Minor
 Attachments: HDFS-5258.1.patch


 Use the new {{&lt;windows&gt;false&lt;/windows&gt;}} flag of {{CLITestHelper}} to skip 
 tests that aren't applicable on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5186) TestFileJournalManager fails on Windows due to file handle leaks

2013-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5186:


Target Version/s: 3.0.0, 2.1.2-beta  (was: 3.0.0, 2.3.0)

 TestFileJournalManager fails on Windows due to file handle leaks
 

 Key: HDFS-5186
 URL: https://issues.apache.org/jira/browse/HDFS-5186
 Project: Hadoop HDFS
  Issue Type: Test
  Components: namenode, test
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: HDFS-5186.patch


 We have two unit test cases failing in this class on Windows due to a file 
 handle leak in the {{getNumberOfTransactions()}} method.
 {noformat}
 Running org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager
 Tests run: 13, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 1.693 sec <<< FAILURE!
 testReadFromMiddleOfEditLog(org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager)
   Time elapsed: 12 sec  <<< ERROR!
 java.io.IOException: Cannot remove current directory: 
 E:\Monarch\project\hadoop-monarch\hadoop-hdfs-project\hadoop-hdfs\target\test\data\filejournaltest2\current
   at 
 org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:299)
   at 
 org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:523)
   at 
 org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:544)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestEditLog.setupEdits(TestEditLog.java:1078)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestEditLog.setupEdits(TestEditLog.java:1133)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager.testReadFromMiddleOfEditLog(TestFileJournalManager.java:436)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
   at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
 testExcludeInProgressStreams(org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager)
   Time elapsed: 10 sec  <<< ERROR!
 java.io.IOException: Cannot remove current directory: 
 E:\Monarch\project\hadoop-monarch\hadoop-hdfs-project\hadoop-hdfs\target\test\data\filejournaltest2\current
   at 
 org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:299)
   at 
 

[jira] [Updated] (HDFS-5258) Skip tests in TestHDFSCLI that are not applicable on Windows.

2013-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5258:


   Resolution: Fixed
Fix Version/s: 2.1.2-beta
   3.0.0
   Status: Resolved  (was: Patch Available)

I have committed this to trunk, branch-2, and branch-2.1-beta.  Thank you for 
the patch, Chuan.

 Skip tests in TestHDFSCLI that are not applicable on Windows.
 -

 Key: HDFS-5258
 URL: https://issues.apache.org/jira/browse/HDFS-5258
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.2-beta

 Attachments: HDFS-5258.1.patch


 Use the new {{&lt;windows&gt;false&lt;/windows&gt;}} flag of {{CLITestHelper}} to skip 
 tests that aren't applicable on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5258) Skip tests in TestHDFSCLI that are not applicable on Windows.

2013-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13779001#comment-13779001
 ] 

Hudson commented on HDFS-5258:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4474 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4474/])
HDFS-5258. Skip tests in TestHDFSCLI that are not applicable on Windows. 
Contributed by Chuan Liu. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526610)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml


 Skip tests in TestHDFSCLI that are not applicable on Windows.
 -

 Key: HDFS-5258
 URL: https://issues.apache.org/jira/browse/HDFS-5258
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.2-beta

 Attachments: HDFS-5258.1.patch


 Use the new {{&lt;windows&gt;false&lt;/windows&gt;}} flag of {{CLITestHelper}} to skip 
 tests that aren't applicable on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5186) TestFileJournalManager fails on Windows due to file handle leaks

2013-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5186:


   Resolution: Fixed
Fix Version/s: 2.1.2-beta
   3.0.0
   Status: Resolved  (was: Patch Available)

I have committed this to trunk, branch-2, and branch-2.1-beta.  Thank you for 
the patch, Chuan.

 TestFileJournalManager fails on Windows due to file handle leaks
 

 Key: HDFS-5186
 URL: https://issues.apache.org/jira/browse/HDFS-5186
 Project: Hadoop HDFS
  Issue Type: Test
  Components: namenode, test
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.2-beta

 Attachments: HDFS-5186.patch


 We have two unit test cases failing in this class on Windows due to a file 
 handle leak in the {{getNumberOfTransactions()}} method.
 {noformat}
 Running org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager
 Tests run: 13, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 1.693 sec <<< FAILURE!
 testReadFromMiddleOfEditLog(org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager)
   Time elapsed: 12 sec  <<< ERROR!
 java.io.IOException: Cannot remove current directory: 
 E:\Monarch\project\hadoop-monarch\hadoop-hdfs-project\hadoop-hdfs\target\test\data\filejournaltest2\current
   at 
 org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:299)
   at 
 org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:523)
   at 
 org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:544)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestEditLog.setupEdits(TestEditLog.java:1078)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestEditLog.setupEdits(TestEditLog.java:1133)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager.testReadFromMiddleOfEditLog(TestFileJournalManager.java:436)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
   at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
 testExcludeInProgressStreams(org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager)
   Time elapsed: 10 sec  <<< ERROR!
 java.io.IOException: Cannot remove current directory: 
 

[jira] [Updated] (HDFS-5260) Merge zero-copy memory-mapped HDFS client reads to trunk and branch-2.

2013-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5260:


Status: Open  (was: Patch Available)

I'll review the new warnings, patch them on the HDFS-4949 feature branch, and 
then post a new merge patch here.

 Merge zero-copy memory-mapped HDFS client reads to trunk and branch-2.
 --

 Key: HDFS-5260
 URL: https://issues.apache.org/jira/browse/HDFS-5260
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client, libhdfs
Affects Versions: 3.0.0, 2.3.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-5260.1.patch


 This issue tracks merging the code for zero-copy memory-mapped HDFS client 
 reads from the HDFS-4949 branch to trunk and branch-2.  This includes the 
 patches originally committed on the feature branch in HDFS-4953 and HDFS-5191.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5265) Namenode fails to start when dfs.https.port is unspecified

2013-09-26 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-5265:


 Summary: Namenode fails to start when dfs.https.port is unspecified
 Key: HDFS-5265
 URL: https://issues.apache.org/jira/browse/HDFS-5265
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


The Namenode gets the wrong address when dfs.https.port is unspecified.

java.lang.IllegalArgumentException: Does not contain a valid host:port 
authority: 0.0.0.0:0.0.0.0:0
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:102)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:633)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:490)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:691)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:676)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1265)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1331)
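For illustration, the malformed authority in the trace is consistent with prepending the info host onto a default value that already embeds host:port. This is a hedged, stand-alone sketch; the variable names are illustrative, not the actual NameNodeHttpServer code:

```java
public class AuthoritySketch {
    public static void main(String[] args) {
        String infoHost = "0.0.0.0";
        // With dfs.https.port unspecified, suppose the config lookup falls back
        // to a default that already contains "host:port":
        String httpsAddr = infoHost + ":" + 0;             // "0.0.0.0:0"
        // Prepending the host a second time yields the invalid authority seen
        // in the IllegalArgumentException above:
        String authority = infoHost + ":" + httpsAddr;     // "0.0.0.0:0.0.0.0:0"
        System.out.println(authority);
    }
}
```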

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5265) Namenode fails to start when dfs.https.port is unspecified

2013-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5265:
-

Attachment: HDFS-5265.000.patch

 Namenode fails to start when dfs.https.port is unspecified
 --

 Key: HDFS-5265
 URL: https://issues.apache.org/jira/browse/HDFS-5265
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5265.000.patch


 The Namenode gets the wrong address when dfs.https.port is unspecified.
 java.lang.IllegalArgumentException: Does not contain a valid host:port 
 authority: 0.0.0.0:0.0.0.0:0
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:102)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:633)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:490)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:691)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:676)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1265)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1331)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5265) Namenode fails to start when dfs.https.port is unspecified

2013-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5265:
-

Status: Patch Available  (was: Open)

 Namenode fails to start when dfs.https.port is unspecified
 --

 Key: HDFS-5265
 URL: https://issues.apache.org/jira/browse/HDFS-5265
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5265.000.patch


 The Namenode gets the wrong address when dfs.https.port is unspecified.
 java.lang.IllegalArgumentException: Does not contain a valid host:port 
 authority: 0.0.0.0:0.0.0.0:0
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:102)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:633)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:490)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:691)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:676)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1265)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1331)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5186) TestFileJournalManager fails on Windows due to file handle leaks

2013-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13779009#comment-13779009
 ] 

Hudson commented on HDFS-5186:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4475 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4475/])
HDFS-5186. TestFileJournalManager fails on Windows due to file handle leaks. 
Contributed by Chuan Liu. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526615)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileJournalManager.java


 TestFileJournalManager fails on Windows due to file handle leaks
 

 Key: HDFS-5186
 URL: https://issues.apache.org/jira/browse/HDFS-5186
 Project: Hadoop HDFS
  Issue Type: Test
  Components: namenode, test
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.2-beta

 Attachments: HDFS-5186.patch


 We have two unit test cases failing in this class on Windows due to a file 
 handle leak in the {{getNumberOfTransactions()}} method.
 {noformat}
 Running org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager
 Tests run: 13, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 1.693 sec <<< FAILURE!
 testReadFromMiddleOfEditLog(org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager)
   Time elapsed: 12 sec  <<< ERROR!
 java.io.IOException: Cannot remove current directory: 
 E:\Monarch\project\hadoop-monarch\hadoop-hdfs-project\hadoop-hdfs\target\test\data\filejournaltest2\current
   at 
 org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:299)
   at 
 org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:523)
   at 
 org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:544)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestEditLog.setupEdits(TestEditLog.java:1078)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestEditLog.setupEdits(TestEditLog.java:1133)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager.testReadFromMiddleOfEditLog(TestFileJournalManager.java:436)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
   at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
   at 
 

[jira] [Commented] (HDFS-5265) Namenode fails to start when dfs.https.port is unspecified

2013-09-26 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13779015#comment-13779015
 ] 

Jing Zhao commented on HDFS-5265:
-

Good catch [~wheat9]! Since you're in this area, maybe you can also remove the 
redundant if (certSSL) in the same method:

{code}
if (certSSL) {
  boolean needClientAuth = conf.getBoolean("dfs.https.need.client.auth", false);
  InetSocketAddress secInfoSocAddr = NetUtils.createSocketAddr(infoHost + ":" +
      conf.get(DFSConfigKeys.DFS_NAMENODE_HTTPS_PORT_KEY, infoHost + ":" + 0));
  Configuration sslConf = new Configuration(false);
  if (certSSL) {
    sslConf.addResource(conf.get(DFSConfigKeys.DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_KEY,
        "ssl-server.xml"));
  }
{code}
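For reference, a minimal stand-alone sketch of the suggested cleanup. The stub class and the literal key string below are assumptions for illustration only; the point is that the outer certSSL check already guards this path, so the nested check can simply go away:

```java
import java.util.HashMap;
import java.util.Map;

public class CertSslCleanupSketch {
    // Stub standing in for org.apache.hadoop.conf.Configuration (assumption).
    static class Conf {
        private final Map<String, String> props = new HashMap<>();
        String get(String key, String dflt) { return props.getOrDefault(key, dflt); }
    }

    // After the cleanup, the keystore resource is read directly: the caller
    // has already checked certSSL once, so no inner check is needed.
    static String keystoreResource(Conf conf, boolean certSSL) {
        if (certSSL) {
            // Key name is illustrative of DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_KEY.
            return conf.get("dfs.https.server.keystore.resource", "ssl-server.xml");
        }
        return null;
    }

    public static void main(String[] args) {
        // No key configured, so the default resource name is returned.
        System.out.println(keystoreResource(new Conf(), true));
    }
}
```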

 Namenode fails to start when dfs.https.port is unspecified
 --

 Key: HDFS-5265
 URL: https://issues.apache.org/jira/browse/HDFS-5265
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5265.000.patch


 The Namenode gets the wrong address when dfs.https.port is unspecified.
 java.lang.IllegalArgumentException: Does not contain a valid host:port 
 authority: 0.0.0.0:0.0.0.0:0
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:102)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:633)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:490)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:691)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:676)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1265)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1331)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5217) Namenode log directory link is inaccessible in secure cluster

2013-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13779018#comment-13779018
 ] 

Hadoop QA commented on HDFS-5217:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12605131/HDFS-5217.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5041//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5041//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5041//console

This message is automatically generated.

 Namenode log directory link is inaccessible in secure cluster
 -

 Key: HDFS-5217
 URL: https://issues.apache.org/jira/browse/HDFS-5217
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-5217.000.patch, HDFS-5217.001.patch


 Currently in a secured HDFS cluster, a 401 error is returned when clicking the 
 NameNode Logs link.
 The cause appears to be that the httpServer does not correctly set the 
 security handler and the user realm, which causes httpRequest.getRemoteUser 
 (for the log URL) to return null and later be overwritten to the default web 
 name (e.g., dr.who) by the filter. Meanwhile, in a secured cluster the log URL 
 requires the http user to be an administrator. That's why we see the 401 error.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5256) Use guava LoadingCache to implement DFSClientCache

2013-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5256:
-

Attachment: HDFS-5256.001.patch

Address Jing's comment, plus a little bit of formatting.

 Use guava LoadingCache to implement DFSClientCache
 --

 Key: HDFS-5256
 URL: https://issues.apache.org/jira/browse/HDFS-5256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.000.patch, HDFS-5256.001.patch


 Google Guava provides an implementation of {{LoadingCache}}. Use the 
 {{LoadingCache}} to implement {{DFSClientCache}} in NFS. 
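A LoadingCache populates entries on a miss via a supplied loader, and adds eviction and expiry policies on top. As a self-contained stand-in (plain JDK, since Guava is not assumed on the classpath here; the names are illustrative, not the real DFSClientCache API), the core load-on-miss behavior looks like:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class LoadingCacheSketch {
    // Hedged sketch: client creation is stubbed as a string; the real
    // DFSClientCache keys on the user and holds DFSClient instances.
    static final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
    static final AtomicInteger loads = new AtomicInteger();

    static String getClient(String user) {
        // computeIfAbsent gives the load-on-miss semantics that Guava's
        // LoadingCache.get(key) provides (Guava adds eviction, expiry, stats).
        return cache.computeIfAbsent(user, u -> {
            loads.incrementAndGet();
            return "DFSClient-for-" + u;
        });
    }

    public static void main(String[] args) {
        System.out.println(getClient("alice")); // loads once
        System.out.println(getClient("alice")); // cache hit, no second load
        System.out.println("loads=" + loads.get());
    }
}
```

The Guava equivalent builds the cache with CacheBuilder.newBuilder().maximumSize(n).build(loader), where get(key) triggers the CacheLoader on a miss.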

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5265) Namenode fails to start when dfs.https.port is unspecified

2013-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5265:
-

Attachment: HDFS-5256.001.patch

 Namenode fails to start when dfs.https.port is unspecified
 --

 Key: HDFS-5265
 URL: https://issues.apache.org/jira/browse/HDFS-5265
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.001.patch, HDFS-5256.001.patch


 The Namenode gets the wrong address when dfs.https.port is unspecified.
 java.lang.IllegalArgumentException: Does not contain a valid host:port 
 authority: 0.0.0.0:0.0.0.0:0
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:102)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:633)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:490)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:691)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:676)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1265)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1331)
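The malformed authority in the trace ("0.0.0.0:0.0.0.0:0") is what you get when a value that is already a host:port pair is prepended with another host and a colon. A hypothetical, self-contained reconstruction of the failure mode follows; the real parsing happens in NetUtils.createSocketAddr, and the helper names here are made up.

```java
import java.net.URI;
import java.net.URISyntaxException;

/**
 * Hypothetical reconstruction of how "0.0.0.0:0.0.0.0:0" can arise: a
 * configured value that is already host:port ("0.0.0.0:0") is glued onto a
 * bare host with another colon. This only illustrates the failure mode; the
 * real lookup lives in NetUtils/NameNodeHttpServer.
 */
public class BadAuthorityDemo {
  static String buildAuthority(String host, String portConf) {
    return host + ":" + portConf;  // bug: portConf may itself be host:port
  }

  static boolean isValidAuthority(String authority) {
    try {
      // A second ':' makes the port non-numeric, so server-based authority
      // parsing fails and URI reports no host.
      URI u = new URI("http://" + authority);
      return u.getHost() != null && u.getPort() >= 0;
    } catch (URISyntaxException e) {
      return false;
    }
  }
}
```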



[jira] [Updated] (HDFS-5265) Namenode fails to start when dfs.https.port is unspecified

2013-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5265:
-

Attachment: HDFS-5256.001.patch

Address Jing's comments + a little bit of formatting.

 Namenode fails to start when dfs.https.port is unspecified
 --

 Key: HDFS-5265
 URL: https://issues.apache.org/jira/browse/HDFS-5265
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.001.patch, HDFS-5256.001.patch


 Namenode gets the wrong address when dfs.https.port is unspecified:
 java.lang.IllegalArgumentException: Does not contain a valid host:port authority: 0.0.0.0:0.0.0.0:0
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:102)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:633)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:490)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:691)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:676)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1265)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1331)



[jira] [Updated] (HDFS-5265) Namenode fails to start when dfs.https.port is unspecified

2013-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5265:
-

Attachment: (was: HDFS-5265.000.patch)

 Namenode fails to start when dfs.https.port is unspecified
 --

 Key: HDFS-5265
 URL: https://issues.apache.org/jira/browse/HDFS-5265
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.001.patch, HDFS-5256.001.patch


 Namenode gets the wrong address when dfs.https.port is unspecified:
 java.lang.IllegalArgumentException: Does not contain a valid host:port authority: 0.0.0.0:0.0.0.0:0
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:102)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:633)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:490)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:691)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:676)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1265)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1331)



[jira] [Updated] (HDFS-5265) Namenode fails to start when dfs.https.port is unspecified

2013-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5265:
-

Attachment: (was: HDFS-5256.001.patch)

 Namenode fails to start when dfs.https.port is unspecified
 --

 Key: HDFS-5265
 URL: https://issues.apache.org/jira/browse/HDFS-5265
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5265.001.patch


 Namenode gets the wrong address when dfs.https.port is unspecified:
 java.lang.IllegalArgumentException: Does not contain a valid host:port authority: 0.0.0.0:0.0.0.0:0
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:102)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:633)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:490)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:691)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:676)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1265)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1331)



[jira] [Updated] (HDFS-5265) Namenode fails to start when dfs.https.port is unspecified

2013-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5265:
-

Attachment: (was: HDFS-5256.001.patch)

 Namenode fails to start when dfs.https.port is unspecified
 --

 Key: HDFS-5265
 URL: https://issues.apache.org/jira/browse/HDFS-5265
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5265.001.patch


 Namenode gets the wrong address when dfs.https.port is unspecified:
 java.lang.IllegalArgumentException: Does not contain a valid host:port authority: 0.0.0.0:0.0.0.0:0
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:102)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:633)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:490)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:691)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:676)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1265)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1331)



[jira] [Updated] (HDFS-5265) Namenode fails to start when dfs.https.port is unspecified

2013-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5265:
-

Attachment: HDFS-5265.001.patch

 Namenode fails to start when dfs.https.port is unspecified
 --

 Key: HDFS-5265
 URL: https://issues.apache.org/jira/browse/HDFS-5265
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5265.001.patch


 Namenode gets the wrong address when dfs.https.port is unspecified:
 java.lang.IllegalArgumentException: Does not contain a valid host:port authority: 0.0.0.0:0.0.0.0:0
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:102)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:633)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:490)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:691)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:676)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1265)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1331)



[jira] [Updated] (HDFS-5256) Use guava LoadingCache to implement DFSClientCache

2013-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5256:
-

Attachment: (was: HDFS-5256.001.patch)

 Use guava LoadingCache to implement DFSClientCache
 --

 Key: HDFS-5256
 URL: https://issues.apache.org/jira/browse/HDFS-5256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.000.patch


 Google Guava provides an implementation of LoadingCache. Use the LoadingCache 
 to implement DFSClientCache in NFS. 



[jira] [Updated] (HDFS-5256) Use guava LoadingCache to implement DFSClientCache

2013-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5256:
-

Attachment: HDFS-5256.001.patch

 Use guava LoadingCache to implement DFSClientCache
 --

 Key: HDFS-5256
 URL: https://issues.apache.org/jira/browse/HDFS-5256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.000.patch, HDFS-5256.001.patch


 Google Guava provides an implementation of LoadingCache. Use the LoadingCache 
 to implement DFSClientCache in NFS. 



[jira] [Commented] (HDFS-5256) Use guava LoadingCache to implement DFSClientCache

2013-09-26 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13779100#comment-13779100
 ] 

Haohui Mai commented on HDFS-5256:
--

Addressed Brandon's comments and added more comprehensive unit tests.

 Use guava LoadingCache to implement DFSClientCache
 --

 Key: HDFS-5256
 URL: https://issues.apache.org/jira/browse/HDFS-5256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.000.patch, HDFS-5256.001.patch


 Google Guava provides an implementation of LoadingCache. Use the LoadingCache 
 to implement DFSClientCache in NFS. 



[jira] [Commented] (HDFS-5256) Use guava LoadingCache to implement DFSClientCache

2013-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13779129#comment-13779129
 ] 

Hadoop QA commented on HDFS-5256:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12605309/HDFS-5256.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5047//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5047//console

This message is automatically generated.

 Use guava LoadingCache to implement DFSClientCache
 --

 Key: HDFS-5256
 URL: https://issues.apache.org/jira/browse/HDFS-5256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.000.patch, HDFS-5256.001.patch


 Google Guava provides an implementation of LoadingCache. Use the LoadingCache 
 to implement DFSClientCache in NFS. 



[jira] [Created] (HDFS-5266) ElasticByteBufferPool#Key does not implement equals.

2013-09-26 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-5266:
---

 Summary: ElasticByteBufferPool#Key does not implement equals.
 Key: HDFS-5266
 URL: https://issues.apache.org/jira/browse/HDFS-5266
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: HDFS-4949
Reporter: Chris Nauroth
Assignee: Chris Nauroth


{{ElasticByteBufferPool#Key}} does not implement {{equals}}.  It's used as a 
map key, so we should implement this.
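For illustration, a minimal equals/hashCode pair consistent with a compareTo, which is what Findbugs expects of a class used as a map key. The field names below are assumed for the sketch, not taken from the attached patch.

```java
import java.util.Objects;

/**
 * Sketch of an equals/hashCode pair for a map key. The fields (capacity,
 * insertionTime) are assumed for illustration; see the attached
 * HDFS-5266.1.patch for the actual fix. Because the class is final, a
 * plain cast after the getClass() check is safe: no subclass can break
 * the symmetry of equals.
 */
public final class Key implements Comparable<Key> {
  private final int capacity;
  private final long insertionTime;

  public Key(int capacity, long insertionTime) {
    this.capacity = capacity;
    this.insertionTime = insertionTime;
  }

  @Override
  public int compareTo(Key other) {
    int c = Integer.compare(capacity, other.capacity);
    return c != 0 ? c : Long.compare(insertionTime, other.insertionTime);
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (o == null || o.getClass() != getClass()) {
      return false;
    }
    Key other = (Key) o;  // safe: the class is final
    return capacity == other.capacity && insertionTime == other.insertionTime;
  }

  @Override
  public int hashCode() {
    return Objects.hash(capacity, insertionTime);
  }
}
```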



[jira] [Updated] (HDFS-5266) ElasticByteBufferPool#Key does not implement equals.

2013-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5266:


Attachment: HDFS-5266.1.patch

This was found by Findbugs on the merge patch in HDFS-5260.  Here is a patch 
that adds the {{equals}} implementation.

I also corrected a minor Javadoc warning.

The test-patch run on HDFS-5260 also reported new javac warnings.  These are 
due to use of internal Sun APIs for {{munmap}}, and there isn't any way for us 
to suppress them.

 ElasticByteBufferPool#Key does not implement equals.
 

 Key: HDFS-5266
 URL: https://issues.apache.org/jira/browse/HDFS-5266
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: HDFS-4949
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-5266.1.patch


 {{ElasticByteBufferPool#Key}} does not implement {{equals}}.  It's used as a 
 map key, so we should implement this.



[jira] [Commented] (HDFS-5266) ElasticByteBufferPool#Key does not implement equals.

2013-09-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13779141#comment-13779141
 ] 

Chris Nauroth commented on HDFS-5266:
-

[~cmccabe] or [~andrew.wang], how does this patch look?  If it looks good to 
you, I'll commit to HDFS-4949, and then prep a new merge patch for HDFS-5260.

 ElasticByteBufferPool#Key does not implement equals.
 

 Key: HDFS-5266
 URL: https://issues.apache.org/jira/browse/HDFS-5266
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: HDFS-4949
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-5266.1.patch


 {{ElasticByteBufferPool#Key}} does not implement {{equals}}.  It's used as a 
 map key, so we should implement this.



[jira] [Work started] (HDFS-5266) ElasticByteBufferPool#Key does not implement equals.

2013-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-5266 started by Chris Nauroth.

 ElasticByteBufferPool#Key does not implement equals.
 

 Key: HDFS-5266
 URL: https://issues.apache.org/jira/browse/HDFS-5266
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: HDFS-4949
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-5266.1.patch


 {{ElasticByteBufferPool#Key}} does not implement {{equals}}.  It's used as a 
 map key, so we should implement this.



[jira] [Commented] (HDFS-3983) Hftp should support both SPNEGO and KSSL

2013-09-26 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13779145#comment-13779145
 ] 

Arpit Agarwal commented on HDFS-3983:
-

Is this a dup of HDFS-3699? If so we can resolve one of the two.

 Hftp should support both SPNEGO and KSSL
 

 Key: HDFS-3983
 URL: https://issues.apache.org/jira/browse/HDFS-3983
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Blocker
 Attachments: hdfs-3983.txt, hdfs-3983.txt


 Hftp currently doesn't work against a secure cluster unless you configure 
 {{dfs.https.port}} to be the http port; otherwise the client can't fetch 
 tokens:
 {noformat}
 $ hadoop fs -ls hftp://c1225.hal.cloudera.com:50070/
 12/09/26 18:02:00 INFO fs.FileSystem: Couldn't get a delegation token from 
 http://c1225.hal.cloudera.com:50470 using http.
 ls: Security enabled but user not authenticated by filter
 {noformat}
 This is due to Hftp still using the https port. Post HDFS-2617 it should use 
 the regular http port. Hsftp should still use the secure port; however, now 
 that we have HADOOP-8581, it's worth considering removing Hsftp entirely. I'll 
 start a separate thread about that.



[jira] [Updated] (HDFS-5256) Use guava LoadingCache to implement DFSClientCache

2013-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5256:
-

Attachment: HDFS-5256.002.patch

 Use guava LoadingCache to implement DFSClientCache
 --

 Key: HDFS-5256
 URL: https://issues.apache.org/jira/browse/HDFS-5256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.000.patch, HDFS-5256.001.patch, 
 HDFS-5256.002.patch


 Google Guava provides an implementation of LoadingCache. Use the LoadingCache 
 to implement DFSClientCache in NFS. 



[jira] [Commented] (HDFS-5256) Use guava LoadingCache to implement DFSClientCache

2013-09-26 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13779161#comment-13779161
 ] 

Brandon Li commented on HDFS-5256:
--

Thanks Haohui for adding the unit test. More ideas for better unit tests of the 
new cache:
1. add n clients to the cache, check that the cache size is n, remove some 
clients, and make sure the cache size changes correspondingly
2. do cache.get() for the same user, and make sure the cache doesn't add a new 
entry for the same user
3. make sure the evicted client is closed (which you are already trying to test 
with the current patch)
4. give the cache a short expiration time to verify the client can be evicted 
and closed after the expiration time
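Ideas 1 and 2 above can be sketched against a toy load-on-miss cache as follows. Names like ToyCache and the loads counter are hypothetical; real tests would target DFSClientCache itself.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/**
 * Sketch of test ideas 1 and 2 against a toy load-on-miss cache. The cache
 * here is a stand-in: it only exists so the assertions are self-contained.
 */
public class DFSClientCacheTestSketch {
  static class ToyCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final Function<K, V> loader;
    int loads;  // counts how many times the loader actually ran

    ToyCache(Function<K, V> loader) { this.loader = loader; }

    V get(K key) {
      V v = map.get(key);
      if (v == null) {        // idea 2: only a miss creates a new entry
        v = loader.apply(key);
        map.put(key, v);
        loads++;
      }
      return v;
    }

    void remove(K key) { map.remove(key); }
    int size() { return map.size(); }
  }

  public static void main(String[] args) {
    ToyCache<String, String> cache = new ToyCache<>(user -> "client-for-" + user);
    // idea 1: add n clients, the size is n; remove one, the size drops.
    for (int i = 0; i < 5; i++) {
      cache.get("user" + i);
    }
    assert cache.size() == 5;
    cache.remove("user0");
    assert cache.size() == 4;
    // idea 2: a repeated get() for the same user must not add a new entry.
    cache.get("user1");
    assert cache.size() == 4 && cache.loads == 5;
  }
}
```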

 Use guava LoadingCache to implement DFSClientCache
 --

 Key: HDFS-5256
 URL: https://issues.apache.org/jira/browse/HDFS-5256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.000.patch, HDFS-5256.001.patch, 
 HDFS-5256.002.patch


 Google Guava provides an implementation of LoadingCache. Use the LoadingCache 
 to implement DFSClientCache in NFS. 



[jira] [Updated] (HDFS-5256) Use guava LoadingCache to implement DFSClientCache

2013-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5256:
-

Attachment: HDFS-5256.003.patch

 Use guava LoadingCache to implement DFSClientCache
 --

 Key: HDFS-5256
 URL: https://issues.apache.org/jira/browse/HDFS-5256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.000.patch, HDFS-5256.001.patch, 
 HDFS-5256.002.patch, HDFS-5256.003.patch


 Google Guava provides an implementation of LoadingCache. Use the LoadingCache 
 to implement DFSClientCache in NFS. 



[jira] [Commented] (HDFS-5256) Use guava LoadingCache to implement DFSClientCache

2013-09-26 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13779172#comment-13779172
 ] 

Haohui Mai commented on HDFS-5256:
--

Addressed Brandon's comments.

 Use guava LoadingCache to implement DFSClientCache
 --

 Key: HDFS-5256
 URL: https://issues.apache.org/jira/browse/HDFS-5256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.000.patch, HDFS-5256.001.patch, 
 HDFS-5256.002.patch, HDFS-5256.003.patch


 Google Guava provides an implementation of LoadingCache. Use the LoadingCache 
 to implement DFSClientCache in NFS. 



[jira] [Commented] (HDFS-5266) ElasticByteBufferPool#Key does not implement equals.

2013-09-26 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13779176#comment-13779176
 ] 

Andrew Wang commented on HDFS-5266:
---

+1, thanks Chris. Since it's a final class, it's OK to use casting.

 ElasticByteBufferPool#Key does not implement equals.
 

 Key: HDFS-5266
 URL: https://issues.apache.org/jira/browse/HDFS-5266
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: HDFS-4949
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-5266.1.patch


 {{ElasticByteBufferPool#Key}} does not implement {{equals}}.  It's used as a 
 map key, so we should implement this.



[jira] [Commented] (HDFS-5256) Use guava LoadingCache to implement DFSClientCache

2013-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13779180#comment-13779180
 ] 

Hadoop QA commented on HDFS-5256:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12605321/HDFS-5256.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5048//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5048//console

This message is automatically generated.

 Use guava LoadingCache to implement DFSClientCache
 --

 Key: HDFS-5256
 URL: https://issues.apache.org/jira/browse/HDFS-5256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.000.patch, HDFS-5256.001.patch, 
 HDFS-5256.002.patch, HDFS-5256.003.patch


 Google Guava provides an implementation of LoadingCache. Use the LoadingCache 
 to implement DFSClientCache in NFS. 



[jira] [Commented] (HDFS-5223) Allow edit log/fsimage format changes without changing layout version

2013-09-26 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13779177#comment-13779177
 ] 

Nathan Roberts commented on HDFS-5223:
--

Thanks Aaron and Todd for bringing this up.

I love the flexibility of feature bits, but I'm very nervous about the 
complexity they tend to bring. As long as there are incredibly tight controls it 
can work, but more often than not I've seen this sort of approach lead to some 
incredibly unmaintainable code. The code can get very complex dealing with 
multiple combinations, and the testing/QA can also be very difficult to manage. 
Things can get overwhelmingly complex quite quickly. Having an 
-enableAllNewFeatures helps a bit, but I'm not sure it lowers the complexity 
all that much.

Of the two options, I'd lean in the direction of #1 at this point. 

iiuc, option 2 basically means that V2 software has to remember how to both 
read and write in V1 format whereas option 1 only requires that V2 be able to 
read V1 format (like we do today). I kind of like the fact that new software 
doesn't ever have to write things according to the older format. 

* When we update the SBN to V2 it would be allowed to come up and it would 
still be able to process V1 images/edits
* The first time it tries to write a new image, it would do so in V2 format 
* When uploading a new V2 image to ANN, the upload would not proceed because of 
the version mismatch (this way the ANN's local storage stays purely V1)
* At this point we can still rollback by simply re-bootstrapping the SBN
* Now we fail over to the SBN; the SBN changes the shared edits area to indicate 
V2 (just an update to the VERSION file, I think)
* Upgrade old ANN with V2 software
* old ANN comes up as Standby, reads the new V2 image and starts processing new 
V2 edits (somewhere in here also has to change local storage to V2)

What's not great about this approach is that as soon as V2 software becomes 
active, we're writing in V2 format, and at that point we can't go back without 
losing edits. However, that's basically very similar to today's -upgrade. The 
only difference is that we haven't done anything to protect the blocks on the 
datanodes (with -upgrade we hardlink everything and therefore guarantee that 
data blocks can't go away). So maybe we need a mode where HDFS stops deleting 
blocks, both from the NN's perspective (it won't issue invalidates any longer) 
and from the DN side, where it will ignore block deletion requests: kind of a 
semi-safe-mode where the filesystem acts pretty much normally except that it 
refuses to delete any blocks. If we get ourselves into a true disaster-recovery 
situation, we can go back to V1 software + the last V1 fsimage + all V1 edits 
that applied to that image + all blocks from the datanodes.
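As a toy illustration of the trade-off discussed above: a single monotonic layout version forces old software to reject any newer image, while feature bits only reject images that use a feature the software doesn't understand. The names here are made up for illustration and are not HDFS APIs.

```java
import java.util.EnumSet;
import java.util.Set;

/**
 * Toy contrast of the two versioning schemes discussed above. Illustrative
 * only: HDFS's real layout-version handling lives in the namenode, and the
 * Feature names here are invented.
 */
public class VersioningSketch {
  enum Feature { BASE_FORMAT, NEW_EDIT_LOG_OP }

  /** Single layout version: readable only if the image is not newer. */
  static boolean canReadByLayoutVersion(int softwareVersion, int imageVersion) {
    return imageVersion <= softwareVersion;
  }

  /** Feature bits: readable if we understand every feature the image uses. */
  static boolean canReadByFeatureBits(Set<Feature> supported,
                                      Set<Feature> imageUses) {
    return supported.containsAll(imageUses);
  }

  public static void main(String[] args) {
    // A backward-compatible addition still bumps the single layout version,
    // so old software rejects the new image outright...
    boolean byVersion = canReadByLayoutVersion(1, 2);
    // ...whereas feature bits only reject images that declare a feature the
    // running software doesn't support.
    boolean byBits = canReadByFeatureBits(
        EnumSet.of(Feature.BASE_FORMAT, Feature.NEW_EDIT_LOG_OP),
        EnumSet.of(Feature.BASE_FORMAT));
    System.out.println(byVersion + " " + byBits);
  }
}
```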




 Allow edit log/fsimage format changes without changing layout version
 -

 Key: HDFS-5223
 URL: https://issues.apache.org/jira/browse/HDFS-5223
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.1.1-beta
Reporter: Aaron T. Myers

 Currently all HDFS on-disk formats are versioned by a single layout version. 
 This means that even for changes which might be backward compatible, like the 
 addition of a new edit log op code, we must go through the full `namenode 
 -upgrade' process, which requires coordination with DNs, etc. HDFS should 
 support a lighter-weight alternative.



[jira] [Resolved] (HDFS-5266) ElasticByteBufferPool#Key does not implement equals.

2013-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-5266.
-

   Resolution: Fixed
Fix Version/s: HDFS-4949
 Hadoop Flags: Reviewed

Thanks for the review, Andrew.  I've committed this to the HDFS-4949 branch.

 ElasticByteBufferPool#Key does not implement equals.
 

 Key: HDFS-5266
 URL: https://issues.apache.org/jira/browse/HDFS-5266
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: HDFS-4949
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: HDFS-4949

 Attachments: HDFS-5266.1.patch


 {{ElasticByteBufferPool#Key}} does not implement {{equals}}.  It's used as a 
 map key, so we should implement this.



[jira] [Updated] (HDFS-5260) Merge zero-copy memory-mapped HDFS client reads to trunk and branch-2.

2013-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5260:


Attachment: HDFS-5260.2.patch

Here is a new merge patch that also incorporates HDFS-5266 to correct the 
Findbugs warning and Javadoc warning.

We're still going to have new javac warnings due to use of internal Sun APIs 
for {{munmap}}.  We don't have a way to suppress those.

 Merge zero-copy memory-mapped HDFS client reads to trunk and branch-2.
 --

 Key: HDFS-5260
 URL: https://issues.apache.org/jira/browse/HDFS-5260
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client, libhdfs
Affects Versions: 3.0.0, 2.3.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-5260.1.patch, HDFS-5260.2.patch


 This issue tracks merging the code for zero-copy memory-mapped HDFS client 
 reads from the HDFS-4949 branch to trunk and branch-2.  This includes the 
 patches originally committed on the feature branch in HDFS-4953 and HDFS-5191.



[jira] [Updated] (HDFS-5260) Merge zero-copy memory-mapped HDFS client reads to trunk and branch-2.

2013-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5260:


Status: Patch Available  (was: Open)

 Merge zero-copy memory-mapped HDFS client reads to trunk and branch-2.
 --

 Key: HDFS-5260
 URL: https://issues.apache.org/jira/browse/HDFS-5260
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client, libhdfs
Affects Versions: 3.0.0, 2.3.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-5260.1.patch, HDFS-5260.2.patch


 This issue tracks merging the code for zero-copy memory-mapped HDFS client 
 reads from the HDFS-4949 branch to trunk and branch-2.  This includes the 
 patches originally committed on the feature branch in HDFS-4953 and HDFS-5191.



[jira] [Commented] (HDFS-5265) Namenode fails to start when dfs.https.port is unspecified

2013-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13779214#comment-13779214
 ] 

Hadoop QA commented on HDFS-5265:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12605298/HDFS-5265.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5046//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5046//console

This message is automatically generated.

 Namenode fails to start when dfs.https.port is unspecified
 --

 Key: HDFS-5265
 URL: https://issues.apache.org/jira/browse/HDFS-5265
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5265.001.patch


 Namenode gets the wrong address when dfs.https.port is unspecified.
 java.lang.IllegalArgumentException: Does not contain a valid host:port 
 authority: 0.0.0.0:0.0.0.0:0
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:102)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:633)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:490)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:691)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:676)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1265)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1331)



[jira] [Commented] (HDFS-5266) ElasticByteBufferPool#Key does not implement equals.

2013-09-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13779216#comment-13779216
 ] 

Colin Patrick McCabe commented on HDFS-5266:


I'm pretty sure FindBugs' next complaint will be that we implement {{equals}}, 
but not {{hashCode}}.  Of course we don't use any of those methods... sigh.  It 
might almost be worth adding a suppression here.
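
The equals/hashCode pairing FindBugs expects might look roughly like this; the PoolKey class below is a simplified stand-in for a buffer-pool key (capacity plus direct/heap flag), not the actual ElasticByteBufferPool#Key.

```java
import java.util.Objects;

// Stand-in for a buffer-pool key: when a class is used as a map key,
// equals and hashCode must be implemented together and consistently.
final class PoolKey {
    private final int capacity;
    private final boolean direct;

    PoolKey(int capacity, boolean direct) {
        this.capacity = capacity;
        this.direct = direct;
    }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof PoolKey)) return false;
        PoolKey k = (PoolKey) o;
        return capacity == k.capacity && direct == k.direct;
    }

    // Must be consistent with equals: equal keys hash to the same value.
    @Override public int hashCode() {
        return Objects.hash(capacity, direct);
    }
}
```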

 ElasticByteBufferPool#Key does not implement equals.
 

 Key: HDFS-5266
 URL: https://issues.apache.org/jira/browse/HDFS-5266
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: HDFS-4949
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: HDFS-4949

 Attachments: HDFS-5266.1.patch


 {{ElasticByteBufferPool#Key}} does not implement {{equals}}.  It's used as a 
 map key, so we should implement this.



[jira] [Commented] (HDFS-5256) Use guava LoadingCache to implement DFSClientCache

2013-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13779245#comment-13779245
 ] 

Hadoop QA commented on HDFS-5256:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12605324/HDFS-5256.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5050//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5050//console

This message is automatically generated.

 Use guava LoadingCache to implement DFSClientCache
 --

 Key: HDFS-5256
 URL: https://issues.apache.org/jira/browse/HDFS-5256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.000.patch, HDFS-5256.001.patch, 
 HDFS-5256.002.patch, HDFS-5256.003.patch


 Google Guava provides an implementation of LoadingCache. Use the LoadingCache 
 to implement DFSClientCache in NFS. 



[jira] [Commented] (HDFS-5225) datanode keeps logging the same 'is no longer in the dataset' message over and over again

2013-09-26 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13779241#comment-13779241
 ] 

Kihwal Lee commented on HDFS-5225:
--

The scan spins because getEarliestScanTime() will return the last scan time of 
the oldest block. Since it never gets removed, the scanner keeps calling 
verifyFirstBlock().

Also, the JDK doc on TreeSet states, "Note that the ordering maintained by a 
set (whether or not an explicit comparator is provided) must be consistent with 
equals if it is to correctly implement the Set interface." This is the case in 
branch-0.23, but it is broken in branch-2.
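
A minimal demonstration of that pitfall, with a simplified stand-in Block class (not the actual datanode type): equals() compares block ids, but the set orders by last scan time, so removal of an "equal" block silently fails.

```java
import java.util.Comparator;
import java.util.TreeSet;

class TreeSetInconsistency {
    static final class Block {
        final long blockId;
        final long lastScanTime;
        Block(long id, long t) { blockId = id; lastScanTime = t; }
        @Override public boolean equals(Object o) {
            return o instanceof Block && ((Block) o).blockId == blockId;
        }
        @Override public int hashCode() { return Long.hashCode(blockId); }
    }

    static boolean removeEqualBlock() {
        // Ordering by lastScanTime is inconsistent with equals (by blockId).
        TreeSet<Block> blocks =
            new TreeSet<>(Comparator.comparingLong((Block b) -> b.lastScanTime));
        blocks.add(new Block(1369, 100L));
        // An "equal" block (same id) with a different scan time: the TreeSet
        // navigates by the comparator, never finds it, and remove() fails.
        return blocks.remove(new Block(1369, 500L));
    }
}
```

This matches the spinning scan described above: the stale entry can never be removed, so the oldest scan time never advances.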



 datanode keeps logging the same 'is no longer in the dataset' message over 
 and over again
 -

 Key: HDFS-5225
 URL: https://issues.apache.org/jira/browse/HDFS-5225
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.1.1-beta
Reporter: Roman Shaposhnik
Assignee: Tsuyoshi OZAWA
Priority: Blocker
 Attachments: HDFS-5225.1.patch, HDFS-5225.2.patch, 
 HDFS-5225-reproduce.1.txt


 I was running the usual Bigtop testing on 2.1.1-beta RC1 with the following 
 configuration: 4 nodes fully distributed cluster with security on.
 All of a sudden my DN ate up all of the space in /var/log logging the 
 following message repeatedly:
 {noformat}
 2013-09-18 20:51:12,046 INFO 
 org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: 
 BP-1884637155-10.224.158.152-1379524544853:blk_1073742189_1369 is no longer 
 in the dataset
 {noformat}
 It wouldn't answer to a jstack and jstack -F ended up being useless.
 Here's what I was able to find in the NameNode logs regarding this block ID:
 {noformat}
 fgrep -rI 'blk_1073742189' hadoop-hdfs-namenode-ip-10-224-158-152.log
 2013-09-18 18:03:16,972 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
 allocateBlock: 
 /user/jenkins/testAppendInputWedSep18180222UTC2013/test4.fileWedSep18180222UTC2013._COPYING_.
  BP-1884637155-10.224.158.152-1379524544853 
 blk_1073742189_1369{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
 replicas=[ReplicaUnderConstruction[10.83.107.80:1004|RBW], 
 ReplicaUnderConstruction[10.34.74.206:1004|RBW], 
 ReplicaUnderConstruction[10.224.158.152:1004|RBW]]}
 2013-09-18 18:03:17,222 INFO BlockStateChange: BLOCK* addStoredBlock: 
 blockMap updated: 10.224.158.152:1004 is added to 
 blk_1073742189_1369{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
 replicas=[ReplicaUnderConstruction[10.83.107.80:1004|RBW], 
 ReplicaUnderConstruction[10.34.74.206:1004|RBW], 
 ReplicaUnderConstruction[10.224.158.152:1004|RBW]]} size 0
 2013-09-18 18:03:17,222 INFO BlockStateChange: BLOCK* addStoredBlock: 
 blockMap updated: 10.34.74.206:1004 is added to 
 blk_1073742189_1369{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
 replicas=[ReplicaUnderConstruction[10.83.107.80:1004|RBW], 
 ReplicaUnderConstruction[10.34.74.206:1004|RBW], 
 ReplicaUnderConstruction[10.224.158.152:1004|RBW]]} size 0
 2013-09-18 18:03:17,224 INFO BlockStateChange: BLOCK* addStoredBlock: 
 blockMap updated: 10.83.107.80:1004 is added to 
 blk_1073742189_1369{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
 replicas=[ReplicaUnderConstruction[10.83.107.80:1004|RBW], 
 ReplicaUnderConstruction[10.34.74.206:1004|RBW], 
 ReplicaUnderConstruction[10.224.158.152:1004|RBW]]} size 0
 2013-09-18 18:03:17,899 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
 updatePipeline(block=BP-1884637155-10.224.158.152-1379524544853:blk_1073742189_1369,
  newGenerationStamp=1370, newLength=1048576, newNodes=[10.83.107.80:1004, 
 10.34.74.206:1004, 10.224.158.152:1004], 
 clientName=DFSClient_NONMAPREDUCE_-450304083_1)
 2013-09-18 18:03:17,904 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
 updatePipeline(BP-1884637155-10.224.158.152-1379524544853:blk_1073742189_1369)
  successfully to 
 BP-1884637155-10.224.158.152-1379524544853:blk_1073742189_1370
 2013-09-18 18:03:18,540 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
 updatePipeline(block=BP-1884637155-10.224.158.152-1379524544853:blk_1073742189_1370,
  newGenerationStamp=1371, newLength=2097152, newNodes=[10.83.107.80:1004, 
 10.34.74.206:1004, 10.224.158.152:1004], 
 clientName=DFSClient_NONMAPREDUCE_-450304083_1)
 2013-09-18 18:03:18,548 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
 updatePipeline(BP-1884637155-10.224.158.152-1379524544853:blk_1073742189_1370)
  successfully to 
 BP-1884637155-10.224.158.152-1379524544853:blk_1073742189_1371
 2013-09-18 18:03:26,150 INFO BlockStateChange: BLOCK* addToInvalidates: 
 blk_1073742189_1371 10.83.107.80:1004 10.34.74.206:1004 10.224.158.152:1004 
 2013-09-18 18:03:26,847 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
 

[jira] [Commented] (HDFS-5256) Use guava LoadingCache to implement DFSClientCache

2013-09-26 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13779297#comment-13779297
 ] 

Brandon Li commented on HDFS-5256:
--

Haohui, let's remove the TTL since the cache entry usage inside NFS servers may 
not actually be visible here. Also, please make this warning message in 
DFSClientCache.java more verbose, such as:
LOG.warn("DFSClientCache got IOException when closing the DFSClient ("
    + notification.getValue() + "): " + e);
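
One way the suggested removal-listener logging could fit into a Guava LoadingCache (a sketch, not the actual DFSClientCache; the Closeable value type and messages are illustrative, and Guava must be on the classpath):

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.RemovalListener;
import java.io.Closeable;
import java.io.IOException;

// Sketch: a size-bounded LoadingCache whose removal listener closes the
// evicted client and logs IOExceptions verbosely, as suggested above.
class ClientCacheSketch {
    static LoadingCache<String, Closeable> build(CacheLoader<String, Closeable> loader) {
        RemovalListener<String, Closeable> onRemoval = notification -> {
            try {
                notification.getValue().close();
            } catch (IOException e) {
                System.err.println("DFSClientCache got IOException when closing the DFSClient ("
                    + notification.getValue() + "): " + e);
            }
        };
        return CacheBuilder.newBuilder()
            .maximumSize(2)            // evict once more than 2 clients are cached
            .removalListener(onRemoval)
            .build(loader);
    }
}
```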



 Use guava LoadingCache to implement DFSClientCache
 --

 Key: HDFS-5256
 URL: https://issues.apache.org/jira/browse/HDFS-5256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.000.patch, HDFS-5256.001.patch, 
 HDFS-5256.002.patch, HDFS-5256.003.patch


 Google Guava provides an implementation of LoadingCache. Use the LoadingCache 
 to implement DFSClientCache in NFS. 



[jira] [Commented] (HDFS-5260) Merge zero-copy memory-mapped HDFS client reads to trunk and branch-2.

2013-09-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13779306#comment-13779306
 ] 

Colin Patrick McCabe commented on HDFS-5260:


+1 pending Jenkins.  Thanks, Chris.

 Merge zero-copy memory-mapped HDFS client reads to trunk and branch-2.
 --

 Key: HDFS-5260
 URL: https://issues.apache.org/jira/browse/HDFS-5260
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client, libhdfs
Affects Versions: 3.0.0, 2.3.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-5260.1.patch, HDFS-5260.2.patch


 This issue tracks merging the code for zero-copy memory-mapped HDFS client 
 reads from the HDFS-4949 branch to trunk and branch-2.  This includes the 
 patches originally committed on the feature branch in HDFS-4953 and HDFS-5191.



[jira] [Commented] (HDFS-5260) Merge zero-copy memory-mapped HDFS client reads to trunk and branch-2.

2013-09-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13779414#comment-13779414
 ] 

Chris Nauroth commented on HDFS-5260:
-

I've also performed some additional manual testing.  I deployed this code to a 
single node.  I wrote a client that uses the new zero-copy read API.  I ran 
pmap while the client was running to confirm that the block files were in fact 
getting memory-mapped into the client process.  This is all looking good.

 Merge zero-copy memory-mapped HDFS client reads to trunk and branch-2.
 --

 Key: HDFS-5260
 URL: https://issues.apache.org/jira/browse/HDFS-5260
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client, libhdfs
Affects Versions: 3.0.0, 2.3.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-5260.1.patch, HDFS-5260.2.patch


 This issue tracks merging the code for zero-copy memory-mapped HDFS client 
 reads from the HDFS-4949 branch to trunk and branch-2.  This includes the 
 patches originally committed on the feature branch in HDFS-4953 and HDFS-5191.



[jira] [Commented] (HDFS-5260) Merge zero-copy memory-mapped HDFS client reads to trunk and branch-2.

2013-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13779446#comment-13779446
 ] 

Hadoop QA commented on HDFS-5260:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12605329/HDFS-5260.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1529 javac 
compiler warnings (more than the trunk's current 1526 warnings).

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5049//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5049//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Javac warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5049//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5049//console

This message is automatically generated.

 Merge zero-copy memory-mapped HDFS client reads to trunk and branch-2.
 --

 Key: HDFS-5260
 URL: https://issues.apache.org/jira/browse/HDFS-5260
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client, libhdfs
Affects Versions: 3.0.0, 2.3.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-5260.1.patch, HDFS-5260.2.patch


 This issue tracks merging the code for zero-copy memory-mapped HDFS client 
 reads from the HDFS-4949 branch to trunk and branch-2.  This includes the 
 patches originally committed on the feature branch in HDFS-4953 and HDFS-5191.



[jira] [Commented] (HDFS-5223) Allow edit log/fsimage format changes without changing layout version

2013-09-26 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13779511#comment-13779511
 ] 

Todd Lipcon commented on HDFS-5223:
---

bq. I love the flexibility of feature bits, however I'm very nervous about the 
complexity they tend to bring. As long as there are incredibly tight controls it 
can work, but more often than not I've seen this sort of approach lead to some 
incredibly unmaintainable code. The code can get very complex dealing with 
multiple combinations, and the testing/QA can also be very difficult to 
manage. Things can get overwhelmingly complex quite quickly.

I agree that it's a bit more complex, but I'm not sure it's quite as bad in our 
context as it might be in others. Most of our edit log changes to date have 
been fairly simple. Looking through the Feature enum, they tend to fall into 
the following categories:
- Entirely new opcodes (eg CONCAT) - these are easy to do on the writer side by 
just throwing an exception in logEdit() if the feature isn't supported. 
Sometimes these also involve a new set of data written to the FSImage (eg in 
the case of delegation token persistence) but again it should be pretty 
orthogonal to other features.
- New container format features (eg fsimage compression, or checksums on edit 
entries). These are new features which are off by default and orthogonal to any 
other features.
- Single additional fields in existing opcodes. We'd need to be somewhat 
careful not to make use of any of these fields if the feature isn't enabled, 
but I think there's usually pretty clear semantics.
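
The writer-side gating described for the first category (throwing in logEdit() when a feature isn't supported) could look roughly like this; the Feature names and the EditLog shape here are invented for illustration, not the actual HDFS code.

```java
import java.util.EnumSet;

// Sketch of per-feature gating instead of a single layout version: the
// writer refuses to emit an opcode the on-disk format doesn't support.
class FeatureGatedEditLog {
    // Invented feature bits for illustration.
    enum Feature { CONCAT_OP, EDIT_CHECKSUMS, FSIMAGE_COMPRESSION }

    private final EnumSet<Feature> enabled;

    FeatureGatedEditLog(EnumSet<Feature> enabled) { this.enabled = enabled; }

    void logEdit(Feature required, Runnable writeOp) {
        if (!enabled.contains(required)) {
            throw new UnsupportedOperationException(
                "edit log feature not enabled: " + required);
        }
        writeOp.run();
    }
}
```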

Certainly it's more complex than option 1, but I think the ability to downgrade 
without data loss is pretty key. A lot of Hadoop operators are already hesitant 
to upgrade between minor versions, and losing the ability to roll back 
would make it a non-starter for a lot of shops. If that's the case, then I 
think it would be really tough to add new opcodes or other format changes even 
between minor releases (eg 2.3 to 2.4) and convince an operator to do the 
upgrade.

Am I being overly conservative in what operators will put up with, instead of 
overly conservative in the complexity we introduce?


(btw, I agree completely about the no-delete mode -- I think a TTL delete 
mode is also a nice feature we could build in at the same time, where block 
deletions are always delayed for a day, to mitigate potential for data loss 
even with bugs present)
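
The TTL-delete idea could be sketched like this (invented names, not a proposed patch): invalidated blocks queue with a timestamp, and the actual on-disk delete runs only after the grace period expires.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch of TTL-delayed block deletion: deletions wait out a grace period,
// leaving a window to recover blocks if a bug caused the invalidation.
class DelayedBlockDeleter {
    private static final class Pending {
        final long blockId;
        final long enqueuedAtMillis;
        Pending(long blockId, long t) { this.blockId = blockId; this.enqueuedAtMillis = t; }
    }

    private final long graceMillis;
    private final Deque<Pending> queue = new ArrayDeque<>();

    DelayedBlockDeleter(long graceMillis) { this.graceMillis = graceMillis; }

    void scheduleDelete(long blockId, long nowMillis) {
        queue.addLast(new Pending(blockId, nowMillis));
    }

    /** Returns and removes the block ids whose grace period has expired. */
    List<Long> drainExpired(long nowMillis) {
        List<Long> expired = new ArrayList<>();
        while (!queue.isEmpty()
               && nowMillis - queue.peekFirst().enqueuedAtMillis >= graceMillis) {
            expired.add(queue.pollFirst().blockId);
        }
        return expired;
    }
}
```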

 Allow edit log/fsimage format changes without changing layout version
 -

 Key: HDFS-5223
 URL: https://issues.apache.org/jira/browse/HDFS-5223
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.1.1-beta
Reporter: Aaron T. Myers

 Currently all HDFS on-disk formats are versioned by a single layout version. 
 This means that even for changes which might be backward compatible, like the 
 addition of a new edit log op code, we must go through the full 'namenode 
 -upgrade' process, which requires coordination with DNs, etc. HDFS should 
 support a lighter-weight alternative.



[jira] [Updated] (HDFS-5230) Introduce RpcInfo to decouple XDR classes from the RPC API

2013-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5230:
-

Attachment: HDFS-5230.005.patch

Credits to [~brandonli] for pinpointing some more bugs. Addressed his comments.

 Introduce RpcInfo to decouple XDR classes from the RPC API
 --

 Key: HDFS-5230
 URL: https://issues.apache.org/jira/browse/HDFS-5230
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5230.002.patch, HDFS-5230.003.patch, 
 HDFS-5230.004.patch, HDFS-5230.005.patch


 The XDR class is a fundamental aspect of the current NFS server 
 implementation. While the client might potentially have higher-level APIs, it 
 also requires redundant copying since the upstream clients have insufficient 
 information.
 This JIRA introduces a new class, RpcInfo, which (1) decouples XDR from the 
 APIs, turning it into a utility class, and (2) exposes ChannelBuffer directly 
 to the client in order to open the opportunity to avoid copying.



[jira] [Updated] (HDFS-5256) Use guava LoadingCache to implement DFSClientCache

2013-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5256:
-

Attachment: HDFS-5256.004.patch

Thanks [~brandonli] for the comments. Addressed them.

 Use guava LoadingCache to implement DFSClientCache
 --

 Key: HDFS-5256
 URL: https://issues.apache.org/jira/browse/HDFS-5256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.000.patch, HDFS-5256.001.patch, 
 HDFS-5256.002.patch, HDFS-5256.003.patch, HDFS-5256.004.patch


 Google Guava provides an implementation of LoadingCache. Use the LoadingCache 
 to implement DFSClientCache in NFS. 



[jira] [Updated] (HDFS-5230) Introduce RpcInfo to decouple XDR classes from the RPC API

2013-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5230:
-

Attachment: HDFS-5230.006.patch

Rebased.

 Introduce RpcInfo to decouple XDR classes from the RPC API
 --

 Key: HDFS-5230
 URL: https://issues.apache.org/jira/browse/HDFS-5230
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5230.002.patch, HDFS-5230.003.patch, 
 HDFS-5230.004.patch, HDFS-5230.005.patch, HDFS-5230.006.patch


 The XDR class is a fundamental aspect of the current NFS server 
 implementation. While the client might potentially have higher-level APIs, it 
 also requires redundant copying since the upstream clients have insufficient 
 information.
 This JIRA introduces a new class, RpcInfo, which (1) decouples XDR from the 
 APIs, turning it into a utility class, and (2) exposes ChannelBuffer directly 
 to the client in order to open the opportunity to avoid copying.



[jira] [Commented] (HDFS-5256) Use guava LoadingCache to implement DFSClientCache

2013-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13779669#comment-13779669
 ] 

Hadoop QA commented on HDFS-5256:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12605382/HDFS-5256.004.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5051//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5051//console

This message is automatically generated.

 Use guava LoadingCache to implement DFSClientCache
 --

 Key: HDFS-5256
 URL: https://issues.apache.org/jira/browse/HDFS-5256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5256.000.patch, HDFS-5256.001.patch, 
 HDFS-5256.002.patch, HDFS-5256.003.patch, HDFS-5256.004.patch


 Google Guava provides an implementation of LoadingCache. Use the LoadingCache 
 to implement DFSClientCache in NFS. 



[jira] [Commented] (HDFS-5230) Introduce RpcInfo to decouple XDR classes from the RPC API

2013-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13779671#comment-13779671
 ] 

Hadoop QA commented on HDFS-5230:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12605387/HDFS-5230.006.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5053//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5053//console


 Introduce RpcInfo to decouple XDR classes from the RPC API
 --

 Key: HDFS-5230
 URL: https://issues.apache.org/jira/browse/HDFS-5230
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5230.002.patch, HDFS-5230.003.patch, 
 HDFS-5230.004.patch, HDFS-5230.005.patch, HDFS-5230.006.patch


 The XDR class is a fundamental piece of the current NFS server implementation. 
 While it could offer clients a higher-level API, it also forces redundant 
 copying, since upstream clients lack the information needed to avoid it.
 This JIRA introduces a new class, RpcInfo, which (1) decouples XDR from the RPC 
 APIs, turning it into a utility class, and (2) exposes the ChannelBuffer 
 directly to the client, opening up the opportunity to avoid copying.
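 The shape such a class could take can be sketched as follows. This is a 
 hypothetical illustration only: java.nio.ByteBuffer stands in for Netty's 
 ChannelBuffer, and the field and method names are invented, not the committed 
 API:

```java
import java.nio.ByteBuffer;

/** Illustrative sketch: RpcInfo carries a decoded header field plus the raw
 *  payload buffer, so a handler can parse XDR on demand instead of receiving
 *  an eagerly-copied XDR object. */
class RpcInfo {
    private final int xid;            // RPC transaction id from the call header
    private final ByteBuffer data;    // remaining payload, never copied here

    RpcInfo(int xid, ByteBuffer data) {
        this.xid = xid;
        this.data = data;
    }

    int xid() { return xid; }

    /** Hand the handler a read-only view; no copy of the payload is made. */
    ByteBuffer data() { return data.asReadOnlyBuffer(); }
}
```

 A handler then wraps the buffer in an XDR parser only if and when it actually 
 needs the decoded fields.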



[jira] [Reopened] (HDFS-5266) ElasticByteBufferPool#Key does not implement equals.

2013-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reopened HDFS-5266:
-


 ElasticByteBufferPool#Key does not implement equals.
 

 Key: HDFS-5266
 URL: https://issues.apache.org/jira/browse/HDFS-5266
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: HDFS-4949
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: HDFS-4949

 Attachments: HDFS-5266.1.patch


 {{ElasticByteBufferPool#Key}} does not implement {{equals}}.  It's used as a 
 map key, so we should implement this.



[jira] [Updated] (HDFS-5266) ElasticByteBufferPool#Key does not implement equals.

2013-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5266:


Attachment: HDFS-5266.addendum.patch

Right you are, Colin.  Do you mind if I tack an addendum onto this issue?

This patch adds {{hashCode}} and also addresses a warning about checking for 
null in {{equals}}.  I prefer this over adding suppressions, because even if 
{{TreeMap}} doesn't call these methods now, it's possible we could switch data 
structures, or it's possible that future JDK versions of {{TreeMap}} will start 
calling them, and it could be challenging to diagnose.  Can I commit this?

My local run of Findbugs came back clean, so I expect this is the last of it.
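The equals/hashCode pairing described above follows the standard Object 
contract: equal objects must have equal hash codes, and equals must reject 
null.  A sketch (the fields here are invented for illustration and are not the 
actual ElasticByteBufferPool.Key):

```java
import java.util.Objects;

/** Illustrative key class showing the equals/hashCode pairing plus the
 *  null/type guard.  Fields are assumptions, not the real Key. */
final class PoolKey {
    private final int capacity;
    private final long insertionTime;

    PoolKey(int capacity, long insertionTime) {
        this.capacity = capacity;
        this.insertionTime = insertionTime;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof PoolKey)) return false;   // also rejects null
        PoolKey other = (PoolKey) o;
        return capacity == other.capacity && insertionTime == other.insertionTime;
    }

    /** Must agree with equals, or hash-based maps silently misbehave. */
    @Override
    public int hashCode() {
        return Objects.hash(capacity, insertionTime);
    }
}
```

Keeping both methods consistent is exactly what protects against a later 
switch from TreeMap to a hash-based map.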

 ElasticByteBufferPool#Key does not implement equals.
 

 Key: HDFS-5266
 URL: https://issues.apache.org/jira/browse/HDFS-5266
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: HDFS-4949
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: HDFS-4949

 Attachments: HDFS-5266.1.patch, HDFS-5266.addendum.patch


 {{ElasticByteBufferPool#Key}} does not implement {{equals}}.  It's used as a 
 map key, so we should implement this.



[jira] [Updated] (HDFS-5260) Merge zero-copy memory-mapped HDFS client reads to trunk and branch-2.

2013-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5260:


Attachment: HDFS-5260.3.patch

I'm attaching version 3 of the merge patch, which incorporates the addendum on 
HDFS-5266 to fix an additional Findbugs warning.  I'm going to assume that the 
addendum will get +1'd later so that I can get the test-patch rolling on the 
merge here.

 Merge zero-copy memory-mapped HDFS client reads to trunk and branch-2.
 --

 Key: HDFS-5260
 URL: https://issues.apache.org/jira/browse/HDFS-5260
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client, libhdfs
Affects Versions: 3.0.0, 2.3.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-5260.1.patch, HDFS-5260.2.patch, HDFS-5260.3.patch


 This issue tracks merging the code for zero-copy memory-mapped HDFS client 
 reads from the HDFS-4949 branch to trunk and branch-2.  This includes the 
 patches originally committed on the feature branch in HDFS-4953 and HDFS-5191.
