[jira] [Commented] (HDFS-4351) Fix BlockPlacementPolicyDefault#chooseTarget when avoiding stale nodes

2013-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13545764#comment-13545764
 ] 

Hudson commented on HDFS-4351:
--

Integrated in Hadoop-Yarn-trunk #89 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/89/])
HDFS-4351.  In BlockPlacementPolicyDefault.chooseTarget(..), numOfReplicas 
needs to be updated when avoiding stale nodes.  Contributed by Andrew Wang 
(Revision 1429653)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1429653
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java


 Fix BlockPlacementPolicyDefault#chooseTarget when avoiding stale nodes
 --

 Key: HDFS-4351
 URL: https://issues.apache.org/jira/browse/HDFS-4351
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 1.2.0, 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Fix For: 1.2.0, 2.0.3-alpha

 Attachments: hdfs-4351-2.patch, hdfs-4351-3.patch, hdfs-4351-4.patch, 
 hdfs-4351-branch-1-1.patch, hdfs-4351.patch


 There's a bug in {{BlockPlacementPolicyDefault#chooseTarget}} with stale node 
 avoidance enabled (HDFS-3912). If a NotEnoughReplicasException is thrown in 
 the call to {{chooseRandom()}}, {{numOfReplicas}} is not updated together 
 with the partial result in {{result}}, since it is passed by value. The retry 
 call to {{chooseTarget}} then uses this incorrect value.
 This can be seen by enabling stale node detection for 
 {{TestReplicationPolicy#testChooseTargetWithMoreThanAvaiableNodes()}}.
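As a rough illustration (names simplified; this is not the actual BlockPlacementPolicyDefault code), the pass-by-value problem and the return-the-count fix look like this:

{code}
import java.util.ArrayList;
import java.util.List;

public class PassByValueSketch {
  // Java passes int by value, so decrements made here are invisible to the
  // caller unless the remaining count is returned.
  static int chooseRandom(int numOfReplicas, List<String> results) {
    while (numOfReplicas > 0 && results.size() < 2) { // pretend only 2 nodes exist
      results.add("node" + results.size());
      numOfReplicas--; // local change only
    }
    return numOfReplicas; // the fix: hand the updated count back
  }

  public static void main(String[] args) {
    List<String> results = new ArrayList<String>();
    int remaining = chooseRandom(3, results);
    // Without the returned count, a retry would still ask for 3 replicas
    // even though 2 were already placed; with it, it asks for 1.
    System.out.println("placed=" + results.size() + " remaining=" + remaining);
  }
}
{code}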

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4244) Support deleting snapshots

2013-01-07 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13545772#comment-13545772
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4244:
--

- Update the SafeModeException message for FSNamesystem.deleteSnapshot.

- Use getMutableINodesInPath for SnapshotManager.deleteSnapshot.

- Why was checkPathLength(..) removed in NameNodeRpcServer.createSnapshot?

- Combine SnapshotManager.deleteDiffsForSnapshot(..) into 
INodeDirectorySnapshottable.removeSnapshot(..).  Note that 
INodeDirectorySnapshottable extends INodeDirectoryWithSnapshot.

In the meantime, let me change TestINodeDirectoryWithSnapshot to test 
deleteSnapshotDiff.


 Support deleting snapshots
 --

 Key: HDFS-4244
 URL: https://issues.apache.org/jira/browse/HDFS-4244
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-4244.001.patch, HDFS-4244.002.patch, 
 HDFS-4244.003.patch, HDFS-4244.004.patch, HDFS-4244.005.patch


 Provide functionality to delete a snapshot, given the name of the snapshot 
 and the path to the directory where the snapshot was taken.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4230) Listing all the current snapshottable directories

2013-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13545862#comment-13545862
 ] 

Hudson commented on HDFS-4230:
--

Integrated in Hadoop-Hdfs-Snapshots-Branch-build #63 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-Snapshots-Branch-build/63/])
HDFS-4230. Support listing of all the snapshottable directories.  
Contributed by Jing Zhao (Revision 1429643)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1429643
Files : 
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-2802.txt
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshottableDirListing.java


 Listing all the current snapshottable directories
 -

 Key: HDFS-4230
 URL: https://issues.apache.org/jira/browse/HDFS-4230
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4230.001.patch, HDFS-4230.001.patch, 
 HDFS-4230.002.patch, HDFS-4230.003.patch, HDFS-4230.004.patch


 Provide functionality to provide user with metadata about all the 
 snapshottable directories.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4351) Fix BlockPlacementPolicyDefault#chooseTarget when avoiding stale nodes

2013-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13545867#comment-13545867
 ] 

Hudson commented on HDFS-4351:
--

Integrated in Hadoop-Hdfs-trunk #1278 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1278/])
HDFS-4351.  In BlockPlacementPolicyDefault.chooseTarget(..), numOfReplicas 
needs to be updated when avoiding stale nodes.  Contributed by Andrew Wang 
(Revision 1429653)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1429653
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java


 Fix BlockPlacementPolicyDefault#chooseTarget when avoiding stale nodes
 --

 Key: HDFS-4351
 URL: https://issues.apache.org/jira/browse/HDFS-4351
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 1.2.0, 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Fix For: 1.2.0, 2.0.3-alpha

 Attachments: hdfs-4351-2.patch, hdfs-4351-3.patch, hdfs-4351-4.patch, 
 hdfs-4351-branch-1-1.patch, hdfs-4351.patch


 There's a bug in {{BlockPlacementPolicyDefault#chooseTarget}} with stale node 
 avoidance enabled (HDFS-3912). If a NotEnoughReplicasException is thrown in 
 the call to {{chooseRandom()}}, {{numOfReplicas}} is not updated together 
 with the partial result in {{result}}, since it is passed by value. The retry 
 call to {{chooseTarget}} then uses this incorrect value.
 This can be seen by enabling stale node detection for 
 {{TestReplicationPolicy#testChooseTargetWithMoreThanAvaiableNodes()}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4098) Support append to original files which are snapshotted

2013-01-07 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4098:
-

Attachment: h4098_20130107.patch

h4098_20130107.patch: adds FileWithLink, INodeFileUnderConstructionWithLink and 
INodeFileUnderConstructionSnapshot.

 Support append to original files which are snapshotted
 --

 Key: HDFS-4098
 URL: https://issues.apache.org/jira/browse/HDFS-4098
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h4098_20130107.patch


 When a regular file is reopened for append, the type is changed from 
 INodeFile to INodeFileUnderConstruction.  The type of snapshotted files (i.e. 
 original files) is INodeFileWithLink.  We have to support a similar 
 under-construction INodeFileWithLink.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-07 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546016#comment-13546016
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4352:
--

The committed patch actually generated some new javadoc warnings, since it did 
not update the @param tags, e.g.
{code}
  /**
   * Create a new BlockReader specifically to satisfy a read.
   * This method also sends the OP_READ_BLOCK request.
   *
   * @param sock  An established Socket to the DN. The BlockReader will not 
close it normally.
   * This socket must have an associated Channel.
   * @param file  File location
   * @param block  The block object
   * @param blockToken  The block token for security
   * @param startOffset  The read offset, relative to block head
   * @param len  The number of bytes to read
   * @param bufferSize  The IO buffer size (not the client buffer size)
   * @param verifyChecksum  Whether to verify checksum
   * @param clientName  Client name
   * @return New BlockReader instance, or null on error.
   */
  public static BlockReader newBlockReader(BlockReaderFactory.Params params)
 throws IOException {
{code}
However, test-patch failed to detect these warnings.
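For reference, a hedged sketch of how the javadoc could be brought in line with the new signature (the wording is illustrative, not a committed fix):

{code}
  /**
   * Create a new BlockReader specifically to satisfy a read.
   * This method also sends the OP_READ_BLOCK request.
   *
   * @param params  Encapsulates the peer, file, block, block token, start
   *                offset, length, buffer size, checksum flag and client
   *                name that were previously separate arguments.
   * @return New BlockReader instance, or null on error.
   */
  public static BlockReader newBlockReader(BlockReaderFactory.Params params)
      throws IOException {
{code}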


 Encapsulate arguments to BlockReaderFactory in a class
 --

 Key: HDFS-4352
 URL: https://issues.apache.org/jira/browse/HDFS-4352
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0

 Attachments: 01b.patch, 01.patch


 Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
 pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4363) Combine PBHelper and HdfsProtoUtil and remove redundant methods

2013-01-07 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546021#comment-13546021
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4363:
--

- Do you also want to clean up DataTransferProtoUtil?  Some minor suggestions:
{code}
@@ -41,14 +41,12 @@
 public abstract class DataTransferProtoUtil {
   static BlockConstructionStage fromProto(
   OpWriteBlockProto.BlockConstructionStage stage) {
-return BlockConstructionStage.valueOf(BlockConstructionStage.class,
-stage.name());
+return BlockConstructionStage.valueOf(stage.name());
   }
 
   static OpWriteBlockProto.BlockConstructionStage toProto(
   BlockConstructionStage stage) {
-return OpWriteBlockProto.BlockConstructionStage.valueOf(
-stage.name());
+return OpWriteBlockProto.BlockConstructionStage.valueOf(stage.name());
   }
{code}

- This is not directly related to the patch: use valueOf(..) instead of 
switch-case for converting enum types, e.g.
{code}
  public static DatanodeInfoProto.AdminState convert(
  final DatanodeInfo.AdminStates inAs) {
return DatanodeInfoProto.AdminState.valueOf(inAs.name());
  }
{code}

- PBHelper.convert(DatanodeInfo[] dnInfos, int startIdx) and other related 
methods should return List instead of ArrayList.
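For example, a minimal sketch of that suggestion (the element type and the single-element convert(..) helper are assumed from context):

{code}
// Declare the List interface as the return type; the body can still
// allocate an ArrayList internally.
public static List<DatanodeInfoProto> convert(DatanodeInfo[] dnInfos,
    int startIdx) {
  List<DatanodeInfoProto> protos =
      new ArrayList<DatanodeInfoProto>(dnInfos.length - startIdx);
  for (int i = startIdx; i < dnInfos.length; i++) {
    protos.add(convert(dnInfos[i])); // assumes an existing convert(DatanodeInfo)
  }
  return protos;
}
{code}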

- Understood that you probably used Eclipse for formatting.  However, it 
might not be better than manual formatting in some cases.  E.g.
{code}
   // DataEncryptionKey
   public static DataEncryptionKey convert(DataEncryptionKeyProto bet) {
 String encryptionAlgorithm = bet.getEncryptionAlgorithm();
-return new DataEncryptionKey(bet.getKeyId(),
-bet.getBlockPoolId(),
-bet.getNonce().toByteArray(),
-bet.getEncryptionKey().toByteArray(),
-bet.getExpiryDate(),
-encryptionAlgorithm.isEmpty() ? null : encryptionAlgorithm);
+return new DataEncryptionKey(bet.getKeyId(), bet.getBlockPoolId(), bet
+.getNonce().toByteArray(), bet.getEncryptionKey().toByteArray(),
+bet.getExpiryDate(), encryptionAlgorithm.isEmpty() ? null
+: encryptionAlgorithm);
   }
{code}
{code}
@@ -1132,41 +1140,44 @@
 if (fsStats.length >= ClientProtocol.GET_STATS_REMAINING_IDX + 1)
   result.setRemaining(fsStats[ClientProtocol.GET_STATS_REMAINING_IDX]);
 if (fsStats.length >= ClientProtocol.GET_STATS_UNDER_REPLICATED_IDX + 1)
-  result.setUnderReplicated(
-  fsStats[ClientProtocol.GET_STATS_UNDER_REPLICATED_IDX]);
+  result
+  
.setUnderReplicated(fsStats[ClientProtocol.GET_STATS_UNDER_REPLICATED_IDX]);
 if (fsStats.length >= ClientProtocol.GET_STATS_CORRUPT_BLOCKS_IDX + 1)
-  result.setCorruptBlocks(
-  fsStats[ClientProtocol.GET_STATS_CORRUPT_BLOCKS_IDX]);
+  result
+  
.setCorruptBlocks(fsStats[ClientProtocol.GET_STATS_CORRUPT_BLOCKS_IDX]);
 if (fsStats.length >= ClientProtocol.GET_STATS_MISSING_BLOCKS_IDX + 1)
-  result.setMissingBlocks(
-  fsStats[ClientProtocol.GET_STATS_MISSING_BLOCKS_IDX]);
+  result
+  
.setMissingBlocks(fsStats[ClientProtocol.GET_STATS_MISSING_BLOCKS_IDX]);
 return result.build();
   }
{code}


 Combine PBHelper and HdfsProtoUtil and remove redundant methods
 ---

 Key: HDFS-4363
 URL: https://issues.apache.org/jira/browse/HDFS-4363
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4363.patch, HDFS-4363.patch


 There are many methods overlapping between PBHelper and HdfsProtoUtil. This 
 jira combines these two helper classes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4350) Make enabling of stale marking on read and write paths independent

2013-01-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4350:
--

Status: Open  (was: Patch Available)

 Make enabling of stale marking on read and write paths independent
 --

 Key: HDFS-4350
 URL: https://issues.apache.org/jira/browse/HDFS-4350
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4350-1.patch


 Marking of datanodes as stale for the read and write paths was introduced in 
 HDFS-3703 and HDFS-3912 respectively. This is enabled using two new keys, 
 {{DFS_NAMENODE_CHECK_STALE_DATANODE_KEY}} and 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY}}. However, there is 
 currently a dependency: you cannot enable write marking without also 
 enabling read marking, because the first key enables both the staleness 
 check and read marking.
 I propose renaming the first key to 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY}} and making the check 
 enabled if either of the keys is set. This will allow read and write marking 
 to be enabled independently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4350) Make enabling of stale marking on read and write paths independent

2013-01-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4350:
--

Attachment: hdfs-4350-1.patch

 Make enabling of stale marking on read and write paths independent
 --

 Key: HDFS-4350
 URL: https://issues.apache.org/jira/browse/HDFS-4350
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4350-1.patch


 Marking of datanodes as stale for the read and write paths was introduced in 
 HDFS-3703 and HDFS-3912 respectively. This is enabled using two new keys, 
 {{DFS_NAMENODE_CHECK_STALE_DATANODE_KEY}} and 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY}}. However, there is 
 currently a dependency: you cannot enable write marking without also 
 enabling read marking, because the first key enables both the staleness 
 check and read marking.
 I propose renaming the first key to 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY}} and making the check 
 enabled if either of the keys is set. This will allow read and write marking 
 to be enabled independently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4350) Make enabling of stale marking on read and write paths independent

2013-01-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4350:
--

Status: Patch Available  (was: Open)

 Make enabling of stale marking on read and write paths independent
 --

 Key: HDFS-4350
 URL: https://issues.apache.org/jira/browse/HDFS-4350
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4350-1.patch


 Marking of datanodes as stale for the read and write paths was introduced in 
 HDFS-3703 and HDFS-3912 respectively. This is enabled using two new keys, 
 {{DFS_NAMENODE_CHECK_STALE_DATANODE_KEY}} and 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY}}. However, there is 
 currently a dependency: you cannot enable write marking without also 
 enabling read marking, because the first key enables both the staleness 
 check and read marking.
 I propose renaming the first key to 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY}} and making the check 
 enabled if either of the keys is set. This will allow read and write marking 
 to be enabled independently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4350) Make enabling of stale marking on read and write paths independent

2013-01-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4350:
--

Attachment: (was: hdfs-4350-1.patch)

 Make enabling of stale marking on read and write paths independent
 --

 Key: HDFS-4350
 URL: https://issues.apache.org/jira/browse/HDFS-4350
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4350-1.patch


 Marking of datanodes as stale for the read and write paths was introduced in 
 HDFS-3703 and HDFS-3912 respectively. This is enabled using two new keys, 
 {{DFS_NAMENODE_CHECK_STALE_DATANODE_KEY}} and 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY}}. However, there is 
 currently a dependency: you cannot enable write marking without also 
 enabling read marking, because the first key enables both the staleness 
 check and read marking.
 I propose renaming the first key to 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY}} and making the check 
 enabled if either of the keys is set. This will allow read and write marking 
 to be enabled independently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4350) Make enabling of stale marking on read and write paths independent

2013-01-07 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546034#comment-13546034
 ] 

Andrew Wang commented on HDFS-4350:
---

Tried to bump Jenkins (cancel patch, upload, submit patch?). I think it's 
currently ready for review though.

 Make enabling of stale marking on read and write paths independent
 --

 Key: HDFS-4350
 URL: https://issues.apache.org/jira/browse/HDFS-4350
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4350-1.patch


 Marking of datanodes as stale for the read and write paths was introduced in 
 HDFS-3703 and HDFS-3912 respectively. This is enabled using two new keys, 
 {{DFS_NAMENODE_CHECK_STALE_DATANODE_KEY}} and 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY}}. However, there is 
 currently a dependency: you cannot enable write marking without also 
 enabling read marking, because the first key enables both the staleness 
 check and read marking.
 I propose renaming the first key to 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY}} and making the check 
 enabled if either of the keys is set. This will allow read and write marking 
 to be enabled independently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4350) Make enabling of stale marking on read and write paths independent

2013-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546036#comment-13546036
 ] 

Hadoop QA commented on HDFS-4350:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563592/hdfs-4350-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3749//console

This message is automatically generated.

 Make enabling of stale marking on read and write paths independent
 --

 Key: HDFS-4350
 URL: https://issues.apache.org/jira/browse/HDFS-4350
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4350-1.patch


 Marking of datanodes as stale for the read and write paths was introduced in 
 HDFS-3703 and HDFS-3912 respectively. This is enabled using two new keys, 
 {{DFS_NAMENODE_CHECK_STALE_DATANODE_KEY}} and 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY}}. However, there is 
 currently a dependency: you cannot enable write marking without also 
 enabling read marking, because the first key enables both the staleness 
 check and read marking.
 I propose renaming the first key to 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY}} and making the check 
 enabled if either of the keys is set. This will allow read and write marking 
 to be enabled independently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4350) Make enabling of stale marking on read and write paths independent

2013-01-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4350:
--

Attachment: hdfs-4350-1.patch

 Make enabling of stale marking on read and write paths independent
 --

 Key: HDFS-4350
 URL: https://issues.apache.org/jira/browse/HDFS-4350
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4350-1.patch


 Marking of datanodes as stale for the read and write paths was introduced in 
 HDFS-3703 and HDFS-3912 respectively. This is enabled using two new keys, 
 {{DFS_NAMENODE_CHECK_STALE_DATANODE_KEY}} and 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY}}. However, there is 
 currently a dependency: you cannot enable write marking without also 
 enabling read marking, because the first key enables both the staleness 
 check and read marking.
 I propose renaming the first key to 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY}} and making the check 
 enabled if either of the keys is set. This will allow read and write marking 
 to be enabled independently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4350) Make enabling of stale marking on read and write paths independent

2013-01-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4350:
--

Attachment: (was: hdfs-4350-1.patch)

 Make enabling of stale marking on read and write paths independent
 --

 Key: HDFS-4350
 URL: https://issues.apache.org/jira/browse/HDFS-4350
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4350-1.patch


 Marking of datanodes as stale for the read and write paths was introduced in 
 HDFS-3703 and HDFS-3912 respectively. This is enabled using two new keys, 
 {{DFS_NAMENODE_CHECK_STALE_DATANODE_KEY}} and 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY}}. However, there is 
 currently a dependency: you cannot enable write marking without also 
 enabling read marking, because the first key enables both the staleness 
 check and read marking.
 I propose renaming the first key to 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY}} and making the check 
 enabled if either of the keys is set. This will allow read and write marking 
 to be enabled independently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4350) Make enabling of stale marking on read and write paths independent

2013-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546042#comment-13546042
 ] 

Hadoop QA commented on HDFS-4350:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563592/hdfs-4350-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3750//console

This message is automatically generated.

 Make enabling of stale marking on read and write paths independent
 --

 Key: HDFS-4350
 URL: https://issues.apache.org/jira/browse/HDFS-4350
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4350-1.patch


 Marking of datanodes as stale for the read and write paths was introduced in 
 HDFS-3703 and HDFS-3912 respectively. This is enabled using two new keys, 
 {{DFS_NAMENODE_CHECK_STALE_DATANODE_KEY}} and 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY}}. However, there is 
 currently a dependency: you cannot enable write marking without also 
 enabling read marking, because the first key enables both the staleness 
 check and read marking.
 I propose renaming the first key to 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY}} and making the check 
 enabled if either of the keys is set. This will allow read and write marking 
 to be enabled independently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4350) Make enabling of stale marking on read and write paths independent

2013-01-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4350:
--

Attachment: hdfs-4350-2.patch

 Make enabling of stale marking on read and write paths independent
 --

 Key: HDFS-4350
 URL: https://issues.apache.org/jira/browse/HDFS-4350
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4350-1.patch, hdfs-4350-2.patch


 Marking of datanodes as stale for the read and write paths was introduced in 
 HDFS-3703 and HDFS-3912 respectively. This is enabled using two new keys, 
 {{DFS_NAMENODE_CHECK_STALE_DATANODE_KEY}} and 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY}}. However, there is 
 currently a dependency: you cannot enable write marking without also 
 enabling read marking, because the first key enables both the staleness 
 check and read marking.
 I propose renaming the first key to 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY}} and making the check 
 enabled if either of the keys is set. This will allow read and write marking 
 to be enabled independently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3970) BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead of DataStorage to read prev version file.

2013-01-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-3970:
--

Status: Open  (was: Patch Available)

 BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead 
 of DataStorage to read prev version file.
 ---

 Key: HDFS-3970
 URL: https://issues.apache.org/jira/browse/HDFS-3970
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.0.2-alpha, 3.0.0
Reporter: Vinay
Assignee: Vinay
 Attachments: hdfs-3970-1.patch, HDFS-3970.patch


 {code}// read attributes out of the VERSION file of previous directory
 DataStorage prevInfo = new DataStorage();
 prevInfo.readPreviousVersionProperties(bpSd);{code}
 In the above code snippet a BlockPoolSliceStorage instance should be used; 
 otherwise rollback fails because the 'storageType' property is missing, as it 
 is not present in the initial VERSION file.
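A hedged sketch of the proposed change (assuming BlockPoolSliceStorage exposes the same readPreviousVersionProperties(..) and a compatible constructor):

{code}
// Read the previous VERSION file with the block-pool-level storage class,
// whose expected property set does not include storageType.
BlockPoolSliceStorage prevInfo = new BlockPoolSliceStorage();
prevInfo.readPreviousVersionProperties(bpSd);
{code}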

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3970) BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead of DataStorage to read prev version file.

2013-01-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-3970:
--

Attachment: (was: hdfs-3970-1.patch)

 BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead 
 of DataStorage to read prev version file.
 ---

 Key: HDFS-3970
 URL: https://issues.apache.org/jira/browse/HDFS-3970
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Vinay
Assignee: Vinay
 Attachments: hdfs-3970-1.patch, HDFS-3970.patch


 {code}// read attributes out of the VERSION file of previous directory
 DataStorage prevInfo = new DataStorage();
 prevInfo.readPreviousVersionProperties(bpSd);{code}
 In the above code snippet a BlockPoolSliceStorage instance should be used; 
 otherwise rollback fails because the 'storageType' property is missing, as it 
 is not present in the initial VERSION file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3970) BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead of DataStorage to read prev version file.

2013-01-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-3970:
--

Attachment: hdfs-3970-1.patch

 BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead 
 of DataStorage to read prev version file.
 ---

 Key: HDFS-3970
 URL: https://issues.apache.org/jira/browse/HDFS-3970
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Vinay
Assignee: Vinay
 Attachments: hdfs-3970-1.patch, HDFS-3970.patch


 {code}// read attributes out of the VERSION file of previous directory
 DataStorage prevInfo = new DataStorage();
 prevInfo.readPreviousVersionProperties(bpSd);{code}
 In the above code snippet a BlockPoolSliceStorage instance should be used; 
 otherwise rollback fails because the 'storageType' property is missing, as it 
 is not present in the initial VERSION file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3970) BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead of DataStorage to read prev version file.

2013-01-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-3970:
--

Status: Patch Available  (was: Open)

 BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead 
 of DataStorage to read prev version file.
 ---

 Key: HDFS-3970
 URL: https://issues.apache.org/jira/browse/HDFS-3970
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.0.2-alpha, 3.0.0
Reporter: Vinay
Assignee: Vinay
 Attachments: hdfs-3970-1.patch, HDFS-3970.patch


 {code}// read attributes out of the VERSION file of previous directory
 DataStorage prevInfo = new DataStorage();
 prevInfo.readPreviousVersionProperties(bpSd);{code}
 In the above code snippet a BlockPoolSliceStorage instance should be used; 
 otherwise rollback fails because the 'storageType' property is missing, as it 
 is not present in the initial VERSION file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-07 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546130#comment-13546130
 ] 

Colin Patrick McCabe commented on HDFS-4352:


Hi Nicholas,

HDFS-4353 adds asserts to {{BlockReaderFactory#newBlockReader}} that check that 
all essential parameters are set.  That is how you can know that you have set 
all the essential parameters.

Does that address your concerns?  If not, we can revert this.  It was done to 
improve readability (and reviewability) but it is not an essential part of the 
patch set.

 Encapsulate arguments to BlockReaderFactory in a class
 --

 Key: HDFS-4352
 URL: https://issues.apache.org/jira/browse/HDFS-4352
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0

 Attachments: 01b.patch, 01.patch


 Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
 pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4350) Make enabling of stale marking on read and write paths independent

2013-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546171#comment-13546171
 ] 

Hadoop QA commented on HDFS-4350:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563594/hdfs-4350-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3751//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3751//console

This message is automatically generated.

 Make enabling of stale marking on read and write paths independent
 --

 Key: HDFS-4350
 URL: https://issues.apache.org/jira/browse/HDFS-4350
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4350-1.patch, hdfs-4350-2.patch


 Marking of datanodes as stale for the read and write paths was introduced in 
 HDFS-3703 and HDFS-3912 respectively. This is enabled using two new keys, 
 {{DFS_NAMENODE_CHECK_STALE_DATANODE_KEY}} and 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY}}. However, there is 
 currently a dependency: you cannot enable write marking without also 
 enabling read marking, because the first key enables both the staleness 
 check and read marking.
 I propose renaming the first key to 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY}} and making the check 
 enabled if either of the keys is set. This will allow read and write marking 
 to be enabled independently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4362) GetDelegationTokenResponseProto does not handle null token

2013-01-07 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546197#comment-13546197
 ] 

Aaron T. Myers commented on HDFS-4362:
--

+1, patch looks good to me. Thanks, Suresh.

 GetDelegationTokenResponseProto does not handle null token
 --

 Key: HDFS-4362
 URL: https://issues.apache.org/jira/browse/HDFS-4362
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Critical
 Attachments: HDFS-4362.patch


 While working on HADOOP-9173, I noticed that 
 GetDelegationTokenResponseProto declares the token field as required. 
 However, returning a null token is expected, both as defined in 
 FileSystem#getDelegationToken() and based on the HDFS implementation. This 
 jira intends to make the field optional.
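With the field optional, the server-side translator can simply omit it when the underlying call returns null; a sketch (builder and helper names are assumptions based on the surrounding protobuf conventions):

{code}
GetDelegationTokenResponseProto.Builder builder =
    GetDelegationTokenResponseProto.newBuilder();
if (token != null) {
  // Only set the field when a token exists; omitting an optional field
  // is a valid way to express "no token".
  builder.setToken(PBHelper.convert(token));
}
return builder.build();
{code}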

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4295) Using port 1023 should be valid when starting Secure DataNode

2013-01-07 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546206#comment-13546206
 ] 

Aaron T. Myers commented on HDFS-4295:
--

Hi liuyang, we should really continue this conversation on the 
u...@hadoop.apache.org mailing list, since it's not an issue with this 
bug/patch. The short answer to your question is: you have to start the DN as 
root, and make sure that the HADOOP_SECURE_DN_USER environment variable is set 
to 'hdfs' so that the DN knows which user to switch to.

If you have any more questions about this, please email u...@hadoop.apache.org.

 Using port 1023 should be valid when starting Secure DataNode
 -

 Key: HDFS-4295
 URL: https://issues.apache.org/jira/browse/HDFS-4295
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Stephen Chu
Assignee: Stephen Chu
  Labels: trivial
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HDFS-4295.patch


 In SecureDataNodeStarter:
 {code}
 if ((ss.getLocalPort() >= 1023 || listener.getPort() >= 1023) && 
 UserGroupInformation.isSecurityEnabled()) {
   throw new RuntimeException("Cannot start secure datanode with " + 
 "unprivileged ports");
 }
 {code}
 This prohibits using port 1023, but that port should be allowed because only 
 root can listen on ports below 1024.
 We can change the >= to >.
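The check would then read (applying the one-character change described above):

{code}
// Ports up to 1023 are privileged, so 1023 itself is fine for a secure
// DataNode; only reject 1024 and above.
if ((ss.getLocalPort() > 1023 || listener.getPort() > 1023) &&
    UserGroupInformation.isSecurityEnabled()) {
  throw new RuntimeException("Cannot start secure datanode with " +
      "unprivileged ports");
}
{code}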

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes

2013-01-07 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546217#comment-13546217
 ] 

Todd Lipcon commented on HDFS-4353:
---

Looking almost ready. A few minor nits:

{code}
+ * it has been configured-- like when we're reading from the last replica
+ * of a block.
{code}
This description is a little unclear to me. Better to say "such as when the 
caller has explicitly asked for a file to be opened without checksum 
verification".



{code}
   public static BlockReader newBlockReader(Params params) throws IOException {
+assert(params.getPeer() != null);
+assert(params.getBlock() != null);
+assert(params.getDatanodeID() != null);
{code}

I think {{Preconditions.checkArgument}} is more appropriate here - the checks 
are cheap (likely free) null checks, so we may as well always verify them.
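For instance (a sketch using Guava's Preconditions; the messages are illustrative additions):

{code}
// Sketch using com.google.common.base.Preconditions; unlike asserts, these
// checks run even when the JVM is started without -ea.
public static BlockReader newBlockReader(Params params) throws IOException {
  Preconditions.checkArgument(params.getPeer() != null, "peer is required");
  Preconditions.checkArgument(params.getBlock() != null, "block is required");
  Preconditions.checkArgument(params.getDatanodeID() != null,
      "datanodeID is required");
  // ... remainder of the method unchanged ...
}
{code}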



{code}
+  // TODO: create a wrapper class that makes channel-less sockets look like
+  // they have a channel, so that we can finally remove the legacy
+  // RemoteBlockReader.
{code}
Can you reference the JIRA for removing RBR here? I know we have one filed 
somewhere.


+1 after these are addressed.


 Encapsulate connections to peers in Peer and PeerServer classes
 ---

 Key: HDFS-4353
 URL: https://issues.apache.org/jira/browse/HDFS-4353
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, hdfs-client
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: 02b-cumulative.patch, 02c.patch, 02c.patch, 
 02-cumulative.patch, 02d.patch, 02e.patch


 Encapsulate connections to peers into the {{Peer}} and {{PeerServer}} 
 classes.  Since many Java classes may be involved with these connections, it 
 makes sense to create a container for them.  For example, a connection to a 
 peer may have an input stream, output stream, ReadableByteChannel, encrypted 
 output stream, and encrypted input stream associated with it.
 This makes us less dependent on the {{NetUtils}} methods, which use 
 {{instanceof}} to manipulate socket and stream states based on the runtime 
 type.  It also paves the way to introduce UNIX domain sockets, which don't 
 inherit from {{java.net.Socket}}.
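As a rough sketch, such a container might look like the following (the method set is illustrative, not the actual Peer interface from the patch):

{code}
public interface Peer extends java.io.Closeable {
  java.io.InputStream getInputStream() throws java.io.IOException;
  java.io.OutputStream getOutputStream() throws java.io.IOException;
  java.nio.channels.ReadableByteChannel getInputStreamChannel();
  boolean hasSecureChannel(); // e.g. encrypted streams attached
  String getRemoteAddressString();
}
{code}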

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes

2013-01-07 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4353:
---

Attachment: 02f.patch

 Encapsulate connections to peers in Peer and PeerServer classes
 ---

 Key: HDFS-4353
 URL: https://issues.apache.org/jira/browse/HDFS-4353
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, hdfs-client
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: 02b-cumulative.patch, 02c.patch, 02c.patch, 
 02-cumulative.patch, 02d.patch, 02e.patch, 02f.patch


 Encapsulate connections to peers into the {{Peer}} and {{PeerServer}} 
 classes.  Since many Java classes may be involved with these connections, it 
 makes sense to create a container for them.  For example, a connection to a 
 peer may have an input stream, output stream, ReadableByteChannel, encrypted 
 output stream, and encrypted input stream associated with it.
 This makes us less dependent on the {{NetUtils}} methods, which use 
 {{instanceof}} to manipulate socket and stream states based on the runtime 
 type.  It also paves the way to introduce UNIX domain sockets, which don't 
 inherit from {{java.net.Socket}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes

2013-01-07 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546246#comment-13546246
 ] 

Todd Lipcon commented on HDFS-4353:
---

Latest patch looks good. +1 pending Jenkins.

 Encapsulate connections to peers in Peer and PeerServer classes
 ---

 Key: HDFS-4353
 URL: https://issues.apache.org/jira/browse/HDFS-4353
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, hdfs-client
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: 02b-cumulative.patch, 02c.patch, 02c.patch, 
 02-cumulative.patch, 02d.patch, 02e.patch, 02f.patch


 Encapsulate connections to peers into the {{Peer}} and {{PeerServer}} 
 classes.  Since many Java classes may be involved with these connections, it 
 makes sense to create a container for them.  For example, a connection to a 
 peer may have an input stream, output stream, ReadableByteChannel, encrypted 
 output stream, and encrypted input stream associated with it.
 This makes us less dependent on the {{NetUtils}} methods, which use 
 {{instanceof}} to manipulate socket and stream states based on the runtime 
 type.  It also paves the way to introduce UNIX domain sockets, which don't 
 inherit from {{java.net.Socket}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3970) BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead of DataStorage to read prev version file.

2013-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546271#comment-13546271
 ] 

Hadoop QA commented on HDFS-3970:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563599/hdfs-3970-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3775//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3775//console

This message is automatically generated.

 BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead 
 of DataStorage to read prev version file.
 ---

 Key: HDFS-3970
 URL: https://issues.apache.org/jira/browse/HDFS-3970
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Vinay
Assignee: Vinay
 Attachments: hdfs-3970-1.patch, HDFS-3970.patch


 {code}// read attributes out of the VERSION file of previous directory
 DataStorage prevInfo = new DataStorage();
 prevInfo.readPreviousVersionProperties(bpSd);{code}
 In the above code snippet a BlockPoolSliceStorage instance should be used; 
 otherwise rollback fails because the 'storageType' property is missing, as it 
 is not present in the initial VERSION file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3970) BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead of DataStorage to read prev version file.

2013-01-07 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3970:
-

Target Version/s: 2.0.3-alpha

 BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead 
 of DataStorage to read prev version file.
 ---

 Key: HDFS-3970
 URL: https://issues.apache.org/jira/browse/HDFS-3970
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Vinay
Assignee: Vinay
 Attachments: hdfs-3970-1.patch, HDFS-3970.patch


 {code}// read attributes out of the VERSION file of previous directory
 DataStorage prevInfo = new DataStorage();
 prevInfo.readPreviousVersionProperties(bpSd);{code}
 In the above code snippet a BlockPoolSliceStorage instance should be used; 
 otherwise rollback fails because the 'storageType' property is missing, as it 
 is not present in the initial VERSION file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3970) BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead of DataStorage to read prev version file.

2013-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546310#comment-13546310
 ] 

Hadoop QA commented on HDFS-3970:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563599/hdfs-3970-1.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3784//console

This message is automatically generated.

 BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead 
 of DataStorage to read prev version file.
 ---

 Key: HDFS-3970
 URL: https://issues.apache.org/jira/browse/HDFS-3970
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Vinay
Assignee: Vinay
 Attachments: hdfs-3970-1.patch, HDFS-3970.patch


 {code}// read attributes out of the VERSION file of previous directory
 DataStorage prevInfo = new DataStorage();
 prevInfo.readPreviousVersionProperties(bpSd);{code}
 In the above code snippet a BlockPoolSliceStorage instance should be used; 
 otherwise rollback fails because the 'storageType' property is missing, as it 
 is not present in the initial VERSION file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3970) BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead of DataStorage to read prev version file.

2013-01-07 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3970:
-

   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
 Assignee: Andrew Wang  (was: Vinay)
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2 based on Todd's +1.

Thanks a lot for the contribution, Vinay and Andrew.

 BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead 
 of DataStorage to read prev version file.
 ---

 Key: HDFS-3970
 URL: https://issues.apache.org/jira/browse/HDFS-3970
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Vinay
Assignee: Andrew Wang
 Fix For: 2.0.3-alpha

 Attachments: hdfs-3970-1.patch, HDFS-3970.patch


 {code}
 // read attributes out of the VERSION file of previous directory
 DataStorage prevInfo = new DataStorage();
 prevInfo.readPreviousVersionProperties(bpSd);
 {code}
 In the above code snippet, a BlockPoolSliceStorage instance should be used. 
 Otherwise, rollback results in the 'storageType' property being missing, since 
 it is not present in the initial VERSION file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4350) Make enabling of stale marking on read and write paths independent

2013-01-07 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546317#comment-13546317
 ] 

Todd Lipcon commented on HDFS-4350:
---

{code}
+   * Set the value of {@link DatanodeManager#isAvoidingStaleDataNodesForWrite}.
+   * The HeartbeatManager disables write avoidance when more than
+   * dfs.namenode.write.stale.datanode.ratio of DataNodes are marked as stale.
{code}

The double negative here is confusing. I think it's better to say: "The 
HeartbeatManager will allow writes to stale datanodes when more than ..."



{code}
+  boolean getCheckForStaleDataNodes() {
{code}

I think for a boolean-type return value, something like 
{{shouldCheckForStale...}} is better than {{getCheckForStale...}}.


{code}
   HeartbeatManager(final Namesystem namesystem,
-  final BlockManager blockManager, final Configuration conf) {
+  final BlockManager blockManager, final Configuration conf,
+  final boolean avoidStaleDataNodesForWrite, final long staleInterval) {
{code}

Not a fan of proliferating constructor parameters here. Since we already have 
the conf, and those two new parameters just come from the conf, I think the 
earlier approach was better (having both classes access the conf).


{code}
+  this.heartbeatRecheckInterval = staleInterval;
+  LOG.info("Setting hearbeat interval to " + staleInterval
+      + " since dfs.namenode.stale.datanode.interval < "
+      + "dfs.namenode.heartbeat.recheck-interval");
{code}

This info message is a little unclear -- the heartbeat interval isn't actually 
being changed, just the interval at which the HeartbeatManager wakes up to 
check for expired DNs. The message makes it sound like the datanodes will 
heartbeat more often, but in fact it's only an NN-side frequency that's being 
clamped down.

Also, please interpolate the {{BLAH_BLAH_KEY}} constants instead of actually 
writing the configuration keys in the log message.
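
For illustration, the message might look something like this (a sketch; the 
constant names are assumed):
{code}
// Sketch: interpolate the config-key constants and describe the
// NN-side recheck interval rather than the DN heartbeat interval.
LOG.info("Setting the stale-node recheck interval to " + staleInterval
    + " since " + DFSConfigKeys.DFS_NAMENODE_STALE_DATANODE_INTERVAL_KEY
    + " is less than "
    + DFSConfigKeys.DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY);
{code}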


{code}
+Indicate whether or not to avoid reading from "stale" datanodes whose
{code}
Should use {{&quot;}} here, for valid XML.


 Make enabling of stale marking on read and write paths independent
 --

 Key: HDFS-4350
 URL: https://issues.apache.org/jira/browse/HDFS-4350
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4350-1.patch, hdfs-4350-2.patch


 Marking of datanodes as stale for the read and write path was introduced in 
 HDFS-3703 and HDFS-3912 respectively. This is enabled using two new keys, 
 {{DFS_NAMENODE_CHECK_STALE_DATANODE_KEY}} and 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY}}. However, there currently 
 exists a dependency: you cannot enable write marking without also enabling 
 read marking, because the first key enables both the staleness check and read 
 marking.
 I propose renaming the first key to 
 {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY}}, and making the check 
 enabled if either key is set. This will allow read and write marking to be 
 enabled independently.
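
 A sketch of the proposed decoupling (constant names assumed); the staleness 
 check turns on when either avoidance flag is set:
 {code}
 // Sketch: enable the staleness check if either read or write avoidance
 // is configured, making the two paths independent.
 final boolean avoidStaleForRead = conf.getBoolean(
     DFSConfigKeys.DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY, false);
 final boolean avoidStaleForWrite = conf.getBoolean(
     DFSConfigKeys.DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY, false);
 final boolean checkForStaleDataNodes = avoidStaleForRead || avoidStaleForWrite;
 {code}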

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3970) BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead of DataStorage to read prev version file.

2013-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546343#comment-13546343
 ] 

Hudson commented on HDFS-3970:
--

Integrated in Hadoop-trunk-Commit #3188 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3188/])
HDFS-3970. Fix bug causing rollback of HDFS upgrade to result in bad 
VERSION file. Contributed by Vinay and Andrew Wang. (Revision 1430037)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1430037
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRollback.java


 BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead 
 of DataStorage to read prev version file.
 ---

 Key: HDFS-3970
 URL: https://issues.apache.org/jira/browse/HDFS-3970
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Vinay
Assignee: Andrew Wang
 Fix For: 2.0.3-alpha

 Attachments: hdfs-3970-1.patch, HDFS-3970.patch


 {code}
 // read attributes out of the VERSION file of previous directory
 DataStorage prevInfo = new DataStorage();
 prevInfo.readPreviousVersionProperties(bpSd);
 {code}
 In the above code snippet, a BlockPoolSliceStorage instance should be used. 
 Otherwise, rollback results in the 'storageType' property being missing, since 
 it is not present in the initial VERSION file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-07 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546381#comment-13546381
 ] 

Eli Collins commented on HDFS-4261:
---

Any update, Junping? TestBalancerWithNodeGroup currently fails 100% of the 
time on my local jenkins slave running trunk. We should annotate these test 
methods with timeouts, a la HDFS-4061 and HDFS-4008, so we get clean test 
failures in case this regresses.
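
For reference, a sketch of the kind of annotation meant here (the timeout 
value is illustrative only):
{code}
// Sketch: a JUnit method-level timeout turns a hang into a clean test
// failure instead of timing out the whole build.
@Test(timeout = 60000)  // 60 seconds, an assumed value
public void testBalancerWithNodeGroup() throws Exception {
  // ... existing test body ...
}
{code}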

 TestBalancerWithNodeGroup times out
 ---

 Key: HDFS-4261
 URL: https://issues.apache.org/jira/browse/HDFS-4261
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer
Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Junping Du
 Fix For: 3.0.0

 Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
 HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
 HDFS-4261-v7.patch, jstack-mac-18567, jstack-win-5488, 
 org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
  
 org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win


 When I manually ran TestBalancerWithNodeGroup, it always timed out on my 
 machine.  Looking at the Jenkins report [build 
 #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
  TestBalancerWithNodeGroup was somehow skipped, so the problem was not 
 detected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4035) LightWeightGSet and LightWeightHashSet increment a volatile without synchronization

2013-01-07 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546428#comment-13546428
 ] 

Aaron T. Myers commented on HDFS-4035:
--

+1, patch looks good.

 LightWeightGSet and LightWeightHashSet increment a volatile without 
 synchronization
 ---

 Key: HDFS-4035
 URL: https://issues.apache.org/jira/browse/HDFS-4035
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-4035.txt


 LightWeightGSet and LightWeightHashSet have a volatile modification field 
 that they use to detect updates while iterating so they can throw a 
 ConcurrentModificationException. Since these LightWeight classes are 
 explicitly not thread safe (eg access to their members is not synchronized) 
 then the current use is OK, we just need to update findbugsExcludeFile.xml to 
 exclude them.
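
 To illustrate the pattern being excluded (a simplified sketch, not the actual 
 class code):
 {code}
 // Sketch of VO_VOLATILE_INCREMENT: a volatile counter bumped without
 // synchronization. It only supports fail-fast iteration, and the class
 // is documented as not thread safe, so the warning is excluded.
 private volatile int modification = 0;
 
 boolean remove(final Object key) {
   modification++;  // not atomic; callers must synchronize externally
   // ... actual removal logic ...
   return true;
 }
 {code}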
   

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4030) BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should be AtomicLongs

2013-01-07 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546431#comment-13546431
 ] 

Aaron T. Myers commented on HDFS-4030:
--

+1, patch looks good.

 BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should 
 be AtomicLongs
 --

 Key: HDFS-4030
 URL: https://issues.apache.org/jira/browse/HDFS-4030
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-4030.txt, hdfs-4030.txt


 The BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount 
 fields are currently volatile longs which are incremented, which isn't thread 
 safe. It looks like they're always incremented on paths that hold the NN 
 write lock but it would be easier and less error prone for future changes if 
 we made them AtomicLongs. The other volatile long members are just set in one 
 thread and read in another so they're fine as is.
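
 A minimal sketch of the change described above (method shape assumed):
 {code}
 // Sketch: an AtomicLong makes the increment a single atomic
 // read-modify-write, unlike "volatile long x; x++", which is a
 // non-atomic read, add, and write.
 private final AtomicLong excessBlocksCount = new AtomicLong(0);
 
 void incrementExcessBlocks() {
   excessBlocksCount.incrementAndGet();
 }
 {code}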

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4031) Update findbugsExcludeFile.xml to include findbugs 2 exclusions

2013-01-07 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546432#comment-13546432
 ] 

Aaron T. Myers commented on HDFS-4031:
--

+1, patch looks good.

 Update findbugsExcludeFile.xml to include findbugs 2 exclusions
 ---

 Key: HDFS-4031
 URL: https://issues.apache.org/jira/browse/HDFS-4031
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-4031.txt


 Findbugs 2 warns about some volatile increments (VO_VOLATILE_INCREMENT) that 
 unlike HDFS-4029 and HDFS-4030 are less problematic:
 - numFailedVolumes is only incremented in one thread and that access is 
 synchronized
 - pendingReceivedRequests in BPServiceActor is clearly synchronized
 It would be reasonable to make these Atomics as well, but I think their uses 
 are clearly correct, so for these the warning is more obviously bogus and can 
 safely be ignored.
 There's also a SE_BAD_FIELD_INNER_CLASS warning in BPServiceActor 
 (LocalDatanodeInfo's anonymous class is serializable but LocalDatanodeInfo 
 itself is not) that is OK to ignore, since we don't serialize 
 LocalDatanodeInfo.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4034) Remove redundant null checks

2013-01-07 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546435#comment-13546435
 ] 

Aaron T. Myers commented on HDFS-4034:
--

+1, patch looks good to me.

 Remove redundant null checks
 

 Key: HDFS-4034
 URL: https://issues.apache.org/jira/browse/HDFS-4034
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-4034.txt, hdfs-4034.txt


 Findbugs 2 catches a number of places where we're checking for null in cases 
 where the value will never be null.
 We might need to wait until we switch to findbugs 2 to commit this as the 
 current findbugs may not be so smart.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4340) Update addBlock() to include inode id as additional argument

2013-01-07 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4340:
-

Attachment: HDFS-4340.patch

 Update addBlock() to include inode id as additional argument
 

 Key: HDFS-4340
 URL: https://issues.apache.org/jira/browse/HDFS-4340
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client, namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4340.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4340) Update addBlock() to include inode id as additional argument

2013-01-07 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4340:
-

Status: Patch Available  (was: Open)

 Update addBlock() to include inode id as additional argument
 

 Key: HDFS-4340
 URL: https://issues.apache.org/jira/browse/HDFS-4340
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client, namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4340.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4244) Support deleting snapshots

2013-01-07 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4244:


Attachment: HDFS-4244.006.patch

Thanks for the comments, Nicholas! I've uploaded a new patch to address all 
the comments. It also fixes a bug with INodeFileWithLink: in certain scenarios, 
when combining diffs, an INodeFileWithLink should be removed from the circular 
list (see the sketch below). A new test covering the INodeFileWithLink case is 
also included.
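
A sketch of the kind of removal involved (accessor names are assumptions, not 
taken from the patch): find the predecessor in the singly linked circular list 
and splice the node out:
{code}
// Sketch: walk the circular list until we reach the node whose next
// pointer is 'this', then unlink 'this' from the list.
INodeFileWithLink prev = this;
while (prev.getNext() != this) {
  prev = prev.getNext();
}
prev.setNext(this.getNext());
this.setNext(null);  // detach the removed node
{code}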

 Support deleting snapshots
 --

 Key: HDFS-4244
 URL: https://issues.apache.org/jira/browse/HDFS-4244
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-4244.001.patch, HDFS-4244.002.patch, 
 HDFS-4244.003.patch, HDFS-4244.004.patch, HDFS-4244.005.patch, 
 HDFS-4244.006.patch


 Provide functionality to delete a snapshot, given the name of the snapshot 
 and the path to the directory where the snapshot was taken.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4033) Miscellaneous findbugs 2 fixes

2013-01-07 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546439#comment-13546439
 ] 

Aaron T. Myers commented on HDFS-4033:
--

+1, patch looks good.

 Miscellaneous findbugs 2 fixes
 --

 Key: HDFS-4033
 URL: https://issues.apache.org/jira/browse/HDFS-4033
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-4033.txt, hdfs-4033.txt


 Fix some miscellaneous findbugs 2 warnings:
 - Switch statements missing default cases
 - Using \n instead of %n in format methods
 - A socket close that should use IOUtils#closeSocket that we missed
 - A use of SimpleDateFormat that is not threadsafe
 - In ReplicaInputStreams it's not clear that we always close the streams we 
 allocate, moving the stream creation into the class where we close them makes 
 that more obvious
 - A couple missing null checks
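
 Two of these fixes, sketched for illustration (variable names assumed):
 {code}
 // 1. Use %n (platform line separator) rather than \n in format strings:
 System.out.printf("Processed %d blocks%n", numBlocks);
 
 // 2. Close sockets via IOUtils#closeSocket, which null-checks and
 //    handles the IOException a raw close() can throw:
 IOUtils.closeSocket(socket);
 {code}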

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4032) Specify the charset explicitly rather than rely on the default

2013-01-07 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546444#comment-13546444
 ] 

Aaron T. Myers commented on HDFS-4032:
--

Patch looks pretty good to me. Almost all the changes look like they shouldn't 
change any behavior at all, but I think this one will:
{code}
   public static byte[] string2Bytes(String str) {
-    try {
-      return str.getBytes("UTF8");
-    } catch(UnsupportedEncodingException e) {
-      assert false : "UTF8 encoding is not supported.";
-    }
-    return null;
+    return str.getBytes(Charsets.UTF_8);
   }
{code}

Since assertions are typically disabled when not running the tests, this will 
result in a behavior change when running in production. Before this change, the 
assert in the catch block would just be skipped and null would be returned. 
After this change, the method can no longer return null, since the Charset 
overload of String#getBytes does not throw UnsupportedEncodingException. It's 
not obvious to me whether this could be problematic. Thoughts?
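
For comparison, the two paths side by side (a sketch using the Guava 
{{Charsets}} class from the patch):
{code}
// Old path: the String overload declares a checked exception, so with
// assertions disabled the method could fall through and return null.
byte[] oldWay;
try {
  oldWay = "abc".getBytes("UTF8");
} catch (UnsupportedEncodingException e) {
  oldWay = null;  // unreachable in practice: UTF8 is always supported
}

// New path: the Charset overload never throws, so there is no null path.
byte[] newWay = "abc".getBytes(Charsets.UTF_8);
{code}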

 Specify the charset explicitly rather than rely on the default
 --

 Key: HDFS-4032
 URL: https://issues.apache.org/jira/browse/HDFS-4032
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-4032.txt


 Findbugs 2 warns about relying on the default Java charset instead of 
 specifying it explicitly. Given that we're porting Hadoop to different 
 platforms it's better to be explicit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes

2013-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546476#comment-13546476
 ] 

Hadoop QA commented on HDFS-4353:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563623/02f.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3787//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3787//console

This message is automatically generated.

 Encapsulate connections to peers in Peer and PeerServer classes
 ---

 Key: HDFS-4353
 URL: https://issues.apache.org/jira/browse/HDFS-4353
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, hdfs-client
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: 02b-cumulative.patch, 02c.patch, 02c.patch, 
 02-cumulative.patch, 02d.patch, 02e.patch, 02f.patch


 Encapsulate connections to peers into the {{Peer}} and {{PeerServer}} 
 classes.  Since many Java classes may be involved with these connections, it 
 makes sense to create a container for them.  For example, a connection to a 
 peer may have an input stream, output stream, ReadableByteChannel, encrypted 
 output stream, and encrypted input stream associated with it.
 This makes us less dependent on the {{NetUtils}} methods, which use 
 {{instanceof}} to manipulate socket and stream states based on the runtime 
 type.  It also paves the way to introduce UNIX domain sockets, which don't 
 inherit from {{java.net.Socket}}.
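
 A rough sketch of what such a container might look like (method names are 
 assumptions, not the committed interface):
 {code}
 // Sketch: one object owns all the per-connection resources, so callers
 // no longer need NetUtils-style instanceof checks on the runtime type.
 interface Peer extends Closeable {
   InputStream getInputStream() throws IOException;
   OutputStream getOutputStream() throws IOException;
   void setReadTimeout(int timeoutMs) throws IOException;
 }
 {code}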

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4356) BlockReaderLocal should use passed file descriptors rather than paths

2013-01-07 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4356:
---

Attachment: 04d-cumulative.patch

This patch addresses a point that Todd brought up on reviewboard: that we don't 
want to cause a performance regression when doing a lot of seeks.

It introduces a cache for the {{FileInputStream}} objects.  This cache resides 
in {{FSInputStream}}, so it persists until we close the file we're reading, in 
keeping with HDFS' open-to-close consistency.
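
Roughly, the idea is something like this (field and method names are 
assumptions, not taken from the patch):
{code}
// Sketch: cache the block's FileInputStreams so repeated seeks within
// the block reuse the fds instead of re-requesting them over the
// domain socket; the cache lives as long as the stream is open.
private FileInputStream[] cachedFds;  // {block data, checksum}

FileInputStream[] getBlockFds() throws IOException {
  if (cachedFds == null) {
    cachedFds = requestFileDescriptors();  // hypothetical helper
  }
  return cachedFds;
}
{code}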

 BlockReaderLocal should use passed file descriptors rather than paths
 -

 Key: HDFS-4356
 URL: https://issues.apache.org/jira/browse/HDFS-4356
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, hdfs-client, performance
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: 04b-cumulative.patch, 04-cumulative.patch, 
 04d-cumulative.patch


 {{BlockReaderLocal}} should use file descriptors passed over UNIX domain 
 sockets rather than paths.  We also need some configuration options for these 
 UNIX domain sockets.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes

2013-01-07 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546524#comment-13546524
 ] 

Todd Lipcon commented on HDFS-4353:
---

I'll commit this later tonight or tomorrow morning unless there are any other 
comments.

 Encapsulate connections to peers in Peer and PeerServer classes
 ---

 Key: HDFS-4353
 URL: https://issues.apache.org/jira/browse/HDFS-4353
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, hdfs-client
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: 02b-cumulative.patch, 02c.patch, 02c.patch, 
 02-cumulative.patch, 02d.patch, 02e.patch, 02f.patch


 Encapsulate connections to peers into the {{Peer}} and {{PeerServer}} 
 classes.  Since many Java classes may be involved with these connections, it 
 makes sense to create a container for them.  For example, a connection to a 
 peer may have an input stream, output stream, ReadableByteChannel, encrypted 
 output stream, and encrypted input stream associated with it.
 This makes us less dependent on the {{NetUtils}} methods, which use 
 {{instanceof}} to manipulate socket and stream states based on the runtime 
 type.  It also paves the way to introduce UNIX domain sockets, which don't 
 inherit from {{java.net.Socket}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4340) Update addBlock() to include inode id as additional argument

2013-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546538#comment-13546538
 ] 

Hadoop QA commented on HDFS-4340:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563652/HDFS-4340.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3788//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3788//console

This message is automatically generated.

 Update addBlock() to include inode id as additional argument
 

 Key: HDFS-4340
 URL: https://issues.apache.org/jira/browse/HDFS-4340
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client, namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4340.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4362) GetDelegationTokenResponseProto does not handle null token

2013-01-07 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4362:
--

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)
   Status: Resolved  (was: Patch Available)

Aaron, thanks for the review. I have committed the patch.

 GetDelegationTokenResponseProto does not handle null token
 --

 Key: HDFS-4362
 URL: https://issues.apache.org/jira/browse/HDFS-4362
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Critical
 Fix For: 3.0.0

 Attachments: HDFS-4362.patch


 While working on HADOOP-9173, I noticed that the 
 GetDelegationTokenResponseProto declares the token field as required. However, 
 a null token is to be expected, both as defined by 
 FileSystem#getDelegationToken() and based on the HDFS implementation. This 
 jira intends to make the field optional.
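
 A sketch of the server-side handling once the field is optional (builder and 
 helper names assumed): only set the token when one exists:
 {code}
 // Sketch: with the proto field optional, the translator can simply
 // skip setToken(..) when the namenode returns a null token.
 GetDelegationTokenResponseProto.Builder builder =
     GetDelegationTokenResponseProto.newBuilder();
 if (token != null) {
   builder.setToken(PBHelper.convert(token));
 }
 return builder.build();
 {code}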

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4356) BlockReaderLocal should use passed file descriptors rather than paths

2013-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546599#comment-13546599
 ] 

Hadoop QA commented on HDFS-4356:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563669/04d-cumulative.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 19 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestShortCircuitLocalRead
  org.apache.hadoop.hdfs.TestParallelUnixDomainRead
  org.apache.hadoop.hdfs.TestParallelShortCircuitRead
  org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3789//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3789//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3789//console

This message is automatically generated.

 BlockReaderLocal should use passed file descriptors rather than paths
 -

 Key: HDFS-4356
 URL: https://issues.apache.org/jira/browse/HDFS-4356
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, hdfs-client, performance
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: 04b-cumulative.patch, 04-cumulative.patch, 
 04d-cumulative.patch


 {{BlockReaderLocal}} should use file descriptors passed over UNIX domain 
 sockets rather than paths.  We also need some configuration options for these 
 UNIX domain sockets.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4362) GetDelegationTokenResponseProto does not handle null token

2013-01-07 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546601#comment-13546601
 ] 

Suresh Srinivas commented on HDFS-4362:
---

BTW this incompatible change needs to go into 2.0.3-alpha as well.

 GetDelegationTokenResponseProto does not handle null token
 --

 Key: HDFS-4362
 URL: https://issues.apache.org/jira/browse/HDFS-4362
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Critical
 Fix For: 3.0.0

 Attachments: HDFS-4362.patch


 While working on HADOOP-9173, I noticed that the 
 GetDelegationTokenResponseProto declares the token field as required. However, 
 a null token is to be expected, both as defined by 
 FileSystem#getDelegationToken() and based on the HDFS implementation. This 
 jira intends to make the field optional.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4364) GetLinkTargetResponseProto does not handle null path

2013-01-07 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4364:
--

Attachment: HDFS-4364.patch

Here is a patch that makes the field optional.

 GetLinkTargetResponseProto does not handle null path
 

 Key: HDFS-4364
 URL: https://issues.apache.org/jira/browse/HDFS-4364
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4364.patch


 ClientProtocol#getLinkTarget() can return a null targetPath. Hence the 
 protobuf field GetLinkTargetResponseProto#targetPath should be optional.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4364) GetLinkTargetResponseProto does not handle null path

2013-01-07 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4364:
--

Status: Patch Available  (was: Open)

 GetLinkTargetResponseProto does not handle null path
 

 Key: HDFS-4364
 URL: https://issues.apache.org/jira/browse/HDFS-4364
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4364.patch


 ClientProtocol#getLinkTarget() can return a null targetPath. Hence the 
 protobuf field GetLinkTargetResponseProto#targetPath should be optional.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4362) GetDelegationTokenResponseProto does not handle null token

2013-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546605#comment-13546605
 ] 

Hudson commented on HDFS-4362:
--

Integrated in Hadoop-trunk-Commit #3189 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3189/])
HDFS-4362. GetDelegationTokenResponseProto does not handle null token. 
Contributed by Suresh Srinivas. (Revision 1430137)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1430137
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto


 GetDelegationTokenResponseProto does not handle null token
 --

 Key: HDFS-4362
 URL: https://issues.apache.org/jira/browse/HDFS-4362
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Critical
 Fix For: 3.0.0

 Attachments: HDFS-4362.patch


 While working on HADOOP-9173, I noticed that the 
 GetDelegationTokenResponseProto declares the token field as required. However, 
 a null token is to be expected, both as defined by 
 FileSystem#getDelegationToken() and based on the HDFS implementation. This 
 jira intends to make the field optional.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4154) BKJM: Two namenodes using bkjm can race to create the version znode

2013-01-07 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-4154:
--

Status: Patch Available  (was: Open)

 BKJM: Two namenodes using bkjm can race to create the version znode
 --

 Key: HDFS-4154
 URL: https://issues.apache.org/jira/browse/HDFS-4154
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Ivan Kelly
Assignee: Han Xiao
 Attachments: HDFS-4154.patch


 When two namenodes race to create the version znode, one will get the 
 following error.
 2012-11-06 10:04:00,200 INFO 
 hidden.bkjournal.org.apache.zookeeper.ClientCnxn: Session establishment 
 complete on server 109-231-69-172.flexiscale.com/109.231.69.172:2181, 
 sessionid = 0x13ad528fcfe0005, negotiated timeout = 4000
 2012-11-06 10:04:00,710 FATAL 
 org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
 java.lang.IllegalArgumentException: Unable to construct journal, 
 bookkeeper://109.231.69.172:2181;109.231.69.173:2181;109.231.69.174:2181/hdfsjournal
 at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1251)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:226)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.initSharedJournalsForRead(FSEditLog.java:206)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.initEditLog(FSImage.java:657)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:590)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:259)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:544)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:423)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:385)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:401)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:435)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:611)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:592)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1135)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1201)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1249)
 ... 14 more
 Caused by: java.io.IOException: Error initializing zk
 at 
 org.apache.hadoop.contrib.bkjournal.BookKeeperJournalManager.init(BookKeeperJournalManager.java:233)
 ... 19 more
 Caused by: 
 hidden.bkjournal.org.apache.zookeeper.KeeperException$NodeExistsException: 
 KeeperErrorCode = NodeExists for /hdfsjournal/version
 at 
 hidden.bkjournal.org.apache.zookeeper.KeeperException.create(KeeperException.java:119)
 at 
 hidden.bkjournal.org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at 
 hidden.bkjournal.org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:778)
 at 
 org.apache.hadoop.contrib.bkjournal.BookKeeperJournalManager.init(BookKeeperJournalManager.java:222)
 ... 19 more
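
 One plausible shape for a fix (a sketch only, not necessarily what the 
 attached patch does): treat NodeExists as the other namenode having won the 
 race:
 {code}
 // Sketch: if the version znode already exists, the other NameNode
 // created it first; that is success for our purposes, not an error.
 try {
   zkc.create(versionPath, versionData,
       ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
 } catch (KeeperException.NodeExistsException e) {
   // lost the race; the znode exists, so proceed
 }
 {code}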

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HDFS-4154) BKJM: Two namenodes using bkjm can race to create the version znode

2013-01-07 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G reassigned HDFS-4154:
-

Assignee: Han Xiao  (was: Ivan Kelly)

 BKJM: Two namenodes using bkjm can race to create the version znode
 --

 Key: HDFS-4154
 URL: https://issues.apache.org/jira/browse/HDFS-4154
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Ivan Kelly
Assignee: Han Xiao
 Attachments: HDFS-4154.patch


 When two namenodes race to create the version znode, one will get the 
 following error.
 2012-11-06 10:04:00,200 INFO 
 hidden.bkjournal.org.apache.zookeeper.ClientCnxn: Session establishment 
 complete on server 109-231-69-172.flexiscale.com/109.231.69.172:2181, 
 sessionid = 0x13ad528fcfe0005, negotiated timeout = 4000
 2012-11-06 10:04:00,710 FATAL 
 org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
 java.lang.IllegalArgumentException: Unable to construct journal, 
 bookkeeper://109.231.69.172:2181;109.231.69.173:2181;109.231.69.174:2181/hdfsjournal
 at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1251)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:226)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.initSharedJournalsForRead(FSEditLog.java:206)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.initEditLog(FSImage.java:657)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:590)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:259)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:544)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:423)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:385)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:401)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:435)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:611)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:592)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1135)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1201)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1249)
 ... 14 more
 Caused by: java.io.IOException: Error initializing zk
 at 
 org.apache.hadoop.contrib.bkjournal.BookKeeperJournalManager.init(BookKeeperJournalManager.java:233)
 ... 19 more
 Caused by: 
 hidden.bkjournal.org.apache.zookeeper.KeeperException$NodeExistsException: 
 KeeperErrorCode = NodeExists for /hdfsjournal/version
 at 
 hidden.bkjournal.org.apache.zookeeper.KeeperException.create(KeeperException.java:119)
 at 
 hidden.bkjournal.org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at 
 hidden.bkjournal.org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:778)
 at 
 org.apache.hadoop.contrib.bkjournal.BookKeeperJournalManager.init(BookKeeperJournalManager.java:222)
 ... 19 more

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4154) BKJM: Two namenodes using bkjm can race to create the version znode

2013-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546618#comment-13546618
 ] 

Hadoop QA commented on HDFS-4154:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12562773/HDFS-4154.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3791//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3791//console

This message is automatically generated.

 BKJM: Two namenodes using bkjm can race to create the version znode
 --

 Key: HDFS-4154
 URL: https://issues.apache.org/jira/browse/HDFS-4154
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Ivan Kelly
Assignee: Han Xiao
 Attachments: HDFS-4154.patch


 When two namenodes race to create the version znode, one will get the 
 following error.
 2012-11-06 10:04:00,200 INFO 
 hidden.bkjournal.org.apache.zookeeper.ClientCnxn: Session establishment 
 complete on server 109-231-69-172.flexiscale.com/109.231.69.172:2181, 
 sessionid = 0x13ad528fcfe0005, negotiated timeout = 4000
 2012-11-06 10:04:00,710 FATAL 
 org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
 java.lang.IllegalArgumentException: Unable to construct journal, 
 bookkeeper://109.231.69.172:2181;109.231.69.173:2181;109.231.69.174:2181/hdfsjournal
 at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1251)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:226)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.initSharedJournalsForRead(FSEditLog.java:206)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.initEditLog(FSImage.java:657)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:590)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:259)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:544)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:423)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:385)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:401)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:435)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:611)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:592)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1135)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1201)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1249)
 ... 14 more
 Caused by: java.io.IOException: Error initializing zk
 at 
 org.apache.hadoop.contrib.bkjournal.BookKeeperJournalManager.init(BookKeeperJournalManager.java:233)
 ... 19 more
 Caused by: 
 

[jira] [Commented] (HDFS-4364) GetLinkTargetResponseProto does not handle null path

2013-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13546658#comment-13546658
 ] 

Hadoop QA commented on HDFS-4364:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563694/HDFS-4364.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDecommission

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3790//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3790//console

This message is automatically generated.

 GetLinkTargetResponseProto does not handle null path
 

 Key: HDFS-4364
 URL: https://issues.apache.org/jira/browse/HDFS-4364
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4364.patch


 ClientProtocol#getLinkTarget() can return a null targetPath. Hence the 
 protobuf field GetLinkTargetResponseProto#targetPath should be optional.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira