[jira] [Updated] (HDFS-6348) SecondaryNameNode not terminating properly on runtime exceptions

2015-05-18 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-6348:

   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.
Thanks [~rakeshr]

> SecondaryNameNode not terminating properly on runtime exceptions
> 
>
> Key: HDFS-6348
> URL: https://issues.apache.org/jira/browse/HDFS-6348
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Rakesh R
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-6348-003.patch, HDFS-6348-004.patch, 
> HDFS-6348.patch, HDFS-6348.patch, secondaryNN_threaddump_after_exit.log
>
>
> The Secondary NameNode does not exit when a RuntimeException occurs during 
> startup.
> For example, with an invalid configuration, validation failed and threw the 
> RuntimeException shown below, yet the SecondaryNameNode process stayed alive. 
> On analysis, the RMI thread was still running; since it is not a daemon 
> thread, the JVM does not exit.
> I'm attaching a thread dump to this JIRA for more details about the thread.
> {code}
> java.lang.RuntimeException: java.lang.RuntimeException: 
> java.lang.ClassNotFoundException: Class 
> com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1900)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.getInstance(BlockPlacementPolicy.java:199)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:256)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:635)
>   at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:260)
>   at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:205)
>   at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:695)
> Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: 
> Class com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy 
> not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1868)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1892)
>   ... 6 more
> Caused by: java.lang.ClassNotFoundException: Class 
> com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1774)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1866)
>   ... 7 more
> 2014-05-07 14:27:04,666 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services 
> started for active state
> 2014-05-07 14:27:04,666 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services 
> started for standby state
> 2014-05-07 14:31:04,926 INFO 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: STARTUP_MSG: 
> {code}
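The failure mode is reproducible outside Hadoop: any live non-daemon thread (like the RMI thread in this report) keeps the JVM running after the startup exception unwinds. A minimal sketch of the fix pattern, similar in spirit to Hadoop's ExitUtil.terminate; this is an illustration, not the actual patch, and `runStartup` is an invented name:

```java
public class StartupGuard {
    // Illustrative helper: run the startup logic and convert any
    // unrecoverable error into an exit code, instead of letting the
    // exception unwind while non-daemon threads keep the JVM alive.
    static int runStartup(Runnable startup) {
        try {
            startup.run();
            return 0;
        } catch (Throwable t) {
            System.err.println("Failed to start secondary namenode: " + t);
            return 1;  // caller passes this to System.exit(...)
        }
    }
}
```

An entry point would then call `System.exit(...)` with a nonzero code on failure, which forces the JVM down even while a non-daemon RMI thread is still running.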



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6348) SecondaryNameNode not terminating properly on runtime exceptions

2015-05-18 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-6348:

Labels:   (was: BB2015-05-RFC)

> SecondaryNameNode not terminating properly on runtime exceptions
> 
>
> Key: HDFS-6348
> URL: https://issues.apache.org/jira/browse/HDFS-6348
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-6348-003.patch, HDFS-6348-004.patch, 
> HDFS-6348.patch, HDFS-6348.patch, secondaryNN_threaddump_after_exit.log
>
>
> The Secondary NameNode does not exit when a RuntimeException occurs during 
> startup.
> For example, with an invalid configuration, validation failed and threw the 
> RuntimeException shown below, yet the SecondaryNameNode process stayed alive. 
> On analysis, the RMI thread was still running; since it is not a daemon 
> thread, the JVM does not exit.
> I'm attaching a thread dump to this JIRA for more details about the thread.
> {code}
> java.lang.RuntimeException: java.lang.RuntimeException: 
> java.lang.ClassNotFoundException: Class 
> com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1900)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.getInstance(BlockPlacementPolicy.java:199)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:256)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:635)
>   at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:260)
>   at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:205)
>   at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:695)
> Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: 
> Class com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy 
> not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1868)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1892)
>   ... 6 more
> Caused by: java.lang.ClassNotFoundException: Class 
> com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1774)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1866)
>   ... 7 more
> 2014-05-07 14:27:04,666 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services 
> started for active state
> 2014-05-07 14:27:04,666 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services 
> started for standby state
> 2014-05-07 14:31:04,926 INFO 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: STARTUP_MSG: 
> {code}





[jira] [Commented] (HDFS-7692) DataStorage#addStorageLocations(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-05-18 Thread Leitao Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549930#comment-14549930
 ] 

Leitao Guo commented on HDFS-7692:
--

Sorry, it's my mistake to have commented so many times here! It seems that my 
network connection is not very good right now...

> DataStorage#addStorageLocations(...) should support MultiThread to speedup 
> the upgrade of block pool at multi storage directories.
> --
>
> Key: HDFS-7692
> URL: https://issues.apache.org/jira/browse/HDFS-7692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.2
>Reporter: Leitao Guo
>Assignee: Leitao Guo
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7692.01.patch, HDFS-7692.02.patch
>
>
> {code:title=DataStorage#addStorageLocations(...)|borderStyle=solid}
> for (StorageLocation dataDir : dataDirs) {
>   File root = dataDir.getFile();
>  ... ...
> bpStorage.recoverTransitionRead(datanode, nsInfo, bpDataDirs, 
> startOpt);
> addBlockPoolStorage(bpid, bpStorage);
> ... ...
>   successVolumes.add(dataDir);
> }
> {code}
> In the above code the storage directories are analyzed one by one, which is 
> very time consuming when upgrading HDFS on datanodes that have dozens of 
> large volumes. Multi-threaded analysis of the dataDirs should be supported 
> here to speed up the upgrade.
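The proposed parallelization could be sketched roughly as follows. This is a hypothetical standalone illustration, not the HDFS-7692 patch: `analyzeDir` and `loadAll` are invented stand-ins for the per-directory work (`recoverTransitionRead`, `addBlockPoolStorage`) inside `DataStorage#addStorageLocations`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelDirLoader {
    // Stand-in for the per-directory work done sequentially in the
    // current loop. Name and sleep are illustrative only.
    static String analyzeDir(String dir) throws InterruptedException {
        Thread.sleep(10);  // simulate slow per-volume upgrade work
        return dir;
    }

    // Submits one task per storage directory and collects the volumes
    // that were analyzed successfully, preserving submission order.
    static List<String> loadAll(List<String> dataDirs) {
        ExecutorService pool = Executors.newFixedThreadPool(
                Math.max(1, Math.min(dataDirs.size(), 8)));
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String dir : dataDirs) {
                futures.add(pool.submit(() -> analyzeDir(dir)));
            }
            List<String> successVolumes = new ArrayList<>();
            for (Future<String> f : futures) {
                try {
                    successVolumes.add(f.get());
                } catch (InterruptedException | ExecutionException e) {
                    // a failed volume is skipped, mirroring the sequential
                    // loop's per-directory error handling
                }
            }
            return successVolumes;
        } finally {
            pool.shutdown();
        }
    }
}
```

With dozens of large volumes, the directories are upgraded concurrently instead of one after another, which is the speedup this issue asks for.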





[jira] [Commented] (HDFS-6348) SecondaryNameNode not terminating properly on runtime exceptions

2015-05-18 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549925#comment-14549925
 ] 

Vinayakumar B commented on HDFS-6348:
-

Patch LGTM, +1.
Failures are unrelated.
Going to commit shortly.

> SecondaryNameNode not terminating properly on runtime exceptions
> 
>
> Key: HDFS-6348
> URL: https://issues.apache.org/jira/browse/HDFS-6348
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: BB2015-05-RFC
> Attachments: HDFS-6348-003.patch, HDFS-6348-004.patch, 
> HDFS-6348.patch, HDFS-6348.patch, secondaryNN_threaddump_after_exit.log
>
>
> The Secondary NameNode does not exit when a RuntimeException occurs during 
> startup.
> For example, with an invalid configuration, validation failed and threw the 
> RuntimeException shown below, yet the SecondaryNameNode process stayed alive. 
> On analysis, the RMI thread was still running; since it is not a daemon 
> thread, the JVM does not exit.
> I'm attaching a thread dump to this JIRA for more details about the thread.
> {code}
> java.lang.RuntimeException: java.lang.RuntimeException: 
> java.lang.ClassNotFoundException: Class 
> com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1900)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.getInstance(BlockPlacementPolicy.java:199)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:256)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:635)
>   at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:260)
>   at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:205)
>   at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:695)
> Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: 
> Class com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy 
> not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1868)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1892)
>   ... 6 more
> Caused by: java.lang.ClassNotFoundException: Class 
> com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1774)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1866)
>   ... 7 more
> 2014-05-07 14:27:04,666 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services 
> started for active state
> 2014-05-07 14:27:04,666 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services 
> started for standby state
> 2014-05-07 14:31:04,926 INFO 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: STARTUP_MSG: 
> {code}





[jira] [Updated] (HDFS-8428) Erasure Coding: Fix the NullPointerException when deleting file

2015-05-18 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-8428:
-
Status: Patch Available  (was: Open)

> Erasure Coding: Fix the NullPointerException when deleting file
> ---
>
> Key: HDFS-8428
> URL: https://issues.apache.org/jira/browse/HDFS-8428
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HDFS-8428-HDFS-7285.001.patch
>
>
> In HDFS, when a file is removed, the NN also removes all of its blocks from 
> {{BlocksMap}} and sends {{DNA_INVALIDATE}} (invalidate blocks) commands to 
> the datanodes. After the datanodes successfully delete the block replicas, 
> they report {{DELETED_BLOCK}} to the NameNode.
> The relevant logic in {{BlockManager#processIncrementalBlockReport}} is as 
> follows:
> {code}
> case DELETED_BLOCK:
> removeStoredBlock(storageInfo, getStoredBlock(rdbi.getBlock()), node);
> ...
> {code}
> {code}
> private void removeStoredBlock(DatanodeStorageInfo storageInfo, Block block,
>   DatanodeDescriptor node) {
> if (shouldPostponeBlocksFromFuture &&
> namesystem.isGenStampInFuture(block)) {
>   queueReportedBlock(storageInfo, block, null,
>   QUEUE_REASON_FUTURE_GENSTAMP);
>   return;
> }
> removeStoredBlock(getStoredBlock(block), node);
>   }
> {code}
> In the EC branch, we added {{getStoredBlock}}. A {{NullPointerException}} 
> occurs when handling the {{DELETED_BLOCK}} of an incremental block report 
> from a DataNode after deleting a file; since the block has already been 
> removed, we need to check for null.
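The needed null check can be illustrated with a standalone sketch. This is a hypothetical simplification, not the EC-branch code: the map stands in for {{BlocksMap}}, and the method names only mirror the snippet above:

```java
import java.util.HashMap;
import java.util.Map;

public class BlockMapSketch {
    // Simplified stand-in for BlocksMap: block id -> block info.
    private final Map<Long, String> blocksMap = new HashMap<>();

    void addBlock(long id, String info) { blocksMap.put(id, info); }

    // Returns null when the block is unknown, as getStoredBlock does.
    String getStoredBlock(long id) { return blocksMap.get(id); }

    // Returns true if a removal actually happened.
    boolean removeStoredBlock(long id) {
        String stored = getStoredBlock(id);
        if (stored == null) {
            // Block already removed (the file was deleted before the
            // DELETED_BLOCK report arrived): bail out instead of
            // dereferencing null, which is the NPE this issue fixes.
            return false;
        }
        blocksMap.remove(id);
        return true;
    }
}
```

A late {{DELETED_BLOCK}} report for an already-deleted file then becomes a harmless no-op rather than a crash.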





[jira] [Comment Edited] (HDFS-8428) Erasure Coding: Fix the NullPointerException when deleting file

2015-05-18 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549891#comment-14549891
 ] 

Yi Liu edited comment on HDFS-8428 at 5/19/15 6:40 AM:
---

We should not get the block group from the striped block replica right now when 
handling {{DELETED_BLOCK}} (the same as we do for {{RECEIVED_BLOCK}} and 
{{RECEIVING_BLOCK}}); we will convert it later, and we may also postpone it.

I have checked that there is no exception in the test log after this patch.


was (Author: hitliuyi):
We should not convert get block group from striped block replica right now when 
handling {{DELETED_BLOCK}} (same as we do for {{RECEIVED_BLOCK}} and 
{{RECEIVING_BLOCK}}), later we will convert it, and also we may postpone it.

I have checked that there is no exception in the log of test after this patch.

> Erasure Coding: Fix the NullPointerException when deleting file
> ---
>
> Key: HDFS-8428
> URL: https://issues.apache.org/jira/browse/HDFS-8428
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HDFS-8428-HDFS-7285.001.patch
>
>
> In HDFS, when a file is removed, the NN also removes all of its blocks from 
> {{BlocksMap}} and sends {{DNA_INVALIDATE}} (invalidate blocks) commands to 
> the datanodes. After the datanodes successfully delete the block replicas, 
> they report {{DELETED_BLOCK}} to the NameNode.
> The relevant logic in {{BlockManager#processIncrementalBlockReport}} is as 
> follows:
> {code}
> case DELETED_BLOCK:
> removeStoredBlock(storageInfo, getStoredBlock(rdbi.getBlock()), node);
> ...
> {code}
> {code}
> private void removeStoredBlock(DatanodeStorageInfo storageInfo, Block block,
>   DatanodeDescriptor node) {
> if (shouldPostponeBlocksFromFuture &&
> namesystem.isGenStampInFuture(block)) {
>   queueReportedBlock(storageInfo, block, null,
>   QUEUE_REASON_FUTURE_GENSTAMP);
>   return;
> }
> removeStoredBlock(getStoredBlock(block), node);
>   }
> {code}
> In the EC branch, we added {{getStoredBlock}}. A {{NullPointerException}} 
> occurs when handling the {{DELETED_BLOCK}} of an incremental block report 
> from a DataNode after deleting a file; since the block has already been 
> removed, we need to check for null.





[jira] [Commented] (HDFS-7692) DataStorage#addStorageLocations(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-05-18 Thread Leitao Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549913#comment-14549913
 ] 

Leitao Guo commented on HDFS-7692:
--

[~eddyxu], thanks for your comments; please take a look at the new patch.
1. In DataStorage#recoverTransitionRead, log the InterruptedException and 
rethrow it as an InterruptedIOException.
2. In TestDataStorage#testAddStorageDirectoreis, catch the InterruptedException 
and fail the test case.
3. The multithreading in DataStorage#addStorageLocations() is for one specific 
namespace, so in TestDataStorage#testAddStorageDirectoreis my intention is to 
create one thread pool per namespace. No change here.
4. Re-phrased the parameter successVolumes.

[~szetszwo], thanks for your comments; please take a look at the new patch.
1. InterruptedException is re-thrown as an InterruptedIOException.
2. I think it's a good idea to log the upgrade progress for each dir, but so 
far we cannot easily get the progress from the current API. Do you think it's 
necessary to file a new JIRA to follow up on this?
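Point 1 above (rethrowing the InterruptedException as an InterruptedIOException) might look roughly like this; `awaitUpgrade` is a hypothetical stand-in for waiting on the per-directory work, not the actual patch:

```java
import java.io.InterruptedIOException;

public class InterruptSketch {
    // Illustrates converting an InterruptedException into an
    // InterruptedIOException while preserving the interrupt status
    // and the original cause.
    static void awaitUpgrade(Thread worker) throws InterruptedIOException {
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // preserve interrupt status
            InterruptedIOException iioe =
                new InterruptedIOException("Upgrade interrupted");
            iioe.initCause(e);
            throw iioe;
        }
    }
}
```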

> DataStorage#addStorageLocations(...) should support MultiThread to speedup 
> the upgrade of block pool at multi storage directories.
> --
>
> Key: HDFS-7692
> URL: https://issues.apache.org/jira/browse/HDFS-7692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.2
>Reporter: Leitao Guo
>Assignee: Leitao Guo
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7692.01.patch, HDFS-7692.02.patch
>
>
> {code:title=DataStorage#addStorageLocations(...)|borderStyle=solid}
> for (StorageLocation dataDir : dataDirs) {
>   File root = dataDir.getFile();
>  ... ...
> bpStorage.recoverTransitionRead(datanode, nsInfo, bpDataDirs, 
> startOpt);
> addBlockPoolStorage(bpid, bpStorage);
> ... ...
>   successVolumes.add(dataDir);
> }
> {code}
> In the above code the storage directories are analyzed one by one, which is 
> very time consuming when upgrading HDFS on datanodes that have dozens of 
> large volumes. Multi-threaded analysis of the dataDirs should be supported 
> here to speed up the upgrade.







[jira] [Comment Edited] (HDFS-8428) Erasure Coding: Fix the NullPointerException when deleting file

2015-05-18 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549891#comment-14549891
 ] 

Yi Liu edited comment on HDFS-8428 at 5/19/15 6:39 AM:
---

We should not get the block group from the striped block replica right now when 
handling {{DELETED_BLOCK}} (the same as we do for {{RECEIVED_BLOCK}} and 
{{RECEIVING_BLOCK}}); we will convert it later, and we may also postpone it.

I have checked that there is no exception in the test log after this patch.


was (Author: hitliuyi):
We should not convert get block group from striped block replica right now when 
handling {{DELETED_BLOCK}}, later we will convert it, and also we may postpone 
it.

I have checked that there is no exception in the log of test after this patch.

> Erasure Coding: Fix the NullPointerException when deleting file
> ---
>
> Key: HDFS-8428
> URL: https://issues.apache.org/jira/browse/HDFS-8428
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HDFS-8428-HDFS-7285.001.patch
>
>
> In HDFS, when a file is removed, the NN also removes all of its blocks from 
> {{BlocksMap}} and sends {{DNA_INVALIDATE}} (invalidate blocks) commands to 
> the datanodes. After the datanodes successfully delete the block replicas, 
> they report {{DELETED_BLOCK}} to the NameNode.
> The relevant logic in {{BlockManager#processIncrementalBlockReport}} is as 
> follows:
> {code}
> case DELETED_BLOCK:
> removeStoredBlock(storageInfo, getStoredBlock(rdbi.getBlock()), node);
> ...
> {code}
> {code}
> private void removeStoredBlock(DatanodeStorageInfo storageInfo, Block block,
>   DatanodeDescriptor node) {
> if (shouldPostponeBlocksFromFuture &&
> namesystem.isGenStampInFuture(block)) {
>   queueReportedBlock(storageInfo, block, null,
>   QUEUE_REASON_FUTURE_GENSTAMP);
>   return;
> }
> removeStoredBlock(getStoredBlock(block), node);
>   }
> {code}
> In the EC branch, we added {{getStoredBlock}}. A {{NullPointerException}} 
> occurs when handling the {{DELETED_BLOCK}} of an incremental block report 
> from a DataNode after deleting a file; since the block has already been 
> removed, we need to check for null.







[jira] [Updated] (HDFS-8428) Erasure Coding: Fix the NullPointerException when deleting file

2015-05-18 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-8428:
-
Attachment: HDFS-8428-HDFS-7285.001.patch

We should not get the block group from the striped block replica right now when 
handling {{DELETED_BLOCK}}; we will convert it later, and we may also postpone 
it.

I have checked that there is no exception in the test log after this patch.

> Erasure Coding: Fix the NullPointerException when deleting file
> ---
>
> Key: HDFS-8428
> URL: https://issues.apache.org/jira/browse/HDFS-8428
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HDFS-8428-HDFS-7285.001.patch
>
>
> In HDFS, when a file is removed, the NN also removes all of its blocks from 
> {{BlocksMap}} and sends {{DNA_INVALIDATE}} (invalidate blocks) commands to 
> the datanodes. After the datanodes successfully delete the block replicas, 
> they report {{DELETED_BLOCK}} to the NameNode.
> The relevant logic in {{BlockManager#processIncrementalBlockReport}} is as 
> follows:
> {code}
> case DELETED_BLOCK:
> removeStoredBlock(storageInfo, getStoredBlock(rdbi.getBlock()), node);
> ...
> {code}
> {code}
> private void removeStoredBlock(DatanodeStorageInfo storageInfo, Block block,
>   DatanodeDescriptor node) {
> if (shouldPostponeBlocksFromFuture &&
> namesystem.isGenStampInFuture(block)) {
>   queueReportedBlock(storageInfo, block, null,
>   QUEUE_REASON_FUTURE_GENSTAMP);
>   return;
> }
> removeStoredBlock(getStoredBlock(block), node);
>   }
> {code}
> In the EC branch, we added {{getStoredBlock}}. A {{NullPointerException}} 
> occurs when handling the {{DELETED_BLOCK}} of an incremental block report 
> from a DataNode after deleting a file; since the block has already been 
> removed, we need to check for null.







> large volumes.  MultiThread dataDirs analyzing should be supported here to 
> speedup upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7692) DataStorage#addStorageLocations(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-05-18 Thread Leitao Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549900#comment-14549900
 ] 

Leitao Guo commented on HDFS-7692:
--

[~eddyxu], thanks for your comments, please have a check of the new patch.
1.In DataStorage#recoverTransitionRead, log the InterruptedException and 
rethrow it as InterruptedIOException;
2.In TestDataStorage#testAddStorageDirectoreis, catch InterruptedException then 
let the test case fail;
3.The multithread in DataStorage#addStorageLocations() is for one specific 
namespace, so in TestDataStorage#testAddStorageDirectoreis my intention is 
creating one thread pool for each namespace. Not change here.
4.Re-phrase the parameter successVolumes.

[~szetszwo],thanks for your comments, please have a check of the new patch.
1. InterruptedException re-thrown as InterruptedIOException;
2. I think it's a good idea to log the upgrade progress for each dir, but so 
far, we can not get the progress easily from the current api. Do you think it's 
necessary to file a new jira to follow this?

> DataStorage#addStorageLocations(...) should support MultiThread to speedup 
> the upgrade of block pool at multi storage directories.
> --
>
> Key: HDFS-7692
> URL: https://issues.apache.org/jira/browse/HDFS-7692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.2
>Reporter: Leitao Guo
>Assignee: Leitao Guo
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7692.01.patch, HDFS-7692.02.patch
>
>
> {code:title=DataStorage#addStorageLocations(...)|borderStyle=solid}
> for (StorageLocation dataDir : dataDirs) {
>   File root = dataDir.getFile();
>  ... ...
> bpStorage.recoverTransitionRead(datanode, nsInfo, bpDataDirs, 
> startOpt);
> addBlockPoolStorage(bpid, bpStorage);
> ... ...
>   successVolumes.add(dataDir);
> }
> {code}
> In the above code the storage directories will be analyzed one by one, which 
> is really time consuming when upgrading HDFS with datanodes have dozens of 
> large volumes.  MultiThread dataDirs analyzing should be supported here to 
> speedup upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7692) DataStorage#addStorageLocations(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-05-18 Thread Leitao Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549895#comment-14549895
 ] 

Leitao Guo commented on HDFS-7692:
--

[~eddyxu], thanks for your comments, please have a check of the new patch.
1.In DataStorage#recoverTransitionRead, log the InterruptedException and 
rethrow it as InterruptedIOException;
2.In TestDataStorage#testAddStorageDirectoreis, catch InterruptedException then 
let the test case fail;
3.The multithread in DataStorage#addStorageLocations() is for one specific 
namespace, so in TestDataStorage#testAddStorageDirectoreis my intention is 
creating one thread pool for each namespace. Not change here.
4.Re-phrase the parameter successVolumes.

[~szetszwo],thanks for your comments, please have a check of the new patch.
1. InterruptedException re-thrown as InterruptedIOException;
2. I think it's a good idea to log the upgrade progress for each dir, but so 
far, we can not get the progress easily from the current api. Do you think it's 
necessary to file a new jira to follow this?

> DataStorage#addStorageLocations(...) should support MultiThread to speedup 
> the upgrade of block pool at multi storage directories.
> --
>
> Key: HDFS-7692
> URL: https://issues.apache.org/jira/browse/HDFS-7692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.2
>Reporter: Leitao Guo
>Assignee: Leitao Guo
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7692.01.patch, HDFS-7692.02.patch
>
>
> {code:title=DataStorage#addStorageLocations(...)|borderStyle=solid}
> for (StorageLocation dataDir : dataDirs) {
>   File root = dataDir.getFile();
>  ... ...
> bpStorage.recoverTransitionRead(datanode, nsInfo, bpDataDirs, 
> startOpt);
> addBlockPoolStorage(bpid, bpStorage);
> ... ...
>   successVolumes.add(dataDir);
> }
> {code}
> In the above code the storage directories will be analyzed one by one, which 
> is really time consuming when upgrading HDFS with datanodes have dozens of 
> large volumes.  MultiThread dataDirs analyzing should be supported here to 
> speedup upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7692) DataStorage#addStorageLocations(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-05-18 Thread Leitao Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549908#comment-14549908
 ] 

Leitao Guo commented on HDFS-7692:
--

[~eddyxu], thanks for your comments, please have a check of the new patch.
1.In DataStorage#recoverTransitionRead, log the InterruptedException and 
rethrow it as InterruptedIOException;
2.In TestDataStorage#testAddStorageDirectoreis, catch InterruptedException then 
let the test case fail;
3.The multithread in DataStorage#addStorageLocations() is for one specific 
namespace, so in TestDataStorage#testAddStorageDirectoreis my intention is 
creating one thread pool for each namespace. Not change here.
4.Re-phrase the parameter successVolumes.

[~szetszwo],thanks for your comments, please have a check of the new patch.
1. InterruptedException re-thrown as InterruptedIOException;
2. I think it's a good idea to log the upgrade progress for each dir, but so 
far, we can not get the progress easily from the current api. Do you think it's 
necessary to file a new jira to follow this?

> DataStorage#addStorageLocations(...) should support MultiThread to speedup 
> the upgrade of block pool at multi storage directories.
> --
>
> Key: HDFS-7692
> URL: https://issues.apache.org/jira/browse/HDFS-7692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.2
>Reporter: Leitao Guo
>Assignee: Leitao Guo
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7692.01.patch, HDFS-7692.02.patch
>
>
> {code:title=DataStorage#addStorageLocations(...)|borderStyle=solid}
> for (StorageLocation dataDir : dataDirs) {
>   File root = dataDir.getFile();
>  ... ...
> bpStorage.recoverTransitionRead(datanode, nsInfo, bpDataDirs, 
> startOpt);
> addBlockPoolStorage(bpid, bpStorage);
> ... ...
>   successVolumes.add(dataDir);
> }
> {code}
> In the above code the storage directories will be analyzed one by one, which 
> is really time consuming when upgrading HDFS with datanodes have dozens of 
> large volumes.  MultiThread dataDirs analyzing should be supported here to 
> speedup upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7692) DataStorage#addStorageLocations(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-05-18 Thread Leitao Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549904#comment-14549904
 ] 

Leitao Guo commented on HDFS-7692:
--

[~eddyxu], thanks for your comments, please have a check of the new patch.
1.In DataStorage#recoverTransitionRead, log the InterruptedException and 
rethrow it as InterruptedIOException;
2.In TestDataStorage#testAddStorageDirectoreis, catch InterruptedException then 
let the test case fail;
3.The multithread in DataStorage#addStorageLocations() is for one specific 
namespace, so in TestDataStorage#testAddStorageDirectoreis my intention is 
creating one thread pool for each namespace. Not change here.
4.Re-phrase the parameter successVolumes.

[~szetszwo],thanks for your comments, please have a check of the new patch.
1. InterruptedException re-thrown as InterruptedIOException;
2. I think it's a good idea to log the upgrade progress for each dir, but so 
far, we can not get the progress easily from the current api. Do you think it's 
necessary to file a new jira to follow this?

> DataStorage#addStorageLocations(...) should support MultiThread to speedup 
> the upgrade of block pool at multi storage directories.
> --
>
> Key: HDFS-7692
> URL: https://issues.apache.org/jira/browse/HDFS-7692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.2
>Reporter: Leitao Guo
>Assignee: Leitao Guo
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7692.01.patch, HDFS-7692.02.patch
>
>
> {code:title=DataStorage#addStorageLocations(...)|borderStyle=solid}
> for (StorageLocation dataDir : dataDirs) {
>   File root = dataDir.getFile();
>  ... ...
> bpStorage.recoverTransitionRead(datanode, nsInfo, bpDataDirs, 
> startOpt);
> addBlockPoolStorage(bpid, bpStorage);
> ... ...
>   successVolumes.add(dataDir);
> }
> {code}
> In the above code the storage directories will be analyzed one by one, which 
> is really time consuming when upgrading HDFS with datanodes have dozens of 
> large volumes.  MultiThread dataDirs analyzing should be supported here to 
> speedup upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7692) DataStorage#addStorageLocations(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-05-18 Thread Leitao Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549906#comment-14549906
 ] 

Leitao Guo commented on HDFS-7692:
--

[~eddyxu], thanks for your comments, please have a check of the new patch.
1.In DataStorage#recoverTransitionRead, log the InterruptedException and 
rethrow it as InterruptedIOException;
2.In TestDataStorage#testAddStorageDirectoreis, catch InterruptedException then 
let the test case fail;
3.The multithread in DataStorage#addStorageLocations() is for one specific 
namespace, so in TestDataStorage#testAddStorageDirectoreis my intention is 
creating one thread pool for each namespace. Not change here.
4.Re-phrase the parameter successVolumes.

[~szetszwo],thanks for your comments, please have a check of the new patch.
1. InterruptedException re-thrown as InterruptedIOException;
2. I think it's a good idea to log the upgrade progress for each dir, but so 
far, we can not get the progress easily from the current api. Do you think it's 
necessary to file a new jira to follow this?

> DataStorage#addStorageLocations(...) should support MultiThread to speedup 
> the upgrade of block pool at multi storage directories.
> --
>
> Key: HDFS-7692
> URL: https://issues.apache.org/jira/browse/HDFS-7692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.2
>Reporter: Leitao Guo
>Assignee: Leitao Guo
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7692.01.patch, HDFS-7692.02.patch
>
>
> {code:title=DataStorage#addStorageLocations(...)|borderStyle=solid}
> for (StorageLocation dataDir : dataDirs) {
>   File root = dataDir.getFile();
>  ... ...
> bpStorage.recoverTransitionRead(datanode, nsInfo, bpDataDirs, 
> startOpt);
> addBlockPoolStorage(bpid, bpStorage);
> ... ...
>   successVolumes.add(dataDir);
> }
> {code}
> In the above code the storage directories will be analyzed one by one, which 
> is really time consuming when upgrading HDFS with datanodes have dozens of 
> large volumes.  MultiThread dataDirs analyzing should be supported here to 
> speedup upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7692) DataStorage#addStorageLocations(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-05-18 Thread Leitao Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549893#comment-14549893
 ] 

Leitao Guo commented on HDFS-7692:
--

[~eddyxu], thanks for your comments, please have a check of the new patch.
1.In DataStorage#recoverTransitionRead, log the InterruptedException and 
rethrow it as InterruptedIOException;
2.In TestDataStorage#testAddStorageDirectoreis, catch InterruptedException then 
let the test case fail;
3.The multithread in DataStorage#addStorageLocations() is for one specific 
namespace, so in TestDataStorage#testAddStorageDirectoreis my intention is 
creating one thread pool for each namespace. Not change here.
4.Re-phrase the parameter successVolumes.

[~szetszwo],thanks for your comments, please have a check of the new patch.
1. InterruptedException re-thrown as InterruptedIOException;
2. I think it's a good idea to log the upgrade progress for each dir, but so 
far, we can not get the progress easily from the current api. Do you think it's 
necessary to file a new jira to follow this?

> DataStorage#addStorageLocations(...) should support MultiThread to speedup 
> the upgrade of block pool at multi storage directories.
> --
>
> Key: HDFS-7692
> URL: https://issues.apache.org/jira/browse/HDFS-7692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.2
>Reporter: Leitao Guo
>Assignee: Leitao Guo
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7692.01.patch, HDFS-7692.02.patch
>
>
> {code:title=DataStorage#addStorageLocations(...)|borderStyle=solid}
> for (StorageLocation dataDir : dataDirs) {
>   File root = dataDir.getFile();
>  ... ...
> bpStorage.recoverTransitionRead(datanode, nsInfo, bpDataDirs, 
> startOpt);
> addBlockPoolStorage(bpid, bpStorage);
> ... ...
>   successVolumes.add(dataDir);
> }
> {code}
> In the above code the storage directories will be analyzed one by one, which 
> is really time consuming when upgrading HDFS with datanodes have dozens of 
> large volumes.  MultiThread dataDirs analyzing should be supported here to 
> speedup upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7692) DataStorage#addStorageLocations(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-05-18 Thread Leitao Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549907#comment-14549907
 ] 

Leitao Guo commented on HDFS-7692:
--

[~eddyxu], thanks for your comments, please have a check of the new patch.
1.In DataStorage#recoverTransitionRead, log the InterruptedException and 
rethrow it as InterruptedIOException;
2.In TestDataStorage#testAddStorageDirectoreis, catch InterruptedException then 
let the test case fail;
3.The multithread in DataStorage#addStorageLocations() is for one specific 
namespace, so in TestDataStorage#testAddStorageDirectoreis my intention is 
creating one thread pool for each namespace. Not change here.
4.Re-phrase the parameter successVolumes.

[~szetszwo],thanks for your comments, please have a check of the new patch.
1. InterruptedException re-thrown as InterruptedIOException;
2. I think it's a good idea to log the upgrade progress for each dir, but so 
far, we can not get the progress easily from the current api. Do you think it's 
necessary to file a new jira to follow this?

> DataStorage#addStorageLocations(...) should support MultiThread to speedup 
> the upgrade of block pool at multi storage directories.
> --
>
> Key: HDFS-7692
> URL: https://issues.apache.org/jira/browse/HDFS-7692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.2
>Reporter: Leitao Guo
>Assignee: Leitao Guo
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7692.01.patch, HDFS-7692.02.patch
>
>
> {code:title=DataStorage#addStorageLocations(...)|borderStyle=solid}
> for (StorageLocation dataDir : dataDirs) {
>   File root = dataDir.getFile();
>  ... ...
> bpStorage.recoverTransitionRead(datanode, nsInfo, bpDataDirs, 
> startOpt);
> addBlockPoolStorage(bpid, bpStorage);
> ... ...
>   successVolumes.add(dataDir);
> }
> {code}
> In the above code the storage directories will be analyzed one by one, which 
> is really time consuming when upgrading HDFS with datanodes have dozens of 
> large volumes.  MultiThread dataDirs analyzing should be supported here to 
> speedup upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7692) DataStorage#addStorageLocations(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-05-18 Thread Leitao Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549903#comment-14549903
 ] 

Leitao Guo commented on HDFS-7692:
--

[~eddyxu], thanks for your comments, please have a check of the new patch.
1.In DataStorage#recoverTransitionRead, log the InterruptedException and 
rethrow it as InterruptedIOException;
2.In TestDataStorage#testAddStorageDirectoreis, catch InterruptedException then 
let the test case fail;
3.The multithread in DataStorage#addStorageLocations() is for one specific 
namespace, so in TestDataStorage#testAddStorageDirectoreis my intention is 
creating one thread pool for each namespace. Not change here.
4.Re-phrase the parameter successVolumes.

[~szetszwo],thanks for your comments, please have a check of the new patch.
1. InterruptedException re-thrown as InterruptedIOException;
2. I think it's a good idea to log the upgrade progress for each dir, but so 
far, we can not get the progress easily from the current api. Do you think it's 
necessary to file a new jira to follow this?

> DataStorage#addStorageLocations(...) should support MultiThread to speedup 
> the upgrade of block pool at multi storage directories.
> --
>
> Key: HDFS-7692
> URL: https://issues.apache.org/jira/browse/HDFS-7692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.2
>Reporter: Leitao Guo
>Assignee: Leitao Guo
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7692.01.patch, HDFS-7692.02.patch
>
>
> {code:title=DataStorage#addStorageLocations(...)|borderStyle=solid}
> for (StorageLocation dataDir : dataDirs) {
>   File root = dataDir.getFile();
>  ... ...
> bpStorage.recoverTransitionRead(datanode, nsInfo, bpDataDirs, 
> startOpt);
> addBlockPoolStorage(bpid, bpStorage);
> ... ...
>   successVolumes.add(dataDir);
> }
> {code}
> In the above code the storage directories will be analyzed one by one, which 
> is really time consuming when upgrading HDFS with datanodes have dozens of 
> large volumes.  MultiThread dataDirs analyzing should be supported here to 
> speedup upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7692) DataStorage#addStorageLocations(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-05-18 Thread Leitao Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549905#comment-14549905
 ] 

Leitao Guo commented on HDFS-7692:
--

[~eddyxu], thanks for your comments, please have a check of the new patch.
1.In DataStorage#recoverTransitionRead, log the InterruptedException and 
rethrow it as InterruptedIOException;
2.In TestDataStorage#testAddStorageDirectoreis, catch InterruptedException then 
let the test case fail;
3.The multithread in DataStorage#addStorageLocations() is for one specific 
namespace, so in TestDataStorage#testAddStorageDirectoreis my intention is 
creating one thread pool for each namespace. Not change here.
4.Re-phrase the parameter successVolumes.

[~szetszwo],thanks for your comments, please have a check of the new patch.
1. InterruptedException re-thrown as InterruptedIOException;
2. I think it's a good idea to log the upgrade progress for each dir, but so 
far, we can not get the progress easily from the current api. Do you think it's 
necessary to file a new jira to follow this?

> DataStorage#addStorageLocations(...) should support MultiThread to speedup 
> the upgrade of block pool at multi storage directories.
> --
>
> Key: HDFS-7692
> URL: https://issues.apache.org/jira/browse/HDFS-7692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.2
>Reporter: Leitao Guo
>Assignee: Leitao Guo
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7692.01.patch, HDFS-7692.02.patch
>
>
> {code:title=DataStorage#addStorageLocations(...)|borderStyle=solid}
> for (StorageLocation dataDir : dataDirs) {
>   File root = dataDir.getFile();
>  ... ...
> bpStorage.recoverTransitionRead(datanode, nsInfo, bpDataDirs, 
> startOpt);
> addBlockPoolStorage(bpid, bpStorage);
> ... ...
>   successVolumes.add(dataDir);
> }
> {code}
> In the above code the storage directories will be analyzed one by one, which 
> is really time consuming when upgrading HDFS with datanodes have dozens of 
> large volumes.  MultiThread dataDirs analyzing should be supported here to 
> speedup upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7692) DataStorage#addStorageLocations(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-05-18 Thread Leitao Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549894#comment-14549894
 ] 

Leitao Guo commented on HDFS-7692:
--

[~eddyxu], thanks for your comments, please have a check of the new patch.
1.In DataStorage#recoverTransitionRead, log the InterruptedException and 
rethrow it as InterruptedIOException;
2.In TestDataStorage#testAddStorageDirectoreis, catch InterruptedException then 
let the test case fail;
3.The multithread in DataStorage#addStorageLocations() is for one specific 
namespace, so in TestDataStorage#testAddStorageDirectoreis my intention is 
creating one thread pool for each namespace. Not change here.
4.Re-phrase the parameter successVolumes.

[~szetszwo],thanks for your comments, please have a check of the new patch.
1. InterruptedException re-thrown as InterruptedIOException;
2. I think it's a good idea to log the upgrade progress for each dir, but so 
far, we can not get the progress easily from the current api. Do you think it's 
necessary to file a new jira to follow this?

> DataStorage#addStorageLocations(...) should support MultiThread to speedup 
> the upgrade of block pool at multi storage directories.
> --
>
> Key: HDFS-7692
> URL: https://issues.apache.org/jira/browse/HDFS-7692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.2
>Reporter: Leitao Guo
>Assignee: Leitao Guo
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7692.01.patch, HDFS-7692.02.patch
>
>
> {code:title=DataStorage#addStorageLocations(...)|borderStyle=solid}
> for (StorageLocation dataDir : dataDirs) {
>   File root = dataDir.getFile();
>  ... ...
> bpStorage.recoverTransitionRead(datanode, nsInfo, bpDataDirs, 
> startOpt);
> addBlockPoolStorage(bpid, bpStorage);
> ... ...
>   successVolumes.add(dataDir);
> }
> {code}
> In the above code the storage directories will be analyzed one by one, which 
> is really time consuming when upgrading HDFS with datanodes have dozens of 
> large volumes.  MultiThread dataDirs analyzing should be supported here to 
> speedup upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7692) DataStorage#addStorageLocations(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-05-18 Thread Leitao Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549892#comment-14549892
 ] 

Leitao Guo commented on HDFS-7692:
--

[~eddyxu], thanks for your comments, please have a check of the new patch.
1.In DataStorage#recoverTransitionRead, log the InterruptedException and 
rethrow it as InterruptedIOException;
2.In TestDataStorage#testAddStorageDirectoreis, catch InterruptedException then 
let the test case fail;
3.The multithread in DataStorage#addStorageLocations() is for one specific 
namespace, so in TestDataStorage#testAddStorageDirectoreis my intention is 
creating one thread pool for each namespace. Not change here.
4.Re-phrase the parameter successVolumes.

[~szetszwo],thanks for your comments, please have a check of the new patch.
1. InterruptedException re-thrown as InterruptedIOException;
2. I think it's a good idea to log the upgrade progress for each dir, but so 
far, we can not get the progress easily from the current api. Do you think it's 
necessary to file a new jira to follow this?

> DataStorage#addStorageLocations(...) should support MultiThread to speedup 
> the upgrade of block pool at multi storage directories.
> --
>
> Key: HDFS-7692
> URL: https://issues.apache.org/jira/browse/HDFS-7692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.2
>Reporter: Leitao Guo
>Assignee: Leitao Guo
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7692.01.patch, HDFS-7692.02.patch
>
>
> {code:title=DataStorage#addStorageLocations(...)|borderStyle=solid}
> for (StorageLocation dataDir : dataDirs) {
>   File root = dataDir.getFile();
>  ... ...
> bpStorage.recoverTransitionRead(datanode, nsInfo, bpDataDirs, 
> startOpt);
> addBlockPoolStorage(bpid, bpStorage);
> ... ...
>   successVolumes.add(dataDir);
> }
> {code}
> In the above code the storage directories will be analyzed one by one, which 
> is really time consuming when upgrading HDFS with datanodes have dozens of 
> large volumes.  MultiThread dataDirs analyzing should be supported here to 
> speedup upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7692) DataStorage#addStorageLocations(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-05-18 Thread Leitao Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549901#comment-14549901
 ] 

Leitao Guo commented on HDFS-7692:
--

[~eddyxu], thanks for your comments, please have a check of the new patch.
1.In DataStorage#recoverTransitionRead, log the InterruptedException and 
rethrow it as InterruptedIOException;
2.In TestDataStorage#testAddStorageDirectoreis, catch InterruptedException then 
let the test case fail;
3.The multithread in DataStorage#addStorageLocations() is for one specific 
namespace, so in TestDataStorage#testAddStorageDirectoreis my intention is 
creating one thread pool for each namespace. Not change here.
4.Re-phrase the parameter successVolumes.

[~szetszwo],thanks for your comments, please have a check of the new patch.
1. InterruptedException re-thrown as InterruptedIOException;
2. I think it's a good idea to log the upgrade progress for each dir, but so 
far, we can not get the progress easily from the current api. Do you think it's 
necessary to file a new jira to follow this?

> DataStorage#addStorageLocations(...) should support MultiThread to speedup 
> the upgrade of block pool at multi storage directories.
> --
>
> Key: HDFS-7692
> URL: https://issues.apache.org/jira/browse/HDFS-7692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.2
>Reporter: Leitao Guo
>Assignee: Leitao Guo
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7692.01.patch, HDFS-7692.02.patch
>
>
> {code:title=DataStorage#addStorageLocations(...)|borderStyle=solid}
> for (StorageLocation dataDir : dataDirs) {
>   File root = dataDir.getFile();
>  ... ...
> bpStorage.recoverTransitionRead(datanode, nsInfo, bpDataDirs, 
> startOpt);
> addBlockPoolStorage(bpid, bpStorage);
> ... ...
>   successVolumes.add(dataDir);
> }
> {code}
> In the above code the storage directories will be analyzed one by one, which 
> is really time consuming when upgrading HDFS with datanodes have dozens of 
> large volumes.  MultiThread dataDirs analyzing should be supported here to 
> speedup upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7692) DataStorage#addStorageLocations(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-05-18 Thread Leitao Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549898#comment-14549898
 ] 

Leitao Guo commented on HDFS-7692:
--

[~eddyxu], thanks for your comments; please take a look at the new patch.
1. In DataStorage#recoverTransitionRead, log the InterruptedException and 
rethrow it as InterruptedIOException;
2. In TestDataStorage#testAddStorageDirectoreis, catch InterruptedException and 
let the test case fail;
3. The multithreading in DataStorage#addStorageLocations() is for one specific 
namespace, so in TestDataStorage#testAddStorageDirectoreis my intention is to 
create one thread pool for each namespace. No change here.
4. Rephrased the parameter successVolumes.

[~szetszwo], thanks for your comments; please take a look at the new patch.
1. InterruptedException is re-thrown as InterruptedIOException;
2. I think it's a good idea to log the upgrade progress for each dir, but so 
far we cannot get the progress easily from the current API. Do you think it's 
necessary to file a new JIRA to follow up on this?

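The rethrow pattern discussed in point 1 above can be sketched as follows. This is an illustration of the general idiom, not the patch itself; `awaitUpgradeTasks` is a hypothetical name, and the `Thread.sleep` stands in for blocking on upgrade futures. Restoring the interrupt status before rethrowing keeps the interruption visible to callers further up the stack.

```java
import java.io.IOException;
import java.io.InterruptedIOException;

public class RethrowInterrupted {
    // Hypothetical method: blocks on upgrade work, surfaces interruption
    // as an IOException subtype so IO-oriented callers can handle it.
    static void awaitUpgradeTasks() throws IOException {
        try {
            Thread.sleep(10); // stands in for Future.get() on upgrade tasks
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore interrupt status
            InterruptedIOException iioe =
                new InterruptedIOException("Upgrade interrupted");
            iioe.initCause(e); // keep the original exception as the cause
            throw iioe;
        }
    }

    public static void main(String[] args) throws IOException {
        Thread.currentThread().interrupt(); // force the interrupted path
        try {
            awaitUpgradeTasks();
            System.out.println("no-interrupt");
        } catch (InterruptedIOException e) {
            System.out.println("interrupted-io");
        }
    }
}
```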
> DataStorage#addStorageLocations(...) should support MultiThread to speedup 
> the upgrade of block pool at multi storage directories.
> --
>
> Key: HDFS-7692
> URL: https://issues.apache.org/jira/browse/HDFS-7692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.2
>Reporter: Leitao Guo
>Assignee: Leitao Guo
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7692.01.patch, HDFS-7692.02.patch
>
>
> {code:title=DataStorage#addStorageLocations(...)|borderStyle=solid}
> for (StorageLocation dataDir : dataDirs) {
>   File root = dataDir.getFile();
>  ... ...
> bpStorage.recoverTransitionRead(datanode, nsInfo, bpDataDirs, 
> startOpt);
> addBlockPoolStorage(bpid, bpStorage);
> ... ...
>   successVolumes.add(dataDir);
> }
> {code}
> In the above code the storage directories are analyzed one by one, which 
> is really time-consuming when upgrading HDFS on datanodes that have dozens of 
> large volumes. Multi-threaded analysis of dataDirs should be supported here to 
> speed up the upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8320) Erasure coding: consolidate striping-related terminologies

2015-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549884#comment-14549884
 ] 

Hadoop QA commented on HDFS-8320:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m  5s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 58s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  12m  1s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 16s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 40s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 58s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 38s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 50s | The patch appears to introduce 7 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 49s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 109m 52s | Tests failed in hadoop-hdfs. |
| | | 156m 15s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
|  |  Inconsistent synchronization of 
org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 89% of time  
Unsynchronized access at DFSOutputStream.java:89% of time  Unsynchronized 
access at DFSOutputStream.java:[line 146] |
|  |  Possible null pointer dereference of arr$ in 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction.initializeBlockRecovery(long)
  Dereferenced at BlockInfoStripedUnderConstruction.java:arr$ in 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction.initializeBlockRecovery(long)
  Dereferenced at BlockInfoStripedUnderConstruction.java:[line 194] |
|  |  Unread field:field be static?  At ErasureCodingWorker.java:[line 252] |
|  |  Should 
org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker$StripedReader
 be a _static_ inner class?  At ErasureCodingWorker.java:inner class?  At 
ErasureCodingWorker.java:[lines 913-915] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.createErasureCodingZone(String,
 ECSchema):in 
org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.createErasureCodingZone(String,
 ECSchema): String.getBytes()  At ErasureCodingZoneManager.java:[line 117] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.getECZoneInfo(INodesInPath):in
 
org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.getECZoneInfo(INodesInPath):
 new String(byte[])  At ErasureCodingZoneManager.java:[line 81] |
|  |  Result of integer multiplication cast to long in 
org.apache.hadoop.hdfs.util.StripedBlockUtil.constructInternalBlock(LocatedStripedBlock,
 int, int, int, int)  At StripedBlockUtil.java:to long in 
org.apache.hadoop.hdfs.util.StripedBlockUtil.constructInternalBlock(LocatedStripedBlock,
 int, int, int, int)  At StripedBlockUtil.java:[line 108] |
| Failed unit tests | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
| Timed out tests | org.apache.hadoop.hdfs.TestDistributedFileSystem |
|   | org.apache.hadoop.hdfs.server.namenode.TestHostsFiles |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12733703/HDFS-8320-HDFS-7285.02.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / b596edc |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11044/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11044/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11044/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11044/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11044/console |


This message was automatically generated.

> Erasure coding: consolidate striping-related terminologies
> -



[jira] [Updated] (HDFS-8375) Add cellSize as an XAttr to ECZone

2015-05-18 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8375:

Attachment: HDFS-8375-HDFS-7285-03.patch

Attached the rebased patch. 
Please review

> Add cellSize as an XAttr to ECZone
> --
>
> Key: HDFS-8375
> URL: https://issues.apache.org/jira/browse/HDFS-8375
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8375-HDFS-7285-01.patch, 
> HDFS-8375-HDFS-7285-02.patch, HDFS-8375-HDFS-7285-03.patch
>
>
> Add {{cellSize}} as an Xattr for ECZone. as discussed 
> [here|https://issues.apache.org/jira/browse/HDFS-8347?focusedCommentId=14539108&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14539108]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8428) Erasure Coding: Fix the NullPointerException when deleting file

2015-05-18 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549852#comment-14549852
 ] 

Yi Liu commented on HDFS-8428:
--

I also see the {{NullPointerException}} in {{TestDFSStripedInputStream}}; 
although the test passes, there is actually an exception:
{code}
2015-05-19 13:27:08,944 WARN  ipc.Server (Server.java:run(2190)) - IPC Server 
handler 2 on 50789, call 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.blockReceivedAndDeleted 
from 127.0.0.1:59424 Call#123 Retry#0
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.getStoredBlock(BlockManager.java:3581)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeStoredBlock(BlockManager.java:3209)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processIncrementalBlockReport(BlockManager.java:3390)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.processIncrementalBlockReport(FSNamesystem.java:5545)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReceivedAndDeleted(NameNodeRpcServer.java:1344)
at 
org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReceivedAndDeleted(DatanodeProtocolServerSideTranslatorPB.java:222)
at 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:29418)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2166)
{code}

> Erasure Coding: Fix the NullPointerException when deleting file
> ---
>
> Key: HDFS-8428
> URL: https://issues.apache.org/jira/browse/HDFS-8428
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Yi Liu
>
> In HDFS, when removing a file, the NN will also remove all of its blocks from 
> {{BlocksMap}} and send {{DNA_INVALIDATE}} (invalidate blocks) commands to 
> datanodes. After datanodes successfully delete the block replicas, they will 
> report {{DELETED_BLOCK}} to the NameNode.
> The snippet of code logic in {{BlockManager#processIncrementalBlockReport}} is 
> as follows:
> {code}
> case DELETED_BLOCK:
> removeStoredBlock(storageInfo, getStoredBlock(rdbi.getBlock()), node);
> ...
> {code}
> {code}
> private void removeStoredBlock(DatanodeStorageInfo storageInfo, Block block,
>   DatanodeDescriptor node) {
> if (shouldPostponeBlocksFromFuture &&
> namesystem.isGenStampInFuture(block)) {
>   queueReportedBlock(storageInfo, block, null,
>   QUEUE_REASON_FUTURE_GENSTAMP);
>   return;
> }
> removeStoredBlock(getStoredBlock(block), node);
>   }
> {code}
> In the EC branch, we added {{getStoredBlock}}. A {{NullPointerException}} occurs 
> when handling the {{DELETED_BLOCK}} of an incremental block report from a DataNode 
> after a file is deleted: since the block is already removed, we need a null check.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7692) DataStorage#addStorageLocations(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-05-18 Thread Leitao Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leitao Guo updated HDFS-7692:
-
Attachment: HDFS-7692.02.patch

> DataStorage#addStorageLocations(...) should support MultiThread to speedup 
> the upgrade of block pool at multi storage directories.
> --
>
> Key: HDFS-7692
> URL: https://issues.apache.org/jira/browse/HDFS-7692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.2
>Reporter: Leitao Guo
>Assignee: Leitao Guo
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7692.01.patch, HDFS-7692.02.patch
>
>
> {code:title=DataStorage#addStorageLocations(...)|borderStyle=solid}
> for (StorageLocation dataDir : dataDirs) {
>   File root = dataDir.getFile();
>  ... ...
> bpStorage.recoverTransitionRead(datanode, nsInfo, bpDataDirs, 
> startOpt);
> addBlockPoolStorage(bpid, bpStorage);
> ... ...
>   successVolumes.add(dataDir);
> }
> {code}
> In the above code the storage directories are analyzed one by one, which 
> is really time consuming when upgrading HDFS on datanodes that have dozens of 
> large volumes. Multi-threaded analysis of dataDirs should be supported here to 
> speed up the upgrade.
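As a rough illustration of the proposed change (hypothetical names, not the actual patch), submitting each directory's analysis to a thread pool and then waiting on the futures might look like:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelDirAnalyzer {

    // Stand-in for the per-directory work; in DataStorage this would be the
    // recoverTransitionRead(...) call for one storage directory.
    static String analyzeDir(String dir) {
        return dir + ":analyzed";
    }

    // Analyze all directories concurrently and collect the results in order.
    static List<String> analyzeAll(List<String> dataDirs) {
        ExecutorService pool =
            Executors.newFixedThreadPool(Math.max(1, dataDirs.size()));
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String dir : dataDirs) {
                futures.add(pool.submit(() -> analyzeDir(dir)));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get()); // blocks until that directory is done
            }
            return results;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(analyzeAll(Arrays.asList("/data1", "/data2", "/data3")));
    }
}
```

The directories are independent, so the total analysis time drops from the sum of the per-directory times to roughly the maximum.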



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8428) Erasure Coding: Fix the NullPointerException when deleting file

2015-05-18 Thread Yi Liu (JIRA)
Yi Liu created HDFS-8428:


 Summary: Erasure Coding: Fix the NullPointerException when 
deleting file
 Key: HDFS-8428
 URL: https://issues.apache.org/jira/browse/HDFS-8428
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yi Liu
Assignee: Yi Liu


In HDFS, when a file is removed, the NN also removes all of its blocks from 
{{BlocksMap}} and sends {{DNA_INVALIDATE}} (invalidate blocks) commands to the 
datanodes. After the datanodes successfully delete the block replicas, they report 
{{DELETED_BLOCK}} to the NameNode.

The relevant code logic in {{BlockManager#processIncrementalBlockReport}} is as 
follows:
{code}
case DELETED_BLOCK:
removeStoredBlock(storageInfo, getStoredBlock(rdbi.getBlock()), node);
...
{code}
{code}
private void removeStoredBlock(DatanodeStorageInfo storageInfo, Block block,
  DatanodeDescriptor node) {
if (shouldPostponeBlocksFromFuture &&
namesystem.isGenStampInFuture(block)) {
  queueReportedBlock(storageInfo, block, null,
  QUEUE_REASON_FUTURE_GENSTAMP);
  return;
}
removeStoredBlock(getStoredBlock(block), node);
  }
{code}

In the EC branch, we added {{getStoredBlock}}. A {{NullPointerException}} occurs when 
handling the {{DELETED_BLOCK}} of an incremental block report from a DataNode after 
a file is deleted: since the block is already removed, we need a null check.
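A minimal sketch of the null guard the description calls for, using simplified stand-in types rather than the real BlockManager/BlocksMap classes:

```java
import java.util.HashMap;
import java.util.Map;

public class BlocksMapSketch {
    // Simplified stand-in for BlocksMap: block id -> stored block info.
    private final Map<Long, String> blocksMap = new HashMap<>();

    public void addBlock(long id, String info) {
        blocksMap.put(id, info);
    }

    // Mirrors getStoredBlock: may return null once the file (and its
    // blocks) have already been removed from the map.
    public String getStoredBlock(long id) {
        return blocksMap.get(id);
    }

    // DELETED_BLOCK handling must tolerate a block that is already gone,
    // instead of dereferencing the null lookup result.
    public boolean removeStoredBlock(long id) {
        String stored = getStoredBlock(id);
        if (stored == null) {
            return false; // already removed when the file was deleted; skip
        }
        blocksMap.remove(id);
        return true;
    }
}
```

Without the null check, the second report for an already-deleted block would trigger exactly the NPE described above.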



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8131) Implement a space balanced block placement policy

2015-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549847#comment-14549847
 ] 

Hadoop QA commented on HDFS-8131:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m  8s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 41s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 56s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 38s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 40s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 21s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 18s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 167m 57s | Tests passed in hadoop-hdfs. 
|
| | | 210m 38s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12733696/HDFS-8131.006.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0790275 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11042/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11042/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11042/console |


This message was automatically generated.

> Implement a space balanced block placement policy
> -
>
> Key: HDFS-8131
> URL: https://issues.apache.org/jira/browse/HDFS-8131
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
>  Labels: BlockPlacementPolicy
> Attachments: HDFS-8131-v1.diff, HDFS-8131-v2.diff, HDFS-8131-v3.diff, 
> HDFS-8131.004.patch, HDFS-8131.005.patch, HDFS-8131.006.patch, balanced.png
>
>
> The default block placement policy chooses datanodes for new blocks 
> randomly, which results in unbalanced space utilization among datanodes 
> after a cluster expansion: the old datanodes stay at a high used percentage 
> of space while the newly added ones stay at a low percentage.
> Though we can use the external balancer tool to even out the space usage, 
> it costs extra network IO and the balancing speed is not easy to control.
> An easy solution is to implement a space-balanced block placement policy that 
> chooses datanodes with low space usage for new blocks with slightly higher 
> probability. Before long, the used percentage across datanodes will trend 
> toward balance.
> Suggestions and discussions are welcome. Thanks
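A sketch of the core idea, assuming a weighted random choice by free-space fraction (illustrative only, not the code from the attached patches):

```java
import java.util.Random;

public class SpaceBalancedChooser {

    // Picks an index with probability proportional to each node's free-space
    // fraction, so emptier datanodes are chosen slightly more often than
    // fuller ones, nudging the cluster toward balanced utilization.
    public static int choose(double[] freeFraction, Random rng) {
        double total = 0;
        for (double f : freeFraction) {
            total += f;
        }
        // Sample a point on [0, total) and find which node's slice it lands in.
        double r = rng.nextDouble() * total;
        for (int i = 0; i < freeFraction.length; i++) {
            r -= freeFraction[i];
            if (r < 0) {
                return i;
            }
        }
        return freeFraction.length - 1; // guard against rounding at the boundary
    }
}
```

A node with no free space (fraction 0.0) is never picked, while a node twice as empty as another is picked roughly twice as often; the randomness keeps placement spread out rather than always targeting the emptiest node.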



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8375) Add cellSize as an XAttr to ECZone

2015-05-18 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549840#comment-14549840
 ] 

Zhe Zhang commented on HDFS-8375:
-

HDFS-8320 was just committed so there will be some additional rebase. I can 
help with that part if needed. 

> Add cellSize as an XAttr to ECZone
> --
>
> Key: HDFS-8375
> URL: https://issues.apache.org/jira/browse/HDFS-8375
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8375-HDFS-7285-01.patch, 
> HDFS-8375-HDFS-7285-02.patch
>
>
> Add {{cellSize}} as an XAttr for ECZone, as discussed 
> [here|https://issues.apache.org/jira/browse/HDFS-8347?focusedCommentId=14539108&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14539108]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8320) Erasure coding: consolidate striping-related terminologies

2015-05-18 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8320:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Erasure coding: consolidate striping-related terminologies
> --
>
> Key: HDFS-8320
> URL: https://issues.apache.org/jira/browse/HDFS-8320
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8320-HDFS-7285.00.patch, 
> HDFS-8320-HDFS-7285.01.patch, HDFS-8320-HDFS-7285.02.patch, 
> HDFS-8320-HDFS-7285.03.patch
>
>
> Right now we are doing striping-based I/O in a number of places:
> # Client output stream (HDFS-7889)
> # Client input stream
> #* pread (HDFS-7782, HDFS-7678)
> #* stateful read (HDFS-8033, HDFS-8281, HDFS-8319)
> # DN reconstruction (HDFS-7348)
> In each place we use one or multiple of the following terminologies:
> # Cell
> # Stripe
> # Block group
> # Internal block
> # Chunk
> This JIRA aims to systematically define these terminologies in relation to 
> each other and in the context of the containing file. For example, a cell 
> belonging to stripe _i_ and internal block _j_ can be indexed as {{(i, j)}}, and 
> its logical index _k_ in the file can be calculated.
> With the above consolidation, hopefully we can further consolidate the striping 
> I/O code.
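As a hedged example of the kind of index arithmetic being consolidated, assuming a row-major cell layout across the data blocks (names are illustrative, not the actual StripedBlockUtil API):

```java
public class StripingIndex {

    // Logical cell index k for the cell at stripe i and internal data block j,
    // assuming cells are laid out row by row across dataBlkNum data blocks:
    // k = i * dataBlkNum + j.
    public static long logicalCellIndex(long stripe, int blkIdx, int dataBlkNum) {
        return stripe * dataBlkNum + blkIdx;
    }

    // Byte offset of that cell in the logical file, given a fixed cell size.
    public static long logicalByteOffset(long stripe, int blkIdx,
                                         int dataBlkNum, int cellSize) {
        return logicalCellIndex(stripe, blkIdx, dataBlkNum) * cellSize;
    }
}
```

With such a mapping fixed in one place, the output stream, input stream, and DN reconstruction paths can all translate between file offsets and (stripe, internal block) coordinates consistently.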



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8320) Erasure coding: consolidate striping-related terminologies

2015-05-18 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549835#comment-14549835
 ] 

Zhe Zhang commented on HDFS-8320:
-

Thanks Jing for the additional fixes! I looked at the diff and 03 patch looks 
good to me. I just committed the patch to the branch.

> Erasure coding: consolidate striping-related terminologies
> --
>
> Key: HDFS-8320
> URL: https://issues.apache.org/jira/browse/HDFS-8320
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8320-HDFS-7285.00.patch, 
> HDFS-8320-HDFS-7285.01.patch, HDFS-8320-HDFS-7285.02.patch, 
> HDFS-8320-HDFS-7285.03.patch
>
>
> Right now we are doing striping-based I/O in a number of places:
> # Client output stream (HDFS-7889)
> # Client input stream
> #* pread (HDFS-7782, HDFS-7678)
> #* stateful read (HDFS-8033, HDFS-8281, HDFS-8319)
> # DN reconstruction (HDFS-7348)
> In each place we use one or multiple of the following terminologies:
> # Cell
> # Stripe
> # Block group
> # Internal block
> # Chunk
> This JIRA aims to systematically define these terminologies in relation to 
> each other and in the context of the containing file. For example, a cell 
> belonging to stripe _i_ and internal block _j_ can be indexed as {{(i, j)}}, and 
> its logical index _k_ in the file can be calculated.
> With the above consolidation, hopefully we can further consolidate the striping 
> I/O code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8320) Erasure coding: consolidate striping-related terminologies

2015-05-18 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-8320:

Attachment: HDFS-8320-HDFS-7285.03.patch

Thanks Zhe! +1 for the 02 patch after addressing the following nits:
# there are two unused imports in StripedBlockUtil and DFSStripedInputStream
# there are still a couple of places that need to be fixed in StripedBlockUtil's javadoc
# Need to fix TestStripedBlockUtil#testParseDummyStripedBlock

The 03 patch addresses all the above nits; please see if it looks good to 
you.

> Erasure coding: consolidate striping-related terminologies
> --
>
> Key: HDFS-8320
> URL: https://issues.apache.org/jira/browse/HDFS-8320
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8320-HDFS-7285.00.patch, 
> HDFS-8320-HDFS-7285.01.patch, HDFS-8320-HDFS-7285.02.patch, 
> HDFS-8320-HDFS-7285.03.patch
>
>
> Right now we are doing striping-based I/O in a number of places:
> # Client output stream (HDFS-7889)
> # Client input stream
> #* pread (HDFS-7782, HDFS-7678)
> #* stateful read (HDFS-8033, HDFS-8281, HDFS-8319)
> # DN reconstruction (HDFS-7348)
> In each place we use one or multiple of the following terminologies:
> # Cell
> # Stripe
> # Block group
> # Internal block
> # Chunk
> This JIRA aims to systematically define these terminologies in relation to 
> each other and in the context of the containing file. For example, a cell 
> belonging to stripe _i_ and internal block _j_ can be indexed as {{(i, j)}}, and 
> its logical index _k_ in the file can be calculated.
> With the above consolidation, hopefully we can further consolidate the striping 
> I/O code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8266) Erasure Coding: Test of snapshot/.trash with EC files

2015-05-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549813#comment-14549813
 ] 

Rakesh R commented on HDFS-8266:


I've raised HDFS-8420 and put a patch over there to handle this case separately.

> Erasure Coding: Test of snapshot/.trash with EC files
> -
>
> Key: HDFS-8266
> URL: https://issues.apache.org/jira/browse/HDFS-8266
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
> Attachments: HDFS-8266-HDFS-7285-00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8375) Add cellSize as an XAttr to ECZone

2015-05-18 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549808#comment-14549808
 ] 

Zhe Zhang commented on HDFS-8375:
-

Thanks Vinay for the updated patch! It looks good to me but it needs a rebase 
(I'm not sure which commit caused the conflict).

> Add cellSize as an XAttr to ECZone
> --
>
> Key: HDFS-8375
> URL: https://issues.apache.org/jira/browse/HDFS-8375
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8375-HDFS-7285-01.patch, 
> HDFS-8375-HDFS-7285-02.patch
>
>
> Add {{cellSize}} as an XAttr for ECZone, as discussed 
> [here|https://issues.apache.org/jira/browse/HDFS-8347?focusedCommentId=14539108&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14539108]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7687) Change fsck to support EC files

2015-05-18 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549798#comment-14549798
 ] 

Jing Zhao commented on HDFS-7687:
-

FYI, I've merged HDFS-8405 into the feature branch.

> Change fsck to support EC files
> ---
>
> Key: HDFS-7687
> URL: https://issues.apache.org/jira/browse/HDFS-7687
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
> Attachments: HDFS-7687.1.patch, HDFS-7687.2.patch, HDFS-7687.3.patch
>
>
> We need to change fsck so that it can detect "under replicated" and corrupted 
> EC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8352) Erasure Coding: test webhdfs read write stripe file

2015-05-18 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549746#comment-14549746
 ] 

Zhe Zhang commented on HDFS-8352:
-

No worries :)

> Erasure Coding: test webhdfs read write stripe file
> ---
>
> Key: HDFS-8352
> URL: https://issues.apache.org/jira/browse/HDFS-8352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-8352-HDFS-7285.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8320) Erasure coding: consolidate striping-related terminologies

2015-05-18 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8320:

Attachment: HDFS-8320-HDFS-7285.02.patch

Thanks Jing for the good catch! Updated patch to address the issue.

The Jenkins failures don't seem related. I hand picked a few and they passed.

> Erasure coding: consolidate striping-related terminologies
> --
>
> Key: HDFS-8320
> URL: https://issues.apache.org/jira/browse/HDFS-8320
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8320-HDFS-7285.00.patch, 
> HDFS-8320-HDFS-7285.01.patch, HDFS-8320-HDFS-7285.02.patch
>
>
> Right now we are doing striping-based I/O in a number of places:
> # Client output stream (HDFS-7889)
> # Client input stream
> #* pread (HDFS-7782, HDFS-7678)
> #* stateful read (HDFS-8033, HDFS-8281, HDFS-8319)
> # DN reconstruction (HDFS-7348)
> In each place we use one or multiple of the following terminologies:
> # Cell
> # Stripe
> # Block group
> # Internal block
> # Chunk
> This JIRA aims to systematically define these terminologies in relation to 
> each other and in the context of the containing file. For example, a cell 
> belonging to stripe _i_ and internal block _j_ can be indexed as {{(i, j)}}, and 
> its logical index _k_ in the file can be calculated.
> With the above consolidation, hopefully we can further consolidate the striping 
> I/O code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8427) Remove dataBlockNum and parityBlockNum from BlockInfoStriped

2015-05-18 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8427:
-
Attachment: HDFS-8427-HDFS-7285-00.patch

> Remove dataBlockNum and parityBlockNum from BlockInfoStriped
> 
>
> Key: HDFS-8427
> URL: https://issues.apache.org/jira/browse/HDFS-8427
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Fix For: HDFS-7285
>
> Attachments: HDFS-8427-HDFS-7285-00.patch
>
>
> Remove unnecessary members such as {{dataBlockNum}} and {{parityBlockNum}} 
> from {{BlockInfoStriped}}; these are already included in {{ECSchema}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8320) Erasure coding: consolidate striping-related terminologies

2015-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549724#comment-14549724
 ] 

Hadoop QA commented on HDFS-8320:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m  7s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 46s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 43s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 38s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 41s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 17s | The patch appears to introduce 7 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 19s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 150m 10s | Tests failed in hadoop-hdfs. |
| | | 192m 38s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
|  |  Inconsistent synchronization of 
org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 89% of time  
Unsynchronized access at DFSOutputStream.java:89% of time  Unsynchronized 
access at DFSOutputStream.java:[line 146] |
|  |  Possible null pointer dereference of arr$ in 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction.initializeBlockRecovery(long)
  Dereferenced at BlockInfoStripedUnderConstruction.java:arr$ in 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction.initializeBlockRecovery(long)
  Dereferenced at BlockInfoStripedUnderConstruction.java:[line 194] |
|  |  Unread field:field be static?  At ErasureCodingWorker.java:[line 252] |
|  |  Should 
org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker$StripedReader
 be a _static_ inner class?  At ErasureCodingWorker.java:inner class?  At 
ErasureCodingWorker.java:[lines 913-915] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.createErasureCodingZone(String,
 ECSchema):in 
org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.createErasureCodingZone(String,
 ECSchema): String.getBytes()  At ErasureCodingZoneManager.java:[line 117] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.getECZoneInfo(INodesInPath):in
 
org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.getECZoneInfo(INodesInPath):
 new String(byte[])  At ErasureCodingZoneManager.java:[line 81] |
|  |  Result of integer multiplication cast to long in 
org.apache.hadoop.hdfs.util.StripedBlockUtil.constructInternalBlock(LocatedStripedBlock,
 int, int, int, int)  At StripedBlockUtil.java:to long in 
org.apache.hadoop.hdfs.util.StripedBlockUtil.constructInternalBlock(LocatedStripedBlock,
 int, int, int, int)  At StripedBlockUtil.java:[line 108] |
| Failed unit tests | 
hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots |
|   | hadoop.fs.TestUrlStreamHandler |
|   | hadoop.hdfs.server.namenode.TestFileLimit |
|   | hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot |
|   | hadoop.hdfs.server.namenode.TestEditLogAutoroll |
|   | hadoop.TestRefreshCallQueue |
|   | hadoop.hdfs.protocolPB.TestPBHelper |
|   | hadoop.cli.TestCryptoAdminCLI |
|   | hadoop.fs.viewfs.TestViewFsWithAcls |
|   | hadoop.hdfs.server.namenode.TestAddStripedBlocks |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.fs.contract.hdfs.TestHDFSContractDelete |
|   | hadoop.fs.TestFcHdfsSetUMask |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.server.namenode.TestClusterId |
|   | hadoop.hdfs.server.namenode.TestDeleteRace |
|   | hadoop.hdfs.server.namenode.TestFSDirectory |
|   | hadoop.fs.contract.hdfs.TestHDFSContractOpen |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.fs.contract.hdfs.TestHDFSContractMkdir |
|   | hadoop.fs.contract.hdfs.TestHDFSContractAppend |
|   | hadoop.hdfs.server.namenode.TestSecondaryWebUi |
|   | hadoop.hdfs.server.namenode.TestHDFSConcat |
|   | hadoop.hdfs.server.namenode.TestAddBlockRetry |
|   | hadoop.fs.TestSymlinkHdfsFileSystem |
|   | hadoop.fs.viewfs.TestViewFsDefaultValue |
|   | hadoop.fs.TestSymlink

[jira] [Updated] (HDFS-4273) Fix some issue in DFSInputstream

2015-05-18 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-4273:
---
Labels:   (was: BB2015-05-TBR)

> Fix some issue in DFSInputstream
> 
>
> Key: HDFS-4273
> URL: https://issues.apache.org/jira/browse/HDFS-4273
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Minor
> Attachments: HDFS-4273-v2.patch, HDFS-4273.patch, HDFS-4273.v3.patch, 
> HDFS-4273.v4.patch, HDFS-4273.v5.patch, HDFS-4273.v6.patch, 
> HDFS-4273.v7.patch, HDFS-4273.v8.patch, TestDFSInputStream.java
>
>
> The following issues in DFSInputStream are addressed in this jira:
> 1. A read may not retry enough in some cases, causing early failure.
> Assume the following call logic:
> {noformat} 
> readWithStrategy()
>   -> blockSeekTo()
>   -> readBuffer()
>  -> reader.doRead()
>  -> seekToNewSource() add currentNode to deadnode, wish to get a 
> different datanode
> -> blockSeekTo()
>-> chooseDataNode()
>   -> block missing, clear deadNodes and pick the currentNode again
> seekToNewSource() return false
>  readBuffer() re-throw the exception quit loop
> readWithStrategy() got the exception,  and may fail the read call before 
> tried MaxBlockAcquireFailures.
> {noformat} 
> 2. In a multi-threaded scenario (like HBase), DFSInputStream.failures has a race 
> condition: it is cleared to 0 while still being used by another thread, so some 
> read thread may never quit. Changing failures to a local variable solves this 
> issue.
> 3. If the local datanode is added to deadNodes, it will not be removed from 
> deadNodes when the DN comes back alive. We need a way to remove the local 
> datanode from deadNodes when it becomes live again.
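Point 2 above can be sketched as follows: a counter local to each read call cannot be reset or observed by another thread, unlike a shared field (simplified stand-in, not the DFSInputStream code):

```java
public class RetryCounterSketch {

    static final int MAX_FAILURES = 3;

    // The failure counter is a local variable, so each read call counts its
    // own retries; a concurrent reader resetting a shared field can no longer
    // make this loop spin forever or give up too early.
    public static int attemptsUntilSuccess(boolean[] outcomes) {
        int failures = 0; // local to this call, not a shared field
        for (boolean ok : outcomes) {
            if (ok) {
                return failures; // succeeded after this many failures
            }
            if (++failures >= MAX_FAILURES) {
                break; // give up once the per-call retry budget is spent
            }
        }
        return failures;
    }
}
```

The real fix in the patch is the same idea applied to DFSInputStream: move the shared mutable counter into the scope of the read call.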



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed

2015-05-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549719#comment-14549719
 ] 

Rakesh R commented on HDFS-8332:


Thanks [~umamaheswararao], [~busbey], [~vinayrpet], [~cnauroth] for the helpful 
discussions and resolving this.

> DFS client API calls should check filesystem closed
> ---
>
> Key: HDFS-8332
> URL: https://issues.apache.org/jira/browse/HDFS-8332
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Rakesh R
>Assignee: Rakesh R
> Fix For: 3.0.0
>
> Attachments: HDFS-8332-000.patch, HDFS-8332-001.patch, 
> HDFS-8332-002-Branch-2.patch, HDFS-8332-002.patch, 
> HDFS-8332.001.branch-2.patch
>
>
> I could see {{listCacheDirectives()}} and {{listCachePools()}} APIs can be 
> called even after the filesystem close. Instead these calls should do 
> {{checkOpen}} and throws:
> {code}
> java.io.IOException: Filesystem closed
>   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
> {code}
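A minimal sketch of the checkOpen guard pattern the issue asks for (simplified stand-in for DFSClient; not the actual patch):

```java
import java.io.IOException;

public class ClientSketch {

    private volatile boolean clientRunning = true;

    public void close() {
        clientRunning = false;
    }

    // Every client API call should start with this guard, mirroring
    // DFSClient#checkOpen: fail fast once the filesystem is closed.
    public void checkOpen() throws IOException {
        if (!clientRunning) {
            throw new IOException("Filesystem closed");
        }
    }

    public String listCachePools() throws IOException {
        checkOpen(); // throws instead of operating on a closed client
        return "pools";
    }
}
```

Calls made after close() then consistently throw "Filesystem closed" rather than partially succeeding against released resources.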



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8186) Erasure coding: Make block placement policy for EC file configurable

2015-05-18 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8186:

Attachment: HDFS-8186-HDFS-7285.003.patch

> Erasure coding: Make block placement policy for EC file configurable
> 
>
> Key: HDFS-8186
> URL: https://issues.apache.org/jira/browse/HDFS-8186
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-8186-HDFS-7285.002.patch, 
> HDFS-8186-HDFS-7285.003.patch, HDFS-8186.001.patch
>
>
> This includes:
> 1. Users can configure the block placement policy for EC files in the XML 
> configuration file.
> 2. The EC policy applies to EC files and the replication policy to non-EC 
> files; the two coexist.
> It does not include:
> 1. Details of the block placement policy for EC; discussion and implementation 
> go to HDFS-7613.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4185) Add a metric for number of active leases

2015-05-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549706#comment-14549706
 ] 

Rakesh R commented on HDFS-4185:


Thanks [~raviprak], [~kihwal], [~vinayrpet] for the help.

> Add a metric for number of active leases
> 
>
> Key: HDFS-4185
> URL: https://issues.apache.org/jira/browse/HDFS-4185
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 0.23.4, 2.0.2-alpha
>Reporter: Kihwal Lee
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-4185-001.patch, HDFS-4185-002.patch, 
> HDFS-4185-003.patch, HDFS-4185-004.patch, HDFS-4185-005.patch, 
> HDFS-4185-006.patch, HDFS-4185-007.patch, HDFS-4185-008.patch, 
> HDFS-4185-009.patch
>
>
> We have seen cases of systematic open file leaks, which could have been 
> detected if we have a metric that shows number of active leases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4185) Add a metric for number of active leases

2015-05-18 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-4185:
---
Labels:   (was: BB2015-05-TBR)

> Add a metric for number of active leases
> 
>
> Key: HDFS-4185
> URL: https://issues.apache.org/jira/browse/HDFS-4185
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 0.23.4, 2.0.2-alpha
>Reporter: Kihwal Lee
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-4185-001.patch, HDFS-4185-002.patch, 
> HDFS-4185-003.patch, HDFS-4185-004.patch, HDFS-4185-005.patch, 
> HDFS-4185-006.patch, HDFS-4185-007.patch, HDFS-4185-008.patch, 
> HDFS-4185-009.patch
>
>
> We have seen cases of systematic open file leaks, which could have been 
> detected if we have a metric that shows number of active leases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8420) Erasure Coding: ECZoneManager#getECZoneInfo is not resolving the path properly if zone dir itself is the snapshottable dir

2015-05-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549696#comment-14549696
 ] 

Rakesh R commented on HDFS-8420:


The Findbugs warnings are not related to the patch; they will be taken care of as 
part of HDFS-8294. The test case failures are also unrelated to this patch.

> Erasure Coding: ECZoneManager#getECZoneInfo is not resolving the path 
> properly if zone dir itself is the snapshottable dir
> --
>
> Key: HDFS-8420
> URL: https://issues.apache.org/jira/browse/HDFS-8420
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8320-HDFS-7285-00.patch
>
>
> Presently the resultant zone dir includes {{.snapshot}} only when the 
> zone dir itself is a snapshottable dir: the returned path includes the 
> snapshot name, like {{/zone/.snapshot/snap1}}. This could instead be improved 
> by returning only the path {{/zone}}.
> Thanks [~vinayrpet] for the helpful 
> [discussion|https://issues.apache.org/jira/browse/HDFS-8266?focusedCommentId=14543821&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14543821]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8131) Implement a space balanced block placement policy

2015-05-18 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549681#comment-14549681
 ] 

Liu Shaohui commented on HDFS-8131:
---

Thanks for [~kihwal]'s review.

> Implement a space balanced block placement policy
> -
>
> Key: HDFS-8131
> URL: https://issues.apache.org/jira/browse/HDFS-8131
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
>  Labels: BlockPlacementPolicy
> Attachments: HDFS-8131-v1.diff, HDFS-8131-v2.diff, HDFS-8131-v3.diff, 
> HDFS-8131.004.patch, HDFS-8131.005.patch, HDFS-8131.006.patch, balanced.png
>
>
> The default block placement policy chooses datanodes for new blocks 
> randomly, which results in unbalanced space usage among datanodes 
> after a cluster expansion: the old datanodes stay at a high used percentage 
> of space while the newly added ones stay at a low percentage.
> Though we can use the external balancer tool to even out the space usage, it 
> costs extra network IO and it's not easy to control the balancing speed.
> An easy solution is to implement a space-balanced block placement policy 
> that chooses datanodes with a low used percentage for new blocks with a 
> slightly higher probability. Before long, the used percentages of the 
> datanodes will trend toward balance.
> Suggestions and discussions are welcome. Thanks
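The "slightly higher probability" idea reads as free-space-weighted random selection. Below is a minimal hypothetical sketch (the class and field names are invented for illustration; this is not the HDFS-8131 patch, which plugs into the NameNode's block placement policy):

```java
import java.util.List;
import java.util.Random;

/** Hypothetical sketch: weight datanode selection by free-space ratio. */
class SpaceWeightedChooser {
    static class Node {
        final String name;
        final double usedPercent; // fraction of capacity used, 0.0 .. 1.0
        Node(String name, double usedPercent) {
            this.name = name;
            this.usedPercent = usedPercent;
        }
    }

    private final Random rand;

    SpaceWeightedChooser(long seed) {
        this.rand = new Random(seed);
    }

    /** Pick a node; emptier nodes get proportionally higher probability. */
    Node choose(List<Node> candidates) {
        double total = 0;
        for (Node n : candidates) {
            total += 1.0 - n.usedPercent; // weight = free-space fraction
        }
        double r = rand.nextDouble() * total;
        for (Node n : candidates) {
            r -= 1.0 - n.usedPercent;
            if (r <= 0) {
                return n;
            }
        }
        return candidates.get(candidates.size() - 1); // rounding guard
    }
}
```

Over many block allocations this drifts the used percentages toward each other without the extra network IO of running the balancer.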





[jira] [Updated] (HDFS-8131) Implement a space balanced block placement policy

2015-05-18 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HDFS-8131:
--
Attachment: HDFS-8131.006.patch

Updated for [~kihwal]'s review:

- Formatted the long lines.

> Implement a space balanced block placement policy
> -
>
> Key: HDFS-8131
> URL: https://issues.apache.org/jira/browse/HDFS-8131
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
>  Labels: BlockPlacementPolicy
> Attachments: HDFS-8131-v1.diff, HDFS-8131-v2.diff, HDFS-8131-v3.diff, 
> HDFS-8131.004.patch, HDFS-8131.005.patch, HDFS-8131.006.patch, balanced.png
>
>
> The default block placement policy chooses datanodes for new blocks 
> randomly, which results in unbalanced space usage among datanodes 
> after a cluster expansion: the old datanodes stay at a high used percentage 
> of space while the newly added ones stay at a low percentage.
> Though we can use the external balancer tool to even out the space usage, it 
> costs extra network IO and it's not easy to control the balancing speed.
> An easy solution is to implement a space-balanced block placement policy 
> that chooses datanodes with a low used percentage for new blocks with a 
> slightly higher probability. Before long, the used percentages of the 
> datanodes will trend toward balance.
> Suggestions and discussions are welcome. Thanks





[jira] [Commented] (HDFS-8418) Fix the isNeededReplication calculation for Striped block in NN

2015-05-18 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549669#comment-14549669
 ] 

Yi Liu commented on HDFS-8418:
--

Thanks a lot for the review and commit, Jing!

> Fix the isNeededReplication calculation for Striped block in NN
> ---
>
> Key: HDFS-8418
> URL: https://issues.apache.org/jira/browse/HDFS-8418
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Critical
> Fix For: HDFS-7285
>
> Attachments: HDFS-8418-HDFS-7285.001.patch
>
>
> Currently when calculating {{isNeededReplication}} for striped block, we use 
> BlockCollection#getPreferredBlockReplication to get expected replica number 
> for striped block. See an example:
> {code}
> public void checkReplication(BlockCollection bc) {
> final short expected = bc.getPreferredBlockReplication();
> for (BlockInfo block : bc.getBlocks()) {
>   final NumberReplicas n = countNodes(block);
>   if (isNeededReplication(block, expected, n.liveReplicas())) { 
> neededReplications.add(block, n.liveReplicas(),
> n.decommissionedAndDecommissioning(), expected);
>   } else if (n.liveReplicas() > expected) {
> processOverReplicatedBlock(block, expected, null, null);
>   }
> }
>   }
> {code}
> But actually it's not correct: for example, if the length of a striped file is 
> less than a cell, then the expected replica count of the block should be {{1 + 
> parityBlkNum}} instead of {{dataBlkNum + parityBlkNum}}. 
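The corrected expectation described above can be sketched as follows. This is a hypothetical illustration (the method and parameter names are invented), not the HDFS-8418 patch itself; it assumes the number of data blocks actually present in a block group is determined by how many cells its length covers:

```java
/** Hypothetical sketch: expected internal blocks of a striped block group. */
class StripedReplicaMath {
    static int expectedReplicas(long blockGroupLen, int cellSize,
                                int dataBlkNum, int parityBlkNum) {
        long stripeSize = (long) cellSize * dataBlkNum;
        if (blockGroupLen >= stripeSize) {
            // At least one full stripe: every data block is present.
            return dataBlkNum + parityBlkNum;
        }
        // Partial stripe: only ceil(len / cellSize) data blocks exist.
        int dataBlks = (int) ((blockGroupLen + cellSize - 1) / cellSize);
        return dataBlks + parityBlkNum;
    }
}
```

For a file shorter than one cell this yields {{1 + parityBlkNum}}, matching the description, rather than the {{dataBlkNum + parityBlkNum}} returned by {{getPreferredBlockReplication}}.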





[jira] [Commented] (HDFS-8352) Erasure Coding: test webhdfs read write stripe file

2015-05-18 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549666#comment-14549666
 ] 

Walter Su commented on HDFS-8352:
-

I'm very sorry. I'll keep it in mind.

> Erasure Coding: test webhdfs read write stripe file
> ---
>
> Key: HDFS-8352
> URL: https://issues.apache.org/jira/browse/HDFS-8352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-8352-HDFS-7285.001.patch
>
>






[jira] [Updated] (HDFS-8418) Fix the isNeededReplication calculation for Striped block in NN

2015-05-18 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-8418:

   Resolution: Fixed
Fix Version/s: HDFS-7285
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks Yi for the contribution!

> Fix the isNeededReplication calculation for Striped block in NN
> ---
>
> Key: HDFS-8418
> URL: https://issues.apache.org/jira/browse/HDFS-8418
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Critical
> Fix For: HDFS-7285
>
> Attachments: HDFS-8418-HDFS-7285.001.patch
>
>
> Currently when calculating {{isNeededReplication}} for striped block, we use 
> BlockCollection#getPreferredBlockReplication to get expected replica number 
> for striped block. See an example:
> {code}
> public void checkReplication(BlockCollection bc) {
> final short expected = bc.getPreferredBlockReplication();
> for (BlockInfo block : bc.getBlocks()) {
>   final NumberReplicas n = countNodes(block);
>   if (isNeededReplication(block, expected, n.liveReplicas())) { 
> neededReplications.add(block, n.liveReplicas(),
> n.decommissionedAndDecommissioning(), expected);
>   } else if (n.liveReplicas() > expected) {
> processOverReplicatedBlock(block, expected, null, null);
>   }
> }
>   }
> {code}
> But actually it's not correct: for example, if the length of a striped file is 
> less than a cell, then the expected replica count of the block should be {{1 + 
> parityBlkNum}} instead of {{dataBlkNum + parityBlkNum}}. 





[jira] [Commented] (HDFS-8418) Fix the isNeededReplication calculation for Striped block in NN

2015-05-18 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549658#comment-14549658
 ] 

Jing Zhao commented on HDFS-8418:
-

bq. INodeFile#getPreferredBlockReplication is method of INodeFile, not method 
of BlockInfo

Ahh, actually you're right. We do not need to and also cannot put the same 
logic into getPreferredBlockReplication. +1 for the current patch. I will 
commit it shortly.

> Fix the isNeededReplication calculation for Striped block in NN
> ---
>
> Key: HDFS-8418
> URL: https://issues.apache.org/jira/browse/HDFS-8418
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Critical
> Attachments: HDFS-8418-HDFS-7285.001.patch
>
>
> Currently when calculating {{isNeededReplication}} for striped block, we use 
> BlockCollection#getPreferredBlockReplication to get expected replica number 
> for striped block. See an example:
> {code}
> public void checkReplication(BlockCollection bc) {
> final short expected = bc.getPreferredBlockReplication();
> for (BlockInfo block : bc.getBlocks()) {
>   final NumberReplicas n = countNodes(block);
>   if (isNeededReplication(block, expected, n.liveReplicas())) { 
> neededReplications.add(block, n.liveReplicas(),
> n.decommissionedAndDecommissioning(), expected);
>   } else if (n.liveReplicas() > expected) {
> processOverReplicatedBlock(block, expected, null, null);
>   }
> }
>   }
> {code}
> But actually it's not correct: for example, if the length of a striped file is 
> less than a cell, then the expected replica count of the block should be {{1 + 
> parityBlkNum}} instead of {{dataBlkNum + parityBlkNum}}. 





[jira] [Commented] (HDFS-8320) Erasure coding: consolidate striping-related terminologies

2015-05-18 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549656#comment-14549656
 ] 

Jing Zhao commented on HDFS-8320:
-

Thanks for updating the patch, Zhe! The new patch looks pretty good to me. Only 
one comment: in {{getStartOffsetsForInternalBlocks}}, the start offset for a 
parity block can be smaller than the start offset of the first data block. 
Here I think we can set it to the smallest offset among all the data blocks.
{code}
+for (int i = dataBlkNum; i < dataBlkNum + parityBlkNum; i++) {
+  startOffsets[i] = startOffsets[0];
+}
{code}
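The suggestion above can be sketched like this (a hypothetical helper, not the actual patch; it assumes the array stores data-block offsets at indices {{0..dataBlkNum-1}} followed by the parity entries):

```java
/** Hypothetical sketch: set parity start offsets to the minimum data offset. */
class ParityOffsets {
    static void fillParityOffsets(long[] startOffsets, int dataBlkNum, int parityBlkNum) {
        long min = startOffsets[0];
        for (int i = 1; i < dataBlkNum; i++) {
            min = Math.min(min, startOffsets[i]);
        }
        // Parity blocks must start at the earliest data offset, which may
        // be smaller than startOffsets[0].
        for (int i = dataBlkNum; i < dataBlkNum + parityBlkNum; i++) {
            startOffsets[i] = min;
        }
    }
}
```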


> Erasure coding: consolidate striping-related terminologies
> --
>
> Key: HDFS-8320
> URL: https://issues.apache.org/jira/browse/HDFS-8320
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8320-HDFS-7285.00.patch, 
> HDFS-8320-HDFS-7285.01.patch
>
>
> Right now we are doing striping-based I/O in a number of places:
> # Client output stream (HDFS-7889)
> # Client input stream
> #* pread (HDFS-7782, HDFS-7678)
> #* stateful read (HDFS-8033, HDFS-8281, HDFS-8319)
> # DN reconstruction (HDFS-7348)
> In each place we use one or multiple of the following terminologies:
> # Cell
> # Stripe
> # Block group
> # Internal block
> # Chunk
> This JIRA aims to systematically define these terminologies in relation to 
> each other and in the context of the containing file. For example, a cell 
> belonging to stripe _i_ and internal block _j_ can be indexed as {{(i, j)}}, 
> and its logical index _k_ in the file can be calculated.
> With the above consolidation, hopefully we can further consolidate the 
> striping I/O code.
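Assuming standard round-robin striping, the {{(i, j)}} to _k_ mapping mentioned above can be written down directly (a hypothetical helper for illustration; only data cells are indexed here):

```java
/** Hypothetical sketch: map between (stripe, internal block) and logical cell index. */
class CellIndexing {
    /** Cell at stripe i, internal data block j -> logical cell index k in the file. */
    static long logicalCellIndex(long stripe, int internalBlk, int dataBlkNum) {
        return stripe * dataBlkNum + internalBlk;
    }

    /** Logical cell index k -> {stripe i, internal block j}. */
    static long[] stripeAndBlock(long k, int dataBlkNum) {
        return new long[] { k / dataBlkNum, k % dataBlkNum };
    }
}
```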





[jira] [Commented] (HDFS-8421) Move startFile() and related operations into FSDirWriteFileOp

2015-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549653#comment-14549653
 ] 

Hadoop QA commented on HDFS-8421:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m  7s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 43s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  0s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 16s | The applied patch generated  
14 new checkstyle issues (total was 422, now 432). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  8s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 27s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 111m 42s | Tests failed in hadoop-hdfs. |
| | | 155m 58s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots |
|   | hadoop.hdfs.TestModTime |
|   | hadoop.fs.TestUrlStreamHandler |
|   | hadoop.hdfs.security.TestDelegationToken |
|   | hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolarent |
|   | hadoop.hdfs.server.namenode.TestFileLimit |
|   | hadoop.hdfs.TestParallelShortCircuitRead |
|   | hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot |
|   | hadoop.hdfs.server.namenode.TestEditLogAutoroll |
|   | hadoop.TestRefreshCallQueue |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.cli.TestCryptoAdminCLI |
|   | hadoop.hdfs.TestSetrepDecreasing |
|   | hadoop.hdfs.server.datanode.TestDiskError |
|   | hadoop.fs.viewfs.TestViewFsWithAcls |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.server.namenode.TestHostsFiles |
|   | hadoop.hdfs.server.datanode.TestTransferRbw |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy |
|   | hadoop.fs.contract.hdfs.TestHDFSContractDelete |
|   | hadoop.hdfs.server.namenode.TestFileContextAcl |
|   | hadoop.fs.TestFcHdfsSetUMask |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.server.namenode.TestDeleteRace |
|   | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.server.namenode.TestFSDirectory |
|   | hadoop.fs.contract.hdfs.TestHDFSContractOpen |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing |
|   | hadoop.hdfs.server.datanode.TestBlockRecovery |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestReadWhileWriting |
|   | hadoop.fs.contract.hdfs.TestHDFSContractMkdir |
|   | hadoop.fs.contract.hdfs.TestHDFSContractAppend |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestRbwSpaceReservation |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.server.namenode.ha.TestQuotasWithHA |
|   | hadoop.hdfs.server.namenode.TestAuditLogger |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | hadoop.hdfs.TestWriteBlockGetsBlockLengthHint |
|   | hadoop.hdfs.server.namenode.TestHDFSConcat |
|   | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | hadoop.hdfs.server.datanode.TestCachingStrategy |
|   | hadoop.hdfs.server.namenode.TestAddBlockRetry |
|   | hadoop.fs.TestSymlinkHdfsFileSystem |
|   | hadoop.fs.viewfs.TestViewFsDefaultValue |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestFSInputChecker |
|   | hadoop.hdfs.server.mover.TestStorageMover |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory |
|   | hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol |
|   | hadoop.cli.TestAclCLI |
|   | hadoop.hdfs.security.token.block.TestBlockToken |
|   | hadoop.hdfs.server.namenode.ha.TestHAMetrics |
|   | hadoop.hdfs.server.nameno

[jira] [Commented] (HDFS-8186) Erasure coding: Make block placement policy for EC file configurable

2015-05-18 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549644#comment-14549644
 ] 

Walter Su commented on HDFS-8186:
-

> ...Looks like we should add a TODO that webhdfs on striped files will be 
> supported later?
It's done, not a TODO. WebHDFS already supports writing striped files 
(HDFS-8352). {{chooseTarget4WebHDFS}} returns a proxy DN. We only require the 
proxy DN to be close to the client; it may or may not be one of the target DNs. 
The proxy DN creates a DFSClient to redirect data from the client to the target 
DNs. (You can see the details in HDFS-2316.)

> Erasure coding: Make block placement policy for EC file configurable
> 
>
> Key: HDFS-8186
> URL: https://issues.apache.org/jira/browse/HDFS-8186
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-8186-HDFS-7285.002.patch, HDFS-8186.001.patch
>
>
> This includes:
> 1. User can config block placement policy for EC file in xml configuration 
> file.
> 2. EC policy works for EC file, replication policy works for non-EC file. 
> They are coexistent.
> Not includes:
> 1. Details of block placement policy for EC. Discussion and implementation 
> goes to HDFS-7613.





[jira] [Commented] (HDFS-8418) Fix the isNeededReplication calculation for Striped block in NN

2015-05-18 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549640#comment-14549640
 ] 

Yi Liu commented on HDFS-8418:
--

Thanks Jing for the review!
{quote}
The only minor question is whether we also want to update 
INodeFile#getPreferredBlockReplication since it already takes into account the 
striped blocks
{quote}
Thanks, I considered this too. {{INodeFile#getPreferredBlockReplication}} is a 
method of {{INodeFile}}, not of {{BlockInfo}}. So in the patch, I calculate the 
expected replica number for the block and keep 
{{INodeFile#getPreferredBlockReplication}} unchanged. I'm not sure I get your 
meaning, Jing; do you have a suggestion for changing 
{{INodeFile#getPreferredBlockReplication}}?

BTW, I checked the Findbugs warnings and test failures; they are unrelated to 
this patch or can't be reproduced on the latest branch.

> Fix the isNeededReplication calculation for Striped block in NN
> ---
>
> Key: HDFS-8418
> URL: https://issues.apache.org/jira/browse/HDFS-8418
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Critical
> Attachments: HDFS-8418-HDFS-7285.001.patch
>
>
> Currently when calculating {{isNeededReplication}} for striped block, we use 
> BlockCollection#getPreferredBlockReplication to get expected replica number 
> for striped block. See an example:
> {code}
> public void checkReplication(BlockCollection bc) {
> final short expected = bc.getPreferredBlockReplication();
> for (BlockInfo block : bc.getBlocks()) {
>   final NumberReplicas n = countNodes(block);
>   if (isNeededReplication(block, expected, n.liveReplicas())) { 
> neededReplications.add(block, n.liveReplicas(),
> n.decommissionedAndDecommissioning(), expected);
>   } else if (n.liveReplicas() > expected) {
> processOverReplicatedBlock(block, expected, null, null);
>   }
> }
>   }
> {code}
> But actually it's not correct: for example, if the length of a striped file is 
> less than a cell, then the expected replica count of the block should be {{1 + 
> parityBlkNum}} instead of {{dataBlkNum + parityBlkNum}}. 





[jira] [Updated] (HDFS-8425) [umbrella] Bug fixing for System tests for EC feature

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8425:
--
Description: This jira is the {{umbrella}} jira for bug fixes to system tests 
for the EC feature.   (was: This jira is the {{umbrella}} jira for bug reports 
of system tests for the EC feature. 

Report format (each report should contain at least the following):
  Mandatory:
  1. The tested version of HDFS-7285, identified by the last git commit hash.
  2. Related logs.
  Recommended:
  1. Testing scripts, commands, and conditions for reproducing the bug.
  2. Testing environment: hardware and OS version.
  3. Investigation progress and fixing plan.

Report example: HDFS-8426
)

> [umbrella] Bug fixing for System tests for EC feature
> -
>
> Key: HDFS-8425
> URL: https://issues.apache.org/jira/browse/HDFS-8425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
>
> This jira is the {{umbrella}} jira for bug fixes to system tests for the EC 
> feature. 





[jira] [Commented] (HDFS-8426) Namenode shutdown for "ReplicationMonitor thread received Runtime exception"

2015-05-18 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549613#comment-14549613
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8426:
---

Here is the stack trace copied from the log:
{code}
2015-05-15 17:42:23,856 ERROR 
[org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@695e7ea6]
 blockmanagement.BlockManager (BlockManager.java:run(3863)) - 
ReplicationMonitor thread received Runtime exception. 
java.lang.AssertionError: Absolute path required
at 
org.apache.hadoop.hdfs.server.namenode.INode.getPathNames(INode.java:744)
at 
org.apache.hadoop.hdfs.server.namenode.INode.getPathComponents(INode.java:723)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.getINodesInPath(FSDirectory.java:1655)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getECSchemaForPath(FSNamesystem.java:8435)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeRecoveryWorkForBlocks(BlockManager.java:1572)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockRecoveryWork(BlockManager.java:1402)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3894)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3846)
at java.lang.Thread.run(Thread.java:745)
{code}

> Namenode shutdown for "ReplicationMonitor thread received Runtime exception"
> 
>
> Key: HDFS-8426
> URL: https://issues.apache.org/jira/browse/HDFS-8426
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
> Attachments: 5-15 NameNode shutdow log segment
>
>
> 1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
> 2.NameNode log for this error: [^5-15 NameNode shutdow log segment]





[jira] [Updated] (HDFS-8425) [umbrella] Bug fixing for System tests for EC feature

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8425:
--
Summary: [umbrella] Bug fixing for System tests for EC feature  (was: 
[umbrella] Bug reports and fixing for System tests for EC feature)

> [umbrella] Bug fixing for System tests for EC feature
> -
>
> Key: HDFS-8425
> URL: https://issues.apache.org/jira/browse/HDFS-8425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
>
> This jira is the {{umbrella}} jira for bug reports of system tests for the EC 
> feature. 
> Report format (each report should contain at least the following):
>   Mandatory:
>   1. The tested version of HDFS-7285, identified by the last git commit hash.
>   2. Related logs.
>   Recommended:
>   1. Testing scripts, commands, and conditions for reproducing the bug.
>   2. Testing environment: hardware and OS version.
>   3. Investigation progress and fixing plan.
>   
> Report example: HDFS-8426





[jira] [Updated] (HDFS-8426) Namenode shutdown for "ReplicationMonitor thread received Runtime exception"

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8426:
--
Attachment: (was: 5-15 NameNode shutdow log.rtf)

> Namenode shutdown for "ReplicationMonitor thread received Runtime exception"
> 
>
> Key: HDFS-8426
> URL: https://issues.apache.org/jira/browse/HDFS-8426
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
> Attachments: 5-15 NameNode shutdow log segment
>
>
> 1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
> 2.NameNode log for this error: [^5-15 NameNode shutdow log segment]





[jira] [Updated] (HDFS-8426) Namenode shutdown for "ReplicationMonitor thread received Runtime exception"

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8426:
--
Attachment: 5-15 NameNode shutdow log segment

> Namenode shutdown for "ReplicationMonitor thread received Runtime exception"
> 
>
> Key: HDFS-8426
> URL: https://issues.apache.org/jira/browse/HDFS-8426
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
> Attachments: 5-15 NameNode shutdow log segment
>
>
> 1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
> 2.NameNode log for this error: [^5-15 NameNode shutdow log segment]





[jira] [Updated] (HDFS-8426) Namenode shutdown for "ReplicationMonitor thread received Runtime exception"

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8426:
--
Attachment: (was: 5-15 NameNode shutdow log segment.rtf)

> Namenode shutdown for "ReplicationMonitor thread received Runtime exception"
> 
>
> Key: HDFS-8426
> URL: https://issues.apache.org/jira/browse/HDFS-8426
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
> Attachments: 5-15 NameNode shutdow log segment
>
>
> 1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
> 2.NameNode log for this error: [^5-15 NameNode shutdow log segment]





[jira] [Updated] (HDFS-8426) Namenode shutdown for "ReplicationMonitor thread received Runtime exception"

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8426:
--
Attachment: 5-15 NameNode shutdow log.rtf

> Namenode shutdown for "ReplicationMonitor thread received Runtime exception"
> 
>
> Key: HDFS-8426
> URL: https://issues.apache.org/jira/browse/HDFS-8426
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
> Attachments: 5-15 NameNode shutdow log segment.rtf, 5-15 NameNode 
> shutdow log.rtf
>
>
> 1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
> 2.NameNode log for this error: [^5-15 NameNode shutdow log segment.rtf]





[jira] [Updated] (HDFS-8426) Namenode shutdown for "ReplicationMonitor thread received Runtime exception"

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8426:
--
Description: 
1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
2.NameNode log for this error: [^5-15 NameNode shutdow log segment]

  was:
1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
2.NameNode log for this error: [^5-15 NameNode shutdow log segment.rtf]


> Namenode shutdown for "ReplicationMonitor thread received Runtime exception"
> 
>
> Key: HDFS-8426
> URL: https://issues.apache.org/jira/browse/HDFS-8426
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
> Attachments: 5-15 NameNode shutdow log segment.rtf, 5-15 NameNode 
> shutdow log.rtf
>
>
> 1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
> 2.NameNode log for this error: [^5-15 NameNode shutdow log segment]





[jira] [Commented] (HDFS-4660) Duplicated checksum on DN in a recovered pipeline

2015-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549597#comment-14549597
 ] 

Hadoop QA commented on HDFS-4660:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 38s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 37s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 37s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 36s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  4s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 163m 24s | Tests failed in hadoop-hdfs. |
| | | 204m 50s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestAppendSnapshotTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12733630/HDFS-4660.v2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0790275 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11038/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11038/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11038/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11038/console |


This message was automatically generated.

> Duplicated checksum on DN in a recovered pipeline
> -
>
> Key: HDFS-4660
> URL: https://issues.apache.org/jira/browse/HDFS-4660
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Peng Zhang
>Assignee: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-4660.patch, HDFS-4660.patch, HDFS-4660.v2.patch
>
>
> pipeline DN1  DN2  DN3
> stop DN2
> pipeline added node DN4 located at 2nd position
> DN1  DN4  DN3
> recover RBW
> DN4 after recover rbw
> 2013-04-01 21:02:31,570 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Recover 
> RBW replica 
> BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1004
> 2013-04-01 21:02:31,570 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
> Recovering ReplicaBeingWritten, blk_-9076133543772600337_1004, RBW
>   getNumBytes() = 134144
>   getBytesOnDisk() = 134144
>   getVisibleLength()= 134144
> end at chunk (134144/512=262)
> DN3 after recover rbw
> 2013-04-01 21:02:31,575 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Recover 
> RBW replica 
> BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_10042013-04-01
>  21:02:31,575 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
> Recovering ReplicaBeingWritten, blk_-9076133543772600337_1004, RBW
>   getNumBytes() = 134028 
>   getBytesOnDisk() = 134028
>   getVisibleLength()= 134028
> client send packet after recover pipeline
> offset=133632  len=1008
> DN4 after flush 
> 2013-04-01 21:02:31,779 DEBUG 
> org.apache.hadoop.hdfs.server.datanode.DataNode: FlushOrsync, file 
> offset:134640; meta offset:1063
> // meta end position should be floor(134640/512)*4 + 7 == 1059, but now it is 
> 1063.
> DN3 after flush
> 2013-04-01 21:02:31,782 DEBUG 
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: 
> BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1005, 
> type=LAST_IN_PIPELINE, downstreams=0:[]: enqueue Packet(seqno=219, 
> lastPacketInBlock=false, offs
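The meta-file arithmetic in the quoted comment can be sketched as follows. This is a minimal illustration, assuming the default HDFS CRC layout (a 7-byte meta header plus a 4-byte checksum per 512-byte chunk, with one checksum for a trailing partial chunk); `MetaMath` and `expectedMetaLen` are hypothetical names, not HDFS code:

```java
// Minimal sketch of the checksum meta-file arithmetic from the bug report.
// Assumes the default HDFS layout: 7-byte header, 4-byte CRC per 512-byte
// chunk, including one CRC for a trailing partial chunk.
public class MetaMath {
    static final int BYTES_PER_CHECKSUM = 512;
    static final int CHECKSUM_SIZE = 4;
    static final int HEADER_SIZE = 7;

    // Expected meta-file length for a block of the given length:
    // one checksum per (possibly partial) chunk, plus the header.
    static long expectedMetaLen(long blockBytes) {
        long chunks = (blockBytes + BYTES_PER_CHECKSUM - 1) / BYTES_PER_CHECKSUM;
        return chunks * CHECKSUM_SIZE + HEADER_SIZE;
    }

    public static void main(String[] args) {
        // 134640 bytes => ceil(134640/512) = 263 chunks => 263*4 + 7 = 1059
        System.out.println(expectedMetaLen(134640)); // 1059, not the observed 1063
    }
}
```

One duplicated checksum accounts exactly for the 4-byte gap between the expected 1059 and the observed 1063 on DN4.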

[jira] [Commented] (HDFS-8192) Eviction should key off used locked memory instead of ram disk free space

2015-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549579#comment-14549579
 ] 

Hadoop QA commented on HDFS-8192:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m  9s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 5 new or modified test files. |
| {color:red}-1{color} | javac |   7m 47s | The applied patch generated  1  
additional warning messages. |
| {color:green}+1{color} | javadoc |   9m 56s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 22s | The applied patch generated  5 
new checkstyle issues (total was 552, now 552). |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  8s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 19s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 168m 32s | Tests failed in hadoop-hdfs. |
| | | 212m 52s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
|
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12733620/HDFS-8192.02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0790275 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11037/artifact/patchprocess/diffJavacWarnings.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11037/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11037/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11037/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11037/console |



> Eviction should key off used locked memory instead of ram disk free space
> -
>
> Key: HDFS-8192
> URL: https://issues.apache.org/jira/browse/HDFS-8192
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8192.01.patch, HDFS-8192.02.patch
>
>
> Follow-up to HDFS-8157: eviction from RAM disk should be triggered when locked 
> memory is low. More details later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8405) Fix a typo in NamenodeFsck

2015-05-18 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549578#comment-14549578
 ] 

Takanobu Asanuma commented on HDFS-8405:


Thank you very much, Nicholas!

> Fix a typo in NamenodeFsck
> --
>
> Key: HDFS-8405
> URL: https://issues.apache.org/jira/browse/HDFS-8405
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-8405.1.patch
>
>
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY below should not be quoted.
> {code}
>   res.append("\n  
> ").append("DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY:\t")
>  .append(minReplication);
> {code}
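For illustration, the fix is to append the constant's value rather than its quoted name. A self-contained sketch (the key's string value is inlined here as an assumption so the snippet compiles on its own; in HDFS it comes from {{DFSConfigKeys}}):

```java
// Sketch of the intended fsck output line: print the configuration key's
// *value*, not the literal string "DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY".
public class FsckLineSketch {
    // Assumed value of DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY,
    // inlined so the sketch is self-contained.
    static final String DFS_NAMENODE_REPLICATION_MIN_KEY = "dfs.namenode.replication.min";

    static String minReplicationLine(int minReplication) {
        return new StringBuilder()
                .append("\n  ")
                .append(DFS_NAMENODE_REPLICATION_MIN_KEY).append(":\t")
                .append(minReplication)
                .toString();
    }

    public static void main(String[] args) {
        System.out.println(minReplicationLine(1));
    }
}
```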





[jira] [Updated] (HDFS-8426) Namenode shutdown for "ReplicationMonitor thread received Runtime exception"

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8426:
--
Description: 
1. Last git commit: 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
2. NameNode log for this error: [^5-15 NameNode shutdow log segment.rtf]

  was:
1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
2.NameNode log for this error: [^5-15 NameNode shutdow log segment]


> Namenode shutdown for "ReplicationMonitor thread received Runtime exception"
> 
>
> Key: HDFS-8426
> URL: https://issues.apache.org/jira/browse/HDFS-8426
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
> Attachments: 5-15 NameNode shutdow log segment.rtf
>
>
> 1. Last git commit: 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
> 2. NameNode log for this error: [^5-15 NameNode shutdow log segment.rtf]





[jira] [Updated] (HDFS-8426) Namenode shutdown for "ReplicationMonitor thread received Runtime exception"

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8426:
--
Description: 
1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
2.NameNode log for this error: [^

  was:

1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
2.


> Namenode shutdown for "ReplicationMonitor thread received Runtime exception"
> 
>
> Key: HDFS-8426
> URL: https://issues.apache.org/jira/browse/HDFS-8426
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
> Attachments: 5-15 NameNode shutdow log segment.rtf
>
>
> 1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
> 2.NameNode log for this error: [^





[jira] [Updated] (HDFS-8426) Namenode shutdown for "ReplicationMonitor thread received Runtime exception"

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8426:
--
Description: 
1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
2.NameNode log for this error: [^5-15 NameNode shutdow log segment]

  was:
1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
2.NameNode log for this error: [^


> Namenode shutdown for "ReplicationMonitor thread received Runtime exception"
> 
>
> Key: HDFS-8426
> URL: https://issues.apache.org/jira/browse/HDFS-8426
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
> Attachments: 5-15 NameNode shutdow log segment.rtf
>
>
> 1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
> 2.NameNode log for this error: [^5-15 NameNode shutdow log segment]





[jira] [Updated] (HDFS-8426) Namenode shutdown for "ReplicationMonitor thread received Runtime exception"

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8426:
--
Attachment: 5-15 NameNode shutdow log segment.rtf

> Namenode shutdown for "ReplicationMonitor thread received Runtime exception"
> 
>
> Key: HDFS-8426
> URL: https://issues.apache.org/jira/browse/HDFS-8426
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
> Attachments: 5-15 NameNode shutdow log segment.rtf
>
>
> 1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
> 2.





[jira] [Updated] (HDFS-8427) Remove dataBlockNum and parityBlockNum from BlockInfoStriped

2015-05-18 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8427:
-
Description: Remove unnecessary members such as {{dataBlockNum}} and 
{{parityBlockNum}} from {{BlockInfoStriped}}. These are included in 
{{ECShema}}.  (was: Remove unnecessary members such as {dataBlockNum} and 
{parityBlockNum} from {BlockInfoStriped}. These are included in {ECShema})

> Remove dataBlockNum and parityBlockNum from BlockInfoStriped
> 
>
> Key: HDFS-8427
> URL: https://issues.apache.org/jira/browse/HDFS-8427
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Fix For: HDFS-7285
>
>
> Remove unnecessary members such as {{dataBlockNum}} and {{parityBlockNum}} 
> from {{BlockInfoStriped}}. These are included in {{ECSchema}}.
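The refactoring can be illustrated with a minimal stand-in (class and method names here are simplified sketches, not the actual HDFS types): instead of caching the two counts as fields, {{BlockInfoStriped}} derives them from the schema on demand, keeping a single source of truth.

```java
// Minimal stand-in illustrating the refactoring: derive the block counts from
// the schema instead of duplicating them as fields. Simplified names, not the
// real HDFS classes.
public class StripedSketch {
    static class ECSchema {
        final int numDataUnits;
        final int numParityUnits;
        ECSchema(int numDataUnits, int numParityUnits) {
            this.numDataUnits = numDataUnits;
            this.numParityUnits = numParityUnits;
        }
    }

    static class BlockInfoStriped {
        private final ECSchema schema; // single source of truth

        BlockInfoStriped(ECSchema schema) {
            this.schema = schema;
        }

        // No cached dataBlockNum/parityBlockNum fields; read them off the schema.
        int getDataBlockNum()   { return schema.numDataUnits; }
        int getParityBlockNum() { return schema.numParityUnits; }
        int getTotalBlockNum()  { return schema.numDataUnits + schema.numParityUnits; }
    }

    public static void main(String[] args) {
        BlockInfoStriped b = new BlockInfoStriped(new ECSchema(6, 3)); // RS-6-3
        System.out.println(b.getTotalBlockNum()); // 9
    }
}
```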





[jira] [Created] (HDFS-8427) Remove dataBlockNum and parityBlockNum from BlockInfoStriped

2015-05-18 Thread Kai Sasaki (JIRA)
Kai Sasaki created HDFS-8427:


 Summary: Remove dataBlockNum and parityBlockNum from 
BlockInfoStriped
 Key: HDFS-8427
 URL: https://issues.apache.org/jira/browse/HDFS-8427
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Kai Sasaki
Assignee: Kai Sasaki
 Fix For: HDFS-7285


Remove unnecessary members such as {{dataBlockNum}} and {{parityBlockNum}} from 
{{BlockInfoStriped}}. These are included in {{ECSchema}}.





[jira] [Updated] (HDFS-8426) Namenode shutdown for "ReplicationMonitor thread received Runtime exception"

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8426:
--
Description: 

1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
2.

  was:
Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a



> Namenode shutdown for "ReplicationMonitor thread received Runtime exception"
> 
>
> Key: HDFS-8426
> URL: https://issues.apache.org/jira/browse/HDFS-8426
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
>
> 1.Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a
> 2.





[jira] [Updated] (HDFS-8425) [umbrella] Bug reports and fixing for System tests for EC feature

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8425:
--
Description: 
This jira is an {{umbrella}} jira for bug reports from system tests of the EC feature.

Report format (each report should contain at least the following):
  Mandatory:
  1. Test version of HDFS-7285, given by the last git commit hash.
  2. Related logs.
  Recommended:
  1. Testing scripts, commands and conditions for reproducing the bug.
  2. Testing environment: hardware and OS version.
  3. Investigation progress and fixing plan.

Report example: HDFS-8426


  was:
This jira is an {{umbrella}} jira for bug reports from system tests of the EC feature.

Report format (each report should contain at least the following):
  Mandatory:
  1. Test version of HDFS-7285, given by the last git commit hash.
  2. Related logs.
  Recommended:
  1. Testing scripts, commands and conditions for reproducing the bug.
  2. Testing environment: hardware and OS version.
  3. Investigation progress and fixing plan.

Report example: HDFS-7286



> [umbrella] Bug reports and fixing for System tests for EC feature
> -
>
> Key: HDFS-8425
> URL: https://issues.apache.org/jira/browse/HDFS-8425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
>
> This jira is an {{umbrella}} jira for bug reports from system tests of the EC 
> feature.
> Report format (each report should contain at least the following):
>   Mandatory:
>   1. Test version of HDFS-7285, given by the last git commit hash.
>   2. Related logs.
>   Recommended:
>   1. Testing scripts, commands and conditions for reproducing the bug.
>   2. Testing environment: hardware and OS version.
>   3. Investigation progress and fixing plan.
>   
> Report example: HDFS-8426





[jira] [Updated] (HDFS-8426) Namenode shutdown for "ReplicationMonitor thread received Runtime exception"

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8426:
--
Description: 
Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a


> Namenode shutdown for "ReplicationMonitor thread received Runtime exception"
> 
>
> Key: HDFS-8426
> URL: https://issues.apache.org/jira/browse/HDFS-8426
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
>
> Last git commit code : 041c936e3b677f9d61e8a2c5deb20e7b2dd8292a





[jira] [Updated] (HDFS-8425) [umbrella] Bug reports and fixing for System tests for EC feature

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8425:
--
Description: 
This jira is an {{umbrella}} jira for bug reports from system tests of the EC feature.

Report format (each report should contain at least the following):
  Mandatory:
  1. Test version of HDFS-7285, given by the last git commit hash.
  2. Related logs.
  Recommended:
  1. Testing scripts, commands and conditions for reproducing the bug.
  2. Testing environment: hardware and OS version.
  3. Investigation progress and fixing plan.

Report example: HDFS-7286


  was:
This jira is an {{umbrella}} jira for bug reports from system tests of the EC feature.

Report format (each report should contain at least the following):
  Mandatory:
  1. Test version of HDFS-7285, given by git version.
  2. Related logs.
  Recommended:
  1. Testing scripts, commands and conditions for reproducing the bug.
  2. Testing environment: hardware and OS version.
  3. Investigation progress and fixing plan.
  
 



> [umbrella] Bug reports and fixing for System tests for EC feature
> -
>
> Key: HDFS-8425
> URL: https://issues.apache.org/jira/browse/HDFS-8425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
>
> This jira is an {{umbrella}} jira for bug reports from system tests of the EC 
> feature.
> Report format (each report should contain at least the following):
>   Mandatory:
>   1. Test version of HDFS-7285, given by the last git commit hash.
>   2. Related logs.
>   Recommended:
>   1. Testing scripts, commands and conditions for reproducing the bug.
>   2. Testing environment: hardware and OS version.
>   3. Investigation progress and fixing plan.
>   
> Report example: HDFS-7286





[jira] [Updated] (HDFS-8426) Namenode shutdown for "ReplicationMonitor thread received Runtime exception"

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8426:
--
Summary: Namenode shutdown for "ReplicationMonitor thread received Runtime 
exception"  (was: Namenode shutdown for {{ReplicationMonitor thread received 
Runtime exception}})

> Namenode shutdown for "ReplicationMonitor thread received Runtime exception"
> 
>
> Key: HDFS-8426
> URL: https://issues.apache.org/jira/browse/HDFS-8426
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
>






[jira] [Commented] (HDFS-8367) BlockInfoStriped uses EC schema

2015-05-18 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549520#comment-14549520
 ] 

Kai Zheng commented on HDFS-8367:
-

Kai, as [~jingzhao] suggested, would you please address the above-mentioned 
issue separately in a new JIRA?

> BlockInfoStriped uses EC schema
> ---
>
> Key: HDFS-8367
> URL: https://issues.apache.org/jira/browse/HDFS-8367
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>  Labels: EC
> Fix For: HDFS-7285
>
> Attachments: HDFS-8367-FYI-v2.patch, HDFS-8367-FYI.patch, 
> HDFS-8367-HDFS-7285-01.patch, HDFS-8367-HDFS-7285-02.patch, 
> HDFS-8367-HDFS-7285-06.patch, HDFS-8367.1.patch, 
> HDFS-8467-HDFS-7285-03.patch, HDFS-8467-HDFS-7285-04.patch, 
> HDFS-8467-HDFS-7285-05.patch
>
>
> {{BlockInfoStriped}} should receive the total information for erasure coding 
> as {{ECSchema}}. This JIRA changes the constructor interface and its 
> dependencies.





[jira] [Commented] (HDFS-8367) BlockInfoStriped uses EC schema

2015-05-18 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549511#comment-14549511
 ] 

Kai Sasaki commented on HDFS-8367:
--

Thanks for the gracious support! I'll go back to 
[HDFS-8062|https://issues.apache.org/jira/browse/HDFS-8062] to replace the 
hard-coded values.

> BlockInfoStriped uses EC schema
> ---
>
> Key: HDFS-8367
> URL: https://issues.apache.org/jira/browse/HDFS-8367
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>  Labels: EC
> Fix For: HDFS-7285
>
> Attachments: HDFS-8367-FYI-v2.patch, HDFS-8367-FYI.patch, 
> HDFS-8367-HDFS-7285-01.patch, HDFS-8367-HDFS-7285-02.patch, 
> HDFS-8367-HDFS-7285-06.patch, HDFS-8367.1.patch, 
> HDFS-8467-HDFS-7285-03.patch, HDFS-8467-HDFS-7285-04.patch, 
> HDFS-8467-HDFS-7285-05.patch
>
>
> {{BlockInfoStriped}} should receive the total information for erasure coding 
> as {{ECSchema}}. This JIRA changes the constructor interface and its 
> dependencies.





[jira] [Updated] (HDFS-8320) Erasure coding: consolidate striping-related terminologies

2015-05-18 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8320:

Attachment: HDFS-8320-HDFS-7285.01.patch

Many thanks to Jing for the helpful review! Updating the patch to address the 
comments.

# Considering we will remove cellSize out of the ECSchema, we can consider 
adding a separate cellSize parameter
HDFS-8408 will potentially bring us a unified {{ErasureCodingInfo}} class to 
represent both codec schema and {{cellSize}}. How about we make the change 
after it?
# In blockSeekTo, since we only need each internal block's start offset, 
calling getRangesForInternalBlocks, which breaks the whole block group into 
cells, may be overkill.
Good point! Actually it can be even simpler than that, because we don't care 
about the spans, only about the start offsets. Let me know if the new 
{{getStartOffsetsForInternalBlocks}} method looks OK.
# Looks like HADOOP-11938 will be ready soon. Please see if you want to update 
the decoding function accordingly in this jira.
I had a quick try but got a content mismatch. Will take some more time to 
address this separately.

Addressed smaller issues (2, 3, 5). Will address #4 separately since the patch 
is already big.

> Erasure coding: consolidate striping-related terminologies
> --
>
> Key: HDFS-8320
> URL: https://issues.apache.org/jira/browse/HDFS-8320
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8320-HDFS-7285.00.patch, 
> HDFS-8320-HDFS-7285.01.patch
>
>
> Right now we are doing striping-based I/O in a number of places:
> # Client output stream (HDFS-7889)
> # Client input stream
> #* pread (HDFS-7782, HDFS-7678)
> #* stateful read (HDFS-8033, HDFS-8281, HDFS-8319)
> # DN reconstruction (HDFS-7348)
> In each place we use one or multiple of the following terminologies:
> # Cell
> # Stripe
> # Block group
> # Internal block
> # Chunk
> This JIRA aims to systematically define these terminologies in relation with 
> each other and in the context of the containing file. For example, a cell 
> belonging to stripe _i_ and internal block _j_ can be indexed as {{(i, j)}}, 
> and its logical index _k_ in the file can be calculated.
> With the above consolidation, hopefully we can further consolidate the 
> striping I/O code.
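The {{(i, j)}} to logical-index mapping described above can be sketched as follows, under the assumption of plain round-robin striping (the cell at stripe _i_ of internal data block _j_ is written in file order, one stripe at a time); `CellIndex` and its method names are illustrative, not HDFS code:

```java
// Sketch of the (stripe, internal-block) <-> logical-cell-index mapping under
// plain round-robin striping; numDataBlocks is the data-block count of the group.
public class CellIndex {
    // Logical index k of the cell at stripe i, internal data block j.
    static long toLogicalIndex(long i, int j, int numDataBlocks) {
        return i * numDataBlocks + j;
    }

    // Inverse mapping: which (stripe, internal block) holds logical cell k.
    static long stripeOf(long k, int numDataBlocks) { return k / numDataBlocks; }
    static int  blockOf(long k, int numDataBlocks)  { return (int) (k % numDataBlocks); }

    public static void main(String[] args) {
        // With 6 data blocks, the cell at stripe 2, block 1 is logical cell 13.
        System.out.println(toLogicalIndex(2, 1, 6)); // 13
    }
}
```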





[jira] [Created] (HDFS-8426) Namenode shutdown for {{ReplicationMonitor thread received Runtime exception}}

2015-05-18 Thread GAO Rui (JIRA)
GAO Rui created HDFS-8426:
-

 Summary: Namenode shutdown for {{ReplicationMonitor thread 
received Runtime exception}}
 Key: HDFS-8426
 URL: https://issues.apache.org/jira/browse/HDFS-8426
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: HDFS-7285
Reporter: GAO Rui








[jira] [Updated] (HDFS-8424) distcp on raw namespace using HFTP protocol fails for TDE

2015-05-18 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8424:
-
Component/s: encryption

> distcp on raw namespace using HFTP protocol fails for TDE
> -
>
> Key: HDFS-8424
> URL: https://issues.apache.org/jira/browse/HDFS-8424
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, security
>Affects Versions: 2.6.0
>Reporter: Sumana Sathish
>Priority: Critical
>
> For TDE, distcp using HFTP protocol fails on raw namespace with the following 
> exception 
> {code}
> sudo su - -c "/usr/hdp/current/hadoop-client/bin/hadoop distcp -i  
> -skipcrccheck -update 
> hftp://ip-172-31-33-229.ec2.internal:50070/.reserved/raw/user/hrt_qa/srcDistcpWithTDE/smallFiles/
>  
> hdfs://ip-172-31-33-229.ec2.internal:8020/.reserved/raw/user/hrt_qa/destDistcpWithTDE/distcpedFiles/"
>  hdfs
> 2015-05-15 
> 23:55:35,579|beaver.machine|INFO|2581|140406820472576|MainThread|15/05/15 
> 23:55:35 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, 
> syncFolder=true, deleteMissing=false, ignoreFailures=true, maxMaps=20, 
> sslConfigurationFile='null', copyStrategy='uniformsize', 
> sourceFileListing=null, 
> sourcePaths=[hftp://ip-172-31-33-229.ec2.internal:50070/.reserved/raw/user/hrt_qa/srcDistcpWithTDE/smallFiles],
>  
> targetPath=hdfs://ip-172-31-33-229.ec2.internal:8020/.reserved/raw/user/hrt_qa/destDistcpWithTDE/distcpedFiles,
>  targetPathExists=false, preserveRawXattrs=false}
> 2015-05-15 
> 23:55:36,829|beaver.machine|INFO|2581|140406820472576|MainThread|15/05/15 
> 23:55:36 INFO impl.TimelineClientImpl: Timeline service address: 
> http://ip-172-31-33-229.ec2.internal:8188/ws/v1/timeline/
> 2015-05-15 
> 23:55:37,073|beaver.machine|INFO|2581|140406820472576|MainThread|15/05/15 
> 23:55:37 INFO client.RMProxy: Connecting to ResourceManager at 
> ip-172-31-33-229.ec2.internal/172.31.33.229:8050
> 2015-05-15 
> 23:55:38,040|beaver.machine|INFO|2581|140406820472576|MainThread|15/05/15 
> 23:55:38 ERROR tools.DistCp: Exception encountered
> 2015-05-15 
> 23:55:38,040|beaver.machine|INFO|2581|140406820472576|MainThread|java.lang.UnsupportedOperationException:
>  HftpFileSystem doesn't support getXAttrs
> 2015-05-15 
> 23:55:38,041|beaver.machine|INFO|2581|140406820472576|MainThread|at 
> org.apache.hadoop.fs.FileSystem.getXAttrs(FileSystem.java:2559)
> 2015-05-15 
> 23:55:38,041|beaver.machine|INFO|2581|140406820472576|MainThread|at 
> org.apache.hadoop.tools.util.DistCpUtils.toCopyListingFileStatus(DistCpUtils.java:322)
> 2015-05-15 
> 23:55:38,042|beaver.machine|INFO|2581|140406820472576|MainThread|at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:177)
> 2015-05-15 
> 23:55:38,042|beaver.machine|INFO|2581|140406820472576|MainThread|at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:140)
> 2015-05-15 
> 23:55:38,043|beaver.machine|INFO|2581|140406820472576|MainThread|at 
> org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
> 2015-05-15 
> 23:55:38,043|beaver.machine|INFO|2581|140406820472576|MainThread|at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
> 2015-05-15 
> 23:55:38,044|beaver.machine|INFO|2581|140406820472576|MainThread|at 
> org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
> 2015-05-15 
> 23:55:38,044|beaver.machine|INFO|2581|140406820472576|MainThread|at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:382)
> 2015-05-15 
> 23:55:38,045|beaver.machine|INFO|2581|140406820472576|MainThread|at 
> org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:181)
> 2015-05-15 
> 23:55:38,045|beaver.machine|INFO|2581|140406820472576|MainThread|at 
> org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
> 2015-05-15 
> 23:55:38,046|beaver.machine|INFO|2581|140406820472576|MainThread|at 
> org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
> 2015-05-15 
> 23:55:38,046|beaver.machine|INFO|2581|140406820472576|MainThread|at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> 2015-05-15 
> 23:55:38,047|beaver.machine|INFO|2581|140406820472576|MainThread|at 
> org.apache.hadoop.tools.DistCp.main(DistCp.java:430)
> {code}





[jira] [Updated] (HDFS-8424) distcp on raw namespace using HFTP protocol fails for TDE

2015-05-18 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8424:
-
Affects Version/s: (was: 2.3.0)
   2.6.0

> distcp on raw namespace using HFTP protocol fails for TDE
> -
>
> Key: HDFS-8424
> URL: https://issues.apache.org/jira/browse/HDFS-8424
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Sumana Sathish
>Priority: Critical
>
> For TDE, distcp using HFTP protocol fails on raw namespace with the following 
> exception 
> {code}
> sudo su - -c "/usr/hdp/current/hadoop-client/bin/hadoop distcp -i  
> -skipcrccheck -update 
> hftp://ip-172-31-33-229.ec2.internal:50070/.reserved/raw/user/hrt_qa/srcDistcpWithTDE/smallFiles/
>  
> hdfs://ip-172-31-33-229.ec2.internal:8020/.reserved/raw/user/hrt_qa/destDistcpWithTDE/distcpedFiles/"
>  hdfs
> 2015-05-15 
> 23:55:35,579|beaver.machine|INFO|2581|140406820472576|MainThread|15/05/15 
> 23:55:35 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, 
> syncFolder=true, deleteMissing=false, ignoreFailures=true, maxMaps=20, 
> sslConfigurationFile='null', copyStrategy='uniformsize', 
> sourceFileListing=null, 
> sourcePaths=[hftp://ip-172-31-33-229.ec2.internal:50070/.reserved/raw/user/hrt_qa/srcDistcpWithTDE/smallFiles],
>  
> targetPath=hdfs://ip-172-31-33-229.ec2.internal:8020/.reserved/raw/user/hrt_qa/destDistcpWithTDE/distcpedFiles,
>  targetPathExists=false, preserveRawXattrs=false}
> 2015-05-15 
> 23:55:36,829|beaver.machine|INFO|2581|140406820472576|MainThread|15/05/15 
> 23:55:36 INFO impl.TimelineClientImpl: Timeline service address: 
> http://ip-172-31-33-229.ec2.internal:8188/ws/v1/timeline/
> 2015-05-15 
> 23:55:37,073|beaver.machine|INFO|2581|140406820472576|MainThread|15/05/15 
> 23:55:37 INFO client.RMProxy: Connecting to ResourceManager at 
> ip-172-31-33-229.ec2.internal/172.31.33.229:8050
> 2015-05-15 
> 23:55:38,040|beaver.machine|INFO|2581|140406820472576|MainThread|15/05/15 
> 23:55:38 ERROR tools.DistCp: Exception encountered
> 2015-05-15 23:55:38,040|beaver.machine|INFO|2581|140406820472576|MainThread|
> java.lang.UnsupportedOperationException: HftpFileSystem doesn't support getXAttrs
> 	at org.apache.hadoop.fs.FileSystem.getXAttrs(FileSystem.java:2559)
> 	at org.apache.hadoop.tools.util.DistCpUtils.toCopyListingFileStatus(DistCpUtils.java:322)
> 	at org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:177)
> 	at org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:140)
> 	at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
> 	at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
> 	at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
> 	at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:382)
> 	at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:181)
> 	at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
> 	at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> 	at org.apache.hadoop.tools.DistCp.main(DistCp.java:430)
> {code}
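The trace above shows DistCp's copy-listing builder calling `FileSystem.getXAttrs()` on a source filesystem (HftpFileSystem) whose base-class implementation simply throws `UnsupportedOperationException`, aborting the whole job. A minimal, self-contained sketch of the defensive pattern a listing builder could use is below; `SimpleFs` and `xattrsOrEmpty` are hypothetical stand-ins for illustration, not Hadoop's actual API or fix:

```java
import java.util.Collections;
import java.util.Map;

public class XAttrFallback {

    // Stand-in for the relevant slice of org.apache.hadoop.fs.FileSystem:
    // the base class implements getXAttrs by throwing
    // UnsupportedOperationException, and HftpFileSystem does not override it.
    interface SimpleFs {
        Map<String, byte[]> getXAttrs(String path);
    }

    // Hypothetical helper: attempt to read xattrs, and fall back to an
    // empty map when the source filesystem does not support them, so the
    // copy listing can still be built without xattr preservation.
    static Map<String, byte[]> xattrsOrEmpty(SimpleFs fs, String path) {
        try {
            return fs.getXAttrs(path);
        } catch (UnsupportedOperationException e) {
            return Collections.emptyMap();
        }
    }

    public static void main(String[] args) {
        // Simulate an hftp-like source that rejects the call.
        SimpleFs hftpLike = path -> {
            throw new UnsupportedOperationException(
                "HftpFileSystem doesn't support getXAttrs");
        };
        // Prints 0: the exception is swallowed and an empty map returned.
        System.out.println(xattrsOrEmpty(hftpLike, "/some/file").size());
    }
}
```

Whether silently dropping xattrs or failing fast with a clear message is the right behavior is a design choice; the point of the sketch is only that the unsupported operation should be handled deliberately rather than escaping as a runtime exception.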



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8421) Move startFile() and related operations into FSDirWriteFileOp

2015-05-18 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8421:
-
Attachment: HDFS-8421.001.patch

> Move startFile() and related operations into FSDirWriteFileOp
> -
>
> Key: HDFS-8421
> URL: https://issues.apache.org/jira/browse/HDFS-8421
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8421.000.patch, HDFS-8421.001.patch
>
>
> This jira proposes to move startFile() and related functions into 
> FSDirWriteFileOp.





[jira] [Updated] (HDFS-8425) [umbrella] Bug reports for System tests for EC feature

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8425:
--
Issue Type: Sub-task  (was: Bug)
Parent: HDFS-7285

> [umbrella] Bug reports for System tests for EC feature
> --
>
> Key: HDFS-8425
> URL: https://issues.apache.org/jira/browse/HDFS-8425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
>
> This jira is the {{umbrella}} jira for bug reports from system tests of the EC
> feature.
> Report format (each report should contain at least the following):
>   Mandatory:
>   1. Test version of HDFS-7285, identified by git revision.
>   2. Related logs.
>   Recommended:
>   1. Test scripts, commands, and conditions for reproducing the bug.
>   2. Test environment: hardware and OS version.
>   3. Investigation progress and fixing plan.





[jira] [Updated] (HDFS-8425) [umbrella] Bug reports and fixing for System tests for EC feature

2015-05-18 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8425:
--
Summary: [umbrella] Bug reports and fixing for System tests for EC feature  
(was: [umbrella] Bug reports for System tests for EC feature)

> [umbrella] Bug reports and fixing for System tests for EC feature
> -
>
> Key: HDFS-8425
> URL: https://issues.apache.org/jira/browse/HDFS-8425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
>
> This jira is the {{umbrella}} jira for bug reports from system tests of the EC
> feature.
> Report format (each report should contain at least the following):
>   Mandatory:
>   1. Test version of HDFS-7285, identified by git revision.
>   2. Related logs.
>   Recommended:
>   1. Test scripts, commands, and conditions for reproducing the bug.
>   2. Test environment: hardware and OS version.
>   3. Investigation progress and fixing plan.




