[jira] [Updated] (HDFS-5581) NameNodeFsck should use only one instance of BlockPlacementPolicy

2013-11-28 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5581:


Attachment: HDFS-5581.patch

Updated the patch

> NameNodeFsck should use only one instance of BlockPlacementPolicy
> -
>
> Key: HDFS-5581
> URL: https://issues.apache.org/jira/browse/HDFS-5581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Vinay
>Assignee: Vinay
> Attachments: HDFS-5581.patch, HDFS-5581.patch
>
>
> While going through NameNodeFsck I found that the following code creates a new 
> instance of BlockPlacementPolicy for every block.
> {code}  // verify block placement policy
>   BlockPlacementStatus blockPlacementStatus = 
>   BlockPlacementPolicy.getInstance(conf, null, networktopology).
>   verifyBlockPlacement(path, lBlk, targetFileReplication);{code}
> It would be better to use the namenode's BPP itself instead of creating a new 
> one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache should stub out native mlock

2013-11-28 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-5562:


   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Andrew for committing, and Colin for updating the patch!
Closing this issue.

> TestCacheDirectives and TestFsDatasetCache should stub out native mlock
> ---
>
> Key: HDFS-5562
> URL: https://issues.apache.org/jira/browse/HDFS-5562
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Akira AJISAKA
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: HDFS-5562.002.patch, HDFS-5562.3.patch, 
> HDFS-5562.v1.patch, 
> org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache-output.txt, 
> org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.txt
>
>
> Some tests fail on trunk.
> {code}
> Tests in error:
>   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
> datan...
>   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
> » Runtime
>   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
> Cannot ...
>   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
> datanode ...
> Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
> {code}
> For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5581) NameNodeFsck should use only one instance of BlockPlacementPolicy

2013-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835174#comment-13835174
 ] 

Hadoop QA commented on HDFS-5581:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616332/HDFS-5581.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestFsck

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5603//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5603//console

This message is automatically generated.

> NameNodeFsck should use only one instance of BlockPlacementPolicy
> -
>
> Key: HDFS-5581
> URL: https://issues.apache.org/jira/browse/HDFS-5581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Vinay
>Assignee: Vinay
> Attachments: HDFS-5581.patch
>
>
> While going through NameNodeFsck I found that the following code creates a new 
> instance of BlockPlacementPolicy for every block.
> {code}  // verify block placement policy
>   BlockPlacementStatus blockPlacementStatus = 
>   BlockPlacementPolicy.getInstance(conf, null, networktopology).
>   verifyBlockPlacement(path, lBlk, targetFileReplication);{code}
> It would be better to use the namenode's BPP itself instead of creating a new 
> one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5177) blocksScheduled count should be decremented for abandoned blocks

2013-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835171#comment-13835171
 ] 

Hadoop QA commented on HDFS-5177:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616329/HDFS-5177.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5602//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5602//console

This message is automatically generated.

> blocksScheduled count should be decremented for abandoned blocks
> -
>
> Key: HDFS-5177
> URL: https://issues.apache.org/jira/browse/HDFS-5177
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Vinay
>Assignee: Vinay
> Attachments: HDFS-5177.patch, HDFS-5177.patch, HDFS-5177.patch
>
>
> DatanodeDescriptor#incBlocksScheduled() is called for every datanode of the 
> block on each allocation, but the count should also be decremented for 
> abandoned blocks.
> When one of the chosen datanodes is down, the block is abandoned, yet the 
> scheduled count on the other, live datanodes still makes them appear loaded, 
> even though in reality they may not be.
> This scheduled count is rolled over every 20 minutes, so stale counts do 
> eventually clear.
> The problem shows up when files are created at a high rate: the inflated 
> scheduled count can cause the local datanode to be skipped for writes, and on 
> small clusters writes can even fail.
> So we need to decrement the unnecessary count when a block is abandoned.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5582) hdfs getconf -excludeFile or -includeFile always failed

2013-11-28 Thread Henry Hung (JIRA)
Henry Hung created HDFS-5582:


 Summary: hdfs getconf -excludeFile or -includeFile always failed
 Key: HDFS-5582
 URL: https://issues.apache.org/jira/browse/HDFS-5582
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Henry Hung
Priority: Minor


In hadoop-2.2.0, executing getconf for the exclude or include file always 
returns an error message:

{code}
[hadoop@fphd1 hadoop-2.2.0]$ bin/hdfs getconf -excludeFile
Configuration DFSConfigKeys.DFS_HOSTS_EXCLUDE is missing.

[hadoop@fphd1 hadoop-2.2.0]$ bin/hdfs getconf -includeFile
Configuration DFSConfigKeys.DFS_HOSTS is missing.
{code}

The root cause is simple: the source code of 
{{org/apache/hadoop/hdfs/tools/GetConf.java}} hard-codes the lookup keys as the 
string literals {{"DFSConfigKeys.DFS_HOSTS"}} and {{"DFSConfigKeys.DFS_HOSTS_EXCLUDE"}}:

{code}
  map.put(INCLUDE_FILE.getName().toLowerCase(), 
  new CommandHandler("DFSConfigKeys.DFS_HOSTS"));
  map.put(EXCLUDE_FILE.getName().toLowerCase(),
  new CommandHandler("DFSConfigKeys.DFS_HOSTS_EXCLUDE"));
{code}

A simple fix is to remove the quotes so the actual constants are used:

{code}
  map.put(INCLUDE_FILE.getName().toLowerCase(), 
  new CommandHandler(DFSConfigKeys.DFS_HOSTS));
  map.put(EXCLUDE_FILE.getName().toLowerCase(),
  new CommandHandler(DFSConfigKeys.DFS_HOSTS_EXCLUDE));
{code}
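A toy illustration of the mistake (this is not the actual GetConf code; the value "dfs.hosts" is assumed to be what DFSConfigKeys.DFS_HOSTS resolves to): quoting the constant name turns the lookup key into a literal string that never matches a real configuration key, so the lookup always fails.

```java
import java.util.HashMap;
import java.util.Map;

public class QuotedKeyBug {
    // Stand-in for the real constant; "dfs.hosts" is assumed to be the
    // value of DFSConfigKeys.DFS_HOSTS.
    static final String DFS_HOSTS = "dfs.hosts";

    // A tiny configuration containing the real key.
    static Map<String, String> sampleConf() {
        Map<String, String> conf = new HashMap<>();
        conf.put("dfs.hosts", "/etc/hadoop/dfs.include");
        return conf;
    }

    static String lookup(Map<String, String> conf, String key) {
        String v = conf.get(key);
        return v == null ? "Configuration " + key + " is missing." : v;
    }

    public static void main(String[] args) {
        // Buggy: looks up the literal string "DFSConfigKeys.DFS_HOSTS",
        // which is never a real configuration key.
        System.out.println(lookup(sampleConf(), "DFSConfigKeys.DFS_HOSTS"));
        // Fixed: looks up the constant's value, i.e. "dfs.hosts".
        System.out.println(lookup(sampleConf(), DFS_HOSTS));
    }
}
```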



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5581) NameNodeFsck should use only one instance of BlockPlacementPolicy

2013-11-28 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5581:


Status: Patch Available  (was: Open)

> NameNodeFsck should use only one instance of BlockPlacementPolicy
> -
>
> Key: HDFS-5581
> URL: https://issues.apache.org/jira/browse/HDFS-5581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Vinay
>Assignee: Vinay
> Attachments: HDFS-5581.patch
>
>
> While going through NameNodeFsck I found that the following code creates a new 
> instance of BlockPlacementPolicy for every block.
> {code}  // verify block placement policy
>   BlockPlacementStatus blockPlacementStatus = 
>   BlockPlacementPolicy.getInstance(conf, null, networktopology).
>   verifyBlockPlacement(path, lBlk, targetFileReplication);{code}
> It would be better to use the namenode's BPP itself instead of creating a new 
> one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5581) NameNodeFsck should use only one instance of BlockPlacementPolicy

2013-11-28 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5581:


Attachment: HDFS-5581.patch

Attaching a simple patch for this

> NameNodeFsck should use only one instance of BlockPlacementPolicy
> -
>
> Key: HDFS-5581
> URL: https://issues.apache.org/jira/browse/HDFS-5581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Vinay
>Assignee: Vinay
> Attachments: HDFS-5581.patch
>
>
> While going through NameNodeFsck I found that the following code creates a new 
> instance of BlockPlacementPolicy for every block.
> {code}  // verify block placement policy
>   BlockPlacementStatus blockPlacementStatus = 
>   BlockPlacementPolicy.getInstance(conf, null, networktopology).
>   verifyBlockPlacement(path, lBlk, targetFileReplication);{code}
> It would be better to use the namenode's BPP itself instead of creating a new 
> one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5581) NameNodeFsck should use only one instance of BlockPlacementPolicy

2013-11-28 Thread Vinay (JIRA)
Vinay created HDFS-5581:
---

 Summary: NameNodeFsck should use only one instance of 
BlockPlacementPolicy
 Key: HDFS-5581
 URL: https://issues.apache.org/jira/browse/HDFS-5581
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinay
Assignee: Vinay


While going through NameNodeFsck I found that the following code creates a new 
instance of BlockPlacementPolicy for every block.

{code}  // verify block placement policy
  BlockPlacementStatus blockPlacementStatus = 
  BlockPlacementPolicy.getInstance(conf, null, networktopology).
  verifyBlockPlacement(path, lBlk, targetFileReplication);{code}

It would be better to use the namenode's BPP itself instead of creating a new 
one.
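A minimal, hypothetical sketch of the idea (none of these names are HDFS APIs; PlacementPolicy stands in for BlockPlacementPolicy): constructing the policy once outside the per-block loop, the way fsck could reuse the namenode's own instance, avoids one instantiation per block.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ReusePolicySketch {
    // Counts how many policy objects get constructed.
    static final AtomicInteger INSTANCES = new AtomicInteger();

    static class PlacementPolicy {
        PlacementPolicy() { INSTANCES.incrementAndGet(); } // expensive setup in real code
        boolean verify(String block) { return !block.isEmpty(); }
    }

    // Before: a new policy per block, mirroring what NameNodeFsck does today.
    static int checkPerBlock(List<String> blocks) {
        int ok = 0;
        for (String b : blocks) {
            if (new PlacementPolicy().verify(b)) ok++;
        }
        return ok;
    }

    // After: one shared policy (e.g. the namenode's own BPP) reused for all blocks.
    static int checkShared(List<String> blocks) {
        PlacementPolicy policy = new PlacementPolicy();
        int ok = 0;
        for (String b : blocks) {
            if (policy.verify(b)) ok++;
        }
        return ok;
    }

    public static void main(String[] args) {
        List<String> blocks = Arrays.asList("b1", "b2", "b3");
        INSTANCES.set(0);
        checkPerBlock(blocks);
        System.out.println("per-block instances: " + INSTANCES.get()); // one per block
        INSTANCES.set(0);
        checkShared(blocks);
        System.out.println("shared instances:    " + INSTANCES.get()); // exactly one
    }
}
```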



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5580) Infinite loop in Balancer.waitForMoveCompletion

2013-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835138#comment-13835138
 ] 

Hadoop QA commented on HDFS-5580:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616319/HDFS-5580.v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5601//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5601//console

This message is automatically generated.

> Infinite loop in Balancer.waitForMoveCompletion
> ---
>
> Key: HDFS-5580
> URL: https://issues.apache.org/jira/browse/HDFS-5580
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HDFS-5580.v1.patch, HDFS-5580.v2.patch, 
> TestBalancerWithNodeGroupTimeout.log
>
>
> In a recent 
> [build|https://builds.apache.org/job/PreCommit-HDFS-Build/5592//testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithNodeGroup/testBalancerWithNodeGroup/]
>  in HDFS-5574, TestBalancerWithNodeGroup timed out; this is also mentioned in 
> HDFS-4376 
> [here|https://issues.apache.org/jira/browse/HDFS-4376?focusedCommentId=13799402&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13799402].
>  
> Looks like the bug was introduced by HDFS-3495.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5177) blocksScheduled count should be decremented for abandoned blocks

2013-11-28 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5177:


Attachment: HDFS-5177.patch

Updated latest patch.
Please review

> blocksScheduled count should be decremented for abandoned blocks
> -
>
> Key: HDFS-5177
> URL: https://issues.apache.org/jira/browse/HDFS-5177
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Vinay
>Assignee: Vinay
> Attachments: HDFS-5177.patch, HDFS-5177.patch, HDFS-5177.patch
>
>
> DatanodeDescriptor#incBlocksScheduled() is called for every datanode of the 
> block on each allocation, but the count should also be decremented for 
> abandoned blocks.
> When one of the chosen datanodes is down, the block is abandoned, yet the 
> scheduled count on the other, live datanodes still makes them appear loaded, 
> even though in reality they may not be.
> This scheduled count is rolled over every 20 minutes, so stale counts do 
> eventually clear.
> The problem shows up when files are created at a high rate: the inflated 
> scheduled count can cause the local datanode to be skipped for writes, and on 
> small clusters writes can even fail.
> So we need to decrement the unnecessary count when a block is abandoned.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5580) Infinite loop in Balancer.waitForMoveCompletion

2013-11-28 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-5580:


Attachment: HDFS-5580.v2.patch

New patch fixing a minor bug: I forgot to choose a non-local location at the end.

> Infinite loop in Balancer.waitForMoveCompletion
> ---
>
> Key: HDFS-5580
> URL: https://issues.apache.org/jira/browse/HDFS-5580
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HDFS-5580.v1.patch, HDFS-5580.v2.patch, 
> TestBalancerWithNodeGroupTimeout.log
>
>
> In a recent 
> [build|https://builds.apache.org/job/PreCommit-HDFS-Build/5592//testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithNodeGroup/testBalancerWithNodeGroup/]
>  in HDFS-5574, TestBalancerWithNodeGroup timed out; this is also mentioned in 
> HDFS-4376 
> [here|https://issues.apache.org/jira/browse/HDFS-4376?focusedCommentId=13799402&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13799402].
>  
> Looks like the bug was introduced by HDFS-3495.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5580) Infinite loop in Balancer.waitForMoveCompletion

2013-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834977#comment-13834977
 ] 

Hadoop QA commented on HDFS-5580:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616259/HDFS-5580.v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
  org.apache.hadoop.hdfs.server.balancer.TestBalancer
  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5600//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5600//console

This message is automatically generated.

> Infinite loop in Balancer.waitForMoveCompletion
> ---
>
> Key: HDFS-5580
> URL: https://issues.apache.org/jira/browse/HDFS-5580
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HDFS-5580.v1.patch, TestBalancerWithNodeGroupTimeout.log
>
>
> In a recent 
> [build|https://builds.apache.org/job/PreCommit-HDFS-Build/5592//testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithNodeGroup/testBalancerWithNodeGroup/]
>  in HDFS-5574, TestBalancerWithNodeGroup timed out; this is also mentioned in 
> HDFS-4376 
> [here|https://issues.apache.org/jira/browse/HDFS-4376?focusedCommentId=13799402&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13799402].
>  
> Looks like the bug was introduced by HDFS-3495.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5580) Infinite loop in Balancer.waitForMoveCompletion

2013-11-28 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-5580:


Description: 
In a recent 
[build|https://builds.apache.org/job/PreCommit-HDFS-Build/5592//testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithNodeGroup/testBalancerWithNodeGroup/]
 in HDFS-5574, TestBalancerWithNodeGroup timed out; this is also mentioned in 
HDFS-4376 
[here|https://issues.apache.org/jira/browse/HDFS-4376?focusedCommentId=13799402&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13799402].
 
Looks like the bug was introduced by HDFS-3495.

  was:
In a recent 
[build|https://builds.apache.org/job/PreCommit-HDFS-Build/5592//testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithNodeGroup/testBalancerWithNodeGroup/]
 in HDFS-5574, TestBalancerWithNodeGroup timed out; this is also mentioned in 
HDFS-4376 
[here|https://issues.apache.org/jira/browse/HDFS-4376?focusedCommentId=13799402&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13799402].
 
Looks like the bug was introduced by HDFS-4376.


> Infinite loop in Balancer.waitForMoveCompletion
> ---
>
> Key: HDFS-5580
> URL: https://issues.apache.org/jira/browse/HDFS-5580
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HDFS-5580.v1.patch, TestBalancerWithNodeGroupTimeout.log
>
>
> In a recent 
> [build|https://builds.apache.org/job/PreCommit-HDFS-Build/5592//testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithNodeGroup/testBalancerWithNodeGroup/]
>  in HDFS-5574, TestBalancerWithNodeGroup timed out; this is also mentioned in 
> HDFS-4376 
> [here|https://issues.apache.org/jira/browse/HDFS-4376?focusedCommentId=13799402&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13799402].
>  
> Looks like the bug was introduced by HDFS-3495.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5580) Infinite loop in Balancer.waitForMoveCompletion

2013-11-28 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-5580:


Attachment: HDFS-5580.v1.patch

Bug analysis:
In Balancer.PendingBlockMove.chooseProxySource()
{code}
  boolean find = false;
  for (BalancerDatanode loc : block.getLocations()) {
// check if there is replica which is on the same rack with the target
if (cluster.isOnSameRack(loc.getDatanode(), targetDN) && addTo(loc)) {
  find = true;
  // if cluster is not nodegroup aware or the proxy is on the same 
  // nodegroup with target, then we already find the nearest proxy
  if (!cluster.isNodeGroupAware() 
  || cluster.isOnSameNodeGroup(loc.getDatanode(), targetDN)) {
return true;
  }
}

if (!find) {
  // find out a non-busy replica out of rack of target
  find = addTo(loc);
}
  }
{code}
A PendingBlockMove may be added to multiple locations instead of just one, but 
the consumer thread pool removes only one PendingBlockMove at a time, leaving 
orphaned PendingBlockMove entries in the queue. Balancer.waitForMoveCompletion 
waits for the queue to become empty, which never happens, so the Balancer hangs.
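The accounting bug can be modeled in miniature (a toy sketch, not Balancer code; node and move names are made up): one pending move ends up in two per-node queues, the dispatcher drains only one copy, and the "all queues empty" condition can never hold.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class StuckQueueSketch {
    // The condition waitForMoveCompletion effectively polls for.
    static boolean allQueuesEmpty(Map<String, Deque<String>> queues) {
        return queues.values().stream().allMatch(Deque::isEmpty);
    }

    // Reproduces the bug in miniature and reports whether the
    // "all moves done" condition can ever become true.
    static boolean simulateBuggyRun() {
        Map<String, Deque<String>> queues = new HashMap<>();
        queues.put("nodeA", new ArrayDeque<>());
        queues.put("nodeB", new ArrayDeque<>());

        // Bug: the same pending move is enqueued on two source nodes.
        queues.get("nodeA").add("move-1");
        queues.get("nodeB").add("move-1");

        // The dispatcher executes the move once and removes only one copy.
        queues.get("nodeA").poll();

        // The orphaned copy on nodeB keeps this false forever.
        return allQueuesEmpty(queues);
    }

    public static void main(String[] args) {
        System.out.println("queues drained: " + simulateBuggyRun()); // false
    }
}
```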



> Infinite loop in Balancer.waitForMoveCompletion
> ---
>
> Key: HDFS-5580
> URL: https://issues.apache.org/jira/browse/HDFS-5580
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HDFS-5580.v1.patch, TestBalancerWithNodeGroupTimeout.log
>
>
> In a recent 
> [build|https://builds.apache.org/job/PreCommit-HDFS-Build/5592//testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithNodeGroup/testBalancerWithNodeGroup/]
>  in HDFS-5574, TestBalancerWithNodeGroup timed out; this is also mentioned in 
> HDFS-4376 
> [here|https://issues.apache.org/jira/browse/HDFS-4376?focusedCommentId=13799402&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13799402].
>  
> Looks like the bug was introduced by HDFS-4376.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5580) Infinite loop in Balancer.waitForMoveCompletion

2013-11-28 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-5580:


Status: Patch Available  (was: Open)

> Infinite loop in Balancer.waitForMoveCompletion
> ---
>
> Key: HDFS-5580
> URL: https://issues.apache.org/jira/browse/HDFS-5580
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: TestBalancerWithNodeGroupTimeout.log
>
>
> In a recent 
> [build|https://builds.apache.org/job/PreCommit-HDFS-Build/5592//testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithNodeGroup/testBalancerWithNodeGroup/]
>  in HDFS-5574, TestBalancerWithNodeGroup timed out; this is also mentioned in 
> HDFS-4376 
> [here|https://issues.apache.org/jira/browse/HDFS-4376?focusedCommentId=13799402&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13799402].
>  
> Looks like the bug was introduced by HDFS-4376.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5430) Support TTL on CacheDirectives

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834828#comment-13834828
 ] 

Hudson commented on HDFS-5430:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1596 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1596/])
HDFS-5430. Support TTL on CacheDirectives. Contributed by Andrew Wang. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546301)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCacheAdminConf.xml


> Support TTL on CacheDirectives
> --
>
> Key: HDFS-5430
> URL: https://issues.apache.org/jira/browse/HDFS-5430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hdfs-5430-1.patch, hdfs-5430-2.patch, hdfs-5430-3.patch, 
> hdfs-5430-4.patch
>
>
> It would be nice if CacheBasedPathDirectives would support an expiration 
> time, after which they would be automatically removed by the NameNode.  This 
> time would probably be in wall-clock time for the convenience of system 
> administrators.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5537) Remove FileWithSnapshot interface

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834830#comment-13834830
 ] 

Hudson commented on HDFS-5537:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1596 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1596/])
HDFS-5537. Remove FileWithSnapshot interface.  Contributed by jing9 (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546184)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileUnderConstructionFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotFSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java


> Remove FileWithSnapshot interface
> -
>
> Key: HDFS-5537
> URL: https://issues.apache.org/jira/browse/HDFS-5537
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, snapshots
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-5537.000.patch, HDFS-5537.001.patch, 
> HDFS-5537.002.patch, HDFS-5537.003.patch, HDFS-5537.003.patch, 
> HDFS-5537.004.patch, HDFS-5537.004.patch
>
>
> We use the FileWithSnapshot interface to define a set of methods shared by 
> INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot. After using 
> the Under-Construction feature to replace INodeFileUC and 
> INodeFileUCWithSnapshot, we no longer need this interface.





[jira] [Commented] (HDFS-5577) NFS user guide update

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834829#comment-13834829
 ] 

Hudson commented on HDFS-5577:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1596 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1596/])
HDFS-5577. NFS user guide update. Contributed by Brandon Li (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546210)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsNfsGateway.apt.vm


> NFS user guide update
> -
>
> Key: HDFS-5577
> URL: https://issues.apache.org/jira/browse/HDFS-5577
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Trivial
> Fix For: 2.2.1
>
> Attachments: HDFS-5577.patch
>
>
> dfs.access.time.precision is deprecated and the doc should use 
> dfs.namenode.accesstime.precision instead.





[jira] [Commented] (HDFS-5556) add some more NameNode cache statistics, cache pool stats

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834826#comment-13834826
 ] 

Hudson commented on HDFS-5556:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1596 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1596/])
HDFS-5556. Add some more NameNode cache statistics, cache pool stats (cmccabe) 
(cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546143)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolEntry.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStatistics.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/FSDatasetMBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java


> add some more NameNode cache statistics, cache pool stats
> -
>
> Key: HDFS-5556
> URL: https://issues.apache.org/jira/browse/HDFS-5556
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix 

[jira] [Commented] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache should stub out native mlock

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834822#comment-13834822
 ] 

Hudson commented on HDFS-5562:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1596 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1596/])
HDFS-5562. TestCacheDirectives and TestFsDatasetCache should stub out native 
mlock. Contributed by Colin Patrick McCabe and Akira Ajisaka. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546246)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java


> TestCacheDirectives and TestFsDatasetCache should stub out native mlock
> ---
>
> Key: HDFS-5562
> URL: https://issues.apache.org/jira/browse/HDFS-5562
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Akira AJISAKA
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5562.002.patch, HDFS-5562.3.patch, 
> HDFS-5562.v1.patch, 
> org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache-output.txt, 
> org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.txt
>
>
> Some tests fail on trunk.
> {code}
> Tests in error:
>   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
> datan...
>   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
> » Runtime
>   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
> Cannot ...
>   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
> datanode ...
> Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
> {code}
> For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/
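The fix above stubs out the native mlock call so the cache tests can run without native libraries or locked-memory limits. A minimal, self-contained sketch of that stubbing pattern follows; the names (`Mlocker`, `NoopMlocker`) are illustrative only, not Hadoop's actual `NativeIO` API:

```java
import java.nio.ByteBuffer;

// Hypothetical seam for a native call: production code would invoke mlock,
// while tests swap in a no-op recording stub.
interface Mlocker {
  void mlock(ByteBuffer buf, long len);
}

class NoopMlocker implements Mlocker {
  // Test stub: records how much was "locked" instead of touching native code.
  long totalLocked = 0;

  @Override
  public void mlock(ByteBuffer buf, long len) {
    totalLocked += len;
  }
}

public class MlockStubDemo {
  public static void main(String[] args) {
    NoopMlocker stub = new NoopMlocker();
    stub.mlock(ByteBuffer.allocate(4096), 4096);
    stub.mlock(ByteBuffer.allocate(4096), 4096);
    System.out.println(stub.totalLocked); // 8192
  }
}
```

With such a seam in place, a test can assert on the recorded lengths rather than requiring a real `RLIMIT_MEMLOCK` budget on the build machine.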





[jira] [Commented] (HDFS-5545) Allow specifying endpoints for listeners in HttpServer

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834823#comment-13834823
 ] 

Hudson commented on HDFS-5545:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1596 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1596/])
HDFS-5545. Allow specifying endpoints for listeners in HttpServer. Contributed 
by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546151)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/HttpServerFunctionalTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestGlobalFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestPathFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogLevel.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestJobEndNotifier.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestJobEndNotifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestWebApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/WebServer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java


> Allow specifying endpoints for listeners in HttpServer
> --
>
> Key: HDFS-5545
> URL: https://issues.apache.org/jira/browse/HDFS-5545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0
>
> Attachments: HDFS-5545.000.patch, HDFS-5545.001.patch, 
> HDFS-5545.002.patch, HDFS-5545.003.patch
>
>
> Currently HttpServer listens on an HTTP port and provides a method to allow 
> users to add an SSL listener after the server starts. This complicates the 
> logic if the client needs to set up HTTP / HTTPS servers.
> This jira proposes to replace these two methods with the concept of listener 
> endpoints. A listener endpoint is a URI (i.e., scheme + host + port) that 
> the HttpServer should listen to. This concept simplifies the task of managing 
> the HTTP server from HDFS / YARN.
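The endpoint concept described above can be illustrated with plain `java.net.URI` parsing; this is a sketch of the idea only, not the actual `HttpServer` builder API, and the addresses are made up:

```java
import java.net.URI;

public class EndpointDemo {
  public static void main(String[] args) {
    // A listener endpoint carries scheme + host + port in one value,
    // so HTTP and HTTPS listeners are configured uniformly.
    URI http  = URI.create("http://0.0.0.0:50070");
    URI https = URI.create("https://0.0.0.0:50470");
    for (URI ep : new URI[] { http, https }) {
      System.out.println(ep.getScheme() + " on " + ep.getHost() + ":" + ep.getPort());
    }
  }
}
```

The design advantage is that the caller decides up front which schemes to serve, instead of mutating a running server to bolt on SSL afterwards.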





[jira] [Commented] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834825#comment-13834825
 ] 

Hudson commented on HDFS-5563:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1596 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1596/])
HDFS-5563. NFS gateway should commit the buffered data when read request comes 
after write to the same file. Contributed by Brandon Li (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546233)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestWrites.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> NFS gateway should commit the buffered data when read request comes after 
> write to the same file
> 
>
> Key: HDFS-5563
> URL: https://issues.apache.org/jira/browse/HDFS-5563
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.2.1
>
> Attachments: HDFS-5563.001.patch, HDFS-5563.002.patch, 
> HDFS-5563.003.patch
>
>
> HDFS write is asynchronous and data may not be available to read immediately 
> after a write.
> One of the main reasons is that DFSClient doesn't flush data to the DN until 
> its local buffer is full.
> To work around this problem, when a read comes after a write to the same 
> file, the NFS gateway should sync the data so the read request can get the 
> latest content. The drawback is that frequent hsync() calls can slow down 
> data writes.
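The read-after-write decision described above can be sketched as a tiny state check: if the gateway has accepted more bytes than it has flushed, it must sync before serving the read. The field and method names below are illustrative only, not the actual `OpenFileCtx` API (the real gateway would call `hsync()` on the HDFS output stream where the comment indicates):

```java
public class ReadAfterWriteDemo {
  static class OpenFileState {
    long flushedOffset; // bytes already durable on DataNodes
    long nextOffset;    // bytes accepted from the NFS client so far

    // A READ must see all acknowledged writes, so sync if any are buffered.
    boolean mustSyncBeforeRead() {
      return nextOffset > flushedOffset;
    }

    void sync() {
      // Stand-in for hsync() on the real HDFS output stream.
      flushedOffset = nextOffset;
    }
  }

  public static void main(String[] args) {
    OpenFileState f = new OpenFileState();
    f.nextOffset = 4096;          // client wrote 4 KB, still buffered
    if (f.mustSyncBeforeRead()) { // a READ arrives for the same file
      f.sync();
    }
    System.out.println(f.flushedOffset); // 4096
  }
}
```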





[jira] [Commented] (HDFS-5537) Remove FileWithSnapshot interface

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834818#comment-13834818
 ] 

Hudson commented on HDFS-5537:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1622 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1622/])
HDFS-5537. Remove FileWithSnapshot interface.  Contributed by jing9 (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546184)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileUnderConstructionFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotFSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java


> Remove FileWithSnapshot interface
> -
>
> Key: HDFS-5537
> URL: https://issues.apache.org/jira/browse/HDFS-5537
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, snapshots
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-5537.000.patch, HDFS-5537.001.patch, 
> HDFS-5537.002.patch, HDFS-5537.003.patch, HDFS-5537.003.patch, 
> HDFS-5537.004.patch, HDFS-5537.004.patch
>
>
> We use the FileWithSnapshot interface to define a set of methods shared by 
> INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot. After using 
> the Under-Construction feature to replace INodeFileUC and 
> INodeFileUCWithSnapshot, we no longer need this interface.





[jira] [Commented] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834813#comment-13834813
 ] 

Hudson commented on HDFS-5563:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1622 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1622/])
HDFS-5563. NFS gateway should commit the buffered data when read request comes 
after write to the same file. Contributed by Brandon Li (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546233)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestWrites.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> NFS gateway should commit the buffered data when read request comes after 
> write to the same file
> 
>
> Key: HDFS-5563
> URL: https://issues.apache.org/jira/browse/HDFS-5563
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.2.1
>
> Attachments: HDFS-5563.001.patch, HDFS-5563.002.patch, 
> HDFS-5563.003.patch
>
>
> HDFS write is asynchronous and data may not be available to read immediately 
> after a write.
> One of the main reasons is that DFSClient doesn't flush data to the DN until 
> its local buffer is full.
> To work around this problem, when a read comes after a write to the same 
> file, the NFS gateway should sync the data so the read request can get the 
> latest content. The drawback is that frequent hsync() calls can slow down 
> data writes.





[jira] [Commented] (HDFS-5545) Allow specifying endpoints for listeners in HttpServer

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834811#comment-13834811
 ] 

Hudson commented on HDFS-5545:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1622 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1622/])
HDFS-5545. Allow specifying endpoints for listeners in HttpServer. Contributed 
by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546151)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/HttpServerFunctionalTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestGlobalFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestPathFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogLevel.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestJobEndNotifier.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestJobEndNotifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestWebApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/WebServer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java


> Allow specifying endpoints for listeners in HttpServer
> --
>
> Key: HDFS-5545
> URL: https://issues.apache.org/jira/browse/HDFS-5545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0
>
> Attachments: HDFS-5545.000.patch, HDFS-5545.001.patch, 
> HDFS-5545.002.patch, HDFS-5545.003.patch
>
>
> Currently HttpServer listens on an HTTP port and provides a method to allow 
> users to add an SSL listener after the server starts. This complicates the 
> logic if the client needs to set up HTTP / HTTPS servers.
> This jira proposes to replace these two methods with the concept of listener 
> endpoints. A listener endpoint is a URI (i.e., scheme + host + port) that 
> the HttpServer should listen to. This concept simplifies the task of managing 
> the HTTP server from HDFS / YARN.





[jira] [Commented] (HDFS-5430) Support TTL on CacheDirectives

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834816#comment-13834816
 ] 

Hudson commented on HDFS-5430:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1622 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1622/])
HDFS-5430. Support TTL on CacheDirectives. Contributed by Andrew Wang. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546301)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCacheAdminConf.xml


> Support TTL on CacheDirectives
> --
>
> Key: HDFS-5430
> URL: https://issues.apache.org/jira/browse/HDFS-5430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hdfs-5430-1.patch, hdfs-5430-2.patch, hdfs-5430-3.patch, 
> hdfs-5430-4.patch
>
>
> It would be nice if CacheBasedPathDirectives would support an expiration 
> time, after which they would be automatically removed by the NameNode.  This 
> time would probably be in wall-clock time for the convenience of system 
> administrators.
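The wall-clock expiry check at the heart of such a TTL feature is a one-line comparison against an absolute expiry timestamp. A minimal sketch follows; the method name and the "0 means no TTL" convention are assumptions for this example, not Hadoop's actual `CacheDirectiveInfo.Expiration` API:

```java
public class TtlDemo {
  // expiryMs is an absolute wall-clock time in milliseconds since the epoch.
  // In this sketch, 0 means "no TTL set" (never expires).
  static boolean isExpired(long expiryMs, long nowMs) {
    return expiryMs != 0 && nowMs >= expiryMs;
  }

  public static void main(String[] args) {
    long now = System.currentTimeMillis();
    System.out.println(isExpired(now - 1_000, now));  // true: expired 1 s ago
    System.out.println(isExpired(now + 60_000, now)); // false: 1 min remaining
    System.out.println(isExpired(0, now));            // false: no TTL set
  }
}
```

A periodic scanner (the CacheReplicationMonitor, in this design) would evaluate this predicate for each directive and drop the expired ones.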





[jira] [Commented] (HDFS-5577) NFS user guide update

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834817#comment-13834817
 ] 

Hudson commented on HDFS-5577:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1622 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1622/])
HDFS-5577. NFS user guide update. Contributed by Brandon Li (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546210)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsNfsGateway.apt.vm


> NFS user guide update
> -
>
> Key: HDFS-5577
> URL: https://issues.apache.org/jira/browse/HDFS-5577
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Trivial
> Fix For: 2.2.1
>
> Attachments: HDFS-5577.patch
>
>
> dfs.access.time.precision is deprecated and the doc should use 
> dfs.namenode.accesstime.precision instead.





[jira] [Commented] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache should stub out native mlock

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834810#comment-13834810
 ] 

Hudson commented on HDFS-5562:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1622 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1622/])
HDFS-5562. TestCacheDirectives and TestFsDatasetCache should stub out native 
mlock. Contributed by Colin Patrick McCabe and Akira Ajisaka. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546246)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java


> TestCacheDirectives and TestFsDatasetCache should stub out native mlock
> ---
>
> Key: HDFS-5562
> URL: https://issues.apache.org/jira/browse/HDFS-5562
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Akira AJISAKA
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5562.002.patch, HDFS-5562.3.patch, 
> HDFS-5562.v1.patch, 
> org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache-output.txt, 
> org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.txt
>
>
> Some tests fail on trunk.
> {code}
> Tests in error:
>   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
> datan...
>   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
> » Runtime
>   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
> Cannot ...
>   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
> datanode ...
> Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
> {code}
> For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/
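The stubbing approach the issue title describes, routing the native mlock call through an injectable seam so tests can substitute a no-op, can be sketched as below; the class and method names are illustrative, not the actual Hadoop NativeIO API.

```java
// Sketch of stubbing out native mlock for tests (illustrative names only).
// Production code would call into the native library; tests inject a no-op
// so they pass on machines without native libs or with a low locked-memory
// ulimit, which is what made these tests fail on trunk.
public class MlockStubSketch {
    interface CacheManipulator {
        void mlock(String blockId, long length);
    }

    // Stand-in for the real native-backed implementation.
    static class NativeCacheManipulator implements CacheManipulator {
        public void mlock(String blockId, long length) {
            throw new UnsupportedOperationException(
                "native mlock unavailable in this sketch");
        }
    }

    // Test stub: pretends the pages were locked without touching native code.
    static class NoMlockCacheManipulator implements CacheManipulator {
        public void mlock(String blockId, long length) {
            // no-op
        }
    }

    public static void main(String[] args) {
        CacheManipulator stub = new NoMlockCacheManipulator();
        stub.mlock("blk_1", 4096); // succeeds even without native support
        System.out.println("mlock stubbed out");
    }
}
```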



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5556) add some more NameNode cache statistics, cache pool stats

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834814#comment-13834814
 ] 

Hudson commented on HDFS-5556:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1622 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1622/])
HDFS-5556. Add some more NameNode cache statistics, cache pool stats (cmccabe) 
(cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546143)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolEntry.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStatistics.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/FSDatasetMBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java


> add some more NameNode cache statistics, cache pool stats
> -
>
> Key: HDFS-5556
> URL: https://issues.apache.org/jira/browse/HDFS-5556
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>   

[jira] [Created] (HDFS-5580) Dead loop in Balancer.waitForMoveCompletion

2013-11-28 Thread Binglin Chang (JIRA)
Binglin Chang created HDFS-5580:
---

 Summary: Dead loop in Balancer.waitForMoveCompletion
 Key: HDFS-5580
 URL: https://issues.apache.org/jira/browse/HDFS-5580
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Binglin Chang
Assignee: Binglin Chang
 Attachments: TestBalancerWithNodeGroupTimeout.log

In a recent 
[build|https://builds.apache.org/job/PreCommit-HDFS-Build/5592//testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithNodeGroup/testBalancerWithNodeGroup/]
 in HDFS-5574, TestBalancerWithNodeGroup timed out; this is also mentioned in 
HDFS-4376 
[here|https://issues.apache.org/jira/browse/HDFS-4376?focusedCommentId=13799402&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13799402].
It looks like the bug was introduced by HDFS-4376.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5580) Infinite loop in Balancer.waitForMoveCompletion

2013-11-28 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-5580:


Summary: Infinite loop in Balancer.waitForMoveCompletion  (was: Dead loop 
in Balancer.waitForMoveCompletion)

> Infinite loop in Balancer.waitForMoveCompletion
> ---
>
> Key: HDFS-5580
> URL: https://issues.apache.org/jira/browse/HDFS-5580
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: TestBalancerWithNodeGroupTimeout.log
>
>
> In a recent 
> [build|https://builds.apache.org/job/PreCommit-HDFS-Build/5592//testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithNodeGroup/testBalancerWithNodeGroup/]
>  in HDFS-5574, TestBalancerWithNodeGroup timed out; this is also mentioned in 
> HDFS-4376 
> [here|https://issues.apache.org/jira/browse/HDFS-4376?focusedCommentId=13799402&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13799402].
> It looks like the bug was introduced by HDFS-4376.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5580) Infinite loop in Balancer.waitForMoveCompletion

2013-11-28 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-5580:


Attachment: TestBalancerWithNodeGroupTimeout.log

Attaching timeout log

> Infinite loop in Balancer.waitForMoveCompletion
> ---
>
> Key: HDFS-5580
> URL: https://issues.apache.org/jira/browse/HDFS-5580
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: TestBalancerWithNodeGroupTimeout.log
>
>
> In a recent 
> [build|https://builds.apache.org/job/PreCommit-HDFS-Build/5592//testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithNodeGroup/testBalancerWithNodeGroup/]
>  in HDFS-5574, TestBalancerWithNodeGroup timed out; this is also mentioned in 
> HDFS-4376 
> [here|https://issues.apache.org/jira/browse/HDFS-4376?focusedCommentId=13799402&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13799402].
> It looks like the bug was introduced by HDFS-4376.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5430) Support TTL on CacheDirectives

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834727#comment-13834727
 ] 

Hudson commented on HDFS-5430:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #405 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/405/])
HDFS-5430. Support TTL on CacheDirectives. Contributed by Andrew Wang. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546301)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCacheAdminConf.xml


> Support TTL on CacheDirectives
> --
>
> Key: HDFS-5430
> URL: https://issues.apache.org/jira/browse/HDFS-5430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hdfs-5430-1.patch, hdfs-5430-2.patch, hdfs-5430-3.patch, 
> hdfs-5430-4.patch
>
>
> It would be nice if CacheBasedPathDirectives supported an expiration 
> time, after which they would be automatically removed by the NameNode. This 
> time would probably be in wall-clock time for the convenience of system 
> administrators.
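As a rough illustration of the TTL semantics proposed above (all names here are hypothetical, not the HDFS-5430 API): a directive carries an absolute wall-clock expiry time, and the NameNode drops it once the current time passes that point.

```java
// Hypothetical sketch of TTL expiry on a cache directive. The class and
// field names are illustrative only; the real HDFS implementation differs.
public class CacheDirectiveTtlSketch {
    static class Directive {
        // Absolute wall-clock expiry in milliseconds; Long.MAX_VALUE = never.
        final long expiryMs;

        Directive(long expiryMs) { this.expiryMs = expiryMs; }

        // A monitor thread on the NameNode could periodically remove
        // directives for which this returns true.
        boolean isExpired(long nowMs) { return nowMs >= expiryMs; }
    }

    public static void main(String[] args) {
        long now = 1_000_000L;
        Directive neverExpires = new Directive(Long.MAX_VALUE);
        Directive shortLived = new Directive(now + 60_000L); // 1-minute TTL
        System.out.println(neverExpires.isExpired(now));          // false
        System.out.println(shortLived.isExpired(now));            // false
        System.out.println(shortLived.isExpired(now + 120_000L)); // true
    }
}
```

Storing the expiry as an absolute wall-clock instant (rather than a relative countdown) keeps the check cheap and makes the behavior easy for administrators to predict.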



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5537) Remove FileWithSnapshot interface

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834729#comment-13834729
 ] 

Hudson commented on HDFS-5537:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #405 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/405/])
HDFS-5537. Remove FileWithSnapshot interface.  Contributed by jing9 (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546184)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileUnderConstructionFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotFSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java


> Remove FileWithSnapshot interface
> -
>
> Key: HDFS-5537
> URL: https://issues.apache.org/jira/browse/HDFS-5537
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, snapshots
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-5537.000.patch, HDFS-5537.001.patch, 
> HDFS-5537.002.patch, HDFS-5537.003.patch, HDFS-5537.003.patch, 
> HDFS-5537.004.patch, HDFS-5537.004.patch
>
>
> We use the FileWithSnapshot interface to define a set of methods shared by 
> INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot. After using 
> the Under-Construction feature to replace the INodeFileUC and 
> INodeFileUCWithSnapshot, we no longer need this interface.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5556) add some more NameNode cache statistics, cache pool stats

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834725#comment-13834725
 ] 

Hudson commented on HDFS-5556:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #405 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/405/])
HDFS-5556. Add some more NameNode cache statistics, cache pool stats (cmccabe) 
(cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546143)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolEntry.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStatistics.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/FSDatasetMBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java


> add some more NameNode cache statistics, cache pool stats
> -
>
> Key: HDFS-5556
> URL: https://issues.apache.org/jira/browse/HDFS-5556
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix Fo

[jira] [Commented] (HDFS-5568) Support inclusion of snapshot paths in Namenode fsck

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834724#comment-13834724
 ] 

Hudson commented on HDFS-5568:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #405 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/405/])
HDFS-5568. Support includeSnapshots option with Fsck command. Contributed by 
Vinay (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545987)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


> Support inclusion of snapshot paths in Namenode fsck
> 
>
> Key: HDFS-5568
> URL: https://issues.apache.org/jira/browse/HDFS-5568
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Vinay
>Assignee: Vinay
> Fix For: 3.0.0, 2.3.0, 2.2.1
>
> Attachments: HDFS-5568-1.patch, HDFS-5568.patch, HDFS-5568.patch, 
> HDFS-5568.patch, HDFS-5568.patch
>
>
> Support Fsck checking snapshot paths for inconsistency as well.
> Currently Fsck covers snapshot paths only if the given path explicitly refers 
> to a snapshot path.
> We have seen safemode problems in our clusters caused by missing blocks that 
> were only present inside snapshots, yet "hdfs fsck /" reports HEALTHY. 
> So supporting snapshot paths during fsck as well (either by default or on 
> demand) would be helpful in these cases, instead of specifying each and every 
> snapshottable directory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache should stub out native mlock

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834720#comment-13834720
 ] 

Hudson commented on HDFS-5562:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #405 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/405/])
HDFS-5562. TestCacheDirectives and TestFsDatasetCache should stub out native 
mlock. Contributed by Colin Patrick McCabe and Akira Ajisaka. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546246)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java


> TestCacheDirectives and TestFsDatasetCache should stub out native mlock
> ---
>
> Key: HDFS-5562
> URL: https://issues.apache.org/jira/browse/HDFS-5562
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Akira AJISAKA
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5562.002.patch, HDFS-5562.3.patch, 
> HDFS-5562.v1.patch, 
> org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache-output.txt, 
> org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.txt
>
>
> Some tests fail on trunk.
> {code}
> Tests in error:
>   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
> datan...
>   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
> » Runtime
>   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
> Cannot ...
>   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
> datanode ...
> Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
> {code}
> For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834723#comment-13834723
 ] 

Hudson commented on HDFS-5563:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #405 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/405/])
HDFS-5563. NFS gateway should commit the buffered data when read request comes 
after write to the same file. Contributed by Brandon Li (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546233)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestWrites.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> NFS gateway should commit the buffered data when read request comes after 
> write to the same file
> 
>
> Key: HDFS-5563
> URL: https://issues.apache.org/jira/browse/HDFS-5563
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.2.1
>
> Attachments: HDFS-5563.001.patch, HDFS-5563.002.patch, 
> HDFS-5563.003.patch
>
>
> HDFS write is asynchronous, and data may not be available to read immediately 
> after a write.
> One of the main reasons is that DFSClient doesn't flush data to the DataNode 
> until its local buffer is full.
> To work around this problem, when a read comes after a write to the same 
> file, the NFS gateway should sync the data so that the read request gets the 
> latest content. The drawback is that frequent hsync() calls can slow down 
> data writes.
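A minimal sketch of the workaround described above, assuming an illustrative file context (not the real OpenFileCtx/WriteManager API): a read that arrives while writes are still buffered forces a flush first, so the reader sees the latest content.

```java
// Illustrative read-after-write commit sketch. The buffer and flush() here
// stand in for the gateway's pending-write queue and DFSOutputStream.hsync().
import java.util.ArrayList;
import java.util.List;

public class ReadAfterWriteSketch {
    private final List<byte[]> pendingWrites = new ArrayList<>();
    private final StringBuilder committed = new StringBuilder();

    void write(String data) {
        // Writes accumulate in a local buffer, so they are not yet visible
        // to readers -- the root cause of stale reads.
        pendingWrites.add(data.getBytes());
    }

    // Stand-in for hsync(): push all buffered data to durable storage.
    void flush() {
        for (byte[] b : pendingWrites) committed.append(new String(b));
        pendingWrites.clear();
    }

    String read() {
        if (!pendingWrites.isEmpty()) {
            flush(); // commit buffered data before serving the read
        }
        return committed.toString();
    }

    public static void main(String[] args) {
        ReadAfterWriteSketch f = new ReadAfterWriteSketch();
        f.write("hello ");
        f.write("world");
        System.out.println(f.read()); // read sees both buffered writes
    }
}
```

The trade-off noted in the description shows up here directly: every read that overlaps buffered data pays the cost of a flush.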



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5545) Allow specifying endpoints for listeners in HttpServer

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834721#comment-13834721
 ] 

Hudson commented on HDFS-5545:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #405 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/405/])
HDFS-5545. Allow specifying endpoints for listeners in HttpServer. Contributed 
by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546151)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/HttpServerFunctionalTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestGlobalFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestPathFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogLevel.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestJobEndNotifier.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestJobEndNotifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestWebApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/WebServer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java


> Allow specifying endpoints for listeners in HttpServer
> --
>
> Key: HDFS-5545
> URL: https://issues.apache.org/jira/browse/HDFS-5545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0
>
> Attachments: HDFS-5545.000.patch, HDFS-5545.001.patch, 
> HDFS-5545.002.patch, HDFS-5545.003.patch
>
>
> Currently HttpServer listens on an HTTP port and provides a method to allow 
> users to add an SSL listener after the server starts. This complicates the 
> logic when the client needs to set up HTTP / HTTPS servers.
> This jira proposes to replace these two methods with the concept of listener 
> endpoints. A listener endpoint is a URI (i.e., scheme + host + port) that 
> the HttpServer should listen on. This concept simplifies the task of managing 
> the HTTP server from HDFS / YARN.
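A minimal illustrative sketch of the listener-endpoint idea described above. The class and method names here are hypothetical stand-ins, not the real HttpServer API from HDFS-5545; it only shows how scheme + host + port can be captured as URIs up front instead of adding an SSL listener after startup:

```java
import java.net.URI;
import java.util.ArrayList;
import java.util.List;

public class EndpointDemo {
    // Parse endpoint specs into URIs, rejecting anything that is not
    // http or https. Purely a model of the concept, not HDFS code.
    static List<URI> parseEndpoints(String... specs) {
        List<URI> endpoints = new ArrayList<>();
        for (String s : specs) {
            URI u = URI.create(s);
            String scheme = u.getScheme();
            if (!"http".equals(scheme) && !"https".equals(scheme)) {
                throw new IllegalArgumentException("unsupported scheme: " + s);
            }
            endpoints.add(u);
        }
        return endpoints;
    }

    public static void main(String[] args) {
        // Both listeners are known before the server would start.
        List<URI> eps = parseEndpoints("http://0.0.0.0:50070",
                                       "https://0.0.0.0:50470");
        System.out.println(eps.get(1).getScheme() + ":" + eps.get(1).getPort());
        // prints "https:50470"
    }
}
```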



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5577) NFS user guide update

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834728#comment-13834728
 ] 

Hudson commented on HDFS-5577:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #405 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/405/])
HDFS-5577. NFS user guide update. Contributed by Brandon Li (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546210)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsNfsGateway.apt.vm


> NFS user guide update
> -
>
> Key: HDFS-5577
> URL: https://issues.apache.org/jira/browse/HDFS-5577
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Trivial
> Fix For: 2.2.1
>
> Attachments: HDFS-5577.patch
>
>
> dfs.access.time.precision is deprecated and the doc should use 
> dfs.namenode.accesstime.precision instead.
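For reference, the renamed property would be configured in hdfs-site.xml roughly as below (3600000 ms, i.e. one hour, is the default precision; the value here is only an example):

```xml
<property>
  <name>dfs.namenode.accesstime.precision</name>
  <value>3600000</value>
  <description>Access-time precision in milliseconds; the deprecated
  name dfs.access.time.precision should no longer be used.</description>
</property>
```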



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5579) Under construction files make DataNode decommission take very long hours

2013-11-28 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834711#comment-13834711
 ] 

Vinay commented on HDFS-5579:
-

The updated patch looks good.

It would be better to add a testcase covering decommissioning with open files.

+1 on adding a test.

> Under construction files make DataNode decommission take very long hours
> 
>
> Key: HDFS-5579
> URL: https://issues.apache.org/jira/browse/HDFS-5579
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 1.2.0, 2.2.0
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
> Attachments: HDFS-5579-branch-1.2.patch, HDFS-5579.patch
>
>
> We noticed that sometimes decommissioning DataNodes takes a very long time, 
> even exceeding 100 hours.
> After checking the code, I found that 
> BlockManager:computeReplicationWorkForBlocks(List<List<Block>> 
> blocksToReplicate) won't replicate blocks which belong to under 
> construction files; however, in 
> BlockManager:isReplicationInProgress(DatanodeDescriptor srcNode), if there 
> is a block that needs replication, the decommission progress will keep 
> running no matter whether the block belongs to an under construction file.
> That's the reason the decommission sometimes takes a very long time.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5578) [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments

2013-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834701#comment-13834701
 ] 

Hadoop QA commented on HDFS-5578:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616123/5578-branch-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-httpfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5599//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5599//console

This message is automatically generated.

> [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments
> -
>
> Key: HDFS-5578
> URL: https://issues.apache.org/jira/browse/HDFS-5578
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Andrew Purtell
>Priority: Minor
> Attachments: 5578-branch-2.patch, 5578-trunk.patch
>
>
> Javadoc is more strict by default in JDK8 and will error out on malformed or 
> illegal tags found in doc comments. Although tagged as JDK8 all of the 
> required changes are generic Javadoc cleanups.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5578) [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments

2013-11-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-5578:
-

Status: Patch Available  (was: Open)

> [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments
> -
>
> Key: HDFS-5578
> URL: https://issues.apache.org/jira/browse/HDFS-5578
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Andrew Purtell
>Priority: Minor
> Attachments: 5578-branch-2.patch, 5578-trunk.patch
>
>
> Javadoc is more strict by default in JDK8 and will error out on malformed or 
> illegal tags found in doc comments. Although tagged as JDK8 all of the 
> required changes are generic Javadoc cleanups.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5579) Under construction files make DataNode decommission take very long hours

2013-11-28 Thread zhaoyunjiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyunjiong updated HDFS-5579:
---

Attachment: (was: HDFS-5579-branch-1.2.patch)

> Under construction files make DataNode decommission take very long hours
> 
>
> Key: HDFS-5579
> URL: https://issues.apache.org/jira/browse/HDFS-5579
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 1.2.0, 2.2.0
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
> Attachments: HDFS-5579-branch-1.2.patch, HDFS-5579.patch
>
>
> We noticed that sometimes decommissioning DataNodes takes a very long time, 
> even exceeding 100 hours.
> After checking the code, I found that 
> BlockManager:computeReplicationWorkForBlocks(List<List<Block>> 
> blocksToReplicate) won't replicate blocks which belong to under 
> construction files; however, in 
> BlockManager:isReplicationInProgress(DatanodeDescriptor srcNode), if there 
> is a block that needs replication, the decommission progress will keep 
> running no matter whether the block belongs to an under construction file.
> That's the reason the decommission sometimes takes a very long time.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5579) Under construction files make DataNode decommission take very long hours

2013-11-28 Thread zhaoyunjiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyunjiong updated HDFS-5579:
---

Attachment: (was: HDFS-5579.patch)

> Under construction files make DataNode decommission take very long hours
> 
>
> Key: HDFS-5579
> URL: https://issues.apache.org/jira/browse/HDFS-5579
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 1.2.0, 2.2.0
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
> Attachments: HDFS-5579-branch-1.2.patch, HDFS-5579.patch
>
>
> We noticed that sometimes decommissioning DataNodes takes a very long time, 
> even exceeding 100 hours.
> After checking the code, I found that 
> BlockManager:computeReplicationWorkForBlocks(List<List<Block>> 
> blocksToReplicate) won't replicate blocks which belong to under 
> construction files; however, in 
> BlockManager:isReplicationInProgress(DatanodeDescriptor srcNode), if there 
> is a block that needs replication, the decommission progress will keep 
> running no matter whether the block belongs to an under construction file.
> That's the reason the decommission sometimes takes a very long time.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5579) Under construction files make DataNode decommission take very long hours

2013-11-28 Thread zhaoyunjiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyunjiong updated HDFS-5579:
---

Attachment: HDFS-5579-branch-1.2.patch
HDFS-5579.patch

Thanks Vinay.
Updated the patch per your comments.
Except: getLastBlock does declare throws IOException; I deleted that in this patch.

> Under construction files make DataNode decommission take very long hours
> 
>
> Key: HDFS-5579
> URL: https://issues.apache.org/jira/browse/HDFS-5579
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 1.2.0, 2.2.0
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
> Attachments: HDFS-5579-branch-1.2.patch, HDFS-5579-branch-1.2.patch, 
> HDFS-5579.patch, HDFS-5579.patch
>
>
> We noticed that sometimes decommissioning DataNodes takes a very long time, 
> even exceeding 100 hours.
> After checking the code, I found that 
> BlockManager:computeReplicationWorkForBlocks(List<List<Block>> 
> blocksToReplicate) won't replicate blocks which belong to under 
> construction files; however, in 
> BlockManager:isReplicationInProgress(DatanodeDescriptor srcNode), if there 
> is a block that needs replication, the decommission progress will keep 
> running no matter whether the block belongs to an under construction file.
> That's the reason the decommission sometimes takes a very long time.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5579) Under construction files make DataNode decommission take very long hours

2013-11-28 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834600#comment-13834600
 ] 

Vinay commented on HDFS-5579:
-

Thanks [~zhaoyunjiong] for filing the Jira.

I think your fix would work and decommission the datanode quickly.
Here are a few comments on your patch.

1. The try-catch is not required; no statement inside the try block throws an exception.

2. {code}block.getBlockId() == bc.getLastBlock().getBlockId(){code}
Better to use block.equals(bc.getLastBlock()).

3. {code}if (block.getBlockId() == bc.getLastBlock().getBlockId() && 
curReplicas > 1) {
+  continue;
+}{code}
Instead of 1, use minReplication.

4. {code}+  underReplicatedInOpenFiles++;{code}
This should be incremented only if there are not enough replicas.
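To make the intent of comments 3 and 4 concrete, here is a hypothetical, self-contained sketch of the check being discussed; it is not the actual HDFS-5579 patch, and the names and minReplication value are illustrative. The idea: while deciding whether a block should still hold up decommission, skip the last (under-construction) block of an open file once it already has at least minReplication live replicas, since replication work is never scheduled for it:

```java
public class DecommissionCheck {
    // Illustrative stand-in for dfs.namenode.replication.min.
    static final int MIN_REPLICATION = 1;

    /**
     * Returns true if this block should still keep the decommission
     * in progress. blockId/lastBlockId model block.equals(bc.getLastBlock()).
     */
    static boolean blocksDecommission(long blockId, long lastBlockId,
                                      int curReplicas, int expectedReplicas) {
        boolean isLastUcBlock = blockId == lastBlockId;
        // Per the review comments: compare against minReplication, not a
        // literal 1, when deciding to skip the under-construction last block.
        if (isLastUcBlock && curReplicas >= MIN_REPLICATION) {
            return false; // no replication work will be scheduled for it
        }
        // Count it as under-replicated only when replicas are actually short.
        return curReplicas < expectedReplicas;
    }

    public static void main(String[] args) {
        // Last under-construction block with one live replica no longer
        // blocks decommission under this rule.
        System.out.println(blocksDecommission(7L, 7L, 1, 3)); // prints false
        // A finalized, genuinely under-replicated block still does.
        System.out.println(blocksDecommission(5L, 7L, 1, 3)); // prints true
    }
}
```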

> Under construction files make DataNode decommission take very long hours
> 
>
> Key: HDFS-5579
> URL: https://issues.apache.org/jira/browse/HDFS-5579
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 1.2.0, 2.2.0
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
> Attachments: HDFS-5579-branch-1.2.patch, HDFS-5579.patch
>
>
> We noticed that sometimes decommissioning DataNodes takes a very long time, 
> even exceeding 100 hours.
> After checking the code, I found that 
> BlockManager:computeReplicationWorkForBlocks(List<List<Block>> 
> blocksToReplicate) won't replicate blocks which belong to under 
> construction files; however, in 
> BlockManager:isReplicationInProgress(DatanodeDescriptor srcNode), if there 
> is a block that needs replication, the decommission progress will keep 
> running no matter whether the block belongs to an under construction file.
> That's the reason the decommission sometimes takes a very long time.



--
This message was sent by Atlassian JIRA
(v6.1#6144)