[jira] [Commented] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830589#comment-13830589
 ] 

Hadoop QA commented on HDFS-2832:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615442/h2832_20131122b.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 49 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core:

  org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool
  org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  org.apache.hadoop.hdfs.TestDatanodeConfig
  org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5553//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5553//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5553//console

This message is automatically generated.

> Enable support for heterogeneous storages in HDFS
> -------------------------------------------------
>
> Key: HDFS-2832
> URL: https://issues.apache.org/jira/browse/HDFS-2832
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.24.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: 20130813-HeterogeneousStorage.pdf, H2832_20131107.patch, 
> editsStored, h2832_20131023.patch, h2832_20131023b.patch, 
> h2832_20131025.patch, h2832_20131028.patch, h2832_20131028b.patch, 
> h2832_20131029.patch, h2832_20131103.patch, h2832_20131104.patch, 
> h2832_20131105.patch, h2832_20131107b.patch, h2832_20131108.patch, 
> h2832_20131110.patch, h2832_20131110b.patch, h2832_2013.patch, 
> h2832_20131112.patch, h2832_20131112b.patch, h2832_20131114.patch, 
> h2832_20131118.patch, h2832_20131119.patch, h2832_20131119b.patch, 
> h2832_20131121.patch, h2832_20131122.patch, h2832_20131122b.patch
>
>
> HDFS currently supports a configuration where storages are a list of 
> directories. Typically, each of these directories corresponds to a volume 
> with its own file system. All these directories are homogeneous and therefore 
> identified as a single storage at the namenode. I propose changing the 
> current model, where a Datanode *is a* storage, to one where a Datanode 
> *is a collection of* storages. 
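
For illustration only, a minimal sketch of the proposed model with simplified 
placeholder classes (the type names and IDs below are assumptions, not the 
actual HDFS-2832 API):

{code:java}
import java.util.Arrays;
import java.util.List;

// Simplified model: instead of one logical storage per Datanode, each
// configured directory/volume becomes its own storage with its own ID,
// so the namenode can tell heterogeneous volumes apart.
public class StorageModelSketch {
  enum StorageType { DISK, SSD } // hypothetical storage types

  static final class Storage {
    final String storageId; // one ID per volume, not per Datanode
    final StorageType type;
    Storage(String storageId, StorageType type) {
      this.storageId = storageId;
      this.type = type;
    }
  }

  public static void main(String[] args) {
    // Under the proposed model, a Datanode *is a collection of* storages.
    List<Storage> datanode = Arrays.asList(
        new Storage("DS-volume1", StorageType.DISK),
        new Storage("DS-volume2", StorageType.SSD));
    for (Storage s : datanode) {
      System.out.println(s.storageId + " -> " + s.type);
    }
  }
}
{code}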



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5557) Write pipeline recovery for the last packet in the block may cause rejection of valid replicas

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830585#comment-13830585
 ] 

Hadoop QA commented on HDFS-5557:
---------------------------------

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615444/HDFS-5557.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5554//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5554//console

This message is automatically generated.

> Write pipeline recovery for the last packet in the block may cause rejection 
> of valid replicas
> ------------------------------------------------------------------------------
>
> Key: HDFS-5557
> URL: https://issues.apache.org/jira/browse/HDFS-5557
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.9, 2.3.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-5557.patch, HDFS-5557.patch
>
>
> When a block is reported from a data node while the block is under 
> construction (i.e. not committed or completed), BlockManager calls 
> BlockInfoUnderConstruction.addReplicaIfNotPresent() to update the reported 
> replica state. But BlockManager is calling it with the stored block, not the 
> reported block.  This causes the recorded replicas' gen stamp to be that of 
> BlockInfoUnderConstruction itself, not the one from the reported replica.
> When a pipeline recovery is done for the last packet of a block, the 
> incremental block reports with the new gen stamp may come before the client 
> calls updatePipeline(). If this happens, these replicas will be incorrectly 
> recorded with the old gen stamp and get removed later.  The result is a close 
> or addAdditionalBlock failure.
> If the last block is completed, but the penultimate block is not because of 
> this issue, the file won't be closed. If this file is not cleared, but the 
> client goes away, the lease manager will try to recover the lease/block, at 
> which point it will crash. I will file a separate jira for this shortly.
> The worst case is rejecting all good replicas and accepting a bad one. In this 
> case, the block will get completed, but the data cannot be read until the 
> next full block report containing one of the valid replicas is received.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5557) Write pipeline recovery for the last packet in the block may cause rejection of valid replicas

2013-11-22 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830569#comment-13830569
 ] 

Kihwal Lee commented on HDFS-5557:
---------------------------------

bq. If the last block is completed, but the penultimate block is not because of 
this issue, the file won't be closed. If this file is not cleared, but the 
client goes away, the lease manager will try to recover the lease/block, at 
which point it will crash. I will file a separate jira for this shortly.

HDFS-5558 has been filed for this.

> Write pipeline recovery for the last packet in the block may cause rejection 
> of valid replicas
> ------------------------------------------------------------------------------
>
> Key: HDFS-5557
> URL: https://issues.apache.org/jira/browse/HDFS-5557
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.9, 2.3.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-5557.patch, HDFS-5557.patch
>
>
> When a block is reported from a data node while the block is under 
> construction (i.e. not committed or completed), BlockManager calls 
> BlockInfoUnderConstruction.addReplicaIfNotPresent() to update the reported 
> replica state. But BlockManager is calling it with the stored block, not the 
> reported block.  This causes the recorded replicas' gen stamp to be that of 
> BlockInfoUnderConstruction itself, not the one from the reported replica.
> When a pipeline recovery is done for the last packet of a block, the 
> incremental block reports with the new gen stamp may come before the client 
> calls updatePipeline(). If this happens, these replicas will be incorrectly 
> recorded with the old gen stamp and get removed later.  The result is a close 
> or addAdditionalBlock failure.
> If the last block is completed, but the penultimate block is not because of 
> this issue, the file won't be closed. If this file is not cleared, but the 
> client goes away, the lease manager will try to recover the lease/block, at 
> which point it will crash. I will file a separate jira for this shortly.
> The worst case is rejecting all good replicas and accepting a bad one. In this 
> case, the block will get completed, but the data cannot be read until the 
> next full block report containing one of the valid replicas is received.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830561#comment-13830561
 ] 

Hadoop QA commented on HDFS-5558:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615435/HDFS-5558.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5552//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5552//console

This message is automatically generated.

> LeaseManager monitor thread can crash if the last block is complete but 
> another block is not.
> -----------------------------------------------------------------------------
>
> Key: HDFS-5558
> URL: https://issues.apache.org/jira/browse/HDFS-5558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.9, 2.3.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-5558.branch-023.patch, HDFS-5558.patch
>
>
> As mentioned in HDFS-5557, if a file has its last and penultimate block not 
> completed and the file is being closed, the last block may be completed but 
> the penultimate one might not. If this condition lasts long and the file is 
> abandoned, LeaseManager will try to recover the lease and the block. But 
> {{internalReleaseLease()}} will fail with invalid cast exception with this 
> kind of file.
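
For illustration, a minimal sketch of the failure mode described above, with 
heavily simplified placeholder classes (not the actual NameNode types):

{code:java}
// internalReleaseLease() assumes the last block of an open file is still
// under construction and downcasts it; if the last block was already
// completed while the penultimate one was not, the downcast throws
// ClassCastException and the LeaseManager monitor thread dies.
public class LeaseRecoverySketch {
  static class BlockInfo { }                                    // completed
  static class BlockInfoUnderConstruction extends BlockInfo { } // in progress

  static void internalReleaseLease(BlockInfo lastBlock) {
    BlockInfoUnderConstruction uc = (BlockInfoUnderConstruction) lastBlock;
    System.out.println("recovering " + uc);
  }

  public static void main(String[] args) {
    internalReleaseLease(new BlockInfo()); // throws ClassCastException
  }
}
{code}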



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830559#comment-13830559
 ] 

Hadoop QA commented on HDFS-5558:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615446/HDFS-5558.branch-023.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build///console

This message is automatically generated.

> LeaseManager monitor thread can crash if the last block is complete but 
> another block is not.
> -----------------------------------------------------------------------------
>
> Key: HDFS-5558
> URL: https://issues.apache.org/jira/browse/HDFS-5558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.9, 2.3.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-5558.branch-023.patch, HDFS-5558.patch
>
>
> As mentioned in HDFS-5557, if a file has its last and penultimate block not 
> completed and the file is being closed, the last block may be completed but 
> the penultimate one might not. If this condition lasts long and the file is 
> abandoned, LeaseManager will try to recover the lease and the block. But 
> {{internalReleaseLease()}} will fail with invalid cast exception with this 
> kind of file.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-22 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-5558:
---------------------------------

Attachment: HDFS-5558.branch-023.patch

Posting the branch-0.23 version of the patch. PreCommit will fail on this.

> LeaseManager monitor thread can crash if the last block is complete but 
> another block is not.
> -----------------------------------------------------------------------------
>
> Key: HDFS-5558
> URL: https://issues.apache.org/jira/browse/HDFS-5558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.9, 2.3.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-5558.branch-023.patch, HDFS-5558.patch
>
>
> As mentioned in HDFS-5557, if a file has its last and penultimate block not 
> completed and the file is being closed, the last block may be completed but 
> the penultimate one might not. If this condition lasts long and the file is 
> abandoned, LeaseManager will try to recover the lease and the block. But 
> {{internalReleaseLease()}} will fail with invalid cast exception with this 
> kind of file.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5557) Write pipeline recovery for the last packet in the block may cause rejection of valid replicas

2013-11-22 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-5557:
---------------------------------

Attachment: HDFS-5557.patch

Uploading updated patch that fixes the test compilation problem.

> Write pipeline recovery for the last packet in the block may cause rejection 
> of valid replicas
> ------------------------------------------------------------------------------
>
> Key: HDFS-5557
> URL: https://issues.apache.org/jira/browse/HDFS-5557
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.9, 2.3.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-5557.patch, HDFS-5557.patch
>
>
> When a block is reported from a data node while the block is under 
> construction (i.e. not committed or completed), BlockManager calls 
> BlockInfoUnderConstruction.addReplicaIfNotPresent() to update the reported 
> replica state. But BlockManager is calling it with the stored block, not the 
> reported block.  This causes the recorded replicas' gen stamp to be that of 
> BlockInfoUnderConstruction itself, not the one from the reported replica.
> When a pipeline recovery is done for the last packet of a block, the 
> incremental block reports with the new gen stamp may come before the client 
> calls updatePipeline(). If this happens, these replicas will be incorrectly 
> recorded with the old gen stamp and get removed later.  The result is a close 
> or addAdditionalBlock failure.
> If the last block is completed, but the penultimate block is not because of 
> this issue, the file won't be closed. If this file is not cleared, but the 
> client goes away, the lease manager will try to recover the lease/block, at 
> which point it will crash. I will file a separate jira for this shortly.
> The worst case is rejecting all good replicas and accepting a bad one. In this 
> case, the block will get completed, but the data cannot be read until the 
> next full block report containing one of the valid replicas is received.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-22 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HDFS-5558:


Assignee: Kihwal Lee

> LeaseManager monitor thread can crash if the last block is complete but 
> another block is not.
> -----------------------------------------------------------------------------
>
> Key: HDFS-5558
> URL: https://issues.apache.org/jira/browse/HDFS-5558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.9, 2.3.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-5558.patch
>
>
> As mentioned in HDFS-5557, if a file has its last and penultimate block not 
> completed and the file is being closed, the last block may be completed but 
> the penultimate one might not. If this condition lasts long and the file is 
> abandoned, LeaseManager will try to recover the lease and the block. But 
> {{internalReleaseLease()}} will fail with invalid cast exception with this 
> kind of file.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5557) Write pipeline recovery for the last packet in the block may cause rejection of valid replicas

2013-11-22 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830538#comment-13830538
 ] 

Kihwal Lee commented on HDFS-5557:
---------------------------------

I shouldn't have relied on maven dependency tracking without doing "mvn clean". 
An updated patch shall come soon.

> Write pipeline recovery for the last packet in the block may cause rejection 
> of valid replicas
> ------------------------------------------------------------------------------
>
> Key: HDFS-5557
> URL: https://issues.apache.org/jira/browse/HDFS-5557
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.9, 2.3.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-5557.patch
>
>
> When a block is reported from a data node while the block is under 
> construction (i.e. not committed or completed), BlockManager calls 
> BlockInfoUnderConstruction.addReplicaIfNotPresent() to update the reported 
> replica state. But BlockManager is calling it with the stored block, not the 
> reported block.  This causes the recorded replicas' gen stamp to be that of 
> BlockInfoUnderConstruction itself, not the one from the reported replica.
> When a pipeline recovery is done for the last packet of a block, the 
> incremental block reports with the new gen stamp may come before the client 
> calls updatePipeline(). If this happens, these replicas will be incorrectly 
> recorded with the old gen stamp and get removed later.  The result is a close 
> or addAdditionalBlock failure.
> If the last block is completed, but the penultimate block is not because of 
> this issue, the file won't be closed. If this file is not cleared, but the 
> client goes away, the lease manager will try to recover the lease/block, at 
> which point it will crash. I will file a separate jira for this shortly.
> The worst case is rejecting all good replicas and accepting a bad one. In this 
> case, the block will get completed, but the data cannot be read until the 
> next full block report containing one of the valid replicas is received.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HDFS-5557) Write pipeline recovery for the last packet in the block may cause rejection of valid replicas

2013-11-22 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HDFS-5557:


Assignee: Kihwal Lee

> Write pipeline recovery for the last packet in the block may cause rejection 
> of valid replicas
> ------------------------------------------------------------------------------
>
> Key: HDFS-5557
> URL: https://issues.apache.org/jira/browse/HDFS-5557
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.9, 2.3.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-5557.patch
>
>
> When a block is reported from a data node while the block is under 
> construction (i.e. not committed or completed), BlockManager calls 
> BlockInfoUnderConstruction.addReplicaIfNotPresent() to update the reported 
> replica state. But BlockManager is calling it with the stored block, not the 
> reported block.  This causes the recorded replicas' gen stamp to be that of 
> BlockInfoUnderConstruction itself, not the one from the reported replica.
> When a pipeline recovery is done for the last packet of a block, the 
> incremental block reports with the new gen stamp may come before the client 
> calls updatePipeline(). If this happens, these replicas will be incorrectly 
> recorded with the old gen stamp and get removed later.  The result is a close 
> or addAdditionalBlock failure.
> If the last block is completed, but the penultimate block is not because of 
> this issue, the file won't be closed. If this file is not cleared, but the 
> client goes away, the lease manager will try to recover the lease/block, at 
> which point it will crash. I will file a separate jira for this shortly.
> The worst case is rejecting all good replicas and accepting a bad one. In this 
> case, the block will get completed, but the data cannot be read until the 
> next full block report containing one of the valid replicas is received.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-22 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830536#comment-13830536
 ] 

Jason Lowe commented on HDFS-5558:
---------------------------------

We might need a slight addition to the patch for branch-0.23, as 
getAdditionalBlock has a very similar structure where it can commit/complete 
the last block before checking the state of the penultimate block.  If we get 
stuck in that state for a while or the client abandons the file, then the 
LeaseManager could hit the same condition if a block report completes the last 
block but not the penultimate block.  In trunk/branch-2 getAdditionalBlock 
calls analyzeFileState before it tries to commit/complete the block, and that 
will throw if the penultimate block has not completed.
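
A rough sketch of the ordering difference described above (placeholder names 
and simplified state, not the actual FSNamesystem code):

{code:java}
public class AdditionalBlockOrderingSketch {
  boolean penultimateComplete;
  boolean lastBlockComplete;

  void analyzeFileState() {
    // trunk/branch-2 throws here before any state is mutated
    if (!penultimateComplete) {
      throw new IllegalStateException("penultimate block not complete");
    }
  }

  // Safe ordering: validate first, so the last block can never be
  // completed while an earlier block is still under construction.
  void getAdditionalBlock() {
    analyzeFileState();
    lastBlockComplete = true; // commit/complete the previous block
  }

  // branch-0.23 hazard: committing first can leave the file in the state
  // that crashes the LeaseManager monitor thread.
  void getAdditionalBlockLegacy() {
    lastBlockComplete = true;
    analyzeFileState();
  }
}
{code}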

> LeaseManager monitor thread can crash if the last block is complete but 
> another block is not.
> -----------------------------------------------------------------------------
>
> Key: HDFS-5558
> URL: https://issues.apache.org/jira/browse/HDFS-5558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.9, 2.3.0
>Reporter: Kihwal Lee
> Attachments: HDFS-5558.patch
>
>
> As mentioned in HDFS-5557, if a file has its last and penultimate block not 
> completed and the file is being closed, the last block may be completed but 
> the penultimate one might not. If this condition lasts long and the file is 
> abandoned, LeaseManager will try to recover the lease and the block. But 
> {{internalReleaseLease()}} will fail with invalid cast exception with this 
> kind of file.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-2832:
---------------------------------

Attachment: h2832_20131122b.patch

h2832_20131122b.patch: 2832 branch + HDFS-5559.

Note that the \-1 javadoc warnings in the previous build are nonsense.  It said 
that there were *-12* warnings, i.e. 12 fewer warnings than trunk.

> Enable support for heterogeneous storages in HDFS
> -------------------------------------------------
>
> Key: HDFS-2832
> URL: https://issues.apache.org/jira/browse/HDFS-2832
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.24.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: 20130813-HeterogeneousStorage.pdf, H2832_20131107.patch, 
> editsStored, h2832_20131023.patch, h2832_20131023b.patch, 
> h2832_20131025.patch, h2832_20131028.patch, h2832_20131028b.patch, 
> h2832_20131029.patch, h2832_20131103.patch, h2832_20131104.patch, 
> h2832_20131105.patch, h2832_20131107b.patch, h2832_20131108.patch, 
> h2832_20131110.patch, h2832_20131110b.patch, h2832_2013.patch, 
> h2832_20131112.patch, h2832_20131112b.patch, h2832_20131114.patch, 
> h2832_20131118.patch, h2832_20131119.patch, h2832_20131119b.patch, 
> h2832_20131121.patch, h2832_20131122.patch, h2832_20131122b.patch
>
>
> HDFS currently supports a configuration where storages are a list of 
> directories. Typically, each of these directories corresponds to a volume 
> with its own file system. All these directories are homogeneous and therefore 
> identified as a single storage at the namenode. I propose changing the 
> current model, where a Datanode *is a* storage, to one where a Datanode 
> *is a collection of* storages. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5559) Fix TestDatanodeConfig in HDFS-2832

2013-11-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5559:
---------------------------------

Attachment: h5559_20131122.patch

> Fix TestDatanodeConfig in HDFS-2832
> -----------------------------------
>
> Key: HDFS-5559
> URL: https://issues.apache.org/jira/browse/HDFS-5559
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Attachments: h5559_20131122.patch
>
>
> HDFS-5542 breaks this test.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5559) Fix TestDatanodeConfig in HDFS-2832

2013-11-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5559:
---------------------------------

Attachment: h5559_20131122.patch

h5559_20131122.patch: fixes the test.

> Fix TestDatanodeConfig in HDFS-2832
> -----------------------------------
>
> Key: HDFS-5559
> URL: https://issues.apache.org/jira/browse/HDFS-5559
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
>
> HDFS-5542 breaks this test.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5559) Fix TestDatanodeConfig in HDFS-2832

2013-11-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5559:
---------------------------------

Attachment: (was: h5559_20131122.patch)

> Fix TestDatanodeConfig in HDFS-2832
> -----------------------------------
>
> Key: HDFS-5559
> URL: https://issues.apache.org/jira/browse/HDFS-5559
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
>
> HDFS-5542 breaks this test.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5559) Fix TestDatanodeConfig in HDFS-2832

2013-11-22 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HDFS-5559:


 Summary: Fix TestDatanodeConfig in HDFS-2832
 Key: HDFS-5559
 URL: https://issues.apache.org/jira/browse/HDFS-5559
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor


HDFS-5542 breaks this test.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-22 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-5558:
---------------------------------

 Target Version/s: 2.3.0, 0.23.10
Affects Version/s: 2.3.0
                   0.23.9

> LeaseManager monitor thread can crash if the last block is complete but 
> another block is not.
> -----------------------------------------------------------------------------
>
> Key: HDFS-5558
> URL: https://issues.apache.org/jira/browse/HDFS-5558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.9, 2.3.0
>Reporter: Kihwal Lee
> Attachments: HDFS-5558.patch
>
>
> As mentioned in HDFS-5557, if a file has its last and penultimate block not 
> completed and the file is being closed, the last block may be completed but 
> the penultimate one might not. If this condition lasts long and the file is 
> abandoned, LeaseManager will try to recover the lease and the block. But 
> {{internalReleaseLease()}} will fail with invalid cast exception with this 
> kind of file.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-22 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-5558:
---------------------------------

Status: Patch Available  (was: Open)

> LeaseManager monitor thread can crash if the last block is complete but 
> another block is not.
> -----------------------------------------------------------------------------
>
> Key: HDFS-5558
> URL: https://issues.apache.org/jira/browse/HDFS-5558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
> Attachments: HDFS-5558.patch
>
>
> As mentioned in HDFS-5557, if a file has its last and penultimate block not 
> completed and the file is being closed, the last block may be completed but 
> the penultimate one might not. If this condition lasts long and the file is 
> abandoned, LeaseManager will try to recover the lease and the block. But 
> {{internalReleaseLease()}} will fail with invalid cast exception with this 
> kind of file.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-22 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-5558:
---------------------------------

Attachment: HDFS-5558.patch

> LeaseManager monitor thread can crash if the last block is complete but 
> another block is not.
> -----------------------------------------------------------------------------
>
> Key: HDFS-5558
> URL: https://issues.apache.org/jira/browse/HDFS-5558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
> Attachments: HDFS-5558.patch
>
>
> As mentioned in HDFS-5557, if a file has its last and penultimate block not 
> completed and the file is being closed, the last block may be completed but 
> the penultimate one might not. If this condition lasts long and the file is 
> abandoned, LeaseManager will try to recover the lease and the block. But 
> {{internalReleaseLease()}} will fail with invalid cast exception with this 
> kind of file.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-22 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830519#comment-13830519
 ] 

Kihwal Lee commented on HDFS-5558:
---------------------------------

{{LeaseManager}} might need to be made more robust, but more importantly, the 
last block shouldn't be completed if the penultimate block is not completed 
when closing a file.  In {{completeFileInternal()}}, 
{{checkFileProgress(pendingFile, false)}} needs to be called before calling 
{{commitOrCompleteLastBlock()}}.  If the penultimate block isn't going to be 
completed soon, the close will fail anyway. It should fail before doing more 
damage.
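
A minimal sketch of that ordering with heavily simplified signatures (the real 
{{checkFileProgress()}} also checks replication levels):

{code:java}
public class CompleteFileSketch {
  boolean penultimateBlockComplete;
  boolean lastBlockCommitted;

  // checkFileProgress(pendingFile, false): only the blocks before the
  // last one need to be complete already.
  boolean checkFileProgress(boolean checkAllBlocks) {
    return checkAllBlocks
        ? penultimateBlockComplete && lastBlockCommitted
        : penultimateBlockComplete;
  }

  boolean completeFileInternal() {
    // Proposed ordering: fail the close *before* touching the last block,
    // so a complete last block never sits behind an incomplete
    // penultimate one.
    if (!checkFileProgress(false)) {
      return false;              // close fails, no damage done
    }
    lastBlockCommitted = true;   // commitOrCompleteLastBlock()
    return true;
  }
}
{code}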

> LeaseManager monitor thread can crash if the last block is complete but 
> another block is not.
> -----------------------------------------------------------------------------
>
> Key: HDFS-5558
> URL: https://issues.apache.org/jira/browse/HDFS-5558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>
> As mentioned in HDFS-5557, if a file has its last and penultimate block not 
> completed and the file is being closed, the last block may be completed but 
> the penultimate one might not. If this condition lasts long and the file is 
> abandoned, LeaseManager will try to recover the lease and the block. But 
> {{internalReleaseLease()}} will fail with invalid cast exception with this 
> kind of file.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5557) Write pipeline recovery for the last packet in the block may cause rejection of valid replicas

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830518#comment-13830518
 ] 

Hadoop QA commented on HDFS-5557:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615433/HDFS-5557.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5551//console

This message is automatically generated.

> Write pipeline recovery for the last packet in the block may cause rejection 
> of valid replicas
> ------------------------------------------------------------------------------
>
> Key: HDFS-5557
> URL: https://issues.apache.org/jira/browse/HDFS-5557
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.9, 2.3.0
>Reporter: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-5557.patch
>
>
> When a block is reported from a data node while the block is under 
> construction (i.e. not committed or completed), BlockManager calls 
> BlockInfoUnderConstruction.addReplicaIfNotPresent() to update the reported 
> replica state. But BlockManager is calling it with the stored block, not the 
> reported block.  This causes the recorded replicas' gen stamp to be that of 
> BlockInfoUnderConstruction itself, not the one from the reported replica.
> When a pipeline recovery is done for the last packet of a block, the 
> incremental block reports with the new gen stamp may come before the client 
> calls updatePipeline(). If this happens, these replicas will be incorrectly 
> recorded with the old gen stamp and get removed later.  The result is a close 
> or addAdditionalBlock failure.
> If the last block is completed, but the penultimate block is not because of 
> this issue, the file won't be closed. If this file is not cleared, but the 
> client goes away, the lease manager will try to recover the lease/block, at 
> which point it will crash. I will file a separate jira for this shortly.
> The worst case is rejecting all good replicas and accepting a bad one. In this 
> case, the block will get completed, but the data cannot be read until the 
> next full block report containing one of the valid replicas is received.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-22 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-5558:
---------------------------------

Description: 
As mentioned in HDFS-5557, if a file has its last and penultimate block not 
completed and the file is being closed, the last block may be completed but the 
penultimate one might not. If this condition lasts long and the file is 
abandoned, LeaseManager will try to recover the lease and the block. But 
{{internalReleaseLease()}} will fail with invalid cast exception with this kind 
of file.



  was:
As mentioned in HDFS-5557, if a file has its last and penultimate block not 
completed and the file is being closed, the last block may be completed but the 
penultimate one might not. If this condition lasts long and the file is 
abandoned, LeaseManager will try to recover the lease and the block. But {{ 
internalReleaseLease}} will fail with invalid cast exception with this kind of 
file.




> LeaseManager monitor thread can crash if the last block is complete but 
> another block is not.
> -----------------------------------------------------------------------------
>
> Key: HDFS-5558
> URL: https://issues.apache.org/jira/browse/HDFS-5558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>
> As mentioned in HDFS-5557, if a file has its last and penultimate block not 
> completed and the file is being closed, the last block may be completed but 
> the penultimate one might not. If this condition lasts long and the file is 
> abandoned, LeaseManager will try to recover the lease and the block. But 
> {{internalReleaseLease()}} will fail with invalid cast exception with this 
> kind of file.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-22 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-5558:


 Summary: LeaseManager monitor thread can crash if the last block 
is complete but another block is not.
 Key: HDFS-5558
 URL: https://issues.apache.org/jira/browse/HDFS-5558
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee


As mentioned in HDFS-5557, a file has its last and penultimate block not 
completed and the file is being closed, the last block can be completed but the 
penultimate one might not. If this condition lasts long and the file is 
abandoned, LeaseManager will try to recover the lease and the block. But {{ 
internalReleaseLease}} will fail with invalid cast exception with this kind of 
file.





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-22 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-5558:
---------------------------------

Description: 
As mentioned in HDFS-5557, if a file has its last and penultimate block not 
completed and the file is being closed, the last block may be completed but the 
penultimate one might not. If this condition lasts long and the file is 
abandoned, LeaseManager will try to recover the lease and the block. But {{ 
internalReleaseLease}} will fail with invalid cast exception with this kind of 
file.



  was:
As mentioned in HDFS-5557, a file has its last and penultimate block not 
completed and the file is being closed, the last block can be completed but the 
penultimate one might not. If this condition lasts long and the file is 
abandoned, LeaseManager will try to recover the lease and the block. But {{ 
internalReleaseLease}} will fail with invalid cast exception with this kind of 
file.




> LeaseManager monitor thread can crash if the last block is complete but 
> another block is not.
> -----------------------------------------------------------------------------
>
> Key: HDFS-5558
> URL: https://issues.apache.org/jira/browse/HDFS-5558
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>
> As mentioned in HDFS-5557, if a file has its last and penultimate block not 
> completed and the file is being closed, the last block may be completed but 
> the penultimate one might not. If this condition lasts long and the file is 
> abandoned, LeaseManager will try to recover the lease and the block. But {{ 
> internalReleaseLease}} will fail with invalid cast exception with this kind 
> of file.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830515#comment-13830515
 ] 

Hadoop QA commented on HDFS-5549:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615407/HDFS-5549.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5549//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5549//console

This message is automatically generated.

> Support for implementing custom FsDatasetSpi from outside the project
> ----------------------------------------------------------------------
>
> Key: HDFS-5549
> URL: https://issues.apache.org/jira/browse/HDFS-5549
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Ignacio Corderi
> Attachments: HDFS-5549.patch
>
>
> Visibility for multiple methods and a few classes got changed to public to 
> allow FsDatasetSpi and all the related classes that need subtyping to be 
> fully implemented from outside the HDFS project.
> Block transfers got abstracted to a factory, given that the behavior will be 
> changed for DataNodes using Kinetic drives. The existing DataNode to DataNode 
> block transfer functionality got moved to LegacyBlockTransferer; no new 
> configuration is needed to use this class and have the same behavior that is 
> currently present.
> DataNodes have an additional configuration key 
> DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY to override the default block 
> transfer behavior.
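
A sketch of the pluggable-factory pattern the description outlines; the 
constant and LegacyBlockTransferer come from the description, while the 
property string, interface shape, and loading code are assumptions:

{code:java}
import java.util.Properties;

interface BlockTransferer {
  void transfer(String blockId, String targetDatanode);
}

// Default implementation standing in for the existing DataNode-to-DataNode
// behavior that the description says moved to LegacyBlockTransferer.
class LegacyBlockTransferer implements BlockTransferer {
  public void transfer(String blockId, String targetDatanode) {
    System.out.println("legacy transfer of " + blockId + " to " + targetDatanode);
  }
}

public class BlockTransfererFactorySketch {
  // Stand-in for DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY; the actual
  // property string is assumed here.
  static final String DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY =
      "dfs.datanode.blocktransferer.factory";

  static BlockTransferer create(Properties conf) throws Exception {
    String impl = conf.getProperty(DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY,
        LegacyBlockTransferer.class.getName());
    return (BlockTransferer) Class.forName(impl)
        .getDeclaredConstructor().newInstance();
  }

  public static void main(String[] args) throws Exception {
    create(new Properties()).transfer("blk_1", "dn2"); // legacy by default
  }
}
{code}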



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5430) Support TTL on CacheBasedPathDirectives

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830513#comment-13830513
 ] 

Hadoop QA commented on HDFS-5430:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615406/hdfs-5430-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
-2 warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5550//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5550//console

This message is automatically generated.

> Support TTL on CacheBasedPathDirectives
> ---------------------------------------
>
> Key: HDFS-5430
> URL: https://issues.apache.org/jira/browse/HDFS-5430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: hdfs-5430-1.patch, hdfs-5430-2.patch
>
>
> It would be nice if CacheBasedPathDirectives would support an expiration 
> time, after which they would be automatically removed by the NameNode.  This 
> time would probably be in wall-clock time for the convenience of system 
> administrators.
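
For illustration, a tiny sketch of what such wall-clock expiry could look like 
(all names hypothetical; the eventual API may differ):

{code:java}
import java.util.ArrayList;
import java.util.List;

public class CacheDirectiveTtlSketch {
  static final class Directive {
    final String path;
    final long expiryMillis; // absolute wall-clock expiration time

    Directive(String path, long ttlMillis) {
      this.path = path;
      this.expiryMillis = System.currentTimeMillis() + ttlMillis;
    }
  }

  // The NameNode would periodically sweep and drop expired directives.
  static void expireDirectives(List<Directive> directives) {
    final long now = System.currentTimeMillis();
    directives.removeIf(d -> d.expiryMillis <= now);
  }

  public static void main(String[] args) {
    List<Directive> dirs = new ArrayList<>();
    dirs.add(new Directive("/hot/data", 60_000L)); // cache for one minute
    expireDirectives(dirs);
    System.out.println(dirs.size() + " directive(s) still cached");
  }
}
{code}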



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5557) Write pipeline recovery for the last packet in the block may cause rejection of valid replicas

2013-11-22 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-5557:
---------------------------------

Attachment: HDFS-5557.patch

The patch modifies {{BlockManager.addStoredBlockUnderConstruction()}} to update 
{{BlockInfoUnderContruction}} with the reported block, not the stored block 
itself.

In this method, the old behavior is restored if {{ucBlock.storedBlock}} is used 
instead of {{ucBlock.reportedBlock}}. The unit test will fail with the old 
behavior.

An incorrect assertion is removed from {{DFSOutputStream}}. The assertion is 
only correct if the block is the last block. If there is more data to be 
written beyond this block, it fails.
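
A condensed sketch of the change (simplified types; the real signatures carry 
much more state):

{code:java}
public class AddStoredBlockSketch {
  static class Block {
    final long genStamp;
    Block(long genStamp) { this.genStamp = genStamp; }
  }

  static class BlockInfoUnderConstruction extends Block {
    long recordedReplicaGenStamp;
    BlockInfoUnderConstruction(long genStamp) { super(genStamp); }

    void addReplicaIfNotPresent(Block reported) {
      // Record the gen stamp of the block the datanode actually reported;
      // recording the stored block's own stamp instead makes replicas
      // from a pipeline recovery look stale and get thrown away.
      recordedReplicaGenStamp = reported.genStamp;
    }
  }

  static void addStoredBlockUnderConstruction(
      BlockInfoUnderConstruction storedBlock, Block reportedBlock) {
    // The fix: pass the reported block, not storedBlock itself.
    storedBlock.addReplicaIfNotPresent(reportedBlock);
  }
}
{code}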

> Write pipeline recovery for the last packet in the block may cause rejection 
> of valid replicas
> ------------------------------------------------------------------------------
>
> Key: HDFS-5557
> URL: https://issues.apache.org/jira/browse/HDFS-5557
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.9, 2.3.0
>Reporter: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-5557.patch
>
>
> When a block is reported from a data node while the block is under 
> construction (i.e. not committed or completed), BlockManager calls 
> BlockInfoUnderConstruction.addReplicaIfNotPresent() to update the reported 
> replica state. But BlockManager is calling it with the stored block, not the 
> reported block.  This causes the recorded replicas' gen stamp to be that of 
> BlockInfoUnderConstruction itself, not the one from the reported replica.
> When a pipeline recovery is done for the last packet of a block, the 
> incremental block reports with the new gen stamp may come before the client 
> calls updatePipeline(). If this happens, these replicas will be incorrectly 
> recorded with the old gen stamp and get removed later.  The result is a close 
> or addAdditionalBlock failure.
> If the last block is completed, but the penultimate block is not because of 
> this issue, the file won't be closed. If this file is not cleared, but the 
> client goes away, the lease manager will try to recover the lease/block, at 
> which point it will crash. I will file a separate jira for this shortly.
> The worst case is rejecting all good replicas and accepting a bad one. In this 
> case, the block will get completed, but the data cannot be read until the 
> next full block report containing one of the valid replicas is received.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5557) Write pipeline recovery for the last packet in the block may cause rejection of valid replicas

2013-11-22 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-5557:
---------------------------------

Status: Patch Available  (was: Open)

> Write pipeline recovery for the last packet in the block may cause rejection 
> of valid replicas
> ------------------------------------------------------------------------------
>
> Key: HDFS-5557
> URL: https://issues.apache.org/jira/browse/HDFS-5557
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.9, 2.3.0
>Reporter: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-5557.patch
>
>
> When a block is reported from a data node while the block is under 
> construction (i.e. not committed or completed), BlockManager calls 
> BlockInfoUnderConstruction.addReplicaIfNotPresent() to update the reported 
> replica state. But BlockManager is calling it with the stored block, not the 
> reported block.  This causes the recorded replicas' gen stamp to be that of 
> BlockInfoUnderConstruction itself, not the one from the reported replica.
> When a pipeline recovery is done for the last packet of a block, the 
> incremental block reports with the new gen stamp may come before the client 
> calls updatePipeline(). If this happens, these replicas will be incorrectly 
> recorded with the old gen stamp and get removed later.  The result is a close 
> or addAdditionalBlock failure.
> If the last block is completed, but the penultimate block is not because of 
> this issue, the file won't be closed. If this file is not cleared, but the 
> client goes away, the lease manager will try to recover the lease/block, at 
> which point it will crash. I will file a separate jira for this shortly.
> The worst case is rejecting all good replicas and accepting a bad one. In this 
> case, the block will get completed, but the data cannot be read until the 
> next full block report containing one of the valid replicas is received.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5557) Write pipeline recovery for the last packet in the block may cause rejection of valid replicas

2013-11-22 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-5557:


 Summary: Write pipeline recovery for the last packet in the block 
may cause rejection of valid replicas
 Key: HDFS-5557
 URL: https://issues.apache.org/jira/browse/HDFS-5557
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.9, 2.3.0
Reporter: Kihwal Lee
Priority: Critical


When a block is reported from a data node while the block is under construction 
(i.e. not committed or completed), BlockManager calls 
BlockInfoUnderConstruction.addReplicaIfNotPresent() to update the reported 
replica state. But BlockManager is calling it with the stored block, not the 
reported block.  This causes the recorded replicas' gen stamp to be that of 
BlockInfoUnderConstruction itself, not the one from the reported replica.

When a pipeline recovery is done for the last packet of a block, the 
incremental block reports with the new gen stamp may come before the client 
calls updatePipeline(). If this happens, these replicas will be incorrectly 
recorded with the old gen stamp and get removed later.  The result is a close 
or addAdditionalBlock failure.

If the last block is completed, but the penultimate block is not because of 
this issue, the file won't be closed. If this file is not cleared, but the 
client goes away, the lease manager will try to recover the lease/block, at 
which point it will crash. I will file a separate jira for this shortly.

The worst case is rejecting all good replicas and accepting a bad one. In this 
case, the block will get completed, but the data cannot be read until the next 
full block report containing one of the valid replicas is received.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5286) Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830474#comment-13830474
 ] 

Hadoop QA commented on HDFS-5286:
---------------------------------

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615400/h5286_20131122.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5546//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5546//console

This message is automatically generated.

> Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature
> ---------------------------------------------------------------
>
> Key: HDFS-5286
> URL: https://issues.apache.org/jira/browse/HDFS-5286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h5286_20131122.patch
>
>
> Similar to the case of INodeFile (HDFS-5285), we should also flatten the 
> INodeDirectory hierarchy.
> This is the first step: add DirectoryWithQuotaFeature to replace 
> INodeDirectoryWithQuota.
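
For context, a rough sketch of the flattening pattern (class shapes are 
assumptions based on the description, not the actual patch):

{code:java}
import java.util.ArrayList;
import java.util.List;

public class FlattenDirectorySketch {
  // Old style: behavior added by subclassing, one subclass per variation.
  static class INodeDirectory { }
  static class INodeDirectoryWithQuota extends INodeDirectory {
    long nsQuota, dsQuota;
  }

  // Flattened style: a single directory class plus optional, composable
  // feature objects such as DirectoryWithQuotaFeature.
  interface Feature { }
  static class DirectoryWithQuotaFeature implements Feature {
    long nsQuota, dsQuota;
  }
  static class FlatINodeDirectory {
    final List<Feature> features = new ArrayList<>();
    <T extends Feature> T getFeature(Class<T> type) {
      for (Feature f : features) {
        if (type.isInstance(f)) {
          return type.cast(f);
        }
      }
      return null;
    }
  }
}
{code}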



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5556) add some more NameNode cache statistics, cache pool stats

2013-11-22 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830447#comment-13830447
 ] 

Andrew Wang commented on HDFS-5556:
---------------------------------

Patch looks good. It'd be easier to review if these renames were in a separate 
JIRA, but I managed. There's also going to be some conflict with HDFS-5430; 
it'd be nice if that could go first, since rebasing after a big rename is 
challenging.

* Did you want to expose the new CachePool stats in the CacheAdmin output? It'd 
be nice.
* Need tests for the new CachePoolStats object, and the NN's new CacheUsed stat.

> add some more NameNode cache statistics, cache pool stats
> ---------------------------------------------------------
>
> Key: HDFS-5556
> URL: https://issues.apache.org/jira/browse/HDFS-5556
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5556.001.patch
>
>
> Add some more NameNode cache statistics and also cache pool statistics.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5556) add some more NameNode cache statistics, cache pool stats

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830433#comment-13830433
 ] 

Hadoop QA commented on HDFS-5556:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615391/HDFS-5556.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5545//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5545//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5545//console

This message is automatically generated.

> add some more NameNode cache statistics, cache pool stats
> -
>
> Key: HDFS-5556
> URL: https://issues.apache.org/jira/browse/HDFS-5556
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5556.001.patch
>
>
> Add some more NameNode cache statistics and also cache pool statistics.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2013-11-22 Thread Ignacio Corderi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Corderi updated HDFS-5549:
--

Attachment: HDFS-5549.patch

> Support for implementing custom FsDatasetSpi from outside the project
> -
>
> Key: HDFS-5549
> URL: https://issues.apache.org/jira/browse/HDFS-5549
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Ignacio Corderi
> Attachments: HDFS-5549.patch
>
>
> Visibility for multiple methods and a few classes got changed to public to 
> allow FsDatasetSpi and all the related classes that need subtyping to be 
> fully implemented from outside the HDFS project.
> Block transfers got abstracted to a factory, given that the behavior will be 
> changed for DataNodes using Kinetic drives. The existing DataNode-to-DataNode 
> block transfer functionality got moved to LegacyBlockTransferer; no new 
> configuration is needed to use this class and keep the behavior that is 
> currently present.
> DataNodes have an additional configuration key 
> DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY to override the default block 
> transfer behavior.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2013-11-22 Thread Ignacio Corderi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Corderi updated HDFS-5549:
--

Attachment: (was: HDFS-5549.patch)

> Support for implementing custom FsDatasetSpi from outside the project
> -
>
> Key: HDFS-5549
> URL: https://issues.apache.org/jira/browse/HDFS-5549
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Ignacio Corderi
> Attachments: HDFS-5549.patch
>
>
> Visibility for multiple methods and a few classes got changed to public to 
> allow FsDatasetSpi and all the related classes that need subtyping to be 
> fully implemented from outside the HDFS project.
> Block transfers got abstracted to a factory, given that the behavior will be 
> changed for DataNodes using Kinetic drives. The existing DataNode-to-DataNode 
> block transfer functionality got moved to LegacyBlockTransferer; no new 
> configuration is needed to use this class and keep the behavior that is 
> currently present.
> DataNodes have an additional configuration key 
> DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY to override the default block 
> transfer behavior.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830421#comment-13830421
 ] 

Hadoop QA commented on HDFS-5549:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615405/HDFS-5549.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5548//console

This message is automatically generated.

> Support for implementing custom FsDatasetSpi from outside the project
> -
>
> Key: HDFS-5549
> URL: https://issues.apache.org/jira/browse/HDFS-5549
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Ignacio Corderi
> Attachments: HDFS-5549.patch
>
>
> Visibility for multiple methods and a few classes got changed to public to 
> allow FsDatasetSpi and all the related classes that need subtyping to be 
> fully implemented from outside the HDFS project.
> Block transfers got abstracted to a factory, given that the behavior will be 
> changed for DataNodes using Kinetic drives. The existing DataNode-to-DataNode 
> block transfer functionality got moved to LegacyBlockTransferer; no new 
> configuration is needed to use this class and keep the behavior that is 
> currently present.
> DataNodes have an additional configuration key 
> DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY to override the default block 
> transfer behavior.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5430) Support TTL on CacheBasedPathDirectives

2013-11-22 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5430:
--

Attachment: hdfs-5430-2.patch

Patch attached; turns out 0x15 doesn't equal 0xF :)
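
For anyone skimming, the arithmetic behind the joke (plain Java, nothing 
patch-specific):

{code}
// 0x15 is binary 10101 (decimal 21); 0xF is binary 1111 (decimal 15),
// the intended low-4-bit mask. Easy to mix up, annoying to debug.
int wrong = 0x15;
int mask  = 0xF;
System.out.println(Integer.toBinaryString(wrong)); // 10101
System.out.println(Integer.toBinaryString(mask));  // 1111
{code}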

> Support TTL on CacheBasedPathDirectives
> ---
>
> Key: HDFS-5430
> URL: https://issues.apache.org/jira/browse/HDFS-5430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: hdfs-5430-1.patch, hdfs-5430-2.patch
>
>
> It would be nice if CacheBasedPathDirectives would support an expiration 
> time, after which they would be automatically removed by the NameNode.  This 
> time would probably be in wall-clock time for the convenience of system 
> administrators.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5546) race condition crashes "hadoop ls -R" when directories are moved/removed

2013-11-22 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830418#comment-13830418
 ] 

Colin Patrick McCabe commented on HDFS-5546:


Thanks, Kousuke.  I think the goal is to have it continue, ignoring the failure 
to stat that entry.  This will be a bit tricky, since when listing a single 
file we can't ignore that error.  It probably makes sense to print out a 
warning at the end, as well.

> race condition crashes "hadoop ls -R" when directories are moved/removed
> 
>
> Key: HDFS-5546
> URL: https://issues.apache.org/jira/browse/HDFS-5546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-5546.1.patch
>
>
> This seems to be a rare race condition where we have a sequence of events 
> like this:
> 1. org.apache.hadoop.shell.Ls calls DFS#getFileStatus on directory D.
> 2. someone deletes or moves directory D
> 3. org.apache.hadoop.shell.Ls calls PathData#getDirectoryContents(D), which 
> calls DFS#listStatus(D). This throws FileNotFoundException.
> 4. ls command terminates with FNF



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2013-11-22 Thread Ignacio Corderi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Corderi updated HDFS-5549:
--

Attachment: HDFS-5549.patch

Added missing Apache License

> Support for implementing custom FsDatasetSpi from outside the project
> -
>
> Key: HDFS-5549
> URL: https://issues.apache.org/jira/browse/HDFS-5549
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Ignacio Corderi
> Attachments: HDFS-5549.patch
>
>
> Visibility for multiple methods and a few classes got changed to public to 
> allow FsDatasetSpi and all the related classes that need subtyping to be 
> fully implemented from outside the HDFS project.
> Block transfers got abstracted to a factory, given that the behavior will be 
> changed for DataNodes using Kinetic drives. The existing DataNode-to-DataNode 
> block transfer functionality got moved to LegacyBlockTransferer; no new 
> configuration is needed to use this class and keep the behavior that is 
> currently present.
> DataNodes have an additional configuration key 
> DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY to override the default block 
> transfer behavior.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2013-11-22 Thread Ignacio Corderi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Corderi updated HDFS-5549:
--

Attachment: (was: HDFS-5549.patch)

> Support for implementing custom FsDatasetSpi from outside the project
> -
>
> Key: HDFS-5549
> URL: https://issues.apache.org/jira/browse/HDFS-5549
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Ignacio Corderi
>
> Visibility for multiple methods and a few classes got changed to public to 
> allow FsDatasetSpi and all the related classes that need subtyping to be 
> fully implemented from outside the HDFS project.
> Block transfers got abstracted to a factory, given that the behavior will be 
> changed for DataNodes using Kinetic drives. The existing DataNode-to-DataNode 
> block transfer functionality got moved to LegacyBlockTransferer; no new 
> configuration is needed to use this class and keep the behavior that is 
> currently present.
> DataNodes have an additional configuration key 
> DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY to override the default block 
> transfer behavior.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2013-11-22 Thread Ignacio Corderi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830414#comment-13830414
 ] 

Ignacio Corderi commented on HDFS-5549:
---

This patch enables running multiple different DataNode implementations on the 
infrastructure. 
In particular, this allows us to implement a DataNode that can access new 
storage based on Seagate Technology.
Part of this work was done in collaboration with Nathan Roberts and Daryn Sharp 
from Yahoo.

The block transfer factory also gives us the ability to change how these new 
DataNodes transfer blocks between each other, though this is only part of the 
motivation for the patch.
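
To make the wiring concrete, a minimal sketch of how such a factory is 
typically resolved through Hadoop's Configuration; the BlockTransferer 
interface name (and the key living in DFSConfigKeys) are my assumptions, while 
LegacyBlockTransferer and the key name come from the description above:

{code}
// Sketch: resolve the pluggable transferer class from configuration,
// defaulting to the legacy implementation. BlockTransferer is an assumed
// interface name.
Class<? extends BlockTransferer> clazz = conf.getClass(
    DFSConfigKeys.DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY,
    LegacyBlockTransferer.class,
    BlockTransferer.class);
BlockTransferer transferer = ReflectionUtils.newInstance(clazz, conf);
{code}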

> Support for implementing custom FsDatasetSpi from outside the project
> -
>
> Key: HDFS-5549
> URL: https://issues.apache.org/jira/browse/HDFS-5549
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Ignacio Corderi
> Attachments: HDFS-5549.patch
>
>
> Visibility for multiple methods and a few classes got changed to public to 
> allow FsDatasetSpi and all the related classes that need subtyping to be 
> fully implemented from outside the HDFS project.
> Block transfers got abstracted to a factory, given that the behavior will be 
> changed for DataNodes using Kinetic drives. The existing DataNode-to-DataNode 
> block transfer functionality got moved to LegacyBlockTransferer; no new 
> configuration is needed to use this class and keep the behavior that is 
> currently present.
> DataNodes have an additional configuration key 
> DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY to override the default block 
> transfer behavior.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830406#comment-13830406
 ] 

Hadoop QA commented on HDFS-2832:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615385/h2832_20131122.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 47 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
-12 warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
  org.apache.hadoop.hdfs.TestDatanodeConfig

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5544//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5544//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5544//console

This message is automatically generated.

> Enable support for heterogeneous storages in HDFS
> -
>
> Key: HDFS-2832
> URL: https://issues.apache.org/jira/browse/HDFS-2832
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.24.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: 20130813-HeterogeneousStorage.pdf, H2832_20131107.patch, 
> editsStored, h2832_20131023.patch, h2832_20131023b.patch, 
> h2832_20131025.patch, h2832_20131028.patch, h2832_20131028b.patch, 
> h2832_20131029.patch, h2832_20131103.patch, h2832_20131104.patch, 
> h2832_20131105.patch, h2832_20131107b.patch, h2832_20131108.patch, 
> h2832_20131110.patch, h2832_20131110b.patch, h2832_2013.patch, 
> h2832_20131112.patch, h2832_20131112b.patch, h2832_20131114.patch, 
> h2832_20131118.patch, h2832_20131119.patch, h2832_20131119b.patch, 
> h2832_20131121.patch, h2832_20131122.patch
>
>
> HDFS currently supports a configuration where storages are a list of 
> directories. Typically each of these directories corresponds to a volume with 
> its own file system. All these directories are homogeneous and therefore 
> identified as a single storage at the namenode. I propose changing the 
> current model, where a Datanode *is a* storage, to one where a Datanode *is a 
> collection of* storages. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5537) Remove FileWithSnapshot interface

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830392#comment-13830392
 ] 

Hadoop QA commented on HDFS-5537:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615402/HDFS-5537.002.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5547//console

This message is automatically generated.

> Remove FileWithSnapshot interface
> -
>
> Key: HDFS-5537
> URL: https://issues.apache.org/jira/browse/HDFS-5537
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-5537.000.patch, HDFS-5537.001.patch, 
> HDFS-5537.002.patch
>
>
> We use the FileWithSnapshot interface to define a set of methods shared by 
> INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot. After using 
> the Under-Construction feature to replace the INodeFileUC and 
> INodeFileUCWithSnapshot, we no longer need this interface.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5537) Remove FileWithSnapshot interface

2013-11-22 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5537:


Attachment: HDFS-5537.002.patch

Update the patch to fix the javadoc warnings. Also add Preconditions checks to 
INodeFile#toCompleteFile and INodeFile#removeLastBlock to make sure the file is 
under construction.
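
Roughly the shape of the added guard (a sketch using Guava's Preconditions; 
the exact accessor and message in the patch may differ):

{code}
// Sketch only: assumes an isUnderConstruction() accessor on INodeFile and
// Guava's Preconditions; the wording in the actual patch may differ.
Preconditions.checkState(isUnderConstruction(),
    "file %s is not under construction", this);
{code}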

> Remove FileWithSnapshot interface
> -
>
> Key: HDFS-5537
> URL: https://issues.apache.org/jira/browse/HDFS-5537
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-5537.000.patch, HDFS-5537.001.patch, 
> HDFS-5537.002.patch
>
>
> We use the FileWithSnapshot interface to define a set of methods shared by 
> INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot. After using 
> the Under-Construction feature to replace the INodeFileUC and 
> INodeFileUCWithSnapshot, we no longer need this interface.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5286) Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature

2013-11-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5286:
-

Attachment: h5286_20131122.patch

h5286_20131122.patch: removes INodeDirectoryWithQuota.
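
For those new to the feature-composition approach, a minimal sketch of the 
idea (illustrative names and fields only, not the actual patch code):

{code}
// Sketch: instead of an INodeDirectoryWithQuota subclass, the directory
// holds an optional feature object that carries the quota state.
class DirectoryWithQuotaFeature {
  long nsQuota;  // namespace quota (number of names)
  long dsQuota;  // diskspace quota (bytes)
}

class INodeDirectory {
  private DirectoryWithQuotaFeature quotaFeature;  // null when no quota set

  boolean isWithQuota() {
    return quotaFeature != null;
  }

  void addQuotaFeature(long nsQuota, long dsQuota) {
    DirectoryWithQuotaFeature f = new DirectoryWithQuotaFeature();
    f.nsQuota = nsQuota;
    f.dsQuota = dsQuota;
    quotaFeature = f;
  }
}
{code}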

> Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature
> ---
>
> Key: HDFS-5286
> URL: https://issues.apache.org/jira/browse/HDFS-5286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h5286_20131122.patch
>
>
> Similar to the case of INodeFile (HDFS-5285), we should also flatten the 
> INodeDirectory hierarchy.
> This is the first step: adding DirectoryWithQuotaFeature to replace 
> INodeDirectoryWithQuota.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5286) Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature

2013-11-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5286:
-

Status: Patch Available  (was: Open)

> Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature
> ---
>
> Key: HDFS-5286
> URL: https://issues.apache.org/jira/browse/HDFS-5286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h5286_20131122.patch
>
>
> Similar to the case of INodeFile (HDFS-5285), we should also flatten the 
> INodeDirectory hierarchy.
> This is the first step: adding DirectoryWithQuotaFeature to replace 
> INodeDirectoryWithQuota.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5546) race condition crashes "hadoop ls -R" when directories are moved/removed

2013-11-22 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830366#comment-13830366
 ] 

Kousuke Saruta commented on HDFS-5546:
--

I see, and I will try to modify that.

> race condition crashes "hadoop ls -R" when directories are moved/removed
> 
>
> Key: HDFS-5546
> URL: https://issues.apache.org/jira/browse/HDFS-5546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-5546.1.patch
>
>
> This seems to be a rare race condition where we have a sequence of events 
> like this:
> 1. org.apache.hadoop.shell.Ls calls DFS#getFileStatus on directory D.
> 2. someone deletes or moves directory D
> 3. org.apache.hadoop.shell.Ls calls PathData#getDirectoryContents(D), which 
> calls DFS#listStatus(D). This throws FileNotFoundException.
> 4. ls command terminates with FNF



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5554) Add Snapshot Feature to INodeFile

2013-11-22 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5554:


Summary: Add Snapshot Feature to INodeFile  (was: Add FileWithSnapshot 
Feature)

> Add Snapshot Feature to INodeFile
> -
>
> Key: HDFS-5554
> URL: https://issues.apache.org/jira/browse/HDFS-5554
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>
> Similar to HDFS-5285, we can add a FileWithSnapshot feature to INodeFile 
> and use it to replace the current INodeFileWithSnapshot.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5546) race condition crashes "hadoop ls -R" when directories are moved/removed

2013-11-22 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830353#comment-13830353
 ] 

Colin Patrick McCabe commented on HDFS-5546:


{code}
@@ -86,7 +87,11 @@ protected void processOptions(LinkedList<String> args)
   protected void processPathArgument(PathData item) throws IOException {
     // implicitly recurse once for cmdline directories
     if (dirRecurse && item.stat.isDirectory()) {
-      recursePath(item);
+      try {
+        recursePath(item);
+      } catch (FileNotFoundException e) {
+        displayError(e);
+      }
     } else {
       super.processPathArgument(item);
     }
{code}

This will result in the first moved/removed file aborting the entire recursive 
ls with an error message.  Basically, the same behavior as now, only with a 
process exit code of 0 rather than nonzero.  That's not what we want.
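
What we want is roughly the shape below: catch the error per directory entry 
inside the recursion, keep walking, and remember the failure so the final exit 
code can still be nonzero (a sketch only; it loosely follows the shell Command 
framework rather than quoting it):

{code}
// Sketch of the desired behavior, not the actual Ls internals.
private boolean sawRaceError = false;

private void recurseDirectory(PathData dir) throws IOException {
  for (PathData child : dir.getDirectoryContents()) {
    try {
      processPath(child);   // may recurse further for subdirectories
    } catch (FileNotFoundException e) {
      displayError(e);      // warn about the vanished entry
      sawRaceError = true;  // folded into the exit code at the end
    }
  }
}
{code}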

> race condition crashes "hadoop ls -R" when directories are moved/removed
> 
>
> Key: HDFS-5546
> URL: https://issues.apache.org/jira/browse/HDFS-5546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-5546.1.patch
>
>
> This seems to be a rare race condition where we have a sequence of events 
> like this:
> 1. org.apache.hadoop.shell.Ls calls DFS#getFileStatus on directory D.
> 2. someone deletes or moves directory D
> 3. org.apache.hadoop.shell.Ls calls PathData#getDirectoryContents(D), which 
> calls DFS#listStatus(D). This throws FileNotFoundException.
> 4. ls command terminates with FNF



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5556) add some more NameNode cache statistics, cache pool stats

2013-11-22 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5556:
---

Attachment: HDFS-5556.001.patch

* listCachePools now returns a list of CachePoolEntry objects, which contain 
CachePoolInfo and CachePoolStats.  This is similar to how we handle 
listDirectives (a rough sketch of the entry shape follows after this list).

* CachePool now contains an implicit list of all the CacheDirective objects in 
that pool.  Currently, this just makes deleting a cache pool faster.  In the 
future, this will be useful for improving the cache replication monitor.

* rename some places where we were using the word "entry" when we really 
should be saying "directive"

* validate that we can't set negative CachePool weights.  Throw 
InvalidRequestException rather than plain IOE when asked to do this.

* CacheReplicationMonitor#rescanCacheDirectives: fix invalid javadoc comment.

* add NameNode statistics for total cluster cache capacity and usage in bytes.

* CacheManager#processCacheReportImpl: since we no longer send the genstamp of 
cached blocks on the wire in cache reports, there is no need to check it here.
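
A minimal sketch of the entry shape from the first bullet above (field 
meanings are assumptions from this description):

{code}
// Sketch: each listCachePools entry pairs the pool's static info with its
// runtime stats, mirroring how listDirectives returns its entries.
public final class CachePoolEntry {
  private final CachePoolInfo info;    // name, owner/group, mode, weight
  private final CachePoolStats stats;  // e.g. bytes needed/cached

  public CachePoolEntry(CachePoolInfo info, CachePoolStats stats) {
    this.info = info;
    this.stats = stats;
  }

  public CachePoolInfo getInfo() { return info; }
  public CachePoolStats getStats() { return stats; }
}
{code}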

> add some more NameNode cache statistics, cache pool stats
> -
>
> Key: HDFS-5556
> URL: https://issues.apache.org/jira/browse/HDFS-5556
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5556.001.patch
>
>
> Add some more NameNode cache statistics and also cache pool statistics.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5556) add some more NameNode cache statistics, cache pool stats

2013-11-22 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5556:
---

Status: Patch Available  (was: Open)

> add some more NameNode cache statistics, cache pool stats
> -
>
> Key: HDFS-5556
> URL: https://issues.apache.org/jira/browse/HDFS-5556
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5556.001.patch
>
>
> Add some more NameNode cache statistics and also cache pool statistics.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5556) add some more NameNode cache statistics, cache pool stats

2013-11-22 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5556:
--

 Summary: add some more NameNode cache statistics, cache pool stats
 Key: HDFS-5556
 URL: https://issues.apache.org/jira/browse/HDFS-5556
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Add some more NameNode cache statistics and also cache pool statistics.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5537) Remove FileWithSnapshot interface

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830320#comment-13830320
 ] 

Hadoop QA commented on HDFS-5537:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615370/HDFS-5537.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
10 warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5543//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5543//console

This message is automatically generated.

> Remove FileWithSnapshot interface
> -
>
> Key: HDFS-5537
> URL: https://issues.apache.org/jira/browse/HDFS-5537
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-5537.000.patch, HDFS-5537.001.patch
>
>
> We use the FileWithSnapshot interface to define a set of methods shared by 
> INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot. After using 
> the Under-Construction feature to replace the INodeFileUC and 
> INodeFileUCWithSnapshot, we no longer need this interface.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5538) URLConnectionFactory should pick up the SSL related configuration by default

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830318#comment-13830318
 ] 

Hadoop QA commented on HDFS-5538:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615272/HDFS-5538.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5542//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5542//console

This message is automatically generated.

> URLConnectionFactory should pick up the SSL related configuration by default
> 
>
> Key: HDFS-5538
> URL: https://issues.apache.org/jira/browse/HDFS-5538
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5538.000.patch, HDFS-5538.001.patch, 
> HDFS-5538.002.patch
>
>
> The default instance of URLConnectionFactory, DEFAULT_CONNECTION_FACTORY, does 
> not pick up any Hadoop-specific, SSL-related configuration. Its callers 
> have to set up the ConnectionConfigurator explicitly in order to pick up 
> these configurations.
> This is less than ideal for HTTPS because whenever the code needs to make an 
> HTTPS connection, it is forced to go through the setup.
> This jira refactors URLConnectionFactory to ease the handling of HTTPS 
> connections (compared to the DEFAULT_CONNECTION_FACTORY we have right now).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5546) race condition crashes "hadoop ls -R" when directories are moved/removed

2013-11-22 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5546:
-

Attachment: HDFS-5546.1.patch

I've tried to make a patch for this issue.
How does that look?

> race condition crashes "hadoop ls -R" when directories are moved/removed
> 
>
> Key: HDFS-5546
> URL: https://issues.apache.org/jira/browse/HDFS-5546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-5546.1.patch
>
>
> This seems to be a rare race condition where we have a sequence of events 
> like this:
> 1. org.apache.hadoop.shell.Ls calls DFS#getFileStatus on directory D.
> 2. someone deletes or moves directory D
> 3. org.apache.hadoop.shell.Ls calls PathData#getDirectoryContents(D), which 
> calls DFS#listStatus(D). This throws FileNotFoundException.
> 4. ls command terminates with FNF



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-2832:


Attachment: h2832_20131122.patch

> Enable support for heterogeneous storages in HDFS
> -
>
> Key: HDFS-2832
> URL: https://issues.apache.org/jira/browse/HDFS-2832
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.24.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: 20130813-HeterogeneousStorage.pdf, H2832_20131107.patch, 
> editsStored, h2832_20131023.patch, h2832_20131023b.patch, 
> h2832_20131025.patch, h2832_20131028.patch, h2832_20131028b.patch, 
> h2832_20131029.patch, h2832_20131103.patch, h2832_20131104.patch, 
> h2832_20131105.patch, h2832_20131107b.patch, h2832_20131108.patch, 
> h2832_20131110.patch, h2832_20131110b.patch, h2832_2013.patch, 
> h2832_20131112.patch, h2832_20131112b.patch, h2832_20131114.patch, 
> h2832_20131118.patch, h2832_20131119.patch, h2832_20131119b.patch, 
> h2832_20131121.patch, h2832_20131122.patch
>
>
> HDFS currently supports a configuration where storages are a list of 
> directories. Typically each of these directories corresponds to a volume with 
> its own file system. All these directories are homogeneous and therefore 
> identified as a single storage at the namenode. I propose changing the 
> current model, where a Datanode *is a* storage, to one where a Datanode *is a 
> collection of* storages. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-2832:


Attachment: (was: h5542_20131122.patch)

> Enable support for heterogeneous storages in HDFS
> -
>
> Key: HDFS-2832
> URL: https://issues.apache.org/jira/browse/HDFS-2832
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.24.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: 20130813-HeterogeneousStorage.pdf, H2832_20131107.patch, 
> editsStored, h2832_20131023.patch, h2832_20131023b.patch, 
> h2832_20131025.patch, h2832_20131028.patch, h2832_20131028b.patch, 
> h2832_20131029.patch, h2832_20131103.patch, h2832_20131104.patch, 
> h2832_20131105.patch, h2832_20131107b.patch, h2832_20131108.patch, 
> h2832_20131110.patch, h2832_20131110b.patch, h2832_2013.patch, 
> h2832_20131112.patch, h2832_20131112b.patch, h2832_20131114.patch, 
> h2832_20131118.patch, h2832_20131119.patch, h2832_20131119b.patch, 
> h2832_20131121.patch, h2832_20131122.patch
>
>
> HDFS currently supports a configuration where storages are a list of 
> directories. Typically each of these directories corresponds to a volume with 
> its own file system. All these directories are homogeneous and therefore 
> identified as a single storage at the namenode. I propose changing the 
> current model, where a Datanode *is a* storage, to one where a Datanode *is a 
> collection of* storages. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-2832:


Attachment: h5542_20131122.patch

> Enable support for heterogeneous storages in HDFS
> -
>
> Key: HDFS-2832
> URL: https://issues.apache.org/jira/browse/HDFS-2832
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.24.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: 20130813-HeterogeneousStorage.pdf, H2832_20131107.patch, 
> editsStored, h2832_20131023.patch, h2832_20131023b.patch, 
> h2832_20131025.patch, h2832_20131028.patch, h2832_20131028b.patch, 
> h2832_20131029.patch, h2832_20131103.patch, h2832_20131104.patch, 
> h2832_20131105.patch, h2832_20131107b.patch, h2832_20131108.patch, 
> h2832_20131110.patch, h2832_20131110b.patch, h2832_2013.patch, 
> h2832_20131112.patch, h2832_20131112b.patch, h2832_20131114.patch, 
> h2832_20131118.patch, h2832_20131119.patch, h2832_20131119b.patch, 
> h2832_20131121.patch, h2832_20131122.patch
>
>
> HDFS currently supports a configuration where storages are a list of 
> directories. Typically each of these directories corresponds to a volume with 
> its own file system. All these directories are homogeneous and therefore 
> identified as a single storage at the namenode. I propose changing the 
> current model, where a Datanode *is a* storage, to one where a Datanode *is a 
> collection of* storages. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5542) Fix TODO and clean up the code in HDFS-2832.

2013-11-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-5542.
-

   Resolution: Fixed
Fix Version/s: Heterogeneous Storage (HDFS-2832)
 Hadoop Flags: Reviewed

+1 for the patch. I committed it to branch HDFS-2832.

Thanks a lot for taking care of the TODOs, including many of mine!

> Fix TODO and clean up the code in HDFS-2832.
> 
>
> Key: HDFS-5542
> URL: https://issues.apache.org/jira/browse/HDFS-5542
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Fix For: Heterogeneous Storage (HDFS-2832)
>
> Attachments: h5542_20131121.patch, h5542_20131122.patch
>
>
> - Fix TODOs.
> - Remove unused code.
> - Reduce visibility (e.g. change public to package private.)
> - Simplify the code if possible.
> - Fix comments and javadoc.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5542) Fix TODO and clean up the code in HDFS-2832.

2013-11-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5542:
-

Attachment: h5542_20131122.patch

The imports were changed by Eclipse.  It is unfortunate that different IDEs have 
different policies on arranging imports.

h5542_20131122.patch: manually changes the imports to use *.

> Fix TODO and clean up the code in HDFS-2832.
> 
>
> Key: HDFS-5542
> URL: https://issues.apache.org/jira/browse/HDFS-5542
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Attachments: h5542_20131121.patch, h5542_20131122.patch
>
>
> - Fix TODOs.
> - Remove unused code.
> - Reduce visibility (e.g. change public to package private.)
> - Simplify the code if possible.
> - Fix comments and javadoc.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5555) CacheAdmin commands fail when first listed NameNode is in Standby

2013-11-22 Thread Stephen Chu (JIRA)
Stephen Chu created HDFS-5555:
-

 Summary: CacheAdmin commands fail when first listed NameNode is in 
Standby
 Key: HDFS-5555
 URL: https://issues.apache.org/jira/browse/HDFS-5555
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: caching
Affects Versions: 3.0.0
Reporter: Stephen Chu


I am on a HA-enabled cluster. The NameNodes are on host-1 and host-2.

In the configurations, we specify the host-1 NN first and the host-2 NN 
afterwards in the _dfs.ha.namenodes.ns1_ property (where _ns1_ is the name of 
the nameservice).

If the host-1 NN is Standby and the host-2 NN is Active, some CacheAdmin 
commands will fail, complaining that the operation is not supported in standby 
state.

e.g.
{code}
bash-4.1$ hdfs cacheadmin -removeDirectives -path /user/hdfs2
Exception in thread "main" 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): 
Operation category READ is not supported in state standby
at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1501)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1082)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listCacheDirectives(FSNamesystem.java:6892)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$ServerSideCacheEntriesIterator.makeRequest(NameNodeRpcServer.java:1263)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$ServerSideCacheEntriesIterator.makeRequest(NameNodeRpcServer.java:1249)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listCacheDirectives(ClientNamenodeProtocolServerSideTranslatorPB.java:1087)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1499)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

at org.apache.hadoop.ipc.Client.call(Client.java:1348)
at org.apache.hadoop.ipc.Client.call(Client.java:1301)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy9.listCacheDirectives(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB$CacheEntriesIterator.makeRequest(ClientNamenodeProtocolTranslatorPB.java:1079)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB$CacheEntriesIterator.makeRequest(ClientNamenodeProtocolTranslatorPB.java:1064)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$32.hasNext(DistributedFileSystem.java:1704)
at 
org.apache.hadoop.hdfs.tools.CacheAdmin$RemoveCacheDirectiveInfosCommand.run(CacheAdmin.java:372)
at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:84)
at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:89)
{code}

After manually failing over from host-2 to host-1, the CacheAdmin commands 
succeed.


The affected commands are:

-listPools
-listDirectives
-removeDirectives
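
For context, the usual HDFS HA client pattern retries against the other 
NameNode when a StandbyException comes back, roughly as sketched below; 
whether the eventual fix routes these commands through such a failover proxy 
is an assumption on my part.

{code}
// Sketch of standby-aware retry; doListCacheDirectives is a hypothetical
// helper, not a real API.
for (InetSocketAddress nnAddr : namenodeAddrs) {
  try {
    return doListCacheDirectives(nnAddr);
  } catch (RemoteException e) {
    if (!StandbyException.class.getName().equals(e.getClassName())) {
      throw e;  // a real failure: surface it immediately
    }
    // this NameNode is standby: fall through and try the next address
  }
}
throw new IOException("no active NameNode found");
{code}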



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5554) Add FileWithSnapshot Feature

2013-11-22 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-5554:
---

 Summary: Add FileWithSnapshot Feature
 Key: HDFS-5554
 URL: https://issues.apache.org/jira/browse/HDFS-5554
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao


Similar to HDFS-5285, we can add a FileWithSnapshot feature to INodeFile and 
use it to replace the current INodeFileWithSnapshot.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5537) Remove FileWithSnapshot interface

2013-11-22 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5537:


Status: Patch Available  (was: Open)

> Remove FileWithSnapshot interface
> -
>
> Key: HDFS-5537
> URL: https://issues.apache.org/jira/browse/HDFS-5537
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-5537.000.patch, HDFS-5537.001.patch
>
>
> We use the FileWithSnapshot interface to define a set of methods shared by 
> INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot. After using 
> the Under-Construction feature to replace the INodeFileUC and 
> INodeFileUCWithSnapshot, we no longer need this interface.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5537) Remove FileWithSnapshot interface

2013-11-22 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5537:


Attachment: HDFS-5537.001.patch

Rebase the patch.

> Remove FileWithSnapshot interface
> -
>
> Key: HDFS-5537
> URL: https://issues.apache.org/jira/browse/HDFS-5537
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-5537.000.patch, HDFS-5537.001.patch
>
>
> We use the FileWithSnapshot interface to define a set of methods shared by 
> INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot. After using 
> the Under-Construction feature to replace the INodeFileUC and 
> INodeFileUCWithSnapshot, we no longer need this interface.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5552) Fix wrong information of "Cluster summay" in dfshealth.html

2013-11-22 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5552:


  Resolution: Fixed
   Fix Version/s: 2.3.0
Target Version/s:   (was: 3.0.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks Shinichi, Kousuke, and Haohui! +1 for the patch. I've committed this to 
trunk and branch-2.

> Fix wrong information of "Cluster summay" in dfshealth.html
> ---
>
> Key: HDFS-5552
> URL: https://issues.apache.org/jira/browse/HDFS-5552
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Haohui Mai
> Fix For: 2.3.0
>
> Attachments: HDFS-5552.000.patch, dfshealth-html.png
>
>
> "files and directories" + "blocks" = total filesystem object(s). But wrong 
> value is displayed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5525) Inline dust templates

2013-11-22 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5525:


Fix Version/s: 2.3.0

I've merged this to branch-2.

> Inline dust templates
> -
>
> Key: HDFS-5525
> URL: https://issues.apache.org/jira/browse/HDFS-5525
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HDFS-5525.000.patch, HDFS-5525.000.patch, screenshot.png
>
>
> Currently the dust templates are stored as separate files on the server side. 
> The web UI has to make separate HTTP requests to load the templates, which 
> increases the network overhead and page load latency.
> This jira proposes to inline all dust templates into the main HTML file, so 
> that the page can be loaded faster.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5444) Choose default web UI based on browser capabilities

2013-11-22 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5444:


Fix Version/s: 2.3.0

I've also merged this to branch-2.

> Choose default web UI based on browser capabilities
> ---
>
> Key: HDFS-5444
> URL: https://issues.apache.org/jira/browse/HDFS-5444
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HDFS-5444.000.patch, HDFS-5444.000.patch, 
> HDFS-5444.001.patch, Screenshot-new.png, Screenshot-old.png
>
>
> This jira changes the entry point of the web UI so that modern browsers with 
> JavaScript support are redirected to the new web UI, while other browsers 
> automatically fall back to the old JSP-based UI.
> It also adds hyperlinks in both UIs to facilitate testing and evaluation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5525) Inline dust templates

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830210#comment-13830210
 ] 

Hudson commented on HDFS-5525:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4787 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4787/])
Move HDFS-5444 and HDFS-5525 to branch 2.3.0 section in CHANGES.txt (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544631)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Inline dust templates
> -
>
> Key: HDFS-5525
> URL: https://issues.apache.org/jira/browse/HDFS-5525
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0
>
> Attachments: HDFS-5525.000.patch, HDFS-5525.000.patch, screenshot.png
>
>
> Currently the dust templates are stored as separate files on the server side. 
> The web UI has to make separate HTTP requests to load the templates, which 
> increases the network overhead and page load latency.
> This jira proposes to inline all dust templates into the main HTML file, so 
> that the page can be loaded faster.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5552) Fix wrong information of "Cluster summay" in dfshealth.html

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830208#comment-13830208
 ] 

Hudson commented on HDFS-5552:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4787 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4787/])
HDFS-5552. Fix wrong information of Cluster summay in dfshealth.html. 
Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544627)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dust-helpers-1.1.1.min.js


> Fix wrong information of "Cluster summay" in dfshealth.html
> ---
>
> Key: HDFS-5552
> URL: https://issues.apache.org/jira/browse/HDFS-5552
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Haohui Mai
> Attachments: HDFS-5552.000.patch, dfshealth-html.png
>
>
> "files and directories" + "blocks" = total filesystem object(s). But wrong 
> value is displayed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5444) Choose default web UI based on browser capabilities

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830209#comment-13830209
 ] 

Hudson commented on HDFS-5444:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4787 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4787/])
Move HDFS-5444 and HDFS-5525 to branch 2.3.0 section in CHANGES.txt (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544631)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Choose default web UI based on browser capabilities
> ---
>
> Key: HDFS-5444
> URL: https://issues.apache.org/jira/browse/HDFS-5444
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0
>
> Attachments: HDFS-5444.000.patch, HDFS-5444.000.patch, 
> HDFS-5444.001.patch, Screenshot-new.png, Screenshot-old.png
>
>
> This jira changes the entry point of the web UI so that modern browsers with 
> JavaScript support are redirected to the new web UI, while other browsers 
> automatically fall back to the old JSP-based UI.
> It also adds hyperlinks in both UIs to facilitate testing and evaluation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5542) Fix TODO and clean up the code in HDFS-2832.

2013-11-22 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830205#comment-13830205
 ] 

Arpit Agarwal commented on HDFS-5542:
-

Thanks for the patch Nicholas!

Looks like it causes a compile error in DataNode.java:
{code}
[ERROR] 
/Users/aagarwal/src/hdp2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:[309,8]
 cannot find symbol
symbol  : variable DFS_MAX_NUM_BLOCKS_TO_LOG_DEFAULT
{code}

Would it just be simpler to have the following, like in trunk, instead of 
importing individual keys? I am not sure if some of the import cleanup was 
intentional or done by Eclipse.
{code}
import static org.apache.hadoop.hdfs.DFSConfigKeys.*;
{code}

Changes look good otherwise.


> Fix TODO and clean up the code in HDFS-2832.
> 
>
> Key: HDFS-5542
> URL: https://issues.apache.org/jira/browse/HDFS-5542
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Attachments: h5542_20131121.patch
>
>
> - Fix TODOs.
> - Remove unused code.
> - Reduce visibility (e.g. change public to package private.)
> - Simplify the code if possible.
> - Fix comments and javadoc.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5541) LIBHDFS questions and performance suggestions

2013-11-22 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830164#comment-13830164
 ] 

Chris Nauroth commented on HDFS-5541:
-

Hi, [~stevebovy].  Thank you for reporting this.  I suggest splitting this into 
2 separate jiras: one for cross-platform compatibility and another for 
performance enhancements.  We can probably act quickly on compatibility.  The 
performance questions might require more discussion.  Splitting out separate 
patches also helps make code reviews easier.

Regarding compatibility, we've done a lot of similar work on the JNI extensions 
in hadoop-common for C89 compatibility.  The work hasn't been done for libhdfs, 
but I expect it will be similar (declare variables at the top, no // comments, 
avoid Linux-specific functions or put them behind an {{#ifdef}}, etc.).

bq. >> I just spent a week white-washing the code back to normal C standards so 
that it could compile and build across a wide range of platforms <<

Would you be interested in contributing this as a patch?  We'd definitely 
appreciate it!

bq. 5) FINALLY Windows Compatibility

Assuming all of the compatibility issues mentioned above are addressed 
(standard C89 code + no Linux extras), is there some additional Windows 
compatibility problem that you encountered, or is this comment just stating the 
end goal that you want to achieve?

> LIBHDFS questions and performance suggestions
> -
>
> Key: HDFS-5541
> URL: https://issues.apache.org/jira/browse/HDFS-5541
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Stephen Bovy
>Priority: Minor
>
> Since libhdfs is a "client" interface, and especially because it is a "C" 
> interface, it should be assumed that the code will be used across many 
> different platforms and many different compilers.
> 1) The code should be cross-platform (no Linux extras).
> 2) The code should compile on standard C89 compilers; the
> >>>  {least common denominator rule applies here} !! <<
> C code with a "c" extension should follow the rules of the C standard: 
> all variables must be declared at the beginning of scope, and no (//) 
> comments are allowed.
> >> I just spent a week white-washing the code back to normal C standards so 
> >> that it could compile and build across a wide range of platforms <<
> Now on to performance questions:
> 1) If threads are not used, why do a thread attach? (When threads are not 
> used, all the thread-attach nonsense is a waste of time and a performance 
> killer.)
> 2) The JVM init code should not be embedded within the context of every 
> function call. The JVM init code should be in a stand-alone LIBINIT 
> function that is only invoked once. The JVM * and the JNI * should be 
> global variables for use when no threads are utilized.
> 3) When threads are utilized, the attach function can use the GLOBAL jvm * 
> created by the LIBINIT {WHICH IS INVOKED ONLY ONCE} and thus safely 
> outside the scope of any loop that is using the functions.
> 4) Hash table and locking: why?
> When threads are used, the hash table locking is going to hurt performance. 
> Why not use thread-local storage for the hash table? That way no locking is 
> required, either with or without threads.
> 
> 5) FINALLY: Windows compatibility.
> Do not use POSIX features if they cannot easily be replaced on other 
> platforms!



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5544) Adding Test case For Checking dfs.checksum type as NULL value

2013-11-22 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-5544:
--

   Resolution: Fixed
Fix Version/s: 2.2.1
   2.3.0
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks a lot, Sathish, for the patch. I have just committed this to trunk, 
branch-2 and 2.2.
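
For reference, a hedged sketch of the kind of unit test this adds (the 
scaffolding below follows the usual MiniDFSCluster pattern and is assumed, 
not copied from the committed change in TestFSOutputSummer.java):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestChecksumTypeNull {
  @Test
  public void testChecksumTypeNull() throws Exception {
    // With checksums disabled, writes must still produce a readable file.
    Configuration conf = new HdfsConfiguration();
    conf.set(DFSConfigKeys.DFS_CHECKSUM_TYPE_KEY, "NULL");
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      Path file = new Path("/testChecksumTypeNull");
      DFSTestUtil.createFile(fs, file, 1024, (short) 1, 0L);
      DFSTestUtil.readFile(fs, file);  // fails if NULL checksums break I/O
    } finally {
      cluster.shutdown();
    }
  }
}
{code}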

> Adding Test case For Checking dfs.checksum type as NULL value
> -
>
> Key: HDFS-5544
> URL: https://issues.apache.org/jira/browse/HDFS-5544
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.1.0-beta
> Environment: HDFS-TEST
>Reporter: sathish
>Assignee: sathish
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.1
>
> Attachments: HDFS-5544.patch
>
>
> https://issues.apache.org/jira/i#browse/HADOOP-9114,
> For checking the dfs.checksum.type as NULL, it is better to add one unit 
> test case.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5544) Adding Test case For Checking dfs.checksum type as NULL value

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830082#comment-13830082
 ] 

Hudson commented on HDFS-5544:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4786 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4786/])
HDFS-5544. Adding Test case For Checking dfs.checksum.type as NULL value. 
Contributed by Sathish. (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544596)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFSOutputSummer.java


> Adding Test case For Checking dfs.checksum type as NULL value
> -
>
> Key: HDFS-5544
> URL: https://issues.apache.org/jira/browse/HDFS-5544
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.1.0-beta
> Environment: HDFS-TEST
>Reporter: sathish
>Assignee: sathish
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.1
>
> Attachments: HDFS-5544.patch
>
>
> https://issues.apache.org/jira/i#browse/HADOOP-9114,
> For checking the dfs.checksum.type as NULL, it is better to add one unit 
> test case.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5543) fix narrow race condition in TestPathBasedCacheRequests

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829961#comment-13829961
 ] 

Hudson commented on HDFS-5543:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1616 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1616/])
HDFS-5543. Fix narrow race condition in TestPathBasedCacheRequests (cmccabe) 
(cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544310)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java


> fix narrow race condition in TestPathBasedCacheRequests
> ---
>
> Key: HDFS-5543
> URL: https://issues.apache.org/jira/browse/HDFS-5543
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: HDFS-5543.001.patch, HDFS-5543.002.patch
>
>
> TestPathBasedCacheRequests has a narrow race condition in 
> testWaitForCachedReplicasInDirectory where an assert checking the number of 
> bytes cached may fail.  The reason is that waitForCachedBlock looks at the 
> NameNode data structures directly to see how many replicas are cached, but 
> the scanner asynchronously updates the cache entries with this information.
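
For illustration, the usual fix for this kind of race is to poll until the 
asynchronously updated statistics converge instead of asserting once. A 
hedged sketch (the accessor and expected value below are placeholders, and 
this is not the committed change):
{code}
import com.google.common.base.Supplier;
import org.apache.hadoop.test.GenericTestUtils;

// Retry the check until the scanner has caught up, failing only if the
// statistics never converge within the timeout.
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    // re-read the NameNode-side statistics on every poll
    return getBytesCached() == expectedBytesCached;
  }
}, 500, 60000);
{code}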



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5285) Flatten INodeFile hierarchy: Add UnderContruction Feature

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829958#comment-13829958
 ] 

Hudson commented on HDFS-5285:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1616 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1616/])
HDFS-5285. Flatten INodeFile hierarchy: Replace INodeFileUnderConstruction and 
INodeFileUnderConstructionWithSnapshot with FileUnderContructionFeature.  
Contributed by jing9 (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544389)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileUnderConstructionFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileUnderConstruction.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileUnderConstructionWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/CreateEditsLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java


> Flatten INodeFile hierarchy: Add UnderContruction Feature
> -
>
> Key: HDFS-5285
> URL: https://issues.apache.org/jira/browse/HDFS-5285
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-5285.001.patch, HDFS-5285.002.patch, 
> HDFS-5285.003.patch, h5285_20131001.patch, h5285_20131002.patch, 
> h5285_20131118.patch
>
>
> For files, there are INodeFile, INodeFileUnderConstruction, 
> INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot for 
> representing whether a file is under construction or whether it is in some 
> snapshot.  The following are two major problems of the current approach:
> - Java classes do not support multiple inheritance, so 
> INodeFileUnderConstructionWithSnapshot cannot extend both 
> INodeFileUnderConstruction and INodeFileWithSnapshot.
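
A minimal sketch of the composition ("feature") pattern this subtask moves 
to, with names simplified for illustration rather than copied from the patch:
{code}
// Instead of a parallel class hierarchy, the under-construction state
// becomes an optional feature object hung off the single INodeFile class.
class FileUnderConstructionFeature {
  private final String clientName;  // the lease holder

  FileUnderConstructionFeature(String clientName) {
    this.clientName = clientName;
  }

  String getClientName() {
    return clientName;
  }
}

class INodeFile {
  // null while the file is complete; non-null while it is being written
  private FileUnderConstructionFeature uc;

  boolean isUnderConstruction() {
    return uc != null;
  }

  void toUnderConstruction(String clientName) {
    uc = new FileUnderConstructionFeature(clientName);
  }

  void toCompleteFile() {
    uc = null;
  }
}
{code}
This sidesteps the multiple-inheritance problem: snapshot support and 
under-construction support become independent features that can be combined 
on one class instead of multiplying subclasses.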

[jira] [Commented] (HDFS-5288) Close idle connections in portmap

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829966#comment-13829966
 ] 

Hudson commented on HDFS-5288:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1616 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1616/])
HDFS-5288. Close idle connections in portmap. Contributed by Haohui Mai 
(brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544352)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/Portmap.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/PortmapResponse.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/RpcProgramPortmap.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/portmap
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/portmap/TestPortmap.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestPortmapRegister.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Close idle connections in portmap
> -
>
> Key: HDFS-5288
> URL: https://issues.apache.org/jira/browse/HDFS-5288
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.2.1
>
> Attachments: HDFS-5288.000.patch, HDFS-5288.001.patch
>
>
> Currently the portmap daemon does not close idle connections. The daemon 
> should close any idle connections to save resources.
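
For illustration, a hedged sketch of one common way to do this, assuming the 
Netty 3 stack that hadoop-nfs is built on (the handler wiring and the 
10-second timeout are illustrative assumptions, not the committed patch):
{code}
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.handler.timeout.IdleState;
import org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler;
import org.jboss.netty.handler.timeout.IdleStateEvent;
import org.jboss.netty.handler.timeout.IdleStateHandler;
import org.jboss.netty.util.HashedWheelTimer;

public class CloseIdleHandler extends IdleStateAwareChannelHandler {
  /** Fires IdleStateEvents after 10 seconds with no reads or writes. */
  public static IdleStateHandler newIdleStateHandler() {
    return new IdleStateHandler(new HashedWheelTimer(), 0, 0, 10);
  }

  @Override
  public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) {
    if (e.getState() == IdleState.ALL_IDLE) {
      e.getChannel().close();  // reclaim the idle connection's resources
    }
  }
}
{code}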



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5407) Fix typos in DFSClientCache

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829963#comment-13829963
 ] 

Hudson commented on HDFS-5407:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1616 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1616/])
HDFS-5407. Fix typos in DFSClientCache. Contributed by Haohui Mai (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544362)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/DFSClientCache.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix typos in DFSClientCache
> ---
>
> Key: HDFS-5407
> URL: https://issues.apache.org/jira/browse/HDFS-5407
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Trivial
> Fix For: 2.2.1
>
> Attachments: HDFS-5407.000.patch
>
>
> In DFSClientCache, clientRemovealListener() should be clientRemovalListener().
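
For context, a hedged sketch of the Guava cache pattern that DFSClientCache 
follows; the jira only renames the method that builds this listener (the 
body below is simplified, not the actual code):
{code}
import java.io.IOException;

import org.apache.hadoop.hdfs.DFSClient;

import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;

// Correctly spelled after the fix: clientRemovalListener.
private RemovalListener<String, DFSClient> clientRemovalListener() {
  return new RemovalListener<String, DFSClient>() {
    @Override
    public void onRemoval(RemovalNotification<String, DFSClient> notification) {
      try {
        notification.getValue().close();  // release the evicted client
      } catch (IOException e) {
        // best effort: eviction must not propagate the failure
      }
    }
  };
}
{code}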



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5473) Consistent naming of user-visible caching classes and methods

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829965#comment-13829965
 ] 

Hudson commented on HDFS-5473:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1616 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1616/])
HDFS-5473. Consistent naming of user-visible caching classes and methods 
(cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544252)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveEntry.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/PathBasedCacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/PathBasedCacheEntry.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/CentralizedCacheManagement.apt.vm
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/OfflineEditsViewerHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathBasedCacheRequests.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java


> Consistent naming of user-visible caching classes and methods
> -
>
> Key: HDFS-5473
> URL: https://issues.apache.org/jira/browse/HDFS-5473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: HDFS-5473.002.patch, HDFS-5473.003.patch
>
>
> It's kind of warty that (after HDFS-5326 goes in) DistributedFileSystem has 
> {{*CachePool}} methods that take a {{CachePoolInfo}} and 
> {{*PathBasedCacheDirective}} methods that take a 
> {{PathBasedCacheDirective}}.
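
An illustrative before/after of the renaming (the signatures are assumed for 
the sketch rather than copied from the patch; the new names follow the 
{{CacheDirectiveInfo}} classes in the commit's file list):
{code}
// Before: pool methods and directive methods follow different conventions.
void addCachePool(CachePoolInfo info);
long addPathBasedCacheDirective(PathBasedCacheDirective directive);

// After: directives get a parallel *Info type and consistent method names.
void addCachePool(CachePoolInfo info);
long addCacheDirective(CacheDirectiveInfo info);
{code}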

[jira] [Commented] (HDFS-5473) Consistent naming of user-visible caching classes and methods

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829954#comment-13829954
 ] 

Hudson commented on HDFS-5473:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1590 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1590/])
HDFS-5473. Consistent naming of user-visible caching classes and methods 
(cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544252)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveEntry.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/PathBasedCacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/PathBasedCacheEntry.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/CentralizedCacheManagement.apt.vm
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/OfflineEditsViewerHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathBasedCacheRequests.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java


> Consistent naming of user-visible caching classes and methods
> -
>
> Key: HDFS-5473
> URL: https://issues.apache.org/jira/browse/HDFS-5473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: HDFS-5473.002.patch, HDFS-5473.003.patch
>
>
> It's kind of warty that (after HDFS-5326 goes in) DistributedFileSystem has 
> {{*CachePool}} methods that take a {{CachePoolInfo}} and 
> {{*PathBasedCacheDirective}} methods that take a 
> {{PathBasedCacheDirective}}.

[jira] [Commented] (HDFS-5543) fix narrow race condition in TestPathBasedCacheRequests

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829950#comment-13829950
 ] 

Hudson commented on HDFS-5543:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1590 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1590/])
HDFS-5543. Fix narrow race condition in TestPathBasedCacheRequests (cmccabe) 
(cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544310)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java


> fix narrow race condition in TestPathBasedCacheRequests
> ---
>
> Key: HDFS-5543
> URL: https://issues.apache.org/jira/browse/HDFS-5543
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: HDFS-5543.001.patch, HDFS-5543.002.patch
>
>
> TestPathBasedCacheRequests has a narrow race condition in 
> testWaitForCachedReplicasInDirectory where an assert checking the number of 
> bytes cached may fail.  The reason is that waitForCachedBlock looks at the 
> NameNode data structures directly to see how many replicas are cached, but 
> the scanner asynchronously updates the cache entries with this information.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5288) Close idle connections in portmap

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829955#comment-13829955
 ] 

Hudson commented on HDFS-5288:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1590 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1590/])
HDFS-5288. Close idle connections in portmap. Contributed by Haohui Mai 
(brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544352)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/Portmap.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/PortmapResponse.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/RpcProgramPortmap.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/portmap
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/portmap/TestPortmap.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestPortmapRegister.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Close idle connections in portmap
> -
>
> Key: HDFS-5288
> URL: https://issues.apache.org/jira/browse/HDFS-5288
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.2.1
>
> Attachments: HDFS-5288.000.patch, HDFS-5288.001.patch
>
>
> Currently the portmap daemon does not close idle connections. The daemon 
> should close any idle connections to save resources.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5407) Fix typos in DFSClientCache

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829952#comment-13829952
 ] 

Hudson commented on HDFS-5407:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1590 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1590/])
HDFS-5407. Fix typos in DFSClientCache. Contributed by Haohui Mai (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544362)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/DFSClientCache.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix typos in DFSClientCache
> ---
>
> Key: HDFS-5407
> URL: https://issues.apache.org/jira/browse/HDFS-5407
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Trivial
> Fix For: 2.2.1
>
> Attachments: HDFS-5407.000.patch
>
>
> In DFSClientCache, clientRemovealListener() should be clientRemovalListener().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5285) Flatten INodeFile hierarchy: Add UnderContruction Feature

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829947#comment-13829947
 ] 

Hudson commented on HDFS-5285:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1590 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1590/])
HDFS-5285. Flatten INodeFile hierarchy: Replace INodeFileUnderConstruction and 
INodeFileUnderConstructionWithSnapshot with FileUnderContructionFeature.  
Contributed by jing9 (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544389)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileUnderConstructionFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileUnderConstruction.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileUnderConstructionWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/CreateEditsLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java


> Flatten INodeFile hierarchy: Add UnderContruction Feature
> -
>
> Key: HDFS-5285
> URL: https://issues.apache.org/jira/browse/HDFS-5285
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-5285.001.patch, HDFS-5285.002.patch, 
> HDFS-5285.003.patch, h5285_20131001.patch, h5285_20131002.patch, 
> h5285_20131118.patch
>
>
> For files, there are INodeFile, INodeFileUnderConstruction, 
> INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot for 
> representing whether a file is under construction or whether it is in some 
> snapshot.  The following are two major problems of the current approach:
> - Java classes do not support multiple inheritance, so 
> INodeFileUnderConstructionWithSnapshot cannot extend both 
> INodeFileUnderConstruction and INodeFileWithSnapshot.

[jira] [Commented] (HDFS-5407) Fix typos in DFSClientCache

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829876#comment-13829876
 ] 

Hudson commented on HDFS-5407:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #399 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/399/])
HDFS-5407. Fix typos in DFSClientCache. Contributed by Haohui Mai (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544362)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/DFSClientCache.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix typos in DFSClientCache
> ---
>
> Key: HDFS-5407
> URL: https://issues.apache.org/jira/browse/HDFS-5407
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Trivial
> Fix For: 2.2.1
>
> Attachments: HDFS-5407.000.patch
>
>
> In DFSClientCache, clientRemovealListener() should be clientRemovalListener().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5288) Close idle connections in portmap

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829879#comment-13829879
 ] 

Hudson commented on HDFS-5288:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #399 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/399/])
HDFS-5288. Close idle connections in portmap. Contributed by Haohui Mai 
(brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544352)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/Portmap.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/PortmapResponse.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/RpcProgramPortmap.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/portmap
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/portmap/TestPortmap.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestPortmapRegister.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Close idle connections in portmap
> -
>
> Key: HDFS-5288
> URL: https://issues.apache.org/jira/browse/HDFS-5288
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.2.1
>
> Attachments: HDFS-5288.000.patch, HDFS-5288.001.patch
>
>
> Currently the portmap daemon does not close idle connections. The daemon 
> should close any idle connections to save resources.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5285) Flatten INodeFile hierarchy: Add UnderContruction Feature

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829871#comment-13829871
 ] 

Hudson commented on HDFS-5285:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #399 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/399/])
HDFS-5285. Flatten INodeFile hierarchy: Replace INodeFileUnderConstruction and 
INodeFileUnderConstructionWithSnapshot with FileUnderContructionFeature.  
Contributed by jing9 (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544389)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileUnderConstructionFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileUnderConstruction.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileUnderConstructionWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/CreateEditsLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java


> Flatten INodeFile hierarchy: Add UnderContruction Feature
> -
>
> Key: HDFS-5285
> URL: https://issues.apache.org/jira/browse/HDFS-5285
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-5285.001.patch, HDFS-5285.002.patch, 
> HDFS-5285.003.patch, h5285_20131001.patch, h5285_20131002.patch, 
> h5285_20131118.patch
>
>
> For files, there are INodeFile, INodeFileUnderConstruction, 
> INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot for 
> representing whether a file is under construction or whether it is in some 
> snapshot.  The following are two major problems of the current approach:
> - Java classes do not support multiple inheritance, so 
> INodeFileUnderConstructionWithSnapshot cannot extend both 
> INodeFileUnderConstruction and INodeFileWithSnapshot.

[jira] [Commented] (HDFS-5473) Consistent naming of user-visible caching classes and methods

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829878#comment-13829878
 ] 

Hudson commented on HDFS-5473:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #399 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/399/])
HDFS-5473. Consistent naming of user-visible caching classes and methods 
(cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544252)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveEntry.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/PathBasedCacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/PathBasedCacheEntry.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/CentralizedCacheManagement.apt.vm
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/OfflineEditsViewerHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathBasedCacheRequests.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java


> Consistent naming of user-visible caching classes and methods
> -
>
> Key: HDFS-5473
> URL: https://issues.apache.org/jira/browse/HDFS-5473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: HDFS-5473.002.patch, HDFS-5473.003.patch
>
>
> It's kind of warty that (after HDFS-5326 goes in) DistributedFileSystem has 
> {{*CachePool}} methods that take a {{CachePoolInfo}} and 
> {{*PathBasedCacheDirective}} methods that take a 
> {{PathBasedCacheDirective}}.

[jira] [Commented] (HDFS-5543) fix narrow race condition in TestPathBasedCacheRequests

2013-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829874#comment-13829874
 ] 

Hudson commented on HDFS-5543:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #399 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/399/])
HDFS-5543. Fix narrow race condition in TestPathBasedCacheRequests (cmccabe) 
(cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544310)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java


> fix narrow race condition in TestPathBasedCacheRequests
> ---
>
> Key: HDFS-5543
> URL: https://issues.apache.org/jira/browse/HDFS-5543
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: HDFS-5543.001.patch, HDFS-5543.002.patch
>
>
> TestPathBasedCacheRequests has a narrow race condition in 
> testWaitForCachedReplicasInDirectory where an assert checking the number of 
> bytes cached may fail.  The reason is that waitForCachedBlock looks at the 
> NameNode data structures directly to see how many replicas are cached, but 
> the scanner asynchronously updates the cache entries with this information.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5553) SNN crashed because edit log has gap after upgrade

2013-11-22 Thread Fengdong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengdong Yu updated HDFS-5553:
--

Description: 
As HDFS-5550 depicted, journal nodes don't upgrade, so I changed the VERSION 
manually according to the NN's VERSION.
Then I did the upgrade and got this exception. I also marked this as Blocker.

My steps are as follows:
It's a fresh cluster with hadoop-2.0.1 before upgrading.

0) install hadoop-2.2.0 hadoop package on all nodes.
1) stop-dfs.sh on active NN
2) disable HA in the core-site.xml and hdfs-site.xml on active NN and SNN
3) start-dfs.sh -upgrade -clusterId test-cluster on active NN(only one NN now.)
4) stop-dfs.sh after active NN started successfully.
5) enable HA in the core-site.xml and hdfs-site.xml on active NN and SNN
6) change all journal nodes' VERSION manually according to NN's VERSION
7) rm -f 'dfs.journalnode.edits.dir'/test-cluster/current/* (just keep VERSION 
here)
8) delete all data under 'dfs.namenode.name.dir' on SNN
9) scp -r 'dfs.namenode.name.dir' to SNN on active NN
10) start-dfs.sh






  was:
As HDFS-5550 depicted, journal nodes don't upgrade, so I changed the VERSION 
manually according to the NN's VERSION.
Then I did the upgrade and got this exception.

My steps are as follows:
It's a fresh cluster with hadoop-2.0.1 before upgrading.

0) install hadoop-2.2.0 hadoop package on all nodes.
1) stop-dfs.sh on active NN
2) disable HA in the core-site.xml and hdfs-site.xml on active NN and SNN
3) start-dfs.sh -upgrade -clusterId test-cluster on active NN(only one NN now.)
4) stop-dfs.sh after active NN started successfully.
5) enable HA in the core-site.xml and hdfs-site.xml on active NN and SNN
6) change all journal nodes' VERSION manually according to NN's VERSION
7) rm -f 'dfs.journalnode.edits.dir'/test-cluster/current/* (just keep VERSION 
here)
8) delete all data under 'dfs.namenode.name.dir' on SNN
9) scp -r 'dfs.namenode.name.dir' to SNN on active NN
10) start-dfs.sh







> SNN crashed because edit log has gap after upgrade
> --
>
> Key: HDFS-5553
> URL: https://issues.apache.org/jira/browse/HDFS-5553
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, hdfs-client
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Fengdong Yu
>Priority: Blocker
>
> As HDFS-5550 depicted, journal nodes don't upgrade, so I changed the VERSION 
> manually according to the NN's VERSION.
> Then I did the upgrade and got this exception. I also marked this as Blocker.
> My steps are as follows:
> It's a fresh cluster with hadoop-2.0.1 before upgrading.
> 0) install hadoop-2.2.0 hadoop package on all nodes.
> 1) stop-dfs.sh on active NN
> 2) disable HA in the core-site.xml and hdfs-site.xml on active NN and SNN
> 3) start-dfs.sh -upgrade -clusterId test-cluster on active NN(only one NN 
> now.)
> 4) stop-dfs.sh after active NN started successfully.
> 5) enable HA in the core-site.xml and hdfs-site.xml on active NN and SNN
> 6) change all journal nodes' VERSION manually according to NN's VERSION
> 7) rm -f 'dfs.journalnode.edits.dir'/test-cluster/current/* (just keep 
> VERSION here)
> 8) delete all data under 'dfs.namenode.name.dir' on SNN
> 9) scp -r 'dfs.namenode.name.dir' to SNN on active NN
> 10) start-dfs.sh



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5553) SNN crashed because edit log has gap after upgrade

2013-11-22 Thread Fengdong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829861#comment-13829861
 ] 

Fengdong Yu commented on HDFS-5553:
---

{code}
2013-11-22 18:12:53,460 INFO 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
stream 
'http://test2.com:8480/getJournal?jid=test-cluster&segmentTxId=39&storageInfo=-48%3A412886569%3A1385114618309%3ACID-d359fe59-5e40-41b3-bc18-8595bcc5af07,
 
http://test1.com:8480/getJournal?jid=test-cluster&segmentTxId=39&storageInfo=-48%3A412886569%3A1385114618309%3ACID-d359fe59-5e40-41b3-bc18-8595bcc5af07,
 
http://test3.com:8480/getJournal?jid=test-cluster&segmentTxId=39&storageInfo=-48%3A412886569%3A1385114618309%3ACID-d359fe59-5e40-41b3-bc18-8595bcc5af07'
 to transaction ID 38
2013-11-22 18:12:53,460 INFO 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
stream 
'http://test2.com:8480/getJournal?jid=test-cluster&segmentTxId=39&storageInfo=-48%3A412886569%3A1385114618309%3ACID-d359fe59-5e40-41b3-bc18-8595bcc5af07'
 to transaction ID 38
2013-11-22 18:12:53,771 INFO org.mortbay.log: Stopped 
SelectChannelConnector@test.slave152.com:50070
2013-11-22 18:12:53,872 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
Stopping NameNode metrics system...
2013-11-22 18:12:53,873 INFO 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: ganglia thread interrupted.
2013-11-22 18:12:53,873 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
NameNode metrics system stopped.
2013-11-22 18:12:53,874 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
NameNode metrics system shutdown complete.
2013-11-22 18:12:53,875 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: 
Exception in namenode join
java.io.IOException: There appears to be a gap in the edit log.  We expected 
txid 38, but got txid 39.
at 
org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:189)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:117)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:730)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:644)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:261)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:859)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:621)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:445)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:494)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:692)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:677)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1283)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1349)
2013-11-22 18:12:53,880 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
status 1
2013-11-22 18:12:53,883 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
SHUTDOWN_MSG:
{code}

> SNN crashed because edit log has gap after upgrade
> --
>
> Key: HDFS-5553
> URL: https://issues.apache.org/jira/browse/HDFS-5553
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, hdfs-client
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Fengdong Yu
>Priority: Blocker
>
> As HDFS-5550 depicted, journal nodes don't upgrade, so I changed the VERSION 
> manually according to the NN's VERSION.
> Then I did the upgrade and got this exception.
> My steps are as follows:
> It's a fresh cluster with hadoop-2.0.1 before upgrading.
> 0) install hadoop-2.2.0 hadoop package on all nodes.
> 1) stop-dfs.sh on active NN
> 2) disable HA in the core-site.xml and hdfs-site.xml on active NN and SNN
> 3) start-dfs.sh -upgrade -clusterId test-cluster on active NN(only one NN 
> now.)
> 4) stop-dfs.sh after active NN started successfully.
> 5) change all journal nodes' VERSION manually according to NN's VERSION
> 6) rm -f 'dfs.journalnode.edits.dir'/test-cluster/current/* (just keep 
> VERSION here)
> 7) delete all data under 'dfs.namenode.name.dir' on SNN
> 8) scp -r 'dfs.namenode.name.dir' to SNN on active NN
> 9) start-dfs.sh



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5553) SNN crashed because edit log has gap after upgrade

2013-11-22 Thread Fengdong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengdong Yu updated HDFS-5553:
--

Description: 
As HDFS-5550 depicted, journal nodes don't upgrade, so I changed the VERSION 
manually according to the NN's VERSION.
Then I did the upgrade and got this exception.

My steps are as follows:
It's a fresh cluster with hadoop-2.0.1 before upgrading.

0) install hadoop-2.2.0 hadoop package on all nodes.
1) stop-dfs.sh on active NN
2) disable HA in the core-site.xml and hdfs-site.xml on active NN and SNN
3) start-dfs.sh -upgrade -clusterId test-cluster on active NN(only one NN now.)
4) stop-dfs.sh after active NN started successfully.
5) enable HA in the core-site.xml and hdfs-site.xml on active NN and SNN
6) change all journal nodes' VERSION manually according to NN's VERSION
7) rm -f 'dfs.journalnode.edits.dir'/test-cluster/current/* (just keep VERSION 
here)
8) delete all data under 'dfs.namenode.name.dir' on SNN
9) scp -r 'dfs.namenode.name.dir' to SNN on active NN
10) start-dfs.sh
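
Step 6 boils down to copying the storage identifiers from the NN's VERSION file into each journal node's VERSION file. A hedged sketch of doing that edit programmatically; the paths and the exact set of keys to copy are assumptions for illustration, not a supported upgrade tool:

{code}
import java.io.*;
import java.util.Properties;

// Hedged sketch: sync a JournalNode's VERSION file with the NN's, as in
// step 6 above. The key set below is an assumption, not a supported tool.
public class SyncJournalVersion {
  public static void main(String[] args) throws IOException {
    Properties nn = load(new File(args[0]));  // <dfs.namenode.name.dir>/current/VERSION
    File jnFile = new File(args[1]);          // <dfs.journalnode.edits.dir>/.../current/VERSION
    Properties jn = load(jnFile);

    // Identifiers that must agree between the NN and the journal nodes.
    for (String key : new String[] {"namespaceID", "clusterID", "cTime", "layoutVersion"}) {
      String value = nn.getProperty(key);
      if (value != null) {
        jn.setProperty(key, value);
      }
    }
    try (OutputStream out = new FileOutputStream(jnFile)) {
      jn.store(out, "synced from NN VERSION");
    }
  }

  private static Properties load(File f) throws IOException {
    Properties p = new Properties();
    try (InputStream in = new FileInputStream(f)) {
      p.load(in);
    }
    return p;
  }
}
{code}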






  was:
As HDFS-5550 describes, the journal nodes are not upgraded, so I changed the 
VERSION file manually to match the NN's VERSION. Then I performed the upgrade 
and got this exception.

My steps were as follows; it was a fresh cluster running hadoop-2.0.1 before 
upgrading.

0) Install the hadoop-2.2.0 package on all nodes.
1) Run stop-dfs.sh on the active NN.
2) Disable HA in core-site.xml and hdfs-site.xml on the active NN and the SNN.
3) Run start-dfs.sh -upgrade -clusterId test-cluster on the active NN (only one NN now).
4) Run stop-dfs.sh after the active NN has started successfully.
5) Change all journal nodes' VERSION files manually to match the NN's VERSION.
6) rm -f 'dfs.journalnode.edits.dir'/test-cluster/current/* (keeping only VERSION there).
7) Delete all data under 'dfs.namenode.name.dir' on the SNN.
8) scp -r 'dfs.namenode.name.dir' from the active NN to the SNN.
9) Run start-dfs.sh.







> SNN crashed because edit log has gap after upgrade
> --
>
> Key: HDFS-5553
> URL: https://issues.apache.org/jira/browse/HDFS-5553
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, hdfs-client
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Fengdong Yu
>Priority: Blocker
>
> As HDFS-5550 describes, the journal nodes are not upgraded, so I changed the 
> VERSION file manually to match the NN's VERSION. Then I performed the upgrade 
> and got this exception.
> My steps were as follows; it was a fresh cluster running hadoop-2.0.1 before 
> upgrading.
> 0) Install the hadoop-2.2.0 package on all nodes.
> 1) Run stop-dfs.sh on the active NN.
> 2) Disable HA in core-site.xml and hdfs-site.xml on the active NN and the SNN.
> 3) Run start-dfs.sh -upgrade -clusterId test-cluster on the active NN (only 
> one NN now).
> 4) Run stop-dfs.sh after the active NN has started successfully.
> 5) Re-enable HA in core-site.xml and hdfs-site.xml on the active NN and the SNN.
> 6) Change all journal nodes' VERSION files manually to match the NN's VERSION.
> 7) rm -f 'dfs.journalnode.edits.dir'/test-cluster/current/* (keeping only 
> VERSION there).
> 8) Delete all data under 'dfs.namenode.name.dir' on the SNN.
> 9) scp -r 'dfs.namenode.name.dir' from the active NN to the SNN.
> 10) Run start-dfs.sh.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5553) SNN crashed because edit log has gap after upgrade

2013-11-22 Thread Fengdong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengdong Yu updated HDFS-5553:
--

Description: 
As HDFS-5550 describes, the journal nodes are not upgraded, so I changed the 
VERSION file manually to match the NN's VERSION. Then I performed the upgrade 
and got this exception.

My steps were as follows; it was a fresh cluster running hadoop-2.0.1 before 
upgrading.

0) Install the hadoop-2.2.0 package on all nodes.
1) Run stop-dfs.sh on the active NN.
2) Disable HA in core-site.xml and hdfs-site.xml on the active NN and the SNN.
3) Run start-dfs.sh -upgrade -clusterId test-cluster on the active NN (only one NN now).
4) Run stop-dfs.sh after the active NN has started successfully.
5) Change all journal nodes' VERSION files manually to match the NN's VERSION.
6) rm -f 'dfs.journalnode.edits.dir'/test-cluster/current/* (keeping only VERSION there).
7) Delete all data under 'dfs.namenode.name.dir' on the SNN.
8) scp -r 'dfs.namenode.name.dir' from the active NN to the SNN.
9) Run start-dfs.sh.






> SNN crashed because edit log has gap after upgrade
> --
>
> Key: HDFS-5553
> URL: https://issues.apache.org/jira/browse/HDFS-5553
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, hdfs-client
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Fengdong Yu
>Priority: Blocker
>
> As HDFS-5550 describes, the journal nodes are not upgraded, so I changed the 
> VERSION file manually to match the NN's VERSION. Then I performed the upgrade 
> and got this exception.
> My steps were as follows; it was a fresh cluster running hadoop-2.0.1 before 
> upgrading.
> 0) Install the hadoop-2.2.0 package on all nodes.
> 1) Run stop-dfs.sh on the active NN.
> 2) Disable HA in core-site.xml and hdfs-site.xml on the active NN and the SNN.
> 3) Run start-dfs.sh -upgrade -clusterId test-cluster on the active NN (only 
> one NN now).
> 4) Run stop-dfs.sh after the active NN has started successfully.
> 5) Change all journal nodes' VERSION files manually to match the NN's VERSION.
> 6) rm -f 'dfs.journalnode.edits.dir'/test-cluster/current/* (keeping only 
> VERSION there).
> 7) Delete all data under 'dfs.namenode.name.dir' on the SNN.
> 8) scp -r 'dfs.namenode.name.dir' from the active NN to the SNN.
> 9) Run start-dfs.sh.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5552) Fix wrong information of "Cluster summary" in dfshealth.html

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829857#comment-13829857
 ] 

Hadoop QA commented on HDFS-5552:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615289/HDFS-5552.000.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5541//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5541//console

This message is automatically generated.

> Fix wrong information of "Cluster summary" in dfshealth.html
> ---
>
> Key: HDFS-5552
> URL: https://issues.apache.org/jira/browse/HDFS-5552
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Haohui Mai
> Attachments: HDFS-5552.000.patch, dfshealth-html.png
>
>
> "files and directories" + "blocks" = total filesystem object(s). But wrong 
> value is displayed.
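
The invariant behind this report is plain arithmetic; a tiny hedged check of it (the arguments are hypothetical stand-ins for the figures the page renders, not the actual NameNode metrics):

{code}
// Hedged sketch of the "Cluster Summary" invariant; the arguments are
// illustrative stand-ins for the figures dfshealth.html displays.
class ClusterSummaryCheck {
  static void check(long filesAndDirectories, long blocks, long totalDisplayed) {
    long expected = filesAndDirectories + blocks;
    if (expected != totalDisplayed) {
      System.err.printf("mismatch: %d + %d = %d, but the page shows %d%n",
          filesAndDirectories, blocks, expected, totalDisplayed);
    }
  }
}
{code}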



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5553) SNN crashed because edit log has gap after upgrade

2013-11-22 Thread Fengdong Yu (JIRA)
Fengdong Yu created HDFS-5553:
-

 Summary: SNN crashed because edit log has gap after upgrade
 Key: HDFS-5553
 URL: https://issues.apache.org/jira/browse/HDFS-5553
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, hdfs-client
Affects Versions: 2.2.0, 3.0.0
Reporter: Fengdong Yu
Priority: Blocker






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5430) Support TTL on CacheBasedPathDirectives

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829816#comment-13829816
 ] 

Hadoop QA commented on HDFS-5430:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615269/hdfs-5430-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache
  org.apache.hadoop.cli.TestCacheAdminCLI
  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5540//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5540//console

This message is automatically generated.

> Support TTL on CacheBasedPathDirectives
> ---
>
> Key: HDFS-5430
> URL: https://issues.apache.org/jira/browse/HDFS-5430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: hdfs-5430-1.patch
>
>
> It would be nice if CacheBasedPathDirectives supported an expiration time, 
> after which they would be automatically removed by the NameNode. This time 
> would probably be expressed in wall-clock time for the convenience of system 
> administrators.
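
A minimal sketch of the bookkeeping such a TTL would imply, assuming an absolute wall-clock expiry stored per directive (names are illustrative, not the API this JIRA eventually settled on):

{code}
// Hedged sketch of a wall-clock TTL on a cache directive; all names are
// illustrative, not the eventual HDFS API.
class CachePathDirective {
  final String path;
  final long expiryTimeMs;  // absolute wall-clock time, ms since the epoch

  CachePathDirective(String path, long ttlMs) {
    this.path = path;
    this.expiryTimeMs = System.currentTimeMillis() + ttlMs;
  }

  boolean isExpired(long nowMs) {
    return nowMs >= expiryTimeMs;
  }
}
{code}

A periodic sweep on the NameNode could then drop expired entries, e.g. directives.removeIf(d -> d.isExpired(System.currentTimeMillis())).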



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5538) URLConnectionFactory should pick up the SSL related configuration by default

2013-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829802#comment-13829802
 ] 

Hadoop QA commented on HDFS-5538:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615272/HDFS-5538.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDistributedFileSystem

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5539//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5539//console

This message is automatically generated.

> URLConnectionFactory should pick up the SSL related configuration by default
> 
>
> Key: HDFS-5538
> URL: https://issues.apache.org/jira/browse/HDFS-5538
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5538.000.patch, HDFS-5538.001.patch, 
> HDFS-5538.002.patch
>
>
> The default instance of URLConnectionFactory, DEFAULT_CONNECTION_FACTORY, does 
> not pick up any Hadoop-specific, SSL-related configuration. Its callers have 
> to set up a ConnectionConfigurator explicitly in order to pick up these 
> settings.
> This is less than ideal for HTTPS because whenever the code needs to make an 
> HTTPS connection, it is forced to go through that setup.
> This jira refactors URLConnectionFactory to ease the handling of HTTPS 
> connections (compared to the DEFAULT_CONNECTION_FACTORY we have right now).
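
A rough sketch of the direction this suggests: a factory whose connections are always run through an SSL-aware configurator, so HTTPS callers no longer do the setup themselves. The interface and class names here are hypothetical stand-ins, not the actual HDFS classes:

{code}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Hedged sketch only; all names are illustrative, not the HDFS API.
interface ConnConfigurator {
  HttpURLConnection configure(HttpURLConnection conn) throws IOException;
}

class SslAwareConnectionFactory {
  private final ConnConfigurator configurator;

  SslAwareConnectionFactory(ConnConfigurator configurator) {
    // The configurator would be built once from the SSL-related
    // configuration (truststore location, hostname verifier, etc.).
    this.configurator = configurator;
  }

  HttpURLConnection open(URL url) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    // Every connection passes through the configurator, so HTTPS callers
    // pick up the SSL settings without any per-call setup.
    return configurator.configure(conn);
  }
}
{code}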



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5552) Fix wrong information of "Cluster summary" in dfshealth.html

2013-11-22 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829777#comment-13829777
 ] 

Shinichi Yamashita commented on HDFS-5552:
--

+1 LGTM

> Fix wrong information of "Cluster summary" in dfshealth.html
> ---
>
> Key: HDFS-5552
> URL: https://issues.apache.org/jira/browse/HDFS-5552
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Haohui Mai
> Attachments: HDFS-5552.000.patch, dfshealth-html.png
>
>
> "files and directories" + "blocks" = total filesystem object(s). But wrong 
> value is displayed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)