[jira] [Commented] (HDFS-3325) When configuring "dfs.namenode.safemode.threshold-pct" to a value greater or equal to 1 there is mismatch in the UI report

2015-03-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377405#comment-14377405
 ] 

Hudson commented on HDFS-3325:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #7414 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7414/])
HDFS-3325. When configuring 'dfs.namenode.safemode.threshold-pct' to a value 
greater or equal to 1 there is mismatch in the UI report (Contributed by 
J.Andreina) (vinayakumarb: rev c6c396fcd69514ba93583268b2633557c3d74a47)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java


> When configuring "dfs.namenode.safemode.threshold-pct" to a value greater or 
> equal to 1 there is mismatch in the UI report
> --
>
> Key: HDFS-3325
> URL: https://issues.apache.org/jira/browse/HDFS-3325
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: J.Andreina
>Assignee: J.Andreina
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-3325.1.patch, HDFS-3325.2.patch
>
>
> When dfs.namenode.safemode.threshold-pct is configured to n,
> the Namenode will stay in safemode until n percent of the blocks that satisfy
> the minimal replication requirement defined by
> dfs.namenode.replication.min have been reported to the Namenode.
> But the UI displays that n percent of the total blocks + 1 blocks are
> additionally needed
> to come out of safemode.
> Scenario 1:
> 
> Configurations:
> dfs.namenode.safemode.threshold-pct = 2
> dfs.replication = 2
> dfs.namenode.replication.min =2
> Step 1: Start NN, DN1, DN2
> Step 2: Write a file "a.txt" which has 167 blocks
> Step 3: Stop NN, DN1, DN2
> Step 4: Start NN
> In the UI report, the number of blocks needed to come out of safemode and the
> number of blocks actually present differ.
> {noformat}
> Cluster Summary
> Security is OFF 
> Safe mode is ON. The reported blocks 0 needs additional 335 blocks to reach 
> the threshold 2. of total blocks 167. Safe mode will be turned off 
> automatically.
> 2 files and directories, 167 blocks = 169 total.
> Heap Memory used 57.05 MB is 2% of Commited Heap Memory 2 GB. Max Heap Memory 
> is 2 GB. 
> Non Heap Memory used 23.37 MB is 17% of Commited Non Heap Memory 130.44 MB. 
> Max Non Heap Memory is 176 MB.{noformat}
> Scenario 2:
> ===
> Configurations:
> dfs.namenode.safemode.threshold-pct = 1
> dfs.replication = 2
> dfs.namenode.replication.min =2
> Step 1: Start NN, DN1, DN2
> Step 2: Write a file "a.txt" which has 167 blocks
> Step 3: Stop NN, DN1, DN2
> Step 4: Start NN
> In the UI report, the number of blocks needed to come out of safemode and the
> number of blocks actually present differ.
> {noformat}
> Cluster Summary
> Security is OFF 
> Safe mode is ON. The reported blocks 0 needs additional 168 blocks to reach 
> the threshold 1. of total blocks 167. Safe mode will be turned off 
> automatically.
> 2 files and directories, 167 blocks = 169 total.
> Heap Memory used 56.2 MB is 2% of Commited Heap Memory 2 GB. Max Heap Memory 
> is 2 GB. 
> Non Heap Memory used 23.37 MB is 17% of Commited Non Heap Memory 130.44 MB. 
> Max Non Heap Memory is 176 MB.{noformat}
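Both scenarios show the same arithmetic mismatch, which can be sketched as follows. This is a minimal illustration with invented method names, not the actual FSNamesystem code: the safe-mode target should be ceil(threshold-pct × total blocks), while the UI text computed the "additional blocks needed" as (threshold-pct × total blocks) + 1 minus the reported count, which reproduces the 335 and 168 figures above.

```java
// Illustrative sketch of the reported mismatch (method names are mine, not
// Hadoop's actual internals).
public class SafeModeMath {

    // Correct target: the number of minimally-replicated blocks required
    // before safe mode can be left.
    static long blockThreshold(long totalBlocks, double thresholdPct) {
        return (long) Math.ceil(thresholdPct * totalBlocks);
    }

    // The off-by-one the UI displayed: one block too many is always demanded.
    static long buggyAdditionalNeeded(long totalBlocks, double thresholdPct,
                                      long reported) {
        return (long) (thresholdPct * totalBlocks) + 1 - reported;
    }

    public static void main(String[] args) {
        // Scenario 1: threshold-pct = 2, 167 blocks, 0 reported
        System.out.println(blockThreshold(167, 2.0));           // 334
        System.out.println(buggyAdditionalNeeded(167, 2.0, 0)); // 335, as in the UI
        // Scenario 2: threshold-pct = 1
        System.out.println(blockThreshold(167, 1.0));           // 167
        System.out.println(buggyAdditionalNeeded(167, 1.0, 0)); // 168, as in the UI
    }
}
```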



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3325) When configuring "dfs.namenode.safemode.threshold-pct" to a value greater or equal to 1 there is mismatch in the UI report

2015-03-23 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-3325:

   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.
Thanks [~andreina] for the contribution.



[jira] [Commented] (HDFS-7884) NullPointerException in BlockSender

2015-03-23 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377390#comment-14377390
 ] 

Brahma Reddy Battula commented on HDFS-7884:


Thanks a lot for reviews and commit !!!

> NullPointerException in BlockSender
> ---
>
> Key: HDFS-7884
> URL: https://issues.apache.org/jira/browse/HDFS-7884
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: HDFS-7884-002.patch, HDFS-7884.patch, 
> h7884_20150313.patch, 
> org.apache.hadoop.hdfs.TestAppendSnapshotTruncate-output.txt
>
>
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:264)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:506)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:249)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> BlockSender.java:264 is shown below
> {code}
>   this.volumeRef = datanode.data.getVolume(block).obtainReference();
> {code}
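A likely failure mode at that line is getVolume(block) returning null when the replica's volume was removed concurrently, so the chained obtainReference() call throws the NPE. A self-contained analogue of the defensive pattern (Volume, readBlock, and the map are illustrative stand-ins, not the actual HDFS-7884 fix):

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// A lookup that may return null must be checked before chaining a call onto
// it, so the failure surfaces as a descriptive IOException rather than a
// NullPointerException.
public class NullGuardSketch {

    interface Volume {
        String obtainReference();
    }

    static String readBlock(Map<String, Volume> volumes, String block)
            throws IOException {
        Volume v = volumes.get(block); // null if the block was removed concurrently
        if (v == null) {
            throw new IOException("Replica " + block
                + " not found; its volume may have been removed");
        }
        return v.obtainReference();
    }

    public static void main(String[] args) throws IOException {
        Map<String, Volume> volumes = new HashMap<>();
        volumes.put("blk_1", () -> "ref:blk_1");
        System.out.println(readBlock(volumes, "blk_1")); // ref:blk_1
        try {
            readBlock(volumes, "blk_gone"); // guarded: IOException, not NPE
        } catch (IOException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```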





[jira] [Updated] (HDFS-3325) When configuring "dfs.namenode.safemode.threshold-pct" to a value greater or equal to 1 there is mismatch in the UI report

2015-03-23 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-3325:

Target Version/s: 2.8.0  (was: 2.0.0-alpha)



[jira] [Updated] (HDFS-3325) When configuring "dfs.namenode.safemode.threshold-pct" to a value greater or equal to 1 there is mismatch in the UI report

2015-03-23 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-3325:

Target Version/s: 2.0.0-alpha  (was: 2.0.0-alpha, 3.0.0)



[jira] [Commented] (HDFS-3325) When configuring "dfs.namenode.safemode.threshold-pct" to a value greater or equal to 1 there is mismatch in the UI report

2015-03-23 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377387#comment-14377387
 ] 

Vinayakumar B commented on HDFS-3325:
-

Failures are not related to the patch.



[jira] [Commented] (HDFS-3325) When configuring "dfs.namenode.safemode.threshold-pct" to a value greater or equal to 1 there is mismatch in the UI report

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377377#comment-14377377
 ] 

Hadoop QA commented on HDFS-3325:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12704758/HDFS-3325.2.patch
  against trunk revision 2c238ae.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
  org.apache.hadoop.tracing.TestTracing
  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
  org.apache.hadoop.hdfs.server.balancer.TestBalancer

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10046//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10046//console

This message is automatically generated.



[jira] [Commented] (HDFS-7956) Improve logging for DatanodeRegistration.

2015-03-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377374#comment-14377374
 ] 

Hudson commented on HDFS-7956:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7412 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7412/])
HDFS-7956. Improve logging for DatanodeRegistration. Contributed by Plamen 
Jeliazkov. (shv: rev 970ee3fc56a68afade98017296cf9d057f225a46)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeRegistration.java


> Improve logging for DatanodeRegistration.
> -
>
> Key: HDFS-7956
> URL: https://issues.apache.org/jira/browse/HDFS-7956
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Fix For: 2.7.0
>
> Attachments: HDFS-7956.1.patch
>
>
> {{DatanodeRegistration.toString()}}
> prints only its address without the port; it should print its full address,
> similar to {{NamenodeRegistration}}.
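The change described amounts to including the port in the string form. A hypothetical sketch (class and field names here are illustrative; the real DatanodeRegistration carries much more state):

```java
// toString() should report host:port, as NamenodeRegistration's string form
// does, so log lines identify the exact registered endpoint.
public class RegistrationToStringDemo {

    static class Registration {
        final String host;
        final int port;

        Registration(String host, int port) {
            this.host = host;
            this.port = port;
        }

        @Override
        public String toString() {
            return host + ":" + port; // full address, including the port
        }
    }

    public static void main(String[] args) {
        System.out.println(new Registration("dn1.example.com", 50010));
        // prints dn1.example.com:50010
    }
}
```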





[jira] [Updated] (HDFS-7735) Optimize decommission Datanodes to reduce the impact on NameNode's performance

2015-03-23 Thread zhaoyunjiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyunjiong updated HDFS-7735:
---
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> Optimize decommission Datanodes to reduce the impact on NameNode's performance
> --
>
> Key: HDFS-7735
> URL: https://issues.apache.org/jira/browse/HDFS-7735
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
> Attachments: HDFS-7735.patch
>
>
> When decommissioning DataNodes, by default the DecommissionManager checks
> progress every 30 seconds while holding the Namesystem write lock. This
> significantly impacts NameNode performance.





[jira] [Updated] (HDFS-7956) Improve logging for DatanodeRegistration.

2015-03-23 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7956:
--
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I just committed this. Thank you Plamen.



[jira] [Commented] (HDFS-7978) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-23 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377369#comment-14377369
 ] 

Walter Su commented on HDFS-7978:
-

I skipped all of these:
1. pure "string1" + "string2" (the compiler optimizes constant concatenation)
2. "string1" + obj1 (only when obj1 is a String, because that is inexpensive)
3. debug("string", obj1, obj2)
4. debug(obj1)

I only wrapped these:
1. "string1" + obj1 + "string2" (concatenating objects and Strings more than 4-5
times)
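The guard matters for concatenation-style calls because the message String is built eagerly at the call site, whether or not debug logging is on. A self-contained sketch (debugEnabled and part() are stand-ins for LOG.isDebugEnabled() and an expensive operand):

```java
// Without the guard, the operands of the concatenation are evaluated even
// when debug logging is disabled; the guard skips that work entirely.
public class GuardDemo {

    static boolean debugEnabled = false;
    static int partCalls = 0;

    static String part() {
        partCalls++; // counts how often the expensive operand is evaluated
        return "x";
    }

    static void debug(String msg) {
        if (debugEnabled) {
            System.out.println(msg);
        }
    }

    public static void main(String[] args) {
        debug("a=" + part() + " b=" + part()); // operands run despite debug off
        System.out.println(partCalls);         // 2

        partCalls = 0;
        if (debugEnabled) {                    // the isDebugEnabled() guard
            debug("a=" + part() + " b=" + part());
        }
        System.out.println(partCalls);         // 0
    }
}
```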

> Add LOG.isDebugEnabled() guard for some LOG.debug(..)
> -
>
> Key: HDFS-7978
> URL: https://issues.apache.org/jira/browse/HDFS-7978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-7978.001.patch
>
>
> {{isDebugEnabled()}} is optional. But when there are:
> 1. lots of concatenated Strings
> 2. complicated function calls
> in the arguments, {{LOG.debug(..)}} should be guarded with
> {{LOG.isDebugEnabled()}} to avoid unnecessary argument evaluation and improve
> performance.





[jira] [Commented] (HDFS-7978) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-23 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377352#comment-14377352
 ] 

Walter Su commented on HDFS-7978:
-

I found a lot of debug("string", obj1, obj2..) calls and didn't wrap them. I
won't wrap with isDebugEnabled() unless it's necessary. I only wrap calls
written with concatenation, like "string" + obj1 + obj2 + "string".



[jira] [Commented] (HDFS-7956) Improve logging for DatanodeRegistration.

2015-03-23 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377348#comment-14377348
 ] 

Konstantin Shvachko commented on HDFS-7956:
---

Don't need new tests, as this is a logging-only change.
Don't see javac or javadoc warnings in DatanodeRegistration.
This should not affect the yarn tests, which failed on the Jenkins build.
Will commit.



[jira] [Commented] (HDFS-6261) Add document for enabling node group layer in HDFS

2015-03-23 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377332#comment-14377332
 ] 

Binglin Chang commented on HDFS-6261:
-

Sorry for the delay; I will update the patch soon.

> Add document for enabling node group layer in HDFS
> --
>
> Key: HDFS-6261
> URL: https://issues.apache.org/jira/browse/HDFS-6261
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation
>Reporter: Wenwu Peng
>Assignee: Binglin Chang
>  Labels: documentation
> Attachments: 2-layer-topology.png, 3-layer-topology.png, 
> 3layer-topology.png, 4layer-topology.png, HDFS-6261.v1.patch, 
> HDFS-6261.v1.patch, HDFS-6261.v2.patch, HDFS-6261.v3.patch
>
>
> Most of the patches from umbrella JIRA HADOOP-8468 have been committed.
> However, there is no documentation introducing NodeGroup awareness (Hadoop
> Virtualization Extensions) or explaining how to configure it, so we need to
> document it.
> 1. Document NodeGroup-aware topics at http://hadoop.apache.org/docs/current
> 2. Document NodeGroup-aware properties in core-default.xml.





[jira] [Commented] (HDFS-7978) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-23 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377321#comment-14377321
 ] 

Andrew Wang commented on HDFS-7978:
---

Hey Walter, you can pass objects into slf4j log methods without calling 
{{toString}} on them first, and I believe slf4j only calls {{toString}} if the 
log level is enabled. This saves the construction and {{toString}} costs.
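A minimal analogue of the behavior Andrew describes, with no slf4j dependency (this debug() imitates the idea; it is not the real slf4j API): the argument's toString() runs only when the level is enabled, so plain parameterized calls need no isDebugEnabled() guard.

```java
// The level check is cheap; the expensive toString() of the argument is
// deferred until the message is actually formatted.
public class LazyLogDemo {

    static int toStringCalls = 0;

    static final Object EXPENSIVE = new Object() {
        @Override
        public String toString() {
            toStringCalls++;
            return "expensive";
        }
    };

    static void debug(boolean enabled, String format, Object arg) {
        if (!enabled) {
            return; // cheap level check; arg.toString() never runs
        }
        System.out.println(format.replace("{}", arg.toString()));
    }

    public static void main(String[] args) {
        debug(false, "state = {}", EXPENSIVE); // disabled: toString() not called
        System.out.println(toStringCalls);     // 0
        debug(true, "state = {}", EXPENSIVE);  // prints "state = expensive"
        System.out.println(toStringCalls);     // 1
    }
}
```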



[jira] [Commented] (HDFS-7961) Trigger full block report after hot swapping disk

2015-03-23 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377317#comment-14377317
 ] 

Andrew Wang commented on HDFS-7961:
---

+1 pending jenkins, thanks Eddy

> Trigger full block report after hot swapping disk
> -
>
> Key: HDFS-7961
> URL: https://issues.apache.org/jira/browse/HDFS-7961
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-7961.000.patch, HDFS-7961.001.patch, 
> HDFS-7961.002.patch, HDFS-7961.003.patch
>
>
> As discussed in HDFS-7960, the NN could not remove the datanode storage
> metadata from its memory.
> The DN should trigger a full block report immediately after hot swapping
> drives.





[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-23 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377314#comment-14377314
 ] 

Andrew Wang commented on HDFS-7960:
---

HDFS-7979 has the follow-up.

> The full block report should prune zombie storages even if they're not empty
> 
>
> Key: HDFS-7960
> URL: https://issues.apache.org/jira/browse/HDFS-7960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
> HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
> HDFS-7960.007.patch, HDFS-7960.008.patch
>
>
> The full block report should prune zombie storages even if they're not empty. 
>  We have seen cases in production where zombie storages have not been pruned 
> subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
> is a block in some old storage which is actually not there.  In this case, 
> the block will not show up in the "new" storage (once old is renamed to new) 
> and the old storage will linger forever as a zombie, even with the HDFS-7596 
> fix applied.  This also happens with datanode hotplug, when a drive is 
> removed.  In this case, an entire storage (volume) goes away but the blocks 
> do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7979) Initialize block report IDs with a random number

2015-03-23 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-7979:
--
Status: Patch Available  (was: Open)

> Initialize block report IDs with a random number
> 
>
> Key: HDFS-7979
> URL: https://issues.apache.org/jira/browse/HDFS-7979
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HDFS-7979.001.patch
>
>
> Right now block report IDs use system nanotime. This isn't that random, so 
> let's start it at a random number for some more safety.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7979) Initialize block report IDs with a random number

2015-03-23 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-7979:
--
Attachment: HDFS-7979.001.patch

Patch attached. I think the nanotime isn't more useful than a counter with a 
random start point, so that's what I changed it to. LMK what you think.

Also added the interface annotation to BlockReportContext that [~hitliuyi] 
asked for.
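The counter-with-a-random-start-point scheme described above can be sketched in a few lines. The class and method names below are illustrative only, not the actual patch:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;

public class BlockReportIdDemo {
  // Seeded once with a random value so IDs are unpredictable across restarts;
  // subsequent IDs form a simple monotonic sequence.
  private final AtomicLong lastId =
      new AtomicLong(ThreadLocalRandom.current().nextLong());

  long nextReportId() {
    return lastId.incrementAndGet();
  }

  public static void main(String[] args) {
    BlockReportIdDemo demo = new BlockReportIdDemo();
    long a = demo.nextReportId();
    long b = demo.nextReportId();
    // Consecutive IDs always differ by exactly 1, even across overflow,
    // thanks to two's-complement wraparound.
    System.out.println(b - a);
  }
}
```

Compared with raw nanotime, this keeps IDs unique within a process while making the starting value unguessable.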

> Initialize block report IDs with a random number
> 
>
> Key: HDFS-7979
> URL: https://issues.apache.org/jira/browse/HDFS-7979
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HDFS-7979.001.patch
>
>
> Right now block report IDs use system nanotime. This isn't that random, so 
> let's start it at a random number for some more safety.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7961) Trigger full block report after hot swapping disk

2015-03-23 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-7961:

Attachment: HDFS-7961.003.patch

Rebased to fix conflicts introduced by HDFS-7960.

> Trigger full block report after hot swapping disk
> -
>
> Key: HDFS-7961
> URL: https://issues.apache.org/jira/browse/HDFS-7961
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-7961.000.patch, HDFS-7961.001.patch, 
> HDFS-7961.002.patch, HDFS-7961.003.patch
>
>
> As discussed in HDFS-7960, NN could not remove the data storage metadata from 
> its memory. 
> DN should trigger a full block report immediately after running hot swapping 
> drives.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7979) Initialize block report IDs with a random number

2015-03-23 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-7979:
-

 Summary: Initialize block report IDs with a random number
 Key: HDFS-7979
 URL: https://issues.apache.org/jira/browse/HDFS-7979
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor


Right now block report IDs use system nanotime. This isn't that random, so 
let's start it at a random number for some more safety.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377303#comment-14377303
 ] 

Hudson commented on HDFS-7960:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7411 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7411/])
HDFS-7960. The full block report should prune zombie storages even if they're 
not empty. Contributed by Colin McCabe and Eddy Xu. (wang: rev 
50ee8f4e67a66aa77c5359182f61f3e951844db6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockReportContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTriggerBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestNNHandlesCombinedBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/BlockReportTestBase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/TestBlockListAsLongs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestNNHandlesBlockReportPerStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeProtocolRetryPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDnRespectsBlockReportSplitThreshold.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockHasMultipleReplicasOnSameDN.java


> The full block report should prune zombie storages even if they're not empty
> 
>
> Key: HDFS-7960
> URL: https://issues.apache.org/jira/browse/HDFS-7960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
> HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
> HDFS-7960.007.patch, HDFS-7960.008.patch
>
>
> The full block report should prune zombie storages even if they're not empty. 
>  We have seen cases in production where zombie storages have not been pruned 
> subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
> is a block in some old storage which is actually not there.  In this case, 
> the block will not show up in the "new" storage (once old is renamed to new) 
> and the old storage will linger forever as a zombie, even with the HDFS-7596 
> fix applied.  This also happens with datanode hotplug, when a drive is 
> removed.  In this case, an entire storage (volume) goes away but the blocks 
> do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (HDFS-7978) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-23 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377302#comment-14377302
 ] 

Walter Su commented on HDFS-7978:
-

Thanks [~andrew.wang] for the comments. 
1. I know little about slf4j, so I looked into it. I found that the usage 
pattern is a little different: we need to call something like 
{{logger.debug(arg1, arg2, ...)}}.
2. I wrapped calls like {{LOG.debug(this + ...)}}. If you look at 
{{FileJournalManager.toString()}}, {{Token.toString()}}, and {{URI.toString()}}, 
you will find that they are a bit expensive.
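The cost difference under discussion can be made concrete with a small, self-contained sketch. It uses the JDK's {{java.util.logging}} in place of slf4j/commons-logging purely for illustration: an unguarded debug call pays for argument construction (including {{toString()}}) even when the level is disabled, while a guard skips it.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.logging.Level;
import java.util.logging.Logger;

public class DebugGuardDemo {
  private static final Logger LOG = Logger.getLogger(DebugGuardDemo.class.getName());

  /** Counts how many times toString() runs, to make the hidden cost visible. */
  static final class Expensive {
    static final AtomicInteger CALLS = new AtomicInteger();
    @Override public String toString() {
      CALLS.incrementAndGet();
      return "expensive";
    }
  }

  // Unguarded: the message string (and toString()) is built even when FINE is off.
  static void unguarded(Object arg) {
    LOG.fine("state = " + arg);
  }

  // Guarded: string construction is skipped unless the level is enabled.
  static void guarded(Object arg) {
    if (LOG.isLoggable(Level.FINE)) {
      LOG.fine("state = " + arg);
    }
  }

  public static void main(String[] args) {
    LOG.setLevel(Level.INFO);                    // debug-level (FINE) disabled
    Expensive e = new Expensive();
    unguarded(e);                                // toString() runs anyway
    guarded(e);                                  // toString() is skipped
    System.out.println("toString calls: " + Expensive.CALLS.get());
  }
}
```

slf4j's placeholder form, {{logger.debug("state = {}", arg)}}, achieves the same deferral without an explicit guard, since the arguments are only rendered when the message is actually emitted.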

> Add LOG.isDebugEnabled() guard for some LOG.debug(..)
> -
>
> Key: HDFS-7978
> URL: https://issues.apache.org/jira/browse/HDFS-7978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-7978.001.patch
>
>
> {{isDebugEnabled()}} is optional. But when there are :
> 1. lots of concatenating Strings
> 2. complicated function calls
> in the arguments, {{LOG.debug(..)}} should be guarded with 
> {{LOG.isDebugEnabled()}} to avoid unnecessary argument evaluation and improve 
> performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7961) Trigger full block report after hot swapping disk

2015-03-23 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377298#comment-14377298
 ] 

Andrew Wang commented on HDFS-7961:
---

We need a rebase of this patch since HDFS-7960 was just committed, the 
arguments for blockReport have changed.

> Trigger full block report after hot swapping disk
> -
>
> Key: HDFS-7961
> URL: https://issues.apache.org/jira/browse/HDFS-7961
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-7961.000.patch, HDFS-7961.001.patch, 
> HDFS-7961.002.patch
>
>
> As discussed in HDFS-7960, NN could not remove the data storage metadata from 
> its memory. 
> DN should trigger a full block report immediately after running hot swapping 
> drives.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-23 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377296#comment-14377296
 ] 

Andrew Wang commented on HDFS-7960:
---

I also thought about it a bit, and the monotonic clock is roughly the time 
since boot, which isn't that random. I'd feel better if we seeded with a random 
number and then added the monotime on top of that.

I'll file a follow-up for that, and fix the interface audience nit there too.

> The full block report should prune zombie storages even if they're not empty
> 
>
> Key: HDFS-7960
> URL: https://issues.apache.org/jira/browse/HDFS-7960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
> HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
> HDFS-7960.007.patch, HDFS-7960.008.patch
>
>
> The full block report should prune zombie storages even if they're not empty. 
>  We have seen cases in production where zombie storages have not been pruned 
> subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
> is a block in some old storage which is actually not there.  In this case, 
> the block will not show up in the "new" storage (once old is renamed to new) 
> and the old storage will linger forever as a zombie, even with the HDFS-7596 
> fix applied.  This also happens with datanode hotplug, when a drive is 
> removed.  In this case, an entire storage (volume) goes away but the blocks 
> do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-23 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-7960:
--
   Resolution: Fixed
Fix Version/s: 2.7.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch Colin and Eddy, and Yi for reviewing. I've committed this 
down to branch-2.7 for 2.7.0.

> The full block report should prune zombie storages even if they're not empty
> 
>
> Key: HDFS-7960
> URL: https://issues.apache.org/jira/browse/HDFS-7960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
> HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
> HDFS-7960.007.patch, HDFS-7960.008.patch
>
>
> The full block report should prune zombie storages even if they're not empty. 
>  We have seen cases in production where zombie storages have not been pruned 
> subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
> is a block in some old storage which is actually not there.  In this case, 
> the block will not show up in the "new" storage (once old is renamed to new) 
> and the old storage will linger forever as a zombie, even with the HDFS-7596 
> fix applied.  This also happens with datanode hotplug, when a drive is 
> removed.  In this case, an entire storage (volume) goes away but the blocks 
> do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7884) NullPointerException in BlockSender

2015-03-23 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377288#comment-14377288
 ] 

Vinayakumar B commented on HDFS-7884:
-

Thanks [~szetszwo] and [~brahmareddy]

> NullPointerException in BlockSender
> ---
>
> Key: HDFS-7884
> URL: https://issues.apache.org/jira/browse/HDFS-7884
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: HDFS-7884-002.patch, HDFS-7884.patch, 
> h7884_20150313.patch, 
> org.apache.hadoop.hdfs.TestAppendSnapshotTruncate-output.txt
>
>
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.(BlockSender.java:264)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:506)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:249)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> BlockSender.java:264 is shown below
> {code}
>   this.volumeRef = datanode.data.getVolume(block).obtainReference();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-23 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377287#comment-14377287
 ] 

Andrew Wang commented on HDFS-7960:
---

Looks good to me too. I'll add the InterfaceAudience line at commit; it's a 
super minor change.

> The full block report should prune zombie storages even if they're not empty
> 
>
> Key: HDFS-7960
> URL: https://issues.apache.org/jira/browse/HDFS-7960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
> HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
> HDFS-7960.007.patch, HDFS-7960.008.patch
>
>
> The full block report should prune zombie storages even if they're not empty. 
>  We have seen cases in production where zombie storages have not been pruned 
> subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
> is a block in some old storage which is actually not there.  In this case, 
> the block will not show up in the "new" storage (once old is renamed to new) 
> and the old storage will linger forever as a zombie, even with the HDFS-7596 
> fix applied.  This also happens with datanode hotplug, when a drive is 
> removed.  In this case, an entire storage (volume) goes away but the blocks 
> do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7884) NullPointerException in BlockSender

2015-03-23 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7884:
--
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Brahma!

> NullPointerException in BlockSender
> ---
>
> Key: HDFS-7884
> URL: https://issues.apache.org/jira/browse/HDFS-7884
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: HDFS-7884-002.patch, HDFS-7884.patch, 
> h7884_20150313.patch, 
> org.apache.hadoop.hdfs.TestAppendSnapshotTruncate-output.txt
>
>
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.(BlockSender.java:264)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:506)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:249)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> BlockSender.java:264 is shown below
> {code}
>   this.volumeRef = datanode.data.getVolume(block).obtainReference();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7884) NullPointerException in BlockSender

2015-03-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377279#comment-14377279
 ] 

Hudson commented on HDFS-7884:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7410 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7410/])
HDFS-7884. Fix NullPointerException in BlockSender when the generation stamp 
provided by the client is larger than the one stored in the datanode.  
Contributed by Brahma Reddy Battula (szetszwo: rev 
d7e3c3364eb904f55a878bc14c331952f9dadab2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> NullPointerException in BlockSender
> ---
>
> Key: HDFS-7884
> URL: https://issues.apache.org/jira/browse/HDFS-7884
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-7884-002.patch, HDFS-7884.patch, 
> h7884_20150313.patch, 
> org.apache.hadoop.hdfs.TestAppendSnapshotTruncate-output.txt
>
>
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.(BlockSender.java:264)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:506)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:249)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> BlockSender.java:264 is shown below
> {code}
>   this.volumeRef = datanode.data.getVolume(block).obtainReference();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7884) NullPointerException in BlockSender

2015-03-23 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377272#comment-14377272
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7884:
---

> While rebasing the patch, found that test actually passes even though there 
> is a NPE.

You are right.  The client will retry when there is an NPE, so the test does 
indeed pass.  Let's commit Brahma's new patch first and add the test later.

+1 on HDFS-7884-002.patch
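For context, the NPE in the quoted stack trace comes from chaining a lookup that can return null ({{datanode.data.getVolume(block).obtainReference()}}). Below is a minimal, self-contained sketch of the defensive pattern only; the map and names are stand-ins, not the actual HDFS code or fix:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class VolumeLookupDemo {
  // Stand-in for the datanode's block -> volume mapping (illustrative only).
  private final Map<String, String> volumes = new HashMap<>();

  String getVolume(String blockId) {
    return volumes.get(blockId);   // may return null for an unknown block
  }

  // Chaining getVolume(block).obtainReference() throws NullPointerException
  // for an unknown block; checking for null first lets us raise a descriptive
  // IOException that the caller (or client retry logic) can handle.
  String obtainReference(String blockId) throws IOException {
    String volume = getVolume(blockId);
    if (volume == null) {
      throw new IOException("Block " + blockId + " is not found on any volume");
    }
    return volume;
  }

  public static void main(String[] args) {
    VolumeLookupDemo demo = new VolumeLookupDemo();
    try {
      demo.obtainReference("blk_1");
    } catch (IOException e) {
      System.out.println("IOException: " + e.getMessage());
    }
  }
}
```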

> NullPointerException in BlockSender
> ---
>
> Key: HDFS-7884
> URL: https://issues.apache.org/jira/browse/HDFS-7884
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-7884-002.patch, HDFS-7884.patch, 
> h7884_20150313.patch, 
> org.apache.hadoop.hdfs.TestAppendSnapshotTruncate-output.txt
>
>
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.(BlockSender.java:264)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:506)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:249)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> BlockSender.java:264 is shown below
> {code}
>   this.volumeRef = datanode.data.getVolume(block).obtainReference();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7978) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-23 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377256#comment-14377256
 ] 

Andrew Wang commented on HDFS-7978:
---

Hey Walter, I'd prefer if we worked on switching over to slf4j in most of these 
cases, rather than adding if guards. That saves the string construction cost. I 
don't think any of the functions you wrapped in this patch are particularly 
expensive, but if you disagree we can keep the if guards.

> Add LOG.isDebugEnabled() guard for some LOG.debug(..)
> -
>
> Key: HDFS-7978
> URL: https://issues.apache.org/jira/browse/HDFS-7978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-7978.001.patch
>
>
> {{isDebugEnabled()}} is optional. But when there are :
> 1. lots of concatenating Strings
> 2. complicated function calls
> in the arguments, {{LOG.debug(..)}} should be guarded with 
> {{LOG.isDebugEnabled()}} to avoid unnecessary argument evaluation and improve 
> performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7715) Implement the Hitchhiker erasure coding algorithm

2015-03-23 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377253#comment-14377253
 ] 

Kai Zheng commented on HDFS-7715:
-

Sure Jack. I will take time to look at the code and give my comments. Thanks!

> Implement the Hitchhiker erasure coding algorithm
> -
>
> Key: HDFS-7715
> URL: https://issues.apache.org/jira/browse/HDFS-7715
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: jack liuquan
> Attachments: HDFS-7715.zip
>
>
> [Hitchhiker | 
> http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
> a new erasure coding algorithm developed as a research project at UC 
> Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
> during data reconstruction. This JIRA aims to introduce Hitchhiker to the 
> HDFS-EC framework, as one of the pluggable codec algorithms.
> The existing implementation is based on HDFS-RAID. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7978) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377245#comment-14377245
 ] 

Hadoop QA commented on HDFS-7978:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706822/HDFS-7978.001.patch
  against trunk revision 9fae455.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10047//console

This message is automatically generated.

> Add LOG.isDebugEnabled() guard for some LOG.debug(..)
> -
>
> Key: HDFS-7978
> URL: https://issues.apache.org/jira/browse/HDFS-7978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-7978.001.patch
>
>
> {{isDebugEnabled()}} is optional. But when there are :
> 1. lots of concatenating Strings
> 2. complicated function calls
> in the arguments, {{LOG.debug(..)}} should be guarded with 
> {{LOG.isDebugEnabled()}} to avoid unnecessary argument evaluation and improve 
> performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7715) Implement the Hitchhiker erasure coding algorithm

2015-03-23 Thread jack liuquan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jack liuquan updated HDFS-7715:
---
Attachment: HDFS-7715.zip

Hi all, I have uploaded the core code of Hitchhiker. Please review it and let 
me know if you find anything that isn't right. Thanks!

> Implement the Hitchhiker erasure coding algorithm
> -
>
> Key: HDFS-7715
> URL: https://issues.apache.org/jira/browse/HDFS-7715
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: jack liuquan
> Attachments: HDFS-7715.zip
>
>
> [Hitchhiker | 
> http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
> a new erasure coding algorithm developed as a research project at UC 
> Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
> during data reconstruction. This JIRA aims to introduce Hitchhiker to the 
> HDFS-EC framework, as one of the pluggable codec algorithms.
> The existing implementation is based on HDFS-RAID. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7977) NFS couldn't take percentile intervals

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377233#comment-14377233
 ] 

Hadoop QA commented on HDFS-7977:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706771/HDFS-7977.001.patch
  against trunk revision 2c238ae.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

  org.apache.hadoop.tracing.TestTracing
  org.apache.hadoop.hdfs.security.TestDelegationToken

  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs-nfs 

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10043//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10043//console

This message is automatically generated.

> NFS couldn't take percentile intervals
> --
>
> Key: HDFS-7977
> URL: https://issues.apache.org/jira/browse/HDFS-7977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7977.001.patch
>
>
> The configuration "nfs.metrics.percentiles.intervals" is not recognized by 
> NFS gateway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7410) Support CreateFlags with append() to support hsync() for appending streams

2015-03-23 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-7410:

Attachment: HDFS-7410-004.patch

Attaching the rebased patch.

> Support CreateFlags with append() to support hsync() for appending streams
> --
>
> Key: HDFS-7410
> URL: https://issues.apache.org/jira/browse/HDFS-7410
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-7410-001.patch, HDFS-7410-002.patch, 
> HDFS-7410-003.patch, HDFS-7410-004.patch
>
>
> Current FileSystem APIs include CreateFlag only for the create() API; some of 
> these flags (e.g. SYNC_BLOCK) are client-side only and are not stored in the 
> file's metadata, so the append() operation does not know about them.
> It would be good to support these features for append too.
> Compatibility: one more overloaded append API needs to be added to support 
> the flags, keeping the current API as is.
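A hedged sketch of the compatibility approach described above: keep the existing append() and add one overload that takes flags. The CreateFlag enum and method shapes below are illustrative stand-ins, not Hadoop's actual client API.

```java
import java.util.EnumSet;

// Sketch of adding an overloaded append() that accepts flags while the
// existing no-flags API keeps working. All names here are hypothetical.
public class AppendOverload {
    enum CreateFlag { APPEND, SYNC_BLOCK }

    // Existing API: delegates to the new overload with default flags.
    static String append(String path) {
        return append(path, EnumSet.of(CreateFlag.APPEND));
    }

    // New overload: callers can request client-side behavior such as
    // SYNC_BLOCK that is not recorded in file metadata.
    static String append(String path, EnumSet<CreateFlag> flags) {
        return path + " flags=" + flags;
    }

    public static void main(String[] args) {
        String a = append("/f");
        String b = append("/f", EnumSet.of(CreateFlag.APPEND, CreateFlag.SYNC_BLOCK));
        if (!a.contains("APPEND") || !b.contains("SYNC_BLOCK")) {
            throw new AssertionError();
        }
        System.out.println(a + " | " + b);
    }
}
```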





[jira] [Updated] (HDFS-7978) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-23 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7978:

Status: Patch Available  (was: Open)

> Add LOG.isDebugEnabled() guard for some LOG.debug(..)
> -
>
> Key: HDFS-7978
> URL: https://issues.apache.org/jira/browse/HDFS-7978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-7978.001.patch
>
>
> {{isDebugEnabled()}} is optional. But when the arguments involve:
> 1. lots of String concatenation
> 2. complicated function calls
> {{LOG.debug(..)}} should be guarded with 
> {{LOG.isDebugEnabled()}} to avoid unnecessary argument evaluation and 
> improve performance.
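The guard pattern the issue describes can be sketched with a minimal stand-in logger (an illustration only; not Hadoop's actual commons-logging Log interface). The point is that the unguarded call pays for building the message even when debug logging is off:

```java
// Demonstrates why LOG.debug(..) with expensive arguments should be
// guarded by isDebugEnabled(). The Log interface here is a stand-in.
public class DebugGuard {
    static int evaluations = 0;

    // Simulates an expensive argument: string building, method calls, etc.
    static String expensive() { evaluations++; return "detail"; }

    interface Log {
        boolean isDebugEnabled();
        void debug(String msg);
    }

    public static void main(String[] args) {
        Log log = new Log() {
            public boolean isDebugEnabled() { return false; }
            public void debug(String msg) { /* debug disabled: drop message */ }
        };

        // Unguarded: the argument string is built even though debug is off.
        log.debug("state=" + expensive());
        int unguarded = evaluations;

        // Guarded: expensive() is skipped entirely when debug is off.
        if (log.isDebugEnabled()) {
            log.debug("state=" + expensive());
        }
        int guarded = evaluations - unguarded;

        if (unguarded != 1 || guarded != 0) throw new AssertionError();
        System.out.println("unguarded evals=" + unguarded + ", guarded evals=" + guarded);
    }
}
```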





[jira] [Updated] (HDFS-7978) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-23 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7978:

Attachment: HDFS-7978.001.patch

> Add LOG.isDebugEnabled() guard for some LOG.debug(..)
> -
>
> Key: HDFS-7978
> URL: https://issues.apache.org/jira/browse/HDFS-7978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-7978.001.patch
>
>
> {{isDebugEnabled()}} is optional. But when the arguments involve:
> 1. lots of String concatenation
> 2. complicated function calls
> {{LOG.debug(..)}} should be guarded with 
> {{LOG.isDebugEnabled()}} to avoid unnecessary argument evaluation and 
> improve performance.





[jira] [Commented] (HDFS-7337) Configurable and pluggable Erasure Codec and schema

2015-03-23 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377227#comment-14377227
 ] 

Kai Zheng commented on HDFS-7337:
-

Thanks Zhe for the very good thought and the new JIRA HADOOP-11740 to work on 
it. 

> Configurable and pluggable Erasure Codec and schema
> ---
>
> Key: HDFS-7337
> URL: https://issues.apache.org/jira/browse/HDFS-7337
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Kai Zheng
> Attachments: HDFS-7337-prototype-v1.patch, 
> HDFS-7337-prototype-v2.zip, HDFS-7337-prototype-v3.zip, 
> PluggableErasureCodec-v2.pdf, PluggableErasureCodec.pdf
>
>
> According to HDFS-7285 and the design, this proposes supporting multiple 
> erasure codecs via a pluggable approach. It allows defining and configuring 
> multiple codec schemas with different coding algorithms and parameters. The 
> resultant codec schemas can be specified via a command tool for different 
> file folders. While designing and implementing such a pluggable framework, 
> a concrete default codec (Reed-Solomon) should also be implemented to prove 
> the framework is useful and workable. A separate JIRA could be opened for the 
> RS codec implementation.
> Note HDFS-7353 will focus on the very low-level codec API and implementation 
> to make concrete vendor libraries transparent to the upper layer. This JIRA 
> focuses on higher-level concerns such as configuration and schema handling.





[jira] [Assigned] (HDFS-7978) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-23 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su reassigned HDFS-7978:
---

Assignee: Walter Su

> Add LOG.isDebugEnabled() guard for some LOG.debug(..)
> -
>
> Key: HDFS-7978
> URL: https://issues.apache.org/jira/browse/HDFS-7978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Walter Su
>Assignee: Walter Su
>
> {{isDebugEnabled()}} is optional. But when the arguments involve:
> 1. lots of String concatenation
> 2. complicated function calls
> {{LOG.debug(..)}} should be guarded with 
> {{LOG.isDebugEnabled()}} to avoid unnecessary argument evaluation and 
> improve performance.





[jira] [Created] (HDFS-7978) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-23 Thread Walter Su (JIRA)
Walter Su created HDFS-7978:
---

 Summary: Add LOG.isDebugEnabled() guard for some LOG.debug(..)
 Key: HDFS-7978
 URL: https://issues.apache.org/jira/browse/HDFS-7978
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Walter Su


{{isDebugEnabled()}} is optional. But when the arguments involve:
1. lots of String concatenation
2. complicated function calls
{{LOG.debug(..)}} should be guarded with 
{{LOG.isDebugEnabled()}} to avoid unnecessary argument evaluation and 
improve performance.





[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377207#comment-14377207
 ] 

Hadoop QA commented on HDFS-7960:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706763/HDFS-7960.008.patch
  against trunk revision 2c238ae.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
50 warning messages.
See 
https://builds.apache.org/job/PreCommit-HDFS-Build/10042//artifact/patchprocess/diffJavadocWarnings.txt
 for details.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

  org.apache.hadoop.tracing.TestTracing

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10042//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10042//console

This message is automatically generated.

> The full block report should prune zombie storages even if they're not empty
> 
>
> Key: HDFS-7960
> URL: https://issues.apache.org/jira/browse/HDFS-7960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
> HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
> HDFS-7960.007.patch, HDFS-7960.008.patch
>
>
> The full block report should prune zombie storages even if they're not empty. 
>  We have seen cases in production where zombie storages have not been pruned 
> subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
> is a block in some old storage which is actually not there.  In this case, 
> the block will not show up in the "new" storage (once old is renamed to new) 
> and the old storage will linger forever as a zombie, even with the HDFS-7596 
> fix applied.  This also happens with datanode hotplug, when a drive is 
> removed.  In this case, an entire storage (volume) goes away but the blocks 
> do not show up in another storage on the same datanode.





[jira] [Commented] (HDFS-6054) MiniQJMHACluster should not use static port to avoid binding failure in unit test

2015-03-23 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377202#comment-14377202
 ] 

Yongjun Zhang commented on HDFS-6054:
-

Hi [~kihwal], would you please help take a look at the latest patch? Many 
thanks.


> MiniQJMHACluster should not use static port to avoid binding failure in unit 
> test
> -
>
> Key: HDFS-6054
> URL: https://issues.apache.org/jira/browse/HDFS-6054
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Brandon Li
>Assignee: Yongjun Zhang
> Attachments: HDFS-6054.001.patch, HDFS-6054.002.patch, 
> HDFS-6054.002.patch
>
>
> One example of the test failues: TestFailureToReadEdits
> {noformat}
> Error Message
> Port in use: localhost:10003
> Stacktrace
> java.net.BindException: Port in use: localhost:10003
>   at sun.nio.ch.Net.bind(Native Method)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>   at 
> org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:845)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:786)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:132)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:593)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:492)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:650)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:635)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1283)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:966)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:851)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:697)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:374)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits.setUpCluster(TestFailureToReadEdits.java:108)
> {noformat}
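One common fix for this class of failure, sketched below with plain JDK sockets, is to bind to port 0 so the OS assigns a free ephemeral port; whether MiniQJMHACluster adopts exactly this approach is up to the patch under review.

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Binding to port 0 asks the OS for any free ephemeral port, which avoids
// "Port in use" BindExceptions when several tests run concurrently on one
// host. Illustrative only; not the actual MiniQJMHACluster change.
public class EphemeralPort {
    public static void main(String[] args) throws Exception {
        try (ServerSocket a = new ServerSocket();
             ServerSocket b = new ServerSocket()) {
            a.bind(new InetSocketAddress("localhost", 0));  // OS picks port
            b.bind(new InetSocketAddress("localhost", 0));  // OS picks another
            int pa = a.getLocalPort();
            int pb = b.getLocalPort();
            // Both binds succeed and get distinct, valid ports.
            if (pa == pb || pa <= 0 || pb <= 0) throw new AssertionError();
            System.out.println("bound to " + pa + " and " + pb);
        }
    }
}
```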





[jira] [Commented] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377189#comment-14377189
 ] 

Hadoop QA commented on HDFS-7976:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706757/HDFS-7976.002.patch
  against trunk revision 972f1f1.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10041//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10041//console

This message is automatically generated.

> Update NFS user guide for mount option "sync" to minimize or avoid reordered 
> writes
> ---
>
> Key: HDFS-7976
> URL: https://issues.apache.org/jira/browse/HDFS-7976
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7976.001.patch, HDFS-7976.002.patch
>
>
> The mount option "sync" is critical. I observed that this mount option can 
> minimize or avoid reordered writes. Mount option "sync" could have some 
> negative performance impact on file uploading. However, it makes the 
> performance much more predictable and can also reduce the possibility of 
> failures caused by file dumping.
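For reference, a mount command with the "sync" option along the lines of what the HDFS NFS gateway documentation suggests; the server address and mount point below are placeholders:

```shell
# Mount the NFS gateway export with "sync" to discourage the client from
# reordering/batching writes. <nfs_server> and /mnt/hdfs are placeholders.
mount -t nfs -o vers=3,proto=tcp,nolock,sync <nfs_server>:/ /mnt/hdfs
```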





[jira] [Commented] (HDFS-7956) Improve logging for DatanodeRegistration.

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377181#comment-14377181
 ] 

Hadoop QA commented on HDFS-7956:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706753/HDFS-7956.1.patch
  against trunk revision 2c238ae.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

  {color:red}-1 javac{color}.  The applied patch generated 1151 javac 
compiler warnings (more than the trunk's current 205 warnings).

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
43 warning messages.
See 
https://builds.apache.org/job/PreCommit-HDFS-Build/10044//artifact/patchprocess/diffJavadocWarnings.txt
 for details.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.resourcetracker.TestNMExpiry
  
org.apache.hadoop.yarn.server.resourcemanager.resourcetracker.TestNMReconnect
  
org.apache.hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10044//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10044//artifact/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10044//console

This message is automatically generated.

> Improve logging for DatanodeRegistration.
> -
>
> Key: HDFS-7956
> URL: https://issues.apache.org/jira/browse/HDFS-7956
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: HDFS-7956.1.patch
>
>
> {{DatanodeRegistration.toString()}} 
> prints only its address without the port; it should print its full address, 
> similar to {{NamenodeRegistration}}.





[jira] [Commented] (HDFS-3325) When configuring "dfs.namenode.safemode.threshold-pct" to a value greater or equal to 1 there is mismatch in the UI report

2015-03-23 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377168#comment-14377168
 ] 

Vinayakumar B commented on HDFS-3325:
-

+1 for the latest patch. Waiting for the Jenkins report.

> When configuring "dfs.namenode.safemode.threshold-pct" to a value greater or 
> equal to 1 there is mismatch in the UI report
> --
>
> Key: HDFS-3325
> URL: https://issues.apache.org/jira/browse/HDFS-3325
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: J.Andreina
>Assignee: J.Andreina
>Priority: Minor
> Attachments: HDFS-3325.1.patch, HDFS-3325.2.patch
>
>
> When dfs.namenode.safemode.threshold-pct is configured to n,
> the Namenode stays in safemode until n percent of the blocks satisfying the 
> minimal replication requirement defined by 
> dfs.namenode.replication.min have been reported to the Namenode.
> But the UI displays that n percent of total blocks + 1 blocks are 
> additionally needed
> to come out of safemode.
> Scenario 1:
> 
> Configurations:
> dfs.namenode.safemode.threshold-pct = 2
> dfs.replication = 2
> dfs.namenode.replication.min =2
> Step 1: Start NN,DN1,DN2
> Step 2: Write a file "a.txt" which has got 167 blocks
> step 3: Stop NN,DN1,DN2
> Step 4: start NN
> In UI report the Number of blocks needed to come out of safemode and number 
> of blocks actually present is different.
> {noformat}
> Cluster Summary
> Security is OFF 
> Safe mode is ON. The reported blocks 0 needs additional 335 blocks to reach 
> the threshold 2. of total blocks 167. Safe mode will be turned off 
> automatically.
> 2 files and directories, 167 blocks = 169 total.
> Heap Memory used 57.05 MB is 2% of Commited Heap Memory 2 GB. Max Heap Memory 
> is 2 GB. 
> Non Heap Memory used 23.37 MB is 17% of Commited Non Heap Memory 130.44 MB. 
> Max Non Heap Memory is 176 MB.{noformat}
> Scenario 2:
> ===
> Configurations:
> dfs.namenode.safemode.threshold-pct = 1
> dfs.replication = 2
> dfs.namenode.replication.min =2
> Step 1: Start NN,DN1,DN2
> Step 2: Write a file "a.txt" which has got 167 blocks
> step 3: Stop NN,DN1,DN2
> Step 4: start NN
> In UI report the Number of blocks needed to come out of safemode and number 
> of blocks actually present is different
> {noformat}
> Cluster Summary
> Security is OFF 
> Safe mode is ON. The reported blocks 0 needs additional 168 blocks to reach 
> the threshold 1. of total blocks 167. Safe mode will be turned off 
> automatically.
> 2 files and directories, 167 blocks = 169 total.
> Heap Memory used 56.2 MB is 2% of Commited Heap Memory 2 GB. Max Heap Memory 
> is 2 GB. 
> Non Heap Memory used 23.37 MB is 17% of Commited Non Heap Memory 130.44 MB. 
> Max Non Heap Memory is 176 MB.{noformat}
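The mismatch can be illustrated with plain arithmetic using scenario 1's numbers: n percent of the total blocks would be 334, while the UI reported 335. This sketch only mirrors the reported numbers; the actual FSNamesystem threshold computation may differ in detail.

```java
// Illustration of the reported off-by-one (scenario 1 numbers).
// Not the real FSNamesystem code; just the arithmetic the report implies.
public class SafemodeThreshold {
    public static void main(String[] args) {
        int totalBlocks = 167;
        double thresholdPct = 2.0;   // dfs.namenode.safemode.threshold-pct
        long reportedBlocks = 0;

        // Expected: n percent of total blocks (200% of 167 = 334).
        long blockThreshold = (long) (totalBlocks * thresholdPct);
        long needed = blockThreshold - reportedBlocks;

        if (needed != 334) throw new AssertionError("expected 334, got " + needed);
        // The UI in the report instead showed "needs additional 335 blocks",
        // i.e. the expected value plus one.
        System.out.println("blocks needed = " + needed);
    }
}
```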





[jira] [Commented] (HDFS-3325) When configuring "dfs.namenode.safemode.threshold-pct" to a value greater or equal to 1 there is mismatch in the UI report

2015-03-23 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377167#comment-14377167
 ] 

Vinayakumar B commented on HDFS-3325:
-

Triggered Jenkins to get a clean report.

> When configuring "dfs.namenode.safemode.threshold-pct" to a value greater or 
> equal to 1 there is mismatch in the UI report
> --
>
> Key: HDFS-3325
> URL: https://issues.apache.org/jira/browse/HDFS-3325
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: J.Andreina
>Assignee: J.Andreina
>Priority: Minor
> Attachments: HDFS-3325.1.patch, HDFS-3325.2.patch
>
>
> When dfs.namenode.safemode.threshold-pct is configured to n,
> the Namenode stays in safemode until n percent of the blocks satisfying the 
> minimal replication requirement defined by 
> dfs.namenode.replication.min have been reported to the Namenode.
> But the UI displays that n percent of total blocks + 1 blocks are 
> additionally needed
> to come out of safemode.
> Scenario 1:
> 
> Configurations:
> dfs.namenode.safemode.threshold-pct = 2
> dfs.replication = 2
> dfs.namenode.replication.min =2
> Step 1: Start NN,DN1,DN2
> Step 2: Write a file "a.txt" which has got 167 blocks
> step 3: Stop NN,DN1,DN2
> Step 4: start NN
> In UI report the Number of blocks needed to come out of safemode and number 
> of blocks actually present is different.
> {noformat}
> Cluster Summary
> Security is OFF 
> Safe mode is ON. The reported blocks 0 needs additional 335 blocks to reach 
> the threshold 2. of total blocks 167. Safe mode will be turned off 
> automatically.
> 2 files and directories, 167 blocks = 169 total.
> Heap Memory used 57.05 MB is 2% of Commited Heap Memory 2 GB. Max Heap Memory 
> is 2 GB. 
> Non Heap Memory used 23.37 MB is 17% of Commited Non Heap Memory 130.44 MB. 
> Max Non Heap Memory is 176 MB.{noformat}
> Scenario 2:
> ===
> Configurations:
> dfs.namenode.safemode.threshold-pct = 1
> dfs.replication = 2
> dfs.namenode.replication.min =2
> Step 1: Start NN,DN1,DN2
> Step 2: Write a file "a.txt" which has got 167 blocks
> step 3: Stop NN,DN1,DN2
> Step 4: start NN
> In UI report the Number of blocks needed to come out of safemode and number 
> of blocks actually present is different
> {noformat}
> Cluster Summary
> Security is OFF 
> Safe mode is ON. The reported blocks 0 needs additional 168 blocks to reach 
> the threshold 1. of total blocks 167. Safe mode will be turned off 
> automatically.
> 2 files and directories, 167 blocks = 169 total.
> Heap Memory used 56.2 MB is 2% of Commited Heap Memory 2 GB. Max Heap Memory 
> is 2 GB. 
> Non Heap Memory used 23.37 MB is 17% of Commited Non Heap Memory 130.44 MB. 
> Max Non Heap Memory is 176 MB.{noformat}





[jira] [Commented] (HDFS-7824) GetContentSummary API and its namenode implementation for Storage Type Quota/Usage

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377158#comment-14377158
 ] 

Hadoop QA commented on HDFS-7824:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12705760/HDFS-7824.03.patch
  against trunk revision 972f1f1.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1152 javac 
compiler warnings (more than the trunk's current 1151 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-httpfs:

  org.apache.hadoop.ipc.TestRPCWaitForProxy
  org.apache.hadoop.tracing.TestTracing
  org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate

  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs-httpfs 

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10038//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10038//artifact/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10038//console

This message is automatically generated.

> GetContentSummary API and its namenode implementation for Storage Type 
> Quota/Usage
> -
>
> Key: HDFS-7824
> URL: https://issues.apache.org/jira/browse/HDFS-7824
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-7824.00.patch, HDFS-7824.01.patch, 
> HDFS-7824.02.patch, HDFS-7824.03.patch
>
>
> This JIRA is opened to provide API support of GetContentSummary with storage 
> type quota and usage information. It includes namenode implementation, client 
> namenode RPC protocol and Content.Counts refactoring. It is required by 
> HDFS-7701 (CLI to display storage type quota and usage).





[jira] [Commented] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-23 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377149#comment-14377149
 ] 

Li Bo commented on HDFS-7854:
-

{{TestStartup}} and {{TestDatanodeManager}} pass locally; {{TestTracing}} seems 
unrelated to the current patch.

> Separate class DataStreamer out of DFSOutputStream
> --
>
> Key: HDFS-7854
> URL: https://issues.apache.org/jira/browse/HDFS-7854
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
> HDFS-7854-003.patch, HDFS-7854-004-duplicate.patch, 
> HDFS-7854-004-duplicate2.patch, HDFS-7854-004-duplicate3.patch, 
> HDFS-7854-004.patch, HDFS-7854-005.patch, HDFS-7854-006.patch, 
> HDFS-7854-007.patch, HDFS-7854-008.patch, HDFS-7854-009.patch, 
> HDFS-7854.010.patch
>
>
> This sub-task separates DataStreamer from DFSOutputStream. The new 
> DataStreamer will accept packets and write them to remote datanodes.





[jira] [Commented] (HDFS-6841) Use Time.monotonicNow() wherever applicable instead of Time.now()

2015-03-23 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377144#comment-14377144
 ] 

Vinayakumar B commented on HDFS-6841:
-

Thanks [~kihwal] and [~cmccabe].

> Use Time.monotonicNow() wherever applicable instead of Time.now()
> -
>
> Key: HDFS-6841
> URL: https://issues.apache.org/jira/browse/HDFS-6841
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.7.0
>
> Attachments: HDFS-6841-001.patch, HDFS-6841-002.patch, 
> HDFS-6841-003.patch, HDFS-6841-004.patch, HDFS-6841-005.patch, 
> HDFS-6841-006.patch
>
>
> {{Time.now()}} is used in many places to calculate elapsed time.
> It should be replaced with {{Time.monotonicNow()}} to avoid the effect of 
> system time changes on elapsed-time calculations.





[jira] [Commented] (HDFS-7931) Spurious Error message "Could not find uri with key [dfs.encryption.key.provider.uri] to create a key" appears even when Encryption is disabled

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377141#comment-14377141
 ] 

Hadoop QA commented on HDFS-7931:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706744/HDFS-7931.2.patch
  against trunk revision 972f1f1.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10039//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10039//console

This message is automatically generated.

> Spurious Error message "Could not find uri with key 
> [dfs.encryption.key.provider.uri] to create a key" appears even when 
> Encryption is disabled
> 
>
> Key: HDFS-7931
> URL: https://issues.apache.org/jira/browse/HDFS-7931
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsclient
>Affects Versions: 2.7.0
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Minor
> Attachments: HDFS-7931.1.patch, HDFS-7931.2.patch, HDFS-7931.2.patch
>
>
> The {{addDelegationTokens}} method in {{DistributedFileSystem}} calls 
> {{DFSClient#getKeyProvider()}}, which attempts to get a provider from the 
> {{KeyProviderCache}}; but since the required key, 
> *dfs.encryption.key.provider.uri*, is not present (due to encryption being 
> disabled), it throws an exception.
> {noformat}
> 2015-03-11 23:55:47,849 [JobControl] ERROR 
> org.apache.hadoop.hdfs.KeyProviderCache - Could not find uri with key 
> [dfs.encryption.key.provider.uri] to create a keyProvider !!
> {noformat}





[jira] [Commented] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-23 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377138#comment-14377138
 ] 

Li Bo commented on HDFS-7854:
-

Hi Jing, thanks for your careful review and improvement of the patch. I checked 
all the changes and they look good to me.

> Separate class DataStreamer out of DFSOutputStream
> --
>
> Key: HDFS-7854
> URL: https://issues.apache.org/jira/browse/HDFS-7854
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
> HDFS-7854-003.patch, HDFS-7854-004-duplicate.patch, 
> HDFS-7854-004-duplicate2.patch, HDFS-7854-004-duplicate3.patch, 
> HDFS-7854-004.patch, HDFS-7854-005.patch, HDFS-7854-006.patch, 
> HDFS-7854-007.patch, HDFS-7854-008.patch, HDFS-7854-009.patch, 
> HDFS-7854.010.patch
>
>
> This sub-task separates DataStreamer from DFSOutputStream. The new 
> DataStreamer will accept packets and write them to remote datanodes.





[jira] [Commented] (HDFS-7937) Erasure Coding: INodeFile quota computation unit tests

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377075#comment-14377075
 ] 

Hadoop QA commented on HDFS-7937:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706791/HDFS-7937.2.patch
  against trunk revision 2c238ae.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10045//console

This message is automatically generated.

> Erasure Coding: INodeFile quota computation unit tests
> --
>
> Key: HDFS-7937
> URL: https://issues.apache.org/jira/browse/HDFS-7937
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: HDFS-7937.1.patch, HDFS-7937.2.patch
>
>
> Unit test for [HDFS-7826|https://issues.apache.org/jira/browse/HDFS-7826]





[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-23 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377060#comment-14377060
 ] 

Yi Liu commented on HDFS-7960:
--

Thanks Colin and Lei for updating the patch. It looks really good; one nit:
*1.* Please add _InterfaceAudience_/_InterfaceStability_ annotations for 
{{BlockReportContext}}.

Let's wait to see what Andrew says, and also wait for a fresh Jenkins run.


> The full block report should prune zombie storages even if they're not empty
> 
>
> Key: HDFS-7960
> URL: https://issues.apache.org/jira/browse/HDFS-7960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
> HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
> HDFS-7960.007.patch, HDFS-7960.008.patch
>
>
> The full block report should prune zombie storages even if they're not empty. 
>  We have seen cases in production where zombie storages have not been pruned 
> subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
> is a block in some old storage which is actually not there.  In this case, 
> the block will not show up in the "new" storage (once old is renamed to new) 
> and the old storage will linger forever as a zombie, even with the HDFS-7596 
> fix applied.  This also happens with datanode hotplug, when a drive is 
> removed.  In this case, an entire storage (volume) goes away but the blocks 
> do not show up in another storage on the same datanode.





[jira] [Commented] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377058#comment-14377058
 ] 

Hadoop QA commented on HDFS-7854:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706730/HDFS-7854.010.patch
  against trunk revision 972f1f1.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestStartup
  org.apache.hadoop.tracing.TestTracing
  
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10037//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10037//console

This message is automatically generated.

> Separate class DataStreamer out of DFSOutputStream
> --
>
> Key: HDFS-7854
> URL: https://issues.apache.org/jira/browse/HDFS-7854
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
> HDFS-7854-003.patch, HDFS-7854-004-duplicate.patch, 
> HDFS-7854-004-duplicate2.patch, HDFS-7854-004-duplicate3.patch, 
> HDFS-7854-004.patch, HDFS-7854-005.patch, HDFS-7854-006.patch, 
> HDFS-7854-007.patch, HDFS-7854-008.patch, HDFS-7854-009.patch, 
> HDFS-7854.010.patch
>
>
> This sub-task separates DataStreamer from DFSOutputStream. The new DataStreamer 
> will accept packets and write them to remote datanodes.





[jira] [Updated] (HDFS-7937) Erasure Coding: INodeFile quota computation unit tests

2015-03-23 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-7937:
-
Attachment: HDFS-7937.2.patch

> Erasure Coding: INodeFile quota computation unit tests
> --
>
> Key: HDFS-7937
> URL: https://issues.apache.org/jira/browse/HDFS-7937
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: HDFS-7937.1.patch, HDFS-7937.2.patch
>
>
> Unit test for [HDFS-7826|https://issues.apache.org/jira/browse/HDFS-7826]





[jira] [Updated] (HDFS-7956) Improve logging for DatanodeRegistration.

2015-03-23 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7956:
--
Status: Patch Available  (was: Open)

+1. Let's trigger Jenkins.

> Improve logging for DatanodeRegistration.
> -
>
> Key: HDFS-7956
> URL: https://issues.apache.org/jira/browse/HDFS-7956
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: HDFS-7956.1.patch
>
>
> {{DatanodeRegistration.toString()}} 
> prints only its address without the port; it should print its full address, 
> similar to {{NamenodeRegistration}}.





[jira] [Comment Edited] (HDFS-7956) Improve logging for DatanodeRegistration.

2015-03-23 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14368067#comment-14368067
 ] 

Konstantin Shvachko edited comment on HDFS-7956 at 3/24/15 1:22 AM:


Otherwise it is hard to identify DNs when they register, send reports, or 
heartbeat, because the main DN port is not printed anywhere in 
DatanodeRegistration.
Current:
{code}DatanodeRegistration(127.0.0.1, 
datanodeUuid=532f1c1f-fe09-4ad4-8d7d-c58f7b8b32b0, infoPort=35614 ...){code}
Desired:
{code}DatanodeRegistration(127.0.0.1:46044, 
datanodeUuid=532f1c1f-fe09-4ad4-8d7d-c58f7b8b32b0, infoPort=35614 ...){code}
The fix is to simply print {{DatanodeID}} via {{super.toString()}} instead of 
{{getIpAddr()}}


was (Author: shv):
Otherwise it is hard to identify DNs when they register, send reports or 
heartbeats. Because the main DN port is not printed anywhere in 
DatanodeRegistration.
Current:
{code}DatanodeRegistration(127.0.0.1, 
datanodeUuid=532f1c1f-fe09-4ad4-8d7d-c58f7b8b32b0, infoPort=35614 ...){code}
Desired:
{code}DatanodeRegistration(127.0.0.1:46044, 
datanodeUuid=532f1c1f-fe09-4ad4-8d7d-c58f7b8b32b0, infoPort=35614 ...){code}
The fix is to simply print {{DatanodeID}} via {{super.toString()}} instead of 
{{super.toString()}}
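The before/after behavior described in the comment can be sketched with a minimal pair of classes; these are hypothetical simplifications of DatanodeID/DatanodeRegistration for illustration, not the actual Hadoop classes:

```java
// Sketch: the parent's toString() includes the transfer port, so the
// registration's toString() should delegate to it instead of printing
// only the IP address.
class DatanodeID {
    private final String ipAddr;
    private final int xferPort;
    DatanodeID(String ipAddr, int xferPort) { this.ipAddr = ipAddr; this.xferPort = xferPort; }
    String getIpAddr() { return ipAddr; } // old code printed only this
    @Override public String toString() { return ipAddr + ":" + xferPort; }
}

class DatanodeRegistration extends DatanodeID {
    private final int infoPort;
    DatanodeRegistration(String ipAddr, int xferPort, int infoPort) {
        super(ipAddr, xferPort);
        this.infoPort = infoPort;
    }
    @Override public String toString() {
        // Before: getIpAddr() yielded "127.0.0.1" only.
        // After: super.toString() yields "127.0.0.1:46044".
        return "DatanodeRegistration(" + super.toString() + ", infoPort=" + infoPort + ")";
    }
}

public class Demo {
    public static void main(String[] args) {
        System.out.println(new DatanodeRegistration("127.0.0.1", 46044, 35614));
    }
}
```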

> Improve logging for DatanodeRegistration.
> -
>
> Key: HDFS-7956
> URL: https://issues.apache.org/jira/browse/HDFS-7956
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: HDFS-7956.1.patch
>
>
> {{DatanodeRegistration.toString()}} 
> prints only its address without the port; it should print its full address, 
> similar to {{NamenodeRegistration}}.





[jira] [Commented] (HDFS-7036) HDFS-6776 fix requires to upgrade insecure cluster, which means quite some user pain

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377050#comment-14377050
 ] 

Hadoop QA commented on HDFS-7036:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12668367/HDFS-7036.001.patch
  against trunk revision 972f1f1.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication
  org.apache.hadoop.hdfs.server.namenode.TestFsck

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10036//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10036//console

This message is automatically generated.

> HDFS-6776 fix requires to upgrade insecure cluster, which means quite some 
> user pain
> 
>
> Key: HDFS-7036
> URL: https://issues.apache.org/jira/browse/HDFS-7036
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.5.1
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-7036.001.patch
>
>
> Issuing command
> {code}
>  hadoop fs -lsr webhdfs://
> {code}
> on the secure cluster side fails with the message "Failed to get the token 
> ...", a similar symptom to that reported in HDFS-6776.
> If the fix of HDFS-6776 is applied to only the secure cluster, doing 
> {code}
> distcp webhdfs:// 
> {code}
> would fail the same way.
> Basically, running any application in a secure cluster to access an insecure 
> cluster via webhdfs would fail the same way, if the HDFS-6776 fix is not 
> applied to the insecure cluster.
> This could be quite some user pain. Filing this jira for a solution to make 
> user's life easier.
> One proposed solution was to add a msg-parsing mechanism in webhdfs, which is 
> a bit hacky. The other proposed solution is to do the same kind of hack at 
> application side, which means the same hack need to be applied in each 
> application.
> Thanks [~daryn], [~wheat9], [~jingzhao], [~tucu00] and [~atm] for the 
> discussion in HDFS-6776.
>  





[jira] [Commented] (HDFS-7881) TestHftpFileSystem#testSeek fails in branch-2

2015-03-23 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377043#comment-14377043
 ] 

Brahma Reddy Battula commented on HDFS-7881:


Thanks a lot 

> TestHftpFileSystem#testSeek fails in branch-2
> -
>
> Key: HDFS-7881
> URL: https://issues.apache.org/jira/browse/HDFS-7881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: HDFS-7881-002.patch, HDFS-7881-003.patch, 
> HDFS-7881-004.patch, HDFS-7881.patch
>
>
> TestHftpFileSystem#testSeek fails in branch-2.
> {code}
> ---
>  T E S T S
> ---
> Running org.apache.hadoop.hdfs.web.TestHftpFileSystem
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 6.201 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.web.TestHftpFileSystem
> testSeek(org.apache.hadoop.hdfs.web.TestHftpFileSystem)  Time elapsed: 0.054 
> sec  <<< ERROR!
> java.io.IOException: Content-Length is missing: {null=[HTTP/1.1 206 Partial 
> Content], Date=[Wed, 04 Mar 2015 05:32:30 GMT, Wed, 04 Mar 2015 05:32:30 
> GMT], Expires=[Wed, 04 Mar 2015 05:32:30 GMT, Wed, 04 Mar 2015 05:32:30 GMT], 
> Connection=[close], Content-Type=[text/plain; charset=utf-8], 
> Server=[Jetty(6.1.26)], Content-Range=[bytes 7-9/10], Pragma=[no-cache, 
> no-cache], Cache-Control=[no-cache]}
>   at 
> org.apache.hadoop.hdfs.web.ByteRangeInputStream.openInputStream(ByteRangeInputStream.java:132)
>   at 
> org.apache.hadoop.hdfs.web.ByteRangeInputStream.getInputStream(ByteRangeInputStream.java:104)
>   at 
> org.apache.hadoop.hdfs.web.ByteRangeInputStream.read(ByteRangeInputStream.java:181)
>   at java.io.FilterInputStream.read(FilterInputStream.java:83)
>   at 
> org.apache.hadoop.hdfs.web.TestHftpFileSystem.testSeek(TestHftpFileSystem.java:253)
> Results :
> Tests in error: 
>   TestHftpFileSystem.testSeek:253 » IO Content-Length is missing: 
> {null=[HTTP/1
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 0
> {code}





[jira] [Commented] (HDFS-7782) Read a striping layout file from client side

2015-03-23 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377027#comment-14377027
 ] 

Jing Zhao commented on HDFS-7782:
-

Thanks for working on this, Zhe! Some early comments and questions:
# Looks like the current {{DFSStripedInputStream#hedgedFetchBlockByteRange}} 
implementation is actually parallel reading instead of "hedged" read. "Hedged" 
read means "if a read from a replica is slow, start up another parallel read 
against a different block replica" to control the latency. For EC, without 
considering reading parity data, we only read from all the DNs storing 
different data blocks in parallel.
# We should try to avoid unnecessary data copy in the implementation. The 
current patch reads data to temporary byte arrays first and later copies the 
data into the given buffer. It will be better to directly read data into the 
given byte array. You may need to extend {{getFromOneDataNode}} to achieve this 
for parallel reading.
# Besides the current end-to-end tests in {{TestReadStripedFile}}, we need to 
add more tests to make sure the calculation in {{planReadPortions}} and 
{{parseStripedBlockGroup}} is correct in all different scenarios.
# I guess the read failure/timeout will be handled in a separate jira?
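Comment #2 above (reading directly into the caller-supplied buffer to avoid extra copies) can be sketched as follows; the thread-pool structure and all names here are assumptions for illustration, not the actual DFSStripedInputStream code:

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: read each striped portion in parallel, directly into the shared
// destination buffer at its own offset, so no per-portion temporary array
// or second copy is needed.
public class ParallelPortionRead {

    // Stand-in for a remote read: fills buf[offset..offset+len) from one "DN".
    static void readFromDataNode(int dnIndex, byte[] buf, int offset, int len) {
        for (int i = 0; i < len; i++) {
            buf[offset + i] = (byte) dnIndex; // fake payload identifying the DN
        }
    }

    // Reads `portions` equally sized portions in parallel into one buffer.
    public static byte[] read(int portions, int portionLen) throws Exception {
        byte[] buf = new byte[portions * portionLen];
        ExecutorService pool = Executors.newFixedThreadPool(portions);
        try {
            CompletionService<Void> cs = new ExecutorCompletionService<>(pool);
            for (int p = 0; p < portions; p++) {
                final int dn = p;
                cs.submit(() -> {
                    readFromDataNode(dn, buf, dn * portionLen, portionLen);
                    return null;
                });
            }
            for (int p = 0; p < portions; p++) {
                cs.take().get(); // propagate any read failure
            }
        } finally {
            pool.shutdown();
        }
        return buf;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(java.util.Arrays.toString(read(3, 4)));
    }
}
```

Because each task writes a disjoint range of the array, the threads never touch the same bytes and the result lands in the caller's buffer with no intermediate copy.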

> Read a striping layout file from client side
> 
>
> Key: HDFS-7782
> URL: https://issues.apache.org/jira/browse/HDFS-7782
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Zhe Zhang
> Attachments: HDFS-7782-000.patch, HDFS-7782-001.patch, 
> HDFS-7782-002.patch, HDFS-7782-003.patch
>
>
> If a client wants to read a file, it should not need to know or handle the 
> file's layout. This sub-task adds logic to DFSInputStream to support 
> reading striping layout files.





[jira] [Commented] (HDFS-7824) GetContentSummary API and its namenode implementation for Storage Type Quota/Usage

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376998#comment-14376998
 ] 

Hadoop QA commented on HDFS-7824:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12705760/HDFS-7824.03.patch
  against trunk revision 2bc097c.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1152 javac 
compiler warnings (more than the trunk's current 1151 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-httpfs:

  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
  org.apache.hadoop.hdfs.TestReadWhileWriting
  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestRbwSpaceReservation
  org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA
  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS
  org.apache.hadoop.hdfs.server.namenode.TestHDFSConcat
  org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
  org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
  org.apache.hadoop.hdfs.TestEncryptedTransfer
  
org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
  org.apache.hadoop.tracing.TestTracing
  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
  org.apache.hadoop.hdfs.TestBlockReaderLocalLegacy
  org.apache.hadoop.hdfs.TestPersistBlocks
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
  org.apache.hadoop.hdfs.server.namenode.TestINodeFile
  org.apache.hadoop.hdfs.server.namenode.TestQuotaByStorageType
  org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache
  org.apache.hadoop.hdfs.TestAppendDifferentChecksum
  org.apache.hadoop.hdfs.TestFileAppend2
  org.apache.hadoop.hdfs.TestFileAppend3
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
  org.apache.hadoop.hdfs.TestQuota
  org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes
  org.apache.hadoop.hdfs.TestRollingUpgrade
  org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs
  org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength
  org.apache.hadoop.fs.permission.TestStickyBit
  org.apache.hadoop.hdfs.TestFileCreation
  org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode
  
org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
  org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
  org.apache.hadoop.hdfs.TestPipelines
  org.apache.hadoop.fs.TestHDFSFileContextMainOperations
  org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
  org.apache.hadoop.cli.TestHDFSCLI
  org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure
  org.apache.hadoop.hdfs.server.namenode.TestAddBlock
  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing

  The following test timeouts occurred in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-httpfs:

org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot

  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs-httpfs 

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10035//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10035//artifact/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10035//console

This message is automatically generated.

> GetContentSummary API and its namenode implementation for Storage Type 
> Quota/Usage

[jira] [Updated] (HDFS-7977) NFS couldn't take percentile intervals

2015-03-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7977:
-
Status: Patch Available  (was: Open)

> NFS couldn't take percentile intervals
> --
>
> Key: HDFS-7977
> URL: https://issues.apache.org/jira/browse/HDFS-7977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7977.001.patch
>
>
> The configuration "nfs.metrics.percentiles.intervals" is not recognized by 
> NFS gateway.





[jira] [Updated] (HDFS-7977) NFS couldn't take percentile intervals

2015-03-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7977:
-
Attachment: HDFS-7977.001.patch

> NFS couldn't take percentile intervals
> --
>
> Key: HDFS-7977
> URL: https://issues.apache.org/jira/browse/HDFS-7977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7977.001.patch
>
>
> The configuration "nfs.metrics.percentiles.intervals" is not recognized by 
> NFS gateway.





[jira] [Commented] (HDFS-6261) Add document for enabling node group layer in HDFS

2015-03-23 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376969#comment-14376969
 ] 

Junping Du commented on HDFS-6261:
--

Hi [~decster], given that HADOOP-11495 has already been committed, would you 
mind updating your patch here? I will give it a review. Thx!

> Add document for enabling node group layer in HDFS
> --
>
> Key: HDFS-6261
> URL: https://issues.apache.org/jira/browse/HDFS-6261
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation
>Reporter: Wenwu Peng
>Assignee: Binglin Chang
>  Labels: documentation
> Attachments: 2-layer-topology.png, 3-layer-topology.png, 
> 3layer-topology.png, 4layer-topology.png, HDFS-6261.v1.patch, 
> HDFS-6261.v1.patch, HDFS-6261.v2.patch, HDFS-6261.v3.patch
>
>
> Most of the patches from umbrella JIRA HADOOP-8468 have been committed. 
> However, there is no site introducing NodeGroup awareness (Hadoop 
> Virtualization Extensions) and how to configure it, so we need to document it.
> 1.  Document NodeGroup-aware topics in http://hadoop.apache.org/docs/current 
> 2.  Document NodeGroup-aware properties in core-default.xml.





[jira] [Updated] (HDFS-7977) NFS couldn't take percentile intervals

2015-03-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7977:
-
Affects Version/s: (was: 2.6.0)
   2.7.0

> NFS couldn't take percentile intervals
> --
>
> Key: HDFS-7977
> URL: https://issues.apache.org/jira/browse/HDFS-7977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>
> The configuration "nfs.metrics.percentiles.intervals" is not recognized by 
> NFS gateway.





[jira] [Created] (HDFS-7977) NFS couldn't take percentile intervals

2015-03-23 Thread Brandon Li (JIRA)
Brandon Li created HDFS-7977:


 Summary: NFS couldn't take percentile intervals
 Key: HDFS-7977
 URL: https://issues.apache.org/jira/browse/HDFS-7977
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
Reporter: Brandon Li
Assignee: Brandon Li


The configuration "nfs.metrics.percentiles.intervals" is not recognized by NFS 
gateway.





[jira] [Commented] (HDFS-7917) Use file to replace data dirs in test to simulate a disk failure.

2015-03-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376948#comment-14376948
 ] 

Hudson commented on HDFS-7917:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #7408 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7408/])
HDFS-7917. Use file to replace data dirs in test to simulate a disk failure. 
Contributed by Lei (Eddy) Xu. (cnauroth: rev 
2c238ae4e00371ef76582b007bb0e20ac8455d9c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java


> Use file to replace data dirs in test to simulate a disk failure. 
> --
>
> Key: HDFS-7917
> URL: https://issues.apache.org/jira/browse/HDFS-7917
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HDFS-7917.000.patch, HDFS-7917.001.patch, 
> HDFS-7917.002.patch
>
>
> Currently, in several tests, e.g., {{TestDataNodeVolumeFailureXXX}} and 
> {{TestDataNodeHotSwapVolumes}}, we simulate a disk failure by setting a 
> directory's executable permission to false. However, this raises the risk that 
> if the cleanup code is not executed, the directory cannot be easily 
> removed by the Jenkins job. 
> Since in {{DiskChecker#checkDirAccess}}:
> {code}
> private static void checkDirAccess(File dir) throws DiskErrorException {
>   if (!dir.isDirectory()) {
>     throw new DiskErrorException("Not a directory: " + dir.toString());
>   }
>   checkAccessByFileMethods(dir);
> }
> {code}
> We can replace the DN data directory with a file to achieve the same fault 
> injection goal, while being safer to clean up in any circumstance. 
> Additionally, as [~cnauroth] suggested: 
> bq. That might even let us enable some of these tests that are skipped on 
> Windows, because Windows allows access for the owner even after permissions 
> have been stripped.
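The fault-injection idea above can be sketched as follows; {{DiskErrorException}} and the checking logic here are simplified stand-ins for Hadoop's DiskChecker, not the actual implementation:

```java
import java.io.File;
import java.io.IOException;

// Sketch: pointing the check at a regular file (instead of a
// permission-stripped directory) fails the isDirectory() test, simulating
// a bad volume without leaving an unremovable directory behind.
public class DiskCheckSketch {
    static class DiskErrorException extends IOException {
        DiskErrorException(String msg) { super(msg); }
    }

    static void checkDirAccess(File dir) throws DiskErrorException {
        if (!dir.isDirectory()) {
            throw new DiskErrorException("Not a directory: " + dir);
        }
        // real code would also verify read/write/execute access here
    }

    public static boolean isUsableDataDir(File dir) {
        try {
            checkDirAccess(dir);
            return true;
        } catch (DiskErrorException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        File fakeDataDir = File.createTempFile("data", ".dir"); // a file, not a dir
        fakeDataDir.deleteOnExit();
        System.out.println(isUsableDataDir(fakeDataDir)); // false: simulated failure
    }
}
```

Cleanup is then a plain file delete, which works even if the test is killed before its teardown runs.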





[jira] [Commented] (HDFS-7864) Erasure Coding: Update safemode calculation for striped blocks

2015-03-23 Thread GAO Rui (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376947#comment-14376947
 ] 

GAO Rui commented on HDFS-7864:
---

[~jingzhao] Thank you very much for all your help. I learned a lot about HDFS 
block management during this jira. Next, I will work on 
[https://issues.apache.org/jira/browse/HDFS-7661] and 
[https://issues.apache.org/jira/browse/HDFS-7618] to improve read support of 
striped blocks.  Thank you again!

> Erasure Coding: Update safemode calculation for striped blocks
> --
>
> Key: HDFS-7864
> URL: https://issues.apache.org/jira/browse/HDFS-7864
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: GAO Rui
> Fix For: HDFS-7285
>
> Attachments: HDFS-7864.1.patch, HDFS-7864.2.patch, HDFS-7864.3.patch, 
> HDFS-7864.4.patch
>
>
> We need to update the safemode calculation for striped blocks. Specifically, 
> each striped block now consists of multiple data/parity blocks stored in 
> corresponding DataNodes. The current code's calculation is thus inconsistent: 
> each striped block is only counted as 1 expected block, while each of its 
> member blocks may increase the number of received blocks by 1.
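The inconsistency can be illustrated with a small arithmetic sketch, assuming a hypothetical layout with 9 member blocks (e.g. 6 data + 3 parity) per block group; the method names are illustrative only:

```java
// Sketch: counting a block group as one expected block while every member
// reports individually inflates the safe-block ratio past 1.0; counting
// expected blocks per member keeps the ratio consistent.
public class SafeModeCount {
    // Old calculation: each group counts once on the expected side.
    public static double oldRatio(int groups, int membersPerGroup) {
        int expected = groups;
        int received = groups * membersPerGroup; // every member block reports
        return (double) received / expected;
    }

    // Updated calculation: the expected side also counts every member block.
    public static double newRatio(int groups, int membersPerGroup) {
        int expected = groups * membersPerGroup;
        int received = groups * membersPerGroup;
        return (double) received / expected;
    }

    public static void main(String[] args) {
        System.out.println(oldRatio(10, 9)); // 9.0 -- inconsistent
        System.out.println(newRatio(10, 9)); // 1.0
    }
}
```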





[jira] [Commented] (HDFS-7917) Use file to replace data dirs in test to simulate a disk failure.

2015-03-23 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376936#comment-14376936
 ] 

Lei (Eddy) Xu commented on HDFS-7917:
-

Thanks for reviewing and committing this, [~cnauroth]. 

> Use file to replace data dirs in test to simulate a disk failure. 
> --
>
> Key: HDFS-7917
> URL: https://issues.apache.org/jira/browse/HDFS-7917
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HDFS-7917.000.patch, HDFS-7917.001.patch, 
> HDFS-7917.002.patch
>
>
> Currently, in several tests, e.g., {{TestDataNodeVolumeFailureXXX}} and 
> {{TestDataNodeHotSwapVolumes}}, we simulate a disk failure by setting a 
> directory's executable permission to false. However, this raises the risk that 
> if the cleanup code is not executed, the directory cannot be easily 
> removed by the Jenkins job. 
> Since in {{DiskChecker#checkDirAccess}}:
> {code}
> private static void checkDirAccess(File dir) throws DiskErrorException {
>   if (!dir.isDirectory()) {
>     throw new DiskErrorException("Not a directory: " + dir.toString());
>   }
>   checkAccessByFileMethods(dir);
> }
> {code}
> We can replace the DN data directory with a file to achieve the same fault 
> injection goal, while being safer to clean up in any circumstance. 
> Additionally, as [~cnauroth] suggested: 
> bq. That might even let us enable some of these tests that are skipped on 
> Windows, because Windows allows access for the owner even after permissions 
> have been stripped.





[jira] [Updated] (HDFS-7917) Use file to replace data dirs in test to simulate a disk failure.

2015-03-23 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-7917:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

+1 for the patch.  The failure in {{TestTracing}} is unrelated and tracked 
elsewhere.  I committed this to trunk, branch-2 and branch-2.7.  [~eddyxu], 
thank you for contributing the patch.

> Use file to replace data dirs in test to simulate a disk failure. 
> --
>
> Key: HDFS-7917
> URL: https://issues.apache.org/jira/browse/HDFS-7917
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HDFS-7917.000.patch, HDFS-7917.001.patch, 
> HDFS-7917.002.patch
>
>
> Currently, in several tests, e.g., {{TestDataNodeVolumeFailureXXX}} and 
> {{TestDataNodeHotSwapVolumes}}, we simulate a disk failure by setting a 
> directory's executable permission to false. However, this raises the risk that 
> if the cleanup code is not executed, the directory cannot be easily 
> removed by the Jenkins job. 
> Since in {{DiskChecker#checkDirAccess}}:
> {code}
> private static void checkDirAccess(File dir) throws DiskErrorException {
>   if (!dir.isDirectory()) {
>     throw new DiskErrorException("Not a directory: " + dir.toString());
>   }
>   checkAccessByFileMethods(dir);
> }
> {code}
> We can replace the DN data directory with a file to achieve the same fault 
> injection goal, while being safer to clean up in any circumstance. 
> Additionally, as [~cnauroth] suggested: 
> bq. That might even let us enable some of these tests that are skipped on 
> Windows, because Windows allows access for the owner even after permissions 
> have been stripped.
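The replacement strategy described above can be sketched as a small test helper. The class and method names below are hypothetical, not part of the actual patch:

```java
import java.io.File;
import java.io.IOException;

/**
 * Hypothetical test helper illustrating the fault injection described above:
 * replace a DataNode data directory with a regular file so that
 * DiskChecker#checkDirAccess fails its isDirectory() check, without
 * relying on permission changes that Jenkins may be unable to undo.
 */
public class DiskFailureSimulator {
  /** Deletes the directory and creates a plain file in its place. */
  public static void simulateDiskFailure(File dataDir) throws IOException {
    deleteRecursively(dataDir);
    if (!dataDir.createNewFile()) {
      throw new IOException("Could not create file: " + dataDir);
    }
  }

  /** Restores an empty directory; cleanup is trivial in any circumstance. */
  public static void restore(File dataDir) throws IOException {
    if (!dataDir.delete() || !dataDir.mkdirs()) {
      throw new IOException("Could not restore dir: " + dataDir);
    }
  }

  private static void deleteRecursively(File f) throws IOException {
    File[] children = f.listFiles();
    if (children != null) {
      for (File c : children) {
        deleteRecursively(c);
      }
    }
    if (!f.delete()) {
      throw new IOException("Could not delete: " + f);
    }
  }
}
```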



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-23 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-7960:

Attachment: HDFS-7960.008.patch

Added a missing file to fix compile errors. 

> The full block report should prune zombie storages even if they're not empty
> 
>
> Key: HDFS-7960
> URL: https://issues.apache.org/jira/browse/HDFS-7960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
> HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
> HDFS-7960.007.patch, HDFS-7960.008.patch
>
>
> The full block report should prune zombie storages even if they're not empty. 
>  We have seen cases in production where zombie storages have not been pruned 
> subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
> is a block in some old storage which is actually not there.  In this case, 
> the block will not show up in the "new" storage (once old is renamed to new) 
> and the old storage will linger forever as a zombie, even with the HDFS-7596 
> fix applied.  This also happens with datanode hotplug, when a drive is 
> removed.  In this case, an entire storage (volume) goes away but the blocks 
> do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7969) Erasure coding: lease recovery for striped block groups

2015-03-23 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376918#comment-14376918
 ] 

Zhe Zhang commented on HDFS-7969:
-

In the latest [design doc | 
https://issues.apache.org/jira/secure/attachment/12697210/HDFSErasureCodingDesign-20150206.pdf],
 [~szetszwo] has a good summary of handling the generation stamp of a striped 
block group. This JIRA aims to implement lease recovery for striped block 
groups. Other scenarios related to GS (append, truncate, failures in writing) 
will be handled in separate JIRAs.

Below is the process of a lease recovery (borrowed from a [blog post | 
http://blog.cloudera.com/blog/2015/02/understanding-hdfs-recovery-processes-part-1/]
 by [~yzhangal]).
# Get the DataNodes which contain the last block of f.
# Assign one of the DataNodes as the primary DataNode p.
# p obtains a new generation stamp from the NameNode.
# p gets the block info from each DataNode.
# p computes the minimum block length.
# p updates the DataNodes that have a valid generation stamp with the new 
generation stamp and the minimum block length.
# p acknowledges the update results to the NameNode.
# NameNode updates the BlockInfo.
# NameNode removes f’s lease (other writers can now obtain the lease for writing 
to f).
# NameNode commits changes to the edit log.

The main updates should be the following:
bq. Assign one of the DataNodes as the primary DataNode p.
We _might_ need a different algorithm for selecting primary DN. 
bq. p computes the minimum block length.
This needs to be updated with striping logic.
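Step 5 above can be illustrated with a minimal sketch (class and method names are hypothetical): the primary DataNode takes the minimum of the replica lengths it collected, since only bytes present on every valid replica are known to be durable. As noted above, for striped block groups this plain minimum would need to be replaced with striping-aware logic:

```java
import java.util.List;

/** Hypothetical sketch of step 5 of the lease recovery process above. */
public class RecoveryLength {
  /**
   * The primary DataNode p truncates the recovered block to the minimum
   * length reported by the replicas, since only bytes present on every
   * valid replica are guaranteed to exist.
   */
  public static long minimumBlockLength(List<Long> replicaLengths) {
    if (replicaLengths.isEmpty()) {
      throw new IllegalArgumentException("no replicas reported");
    }
    long min = Long.MAX_VALUE;
    for (long len : replicaLengths) {
      min = Math.min(min, len);
    }
    return min;
  }
}
```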

> Erasure coding: lease recovery for striped block groups
> ---
>
> Key: HDFS-7969
> URL: https://issues.apache.org/jira/browse/HDFS-7969
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376912#comment-14376912
 ] 

Hadoop QA commented on HDFS-7960:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706749/HDFS-7960.007.patch
  against trunk revision 972f1f1.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified test files.

{color:red}-1 javac{color:red}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10040//console

This message is automatically generated.

> The full block report should prune zombie storages even if they're not empty
> 
>
> Key: HDFS-7960
> URL: https://issues.apache.org/jira/browse/HDFS-7960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
> HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
> HDFS-7960.007.patch
>
>
> The full block report should prune zombie storages even if they're not empty. 
>  We have seen cases in production where zombie storages have not been pruned 
> subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
> is a block in some old storage which is actually not there.  In this case, 
> the block will not show up in the "new" storage (once old is renamed to new) 
> and the old storage will linger forever as a zombie, even with the HDFS-7596 
> fix applied.  This also happens with datanode hotplug, when a drive is 
> removed.  In this case, an entire storage (volume) goes away but the blocks 
> do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-23 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376893#comment-14376893
 ] 

Arpit Agarwal commented on HDFS-7976:
-

Thanks [~brandonli], +1.

> Update NFS user guide for mount option "sync" to minimize or avoid reordered 
> writes
> ---
>
> Key: HDFS-7976
> URL: https://issues.apache.org/jira/browse/HDFS-7976
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7976.001.patch, HDFS-7976.002.patch
>
>
> The mount option "sync" is critical. I observed that this mount option can 
> minimize or avoid reordered writes. Mount option "sync" could have some 
> negative performance impact on file uploading. However, it makes the 
> performance much more predictable and can also reduce the possibility of 
> failures caused by file dumping.
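For context, a client-side mount of the HDFS NFS gateway using the "sync" option might look like the following (the hostname and mount point are placeholders):

```shell
# Mount the HDFS NFS gateway with the "sync" option so the client flushes
# writes in order rather than batching and potentially reordering them.
# "nfsgw.example.com" and "/mnt/hdfs" are placeholder values.
mount -t nfs -o vers=3,proto=tcp,nolock,sync nfsgw.example.com:/ /mnt/hdfs
```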



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-23 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376889#comment-14376889
 ] 

Brandon Li commented on HDFS-7976:
--

Thank you, Arpit. I've updated the patch to indicate the importance of this 
option for large file uploads.

> Update NFS user guide for mount option "sync" to minimize or avoid reordered 
> writes
> ---
>
> Key: HDFS-7976
> URL: https://issues.apache.org/jira/browse/HDFS-7976
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7976.001.patch, HDFS-7976.002.patch
>
>
> The mount option "sync" is critical. I observed that this mount option can 
> minimize or avoid reordered writes. Mount option "sync" could have some 
> negative performance impact on file uploading. However, it makes the 
> performance much more predictable and can also reduce the possibility of 
> failures caused by file dumping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7976:
-
Attachment: HDFS-7976.002.patch

> Update NFS user guide for mount option "sync" to minimize or avoid reordered 
> writes
> ---
>
> Key: HDFS-7976
> URL: https://issues.apache.org/jira/browse/HDFS-7976
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7976.001.patch, HDFS-7976.002.patch
>
>
> The mount option "sync" is critical. I observed that this mount option can 
> minimize or avoid reordered writes. Mount option "sync" could have some 
> negative performance impact on file uploading. However, it makes the 
> performance much more predictable and can also reduce the possibility of 
> failures caused by file dumping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-23 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376879#comment-14376879
 ] 

Arpit Agarwal commented on HDFS-7976:
-

+1 for the patch.

Given what a significant difference the sync flag makes I think we can make the 
recommendation even more forceful if you agree. :-)

> Update NFS user guide for mount option "sync" to minimize or avoid reordered 
> writes
> ---
>
> Key: HDFS-7976
> URL: https://issues.apache.org/jira/browse/HDFS-7976
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7976.001.patch
>
>
> The mount option "sync" is critical. I observed that this mount option can 
> minimize or avoid reordered writes. Mount option "sync" could have some 
> negative performance impact on file uploading. However, it makes the 
> performance much more predictable and can also reduce the possibility of 
> failures caused by file dumping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-23 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376878#comment-14376878
 ] 

Colin Patrick McCabe commented on HDFS-7960:


bq. Yi wrote: In the patch, rpcsSeen is calculated in the NN by counting all rpcs 
of the same block report; this is not safe in the case of split reports. 
DatanodeProtocol#blockReport is @Idempotent, so if a retry happens, if (rpcsSeen >= 
context.getTotalRpcs()) can be true while some datanode storages have not yet sent 
their splits of the report; in that case, those datanode storages will be treated 
as zombies and wrongly removed from the NN.

Thanks, that's a good point.  We should make sure that these RPCs stay 
idempotent.  I like [~eddyxu]'s solution of using a bitset to track which parts 
were received.

bq. Yi wrote: While removing a stored block, we'd better remove it from 
InvalidateBlocks too.

Very good point.

bq. I attempted to update the patch to address Yi Liu's comments, also fixed 
the test failure TestNNHandlesBlockReportPerStorage.

Thanks, Eddy.
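The bitset idea discussed above could be sketched roughly as follows (class and method names are hypothetical, not the actual patch): the NameNode marks each received RPC of a split block report, so a retried RPC is counted only once and zombie pruning waits until every distinct part has arrived.

```java
import java.util.BitSet;

/**
 * Hypothetical per-report context: tracks which parts of a split block
 * report have been received, in a way that stays safe under RPC retries.
 */
public class BlockReportParts {
  private final BitSet received;
  private final int totalRpcs;

  public BlockReportParts(int totalRpcs) {
    this.totalRpcs = totalRpcs;
    this.received = new BitSet(totalRpcs);
  }

  /** Mark one RPC as seen; retries of the same index are idempotent. */
  public void markReceived(int rpcIndex) {
    received.set(rpcIndex);
  }

  /** Zombie storages may only be pruned once all parts have arrived. */
  public boolean allPartsReceived() {
    return received.cardinality() == totalRpcs;
  }
}
```

Unlike a plain counter of RPCs seen, the bitset cannot be pushed past the total by duplicate deliveries of the same part.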

> The full block report should prune zombie storages even if they're not empty
> 
>
> Key: HDFS-7960
> URL: https://issues.apache.org/jira/browse/HDFS-7960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
> HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
> HDFS-7960.007.patch
>
>
> The full block report should prune zombie storages even if they're not empty. 
>  We have seen cases in production where zombie storages have not been pruned 
> subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
> is a block in some old storage which is actually not there.  In this case, 
> the block will not show up in the "new" storage (once old is renamed to new) 
> and the old storage will linger forever as a zombie, even with the HDFS-7596 
> fix applied.  This also happens with datanode hotplug, when a drive is 
> removed.  In this case, an entire storage (volume) goes away but the blocks 
> do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7956) Improve logging for DatanodeRegistration.

2015-03-23 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov reassigned HDFS-7956:
--

Assignee: Plamen Jeliazkov  (was: Brahma Reddy Battula)

> Improve logging for DatanodeRegistration.
> -
>
> Key: HDFS-7956
> URL: https://issues.apache.org/jira/browse/HDFS-7956
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: HDFS-7956.1.patch
>
>
> {{DatanodeRegistration.toString()}} 
> prints only its address without the port; it should print its full address, 
> similar to {{NamenodeRegistration}}.
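The kind of change implied could be sketched as follows (a hypothetical illustration; the class and field names are assumptions, not the real DatanodeRegistration):

```java
/**
 * Hypothetical illustration of the logging improvement described above:
 * include host:port and identifying info in toString(), similar to
 * NamenodeRegistration. Field values are placeholders for illustration.
 */
public class DatanodeRegistrationExample {
  private final String ipAddr = "127.0.0.1";
  private final int xferPort = 50010;
  private final String storageID = "DS-1234";

  @Override
  public String toString() {
    // Emit the full address (host AND port) so log lines are unambiguous.
    return getClass().getSimpleName()
        + "(" + ipAddr + ":" + xferPort
        + ", storageID=" + storageID + ")";
  }
}
```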



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7956) Improve logging for DatanodeRegistration.

2015-03-23 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HDFS-7956:
---
Attachment: HDFS-7956.1.patch

Attaching patch to address Konstantin's comment.

> Improve logging for DatanodeRegistration.
> -
>
> Key: HDFS-7956
> URL: https://issues.apache.org/jira/browse/HDFS-7956
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-7956.1.patch
>
>
> {{DatanodeRegistration.toString()}} 
> prints only its address without the port; it should print its full address, 
> similar to {{NamenodeRegistration}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7976:
-
Attachment: HDFS-7976.001.patch

> Update NFS user guide for mount option "sync" to minimize or avoid reordered 
> writes
> ---
>
> Key: HDFS-7976
> URL: https://issues.apache.org/jira/browse/HDFS-7976
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7976.001.patch
>
>
> The mount option "sync" is critical. I observed that this mount option can 
> minimize or avoid reordered writes. Mount option "sync" could have some 
> negative performance impact on file uploading. However, it makes the 
> performance much more predictable and can also reduce the possibility of 
> failures caused by file dumping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7976:
-
Status: Patch Available  (was: Open)

> Update NFS user guide for mount option "sync" to minimize or avoid reordered 
> writes
> ---
>
> Key: HDFS-7976
> URL: https://issues.apache.org/jira/browse/HDFS-7976
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7976.001.patch
>
>
> The mount option "sync" is critical. I observed that this mount option can 
> minimize or avoid reordered writes. Mount option "sync" could have some 
> negative performance impact on file uploading. However, it makes the 
> performance much more predictable and can also reduce the possibility of 
> failures caused by file dumping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-23 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-7960:

Attachment: HDFS-7960.007.patch

Hi, [~hitliuyi] and [~andrew.wang].

I attempted to update the patch to address [~hitliuyi]'s comments, also fixed 
the test failure {{TestNNHandlesBlockReportPerStorage}}.

Could you give another review? Thanks!

> The full block report should prune zombie storages even if they're not empty
> 
>
> Key: HDFS-7960
> URL: https://issues.apache.org/jira/browse/HDFS-7960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
> HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
> HDFS-7960.007.patch
>
>
> The full block report should prune zombie storages even if they're not empty. 
>  We have seen cases in production where zombie storages have not been pruned 
> subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
> is a block in some old storage which is actually not there.  In this case, 
> the block will not show up in the "new" storage (once old is renamed to new) 
> and the old storage will linger forever as a zombie, even with the HDFS-7596 
> fix applied.  This also happens with datanode hotplug, when a drive is 
> removed.  In this case, an entire storage (volume) goes away but the blocks 
> do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7824) GetContentSummary API and its namenode implementation for Storage Type Quota/Usage

2015-03-23 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376839#comment-14376839
 ] 

Xiaoyu Yao commented on HDFS-7824:
--

Thanks [~arpitagarwal] for the help! All failures are directly or indirectly 
caused by the missing new Builder class 
{code}org.apache.hadoop.fs.ContentSummary$Builder{code} added in this patch. 
It looks like the jars built for this patch were overwritten by another Jenkins 
run. I can't repro the failure on my local machine either.

{code}
Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.fs.ContentSummary$Builder
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
{code}

> GetContentSummary API and its namenode implementation for Storage Type 
> Quota/Usage
> -
>
> Key: HDFS-7824
> URL: https://issues.apache.org/jira/browse/HDFS-7824
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-7824.00.patch, HDFS-7824.01.patch, 
> HDFS-7824.02.patch, HDFS-7824.03.patch
>
>
> This JIRA is opened to provide API support of GetContentSummary with storage 
> type quota and usage information. It includes namenode implementation, client 
> namenode RPC protocol and Content.Counts refactoring. It is required by 
> HDFS-7701 (CLI to display storage type quota and usage).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7931) Spurious Error message "Could not find uri with key [dfs.encryption.key.provider.uri] to create a key" appears even when Encryption is disabled

2015-03-23 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-7931:
--
Attachment: HDFS-7931.2.patch

Re-uploading the patch to kick Jenkins.

> Spurious Error message "Could not find uri with key 
> [dfs.encryption.key.provider.uri] to create a key" appears even when 
> Encryption is disabled
> 
>
> Key: HDFS-7931
> URL: https://issues.apache.org/jira/browse/HDFS-7931
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsclient
>Affects Versions: 2.7.0
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Minor
> Attachments: HDFS-7931.1.patch, HDFS-7931.2.patch, HDFS-7931.2.patch
>
>
> The {{addDelegationTokens}} method in {{DistributedFileSystem}} calls 
> {{DFSClient#getKeyProvider()}} which attempts to get a provider from the 
> {{KeyProvderCache}} but since the required key, 
> *dfs.encryption.key.provider.uri* is not present (due to encryption being 
> disabled), it throws an exception.
> {noformat}
> 2015-03-11 23:55:47,849 [JobControl] ERROR 
> org.apache.hadoop.hdfs.KeyProviderCache - Could not find uri with key 
> [dfs.encryption.key.provider.uri] to create a keyProvider !!
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7824) GetContentSummary API and its namenode implementation for Storage Type Quota/Usage

2015-03-23 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376829#comment-14376829
 ] 

Arpit Agarwal commented on HDFS-7824:
-

Also +1 on the v3 patch once Jenkins issues are resolved.

> GetContentSummary API and its namenode implementation for Storage Type 
> Quota/Usage
> -
>
> Key: HDFS-7824
> URL: https://issues.apache.org/jira/browse/HDFS-7824
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-7824.00.patch, HDFS-7824.01.patch, 
> HDFS-7824.02.patch, HDFS-7824.03.patch
>
>
> This JIRA is opened to provide API support of GetContentSummary with storage 
> type quota and usage information. It includes namenode implementation, client 
> namenode RPC protocol and Content.Counts refactoring. It is required by 
> HDFS-7701 (CLI to display storage type quota and usage).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7824) GetContentSummary API and its namenode implementation for Storage Type Quota/Usage

2015-03-23 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376822#comment-14376822
 ] 

Arpit Agarwal commented on HDFS-7824:
-

The tests pass for me locally but Jenkins flagged some exceptions in 
getContentSummary.

{code}
org/apache/hadoop/fs/ContentSummary$Builder
 at 
org.apache.hadoop.hdfs.server.namenode.INode.computeAndConvertContentSummary(INode.java:447)
 at 
org.apache.hadoop.hdfs.server.namenode.INode.computeContentSummary(INode.java:436)
 at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.convertLastBlockToUnderConstruction(BlockManager.java:747)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.prepareFileForAppend(FSNamesystem.java:2698)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2661)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2943)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2914)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:656)
{code}

> GetContentSummary API and its namenode implementation for Storage Type 
> Quota/Usage
> -
>
> Key: HDFS-7824
> URL: https://issues.apache.org/jira/browse/HDFS-7824
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-7824.00.patch, HDFS-7824.01.patch, 
> HDFS-7824.02.patch, HDFS-7824.03.patch
>
>
> This JIRA is opened to provide API support of GetContentSummary with storage 
> type quota and usage information. It includes namenode implementation, client 
> namenode RPC protocol and Content.Counts refactoring. It is required by 
> HDFS-7701 (CLI to display storage type quota and usage).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-23 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376770#comment-14376770
 ] 

Zhe Zhang commented on HDFS-7854:
-

Thanks Jing! The changes look good to me.

bq. DFSOutputStream#completeFile's change is unnecessary?
Right, seems the rebase I did in 009 patch missed this from HDFS-7835. Thanks 
for spotting it!

> Separate class DataStreamer out of DFSOutputStream
> --
>
> Key: HDFS-7854
> URL: https://issues.apache.org/jira/browse/HDFS-7854
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
> HDFS-7854-003.patch, HDFS-7854-004-duplicate.patch, 
> HDFS-7854-004-duplicate2.patch, HDFS-7854-004-duplicate3.patch, 
> HDFS-7854-004.patch, HDFS-7854-005.patch, HDFS-7854-006.patch, 
> HDFS-7854-007.patch, HDFS-7854-008.patch, HDFS-7854-009.patch, 
> HDFS-7854.010.patch
>
>
> This sub-task separates DataStreamer from DFSOutputStream. The new DataStreamer 
> will accept packets and write them to remote datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7827) Erasure Coding: support striped blocks in non-protobuf fsimage

2015-03-23 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao resolved HDFS-7827.
-
   Resolution: Fixed
Fix Version/s: HDFS-7285
 Hadoop Flags: Reviewed

I've committed this to the feature branch. Thanks for the contribution, 
[~huizane]!

> Erasure Coding: support striped blocks in non-protobuf fsimage
> --
>
> Key: HDFS-7827
> URL: https://issues.apache.org/jira/browse/HDFS-7827
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Hui Zheng
> Fix For: HDFS-7285
>
> Attachments: HDFS-7827.000.patch, HDFS-7827.002.patch, 
> HDFS-7827.003.patch, HDFS-7827.004.patch
>
>
> HDFS-7749 only adds code to persist striped blocks to protobuf-based fsimage. 
> We should also add this support to the non-protobuf fsimage since it is still 
> used for use cases like offline image processing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7864) Erasure Coding: Update safemode calculation for striped blocks

2015-03-23 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao resolved HDFS-7864.
-
   Resolution: Fixed
Fix Version/s: HDFS-7285
 Hadoop Flags: Reviewed

I've committed this to the feature branch. Thanks for the contribution, 
[~demongaorui]!

> Erasure Coding: Update safemode calculation for striped blocks
> --
>
> Key: HDFS-7864
> URL: https://issues.apache.org/jira/browse/HDFS-7864
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: GAO Rui
> Fix For: HDFS-7285
>
> Attachments: HDFS-7864.1.patch, HDFS-7864.2.patch, HDFS-7864.3.patch, 
> HDFS-7864.4.patch
>
>
> We need to update the safemode calculation for striped blocks. Specifically, 
> each striped block now consists of multiple data/parity blocks stored in 
> corresponding DataNodes. The current code's calculation is thus inconsistent: 
> each striped block is only counted as 1 expected block, while each of its 
> member blocks may increase the number of received blocks by 1.
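The inconsistency can be made concrete with a simplified sketch (assuming, purely for illustration, a (6,3) Reed-Solomon schema; this is not the actual FSNamesystem code): counting a group once as expected while each internal block bumps the received count lets the safe-block ratio exceed 1, whereas counting a group as safe once its minimal number of data blocks is reported keeps the ratio consistent.

```java
/** Simplified illustration of the safemode counting problem described above. */
public class StripedSafeModeExample {
  // Assumed (6,3) Reed-Solomon layout: 6 data + 3 parity internal blocks.
  static final int DATA_BLOCKS = 6;

  /** Inconsistent: each group is 1 expected block, but all of its internal
   *  blocks increment the received count, so the ratio can exceed 1.0. */
  static double inconsistentRatio(int groups, int internalBlocksReported) {
    return (double) internalBlocksReported / groups;
  }

  /** Consistent alternative: a group counts as safe only once at least
   *  DATA_BLOCKS of its internal blocks have been reported. */
  static int safeGroups(int[] internalReportedPerGroup) {
    int safe = 0;
    for (int reported : internalReportedPerGroup) {
      if (reported >= DATA_BLOCKS) {
        safe++;
      }
    }
    return safe;
  }
}
```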



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7917) Use file to replace data dirs in test to simulate a disk failure.

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376739#comment-14376739
 ] 

Hadoop QA commented on HDFS-7917:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706670/HDFS-7917.002.patch
  against trunk revision 6ca1f12.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10034//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10034//console

This message is automatically generated.

> Use file to replace data dirs in test to simulate a disk failure. 
> --
>
> Key: HDFS-7917
> URL: https://issues.apache.org/jira/browse/HDFS-7917
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-7917.000.patch, HDFS-7917.001.patch, 
> HDFS-7917.002.patch
>
>
> Currently, in several tests, e.g., {{TestDataNodeVolumeFailureXXX}} and 
> {{TestDataNodeHotSwapVolumes}}, we simulate a disk failure by setting a 
> directory's executable permission to false. However, this raises the risk that 
> if the cleanup code is not executed, the directory cannot be easily 
> removed by the Jenkins job. 
> Since in {{DiskChecker#checkDirAccess}}:
> {code}
> private static void checkDirAccess(File dir) throws DiskErrorException {
>   if (!dir.isDirectory()) {
>     throw new DiskErrorException("Not a directory: " + dir.toString());
>   }
>   checkAccessByFileMethods(dir);
> }
> {code}
> We can replace the DN data directory with a file to achieve the same fault 
> injection goal, while being safer to clean up in any circumstance. 
> Additionally, as [~cnauroth] suggested: 
> bq. That might even let us enable some of these tests that are skipped on 
> Windows, because Windows allows access for the owner even after permissions 
> have been stripped.
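A minimal, self-contained sketch of the fault injection proposed above (class and method names here are hypothetical, not the actual test code): delete the data directory and create a regular file at the same path, so that a {{DiskChecker}}-style {{isDirectory()}} check fails without touching permissions.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class DiskFailureSimulation {
    // Replace an (empty) data directory with a regular file at the same path.
    static void simulateDiskFailure(File dataDir) throws IOException {
        if (dataDir.exists()) {
            Files.delete(dataDir.toPath()); // assumes the directory is empty
        }
        Files.createFile(dataDir.toPath());
    }

    public static void main(String[] args) throws IOException {
        File dir = Files.createTempDirectory("dn-data").toFile();
        simulateDiskFailure(dir);
        // The path now exists but is not a directory, which is exactly the
        // condition checkDirAccess() rejects with a DiskErrorException.
        System.out.println(dir.exists() && !dir.isDirectory());
    }
}
```

Cleanup is then a plain file delete, which succeeds regardless of permissions.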



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7037) Using distcp to copy data from insecure to secure cluster via hftp doesn't work (branch-2 only)

2015-03-23 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376733#comment-14376733
 ] 

Haohui Mai commented on HDFS-7037:
--

[~atm], sorry for the delay as I'm busy with 2.7 blockers.

bq. Note that in the latest patch allowing connections to fall back to an 
insecure cluster is configurable, and disabled by default. 

Yes, you can disable it through configuration, but since this is a global 
configuration that affects every HFTP connection, misconfiguration is still a 
concern from a practical point of view (which I raised in HDFS-6776). I think 
[~cnauroth] has an excellent articulation on the issue in 
https://issues.apache.org/jira/browse/HADOOP-11321?focusedCommentId=14225238&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14225238:

{quote}
...
This is pretty standard Hadoop code review feedback. As a result, Hadoop now 
has 762 configuration properties. That's from a grep -c of core-default.xml, 
hdfs-default.xml, yarn-default.xml and mapred-default.xml, so the count doesn't 
include undocumented properties. 
...
{quote}

Also, the fallback behavior is problematic from a security point of view. Chris 
has also proposed HADOOP-11701 to limit the impact of potential misconfiguration. 
Indeed it is not an ideal solution, but it is a practical one given the 
constraints on backward compatibility. Maybe we can do something similar in 
this jira.

To summarize:

* -1 on putting fallback logic in FileSystem in general, due to potential 
security vulnerabilities.
* Given that HFTP is deprecated and used only in limited use cases, 
I'm willing to change that to -0 if there are solutions like HADOOP-11701 to 
limit the impact of such a configuration.

> Using distcp to copy data from insecure to secure cluster via hftp doesn't 
> work  (branch-2 only)
> 
>
> Key: HDFS-7037
> URL: https://issues.apache.org/jira/browse/HDFS-7037
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, tools
>Affects Versions: 2.6.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-7037.001.patch
>
>
> This is a branch-2 only issue since hftp is only supported there. 
> Issuing "distcp hftp:// hdfs://" gave the 
> following failure exception:
> {code}
> 14/09/13 22:07:40 INFO tools.DelegationTokenFetcher: Error when dealing 
> remote token:
> java.io.IOException: Error when dealing remote token: Internal Server Error
>   at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.run(DelegationTokenFetcher.java:375)
>   at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:238)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:252)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:247)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.getDelegationToken(HftpFileSystem.java:247)
>   at 
> org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:140)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.addDelegationTokenParam(HftpFileSystem.java:337)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.openConnection(HftpFileSystem.java:324)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.fetchList(HftpFileSystem.java:457)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.getFileStatus(HftpFileSystem.java:472)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.getFileStatus(HftpFileSystem.java:501)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:248)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1623)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:81)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:342)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:121)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:390)
> 14/09/13 22:07:40 WARN security.UserGroupInformation: 
> PriviledgedActionException as:hadoopu...@xyz.com (auth:KERBEROS) 
> cause:java.io.IOException: Unable to obtain remote token
> 14/09/13 22:07:40 ERROR tools.DistCp: Excepti

[jira] [Updated] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-23 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7854:

Attachment: HDFS-7854.010.patch

Since all the comments are trivial, I'm uploading a patch with all the minor 
changes to save time. [~libo-intel] and [~zhz], please see if the comments and 
the patch make sense.

> Separate class DataStreamer out of DFSOutputStream
> --
>
> Key: HDFS-7854
> URL: https://issues.apache.org/jira/browse/HDFS-7854
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
> HDFS-7854-003.patch, HDFS-7854-004-duplicate.patch, 
> HDFS-7854-004-duplicate2.patch, HDFS-7854-004-duplicate3.patch, 
> HDFS-7854-004.patch, HDFS-7854-005.patch, HDFS-7854-006.patch, 
> HDFS-7854-007.patch, HDFS-7854-008.patch, HDFS-7854-009.patch, 
> HDFS-7854.010.patch
>
>
> This sub-task separates DataStreamer from DFSOutputStream. The new DataStreamer 
> will accept packets and write them to remote datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-23 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376723#comment-14376723
 ] 

Jing Zhao commented on HDFS-7854:
-

Thanks Bo and Zhe for working on this! The 009 patch looks pretty good to me. 
Only some nits:
# With the HdfsFileStatus passed into DataStreamer, we may no longer need to 
define {{blockSize}} and {{fileId}} in {{DataStreamer}}. We can directly call 
{{stat#getXXX}}.
# There is a tab in {{DataStreamer#run}}
# In {{DFSOutputStream#flushOrSync}}, there is a redundant ";"
{code}
toWaitFor = streamer.getLastQueuedSeqno();;
{code}
# We can move static methods to the beginning of DataStreamer.java
# We should remove the item for 
{{DFSOutputStream$DataStreamer$ResponseProcessor}} from findbugsExcludeFile.
# It may be clearer to move the {{adjustPacketChunkSize}} and 
{{streamer.setPipelineInConstruction}} calls into the first "if" section: these 
are all actions for appending to the last block.
# {{DFSOutputStream#completeFile}}'s change is unnecessary?
# {{DFSOutputStream#createSocketForPipeline}} can be removed

> Separate class DataStreamer out of DFSOutputStream
> --
>
> Key: HDFS-7854
> URL: https://issues.apache.org/jira/browse/HDFS-7854
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
> HDFS-7854-003.patch, HDFS-7854-004-duplicate.patch, 
> HDFS-7854-004-duplicate2.patch, HDFS-7854-004-duplicate3.patch, 
> HDFS-7854-004.patch, HDFS-7854-005.patch, HDFS-7854-006.patch, 
> HDFS-7854-007.patch, HDFS-7854-008.patch, HDFS-7854-009.patch
>
>
> This sub-task separates DataStreamer from DFSOutputStream. The new DataStreamer 
> will accept packets and write them to remote datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7976:
-
Description: The mount option "sync" is critical. I observed that this 
mount option can minimize or avoid reordered writes. Mount option "sync" could 
have some negative performance impact on file uploading. However, it makes 
the performance much more predictable and can also reduce the possibility of 
failures caused by file dumping.  (was: The mount option "sync" is critical. I 
observed that this mount option can minimize or avoid reordered writes. Mount 
option "sync" could have some negative performance impact on the file 
uploading. However, it makes the performance much more predicable and can also 
reduce the possibly of failures caused by file dumping.)

> Update NFS user guide for mount option "sync" to minimize or avoid reordered 
> writes
> ---
>
> Key: HDFS-7976
> URL: https://issues.apache.org/jira/browse/HDFS-7976
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>
> The mount option "sync" is critical. I observed that this mount option can 
> minimize or avoid reordered writes. Mount option "sync" could have some 
> negative performance impact on file uploading. However, it makes the 
> performance much more predictable and can also reduce the possibility of 
> failures caused by file dumping.
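For concreteness, a mount invocation along the lines the guide would document (the gateway hostname and mount point below are placeholders):

```shell
# Mount the HDFS NFS gateway with the "sync" option to minimize
# reordered writes; "nfsgateway" and /mnt/hdfs are example names.
mount -t nfs -o vers=3,proto=tcp,nolock,sync nfsgateway:/ /mnt/hdfs
```

The trade-off described above applies: "sync" forces writes through in order at some cost to upload throughput.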



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7782) Read a striping layout file from client side

2015-03-23 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-7782:

Attachment: HDFS-7782-003.patch

Thanks Bo for the review! The new patch addresses all the minor comments. It 
also adds a Javadoc for {{getBlockAt}}.

I guess the logic of padding zero bytes will be added with the striped 
{{DFSOutputStream}}? We can add the reading logic after the writing part is 
finalized.

Regarding the issue of opening {{blockReader}} multiple times: the current 
remote block reader actually returns at most 64KB at a time, which is much 
smaller than our default cell size (in your example even {{DFSInputStream}} 
will open {{blockReader}} 3 times). So I decided to reuse much of the 
stateful / non-positional read code. A local block reader could return more 
than 64KB, but for a striped block group a local block reader cannot read 
across a cell boundary anyway.

The following code from {{RemoteBlockReader2#read}} determines the maximum read 
size:
{code}
int nRead = Math.min(curDataSlice.remaining(), len);
curDataSlice.get(buf, off, nRead);

return nRead;
{code}
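To illustrate why a single cell requires many read calls (this is a standalone sketch, not the HDFS reader itself): a reader capped at 64KB per call, like the snippet above, must be invoked repeatedly to fill one default-sized cell.

```java
import java.util.Arrays;

public class CellRead {
    static final int MAX_CHUNK = 64 * 1024; // per-call cap, as in RemoteBlockReader2

    // Simulated block reader: delivers at most MAX_CHUNK bytes per call.
    static int read(byte[] buf, int off, int len) {
        int nRead = Math.min(MAX_CHUNK, len);
        Arrays.fill(buf, off, off + nRead, (byte) 1);
        return nRead;
    }

    public static void main(String[] args) {
        int cellSize = 1024 * 1024; // a hypothetical 1MB cell
        byte[] cell = new byte[cellSize];
        int pos = 0, calls = 0;
        while (pos < cellSize) {    // loop until the cell is full
            pos += read(cell, pos, cellSize - pos);
            calls++;
        }
        System.out.println(calls);  // 1MB / 64KB = 16 calls
    }
}
```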

> Read a striping layout file from client side
> 
>
> Key: HDFS-7782
> URL: https://issues.apache.org/jira/browse/HDFS-7782
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Zhe Zhang
> Attachments: HDFS-7782-000.patch, HDFS-7782-001.patch, 
> HDFS-7782-002.patch, HDFS-7782-003.patch
>
>
> A client reading a file should not need to know or handle the file's layout. 
> This sub-task adds logic to DFSInputStream to support reading striping layout 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-23 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376712#comment-14376712
 ] 

Yi Liu commented on HDFS-7960:
--

A correction to my second comment: we could not use 
{{removeBlocksAssociatedTo}}, since it would remove all blocks of that DN, not 
only those of the one storage. So we should just add removal of the blocks from 
{{InvalidateBlocks}} in {{removeZombieReplicas}}.

> The full block report should prune zombie storages even if they're not empty
> 
>
> Key: HDFS-7960
> URL: https://issues.apache.org/jira/browse/HDFS-7960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
> HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch
>
>
> The full block report should prune zombie storages even if they're not empty. 
>  We have seen cases in production where zombie storages have not been pruned 
> subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
> is a block in some old storage which is actually not there.  In this case, 
> the block will not show up in the "new" storage (once old is renamed to new) 
> and the old storage will linger forever as a zombie, even with the HDFS-7596 
> fix applied.  This also happens with datanode hotplug, when a drive is 
> removed.  In this case, an entire storage (volume) goes away but the blocks 
> do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6826) Plugin interface to enable delegation of HDFS authorization assertions

2015-03-23 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376707#comment-14376707
 ] 

Jitendra Nath Pandey commented on HDFS-6826:


That makes sense.
+1

I will commit this tomorrow, unless there is an objection.


> Plugin interface to enable delegation of HDFS authorization assertions
> --
>
> Key: HDFS-6826
> URL: https://issues.apache.org/jira/browse/HDFS-6826
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Attachments: HDFS-6826-idea.patch, HDFS-6826-idea2.patch, 
> HDFS-6826-permchecker.patch, HDFS-6826.10.patch, HDFS-6826.11.patch, 
> HDFS-6826.12.patch, HDFS-6826.13.patch, HDFS-6826.14.patch, 
> HDFS-6826.15.patch, HDFS-6826.16.patch, HDFS-6826v3.patch, HDFS-6826v4.patch, 
> HDFS-6826v5.patch, HDFS-6826v6.patch, HDFS-6826v7.1.patch, 
> HDFS-6826v7.2.patch, HDFS-6826v7.3.patch, HDFS-6826v7.4.patch, 
> HDFS-6826v7.5.patch, HDFS-6826v7.6.patch, HDFS-6826v7.patch, 
> HDFS-6826v8.patch, HDFS-6826v9.patch, 
> HDFSPluggableAuthorizationProposal-v2.pdf, 
> HDFSPluggableAuthorizationProposal.pdf
>
>
> When Hbase data, HiveMetaStore data or Search data is accessed via services 
> (Hbase region servers, HiveServer2, Impala, Solr) the services can enforce 
> permissions on corresponding entities (databases, tables, views, columns, 
> search collections, documents). It is desirable, when the data is accessed 
> directly by users accessing the underlying data files (i.e. from a MapReduce 
> job), that the permission of the data files map to the permissions of the 
> corresponding data entity (i.e. table, column family or search collection).
> To enable this we need to have the necessary hooks in place in the NameNode 
> to delegate authorization to an external system that can map HDFS 
> files/directories to data entities and resolve their permissions based on the 
> data entities permissions.
> I’ll be posting a design proposal in the next few days.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

