[jira] [Commented] (HDFS-6482) Use block ID-based block layout on datanodes

2015-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645568#comment-14645568
 ] 

Hudson commented on HDFS-6482:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8236 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8236/])
HDFS-8834. TestReplication is not valid after HDFS-6482. (Contributed by Lei 
Xu) (lei: rev f4f1b8b267703b8bebab06e17e69a4a4de611592)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplication.java


> Use block ID-based block layout on datanodes
> 
>
> Key: HDFS-6482
> URL: https://issues.apache.org/jira/browse/HDFS-6482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: James Thomas
>Assignee: James Thomas
> Fix For: 2.6.0
>
> Attachments: 6482-design.doc, HDFS-6482.1.patch, HDFS-6482.2.patch, 
> HDFS-6482.3.patch, HDFS-6482.4.patch, HDFS-6482.5.patch, HDFS-6482.6.patch, 
> HDFS-6482.7.patch, HDFS-6482.8.patch, HDFS-6482.9.patch, HDFS-6482.patch, 
> hadoop-24-datanode-dir.tgz
>
>
> Right now blocks are placed into directories that are split into many 
> subdirectories when capacity is reached. Instead we can use a block's ID to 
> determine the path it should go in. This eliminates the need for the LDir 
> data structure that facilitates the splitting of directories when they reach 
> capacity as well as fields in ReplicaInfo that keep track of a replica's 
> location.
> An extension of the work in HDFS-3290.
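The ID-based layout described above can be sketched in a few lines; the bit ranges and `subdir` naming below follow the general idea of `DatanodeUtil#idToBlockDir` but are illustrative, not a quote of the committed patch:

```java
// Sketch: derive a fixed two-level subdirectory from a block ID, so a
// block's on-disk path is a pure function of its ID (no LDir splitting).
// The exact bit ranges are an assumption for illustration.
public class BlockIdLayout {
    static String idToBlockDir(long blockId) {
        int d1 = (int) ((blockId >> 16) & 0xFF); // first-level subdir index
        int d2 = (int) ((blockId >> 8) & 0xFF);  // second-level subdir index
        return "subdir" + d1 + "/subdir" + d2;
    }

    public static void main(String[] args) {
        // Every replica of block 65536 lands in the same relative directory.
        System.out.println(idToBlockDir(65536L)); // subdir1/subdir0
    }
}
```

Because the path is computed, no per-replica location fields or directory-splitting bookkeeping are needed.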



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8834) TestReplication#testReplicationWhenBlockCorruption is not valid after HDFS-6482

2015-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645569#comment-14645569
 ] 

Hudson commented on HDFS-8834:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8236 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8236/])
HDFS-8834. TestReplication is not valid after HDFS-6482. (Contributed by Lei 
Xu) (lei: rev f4f1b8b267703b8bebab06e17e69a4a4de611592)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplication.java


> TestReplication#testReplicationWhenBlockCorruption is not valid after 
> HDFS-6482
> ---
>
> Key: HDFS-8834
> URL: https://issues.apache.org/jira/browse/HDFS-8834
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: testing
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-8834.00.patch
>
>
> {{TestReplication#testReplicationWhenBlockCorruption}} assumes the DN has a 
> single level of block directories:
> {code}
> File[] listFiles = participatedNodeDirs.listFiles();
> {code}
> However, HDFS-6482 changed the block directory layout to two levels of 
> subdirectories, which leaves the following code invalid (it never runs):
> {code}
> for (File file : listFiles) {
>   if (file.getName().startsWith(Block.BLOCK_FILE_PREFIX)
>       && !file.getName().endsWith("meta")) {
>     blockFile = file.getName();
>     for (File file1 : nonParticipatedNodeDirs) {
>       file1.mkdirs();
>       new File(file1, blockFile).createNewFile();
>       new File(file1, blockFile + "_1000.meta").createNewFile();
>     }
>     break;
>   }
> }
> {code}
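A layout-agnostic version of that scan would walk the data directory recursively instead of assuming one level. This is a sketch of the idea, not the actual HDFS-8834 patch; all names are illustrative:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Layout-agnostic scan: collect block files (blk_* without the .meta
// suffix) recursively, so the logic works for both the old flat layout
// and the HDFS-6482 two-level layout.
public class FindBlocks {
    static List<File> findBlockFiles(File dir) {
        List<File> out = new ArrayList<>();
        File[] entries = dir.listFiles();
        if (entries == null) {
            return out; // not a directory, or I/O error
        }
        for (File f : entries) {
            if (f.isDirectory()) {
                out.addAll(findBlockFiles(f));
            } else if (f.getName().startsWith("blk_")
                    && !f.getName().endsWith("meta")) {
                out.add(f);
            }
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        // Simulate a two-level DN layout with one block and its meta file.
        File root = new File(System.getProperty("java.io.tmpdir"), "dn-demo");
        File sub = new File(root, "subdir1/subdir0");
        sub.mkdirs();
        new File(sub, "blk_1001").createNewFile();
        new File(sub, "blk_1001_1000.meta").createNewFile();
        System.out.println(findBlockFiles(root).size()); // 1
    }
}
```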





[jira] [Updated] (HDFS-8834) TestReplication#testReplicationWhenBlockCorruption is not valid after HDFS-6482

2015-07-28 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8834:

   Resolution: Fixed
Fix Version/s: 2.8.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks a lot for the quick review, [~hitliuyi]!  This patch only changes the 
test {{TestReplication}}, so these two test failures are unrelated.

I've committed this to trunk and branch-2.

> TestReplication#testReplicationWhenBlockCorruption is not valid after 
> HDFS-6482
> ---
>
> Key: HDFS-8834
> URL: https://issues.apache.org/jira/browse/HDFS-8834
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: testing
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-8834.00.patch
>
>
> {{TestReplication#testReplicationWhenBlockCorruption}} assumes the DN has a 
> single level of block directories:
> {code}
> File[] listFiles = participatedNodeDirs.listFiles();
> {code}
> However, HDFS-6482 changed the block directory layout to two levels of 
> subdirectories, which leaves the following code invalid (it never runs):
> {code}
> for (File file : listFiles) {
>   if (file.getName().startsWith(Block.BLOCK_FILE_PREFIX)
>       && !file.getName().endsWith("meta")) {
>     blockFile = file.getName();
>     for (File file1 : nonParticipatedNodeDirs) {
>       file1.mkdirs();
>       new File(file1, blockFile).createNewFile();
>       new File(file1, blockFile + "_1000.meta").createNewFile();
>     }
>     break;
>   }
> }
> {code}





[jira] [Commented] (HDFS-7351) Document the HDFS Erasure Coding feature

2015-07-28 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645549#comment-14645549
 ] 

Vinayakumar B commented on HDFS-7351:
-

Thanks [~zhz] for the review and update.
Sure, I will wait for HDFS-8833's conclusion.

> Document the HDFS Erasure Coding feature
> 
>
> Key: HDFS-7351
> URL: https://issues.apache.org/jira/browse/HDFS-7351
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-7351-HDFS-7285-01.patch
>
>






[jira] [Commented] (HDFS-8834) TestReplication#testReplicationWhenBlockCorruption is not valid after HDFS-6482

2015-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645543#comment-14645543
 ] 

Hadoop QA commented on HDFS-8834:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   8m  8s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 53s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 21s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 23s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 22s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m  8s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 174m  8s | Tests failed in hadoop-hdfs. |
| | | 197m 27s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.namenode.ha.TestRequestHedgingProxyProvider |
| Timed out tests | org.apache.hadoop.hdfs.TestPread |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747685/HDFS-8834.00.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 69b0957 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11860/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11860/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11860/console |


This message was automatically generated.

> TestReplication#testReplicationWhenBlockCorruption is not valid after 
> HDFS-6482
> ---
>
> Key: HDFS-8834
> URL: https://issues.apache.org/jira/browse/HDFS-8834
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: testing
> Attachments: HDFS-8834.00.patch
>
>
> {{TestReplication#testReplicationWhenBlockCorruption}} assumes the DN has a 
> single level of block directories:
> {code}
> File[] listFiles = participatedNodeDirs.listFiles();
> {code}
> However, HDFS-6482 changed the block directory layout to two levels of 
> subdirectories, which leaves the following code invalid (it never runs):
> {code}
> for (File file : listFiles) {
>   if (file.getName().startsWith(Block.BLOCK_FILE_PREFIX)
>       && !file.getName().endsWith("meta")) {
>     blockFile = file.getName();
>     for (File file1 : nonParticipatedNodeDirs) {
>       file1.mkdirs();
>       new File(file1, blockFile).createNewFile();
>       new File(file1, blockFile + "_1000.meta").createNewFile();
>     }
>     break;
>   }
> }
> {code}





[jira] [Commented] (HDFS-8811) Move BlockStoragePolicy name's constants from HdfsServerConstants.java to HdfsConstants.java

2015-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645540#comment-14645540
 ] 

Hudson commented on HDFS-8811:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8235 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8235/])
HDFS-8811. Move BlockStoragePolicy name's constants from 
HdfsServerConstants.java to HdfsConstants.java (Contributed by Vinayakumar B) 
(vinayakumarb: rev 50887e5b07b6abb20c0edd74211e5612dc7b16da)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java


> Move BlockStoragePolicy name's constants from HdfsServerConstants.java to 
> HdfsConstants.java
> 
>
> Key: HDFS-8811
> URL: https://issues.apache.org/jira/browse/HDFS-8811
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-8811-01.patch
>
>
> Currently {{HdfsServerConstants.java}} has the following constants:
> {code}  String HOT_STORAGE_POLICY_NAME = "HOT";
>   String WARM_STORAGE_POLICY_NAME = "WARM";
>   String COLD_STORAGE_POLICY_NAME = "COLD";{code}
> and {{HdfsConstants.java}} has the following:
> {code}  public static final String MEMORY_STORAGE_POLICY_NAME = 
> "LAZY_PERSIST";
>   public static final String ALLSSD_STORAGE_POLICY_NAME = "ALL_SSD";
>   public static final String ONESSD_STORAGE_POLICY_NAME = "ONE_SSD";{code}
> It would be better to move all of these to one place, HdfsConstants.java, 
> which client APIs can also access since it lives in the hdfs-client module.
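Consolidated, the six names would live side by side. This is only a sketch of the resulting shape; the class name is a hypothetical stand-in for HdfsConstants, and the values come from the issue description:

```java
// Sketch of the consolidated policy-name constants after the move.
// "StoragePolicyNames" is an illustrative stand-in for HdfsConstants.
public final class StoragePolicyNames {
    public static final String HOT_STORAGE_POLICY_NAME    = "HOT";
    public static final String WARM_STORAGE_POLICY_NAME   = "WARM";
    public static final String COLD_STORAGE_POLICY_NAME   = "COLD";
    public static final String MEMORY_STORAGE_POLICY_NAME = "LAZY_PERSIST";
    public static final String ALLSSD_STORAGE_POLICY_NAME = "ALL_SSD";
    public static final String ONESSD_STORAGE_POLICY_NAME = "ONE_SSD";

    public static void main(String[] args) {
        System.out.println(MEMORY_STORAGE_POLICY_NAME); // LAZY_PERSIST
    }
}
```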





[jira] [Commented] (HDFS-8822) Add SSD storagepolicy tests in TestBlockStoragePolicy#testDefaultPolicies

2015-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645541#comment-14645541
 ] 

Hudson commented on HDFS-8822:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8235 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8235/])
HDFS-8822. Add SSD storagepolicy tests in 
TestBlockStoragePolicy#testDefaultPolicies (Contributed by Vinayakumar B) 
(vinayakumarb: rev 975e138df316f59e8bb0642e138d4b1170fb8184)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Add SSD storagepolicy tests in TestBlockStoragePolicy#testDefaultPolicies
> -
>
> Key: HDFS-8822
> URL: https://issues.apache.org/jira/browse/HDFS-8822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-8822-01.patch
>
>
> Add tests for storage policies ALLSSD and ONESSD in 
> {{TestBlockStoragePolicy#testDefaultPolicies(..)}}





[jira] [Updated] (HDFS-8822) Add SSD storagepolicy tests in TestBlockStoragePolicy#testDefaultPolicies

2015-07-28 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8822:

Hadoop Flags: Reviewed

> Add SSD storagepolicy tests in TestBlockStoragePolicy#testDefaultPolicies
> -
>
> Key: HDFS-8822
> URL: https://issues.apache.org/jira/browse/HDFS-8822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-8822-01.patch
>
>
> Add tests for storage policies ALLSSD and ONESSD in 
> {{TestBlockStoragePolicy#testDefaultPolicies(..)}}





[jira] [Updated] (HDFS-8822) Add SSD storagepolicy tests in TestBlockStoragePolicy#testDefaultPolicies

2015-07-28 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8822:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~hitliuyi] for the review.
Committed to trunk and branch-2.

> Add SSD storagepolicy tests in TestBlockStoragePolicy#testDefaultPolicies
> -
>
> Key: HDFS-8822
> URL: https://issues.apache.org/jira/browse/HDFS-8822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-8822-01.patch
>
>
> Add tests for storage policies ALLSSD and ONESSD in 
> {{TestBlockStoragePolicy#testDefaultPolicies(..)}}





[jira] [Commented] (HDFS-8622) Implement GETCONTENTSUMMARY operation for WebImageViewer

2015-07-28 Thread Jagadesh Kiran N (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645522#comment-14645522
 ] 

Jagadesh Kiran N commented on HDFS-8622:


Hi, the test case failure 
"org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.testAST" is not related to 
the changes made in this patch.

> Implement GETCONTENTSUMMARY operation for WebImageViewer
> 
>
> Key: HDFS-8622
> URL: https://issues.apache.org/jira/browse/HDFS-8622
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-8622-00.patch, HDFS-8622-01.patch, 
> HDFS-8622-02.patch, HDFS-8622-03.patch, HDFS-8622-04.patch, HDFS-8622-05.patch
>
>
>  It would be better for administrators if {{GETCONTENTSUMMARY}} were 
> supported.
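Once supported, a client could hit the WebImageViewer's read-only WebHDFS-style endpoint with a URL of this shape; the port and path below are assumptions for illustration, not part of the patch:

```java
// Builds the WebHDFS-style URL a GETCONTENTSUMMARY request would use
// against WebImageViewer. Port 5978 and the /dir path are assumptions.
public class ContentSummaryUrl {
    static String buildUrl(String host, int port, String path) {
        return "http://" + host + ":" + port + "/webhdfs/v1" + path
            + "?op=GETCONTENTSUMMARY";
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("localhost", 5978, "/dir"));
        // http://localhost:5978/webhdfs/v1/dir?op=GETCONTENTSUMMARY
    }
}
```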





[jira] [Commented] (HDFS-8581) count cmd calculate wrong when huge files exist in one folder

2015-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645524#comment-14645524
 ] 

Hadoop QA commented on HDFS-8581:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12739214/HDFS-8581.1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 50887e5 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11864/console |


This message was automatically generated.

> count cmd calculate wrong when huge files exist in one folder
> -
>
> Key: HDFS-8581
> URL: https://issues.apache.org/jira/browse/HDFS-8581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: tongshiquan
>Assignee: J.Andreina
>Priority: Minor
> Attachments: HDFS-8581.1.patch
>
>
> If one directory such as "/result" contains about 20 files, then when 
> "hdfs dfs -count /" is executed, the result goes wrong: for all directories 
> whose names sort after "/result", the file counts are not included.
> My cluster is shown below. "/result_1433858936" is the directory containing 
> the huge number of files, and the files in "/sparkJobHistory", "/tmp", and 
> "/user" are not counted.
> vm-221:/export1/BigData/current # hdfs dfs -ls /
> 15/06/11 11:00:17 INFO hdfs.PeerCache: SocketCache disabled.
> Found 9 items
> -rw-r--r--   3 hdfs   supergroup  0 2015-06-08 12:10 
> /PRE_CREATE_DIR.SUCCESS
> drwxr-x---   - flume  hadoop  0 2015-06-08 12:08 /flume
> drwx--   - hbase  hadoop  0 2015-06-10 15:25 /hbase
> drwxr-xr-x   - hdfs   supergroup  0 2015-06-10 17:19 /hyt
> drwxrwxrwx   - mapred hadoop  0 2015-06-08 12:08 /mr-history
> drwxr-xr-x   - hdfs   supergroup  0 2015-06-09 22:10 
> /result_1433858936
> drwxrwxrwx   - spark  supergroup  0 2015-06-10 19:15 /sparkJobHistory
> drwxrwxrwx   - hdfs   hadoop  0 2015-06-08 12:14 /tmp
> drwxrwxrwx   - hdfs   hadoop  0 2015-06-09 21:57 /user
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /
> 15/06/11 11:00:24 INFO hdfs.PeerCache: SocketCache disabled.
> 1043   171536 1756375688 /
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /PRE_CREATE_DIR.SUCCESS
> 15/06/11 11:00:30 INFO hdfs.PeerCache: SocketCache disabled.
>01  0 /PRE_CREATE_DIR.SUCCESS
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /flume
> 15/06/11 11:00:41 INFO hdfs.PeerCache: SocketCache disabled.
>10  0 /flume
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /hbase
> 15/06/11 11:00:49 INFO hdfs.PeerCache: SocketCache disabled.
>   36   18  14807 /hbase
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /hyt
> 15/06/11 11:01:09 INFO hdfs.PeerCache: SocketCache disabled.
>10  0 /hyt
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /mr-history
> 15/06/11 11:01:18 INFO hdfs.PeerCache: SocketCache disabled.
>30  0 /mr-history
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /result_1433858936
> 15/06/11 11:01:29 INFO hdfs.PeerCache: SocketCache disabled.
> 1001   171517 1756360881 /result_1433858936
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /sparkJobHistory
> 15/06/11 11:01:41 INFO hdfs.PeerCache: SocketCache disabled.
>13  21785 /sparkJobHistory
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /tmp
> 15/06/11 11:01:48 INFO hdfs.PeerCache: SocketCache disabled.
>   176  35958 /tmp
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /user
> 15/06/11 11:01:55 INFO hdfs.PeerCache: SocketCache disabled.
>   121  19077 /user
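As a sanity check, the root's FILE_COUNT from {{hdfs dfs -count /}} should equal the sum of its children's FILE_COUNTs. A sketch with hypothetical numbers (loosely echoing the listing above, since the exact column values are ambiguous in the report) shows the kind of mismatch being described:

```java
import java.util.Arrays;

// Consistency check for "hdfs dfs -count": the root FILE_COUNT should
// equal the sum of the per-child FILE_COUNTs. Numbers are hypothetical.
public class CountCheck {
    static long sum(long[] xs) {
        return Arrays.stream(xs).sum();
    }

    public static void main(String[] args) {
        long[] childFileCounts = {1, 0, 18, 0, 0, 171517, 3, 6, 1};
        long rootFileCount = 171536;          // what "-count /" reported
        long expected = sum(childFileCounts); // what it should report
        System.out.println(expected - rootFileCount); // 10 files missing from the root total
    }
}
```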





[jira] [Updated] (HDFS-8811) Move BlockStoragePolicy name's constants from HdfsServerConstants.java to HdfsConstants.java

2015-07-28 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8811:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~hitliuyi] for the review.
Committed to trunk and branch-2.

> Move BlockStoragePolicy name's constants from HdfsServerConstants.java to 
> HdfsConstants.java
> 
>
> Key: HDFS-8811
> URL: https://issues.apache.org/jira/browse/HDFS-8811
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-8811-01.patch
>
>
> Currently {{HdfsServerConstants.java}} has the following constants:
> {code}  String HOT_STORAGE_POLICY_NAME = "HOT";
>   String WARM_STORAGE_POLICY_NAME = "WARM";
>   String COLD_STORAGE_POLICY_NAME = "COLD";{code}
> and {{HdfsConstants.java}} has the following:
> {code}  public static final String MEMORY_STORAGE_POLICY_NAME = 
> "LAZY_PERSIST";
>   public static final String ALLSSD_STORAGE_POLICY_NAME = "ALL_SSD";
>   public static final String ONESSD_STORAGE_POLICY_NAME = "ONE_SSD";{code}
> It would be better to move all of these to one place, HdfsConstants.java, 
> which client APIs can also access since it lives in the hdfs-client module.





[jira] [Commented] (HDFS-7351) Document the HDFS Erasure Coding feature

2015-07-28 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645519#comment-14645519
 ] 

Zhe Zhang commented on HDFS-7351:
-

Thanks [~vinayrpet] and [~umamaheswararao] for the great work!

The documentation looks good to me overall. I'd like to point out HDFS-8833, 
which aims to change the way EC policies are configured. Depending on how it 
goes, we might need to update the EC zone section of the documentation.

> Document the HDFS Erasure Coding feature
> 
>
> Key: HDFS-7351
> URL: https://issues.apache.org/jira/browse/HDFS-7351
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-7351-HDFS-7285-01.patch
>
>






[jira] [Resolved] (HDFS-8768) Erasure Coding: block group ID displayed in WebUI is not consistent with fsck

2015-07-28 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HDFS-8768.
-
Resolution: Duplicate

> Erasure Coding: block group ID displayed in WebUI is not consistent with fsck
> -
>
> Key: HDFS-8768
> URL: https://issues.apache.org/jira/browse/HDFS-8768
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: GAO Rui
> Attachments: Screen Shot 2015-07-14 at 15.33.08.png, 
> screen-shot-with-HDFS-8779-patch.PNG
>
>
> This is duplicated by [HDFS-8779].
> For example, in the WebUI (usually namenode port 50070), one erasure-coded 
> file with one block group was displayed as in the attached screenshot [^Screen 
> Shot 2015-07-14 at 15.33.08.png]. But with the fsck command, the block group 
> of the same file was displayed as: {{0. 
> BP-1130999596-172.23.38.10-1433791629728:blk_-9223372036854740160_3384 
> len=6438256640}}
> After checking block file names on the datanodes, we believe the WebUI may 
> have a problem displaying erasure-coded block groups.





[jira] [Commented] (HDFS-8480) Fix performance and timeout issues in HDFS-7929 by using hard-links to preserve old edit logs instead of copying them

2015-07-28 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645511#comment-14645511
 ] 

Zhe Zhang commented on HDFS-8480:
-

[~mingma] Thanks for the good catch! Yes, this is indeed a flaw in the patch.

I can't think of another use case requiring us to extend {{FSEditLogOp}} as you 
suggested, but testing upgrade scenarios is itself a pretty strong motivation. 
The only alternative is to manually create some old edit log files.

Once we agree on a plan, I'm happy to make the change.

> Fix performance and timeout issues in HDFS-7929 by using hard-links to 
> preserve old edit logs instead of copying them
> -
>
> Key: HDFS-8480
> URL: https://issues.apache.org/jira/browse/HDFS-8480
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Critical
> Fix For: 2.7.1
>
> Attachments: HDFS-8480.00.patch, HDFS-8480.01.patch, 
> HDFS-8480.02.patch, HDFS-8480.03.patch
>
>
> HDFS-7929 copies existing edit logs to the storage directory of the upgraded 
> {{NameNode}}. This slows down the upgrade process. This JIRA aims to use 
> hard-linking instead of per-op copying to achieve the same goal.
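The difference between copying and hard-linking can be sketched with plain NIO; this is illustrative of the approach, not the actual NameNode upgrade code, and the file names are made up:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Hard-linking preserves an old edit log in a new location without
// copying its bytes -- the approach HDFS-8480 proposes over per-op copy.
public class HardLinkDemo {
    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("editlog-demo");
        Path oldLog = dir.resolve("edits_0000001-0000100");
        Files.write(oldLog, "edit ops".getBytes());

        // O(1) per file regardless of log size, unlike Files.copy.
        Path preserved = dir.resolve("edits_0000001-0000100.preserved");
        Files.createLink(preserved, oldLog);

        System.out.println(Files.isSameFile(oldLog, preserved)); // true
    }
}
```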





[jira] [Commented] (HDFS-8815) DFS getStoragePolicy implementation using single RPC call

2015-07-28 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645506#comment-14645506
 ] 

Vinayakumar B commented on HDFS-8815:
-

Hi [~surendrasingh], thanks for taking this up. 
Here are some comments.

1. I think we need to implement only one of the proposed options; the patch 
seems to have both.
Since {{getStoragePolicy(path)}} directly gets the {{BlockStoragePolicy}}, let's 
NOT cache the policy suite for now.

2. {{FSDirAttrOp#getStoragePolicy(..)}} need not be {{public}}.

3. In {{FSDirAttrOp#getStoragePolicy(..)}}, after the line {{byte[][] 
pathComponents = FSDirectory.getPathComponentsForReservedPath(path);}}, the 
entire block should be inside {{fsd.readlock()}}.

4. Instead of the code below
  {code}byte storagePolicyId = inode.getStoragePolicyID();
for (BlockStoragePolicy policy : bm.getStoragePolicies()) {
  if (policy.getId() == storagePolicyId) {
    return policy;
  }
}
return null;{code}
you can use one line:
{code}return bm.getStoragePolicy(inode.getStoragePolicyID());{code}

5. In the {{DFSClient#getStoragePolicy}} javadoc, put @param above @return.



[~arpitagarwal], Do you have any thoughts here?
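Combined, suggestions 3 and 4 give the method roughly this shape. Every class and field below is an illustrative stand-in, not actual HDFS code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Shape of the suggested lookup: resolve the policy under the directory
// read lock, mapping the inode's policy ID in a single map lookup.
public class PolicyLookup {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<Byte, String> policiesById = new HashMap<>();

    PolicyLookup() {
        policiesById.put((byte) 7, "HOT");   // hypothetical policy IDs
        policiesById.put((byte) 5, "WARM");
    }

    String getStoragePolicy(byte inodePolicyId) {
        lock.readLock().lock();  // comment 3: whole lookup under readLock()
        try {
            // comment 4: one-line lookup instead of iterating all policies
            return policiesById.get(inodePolicyId);
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(new PolicyLookup().getStoragePolicy((byte) 7)); // HOT
    }
}
```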

> DFS getStoragePolicy implementation using single RPC call
> -
>
> Key: HDFS-8815
> URL: https://issues.apache.org/jira/browse/HDFS-8815
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8815-001.patch, HDFS-8815-002.patch
>
>
> HADOOP-12161 introduced a new {{FileSystem#getStoragePolicy}} call. The DFS 
> implementation of the call requires two RPC calls, the first to fetch the 
> storage policy ID and the second to fetch the policy suite to map the policy 
> ID to a {{BlockStoragePolicySpi}}.
> Fix the implementation to require a single RPC call.





[jira] [Commented] (HDFS-8811) Move BlockStoragePolicy name's constants from HdfsServerConstants.java to HdfsConstants.java

2015-07-28 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645484#comment-14645484
 ] 

Yi Liu commented on HDFS-8811:
--

Thanks Vinay for working on this.

I agree that all the policy-name constants would be better kept in one place.  
Currently {{MEMORY_STORAGE_POLICY_NAME}} is used on the client side, so we 
should put them in {{HdfsConstants.java}}.

+1 for the patch.


> Move BlockStoragePolicy name's constants from HdfsServerConstants.java to 
> HdfsConstants.java
> 
>
> Key: HDFS-8811
> URL: https://issues.apache.org/jira/browse/HDFS-8811
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8811-01.patch
>
>
> Currently {{HdfsServerConstants.java}} has the following constants:
> {code}  String HOT_STORAGE_POLICY_NAME = "HOT";
>   String WARM_STORAGE_POLICY_NAME = "WARM";
>   String COLD_STORAGE_POLICY_NAME = "COLD";{code}
> and {{HdfsConstants.java}} has the following:
> {code}  public static final String MEMORY_STORAGE_POLICY_NAME = 
> "LAZY_PERSIST";
>   public static final String ALLSSD_STORAGE_POLICY_NAME = "ALL_SSD";
>   public static final String ONESSD_STORAGE_POLICY_NAME = "ONE_SSD";{code}
> It would be better to move all of these to one place, HdfsConstants.java, 
> which client APIs can also access since it lives in the hdfs-client module.





[jira] [Updated] (HDFS-8830) Support add/remove directories to an existing encryption zone

2015-07-28 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8830:
-
Attachment: HDFS-8830.03.patch

Fix the checkstyle issue.

> Support add/remove directories to an existing encryption zone
> -
>
> Key: HDFS-8830
> URL: https://issues.apache.org/jira/browse/HDFS-8830
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-8830.01.patch, HDFS-8830.02.patch, 
> HDFS-8830.03.patch
>
>
> This is the first step toward better "scratch space" and "soft delete" 
> support. We remove the assumption that HDFS directories and encryption zones 
> are mapped 1:1 and cannot be changed once created.
> The encryption zone creation part is kept as-is from Hadoop 2.4. We 
> generalize the encryption zone and its directories from 1:1 to 1:many, so 
> that other directories such as scratch space can be added to or removed from 
> an encryption zone as needed. Later on, files in these directories can be 
> renamed within the same encryption zone efficiently.





[jira] [Commented] (HDFS-8822) Add SSD storagepolicy tests in TestBlockStoragePolicy#testDefaultPolicies

2015-07-28 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645475#comment-14645475
 ] 

Yi Liu commented on HDFS-8822:
--

+1, Thanks Vinay. 

> Add SSD storagepolicy tests in TestBlockStoragePolicy#testDefaultPolicies
> -
>
> Key: HDFS-8822
> URL: https://issues.apache.org/jira/browse/HDFS-8822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8822-01.patch
>
>
> Add tests for storage policies ALLSSD and ONESSD in 
> {{TestBlockStoragePolicy#testDefaultPolicies(..)}}





[jira] [Commented] (HDFS-8816) Improve visualization for the Datanode tab in the NN UI

2015-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645453#comment-14645453
 ] 

Hadoop QA commented on HDFS-8816:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 12s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 29s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 28s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  67m  8s | Tests failed in hadoop-hdfs. |
| | | 108m  3s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots |
|   | hadoop.fs.TestUrlStreamHandler |
|   | hadoop.hdfs.server.namenode.TestFileLimit |
|   | hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot |
|   | hadoop.hdfs.server.namenode.TestEditLogAutoroll |
|   | hadoop.TestRefreshCallQueue |
|   | hadoop.hdfs.protocolPB.TestPBHelper |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.cli.TestCryptoAdminCLI |
|   | hadoop.hdfs.server.datanode.TestDiskError |
|   | hadoop.fs.viewfs.TestViewFsWithAcls |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.server.namenode.TestHostsFiles |
|   | hadoop.hdfs.server.datanode.TestTransferRbw |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy |
|   | hadoop.fs.contract.hdfs.TestHDFSContractDelete |
|   | hadoop.hdfs.server.namenode.TestFileContextAcl |
|   | hadoop.fs.TestFcHdfsSetUMask |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.server.namenode.TestClusterId |
|   | hadoop.hdfs.server.namenode.TestDeleteRace |
|   | hadoop.hdfs.server.namenode.TestFSDirectory |
|   | hadoop.hdfs.server.namenode.TestLeaseManager |
|   | hadoop.fs.contract.hdfs.TestHDFSContractOpen |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing |
|   | hadoop.hdfs.server.datanode.TestStorageReport |
|   | hadoop.hdfs.server.datanode.TestBlockRecovery |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.datanode.TestSimulatedFSDataset |
|   | hadoop.fs.contract.hdfs.TestHDFSContractMkdir |
|   | hadoop.fs.contract.hdfs.TestHDFSContractAppend |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestRbwSpaceReservation |
|   | hadoop.hdfs.server.namenode.ha.TestQuotasWithHA |
|   | hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA |
|   | hadoop.hdfs.server.namenode.TestSecondaryWebUi |
|   | hadoop.hdfs.server.namenode.TestMalformedURLs |
|   | hadoop.hdfs.server.namenode.TestAuditLogger |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | hadoop.hdfs.server.namenode.TestHDFSConcat |
|   | hadoop.hdfs.server.namenode.TestAddBlockRetry |
|   | hadoop.fs.TestSymlinkHdfsFileSystem |
|   | hadoop.fs.viewfs.TestViewFsDefaultValue |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestFSInputChecker |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.server.datanode.TestDataNodeInitStorage |
|   | hadoop.hdfs.server.mover.TestStorageMover |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol |
|   | hadoop.cli.TestAclCLI |
|   | hadoop.hdfs.server.namenode.ha.TestHAMetrics |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap |
|   | hadoop.hdfs.server.namenode.TestFsLimits |
|   | hadoop.hdfs.server.datanode.TestReadOnlySharedStorage |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional |
|   | hadoop.hdfs.server.datanode.TestHSync |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter |
|   | hadoop.hdfs.server

[jira] [Commented] (HDFS-8820) Enable RPC Congestion control by default

2015-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645445#comment-14645445
 ] 

Hadoop QA commented on HDFS-8820:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 45s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 39s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 46s | The applied patch generated  1 
new checkstyle issues (total was 218, now 218). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 23s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 21s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 40s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 160m 47s | Tests failed in hadoop-hdfs. |
| | | 228m 23s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.TestRefreshCallQueue |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747655/HDFS-8820.02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 69b0957 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11858/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11858/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11858/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11858/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11858/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11858/console |


This message was automatically generated.

> Enable RPC Congestion control by default
> 
>
> Key: HDFS-8820
> URL: https://issues.apache.org/jira/browse/HDFS-8820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8820.01.patch, HDFS-8820.02.patch
>
>
> We propose enabling RPC congestion control introduced by HADOOP-10597 by 
> default.
> We enabled it on a couple of large clusters a few weeks ago and it has helped 
> keep the namenodes responsive under load.
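
For reference, the backoff mechanism introduced by HADOOP-10597 is configured per RPC port. A sketch of what enabling it manually looks like (port 8020 and the key name are assumptions here; verify against your Hadoop version's core-default.xml):

```xml
<!-- core-site.xml: enable client backoff on the NameNode's port-8020 call queue -->
<property>
  <name>ipc.8020.backoff.enable</name>
  <value>true</value>
</property>
```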



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645438#comment-14645438
 ] 

Hudson commented on HDFS-8180:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8234 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8234/])
HDFS-8180. AbstractFileSystem Implementation for WebHdfs. Contributed by 
Sathosh G Nayak. (jghoman: rev 0712a8103fec6e9a9ceba335e3c3800b85b2c7ca)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestWebHdfsFileContextMainOperations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestSWebHdfsFileContextMainOperations.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/WebHdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/package.html
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/SWebHdfs.java


> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Fix For: 2.8.0
>
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch, 
> HDFS-8180-4.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8670) Better to exclude decommissioned nodes for namenode NodeUsage JMX

2015-07-28 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8670:
-
Attachment: HDFS-8670.5.patch

Thanks [~vinayrpet] for the review comments.
 I have updated the patch. Sorry for the delay. Please review. 

> Better to exclude decommissioned nodes for namenode NodeUsage JMX
> -
>
> Key: HDFS-8670
> URL: https://issues.apache.org/jira/browse/HDFS-8670
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: J.Andreina
> Attachments: HDFS-8670.1.patch, HDFS-8670.2.patch, HDFS-8670.3.patch, 
> HDFS-8670.4.patch, HDFS-8670.5.patch
>
>
> The namenode NodeUsage JMX has Max, Median, Min and Standard Deviation of 
> DataNodes usage, it currently includes decommissioned nodes for the 
> calculation. However, given balancer doesn't work on decommissioned nodes and 
> sometimes we could have nodes stay in decommissioned states for a long time; 
> it might be better to exclude decommissioned nodes for the metrics 
> calculation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-28 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8180:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed this.  Resolving.  Thanks, Santhosh!

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Fix For: 2.8.0
>
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch, 
> HDFS-8180-4.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-28 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645417#comment-14645417
 ] 

Jakob Homan commented on HDFS-8180:
---

+1

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch, 
> HDFS-8180-4.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8480) Fix performance and timeout issues in HDFS-7929 by using hard-links to preserve old edit logs instead of copying them

2015-07-28 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645399#comment-14645399
 ] 

Ming Ma commented on HDFS-8480:
---

Thanks [~zhz].

The test code simulates an older-version edit by creating an 
EditLogFileOutputStream with the prior layout version. However, FSEditLogOp's 
writeFields implementation doesn't distinguish between versions and always 
writes with the latest features.

For example, if this patch is applied directly on top of the 2.6 release, 
where CURRENT_LAYOUT_VERSION is set to -60, the test will fail: even though 
the EditLogFileOutputStream is created with version -59, OP_ADD's writeFields 
will still add storagePolicyId to the edit. During edit loading, the reader 
will skip storagePolicyId because the stream has version -59, and thus hit a 
checksum error.

To simulate old edits for test purposes, it seems we need FSEditLogOp to 
support writing optional data depending on the requested version. So in this 
case, if the version is set to -59, it shouldn't write storagePolicyId. This 
functionality seems useful only for testing the upgrade scenario.

Thoughts?
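
The version-gated serialization idea can be sketched generically. This is not the actual FSEditLogOp API; the field layout, the -60 cutoff, and all names below are illustrative assumptions, showing only how an optional field could be written conditionally on the requested layout version:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class VersionedOpWriter {
    // Hypothetical cutoff: the storage policy field exists only at layout
    // version <= -60 (layout versions are negative and decrease over time).
    static final int STORAGE_POLICY_VERSION = -60;

    // Serialize an op, writing the optional field only when the target
    // layout version supports it.
    public static byte[] writeFields(int layoutVersion, long blockId,
                                     byte storagePolicyId) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeLong(blockId);
        if (layoutVersion <= STORAGE_POLICY_VERSION) {
            out.writeByte(storagePolicyId);
        }
        out.flush();
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Writing for the old version (-59) omits the policy byte...
        int oldLen = writeFields(-59, 42L, (byte) 7).length;
        // ...while the current version (-60) includes it.
        int newLen = writeFields(-60, 42L, (byte) 7).length;
        System.out.println(oldLen + " vs " + newLen); // 8 vs 9
    }
}
```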

> Fix performance and timeout issues in HDFS-7929 by using hard-links to 
> preserve old edit logs instead of copying them
> -
>
> Key: HDFS-8480
> URL: https://issues.apache.org/jira/browse/HDFS-8480
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Critical
> Fix For: 2.7.1
>
> Attachments: HDFS-8480.00.patch, HDFS-8480.01.patch, 
> HDFS-8480.02.patch, HDFS-8480.03.patch
>
>
> HDFS-7929 copies existing edit logs to the storage directory of the upgraded 
> {{NameNode}}. This slows down the upgrade process. This JIRA aims to use 
> hard-linking instead of per-op copying to achieve the same goal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8834) TestReplication#testReplicationWhenBlockCorruption is not valid after HDFS-6482

2015-07-28 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645391#comment-14645391
 ] 

Yi Liu commented on HDFS-8834:
--

+1, pending Jenkins, thanks Lei.

> TestReplication#testReplicationWhenBlockCorruption is not valid after 
> HDFS-6482
> ---
>
> Key: HDFS-8834
> URL: https://issues.apache.org/jira/browse/HDFS-8834
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: testing
> Attachments: HDFS-8834.00.patch
>
>
> {{TestReplication#testReplicationWhenBlockCorruption}} assumes the DN has a 
> one-level directory layout:
> {code}
> File[] listFiles = participatedNodeDirs.listFiles();
> {code}
> However, HDFS-6482 changed the layout of block directories to use two levels 
> of directories, which makes the following code invalid (it never executes):
> {code}
> for (File file : listFiles) {
>   if (file.getName().startsWith(Block.BLOCK_FILE_PREFIX)
>       && !file.getName().endsWith("meta")) {
>     blockFile = file.getName();
>     for (File file1 : nonParticipatedNodeDirs) {
>       file1.mkdirs();
>       new File(file1, blockFile).createNewFile();
>       new File(file1, blockFile + "_1000.meta").createNewFile();
>     }
>     break;
>   }
> }
> {code}
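
The layout change can be illustrated with a small self-contained sketch (the paths and helper below are illustrative, not the actual DataNode code): a one-level listFiles() stops seeing block files once they live under nested subdir* directories, whereas a recursive walk still finds them.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.List;

public class BlockLayoutScan {
    // Recursively collect files named blk_* (excluding .meta), at any depth.
    public static List<File> findBlockFiles(File dir) {
        List<File> result = new ArrayList<>();
        File[] children = dir.listFiles();
        if (children == null) {
            return result;
        }
        for (File f : children) {
            if (f.isDirectory()) {
                result.addAll(findBlockFiles(f));
            } else if (f.getName().startsWith("blk_")
                    && !f.getName().endsWith(".meta")) {
                result.add(f);
            }
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        // Simulate the post-HDFS-6482 layout: finalized/subdir0/subdir1/blk_123
        File root = Files.createTempDirectory("dn").toFile();
        File nested = new File(root, "finalized/subdir0/subdir1");
        nested.mkdirs();
        new File(nested, "blk_123").createNewFile();
        new File(nested, "blk_123_1000.meta").createNewFile();

        // A one-level scan (what the old test did) sees only directories.
        int oneLevelHits = 0;
        for (File f : root.listFiles()) {
            if (f.getName().startsWith("blk_")) {
                oneLevelHits++;
            }
        }
        System.out.println("one-level hits: " + oneLevelHits);                // 0
        System.out.println("recursive hits: " + findBlockFiles(root).size()); // 1
    }
}
```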



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8820) Enable RPC Congestion control by default

2015-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645388#comment-14645388
 ] 

Hadoop QA commented on HDFS-8820:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 51s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 36s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 34s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 44s | The applied patch generated  1 
new checkstyle issues (total was 218, now 218). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 23s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 20s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 17s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 160m 10s | Tests failed in hadoop-hdfs. |
| | | 227m 22s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
|   | hadoop.TestRefreshCallQueue |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747655/HDFS-8820.02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 69b0957 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11857/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11857/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11857/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11857/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11857/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11857/console |


This message was automatically generated.

> Enable RPC Congestion control by default
> 
>
> Key: HDFS-8820
> URL: https://issues.apache.org/jira/browse/HDFS-8820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8820.01.patch, HDFS-8820.02.patch
>
>
> We propose enabling RPC congestion control introduced by HADOOP-10597 by 
> default.
> We enabled it on a couple of large clusters a few weeks ago and it has helped 
> keep the namenodes responsive under load.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8287) DFSStripedOutputStream.writeChunk should not wait for writing parity

2015-07-28 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8287:
-
Attachment: HDFS-8287-HDFS-7285.00.patch

> DFSStripedOutputStream.writeChunk should not wait for writing parity 
> -
>
> Key: HDFS-8287
> URL: https://issues.apache.org/jira/browse/HDFS-8287
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Kai Sasaki
> Attachments: HDFS-8287-HDFS-7285.00.patch
>
>
> When a striping cell is full, writeChunk computes and generates parity 
> packets. It sequentially calls waitAndQueuePacket, so the user client cannot 
> continue writing data until parity writing finishes.
> We should instead allow the user client to continue writing, rather than 
> blocking it while parity is written.
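
The requested non-blocking behavior can be sketched generically (this is not the DFSStripedOutputStream implementation; the executor hand-off and XOR stand-in below are assumptions for illustration): parity generation is submitted to a background thread so the writer can queue the next data cell immediately.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncParityWriter {
    private final ExecutorService parityPool = Executors.newSingleThreadExecutor();

    // Hand parity computation off to a background thread; the caller can
    // keep writing data cells without waiting for the parity packets.
    public Future<Integer> writeCell(int[] cell) {
        return parityPool.submit(() -> {
            int parity = 0;
            for (int b : cell) {
                parity ^= b;   // stand-in for the real erasure-coding step
            }
            return parity;
        });
    }

    public void shutdown() {
        parityPool.shutdown();
    }

    public static void main(String[] args) throws Exception {
        AsyncParityWriter w = new AsyncParityWriter();
        Future<Integer> parity = w.writeCell(new int[] {1, 2, 4});
        // The writer thread is free here; it only blocks if it explicitly
        // asks for the parity result.
        System.out.println("parity = " + parity.get()); // parity = 7
        w.shutdown();
    }
}
```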



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8287) DFSStripedOutputStream.writeChunk should not wait for writing parity

2015-07-28 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8287:
-
Status: Patch Available  (was: Open)

> DFSStripedOutputStream.writeChunk should not wait for writing parity 
> -
>
> Key: HDFS-8287
> URL: https://issues.apache.org/jira/browse/HDFS-8287
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Kai Sasaki
> Attachments: HDFS-8287-HDFS-7285.00.patch
>
>
> When a striping cell is full, writeChunk computes and generates parity 
> packets. It sequentially calls waitAndQueuePacket, so the user client cannot 
> continue writing data until parity writing finishes.
> We should instead allow the user client to continue writing, rather than 
> blocking it while parity is written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8830) Support add/remove directories to an existing encryption zone

2015-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645337#comment-14645337
 ] 

Hadoop QA commented on HDFS-8830:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  21m 39s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 48s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 52s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 30s | The applied patch generated  1 
new checkstyle issues (total was 6, now 7). |
| {color:red}-1{color} | checkstyle |   3m 32s | The applied patch generated  1 
new checkstyle issues (total was 4, now 5). |
| {color:red}-1{color} | whitespace |   0m 25s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   6m 43s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 53s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 161m 53s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 30s | Tests passed in 
hadoop-hdfs-client. |
| | | 237m 50s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.TestDistributedFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747633/HDFS-8830.02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 69b0957 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11856/artifact/patchprocess/diffcheckstylehadoop-common.txt
 
https://builds.apache.org/job/PreCommit-HDFS-Build/11856/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11856/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11856/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11856/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11856/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11856/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11856/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11856/console |


This message was automatically generated.

> Support add/remove directories to an existing encryption zone
> -
>
> Key: HDFS-8830
> URL: https://issues.apache.org/jira/browse/HDFS-8830
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-8830.01.patch, HDFS-8830.02.patch
>
>
> This is the first step toward better "Scratch space" and "Soft Delete" 
> support. We remove the assumption that an HDFS directory and an encryption 
> zone are mapped 1:1 and cannot be changed once created.
> The encryption zone creation part is kept as-is from Hadoop 2.4. We 
> generalize the relationship between an encryption zone and its directories 
> from 1:1 to 1:many. This way, other directories such as scratch space can be 
> added to or removed from an encryption zone as needed. Later on, files in 
> these directories can be renamed within the same encryption zone efficiently. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8499) Refactor BlockInfo class hierarchy with static helper class

2015-07-28 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645332#comment-14645332
 ] 

Zhe Zhang commented on HDFS-8499:
-

[~szetszwo] Sure, thanks for offering to do that. The only reason I proposed 
doing it together with HDFS-8835 was to save the reverting effort.

> Refactor BlockInfo class hierarchy with static helper class
> ---
>
> Key: HDFS-8499
> URL: https://issues.apache.org/jira/browse/HDFS-8499
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8499.00.patch, HDFS-8499.01.patch, 
> HDFS-8499.02.patch, HDFS-8499.03.patch, HDFS-8499.04.patch, 
> HDFS-8499.05.patch, HDFS-8499.06.patch, HDFS-8499.07.patch, 
> HDFS-8499.UCFeature.patch, HDFS-bistriped.patch
>
>
> In HDFS-7285 branch, the {{BlockInfoUnderConstruction}} interface provides a 
> common abstraction for striped and contiguous UC blocks. This JIRA aims to 
> merge it to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8816) Improve visualization for the Datanode tab in the NN UI

2015-07-28 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8816:
-
Attachment: HDFS-8816.004.patch

> Improve visualization for the Datanode tab in the NN UI
> ---
>
> Key: HDFS-8816
> URL: https://issues.apache.org/jira/browse/HDFS-8816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8816.000.patch, HDFS-8816.001.patch, 
> HDFS-8816.002.patch, HDFS-8816.003.patch, HDFS-8816.004.patch, HDFS-8816.png, 
> HDFS-8816.png, Screen Shot 2015-07-23 at 10.24.24 AM.png
>
>
> The information of the datanode tab in the NN UI is clogged. This jira 
> proposes to improve the visualization of the datanode tab in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-8816) Improve visualization for the Datanode tab in the NN UI

2015-07-28 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645300#comment-14645300
 ] 

Haohui Mai edited comment on HDFS-8816 at 7/29/15 12:55 AM:


bq. Could you please also address?

I tested with Chrome 44, Firefox 36 and Safari and I can't reproduce. Which 
browser are you using?

bq. For last contact, if its greater than 3 seconds ago, only then would I be 
interested in the timestamp.

Let's just leave it for now. I can see arguments from both sides; maybe we can 
address it in a separate jira?



was (Author: wheat9):
bq. Could you please also address?

I tested with Chrome 44, Firefox 36 and Safari and I can't reproduce. Which 
browser are you using?

bq. For last contact, if its greater than 3 seconds ago, only then would I be 
interested in the timestamp.

Let's just leave it for now. We can address it in a separate jira.


> Improve visualization for the Datanode tab in the NN UI
> ---
>
> Key: HDFS-8816
> URL: https://issues.apache.org/jira/browse/HDFS-8816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8816.000.patch, HDFS-8816.001.patch, 
> HDFS-8816.002.patch, HDFS-8816.003.patch, HDFS-8816.png, HDFS-8816.png, 
> Screen Shot 2015-07-23 at 10.24.24 AM.png
>
>
> The information of the datanode tab in the NN UI is clogged. This jira 
> proposes to improve the visualization of the datanode tab in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8829) DataNode sets SO_RCVBUF explicitly is disabling tcp auto-tuning

2015-07-28 Thread He Tianyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645305#comment-14645305
 ] 

He Tianyi commented on HDFS-8829:
-

This affects pipeline throughput; I've observed a 30% performance gain with 
TCP auto-tuning enabled (since a 128 KB window size is not always optimal).

It could be particularly useful with SSD drives, where disk throughput may 
exceed network throughput.


> DataNode sets SO_RCVBUF explicitly is disabling tcp auto-tuning
> ---
>
> Key: HDFS-8829
> URL: https://issues.apache.org/jira/browse/HDFS-8829
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.3.0, 2.6.0
>Reporter: He Tianyi
>Assignee: kanaka kumar avvaru
>
> {code:java}
>   private void initDataXceiver(Configuration conf) throws IOException {
> // find free port or use privileged port provided
> TcpPeerServer tcpPeerServer;
> if (secureResources != null) {
>   tcpPeerServer = new TcpPeerServer(secureResources);
> } else {
>   tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
>   DataNode.getStreamingAddr(conf));
> }
> 
> tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
> {code}
> The last line sets SO_RCVBUF explicitly, thus disabling TCP auto-tuning on 
> some systems.
> Shall we make this behavior configurable?
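A minimal sketch of the configurable behavior suggested above, using plain 
{{java.net.ServerSocket}} (the {{<= 0}} sentinel is hypothetical, not an 
existing HDFS setting): when no explicit size is configured, 
{{setReceiveBufferSize}} is simply never called, leaving kernel auto-tuning in 
effect.

```java
import java.net.ServerSocket;

public class ReceiveBufferConfig {
    // Mirrors HdfsConstants.DEFAULT_DATA_SOCKET_SIZE (128 KB).
    static final int DEFAULT_DATA_SOCKET_SIZE = 128 * 1024;

    // configuredRcvBuf <= 0 is a hypothetical sentinel meaning
    // "do not set SO_RCVBUF; keep the kernel's TCP auto-tuning".
    static ServerSocket newServer(int configuredRcvBuf) throws Exception {
        ServerSocket server = new ServerSocket();
        if (configuredRcvBuf > 0) {
            // Setting SO_RCVBUF explicitly pins the receive window
            // and disables auto-tuning on Linux.
            server.setReceiveBufferSize(configuredRcvBuf);
        }
        return server;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(newServer(DEFAULT_DATA_SOCKET_SIZE).getReceiveBufferSize());
        System.out.println(newServer(0).getReceiveBufferSize()); // OS default, auto-tuned
    }
}
```

The same shape would apply to {{TcpPeerServer}}: read the buffer size from the 
configuration and skip the {{setReceiveBufferSize}} call when it is unset.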



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8816) Improve visualization for the Datanode tab in the NN UI

2015-07-28 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645300#comment-14645300
 ] 

Haohui Mai commented on HDFS-8816:
--

bq. Could you please also address?

I tested with Chrome 44, Firefox 36 and Safari and I can't reproduce. Which 
browser are you using?

bq. For last contact, if its greater than 3 seconds ago, only then would I be 
interested in the timestamp.

Let's just leave it for now. We can address it in a separate jira.


> Improve visualization for the Datanode tab in the NN UI
> ---
>
> Key: HDFS-8816
> URL: https://issues.apache.org/jira/browse/HDFS-8816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8816.000.patch, HDFS-8816.001.patch, 
> HDFS-8816.002.patch, HDFS-8816.003.patch, HDFS-8816.png, HDFS-8816.png, 
> Screen Shot 2015-07-23 at 10.24.24 AM.png
>
>
> The information of the datanode tab in the NN UI is clogged. This jira 
> proposes to improve the visualization of the datanode tab in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8499) Refactor BlockInfo class hierarchy with static helper class

2015-07-28 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645296#comment-14645296
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8499:
---

Thanks for trying.  Please revert the committed patch directly rather than 
committing another patch to revert it.  I think we also need to revert some 
related JIRAs committed after HDFS-8499.  If you don't mind, I can give it a try.

> Refactor BlockInfo class hierarchy with static helper class
> ---
>
> Key: HDFS-8499
> URL: https://issues.apache.org/jira/browse/HDFS-8499
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8499.00.patch, HDFS-8499.01.patch, 
> HDFS-8499.02.patch, HDFS-8499.03.patch, HDFS-8499.04.patch, 
> HDFS-8499.05.patch, HDFS-8499.06.patch, HDFS-8499.07.patch, 
> HDFS-8499.UCFeature.patch, HDFS-bistriped.patch
>
>
> In HDFS-7285 branch, the {{BlockInfoUnderConstruction}} interface provides a 
> common abstraction for striped and contiguous UC blocks. This JIRA aims to 
> merge it to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6994) libhdfs3 - A native C/C++ HDFS client

2015-07-28 Thread li zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645292#comment-14645292
 ] 

li zhang commented on HDFS-6994:


Hi Zhanwei Wang,
I see that Hadoop 2.4.1 provides centralized cache management in HDFS and a 
zero-copy read interface. I'm wondering whether libhdfs3 supports the new 
interfaces for reading directly from the preloaded cache and getting cache 
locality information from the NN. If not, is there any plan to support this in 
the future? Thanks in advance.

> libhdfs3 - A native C/C++ HDFS client
> -
>
> Key: HDFS-6994
> URL: https://issues.apache.org/jira/browse/HDFS-6994
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs-client
>Reporter: Zhanwei Wang
>Assignee: Zhanwei Wang
> Attachments: HDFS-6994-rpc-8.patch, HDFS-6994.patch
>
>
> Hi All
> I just got the permission to open source libhdfs3, which is a native C/C++ 
> HDFS client based on the Hadoop RPC protocol and the HDFS Data Transfer 
> Protocol.
> libhdfs3 provides the libhdfs-style C interface as well as a C++ interface. 
> It supports both Hadoop RPC versions 8 and 9, NameNode HA, and Kerberos 
> authentication.
> libhdfs3 is currently used by Pivotal's HAWQ.
> I'd like to integrate libhdfs3 into the HDFS source code to benefit others.
> You can find the libhdfs3 code on GitHub:
> https://github.com/PivotalRD/libhdfs3
> http://pivotalrd.github.io/libhdfs3/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8499) Refactor BlockInfo class hierarchy with static helper class

2015-07-28 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HDFS-8499.
-
  Resolution: Fixed
Hadoop Flags: Reviewed

[~andrew.wang] tried reverting the patch and found some conflicts. Instead of 
reverting, we can take this chance to set up the {{BlockInfoUnderConstruction}} 
interface, which should be done in trunk anyway. I created HDFS-8835 to make 
the necessary changes. Resolving this JIRA again.

> Refactor BlockInfo class hierarchy with static helper class
> ---
>
> Key: HDFS-8499
> URL: https://issues.apache.org/jira/browse/HDFS-8499
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8499.00.patch, HDFS-8499.01.patch, 
> HDFS-8499.02.patch, HDFS-8499.03.patch, HDFS-8499.04.patch, 
> HDFS-8499.05.patch, HDFS-8499.06.patch, HDFS-8499.07.patch, 
> HDFS-8499.UCFeature.patch, HDFS-bistriped.patch
>
>
> In HDFS-7285 branch, the {{BlockInfoUnderConstruction}} interface provides a 
> common abstraction for striped and contiguous UC blocks. This JIRA aims to 
> merge it to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8835) Convert BlockInfoUnderConstruction as an interface

2015-07-28 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-8835:
---

 Summary: Convert BlockInfoUnderConstruction as an interface
 Key: HDFS-8835
 URL: https://issues.apache.org/jira/browse/HDFS-8835
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.1
Reporter: Zhe Zhang
Assignee: Zhe Zhang


Per the discussion under HDFS-8499, this JIRA aims to convert 
{{BlockInfoUnderConstruction}} into an interface and make 
{{BlockInfoContiguousUnderConstruction}} its implementation. The HDFS-7285 
branch will add {{BlockInfoStripedUnderConstruction}} as another implementation.
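A minimal sketch of the proposed shape (the member shown is hypothetical, for 
illustration only; the real classes carry recovery and replica state):

```java
public class UcBlockSketch {
    // Interface abstracting under-construction state shared by
    // contiguous and striped blocks.
    interface BlockInfoUnderConstruction {
        long getBlockRecoveryId(); // hypothetical member, for illustration
    }

    // Contiguous implementation; the HDFS-7285 branch would add a
    // striped counterpart implementing the same interface.
    static class BlockInfoContiguousUnderConstruction
            implements BlockInfoUnderConstruction {
        private final long recoveryId;

        BlockInfoContiguousUnderConstruction(long recoveryId) {
            this.recoveryId = recoveryId;
        }

        public long getBlockRecoveryId() {
            return recoveryId;
        }
    }
}
```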



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8202) Improve end to end striping file test to add erasure recovering test

2015-07-28 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645270#comment-14645270
 ] 

Zhe Zhang commented on HDFS-8202:
-

Thanks Xinwei for updating the patch. The patch LGTM except for the following 
minor issues:

# {{TestReadStripedFileWithDecoding}} has an unused import as well as a 
wildcard {{import org.apache.hadoop.hdfs.protocol.*}}.
# We should probably convert the {{Assert.fail}} calls to {{assertTrue}}:
{code}
int recoverBlkNum = dataBlkDelNum + parityBlkDelNum;
if (dataBlkDelNum < 0 || parityBlkDelNum < 0) {
  Assert.fail("dataBlkDelNum and parityBlkDelNum should be positive");
}
if (recoverBlkNum > parityBlocks) {
  Assert.fail("The sum of " +
  "dataBlkDelNum and parityBlkDelNum should be between 1 ~ "
  + parityBlocks);
}
{code}
# We can add some randomness to the following code, maybe as a follow-on:
{code}
for (int i = 0; i < indices.length; i++) {
  if (j < dataBlkDelNum) {
if (indices[i] < dataBlocks) {
  delDataBlkIndices[j++] = i;
}
  }
  if (k < parityBlkDelNum) {
if (indices[i] >= dataBlocks) {
  delParityBlkIndices[k++] = i;
}
  }
}
{code}
# Calling {{TestDFSStripedOutputStreamWithFailure#killDatanode}} from its peer 
class ({{TestWriteStripedFileWithFailure}}) doesn't look very neat. As a 
follow-on, we can move it to a utility class.
# {{TestWriteStripedFileWithFailure}} actually fails; could you debug the 
issue? Thanks.
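One possible shape for the randomness suggested in item 3: shuffle the 
candidate positions before picking which blocks to delete, so each run 
exercises a different deletion pattern (a sketch with hypothetical names, not 
the patch code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RandomDeletionPicker {
    // Randomly choose which data-block and parity-block positions to delete,
    // instead of always taking the first positions that match.
    static int[][] pick(int[] indices, int dataBlocks,
                        int dataBlkDelNum, int parityBlkDelNum) {
        List<Integer> dataPos = new ArrayList<>();
        List<Integer> parityPos = new ArrayList<>();
        for (int i = 0; i < indices.length; i++) {
            // Positions holding data blocks vs. parity blocks.
            (indices[i] < dataBlocks ? dataPos : parityPos).add(i);
        }
        // Shuffling gives a different deletion pattern on every run.
        Collections.shuffle(dataPos);
        Collections.shuffle(parityPos);
        int[] delData = new int[dataBlkDelNum];
        int[] delParity = new int[parityBlkDelNum];
        for (int j = 0; j < dataBlkDelNum; j++) {
            delData[j] = dataPos.get(j);
        }
        for (int k = 0; k < parityBlkDelNum; k++) {
            delParity[k] = parityPos.get(k);
        }
        return new int[][] { delData, delParity };
    }
}
```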

> Improve end to end striping file test to add erasure recovering test
> -
>
> Key: HDFS-8202
> URL: https://issues.apache.org/jira/browse/HDFS-8202
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Xinwei Qin 
> Attachments: HDFS-8202-HDFS-7285.003.patch, 
> HDFS-8202-HDFS-7285.004.patch, HDFS-8202-HDFS-7285.005.patch, 
> HDFS-8202.001.patch, HDFS-8202.002.patch
>
>
> This is to follow on HDFS-8201 and add an erasure-recovery test to the end 
> to end striping file test:
> * After writing certain blocks to the test file, delete some block files;
> * Read the file content back and compare, to see whether there is any 
> recovery issue, i.e., to verify that erasure recovery works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8834) TestReplication#testReplicationWhenBlockCorruption is not valid after HDFS-6482

2015-07-28 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8834:

Attachment: HDFS-8834.00.patch

Use {{Files#walkFileTree}} to collect block files regardless of directory 
depth, and add asserts to make sure the test's checks are actually exercised.
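A depth-independent collector along those lines might look like this (a sketch, 
not the patch itself; the {{blk_}} prefix mirrors {{Block.BLOCK_FILE_PREFIX}}):

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.ArrayList;
import java.util.List;

public class BlockFileCollector {
    // Mirrors Block.BLOCK_FILE_PREFIX.
    static final String BLOCK_FILE_PREFIX = "blk_";

    // Recursively collect block files under a DN data directory,
    // working for both one-level and two-level (HDFS-6482) layouts.
    static List<Path> collectBlockFiles(Path dnDir) throws IOException {
        List<Path> blocks = new ArrayList<>();
        Files.walkFileTree(dnDir, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path f, BasicFileAttributes attrs) {
                String name = f.getFileName().toString();
                // Keep block data files; skip their .meta companions.
                if (name.startsWith(BLOCK_FILE_PREFIX) && !name.endsWith("meta")) {
                    blocks.add(f);
                }
                return FileVisitResult.CONTINUE;
            }
        });
        return blocks;
    }
}
```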

> TestReplication#testReplicationWhenBlockCorruption is not valid after 
> HDFS-6482
> ---
>
> Key: HDFS-8834
> URL: https://issues.apache.org/jira/browse/HDFS-8834
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: testing
> Attachments: HDFS-8834.00.patch
>
>
> {{TestReplication#testReplicationWhenBlockCorruption}} assumes the DN has a 
> single level of block directories:
> {code}
> File[] listFiles = participatedNodeDirs.listFiles();
> {code}
> However, HDFS-6482 changed the block directory layout to use two levels of 
> directories, which makes the following code invalid (it never runs):
> {code}
> for (File file : listFiles) {
>   if (file.getName().startsWith(Block.BLOCK_FILE_PREFIX)
>   && !file.getName().endsWith("meta")) {
>   blockFile = file.getName();
>   for (File file1 : nonParticipatedNodeDirs) {
> file1.mkdirs();
> new File(file1, blockFile).createNewFile();
> new File(file1, blockFile + "_1000.meta").createNewFile();
>   }
> break;
> }
>  }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8834) TestReplication#testReplicationWhenBlockCorruption is not valid after HDFS-6482

2015-07-28 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8834:

Status: Patch Available  (was: Open)

> TestReplication#testReplicationWhenBlockCorruption is not valid after 
> HDFS-6482
> ---
>
> Key: HDFS-8834
> URL: https://issues.apache.org/jira/browse/HDFS-8834
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: testing
> Attachments: HDFS-8834.00.patch
>
>
> {{TestReplication#testReplicationWhenBlockCorruption}} assumes the DN has a 
> single level of block directories:
> {code}
> File[] listFiles = participatedNodeDirs.listFiles();
> {code}
> However, HDFS-6482 changed the block directory layout to use two levels of 
> directories, which makes the following code invalid (it never runs):
> {code}
> for (File file : listFiles) {
>   if (file.getName().startsWith(Block.BLOCK_FILE_PREFIX)
>   && !file.getName().endsWith("meta")) {
>   blockFile = file.getName();
>   for (File file1 : nonParticipatedNodeDirs) {
> file1.mkdirs();
> new File(file1, blockFile).createNewFile();
> new File(file1, blockFile + "_1000.meta").createNewFile();
>   }
> break;
> }
>  }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8834) TestReplication#testReplicationWhenBlockCorruption is not valid after HDFS-6482

2015-07-28 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8834:

Description: 
{{TestReplication#testReplicationWhenBlockCorruption}} assumes the DN has a 
single level of block directories:
{code}
File[] listFiles = participatedNodeDirs.listFiles();
{code}

However, HDFS-6482 changed the block directory layout to use two levels of 
directories, which makes the following code invalid (it never runs):

{code}
for (File file : listFiles) {
  if (file.getName().startsWith(Block.BLOCK_FILE_PREFIX)
  && !file.getName().endsWith("meta")) {
  blockFile = file.getName();
  for (File file1 : nonParticipatedNodeDirs) {
file1.mkdirs();
new File(file1, blockFile).createNewFile();
new File(file1, blockFile + "_1000.meta").createNewFile();
  }
break;
}
 }
{code}

  was:
{{TestReplication#testReplicationWhenBlockCorruption}} assumes the DN has a 
single level of block directories:
{code}
File[] listFiles = participatedNodeDirs.listFiles();
{code}

However, HDFS-6482 changed the block directory layout to use two levels of 
directories, which makes the following code invalid (it never runs):

{code}
for (File file : listFiles) {
if (file.getName().startsWith(Block.BLOCK_FILE_PREFIX)
&& !file.getName().endsWith("meta")) {
  blockFile = file.getName();
  for (File file1 : nonParticipatedNodeDirs) {
file1.mkdirs();
new File(file1, blockFile).createNewFile();
new File(file1, blockFile + "_1000.meta").createNewFile();
  }
  break;
}
  }
{code}


> TestReplication#testReplicationWhenBlockCorruption is not valid after 
> HDFS-6482
> ---
>
> Key: HDFS-8834
> URL: https://issues.apache.org/jira/browse/HDFS-8834
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: testing
>
> {{TestReplication#testReplicationWhenBlockCorruption}} assumes the DN has a 
> single level of block directories:
> {code}
> File[] listFiles = participatedNodeDirs.listFiles();
> {code}
> However, HDFS-6482 changed the block directory layout to use two levels of 
> directories, which makes the following code invalid (it never runs):
> {code}
> for (File file : listFiles) {
>   if (file.getName().startsWith(Block.BLOCK_FILE_PREFIX)
>   && !file.getName().endsWith("meta")) {
>   blockFile = file.getName();
>   for (File file1 : nonParticipatedNodeDirs) {
> file1.mkdirs();
> new File(file1, blockFile).createNewFile();
> new File(file1, blockFile + "_1000.meta").createNewFile();
>   }
> break;
> }
>  }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8834) TestReplication#testReplicationWhenBlockCorruption is not valid after HDFS-6482

2015-07-28 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-8834:
---

 Summary: TestReplication#testReplicationWhenBlockCorruption is not 
valid after HDFS-6482
 Key: HDFS-8834
 URL: https://issues.apache.org/jira/browse/HDFS-8834
 Project: Hadoop HDFS
  Issue Type: Test
  Components: datanode
Affects Versions: 2.7.1
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor


{{TestReplication#testReplicationWhenBlockCorruption}} assumes the DN has a 
single level of block directories:
{code}
File[] listFiles = participatedNodeDirs.listFiles();
{code}

However, HDFS-6482 changed the block directory layout to use two levels of 
directories, which makes the following code invalid (it never runs):

{code}
for (File file : listFiles) {
if (file.getName().startsWith(Block.BLOCK_FILE_PREFIX)
&& !file.getName().endsWith("meta")) {
  blockFile = file.getName();
  for (File file1 : nonParticipatedNodeDirs) {
file1.mkdirs();
new File(file1, blockFile).createNewFile();
new File(file1, blockFile + "_1000.meta").createNewFile();
  }
  break;
}
  }
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3743) QJM: improve formatting behavior for JNs

2015-07-28 Thread Jian Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645247#comment-14645247
 ] 

Jian Fang commented on HDFS-3743:
-

I was working on other things and have now come back to this JIRA.

In my use case, I care more about a replacement JN when an EC2 instance running 
a JN is gone. I looked at the format() API; it seems the information required 
to format a JN is NamespaceInfo, and that information could be obtained from a 
running name node through a separate command line, since the directory itself 
is locked by the name node. Also, the list of IPCLoggerChannels in QJM needs to 
be updated if we don't restart the name node. This makes me think of using the 
HADOOP-7001 support for QJM to call the format() API when it becomes aware of 
new JNs introduced in the Hadoop configuration. The running QJM has the 
NamespaceInfo object in memory, and it could update the list of 
IPCLoggerChannels as well once the new JNs are formatted successfully.

Does this idea make sense?

Thanks.

> QJM: improve formatting behavior for JNs
> 
>
> Key: HDFS-3743
> URL: https://issues.apache.org/jira/browse/HDFS-3743
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: QuorumJournalManager (HDFS-3077)
>Reporter: Todd Lipcon
>
> Currently, the JournalNodes automatically format themselves when a new writer 
> takes over, if they don't have any data for that namespace. However, this has 
> a few problems:
> 1) if the administrator accidentally points a new NN at the wrong quorum (eg 
> corresponding to another cluster), it will auto-format a directory on those 
> nodes. This doesn't cause any data loss, but would be better to bail out with 
> an error indicating that they need to be formatted.
> 2) if a journal node crashes and needs to be reformatted, it should be able 
> to re-join the cluster and start storing new segments without having to fail 
> over to a new NN.
> 3) if 2/3 JNs get accidentally reformatted (eg the mount point becomes 
> undone), and the user starts the NN, it should fail to start, because it may 
> end up missing edits. If it auto-formats in this case, the user might have 
> silent "rollback" of the most recent edits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8816) Improve visualization for the Datanode tab in the NN UI

2015-07-28 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-8816:
---
Attachment: HDFS-8816.png

Thanks for the work, Haohui! The indentation is indeed fixed now. However, I 
see this artifact when I play with my window size.

Could you please also address the following?
bq. Hopefully there'll be some way to sort based on "Admin state" after 
HDFS-6407?

There's a lot of repetition in the CSS. Is that necessary?

I'm sorry if I wasn't clear. For last contact, only if it is greater than 3 
seconds ago would I be interested in the timestamp.

> Improve visualization for the Datanode tab in the NN UI
> ---
>
> Key: HDFS-8816
> URL: https://issues.apache.org/jira/browse/HDFS-8816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8816.000.patch, HDFS-8816.001.patch, 
> HDFS-8816.002.patch, HDFS-8816.003.patch, HDFS-8816.png, HDFS-8816.png, 
> Screen Shot 2015-07-23 at 10.24.24 AM.png
>
>
> The information of the datanode tab in the NN UI is clogged. This jira 
> proposes to improve the visualization of the datanode tab in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7858) Improve HA Namenode Failover detection on the client

2015-07-28 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-7858:
--
Description: 
In an HA deployment, clients are configured with the hostnames of both the 
Active and Standby Namenodes. Clients will first try one of the NNs 
(non-deterministically), and if it is a standby NN, it will tell the client to 
retry the request on the other Namenode.
If the client happens to talk to the Standby first, and the Standby is 
undergoing some GC or is busy, those clients might not get a response soon 
enough to try the other NN.

Proposed approach to solve this:
1) Use hedged RPCs to simultaneously call multiple configured NNs to decide 
which is the active Namenode.
2) Subsequent calls will invoke the previously successful NN.
3) On failover of the currently active NN, the remaining NNs will be invoked 
to decide which is the new active.
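Step 1 can be sketched with a completion service that races a probe against 
every configured NN and returns the first one that reports active (names are 
hypothetical; a real client would issue an HA-state RPC rather than a 
predicate):

```java
import java.util.List;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Predicate;

public class HedgedActiveProbe {
    // Probe every configured NN concurrently; the first one that reports
    // "active" wins. The Predicate stands in for a real HA-state RPC.
    static String findActive(List<String> nnAddrs, Predicate<String> isActive)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(nnAddrs.size());
        try {
            CompletionService<String> cs = new ExecutorCompletionService<>(pool);
            for (String addr : nnAddrs) {
                cs.submit(() -> {
                    if (isActive.test(addr)) {
                        return addr;
                    }
                    throw new IllegalStateException(addr + " is standby");
                });
            }
            for (int i = 0; i < nnAddrs.size(); i++) {
                try {
                    return cs.take().get(); // first successful probe wins
                } catch (ExecutionException standbyOrUnreachable) {
                    // That NN was standby or failed; wait for the next answer.
                }
            }
            throw new IllegalStateException("no active NN found");
        } finally {
            pool.shutdownNow();
        }
    }
}
```

A slow or GC-pausing standby then only delays its own probe, not the client's 
discovery of the active NN.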

  was:
In an HA deployment, clients are configured with the hostnames of both the 
Active and Standby Namenodes. Clients will first try one of the NNs 
(non-deterministically), and if it is a standby NN, it will tell the client to 
retry the request on the other Namenode.
If the client happens to talk to the Standby first, and the Standby is 
undergoing some GC or is busy, those clients might not get a response soon 
enough to try the other NN.

Proposed approaches to solve this:
1) Use hedged RPCs to simultaneously call multiple configured NNs to decide 
which is the active one.
2) Since Zookeeper is already used as the failover controller, the clients 
could talk to ZK and find out which is the active namenode before contacting it.
3) Long-lived DFSClients would have a ZK watch configured which fires when 
there is a failover, so they do not have to query ZK every time to find out 
the active NN.
4) Clients can also cache the last active NN in the user's home directory 
(~/.lastNN) so that short-lived clients can try that Namenode first before 
querying ZK.


> Improve HA Namenode Failover detection on the client
> 
>
> Key: HDFS-7858
> URL: https://issues.apache.org/jira/browse/HDFS-7858
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7858.1.patch, HDFS-7858.10.patch, 
> HDFS-7858.10.patch, HDFS-7858.11.patch, HDFS-7858.12.patch, 
> HDFS-7858.13.patch, HDFS-7858.2.patch, HDFS-7858.2.patch, HDFS-7858.3.patch, 
> HDFS-7858.4.patch, HDFS-7858.5.patch, HDFS-7858.6.patch, HDFS-7858.7.patch, 
> HDFS-7858.8.patch, HDFS-7858.9.patch
>
>
> In an HA deployment, clients are configured with the hostnames of both the 
> Active and Standby Namenodes. Clients will first try one of the NNs 
> (non-deterministically), and if it is a standby NN, it will tell the client 
> to retry the request on the other Namenode.
> If the client happens to talk to the Standby first, and the Standby is 
> undergoing some GC or is busy, those clients might not get a response 
> soon enough to try the other NN.
> Proposed approach to solve this:
> 1) Use hedged RPCs to simultaneously call multiple configured NNs to decide 
> which is the active Namenode.
> 2) Subsequent calls will invoke the previously successful NN.
> 3) On failover of the currently active NN, the remaining NNs will be invoked 
> to decide which is the new active.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7858) Improve HA Namenode Failover detection on the client

2015-07-28 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-7858:
--
Description: 
In an HA deployment, clients are configured with the hostnames of both the 
Active and Standby Namenodes. Clients will first try one of the NNs 
(non-deterministically), and if it is a standby NN, it will tell the client to 
retry the request on the other Namenode.
If the client happens to talk to the Standby first, and the Standby is 
undergoing some GC or is busy, those clients might not get a response soon 
enough to try the other NN.

Proposed approaches to solve this:
1) Use hedged RPCs to simultaneously call multiple configured NNs to decide 
which is the active one.
2) Since Zookeeper is already used as the failover controller, the clients 
could talk to ZK and find out which is the active namenode before contacting it.
3) Long-lived DFSClients would have a ZK watch configured which fires when 
there is a failover, so they do not have to query ZK every time to find out 
the active NN.
4) Clients can also cache the last active NN in the user's home directory 
(~/.lastNN) so that short-lived clients can try that Namenode first before 
querying ZK.

  was:
In an HA deployment, clients are configured with the hostnames of both the 
Active and Standby Namenodes. Clients will first try one of the NNs 
(non-deterministically), and if it is a standby NN, it will tell the client to 
retry the request on the other Namenode.
If the client happens to talk to the Standby first, and the Standby is 
undergoing some GC or is busy, those clients might not get a response soon 
enough to try the other NN.

Proposed approach to solve this:
1) Use hedged RPCs to simultaneously call multiple configured NNs to decide 
which is the active one.
2) Since Zookeeper is already used as the failover controller, the clients 
could talk to ZK and find out which is the active namenode before contacting it.
3) Long-lived DFSClients would have a ZK watch configured which fires when 
there is a failover, so they do not have to query ZK every time to find out 
the active NN.
4) Clients can also cache the last active NN in the user's home directory 
(~/.lastNN) so that short-lived clients can try that Namenode first before 
querying ZK.


> Improve HA Namenode Failover detection on the client
> 
>
> Key: HDFS-7858
> URL: https://issues.apache.org/jira/browse/HDFS-7858
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7858.1.patch, HDFS-7858.10.patch, 
> HDFS-7858.10.patch, HDFS-7858.11.patch, HDFS-7858.12.patch, 
> HDFS-7858.13.patch, HDFS-7858.2.patch, HDFS-7858.2.patch, HDFS-7858.3.patch, 
> HDFS-7858.4.patch, HDFS-7858.5.patch, HDFS-7858.6.patch, HDFS-7858.7.patch, 
> HDFS-7858.8.patch, HDFS-7858.9.patch
>
>
> In an HA deployment, clients are configured with the hostnames of both the 
> Active and Standby Namenodes. Clients will first try one of the NNs 
> (non-deterministically), and if it is a standby NN, it will tell the client 
> to retry the request on the other Namenode.
> If the client happens to talk to the Standby first, and the Standby is 
> undergoing some GC or is busy, those clients might not get a response 
> soon enough to try the other NN.
> Proposed approaches to solve this:
> 1) Use hedged RPCs to simultaneously call multiple configured NNs to decide 
> which is the active one.
> 2) Since Zookeeper is already used as the failover controller, the clients 
> could talk to ZK and find out which is the active namenode before contacting 
> it.
> 3) Long-lived DFSClients would have a ZK watch configured which fires when 
> there is a failover, so they do not have to query ZK every time to find out 
> the active NN.
> 4) Clients can also cache the last active NN in the user's home directory 
> (~/.lastNN) so that short-lived clients can try that Namenode first before 
> querying ZK.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7858) Improve HA Namenode Failover detection on the client

2015-07-28 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-7858:
--
Description: 
In an HA deployment, clients are configured with the hostnames of both the 
Active and Standby Namenodes. Clients will first try one of the NNs 
(non-deterministically), and if it is a standby NN, it will tell the client to 
retry the request on the other Namenode.
If the client happens to talk to the Standby first, and the Standby is 
undergoing some GC or is busy, those clients might not get a response soon 
enough to try the other NN.

Proposed approach to solve this:
1) Use hedged RPCs to simultaneously call multiple configured NNs to decide 
which is the active one.
2) Since Zookeeper is already used as the failover controller, the clients 
could talk to ZK and find out which is the active namenode before contacting it.
3) Long-lived DFSClients would have a ZK watch configured which fires when 
there is a failover, so they do not have to query ZK every time to find out 
the active NN.
4) Clients can also cache the last active NN in the user's home directory 
(~/.lastNN) so that short-lived clients can try that Namenode first before 
querying ZK.

  was:
In an HA deployment, clients are configured with the hostnames of both the 
Active and Standby Namenodes. Clients will first try one of the NNs 
(non-deterministically), and if it is a standby NN, it will tell the client to 
retry the request on the other Namenode.

If the client happens to talk to the Standby first, and the Standby is 
undergoing some GC or is busy, those clients might not get a response soon 
enough to try the other NN.

Proposed approach to solve this:
1) Since Zookeeper is already used as the failover controller, the clients 
could talk to ZK and find out which is the active namenode before contacting it.
2) Long-lived DFSClients would have a ZK watch configured which fires when 
there is a failover, so they do not have to query ZK every time to find out 
the active NN.
3) Clients can also cache the last active NN in the user's home directory 
(~/.lastNN) so that short-lived clients can try that Namenode first before 
querying ZK.


> Improve HA Namenode Failover detection on the client
> 
>
> Key: HDFS-7858
> URL: https://issues.apache.org/jira/browse/HDFS-7858
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7858.1.patch, HDFS-7858.10.patch, 
> HDFS-7858.10.patch, HDFS-7858.11.patch, HDFS-7858.12.patch, 
> HDFS-7858.13.patch, HDFS-7858.2.patch, HDFS-7858.2.patch, HDFS-7858.3.patch, 
> HDFS-7858.4.patch, HDFS-7858.5.patch, HDFS-7858.6.patch, HDFS-7858.7.patch, 
> HDFS-7858.8.patch, HDFS-7858.9.patch
>
>
> In an HA deployment, clients are configured with the hostnames of both the 
> Active and Standby Namenodes. Clients will first try one of the NNs 
> (non-deterministically), and if it is a standby NN, it will tell the client 
> to retry the request on the other Namenode.
> If the client happens to talk to the Standby first, and the Standby is 
> undergoing some GC or is busy, those clients might not get a response 
> soon enough to try the other NN.
> Proposed approach to solve this:
> 1) Use hedged RPCs to simultaneously call multiple configured NNs to decide 
> which is the active one.
> 2) Since Zookeeper is already used as the failover controller, the clients 
> could talk to ZK and find out which is the active namenode before contacting 
> it.
> 3) Long-lived DFSClients would have a ZK watch configured which fires when 
> there is a failover, so they do not have to query ZK every time to find out 
> the active NN.
> 4) Clients can also cache the last active NN in the user's home directory 
> (~/.lastNN) so that short-lived clients can try that Namenode first before 
> querying ZK.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8728) Erasure coding: revisit and simplify BlockInfoStriped and INodeFile

2015-07-28 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HDFS-8728.
-
Resolution: Later

Since HDFS-8499 is reopened, closing this one. We should revisit it after 
finalizing the HDFS-8499 discussion.

> Erasure coding: revisit and simplify BlockInfoStriped and INodeFile
> ---
>
> Key: HDFS-8728
> URL: https://issues.apache.org/jira/browse/HDFS-8728
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8728-HDFS-7285.00.patch, 
> HDFS-8728-HDFS-7285.01.patch, HDFS-8728-HDFS-7285.02.patch, 
> HDFS-8728-HDFS-7285.03.patch, HDFS-8728.00.patch, HDFS-8728.01.patch, 
> HDFS-8728.02.patch, Merge-1-codec.patch, Merge-2-ecZones.patch, 
> Merge-3-blockInfo.patch, Merge-4-blockmanagement.patch, 
> Merge-5-blockPlacementPolicies.patch, Merge-6-locatedStripedBlock.patch, 
> Merge-7-replicationMonitor.patch, Merge-8-inodeFile.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8796) Erasure coding: merge HDFS-8499 to EC branch and refactor BlockInfoStriped

2015-07-28 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8796:

Parent Issue: HDFS-8031  (was: HDFS-7285)

> Erasure coding: merge HDFS-8499 to EC branch and refactor BlockInfoStriped
> --
>
> Key: HDFS-8796
> URL: https://issues.apache.org/jira/browse/HDFS-8796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8796-HDFS-7285.00.patch, 
> HDFS-8796-HDFS-7285.01-part1.patch, HDFS-8796-HDFS-7285.01-part2.patch
>
>
> Separating this change from the HDFS-8728 discussion. Per suggestion from 
> [~szetszwo], clarifying the description of the change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8059) Erasure coding: revisit how to store EC schema and cellSize in NameNode

2015-07-28 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645178#comment-14645178
 ] 

Zhe Zhang commented on HDFS-8059:
-

I created HDFS-8833 to make the change [~andrew.wang] suggested. Thanks for the 
discussions.

> Erasure coding: revisit how to store EC schema and cellSize in NameNode
> ---
>
> Key: HDFS-8059
> URL: https://issues.apache.org/jira/browse/HDFS-8059
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HDFS-8059.001.patch
>
>
> Move {{dataBlockNum}} and {{parityBlockNum}} from BlockInfoStriped to 
> INodeFile, and store them in {{FileWithStripedBlocksFeature}}.
> Ideally these two nums are the same for all striped blocks in a file, and 
> storing them in BlockInfoStriped would waste NN memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8833) Erasure coding: store EC schema and cell size with INodeFile and eliminate EC zones

2015-07-28 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-8833:
---

 Summary: Erasure coding: store EC schema and cell size with 
INodeFile and eliminate EC zones
 Key: HDFS-8833
 URL: https://issues.apache.org/jira/browse/HDFS-8833
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS-7285
Reporter: Zhe Zhang
Assignee: Zhe Zhang


We have [discussed | 
https://issues.apache.org/jira/browse/HDFS-7285?focusedCommentId=14357754&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14357754]
 storing EC schema with files instead of EC zones and recently revisited the 
discussion under HDFS-8059.

As a recap, the _zone_ concept has severe limitations, including renaming and 
nested configuration. Those limitations are justified in encryption for security 
reasons, but it doesn't make sense to carry them over to EC.

This JIRA aims to store EC schema and cell size on {{INodeFile}} level. For 
simplicity, we should first implement it as an xattr and consider memory 
optimizations (such as moving it to file header) as a follow-on. We should also 
disable changing EC policy on a non-empty file / dir in the first phase.
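
As a sketch of the xattr idea, the EC fields could be packed into a small byte array; the field layout below is a made-up example, not the format HDFS ended up committing:

```java
import java.nio.ByteBuffer;

/**
 * Sketch of storing an EC policy as a per-file xattr: pack
 * (dataBlockNum, parityBlockNum, cellSize) into 8 bytes. The layout is
 * an assumption for illustration, not the committed HDFS format.
 */
public class EcXAttrCodec {
    static byte[] encode(short dataBlocks, short parityBlocks, int cellSize) {
        return ByteBuffer.allocate(8)
                .putShort(dataBlocks)
                .putShort(parityBlocks)
                .putInt(cellSize)
                .array();
    }

    /** Returns {dataBlockNum, parityBlockNum, cellSize}. */
    static int[] decode(byte[] raw) {
        ByteBuffer buf = ByteBuffer.wrap(raw);
        return new int[] { buf.getShort(), buf.getShort(), buf.getInt() };
    }
}
```

An 8-byte xattr per file keeps the NN memory cost low, which is the motivation for later moving it into the file header.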



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645143#comment-14645143
 ] 

Hadoop QA commented on HDFS-8180:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m 59s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 42s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 44s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 40s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 25s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 38s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | hdfs tests | 162m 26s | Tests passed in hadoop-hdfs. 
|
| | | 228m  5s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747599/HDFS-8180-4.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f170934 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11855/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11855/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11855/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11855/console |


This message was automatically generated.

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch, 
> HDFS-8180-4.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8820) Enable RPC Congestion control by default

2015-07-28 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8820:

Attachment: HDFS-8820.02.patch

v02 patch updates test cases to add cleanup.

> Enable RPC Congestion control by default
> 
>
> Key: HDFS-8820
> URL: https://issues.apache.org/jira/browse/HDFS-8820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8820.01.patch, HDFS-8820.02.patch
>
>
> We propose enabling RPC congestion control introduced by HADOOP-10597 by 
> default.
> We enabled it on a couple of large clusters a few weeks ago and it has helped 
> keep the namenodes responsive under load.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-8820) Enable RPC Congestion control by default

2015-07-28 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645083#comment-14645083
 ] 

Arpit Agarwal edited comment on HDFS-8820 at 7/28/15 9:36 PM:
--

Add a new configuration key to simplify controlling RPC backoff for NN ports. 
The setting is on by default.

Not enabling it for DN ports since I haven't seen them experience RPC 
congestion.


was (Author: arpitagarwal):
Add a new configuration key to simplify controlling RPC backoff for NN ports. 
The setting is on by default.

> Enable RPC Congestion control by default
> 
>
> Key: HDFS-8820
> URL: https://issues.apache.org/jira/browse/HDFS-8820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8820.01.patch
>
>
> We propose enabling RPC congestion control introduced by HADOOP-10597 by 
> default.
> We enabled it on a couple of large clusters a few weeks ago and it has helped 
> keep the namenodes responsive under load.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8820) Enable RPC Congestion control by default

2015-07-28 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8820:

Attachment: HDFS-8820.01.patch

Add a new configuration key to simplify controlling RPC backoff for NN ports. 
The setting is on by default.

> Enable RPC Congestion control by default
> 
>
> Key: HDFS-8820
> URL: https://issues.apache.org/jira/browse/HDFS-8820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8820.01.patch
>
>
> We propose enabling RPC congestion control introduced by HADOOP-10597 by 
> default.
> We enabled it on a couple of large clusters a few weeks ago and it has helped 
> keep the namenodes responsive under load.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8820) Enable RPC Congestion control by default

2015-07-28 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8820:

Status: Patch Available  (was: Open)

> Enable RPC Congestion control by default
> 
>
> Key: HDFS-8820
> URL: https://issues.apache.org/jira/browse/HDFS-8820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8820.01.patch
>
>
> We propose enabling RPC congestion control introduced by HADOOP-10597 by 
> default.
> We enabled it on a couple of large clusters a few weeks ago and it has helped 
> keep the namenodes responsive under load.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8828) Utilize Snapshot diff report to build copy list in distcp

2015-07-28 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated HDFS-8828:
---
Attachment: HDFS-8828.001.patch

> Utilize Snapshot diff report to build copy list in distcp
> -
>
> Key: HDFS-8828
> URL: https://issues.apache.org/jira/browse/HDFS-8828
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp, snapshots
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: HDFS-8828.001.patch
>
>
> Some users reported a huge time cost to build the file copy list in distcp (30 
> hours with 1.6M files). We can leverage the snapshot diff report to build a file 
> copy list including only files/dirs which changed between two snapshots 
> (or between a snapshot and a normal dir). It speeds up the process in two ways: 1. 
> less copy-list building time. 2. fewer file-copy MR jobs.
> The HDFS snapshot diff report provides information about file/directory creation, 
> deletion, rename and modification between two snapshots or between a snapshot and a 
> normal directory. HDFS-7535 synchronizes deletion and rename, then falls back to 
> the default distcp. So it still relies on the default distcp to build the copy 
> list, which traverses all files under the source dir. This patch will 
> build the copy list based on the snapshot diff report. 
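
The core of the idea can be sketched as below; DiffEntry and its entry types are illustrative placeholders, not Hadoop's actual SnapshotDiffReport API:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of deriving the distcp copy list from a snapshot diff report
 * instead of a full traversal of the source tree. DiffEntry and the entry
 * types are illustrative placeholders, not Hadoop's actual API.
 */
public class DiffCopyList {
    enum Type { CREATE, MODIFY, RENAME, DELETE }

    static class DiffEntry {
        final Type type;
        final String path;
        DiffEntry(Type type, String path) { this.type = type; this.path = path; }
    }

    /**
     * Only created or modified paths need data copying; renames and deletes
     * can be replayed on the target as metadata-only operations.
     */
    static List<String> buildCopyList(List<DiffEntry> diff) {
        List<String> toCopy = new ArrayList<>();
        for (DiffEntry e : diff) {
            if (e.type == Type.CREATE || e.type == Type.MODIFY) {
                toCopy.add(e.path);
            }
        }
        return toCopy;
    }
}
```

The copy list is then proportional to the number of changed paths rather than the total number of files under the source dir, which is where the time savings come from.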



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8828) Utilize Snapshot diff report to build copy list in distcp

2015-07-28 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated HDFS-8828:
---
Component/s: snapshots
 distcp

> Utilize Snapshot diff report to build copy list in distcp
> -
>
> Key: HDFS-8828
> URL: https://issues.apache.org/jira/browse/HDFS-8828
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp, snapshots
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> Some users reported a huge time cost to build the file copy list in distcp (30 
> hours with 1.6M files). We can leverage the snapshot diff report to build a file 
> copy list including only files/dirs which changed between two snapshots 
> (or between a snapshot and a normal dir). It speeds up the process in two ways: 1. 
> less copy-list building time. 2. fewer file-copy MR jobs.
> The HDFS snapshot diff report provides information about file/directory creation, 
> deletion, rename and modification between two snapshots or between a snapshot and a 
> normal directory. HDFS-7535 synchronizes deletion and rename, then falls back to 
> the default distcp. So it still relies on the default distcp to build the copy 
> list, which traverses all files under the source dir. This patch will 
> build the copy list based on the snapshot diff report. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8832) Document hdfs crypto cli changes

2015-07-28 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-8832:


 Summary: Document hdfs crypto cli changes
 Key: HDFS-8832
 URL: https://issues.apache.org/jira/browse/HDFS-8832
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8830) Support add/remove directories to an existing encryption zone

2015-07-28 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8830:
-
Attachment: HDFS-8830.02.patch

Cleaned up unused code and fixed some typos from v01.

> Support add/remove directories to an existing encryption zone
> -
>
> Key: HDFS-8830
> URL: https://issues.apache.org/jira/browse/HDFS-8830
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-8830.01.patch, HDFS-8830.02.patch
>
>
> This is the first step toward better "Scratch space" and "Soft Delete" 
> support. We remove the assumption that the HDFS directory and encryption zone 
> are 1-to-1 mapped and can't be changed once created.
> The encryption zone creation part is kept As-Is from Hadoop 2.4. We 
> generalize the encryption zone and its directories from 1:1 to 1:many. This 
> way, other directories such as scratch can be added to/removed from 
> encryption zone as needed. Later on, files in these directories can be 
> renamed within the same encryption zone efficiently. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8830) Support add/remove directories to an existing encryption zone

2015-07-28 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8830:
-
Attachment: (was: HDFS-8830.01.patch)

> Support add/remove directories to an existing encryption zone
> -
>
> Key: HDFS-8830
> URL: https://issues.apache.org/jira/browse/HDFS-8830
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-8830.01.patch
>
>
> This is the first step toward better "Scratch space" and "Soft Delete" 
> support. We remove the assumption that the HDFS directory and encryption zone 
> are 1-to-1 mapped and can't be changed once created.
> The encryption zone creation part is kept As-Is from Hadoop 2.4. We 
> generalize the encryption zone and its directories from 1:1 to 1:many. This 
> way, other directories such as scratch can be added to/removed from 
> encryption zone as needed. Later on, files in these directories can be 
> renamed within the same encryption zone efficiently. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8830) Support add/remove directories to an existing encryption zone

2015-07-28 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8830:
-
Attachment: HDFS-8830.01.patch

> Support add/remove directories to an existing encryption zone
> -
>
> Key: HDFS-8830
> URL: https://issues.apache.org/jira/browse/HDFS-8830
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-8830.01.patch
>
>
> This is the first step toward better "Scratch space" and "Soft Delete" 
> support. We remove the assumption that the HDFS directory and encryption zone 
> are 1-to-1 mapped and can't be changed once created.
> The encryption zone creation part is kept As-Is from Hadoop 2.4. We 
> generalize the encryption zone and its directories from 1:1 to 1:many. This 
> way, other directories such as scratch can be added to/removed from 
> encryption zone as needed. Later on, files in these directories can be 
> renamed within the same encryption zone efficiently. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8830) Support add/remove directories to an existing encryption zone

2015-07-28 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8830:
-
Attachment: (was: HDFS-8747.01.patch)

> Support add/remove directories to an existing encryption zone
> -
>
> Key: HDFS-8830
> URL: https://issues.apache.org/jira/browse/HDFS-8830
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-8830.01.patch
>
>
> This is the first step toward better "Scratch space" and "Soft Delete" 
> support. We remove the assumption that the HDFS directory and encryption zone 
> are 1-to-1 mapped and can't be changed once created.
> The encryption zone creation part is kept As-Is from Hadoop 2.4. We 
> generalize the encryption zone and its directories from 1:1 to 1:many. This 
> way, other directories such as scratch can be added to/removed from 
> encryption zone as needed. Later on, files in these directories can be 
> renamed within the same encryption zone efficiently. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8830) Support add/remove directories to an existing encryption zone

2015-07-28 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8830:
-
Attachment: HDFS-8830.01.patch

Rename the patch to match with the JIRA ID of the subtask.

> Support add/remove directories to an existing encryption zone
> -
>
> Key: HDFS-8830
> URL: https://issues.apache.org/jira/browse/HDFS-8830
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-8830.01.patch
>
>
> This is the first step toward better "Scratch space" and "Soft Delete" 
> support. We remove the assumption that the HDFS directory and encryption zone 
> are 1-to-1 mapped and can't be changed once created.
> The encryption zone creation part is kept As-Is from Hadoop 2.4. We 
> generalize the encryption zone and its directories from 1:1 to 1:many. This 
> way, other directories such as scratch can be added to/removed from 
> encryption zone as needed. Later on, files in these directories can be 
> renamed within the same encryption zone efficiently. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8831) Support "Soft Delete" for files under HDFS encryption zone

2015-07-28 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-8831:


 Summary: Support "Soft Delete" for files under HDFS encryption zone
 Key: HDFS-8831
 URL: https://issues.apache.org/jira/browse/HDFS-8831
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


Currently, "Soft Delete" is only supported if the whole encryption zone is 
deleted. If you delete files whinin the zone with trash feature enabled, you 
will get error similar to the following 

{code}
rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
/z1_1/startnn.sh can't be moved from an encryption zone.
{code}

With HDFS-8830, we can support "Soft Delete" by adding the .Trash folder of the 
file being deleted to the same encryption zone. 
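
A minimal sketch of the per-zone trash layout; the helper and path convention below are assumptions for illustration, not the committed implementation:

```java
/**
 * Sketch of the idea: when deleting a file inside an encryption zone,
 * move it to a .Trash directory under the zone root so the rename never
 * crosses the zone boundary. The path convention is illustrative only.
 */
public class ZoneTrash {
    /** Computes a trash location that stays inside the zone. */
    static String trashPathFor(String zoneRoot, String user, String filePath) {
        if (!filePath.startsWith(zoneRoot + "/")) {
            throw new IllegalArgumentException("file is not inside the zone");
        }
        String relative = filePath.substring(zoneRoot.length());
        return zoneRoot + "/.Trash/" + user + "/Current" + relative;
    }
}
```

For the example above, deleting /z1_1/startnn.sh as user alice would move it to /z1_1/.Trash/alice/Current/startnn.sh instead of failing.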



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8747) Provide Better "Scratch Space" and "Soft Delete" Support for HDFS Encryption Zones

2015-07-28 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8747:
-
Attachment: HDFS-8747-07282015.pdf

> Provide Better "Scratch Space" and "Soft Delete" Support for HDFS Encryption 
> Zones
> --
>
> Key: HDFS-8747
> URL: https://issues.apache.org/jira/browse/HDFS-8747
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-8747-07092015.pdf, HDFS-8747-07152015.pdf, 
> HDFS-8747-07282015.pdf
>
>
> HDFS Transparent Data Encryption At-Rest was introduced in Hadoop 2.6 to 
> allow create encryption zone on top of a single HDFS directory. Files under 
> the root directory of the encryption zone will be encrypted/decrypted 
> transparently upon HDFS client write or read operations. 
> Generally, it does not support rename (without data copying) across encryption 
> zones or between an encryption zone and a non-encryption zone because of the 
> different security settings of encryption zones. However, there are certain use cases 
> where efficient rename support is desired. This JIRA is to propose better 
> support of two such use cases “Scratch Space” (a.k.a. staging area) and “Soft 
> Delete” (a.k.a. trash) with HDFS encryption zones.
> “Scratch Space” is widely used in Hadoop jobs, which requires efficient 
> rename support. Temporary files from MR jobs are usually stored in a staging 
> area outside the encryption zone, such as the “/tmp” directory, and then renamed 
> to targeted directories as specified once the data is ready to be further 
> processed. 
> Below is a summary of supported/unsupported cases from latest Hadoop:
> * Rename within the encryption zone is supported
> * Rename the entire encryption zone by moving the root directory of the zone  
> is allowed.
> * Rename sub-directory/file from encryption zone to non-encryption zone is 
> not allowed.
> * Rename sub-directory/file from encryption zone A to encryption zone B is 
> not allowed.
> * Rename from non-encryption zone to encryption zone is not allowed.
> “Soft delete” (a.k.a. trash) is a client-side “soft delete” feature that 
> helps prevent accidental deletion of files and directories. If trash is 
> enabled and a file or directory is deleted using the Hadoop shell, the file 
> is moved to the .Trash directory of the user's home directory instead of 
> being deleted.  Deleted files are initially moved (renamed) to the Current 
> sub-directory of the .Trash directory with original path being preserved. 
> Files and directories in the trash can be restored simply by moving them to a 
> location outside the .Trash directory.
> Due to the limited rename support, deleting a sub-directory/file within an 
> encryption zone with the trash feature enabled is not allowed. Clients have to 
> use the -skipTrash option to work around this. HADOOP-10902 and HDFS-6767 improved 
> the error message but without a complete solution to the problem. 
> We propose to solve the problem by generalizing the mapping between 
> encryption zone and its underlying HDFS directories from 1:1 today to 1:N. 
> The encryption zone should allow non-overlapped directories such as scratch 
> space or soft delete "trash" locations to be added/removed dynamically after 
> creation. This way, rename for "scratch space" and "soft delete" can be 
> better supported without breaking the assumption that rename is only 
> supported "within the zone". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8830) Support add/remove directories to an existing encryption zone

2015-07-28 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8830:
-
Status: Patch Available  (was: Open)

> Support add/remove directories to an existing encryption zone
> -
>
> Key: HDFS-8830
> URL: https://issues.apache.org/jira/browse/HDFS-8830
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-8747.01.patch
>
>
> This is the first step toward better "Scratch space" and "Soft Delete" 
> support. We remove the assumption that the HDFS directory and encryption zone 
> are 1-to-1 mapped and can't be changed once created.
> The encryption zone creation part is kept As-Is from Hadoop 2.4. We 
> generalize the encryption zone and its directories from 1:1 to 1:many. This 
> way, other directories such as scratch can be added to/removed from 
> encryption zone as needed. Later on, files in these directories can be 
> renamed within the same encryption zone efficiently. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8830) Support add/remove directories to an existing encryption zone

2015-07-28 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8830:
-
Attachment: HDFS-8747.01.patch

Attach an initial patch that: 

1) Allows directories to be added to and removed from an encryption zone. 
2) Enhances the hdfs crypto CLI with a "-v" option to show headers when listing 
encryption zones. 

> Support add/remove directories to an existing encryption zone
> -
>
> Key: HDFS-8830
> URL: https://issues.apache.org/jira/browse/HDFS-8830
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-8747.01.patch
>
>
> This is the first step toward better "Scratch space" and "Soft Delete" 
> support. We remove the assumption that the HDFS directory and encryption zone 
> are 1-to-1 mapped and can't be changed once created.
> The encryption zone creation part is kept As-Is from Hadoop 2.4. We 
> generalize the encryption zone and its directories from 1:1 to 1:many. This 
> way, other directories such as scratch can be added to/removed from 
> encryption zone as needed. Later on, files in these directories can be 
> renamed within the same encryption zone efficiently. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8818) Allow Balancer to run faster

2015-07-28 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644944#comment-14644944
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8818:
---

I am going to make MAX_BLOCKS_SIZE_TO_FETCH configurable in HDFS-8824.

> Allow Balancer to run faster
> 
>
> Key: HDFS-8818
> URL: https://issues.apache.org/jira/browse/HDFS-8818
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h8818_20150723.patch, h8818_20150727.patch
>
>
> The Balancer was intentionally designed to run slowly so 
> that the balancing activities won't affect the normal cluster activities and 
> the running jobs.
> There are new use cases where a cluster admin may choose to balance the cluster 
> when the cluster load is low, or in a maintenance window, so we should 
> have an option to allow the Balancer to run faster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8830) Support add/remove directories to an existing encryption zone

2015-07-28 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-8830:


 Summary: Support add/remove directories to an existing encryption 
zone
 Key: HDFS-8830
 URL: https://issues.apache.org/jira/browse/HDFS-8830
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This is the first step toward better "Scratch space" and "Soft Delete" support. 
We remove the assumption that the HDFS directory and encryption zone are 1-to-1 
mapped and can't be changed once created.

The encryption zone creation part is kept As-Is from Hadoop 2.4. We generalize 
the encryption zone and its directories from 1:1 to 1:many. This way, other 
directories such as scratch can be added to/removed from encryption zone as 
needed. Later on, files in these directories can be renamed within the same 
encryption zone efficiently. 
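
The 1:many generalization can be sketched as follows; the class and method names are illustrative, not the NameNode's actual data structures:

```java
import java.util.HashSet;
import java.util.Set;

/**
 * Illustrative sketch (not HDFS code) of generalizing an encryption zone
 * from one root directory to many: rename is allowed only when both the
 * source and destination resolve to the same zone.
 */
public class MultiDirZone {
    private final Set<String> roots = new HashSet<>();

    void addDirectory(String dir) { roots.add(dir); }
    void removeDirectory(String dir) { roots.remove(dir); }

    /** True if the path lives under any of the zone's directories. */
    boolean contains(String path) {
        for (String r : roots) {
            if (path.equals(r) || path.startsWith(r + "/")) return true;
        }
        return false;
    }

    /** Rename without data copying is permitted only within the zone. */
    boolean canRename(String src, String dst) {
        return contains(src) && contains(dst);
    }
}
```

With a scratch directory added to the zone, a rename from the zone's main tree into the scratch tree stays "within the zone" and needs no data copy.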



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3570) Balancer shouldn't rely on "DFS Space Used %" as that ignores non-DFS used space

2015-07-28 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644935#comment-14644935
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3570:
---

> ... It went on scheduling writes to the DN to balance it out, but the DN 
> simply can't accept any more blocks as a result of its disks' state.

This is similar to HDFS-8278.  I suggest that the Balancer also check whether the 
remaining space is larger than a threshold before adding the datanode to 
underUtilized or belowAvgUtilized.

> Balancer shouldn't rely on "DFS Space Used %" as that ignores non-DFS used 
> space
> 
>
> Key: HDFS-3570
> URL: https://issues.apache.org/jira/browse/HDFS-3570
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HDFS-3570.003.patch, HDFS-3570.2.patch, 
> HDFS-3570.aash.1.patch
>
>
> Report from a user here: 
> https://groups.google.com/a/cloudera.org/d/msg/cdh-user/pIhNyDVxdVY/b7ENZmEvBjIJ,
>  post archived at http://pastebin.com/eVFkk0A0
> This user had a specific DN that had a large non-DFS usage among 
> dfs.data.dirs, and very little DFS usage (which is computed against total 
> possible capacity). 
> Balancer apparently only looks at the DFS usage, and does not consider that 
> non-DFS usage may also be high on a DN/cluster. Hence, it thinks that if a 
> DN reports only 8% DFS usage, it has a lot of free space to write more 
> blocks, when that isn't true, as shown by this user's case. It went on 
> scheduling writes to the DN to balance it out, but the DN simply can't 
> accept any more blocks as a result of its disks' state.
> I think it would be better if we _computed_ the actual utilization as 
> {{(capacity - actual remaining space)/(capacity)}}, as opposed to the 
> current {{(dfs used)/(capacity)}}. Thoughts?
> This isn't very critical, however, because it is very rare to see DN space 
> being used for non-DN data, but it does expose a valid bug.
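The difference between the two metrics in the proposal above can be sketched in a few lines of Java (method names are illustrative, not the actual Balancer API):

```java
// Sketch of the two utilization metrics discussed in HDFS-3570.
// The method names are illustrative, not real Balancer code.
public class UtilizationSketch {
    /** Current metric: DFS-used bytes over raw capacity. */
    static double dfsUsedPercent(long dfsUsed, long capacity) {
        return 100.0 * dfsUsed / capacity;
    }

    /** Proposed metric: treat everything that is not remaining as used,
     *  so non-DFS usage (logs, MR spill, other data) counts as well. */
    static double effectiveUsedPercent(long remaining, long capacity) {
        return 100.0 * (capacity - remaining) / capacity;
    }

    public static void main(String[] args) {
        long capacity = 1000L, dfsUsed = 80L, nonDfsUsed = 700L;
        long remaining = capacity - dfsUsed - nonDfsUsed;
        // The DN looks 8% used by the current metric...
        System.out.println(dfsUsedPercent(dfsUsed, capacity));         // 8.0
        // ...but is really 78% full once non-DFS usage is counted.
        System.out.println(effectiveUsedPercent(remaining, capacity)); // 78.0
    }
}
```

With heavy non-DFS usage, the proposed metric reports the node as nearly full while the current one reports it nearly empty, which is exactly the mismatch in the user's report.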



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8818) Allow Balancer to run faster

2015-07-28 Thread Chang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644919#comment-14644919
 ] 

Chang Li commented on HDFS-8818:


Also, right now the Source will fetch no more than 2 GB of blocks from the 
namenode at a time. IMO it's better to increase MAX_BLOCKS_SIZE_TO_FETCH to, 
say, about 10 GB. It's not efficient to ask the namenode for such a small 
amount each time, over many requests.

> Allow Balancer to run faster
> 
>
> Key: HDFS-8818
> URL: https://issues.apache.org/jira/browse/HDFS-8818
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h8818_20150723.patch, h8818_20150727.patch
>
>
> The original design of Balancer intentionally makes it run slowly so that 
> the balancing activities won't affect normal cluster activities and running 
> jobs.
> There is a new use case where a cluster admin may choose to balance the 
> cluster when the cluster load is low, or in a maintenance window. So we 
> should have an option to allow Balancer to run faster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8818) Allow Balancer to run faster

2015-07-28 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644906#comment-14644906
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8818:
---

Datanodes store blocks at TB scale, while the Balancer only fetches GBs of 
blocks at a time, so it seems unlikely to get the same blocks.

> Allow Balancer to run faster
> 
>
> Key: HDFS-8818
> URL: https://issues.apache.org/jira/browse/HDFS-8818
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h8818_20150723.patch, h8818_20150727.patch
>
>
> The original design of Balancer intentionally makes it run slowly so that 
> the balancing activities won't affect normal cluster activities and running 
> jobs.
> There is a new use case where a cluster admin may choose to balance the 
> cluster when the cluster load is low, or in a maintenance window. So we 
> should have an option to allow Balancer to run faster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-28 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8180:
--
Status: Patch Available  (was: Open)

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch, 
> HDFS-8180-4.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-28 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8180:
--
Status: Open  (was: Patch Available)

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch, 
> HDFS-8180-4.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8829) DataNode sets SO_RCVBUF explicitly is disabling tcp auto-tuning

2015-07-28 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru reassigned HDFS-8829:
-

Assignee: kanaka kumar avvaru

> DataNode sets SO_RCVBUF explicitly is disabling tcp auto-tuning
> ---
>
> Key: HDFS-8829
> URL: https://issues.apache.org/jira/browse/HDFS-8829
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.3.0, 2.6.0
>Reporter: He Tianyi
>Assignee: kanaka kumar avvaru
>
> {code:java}
>   private void initDataXceiver(Configuration conf) throws IOException {
> // find free port or use privileged port provided
> TcpPeerServer tcpPeerServer;
> if (secureResources != null) {
>   tcpPeerServer = new TcpPeerServer(secureResources);
> } else {
>   tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
>   DataNode.getStreamingAddr(conf));
> }
> 
> tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
> {code}
> The last line sets SO_RCVBUF explicitly, thus disabling tcp auto-tuning on 
> some system.
> Shall we make this behavior configurable?
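One possible shape of the configurable behavior, sketched below: skip the explicit setsockopt when the configured size is non-positive, so the kernel's TCP auto-tuning stays in effect. The config key name and helper are hypothetical, not actual HDFS properties or DataNode code.

```java
import java.io.IOException;
import java.net.ServerSocket;

public class ReceiveBufferConfig {
    // Hypothetical config key; not an actual HDFS property name.
    static final String DFS_DATANODE_RCVBUF_KEY =
        "dfs.datanode.socket.recv.buffer.size";
    // Mirrors HdfsConstants.DEFAULT_DATA_SOCKET_SIZE (128 KB).
    static final int DEFAULT_RCVBUF = 128 * 1024;

    /** Apply the configured receive buffer size, or leave the socket
     *  untouched (size <= 0) so the kernel's auto-tuning stays active. */
    static void applyReceiveBuffer(ServerSocket server, int configuredSize)
            throws IOException {
        if (configuredSize > 0) {
            server.setReceiveBufferSize(configuredSize);
        }
        // configuredSize <= 0: do nothing; auto-tuning remains enabled.
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket s = new ServerSocket(0)) {
            applyReceiveBuffer(s, 0);              // auto-tuning preserved
            applyReceiveBuffer(s, DEFAULT_RCVBUF); // explicit 128 KB buffer
            System.out.println(s.getReceiveBufferSize() > 0);
        }
    }
}
```

Defaulting the key to 0 (auto-tune) would change current behavior, while defaulting it to 128 KB would keep backward compatibility; that trade-off is the real question in this JIRA.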



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8823) Move replication factor into individual blocks

2015-07-28 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644896#comment-14644896
 ] 

Haohui Mai commented on HDFS-8823:
--

I should have worded it more clearly. The main motivation of this work is to 
further separate the block management layer from the namespace. It is a 
prerequisite for putting the block management layer under a separate lock, so 
that processing block reports will no longer block namespace operations.

As a side effect, the changes potentially enable a per-block replication 
factor. However, there are no plans to support that, nor any plans to make it 
visible in APIs.

Speaking of the memory usage, here are the outputs of the object layout of the 
{{BlockInfo}} class before and after the changes:

Before:

{noformat}
Running 64-bit HotSpot VM.
Using compressed oop with 3-bit shift.
Using compressed klass with 3-bit shift.
Objects are 8 bytes aligned.
Field sizes by type: 4, 1, 1, 2, 2, 4, 4, 8, 8 [bytes]
Array element sizes: 4, 1, 1, 2, 2, 4, 4, 8, 8 [bytes]

VM fails to invoke the default constructor, falling back to class-only 
introspection.

BlockInfo object internals:
 OFFSET  SIZETYPE DESCRIPTIONVALUE
  012 (object header)N/A
 12 4 (alignment/padding gap)N/A
 16 8long Block.blockId  N/A
 24 8long Block.numBytes N/A
 32 8long Block.generationStamp  N/A
 40 4 BlockCollection BlockInfo.bc   N/A
 44 4   LinkedElement BlockInfo.nextLinkedElementN/A
 48 4Object[] BlockInfo.triplets N/A
 52 4 (loss due to the next object alignment)
Instance size: 56 bytes (estimated, the sample instance is not available)
Space losses: 4 bytes internal + 4 bytes external = 8 bytes total
{noformat}

After:

{noformat}
Running 64-bit HotSpot VM.
Using compressed oop with 3-bit shift.
Using compressed klass with 3-bit shift.
Objects are 8 bytes aligned.
Field sizes by type: 4, 1, 1, 2, 2, 4, 4, 8, 8 [bytes]
Array element sizes: 4, 1, 1, 2, 2, 4, 4, 8, 8 [bytes]

VM fails to invoke the default constructor, falling back to class-only 
introspection.

objc[86584]: Class JavaLaunchHelper is implemented in both 
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/java and 
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/jre/lib/libinstrument.dylib.
 One of the two will be used. Which one is undefined.
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo object internals:
 OFFSET  SIZETYPE DESCRIPTIONVALUE
  012 (object header)N/A
 12 4 (alignment/padding gap)N/A
 16 8long Block.blockId  N/A
 24 8long Block.numBytes N/A
 32 8long Block.generationStamp  N/A
 40 2   short BlockInfo.replication  N/A
 42 2 (alignment/padding gap)N/A
 44 4 BlockCollection BlockInfo.bc   N/A
 48 4   LinkedElement BlockInfo.nextLinkedElementN/A
 52 4Object[] BlockInfo.triplets N/A
Instance size: 56 bytes (estimated, the sample instance is not available)
Space losses: 6 bytes internal + 0 bytes external = 6 bytes total
{noformat}

The changes add a short field to the {{BlockInfo}} class. Under my 
configuration (Java 1.8) the space overhead is absorbed by the alignment of 
the class, which means there is no memory overhead compared to the current 
implementation. YMMV.
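The "absorbed by alignment" claim can be checked with back-of-the-envelope arithmetic, assuming the layout shown in the dumps above (12-byte header under compressed oops, 4-byte references, 8-byte object alignment):

```java
// Back-of-the-envelope check that adding a 2-byte short to BlockInfo
// does not grow the aligned instance size, per the JOL dumps above.
public class LayoutMath {
    /** Round a size up to the JVM's 8-byte object alignment. */
    static long align8(long size) {
        return (size + 7) & ~7L;
    }

    public static void main(String[] args) {
        long header = 12;   // object header with compressed oops
        long longs = 3 * 8; // blockId, numBytes, generationStamp
        long refs = 3 * 4;  // bc, nextLinkedElement, triplets

        // Before: header padded to 16, then 24 + 12 = 52 -> aligned to 56.
        long before = align8(align8(header) + longs + refs);
        // After: the short (2 bytes) plus 2 bytes padding fill the slack.
        long after = align8(align8(header) + longs + 2 + 2 + refs);

        System.out.println(before); // 56
        System.out.println(after);  // 56
    }
}
```

Both come out to 56 bytes, matching the two JOL dumps: the 4 bytes previously lost to trailing alignment now hold the replication field.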

> Move replication factor into individual blocks
> --
>
> Key: HDFS-8823
> URL: https://issues.apache.org/jira/browse/HDFS-8823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8823.000.patch
>
>
> This jira proposes to record the replication factor in the {{BlockInfo}} 
> class. The changes have two advantages:
> * Decoupling the namespace and the block management layer. It is a 
> prerequisite step to move block management off the heap or to a separate 
> process.
> * Increased flexibility on replicating blocks. Currently the replication 
> factors of all blocks have to be the same. The replication factors of these 
> blocks are equal to the highest replication factor across all snapshots. The 
> changes will allow blocks in a file to have different replication factor, 
> potentially saving some space.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8818) Allow Balancer to run faster

2015-07-28 Thread Chang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644888#comment-14644888
 ] 

Chang Li commented on HDFS-8818:


bq. Balancer waits until all block transfer are done each iteration and 
Datanodes send a block receipt immediately once they receive a block.
But inside dispatchBlocks(), the Source does not wait for blocks to finish 
transferring; it quickly iterates and asks the namenode for more blocks. Even 
a random offset cannot prevent the namenode from returning many of the same 
blocks, which is wasteful.

> Allow Balancer to run faster
> 
>
> Key: HDFS-8818
> URL: https://issues.apache.org/jira/browse/HDFS-8818
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h8818_20150723.patch, h8818_20150727.patch
>
>
> The original design of Balancer intentionally makes it run slowly so that 
> the balancing activities won't affect normal cluster activities and running 
> jobs.
> There is a new use case where a cluster admin may choose to balance the 
> cluster when the cluster load is low, or in a maintenance window. So we 
> should have an option to allow Balancer to run faster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3570) Balancer shouldn't rely on "DFS Space Used %" as that ignores non-DFS used space

2015-07-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644870#comment-14644870
 ] 

Allen Wittenauer commented on HDFS-3570:


bq. We could get space used by calling df rather than du

... which, as a reminder, would return incorrect numbers on a lot of pooled 
storage systems (ZFS, btrfs, etc, etc).

> Balancer shouldn't rely on "DFS Space Used %" as that ignores non-DFS used 
> space
> 
>
> Key: HDFS-3570
> URL: https://issues.apache.org/jira/browse/HDFS-3570
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HDFS-3570.003.patch, HDFS-3570.2.patch, 
> HDFS-3570.aash.1.patch
>
>
> Report from a user here: 
> https://groups.google.com/a/cloudera.org/d/msg/cdh-user/pIhNyDVxdVY/b7ENZmEvBjIJ,
>  post archived at http://pastebin.com/eVFkk0A0
> This user had a specific DN that had a large non-DFS usage among 
> dfs.data.dirs, and very little DFS usage (which is computed against total 
> possible capacity). 
> Balancer apparently only looks at the DFS usage, and does not consider that 
> non-DFS usage may also be high on a DN/cluster. Hence, it thinks that if a 
> DN reports only 8% DFS usage, it has a lot of free space to write more 
> blocks, when that isn't true, as shown by this user's case. It went on 
> scheduling writes to the DN to balance it out, but the DN simply can't 
> accept any more blocks as a result of its disks' state.
> I think it would be better if we _computed_ the actual utilization as 
> {{(capacity - actual remaining space)/(capacity)}}, as opposed to the 
> current {{(dfs used)/(capacity)}}. Thoughts?
> This isn't very critical, however, because it is very rare to see DN space 
> being used for non-DN data, but it does expose a valid bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8622) Implement GETCONTENTSUMMARY operation for WebImageViewer

2015-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644844#comment-14644844
 ] 

Hadoop QA commented on HDFS-8622:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 10s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  1s | Site still builds. |
| {color:green}+1{color} | checkstyle |   0m 31s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 27s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  3s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 160m 43s | Tests failed in hadoop-hdfs. |
| | | 207m 37s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestAppendSnapshotTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747570/HDFS-8622-05.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / f170934 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11854/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11854/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11854/console |


This message was automatically generated.

> Implement GETCONTENTSUMMARY operation for WebImageViewer
> 
>
> Key: HDFS-8622
> URL: https://issues.apache.org/jira/browse/HDFS-8622
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-8622-00.patch, HDFS-8622-01.patch, 
> HDFS-8622-02.patch, HDFS-8622-03.patch, HDFS-8622-04.patch, HDFS-8622-05.patch
>
>
>  it would be better for administrators if {code}GETCONTENTSUMMARY{code} is 
> supported.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-28 Thread Santhosh G Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Santhosh G Nayak reassigned HDFS-8180:
--

Assignee: Santhosh G Nayak  (was: Jakob Homan)

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch, 
> HDFS-8180-4.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-28 Thread Santhosh G Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Santhosh G Nayak updated HDFS-8180:
---
Attachment: HDFS-8180-4.patch

Thanks [~jghoman]. Uploading patch version 4 with checkstyle fixes. 

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Jakob Homan
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch, 
> HDFS-8180-4.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3570) Balancer shouldn't rely on "DFS Space Used %" as that ignores non-DFS used space

2015-07-28 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644814#comment-14644814
 ] 

Colin Patrick McCabe commented on HDFS-3570:


I agree that it would be nice to have an optimized code path assuming a 
dedicated partition for HDFS.  We could get space used by calling df rather 
than du, which would be much more efficient.  However, in the past, we've 
avoided doing this because MR almost always spills to the same disks that HDFS 
is using, so we would have to have 2 partitions on every disk.  I'm not sure if 
there is a good way around this problem...
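The cost difference can be sketched with standard Java file APIs: the df-style number is a single filesystem query, while the du-style number walks the whole tree. This is a simplified illustration, not the DataNode's actual DU/DF code:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class SpaceUsed {
    /** df-style: one filesystem call, O(1), but it counts everything on
     *  the partition -- misleading if HDFS shares the disk with MR spill,
     *  logs, or (per Allen's point) pooled storage like ZFS/btrfs. */
    static long dfStyleUsed(File dir) {
        return dir.getTotalSpace() - dir.getFreeSpace();
    }

    /** du-style: walk the directory tree and sum file sizes, O(#files). */
    static long duStyleUsed(Path dir) throws IOException {
        final long[] total = {0};
        Files.walkFileTree(dir, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path f, BasicFileAttributes a) {
                total[0] += a.size();
                return FileVisitResult.CONTINUE;
            }
        });
        return total[0];
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("spaceused");
        Files.write(tmp.resolve("blk_1"), new byte[1024]);
        System.out.println(duStyleUsed(tmp));              // 1024
        System.out.println(dfStyleUsed(tmp.toFile()) > 0); // true on a real fs
    }
}
```

The df-style call is the efficient path discussed above, but it only reflects HDFS usage when HDFS has the partition to itself, which is precisely the caveat raised in this thread.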

> Balancer shouldn't rely on "DFS Space Used %" as that ignores non-DFS used 
> space
> 
>
> Key: HDFS-3570
> URL: https://issues.apache.org/jira/browse/HDFS-3570
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HDFS-3570.003.patch, HDFS-3570.2.patch, 
> HDFS-3570.aash.1.patch
>
>
> Report from a user here: 
> https://groups.google.com/a/cloudera.org/d/msg/cdh-user/pIhNyDVxdVY/b7ENZmEvBjIJ,
>  post archived at http://pastebin.com/eVFkk0A0
> This user had a specific DN that had a large non-DFS usage among 
> dfs.data.dirs, and very little DFS usage (which is computed against total 
> possible capacity). 
> Balancer apparently only looks at the DFS usage, and does not consider that 
> non-DFS usage may also be high on a DN/cluster. Hence, it thinks that if a 
> DN reports only 8% DFS usage, it has a lot of free space to write more 
> blocks, when that isn't true, as shown by this user's case. It went on 
> scheduling writes to the DN to balance it out, but the DN simply can't 
> accept any more blocks as a result of its disks' state.
> I think it would be better if we _computed_ the actual utilization as 
> {{(capacity - actual remaining space)/(capacity)}}, as opposed to the 
> current {{(dfs used)/(capacity)}}. Thoughts?
> This isn't very critical, however, because it is very rare to see DN space 
> being used for non-DN data, but it does expose a valid bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7240) Object store in HDFS

2015-07-28 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644810#comment-14644810
 ] 

Colin Patrick McCabe commented on HDFS-7240:


[~jnp], [~sanjay.radia], did we come to a conclusion about range partitioning?

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8695) OzoneHandler : Add Bucket REST Interface

2015-07-28 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-8695:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

> OzoneHandler : Add Bucket REST Interface
> 
>
> Key: HDFS-8695
> URL: https://issues.apache.org/jira/browse/HDFS-8695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: hdfs-8695-HDFS-7240.001.patch, 
> hdfs-8695-HDFS-7240.002.patch, hdfs-8695-HDFS-7240.003.patch
>
>
> Add Bucket REST interface into Ozone server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8818) Allow Balancer to run faster

2015-07-28 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644790#comment-14644790
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8818:
---

> now that dispatcher will keep fetching more blocks from namenode every 
> iteration, but namenode is likely to return very same list of blocks since 
> the block moving is not that fast and namenode can't know the blocks just 
> moved instantly. ...

I don't think that is the case, for the following reasons:
- blocksToReceive will become <= 0.
- Balancer waits until all block transfers are done in each iteration, and 
Datanodes send a block receipt immediately once they receive a block.
- BlockManager.getBlocks(..) uses a random offset to get the blocks from the 
list.
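The random-offset point can be illustrated with a small sketch (not the actual BlockManager.getBlocks implementation): starting at a random index and wrapping around means consecutive calls rarely return the same prefix of the block list.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class RandomOffsetFetch {
    /** Return blocks starting at a random offset and wrapping around,
     *  stopping once the accumulated size reaches the budget. Illustrative
     *  only; the real getBlocks works on block objects, not sizes. */
    static List<Long> getBlocks(List<Long> blockSizes, long sizeBudget,
                                Random rnd) {
        List<Long> picked = new ArrayList<>();
        int n = blockSizes.size();
        if (n == 0) {
            return picked;
        }
        int start = rnd.nextInt(n); // random offset into the list
        long total = 0;
        for (int i = 0; i < n && total < sizeBudget; i++) {
            long b = blockSizes.get((start + i) % n); // wrap around
            picked.add(b);
            total += b;
        }
        return picked;
    }

    public static void main(String[] args) {
        List<Long> blocks = Arrays.asList(10L, 20L, 30L, 40L);
        // Two calls typically begin at different offsets.
        System.out.println(getBlocks(blocks, 35, new Random()));
        System.out.println(getBlocks(blocks, 35, new Random()));
    }
}
```

Because each call starts from a fresh random offset, repeated fetches spread over the whole list rather than repeatedly returning the same leading blocks, which is the behavior being debated above.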


> Allow Balancer to run faster
> 
>
> Key: HDFS-8818
> URL: https://issues.apache.org/jira/browse/HDFS-8818
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h8818_20150723.patch, h8818_20150727.patch
>
>
> The original design of Balancer intentionally makes it run slowly so that 
> the balancing activities won't affect normal cluster activities and running 
> jobs.
> There is a new use case where a cluster admin may choose to balance the 
> cluster when the cluster load is low, or in a maintenance window. So we 
> should have an option to allow Balancer to run faster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-742) A down DataNode makes Balancer to hang on repeatingly asking NameNode its partial block list

2015-07-28 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-742:
-
Priority: Minor  (was: Major)
Hadoop Flags: Reviewed

+1 patch looks good.

> A down DataNode makes Balancer to hang on repeatingly asking NameNode its 
> partial block list
> 
>
> Key: HDFS-742
> URL: https://issues.apache.org/jira/browse/HDFS-742
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Hairong Kuang
>Assignee: Mit Desai
>Priority: Minor
> Attachments: HDFS-742-trunk.patch, HDFS-742.patch
>
>
> We had a balancer that had not made any progress for a long time. It turned 
> out it was repeatedly asking the Namenode for a partial block list of one 
> datanode, which went down while the balancer was running.
> The NameNode should notify the Balancer that the datanode is not available, 
> and the Balancer should stop asking for that datanode's block list.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8695) OzoneHandler : Add Bucket REST Interface

2015-07-28 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644745#comment-14644745
 ] 

Anu Engineer commented on HDFS-8695:


[~arpitagarwal] [~kanaka] Thanks for the reviews.  I will commit this shortly.



> OzoneHandler : Add Bucket REST Interface
> 
>
> Key: HDFS-8695
> URL: https://issues.apache.org/jira/browse/HDFS-8695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8695-HDFS-7240.001.patch, 
> hdfs-8695-HDFS-7240.002.patch, hdfs-8695-HDFS-7240.003.patch
>
>
> Add Bucket REST interface into Ozone server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8695) OzoneHandler : Add Bucket REST Interface

2015-07-28 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644743#comment-14644743
 ] 

Anu Engineer commented on HDFS-8695:


bq.  One comment - handleIOException looks out of place in 
BucketProcessTemplate. IIUC isn't that exception mapping specific to 
LocalStorageHandler? +1 otherwise.

The only assumption that handler makes about the lower layers is that it will 
either throw OzoneExceptions or it will throw IOExceptions. The Ozone error 
code mapping to specific IOException is based on our understanding of what the 
lower layer throws. However I would expect that even with OzoneHandler it is 
going to be similar, we can certainly make changes to the mapping in future if 
needed.





> OzoneHandler : Add Bucket REST Interface
> 
>
> Key: HDFS-8695
> URL: https://issues.apache.org/jira/browse/HDFS-8695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8695-HDFS-7240.001.patch, 
> hdfs-8695-HDFS-7240.002.patch, hdfs-8695-HDFS-7240.003.patch
>
>
> Add Bucket REST interface into Ozone server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8823) Move replication factor into individual blocks

2015-07-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644727#comment-14644727
 ] 

Andrew Wang commented on HDFS-8823:
---

I think this work should happen on a branch if it affects the memory usage of 
current deployments, at least until we have an implementation of a separated 
namespace. Even then, ideally we can still deploy with the current model 
without introducing extra overhead.

> Move replication factor into individual blocks
> --
>
> Key: HDFS-8823
> URL: https://issues.apache.org/jira/browse/HDFS-8823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8823.000.patch
>
>
> This jira proposes to record the replication factor in the {{BlockInfo}} 
> class. The changes have two advantages:
> * Decoupling the namespace and the block management layer. It is a 
> prerequisite step to move block management off the heap or to a separate 
> process.
> * Increased flexibility on replicating blocks. Currently the replication 
> factors of all blocks have to be the same. The replication factors of these 
> blocks are equal to the highest replication factor across all snapshots. The 
> changes will allow blocks in a file to have different replication factor, 
> potentially saving some space.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3570) Balancer shouldn't rely on "DFS Space Used %" as that ignores non-DFS used space

2015-07-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644720#comment-14644720
 ] 

Allen Wittenauer commented on HDFS-3570:


bq. Setting the parameter for non-dfs used space is an ideal way to avoid the 
problem

Not really.  The "negative math" model just flat out doesn't work in practice. 
It assumes that whatever else is on the file system has a way to bound how 
much space it uses, which is pretty much impossible.  It's one of the reasons 
why I've been advocating a dedicated partition per disk for HDFS for years 
now.  Those who do that seem to have far fewer problems with HDFS, at the 
cost of some initial setup pain.

> Balancer shouldn't rely on "DFS Space Used %" as that ignores non-DFS used 
> space
> 
>
> Key: HDFS-3570
> URL: https://issues.apache.org/jira/browse/HDFS-3570
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HDFS-3570.003.patch, HDFS-3570.2.patch, 
> HDFS-3570.aash.1.patch
>
>
> Report from a user here: 
> https://groups.google.com/a/cloudera.org/d/msg/cdh-user/pIhNyDVxdVY/b7ENZmEvBjIJ,
>  post archived at http://pastebin.com/eVFkk0A0
> This user had a specific DN that had a large non-DFS usage among 
> dfs.data.dirs, and very little DFS usage (which is computed against total 
> possible capacity). 
> Balancer apparently looks only at DFS usage and does not consider that 
> non-DFS usage may also be high on a DN/cluster. Hence, if a DN reports only 
> 8% DFS usage, the Balancer thinks the DN has a lot of free space to write 
> more blocks, which wasn't true in this user's case. It went on scheduling 
> writes to the DN to balance it out, but the DN simply couldn't accept any 
> more blocks given the state of its disks.
> I think it would be better if we _computed_ the actual utilization as 
> {{(capacity - actual remaining space)/(capacity)}}, as opposed to the 
> current {{(dfs used)/(capacity)}}. Thoughts?
> This isn't very critical, however, because it is very rare to see DN space 
> being used for non-DN data, but it does expose a valid bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8818) Allow Balancer to run faster

2015-07-28 Thread Chang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644620#comment-14644620
 ] 

Chang Li commented on HDFS-8818:


[~szetszwo], thanks for the patch. One thing I am concerned about is the change
{code}
-  return srcBlocks.size() < SOURCE_BLOCKS_MIN_SIZE && blocksToReceive > 0;
+  return blocksToReceive > 0;
{code}
With this change the dispatcher will keep fetching more blocks from the 
namenode every iteration, but the namenode is likely to return much the same 
list of blocks, since block moves are not that fast and the namenode cannot 
instantly know which blocks were just moved. This could increase useless load 
on the namenode. 
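The two conditions can be contrasted in a toy model. This is an illustrative sketch, not the real Dispatcher code; the class name and threshold value are stand-ins for the example.

```java
// Toy model of the concern above (not the real Dispatcher code).
public class FetchGuardSketch {
    // Illustrative threshold standing in for SOURCE_BLOCKS_MIN_SIZE.
    static final int SOURCE_BLOCKS_MIN_SIZE = 2;

    // Old condition: refetch only once the local queue of pending
    // source blocks has drained below the threshold.
    static boolean shouldFetchOld(int queuedSrcBlocks, long blocksToReceive) {
        return queuedSrcBlocks < SOURCE_BLOCKS_MIN_SIZE && blocksToReceive > 0;
    }

    // Patched condition: refetch whenever more bytes are still needed,
    // even while many fetched blocks sit in the queue unmoved -- exactly
    // when the namenode tends to return much the same list again.
    static boolean shouldFetchNew(int queuedSrcBlocks, long blocksToReceive) {
        return blocksToReceive > 0;
    }
}
```

The old guard naturally rate-limits namenode calls by waiting for the queue to drain; the patched form trades that for faster refills.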

> Allow Balancer to run faster
> 
>
> Key: HDFS-8818
> URL: https://issues.apache.org/jira/browse/HDFS-8818
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h8818_20150723.patch, h8818_20150727.patch
>
>
> The Balancer was intentionally designed to run slowly so that the 
> balancing activity won't affect normal cluster activity or running jobs.
> There is a new use case where a cluster admin may choose to balance the 
> cluster when the cluster load is low, or in a maintenance window, so we 
> should have an option to allow the Balancer to run faster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8785) TestDistributedFileSystem is failing in trunk

2015-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644548#comment-14644548
 ] 

Hudson commented on HDFS-8785:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2216 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2216/])
HDFS-8785. TestDistributedFileSystem is failing in trunk. Contributed by Xiaoyu 
Yao. (xyao: rev 2196e39e142b0f8d1944805db2bfacd4e3244625)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestDistributedFileSystem is failing in trunk
> -
>
> Key: HDFS-8785
> URL: https://issues.apache.org/jira/browse/HDFS-8785
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HDFS-8785.00.patch, HDFS-8785.01.patch, 
> HDFS-8785.02.patch
>
>
> A newly added test case 
> {{TestDistributedFileSystem#testDFSClientPeerWriteTimeout}} is failing in 
> trunk.
> e.g. run
> https://builds.apache.org/job/PreCommit-HDFS-Build/11716/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testDFSClientPeerWriteTimeout/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7858) Improve HA Namenode Failover detection on the client

2015-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644543#comment-14644543
 ] 

Hudson commented on HDFS-7858:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2216 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2216/])
HDFS-7858. Improve HA Namenode Failover detection on the client. (asuresh) 
(Arun Suresh: rev 030fcfa99c345ad57625486eeabedebf2fd4411f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ConfiguredFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRequestHedgingProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryInvocationHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithNFS.md
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/MultiException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Improve HA Namenode Failover detection on the client
> 
>
> Key: HDFS-7858
> URL: https://issues.apache.org/jira/browse/HDFS-7858
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7858.1.patch, HDFS-7858.10.patch, 
> HDFS-7858.10.patch, HDFS-7858.11.patch, HDFS-7858.12.patch, 
> HDFS-7858.13.patch, HDFS-7858.2.patch, HDFS-7858.2.patch, HDFS-7858.3.patch, 
> HDFS-7858.4.patch, HDFS-7858.5.patch, HDFS-7858.6.patch, HDFS-7858.7.patch, 
> HDFS-7858.8.patch, HDFS-7858.9.patch
>
>
> In an HA deployment, clients are configured with the hostnames of both the 
> Active and Standby Namenodes. A client will first try one of the NNs 
> (non-deterministically); if that NN is the standby, it responds telling the 
> client to retry the request on the other Namenode.
> If the client happens to talk to the Standby first, and the standby is 
> undergoing a GC pause or is busy, the client might not get a response soon 
> enough to try the other NN.
> Proposed approach to solve this:
> 1) Since Zookeeper is already used as the failover controller, the clients 
> could talk to ZK and find out which is the active namenode before contacting 
> it.
> 2) Long-lived DFSClients would have a ZK watch configured which fires when 
> there is a failover, so they do not have to query ZK every time to find out 
> the active NN.
> 3) Clients can also cache the last active NN in the user's home directory 
> (~/.lastNN) so that short-lived clients can try that Namenode first before 
> querying ZK.
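The last point can be sketched with a tiny cache helper. This is hypothetical illustration code, not an actual HDFS client API; the cache-file name and its one-line layout are assumptions taken from the proposal (the real path, e.g. ~/.lastNN, would be passed in by the caller).

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the "~/.lastNN" idea: a short-lived client tries
// the cached last-active NameNode first, before querying ZK or the other NN.
public class LastActiveNnCache {
    private final Path cacheFile;  // e.g. ~/.lastNN in the proposal

    LastActiveNnCache(Path cacheFile) {
        this.cacheFile = cacheFile;
    }

    // Returns the configured NNs with the cached last-active one first.
    List<String> orderCandidates(List<String> configuredNns) throws IOException {
        List<String> ordered = new ArrayList<>(configuredNns);
        if (Files.exists(cacheFile)) {
            String last = new String(Files.readAllBytes(cacheFile)).trim();
            if (ordered.remove(last)) {
                ordered.add(0, last);  // cached active NN goes first
            }
        }
        return ordered;
    }

    // Record the NN that answered as active, for the next client run.
    void recordActive(String nn) throws IOException {
        Files.write(cacheFile, nn.getBytes());
    }
}
```

A stale cache entry is harmless here: the client just falls back to the other configured NN, exactly as it would today.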



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8785) TestDistributedFileSystem is failing in trunk

2015-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644537#comment-14644537
 ] 

Hudson commented on HDFS-8785:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #267 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/267/])
HDFS-8785. TestDistributedFileSystem is failing in trunk. Contributed by Xiaoyu 
Yao. (xyao: rev 2196e39e142b0f8d1944805db2bfacd4e3244625)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java


> TestDistributedFileSystem is failing in trunk
> -
>
> Key: HDFS-8785
> URL: https://issues.apache.org/jira/browse/HDFS-8785
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HDFS-8785.00.patch, HDFS-8785.01.patch, 
> HDFS-8785.02.patch
>
>
> A newly added test case 
> {{TestDistributedFileSystem#testDFSClientPeerWriteTimeout}} is failing in 
> trunk.
> e.g. run
> https://builds.apache.org/job/PreCommit-HDFS-Build/11716/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testDFSClientPeerWriteTimeout/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7858) Improve HA Namenode Failover detection on the client

2015-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644532#comment-14644532
 ] 

Hudson commented on HDFS-7858:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #267 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/267/])
HDFS-7858. Improve HA Namenode Failover detection on the client. (asuresh) 
(Arun Suresh: rev 030fcfa99c345ad57625486eeabedebf2fd4411f)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryInvocationHandler.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/MultiException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRequestHedgingProxyProvider.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ConfiguredFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithNFS.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java


> Improve HA Namenode Failover detection on the client
> 
>
> Key: HDFS-7858
> URL: https://issues.apache.org/jira/browse/HDFS-7858
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7858.1.patch, HDFS-7858.10.patch, 
> HDFS-7858.10.patch, HDFS-7858.11.patch, HDFS-7858.12.patch, 
> HDFS-7858.13.patch, HDFS-7858.2.patch, HDFS-7858.2.patch, HDFS-7858.3.patch, 
> HDFS-7858.4.patch, HDFS-7858.5.patch, HDFS-7858.6.patch, HDFS-7858.7.patch, 
> HDFS-7858.8.patch, HDFS-7858.9.patch
>
>
> In an HA deployment, clients are configured with the hostnames of both the 
> Active and Standby Namenodes. A client will first try one of the NNs 
> (non-deterministically); if that NN is the standby, it responds telling the 
> client to retry the request on the other Namenode.
> If the client happens to talk to the Standby first, and the standby is 
> undergoing a GC pause or is busy, the client might not get a response soon 
> enough to try the other NN.
> Proposed approach to solve this:
> 1) Since Zookeeper is already used as the failover controller, the clients 
> could talk to ZK and find out which is the active namenode before contacting 
> it.
> 2) Long-lived DFSClients would have a ZK watch configured which fires when 
> there is a failover, so they do not have to query ZK every time to find out 
> the active NN.
> 3) Clients can also cache the last active NN in the user's home directory 
> (~/.lastNN) so that short-lived clients can try that Namenode first before 
> querying ZK.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8622) Implement GETCONTENTSUMMARY operation for WebImageViewer

2015-07-28 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HDFS-8622:
---
Attachment: HDFS-8622-05.patch

Hi [~ajisakaa], I have updated the patch as per your comments; please check. 

> Implement GETCONTENTSUMMARY operation for WebImageViewer
> 
>
> Key: HDFS-8622
> URL: https://issues.apache.org/jira/browse/HDFS-8622
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-8622-00.patch, HDFS-8622-01.patch, 
> HDFS-8622-02.patch, HDFS-8622-03.patch, HDFS-8622-04.patch, HDFS-8622-05.patch
>
>
> It would be better for administrators if {{GETCONTENTSUMMARY}} were 
> supported.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

