[jira] [Commented] (HDFS-9019) sticky bit permission denied error not informative enough

2015-09-03 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730372#comment-14730372
 ] 

Tsz Wo Nicholas Sze commented on HDFS-9019:
---

Let's use colon notation to print user:group:mode and print out the full path 
for the inode and its parent; see toAccessControlString(..).
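As a rough sketch of that format (the class, method names, and exact message layout below are illustrative assumptions, not the actual FSPermissionChecker or toAccessControlString(..) code), the denial message might be assembled like:

```java
// Illustrative sketch only: builds a sticky-bit denial message carrying
// user:group:mode plus the full paths of both the inode and its parent.
// Names and layout here are assumptions, not Hadoop's actual output.
class StickyBitMessageSketch {
    // Colon notation for one inode: /full/path=owner:group:mode
    static String describe(String path, String owner, String group, String mode) {
        return path + "=" + owner + ":" + group + ":" + mode;
    }

    static String stickyBitDenied(String user, String inode, String parent) {
        return "Permission denied by sticky bit: user=" + user
                + ", path=" + inode + ", parent=" + parent;
    }
}
```

With both the inode and its parent described this way, a debugging admin can see in one line who owns the file, who owns the directory, and which mode bits apply.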

> sticky bit permission denied error not informative enough
> -
>
> Key: HDFS-9019
> URL: https://issues.apache.org/jira/browse/HDFS-9019
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0, 2.7.0, 2.7.1
>Reporter: Thejas M Nair
>Assignee: Xiaoyu Yao
>  Labels: easyfix, newbie
> Attachments: HDFS-9019.000.patch
>
>
> The check for sticky bit permission in FSPermissionChecker.java prints only 
> the child file name and the current owner.
> It does not print the owner of the file and the parent directory. It would 
> help to have that printed as well for ease of debugging permission issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7116) Add a command to get the balancer bandwidth

2015-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730365#comment-14730365
 ] 

Hadoop QA commented on HDFS-7116:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  22m 56s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m  5s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 21s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  4s | Site still builds. |
| {color:green}+1{color} | checkstyle |   2m 30s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 33s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 23s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 163m 32s | Tests passed in hadoop-hdfs. 
|
| {color:green}+1{color} | hdfs tests |   0m 29s | Tests passed in 
hadoop-hdfs-client. |
| | | 221m 34s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753961/HDFS-7116-09.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / c83d13c |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12300/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12300/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12300/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12300/console |


This message was automatically generated.

> Add a command to get the balancer bandwidth
> ---
>
> Key: HDFS-7116
> URL: https://issues.apache.org/jira/browse/HDFS-7116
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer & mover
>Reporter: Akira AJISAKA
>Assignee: Rakesh R
> Attachments: HDFS-7116-00.patch, HDFS-7116-01.patch, 
> HDFS-7116-02.patch, HDFS-7116-03.patch, HDFS-7116-04.patch, 
> HDFS-7116-05.patch, HDFS-7116-06.patch, HDFS-7116-07.patch, 
> HDFS-7116-08.patch, HDFS-7116-09.patch
>
>
> Currently, reading logs is the only way to check how the balancer bandwidth is 
> set. It would be useful for administrators to be able to query this value 
> directly. This jira is to discuss and implement a way to access the balancer 
> bandwidth value of the datanode.





[jira] [Commented] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction

2015-09-03 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730364#comment-14730364
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8384:
---

+1 patch looks good.

Do we need a different patch for pre-HDFS-6757?

> Allow NN to startup if there are files having a lease but are not under 
> construction
> 
>
> Key: HDFS-8384
> URL: https://issues.apache.org/jira/browse/HDFS-8384
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-8384.000.patch
>
>
> When there are files having a lease but are not under construction, NN will 
> fail to start up with
> {code}
> 15/05/12 00:36:31 ERROR namenode.FSImage: Unable to save image for 
> /hadoop/hdfs/namenode
> java.lang.IllegalStateException
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager.getINodesUnderConstruction(LeaseManager.java:412)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesUnderConstruction(FSNamesystem.java:7124)
> ...
> {code}
> The actual problem is that the image could be corrupted by bugs like 
> HDFS-7587.  We should have an option/conf to allow the NN to start up so that 
> the problematic files could possibly be deleted.
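A minimal sketch of the kind of escape hatch being proposed (the method, data shapes, and lenient flag below are hypothetical, not the actual patch): instead of an unconditional precondition check aborting startup, a lenient mode could skip the inconsistent lease entry and leave the file for later cleanup.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: collect under-construction files from lease entries,
// optionally tolerating leases whose files are not under construction so
// that startup can proceed and the bad files can later be deleted.
class LeaseScanSketch {
    static List<String> filesUnderConstruction(List<String> leasedFiles,
                                               List<Boolean> underConstruction,
                                               boolean lenient) {
        List<String> result = new ArrayList<>();
        for (int i = 0; i < leasedFiles.size(); i++) {
            if (!underConstruction.get(i)) {
                if (!lenient) {
                    // Strict mode: mirrors the IllegalStateException seen above.
                    throw new IllegalStateException(
                        "lease on non-UC file: " + leasedFiles.get(i));
                }
                continue; // Lenient mode: skip, leaving the file for cleanup.
            }
            result.add(leasedFiles.get(i));
        }
        return result;
    }
}
```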





[jira] [Commented] (HDFS-9022) Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client

2015-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730363#comment-14730363
 ] 

Hadoop QA commented on HDFS-9022:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m 31s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 13 new or modified test files. |
| {color:green}+1{color} | javac |   7m 49s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  1s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   3m 20s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  3s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   6m  8s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 10s | Pre-build of native portion |
| {color:green}+1{color} | mapreduce tests | 101m 50s | Tests passed in 
hadoop-mapreduce-client-jobclient. |
| {color:red}-1{color} | hdfs tests | 100m 20s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 42s | Tests passed in 
hadoop-hdfs-client. |
| {color:green}+1{color} | hdfs tests |   1m 49s | Tests passed in 
hadoop-hdfs-nfs. |
| | | 258m 21s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| Timed out tests | 
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754119/HDFS-9022.000.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / c83d13c |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12299/artifact/patchprocess/whitespace.txt
 |
| hadoop-mapreduce-client-jobclient test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12299/artifact/patchprocess/testrun_hadoop-mapreduce-client-jobclient.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12299/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12299/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| hadoop-hdfs-nfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12299/artifact/patchprocess/testrun_hadoop-hdfs-nfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12299/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12299/console |


This message was automatically generated.

> Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client
> --
>
> Key: HDFS-9022
> URL: https://issues.apache.org/jira/browse/HDFS-9022
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9022.000.patch
>
>
> The static helper methods in {{NameNode}} are used in the {{hdfs-client}} 
> module. For example, they are used by the {{DFSClient}} and 
> {{NameNodeProxies}} classes which are being moved to the 
> {{hadoop-hdfs-client}} module. Meanwhile, we should keep the {{NameNode}} 
> class itself in the {{hadoop-hdfs}} module.
> This jira tracks the effort of moving the following static helper methods out 
> of {{NameNode}}, and thus out of the {{hadoop-hdfs}} module. A good place to 
> put these methods is the {{DFSUtilClient}} class:
> {code}
> public static InetSocketAddress getAddress(String address);
> public static InetSocketAddress getAddress(Configuration conf);
> public static InetSocketAddress getAddress(URI filesystemURI);
> public static URI getUri(InetSocketAddress namenode);
> {code}
> Be cautious not to bring new checkstyle warnings.
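One common way to carry out such a move without breaking existing callers is to relocate the logic and leave a thin, deprecated delegating method behind. The sketch below uses simplified stand-in classes; whether the patch actually keeps a delegate in NameNode is an assumption, and only getAddress(String) is shown (8020 is HDFS's customary default NameNode RPC port).

```java
import java.net.InetSocketAddress;

// Simplified stand-ins for DFSUtilClient and NameNode; illustrates moving
// a static helper to the client module while keeping a deprecated
// delegating method behind for source compatibility.
class DFSUtilClientSketch {
    static final int DEFAULT_PORT = 8020; // assumed default NN RPC port

    static InetSocketAddress getAddress(String address) {
        String[] parts = address.split(":", 2);
        int port = parts.length == 2 ? Integer.parseInt(parts[1]) : DEFAULT_PORT;
        return InetSocketAddress.createUnresolved(parts[0], port);
    }
}

class NameNodeSketch {
    /** @deprecated moved to the client module; kept as a thin delegate. */
    @Deprecated
    static InetSocketAddress getAddress(String address) {
        return DFSUtilClientSketch.getAddress(address);
    }
}
```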





[jira] [Commented] (HDFS-8960) DFS client says "no more good datanodes being available to try" on a single drive failure

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730359#comment-14730359
 ] 

Hudson commented on HDFS-8960:
--

FAILURE: Integrated in HBase-TRUNK #6778 (See 
[https://builds.apache.org/job/HBase-TRUNK/6778/])
HBASE-14317 Stuck FSHLog: bad disk (HDFS-8960) and can't roll WAL (stack: rev 
661faf6fe0833726d7ce7ad44a829eba3f8e3e45)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogKey.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SyncFuture.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWALLockup.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogReader.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MultiVersionConcurrencyControl.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSWALEntry.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiVersionConcurrencyControlBasic.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiVersionConcurrencyControl.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALKey.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/DamagedWALException.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFailedAppendAndSync.java
HBASE-14317 Stuck FSHLog: bad disk (HDFS-8960) and can't roll WAL; addendum 
(stack: rev 54717a6314ef6673f7607091e5f77321c202d49f)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java


> DFS client says "no more good datanodes being available to try" on a single 
> drive failure
> -
>
> Key: HDFS-8960
> URL: https://issues.apache.org/jira/browse/HDFS-8960
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.1
> Environment: openjdk version "1.8.0_45-internal"
> OpenJDK Runtime Environment (build 1.8.0_45-internal-b14)
> OpenJDK 64-Bit Server VM (build 25.45-b02, mixed mode)
>Reporter: Benoit Sigoure
> Attachments: blk_1073817519_77099.log, r12s13-datanode.log, 
> r12s16-datanode.log
>
>
> Since we upgraded to 2.7.1 we regularly see single-drive failures cause 
> widespread problems at the HBase level (with the default 3x replication 
> target).
> Here's an example.  This HBase RegionServer is r12s16 (172.24.32.16) and is 
> writing its WAL to [172.24.32.16:10110, 172.24.32.8:10110, 
> 172.24.32.13:10110] as can be seen by the following occasional messages:
> {code}
> 2015-08-23 06:28:40,272 INFO  [sync.3] wal.FSHLog: Slow sync cost: 123 ms, 
> current pipeline: [172.24.32.16:10110, 172.24.32.8:10110, 172.24.32.13:10110]
> {code}
> A bit later, the second node in the pipeline above is going to experience an 
> HDD failure.
> {code}
> 2015-08-23 07:21:58,720 WARN  [DataStreamer for file 
> /hbase/WALs/r12s16.sjc.aristanetworks.com,9104,1439917659071/r12s16.sjc.aristanetworks.com%2C9104%2C1439917659071.default.1440314434998
>  block BP-1466258523-172.24.32.1-1437768622582:blk_1073817519_77099] 
> hdfs.DFSClient: Error Recovery for block 
> BP-1466258523-172.24.32.1-1437768622582:blk_1073817519_77099 in pipeline 
> 172.24.32.16:10110, 172.24.32.13:10110, 172.24.32.8:10110: bad datanode 
> 172.24.32.8:10110
> {code}
> And then HBase will go like "omg I can't write to my WAL, let me commit 
> suicide".
> {code}
> 2015-08-23 07:22:26,060 FATAL 
> [regionserver/r12s16.sjc.aristanetworks.com/172.24.32.16:9104.append-pool1-t1]
>  wal.FSHLog: Could not append. Requesting close of wal
> java.io.IOException: Failed to replace a bad datanode on the existing 
> pipeline due to no more good datanodes being available to try. (Nodes: 
> current=[172.24.32.16:10110, 172.24.32.13:10110], 
> original=[172.24.32.16:10110, 172.24.32.13:10110]). The current failed 
> datanode replacement policy is DEFAULT, and a client may configure this via 
> 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
> configuration.
> at 
> org.apache.hadoop.hd

[jira] [Commented] (HDFS-9021) Use a yellow elephant rather than a blue one in diagram

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730343#comment-14730343
 ] 

Hudson commented on HDFS-9021:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2271 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2271/])
HDFS-9021. Use a yellow elephant rather than a blue one in diagram. (wang: rev 
c83d13c64993c3a7f0f35142cddac19e1074976e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/LazyPersistWrites.png
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Use a yellow elephant rather than a blue one in diagram
> ---
>
> Key: HDFS-9021
> URL: https://issues.apache.org/jira/browse/HDFS-9021
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9021.001.patch
>
>
> We should promote usage of Apache Hadoop by using yellow elephants in our 
> documentation :)





[jira] [Updated] (HDFS-8545) Add an API to fetch the total file length from a specific path, apart from getting by default from root

2015-09-03 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8545:
-
Attachment: HDFS-8545.2.patch

Updated the patch.
Please review.

> Add an API to fetch the total file length from a specific path, apart from 
> getting by default from root
> ---
>
> Key: HDFS-8545
> URL: https://issues.apache.org/jira/browse/HDFS-8545
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: J.Andreina
>Assignee: J.Andreina
>Priority: Minor
> Attachments: HDFS-8545.1.patch, HDFS-8545.2.patch
>
>
> Currently, FileSystem#getUsed() returns the total file size from the root by 
> default.
> It would be good to have an API that returns the total file size from a 
> specified path, the same as specifying the path in "./hdfs dfs -du -s /path".
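The proposed semantics can be sketched against plain java.nio (this is not the Hadoop FileSystem API; it only illustrates the "du -s"-style summation from an arbitrary starting path rather than from the root):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

// Sum the lengths of all regular files under a starting path, the way
// "hdfs dfs -du -s /path" reports aggregate usage for one subtree.
class UsedSpaceSketch {
    static long getUsed(Path start) throws IOException {
        try (Stream<Path> files = Files.walk(start)) {
            return files.filter(Files::isRegularFile)
                        .mapToLong(p -> {
                            try {
                                return Files.size(p);
                            } catch (IOException e) {
                                throw new UncheckedIOException(e);
                            }
                        })
                        .sum();
        }
    }
}
```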





[jira] [Commented] (HDFS-9021) Use a yellow elephant rather than a blue one in diagram

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730289#comment-14730289
 ] 

Hudson commented on HDFS-9021:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #333 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/333/])
HDFS-9021. Use a yellow elephant rather than a blue one in diagram. (wang: rev 
c83d13c64993c3a7f0f35142cddac19e1074976e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/LazyPersistWrites.png
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Use a yellow elephant rather than a blue one in diagram
> ---
>
> Key: HDFS-9021
> URL: https://issues.apache.org/jira/browse/HDFS-9021
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9021.001.patch
>
>
> We should promote usage of Apache Hadoop by using yellow elephants in our 
> documentation :)





[jira] [Assigned] (HDFS-9017) UI shows wrong last contact for dead nodes

2015-09-03 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina reassigned HDFS-9017:


Assignee: J.Andreina

> UI shows wrong last contact for dead nodes
> --
>
> Key: HDFS-9017
> URL: https://issues.apache.org/jira/browse/HDFS-9017
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: J.Andreina
>Priority: Minor
>
> It's showing the last contact as the restart of the NN host (not process, 
> host).  Presumably it's using monotonic time 0.  Ideally last contact for 
> nodes that never connected would be "never" instead of the epoch or boot time.
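A sketch of the suggested rendering (the sentinel value and method below are hypothetical, not the actual NN web UI code): a node that never heartbeated keeps a sentinel last-contact timestamp, and the UI reports "never" for it instead of an age computed from the epoch or, with monotonic clocks, from the host boot time.

```java
// Hypothetical last-contact rendering for the DataNode table:
// 0 stands in for "never connected" and is rendered as "never"
// rather than as a huge age relative to time zero.
class LastContactSketch {
    static final long NEVER_CONNECTED = 0L; // assumed sentinel

    static String lastContact(long lastHeartbeatMillis, long nowMillis) {
        if (lastHeartbeatMillis == NEVER_CONNECTED) {
            return "never";
        }
        return ((nowMillis - lastHeartbeatMillis) / 1000L) + "s ago";
    }
}
```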





[jira] [Commented] (HDFS-9017) UI shows wrong last contact for dead nodes

2015-09-03 Thread J.Andreina (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730282#comment-14730282
 ] 

J.Andreina commented on HDFS-9017:
--

I would like to work on this issue. [~daryn] please assign back if you have 
already started to work on this.

> UI shows wrong last contact for dead nodes
> --
>
> Key: HDFS-9017
> URL: https://issues.apache.org/jira/browse/HDFS-9017
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Priority: Minor
>
> It's showing the last contact as the restart of the NN host (not process, 
> host).  Presumably it's using monotonic time 0.  Ideally last contact for 
> nodes that never connected would be "never" instead of the epoch or boot time.





[jira] [Commented] (HDFS-9021) Use a yellow elephant rather than a blue one in diagram

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730262#comment-14730262
 ] 

Hudson commented on HDFS-9021:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2292 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2292/])
HDFS-9021. Use a yellow elephant rather than a blue one in diagram. (wang: rev 
c83d13c64993c3a7f0f35142cddac19e1074976e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/LazyPersistWrites.png
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Use a yellow elephant rather than a blue one in diagram
> ---
>
> Key: HDFS-9021
> URL: https://issues.apache.org/jira/browse/HDFS-9021
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9021.001.patch
>
>
> We should promote usage of Apache Hadoop by using yellow elephants in our 
> documentation :)





[jira] [Commented] (HDFS-8967) Create a BlockManagerLock class to represent the lock used in the BlockManager

2015-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730246#comment-14730246
 ] 

Hadoop QA commented on HDFS-8967:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 42s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 51s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  4s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 23s | The applied patch generated  2 
new checkstyle issues (total was 514, now 514). |
| {color:red}-1{color} | whitespace |   0m  2s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 28s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 164m 25s | Tests failed in hadoop-hdfs. |
| | | 209m 40s | |
\\
\\
|| Reason || Tests ||
| Timed out tests | org.apache.hadoop.hdfs.server.namenode.TestDeleteRace |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753563/HDFS-8967.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / c83d13c |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12295/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12295/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12295/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12295/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12295/console |


This message was automatically generated.

> Create a BlockManagerLock class to represent the lock used in the BlockManager
> --
>
> Key: HDFS-8967
> URL: https://issues.apache.org/jira/browse/HDFS-8967
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8967.000.patch, HDFS-8967.001.patch, 
> HDFS-8967.002.patch
>
>
> This jira proposes to create a {{BlockManagerLock}} class to represent the 
> lock used in {{BlockManager}}.
> Currently it directly points to the {{FSNamesystem}} lock thus there are no 
> functionality changes.
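The pattern described, a named lock type that initially just forwards to the existing global lock so call sites can migrate with no behavior change, can be sketched as follows (simplified stand-in, not the actual patch; a plain ReentrantLock stands in for the FSNamesystem lock):

```java
import java.util.concurrent.locks.Lock;

// Named lock wrapper delegating to an existing lock. Today the delegate
// is the global lock; later it can be swapped for a dedicated
// BlockManager lock without touching any call sites.
class BlockManagerLockSketch {
    private final Lock delegate;

    BlockManagerLockSketch(Lock delegate) {
        this.delegate = delegate;
    }

    void lock()   { delegate.lock(); }
    void unlock() { delegate.unlock(); }
}
```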





[jira] [Commented] (HDFS-9019) sticky bit permission denied error not informative enough

2015-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730240#comment-14730240
 ] 

Hadoop QA commented on HDFS-9019:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 44s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 49s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m  2s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 38s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 51s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 35s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 168m 58s | Tests failed in hadoop-hdfs. |
| | | 219m 18s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestFSNamesystem |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754093/HDFS-9019.000.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / ed78b14 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12294/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12294/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12294/console |


This message was automatically generated.

> sticky bit permission denied error not informative enough
> -
>
> Key: HDFS-9019
> URL: https://issues.apache.org/jira/browse/HDFS-9019
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0, 2.7.0, 2.7.1
>Reporter: Thejas M Nair
>Assignee: Xiaoyu Yao
>  Labels: easyfix, newbie
> Attachments: HDFS-9019.000.patch
>
>
> The check for sticky bit permission in FSPermissionChecker.java prints only 
> the child file name and the current owner.
> It does not print the owner of the file and the parent directory. It would 
> help to have that printed as well for ease of debugging permission issues.





[jira] [Commented] (HDFS-8967) Create a BlockManagerLock class to represent the lock used in the BlockManager

2015-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730214#comment-14730214
 ] 

Hadoop QA commented on HDFS-8967:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 39s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 45s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  1s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 22s | The applied patch generated  2 
new checkstyle issues (total was 514, now 514). |
| {color:red}-1{color} | whitespace |   0m  2s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 162m  4s | Tests failed in hadoop-hdfs. |
| | | 207m  5s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.namenode.TestFSNamesystem |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753563/HDFS-8967.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / ed78b14 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12292/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12292/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12292/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12292/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12292/console |


This message was automatically generated.

> Create a BlockManagerLock class to represent the lock used in the BlockManager
> --
>
> Key: HDFS-8967
> URL: https://issues.apache.org/jira/browse/HDFS-8967
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8967.000.patch, HDFS-8967.001.patch, 
> HDFS-8967.002.patch
>
>
> This jira proposes to create a {{BlockManagerLock}} class to represent the 
> lock used in {{BlockManager}}.
> Currently it directly points to the {{FSNamesystem}} lock, so there are no 
> functionality changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
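As the HDFS-8967 description above notes, the proposed {{BlockManagerLock}} initially just delegates to the {{FSNamesystem}} lock. A minimal sketch of such a delegating wrapper (the constructor and method names here are illustrative assumptions, not the attached patch):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch only: a BlockManagerLock that, for now, simply
// delegates to the namesystem's read/write lock, so behavior is unchanged.
class BlockManagerLock {
    private final ReentrantReadWriteLock namesystemLock;

    BlockManagerLock(ReentrantReadWriteLock namesystemLock) {
        this.namesystemLock = namesystemLock;
    }

    Lock readLock()  { return namesystemLock.readLock(); }
    Lock writeLock() { return namesystemLock.writeLock(); }
}
```

Because the wrapper forwards to the shared lock, callers that switch to the new class still serialize against the namesystem; a later change could swap in a dedicated lock without touching call sites.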


[jira] [Updated] (HDFS-9011) Support splitting BlockReport of a storage into multiple RPC

2015-09-03 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9011:

Attachment: HDFS-9011.001.patch

Add unit tests and fix some bugs.

> Support splitting BlockReport of a storage into multiple RPC
> 
>
> Key: HDFS-9011
> URL: https://issues.apache.org/jira/browse/HDFS-9011
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-9011.000.patch, HDFS-9011.001.patch
>
>
> Currently, if a DataNode has too many blocks (more than 1m by default), it 
> sends multiple RPCs to the NameNode for the block report, each RPC containing 
> the report for a single storage. However, in practice we've seen that 
> sometimes even a single storage can contain a large number of blocks, and the 
> report can exceed the max RPC data length. It may be helpful to support 
> sending multiple RPCs for the block report of a single storage. 



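The splitting proposed in HDFS-9011 above is essentially chunking one storage's block list so each block-report RPC stays under a size limit. A minimal illustrative sketch, not the actual patch (the {{long[]}} representation and the per-RPC limit parameter are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: split one storage's block list into chunks so
// that each block-report RPC stays under a configured size limit.
class BlockReportSplitter {
    static List<long[]> split(long[] blockIds, int maxBlocksPerRpc) {
        List<long[]> chunks = new ArrayList<>();
        for (int off = 0; off < blockIds.length; off += maxBlocksPerRpc) {
            int len = Math.min(maxBlocksPerRpc, blockIds.length - off);
            long[] chunk = new long[len];
            System.arraycopy(blockIds, off, chunk, 0, len);
            chunks.add(chunk);  // one RPC payload per chunk
        }
        return chunks;
    }
}
```

Each chunk would then be sent as its own RPC; the harder part, which the real patch has to handle, is letting the NameNode know when the last part of a storage's report has arrived.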


[jira] [Commented] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730192#comment-14730192
 ] 

Hudson commented on HDFS-8939:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #332 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/332/])
HDFS-8939. Test(S)WebHdfsFileContextMainOperations failing on branch-2. 
Contributed by Chris Nauroth. (jghoman: rev 
c2d2c1802a11e3e11a953b23b0eccbf4d107de59)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/WebHdfs.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/SWebHdfs.java


> Test(S)WebHdfsFileContextMainOperations failing on branch-2
> ---
>
> Key: HDFS-8939
> URL: https://issues.apache.org/jira/browse/HDFS-8939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0
>Reporter: Jakob Homan
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8939-branch-2.001.patch, 
> HDFS-8939-branch-2.002.patch, HDFS-8939-branch-2.003.patch, 
> HDFS-8939.003.patch
>
>
> After HDFS-8180, TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
> instantiating the wrapped WebHDFSFileSystems because {{getDefaultPort}} is 
> trying to access a conf that was never provided.  In the constructors of both 
> WebHdfs and SWebHdfs, the underlying (S)WebHdfsFileSystems are instantiated 
> directly and never have a chance to have their 
> {{setConf}} methods called:
> {code}  SWebHdfs(URI theUri, Configuration conf)
>   throws IOException, URISyntaxException {
> super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
>   }{code}
> The test passes on trunk because HDFS-5321 removed the call to the 
> Configuration instance as part of {{getDefaultPort}}.  HDFS-5321 was applied 
> to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
> how branch-2 versus trunk handles default values (branch-2 pulls them from 
> configs if specified, trunk just returns the hard-coded value from the 
> constants file).
> I've fixed this to behave like trunk and return just the hard-coded value, 
> which causes the test to pass.
>   There is no WebHdfsFileSystem that takes a Config, which would be another 
> way to fix this.





[jira] [Commented] (HDFS-9002) Move o.a.h.hdfs.net/*Peer classes to hdfs-client

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730191#comment-14730191
 ] 

Hudson commented on HDFS-9002:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #332 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/332/])
HDFS-9002. Move o.a.h.hdfs.net/*Peer classes to hdfs-client. Contributed by 
Mingliang Liu. (wheat9: rev ed78b14ebc9a21bb57ccd088e8b49bfa457a396f)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/IOStreamPair.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/NioInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/BasicInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/TestSaslDataTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/Peer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/TrustedChannelResolver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/IOStreamPair.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/NioInetPeer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/TrustedChannelResolver.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/BasicInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestSecureNNWithQJM.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipan

[jira] [Commented] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730190#comment-14730190
 ] 

Hudson commented on HDFS-8939:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2270 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2270/])
HDFS-8939. Test(S)WebHdfsFileContextMainOperations failing on branch-2. 
Contributed by Chris Nauroth. (jghoman: rev 
c2d2c1802a11e3e11a953b23b0eccbf4d107de59)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/SWebHdfs.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/WebHdfs.java


> Test(S)WebHdfsFileContextMainOperations failing on branch-2
> ---
>
> Key: HDFS-8939
> URL: https://issues.apache.org/jira/browse/HDFS-8939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0
>Reporter: Jakob Homan
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8939-branch-2.001.patch, 
> HDFS-8939-branch-2.002.patch, HDFS-8939-branch-2.003.patch, 
> HDFS-8939.003.patch
>
>
> After HDFS-8180, TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
> instantiating the wrapped WebHDFSFileSystems because {{getDefaultPort}} is 
> trying to access a conf that was never provided.  In the constructors of both 
> WebHdfs and SWebHdfs, the underlying (S)WebHdfsFileSystems are instantiated 
> directly and never have a chance to have their 
> {{setConf}} methods called:
> {code}  SWebHdfs(URI theUri, Configuration conf)
>   throws IOException, URISyntaxException {
> super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
>   }{code}
> The test passes on trunk because HDFS-5321 removed the call to the 
> Configuration instance as part of {{getDefaultPort}}.  HDFS-5321 was applied 
> to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
> how branch-2 versus trunk handles default values (branch-2 pulls them from 
> configs if specified, trunk just returns the hard-coded value from the 
> constants file).
> I've fixed this to behave like trunk and return just the hard-coded value, 
> which causes the test to pass.
>   There is no WebHdfsFileSystem that takes a Config, which would be another 
> way to fix this.





[jira] [Commented] (HDFS-9002) Move o.a.h.hdfs.net/*Peer classes to hdfs-client

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730189#comment-14730189
 ] 

Hudson commented on HDFS-9002:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2270 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2270/])
HDFS-9002. Move o.a.h.hdfs.net/*Peer classes to hdfs-client. Contributed by 
Mingliang Liu. (wheat9: rev ed78b14ebc9a21bb57ccd088e8b49bfa457a396f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/TrustedChannelResolver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/TestSaslDataTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/TrustedChannelResolver.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferTestCase.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/Peer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/IOStreamPair.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/IOStreamPair.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestSecureNNWithQJM.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/NioInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/NioInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/BasicInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/ja

[jira] [Commented] (HDFS-9021) Use a yellow elephant rather than a blue one in diagram

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730179#comment-14730179
 ] 

Hudson commented on HDFS-9021:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1081 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1081/])
HDFS-9021. Use a yellow elephant rather than a blue one in diagram. (wang: rev 
c83d13c64993c3a7f0f35142cddac19e1074976e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/LazyPersistWrites.png
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Use a yellow elephant rather than a blue one in diagram
> ---
>
> Key: HDFS-9021
> URL: https://issues.apache.org/jira/browse/HDFS-9021
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9021.001.patch
>
>
> We should promote usage of Apache Hadoop by using yellow elephants in our 
> documentation :)





[jira] [Commented] (HDFS-9021) Use a yellow elephant rather than a blue one in diagram

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730178#comment-14730178
 ] 

Hudson commented on HDFS-9021:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #344 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/344/])
HDFS-9021. Use a yellow elephant rather than a blue one in diagram. (wang: rev 
c83d13c64993c3a7f0f35142cddac19e1074976e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/LazyPersistWrites.png
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Use a yellow elephant rather than a blue one in diagram
> ---
>
> Key: HDFS-9021
> URL: https://issues.apache.org/jira/browse/HDFS-9021
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9021.001.patch
>
>
> We should promote usage of Apache Hadoop by using yellow elephants in our 
> documentation :)





[jira] [Commented] (HDFS-8984) Move replication queues related methods in FSNamesystem to BlockManager

2015-09-03 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730148#comment-14730148
 ] 

Konstantin Shvachko commented on HDFS-8984:
---

+1 modulo Jing's nit and my comment #1 about the NameNode change.

> Move replication queues related methods in FSNamesystem to BlockManager
> ---
>
> Key: HDFS-8984
> URL: https://issues.apache.org/jira/browse/HDFS-8984
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8984.000.patch, HDFS-8984.001.patch, 
> HDFS-8984.002.patch, HDFS-8984.003.patch, HDFS-8984.004.patch
>
>
> Currently {{FSNamesystem}} controls whether the replication queues should be 
> populated based on whether the NN is in safe mode or whether it is an active 
> NN.
> Replication is a concept of the block management layer, so it is more natural 
> to place this functionality in the {{BlockManager}} class.
> This jira proposes to move these methods to the {{BlockManager}}.





[jira] [Updated] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-03 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9010:

Attachment: HDFS-9010.003.patch

> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale, as we now use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.





[jira] [Commented] (HDFS-9002) Move o.a.h.hdfs.net/*Peer classes to hdfs-client

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730135#comment-14730135
 ] 

Hudson commented on HDFS-9002:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #343 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/343/])
HDFS-9002. Move o.a.h.hdfs.net/*Peer classes to hdfs-client. Contributed by 
Mingliang Liu. (wheat9: rev ed78b14ebc9a21bb57ccd088e8b49bfa457a396f)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/IOStreamPair.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/BasicInetPeer.java
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferTestCase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/BasicInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/NioInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/Peer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/NioInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/TrustedChannelResolver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestSecureNNWithQJM.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/TrustedChannelResolver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/TestSaslDataTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/IOStreamPair.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/te

[jira] [Commented] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730136#comment-14730136
 ] 

Hudson commented on HDFS-8939:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #343 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/343/])
HDFS-8939. Test(S)WebHdfsFileContextMainOperations failing on branch-2. 
Contributed by Chris Nauroth. (jghoman: rev 
c2d2c1802a11e3e11a953b23b0eccbf4d107de59)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/WebHdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/SWebHdfs.java


> Test(S)WebHdfsFileContextMainOperations failing on branch-2
> ---
>
> Key: HDFS-8939
> URL: https://issues.apache.org/jira/browse/HDFS-8939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0
>Reporter: Jakob Homan
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8939-branch-2.001.patch, 
> HDFS-8939-branch-2.002.patch, HDFS-8939-branch-2.003.patch, 
> HDFS-8939.003.patch
>
>
> After HDFS-8180, TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
> instantiating the wrapped WebHDFSFileSystems because {{getDefaultPort}} is 
> trying to access a conf that was never provided.  In the constructors of both 
> WebHdfs and SWebHdfs, the underlying (S)WebHdfsFileSystems are instantiated 
> directly and never have a chance to have their 
> {{setConf}} methods called:
> {code}  SWebHdfs(URI theUri, Configuration conf)
>   throws IOException, URISyntaxException {
> super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
>   }{code}
> The test passes on trunk because HDFS-5321 removed the call to the 
> Configuration instance as part of {{getDefaultPort}}.  HDFS-5321 was applied 
> to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
> how branch-2 versus trunk handles default values (branch-2 pulls them from 
> configs if specified, trunk just returns the hard-coded value from the 
> constants file).
> I've fixed this to behave like trunk and return just the hard-coded value, 
> which causes the test to pass.
>   There is no WebHdfsFileSystem that takes a Config, which would be another 
> way to fix this.





[jira] [Updated] (HDFS-9022) Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client

2015-09-03 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9022:

Status: Patch Available  (was: Open)

> Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client
> --
>
> Key: HDFS-9022
> URL: https://issues.apache.org/jira/browse/HDFS-9022
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9022.000.patch
>
>
> The static helper methods in {{NameNode}} are used in the {{hdfs-client}} 
> module. For example, they are used by the {{DFSClient}} and 
> {{NameNodeProxies}} classes, which are being moved to the 
> {{hadoop-hdfs-client}} module. Meanwhile, we should 
> keep the {{NameNode}} class itself in the {{hadoop-hdfs}} module.
> This jira tracks the effort of moving the following static helper methods out 
> of {{NameNode}}, and thus out of the {{hadoop-hdfs}} module. A good place to 
> put these methods is the {{DFSUtilClient}} class:
> {code}
> public static InetSocketAddress getAddress(String address);
> public static InetSocketAddress getAddress(Configuration conf);
> public static InetSocketAddress getAddress(URI filesystemURI);
> public static URI getUri(InetSocketAddress namenode);
> {code}
> Be cautious not to bring new checkstyle warnings.



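For illustration, the helpers listed in the HDFS-9022 description above could look roughly like this after the move. The class name follows the jira's suggestion of {{DFSUtilClient}}, but the method bodies and the 8020 default NameNode RPC port used here are assumptions, not the attached patch:

```java
import java.net.InetSocketAddress;
import java.net.URI;

// Illustrative sketch only: static helpers moved out of NameNode into a
// client-side utility class; the default-port constant is an assumption.
class DFSUtilClientSketch {
    static final int DEFAULT_NN_RPC_PORT = 8020;

    // Parse "host" or "host:port" into a socket address.
    static InetSocketAddress getAddress(String address) {
        int colon = address.indexOf(':');
        if (colon < 0) {
            return InetSocketAddress.createUnresolved(address, DEFAULT_NN_RPC_PORT);
        }
        return InetSocketAddress.createUnresolved(
            address.substring(0, colon),
            Integer.parseInt(address.substring(colon + 1)));
    }

    // Build an hdfs:// URI, omitting the port when it is the default.
    static URI getUri(InetSocketAddress namenode) {
        int port = namenode.getPort();
        String portSuffix = (port == DEFAULT_NN_RPC_PORT) ? "" : ":" + port;
        return URI.create("hdfs://" + namenode.getHostName() + portSuffix);
    }
}
```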


[jira] [Updated] (HDFS-9022) Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client

2015-09-03 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9022:

Attachment: HDFS-9022.000.patch

> Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client
> --
>
> Key: HDFS-9022
> URL: https://issues.apache.org/jira/browse/HDFS-9022
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9022.000.patch
>
>
> The static helper methods in {{NameNode}} are used in the {{hdfs-client}} 
> module. For example, they are used by the {{DFSClient}} and 
> {{NameNodeProxies}} classes, which are being moved to the 
> {{hadoop-hdfs-client}} module. Meanwhile, we should keep the {{NameNode}} 
> class itself in the {{hadoop-hdfs}} module.
> This jira tracks the effort of moving the following static helper methods 
> out of {{NameNode}}, and thus out of the {{hadoop-hdfs}} module. A good 
> place to put these methods is the {{DFSUtilClient}} class:
> {code}
> public static InetSocketAddress getAddress(String address);
> public static InetSocketAddress getAddress(Configuration conf);
> public static InetSocketAddress getAddress(URI filesystemURI);
> public static URI getUri(InetSocketAddress namenode);
> {code}
> Be cautious not to bring new checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8054) Move DFSInputStream and related classes to hadoop-hdfs-client

2015-09-03 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730130#comment-14730130
 ] 

Mingliang Liu commented on HDFS-8054:
-

We can combine this with the effort of moving {{DFSOutputStream}} in 
[HDFS-8053|https://issues.apache.org/jira/browse/HDFS-8053].

> Move DFSInputStream and related classes to hadoop-hdfs-client
> -
>
> Key: HDFS-8054
> URL: https://issues.apache.org/jira/browse/HDFS-8054
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8054) Move DFSInputStream and related classes to hadoop-hdfs-client

2015-09-03 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu resolved HDFS-8054.
-
Resolution: Duplicate

> Move DFSInputStream and related classes to hadoop-hdfs-client
> -
>
> Key: HDFS-8054
> URL: https://issues.apache.org/jira/browse/HDFS-8054
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8053) Move DFSIn/OutputStream and related classes to hadoop-hdfs-client

2015-09-03 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-8053:

Description: 
This jira tracks the effort of moving the {{DFSInputStream}} and 
{{DFSOutputStream}} classes from {{hadoop-hdfs}} to the {{hadoop-hdfs-client}} 
module.

Guidelines:
* As the {{DFSClient}} is heavily coupled to these two classes, we should move 
it along with them.
* Related classes should be addressed in separate jiras if they're independent 
and complex enough.
* The checkstyle warnings can be addressed in [HDFS-8979 | 
https://issues.apache.org/jira/browse/HDFS-8979]
* Removing the _slf4j_ logger guards when calling {{LOG.debug()}} and 
{{LOG.trace()}} can be addressed in [HDFS-8971 | 
https://issues.apache.org/jira/browse/HDFS-8971].
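The slf4j guard removal mentioned for HDFS-8971 refers to replacing explicit 
{{isDebugEnabled()}} checks with slf4j-style parameterized calls, which defer 
message formatting until the level check inside the logger passes. A minimal, 
self-contained sketch of that pattern (the {{Logger}} here is a simplified 
stand-in for the slf4j API, not the real class):

```java
// Toy stand-in for an slf4j-style logger, to show why the explicit
// isDebugEnabled() guard becomes redundant with parameterized logging.
public class LoggerGuardDemo {
    static class Logger {
        boolean debugEnabled = false;
        int formats = 0; // counts how many messages were actually formatted

        boolean isDebugEnabled() { return debugEnabled; }

        // Parameterized call: formatting is deferred until the level check
        // passes, so no guard is needed at the call site.
        void debug(String format, Object arg) {
            if (!isDebugEnabled()) return;
            formats++;
            System.out.println(format.replace("{}", String.valueOf(arg)));
        }
    }

    public static void main(String[] args) {
        Logger log = new Logger();
        // Old style (pre-HDFS-8971): guard avoids building the message eagerly
        if (log.isDebugEnabled()) {
            log.debug("block {} acked", 42);
        }
        // New style: the guard is dropped; debug() itself checks the level
        log.debug("block {} acked", 42);
        System.out.println("formatted=" + log.formats); // prints formatted=0
    }
}
```

With string concatenation ({{LOG.debug("block " + id)}}) the guard saved real 
work; with the {} placeholder form the argument is only stringified when debug 
is enabled, so dropping the guard is behavior-preserving.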

> Move DFSIn/OutputStream and related classes to hadoop-hdfs-client
> -
>
> Key: HDFS-8053
> URL: https://issues.apache.org/jira/browse/HDFS-8053
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
>
> This jira tracks the effort of moving the {{DFSInputStream}} and 
> {{DFSOutputStream}} classes from {{hadoop-hdfs}} to the 
> {{hadoop-hdfs-client}} module.
> Guidelines:
> * As the {{DFSClient}} is heavily coupled to these two classes, we should 
> move it along with them.
> * Related classes should be addressed in separate jiras if they're 
> independent and complex enough.
> * The checkstyle warnings can be addressed in [HDFS-8979 | 
> https://issues.apache.org/jira/browse/HDFS-8979]
> * Removing the _slf4j_ logger guards when calling {{LOG.debug()}} and 
> {{LOG.trace()}} can be addressed in [HDFS-8971 | 
> https://issues.apache.org/jira/browse/HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9002) Move o.a.h.hdfs.net/*Peer classes to hdfs-client

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730124#comment-14730124
 ] 

Hudson commented on HDFS-9002:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2291 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2291/])
HDFS-9002. Move o.a.h.hdfs.net/*Peer classes to hdfs-client. Contributed by 
Mingliang Liu. (wheat9: rev ed78b14ebc9a21bb57ccd088e8b49bfa457a396f)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/IOStreamPair.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestSecureNNWithQJM.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/IOStreamPair.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/TrustedChannelResolver.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/NioInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/BasicInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/BasicInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferTestCase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/TrustedChannelResolver.java
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/TestSaslDataTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/NioInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/a

[jira] [Commented] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730125#comment-14730125
 ] 

Hudson commented on HDFS-8939:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2291 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2291/])
HDFS-8939. Test(S)WebHdfsFileContextMainOperations failing on branch-2. 
Contributed by Chris Nauroth. (jghoman: rev 
c2d2c1802a11e3e11a953b23b0eccbf4d107de59)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/SWebHdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/WebHdfs.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Test(S)WebHdfsFileContextMainOperations failing on branch-2
> ---
>
> Key: HDFS-8939
> URL: https://issues.apache.org/jira/browse/HDFS-8939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0
>Reporter: Jakob Homan
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8939-branch-2.001.patch, 
> HDFS-8939-branch-2.002.patch, HDFS-8939-branch-2.003.patch, 
> HDFS-8939.003.patch
>
>
> After HDFS-8180, TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
> instantiating the wrapped WebHdfsFileSystems because {{getDefaultPort}} is 
> trying to access a conf that was never provided. In the constructors of both 
> WebHdfs and SWebHdfs, the underlying (S)WebHdfsFileSystems are instantiated 
> and never have a chance to have their {{setConf}} methods called:
> {code}  SWebHdfs(URI theUri, Configuration conf)
>   throws IOException, URISyntaxException {
> super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
>   }{code}
> The test passes on trunk because HDFS-5321 removed the call to the 
> Configuration instance as part of {{getDefaultPort}}. HDFS-5321 was applied 
> to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
> how branch-2 versus trunk handles default values (branch-2 pulls them from 
> configs if specified, trunk just returns the hard-coded value from the 
> constants file).
> I've fixed this to behave like trunk and return just the hard-coded value, 
> which causes the test to pass.
> There is no WebHdfsFileSystem constructor that takes a Configuration, which 
> would be another way to fix this.
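The failure mode described above can be reduced to a small self-contained 
sketch (all names here are simplified, hypothetical stand-ins for the real 
Hadoop classes): a conf-dependent method is invoked on an object whose 
{{setConf}} was never called, so the branch-2 code path dereferences a null 
conf, while the trunk code path returns a hard-coded constant and never touches 
the conf.

```java
// Minimal reproduction of the reported NPE: the wrapping constructor builds
// the inner filesystem but never calls setConf() on it.
public class SetConfNpeDemo {
    static class Configuration {
        int getInt(String key, int dflt) { return dflt; }
    }

    static class FileSystemStub {
        private Configuration conf; // stays null unless setConf() is called

        void setConf(Configuration conf) { this.conf = conf; }

        // branch-2 behavior: consults the conf -> NPE when conf is null
        int getDefaultPort() {
            return conf.getInt("dfs.http.port", 50070);
        }

        // trunk behavior after HDFS-5321: hard-coded constant, no conf access
        int getDefaultPortTrunk() {
            return 50070;
        }
    }

    public static void main(String[] args) {
        FileSystemStub fs = new FileSystemStub(); // setConf never called
        System.out.println(fs.getDefaultPortTrunk()); // fine: 50070
        try {
            fs.getDefaultPort(); // dereferences the null conf
        } catch (NullPointerException e) {
            System.out.println("NPE as on branch-2");
        }
    }
}
```

This is why the chosen fix (return the hard-coded value, as trunk does) makes 
the tests pass without requiring a constructor that accepts a Configuration.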



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8053) Move DFSOutputStream and related classes to hadoop-hdfs-client

2015-09-03 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HDFS-8053:
---

Assignee: Mingliang Liu  (was: Haohui Mai)

> Move DFSOutputStream and related classes to hadoop-hdfs-client
> --
>
> Key: HDFS-8053
> URL: https://issues.apache.org/jira/browse/HDFS-8053
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8053) Move DFSIn/OutputStream and related classes to hadoop-hdfs-client

2015-09-03 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-8053:

Summary: Move DFSIn/OutputStream and related classes to hadoop-hdfs-client  
(was: Move DFSOutputStream and related classes to hadoop-hdfs-client)

> Move DFSIn/OutputStream and related classes to hadoop-hdfs-client
> -
>
> Key: HDFS-8053
> URL: https://issues.apache.org/jira/browse/HDFS-8053
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9022) Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client

2015-09-03 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9022:

Component/s: (was: build)
 namenode
 hdfs-client

> Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client
> --
>
> Key: HDFS-9022
> URL: https://issues.apache.org/jira/browse/HDFS-9022
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> The static helper methods in {{NameNode}} are used in the {{hdfs-client}} 
> module. For example, they are used by the {{DFSClient}} and 
> {{NameNodeProxies}} classes, which are being moved to the 
> {{hadoop-hdfs-client}} module. Meanwhile, we should keep the {{NameNode}} 
> class itself in the {{hadoop-hdfs}} module.
> This jira tracks the effort of moving the following static helper methods 
> out of {{NameNode}}, and thus out of the {{hadoop-hdfs}} module. A good 
> place to put these methods is the {{DFSUtilClient}} class:
> {code}
> public static InetSocketAddress getAddress(String address);
> public static InetSocketAddress getAddress(Configuration conf);
> public static InetSocketAddress getAddress(URI filesystemURI);
> public static URI getUri(InetSocketAddress namenode);
> {code}
> Be cautious not to bring new checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8740) Move DistributedFileSystem to hadoop-hdfs-client

2015-09-03 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730118#comment-14730118
 ] 

Mingliang Liu commented on HDFS-8740:
-

Are we still working on this? I see that {{DistributedFileSystem.java}} is 
still in the {{hadoop-hdfs}} module.

> Move DistributedFileSystem to hadoop-hdfs-client
> 
>
> Key: HDFS-8740
> URL: https://issues.apache.org/jira/browse/HDFS-8740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Yi Liu
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8740) Move DistributedFileSystem to hadoop-hdfs-client

2015-09-03 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HDFS-8740:
---

Assignee: Mingliang Liu

> Move DistributedFileSystem to hadoop-hdfs-client
> 
>
> Key: HDFS-8740
> URL: https://issues.apache.org/jira/browse/HDFS-8740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Yi Liu
>Assignee: Mingliang Liu
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9022) Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client

2015-09-03 Thread Mingliang Liu (JIRA)
Mingliang Liu created HDFS-9022:
---

 Summary: Move NameNode.getAddress() and NameNode.getUri() to 
hadoop-hdfs-client
 Key: HDFS-9022
 URL: https://issues.apache.org/jira/browse/HDFS-9022
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Mingliang Liu
Assignee: Mingliang Liu


The static helper methods in {{NameNode}} are used in the {{hdfs-client}} 
module. For example, they are used by the {{DFSClient}} and 
{{NameNodeProxies}} classes, which are being moved to the 
{{hadoop-hdfs-client}} module. Meanwhile, we should keep the {{NameNode}} 
class itself in the {{hadoop-hdfs}} module.

This jira tracks the effort of moving the following static helper methods out 
of {{NameNode}}, and thus out of the {{hadoop-hdfs}} module. A good place to 
put these methods is the {{DFSUtilClient}} class:
{code}
public static InetSocketAddress getAddress(String address);
public static InetSocketAddress getAddress(Configuration conf);
public static InetSocketAddress getAddress(URI filesystemURI);
public static URI getUri(InetSocketAddress namenode);
{code}

Be cautious not to bring new checkstyle warnings.
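A common way to do this kind of move without breaking server-side callers is 
to relocate the logic into {{DFSUtilClient}} and leave a deprecated delegate 
behind in {{NameNode}}. A hedged sketch under that assumption (method names 
match the list above, but the bodies and the 8020 default are simplified 
placeholders, not the real implementation):

```java
import java.net.InetSocketAddress;
import java.net.URI;

// Sketch of moving a static helper to a client-side utility class while
// keeping a deprecated delegate so existing hadoop-hdfs callers still compile.
public class HelperMoveDemo {
    static final int DEFAULT_PORT = 8020; // assumed NameNode RPC default

    static class DFSUtilClient {
        // Simplified host[:port] parsing; the real helper handles more cases.
        static InetSocketAddress getAddress(String address) {
            int i = address.indexOf(':');
            String host = i < 0 ? address : address.substring(0, i);
            int port = i < 0 ? DEFAULT_PORT
                             : Integer.parseInt(address.substring(i + 1));
            return InetSocketAddress.createUnresolved(host, port);
        }

        static URI getUri(InetSocketAddress namenode) {
            return URI.create("hdfs://" + namenode.getHostName()
                    + ":" + namenode.getPort());
        }
    }

    static class NameNode {
        @Deprecated // delegate kept in hadoop-hdfs for compatibility
        static InetSocketAddress getAddress(String address) {
            return DFSUtilClient.getAddress(address);
        }
    }

    public static void main(String[] args) {
        InetSocketAddress a = NameNode.getAddress("nn1.example.com:8020");
        System.out.println(DFSUtilClient.getUri(a)); // hdfs://nn1.example.com:8020
    }
}
```

The delegate keeps the dependency direction one-way ({{hadoop-hdfs}} depends on 
{{hadoop-hdfs-client}}, never the reverse), which is the point of the split.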



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9021) Use a yellow elephant rather than a blue one in diagram

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730106#comment-14730106
 ] 

Hudson commented on HDFS-9021:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #350 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/350/])
HDFS-9021. Use a yellow elephant rather than a blue one in diagram. (wang: rev 
c83d13c64993c3a7f0f35142cddac19e1074976e)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/LazyPersistWrites.png


> Use a yellow elephant rather than a blue one in diagram
> ---
>
> Key: HDFS-9021
> URL: https://issues.apache.org/jira/browse/HDFS-9021
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9021.001.patch
>
>
> We should promote usage of Apache Hadoop by using yellow elephants in our 
> documentation :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9012) Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module

2015-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730089#comment-14730089
 ] 

Hadoop QA commented on HDFS-9012:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 27s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 50s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 58s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 28s | The applied patch generated  
20 new checkstyle issues (total was 0, now 20). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 38s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   0m 20s | Post-patch findbugs 
hadoop-hdfs-project/hadoop-hdfs compilation is broken. |
| {color:red}-1{color} | findbugs |   0m 40s | Post-patch findbugs 
hadoop-hdfs-project/hadoop-hdfs-client compilation is broken. |
| {color:green}+1{color} | findbugs |   0m 40s | The patch does not introduce 
any new Findbugs (version ) warnings. |
| {color:red}-1{color} | native |   0m 20s | Failed to build the native portion 
 of hadoop-common prior to running the unit tests in   
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-client |
| | |  43m 25s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754106/HDFS-9012.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / c83d13c |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12297/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12297/console |


This message was automatically generated.

> Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client 
> module
> 
>
> Key: HDFS-9012
> URL: https://issues.apache.org/jira/browse/HDFS-9012
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9012.000.patch, HDFS-9012.001.patch, 
> HDFS-9012.002.patch
>
>
> The {{org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck}} class is 
> used in client module classes (e.g. {{DataStreamer$ResponseProcessor}} in 
> {{DFSInputStream}} and {{DFSOutputStream}}). This jira tracks the effort of 
> moving this class to the {{hadoop-hdfs-client}} module.
> We should keep the static attribute {{OOB_TIMEOUT}} and the helper method 
> {{getOOBTimeout}} in the {{hadoop-hdfs}} module, as they're not used (so 
> far) in the {{hadoop-hdfs-client}} module. Meanwhile, we should not create 
> the {{HdfsConfiguration}} statically when we can pass the correct {{conf}} 
> object instead.
> The checkstyle warnings can be addressed in 
> [HDFS-8979|https://issues.apache.org/jira/browse/HDFS-8979].
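The "don't create the configuration statically" point can be illustrated with 
a small sketch (the types, the method signature, and the config key here are 
simplified assumptions, not the real {{PipelineAck}} API): instead of a static 
initializer that eagerly builds an {{HdfsConfiguration}}, the server-side 
helper takes whatever conf object its caller already holds.

```java
// Sketch of replacing a static HdfsConfiguration with a caller-supplied conf.
public class ConfPassingDemo {
    static class Configuration {
        private final java.util.Map<String, String> map = new java.util.HashMap<>();
        void set(String k, String v) { map.put(k, v); }
        long getLong(String k, long dflt) {
            String v = map.get(k);
            return v == null ? dflt : Long.parseLong(v);
        }
    }

    static class PipelineAckServerSide {
        // Before: static { conf = new HdfsConfiguration(); } loaded defaults
        // eagerly at class-load time, dragging server config into the client.
        // After: the timeout is derived from the conf the caller already has.
        static long getOOBTimeout(Configuration conf) {
            // key name and 1500ms default are illustrative placeholders
            return conf.getLong("dfs.datanode.oob.timeout", 1500L);
        }
    }

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("dfs.datanode.oob.timeout", "4000");
        System.out.println(PipelineAckServerSide.getOOBTimeout(conf)); // prints 4000
    }
}
```

Passing the conf in also removes a hidden ordering dependency: the value now 
reflects the caller's configuration rather than whatever defaults happened to 
be on the classpath when the class was first loaded.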



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9002) Move o.a.h.hdfs.net/*Peer classes to hdfs-client

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730087#comment-14730087
 ] 

Hudson commented on HDFS-9002:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #349 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/349/])
HDFS-9002. Move o.a.h.hdfs.net/*Peer classes to hdfs-client. Contributed by 
Mingliang Liu. (wheat9: rev ed78b14ebc9a21bb57ccd088e8b49bfa457a396f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestSecureNNWithQJM.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/TestSaslDataTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/IOStreamPair.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/Peer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/BasicInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferTestCase.java
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/NioInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/NioInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/IOStreamPair.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/TrustedChannelResolver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/TrustedChannelResolver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/BasicInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.jav

[jira] [Commented] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730088#comment-14730088
 ] 

Hudson commented on HDFS-8939:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #349 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/349/])
HDFS-8939. Test(S)WebHdfsFileContextMainOperations failing on branch-2. 
Contributed by Chris Nauroth. (jghoman: rev 
c2d2c1802a11e3e11a953b23b0eccbf4d107de59)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/WebHdfs.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/SWebHdfs.java


> Test(S)WebHdfsFileContextMainOperations failing on branch-2
> ---
>
> Key: HDFS-8939
> URL: https://issues.apache.org/jira/browse/HDFS-8939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0
>Reporter: Jakob Homan
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8939-branch-2.001.patch, 
> HDFS-8939-branch-2.002.patch, HDFS-8939-branch-2.003.patch, 
> HDFS-8939.003.patch
>
>
> After HDFS-8180, TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
> instantiating the wrapped WebHdfsFileSystems because {{getDefaultPort}} is 
> trying to access a conf that was never provided. In the constructors of both 
> WebHdfs and SWebHdfs, the underlying (S)WebHdfsFileSystems are instantiated 
> and never have a chance to have their {{setConf}} methods called:
> {code}  SWebHdfs(URI theUri, Configuration conf)
>   throws IOException, URISyntaxException {
> super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
>   }{code}
> The test passes on trunk because HDFS-5321 removed the call to the 
> Configuration instance as part of {{getDefaultPort}}. HDFS-5321 was applied 
> to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
> how branch-2 versus trunk handles default values (branch-2 pulls them from 
> configs if specified, trunk just returns the hard-coded value from the 
> constants file).
> I've fixed this to behave like trunk and return just the hard-coded value, 
> which causes the test to pass.
> There is no WebHdfsFileSystem constructor that takes a Configuration, which 
> would be another way to fix this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9020) Support hflush/hsync in WebHDFS

2015-09-03 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730075#comment-14730075
 ] 

Chris Douglas commented on HDFS-9020:
-

bq. caching dfsclients is nontrivial as the cache uses the instances of UGI, 
but not the principals of it
Are you referring to the FileSystem cache? I haven't traced this, but the idea 
was to map an incoming {{POST}} to the HDFS stream, not just the client 
instance. But I may not catch your meaning... do you have a pointer into the 
code?

bq. the client cannot recover once the stateful DN is down.
[~daryn] pointed out that we need a timeout for idle connections, which should 
let us time out the connection and recover, at least to the extent we can in 
the existing protocol. The existing implementation already relies on TCP 
timeouts... and is intolerant of multiple failures, particularly for the append 
case, right? I'm not dismissing the complexity of handling state, but is it 
adding novel failure modes?

The HTTP/2 work (HDFS-7966) should dominate WebSockets, which is pretty raw. 
The goal is shared: breaking up the stream into a sequence of {{\[PUT\]POST\*}} 
operations is trying to be message-based. Similar to the point Todd and Stack 
[made|https://issues.apache.org/jira/browse/HDFS-7966?focusedCommentId=14588913&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14588913],
 a WebSockets protocol is yet-another thing to maintain...

> Support hflush/hsync in WebHDFS
> ---
>
> Key: HDFS-9020
> URL: https://issues.apache.org/jira/browse/HDFS-9020
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Chris Douglas
> Attachments: HDFS-9020-alt.txt
>
>
> In the current implementation, hflush/hsync have no effect on WebHDFS 
> streams, particularly w.r.t. visibility to other clients. This proposes to 
> extend the protocol and implementation to enable this functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7460) Rewrite httpfs to use new shell framework

2015-09-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-7460:
---
Release Note: 

This deprecates the following environment variables:

| Old | New |
|: |: |
| HTTPFS_LOG | HADOOP_LOG_DIR|
| HTTPFS_CONFG | HADOOP_CONF_DIR |

  was:
This deprecates the following environment variables:

| Old | New |
|: |: |
| HTTPFS_LOG | HADOOP_LOG_DIR|
| HTTPFS_CONFG | HADOOP_CONF_DIR |


> Rewrite httpfs to use new shell framework
> -
>
> Key: HDFS-7460
> URL: https://issues.apache.org/jira/browse/HDFS-7460
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: John Smith
>  Labels: security
> Fix For: 3.0.0
>
> Attachments: HDFS-7460-01.patch, HDFS-7460.patch
>
>
> httpfs shell code was not rewritten during HADOOP-9902. It should be modified 
> to take advantage of the common shell framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8986) Add option to -du to calculate directory space usage excluding snapshots

2015-09-03 Thread Gautam Gopalakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730062#comment-14730062
 ] 

Gautam Gopalakrishnan commented on HDFS-8986:
-

Thanks [~jagadesh.kiran]. This is a real need for many users, and {{-du}} only 
behaves as they expect until snapshots are enabled. Our tools should continue 
to work in a familiar way in the presence of new features, or have a way of 
compensating for them. Other than documenting this change, are you aware of any 
other method to achieve what users want (the current usage of a directory)?
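To make the requested semantics concrete, here is a toy model of the block 
accounting involved (purely illustrative; the class name and arithmetic are 
assumptions, not how the NameNode computes usage): plain {{-du}} counts blocks 
reachable from either the live tree or a snapshot, while the proposed flag 
would count only the live tree.

```java
import java.util.HashSet;
import java.util.Set;

/** Toy block accounting for a snapshotted directory (illustrative only). */
class SnapshotUsage {
  /** Usage as plain du reports it: blocks live now OR held by snapshots. */
  static int duBlocks(Set<String> current, Set<String> snapshotHeld) {
    Set<String> all = new HashSet<>(current);
    all.addAll(snapshotHeld);
    return all.size();
  }

  /** Usage the proposed flag would report: the current tree only. */
  static int duExcludingSnapshots(Set<String> current) {
    return current.size();
  }
}

public class Demo {
  public static void main(String[] args) {
    Set<String> current = new HashSet<>();
    Set<String> snapshots = new HashSet<>();
    current.add("blk_1");        // still in the live tree
    snapshots.add("blk_1");      // also referenced by snap1
    snapshots.add("blk_2");      // deleted from the tree after snap1
    System.out.println(SnapshotUsage.duBlocks(current, snapshots));      // 2
    System.out.println(SnapshotUsage.duExcludingSnapshots(current));     // 1
  }
}
```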

> Add option to -du to calculate directory space usage excluding snapshots
> 
>
> Key: HDFS-8986
> URL: https://issues.apache.org/jira/browse/HDFS-8986
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Gautam Gopalakrishnan
>Assignee: Jagadesh Kiran N
>
> When running {{hadoop fs -du}} on a snapshotted directory (or one of its 
> children), the report includes space consumed by blocks that are only present 
> in the snapshots. This is confusing for end users.
> {noformat}
> $  hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 799.7 M  2.3 G  /tmp/parent
> 799.7 M  2.3 G  /tmp/parent/sub1
> $ hdfs dfs -createSnapshot /tmp/parent snap1
> Created snapshot /tmp/parent/.snapshot/snap1
> $ hadoop fs -rm -skipTrash /tmp/parent/sub1/*
> ...
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 799.7 M  2.3 G  /tmp/parent
> 799.7 M  2.3 G  /tmp/parent/sub1
> $ hdfs dfs -deleteSnapshot /tmp/parent snap1
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 0  0  /tmp/parent
> 0  0  /tmp/parent/sub1
> {noformat}
> It would be helpful if we had a flag, say -X, to exclude any snapshot-related 
> disk usage from the output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9021) Use a yellow elephant rather than a blue one in diagram

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730054#comment-14730054
 ] 

Hudson commented on HDFS-9021:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8401 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8401/])
HDFS-9021. Use a yellow elephant rather than a blue one in diagram. (wang: rev 
c83d13c64993c3a7f0f35142cddac19e1074976e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/LazyPersistWrites.png
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Use a yellow elephant rather than a blue one in diagram
> ---
>
> Key: HDFS-9021
> URL: https://issues.apache.org/jira/browse/HDFS-9021
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9021.001.patch
>
>
> We should promote usage of Apache Hadoop by using yellow elephants in our 
> documentation :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8383) Tolerate multiple failures in DFSStripedOutputStream

2015-09-03 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730042#comment-14730042
 ] 

Zhe Zhang commented on HDFS-8383:
-

Continuing the review on the patch itself:
# Reading the current single-failure handling logic again, I think the 
{{BlockRecoveryTrigger}} should work. We are making the recovery transactional 
by setting the streamer's {{externalError}} before {{updateBlockForPipeline}} 
and resetting it after {{updatePipeline}}. I think it's the right approach at 
this stage.
# Why not increment {{numScheduled}} if it's already positive?
{code}
+if (numScheduled == 0) {
+  numScheduled++;
+}
{code}
# The error-handling logic is quite complex now; we should use this chance to 
add more explanation. Below is my draft. [~walter.k.su], if it looks OK to you, 
could you help add it to the patch?
{code}
  class Coordinator {
/**
 * The next internal block to write to, allocated by the fastest streamer
 * (earliest to finish writing the current internal block) by calling
 * {@link StripedDataStreamer#locateFollowingBlock}.
 */
private final MultipleBlockingQueue followingBlocks;

/**
 * Records the number of bytes actually written to the most recent internal
 * block. Used to calculate the size of the entire block group.
 */
private final MultipleBlockingQueue endBlocks;

/**
 * The following 2 queues are used to handle stream failures.
 *
 * When stream_i fails, the OutputStream notifies all other healthy
 * streamers by setting an external error on each of them, which triggers
 * {@link DataStreamer#processDatanodeError}. The first streamer reaching
 * the external error will call {@link DataStreamer#updateBlockForPipeline}
 * to get a new block with bumped generation stamp, and populate
 * {@link newBlocks} for other streamers. This first streamer will also
 * call {@link DataStreamer#updatePipeline} to update the NameNode state
 * for the block.
 */
private final MultipleBlockingQueue newBlocks;
private final MultipleBlockingQueue updateBlocks;
{code}
# Naming suggestions:
{code}
BlockRecoveryTrigger -> PipelineRecoveryManager or PipelineRecoveryCoordinator 
(I don't have a strong opinion but we can also consider moving the class under 
Coordinator).
trigger() -> addRecoveryWork()
isRecovering() -> isUnderRecovery()
{code}
# The patch at HDFS-8704 removes the {{setFailed}} API; we need to coordinate 
the 2 efforts. Pinging [~libo-intel] for comments.

Follow-ons:
# {{waitLastRecoveryToFinish}} can be improved. The current logic waits for the 
slowest streamer to get out of {{externalError}} state.
# {{externalError}} is actually quite awkward in {{DataStreamer}} -- it's a 
null concept for non-EC {{DataStreamer}}.
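For readers following the queue discussion above, a minimal sketch of the 
per-streamer blocking-queue pattern (class and method names here are 
illustrative, not the actual {{MultipleBlockingQueue}} implementation): the 
first streamer to reach the external error fans the coordinated value out to 
every streamer's slot, and each streamer then picks up its own copy.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Simplified model of a per-streamer queue set: one slot per streamer,
 * so a value published once (e.g. a new block with a bumped generation
 * stamp) is consumed exactly once by each streamer.
 */
class PerStreamerQueue<T> {
  private final List<BlockingQueue<T>> queues;

  PerStreamerQueue(int numStreamers) {
    queues = new ArrayList<>(numStreamers);
    for (int i = 0; i < numStreamers; i++) {
      queues.add(new LinkedBlockingQueue<T>());
    }
  }

  /** Fan the coordinated value out to all streamers' slots. */
  void offerToAll(T item) {
    for (BlockingQueue<T> q : queues) {
      q.offer(item);
    }
  }

  /** Each streamer polls its own slot (null if nothing was published). */
  T poll(int streamerIndex) {
    return queues.get(streamerIndex).poll();
  }
}

public class Demo {
  public static void main(String[] args) {
    PerStreamerQueue<String> q = new PerStreamerQueue<>(3);
    q.offerToAll("blk_1073741825_gs1002");  // illustrative block name
    System.out.println(q.poll(0));
    System.out.println(q.poll(2));
  }
}
```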

> Tolerate multiple failures in DFSStripedOutputStream
> 
>
> Key: HDFS-8383
> URL: https://issues.apache.org/jira/browse/HDFS-8383
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Walter Su
> Attachments: HDFS-8383.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8984) Move replication queues related methods in FSNamesystem to BlockManager

2015-09-03 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730041#comment-14730041
 ] 

Haohui Mai commented on HDFS-8984:
--

Thanks for the reviews! I'll hold off the commit until tomorrow in case of 
additional comments.

> Move replication queues related methods in FSNamesystem to BlockManager
> ---
>
> Key: HDFS-8984
> URL: https://issues.apache.org/jira/browse/HDFS-8984
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8984.000.patch, HDFS-8984.001.patch, 
> HDFS-8984.002.patch, HDFS-8984.003.patch, HDFS-8984.004.patch
>
>
> Currently {{FSNamesystem}} controls whether the replication queue should be 
> populated based on whether the NN is in safe mode or whether it is an active 
> NN.
> Replication is a concept of the block management layer, so it is more natural 
> to place the functionality in the {{BlockManager}} class.
> This jira proposes to move these methods to the {{BlockManager}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8984) Move replication queues related methods in FSNamesystem to BlockManager

2015-09-03 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730035#comment-14730035
 ] 

Jing Zhao commented on HDFS-8984:
-

The 04 patch looks good to me. One nit is that the change in NameNode.java is 
unnecessary. You can remove it when you commit the patch. +1

> Move replication queues related methods in FSNamesystem to BlockManager
> ---
>
> Key: HDFS-8984
> URL: https://issues.apache.org/jira/browse/HDFS-8984
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8984.000.patch, HDFS-8984.001.patch, 
> HDFS-8984.002.patch, HDFS-8984.003.patch, HDFS-8984.004.patch
>
>
> Currently {{FSNamesystem}} controls whether the replication queue should be 
> populated based on whether the NN is in safe mode or whether it is an active 
> NN.
> Replication is a concept of the block management layer, so it is more natural 
> to place the functionality in the {{BlockManager}} class.
> This jira proposes to move these methods to the {{BlockManager}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9021) Use a yellow elephant rather than a blue one in diagram

2015-09-03 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9021:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks for reviewing Colin, committed to trunk and branch-2

> Use a yellow elephant rather than a blue one in diagram
> ---
>
> Key: HDFS-9021
> URL: https://issues.apache.org/jira/browse/HDFS-9021
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9021.001.patch
>
>
> We should promote usage of Apache Hadoop by using yellow elephants in our 
> documentation :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9021) Use a yellow elephant rather than a blue one in diagram

2015-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730007#comment-14730007
 ] 

Hadoop QA commented on HDFS-9021:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | release audit |   0m 26s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   0m 30s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754087/HDFS-9021.001.patch |
| Optional Tests |  |
| git revision | trunk / ed78b14 |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12293/console |


This message was automatically generated.

> Use a yellow elephant rather than a blue one in diagram
> ---
>
> Key: HDFS-9021
> URL: https://issues.apache.org/jira/browse/HDFS-9021
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HDFS-9021.001.patch
>
>
> We should promote usage of Apache Hadoop by using yellow elephants in our 
> documentation :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730006#comment-14730006
 ] 

Hudson commented on HDFS-8939:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1080 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1080/])
HDFS-8939. Test(S)WebHdfsFileContextMainOperations failing on branch-2. 
Contributed by Chris Nauroth. (jghoman: rev 
c2d2c1802a11e3e11a953b23b0eccbf4d107de59)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/WebHdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/SWebHdfs.java


> Test(S)WebHdfsFileContextMainOperations failing on branch-2
> ---
>
> Key: HDFS-8939
> URL: https://issues.apache.org/jira/browse/HDFS-8939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0
>Reporter: Jakob Homan
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8939-branch-2.001.patch, 
> HDFS-8939-branch-2.002.patch, HDFS-8939-branch-2.003.patch, 
> HDFS-8939.003.patch
>
>
> After HDFS-8180, TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
> instantiating the wrapped WebHDFSFileSystems because {{getDefaultPort}} is 
> trying to access a conf that was never provided.  In the constructors of 
> both WebHdfs and SWebHdfs, the underlying (S)WebHdfsFileSystems are 
> instantiated directly and never have a chance to have their 
> {{setConf}} methods called:
> {code}  SWebHdfs(URI theUri, Configuration conf)
>   throws IOException, URISyntaxException {
> super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
>   }{code}
> The test passes on trunk because HDFS-5321 removed the call to the 
> Configuration instance as part of {{getDefaultPort}}.  HDFS-5321 was applied 
> to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
> how branch-2 versus trunk handles default values (branch-2 pulls them from 
> configs if specified, trunk just returns the hard-coded value from the 
> constants file).
> I've fixed this to behave like trunk, returning just the hard-coded value, 
> which causes the test to pass.
>   There is no WebHdfsFileSystem that takes a Config, which would be another 
> way to fix this.
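The failure mode is easy to reproduce with a toy version of the 
configurable-filesystem pattern (the names below are illustrative stand-ins, 
not the real Hadoop classes): a getter that reads a conf field throws an NPE 
when the object was constructed directly and {{setConf}} was never invoked, 
while a hard-coded default works either way.

```java
/** Stand-in for Hadoop's Configuration (illustrative only). */
class ToyConf {
  int getInt(String key, int dflt) { return dflt; }
}

/**
 * Toy model of the branch-2 failure: getDefaultPortFromConf() consults
 * a conf object that is only populated via setConf(). Constructing the
 * object directly and calling the getter first throws an NPE, which is
 * the shape of the failure the wrapped (S)WebHdfsFileSystem hit.
 */
class ToyWebHdfsFs {
  private ToyConf conf;  // remains null until setConf() is called

  void setConf(ToyConf c) { this.conf = c; }

  // branch-2 style: read the default port from the conf -> NPE if unset
  int getDefaultPortFromConf() {
    return conf.getInt("dfs.http.port", 50070);
  }

  // trunk style after the fix: return the hard-coded constant
  int getDefaultPortHardCoded() {
    return 50070;
  }
}

public class Demo {
  public static void main(String[] args) {
    ToyWebHdfsFs fs = new ToyWebHdfsFs();  // setConf() never called
    System.out.println(fs.getDefaultPortHardCoded());  // works: 50070
    try {
      fs.getDefaultPortFromConf();
    } catch (NullPointerException e) {
      System.out.println("NPE: conf was never provided");
    }
  }
}
```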



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9002) Move o.a.h.hdfs.net/*Peer classes to hdfs-client

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730005#comment-14730005
 ] 

Hudson commented on HDFS-9002:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1080 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1080/])
HDFS-9002. Move o.a.h.hdfs.net/*Peer classes to hdfs-client. Contributed by 
Mingliang Liu. (wheat9: rev ed78b14ebc9a21bb57ccd088e8b49bfa457a396f)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/NioInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/IOStreamPair.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/TrustedChannelResolver.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/IOStreamPair.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferTestCase.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/BasicInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/TestSaslDataTransfer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/NioInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/BasicInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/TrustedChannelResolver.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/q

[jira] [Updated] (HDFS-9012) Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module

2015-09-03 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9012:

Attachment: HDFS-9012.002.patch

Thank you [~wheat9] for your insight. Pre-parsing the OOB timeout config values 
and saving them will definitely make this better.

The v2 patch moves the OOB timeouts related code to {{DataNode}} class.
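The pre-parsing idea can be sketched as follows (a minimal model; the classes 
are stand-ins, and the config key and default merely mirror 
{{dfs.datanode.oob.timeout-ms}} as an assumption): parse the comma-separated 
timeout list once in the constructor and serve lookups from the cached array, 
instead of building a configuration object per call.

```java
import java.util.HashMap;
import java.util.Map;

/** Stand-in for Hadoop's Configuration (illustrative only). */
class Conf {
  private final Map<String, String> map = new HashMap<>();
  void set(String k, String v) { map.put(k, v); }
  String get(String k, String dflt) { return map.getOrDefault(k, dflt); }
}

/**
 * Parse the comma-separated OOB timeout list once, up front, rather
 * than re-reading and re-splitting the config value on every lookup.
 */
class OobTimeouts {
  private final long[] timeouts;

  OobTimeouts(Conf conf) {
    String[] parts =
        conf.get("dfs.datanode.oob.timeout-ms", "1500,0,0,0").split(",");
    timeouts = new long[parts.length];
    for (int i = 0; i < parts.length; i++) {
      timeouts[i] = Long.parseLong(parts[i].trim());
    }
  }

  long getTimeout(int oobType) { return timeouts[oobType]; }
}

public class Demo {
  public static void main(String[] args) {
    Conf conf = new Conf();
    conf.set("dfs.datanode.oob.timeout-ms", "2000,10,10,0");
    OobTimeouts t = new OobTimeouts(conf);  // parsed exactly once here
    System.out.println(t.getTimeout(0));    // 2000
  }
}
```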

> Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client 
> module
> 
>
> Key: HDFS-9012
> URL: https://issues.apache.org/jira/browse/HDFS-9012
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9012.000.patch, HDFS-9012.001.patch, 
> HDFS-9012.002.patch
>
>
> The {{org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck}} class is 
> used in client module classes (e.g. 
> {{DataStreamer$ResponseProcessor}} in {{DFSInputStream}} and 
> {{DFSOutputStream}}). This jira tracks the effort of moving this class to 
> {{hadoop-hdfs-client}} module.
> We should keep the static attribute {{OOB_TIMEOUT}} and helper method 
> {{getOOBTimeout}} in the {{hadoop-hdfs}} module as they're not used (for now) 
> in {{hadoop-hdfs-client}} module. Meanwhile, we don't create the 
> {{HdfsConfiguration}} statically if we can pass the correct {{conf}} object.
> The checkstyle warnings can be addressed in 
> [HDFS-8979|https://issues.apache.org/jira/browse/HDFS-8979].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8981) Adding revision to data node jmx getVersion() method

2015-09-03 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729990#comment-14729990
 ] 

Haohui Mai commented on HDFS-8981:
--

Thanks for the explanation. That way the JMX is consistent across NN and DN. 
Changing the target version to trunk.

> Adding revision to data node jmx getVersion() method
> 
>
> Key: HDFS-8981
> URL: https://issues.apache.org/jira/browse/HDFS-8981
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Siqi Li
>Assignee: Siqi Li
>Priority: Minor
> Attachments: HDFS-8981.v1.patch, HDFS-8981.v2.patch, 
> HDFS-8981.v3.patch, HDFS-8981.v4.patch
>
>
> To be consistent with the NameNode JMX, the DataNode JMX should also output 
> the revision number.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8981) Adding revision to data node jmx getVersion() method

2015-09-03 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8981:
-
Target Version/s: 3.0.0

> Adding revision to data node jmx getVersion() method
> 
>
> Key: HDFS-8981
> URL: https://issues.apache.org/jira/browse/HDFS-8981
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Siqi Li
>Assignee: Siqi Li
>Priority: Minor
> Attachments: HDFS-8981.v1.patch, HDFS-8981.v2.patch, 
> HDFS-8981.v3.patch, HDFS-8981.v4.patch
>
>
> To be consistent with the NameNode JMX, the DataNode JMX should also output 
> the revision number.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9002) Move o.a.h.hdfs.net/*Peer classes to hdfs-client

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729979#comment-14729979
 ] 

Hudson commented on HDFS-9002:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8400 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8400/])
HDFS-9002. Move o.a.h.hdfs.net/*Peer classes to hdfs-client. Contributed by 
Mingliang Liu. (wheat9: rev ed78b14ebc9a21bb57ccd088e8b49bfa457a396f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/IOStreamPair.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslResponseWithNegotiatedCipherOption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/TrustedChannelResolver.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/IOStreamPair.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/Peer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/NioInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/TestSaslDataTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/net/BasicInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/NioInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestSecureNNWithQJM.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/BasicInetPeer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferTestCase.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java

[jira] [Commented] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729980#comment-14729980
 ] 

Hudson commented on HDFS-8939:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8400 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8400/])
HDFS-8939. Test(S)WebHdfsFileContextMainOperations failing on branch-2. 
Contributed by Chris Nauroth. (jghoman: rev 
c2d2c1802a11e3e11a953b23b0eccbf4d107de59)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/WebHdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/SWebHdfs.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Test(S)WebHdfsFileContextMainOperations failing on branch-2
> ---
>
> Key: HDFS-8939
> URL: https://issues.apache.org/jira/browse/HDFS-8939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0
>Reporter: Jakob Homan
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8939-branch-2.001.patch, 
> HDFS-8939-branch-2.002.patch, HDFS-8939-branch-2.003.patch, 
> HDFS-8939.003.patch
>
>
> After HDFS-8180, TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
> instantiating the wrapped WebHDFSFileSystems because {{getDefaultPort}} is 
> trying to access a conf that was never provided.  In the constructors of 
> both WebHdfs and SWebHdfs, the underlying (S)WebHdfsFileSystems are 
> instantiated directly and never have a chance to have their 
> {{setConf}} methods called:
> {code}  SWebHdfs(URI theUri, Configuration conf)
>   throws IOException, URISyntaxException {
> super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
>   }{code}
> The test passes on trunk because HDFS-5321 removed the call to the 
> Configuration instance as part of {{getDefaultPort}}.  HDFS-5321 was applied 
> to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
> how branch-2 versus trunk handles default values (branch-2 pulls them from 
> configs if specified, trunk just returns the hard-coded value from the 
> constants file).
> I've fixed this to behave like trunk, returning just the hard-coded value, 
> which causes the test to pass.
>   There is no WebHdfsFileSystem that takes a Config, which would be another 
> way to fix this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9020) Support hflush/hsync in WebHDFS

2015-09-03 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729975#comment-14729975
 ] 

Haohui Mai commented on HDFS-9020:
--

I think supporting hflush / hsync has a lot of value.

I think making WebHDFS stateful across connections is a bad idea, because (1) 
caching DFSClients is nontrivial, as the cache keys on UGI instances rather 
than their principals, and (2) the client cannot recover once the stateful DN 
goes down.

The issue is that WebHDFS is a stream-oriented rather than a message-oriented 
protocol. Without making it message-oriented, I don't really see how hflush 
and hsync can work when there are multiple failures in large clusters. Chunked 
write is a good direction, but it might make sense to take a look at the 
WebSocket protocol, which provides facilities for implementing chunked writes 
and hflush / hsync messages. It would simplify things a lot.

> Support hflush/hsync in WebHDFS
> ---
>
> Key: HDFS-9020
> URL: https://issues.apache.org/jira/browse/HDFS-9020
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Chris Douglas
> Attachments: HDFS-9020-alt.txt
>
>
> In the current implementation, hflush/hsync have no effect on WebHDFS 
> streams, particularly w.r.t. visibility to other clients. This proposes to 
> extend the protocol and implementation to enable this functionality.





[jira] [Commented] (HDFS-8981) Adding revision to data node jmx getVersion() method

2015-09-03 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729972#comment-14729972
 ] 

Ming Ma commented on HDFS-8981:
---

Thanks [~wheat9]. This patch changes the output format of the existing attribute 
"Version" and adds a new attribute "SoftwareVersion". Which method should be 
deprecated? The current plan is to commit this only to trunk.

> Adding revision to data node jmx getVersion() method
> 
>
> Key: HDFS-8981
> URL: https://issues.apache.org/jira/browse/HDFS-8981
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Siqi Li
>Assignee: Siqi Li
>Priority: Minor
> Attachments: HDFS-8981.v1.patch, HDFS-8981.v2.patch, 
> HDFS-8981.v3.patch, HDFS-8981.v4.patch
>
>
> to be consistent with namenode jmx, datanode jmx should also output revision 
> number





[jira] [Updated] (HDFS-7314) When the DFSClient lease cannot be renewed, abort open-for-write files rather than the entire DFSClient

2015-09-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-7314:
--
Fix Version/s: 2.6.1

This has the 2.6.1-candidate label, but it seems it was already pulled into 
2.6.1.

Ran compilation and TestDFSClientRetries anyway just to be sure.

> When the DFSClient lease cannot be renewed, abort open-for-write files rather 
> than the entire DFSClient
> ---
>
> Key: HDFS-7314
> URL: https://issues.apache.org/jira/browse/HDFS-7314
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Ming Ma
>  Labels: 2.6.1-candidate, 2.7.2-candidate, BB2015-05-TBR
> Fix For: 2.6.1, 2.8.0
>
> Attachments: HDFS-7314-2.patch, HDFS-7314-3.patch, HDFS-7314-4.patch, 
> HDFS-7314-5.patch, HDFS-7314-6.patch, HDFS-7314-7.patch, HDFS-7314-8.patch, 
> HDFS-7314-9.patch, HDFS-7314.patch
>
>
> It happened in a YARN nodemanager scenario, but it could happen to any long 
> running service that uses a cached instance of DistributedFileSystem.
> 1. Active NN is under heavy load. So it became unavailable for 10 minutes; 
> any DFSClient request will get ConnectTimeoutException.
> 2. YARN nodemanager uses DFSClient for certain write operations such as log 
> aggregation or the shared cache in YARN-1492. The DFSClient used by YARN NM's 
> renewLease RPC got ConnectTimeoutException.
> {noformat}
> 2014-10-29 01:36:19,559 WARN org.apache.hadoop.hdfs.LeaseRenewer: Failed to 
> renew lease for [DFSClient_NONMAPREDUCE_-550838118_1] for 372 seconds.  
> Aborting ...
> {noformat}
> 3. After DFSClient is in Aborted state, YARN NM can't use that cached 
> instance of DistributedFileSystem.
> {noformat}
> 2014-10-29 20:26:23,991 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
>  Failed to download rsrc...
> java.io.IOException: Filesystem closed
> at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:727)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1780)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
> at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:237)
> at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:340)
> at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:57)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> We can make YARN or DFSClient more tolerant to temporary NN unavailability. 
> Given the callstack is YARN -> DistributedFileSystem -> DFSClient, this can 
> be addressed at different layers.
> * YARN closes the DistributedFileSystem object when it receives some well 
> defined exception. Then the next HDFS call will create a new instance of 
> DistributedFileSystem. We have to fix all the places in YARN. Plus other HDFS 
> applications need to address this as well.
> * DistributedFileSystem detects Aborted DFSClient and create a new instance 
> of DFSClient. We will need to fix all the places DistributedFileSystem calls 
> DFSClient.
> * After DFSClient gets into Aborted state, it doesn't have to reject all 
> requests; instead it can retry. If NN is available again, it can transition 
> back to a healthy state.
> Comments?
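The third option above can be sketched as a tiny state machine (hypothetical names, not the actual DFSClient code): failed lease renewals mark the client aborted, but a later successful renewal returns it to a healthy state instead of leaving it permanently closed.

```java
// Illustrative sketch only: a client that recovers from an aborted state
// once lease renewal succeeds again, rather than staying closed forever.
public class LeaseRenewerSketch {
    enum State { HEALTHY, ABORTED }

    private State state = State.HEALTHY;

    // renewSucceeded: whether the (simulated) renewLease RPC worked this round
    void tick(boolean renewSucceeded) {
        if (renewSucceeded) {
            state = State.HEALTHY;   // NN reachable again: self-heal
        } else {
            state = State.ABORTED;   // NN unreachable: mark aborted for now
        }
    }

    boolean canServeRequests() {
        // An ABORTED client refuses requests, but unlike today it is not
        // permanently closed; the next successful tick() restores it.
        return state == State.HEALTHY;
    }

    public static void main(String[] args) {
        LeaseRenewerSketch c = new LeaseRenewerSketch();
        c.tick(false);   // NN down for a while
        System.out.println("during outage healthy=" + c.canServeRequests());
        c.tick(true);    // NN back
        System.out.println("after recovery healthy=" + c.canServeRequests());
    }
}
```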





[jira] [Updated] (HDFS-8270) create() always retried with hardcoded timeout when file already exists with open lease

2015-09-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-8270:
--
Labels: 2.6.1-candidate  (was: )

This was marked as committed in 2.6.1, but it wasn't actually in the 2.6.1 
branch, presumably because it got committed to 2.6 after I had already 
created the 2.6.1 branch.

I just pushed the fix to the right 2.6.1 branch. Ran compilation and 
TestFileCreation before the push.

> create() always retried with hardcoded timeout when file already exists with 
> open lease
> ---
>
> Key: HDFS-8270
> URL: https://issues.apache.org/jira/browse/HDFS-8270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Andrey Stepachev
>Assignee: J.Andreina
>  Labels: 2.6.1-candidate
> Fix For: 2.6.1, 2.7.1
>
> Attachments: HDFS-8270-branch-2.6-v3.patch, 
> HDFS-8270-branch-2.7-03.patch, HDFS-8270.1.patch, HDFS-8270.2.patch, 
> HDFS-8270.3.patch
>
>
> In HBase we stumbled on unexpected behaviour, which could 
> break things. 
> HDFS-6478 fixed a wrong exception 
> translation, but that apparently led to unexpected behaviour: 
> clients trying to create a file without overwrite=true will be forced 
> to retry for a hardcoded amount of time (60 seconds). 
> That could break or slow down systems that use the filesystem 
> for locks (like hbase fsck did, and we got it broken in HBASE-13574). 
> We should make this behaviour configurable: does the client really need 
> to wait for the lease timeout to be sure that the file doesn't exist, or 
> should it be enough to fail fast?





[jira] [Commented] (HDFS-9020) Support hflush/hsync in WebHDFS

2015-09-03 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729933#comment-14729933
 ] 

Chris Douglas commented on HDFS-9020:
-

Sorry, I didn't mean to imply an endorsement.

I share your reservations about caching clients. It wasn't intended to address 
leaks, but to associate separate {{POST}} requests as a session. The goal is to 
have the overhead at the NN not be significantly worse than if the client had 
instantiated a DFSClient instance. After being redirected, a client using 
{{WebHdfsFileSystem}} shouldn't create more (or longer-lived) clients than the 
existing code, if the stream is properly closed.

bq. If a stream is orphaned, the NN should eventually recover the lease but the 
cached client will keep the lease alive. So now you must have some additional 
mechanism for timing out open streams and closing them.
Good point. If the client were to send a zero-length append (i.e., {{POST}} w/ 
the session cookie) in the stream to keep the client alive, we could use that 
to time out clients that disappear without closing the stream. Combined with 
the shutdown hook, is that sufficient to catch most of the cases we'd also 
cover in DFSClient?

bq. If the intention is to abort the cached dfsclient on another node, the 
client won't know the lease is gone until it tries to add or complete a block - 
but an idle stream isn't going to do that.
Yes, that's the intent. If the client gets an error, it may retry and be 
redirected to another DN. Unfortunately, the old DFSClient shouldn't see any 
new writes, unless the old WebHDFS client is redirected back. Is there a way to 
compel the client to verify its lease on an idle stream? The existing {{POST}} 
isn't idempotent, and it'd be a significant change to make it so. We could try 
to resync at the WebHDFS client, if we could guarantee that the old DFSClient 
were closed and it had flushed the stream, but this should probably be a 
separate issue.

> Support hflush/hsync in WebHDFS
> ---
>
> Key: HDFS-9020
> URL: https://issues.apache.org/jira/browse/HDFS-9020
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Chris Douglas
> Attachments: HDFS-9020-alt.txt
>
>
> In the current implementation, hflush/hsync have no effect on WebHDFS 
> streams, particularly w.r.t. visibility to other clients. This proposes to 
> extend the protocol and implementation to enable this functionality.





[jira] [Updated] (HDFS-9002) Move o.a.h.hdfs.net/*Peer classes to hdfs-client

2015-09-03 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9002:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~liuml07] for the 
contribution.

> Move o.a.h.hdfs.net/*Peer classes to hdfs-client
> 
>
> Key: HDFS-9002
> URL: https://issues.apache.org/jira/browse/HDFS-9002
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9002.000.patch, HDFS-9002.001.patch, 
> HDFS-9002.002.patch, HDFS-9002.003.patch
>
>
> This jira tracks the effort of moving the following two parts to 
> {{hadoop-hdfs-client}} module:
> * {{*Peer.java}} classes
> * static helper methods in {{TcpPeerServer}}
> In {{org.apache.hadoop.hdfs.net}} package, the {{Peer}} classes should be 
> moved to {{hadoop-hdfs-client}} module as they are used in client, while 
> {{PeerServer}} classes stay in {{hadoop-hdfs}} module. For the static helper 
> methods in {{TcpPeerServer}}, we need to move them out of the 
> {{TcpPeerServer}} class and put them in client module. 
> Meanwhile, we need to move the related classes in 
> {{org.apache.hadoop.hdfs.protocol.datatransfer.sasl}} packages as they're 
> used by client module. Config keys should also be moved.
> The checkstyle warnings can be addressed in [HDFS-8979 | 
> https://issues.apache.org/jira/browse/HDFS-8979], and removing the _slf4j_ 
> logger guards when calling {{LOG.debug()}} and {{LOG.trace()}} can be 
> addressed in [HDFS-8971 | https://issues.apache.org/jira/browse/HDFS-8971].





[jira] [Commented] (HDFS-8981) Adding revision to data node jmx getVersion() method

2015-09-03 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729929#comment-14729929
 ] 

Haohui Mai commented on HDFS-8981:
--

It might make sense to file a new jira to deprecate the old method in branch-2 
and clean things up in trunk?

> Adding revision to data node jmx getVersion() method
> 
>
> Key: HDFS-8981
> URL: https://issues.apache.org/jira/browse/HDFS-8981
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Siqi Li
>Assignee: Siqi Li
>Priority: Minor
> Attachments: HDFS-8981.v1.patch, HDFS-8981.v2.patch, 
> HDFS-8981.v3.patch, HDFS-8981.v4.patch
>
>
> to be consistent with namenode jmx, datanode jmx should also output revision 
> number





[jira] [Commented] (HDFS-9002) Move o.a.h.hdfs.net/*Peer classes to hdfs-client

2015-09-03 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729931#comment-14729931
 ] 

Mingliang Liu commented on HDFS-9002:
-

Thanks [~wheat9] for reviewing the code.

> Move o.a.h.hdfs.net/*Peer classes to hdfs-client
> 
>
> Key: HDFS-9002
> URL: https://issues.apache.org/jira/browse/HDFS-9002
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9002.000.patch, HDFS-9002.001.patch, 
> HDFS-9002.002.patch, HDFS-9002.003.patch
>
>





[jira] [Commented] (HDFS-9018) Update the pom to add junit dependency and move TestXAttr to client project

2015-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729927#comment-14729927
 ] 

Hadoop QA commented on HDFS-9018:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 17s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  1s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 44s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 53s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests |   0m 29s | Tests passed in 
hadoop-hdfs-client. |
| | |  39m  6s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754056/HDFS-9018.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / c2d2c18 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12291/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12291/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12291/console |


This message was automatically generated.

> Update the pom to add junit dependency and move TestXAttr to client project
> ---
>
> Key: HDFS-9018
> URL: https://issues.apache.org/jira/browse/HDFS-9018
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Kanaka Kumar Avvaru
>Assignee: Kanaka Kumar Avvaru
> Attachments: HDFS-9018.patch
>
>
> Update the pom to add junit dependency and move 
> {{org.apache.hadoop.fs.TestXAttr}}  to client project to start with test 
> movement





[jira] [Updated] (HDFS-9002) Move o.a.h.hdfs.net/*Peer classes to hdfs-client

2015-09-03 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9002:
-
Summary: Move o.a.h.hdfs.net/*Peer classes to hdfs-client  (was: Move 
o.a.h.hdfs.net/*Peer classes to client module)

> Move o.a.h.hdfs.net/*Peer classes to hdfs-client
> 
>
> Key: HDFS-9002
> URL: https://issues.apache.org/jira/browse/HDFS-9002
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9002.000.patch, HDFS-9002.001.patch, 
> HDFS-9002.002.patch, HDFS-9002.003.patch
>
>





[jira] [Commented] (HDFS-8981) Adding revision to data node jmx getVersion() method

2015-09-03 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729918#comment-14729918
 ] 

Ming Ma commented on HDFS-8981:
---

+1 on the latest patch. The failed unit tests aren't related. I will wait until 
tomorrow to commit in case folks have questions about the incompatible change.

> Adding revision to data node jmx getVersion() method
> 
>
> Key: HDFS-8981
> URL: https://issues.apache.org/jira/browse/HDFS-8981
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Siqi Li
>Assignee: Siqi Li
>Priority: Minor
> Attachments: HDFS-8981.v1.patch, HDFS-8981.v2.patch, 
> HDFS-8981.v3.patch, HDFS-8981.v4.patch
>
>
> to be consistent with namenode jmx, datanode jmx should also output revision 
> number





[jira] [Commented] (HDFS-9002) Move o.a.h.hdfs.net/*Peer classes to client module

2015-09-03 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729919#comment-14729919
 ] 

Haohui Mai commented on HDFS-9002:
--

Got it. Thanks. +1.

> Move o.a.h.hdfs.net/*Peer classes to client module
> --
>
> Key: HDFS-9002
> URL: https://issues.apache.org/jira/browse/HDFS-9002
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9002.000.patch, HDFS-9002.001.patch, 
> HDFS-9002.002.patch, HDFS-9002.003.patch
>
>





[jira] [Commented] (HDFS-9002) Move o.a.h.hdfs.net/*Peer classes to client module

2015-09-03 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729894#comment-14729894
 ] 

Mingliang Liu commented on HDFS-9002:
-

The bug pattern in this class was filtered by a _global filter rule_ in 
{{hadoop-hdfs}}, e.g.
{code}
<FindBugsFilter>
 ..
 <Match>
   <Bug pattern="..." />
 </Match>
 ...
</FindBugsFilter>
{code}

There are other classes/methods (>12 found) that are excluded for this bug 
pattern. So we can't delete this global filter rule in 
{{hadoop-hdfs/dev-support/findbugsExcludeFile.xml}}.

> Move o.a.h.hdfs.net/*Peer classes to client module
> --
>
> Key: HDFS-9002
> URL: https://issues.apache.org/jira/browse/HDFS-9002
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9002.000.patch, HDFS-9002.001.patch, 
> HDFS-9002.002.patch, HDFS-9002.003.patch
>
>





[jira] [Commented] (HDFS-9011) Support splitting BlockReport of a storage into multiple RPC

2015-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729883#comment-14729883
 ] 

Hadoop QA commented on HDFS-9011:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 43s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 8 new or modified test files. |
| {color:green}+1{color} | javac |   7m 55s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  8s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 23s | The applied patch generated  8 
new checkstyle issues (total was 424, now 426). |
| {color:red}-1{color} | whitespace |   0m  3s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 31s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 168m  7s | Tests failed in hadoop-hdfs. |
| | | 213m 34s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestDFSFinalize |
|   | hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
|   | hadoop.hdfs.TestParallelShortCircuitReadUnCached |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753882/HDFS-9011.000.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 53c38cc |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12290/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12290/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12290/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12290/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12290/console |


This message was automatically generated.

> Support splitting BlockReport of a storage into multiple RPC
> 
>
> Key: HDFS-9011
> URL: https://issues.apache.org/jira/browse/HDFS-9011
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-9011.000.patch
>
>
> Currently, if a DataNode has too many blocks (more than 1m by default), it 
> sends multiple RPCs to the NameNode for the block report, where each RPC 
> contains the report for a single storage. However, in practice we've seen 
> that sometimes even a single storage can contain a large amount of blocks, 
> and the report can exceed the max RPC data length. It may be helpful to 
> support sending multiple RPCs for the block report of a single storage.
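The proposed splitting can be sketched as plain list chunking (illustrative only; the real report would also carry per-storage metadata, and the threshold would come from configuration):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only (not the actual DataNode code): split one
// storage's block list into chunks so that no single RPC exceeds a
// maximum number of blocks.
public class BlockReportSplitter {
    static <T> List<List<T>> split(List<T> blocks, int maxPerRpc) {
        List<List<T>> rpcs = new ArrayList<>();
        for (int i = 0; i < blocks.size(); i += maxPerRpc) {
            // Copy each window so chunks are independent of the source list.
            rpcs.add(new ArrayList<>(
                blocks.subList(i, Math.min(i + maxPerRpc, blocks.size()))));
        }
        return rpcs;
    }

    public static void main(String[] args) {
        List<Integer> blocks = new ArrayList<>();
        for (int i = 0; i < 2500; i++) blocks.add(i);
        // With a toy threshold of 1000 blocks, this storage needs 3 RPCs.
        List<List<Integer>> rpcs = split(blocks, 1000);
        System.out.println(rpcs.size() + " RPCs, last has "
            + rpcs.get(rpcs.size() - 1).size() + " blocks"); // 3 RPCs, last has 500 blocks
    }
}
```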





[jira] [Commented] (HDFS-8964) When validating the edit log, do not read at or beyond the file offset that is being written

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729882#comment-14729882
 ] 

Hudson commented on HDFS-8964:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2269 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2269/])
HDFS-8964. When validating the edit log, do not read at or beyond the file 
offset that is being written (Zhe Zhang via Colin P. McCabe) (cmccabe: rev 
53c38cc89ab979ec47557dcfa7affbad20578c0a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckPointForSecurityTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java


> When validating the edit log, do not read at or beyond the file offset that 
> is being written
> 
>
> Key: HDFS-8964
> URL: https://issues.apache.org/jira/browse/HDFS-8964
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8964.00.patch, HDFS-8964.01.patch, 
> HDFS-8964.02.patch, HDFS-8964.03.patch, HDFS-8964.04.patch, 
> HDFS-8964.05.patch, HDFS-8964.06.patch
>
>
> NN/JN validates in-progress edit log files in multiple scenarios, via 
> {{EditLogFile#validateLog}}. The method scans through the edit log file to 
> find the last transaction ID.
> However, an in-progress edit log file could be actively written to, which 
> creates a race condition and causes incorrect data to be read (and later we 
> attempt to interpret the data as ops).  This causes problems for INotify, 
> which reads edit log entries while the edit log is still being written.
> Currently {{validateLog}} is used in 3 places:
> # NN {{getEditsFromTxid}}
> # JN {{getEditLogManifest}}
> # NN/JN {{recoverUnfinalizedSegments}}
> In the first two scenarios we should provide a maximum TxId to validate in 
> the in-progress file. The 3rd scenario won't cause a race condition because 
> only non-current in-progress edit log files are validated.
> {{validateLog}} is actually only used with in-progress files, and could use a 
> better name and Javadoc.
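The fix described in the first two scenarios (bounding how far validation may read) can be sketched as follows. This is a minimal illustration with a hypothetical fixed-size record format (one big-endian 8-byte txid per op); the real FSEditLogLoader parses variable-length ops, so the names and layout here are assumptions.

```java
import java.nio.ByteBuffer;

/**
 * Sketch of txid-bounded edit log validation. The point is the two bounds:
 * never read at or beyond the committed length (bytes past it may still be
 * in flight), and never interpret ops past the requested maximum txid.
 */
class BoundedLogScan {
  /** Returns the last qualifying txid found, or -1 if none. */
  public static long findLastTxId(byte[] log, int committedLength,
                                  long maxTxIdToValidate) {
    // Limit the scan to the committed prefix of the file.
    ByteBuffer in = ByteBuffer.wrap(log, 0, committedLength);
    long last = -1;
    while (in.remaining() >= Long.BYTES) {
      long txid = in.getLong();
      if (txid > maxTxIdToValidate) {
        break; // the caller asked us not to validate past this txid
      }
      last = txid;
    }
    return last;
  }
}
```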



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9009) Send metrics logs to NullAppender by default

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729881#comment-14729881
 ] 

Hudson commented on HDFS-9009:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2269 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2269/])
HDFS-9009. Send metrics logs to NullAppender by default. (Arpit Agarwal) (arp: 
rev 524ba8708b8e3e17e806748e1f819dec2183bf94)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/conf/log4j.properties


> Send metrics logs to NullAppender by default
> 
>
> Key: HDFS-9009
> URL: https://issues.apache.org/jira/browse/HDFS-9009
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-9009.01.patch, HDFS-9009.02.patch
>
>
> Disable the metrics logger by default by directing logs to the 
> {{NullAppender}}.
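The change itself is a log4j.properties edit: route the metrics logger to a NullAppender so metrics logging is a no-op unless an operator opts in. A sketch of such a configuration follows; the exact property keys are an assumption, so consult the committed hadoop-common log4j.properties for the real ones.

```properties
# Route the (illustrative) NameNode metrics logger to a NullAppender by
# default; an operator can override this to enable metrics logging.
namenode.metrics.logger=INFO,NullAppender
log4j.logger.NameNodeMetricsLog=${namenode.metrics.logger}
log4j.additivity.NameNodeMetricsLog=false
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
```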





[jira] [Commented] (HDFS-9002) Move o.a.h.hdfs.net/*Peer classes to client module

2015-09-03 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729880#comment-14729880
 ] 

Haohui Mai commented on HDFS-9002:
--

Thanks. It looks good overall.

{code}
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
@@ -14,6 +14,7 @@
   
   
   
+  
 
 
   
{code}

The entry should be removed in 
{{hadoop-hdfs/dev-support/findbugsExcludeFile.xml}}. +1 after addressing this.

> Move o.a.h.hdfs.net/*Peer classes to client module
> --
>
> Key: HDFS-9002
> URL: https://issues.apache.org/jira/browse/HDFS-9002
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9002.000.patch, HDFS-9002.001.patch, 
> HDFS-9002.002.patch, HDFS-9002.003.patch
>
>
> This jira tracks the effort of moving the following two parts to 
> {{hadoop-hdfs-client}} module:
> * {{*Peer.java}} classes
> * static helper methods in {{TcpPeerServer}}
> In {{org.apache.hadoop.hdfs.net}} package, the {{Peer}} classes should be 
> moved to {{hadoop-hdfs-client}} module as they are used in client, while 
> {{PeerServer}} classes stay in {{hadoop-hdfs}} module. For the static helper 
> methods in {{TcpPeerServer}}, we need to move them out of the 
> {{TcpPeerServer}} class and put them in client module. 
> Meanwhile, we need to move the related classes in 
> {{org.apache.hadoop.hdfs.protocol.datatransfer.sasl}} packages as they're 
> used by client module. Config keys should also be moved.
> The checkstyle warnings can be addressed in [HDFS-8979 | 
> https://issues.apache.org/jira/browse/HDFS-8979], and removing the _slf4j_ 
> logger guards when calling {{LOG.debug()}} and {{LOG.trace()}} can be 
> addressed in [HDFS-8971 | https://issues.apache.org/jira/browse/HDFS-8971].





[jira] [Commented] (HDFS-9020) Support hflush/hsync in WebHDFS

2015-09-03 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729874#comment-14729874
 ] 

Daryn Sharp commented on HDFS-9020:
---

In fairness, I didn't advocate for the above. :)  I agreed that chunked writes 
are probably the only "clean" design, but it's complicated.  I think moving 
from a stateless operation with a fixed lifecycle to an implementation that 
becomes stateful and async with a non-deterministic lifecycle will be 
difficult.

A big concern is the caching of dfsclients & streams.  It can easily lead to 
leaks and exhausting fds on the NN.  Caching clients per user is probably an 
attempt to address that but won't work.  All subsequent file streams are 
relying on the original token.  If that token is cancelled, all sessions will 
die even though they had their own valid tokens.

Cached clients pose problems for leases.  If a stream is orphaned, the NN 
should eventually recover the lease but the cached client will keep the lease 
alive.  So now you must have some additional mechanism for timing out open 
streams and closing them.

Lease recovery should always be a premeditated action, not an implicit action.  
If a webhdfs client opens a file, and another client forcibly revokes the 
lease, the original webhdfs client shouldn't just "steal" it back.  If the 
intention is to abort the cached dfsclient on another node, the client won't 
know the lease is gone until it tries to add or complete a block - but an idle 
stream isn't going to do that.

I'm not sure how this can be done cleanly & correctly.  Webhdfs has been so 
problematic that I'm hesitant for it to become more complex.
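The "additional mechanism for timing out open streams and closing them" would look roughly like the sketch below. Everything here (class name, method names, eviction policy) is hypothetical and only illustrates the extra lifecycle machinery a stateful WebHDFS write path would need, not any proposed implementation.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical cache of open streams that closes anything idle longer than
 * a timeout, so orphaned streams cannot keep leases alive indefinitely.
 */
class IdleStreamCache<K> {
  private static final class Entry {
    final Closeable stream;
    volatile long lastUsedMs;
    Entry(Closeable s, long now) { stream = s; lastUsedMs = now; }
  }

  private final Map<K, Entry> streams = new ConcurrentHashMap<>();
  private final long idleTimeoutMs;

  IdleStreamCache(long idleTimeoutMs) { this.idleTimeoutMs = idleTimeoutMs; }

  public void put(K key, Closeable stream, long nowMs) {
    streams.put(key, new Entry(stream, nowMs));
  }

  /** Marks a stream as recently used and returns it, or null if absent. */
  public Closeable touch(K key, long nowMs) {
    Entry e = streams.get(key);
    if (e != null) e.lastUsedMs = nowMs;
    return e == null ? null : e.stream;
  }

  /** Closes and removes every stream idle at least the timeout; returns count. */
  public int evictIdle(long nowMs) {
    int evicted = 0;
    for (Iterator<Map.Entry<K, Entry>> it = streams.entrySet().iterator(); it.hasNext();) {
      Entry e = it.next().getValue();
      if (nowMs - e.lastUsedMs >= idleTimeoutMs) {
        try { e.stream.close(); } catch (IOException ignored) { }
        it.remove();
        evicted++;
      }
    }
    return evicted;
  }

  public int size() { return streams.size(); }
}
```

Even with such a sweeper, the lease-stealing problem above remains: closing an idle stream server-side is visible to the NN, but a remote cached client still won't learn its lease is gone until it next touches a block.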

> Support hflush/hsync in WebHDFS
> ---
>
> Key: HDFS-9020
> URL: https://issues.apache.org/jira/browse/HDFS-9020
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Chris Douglas
> Attachments: HDFS-9020-alt.txt
>
>
> In the current implementation, hflush/hsync have no effect on WebHDFS 
> streams, particularly w.r.t. visibility to other clients. This proposes to 
> extend the protocol and implementation to enable this functionality.





[jira] [Commented] (HDFS-8981) Adding revision to data node jmx getVersion() method

2015-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729870#comment-14729870
 ] 

Hadoop QA commented on HDFS-8981:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  21m 18s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 54s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 17s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 24s | The applied patch generated  3 
new checkstyle issues (total was 150, now 152). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 25s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  9s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 166m  5s | Tests failed in hadoop-hdfs. |
| | | 215m  4s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
|   | hadoop.hdfs.server.datanode.web.dtp.TestDtpHttp2 |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshot |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754047/HDFS-8981.v4.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 524ba87 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12288/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12288/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12288/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12288/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12288/console |


This message was automatically generated.

> Adding revision to data node jmx getVersion() method
> 
>
> Key: HDFS-8981
> URL: https://issues.apache.org/jira/browse/HDFS-8981
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Siqi Li
>Assignee: Siqi Li
>Priority: Minor
> Attachments: HDFS-8981.v1.patch, HDFS-8981.v2.patch, 
> HDFS-8981.v3.patch, HDFS-8981.v4.patch
>
>
> To be consistent with the NameNode JMX output, the DataNode JMX should also 
> report the revision number.





[jira] [Commented] (HDFS-9002) Move o.a.h.hdfs.net/*Peer classes to client module

2015-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729864#comment-14729864
 ] 

Hadoop QA commented on HDFS-9002:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  17m 39s | Findbugs (version 3.0.0) 
appears to be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 8 new or modified test files. |
| {color:green}+1{color} | javac |   7m 56s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  5s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 34s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  5s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 26s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 163m  6s | Tests passed in hadoop-hdfs. 
|
| {color:green}+1{color} | hdfs tests |   0m 29s | Tests passed in 
hadoop-hdfs-client. |
| | | 211m  8s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754048/HDFS-9002.003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 524ba87 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12289/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12289/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12289/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12289/console |


This message was automatically generated.

> Move o.a.h.hdfs.net/*Peer classes to client module
> --
>
> Key: HDFS-9002
> URL: https://issues.apache.org/jira/browse/HDFS-9002
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9002.000.patch, HDFS-9002.001.patch, 
> HDFS-9002.002.patch, HDFS-9002.003.patch
>
>
> This jira tracks the effort of moving the following two parts to 
> {{hadoop-hdfs-client}} module:
> * {{*Peer.java}} classes
> * static helper methods in {{TcpPeerServer}}
> In {{org.apache.hadoop.hdfs.net}} package, the {{Peer}} classes should be 
> moved to {{hadoop-hdfs-client}} module as they are used in client, while 
> {{PeerServer}} classes stay in {{hadoop-hdfs}} module. For the static helper 
> methods in {{TcpPeerServer}}, we need to move them out of the 
> {{TcpPeerServer}} class and put them in client module. 
> Meanwhile, we need to move the related classes in 
> {{org.apache.hadoop.hdfs.protocol.datatransfer.sasl}} packages as they're 
> used by client module. Config keys should also be moved.
> The checkstyle warnings can be addressed in [HDFS-8979 | 
> https://issues.apache.org/jira/browse/HDFS-8979], and removing the _slf4j_ 
> logger guards when calling {{LOG.debug()}} and {{LOG.trace()}} can be 
> addressed in [HDFS-8971 | https://issues.apache.org/jira/browse/HDFS-8971].





[jira] [Updated] (HDFS-9019) sticky bit permission denied error not informative enough

2015-09-03 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9019:
-
Status: Patch Available  (was: Open)

> sticky bit permission denied error not informative enough
> -
>
> Key: HDFS-9019
> URL: https://issues.apache.org/jira/browse/HDFS-9019
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1, 2.7.0, 2.6.0
>Reporter: Thejas M Nair
>Assignee: Xiaoyu Yao
>  Labels: easyfix, newbie
> Attachments: HDFS-9019.000.patch
>
>
> The check for sticky bit permission in FSPermissionChecker.java prints only 
> the child file name and the current owner.
> It does not print the owner of the file and the parent directory. It would 
> help to have that printed as well for ease of debugging permission issues.





[jira] [Updated] (HDFS-9019) sticky bit permission denied error not informative enough

2015-09-03 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9019:
-
Attachment: HDFS-9019.000.patch

Thanks [~thejas] for reporting the issue. Attaching a patch that adds the 
inode and its parent info to the AccessControlException message.
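A sketch of the kind of message such a change could produce, printing colon notation (owner:group:mode) plus the full paths of both the inode and its parent. The helper name and exact wording are illustrative, not the contents of HDFS-9019.000.patch.

```java
/**
 * Illustrative formatter for a more informative sticky-bit denial:
 * includes owner:group:mode and the full path for both the inode
 * being acted on and its parent directory.
 */
class StickyBitMessage {
  public static String denied(String user,
      String inodePath, String inodeOwner, String inodeGroup, String inodeMode,
      String parentPath, String parentOwner, String parentGroup, String parentMode) {
    return "Permission denied by sticky bit: user=" + user
        + ", path=\"" + inodePath + "\":" + inodeOwner + ":" + inodeGroup + ":" + inodeMode
        + ", parent=\"" + parentPath + "\":" + parentOwner + ":" + parentGroup + ":" + parentMode;
  }
}
```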

> sticky bit permission denied error not informative enough
> -
>
> Key: HDFS-9019
> URL: https://issues.apache.org/jira/browse/HDFS-9019
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0, 2.7.0, 2.7.1
>Reporter: Thejas M Nair
>Assignee: Xiaoyu Yao
>  Labels: easyfix, newbie
> Attachments: HDFS-9019.000.patch
>
>
> The check for sticky bit permission in FSPermissionChecker.java prints only 
> the child file name and the current owner.
> It does not print the owner of the file and the parent directory. It would 
> help to have that printed as well for ease of debugging permission issues.





[jira] [Assigned] (HDFS-9019) sticky bit permission denied error not informative enough

2015-09-03 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDFS-9019:


Assignee: Xiaoyu Yao

> sticky bit permission denied error not informative enough
> -
>
> Key: HDFS-9019
> URL: https://issues.apache.org/jira/browse/HDFS-9019
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0, 2.7.0, 2.7.1
>Reporter: Thejas M Nair
>Assignee: Xiaoyu Yao
>  Labels: easyfix, newbie
>
> The check for sticky bit permission in FSPermissionChecker.java prints only 
> the child file name and the current owner.
> It does not print the owner of the file and the parent directory. It would 
> help to have that printed as well for ease of debugging permission issues.





[jira] [Commented] (HDFS-9012) Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module

2015-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729855#comment-14729855
 ] 

Hadoop QA commented on HDFS-9012:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 40s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 50s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 10s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 36s | The applied patch generated  
21 new checkstyle issues (total was 0, now 21). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 33s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 10s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 163m 46s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 28s | Tests passed in 
hadoop-hdfs-client. |
| | | 214m 47s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHDFSOAuth2 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754044/HDFS-9012.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0ebc658 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12285/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12285/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12285/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12285/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12285/console |


This message was automatically generated.

> Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client 
> module
> -----------------------------------------------------------------------
>
> Key: HDFS-9012
> URL: https://issues.apache.org/jira/browse/HDFS-9012
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9012.000.patch, HDFS-9012.001.patch
>
>
> The {{package org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck}} 
> class is used in client module classes (e.g. 
> {{DataStreamer$ResponseProcessor}} in {{DFSInputStream}} and 
> {{DFSOutputStream}}). This jira tracks the effort of moving this class to 
> {{hadoop-hdfs-client}} module.
> We should keep the static attribute {{OOB_TIMEOUT}} and helper method 
> {{getOOBTimeout}} in the {{hadoop-hdfs}} module as they're not used (by now) 
> in {{hadoop-hdfs-client}} module. Meanwhile, we don't create the 
> {{HdfsConfiguration}} statically if we can pass the correct {{conf}} object.
> The checkstyle warnings can be addressed in 
> [HDFS-8979|https://issues.apache.org/jira/browse/HDFS-8979].
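The point about not creating {{HdfsConfiguration}} statically is the standard inversion sketched below: resolve the value from the caller-supplied conf once, instead of building a configuration at class-load time. Class and field names are illustrative, not the HDFS API.

```java
/**
 * Sketch: avoid static configuration construction; let the caller pass
 * in the already-resolved value from the correct conf object.
 */
class ConfInjectionSketch {
  // Before (anti-pattern), roughly:
  //   static final long OOB_TIMEOUT = new HdfsConfiguration().getLong(KEY, DEFAULT);
  private final long oobTimeoutMs;

  // After: the caller consults the correct conf object exactly once.
  ConfInjectionSketch(long oobTimeoutMs) {
    this.oobTimeoutMs = oobTimeoutMs;
  }

  public long getOobTimeout() {
    return oobTimeoutMs;
  }
}
```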





[jira] [Commented] (HDFS-8964) When validating the edit log, do not read at or beyond the file offset that is being written

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729854#comment-14729854
 ] 

Hudson commented on HDFS-8964:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #331 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/331/])
HDFS-8964. When validating the edit log, do not read at or beyond the file 
offset that is being written (Zhe Zhang via Colin P. McCabe) (cmccabe: rev 
53c38cc89ab979ec47557dcfa7affbad20578c0a)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckPointForSecurityTokens.java


> When validating the edit log, do not read at or beyond the file offset that 
> is being written
> -----------------------------------------------------------------------
>
> Key: HDFS-8964
> URL: https://issues.apache.org/jira/browse/HDFS-8964
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8964.00.patch, HDFS-8964.01.patch, 
> HDFS-8964.02.patch, HDFS-8964.03.patch, HDFS-8964.04.patch, 
> HDFS-8964.05.patch, HDFS-8964.06.patch
>
>
> NN/JN validates in-progress edit log files in multiple scenarios, via 
> {{EditLogFile#validateLog}}. The method scans through the edit log file to 
> find the last transaction ID.
> However, an in-progress edit log file could be actively written to, which 
> creates a race condition and causes incorrect data to be read (and later we 
> attempt to interpret the data as ops).  This causes problems for INotify, 
> which reads edit log entries while the edit log is still being written.
> Currently {{validateLog}} is used in 3 places:
> # NN {{getEditsFromTxid}}
> # JN {{getEditLogManifest}}
> # NN/JN {{recoverUnfinalizedSegments}}
> In the first two scenarios we should provide a maximum TxId to validate in 
> the in-progress file. The 3rd scenario won't cause a race condition because 
> only non-current in-progress edit log files are validated.
> {{validateLog}} is actually only used with in-progress files, and could use a 
> better name and Javadoc.





[jira] [Commented] (HDFS-9009) Send metrics logs to NullAppender by default

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729853#comment-14729853
 ] 

Hudson commented on HDFS-9009:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #331 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/331/])
HDFS-9009. Send metrics logs to NullAppender by default. (Arpit Agarwal) (arp: 
rev 524ba8708b8e3e17e806748e1f819dec2183bf94)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/conf/log4j.properties


> Send metrics logs to NullAppender by default
> 
>
> Key: HDFS-9009
> URL: https://issues.apache.org/jira/browse/HDFS-9009
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-9009.01.patch, HDFS-9009.02.patch
>
>
> Disable the metrics logger by default by directing logs to the 
> {{NullAppender}}.





[jira] [Updated] (HDFS-8850) VolumeScanner thread exits with exception if there is no block pool to be scanned but there are suspicious blocks

2015-09-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-8850:
--
Labels: 2.6.1-candidate  (was: )

Adding back 2.6.1-candidate label for tracking. Will remove it once 2.6.1 is 
done.

> VolumeScanner thread exits with exception if there is no block pool to be 
> scanned but there are suspicious blocks
> -
>
> Key: HDFS-8850
> URL: https://issues.apache.org/jira/browse/HDFS-8850
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>  Labels: 2.6.1-candidate
> Fix For: 2.8.0
>
> Attachments: HDFS-8850.001.patch
>
>
> The VolumeScanner threads inside the BlockScanner exit with an exception if 
> there is no block pool to be scanned but there are suspicious blocks.





[jira] [Commented] (HDFS-8967) Create a BlockManagerLock class to represent the lock used in the BlockManager

2015-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729847#comment-14729847
 ] 

Hadoop QA commented on HDFS-8967:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 58s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 51s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 14s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 22s | The applied patch generated  2 
new checkstyle issues (total was 514, now 514). |
| {color:red}-1{color} | whitespace |   0m  2s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 31s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 167m 12s | Tests failed in hadoop-hdfs. |
| | | 212m 56s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestFSNamesystem |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753563/HDFS-8967.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0ebc658 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12284/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12284/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12284/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12284/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12284/console |


This message was automatically generated.

> Create a BlockManagerLock class to represent the lock used in the BlockManager
> --
>
> Key: HDFS-8967
> URL: https://issues.apache.org/jira/browse/HDFS-8967
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8967.000.patch, HDFS-8967.001.patch, 
> HDFS-8967.002.patch
>
>
> This jira proposes to create a {{BlockManagerLock}} class to represent the 
> lock used in {{BlockManager}}.
> Currently it directly points to the {{FSNamesystem}} lock thus there are no 
> functionality changes.
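A delegating lock of the kind described can be sketched as below: it forwards to the namesystem's lock today, so there is no behavior change, while call sites compile against the new type. The method set shown is an assumption, not the patch.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Sketch of a BlockManagerLock that delegates to an externally supplied
 * lock (the FSNamesystem lock, per the description). Swapping in a
 * BlockManager-private lock later only changes the constructor argument.
 */
class BlockManagerLock {
  private final ReentrantReadWriteLock delegate;

  BlockManagerLock(ReentrantReadWriteLock namesystemLock) {
    this.delegate = namesystemLock;
  }

  public Lock readLock() { return delegate.readLock(); }
  public Lock writeLock() { return delegate.writeLock(); }
  public boolean hasWriteLock() { return delegate.isWriteLockedByCurrentThread(); }
}
```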





[jira] [Commented] (HDFS-9018) Update the pom to add junit dependency and move TestXAttr to client project

2015-09-03 Thread Kanaka Kumar Avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729846#comment-14729846
 ] 

Kanaka Kumar Avvaru commented on HDFS-9018:
---

{{org.apache.hadoop.fs.XAttr}} is already present in the client project. 
[~wheat9], do you mean to handle it along with any other XAttr-related test 
files?

> Update the pom to add junit dependency and move TestXAttr to client project
> ---
>
> Key: HDFS-9018
> URL: https://issues.apache.org/jira/browse/HDFS-9018
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Kanaka Kumar Avvaru
>Assignee: Kanaka Kumar Avvaru
> Attachments: HDFS-9018.patch
>
>
> Update the pom to add junit dependency and move 
> {{org.apache.hadoop.fs.TestXAttr}}  to client project to start with test 
> movement





[jira] [Commented] (HDFS-8984) Move replication queues related methods in FSNamesystem to BlockManager

2015-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729844#comment-14729844
 ] 

Hadoop QA commented on HDFS-8984:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 4 new or modified test files. |
| {color:green}+1{color} | javac |   7m 50s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 57s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 21s | The applied patch generated  9 
new checkstyle issues (total was 681, now 681). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 31s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 163m  9s | Tests passed in hadoop-hdfs. 
|
| | | 208m 33s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754043/HDFS-8984.004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0ebc658 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12287/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12287/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12287/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12287/console |


This message was automatically generated.

> Move replication queues related methods in FSNamesystem to BlockManager
> ---
>
> Key: HDFS-8984
> URL: https://issues.apache.org/jira/browse/HDFS-8984
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8984.000.patch, HDFS-8984.001.patch, 
> HDFS-8984.002.patch, HDFS-8984.003.patch, HDFS-8984.004.patch
>
>
> Currently {{FSNamesystem}} controls whether the replication queues should be 
> populated based on whether the NN is in safe mode or whether it is an active 
> NN.
> Replication is a concept in the block management layer, so it is more natural 
> to place this functionality in the {{BlockManager}} class.
> This jira proposes to move these methods to the {{BlockManager}}.





[jira] [Commented] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction

2015-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729836#comment-14729836
 ] 

Hadoop QA commented on HDFS-8384:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m  7s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 55s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  3s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 23s | The applied patch generated  1 
new checkstyle issues (total was 273, now 273). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 31s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   2m  8s | Post-patch findbugs 
hadoop-hdfs-project/hadoop-hdfs compilation is broken. |
| {color:green}+1{color} | findbugs |   2m  8s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   0m 24s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 161m 24s | Tests failed in hadoop-hdfs. |
| | | 203m 53s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754045/HDFS-8384.000.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0ebc658 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12286/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12286/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12286/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12286/console |


This message was automatically generated.

> Allow NN to startup if there are files having a lease but are not under 
> construction
> 
>
> Key: HDFS-8384
> URL: https://issues.apache.org/jira/browse/HDFS-8384
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-8384.000.patch
>
>
> When there are files that have a lease but are not under construction, the 
> NN will fail to start up with
> {code}
> 15/05/12 00:36:31 ERROR namenode.FSImage: Unable to save image for 
> /hadoop/hdfs/namenode
> java.lang.IllegalStateException
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager.getINodesUnderConstruction(LeaseManager.java:412)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesUnderConstruction(FSNamesystem.java:7124)
> ...
> {code}
> The actual problem is that the image could be corrupted by bugs like 
> HDFS-7587.  We should have an option/conf to allow the NN to start up so 
> that the problematic files can be deleted.





[jira] [Commented] (HDFS-9021) Use a yellow elephant rather than a blue one in diagram

2015-09-03 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729837#comment-14729837
 ] 

Colin Patrick McCabe commented on HDFS-9021:


Thanks, Andrew.  It is good to see the Apache elephant there.  +1 pending 
jenkins.

> Use a yellow elephant rather than a blue one in diagram
> ---
>
> Key: HDFS-9021
> URL: https://issues.apache.org/jira/browse/HDFS-9021
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HDFS-9021.001.patch
>
>
> We should promote usage of Apache Hadoop by using yellow elephants in our 
> documentation :)





[jira] [Updated] (HDFS-9021) Use a yellow elephant rather than a blue one in diagram

2015-09-03 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9021:
--
Status: Patch Available  (was: Open)

> Use a yellow elephant rather than a blue one in diagram
> ---
>
> Key: HDFS-9021
> URL: https://issues.apache.org/jira/browse/HDFS-9021
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HDFS-9021.001.patch
>
>
> We should promote usage of Apache Hadoop by using yellow elephants in our 
> documentation :)





[jira] [Updated] (HDFS-9021) Use a yellow elephant rather than a blue one in diagram

2015-09-03 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9021:
--
Attachment: HDFS-9021.001.patch

Patch attached, just modifying a png. [~cmccabe] mind an easy review?

> Use a yellow elephant rather than a blue one in diagram
> ---
>
> Key: HDFS-9021
> URL: https://issues.apache.org/jira/browse/HDFS-9021
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HDFS-9021.001.patch
>
>
> We should promote usage of Apache Hadoop by using yellow elephants in our 
> documentation :)





[jira] [Created] (HDFS-9021) Use a yellow elephant rather than a blue one in diagram

2015-09-03 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-9021:
-

 Summary: Use a yellow elephant rather than a blue one in diagram
 Key: HDFS-9021
 URL: https://issues.apache.org/jira/browse/HDFS-9021
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.1
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor


We should promote usage of Apache Hadoop by using yellow elephants in our 
documentation :)





[jira] [Commented] (HDFS-8966) Separate the lock used in namespace and block management layer

2015-09-03 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729826#comment-14729826
 ] 

Jing Zhao commented on HDFS-8966:
-

Yeah, I agree more design details are necessary. In the meantime, I think a 
meeting to discuss the issue and possible solutions would be helpful. Maybe we 
can organize one sometime next week?

> Separate the lock used in namespace and block management layer
> --
>
> Key: HDFS-8966
> URL: https://issues.apache.org/jira/browse/HDFS-8966
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> Currently the namespace and the block management layer share one giant lock. 
> One consequence that we have seen more and more often is that the namespace 
> hangs due to excessive activities from the block management layer. For 
> example, the NN might take a couple hundred milliseconds to handle a large 
> block report. Because the NN holds the write lock while processing the block 
> report, all namespace requests are paused. In production we have seen these 
> lock contentions cause long latencies and instabilities in the cluster.
> This umbrella jira proposes to separate the lock used by namespace and the 
> block management layer.
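
As a minimal sketch of what the proposed lock separation could look like: two 
fair {{ReentrantReadWriteLock}}s with a fixed acquisition order, so 
block-report processing no longer blocks pure namespace reads. The class and 
method names below are invented for illustration; this is not code from any 
HDFS patch.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SplitLocks {
    // Separate locks for the namespace and the block management layer.
    // Fair locks reduce the chance of long writer starvation.
    private final ReentrantReadWriteLock nsLock = new ReentrantReadWriteLock(true);
    private final ReentrantReadWriteLock bmLock = new ReentrantReadWriteLock(true);

    // Block-report processing only takes the block-manager lock, so
    // namespace reads can proceed concurrently.
    void processBlockReport(Runnable work) {
        bmLock.writeLock().lock();
        try { work.run(); } finally { bmLock.writeLock().unlock(); }
    }

    // Pure namespace reads (e.g. a getFileInfo-style lookup) never touch
    // the block-manager lock.
    <T> T readNamespace(java.util.function.Supplier<T> op) {
        nsLock.readLock().lock();
        try { return op.get(); } finally { nsLock.readLock().unlock(); }
    }

    public static void main(String[] args) {
        SplitLocks s = new SplitLocks();
        s.processBlockReport(() -> { });
        System.out.println(s.readNamespace(() -> "namespace read ok"));
    }
}
```

Operations that span both layers would still need to take both locks in a 
fixed order (namespace before block manager) to avoid deadlock.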





[jira] [Commented] (HDFS-9018) Update the pom to add junit dependency and move TestXAttr to client project

2015-09-03 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729817#comment-14729817
 ] 

Haohui Mai commented on HDFS-9018:
--

Maybe it makes more sense to combine this patch with the actual code 
refactoring?

> Update the pom to add junit dependency and move TestXAttr to client project
> ---
>
> Key: HDFS-9018
> URL: https://issues.apache.org/jira/browse/HDFS-9018
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Kanaka Kumar Avvaru
>Assignee: Kanaka Kumar Avvaru
> Attachments: HDFS-9018.patch
>
>
> Update the pom to add a junit dependency and move 
> {{org.apache.hadoop.fs.TestXAttr}} to the client project as a first step in 
> the test movement.





[jira] [Commented] (HDFS-9012) Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module

2015-09-03 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729813#comment-14729813
 ] 

Haohui Mai commented on HDFS-9012:
--

The patch looks good to me.

{code}
+
+  /**
+   * Get the timeout to be used for transmitting the OOB type
+   * @return the timeout in milliseconds
+   */
+  public static long getOOBTimeout(Configuration conf, Status status)
+      throws IOException {
+    final int OOB_START = Status.OOB_RESTART_VALUE; // the first OOB type
+    final int OOB_END = Status.OOB_RESERVED3_VALUE; // the last OOB type
+    final int NUM_OOB_TYPES = OOB_END - OOB_START + 1;
+
+    final int index = status.getNumber() - OOB_START;
+    if (index < 0 || index >= NUM_OOB_TYPES) {
+      // Not an OOB.
+      throw new IOException("Not an OOB status: " + status);
+    }
+
+    // get timeout value of each OOB type from configuration
+    final String[] OOB_TIMEOUT = conf.get(DFS_DATANODE_OOB_TIMEOUT_KEY,
+        DFS_DATANODE_OOB_TIMEOUT_DEFAULT).split(",");
+    return index < OOB_TIMEOUT.length ? Long.parseLong(OOB_TIMEOUT[index]) : 0;
+  }
{code}

It might make more sense to promote the array of {{OOB_TIMEOUT}} to the 
{{DataNode}} class to avoid repetitive queries of the configuration object. 
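
The suggestion above, parsing the configured timeouts once instead of 
re-reading and splitting the configuration string on every call, could look 
roughly like this. This is a standalone sketch with invented names and no 
Hadoop dependencies, not the actual patch code.

```java
// Cache the per-OOB-type timeouts parsed from a comma-separated config
// value (mirroring dfs.datanode.oob.timeout-ms, e.g. "1500,0,0,0").
public class OobTimeouts {
    private final long[] timeouts;  // one entry per OOB type

    OobTimeouts(String confValue, int numOobTypes) {
        timeouts = new long[numOobTypes];
        String[] parts = confValue.split(",");
        for (int i = 0; i < numOobTypes; i++) {
            // OOB types missing from the configuration fall back to 0.
            timeouts[i] = i < parts.length ? Long.parseLong(parts[i]) : 0L;
        }
    }

    long get(int oobIndex) {
        if (oobIndex < 0 || oobIndex >= timeouts.length) {
            throw new IllegalArgumentException("Not an OOB type: " + oobIndex);
        }
        return timeouts[oobIndex];
    }

    public static void main(String[] args) {
        OobTimeouts t = new OobTimeouts("1500,0,0,0", 4);
        System.out.println(t.get(0));  // prints 1500
    }
}
```

An instance would be built once (e.g. as a {{DataNode}} field) when the 
configuration is first read, so {{getOOBTimeout}} becomes a simple array 
lookup.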

> Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client 
> module
> 
>
> Key: HDFS-9012
> URL: https://issues.apache.org/jira/browse/HDFS-9012
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9012.000.patch, HDFS-9012.001.patch
>
>
> The {{org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck}} class is 
> used in client module classes (e.g. {{DataStreamer$ResponseProcessor}} in 
> {{DFSInputStream}} and {{DFSOutputStream}}). This jira tracks the effort of 
> moving this class to the {{hadoop-hdfs-client}} module.
> We should keep the static attribute {{OOB_TIMEOUT}} and helper method 
> {{getOOBTimeout}} in the {{hadoop-hdfs}} module as they're not used (for 
> now) in the {{hadoop-hdfs-client}} module. Meanwhile, we don't need to 
> create the {{HdfsConfiguration}} statically if we can pass in the correct 
> {{conf}} object.
> The checkstyle warnings can be addressed in 
> [HDFS-8979|https://issues.apache.org/jira/browse/HDFS-8979].





[jira] [Commented] (HDFS-9009) Send metrics logs to NullAppender by default

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729790#comment-14729790
 ] 

Hudson commented on HDFS-9009:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2290 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2290/])
HDFS-9009. Send metrics logs to NullAppender by default. (Arpit Agarwal) (arp: 
rev 524ba8708b8e3e17e806748e1f819dec2183bf94)
* hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Send metrics logs to NullAppender by default
> 
>
> Key: HDFS-9009
> URL: https://issues.apache.org/jira/browse/HDFS-9009
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-9009.01.patch, HDFS-9009.02.patch
>
>
> Disable the metrics logger by default by directing logs to the 
> {{NullAppender}}.





[jira] [Commented] (HDFS-8964) When validating the edit log, do not read at or beyond the file offset that is being written

2015-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729791#comment-14729791
 ] 

Hudson commented on HDFS-8964:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2290 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2290/])
HDFS-8964. When validating the edit log, do not read at or beyond the file 
offset that is being written (Zhe Zhang via Colin P. McCabe) (cmccabe: rev 
53c38cc89ab979ec47557dcfa7affbad20578c0a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckPointForSecurityTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> When validating the edit log, do not read at or beyond the file offset that 
> is being written
> 
>
> Key: HDFS-8964
> URL: https://issues.apache.org/jira/browse/HDFS-8964
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8964.00.patch, HDFS-8964.01.patch, 
> HDFS-8964.02.patch, HDFS-8964.03.patch, HDFS-8964.04.patch, 
> HDFS-8964.05.patch, HDFS-8964.06.patch
>
>
> NN/JN validates in-progress edit log files in multiple scenarios, via 
> {{EditLogFile#validateLog}}. The method scans through the edit log file to 
> find the last transaction ID.
> However, an in-progress edit log file could be actively written to, which 
> creates a race condition and causes incorrect data to be read (and later we 
> attempt to interpret the data as ops).  This causes problems for INotify, 
> which reads edit log entries while the edit log is still being written.
> Currently {{validateLog}} is used in 3 places:
> # NN {{getEditsFromTxid}}
> # JN {{getEditLogManifest}}
> # NN/JN {{recoverUnfinalizedSegments}}
> In the first two scenarios we should provide a maximum TxId to validate in 
> the in-progress file. The 3rd scenario won't cause a race condition because 
> only non-current in-progress edit log files are validated.
> {{validateLog}} is actually only used with in-progress files, and could use a 
> better name and Javadoc.





[jira] [Updated] (HDFS-8965) Harden edit log reading code against out of memory errors

2015-09-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-8965:
---
Fix Version/s: (was: 2.8)
   2.8.0

> Harden edit log reading code against out of memory errors
> -
>
> Key: HDFS-8965
> URL: https://issues.apache.org/jira/browse/HDFS-8965
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.8.0
>
> Attachments: HDFS-8965.001.patch, HDFS-8965.002.patch, 
> HDFS-8965.003.patch, HDFS-8965.004.patch, HDFS-8965.005.patch, 
> HDFS-8965.006.patch, HDFS-8965.007.patch
>
>
> We should harden the edit log reading code against out of memory errors.  Now 
> that each op has a length prefix and a checksum, we can validate the checksum 
> before trying to load the Op data.  This should avoid out of memory errors 
> when trying to load garbage data as Op data.
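
The validate-before-allocate idea above can be sketched like this: check the 
length prefix against a sanity bound and verify the checksum before decoding 
the op body, so garbage fails fast instead of driving a huge allocation. The 
record layout, names, and the 50 MB bound are assumptions for illustration, 
not the actual edit log format.

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class SafeOpReader {
    static final int MAX_OP_SIZE = 50 * 1024 * 1024;  // sanity bound

    // Assumed record layout: int length, long crc32, byte[length] body.
    static byte[] readOp(ByteBuffer in) {
        int len = in.getInt();
        // Reject implausible lengths before allocating anything.
        if (len < 0 || len > MAX_OP_SIZE || len > in.remaining() - 8) {
            throw new IllegalStateException("Implausible op length: " + len);
        }
        long expectedCrc = in.getLong();
        byte[] body = new byte[len];  // allocation bounded by MAX_OP_SIZE
        in.get(body);
        CRC32 crc = new CRC32();
        crc.update(body, 0, len);
        // Verify the checksum before the body is interpreted as an op.
        if (crc.getValue() != expectedCrc) {
            throw new IllegalStateException("Checksum mismatch, not an op");
        }
        return body;
    }

    public static void main(String[] args) {
        byte[] body = {1, 2, 3};
        CRC32 c = new CRC32();
        c.update(body, 0, body.length);
        ByteBuffer buf = ByteBuffer.allocate(32);
        buf.putInt(3).putLong(c.getValue()).put(body).flip();
        System.out.println(readOp(buf).length);  // prints 3
    }
}
```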





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-09-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-8155:
---
Fix Version/s: (was: 2.8)
   2.8.0

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Fix For: 2.8.0
>
> Attachments: HDFS-8155-1.patch, HDFS-8155.002.patch, 
> HDFS-8155.003.patch, HDFS-8155.004.patch, HDFS-8155.005.patch, 
> HDFS-8155.006.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.




