[jira] [Commented] (HDFS-8053) Move DFSIn/OutputStream and related classes to hadoop-hdfs-client

2015-09-26 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909119#comment-14909119
 ] 

Haohui Mai commented on HDFS-8053:
--

Thanks for updating the patch! The change looks mostly good to me.

{code}
+public class SecureResources {
+  private final ServerSocket streamingSocket;
+  private final ServerSocketChannel httpServerSocket;
+  public SecureResources(ServerSocket streamingSocket, ServerSocketChannel httpServerSocket) {
+    this.streamingSocket = streamingSocket;
+    this.httpServerSocket = httpServerSocket;
+  }
...
{code}

{{SecureResources}} is used only by DataNodes. It should not be exposed to the 
hdfs-client package. I don't think there is any need to change it or the 
{{TcpPeerServer}}.

> Move DFSIn/OutputStream and related classes to hadoop-hdfs-client
> -
>
> Key: HDFS-8053
> URL: https://issues.apache.org/jira/browse/HDFS-8053
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-8053.000.patch, HDFS-8053.001.patch, 
> HDFS-8053.002.patch, HDFS-8053.003.patch, HDFS-8053.004.patch
>
>
> This jira tracks the effort of moving the {{DFSInputStream}} and 
> {{DFSOutputStream}} classes from {{hadoop-hdfs}} to {{hadoop-hdfs-client}} 
> module.
> Guidelines:
> * As the {{DFSClient}} is heavily coupled to these two classes, we should 
> move it together.
> * Related classes should be addressed in separate jiras if they're 
> independent and complex enough.
> * The checkstyle warnings can be addressed in [HDFS-8979 | 
> https://issues.apache.org/jira/browse/HDFS-8979]
> * Removing the _slf4j_ logger guards when calling {{LOG.debug()}} and 
> {{LOG.trace()}} can be addressed in [HDFS-8971 | 
> https://issues.apache.org/jira/browse/HDFS-8971].
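As a concrete illustration of the last guideline, here is a hedged before/after sketch of removing an slf4j debug guard; it is not code from the patch, and the class and message are made up:
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: with slf4j's parameterized logging, the isDebugEnabled()
// guard is redundant because the message is only formatted when debug logging
// is actually enabled.
public class LogGuardExample {
  private static final Logger LOG = LoggerFactory.getLogger(LogGuardExample.class);

  void before(String dnAddr) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Connecting to datanode " + dnAddr);
    }
  }

  void after(String dnAddr) {
    LOG.debug("Connecting to datanode {}", dnAddr);  // no guard needed
  }
}
{code}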



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8053) Move DFSIn/OutputStream and related classes to hadoop-hdfs-client

2015-09-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-8053:

Attachment: HDFS-8053.005.patch

Thank you [~wheat9] for your comments. I had a look at the usages of the 
{{SecureResources}} class and you're totally correct. I recall that the 
change was introduced early in this work, when I mistakenly thought we needed 
to expose the {{TcpPeerServer}} class to {{hadoop-hdfs-client}}. I put the 
{{*PeerServer}} classes back on the server side but did not revert all the 
dependent changes.

The v5 patch reverts the changes to {{SecureResources}}. Meanwhile, I also 
double-checked that every class rename is necessary in order to move the 
{{DFSIn/OutputStream}} classes to the client side.

> Move DFSIn/OutputStream and related classes to hadoop-hdfs-client
> -
>
> Key: HDFS-8053
> URL: https://issues.apache.org/jira/browse/HDFS-8053
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-8053.000.patch, HDFS-8053.001.patch, 
> HDFS-8053.002.patch, HDFS-8053.003.patch, HDFS-8053.004.patch, 
> HDFS-8053.005.patch
>
>
> This jira tracks the effort of moving the {{DFSInputStream}} and 
> {{DFSOutputStream}} classes from {{hadoop-hdfs}} to {{hadoop-hdfs-client}} 
> module.
> Guidelines:
> * As the {{DFSClient}} is heavily coupled to these two classes, we should 
> move it together.
> * Related classes should be addressed in separate jiras if they're 
> independent and complex enough.
> * The checkstyle warnings can be addressed in [HDFS-8979 | 
> https://issues.apache.org/jira/browse/HDFS-8979]
> * Removing the _slf4j_ logger guards when calling {{LOG.debug()}} and 
> {{LOG.trace()}} can be addressed in [HDFS-8971 | 
> https://issues.apache.org/jira/browse/HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8873) Allow the directoryScanner to be rate-limited

2015-09-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8873:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> Allow the directoryScanner to be rate-limited
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Fix For: 2.8.0
>
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch, HDFS-8873.008.patch, 
> HDFS-8873.009.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, a full directory listing requires 64K seeks, which translates to 
> 655 seconds).
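For readers unfamiliar with the duty-cycle approach referenced above, here is a minimal, hedged sketch of the idea; it is not the DirectoryScanner code from the patch, and the class name and numbers are made up for illustration:
{code}
// Minimal sketch of a duty-cycle throttle: allow at most runMsPerCycle
// milliseconds of scan work per one-second cycle, then sleep for the rest
// of that second.
public final class ScanThrottle {
  private final long runMsPerCycle;
  private long cycleStartMs = System.currentTimeMillis();

  public ScanThrottle(long runMsPerCycle) {
    this.runMsPerCycle = runMsPerCycle;   // e.g. 500 == 50% duty cycle
  }

  /** Call between small units of scan work. */
  public void throttle() throws InterruptedException {
    long elapsed = System.currentTimeMillis() - cycleStartMs;
    if (elapsed >= runMsPerCycle) {
      long remaining = 1000L - elapsed;
      if (remaining > 0) {
        Thread.sleep(remaining);          // give the disk back to clients
      }
      cycleStartMs = System.currentTimeMillis();
    }
  }
}
{code}
The committed patch exposes this kind of budget through a configuration key in hdfs-default.xml, one of the files touched by the commit.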



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909192#comment-14909192
 ] 

Hadoop QA commented on HDFS-9139:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |  20m 19s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 21 new or modified test files. |
| {color:green}+1{color} | javac |   8m  8s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  2s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 16s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | shellcheck |   0m 10s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:red}-1{color} | whitespace |   0m  3s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 25s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |   7m 48s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests |  52m 50s | Tests failed in hadoop-hdfs. |
| | | 108m 34s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762445/HDFS-9139.01.patch |
| Optional Tests | shellcheck javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7fe521b |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12689/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12689/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12689/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12689/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12689/console |


This message was automatically generated.

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9139.01.patch
>
>
> Forked from HADOOP-11984. 
> Building on the initial and significant work from [~cnauroth], this Jira tracks 
> and supports parallel test runs for the HDFS precommit build.
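For context, this builds on the parallel-test support HADOOP-11984 added for hadoop-common, where surefire runs test classes in several forked JVMs. An illustrative invocation (the profile and property names are assumed from that jira; check the module poms for the exact names):
{noformat}
mvn test -Pparallel-tests -DtestsThreadCount=4
{noformat}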



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8873) Allow the directoryScanner to be rate-limited

2015-09-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8873:
---
Summary: Allow the directoryScanner to be rate-limited  (was: throttle 
directoryScanner)

> Allow the directoryScanner to be rate-limited
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch, HDFS-8873.008.patch, 
> HDFS-8873.009.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, a full directory listing requires 64K seeks, which translates to 
> 655 seconds).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8873) Allow the directoryScanner to be rate-limited

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909221#comment-14909221
 ] 

Hudson commented on HDFS-8873:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #450 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/450/])
HDFS-8873. Allow the directoryScanner to be rate-limited (Daniel Templeton via 
Colin P. McCabe) (cmccabe: rev 7a3c381b39887a02e944fa98287afd0eb4db3560)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Allow the directoryScanner to be rate-limited
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Fix For: 2.8.0
>
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch, HDFS-8873.008.patch, 
> HDFS-8873.009.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, a full directory listing requires 64K seeks, which translates to 
> 655 seconds).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9125) Display help if the command option to "hdfs dfs " is not valid

2015-09-26 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909182#comment-14909182
 ] 

nijel commented on HDFS-9125:
-

Thanks [~templedf] for your time.
Updated the patch to address the comment, along with a minor change in the command prefix.
Please check.

> Display help if the  command option to "hdfs dfs " is not valid
> ---
>
> Key: HDFS-9125
> URL: https://issues.apache.org/jira/browse/HDFS-9125
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: nijel
>Priority: Minor
> Attachments: HDFS-9125_1.patch, HDFS-9125_2.patch
>
>
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs dfs -mkdirs
> -mkdirs: Unknown command
> {noformat}
> Better to display the help info.
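A minimal, hedged sketch of the requested behavior (not the actual FsShell code; the command table and usage text here are made up): on an unknown command, print the available commands and usage instead of stopping at the error.
{code}
import java.util.Map;
import java.util.TreeMap;

public class ShellUsageSketch {
  private static final Map<String, Runnable> COMMANDS = new TreeMap<>();
  static {
    COMMANDS.put("-mkdir", () -> System.out.println("would run mkdir"));
    COMMANDS.put("-ls", () -> System.out.println("would run ls"));
  }

  public static void main(String[] args) {
    String cmd = args.length > 0 ? args[0] : "";
    Runnable command = COMMANDS.get(cmd);
    if (command == null) {
      System.err.println(cmd + ": Unknown command");
      // The improvement discussed here: also show what is available.
      System.err.println("Usage: hadoop fs [generic options] " + COMMANDS.keySet());
    } else {
      command.run();
    }
  }
}
{code}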



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9125) Display help if the command option to "hdfs dfs " is not valid

2015-09-26 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9125:

Attachment: HDFS-9125_2.patch

> Display help if the  command option to "hdfs dfs " is not valid
> ---
>
> Key: HDFS-9125
> URL: https://issues.apache.org/jira/browse/HDFS-9125
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: nijel
>Priority: Minor
> Attachments: HDFS-9125_1.patch, HDFS-9125_2.patch
>
>
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs dfs -mkdirs
> -mkdirs: Unknown command
> {noformat}
> Better to display the help info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8873) Allow the directoryScanner to be rate-limited

2015-09-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909206#comment-14909206
 ] 

Colin Patrick McCabe commented on HDFS-8873:


+1.  Thanks, [~templedf].  The test failures are unrelated (just more 
noclassfound jenkins environment issues), and all pass for me locally.  
Committed to 2.8 and trunk

> Allow the directoryScanner to be rate-limited
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch, HDFS-8873.008.patch, 
> HDFS-8873.009.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, a full directory listing requires 64K seeks, which translates to 
> 655 seconds).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9087) Add some jitter to DataNode.checkDiskErrorThread

2015-09-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909207#comment-14909207
 ] 

Colin Patrick McCabe commented on HDFS-9087:


+1.  Thanks, [~eclark].

> Add some jitter to DataNode.checkDiskErrorThread
> 
>
> Key: HDFS-9087
> URL: https://issues.apache.org/jira/browse/HDFS-9087
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HDFS-9087-v0.patch, HDFS-9087-v1.patch, 
> HDFS-9087-v2.patch, HDFS-9087-v3.patch
>
>
> If all datanodes are started across a cluster at the same time (or errors in 
> the network cause ioexceptions) there can be storms where lots of datanodes 
> check their disks at the exact same time.
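A minimal sketch of the jitter idea (not the actual DataNode.checkDiskErrorThread code; the interval and jitter bound are made up): instead of every node waiting exactly the same interval, each waits a randomized extra amount so the checks spread out across the cluster.
{code}
import java.util.concurrent.ThreadLocalRandom;

public class JitteredDiskCheck {
  private static final long BASE_INTERVAL_MS = 60_000;  // illustrative
  private static final long MAX_JITTER_MS = 10_000;     // illustrative

  public static void main(String[] args) throws InterruptedException {
    while (true) {
      long jitter = ThreadLocalRandom.current().nextLong(MAX_JITTER_MS);
      Thread.sleep(BASE_INTERVAL_MS + jitter);  // same base, different offsets
      checkDisks();
    }
  }

  private static void checkDisks() {
    System.out.println("checking local volumes...");
  }
}
{code}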



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8873) Allow the directoryScanner to be rate-limited

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909216#comment-14909216
 ] 

Hudson commented on HDFS-8873:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8523 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8523/])
HDFS-8873. Allow the directoryScanner to be rate-limited (Daniel Templeton via 
Colin P. McCabe) (cmccabe: rev 7a3c381b39887a02e944fa98287afd0eb4db3560)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java


> Allow the directoryScanner to be rate-limited
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Fix For: 2.8.0
>
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch, HDFS-8873.008.patch, 
> HDFS-8873.009.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, a full directory listing requires 64K seeks, which translates to 
> 655 seconds).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8873) Allow the directoryScanner to be rate-limited

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909246#comment-14909246
 ] 

Hudson commented on HDFS-8873:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #444 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/444/])
HDFS-8873. Allow the directoryScanner to be rate-limited (Daniel Templeton via 
Colin P. McCabe) (cmccabe: rev 7a3c381b39887a02e944fa98287afd0eb4db3560)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Allow the directoryScanner to be rate-limited
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Fix For: 2.8.0
>
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch, HDFS-8873.008.patch, 
> HDFS-8873.009.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, a full directory listing requires 64K seeks, which translates to 
> 655 seconds).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8053) Move DFSIn/OutputStream and related classes to hadoop-hdfs-client

2015-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909201#comment-14909201
 ] 

Hadoop QA commented on HDFS-8053:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 39s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 8 new or modified test files. |
| {color:green}+1{color} | javac |   7m 48s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  1s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 25s | The applied patch generated  
266 new checkstyle issues (total was 24, now 290). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 24s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  9s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 191m 12s | Tests passed in hadoop-hdfs. 
|
| {color:green}+1{color} | hdfs tests |   0m 30s | Tests passed in 
hadoop-hdfs-client. |
| | | 241m 48s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762508/HDFS-8053.005.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7fe521b |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12688/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12688/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12688/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12688/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12688/console |


This message was automatically generated.

> Move DFSIn/OutputStream and related classes to hadoop-hdfs-client
> -
>
> Key: HDFS-8053
> URL: https://issues.apache.org/jira/browse/HDFS-8053
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-8053.000.patch, HDFS-8053.001.patch, 
> HDFS-8053.002.patch, HDFS-8053.003.patch, HDFS-8053.004.patch, 
> HDFS-8053.005.patch
>
>
> This jira tracks the effort of moving the {{DFSInputStream}} and 
> {{DFSOutputStream}} classes from {{hadoop-hdfs}} to {{hadoop-hdfs-client}} 
> module.
> Guidelines:
> * As the {{DFSClient}} is heavily coupled to these two classes, we should 
> move it together.
> * Related classes should be addressed in separate jiras if they're 
> independent and complex enough.
> * The checkstyle warnings can be addressed in [HDFS-8979 | 
> https://issues.apache.org/jira/browse/HDFS-8979]
> * Removing the _slf4j_ logger guards when calling {{LOG.debug()}} and 
> {{LOG.trace()}} can be addressed in [HDFS-8971 | 
> https://issues.apache.org/jira/browse/HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909170#comment-14909170
 ] 

Hadoop QA commented on HDFS-9139:
-

(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDFS-Build/12689/console in case of 
problems.

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9139.01.patch
>
>
> Forked from HADOOP-11984. 
> Building on the initial and significant work from [~cnauroth], this Jira tracks 
> and supports parallel test runs for the HDFS precommit build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9125) Display help if the command option to "hdfs dfs " is not valid

2015-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909204#comment-14909204
 ] 

Hadoop QA commented on HDFS-9125:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 13s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 18s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  2s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  9s | The applied patch generated  1 
new checkstyle issues (total was 21, now 21). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 53s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  22m 33s | Tests failed in 
hadoop-common. |
| | |  63m 40s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.cli.TestCLI |
|   | hadoop.fs.TestLocalFsFCStatistics |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762512/HDFS-9125_2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7fe521b |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12690/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12690/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12690/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12690/console |


This message was automatically generated.

> Display help if the  command option to "hdfs dfs " is not valid
> ---
>
> Key: HDFS-9125
> URL: https://issues.apache.org/jira/browse/HDFS-9125
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: nijel
>Priority: Minor
> Attachments: HDFS-9125_1.patch, HDFS-9125_2.patch
>
>
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs dfs -mkdirs
> -mkdirs: Unknown command
> {noformat}
> Better to display the help info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9125) Display help if the command option to "hdfs dfs " is not valid

2015-09-26 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9125:

Attachment: HDFS-9125_3.patch

> Display help if the  command option to "hdfs dfs " is not valid
> ---
>
> Key: HDFS-9125
> URL: https://issues.apache.org/jira/browse/HDFS-9125
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: nijel
>Priority: Minor
> Attachments: HDFS-9125_1.patch, HDFS-9125_2.patch, HDFS-9125_3.patch
>
>
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs dfs -mkdirs
> -mkdirs: Unknown command
> {noformat}
> Better to display the help info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9125) Display help if the command option to "hdfs dfs " is not valid

2015-09-26 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909224#comment-14909224
 ] 

nijel commented on HDFS-9125:
-

Updated the patch with the checkstyle fix and a fix for the test failure in 
"org.apache.hadoop.cli.TestCLI.testAll".

bq. 
org.apache.hadoop.fs.TestLocalFsFCStatistics.testStatisticsThreadLocalDataCleanUp
This failure is unrelated and passes locally.

Please review.
Thanks.

> Display help if the  command option to "hdfs dfs " is not valid
> ---
>
> Key: HDFS-9125
> URL: https://issues.apache.org/jira/browse/HDFS-9125
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: nijel
>Priority: Minor
> Attachments: HDFS-9125_1.patch, HDFS-9125_2.patch, HDFS-9125_3.patch
>
>
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs dfs -mkdirs
> -mkdirs: Unknown command
> {noformat}
> Better to display the help info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9125) Display help if the command option to "hdfs dfs " is not valid

2015-09-26 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909236#comment-14909236
 ] 

Vinayakumar B commented on HDFS-9125:
-

Also, it looks like the fix is on the Hadoop Common side alone, so it would be 
better to move this jira to Hadoop Common.

> Display help if the  command option to "hdfs dfs " is not valid
> ---
>
> Key: HDFS-9125
> URL: https://issues.apache.org/jira/browse/HDFS-9125
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: nijel
>Priority: Minor
> Attachments: HDFS-9125_1.patch, HDFS-9125_2.patch, HDFS-9125_3.patch
>
>
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs dfs -mkdirs
> -mkdirs: Unknown command
> {noformat}
> Better to display the help info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8873) Allow the directoryScanner to be rate-limited

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909309#comment-14909309
 ] 

Hudson commented on HDFS-8873:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #421 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/421/])
HDFS-8873. Allow the directoryScanner to be rate-limited (Daniel Templeton via 
Colin P. McCabe) (cmccabe: rev 7a3c381b39887a02e944fa98287afd0eb4db3560)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java


> Allow the directoryScanner to be rate-limited
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Fix For: 2.8.0
>
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch, HDFS-8873.008.patch, 
> HDFS-8873.009.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, a full directory listing requires 64K seeks, which translates to 
> 655 seconds).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8053) Move DFSIn/OutputStream and related classes to hadoop-hdfs-client

2015-09-26 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909389#comment-14909389
 ] 

Haohui Mai commented on HDFS-8053:
--

+1

> Move DFSIn/OutputStream and related classes to hadoop-hdfs-client
> -
>
> Key: HDFS-8053
> URL: https://issues.apache.org/jira/browse/HDFS-8053
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-8053.000.patch, HDFS-8053.001.patch, 
> HDFS-8053.002.patch, HDFS-8053.003.patch, HDFS-8053.004.patch, 
> HDFS-8053.005.patch
>
>
> This jira tracks the effort of moving the {{DFSInputStream}} and 
> {{DFSOutputStream}} classes from {{hadoop-hdfs}} to {{hadoop-hdfs-client}} 
> module.
> Guidelines:
> * As the {{DFSClient}} is heavily coupled to these two classes, we should 
> move it together.
> * Related classes should be addressed in separate jiras if they're 
> independent and complex enough.
> * The checkstyle warnings can be addressed in [HDFS-8979 | 
> https://issues.apache.org/jira/browse/HDFS-8979]
> * Removing the _slf4j_ logger guards when calling {{LOG.debug()}} and 
> {{LOG.trace()}} can be addressed in [HDFS-8971 | 
> https://issues.apache.org/jira/browse/HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8053) Move DFSIn/OutputStream and related classes to hadoop-hdfs-client

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909399#comment-14909399
 ] 

Hudson commented on HDFS-8053:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8525 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8525/])
HDFS-8053. Move DFSIn/OutputStream and related classes to hadoop-hdfs-client. 
Contributed by Mingliang Liu. (wheat9: rev 
bf37d3d80e5179dea27e5bd5aea804a38aa9934c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestCrcCorruption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSPacket.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/QuotaByStorageTypeExceededException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/ReplaceDatanodeOnFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/QuotaByStorageTypeExceededException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/UnknownCryptoProtocolVersionException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSHedgedReadMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSClusterWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/ReplaceDatanodeOnFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/UnknownCipherSuiteException.java
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemotePeerFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockMissingException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/BlockMissingException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskAsyncLazyPersistService.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/HdfsBlockLocation.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSHedgedReadMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
* 

[jira] [Commented] (HDFS-8873) Allow the directoryScanner to be rate-limited

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909279#comment-14909279
 ] 

Hudson commented on HDFS-8873:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1183 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1183/])
HDFS-8873. Allow the directoryScanner to be rate-limited (Daniel Templeton via 
Colin P. McCabe) (cmccabe: rev 7a3c381b39887a02e944fa98287afd0eb4db3560)
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java


> Allow the directoryScanner to be rate-limited
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Fix For: 2.8.0
>
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch, HDFS-8873.008.patch, 
> HDFS-8873.009.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, a full directory listing requires 64K seeks, which translates to 
> 655 seconds).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8873) Allow the directoryScanner to be rate-limited

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909306#comment-14909306
 ] 

Hudson commented on HDFS-8873:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2361 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2361/])
HDFS-8873. Allow the directoryScanner to be rate-limited (Daniel Templeton via 
Colin P. McCabe) (cmccabe: rev 7a3c381b39887a02e944fa98287afd0eb4db3560)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Allow the directoryScanner to be rate-limited
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Fix For: 2.8.0
>
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch, HDFS-8873.008.patch, 
> HDFS-8873.009.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, a full directory listing requires 64K seeks, which translates to 
> 655 seconds).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6264) Provide FileSystem#create() variant which throws exception if parent directory doesn't exist

2015-09-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909349#comment-14909349
 ] 

Ted Yu commented on HDFS-6264:
--

w.r.t. the two checkstyle warnings, the count didn't change with vs. without 
the patch.

[~kihwal]: Mind taking one more look ?

> Provide FileSystem#create() variant which throws exception if parent 
> directory doesn't exist
> 
>
> Key: HDFS-6264
> URL: https://issues.apache.org/jira/browse/HDFS-6264
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: hbase
> Attachments: hdfs-6264-v1.txt, hdfs-6264-v2.txt
>
>
> FileSystem#createNonRecursive() is deprecated.
> However, there is no DistributedFileSystem#create() implementation which 
> throws exception if parent directory doesn't exist.
> This limits clients' migration away from the deprecated method.
> For HBase, IO fencing relies on the behavior of 
> FileSystem#createNonRecursive().
> Variant of create() method should be added which throws exception if parent 
> directory doesn't exist.
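To make the behavioral difference concrete, a hedged sketch (not the proposed patch; the createNonRecursive overload is quoted from memory and worth checking against the FileSystem javadoc):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateNonRecursiveExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p1 = new Path("/tmp/demo/missing-parent-1/file.txt");
    Path p2 = new Path("/tmp/demo/missing-parent-2/file.txt");

    // create() silently creates the missing parent directories.
    try (FSDataOutputStream out = fs.create(p1, true)) {
      out.writeUTF("ok");
    }

    // The deprecated createNonRecursive() throws when the parent is missing,
    // which is the behavior HBase's IO fencing relies on.
    try (FSDataOutputStream out =
             fs.createNonRecursive(p2, true, 4096, (short) 1, 64L << 20, null)) {
      out.writeUTF("not reached when the parent is missing");
    }
  }
}
{code}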



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9064) NN old UI (block_info_xml) not available in 2.7.x

2015-09-26 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909346#comment-14909346
 ] 

Vinayakumar B commented on HDFS-9064:
-

[~wheat9], 
Thanks for the info.

So what should be the conclusion? 
Should we provide this despite the security issue and inaccuracy?

This jira is marked critical for 2.7.2, so we should make some progress on it.

> NN old UI (block_info_xml) not available in 2.7.x
> -
>
> Key: HDFS-9064
> URL: https://issues.apache.org/jira/browse/HDFS-9064
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Kanaka Kumar Avvaru
>Priority: Critical
>
> In 2.6.x hadoop deploys, given a blockId it was very easy to find out the 
> file name and the locations of replicas (also whether they are corrupt or 
> not).
> This was the REST call:
> {noformat}
>  http://:/block_info_xml.jsp?blockId=xxx
> {noformat}
> But this was removed by HDFS-6252 in 2.7 builds.
> Creating this jira to restore that functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909373#comment-14909373
 ] 

Hadoop QA commented on HDFS-9139:
-

(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDFS-Build/12692/console in case of 
problems.

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9139.01.patch
>
>
> Forked from HADOOP-11984. 
> Building on the initial and significant work from [~cnauroth], this Jira tracks 
> and supports parallel test runs for the HDFS precommit build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8739) Move DFSClient to hadoop-hdfs-client

2015-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8739:
-
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Closing this as a duplicate of HDFS-8053.

> Move DFSClient to hadoop-hdfs-client
> 
>
> Key: HDFS-8739
> URL: https://issues.apache.org/jira/browse/HDFS-8739
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Yi Liu
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8739-002.patch, HDFS-8739-003.patch, HDFS-8739.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9125) Display help if the command option to "hdfs dfs " is not valid

2015-09-26 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909235#comment-14909235
 ] 

Vinayakumar B commented on HDFS-9125:
-

Thanks [~nijel] for the patch.

You cannot change the usage from 'fs' to 'dfs'. It may look correct to change it, 
since from the HDFS perspective we invoke FsShell via 'hdfs dfs', but FsShell is 
generic and any other FileSystem can use this shell; other file systems invoke it 
via 'hadoop fs'. The actual FileSystem used depends on the 'fs.defaultFS' 
configuration or the "-fs" argument from the client side (see the sketch below).

So you should keep the usage as before and update the new test accordingly. The 
other test failure fix may also be unnecessary after reverting the usage change.
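A small illustration of that point (hedged; the host and port below are made up): the FileSystem a generic shell or client operates on is resolved from configuration, not from the launcher script.
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class DefaultFsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Whatever fs.defaultFS points at (file://, hdfs://, ...) is what you get:
    FileSystem byDefault = FileSystem.get(conf);
    // Or pick one explicitly, the way the "-fs <uri>" generic option does:
    FileSystem explicit = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
    System.out.println(byDefault.getUri() + " / " + explicit.getUri());
  }
}
{code}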

> Display help if the  command option to "hdfs dfs " is not valid
> ---
>
> Key: HDFS-9125
> URL: https://issues.apache.org/jira/browse/HDFS-9125
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: nijel
>Priority: Minor
> Attachments: HDFS-9125_1.patch, HDFS-9125_2.patch, HDFS-9125_3.patch
>
>
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs dfs -mkdirs
> -mkdirs: Unknown command
> {noformat}
> Better to display the help info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7766) Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect

2015-09-26 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-7766:
---
Attachment: HDFS-7766.04.patch

I wasn't able to reproduce the unit test failure. Here's a patch with the other 
fixes.

> Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect
> --
>
> Key: HDFS-7766
> URL: https://issues.apache.org/jira/browse/HDFS-7766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-7766.01.patch, HDFS-7766.02.patch, 
> HDFS-7766.03.patch, HDFS-7766.04.patch
>
>
> Please see 
> https://issues.apache.org/jira/browse/HDFS-7588?focusedCommentId=14276192=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14276192
> A backwards-compatible way to fix this is to add a flag on the request 
> which would disable the redirect, i.e.
> {noformat}
> curl -i -X PUT 
> "http://:/webhdfs/v1/?op=CREATE[=]
> {noformat}
> returns 200 with the DN location in the response.
> This would allow browser clients to get the redirect URL to put the file 
> to.
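To make the proposal concrete, the two flows might look roughly like this; the flag name below is purely illustrative, since the real parameter is whatever the patch defines:
{noformat}
# Today: the NN answers op=CREATE with a 307 redirect to a DN.
curl -i -X PUT "http://<NN>:<PORT>/webhdfs/v1/tmp/f.txt?op=CREATE"

# Proposed: ask the NN not to redirect and return the DN location in the
# 200 response body instead (flag name illustrative).
curl -i -X PUT "http://<NN>:<PORT>/webhdfs/v1/tmp/f.txt?op=CREATE&noredirect=true"
{noformat}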



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8053) Move DFSIn/OutputStream and related classes to hadoop-hdfs-client

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909526#comment-14909526
 ] 

Hudson commented on HDFS-8053:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2363 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2363/])
HDFS-8053. Move DFSIn/OutputStream and related classes to hadoop-hdfs-client. 
Contributed by Mingliang Liu. (wheat9: rev 
bf37d3d80e5179dea27e5bd5aea804a38aa9934c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSHedgedReadMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/AclException.java
* hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSPacket.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/BlockMissingException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/UnresolvedPathException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockMissingException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/UnknownCipherSuiteException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/fs/HdfsBlockLocation.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSHedgedReadMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/AclException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/ReplaceDatanodeOnFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/UnresolvedPathException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/inotify/MissingEventsException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
* 

[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909497#comment-14909497
 ] 

Hadoop QA commented on HDFS-9139:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:red}-1{color} | pre-patch |  19m 47s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 21 new or modified test files. |
| {color:green}+1{color} | javac |   8m  4s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 13s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 17s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | shellcheck |   0m  8s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:red}-1{color} | whitespace |   0m  3s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 31s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |   7m 50s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests |  53m 22s | Tests failed in hadoop-hdfs. |
| | | 108m 57s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762445/HDFS-9139.01.patch |
| Optional Tests | shellcheck javadoc javac unit findbugs checkstyle |
| git revision | trunk / bf37d3d |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12694/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12694/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12694/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12694/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12694/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12694/console |


This message was automatically generated.

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9139.01.patch
>
>
> Forked from HADOOP-11984. 
> With the initial and significant work from [~cnauroth], this Jira is to track 
> and support parallel test runs for HDFS Precommit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8740) Move DistributedFileSystem to hadoop-hdfs-client

2015-09-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-8740:

Attachment: (was: HDFS-8740.000.patch)

> Move DistributedFileSystem to hadoop-hdfs-client
> 
>
> Key: HDFS-8740
> URL: https://issues.apache.org/jira/browse/HDFS-8740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Yi Liu
>Assignee: Mingliang Liu
>
> This jira tracks efforts of moving 
> {{org.apache.hadoop.hdfs.DistributedFileSystem}} class from {{hadoop-hdfs}} 
> to {{hadoop-hdfs-client}} module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8053) Move DFSIn/OutputStream and related classes to hadoop-hdfs-client

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909502#comment-14909502
 ] 

Hudson commented on HDFS-8053:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #423 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/423/])
HDFS-8053. Move DFSIn/OutputStream and related classes to hadoop-hdfs-client. 
Contributed by Mingliang Liu. (wheat9: rev 
bf37d3d80e5179dea27e5bd5aea804a38aa9934c)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/ReplaceDatanodeOnFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/UnknownCipherSuiteException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/QuotaByStorageTypeExceededException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/UnknownCryptoProtocolVersionException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/inotify/MissingEventsException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSPacket.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockMissingException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemotePeerFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/UnknownCryptoProtocolVersionException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/AclException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestCrcCorruption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSPacket.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSHedgedReadMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/UnresolvedPathException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSHedgedReadMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
* 

[jira] [Updated] (HDFS-9080) update htrace version to 4.0.1

2015-09-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-9080:
---
Summary: update htrace version to 4.0.1  (was: update htrace version to 4.0)

> update htrace version to 4.0.1
> --
>
> Key: HDFS-9080
> URL: https://issues.apache.org/jira/browse/HDFS-9080
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-9080.001.patch, HDFS-9080.002.patch, 
> HDFS-9080.003.patch, HDFS-9080.004.patch, HDFS-9080.005.patch, 
> HDFS-9080.006.patch, HDFS-9080.007.patch, HDFS-9080.009.patch, 
> HDFS-9080.010.patch, tracing-fsshell-put.png
>
>
> Update the HTrace library version Hadoop uses to htrace 4.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8740) Move DistributedFileSystem to hadoop-hdfs-client

2015-09-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-8740:

Attachment: HDFS-8740.000.patch

The v0 patch moves the {{DistributedFileSystem}} to the {{hadoop-hdfs-client}} 
module.

> Move DistributedFileSystem to hadoop-hdfs-client
> 
>
> Key: HDFS-8740
> URL: https://issues.apache.org/jira/browse/HDFS-8740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Yi Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-8740.000.patch
>
>
> This jira tracks efforts of moving 
> {{org.apache.hadoop.hdfs.DistributedFileSystem}} class from {{hadoop-hdfs}} 
> to {{hadoop-hdfs-client}} module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9080) update htrace version to 4.0.1

2015-09-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-9080:
---
Attachment: HDFS-9080.011.patch

> update htrace version to 4.0.1
> --
>
> Key: HDFS-9080
> URL: https://issues.apache.org/jira/browse/HDFS-9080
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-9080.001.patch, HDFS-9080.002.patch, 
> HDFS-9080.003.patch, HDFS-9080.004.patch, HDFS-9080.005.patch, 
> HDFS-9080.006.patch, HDFS-9080.007.patch, HDFS-9080.009.patch, 
> HDFS-9080.010.patch, HDFS-9080.011.patch, tracing-fsshell-put.png
>
>
> Update the HTrace library version Hadoop uses to htrace 4.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8740) Move DistributedFileSystem to hadoop-hdfs-client

2015-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909544#comment-14909544
 ] 

Hadoop QA commented on HDFS-8740:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  19m 16s | Findbugs (version 3.0.0) 
appears to be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 57s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  12m  1s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 25s | The applied patch generated  
45 new checkstyle issues (total was 0, now 45). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   2m 20s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 42s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   6m 51s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 39s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 103m 55s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 34s | Tests passed in 
hadoop-hdfs-client. |
| | | 161m  8s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.web.TestWebHDFSOAuth2 |
| Timed out tests | org.apache.hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762559/HDFS-8740.000.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / bf37d3d |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12696/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12696/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12696/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12696/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12696/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12696/console |


This message was automatically generated.

> Move DistributedFileSystem to hadoop-hdfs-client
> 
>
> Key: HDFS-8740
> URL: https://issues.apache.org/jira/browse/HDFS-8740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Yi Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-8740.000.patch
>
>
> This jira tracks efforts of moving 
> {{org.apache.hadoop.hdfs.DistributedFileSystem}} class from {{hadoop-hdfs}} 
> to {{hadoop-hdfs-client}} module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8053) Move DFSIn/OutputStream and related classes to hadoop-hdfs-client

2015-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8053:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~liuml07] for the 
contribution.

> Move DFSIn/OutputStream and related classes to hadoop-hdfs-client
> -
>
> Key: HDFS-8053
> URL: https://issues.apache.org/jira/browse/HDFS-8053
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-8053.000.patch, HDFS-8053.001.patch, 
> HDFS-8053.002.patch, HDFS-8053.003.patch, HDFS-8053.004.patch, 
> HDFS-8053.005.patch
>
>
> This jira tracks the effort of moving the {{DFSInputStream}} and 
> {{DFSOutputSream}} classes from {{hadoop-hdfs}} to {{hadoop-hdfs-client}} 
> module.
> Guidelines:
> * As the {{DFSClient}} is heavily coupled to these two classes, we should 
> move it together.
> * Related classes should be addressed in separate jiras if they're 
> independent and complex enough.
> * The checkstyle warnings can be addressed in [HDFS-8979 | 
> https://issues.apache.org/jira/browse/HDFS-8979]
> * Removing the _slf4j_ logger guards when calling {{LOG.debug()}} and 
> {{LOG.trace()}} can be addressed in [HDFS-8971 | 
> https://issues.apache.org/jira/browse/HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-09-26 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909421#comment-14909421
 ] 

Brahma Reddy Battula commented on HDFS-8647:


 {quote}Maybe we can move hasClusterEverBeenMultiRack from DatanodeManager to 
NetworkTopology? Then BlockPlacementPolicyDefault's verifyBlockPlacement can 
ask clusterMap if the cluster has ever been multi rack. In that way, we 
completely remove the multi rack reference from BlockManager. {quote}

Agree with you.

Uploaded the patch to address all the comments. Kindly review.
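
To make the suggested refactoring concrete, here is a minimal, self-contained sketch of 
the idea; the {{ClusterTopology}} and {{DefaultPlacementPolicy}} classes below are 
simplified stand-ins, not the real {{NetworkTopology}} / {{BlockPlacementPolicyDefault}} 
APIs:

{code}
// Sketch only: the topology remembers whether the cluster has ever spanned more than
// one rack, so a placement policy can ask the topology instead of BlockManager.
import java.util.HashSet;
import java.util.Set;

class ClusterTopology {                       // stand-in for NetworkTopology
  private final Set<String> racks = new HashSet<>();
  private volatile boolean everMultiRack = false;

  synchronized void addNode(String rack) {    // called when a datanode registers
    racks.add(rack);
    if (racks.size() > 1) {
      everMultiRack = true;                   // sticky: stays true even if nodes leave
    }
  }

  boolean hasClusterEverBeenMultiRack() {
    return everMultiRack;
  }
}

class DefaultPlacementPolicy {                // stand-in for BlockPlacementPolicyDefault
  private final ClusterTopology clusterMap;

  DefaultPlacementPolicy(ClusterTopology clusterMap) {
    this.clusterMap = clusterMap;
  }

  /** Enough racks iff the cluster has never been multi-rack, or the block spans >1 rack. */
  boolean hasEnoughRacks(Set<String> racksOfReplicas) {
    if (!clusterMap.hasClusterEverBeenMultiRack()) {
      return true;                            // single-rack cluster: nothing more to check
    }
    return racksOfReplicas.size() > 1;
  }
}
{code}

The key point is that the multi-rack state lives next to the topology, so the placement 
policy can answer rack-coverage questions without reaching back into BlockManager.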

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch
>
>
> Sometimes we want to have namenode use alternative block placement policy 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumption about rack policy in functions such as 
> useDelHint, blockHasEnoughRacks. That means when we have new block placement 
> policy, we need to modify BlockManager to account for the new policy. Ideally 
> BlockManager should ask BlockPlacementPolicy object instead. That will allow 
> us to provide new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8053) Move DFSIn/OutputStream and related classes to hadoop-hdfs-client

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909496#comment-14909496
 ] 

Hudson commented on HDFS-8053:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2390 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2390/])
HDFS-8053. Move DFSIn/OutputStream and related classes to hadoop-hdfs-client. 
Contributed by Mingliang Liu. (wheat9: rev 
bf37d3d80e5179dea27e5bd5aea804a38aa9934c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockMissingException.java
* hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/BlockMissingException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/UnresolvedPathException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemotePeerFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/ReplaceDatanodeOnFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/QuotaByStorageTypeExceededException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSHedgedReadMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/HdfsBlockLocation.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/UnknownCryptoProtocolVersionException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/RemotePeerFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskAsyncLazyPersistService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsConfigurationLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/AclException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSHedgedReadMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/inotify/MissingEventsException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/AclException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/UnknownCipherSuiteException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSPacket.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSClusterWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/QuotaByStorageTypeExceededException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolIterator.java
* 

[jira] [Updated] (HDFS-8740) Move DistributedFileSystem to hadoop-hdfs-client

2015-09-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-8740:

Status: Patch Available  (was: Reopened)

> Move DistributedFileSystem to hadoop-hdfs-client
> 
>
> Key: HDFS-8740
> URL: https://issues.apache.org/jira/browse/HDFS-8740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Yi Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-8740.000.patch
>
>
> This jira tracks efforts of moving 
> {{org.apache.hadoop.hdfs.DistributedFileSystem}} class from {{hadoop-hdfs}} 
> to {{hadoop-hdfs-client}} module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8740) Move DistributedFileSystem to hadoop-hdfs-client

2015-09-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-8740:

Attachment: HDFS-8740.000.patch

> Move DistributedFileSystem to hadoop-hdfs-client
> 
>
> Key: HDFS-8740
> URL: https://issues.apache.org/jira/browse/HDFS-8740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Yi Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-8740.000.patch
>
>
> This jira tracks efforts of moving 
> {{org.apache.hadoop.hdfs.DistributedFileSystem}} class from {{hadoop-hdfs}} 
> to {{hadoop-hdfs-client}} module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8648) Revisit FsDirectory#resolvePath() function usage to check the call is made under proper lock

2015-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8648:
-
Issue Type: Bug  (was: Sub-task)
Parent: (was: HDFS-7416)

> Revisit FsDirectory#resolvePath() function usage to check the call is made 
> under proper lock
> 
>
> Key: HDFS-8648
> URL: https://issues.apache.org/jira/browse/HDFS-8648
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Rakesh R
>Assignee: Rakesh R
>
> As per the 
> [discussion|https://issues.apache.org/jira/browse/HDFS-8493?focusedCommentId=14595735=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14595735]
>  in HDFS-8493, the usage of the {{FsDirectory#resolvePath}} function needs to be 
> reviewed. It seems there are many places where the resolution 
> {{fsd.resolvePath(pc, src, pathComponents);}} is done while holding only the fsn 
> lock and not the fsd lock. As per the initial analysis, the following are such 
> cases; we probably need to filter out and fix the incorrect usages.
> # FsDirAclOp.java
> -> getAclStatus()
> -> modifyAclEntries()
> -> removeAcl()
> -> removeDefaultAcl()
> -> setAcl()
> -> getAclStatus()
> # FsDirDeleteOp.java
> -> delete(fsn, src, recursive, logRetryCache)
> # FsDirRenameOp.java
> -> renameToInt(fsd, srcArg, dstArg, logRetryCache)
> -> renameToInt(fsd, srcArg, dstArg, logRetryCache, options)
> # FsDirStatAndListingOp.java
> -> getContentSummary(fsd, src)
> -> getFileInfo(fsd, srcArg, resolveLink)
> -> isFileClosed(fsd, src)
> -> getListingInt(fsd, srcArg, startAfter, needLocation)
> # FsDirWriteFileOp.java
> -> abandonBlock()
> -> completeFile(fsn, pc, srcArg, holder, last, fileId)
> -> getEncryptionKeyInfo(fsn, pc, src, supportedVersions)
> -> startFile()
> -> validateAddBlock()
> # FsDirXAttrOp.java
> -> getXAttrs(fsd, srcArg, xAttrs)
> -> listXAttrs(fsd, src)
> -> setXAttr(fsd, src, xAttr, flag, logRetryCache)
> # FSNamesystem.java
> -> createEncryptionZoneInt()
> -> getEZForPath()
> Thanks [~wheat9], [~vinayrpet] for the advice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8648) Revisit FsDirectory#resolvePath() function usage to check the call is made under proper lock

2015-09-26 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909427#comment-14909427
 ] 

Haohui Mai commented on HDFS-8648:
--

I think this is an important improvement.

HDFS-7416 has focused on separating the code from FSNamesystem / FSDirectory. This 
jira is slightly different, so I'm moving it out of HDFS-7416 and promoting it to a 
standalone jira.
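
To illustrate the locking pattern this jira is asking for, here is a minimal, 
self-contained sketch; the {{Directory}} and {{StatAndListingOp}} types are simplified 
stand-ins, not the real {{FSDirectory}} / {{FsDirStatAndListingOp}} signatures:

{code}
// Sketch only: path resolution should happen while the directory (fsd) lock is held,
// not just the namesystem (fsn) lock, and the resolver can assert that fact.
import java.util.concurrent.locks.ReentrantReadWriteLock;

class Directory {                                   // stand-in for FSDirectory
  private final ReentrantReadWriteLock dirLock = new ReentrantReadWriteLock();

  void readLock()   { dirLock.readLock().lock(); }
  void readUnlock() { dirLock.readLock().unlock(); }

  boolean hasReadLock() {
    return dirLock.getReadHoldCount() > 0 || dirLock.isWriteLockedByCurrentThread();
  }

  String resolvePath(String src) {
    // Fail fast if a caller forgot to take the fsd lock.
    assert hasReadLock() : "resolvePath must be called with the directory lock held";
    return src;                                     // real code resolves reserved paths, inodes, etc.
  }
}

class StatAndListingOp {                            // stand-in for an FsDir*Op helper
  static String getFileInfo(Directory fsd, String srcArg) {
    fsd.readLock();
    try {
      return fsd.resolvePath(srcArg);               // resolution now happens under the fsd lock
    } finally {
      fsd.readUnlock();
    }
  }
}
{code}

The idea is simply that callers take the fsd lock before resolving paths, so call sites 
that miss the lock can be caught early (e.g. by assertions in tests) rather than racing 
with directory mutations.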

> Revisit FsDirectory#resolvePath() function usage to check the call is made 
> under proper lock
> 
>
> Key: HDFS-8648
> URL: https://issues.apache.org/jira/browse/HDFS-8648
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>
> As per the 
> [discussion|https://issues.apache.org/jira/browse/HDFS-8493?focusedCommentId=14595735=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14595735]
>  in HDFS-8493, the usage of the {{FsDirectory#resolvePath}} function needs to be 
> reviewed. It seems there are many places where the resolution 
> {{fsd.resolvePath(pc, src, pathComponents);}} is done while holding only the fsn 
> lock and not the fsd lock. As per the initial analysis, the following are such 
> cases; we probably need to filter out and fix the incorrect usages.
> # FsDirAclOp.java
> -> getAclStatus()
> -> modifyAclEntries()
> -> removeAcl()
> -> removeDefaultAcl()
> -> setAcl()
> -> getAclStatus()
> # FsDirDeleteOp.java
> -> delete(fsn, src, recursive, logRetryCache)
> # FsDirRenameOp.java
> -> renameToInt(fsd, srcArg, dstArg, logRetryCache)
> -> renameToInt(fsd, srcArg, dstArg, logRetryCache, options)
> # FsDirStatAndListingOp.java
> -> getContentSummary(fsd, src)
> -> getFileInfo(fsd, srcArg, resolveLink)
> -> isFileClosed(fsd, src)
> -> getListingInt(fsd, srcArg, startAfter, needLocation)
> # FsDirWriteFileOp.java
> -> abandonBlock()
> -> completeFile(fsn, pc, srcArg, holder, last, fileId)
> -> getEncryptionKeyInfo(fsn, pc, src, supportedVersions)
> -> startFile()
> -> validateAddBlock()
> # FsDirXAttrOp.java
> -> getXAttrs(fsd, srcArg, xAttrs)
> -> listXAttrs(fsd, src)
> -> setXAttr(fsd, src, xAttr, flag, logRetryCache)
> # FSNamesystem.java
> -> createEncryptionZoneInt()
> -> getEZForPath()
> Thanks [~wheat9], [~vinayrpet] for the advice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7416) Revisit the abstraction between NameNodeRpcServer, FSNameSystem and FSDirectory

2015-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai resolved HDFS-7416.
--
Resolution: Fixed

The desired code refactoring has been completed. Closing this jira.

Thanks everyone for the work!

> Revisit the abstraction between NameNodeRpcServer, FSNameSystem and 
> FSDirectory
> ---
>
> Key: HDFS-7416
> URL: https://issues.apache.org/jira/browse/HDFS-7416
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> {{NameNodeRpcServer}}, {{FSNameSystem}} and {{FSDirectory}} implement the 
> namespace of the NN. In the current implementation the boundary of these 
> classes are not fully clear.
> This jira tracks the efforts of clarifying the boundaries between these three 
> classes so that they can be more easily maintained in the long term.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909437#comment-14909437
 ] 

Hadoop QA commented on HDFS-9139:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:red}-1{color} | pre-patch |  19m 34s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 21 new or modified test files. |
| {color:green}+1{color} | javac |   7m 56s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 13s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 13s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | shellcheck |   0m  8s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:red}-1{color} | whitespace |   0m  3s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 21s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |   6m 29s | Tests failed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests |  42m 46s | Tests failed in hadoop-hdfs. |
| | |  96m 15s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.fs.shell.TestTextCommand |
|   | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.net.TestDNS |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762445/HDFS-9139.01.patch |
| Optional Tests | shellcheck javadoc javac unit findbugs checkstyle |
| git revision | trunk / 861b52d |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12692/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12692/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12692/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12692/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12692/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12692/console |


This message was automatically generated.

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9139.01.patch
>
>
> Forked from HADOOP-11984. 
> With the initial and significant work from [~cnauroth], this Jira is to track 
> and support parallel test runs for HDFS Precommit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8053) Move DFSIn/OutputStream and related classes to hadoop-hdfs-client

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909440#comment-14909440
 ] 

Hudson commented on HDFS-8053:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #446 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/446/])
HDFS-8053. Move DFSIn/OutputStream and related classes to hadoop-hdfs-client. 
Contributed by Mingliang Liu. (wheat9: rev 
bf37d3d80e5179dea27e5bd5aea804a38aa9934c)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/RemotePeerFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSPacket.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemotePeerFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/UnknownCryptoProtocolVersionException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/UnknownCryptoProtocolVersionException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/inotify/MissingEventsException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/inotify/MissingEventsException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/AclException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskAsyncLazyPersistService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockMissingException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/BlockMissingException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/fs/HdfsBlockLocation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/HdfsBlockLocation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSClusterWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java
* 

[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909450#comment-14909450
 ] 

Hadoop QA commented on HDFS-9139:
-

(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDFS-Build/12694/console in case of 
problems.

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9139.01.patch
>
>
> Forked from HADOOP-11984. 
> With the initial and significant work from [~cnauroth], this Jira is to track 
> and support parallel test runs for HDFS Precommit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8053) Move DFSIn/OutputStream and related classes to hadoop-hdfs-client

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909471#comment-14909471
 ] 

Hudson commented on HDFS-8053:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1185 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1185/])
HDFS-8053. Move DFSIn/OutputStream and related classes to hadoop-hdfs-client. 
Contributed by Mingliang Liu. (wheat9: rev 
bf37d3d80e5179dea27e5bd5aea804a38aa9934c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/UnknownCryptoProtocolVersionException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockMissingException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/fs/HdfsBlockLocation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemotePeerFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSPacket.java
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/QuotaByStorageTypeExceededException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/QuotaByStorageTypeExceededException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSHedgedReadMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestCrcCorruption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/AclException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskAsyncLazyPersistService.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/UnresolvedPathException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsConfigurationLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSPacket.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* 

[jira] [Commented] (HDFS-8740) Move DistributedFileSystem to hadoop-hdfs-client

2015-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909548#comment-14909548
 ] 

Hadoop QA commented on HDFS-8740:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  19m 52s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 57s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  9s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 31s | The applied patch generated  
51 new checkstyle issues (total was 0, now 51). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 28s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 10s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 163m 43s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 38s | Tests passed in 
hadoop-hdfs-client. |
| | | 215m  7s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762558/HDFS-8740.000.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / bf37d3d |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12695/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12695/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12695/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12695/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12695/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12695/console |


This message was automatically generated.

> Move DistributedFileSystem to hadoop-hdfs-client
> 
>
> Key: HDFS-8740
> URL: https://issues.apache.org/jira/browse/HDFS-8740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Yi Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-8740.000.patch
>
>
> This jira tracks efforts of moving 
> {{org.apache.hadoop.hdfs.DistributedFileSystem}} class from {{hadoop-hdfs}} 
> to {{hadoop-hdfs-client}} module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9087) Add some jitter to DataNode.checkDiskErrorThread

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909582#comment-14909582
 ] 

Hudson commented on HDFS-9087:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8526 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8526/])
HDFS-9087. Add some jitter to DataNode.checkDiskErrorThread (Elliott Clark via 
Colin P. McCabe) (cmccabe: rev 0b31c237f2622e256726fc5d7698f0f195dbdbc1)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Add some jitter to DataNode.checkDiskErrorThread
> 
>
> Key: HDFS-9087
> URL: https://issues.apache.org/jira/browse/HDFS-9087
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HDFS-9087-v0.patch, HDFS-9087-v1.patch, 
> HDFS-9087-v2.patch, HDFS-9087-v3.patch
>
>
> If all datanodes are started across a cluster at the same time (or errors in 
> the network cause ioexceptions) there can be storms where lots of datanodes 
> check their disks at the exact same time.
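
As a rough illustration of what the jitter buys, here is a minimal, self-contained 
sketch; the interval and jitter constants and the class name are made up for the 
example, not the actual DataNode code:

{code}
// Sketch only: each cycle sleeps the base interval plus a random offset, so datanodes
// that start at the same moment drift apart instead of checking disks in lock-step.
import java.util.concurrent.ThreadLocalRandom;

public class DiskCheckLoop implements Runnable {
  private static final long BASE_INTERVAL_MS = 5000L;   // base delay between checks (illustrative)
  private static final long MAX_JITTER_MS    = 1000L;   // extra random delay per cycle (illustrative)

  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      checkDisks();
      long sleepMs = BASE_INTERVAL_MS
          + ThreadLocalRandom.current().nextLong(MAX_JITTER_MS + 1);
      try {
        Thread.sleep(sleepMs);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();              // preserve interrupt status and exit
      }
    }
  }

  private void checkDisks() {
    // placeholder for the actual volume checks
  }
}
{code}

With a random offset added to each cycle, checks that begin simultaneously across the 
cluster quickly spread out over time rather than hitting every disk at once.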



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9080) update htrace version to 4.0.1

2015-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909592#comment-14909592
 ] 

Hadoop QA commented on HDFS-9080:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  30m 39s | Findbugs (version 3.0.0) 
appears to be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 11 new or modified test files. |
| {color:red}-1{color} | javac |  10m 51s | The applied patch generated  6  
additional warning messages. |
| {color:green}+1{color} | javadoc |  15m 13s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 31s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  3s | Site still builds. |
| {color:red}-1{color} | checkstyle |   2m 50s | The applied patch generated  6 
new checkstyle issues (total was 718, now 712). |
| {color:red}-1{color} | whitespace |   1m 17s | The patch has 4  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   6m 19s | The patch appears to introduce 2 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 52s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 164m 49s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 28s | Tests passed in 
hadoop-hdfs-client. |
| | | 261m 44s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common |
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.TestClientBlockVerification |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.TestBlockReaderFactory |
|   | hadoop.hdfs.TestRemoteBlockReader2 |
|   | hadoop.hdfs.TestRemoteBlockReader |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.TestDFSOutputStream |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762567/HDFS-9080.011.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / bf37d3d |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12698/artifact/patchprocess/diffJavacWarnings.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12698/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12698/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12698/artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12698/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12698/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12698/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12698/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12698/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12698/console |


This message was automatically generated.

> update htrace version to 4.0.1
> --
>
> Key: HDFS-9080
> URL: https://issues.apache.org/jira/browse/HDFS-9080
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-9080.001.patch, HDFS-9080.002.patch, 
> HDFS-9080.003.patch, HDFS-9080.004.patch, HDFS-9080.005.patch, 
> HDFS-9080.006.patch, HDFS-9080.007.patch, HDFS-9080.009.patch, 
> HDFS-9080.010.patch, HDFS-9080.011.patch, tracing-fsshell-put.png
>
>
> Update the HTrace library version Hadoop uses to htrace 4.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7766) Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect

2015-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909566#comment-14909566
 ] 

Hadoop QA commented on HDFS-7766:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  22m 24s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 45s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  1s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  1s | Site still builds. |
| {color:green}+1{color} | checkstyle |   2m 28s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 22s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 173m 24s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 36s | Tests passed in 
hadoop-hdfs-client. |
| | | 229m 55s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
| Timed out tests | org.apache.hadoop.hdfs.TestPread |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12762560/HDFS-7766.04.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / bf37d3d |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12697/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12697/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12697/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12697/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12697/console |


This message was automatically generated.

> Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect
> --
>
> Key: HDFS-7766
> URL: https://issues.apache.org/jira/browse/HDFS-7766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-7766.01.patch, HDFS-7766.02.patch, 
> HDFS-7766.03.patch, HDFS-7766.04.patch
>
>
> Please see 
> https://issues.apache.org/jira/browse/HDFS-7588?focusedCommentId=14276192=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14276192
> A backwards-compatible way to fix this is to add a flag on the request that 
> would disable the redirect, i.e.
> {noformat}
> curl -i -X PUT 
> "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=CREATE[&<FLAG>=<VALUE>]"
> {noformat}
> returns 200 with the DN location in the response.
> This would allow browser clients to obtain the redirect target URL to which 
> they should put the file.
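
A minimal, hypothetical sketch of the resulting two-step client flow; the flag 
name, the host/port, and the JSON field carrying the DataNode URL are 
assumptions, not taken from the patch:
{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/** Illustrative only: create a file via WebHDFS without following a 307 redirect. */
public class WebHdfsNoRedirectCreate {
  public static void main(String[] args) throws Exception {
    // Step 1: ask the NameNode where to write; the hypothetical flag suppresses the redirect.
    String nnUrl = "http://namenode:50070/webhdfs/v1/tmp/demo.txt?op=CREATE&noredirect=true";
    HttpURLConnection nn = (HttpURLConnection) new URL(nnUrl).openConnection();
    nn.setRequestMethod("PUT");
    String body = new String(nn.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
    nn.disconnect();
    // Crude extraction of a {"Location":"..."}-style field, for illustration only.
    String dnUrl = body.replaceAll("(?s).*\"Location\"\\s*:\\s*\"([^\"]+)\".*", "$1");

    // Step 2: send the file content directly to the DataNode URL from the response.
    HttpURLConnection dn = (HttpURLConnection) new URL(dnUrl).openConnection();
    dn.setRequestMethod("PUT");
    dn.setDoOutput(true);
    try (OutputStream out = dn.getOutputStream()) {
      out.write("hello webhdfs".getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("DataNode responded: " + dn.getResponseCode());
    dn.disconnect();
  }
}
{code}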



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9149) Consider multi

2015-09-26 Thread He Xiaoqiao (JIRA)
He Xiaoqiao created HDFS-9149:
-

 Summary: Consider multi
 Key: HDFS-9149
 URL: https://issues.apache.org/jira/browse/HDFS-9149
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: He Xiaoqiao






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9149) Consider multi datacenter when sortByDistance

2015-09-26 Thread He Xiaoqiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-9149:
--
Description: {{sortByDistance}} doesn't consider multiple datacenters when 
reading data, so reads may be served from another datacenter when Hadoop is 
deployed across multiple IDCs.
Component/s: namenode
Summary: Consider multi datacenter when sortByDistance  (was: Consider 
multi)

> Consider multi datacenter when sortByDistance
> -
>
> Key: HDFS-9149
> URL: https://issues.apache.org/jira/browse/HDFS-9149
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: He Xiaoqiao
>
> {{sortByDistance}} doesn't consider multiple datacenters when reading data, so 
> reads may be served from another datacenter when Hadoop is deployed across 
> multiple IDCs.
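
A minimal sketch of what datacenter-aware ordering could look like; the Node 
type and helper names are hypothetical stand-ins, not the actual 
org.apache.hadoop.net API:
{code}
import java.util.Comparator;
import java.util.List;

public class DatacenterAwareSort {
  /** Hypothetical location record: host, rack and datacenter of a replica. */
  record Node(String host, String rack, String datacenter) {}

  /** Order replicas so that closer ones (including same-datacenter) come first. */
  static void sortByDistance(Node reader, List<Node> replicas) {
    replicas.sort(Comparator.comparingInt((Node n) -> weight(reader, n)));
  }

  /** Lower weight = closer; the datacenter level adds one more tier. */
  static int weight(Node reader, Node n) {
    if (reader.host().equals(n.host())) return 0;              // local node
    if (reader.rack().equals(n.rack())) return 1;              // same rack
    if (reader.datacenter().equals(n.datacenter())) return 2;  // same datacenter
    return 3;                                                  // remote datacenter
  }
}
{code}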



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9147) Fix the setting of visibleLength in ExternalBlockReader

2015-09-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-9147:
---
Status: Patch Available  (was: Open)

> Fix the setting of visibleLength in ExternalBlockReader
> ---
>
> Key: HDFS-9147
> URL: https://issues.apache.org/jira/browse/HDFS-9147
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-9147.001.patch
>
>
> BlockReaderFactory needs to take the start offset into consideration when 
> setting the visibleLength to use in ExternalBlockReader.
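
A one-line sketch of the intended relationship, with hypothetical names rather 
than the actual patch:
{code}
final class VisibleLengthSketch {
  /**
   * The length visible to a reader that starts inside the block is the
   * replica's visible length minus the reader's start offset, never negative.
   */
  static long visibleLengthFor(long replicaVisibleLength, long startOffset) {
    return Math.max(0, replicaVisibleLength - startOffset);
  }
}
{code}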



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9080) update htrace version to 4.0.1

2015-09-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-9080:
---
Attachment: HDFS-9080.012.patch

Remove unused include
Don't use deprecated Builder method

> update htrace version to 4.0.1
> --
>
> Key: HDFS-9080
> URL: https://issues.apache.org/jira/browse/HDFS-9080
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-9080.001.patch, HDFS-9080.002.patch, 
> HDFS-9080.003.patch, HDFS-9080.004.patch, HDFS-9080.005.patch, 
> HDFS-9080.006.patch, HDFS-9080.007.patch, HDFS-9080.009.patch, 
> HDFS-9080.010.patch, HDFS-9080.011.patch, HDFS-9080.012.patch, 
> tracing-fsshell-put.png
>
>
> Update the HTrace library version Hadoop uses to htrace 4.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9087) Add some jitter to DataNode.checkDiskErrorThread

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909610#comment-14909610
 ] 

Hudson commented on HDFS-9087:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1186 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1186/])
HDFS-9087. Add some jitter to DataNode.checkDiskErrorThread (Elliott Clark via 
Colin P. McCabe) (cmccabe: rev 0b31c237f2622e256726fc5d7698f0f195dbdbc1)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Add some jitter to DataNode.checkDiskErrorThread
> 
>
> Key: HDFS-9087
> URL: https://issues.apache.org/jira/browse/HDFS-9087
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HDFS-9087-v0.patch, HDFS-9087-v1.patch, 
> HDFS-9087-v2.patch, HDFS-9087-v3.patch
>
>
> If all datanodes across a cluster are started at the same time (or network 
> errors cause IOExceptions), there can be storms where many datanodes check 
> their disks at exactly the same time.
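
A minimal sketch of the jitter idea, assuming a hypothetical 5-minute base 
interval; names are illustrative, not the actual DataNode code:
{code}
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

public class JitteredDiskCheck {
  private static final long BASE_INTERVAL_MS = TimeUnit.MINUTES.toMillis(5);

  public static void main(String[] args) throws InterruptedException {
    for (int i = 0; i < 3; i++) {
      // Add up to +/-10% random jitter so nodes started together drift apart.
      long jitter = ThreadLocalRandom.current()
          .nextLong(-BASE_INTERVAL_MS / 10, BASE_INTERVAL_MS / 10 + 1);
      Thread.sleep(BASE_INTERVAL_MS + jitter);
      System.out.println("checking disks at " + System.currentTimeMillis());
    }
  }
}
{code}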



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9087) Add some jitter to DataNode.checkDiskErrorThread

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909591#comment-14909591
 ] 

Hudson commented on HDFS-9087:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #447 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/447/])
HDFS-9087. Add some jitter to DataNode.checkDiskErrorThread (Elliott Clark via 
Colin P. McCabe) (cmccabe: rev 0b31c237f2622e256726fc5d7698f0f195dbdbc1)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


> Add some jitter to DataNode.checkDiskErrorThread
> 
>
> Key: HDFS-9087
> URL: https://issues.apache.org/jira/browse/HDFS-9087
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HDFS-9087-v0.patch, HDFS-9087-v1.patch, 
> HDFS-9087-v2.patch, HDFS-9087-v3.patch
>
>
> If all datanodes across a cluster are started at the same time (or network 
> errors cause IOExceptions), there can be storms where many datanodes check 
> their disks at exactly the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9147) Fix the setting of visibleLength in ExternalBlockReader

2015-09-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-9147:
---
Attachment: HDFS-9147.001.patch

> Fix the setting of visibleLength in ExternalBlockReader
> ---
>
> Key: HDFS-9147
> URL: https://issues.apache.org/jira/browse/HDFS-9147
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-9147.001.patch
>
>
> BlockReaderFactory needs to take the start offset into consideration when 
> setting the visibleLength to use in ExternalBlockReader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9147) Fix the setting of visibleLength in ExternalBlockReader

2015-09-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-9147:
---
Summary: Fix the setting of visibleLength in ExternalBlockReader  (was: 
ExternalBlockReader should not treat bytesRemaining as visibleLength)

> Fix the setting of visibleLength in ExternalBlockReader
> ---
>
> Key: HDFS-9147
> URL: https://issues.apache.org/jira/browse/HDFS-9147
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>
> ExternalBlockReader should not treat the bytesRemaining passed in from 
> DFSInputStream as the visibleLength.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9147) Fix the setting of visibleLength in ExternalBlockReader

2015-09-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-9147:
---
Description: BlockReaderFactory needs to take the start offset into 
consideration when setting the visibleLength to use in ExternalBlockReader.  
(was: ExternalBlockReader should not treat the bytesRemaining passed in from 
DFSInputStream as the visibleLength.)

> Fix the setting of visibleLength in ExternalBlockReader
> ---
>
> Key: HDFS-9147
> URL: https://issues.apache.org/jira/browse/HDFS-9147
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>
> BlockReaderFactory needs to take the start offset into consideration when 
> setting the visibleLength to use in ExternalBlockReader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8873) Allow the directoryScanner to be rate-limited

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909290#comment-14909290
 ] 

Hudson commented on HDFS-8873:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2388 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2388/])
HDFS-8873. Allow the directoryScanner to be rate-limited (Daniel Templeton via 
Colin P. McCabe) (cmccabe: rev 7a3c381b39887a02e944fa98287afd0eb4db3560)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java


> Allow the directoryScanner to be rate-limited
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Fix For: 2.8.0
>
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch, 
> HDFS-8873.006.patch, HDFS-8873.007.patch, HDFS-8873.008.patch, 
> HDFS-8873.009.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can be 100% busy for many minutes at a time: 
> assuming the common case of all inodes in cache but no directory blocks 
> cached, a full directory listing requires 64K seeks, which translates to 
> about 655 seconds. 
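
A minimal sketch of a duty-cycle throttle, with hypothetical names and numbers 
rather than the actual DirectoryScanner change:
{code}
public class DutyCycleThrottle {
  private final double dutyCycle;   // e.g. 0.25 = scanner busy at most 25% of the time
  private long workStartMillis;

  public DutyCycleThrottle(double dutyCycle) {
    this.dutyCycle = dutyCycle;
  }

  /** Mark the start of a batch of disk work. */
  public void startWork() {
    workStartMillis = System.currentTimeMillis();
  }

  /** After a batch of disk work, sleep long enough to honor the duty cycle. */
  public void endWorkAndThrottle() throws InterruptedException {
    long worked = System.currentTimeMillis() - workStartMillis;
    long sleep = (long) (worked * (1.0 - dutyCycle) / dutyCycle);
    if (sleep > 0) {
      Thread.sleep(sleep);
    }
  }
}
{code}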



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HDFS-7529) Consolidate encryption zone related implementation into a single class

2015-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai closed HDFS-7529.


> Consolidate encryption zone related implementation into a single class
> --
>
> Key: HDFS-7529
> URL: https://issues.apache.org/jira/browse/HDFS-7529
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-7529-002.patch, HDFS-7529-003.patch, 
> HDFS-7529-004.patch, HDFS-7529-005.patch, HDFS-7529.000.patch, 
> HDFS-7529.001.patch
>
>
> This jira proposes to consolidate the encryption zone-related implementation 
> into a single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HDFS-8493) Consolidate truncate() related implementation in a single class

2015-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai closed HDFS-8493.


> Consolidate truncate() related implementation in a single class
> ---
>
> Key: HDFS-8493
> URL: https://issues.apache.org/jira/browse/HDFS-8493
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-8493-001.patch, HDFS-8493-002.patch, 
> HDFS-8493-003.patch, HDFS-8493-004.patch, HDFS-8493-005.patch, 
> HDFS-8493-006.patch, HDFS-8493-007.patch, HDFS-8493-007.patch, 
> HDFS-8493-009.patch, HDFS-8493-010.patch, HDFS-8493-011.patch
>
>
> This jira proposes to consolidate truncate() related methods into a single 
> class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HDFS-8495) Consolidate append() related implementation into a single class

2015-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai closed HDFS-8495.


> Consolidate append() related implementation into a single class
> ---
>
> Key: HDFS-8495
> URL: https://issues.apache.org/jira/browse/HDFS-8495
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-8495-000.patch, HDFS-8495-001.patch, 
> HDFS-8495-002.patch, HDFS-8495-003.patch, HDFS-8495-003.patch, 
> HDFS-8495-004.patch, HDFS-8495-005.patch, HDFS-8495-006.patch
>
>
> This jira proposes to consolidate {{FSNamesystem#append()}} related methods 
> into a single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HDFS-7436) Consolidate implementation of concat()

2015-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai closed HDFS-7436.


> Consolidate implementation of concat()
> --
>
> Key: HDFS-7436
> URL: https://issues.apache.org/jira/browse/HDFS-7436
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-7436.000.patch
>
>
> The implementation of {{concat()}} is scattered across both {{FSNameSystem}} 
> and {{FSDirectory}}. This jira proposes to consolidate the implementation into 
> a single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-09-26 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8647:
---
Attachment: HDFS-8647-004.patch

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch
>
>
> Sometimes we want the namenode to use an alternative block placement policy, 
> such as the upgrade domains in HDFS-7541.
> BlockManager has built-in assumptions about the rack policy in functions such 
> as useDelHint and blockHasEnoughRacks. That means that whenever we introduce a 
> new block placement policy, we need to modify BlockManager to account for it. 
> Ideally BlockManager should ask the BlockPlacementPolicy object instead, which 
> would allow us to provide a new BlockPlacementPolicy without changing 
> BlockManager.
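
A minimal sketch of the proposed direction, using hypothetical interfaces 
rather than the actual BlockPlacementPolicy API:
{code}
import java.util.List;

/** BlockManager would ask the policy instead of hard-coding rack checks. */
interface PlacementPolicy {
  boolean isPlacementSatisfied(List<String> replicaRacks, int replication);
}

/** A rack-based policy: require at least two distinct racks when replication > 1. */
class RackPolicy implements PlacementPolicy {
  @Override
  public boolean isPlacementSatisfied(List<String> replicaRacks, int replication) {
    long distinctRacks = replicaRacks.stream().distinct().count();
    return replication <= 1 || distinctRacks >= 2;
  }
}
{code}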



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8053) Move DFSIn/OutputStream and related classes to hadoop-hdfs-client

2015-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909455#comment-14909455
 ] 

Hudson commented on HDFS-8053:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #452 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/452/])
HDFS-8053. Move DFSIn/OutputStream and related classes to hadoop-hdfs-client. 
Contributed by Mingliang Liu. (wheat9: rev 
bf37d3d80e5179dea27e5bd5aea804a38aa9934c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestCrcCorruption.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/BlockMissingException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/HdfsBlockLocation.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/QuotaByStorageTypeExceededException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemotePeerFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/QuotaByStorageTypeExceededException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/inotify/MissingEventsException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/fs/HdfsBlockLocation.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolIterator.java
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/UnresolvedPathException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/AclException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/AclException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/UnknownCryptoProtocolVersionException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/UnresolvedPathException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/UnknownCryptoProtocolVersionException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/UnknownCipherSuiteException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSHedgedReadMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/ReplaceDatanodeOnFailure.java
* hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/ReplaceDatanodeOnFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 

[jira] [Closed] (HDFS-7416) Revisit the abstraction between NameNodeRpcServer, FSNameSystem and FSDirectory

2015-09-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai closed HDFS-7416.


> Revisit the abstraction between NameNodeRpcServer, FSNameSystem and 
> FSDirectory
> ---
>
> Key: HDFS-7416
> URL: https://issues.apache.org/jira/browse/HDFS-7416
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> {{NameNodeRpcServer}}, {{FSNameSystem}} and {{FSDirectory}} implement the 
> namespace of the NN. In the current implementation the boundaries of these 
> classes are not fully clear.
> This jira tracks the effort of clarifying the boundaries between these three 
> classes so that they can be more easily maintained in the long term.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)