[jira] [Commented] (HDFS-2043) TestHFlush failing intermittently

2016-05-05 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273704#comment-15273704
 ] 

Masatake Iwasaki commented on HDFS-2043:


+1. I will commit this shortly.

> TestHFlush failing intermittently
> -
>
> Key: HDFS-2043
> URL: https://issues.apache.org/jira/browse/HDFS-2043
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Aaron T. Myers
>Assignee: Lin Yiqun
> Attachments: HDFS-2043.002.patch, HDFS-2043.003.patch, 
> HDFS-2043.004.patch, HDFS-2043.005.patch, HDFS-2043.006.patch, HDFS.001.patch
>
>
> I can't reproduce this failure reliably, but it seems like TestHFlush has 
> been failing intermittently, with the frequency increasing of late.
> Note the following two pre-commit test runs from different JIRAs where 
> TestHFlush seems to have failed spuriously:
> https://builds.apache.org/job/PreCommit-HDFS-Build/734//testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/680//testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2043) TestHFlush failing intermittently

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273699#comment-15273699
 ] 

Hadoop QA commented on HDFS-2043:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 52s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 54m 17s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 139m 42s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.hdfs.TestFileAppend |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802594/HDFS-2043.006.patch |
| JIRA Issue | HDFS-2043 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 671912e69434 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build too

[jira] [Commented] (HDFS-2043) TestHFlush failing intermittently

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273651#comment-15273651
 ] 

Hadoop QA commented on HDFS-2043:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 48s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 13s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 47s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 200m 42s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.TestFileAppend |
|   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestAsyncDFSRename |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs |
|   | hado

[jira] [Commented] (HDFS-10372) Fix for failing TestFsDatasetImpl#testCleanShutdownOfVolume

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273628#comment-15273628
 ] 

Hadoop QA commented on HDFS-10372:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 31s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 49s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 137m 19s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.TestHFlush |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802583/HDFS-10372.patch |
| JIRA Issue | HDFS-10372 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 33b258198e8f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build t

[jira] [Updated] (HDFS-2043) TestHFlush failing intermittently

2016-05-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-2043:
-
Hadoop Flags: Reviewed

> TestHFlush failing intermittently
> -
>
> Key: HDFS-2043
> URL: https://issues.apache.org/jira/browse/HDFS-2043
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Aaron T. Myers
>Assignee: Lin Yiqun
> Attachments: HDFS-2043.002.patch, HDFS-2043.003.patch, 
> HDFS-2043.004.patch, HDFS-2043.005.patch, HDFS-2043.006.patch, HDFS.001.patch
>
>
> I can't reproduce this failure reliably, but it seems like TestHFlush has 
> been failing intermittently, with the frequency increasing of late.
> Note the following two pre-commit test runs from different JIRAs where 
> TestHFlush seems to have failed spuriously:
> https://builds.apache.org/job/PreCommit-HDFS-Build/734//testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/680//testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-2043) TestHFlush failing intermittently

2016-05-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-2043:
-
Attachment: HDFS-2043.006.patch

My bad [~linyiqun], this file uses System.out instead of a logger. I uploaded a 
simple 006.patch to use System.out.

+1 LGTM.

> TestHFlush failing intermittently
> -
>
> Key: HDFS-2043
> URL: https://issues.apache.org/jira/browse/HDFS-2043
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Aaron T. Myers
>Assignee: Lin Yiqun
> Attachments: HDFS-2043.002.patch, HDFS-2043.003.patch, 
> HDFS-2043.004.patch, HDFS-2043.005.patch, HDFS-2043.006.patch, HDFS.001.patch
>
>
> I can't reproduce this failure reliably, but it seems like TestHFlush has 
> been failing intermittently, with the frequency increasing of late.
> Note the following two pre-commit test runs from different JIRAs where 
> TestHFlush seems to have failed spuriously:
> https://builds.apache.org/job/PreCommit-HDFS-Build/734//testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/680//testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10371) MiniDFSCluster#restartDataNode does not always stop DN before start DN

2016-05-05 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou reassigned HDFS-10371:


Assignee: Xiaobing Zhou

> MiniDFSCluster#restartDataNode does not always stop DN before start DN
> --
>
> Key: HDFS-10371
> URL: https://issues.apache.org/jira/browse/HDFS-10371
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Xiaobing Zhou
>
> This could cause an intermittent port-binding problem if the keep-the-same-port 
> option is chosen, as evident in the recent 
> [Jenkins|https://builds.apache.org/job/PreCommit-HDFS-Build/15366/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_91.txt]
> {code}
> Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 53.772 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDecommissionWithStriped
> testDecommissionWithURBlockForSameBlockGroup(org.apache.hadoop.hdfs.TestDecommissionWithStriped)
>   Time elapsed: 6.946 sec  <<< ERROR!
> java.net.BindException: Problem binding to [localhost:52957] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:530)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:793)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2592)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:563)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:538)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:932)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1297)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:479)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2584)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2472)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2519)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2242)
>   at 
> org.apache.hadoop.hdfs.TestDecommissionWithStriped.testDecommissionWithURBlockForSameBlockGroup(TestDecommissionWithStriped.java:254)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10337) OfflineEditsViewer stats option should print 0 instead of null for the count of operations

2016-05-05 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-10337:
-
Attachment: HDFS-10337.003.patch

Thanks for the review again. Updated the patch to address the comments.

> OfflineEditsViewer stats option should print 0 instead of null for the count 
> of operations
> --
>
> Key: HDFS-10337
> URL: https://issues.apache.org/jira/browse/HDFS-10337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Akira AJISAKA
>Assignee: Lin Yiqun
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-10337.001.patch, HDFS-10337.002.patch, 
> HDFS-10337.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10372) Fix for failing TestFsDatasetImpl#testCleanShutdownOfVolume

2016-05-05 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-10372:
--
Status: Patch Available  (was: Open)

[~kihwal], [~jojochuang]: could you please review?

> Fix for failing TestFsDatasetImpl#testCleanShutdownOfVolume
> ---
>
> Key: HDFS-10372
> URL: https://issues.apache.org/jira/browse/HDFS-10372
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-10372.patch
>
>
> TestFsDatasetImpl#testCleanShutdownOfVolume fails very often.
> We added more debug information in HDFS-10260 to find out why this test is 
> failing.
> Now I think I know the root cause of the failure.
> I thought that {{LocatedBlock#getLocations()}} returns an array of 
> DatanodeInfo, but I have now realized that it returns an array of 
> DatanodeStorageInfo (which is a subclass of DatanodeInfo).
> In the test I intended to check whether the exception contains the xfer 
> address of the DatanodeInfo. Since the {{DatanodeInfo#toString()}} method 
> returns the xfer address, I checked whether the exception contains 
> {{DatanodeInfo#toString}} or not.
> But since {{LocatedBlock#getLocations()}} returned an array of 
> DatanodeStorageInfo, the toString() implementation also includes the storage 
> info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10372) Fix for failing TestFsDatasetImpl#testCleanShutdownOfVolume

2016-05-05 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-10372:
--
Attachment: HDFS-10372.patch

> Fix for failing TestFsDatasetImpl#testCleanShutdownOfVolume
> ---
>
> Key: HDFS-10372
> URL: https://issues.apache.org/jira/browse/HDFS-10372
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-10372.patch
>
>
> TestFsDatasetImpl#testCleanShutdownOfVolume fails very often.
> We added more debug information in HDFS-10260 to find out why this test is 
> failing.
> Now I think I know the root cause of the failure.
> I thought that {{LocatedBlock#getLocations()}} returns an array of 
> DatanodeInfo, but I have now realized that it returns an array of 
> DatanodeStorageInfo (which is a subclass of DatanodeInfo).
> In the test I intended to check whether the exception contains the xfer 
> address of the DatanodeInfo. Since the {{DatanodeInfo#toString()}} method 
> returns the xfer address, I checked whether the exception contains 
> {{DatanodeInfo#toString}} or not.
> But since {{LocatedBlock#getLocations()}} returned an array of 
> DatanodeStorageInfo, the toString() implementation also includes the storage 
> info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10372) Fix for failing TestFsDatasetImpl#testCleanShutdownOfVolume

2016-05-05 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HDFS-10372:
-

 Summary: Fix for failing 
TestFsDatasetImpl#testCleanShutdownOfVolume
 Key: HDFS-10372
 URL: https://issues.apache.org/jira/browse/HDFS-10372
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.7.3
Reporter: Rushabh S Shah
Assignee: Rushabh S Shah


TestFsDatasetImpl#testCleanShutdownOfVolume fails very often.
We added more debug information in HDFS-10260 to find out why this test is 
failing.
Now I think I know the root cause of the failure.
I thought that {{LocatedBlock#getLocations()}} returns an array of DatanodeInfo, 
but I have now realized that it returns an array of DatanodeStorageInfo (which 
is a subclass of DatanodeInfo).
In the test I intended to check whether the exception contains the xfer address 
of the DatanodeInfo. Since the {{DatanodeInfo#toString()}} method returns the 
xfer address, I checked whether the exception contains {{DatanodeInfo#toString}} 
or not.
But since {{LocatedBlock#getLocations()}} returned an array of 
DatanodeStorageInfo, the toString() implementation also includes the storage 
info.
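
For illustration only (this is not the attached patch, and the helper name is 
made up), a minimal sketch of the kind of check described above: assert on the 
datanode's xfer address directly instead of on the full toString() output.

{code}
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.junit.Assert;

// Hypothetical helper, not the attached patch: check the xfer address itself,
// since the toString() of the objects returned by LocatedBlock#getLocations()
// also carries storage information.
static void assertExceptionNamesDatanode(IOException ioe, LocatedBlock lb) {
  DatanodeInfo dn = lb.getLocations()[0];
  Assert.assertTrue("expected " + dn.getXferAddr() + " in: " + ioe.getMessage(),
      ioe.getMessage().contains(dn.getXferAddr()));
}
{code}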



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-2043) TestHFlush failing intermittently

2016-05-05 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-2043:

Attachment: HDFS-2043.005.patch

Posted a new patch to address the comment.

> TestHFlush failing intermittently
> -
>
> Key: HDFS-2043
> URL: https://issues.apache.org/jira/browse/HDFS-2043
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Aaron T. Myers
>Assignee: Lin Yiqun
> Attachments: HDFS-2043.002.patch, HDFS-2043.003.patch, 
> HDFS-2043.004.patch, HDFS-2043.005.patch, HDFS.001.patch
>
>
> I can't reproduce this failure reliably, but it seems like TestHFlush has 
> been failing intermittently, with the frequency increasing of late.
> Note the following two pre-commit test runs from different JIRAs where 
> TestHFlush seems to have failed spuriously:
> https://builds.apache.org/job/PreCommit-HDFS-Build/734//testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/680//testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10346) Implement asynchronous setPermission/setOwner for DistributedFileSystem

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273470#comment-15273470
 ] 

Hadoop QA commented on HDFS-10346:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 33s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 28s 
{color} | {color:red} hadoop-hdfs-project: patch generated 2 new + 8 unchanged 
- 0 fixed = 10 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 48s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 0s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 31s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
28s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 155m 44s {color} 
| {color:black} {co

[jira] [Commented] (HDFS-10371) MiniDFSCluster#restartDataNode does not always stop DN before start DN

2016-05-05 Thread Lin Yiqun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273461#comment-15273461
 ] 

Lin Yiqun commented on HDFS-10371:
--

I suggest we change the param {{expireOnNN}} from false to true in these two 
{{restartDataNode}} overloads, so the DN will be fully stopped before it is restarted.
{code}
  /*
   * Restart a particular datanode, use newly assigned port
   */
  public boolean restartDataNode(int i) throws IOException {
return restartDataNode(i, false);
  }

  /*
   * Restart a particular datanode, on the same port if keepPort is true
   */
  public synchronized boolean restartDataNode(int i, boolean keepPort)
  throws IOException {
return restartDataNode(i, keepPort, false);
  }
{code}
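
For illustration, a hedged sketch of that suggestion (not a committed change), 
assuming the three-argument overload called above, whose third parameter is 
{{expireOnNN}}:

{code}
  /*
   * Hypothetical sketch of the suggested change: ask the NameNode to expire
   * the old DataNode instance before it is restarted, so a restart on the
   * same port does not race with the previous instance.
   */
  public synchronized boolean restartDataNode(int i, boolean keepPort)
      throws IOException {
    return restartDataNode(i, keepPort, true);   // was: false (expireOnNN)
  }
{code}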

> MiniDFSCluster#restartDataNode does not always stop DN before start DN
> --
>
> Key: HDFS-10371
> URL: https://issues.apache.org/jira/browse/HDFS-10371
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Xiaoyu Yao
>
> This could cause an intermittent port-binding problem if the keep-the-same-port 
> option is chosen, as evident in the recent 
> [Jenkins|https://builds.apache.org/job/PreCommit-HDFS-Build/15366/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_91.txt]
> {code}
> Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 53.772 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDecommissionWithStriped
> testDecommissionWithURBlockForSameBlockGroup(org.apache.hadoop.hdfs.TestDecommissionWithStriped)
>   Time elapsed: 6.946 sec  <<< ERROR!
> java.net.BindException: Problem binding to [localhost:52957] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:530)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:793)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2592)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:563)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:538)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:932)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1297)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:479)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2584)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2472)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2519)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2242)
>   at 
> org.apache.hadoop.hdfs.TestDecommissionWithStriped.testDecommissionWithURBlockForSameBlockGroup(TestDecommissionWithStriped.java:254)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10371) MiniDFSCluster#restartDataNode does not always stop DN before start DN

2016-05-05 Thread Lin Yiqun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273460#comment-15273460
 ] 

Lin Yiqun commented on HDFS-10371:
--

I suggest we change the param {{expireOnNN}} from false to true in these two 
{{restartDataNode}} overloads, so the DN will be fully stopped before it is restarted.
{code}
  /*
   * Restart a particular datanode, use newly assigned port
   */
  public boolean restartDataNode(int i) throws IOException {
return restartDataNode(i, false);
  }

  /*
   * Restart a particular datanode, on the same port if keepPort is true
   */
  public synchronized boolean restartDataNode(int i, boolean keepPort)
  throws IOException {
return restartDataNode(i, keepPort, false);
  }
{code}

> MiniDFSCluster#restartDataNode does not always stop DN before start DN
> --
>
> Key: HDFS-10371
> URL: https://issues.apache.org/jira/browse/HDFS-10371
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Xiaoyu Yao
>
> This could cause an intermittent port-binding problem if the keep-the-same-port 
> option is chosen, as evident in the recent 
> [Jenkins|https://builds.apache.org/job/PreCommit-HDFS-Build/15366/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_91.txt]
> {code}
> Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 53.772 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDecommissionWithStriped
> testDecommissionWithURBlockForSameBlockGroup(org.apache.hadoop.hdfs.TestDecommissionWithStriped)
>   Time elapsed: 6.946 sec  <<< ERROR!
> java.net.BindException: Problem binding to [localhost:52957] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:530)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:793)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2592)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:563)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:538)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:932)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1297)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:479)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2584)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2472)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2519)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2242)
>   at 
> org.apache.hadoop.hdfs.TestDecommissionWithStriped.testDecommissionWithURBlockForSameBlockGroup(TestDecommissionWithStriped.java:254)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10324) Trash directory in an encryption zone should be pre-created with correct permissions

2016-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273441#comment-15273441
 ] 

Hudson commented on HDFS-10324:
---

FAILURE: Integrated in Hadoop-trunk-Commit #9727 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9727/])
HDFS-10324. Trash directory in an encryption zone should be pre-created (xyao: 
rev dacd1f50feb24ccdf6155b2b7a6126fe21a47ad0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/CreateEncryptionZoneFlag.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZonesWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/package-info.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestRpcProgramNfs3.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReservedRawPaths.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/TransparentEncryption.md
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZonesWithKMS.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCryptoConf.xml


> Trash directory in an encryption zone should be pre-created with correct 
> permissions
> 
>
> Key: HDFS-10324
> URL: https://issues.apache.org/jira/browse/HDFS-10324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.8.0
> Environment: CDH5.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0
>
> Attachments: HDFS-10324.001.patch, HDFS-10324.002.patch, 
> HDFS-10324.003.patch, HDFS-10324.004.patch, HDFS-10324.005.patch, 
> HDFS-10324.006.patch, HDFS-10324.007.patch, HDFS-10324.008.patch
>
>
> We encountered a bug in HDFS-8831:
> After HDFS-8831, a deleted file in an encryption zone is moved to a .Trash 
> subdirectory within the encryption zone.
> However, if this .Trash subdirectory is not created beforehand, it will be 
> created and owned by the first user who deletes a file, with permission 
> drwx------. This creates a serious bug because any other non-privileged user 
> will not be able to delete any files within the encryption zone, because they 
> do not have permission to move directories into the trash directory.
> We should fix this bug by pre-creating the .Trash directory with the sticky bit.
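
For context, a minimal sketch (illustrative only; the path is made up and this 
is not the committed change) of pre-creating an encryption zone's trash 
directory with the sticky bit:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class ProvisionEzTrash {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Assumed encryption zone root; adjust to the real zone path.
    Path trash = new Path("/zones/zone1/.Trash");
    fs.mkdirs(trash);
    // 1777 = rwxrwxrwt: world-writable with the sticky bit (like /tmp), so
    // every user can create trash subdirectories but not delete others'.
    fs.setPermission(trash, new FsPermission((short) 01777));
  }
}
{code}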



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10324) Trash directory in an encryption zone should be pre-created with correct permissions

2016-05-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10324:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~jojochuang] for the contribution and [~andrew.wang] for the discussion 
and code review. I've committed the patch to trunk, branch-2 and branch-2.8.

> Trash directory in an encryption zone should be pre-created with correct 
> permissions
> 
>
> Key: HDFS-10324
> URL: https://issues.apache.org/jira/browse/HDFS-10324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.8.0
> Environment: CDH5.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0
>
> Attachments: HDFS-10324.001.patch, HDFS-10324.002.patch, 
> HDFS-10324.003.patch, HDFS-10324.004.patch, HDFS-10324.005.patch, 
> HDFS-10324.006.patch, HDFS-10324.007.patch, HDFS-10324.008.patch
>
>
> We encountered a bug in HDFS-8831:
> After HDFS-8831, a deleted file in an encryption zone is moved to a .Trash 
> subdirectory within the encryption zone.
> However, if this .Trash subdirectory is not created beforehand, it will be 
> created and owned by the first user who deletes a file, with permission 
> drwx------. This creates a serious bug because any other non-privileged user 
> will not be able to delete any files within the encryption zone, because they 
> do not have permission to move directories into the trash directory.
> We should fix this bug by pre-creating the .Trash directory with the sticky bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9939) Increase DecompressorStream skip buffer size

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273428#comment-15273428
 ] 

Hadoop QA commented on HDFS-9939:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 5 unchanged - 1 fixed = 5 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 15s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 58s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 34s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Timed out junit tests | 
org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802560/HDFS-9939.002.patch |
| JIRA Issue | HDFS-9939 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2a1f50955b9b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Pers

[jira] [Commented] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273309#comment-15273309
 ] 

Hadoop QA commented on HDFS-9890:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
33s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 56s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 57s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 22m 0s {color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_91 with JDK 
v1.8.0_91 generated 41 new + 23 unchanged - 6 fixed = 64 total (was 29) {color} 
|
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 25m 22s {color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_101 with JDK 
v1.7.0_101 generated 41 new + 23 unchanged - 6 fixed = 64 total (was 29) 
{color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 38s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 33s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 42s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_101 Failed CTEST tests | 
test_libhdfs_mini_stress_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||

[jira] [Updated] (HDFS-9939) Increase DecompressorStream skip buffer size

2016-05-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-9939:
-
Status: Patch Available  (was: In Progress)

> Increase DecompressorStream skip buffer size
> 
>
> Key: HDFS-9939
> URL: https://issues.apache.org/jira/browse/HDFS-9939
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: John Zhuge
> Fix For: 2.8.0
>
> Attachments: HDFS-9939.001.patch, HDFS-9939.002.patch
>
>
> See ACCUMULO-2353 for details.
> Filing this jira to investigate performance difference and possibly make the 
> buf size change accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9939) Increase DecompressorStream skip buffer size

2016-05-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-9939:
-
Attachment: HDFS-9939.002.patch

Patch 002:
* Remove unused variable buf in testSkip

[~andrew.wang], could you please review and commit?
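
For readers not following ACCUMULO-2353: skip() is implemented by repeatedly 
reading into a scratch buffer, so a larger buffer means fewer read() round 
trips. A minimal sketch of that idea, with a hypothetical buffer size and 
class name rather than the actual DecompressorStream code:

{code}
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch only: skip n bytes by reading into a scratch buffer.
// The constant and names are assumptions, not the Hadoop implementation.
class SkipSketch {
  private static final int SKIP_BUFFER_SIZE = 2048; // assumed value

  static long skip(InputStream in, long n) throws IOException {
    byte[] skipBuf = new byte[(int) Math.min(n, SKIP_BUFFER_SIZE)];
    long skipped = 0;
    while (skipped < n) {
      int read = in.read(skipBuf, 0, (int) Math.min(n - skipped, skipBuf.length));
      if (read < 0) {
        break; // end of stream reached before n bytes were skipped
      }
      skipped += read;
    }
    return skipped;
  }
}
{code}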

> Increase DecompressorStream skip buffer size
> 
>
> Key: HDFS-9939
> URL: https://issues.apache.org/jira/browse/HDFS-9939
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: John Zhuge
> Fix For: 2.8.0
>
> Attachments: HDFS-9939.001.patch, HDFS-9939.002.patch
>
>
> See ACCUMULO-2353 for details.
> Filing this jira to investigate performance difference and possibly make the 
> buf size change accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9939) Increase DecompressorStream skip buffer size

2016-05-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-9939:
-
Status: In Progress  (was: Patch Available)

> Increase DecompressorStream skip buffer size
> 
>
> Key: HDFS-9939
> URL: https://issues.apache.org/jira/browse/HDFS-9939
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: John Zhuge
> Fix For: 2.8.0
>
> Attachments: HDFS-9939.001.patch
>
>
> See ACCUMULO-2353 for details.
> Filing this jira to investigate performance difference and possibly make the 
> buf size change accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10350) Implement asynchronous setOwner for DistributedFileSystem

2016-05-05 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou resolved HDFS-10350.
--
Resolution: Won't Fix

Resolving this one since it has been merged into HDFS-10346.

> Implement asynchronous setOwner for DistributedFileSystem
> -
>
> Key: HDFS-10350
> URL: https://issues.apache.org/jira/browse/HDFS-10350
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, hdfs-client
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> This is proposed to implement an asynchronous setOwner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10300) TestDistCpSystem should share MiniDFSCluster

2016-05-05 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273245#comment-15273245
 ] 

John Zhuge commented on HDFS-10300:
---

Could anyone kindly review the patch? Thanks.

> TestDistCpSystem should share MiniDFSCluster
> 
>
> Key: HDFS-10300
> URL: https://issues.apache.org/jira/browse/HDFS-10300
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>  Labels: quality, test
> Attachments: HDFS-10300.001.patch
>
>
> The test cases in this class should share MiniDFSCluster if possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10346) Implement asynchronous setPermission/setOwner for DistributedFileSystem

2016-05-05 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273237#comment-15273237
 ] 

Xiaobing Zhou commented on HDFS-10346:
--

The 001 patch merges in the setOwner changes. It also adds tests and 
restructures the existing tests.
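
For readers following the HDFS-9924 work, the rough shape of an asynchronous 
setPermission/setOwner call is sketched below. This is a simplified 
illustration using a plain thread pool and CompletableFuture; the actual 
AsyncDistributedFileSystem API in the branch is wired into the async RPC 
layer instead:

{code}
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Minimal sketch: wrap the blocking calls in futures so a caller can issue
// many setPermission/setOwner requests without waiting on each round trip.
class AsyncFsSketch {
  private final FileSystem fs;
  private final ExecutorService pool = Executors.newFixedThreadPool(4);

  AsyncFsSketch(FileSystem fs) {
    this.fs = fs;
  }

  CompletableFuture<Void> setOwnerAsync(Path p, String user, String group) {
    return CompletableFuture.runAsync(() -> {
      try {
        fs.setOwner(p, user, group);
      } catch (IOException e) {
        throw new UncheckedIOException(e);
      }
    }, pool);
  }

  CompletableFuture<Void> setPermissionAsync(Path p, FsPermission perm) {
    return CompletableFuture.runAsync(() -> {
      try {
        fs.setPermission(p, perm);
      } catch (IOException e) {
        throw new UncheckedIOException(e);
      }
    }, pool);
  }
}
{code}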

> Implement asynchronous setPermission/setOwner for DistributedFileSystem
> ---
>
> Key: HDFS-10346
> URL: https://issues.apache.org/jira/browse/HDFS-10346
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, hdfs-client
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10346-HDFS-9924.000.patch, 
> HDFS-10346-HDFS-9924.001.patch
>
>
> This is proposed to implement an asynchronous setPermission and setOwner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10346) Implement asynchronous setPermission/setOwner for DistributedFileSystem

2016-05-05 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10346:
-
Attachment: HDFS-10346-HDFS-9924.001.patch

> Implement asynchronous setPermission/setOwner for DistributedFileSystem
> ---
>
> Key: HDFS-10346
> URL: https://issues.apache.org/jira/browse/HDFS-10346
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, hdfs-client
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10346-HDFS-9924.000.patch, 
> HDFS-10346-HDFS-9924.001.patch
>
>
> This is proposed to implement an asynchronous setPermission and setOwner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10346) Implement asynchronous setPermission/setOwner for DistributedFileSystem

2016-05-05 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10346:
-
Summary: Implement asynchronous setPermission/setOwner for 
DistributedFileSystem  (was: Implement asynchronous setPermission for 
DistributedFileSystem)

> Implement asynchronous setPermission/setOwner for DistributedFileSystem
> ---
>
> Key: HDFS-10346
> URL: https://issues.apache.org/jira/browse/HDFS-10346
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, hdfs-client
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10346-HDFS-9924.000.patch
>
>
> This is proposed to implement an asynchronous setPermission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10346) Implement asynchronous setPermission/setOwner for DistributedFileSystem

2016-05-05 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10346:
-
Description: This is proposed to implement an asynchronous setPermission 
and setOwner.  (was: This is proposed to implement an asynchronous 
setPermission.)

> Implement asynchronous setPermission/setOwner for DistributedFileSystem
> ---
>
> Key: HDFS-10346
> URL: https://issues.apache.org/jira/browse/HDFS-10346
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, hdfs-client
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10346-HDFS-9924.000.patch
>
>
> This is proposed to implement an asynchronous setPermission and setOwner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10361) Ozone: Support starting StorageContainerManager as a daemon

2016-05-05 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10361:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-1312
Target Version/s:   (was: HDFS-1312)
  Status: Resolved  (was: Patch Available)

Thank you for the reviews and verification, [~anu] and [~cnauroth]. I am not 
sure what's up with Jenkins.

I took the liberty of pushing this change to the feature branch after some 
manual testing. If it breaks anything, I will back it out.

> Ozone: Support starting StorageContainerManager as a daemon
> ---
>
> Key: HDFS-10361
> URL: https://issues.apache.org/jira/browse/HDFS-10361
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-1312
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: HDFS-1312
>
> Attachments: HDFS-10361-HDFS-7240.01.patch
>
>
> Add shell script support for starting the StorageContainerManager service as 
> a daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2043) TestHFlush failing intermittently

2016-05-05 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273155#comment-15273155
 ] 

John Zhuge commented on HDFS-2043:
--

[~linyiqun], could you please add an INFO log message with the stack trace at 
line 494 when ignoring the exception? Otherwise LGTM.
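
For reference, the change being requested is just to log the swallowed 
exception instead of dropping it silently. A hedged sketch, with hypothetical 
method names standing in for the test code around line 494 (the test may also 
use a different logging facade):

{code}
import java.io.IOException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative sketch only; doHFlush() is a placeholder, not TestHFlush code.
class IgnoredExceptionLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(IgnoredExceptionLoggingSketch.class);

  void flushAndTolerateFailure() {
    try {
      doHFlush(); // hypothetical helper standing in for the hflush call
    } catch (IOException e) {
      // Log the ignored exception with its stack trace so intermittent
      // failures leave a trail in the test output.
      LOG.info("Ignoring expected exception during hflush", e);
    }
  }

  private void doHFlush() throws IOException {
    // placeholder for the actual flush under test
  }
}
{code}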

> TestHFlush failing intermittently
> -
>
> Key: HDFS-2043
> URL: https://issues.apache.org/jira/browse/HDFS-2043
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Aaron T. Myers
>Assignee: Lin Yiqun
> Attachments: HDFS-2043.002.patch, HDFS-2043.003.patch, 
> HDFS-2043.004.patch, HDFS.001.patch
>
>
> I can't reproduce this failure reliably, but it seems like TestHFlush has 
> been failing intermittently, with the frequency increasing of late.
> Note the following two pre-commit test runs from different JIRAs where 
> TestHFlush seems to have failed spuriously:
> https://builds.apache.org/job/PreCommit-HDFS-Build/734//testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/680//testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-05-05 Thread Xiaowei Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273131#comment-15273131
 ] 

Xiaowei Zhu commented on HDFS-9890:
---

The previous patches had bad merges. Resubmitting HDFS-9890.HDFS-8707.007.patch, 
in which I also cleaned up commented-out code.

> libhdfs++: Add test suite to simulate network issues
> 
>
> Key: HDFS-9890
> URL: https://issues.apache.org/jira/browse/HDFS-9890
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-9890.HDFS-8707.000.patch, 
> HDFS-9890.HDFS-8707.001.patch, HDFS-9890.HDFS-8707.002.patch, 
> HDFS-9890.HDFS-8707.003.patch, HDFS-9890.HDFS-8707.004.patch, 
> HDFS-9890.HDFS-8707.005.patch, HDFS-9890.HDFS-8707.006.patch, 
> HDFS-9890.HDFS-8707.007.patch
>
>
> I propose adding a test suite to simulate various network issues/failures in 
> order to get good test coverage on some of the retry paths that aren't easy 
> to hit in mock unit tests.
> At the moment the only things that hit the retry paths are the gmock unit 
> tests.  The gmock tests are only as good as their mock implementations, which do a 
> great job of simulating protocol correctness but not more complex 
> interactions.  They also can't really simulate the types of lock contention 
> and subtle memory stomps that show up while doing hundreds or thousands of 
> concurrent reads.   We should add a new minidfscluster test that focuses on 
> heavy read/seek load and then randomly convert error codes returned by 
> network functions into errors.
> List of things to simulate(while heavily loaded), roughly in order of how 
> badly I think they need to be tested at the moment:
> -Rpc connection disconnect
> -Rpc connection slowed down enough to cause a timeout and trigger retry
> -DN connection disconnect



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-05-05 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-9890:
--
Attachment: HDFS-9890.HDFS-8707.007.patch

> libhdfs++: Add test suite to simulate network issues
> 
>
> Key: HDFS-9890
> URL: https://issues.apache.org/jira/browse/HDFS-9890
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-9890.HDFS-8707.000.patch, 
> HDFS-9890.HDFS-8707.001.patch, HDFS-9890.HDFS-8707.002.patch, 
> HDFS-9890.HDFS-8707.003.patch, HDFS-9890.HDFS-8707.004.patch, 
> HDFS-9890.HDFS-8707.005.patch, HDFS-9890.HDFS-8707.006.patch, 
> HDFS-9890.HDFS-8707.007.patch
>
>
> I propose adding a test suite to simulate various network issues/failures in 
> order to get good test coverage on some of the retry paths that aren't easy 
> to hit in mock unit tests.
> At the moment the only things that hit the retry paths are the gmock unit 
> tests.  The gmock tests are only as good as their mock implementations, which do a 
> great job of simulating protocol correctness but not more complex 
> interactions.  They also can't really simulate the types of lock contention 
> and subtle memory stomps that show up while doing hundreds or thousands of 
> concurrent reads.   We should add a new minidfscluster test that focuses on 
> heavy read/seek load and then randomly convert error codes returned by 
> network functions into errors.
> List of things to simulate(while heavily loaded), roughly in order of how 
> badly I think they need to be tested at the moment:
> -Rpc connection disconnect
> -Rpc connection slowed down enough to cause a timeout and trigger retry
> -DN connection disconnect



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-05-05 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-9890:
--
Attachment: (was: HDFS-9890.HDFS-8707.007.patch)

> libhdfs++: Add test suite to simulate network issues
> 
>
> Key: HDFS-9890
> URL: https://issues.apache.org/jira/browse/HDFS-9890
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-9890.HDFS-8707.000.patch, 
> HDFS-9890.HDFS-8707.001.patch, HDFS-9890.HDFS-8707.002.patch, 
> HDFS-9890.HDFS-8707.003.patch, HDFS-9890.HDFS-8707.004.patch, 
> HDFS-9890.HDFS-8707.005.patch, HDFS-9890.HDFS-8707.006.patch
>
>
> I propose adding a test suite to simulate various network issues/failures in 
> order to get good test coverage on some of the retry paths that aren't easy 
> to hit in mock unit tests.
> At the moment the only things that hit the retry paths are the gmock unit 
> tests.  The gmock tests are only as good as their mock implementations, which do a 
> great job of simulating protocol correctness but not more complex 
> interactions.  They also can't really simulate the types of lock contention 
> and subtle memory stomps that show up while doing hundreds or thousands of 
> concurrent reads.   We should add a new minidfscluster test that focuses on 
> heavy read/seek load and then randomly convert error codes returned by 
> network functions into errors.
> List of things to simulate(while heavily loaded), roughly in order of how 
> badly I think they need to be tested at the moment:
> -Rpc connection disconnect
> -Rpc connection slowed down enough to cause a timeout and trigger retry
> -DN connection disconnect



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-05-05 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-9890:
--
Attachment: HDFS-9890.HDFS-8707.007.patch

> libhdfs++: Add test suite to simulate network issues
> 
>
> Key: HDFS-9890
> URL: https://issues.apache.org/jira/browse/HDFS-9890
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-9890.HDFS-8707.000.patch, 
> HDFS-9890.HDFS-8707.001.patch, HDFS-9890.HDFS-8707.002.patch, 
> HDFS-9890.HDFS-8707.003.patch, HDFS-9890.HDFS-8707.004.patch, 
> HDFS-9890.HDFS-8707.005.patch, HDFS-9890.HDFS-8707.006.patch
>
>
> I propose adding a test suite to simulate various network issues/failures in 
> order to get good test coverage on some of the retry paths that aren't easy 
> to hit in mock unit tests.
> At the moment the only things that hit the retry paths are the gmock unit 
> tests.  The gmock tests are only as good as their mock implementations, which do a 
> great job of simulating protocol correctness but not more complex 
> interactions.  They also can't really simulate the types of lock contention 
> and subtle memory stomps that show up while doing hundreds or thousands of 
> concurrent reads.   We should add a new minidfscluster test that focuses on 
> heavy read/seek load and then randomly convert error codes returned by 
> network functions into errors.
> List of things to simulate(while heavily loaded), roughly in order of how 
> badly I think they need to be tested at the moment:
> -Rpc connection disconnect
> -Rpc connection slowed down enough to cause a timeout and trigger retry
> -DN connection disconnect



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-240) Should HDFS restrict the names used for files?

2016-05-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HDFS-240:
---

Assignee: John Zhuge

> Should HDFS restrict the names used for files?
> --
>
> Key: HDFS-240
> URL: https://issues.apache.org/jira/browse/HDFS-240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.7.2
>Reporter: Robert Chansler
>Assignee: John Zhuge
>
> When reviewing the consequences of HADOOP-6017 (the name system could not 
> start because a file name interpreted as a regex caused a fault), the 
> discussion turned to improving the test set for file system functions by 
> broadening the set of names used for testing. Presently, HDFS allows any name 
> without a slash. _Should the space of names be restricted?_ If most funny 
> names are unintended, maybe the user would benefit from an early error 
> indication. A contrary view is that restricting names is so 20th-century.
> Should we or shouldn't we?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10348) Namenode report bad block method doesn't check whether the block belongs to datanode before adding it to corrupt replicas map.

2016-05-05 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273027#comment-15273027
 ] 

Rushabh S Shah commented on HDFS-10348:
---

{quote}
There are some odd edge cases surrounding storageInfo#addBlock. A client can 
report a corrupt GS that's different than currently in the blocks map. I 
question whether those should be ignored.
{quote}
[~daryn]: Thanks for the review; this is a good observation. But I think this 
jira is more concerned with blocks being added to the corrupt replicas map 
when they shouldn't be.
I think we should fix the corrupt GS case in a separate jira.
Any comments?
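
To make the proposed placement concrete, here is a tiny, self-contained model 
of the guard being discussed. The class, field, and method names are 
illustrative stand-ins, not the real BlockManager internals:

{code}
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of the proposed check: only record a corrupt replica when the
// reporting datanode is actually known to hold the block.
class CorruptReplicaGuardSketch {
  private final Map<String, Set<String>> replicasByBlock = new HashMap<>();
  private final Map<String, Set<String>> corruptReplicasByBlock = new HashMap<>();

  void addReplica(String blockId, String datanode) {
    replicasByBlock.computeIfAbsent(blockId, b -> new HashSet<>()).add(datanode);
  }

  void markBlockAsCorrupt(String blockId, String reportingDatanode) {
    Set<String> holders =
        replicasByBlock.getOrDefault(blockId, Collections.emptySet());
    if (!holders.contains(reportingDatanode)) {
      // The reporting node no longer holds this replica (e.g. it was already
      // invalidated), so skip it; otherwise the map entry can linger forever.
      return;
    }
    corruptReplicasByBlock
        .computeIfAbsent(blockId, b -> new HashSet<>())
        .add(reportingDatanode);
  }
}
{code}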

> Namenode report bad block method doesn't check whether the block belongs to 
> datanode before adding it to corrupt replicas map.
> --
>
> Key: HDFS-10348
> URL: https://issues.apache.org/jira/browse/HDFS-10348
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-10348-1.patch, HDFS-10348.patch
>
>
> The namenode (via the report bad block method) doesn't check whether the block 
> belongs to the datanode before it adds it to the corrupt replicas map.
> In one of our clusters we found that there were 3 lingering corrupt blocks.
> It happened in the following order.
> 1. Two clients called getBlockLocations for a particular file.
> 2. Client C1 tried to open the file, encountered a checksum error from 
> node N3, and reported the bad block (blk1) to the namenode.
> 3. The namenode added node N3 and block blk1 to the corrupt replicas map and 
> asked one of the good nodes (one of the other 2 nodes) to replicate the block 
> to another node N4.
> 4. After receiving the block, N4 sent an IBR (with RECEIVED_BLOCK) to the 
> namenode.
> 5. The namenode removed the block and node N3 from the corrupt replicas map.
>    It also removed N3's storage from the triplets and queued an invalidate 
> request for N3.
> 6. In the meantime, client C2 tried to open the file and the request went to 
> node N3.
>    C2 also encountered the checksum exception and reported the bad block to 
> the namenode.
> 7. The namenode added the corrupt block blk1 and node N3 to the corrupt 
> replicas map without confirming whether node N3 has the block or not.
> After deleting the block, N3 sent an IBR (with DELETED) and the namenode 
> simply ignored the report since N3's storage was no longer in the 
> triplets (from step 5).
> We took the node out of rotation, but the block was still present only in the 
> corruptReplicasMap, since on removing a node we only go through the blocks 
> present in that datanode's triplets.
> [~kshukla]'s patch fixed this bug via 
> https://issues.apache.org/jira/browse/HDFS-9958.
> But I think the following check should be made in 
> BlockManager#markBlockAsCorrupt instead of 
> BlockManager#findAndMarkBlockAsCorrupt.
> {noformat}
> if (storage == null) {
>   storage = storedBlock.findStorageInfo(node);
> }
> if (storage == null) {
>   blockLog.debug("BLOCK* findAndMarkBlockAsCorrupt: {} not found on {}",
>   blk, dn);
>   return;
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10361) Ozone: Support starting StorageContainerManager as a daemon

2016-05-05 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10361:

Summary: Ozone: Support starting StorageContainerManager as a daemon  (was: 
Support starting StorageContainerManager as a daemon)

> Ozone: Support starting StorageContainerManager as a daemon
> ---
>
> Key: HDFS-10361
> URL: https://issues.apache.org/jira/browse/HDFS-10361
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-1312
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-10361-HDFS-7240.01.patch
>
>
> Add shell script support for starting the StorageContainerManager service as 
> a daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10363) Ozone: Introduce new config keys for SCM service addresses

2016-05-05 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10363:

Summary: Ozone: Introduce new config keys for SCM service addresses  (was: 
Introduce new config keys for SCM service addresses)

> Ozone: Introduce new config keys for SCM service addresses
> --
>
> Key: HDFS-10363
> URL: https://issues.apache.org/jira/browse/HDFS-10363
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: OzoneScmEndpointconfiguration.pdf
>
>
> The SCM should have its own config keys to specify service addresses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-8790) Add Filesystem level stress tests

2016-05-05 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu reassigned HDFS-8790:
-

Assignee: Xiaowei Zhu  (was: James Clampffer)

> Add Filesystem level stress tests
> -
>
> Key: HDFS-8790
> URL: https://issues.apache.org/jira/browse/HDFS-8790
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-8790.HDFS-8707.000.patch
>
>
> I propose adding stress tests on the libhdfs(3) compatibility layer as well as 
> the async calls.  These can also be used for basic performance metrics and 
> inputs to profiling tools to see improvements over time.
> I'd like to make these tests into a separate executable, or set of them, so 
> that they can be used for longer running tests on dedicated clusters that may 
> already exist.  Each should provide a simple command line interface for 
> scripted or manual use.
> Basic tests would be:
> looped open-read-close
> sequential scans
> small random reads 
> All tests will be parameterized for number of threads, read size, and upper 
> and lower offset bounds for a specified file.  This will make it much easier 
> to detect and reproduce threading issues and resource leaks as well as 
> provide a simple executable (or set of executables) that can be run with 
> valgrind to gain a high confidence that the code is operating correctly.
> I'd appreciate suggestions for any other simple stress tests.
> HDFS-8766 intentionally avoided shared_ptr and unique_ptr in the C api to 
> make debugging this a little easier in case memory stomps and dangling 
> references show up in stress tests.  These will be added into the C API when 
> the patch for this jira is submitted because things should be reasonably 
> stable once the stress tests pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8790) Add Filesystem level stress tests

2016-05-05 Thread Xiaowei Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273002#comment-15273002
 ] 

Xiaowei Zhu commented on HDFS-8790:
---

Per James:

1. Ideally the test should stat the file it runs against so it knows the 
upper bound for seeks rather than a hard-coded max offset (see the sketch 
below).
2. Lots of the code can be cleaned up, and more test cases such as large 
reads should be added.
3. To emulate cancels in Vertica, have a whole bunch of threads doing reads 
like the EE would, then cancel them all at once.
4. GetBlockLocations has the file length, so that could be used instead of 
stat for now.
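
A small sketch of point 1, using the Java FileSystem API to derive the seek 
upper bound from the file's actual length instead of a hard-coded offset. The 
stress test itself lives in libhdfs++ (C++); this Java version, with a 
placeholder path and sizes, only illustrates the idea:

{code}
import java.io.IOException;
import java.util.concurrent.ThreadLocalRandom;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Derive random-read offsets from the file's real length ("stat" it first)
// rather than a hard-coded maximum offset. Assumes a non-empty file.
class RandomReadBoundsSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/tmp/stress-test-file"); // hypothetical test file

    long fileLength = fs.getFileStatus(file).getLen();
    byte[] buf = new byte[4096];

    try (FSDataInputStream in = fs.open(file)) {
      for (int i = 0; i < 1000; i++) {
        long offset = ThreadLocalRandom.current().nextLong(fileLength);
        int toRead = (int) Math.min(buf.length, fileLength - offset);
        in.readFully(offset, buf, 0, toRead); // positioned read within bounds
      }
    }
  }
}
{code}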

> Add Filesystem level stress tests
> -
>
> Key: HDFS-8790
> URL: https://issues.apache.org/jira/browse/HDFS-8790
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-8790.HDFS-8707.000.patch
>
>
> I propose adding stress tests on the libhdfs(3) compatibility layer as well as 
> the async calls.  These can also be used for basic performance metrics and 
> inputs to profiling tools to see improvements over time.
> I'd like to make these tests into a separate executable, or set of them, so 
> that they can be used for longer running tests on dedicated clusters that may 
> already exist.  Each should provide a simple command line interface for 
> scripted or manual use.
> Basic tests would be:
> looped open-read-close
> sequential scans
> small random reads 
> All tests will be parameterized for number of threads, read size, and upper 
> and lower offset bounds for a specified file.  This will make it much easier 
> to detect and reproduce threading issues and resource leaks as well as 
> provide a simple executable (or set of executables) that can be run with 
> valgrind to gain a high confidence that the code is operating correctly.
> I'd appreciate suggestions for any other simple stress tests.
> HDFS-8766 intentionally avoided shared_ptr and unique_ptr in the C api to 
> make debugging this a little easier in case memory stomps and dangling 
> references show up in stress tests.  These will be added into the C API when 
> the patch for this jira is submitted because things should be reasonably 
> stable once the stress tests pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10356) Ozone: Container server needs enhancements to control of bind address for greater flexibility and testability.

2016-05-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273000#comment-15273000
 ] 

Anu Engineer commented on HDFS-10356:
-

This JIRA intends to adopt the same solution proposed by [~arpitagarwal] in 
HDFS-10363, so that we have a similar approach on both datanodes and namenodes.

> Ozone: Container server needs enhancements to control of bind address for 
> greater flexibility and testability.
> --
>
> Key: HDFS-10356
> URL: https://issues.apache.org/jira/browse/HDFS-10356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chris Nauroth
>Assignee: Anu Engineer
>
> The container server, as implemented in class 
> {{org.apache.hadoop.ozone.container.common.transport.server.XceiverServer}}, 
> currently does not offer the same degree of flexibility as our other RPC 
> servers for controlling the network interface and port used in the bind call. 
>  There is no "bind-host" property, so it is not possible to select all 
> available network interfaces via the 0.0.0.0 wildcard address.  If the 
> requested port is different from the actual bound port (i.e. setting port to 
> 0 in test cases), then there is no exposure of that actual bound port to 
> clients.
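
For anyone unfamiliar with the port-0 pattern mentioned above, the 
test-friendly approach is to bind to the wildcard address and an ephemeral 
port, then expose the port the OS actually assigned. A generic java.net 
sketch, not the Netty-based XceiverServer code:

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

class BindAddressSketch {
  public static void main(String[] args) throws IOException {
    String bindHost = "0.0.0.0"; // wildcard: listen on all interfaces
    int requestedPort = 0;       // 0 lets the OS pick a free port (handy in tests)

    try (ServerSocket server = new ServerSocket()) {
      server.bind(new InetSocketAddress(bindHost, requestedPort));
      int actualPort = server.getLocalPort(); // expose this to clients/tests
      System.out.println("Bound to " + bindHost + ":" + actualPort);
    }
  }
}
{code}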



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10363) Introduce new config keys for SCM service addresses

2016-05-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272995#comment-15272995
 ] 

Anu Engineer commented on HDFS-10363:
-

This is really great. I am going to wait until you post the patches and then 
copy the same solution for the containers.


> Introduce new config keys for SCM service addresses
> ---
>
> Key: HDFS-10363
> URL: https://issues.apache.org/jira/browse/HDFS-10363
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: OzoneScmEndpointconfiguration.pdf
>
>
> The SCM should have its own config keys to specify service addresses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10188) libhdfs++: Implement debug allocators

2016-05-05 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu reassigned HDFS-10188:
--

Assignee: Xiaowei Zhu  (was: James Clampffer)

> libhdfs++: Implement debug allocators
> -
>
> Key: HDFS-10188
> URL: https://issues.apache.org/jira/browse/HDFS-10188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
>
> I propose implementing a set of memory new/delete pairs with additional 
> checking to detect double deletes, read-after-delete, and write-after-deletes 
> to help debug resource ownership issues and prevent new ones from entering 
> the library.
> One of the most common issues we have is use-after-free.  The 
> continuation pattern makes these really tricky to debug because by the time a 
> SIGSEGV is raised the context of what caused the error is long gone.
> The plan is to add allocators that can be turned on that can do the 
> following, in order of runtime cost.
> 1: no-op, forward through to default new/delete
> 2: make sure the memory given to the constructor is dirty, memset free'd 
> memory to 0
> 3: implement operator new with mmap, lock that region of memory once it's 
> been deleted; obviously this can't be left to run forever because the memory 
> is never unmapped
> This should also put some groundwork in place for implementing specialized 
> allocators for tiny objects that we churn through like std::string.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10363) Introduce new config keys for SCM service addresses

2016-05-05 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272966#comment-15272966
 ] 

Chris Nauroth commented on HDFS-10363:
--

I think the proposal looks great.  I like flipping around the service bind 
address so that wildcard is the default behavior.  I also like decoupling the 
client connect setting from the service bind setting.  Thank you, Arpit.

> Introduce new config keys for SCM service addresses
> ---
>
> Key: HDFS-10363
> URL: https://issues.apache.org/jira/browse/HDFS-10363
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: OzoneScmEndpointconfiguration.pdf
>
>
> The SCM should have its own config keys to specify service addresses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272939#comment-15272939
 ] 

Hadoop QA commented on HDFS-9890:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
40s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 51s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 50s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 21m 54s {color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_91 with JDK 
v1.8.0_91 generated 41 new + 23 unchanged - 6 fixed = 64 total (was 29) {color} 
|
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 25m 19s {color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_101 with JDK 
v1.7.0_101 generated 42 new + 22 unchanged - 7 fixed = 64 total (was 29) 
{color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 41s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 40s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 46s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/

[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-05-05 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272889#comment-15272889
 ] 

Mingliang Liu commented on HDFS-9732:
-

Thanks for the explanation. It is very true that string concatenation in a 
loop should use {{StringBuilder}} to avoid constructing intermediate objects. 
For the other cases in the patch, I think the + operator should be as 
efficient as StringBuilder. Readability is subjective and we won't spend time 
debating it, though I prefer the + operator for shorter/simpler statements. In 
my last comment, I was wondering whether there were other obvious reasons why 
StringBuilder was explicitly preferred.

The other parts of the patch look good to me.
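
To illustrate the point: javac compiles a single concatenation expression 
into a StringBuilder chain anyway, so the one-liner form is just as efficient; 
only concatenation inside a loop benefits from an explicit StringBuilder. A 
small standalone example, not code from the patch:

{code}
import java.util.Arrays;
import java.util.List;

class ConcatSketch {
  public static void main(String[] args) {
    String owner = "alice";
    long seq = 42;

    // Single expression: javac already rewrites this into a StringBuilder
    // chain, so it is as efficient as writing the builder by hand.
    String oneLiner = "owner=" + owner + ", sequenceNumber=" + seq;

    // Loop: each iteration of `s += part` would allocate a fresh builder and
    // copy the accumulated string, so an explicit StringBuilder pays off here.
    List<String> parts = Arrays.asList("issueDate", "maxDate", "renewer");
    StringBuilder sb = new StringBuilder(oneLiner);
    for (String part : parts) {
      sb.append(", ").append(part).append("=?");
    }
    System.out.println(sb);
  }
}
{code}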

> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732-000.patch, 
> HDFS-9732.001.patch, HDFS-9732.002.patch, HDFS-9732.003.patch, 
> HDFS-9732.004.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostics info, 
> owner, sequence number. But its superclass,  
> {{AbstractDelegationTokenIdentifier}} contains a lot more information, 
> including token issue and expiry times.
> Because  {{DelegationTokenIdentifier.toString()}} doesn't include this data,
> information that is potentially useful for kerberos diagnostics is lost.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-05-05 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-9890:
--
Attachment: HDFS-9890.HDFS-8707.006.patch

> libhdfs++: Add test suite to simulate network issues
> 
>
> Key: HDFS-9890
> URL: https://issues.apache.org/jira/browse/HDFS-9890
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-9890.HDFS-8707.000.patch, 
> HDFS-9890.HDFS-8707.001.patch, HDFS-9890.HDFS-8707.002.patch, 
> HDFS-9890.HDFS-8707.003.patch, HDFS-9890.HDFS-8707.004.patch, 
> HDFS-9890.HDFS-8707.005.patch, HDFS-9890.HDFS-8707.006.patch
>
>
> I propose adding a test suite to simulate various network issues/failures in 
> order to get good test coverage on some of the retry paths that aren't easy 
> to hit in mock unit tests.
> At the moment the only things that hit the retry paths are the gmock unit 
> tests.  The gmock tests are only as good as their mock implementations, which do a 
> great job of simulating protocol correctness but not more complex 
> interactions.  They also can't really simulate the types of lock contention 
> and subtle memory stomps that show up while doing hundreds or thousands of 
> concurrent reads.   We should add a new minidfscluster test that focuses on 
> heavy read/seek load and then randomly convert error codes returned by 
> network functions into errors.
> List of things to simulate(while heavily loaded), roughly in order of how 
> badly I think they need to be tested at the moment:
> -Rpc connection disconnect
> -Rpc connection slowed down enough to cause a timeout and trigger retry
> -DN connection disconnect



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-05-05 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-9890:
--
Attachment: (was: HDFS-9890.HDFS-8707.006.patch)

> libhdfs++: Add test suite to simulate network issues
> 
>
> Key: HDFS-9890
> URL: https://issues.apache.org/jira/browse/HDFS-9890
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-9890.HDFS-8707.000.patch, 
> HDFS-9890.HDFS-8707.001.patch, HDFS-9890.HDFS-8707.002.patch, 
> HDFS-9890.HDFS-8707.003.patch, HDFS-9890.HDFS-8707.004.patch, 
> HDFS-9890.HDFS-8707.005.patch, HDFS-9890.HDFS-8707.006.patch
>
>
> I propose adding a test suite to simulate various network issues/failures in 
> order to get good test coverage on some of the retry paths that aren't easy 
> to hit in mock unit tests.
> At the moment the only things that hit the retry paths are the gmock unit 
> tests.  The gmock tests are only as good as their mock implementations, which do a 
> great job of simulating protocol correctness but not more complex 
> interactions.  They also can't really simulate the types of lock contention 
> and subtle memory stomps that show up while doing hundreds or thousands of 
> concurrent reads.   We should add a new minidfscluster test that focuses on 
> heavy read/seek load and then randomly convert error codes returned by 
> network functions into errors.
> List of things to simulate(while heavily loaded), roughly in order of how 
> badly I think they need to be tested at the moment:
> -Rpc connection disconnect
> -Rpc connection slowed down enough to cause a timeout and trigger retry
> -DN connection disconnect



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-05-05 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272850#comment-15272850
 ] 

Yongjun Zhang commented on HDFS-9732:
-

Hi [~ste...@apache.org],

So far all comments are addressed ([~aw], if you disagree, would you please 
let us know?). Would you please see if you could give a +1 on rev 004, which 
you have reviewed?

Thanks a lot.




> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732-000.patch, 
> HDFS-9732.001.patch, HDFS-9732.002.patch, HDFS-9732.003.patch, 
> HDFS-9732.004.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostics info, 
> owner, sequence number. But its superclass,  
> {{AbstractDelegationTokenIdentifier}} contains a lot more information, 
> including token issue and expiry times.
> Because  {{DelegationTokenIdentifier.toString()}} doesn't include this data,
> information that is potentially useful for kerberos diagnostics is lost.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10337) OfflineEditsViewer stats option should print 0 instead of null for the count of operations

2016-05-05 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272829#comment-15272829
 ] 

Akira AJISAKA commented on HDFS-10337:
--

Thanks [~linyiqun] for updating the patch. Three comments:
* Would you fix the findbugs warning?
{code}
count == null ? 0 : count));
{code}
According to https://sourceforge.net/p/findbugs/bugs/1184/, if count is not 
null, count is unboxed and then immediately reboxed. We can fix it with
{code}
count == null ? Long.valueOf(0L) : count));
{code}
(see the small standalone example after this list)
* Would you fix the checkstyle warning?
* Would you add a regression test for this issue?
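
A standalone illustration of the boxing issue referenced above, not the patch 
code: because the literal 0 is a primitive, the ternary's result type becomes 
long, so a non-null count is unboxed and then reboxed on assignment; keeping 
both branches as Long avoids that.

{code}
class BoxingSketch {
  public static void main(String[] args) {
    Long count = 7L;

    // Ternary type is long here: count is unboxed to long, then the result
    // is reboxed to Long when assigned -- the pattern findbugs flags.
    Long reboxed = (count == null) ? 0 : count;

    // Both branches are already Long, so no unbox/rebox happens.
    Long clean = (count == null) ? Long.valueOf(0L) : count;

    System.out.println(reboxed + " " + clean);
  }
}
{code}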

> OfflineEditsViewer stats option should print 0 instead of null for the count 
> of operations
> --
>
> Key: HDFS-10337
> URL: https://issues.apache.org/jira/browse/HDFS-10337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Akira AJISAKA
>Assignee: Lin Yiqun
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-10337.001.patch, HDFS-10337.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-240) Should HDFS restrict the names used for files?

2016-05-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-240:
-
Affects Version/s: 2.7.2
 Target Version/s: 3.0.0
 Hadoop Flags: Incompatible change

This would be great to get into the next major release, setting the target 
version for tracking.

We have "-renameReserved" functionality introduced in HDFS-5709 that we can 
leverage to rename invalid paths on upgrade.

> Should HDFS restrict the names used for files?
> --
>
> Key: HDFS-240
> URL: https://issues.apache.org/jira/browse/HDFS-240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.7.2
>Reporter: Robert Chansler
>
> When reviewing the consequences of HADOOP-6017 (the name system could not 
> start because a file name interpreted as a regex caused a fault), the 
> discussion turned to improving the test set for file system functions by 
> broadening the set of names used for testing. Presently, HDFS allows any name 
> without a slash. _Should the space of names be restricted?_ If most funny 
> names are unintended, maybe the user would benefit from an early error 
> indication. A contrary view is that restricting names is so 20th-century.
> Should we or shouldn't we?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272798#comment-15272798
 ] 

Hadoop QA commented on HDFS-9890:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 53s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
37s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 57s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 57s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 22m 4s {color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_91 with JDK 
v1.8.0_91 generated 41 new + 23 unchanged - 6 fixed = 64 total (was 29) {color} 
|
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 25m 29s {color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_101 with JDK 
v1.7.0_101 generated 41 new + 23 unchanged - 6 fixed = 64 total (was 29) 
{color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 37s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 53s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 51s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed CTEST tests | 
test_libhdfs_mini_stress_hdfspp_test_shim_static |
\\
\\
|| Subsys

[jira] [Assigned] (HDFS-10370) Allow DataNode to be started with numactl

2016-05-05 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion reassigned HDFS-10370:
--

Assignee: Dave Marion

> Allow DataNode to be started with numactl
> -
>
> Key: HDFS-10370
> URL: https://issues.apache.org/jira/browse/HDFS-10370
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Dave Marion
>Assignee: Dave Marion
> Attachments: HDFS-10370-1.patch
>
>
> Allow numactl constraints to be applied to the datanode process. The 
> implementation I have in mind involves two environment variables (enable and 
> parameters) in the datanode startup process. Basically, if enabled and 
> numactl exists on the system, then start the java process using it. Provide a 
> default set of parameters, and allow the user to override the default. Wiring 
> this up for the non-jsvc use case seems straightforward. Not sure how this 
> can be supported using jsvc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10370) Allow DataNode to be started with numactl

2016-05-05 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HDFS-10370:
---
Attachment: HDFS-10370-1.patch

Straw man patch, looking for feedback.

> Allow DataNode to be started with numactl
> -
>
> Key: HDFS-10370
> URL: https://issues.apache.org/jira/browse/HDFS-10370
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Dave Marion
> Attachments: HDFS-10370-1.patch
>
>
> Allow numactl constraints to be applied to the datanode process. The 
> implementation I have in mind involves two environment variables (enable and 
> parameters) in the datanode startup process. Basically, if enabled and 
> numactl exists on the system, then start the java process using it. Provide a 
> default set of parameters, and allow the user to override the default. Wiring 
> this up for the non-jsvc use case seems straightforward. Not sure how this 
> can be supported using jsvc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8790) Add Filesystem level stress tests

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272717#comment-15272717
 ] 

Hadoop QA commented on HDFS-8790:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 1s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
10s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 48s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 47s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 25m 30s {color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_91 with JDK 
v1.8.0_91 generated 41 new + 23 unchanged - 6 fixed = 64 total (was 29) {color} 
|
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 28s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 28m 59s {color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_101 with JDK 
v1.7.0_101 generated 41 new + 23 unchanged - 6 fixed = 64 total (was 29) 
{color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 43s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 36s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 16s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org

[jira] [Commented] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-05-05 Thread Xiaowei Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272695#comment-15272695
 ] 

Xiaowei Zhu commented on HDFS-9890:
---

The latest patch HDFS-9890.HDFS-8707.006.patch fixes a bug in hdfs.cc where the file 
event callback would not be set and passed down to the block reader properly. It also 
changes the behavior of RANDOM_ERROR_RATIO in test_libhdfs_mini_stress.c (see the 
sketch after this list): 
1. unset: use the default of 10
2. set to 0: always error
3. < 0: always pass
4. other cases: random() % RANDOM_ERROR_RATIO
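
For illustration only, a minimal C++ sketch of how those four rules could be applied 
when deciding whether to inject an error; the helper name {{ShouldInjectError}} and the 
one-in-ratio reading of {{random() % RANDOM_ERROR_RATIO}} are assumptions, not the 
actual patch code:
{code}
#include <cstdlib>
#include <cstring>

// Decide whether the stress test should turn the current network call into an
// error, based on the RANDOM_ERROR_RATIO environment variable (sketch only).
static bool ShouldInjectError() {
  const char *val = std::getenv("RANDOM_ERROR_RATIO");
  long ratio = 10;                    // rule 1: unset -> default of 10
  if (val != nullptr && std::strlen(val) > 0) {
    ratio = std::strtol(val, nullptr, 10);
  }
  if (ratio == 0) return true;        // rule 2: 0 -> always error
  if (ratio < 0) return false;        // rule 3: negative -> always pass
  return (std::rand() % ratio) == 0;  // rule 4: roughly one error per ratio calls
}
{code}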

> libhdfs++: Add test suite to simulate network issues
> 
>
> Key: HDFS-9890
> URL: https://issues.apache.org/jira/browse/HDFS-9890
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-9890.HDFS-8707.000.patch, 
> HDFS-9890.HDFS-8707.001.patch, HDFS-9890.HDFS-8707.002.patch, 
> HDFS-9890.HDFS-8707.003.patch, HDFS-9890.HDFS-8707.004.patch, 
> HDFS-9890.HDFS-8707.005.patch, HDFS-9890.HDFS-8707.006.patch
>
>
> I propose adding a test suite to simulate various network issues/failures in 
> order to get good test coverage on some of the retry paths that aren't easy 
> to hit in mock unit tests.
> At the moment the only things that hit the retry paths are the gmock unit 
> tests.  The gmock tests are only as good as their mock implementations, which do a 
> great job of simulating protocol correctness but not more complex 
> interactions.  They also can't really simulate the types of lock contention 
> and subtle memory stomps that show up while doing hundreds or thousands of 
> concurrent reads.   We should add a new minidfscluster test that focuses on 
> heavy read/seek load and then randomly convert error codes returned by 
> network functions into errors.
> List of things to simulate (while heavily loaded), roughly in order of how 
> badly I think they need to be tested at the moment:
> -Rpc connection disconnect
> -Rpc connection slowed down enough to cause a timeout and trigger retry
> -DN connection disconnect



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-05-05 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-9890:
--
Attachment: HDFS-9890.HDFS-8707.006.patch

> libhdfs++: Add test suite to simulate network issues
> 
>
> Key: HDFS-9890
> URL: https://issues.apache.org/jira/browse/HDFS-9890
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-9890.HDFS-8707.000.patch, 
> HDFS-9890.HDFS-8707.001.patch, HDFS-9890.HDFS-8707.002.patch, 
> HDFS-9890.HDFS-8707.003.patch, HDFS-9890.HDFS-8707.004.patch, 
> HDFS-9890.HDFS-8707.005.patch, HDFS-9890.HDFS-8707.006.patch
>
>
> I propose adding a test suite to simulate various network issues/failures in 
> order to get good test coverage on some of the retry paths that aren't easy 
> to hit in mock unit tests.
> At the moment the only things that hit the retry paths are the gmock unit 
> tests.  The gmock tests are only as good as their mock implementations, which do a 
> great job of simulating protocol correctness but not more complex 
> interactions.  They also can't really simulate the types of lock contention 
> and subtle memory stomps that show up while doing hundreds or thousands of 
> concurrent reads.   We should add a new minidfscluster test that focuses on 
> heavy read/seek load and then randomly convert error codes returned by 
> network functions into errors.
> List of things to simulate (while heavily loaded), roughly in order of how 
> badly I think they need to be tested at the moment:
> -Rpc connection disconnect
> -Rpc connection slowed down enough to cause a timeout and trigger retry
> -DN connection disconnect



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8790) Add Filesystem level stress tests

2016-05-05 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-8790:
--
Attachment: (was: HDFS-9890.HDFS-8707.006.patch)

> Add Filesystem level stress tests
> -
>
> Key: HDFS-8790
> URL: https://issues.apache.org/jira/browse/HDFS-8790
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-8790.HDFS-8707.000.patch
>
>
> I propose adding stress tests on the libhdfs(3) compatibility layer as well as 
> the async calls.  These can also be used for basic performance metrics and 
> inputs to profiling tools to see improvements over time.
> I'd like to make these tests into a separate executable, or set of them, so 
> that they can be used for longer running tests on dedicated clusters that may 
> already exist.  Each should provide a simple command line interface for 
> scripted or manual use.
> Basic tests would be:
> looped open-read-close
> sequential scans
> small random reads 
> All tests will be parameterized for number of threads, read size, and upper 
> and lower offset bounds for a specified file.  This will make it much easier 
> to detect and reproduce threading issues and resource leaks as well as 
> provide a simple executable (or set of executables) that can be run with 
> valgrind to gain a high confidence that the code is operating correctly.
> I'd appreciate suggestions for any other simple stress tests.
> HDFS-8766 intentionally avoided shared_ptr and unique_ptr in the C api to 
> make debugging this a little easier in case memory stomps and dangling 
> references show up in stress tests.  These will be added into the C API when 
> the patch for this jira is submitted because things should be reasonably 
> stable once the stress tests pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8790) Add Filesystem level stress tests

2016-05-05 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-8790:
--
Attachment: HDFS-9890.HDFS-8707.006.patch

> Add Filesystem level stress tests
> -
>
> Key: HDFS-8790
> URL: https://issues.apache.org/jira/browse/HDFS-8790
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-8790.HDFS-8707.000.patch, 
> HDFS-9890.HDFS-8707.006.patch
>
>
> I propose adding stress tests on the libhdfs(3) compatibility layer as well as 
> the async calls.  These can also be used for basic performance metrics and 
> inputs to profiling tools to see improvements over time.
> I'd like to make these tests into a separate executable, or set of them, so 
> that they can be used for longer running tests on dedicated clusters that may 
> already exist.  Each should provide a simple command line interface for 
> scripted or manual use.
> Basic tests would be:
> looped open-read-close
> sequential scans
> small random reads 
> All tests will be parameterized for number of threads, read size, and upper 
> and lower offset bounds for a specified file.  This will make it much easier 
> to detect and reproduce threading issues and resource leaks as well as 
> provide a simple executable (or set of executables) that can be run with 
> valgrind to gain a high confidence that the code is operating correctly.
> I'd appreciate suggestions for any other simple stress tests.
> HDFS-8766 intentionally avoided shared_ptr and unique_ptr in the C api to 
> make debugging this a little easier in case memory stomps and dangling 
> references show up in stress tests.  These will be added into the C API when 
> the patch for this jira is submitted because things should be reasonably 
> stable once the stress tests pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10324) Trash directory in an encryption zone should be pre-created with correct permissions

2016-05-05 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272605#comment-15272605
 ] 

Xiaoyu Yao commented on HDFS-10324:
---

Thanks [~jojochuang] for fixing the checkstyle issues. The latest patch looks 
good to me. The unit test failures are known flaky tests tracked by HDFS-2043, 
HADOOP-13101 and HDFS-10371. 

I will hold off committing today in case [~andrew.wang] has additional comments.

> Trash directory in an encryption zone should be pre-created with correct 
> permissions
> 
>
> Key: HDFS-10324
> URL: https://issues.apache.org/jira/browse/HDFS-10324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.8.0
> Environment: CDH5.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10324.001.patch, HDFS-10324.002.patch, 
> HDFS-10324.003.patch, HDFS-10324.004.patch, HDFS-10324.005.patch, 
> HDFS-10324.006.patch, HDFS-10324.007.patch, HDFS-10324.008.patch
>
>
> We encountered a bug in HDFS-8831:
> After HDFS-8831, a deleted file in an encryption zone is moved to a .Trash 
> subdirectory within the encryption zone.
> However, if this .Trash subdirectory is not created beforehand, it will be 
> created and owned by the first user who deleted a file, with permission 
> drwx--. This creates a serious bug because any other non-privileged user 
> will not be able to delete any files within the encryption zone, because they 
> do not have the permission to move directories to the trash directory.
> We should fix this bug by pre-creating the .Trash directory with the sticky bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10355) Fix thread_local related build issue on Mac OS X

2016-05-05 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272602#comment-15272602
 ] 

James Clampffer commented on HDFS-10355:


So if it has pthread support we can do something clever to emulate thread_local 
(a rough sketch of such a helper follows below).  Based on a little bit of looking 
around, the __thread qualifier isn't supported either, unfortunately (please correct 
me if I'm wrong).
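
For illustration only, a rough sketch of such a pthread-based helper; the 
{{ThreadLocal}} class name and the usage shown are hypothetical, not code from any 
attached patch:
{code}
#include <pthread.h>

// Emulates "thread_local T var;" with pthread TLS for compilers that reject
// the thread_local keyword (sketch only, assumes pthreads are available).
template <typename T>
class ThreadLocal {
 public:
  ThreadLocal() {
    // The destructor callback runs for each thread's value when that thread exits.
    pthread_key_create(&key_, &ThreadLocal::Destroy);
  }
  ~ThreadLocal() { pthread_key_delete(key_); }

  // Returns this thread's instance, creating it lazily on first use.
  T &get() {
    T *value = static_cast<T *>(pthread_getspecific(key_));
    if (value == nullptr) {
      value = new T();
      pthread_setspecific(key_, value);
    }
    return *value;
  }

 private:
  static void Destroy(void *ptr) { delete static_cast<T *>(ptr); }
  pthread_key_t key_;
};

// Hypothetical usage corresponding to the failing declaration in hdfs.cc:
//   thread_local std::string errstr;
// could become:
//   static ThreadLocal<std::string> errstr;
//   errstr.get() = "error message";
{code}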

The lack of thread local support is actually very amusing (to me). The OSX 
version of clang could support it but has an explicit check and decides not to. 
 Obviously ABI compatibility is important but disabling chunks of the standard 
it claims to support with a "we'll fix this later with something faster" excuse 
doesn't really help anyone.
{code}
.Case("cxx_thread_local",
- LangOpts.CPlusPlus11 && PP.getTargetInfo().isTLSSupported() &&
- !PP.getTargetInfo().getTriple().isOSDarwin())
+ LangOpts.CPlusPlus11 && PP.getTargetInfo().isTLSSupported())
{code}
(from 
stackoverflow.com/questions/23791060/c-thread-local-storage-clang-503-0-40-mac-osx)

I think options (c) and (d) will cause a lot of pain.  I suppose you could 
disable the C API and any other synchronous calls that interact with asio with 
some #ifdefs to avoid anything that really needs to be thread local.  The 
thread_local variables are only there to make a shim for a synchronous API.  If 
everything is done via callback they aren't needed.

If getting a new compiler installed on OS X is enough of a pain that it's 
preventing potentially interested people from contributing to this, then (b) 
seems like the clear way to go.  It'd pay for itself with just a couple of 
patches from new people.

My personal preference would be (a), but I'm biased because all development I 
do is Linux based.  Also, clang seems to tolerate undefined behavior more than 
I'd like, e.g. calling shared_from_this() on a class deriving from 
enable_shared_from_this works even if std::make_shared hasn't been called, which 
led to some confusion a while back.

> Fix thread_local related build issue on Mac OS X
> 
>
> Key: HDFS-10355
> URL: https://issues.apache.org/jira/browse/HDFS-10355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
> Environment: OS: Mac OS X 10.11
> clang: Apple LLVM version 7.0.2 (clang-700.1.81)
>Reporter: Tibor Kiss
>
> The native hdfs library uses C++11 features heavily.
> One such feature is the thread_local storage class, which is supported by GCC, 
> Visual Studio and the community version of the clang compiler, but not by Apple's 
> clang (which is the default on OS X boxes). 
> See further details here: http://stackoverflow.com/a/29929949
> Even though not many Hadoop clusters run on OS X, developers still use this 
> platform for development.
> The problem can be solved multiple ways:
>  a) Stick to gcc/g++ or community based clang on OS X. Developers will need 
> extra steps to build Hadoop.
>  b) Workaround thread_local with a helper class.
>  c) Get rid of all the globals marked with thread_local. An interface change 
> will be required.
>  d) Disable multi threading support in the native client on OS X and document 
> this limitation. 
> Compile error related to thread_local:
> {noformat}
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/bindings/c/hdfs.cc:66:1:
>  error: thread-local storage is not supported for the current target
>  [exec] thread_local std::string errstr;
>  [exec] ^
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/bindings/c/hdfs.cc:87:1:
>  error: thread-local storage is not supported for the current target
>  [exec] thread_local std::experimental::optional 
> fsEventCallback;
>  [exec] ^
>  [exec] 
> /Users/tiborkiss/workspace/apache-hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/bindings/c/hdfs.cc:88:1:
>  error: thread-local storage is not supported for the current target
>  [exec] thread_local std::experimental::optional 
> fileEventCallback;
>  [exec] ^
>  [exec] 1 warning and 3 errors generated.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10371) MiniDFSCluster#restartDataNode does not always stop DN before start DN

2016-05-05 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272597#comment-15272597
 ] 

Xiaoyu Yao commented on HDFS-10371:
---

Many tests such as 
{{TestDecommissionWithStriped.testDecommissionWithURBlockForSameBlockGroup}} 
use the following method, which is subject to a similar binding problem.
{code}
  public synchronized boolean restartDataNode(DataNodeProperties dnprop, 
boolean keepPort)
{code}

This ticket is opened to switch them to the new method introduced by HDFS-7886, 
which stops the DN before restarting it.
{code}
public synchronized boolean restartDataNode(
  int idn, boolean keepPort, boolean expireOnNN) throws IOException {
DataNodeProperties dnprop = stopDataNode(idn);
if(expireOnNN) {
  setDataNodeDead(dnprop.datanode.getDatanodeId());
}
if (dnprop == null) {
  return false;
} else {
  return restartDataNode(dnprop, keepPort);
}
  }
{code}


> MiniDFSCluster#restartDataNode does not always stop DN before start DN
> --
>
> Key: HDFS-10371
> URL: https://issues.apache.org/jira/browse/HDFS-10371
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Xiaoyu Yao
>
> This could cause an intermittent port binding problem if the keep-the-same-port 
> option is chosen, as evident in the recent 
> [Jenkins|https://builds.apache.org/job/PreCommit-HDFS-Build/15366/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_91.txt]
> {code}
> Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 53.772 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDecommissionWithStriped
> testDecommissionWithURBlockForSameBlockGroup(org.apache.hadoop.hdfs.TestDecommissionWithStriped)
>   Time elapsed: 6.946 sec  <<< ERROR!
> java.net.BindException: Problem binding to [localhost:52957] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:530)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:793)
>   at org.apache.hadoop.ipc.Server.(Server.java:2592)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:958)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:563)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:538)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:932)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1297)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:479)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2584)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2472)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2519)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2242)
>   at 
> org.apache.hadoop.hdfs.TestDecommissionWithStriped.testDecommissionWithURBlockForSameBlockGroup(TestDecommissionWithStriped.java:254)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10371) MiniDFSCluster#restartDataNode does not always stop DN before start DN

2016-05-05 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-10371:
-

 Summary: MiniDFSCluster#restartDataNode does not always stop DN 
before start DN
 Key: HDFS-10371
 URL: https://issues.apache.org/jira/browse/HDFS-10371
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Reporter: Xiaoyu Yao


This could cause an intermittent port binding problem if the keep-the-same-port 
option is chosen, as evident in the recent 
[Jenkins|https://builds.apache.org/job/PreCommit-HDFS-Build/15366/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_91.txt]

{code}
Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 53.772 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.TestDecommissionWithStriped
testDecommissionWithURBlockForSameBlockGroup(org.apache.hadoop.hdfs.TestDecommissionWithStriped)
  Time elapsed: 6.946 sec  <<< ERROR!
java.net.BindException: Problem binding to [localhost:52957] 
java.net.BindException: Address already in use; For more details see:  
http://wiki.apache.org/hadoop/BindException
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.ipc.Server.bind(Server.java:530)
at org.apache.hadoop.ipc.Server$Listener.(Server.java:793)
at org.apache.hadoop.ipc.Server.(Server.java:2592)
at org.apache.hadoop.ipc.RPC$Server.(RPC.java:958)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:563)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:538)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:932)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1297)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:479)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2584)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2472)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2519)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2242)
at 
org.apache.hadoop.hdfs.TestDecommissionWithStriped.testDecommissionWithURBlockForSameBlockGroup(TestDecommissionWithStriped.java:254)

{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10370) Allow DataNode to be started with numactl

2016-05-05 Thread Dave Marion (JIRA)
Dave Marion created HDFS-10370:
--

 Summary: Allow DataNode to be started with numactl
 Key: HDFS-10370
 URL: https://issues.apache.org/jira/browse/HDFS-10370
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Dave Marion


Allow numactl constraints to be applied to the datanode process. The 
implementation I have in mind involves two environment variables (enable and 
parameters) in the datanode startup process. Basically, if enabled and numactl 
exists on the system, then start the java process using it. Provide a default 
set of parameters, and allow the user to override the default. Wiring this up 
for the non-jsvc use case seems straightforward. Not sure how this can be 
supported using jsvc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10369) hdfsread crash when reading data reaches to 128M

2016-05-05 Thread vince zhang (JIRA)
vince zhang created HDFS-10369:
--

 Summary: hdfsread crash when reading data reaches to 128M
 Key: HDFS-10369
 URL: https://issues.apache.org/jira/browse/HDFS-10369
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs
Reporter: vince zhang


See the code below; it crashes after the call printf("hdfsGetDefaultBlockSize2:%d, 
ret:%d\n", hdfsGetDefaultBlockSize(fs), ret);
  
hdfsFile read_file = hdfsOpenFile(fs, "/testpath", O_RDONLY, 0, 0, 1); 
  int total = hdfsAvailable(fs, read_file);
  printf("Total:%d\n", total);
  char* buffer = (char*)malloc(sizeof(size+1) * sizeof(char));
  int ret = -1; 
  int len = 0;
  ret = hdfsSeek(fs, read_file, 134152192);
  printf("hdfsGetDefaultBlockSize1:%d, ret:%d\n", hdfsGetDefaultBlockSize(fs), 
ret);
  ret = hdfsRead(fs, read_file, (void*)buffer, size);
  printf("hdfsGetDefaultBlockSize2:%d, ret:%d\n", hdfsGetDefaultBlockSize(fs), 
ret);
  ret = hdfsRead(fs, read_file, (void*)buffer, size);
  printf("hdfsGetDefaultBlockSize3:%d, ret:%d\n", hdfsGetDefaultBlockSize(fs), 
ret);
  return 0;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2043) TestHFlush failing intermittently

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272226#comment-15272226
 ] 

Hadoop QA commented on HDFS-2043:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 1s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 0s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 20s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 228m 39s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestAsyncDFSRename |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server

[jira] [Commented] (HDFS-10303) DataStreamer#ResponseProcessor calculate packet acknowledge duration wrongly.

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272219#comment-15272219
 ] 

Hadoop QA commented on HDFS-10303:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 43s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 12s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-hdfs-project: patch generated 1 new + 77 unchanged 
- 1 fixed = 78 total (was 78) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 42s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 45s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 147m 57s {color} 
| {color:black} {co

[jira] [Updated] (HDFS-10368) Erasure Coding: Deprecate replication-related config keys

2016-05-05 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10368:

Description: 
This jira is to visit the replication-based config keys and deprecate them (if 
necessary) in order to make them more meaningful.

Please refer [discussion 
thread|https://issues.apache.org/jira/browse/HDFS-9869?focusedCommentId=15249363&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15249363]

  was:
This jira is to visit the replication based config keys and deprecate them.

Please refer [discussion 
thread|https://issues.apache.org/jira/browse/HDFS-9869?focusedCommentId=15249363&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15249363]


> Erasure Coding: Deprecate replication-related config keys
> -
>
> Key: HDFS-10368
> URL: https://issues.apache.org/jira/browse/HDFS-10368
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
>
> This jira is to visit the replication-based config keys and deprecate them (if 
> necessary) in order to make them more meaningful.
> Please refer [discussion 
> thread|https://issues.apache.org/jira/browse/HDFS-9869?focusedCommentId=15249363&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15249363]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10368) Erasure Coding: Deprecate replication-related config keys

2016-05-05 Thread Rakesh R (JIRA)
Rakesh R created HDFS-10368:
---

 Summary: Erasure Coding: Deprecate replication-related config keys
 Key: HDFS-10368
 URL: https://issues.apache.org/jira/browse/HDFS-10368
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R


This jira is to visit the replication based config keys and deprecate them.

Please refer [discussion 
thread|https://issues.apache.org/jira/browse/HDFS-9869?focusedCommentId=15249363&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15249363]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10303) DataStreamer#ResponseProcessor calculate packet acknowledge duration wrongly.

2016-05-05 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-10303:
--
Status: Patch Available  (was: Open)

> DataStreamer#ResponseProcessor calculate packet acknowledge duration wrongly.
> -
>
> Key: HDFS-10303
> URL: https://issues.apache.org/jira/browse/HDFS-10303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-10303-001.patch
>
>
> Packets acknowledge duration should be calculated based on the packet send 
> time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2043) TestHFlush failing intermittently

2016-05-05 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272057#comment-15272057
 ] 

John Zhuge commented on HDFS-2043:
--

[~linyiqun] and [~iwasakims], great work on patch 004. It fixes my 
testHFlushInterrupted failure (ClosedByInterruptException) on my branch (latest 
2.6.0-based CDH + HDFS-9812).

> TestHFlush failing intermittently
> -
>
> Key: HDFS-2043
> URL: https://issues.apache.org/jira/browse/HDFS-2043
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Aaron T. Myers
>Assignee: Lin Yiqun
> Attachments: HDFS-2043.002.patch, HDFS-2043.003.patch, 
> HDFS-2043.004.patch, HDFS.001.patch
>
>
> I can't reproduce this failure reliably, but it seems like TestHFlush has 
> been failing intermittently, with the frequency increasing of late.
> Note the following two pre-commit test runs from different JIRAs where 
> TestHFlush seems to have failed spuriously:
> https://builds.apache.org/job/PreCommit-HDFS-Build/734//testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/680//testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-2043) TestHFlush failing intermittently

2016-05-05 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-2043:

Attachment: HDFS-2043.004.patch

Thanks [~iwasakims] for the great analysis! 
{quote}
The testHFlushInterrupted expects that the second stm.close() succeeds but it 
is not true. Underlying streamer thread is closed since closeThreads(true) is 
called in the finally block of DFSOutputStream#closeImpl.
{quote}
This operation was added in HDFS-9812; that issue fixed the problem of streamer 
threads leaking if a failure happens when closing DFSOutputStream. After HDFS-9812 
was fixed, the second {{stm.close()}} fails more frequently.

Thanks again for the comment. Posting a new patch for this.

> TestHFlush failing intermittently
> -
>
> Key: HDFS-2043
> URL: https://issues.apache.org/jira/browse/HDFS-2043
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Aaron T. Myers
>Assignee: Lin Yiqun
> Attachments: HDFS-2043.002.patch, HDFS-2043.003.patch, 
> HDFS-2043.004.patch, HDFS.001.patch
>
>
> I can't reproduce this failure reliably, but it seems like TestHFlush has 
> been failing intermittently, with the frequency increasing of late.
> Note the following two pre-commit test runs from different JIRAs where 
> TestHFlush seems to have failed spuriously:
> https://builds.apache.org/job/PreCommit-HDFS-Build/734//testReport/
> https://builds.apache.org/job/PreCommit-HDFS-Build/680//testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8449) Add tasks count metrics to datanode for ECWorker

2016-05-05 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272031#comment-15272031
 ] 

Li Bo commented on HDFS-8449:
-

Thanks very much to Kai for the review.

bq. Could you enhance TestReconstructStripedFile similarly?
bq. Could we share TestReconstructStripedFile#waitForRecoveryFinished and avoid 
waitForRecoveryFinished?
I think it’s a little strange to call 
{{TestReconstructStripedFile#waitForRecoveryFinished}} in 
{{TestDataNodeErasureCodingMetrics}}, because changes to 
{{TestReconstructStripedFile}} may then impact {{TestDataNodeErasureCodingMetrics}}. 
We can move the shared function to a util class.
I think it's better to do the changes to {{TestReconstructStripedFile}} in a 
new separate jira in order to keep this jira focused on the test of datanode 
metrics.

bq. Could we use DFSTestUtil.writeFile to generate the test file?
Both implementations are OK. There are many test cases that directly use an 
output stream to write a file.

bq. I'm not sure about the following block codes are necessary.
The system will execute the actions periodically. In the test we should make 
sure the actions are executed before moving forward. 


> Add tasks count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8449
> URL: https://issues.apache.org/jira/browse/HDFS-8449
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8449-000.patch, HDFS-8449-001.patch, 
> HDFS-8449-002.patch, HDFS-8449-003.patch, HDFS-8449-004.patch, 
> HDFS-8449-005.patch, HDFS-8449-006.patch, HDFS-8449-007.patch, 
> HDFS-8449-008.patch, HDFS-8449-009.patch
>
>
> This sub-task tries to record the EC recovery tasks that a datanode has done, 
> including total tasks, failed tasks and successful tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10365) FullBlockReports retransmission delays NN startup time in large cluster.

2016-05-05 Thread Chackaravarthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272001#comment-15272001
 ] 

Chackaravarthy commented on HDFS-10365:
---

[~kihwal] These are valuable inputs for us. Thanks.
{noformat}
Yes. Each FBR rpc will be smaller, so the impact of timeout-retransmit will be 
lower. Also NN will process individual report quicker.
{noformat}
By doing so, are we not delaying the next heartbeat sent from the DN for too long, 
since each RPC call might consume up to 60s? Or is this affordable since the FBR 
will happen only once every 6 hours? 

> FullBlockReports retransmission delays NN startup time in large cluster.
> 
>
> Key: HDFS-10365
> URL: https://issues.apache.org/jira/browse/HDFS-10365
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.0
> Environment: version - hadoop-2.6.0 (hdp-2.2)
> DN - 1200 nodes
>Reporter: Chackaravarthy
>Priority: Critical
>
> Whenever the NN is restarted, it takes a huge amount of time for the NN to come back 
> to a stable state, i.e. the last contact time remains above 1 or 2 mins continuously 
> for around 3 to 4 hours. This is mainly because most of the DNs hit the timeout 
> (60s) in the blockReport (FBR) rpc call and then keep sending the FBR again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org