[jira] [Created] (HDFS-11992) Replace commons-logging APIs with slf4j in FsDatasetImpl

2017-06-19 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HDFS-11992:


 Summary: Replace commons-logging APIs with slf4j in FsDatasetImpl
 Key: HDFS-11992
 URL: https://issues.apache.org/jira/browse/HDFS-11992
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Akira Ajisaka


{{FsDatasetImpl.LOG}} is widely used, and replacing it also changes the APIs of 
InstrumentedLock and InstrumentedWriteLock, so this issue is scoped to changing only 
{{FsDatasetImpl.LOG}} and the directly related APIs.
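
As a rough illustration of the kind of change involved, here is a minimal sketch of a commons-logging to slf4j migration. The class and message below are made up for the example; this is not the actual HDFS-11992 patch.

{code:java}
// Before (commons-logging):
//   import org.apache.commons.logging.Log;
//   import org.apache.commons.logging.LogFactory;
//   static final Log LOG = LogFactory.getLog(FsDatasetImpl.class);

// After (slf4j) -- illustrative sketch only:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class FsDatasetImplLoggingSketch {
  static final Logger LOG = LoggerFactory.getLogger(FsDatasetImplLoggingSketch.class);

  void example(String volume) {
    // slf4j supports {} placeholders instead of string concatenation.
    LOG.warn("Removing failed volume {} from service", volume);
  }
}
{code}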






[jira] [Assigned] (HDFS-11992) Replace commons-logging APIs with slf4j in FsDatasetImpl

2017-06-19 Thread hu xiaodong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hu xiaodong reassigned HDFS-11992:
--

Assignee: hu xiaodong

> Replace commons-logging APIs with slf4j in FsDatasetImpl
> 
>
> Key: HDFS-11992
> URL: https://issues.apache.org/jira/browse/HDFS-11992
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: hu xiaodong
>
> {{FsDatasetImpl.LOG}} is widely used, and replacing it also changes the APIs of 
> InstrumentedLock and InstrumentedWriteLock, so this issue is scoped to changing only 
> {{FsDatasetImpl.LOG}} and the directly related APIs.






[jira] [Created] (HDFS-11993) Add log info when connect to datanode socket address failed

2017-06-19 Thread chencan (JIRA)
chencan created HDFS-11993:
--

 Summary: Add log info when connect to datanode socket address 
failed
 Key: HDFS-11993
 URL: https://issues.apache.org/jira/browse/HDFS-11993
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: chencan


In the function BlockSeekTo, when connecting to the datanode socket address fails, 
the client logs the following:

DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block"
  + ", add to deadNodes and continue. " + ex, ex);

Adding the block info would make this message more explicit.
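
For illustration only, the message could carry the block as well; the variable name {{targetBlock}} below is an assumption, not taken from an actual patch:

{code:java}
// Illustrative sketch; targetAddr, targetBlock and ex are assumed to be in scope.
DFSClient.LOG.warn("Failed to connect to " + targetAddr
    + " for block " + targetBlock
    + ", add to deadNodes and continue. " + ex, ex);
{code}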






[jira] [Updated] (HDFS-11993) Add log info when connect to datanode socket address failed

2017-06-19 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-11993:
---
Attachment: HADOOP-11993.patch

> Add log info when connect to datanode socket address failed
> ---
>
> Key: HDFS-11993
> URL: https://issues.apache.org/jira/browse/HDFS-11993
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: chencan
> Attachments: HADOOP-11993.patch
>
>
> In the function BlockSeekTo, when connecting to the datanode socket address fails, 
> the client logs the following:
> DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block"
>   + ", add to deadNodes and continue. " + ex, ex);
> Adding the block info would make this message more explicit.






[jira] [Updated] (HDFS-11993) Add log info when connect to datanode socket address failed

2017-06-19 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-11993:
---
Status: Patch Available  (was: Open)

> Add log info when connect to datanode socket address failed
> ---
>
> Key: HDFS-11993
> URL: https://issues.apache.org/jira/browse/HDFS-11993
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: chencan
> Attachments: HADOOP-11993.patch
>
>
> In the function BlockSeekTo, when connecting to the datanode socket address fails, 
> the client logs the following:
> DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block"
>   + ", add to deadNodes and continue. " + ex, ex);
> Adding the block info would make this message more explicit.






[jira] [Assigned] (HDFS-11991) Ozone: Ozone shell: the root is assumed to hdfs

2017-06-19 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned HDFS-11991:
--

Assignee: Weiwei Yang

> Ozone: Ozone shell: the root is assumed to hdfs
> ---
>
> Key: HDFS-11991
> URL: https://issues.apache.org/jira/browse/HDFS-11991
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Fix For: HDFS-7240
>
>
> The *hdfs oz* command (the ozone shell) has a command-line option, _--root_, that makes 
> it easy to run some commands as root. But after HDFS-11655 that assumption is no 
> longer true. We need to detect the user that started the scm/ksm service and map 
> _root_ to that user.
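
As a rough sketch of the proposed behavior (class and method names below are assumptions, not the Ozone code), the service could capture its own user at startup and map the shell's _--root_ option to it:

{code:java}
// Illustrative sketch only; not the actual SCM/KSM implementation.
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

class OzoneRootUserMappingSketch {
  /** Captured once when the SCM/KSM service starts. */
  private final String serviceUser;

  OzoneRootUserMappingSketch() throws IOException {
    this.serviceUser = UserGroupInformation.getCurrentUser().getShortUserName();
  }

  /** Map the shell's --root option to the user that started the service. */
  String resolveUser(boolean rootRequested, String requestedUser) {
    return rootRequested ? serviceUser : requestedUser;
  }
}
{code}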






[jira] [Commented] (HDFS-11991) Ozone: Ozone shell: the root is assumed to hdfs

2017-06-19 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16053586#comment-16053586
 ] 

Weiwei Yang commented on HDFS-11991:


I will be working on this, thanks [~anu] for filing the ticket.

> Ozone: Ozone shell: the root is assumed to hdfs
> ---
>
> Key: HDFS-11991
> URL: https://issues.apache.org/jira/browse/HDFS-11991
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Fix For: HDFS-7240
>
>
> The *hdfs oz* command (the ozone shell) has a command-line option, _--root_, that makes 
> it easy to run some commands as root. But after HDFS-11655 that assumption is no 
> longer true. We need to detect the user that started the scm/ksm service and map 
> _root_ to that user.






[jira] [Commented] (HDFS-11916) Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a random EC policy

2017-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16053603#comment-16053603
 ] 

Hadoop QA commented on HDFS-11916:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 8 unchanged - 0 fixed = 10 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11916 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873436/HDFS-11916.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5fe2d4c18c47 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 75043d3 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19941/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19941/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19941/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19941/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a 
> random EC policy
> 
>
> Key: HDFS-11916
> URL: https://issues.apache.org/ji

[jira] [Commented] (HDFS-11993) Add log info when connect to datanode socket address failed

2017-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16053606#comment-16053606
 ] 

Hadoop QA commented on HDFS-11993:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 2 new + 41 unchanged - 0 fixed = 43 total (was 41) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11993 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873444/HADOOP-11993.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7f42078e9433 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 75043d3 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19943/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19943/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19943/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add log info when connect to datanode socket address failed
> ---
>
> Key: HDFS-11993
> URL: https://issues.apache.org/jira/browse/HDFS-11993
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: chencan
> Attachments: HADOOP-11993.patch
>
>
> In the function BlockSeekTo, when connecting to the datanode socket address fails, the client logs a

[jira] [Commented] (HDFS-11606) Add CLI cmd to remove an erasure code policy

2017-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16053616#comment-16053616
 ] 

Hadoop QA commented on HDFS-11606:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 
452 unchanged - 0 fixed = 455 total (was 452) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
25s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11606 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873438/HDFS-11606.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 38c3c0d6e6b3 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 75043d3 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19942/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19942/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-

[jira] [Commented] (HDFS-11647) Add -E option in hdfs "count" command to show erasure policy summarization

2017-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16053662#comment-16053662
 ] 

Hadoop QA commented on HDFS-11647:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
31s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 58s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestHarFileSystem |
|   | hadoop.fs.TestFilterFileSystem |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11647 |
| GITHUB PR | https://github.com/apache/hadoop/pull/233 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  cc  |
| uname | Linux cfd0dff11a7d 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 75043d3 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19940/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Bu

[jira] [Updated] (HDFS-11606) Add CLI cmd to remove an erasure code policy

2017-06-19 Thread Tim Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Yao updated HDFS-11606:
---
Attachment: HDFS-11606.03.patch

The unit test failure seems unrelated to this patch. I have fixed the 
checkstyle problems and attached the new patch, HDFS-11606.03.patch.

> Add CLI cmd to remove an erasure code policy
> 
>
> Key: HDFS-11606
> URL: https://issues.apache.org/jira/browse/HDFS-11606
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Kai Zheng
>Assignee: Tim Yao
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-11606.01.patch, HDFS-11606.02.patch, 
> HDFS-11606.03.patch
>
>
> This is to develop a CLI command allowing the user to remove a user-defined erasure 
> code policy by specifying its name. Note that if the policy is referenced and used 
> by existing HDFS files, the removal should fail with a clear message.






[jira] [Updated] (HDFS-11992) Replace commons-logging APIs with slf4j in FsDatasetImpl

2017-06-19 Thread hu xiaodong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hu xiaodong updated HDFS-11992:
---
Attachment: HDFS-11992.001.patch

> Replace commons-logging APIs with slf4j in FsDatasetImpl
> 
>
> Key: HDFS-11992
> URL: https://issues.apache.org/jira/browse/HDFS-11992
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: Akira Ajisaka
>Assignee: hu xiaodong
> Attachments: HDFS-11992.001.patch
>
>
> {{FsDatasetImpl.LOG}} is widely used, and replacing it also changes the APIs of 
> InstrumentedLock and InstrumentedWriteLock, so this issue is scoped to changing only 
> {{FsDatasetImpl.LOG}} and the directly related APIs.






[jira] [Updated] (HDFS-11992) Replace commons-logging APIs with slf4j in FsDatasetImpl

2017-06-19 Thread hu xiaodong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hu xiaodong updated HDFS-11992:
---
Affects Version/s: 3.0.0-alpha3
   Status: Patch Available  (was: Open)

> Replace commons-logging APIs with slf4j in FsDatasetImpl
> 
>
> Key: HDFS-11992
> URL: https://issues.apache.org/jira/browse/HDFS-11992
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: Akira Ajisaka
>Assignee: hu xiaodong
> Attachments: HDFS-11992.001.patch
>
>
> {{FsDatasetImpl.LOG}} is widely used, and replacing it also changes the APIs of 
> InstrumentedLock and InstrumentedWriteLock, so this issue is scoped to changing only 
> {{FsDatasetImpl.LOG}} and the directly related APIs.






[jira] [Commented] (HDFS-11925) HDFS oiv:Normalize the verification of input parameter

2017-06-19 Thread LiXin Ge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16053756#comment-16053756
 ] 

LiXin Ge commented on HDFS-11925:
-

Hi [~linyiqun], sorry to disturb you, but I think you are an expert in the HDFS oiv 
tool. Could you please help review this patch and HDFS-11927? These two 
patches enhance the verification of input parameters and reject unmatched 
parameters. Thank you!

> HDFS oiv:Normalize the verification of input parameter
> --
>
> Key: HDFS-11925
> URL: https://issues.apache.org/jira/browse/HDFS-11925
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha3
>Reporter: LiXin Ge
>Assignee: LiXin Ge
> Attachments: HDFS-11925.patch
>
>
> At present, the hdfs oiv tool lacks verification of its input parameters. People can 
> type in an irrelevant option like:
> bq. ./hdfs oiv -i fsimage_000 -p XML -step 1024
> or type an option in the wrong format which they think takes effect but 
> actually does not:
> bq. ./hdfs oiv -i fsimage_000 -p FileDistribution maxSize 
> 4096 step 512 format
> or some meaningless words which also get through:
> bq. ./hdfs oiv -i fsimage_000 -p XML Hello Han Meimei
> We'd better not let these cases go unchecked.
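
For illustration, a minimal sketch of the stricter parsing being proposed, using Apache commons-cli to reject leftover arguments; the option set below is abbreviated and the class name is made up, so this is not the actual oiv option handling.

{code:java}
// Illustrative sketch; not the actual OfflineImageViewer option handling.
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Options;

class OivStrictArgsSketch {
  static CommandLine parseStrict(String[] args) throws Exception {
    Options options = new Options();
    options.addOption("i", "inputFile", true, "fsimage file to process");
    options.addOption("p", "processor", true, "processor to apply (XML, FileDistribution, ...)");
    options.addOption("step", true, "FileDistribution: granularity of the distribution");

    CommandLine cli = new DefaultParser().parse(options, args);
    // Anything left over ("Hello Han Meimei", "maxSize 4096 step 512 format", ...)
    // is rejected instead of being silently ignored.
    if (!cli.getArgList().isEmpty()) {
      throw new IllegalArgumentException("Unrecognized arguments: " + cli.getArgList());
    }
    return cli;
  }
}
{code}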






[jira] [Commented] (HDFS-11949) Add testcase for ensuring that FsShell can't move file to the target directory where the file exists

2017-06-19 Thread legend (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16053779#comment-16053779
 ] 

legend commented on HDFS-11949:
---

[~yzhangal], the javac and checkstyle issues are fixed by HDFS-11949.004.patch. I tried 
to fix the error caused by HDFS-11949.004.patch and offered a patch for 
HADOOP-14541, which is used to track the TestFilterFileSystem#testFilterFileSystem 
AssertionError. But HADOOP-14538 was fixed just now.


> Add testcase for ensuring that FsShell can't move file to the target 
> directory where the file exists
> 
>
> Key: HDFS-11949
> URL: https://issues.apache.org/jira/browse/HDFS-11949
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-alpha4
>Reporter: legend
>Assignee: legend
>Priority: Minor
> Attachments: HDFS-11949.001.patch, HDFS-11949.002.patch, 
> HDFS-11949.003.patch, HDFS-11949.004.patch, HDFS-11949.patch
>
>
> moveFromLocal returns an error when moving a file to a target directory where the 
> file already exists, so we need to add a test case to check it.






[jira] [Updated] (HDFS-11916) Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a random EC policy

2017-06-19 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-11916:

Attachment: HDFS-11916.4.patch

Fixed the checkstyle warning. The error in 
{{TestDFSStripedInputStreamWithRandomECPolicy}} has already been filed as 
HDFS-11964. The other failures seem unrelated.

> Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a 
> random EC policy
> 
>
> Key: HDFS-11916
> URL: https://issues.apache.org/jira/browse/HDFS-11916
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11916.1.patch, HDFS-11916.2.patch, 
> HDFS-11916.3.patch, HDFS-11916.4.patch
>
>







[jira] [Commented] (HDFS-11978) Remove invalid '-usage' command of 'ec' and add missing commands 'addPolicies' 'listCodecs'

2017-06-19 Thread wenxin he (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16053837#comment-16053837
 ] 

wenxin he commented on HDFS-11978:
--

Hi [~xiaochen], sorry to disturb you.
Would you mind reviewing this patch?

> Remove invalid '-usage' command of 'ec' and add missing commands 
> 'addPolicies' 'listCodecs'
> ---
>
> Key: HDFS-11978
> URL: https://issues.apache.org/jira/browse/HDFS-11978
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha4
>Reporter: wenxin he
>Priority: Minor
>
> Remove the invalid '-usage' command of 'ec' from HDFSErasureCoding.md.
> Add the missing commands 'addPolicies' and 'listCodecs' to HDFSCommands.md.






[jira] [Commented] (HDFS-11606) Add CLI cmd to remove an erasure code policy

2017-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16053856#comment-16053856
 ] 

Hadoop QA commented on HDFS-11606:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 449 
unchanged - 3 fixed = 449 total (was 452) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
25s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11606 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873458/HDFS-11606.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 5454aaea376d 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 75043d3 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19944/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19944/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | 
https://builds.apache.org/job/P

[jira] [Commented] (HDFS-11992) Replace commons-logging APIs with slf4j in FsDatasetImpl

2017-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16053908#comment-16053908
 ] 

Hadoop QA commented on HDFS-11992:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
38s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  2s{color} | {color:orange} root: The patch generated 1 new + 100 unchanged 
- 0 fixed = 101 total (was 100) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 22s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestFilterFileSystem |
|   | hadoop.fs.TestHarFileSystem |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11992 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873460/HDFS-11992.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2eb3a5a25933 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 75043d3 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19945/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19945/artifact/

[jira] [Commented] (HDFS-11916) Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a random EC policy

2017-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16053925#comment-16053925
 ] 

Hadoop QA commented on HDFS-11916:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11916 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873476/HDFS-11916.4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ece3bc85e0d5 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3008045 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19946/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19946/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19946/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a 
> random EC policy
> 
>
> Key: HDFS-11916
> URL: https://issues.apache.org/jira/browse/HDFS-11916
> Project: Hadoop HDFS
>  Issue Typ

[jira] [Commented] (HDFS-11971) libhdfs++: A few portability issues

2017-06-19 Thread Anatoli Shein (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16053971#comment-16053971
 ] 

Anatoli Shein commented on HDFS-11971:
--

Thank you for the review, [~James C]!

The cmake option 'set_target_properties' did not cause issues for the build; 
however, while we are fixing the linking in these cmake files, I think it also 
makes sense to make them consistent.

For example, 'connect_cancel.c' currently builds to the executable 
'connect_cancel_c', and its CMakeLists.txt file has 'set_target_properties' 
even though that is redundant, since 'connect_cancel_c' is the only executable 
in the project. Also, 'connect_cancel.cc' does not use 'set_target_properties' 
while the rest of the examples in the 'cc' folder do. To make things consistent, 
I removed 'set_target_properties' from all the examples and named the 
executables in the 'c' folder cat_c and connect_cancel_c; the executables in the 
'cc' folder are still cat, connect_cancel, find, and gendirs.

Please let me know if you agree with this modification.

> libhdfs++: A few portability issues
> ---
>
> Key: HDFS-11971
> URL: https://issues.apache.org/jira/browse/HDFS-11971
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-11971.HDFS-8707.000.patch, 
> HDFS-11971.HDFS-8707.001.patch, HDFS-11971.HDFS-8707.002.patch
>
>
> I recently encountered a few portability issues with libhdfs++ while trying 
> to build it as a stand alone project (and also as part of another Apache 
> project).
> 1. Method fixCase in configuration.h file produces a warning "conversion to 
> ‘char’ from ‘int’ may alter its value [-Werror=conversion]" which does not 
> allow libhdfs++ to be compiled as part of the codebase that treats such 
> warnings as errors (can be fixed with a simple cast).
> 2. In CMakeLists.txt file (in libhdfspp directory) we do 
> find_package(Threads) however we do not link it to the targets (e.g. 
> hdfspp_static), which causes the build to fail with pthread errors. After the 
> Threads package is found we need to link it using ${CMAKE_THREAD_LIBS_INIT}.
> 3. All the tools and examples fail to build as part of a standalone libhdfs++ 
> because they are missing multiple libraries such as protobuf, ssl, pthread, 
> etc. This happens because we link them to a shared library hdfspp instead of 
> hdfspp_static library. We should either link all the tools and examples to 
> hdfspp_static library or explicitly add linking to all missing libraries for 
> each tool/example.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11994) Hadoop NameNode Web UI throws "Failed to retrieve data from /jmx?qry=java.lang:type=Memory, cause:" When running behind a proxy

2017-06-19 Thread Sergey Bahchissaraitsev (JIRA)
Sergey Bahchissaraitsev created HDFS-11994:
--

 Summary: Hadoop NameNode Web UI throws "Failed to retrieve data 
from /jmx?qry=java.lang:type=Memory, cause:" When running behind a proxy
 Key: HDFS-11994
 URL: https://issues.apache.org/jira/browse/HDFS-11994
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ui
Affects Versions: 2.8.0
 Environment: CentOS release 6.9 (Final)
OpenJDK version "1.8.0_131"
Hadoop 2.8.0
Reporter: Sergey Bahchissaraitsev
Priority: Minor


When running behind a proxy, the Hadoop Web UI throws the following exception 
because it tries to make Ajax requests against the base server URL:

{code:java}
Failed to retrieve data from /jmx?qry=java.lang:type=Memory, cause:
{code}

A good solution could be to adjust the Ajax URL based on the actual window URL 
using the jQuery Ajax "beforeSend" pre-request callback function: 
http://api.jquery.com/jquery.ajax/









[jira] [Commented] (HDFS-11647) Add -E option in hdfs "count" command to show erasure policy summarization

2017-06-19 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16053997#comment-16053997
 ] 

Kai Zheng commented on HDFS-11647:
--

Looks like the recent build picked up the GitHub PR instead of the updated 
patch. The PR was already closed; I'm not sure how to avoid this. Eddy, do you have 
any idea about this? Thanks!

> Add -E option in hdfs "count" command to show erasure policy summarization
> --
>
> Key: HDFS-11647
> URL: https://issues.apache.org/jira/browse/HDFS-11647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: luhuichun
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11647-001.patch, HDFS-11647-002.patch, 
> HDFS-11647-003.patch, HDFS-11647-004.patch, HDFS-11647-005.patch, 
> HDFS-11647-006.patch
>
>
> Add -E option in hdfs "count" command to show erasure policy summarization






[jira] [Updated] (HDFS-11518) libhdfs++: Add a build option to skip building examples, tests, and tools

2017-06-19 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11518:
-
Attachment: HDFS-11518.HDFS-8707.001.patch

Thanks for the review, [~James C]. I added an explanatory comment at the top of 
the CMakeLists.txt; please take a look.

> libhdfs++: Add a build option to skip building examples, tests, and tools
> -
>
> Key: HDFS-11518
> URL: https://issues.apache.org/jira/browse/HDFS-11518
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
> Attachments: HDFS-11518.HDFS-8707.000.patch, 
> HDFS-11518.HDFS-8707.001.patch
>
>
> Adding a flag to just build the core library without tools, examples, and 
> tests will make it easier and lighter weight to embed the libhdfs++ source as 
> a third-party component of other projects.  It won't need to look for a JDK, 
> valgrind, and gmock and won't generate a handful of binaries that might not 
> be relevant to other projects during normal use.
> This should also make it a bit easier to wire into other build frameworks 
> since there won't be standalone binaries that need the path to other 
> libraries like protobuf while the library builds.  They just need to be 
> around while the project embedding libhdfs++ gets linked.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11943) Warn log frequently print to screen in doEncode function on AbstractNativeRawEncoder class

2017-06-19 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054033#comment-16054033
 ] 

Kai Zheng commented on HDFS-11943:
--

Thanks [~liaoyuxiangqin] & [~Sammi] for the discussions!

bq. Do you have any idea that NativeXORRawEncoder should support direct Buffer 
or not? I think the answer is yes. Correct me if it's not the case.
This is correct. All native coders should indicate that they prefer direct 
ByteBuffers for better performance.

bq. so i think the condition not neccessary to add
This is correct. Agree.

bq. So as you guess, the NativeXORRawEncoder doesn't indicate itself support 
the direct buffer.
Good reasoning! You got it, that's the cause. Ensuring that the following block 
is present in the base native classes 
{{AbstractNativeRawEncoder}}/{{AbstractNativeRawDecoder}} should ensure that 
all native coders behave as intended.
{code}
  @Override
  public boolean preferDirectBuffer() {
return true;
  }
{code}
[~liaoyuxiangqin] would you update your patch to fix this? Thanks!
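Callers typically consult this flag when allocating buffers; below is a minimal 
illustrative sketch (not part of the attached patch; {{RawErasureEncoder}} is 
the existing coder interface, while the helper class itself is hypothetical):
{code:java}
import java.nio.ByteBuffer;
import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder;

public final class BufferAllocationSketch {
  // Allocate a direct buffer when the coder prefers it, so native coders can
  // skip the convertToByteBufferState() copy that triggers the warning.
  static ByteBuffer allocateFor(RawErasureEncoder encoder, int len) {
    return encoder.preferDirectBuffer()
        ? ByteBuffer.allocateDirect(len)
        : ByteBuffer.allocate(len);
  }
}
{code}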

> Warn log frequently print to screen in doEncode function on  
> AbstractNativeRawEncoder class
> ---
>
> Key: HDFS-11943
> URL: https://issues.apache.org/jira/browse/HDFS-11943
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, native
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20,  Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
> erasure coding: XOR-2-1-64k and enabled Intel ISA-L
> hadoop fs -put file /
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Minor
> Attachments: HDFS-11943.002.patch, HDFS-11943.patch
>
>   Original Estimate: 0.05h
>  Remaining Estimate: 0.05h
>
>  when i write file to hdfs on above environment,  the hdfs client  frequently 
> print warn log of use direct ByteBuffer inputs/outputs in doEncode function 
> to screen, detail information as follows:
> 2017-06-07 15:20:42,856 WARN rawcoder.AbstractNativeRawEncoder: 
> convertToByteBufferState is invoked, not efficiently. Please use direct 
> ByteBuffer inputs/outputs



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11943) Warn log frequently print to screen in doEncode function on AbstractNativeRawEncoder class

2017-06-19 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054039#comment-16054039
 ] 

Kai Zheng commented on HDFS-11943:
--

When you update the patch, please also note Andrew's suggestion above about 
PerformanceAdvisory.
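Andrew's exact suggestion is not quoted here, but assuming the idea is to route 
the noisy hint through PerformanceAdvisory so that it only shows up at debug 
level, a minimal sketch (the helper class and method are hypothetical) could 
look like this:
{code:java}
import org.apache.hadoop.util.PerformanceAdvisory;

public final class CoderAdvisorySketch {
  // Hypothetical helper: emit the inefficiency hint via PerformanceAdvisory
  // at debug level so it no longer floods the console on every encode call.
  static void adviseNonDirectBuffers() {
    PerformanceAdvisory.LOG.debug(
        "convertToByteBufferState is invoked, not efficiently. "
            + "Please use direct ByteBuffer inputs/outputs");
  }
}
{code}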

> Warn log frequently print to screen in doEncode function on  
> AbstractNativeRawEncoder class
> ---
>
> Key: HDFS-11943
> URL: https://issues.apache.org/jira/browse/HDFS-11943
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, native
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20,  Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
> erasure coding: XOR-2-1-64k and enabled Intel ISA-L
> hadoop fs -put file /
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Minor
> Attachments: HDFS-11943.002.patch, HDFS-11943.patch
>
>   Original Estimate: 0.05h
>  Remaining Estimate: 0.05h
>
>  when i write file to hdfs on above environment,  the hdfs client  frequently 
> print warn log of use direct ByteBuffer inputs/outputs in doEncode function 
> to screen, detail information as follows:
> 2017-06-07 15:20:42,856 WARN rawcoder.AbstractNativeRawEncoder: 
> convertToByteBufferState is invoked, not efficiently. Please use direct 
> ByteBuffer inputs/outputs



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11978) Remove invalid '-usage' command of 'ec' and add missing commands 'addPolicies' 'listCodecs'

2017-06-19 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054110#comment-16054110
 ] 

Wei-Chiu Chuang commented on HDFS-11978:


+1

> Remove invalid '-usage' command of 'ec' and add missing commands 
> 'addPolicies' 'listCodecs'
> ---
>
> Key: HDFS-11978
> URL: https://issues.apache.org/jira/browse/HDFS-11978
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha4
>Reporter: wenxin he
>Priority: Minor
>
> Remove invalid '-usage' command of 'ec' in HDFSErasureCoding.md.
> Add missing commands 'addPolicies' 'listCodecs' in HDFSCommands.md.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11970) Ozone: TestXceiverClientManager.testFreeByEviction fails occasionally

2017-06-19 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054144#comment-16054144
 ] 

Mukul Kumar Singh commented on HDFS-11970:
--

This test fails because, with a cache size of 2 in testFreeByEviction, it is 
not possible to reliably predict the order in which elements are evicted.

In order to fix this test, the cache size has been reduced to 1. This test is 
similar to TestDFSClientCache.
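For readers of the thread, here is a generic Guava-cache sketch (not tied to 
the actual XceiverClientManager internals, which are an assumption here) of why 
a maximum size of 1 makes the evicted element predictable:
{code:java}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;

public final class EvictionOrderSketch {
  public static void main(String[] args) {
    RemovalListener<String, String> listener =
        notification -> System.out.println("evicted: " + notification.getKey());
    // With maximumSize(1) the cache holds a single entry, so inserting "b"
    // evicts "a"; the eviction order is deterministic and easy to assert on.
    Cache<String, String> cache = CacheBuilder.newBuilder()
        .maximumSize(1)
        .removalListener(listener)
        .build();
    cache.put("a", "1");
    cache.put("b", "2");   // "a" is evicted here
  }
}
{code}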

> Ozone: TestXceiverClientManager.testFreeByEviction fails occasionally
> -
>
> Key: HDFS-11970
> URL: https://issues.apache.org/jira/browse/HDFS-11970
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
>
> TestXceiverClientManager.testFreeByEviction fails occasionally with the 
> following stack trace.
> {code}
> Running org.apache.hadoop.ozone.scm.TestXceiverClientManager
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.989 sec <<< 
> FAILURE! - in org.apache.hadoop.ozone.scm.TestXceiverClientManager
> testFreeByEviction(org.apache.hadoop.ozone.scm.TestXceiverClientManager)  
> Time elapsed: 0.024 sec  <<< FAILURE!
> java.lang.AssertionError: 
> expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.ozone.scm.TestXceiverClientManager.testFreeByEviction(TestXceiverClientManager.java:184)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11970) Ozone: TestXceiverClientManager.testFreeByEviction fails occasionally

2017-06-19 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-11970:
-
Attachment: HDFS-11970-HDFS-7240.001.patch

> Ozone: TestXceiverClientManager.testFreeByEviction fails occasionally
> -
>
> Key: HDFS-11970
> URL: https://issues.apache.org/jira/browse/HDFS-11970
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-11970-HDFS-7240.001.patch
>
>
> TestXceiverClientManager.testFreeByEviction fails occasionally with the 
> following stack trace.
> {code}
> Running org.apache.hadoop.ozone.scm.TestXceiverClientManager
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.989 sec <<< 
> FAILURE! - in org.apache.hadoop.ozone.scm.TestXceiverClientManager
> testFreeByEviction(org.apache.hadoop.ozone.scm.TestXceiverClientManager)  
> Time elapsed: 0.024 sec  <<< FAILURE!
> java.lang.AssertionError: 
> expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.ozone.scm.TestXceiverClientManager.testFreeByEviction(TestXceiverClientManager.java:184)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11970) Ozone: TestXceiverClientManager.testFreeByEviction fails occasionally

2017-06-19 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-11970:
-
Status: Patch Available  (was: Open)

> Ozone: TestXceiverClientManager.testFreeByEviction fails occasionally
> -
>
> Key: HDFS-11970
> URL: https://issues.apache.org/jira/browse/HDFS-11970
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-11970-HDFS-7240.001.patch
>
>
> TestXceiverClientManager.testFreeByEviction fails occasionally with the 
> following stack trace.
> {code}
> Running org.apache.hadoop.ozone.scm.TestXceiverClientManager
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.989 sec <<< 
> FAILURE! - in org.apache.hadoop.ozone.scm.TestXceiverClientManager
> testFreeByEviction(org.apache.hadoop.ozone.scm.TestXceiverClientManager)  
> Time elapsed: 0.024 sec  <<< FAILURE!
> java.lang.AssertionError: 
> expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.ozone.scm.TestXceiverClientManager.testFreeByEviction(TestXceiverClientManager.java:184)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11518) libhdfs++: Add a build option to skip building examples, tests, and tools

2017-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054157#comment-16054157
 ] 

Hadoop QA commented on HDFS-11518:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
42s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
31s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
8s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
3s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
32s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
42s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-11518 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873492/HDFS-11518.HDFS-8707.001.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  |
| uname | Linux a68b7c4e9782 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 40e3290 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19947/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19947/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Add a build option to skip building examples, tests, and tools

[jira] [Commented] (HDFS-11647) Add -E option in hdfs "count" command to show erasure policy summarization

2017-06-19 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054253#comment-16054253
 ] 

Andrew Wang commented on HDFS-11647:


Unfortunately, once there's a GH PR, Yetus will always use it for precommit. If 
we want to go back to patches, we need to file a new JIRA.

> Add -E option in hdfs "count" command to show erasure policy summarization
> --
>
> Key: HDFS-11647
> URL: https://issues.apache.org/jira/browse/HDFS-11647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: luhuichun
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11647-001.patch, HDFS-11647-002.patch, 
> HDFS-11647-003.patch, HDFS-11647-004.patch, HDFS-11647-005.patch, 
> HDFS-11647-006.patch
>
>
> Add -E option in hdfs "count" command to show erasure policy summarization



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11971) libhdfs++: A few portability issues

2017-06-19 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11971:
-
Attachment: HDFS-11971.HDFS-8707.003.patch

I also just noticed another small portability issue. We should not use absolute 
paths to /lib and /include for the install targets; it is better to make them 
relative to the install destination instead. This avoids permission errors when 
libhdfspp tries to copy files into the system /lib and /include directories. It 
will now copy them into the lib and include directories under the install 
prefix. This change is added in the new patch.

> libhdfs++: A few portability issues
> ---
>
> Key: HDFS-11971
> URL: https://issues.apache.org/jira/browse/HDFS-11971
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-11971.HDFS-8707.000.patch, 
> HDFS-11971.HDFS-8707.001.patch, HDFS-11971.HDFS-8707.002.patch, 
> HDFS-11971.HDFS-8707.003.patch
>
>
> I recently encountered a few portability issues with libhdfs++ while trying 
> to build it as a stand alone project (and also as part of another Apache 
> project).
> 1. Method fixCase in configuration.h file produces a warning "conversion to 
> ‘char’ from ‘int’ may alter its value [-Werror=conversion]" which does not 
> allow libhdfs++ to be compiled as part of the codebase that treats such 
> warnings as errors (can be fixed with a simple cast).
> 2. In CMakeLists.txt file (in libhdfspp directory) we do 
> find_package(Threads) however we do not link it to the targets (e.g. 
> hdfspp_static), which causes the build to fail with pthread errors. After the 
> Threads package is found we need to link it using ${CMAKE_THREAD_LIBS_INIT}.
> 3. All the tools and examples fail to build as part of a standalone libhdfs++ 
> because they are missing multiple libraries such as protobuf, ssl, pthread, 
> etc. This happens because we link them to a shared library hdfspp instead of 
> hdfspp_static library. We should either link all the tools and examples to 
> hdfspp_static library or explicitly add linking to all missing libraries for 
> each tool/example.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11987) DistributedFileSystem#create and append do not honor CreateFlag.CREATE|APPEND

2017-06-19 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054284#comment-16054284
 ] 

Andrew Wang commented on HDFS-11987:


Hi Eddy, thanks for working on this! One high-level comment: I think this API 
works better if it's implemented server-side, for atomicity. CREATE|OVERWRITE 
is a related example that could be implemented client-side but isn't (I imagine 
for the same reason).
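For illustration only (this is not the attached patch), a purely client-side 
emulation of CREATE|APPEND would look roughly like the sketch below; the window 
between the failed append and the create is exactly the atomicity concern:
{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class CreateOrAppendSketch {
  // Hypothetical client-side fallback: append if the file exists, otherwise
  // create it. Another client can create the file between the two calls,
  // which is why a server-side implementation is preferable.
  static FSDataOutputStream createOrAppend(FileSystem fs, Path path)
      throws IOException {
    try {
      return fs.append(path);
    } catch (FileNotFoundException e) {
      return fs.create(path, false /* overwrite */);
    }
  }
}
{code}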

Nit:
* In {{append}} (not caused by this patch): swap the order of the 
{{favoredNodes}} and {{progress}} parameters to match the other calls?

> DistributedFileSystem#create and append do not honor CreateFlag.CREATE|APPEND
> -
>
> Key: HDFS-11987
> URL: https://issues.apache.org/jira/browse/HDFS-11987
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-11987.00.patch
>
>
> {{DistributedFileSystem#create()}} and {{DistributedFIleSystem#append()}} do 
> not honor the expected behavior on {{CreateFlag.CREATE|APPEND}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11978) Remove invalid '-usage' command of 'ec' and add missing commands 'addPolicies' 'listCodecs'

2017-06-19 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-11978:
--

Assignee: wenxin he

> Remove invalid '-usage' command of 'ec' and add missing commands 
> 'addPolicies' 'listCodecs'
> ---
>
> Key: HDFS-11978
> URL: https://issues.apache.org/jira/browse/HDFS-11978
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha4
>Reporter: wenxin he
>Assignee: wenxin he
>Priority: Minor
>
> Remove invalid '-usage' command of 'ec' in HDFSErasureCoding.md.
> Add missing commands 'addPolicies' 'listCodecs' in HDFSCommands.md.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11978) Remove invalid '-usage' command of 'ec' and add missing commands 'addPolicies' 'listCodecs'

2017-06-19 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054312#comment-16054312
 ] 

Wei-Chiu Chuang commented on HDFS-11978:


[~vincent he] I granted you the HDFS contributor role, so feel free to assign 
to yourself the JIRAs you are contributing to.

Thanks.

> Remove invalid '-usage' command of 'ec' and add missing commands 
> 'addPolicies' 'listCodecs'
> ---
>
> Key: HDFS-11978
> URL: https://issues.apache.org/jira/browse/HDFS-11978
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha4
>Reporter: wenxin he
>Assignee: wenxin he
>Priority: Minor
>
> Remove invalid '-usage' command of 'ec' in HDFSErasureCoding.md.
> Add missing commands 'addPolicies' 'listCodecs' in HDFSCommands.md.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10480) Add an admin command to list currently open files

2017-06-19 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10480:
---
   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed to branch-2, thanks for the contribution Manoj!

> Add an admin command to list currently open files
> -
>
> Key: HDFS-10480
> URL: https://issues.apache.org/jira/browse/HDFS-10480
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Manoj Govindassamy
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HDFS-10480.02.patch, HDFS-10480.03.patch, 
> HDFS-10480.04.patch, HDFS-10480.05.patch, HDFS-10480.06.patch, 
> HDFS-10480.07.patch, HDFS-10480-branch-2.01.patch, HDFS-10480-trunk-1.patch, 
> HDFS-10480-trunk.patch
>
>
> Currently there is no easy way to obtain the list of active leases or files 
> being written. It will be nice if we have an admin command to list open files 
> and their lease holders.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11963) Ozone: Documentation: Add getting started page

2017-06-19 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11963:

Attachment: HDFS-11963-HDFS-7240.003.patch

Updated the patch based on the review comments; it addresses all of the issues 
mentioned previously.


> Ozone: Documentation: Add getting started page
> --
>
> Key: HDFS-11963
> URL: https://issues.apache.org/jira/browse/HDFS-11963
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-11963-HDFS-7240.001.patch, 
> HDFS-11963-HDFS-7240.002.patch, HDFS-11963-HDFS-7240.003.patch, Screen Shot 
> 2017-06-11 at 12.11.06 AM.png, Screen Shot 2017-06-11 at 12.11.19 AM.png, 
> Screen Shot 2017-06-11 at 12.11.32 AM.png
>
>
> We need to add the Ozone section to hadoop documentation and also a section 
> on how to get started, that is what are configuration settings needed to 
> start running ozone. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10480) Add an admin command to list currently open files

2017-06-19 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054383#comment-16054383
 ] 

Andrew Wang commented on HDFS-10480:


Sure, looks like we need a new patch though since the backport doesn't apply 
cleanly.

> Add an admin command to list currently open files
> -
>
> Key: HDFS-10480
> URL: https://issues.apache.org/jira/browse/HDFS-10480
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Manoj Govindassamy
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HDFS-10480.02.patch, HDFS-10480.03.patch, 
> HDFS-10480.04.patch, HDFS-10480.05.patch, HDFS-10480.06.patch, 
> HDFS-10480.07.patch, HDFS-10480-branch-2.01.patch, HDFS-10480-trunk-1.patch, 
> HDFS-10480-trunk.patch
>
>
> Currently there is no easy way to obtain the list of active leases or files 
> being written. It will be nice if we have an admin command to list open files 
> and their lease holders.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11963) Ozone: Documentation: Add getting started page

2017-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054389#comment-16054389
 ] 

Hadoop QA commented on HDFS-11963:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
1s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
21s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11963 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873521/HDFS-11963-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  mvnsite  xml  |
| uname | Linux f7e79e192215 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 3a868fe |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19951/artifact/patchprocess/whitespace-eol.txt
 |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19951/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-project hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19951/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Documentation: Add getting started page
> --
>
> Key: HDFS-11963
> URL: https://issues.apache.org/jira/browse/HDFS-11963
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-11963-HDFS-7240.001.patch, 
> HDFS-11963-HDFS-7240.002.patch, HDFS-11963-HDFS-7240.003.patch, Screen Shot 
> 2017-06-11 at 12.11.06 AM.png, Screen Shot 2017-06-11 at 12.11.19 AM.png, 
> Screen Shot 2017-06-11 at 12.11.32 AM.png
>
>
> We need to add the Ozone section to hadoop documentation and also a section 
> on how to get started, that is what are configuration settings needed to 
> start running ozone. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10480) Add an admin command to list currently open files

2017-06-19 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054378#comment-16054378
 ] 

Rushabh S Shah commented on HDFS-10480:
---

Can we backport this to branch-2.8 as well?

> Add an admin command to list currently open files
> -
>
> Key: HDFS-10480
> URL: https://issues.apache.org/jira/browse/HDFS-10480
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Manoj Govindassamy
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HDFS-10480.02.patch, HDFS-10480.03.patch, 
> HDFS-10480.04.patch, HDFS-10480.05.patch, HDFS-10480.06.patch, 
> HDFS-10480.07.patch, HDFS-10480-branch-2.01.patch, HDFS-10480-trunk-1.patch, 
> HDFS-10480-trunk.patch
>
>
> Currently there is no easy way to obtain the list of active leases or files 
> being written. It will be nice if we have an admin command to list open files 
> and their lease holders.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11916) Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a random EC policy

2017-06-19 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-11916:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha4
   Status: Resolved  (was: Patch Available)

Thanks for the explanation and updates, [~tasanuma0829]!

+1. Committed to trunk.

> Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a 
> random EC policy
> 
>
> Key: HDFS-11916
> URL: https://issues.apache.org/jira/browse/HDFS-11916
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-11916.1.patch, HDFS-11916.2.patch, 
> HDFS-11916.3.patch, HDFS-11916.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11963) Ozone: Documentation: Add getting started page

2017-06-19 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11963:

Attachment: HDFS-11963-HDFS-7240.004.patch

Will fix the whitespace issues with git while committing. This patch fixes the 
license issue.

> Ozone: Documentation: Add getting started page
> --
>
> Key: HDFS-11963
> URL: https://issues.apache.org/jira/browse/HDFS-11963
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-11963-HDFS-7240.001.patch, 
> HDFS-11963-HDFS-7240.002.patch, HDFS-11963-HDFS-7240.003.patch, 
> HDFS-11963-HDFS-7240.004.patch, Screen Shot 2017-06-11 at 12.11.06 AM.png, 
> Screen Shot 2017-06-11 at 12.11.19 AM.png, Screen Shot 2017-06-11 at 12.11.32 
> AM.png
>
>
> We need to add the Ozone section to hadoop documentation and also a section 
> on how to get started, that is what are configuration settings needed to 
> start running ozone. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11991) Ozone: Ozone shell: the root is assumed to hdfs

2017-06-19 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054415#comment-16054415
 ] 

Anu Engineer commented on HDFS-11991:
-

[~cheersyang] Thanks for picking this up. If it is not too much bother, could 
you please share what you plan to do? Thanks!

> Ozone: Ozone shell: the root is assumed to hdfs
> ---
>
> Key: HDFS-11991
> URL: https://issues.apache.org/jira/browse/HDFS-11991
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Fix For: HDFS-7240
>
>
> *hdfs oz* command, or ozone shell has a command like option to run some 
> commands as root easily by specifying _--root_   as a command line option. 
> But after HDFS-11655 that assumption is no longer true. We need to detect the 
> user that started the scm/ksm service and _root_  should be mapped to that 
> user.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11970) Ozone: TestXceiverClientManager.testFreeByEviction fails occasionally

2017-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054420#comment-16054420
 ] 

Hadoop QA commented on HDFS-11970:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
15s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}156m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.namenode.TestMetaSave |
|   | hadoop.ozone.scm.node.TestNodeManager |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
| Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11970 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873505/HDFS-11970-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 49ed0657fb39 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 3a868fe |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19948/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19948/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoo

[jira] [Created] (HDFS-11995) HDFS Architecture documentation incorrectly describes writing to a local temporary file.

2017-06-19 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-11995:


 Summary: HDFS Architecture documentation incorrectly describes 
writing to a local temporary file.
 Key: HDFS-11995
 URL: https://issues.apache.org/jira/browse/HDFS-11995
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0-alpha3
Reporter: Chris Nauroth


The HDFS Architecture documentation has a section titled "Staging" that 
describes clients writing to a local temporary file first before interacting 
with the NameNode to allocate file metadata.  This information is incorrect.  
(Perhaps it was correct a long time ago, but it is no longer accurate with 
respect to the current implementation.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11995) HDFS Architecture documentation incorrectly describes writing to a local temporary file.

2017-06-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-11995:
-
Priority: Minor  (was: Major)

> HDFS Architecture documentation incorrectly describes writing to a local 
> temporary file.
> 
>
> Key: HDFS-11995
> URL: https://issues.apache.org/jira/browse/HDFS-11995
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Chris Nauroth
>Priority: Minor
>
> The HDFS Architecture documentation has a section titled "Staging" that 
> describes clients writing to a local temporary file first before interacting 
> with the NameNode to allocate file metadata.  This information is incorrect.  
> (Perhaps it was correct a long time ago, but it is no longer accurate with 
> respect to the current implementation.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9505) HDFS Architecture documentation needs to be refreshed.

2017-06-19 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054424#comment-16054424
 ] 

Chris Nauroth commented on HDFS-9505:
-

FYI, I have filed HDFS-11995 for another inaccuracy in the HDFS Architecture 
documentation that remains even after this patch was committed.

> HDFS Architecture documentation needs to be refreshed.
> --
>
> Key: HDFS-9505
> URL: https://issues.apache.org/jira/browse/HDFS-9505
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Chris Nauroth
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-9505.001.patch, HDFS-9505.002.patch
>
>
> The HDFS Architecture document is out of date with respect to the current 
> design of the system.
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> There are multiple false statements and omissions of recent features.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11890) Handle NPE in BlockRecoveryWorker when DN is getting shutdown

2017-06-19 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054429#comment-16054429
 ] 

Brahma Reddy Battula commented on HDFS-11890:
-

Test failures are unrelated. Will commit tomorrow if there are no further 
comments.

> Handle NPE in BlockRecoveryWorker when DN is getting shutdown
> ---
>
> Key: HDFS-11890
> URL: https://issues.apache.org/jira/browse/HDFS-11890
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11890-001.patch, HDFS-11890-002.patch
>
>
> {code}
> Exception in thread 
> "org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@1c03e6ae" 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:131)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:596)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11995) HDFS Architecture documentation incorrectly describes writing to a local temporary file.

2017-06-19 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar reassigned HDFS-11995:
-

Assignee: Nandakumar

> HDFS Architecture documentation incorrectly describes writing to a local 
> temporary file.
> 
>
> Key: HDFS-11995
> URL: https://issues.apache.org/jira/browse/HDFS-11995
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Chris Nauroth
>Assignee: Nandakumar
>Priority: Minor
>
> The HDFS Architecture documentation has a section titled "Staging" that 
> describes clients writing to a local temporary file first before interacting 
> with the NameNode to allocate file metadata.  This information is incorrect.  
> (Perhaps it was correct a long time ago, but it is no longer accurate with 
> respect to the current implementation.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11971) libhdfs++: A few portability issues

2017-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054465#comment-16054465
 ] 

Hadoop QA commented on HDFS-11971:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 22m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
56s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
5s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
56s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
7s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  9m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
43s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  8m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
9s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-11971 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873515/HDFS-11971.HDFS-8707.003.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux b6d31346ce63 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 40e3290 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19950/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19950/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: A few portability issues
> ---
>
> Key: HDFS-11971
> URL: https://issues.apache.org/jira/browse/HDFS-11971
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-11971.HDFS-8707.000.patch, 
> HDFS-11971.HDFS-8707.001.patch, HDFS-11971.HDFS-8707.002.patch, 
> HDFS-11971.HDFS-8707.003.patch
>
>
> I recently encountered a few portability issues with libhdfs++ while trying 
> to build it as a stand alone project (and also as part of another Apache 
> pro

[jira] [Commented] (HDFS-11963) Ozone: Documentation: Add getting started page

2017-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054475#comment-16054475
 ] 

Hadoop QA commented on HDFS-11963:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11963 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873530/HDFS-11963-HDFS-7240.004.patch
 |
| Optional Tests |  asflicense  mvnsite  xml  |
| uname | Linux 68f15403bc79 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 3a868fe |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19952/artifact/patchprocess/whitespace-eol.txt
 |
| modules | C: hadoop-project hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19952/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Documentation: Add getting started page
> --
>
> Key: HDFS-11963
> URL: https://issues.apache.org/jira/browse/HDFS-11963
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-11963-HDFS-7240.001.patch, 
> HDFS-11963-HDFS-7240.002.patch, HDFS-11963-HDFS-7240.003.patch, 
> HDFS-11963-HDFS-7240.004.patch, Screen Shot 2017-06-11 at 12.11.06 AM.png, 
> Screen Shot 2017-06-11 at 12.11.19 AM.png, Screen Shot 2017-06-11 at 12.11.32 
> AM.png
>
>
> We need to add the Ozone section to hadoop documentation and also a section 
> on how to get started, that is what are configuration settings needed to 
> start running ozone. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11992) Replace commons-logging APIs with slf4j in FsDatasetImpl

2017-06-19 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054494#comment-16054494
 ] 

Chen Liang commented on HDFS-11992:
---

Thanks [~xiaodong.hu] for the contribution! v001 patch LGTM.
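(For context, a minimal before/after sketch of this kind of commons-logging to slf4j migration; the class and messages below are illustrative only, not the actual FsDatasetImpl change:)

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingExample {
  // Before (commons-logging):
  //   private static final Log LOG = LogFactory.getLog(LoggingExample.class);
  // After (slf4j):
  private static final Logger LOG = LoggerFactory.getLogger(LoggingExample.class);

  public static void main(String[] args) {
    // slf4j supports parameterized messages, so no manual string concatenation
    LOG.info("Volume {} added, capacity {} bytes", "/data/1", 1024L);
  }
}
{code}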

> Replace commons-logging APIs with slf4j in FsDatasetImpl
> 
>
> Key: HDFS-11992
> URL: https://issues.apache.org/jira/browse/HDFS-11992
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: Akira Ajisaka
>Assignee: hu xiaodong
> Attachments: HDFS-11992.001.patch
>
>
> {{FsDatasetImpl.LOG}} is widely used and this will change the APIs of 
> InstrumentedLock and InstrumentedWriteLock, so this issue is to change only 
> {{FsDatasetImpl.LOG}} and other related APIs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10391) Always enable NameNode service RPC port

2017-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054495#comment-16054495
 ] 

Hadoop QA commented on HDFS-10391:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 25 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 57s{color} | {color:orange} hadoop-hdfs-project: The patch generated 31 new 
+ 1270 unchanged - 20 fixed = 1301 total (was 1290) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
22s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10391 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872541/HDFS-10391.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d6e91ee113e2 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3008045 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19949/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19949/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19949/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19949/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (HDFS-11916) Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a random EC policy

2017-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054498#comment-16054498
 ] 

Hudson commented on HDFS-11916:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11887 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11887/])
HDFS-11916. Extend (lei: rev 73fb75017e238e72c3162914f0db66e50139e199)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicyWithSnapshot.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicyWithSnapshotWithRandomECPolicy.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPoliciesWithRandomECPolicy.java


> Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a 
> random EC policy
> 
>
> Key: HDFS-11916
> URL: https://issues.apache.org/jira/browse/HDFS-11916
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-11916.1.patch, HDFS-11916.2.patch, 
> HDFS-11916.3.patch, HDFS-11916.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11996) Ozone : add partial read of chunks

2017-06-19 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11996:
-

 Summary: Ozone : add partial read of chunks
 Key: HDFS-11996
 URL: https://issues.apache.org/jira/browse/HDFS-11996
 Project: Hadoop HDFS
  Issue Type: Sub-task
 Environment: Currently when reading a chunk, it is always the whole 
chunk that gets returned. However it is possible the reader may only need to 
read a subset of the chunk. This JIRA adds the partial read of chunks.
Reporter: Chen Liang
Assignee: Chen Liang






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10480) Add an admin command to list currently open files

2017-06-19 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054508#comment-16054508
 ] 

Manoj Govindassamy commented on HDFS-10480:
---

Sure [~shahrs87], [~andrew.wang]. Will post the branch-2.8 patch once 
backported and tested.
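(For reference, on releases that include this change the new command is invoked roughly as below; the exact options and output columns may differ by version, so check {{hdfs dfsadmin -help}}:)

{code}
# List files currently open for write, along with their lease holders
hdfs dfsadmin -listOpenFiles
{code}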

> Add an admin command to list currently open files
> -
>
> Key: HDFS-10480
> URL: https://issues.apache.org/jira/browse/HDFS-10480
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Manoj Govindassamy
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HDFS-10480.02.patch, HDFS-10480.03.patch, 
> HDFS-10480.04.patch, HDFS-10480.05.patch, HDFS-10480.06.patch, 
> HDFS-10480.07.patch, HDFS-10480-branch-2.01.patch, HDFS-10480-trunk-1.patch, 
> HDFS-10480-trunk.patch
>
>
> Currently there is no easy way to obtain the list of active leases or files 
> being written. It will be nice if we have an admin command to list open files 
> and their lease holders.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11995) HDFS Architecture documentation incorrectly describes writing to a local temporary file.

2017-06-19 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-11995:
--
Attachment: HDFS-11995.000.patch

> HDFS Architecture documentation incorrectly describes writing to a local 
> temporary file.
> 
>
> Key: HDFS-11995
> URL: https://issues.apache.org/jira/browse/HDFS-11995
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Chris Nauroth
>Assignee: Nandakumar
>Priority: Minor
> Attachments: HDFS-11995.000.patch
>
>
> The HDFS Architecture documentation has a section titled "Staging" that 
> describes clients writing to a local temporary file first before interacting 
> with the NameNode to allocate file metadata.  This information is incorrect.  
> (Perhaps it was correct a long time ago, but it is no longer accurate with 
> respect to the current implementation.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11995) HDFS Architecture documentation incorrectly describes writing to a local temporary file.

2017-06-19 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054561#comment-16054561
 ] 

Nandakumar commented on HDFS-11995:
---

This was actually fixed in HDFS-1454; the uploaded patch makes a similar change.
I guess it got missed while the docs moved from XML to Markdown.


> HDFS Architecture documentation incorrectly describes writing to a local 
> temporary file.
> 
>
> Key: HDFS-11995
> URL: https://issues.apache.org/jira/browse/HDFS-11995
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Chris Nauroth
>Assignee: Nandakumar
>Priority: Minor
> Attachments: HDFS-11995.000.patch
>
>
> The HDFS Architecture documentation has a section titled "Staging" that 
> describes clients writing to a local temporary file first before interacting 
> with the NameNode to allocate file metadata.  This information is incorrect.  
> (Perhaps it was correct a long time ago, but it is no longer accurate with 
> respect to the current implementation.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11995) HDFS Architecture documentation incorrectly describes writing to a local temporary file.

2017-06-19 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054561#comment-16054561
 ] 

Nandakumar edited comment on HDFS-11995 at 6/19/17 6:41 PM:


This was actually fixed in HDFS-1454; the uploaded patch makes a similar change.
I guess it got missed while the docs moved from XML to Markdown.



was (Author: nandakumar131):
This was actually fixed in HDFS-1454, uploaded the patch with similar.
I guess it got missed while moving from xml to md.


> HDFS Architecture documentation incorrectly describes writing to a local 
> temporary file.
> 
>
> Key: HDFS-11995
> URL: https://issues.apache.org/jira/browse/HDFS-11995
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Chris Nauroth
>Assignee: Nandakumar
>Priority: Minor
> Attachments: HDFS-11995.000.patch
>
>
> The HDFS Architecture documentation has a section titled "Staging" that 
> describes clients writing to a local temporary file first before interacting 
> with the NameNode to allocate file metadata.  This information is incorrect.  
> (Perhaps it was correct a long time ago, but it is no longer accurate with 
> respect to the current implementation.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11647) Add -E option in hdfs "count" command to show erasure policy summarization

2017-06-19 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054606#comment-16054606
 ] 

Lei (Eddy) Xu commented on HDFS-11647:
--

Thanks for the update, [~luhuichun]

One small nit: in {{FileSystemShell.md}}

{code}
The output columns with -count -e are: DIR\_COUNT,FILE\_COUNT,CONTENT_SIZE 
ERASURECODING\_POLICY,PATHNAME
{code}

Other than the erasure coding policy, the rest of the fields are comma-separated. Could 
you make the whole line consistently comma-separated?
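For example, the fully comma-separated line would be:

{code}
The output columns with -count -e are: DIR\_COUNT,FILE\_COUNT,CONTENT_SIZE,ERASURECODING\_POLICY,PATHNAME
{code}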

The rest LGTM.

> Add -E option in hdfs "count" command to show erasure policy summarization
> --
>
> Key: HDFS-11647
> URL: https://issues.apache.org/jira/browse/HDFS-11647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: luhuichun
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11647-001.patch, HDFS-11647-002.patch, 
> HDFS-11647-003.patch, HDFS-11647-004.patch, HDFS-11647-005.patch, 
> HDFS-11647-006.patch
>
>
> Add -E option in hdfs "count" command to show erasure policy summarization



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11518) libhdfs++: Add a build option to skip building examples, tests, and tools

2017-06-19 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11518:
-
Attachment: HDFS-11518.HDFS-8707.002.patch

Attached the wrong file by mistake. Reattaching.

> libhdfs++: Add a build option to skip building examples, tests, and tools
> -
>
> Key: HDFS-11518
> URL: https://issues.apache.org/jira/browse/HDFS-11518
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
> Attachments: HDFS-11518.HDFS-8707.000.patch, 
> HDFS-11518.HDFS-8707.001.patch, HDFS-11518.HDFS-8707.002.patch
>
>
> Adding a flag to just build the core library without tools, examples, and 
> tests will make it easier and lighter weight to embed the libhdfs++ source as 
> a third-party component of other projects.  It won't need to look for a JDK, 
> valgrind, and gmock and won't generate a handful of binaries that might not 
> be relevant to other projects during normal use.
> This should also make it a bit easier to wire into other build frameworks 
> since there won't be standalone binaries that need the path to other 
> libraries like protobuf while the library builds.  They just need to be 
> around while the project embedding libhdfs++ gets linked.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11963) Ozone: Documentation: Add getting started page

2017-06-19 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054667#comment-16054667
 ] 

Elek, Marton commented on HDFS-11963:
-

Just a typo, but it breaks easy copy-paste:

{code}
- ./hdfs --deamon start scm
- ./hdfs --deamon start ksm
{code}

Should be _daemon_
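
That is, the corrected commands are:

{code}
- ./hdfs --daemon start scm
- ./hdfs --daemon start ksm
{code}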

> Ozone: Documentation: Add getting started page
> --
>
> Key: HDFS-11963
> URL: https://issues.apache.org/jira/browse/HDFS-11963
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-11963-HDFS-7240.001.patch, 
> HDFS-11963-HDFS-7240.002.patch, HDFS-11963-HDFS-7240.003.patch, 
> HDFS-11963-HDFS-7240.004.patch, Screen Shot 2017-06-11 at 12.11.06 AM.png, 
> Screen Shot 2017-06-11 at 12.11.19 AM.png, Screen Shot 2017-06-11 at 12.11.32 
> AM.png
>
>
> We need to add the Ozone section to hadoop documentation and also a section 
> on how to get started, that is what are configuration settings needed to 
> start running ozone. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11960) Successfully closed files can stay under-replicated.

2017-06-19 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-11960:
--
Attachment: HDFS-11960-v2.trunk.txt

Added unit test.
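
(Side note for operators hitting this before the fix: per the re-evaluation triggers listed in the description below, calling setrep is one way to nudge the NameNode into re-checking the block. The path and replication factor here are illustrative placeholders, not a tested recipe:)

{code}
# Changing the replication factor triggers the NameNode to re-evaluate the block
hdfs dfs -setrep 4 /path/to/affected/file
{code}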

> Successfully closed files can stay under-replicated.
> 
>
> Key: HDFS-11960
> URL: https://issues.apache.org/jira/browse/HDFS-11960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-11960.patch, HDFS-11960-v2.trunk.txt
>
>
> If a certain set of conditions hold at the time of a file creation, a block 
> of the file can stay under-replicated.  This is because the block is 
> mistakenly taken out of the under-replicated block queue and never gets 
> reevaluated.
> Re-evaluation can be triggered if
> - a replica containing node dies.
> - setrep is called
> - NN repl queues are reinitialized (NN failover or restart)
> If none of these happens, the block stays under-replicated. 
> Here is how it happens.
> 1) A replica is finalized, but the ACK does not reach the upstream in time. 
> IBR is also delayed.
> 2) A close recovery happens, which updates the gen stamp of "healthy" 
> replicas.
> 3) The file is closed with the healthy replicas. It is added to the 
> replication queue.
> 4) A replication is scheduled, so it is added to the pending replication 
> list. The replication target is picked as the failed node in 1).
> 5) The old IBR is finally received for the failed/excluded node. In the 
> meantime, the replication fails, because there is already a finalized replica 
> (with older gen stamp) on the node.
> 6) The IBR processing removes the block from the pending list, adds it to 
> corrupt replicas list, and then issues invalidation. Since the block is in 
> neither replication queue nor pending list, it stays under-replicated.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11518) libhdfs++: Add a build option to skip building examples, tests, and tools

2017-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054680#comment-16054680
 ] 

Hadoop QA commented on HDFS-11518:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
36s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
12s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
56s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
6s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
54s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
58s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-11518 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873542/HDFS-11518.HDFS-8707.002.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux df2c9563ff51 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 40e3290 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19953/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19953/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Add a build option to skip building examples, tests, and tools
> -
>
> Key: HDFS-11518
> URL: https://issues.apache.org/jira/browse/HDFS-11518
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
> Attachments: HDFS-11518.HDFS-8707.000.patch, 
> HDFS-11518.HDFS-8707.001.patch, HDFS-11518.HDFS-8707.002.patch
>
>
> Adding a flag to just build the core library without tools, examples, and 
> tests will make it easier and lighter 

[jira] [Updated] (HDFS-11963) Ozone: Documentation: Add getting started page

2017-06-19 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11963:

Attachment: HDFS-11963-HDFS-7240.005.patch

[~elek] Thanks for catching that, fixed in the new patch.

> Ozone: Documentation: Add getting started page
> --
>
> Key: HDFS-11963
> URL: https://issues.apache.org/jira/browse/HDFS-11963
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-11963-HDFS-7240.001.patch, 
> HDFS-11963-HDFS-7240.002.patch, HDFS-11963-HDFS-7240.003.patch, 
> HDFS-11963-HDFS-7240.004.patch, HDFS-11963-HDFS-7240.005.patch, Screen Shot 
> 2017-06-11 at 12.11.06 AM.png, Screen Shot 2017-06-11 at 12.11.19 AM.png, 
> Screen Shot 2017-06-11 at 12.11.32 AM.png
>
>
> We need to add the Ozone section to hadoop documentation and also a section 
> on how to get started, that is what are configuration settings needed to 
> start running ozone. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11963) Ozone: Documentation: Add getting started page

2017-06-19 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054713#comment-16054713
 ] 

Chen Liang commented on HDFS-11963:
---

Thanks [~anu] for the update, v005 patch LGTM.

> Ozone: Documentation: Add getting started page
> --
>
> Key: HDFS-11963
> URL: https://issues.apache.org/jira/browse/HDFS-11963
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-11963-HDFS-7240.001.patch, 
> HDFS-11963-HDFS-7240.002.patch, HDFS-11963-HDFS-7240.003.patch, 
> HDFS-11963-HDFS-7240.004.patch, HDFS-11963-HDFS-7240.005.patch, Screen Shot 
> 2017-06-11 at 12.11.06 AM.png, Screen Shot 2017-06-11 at 12.11.19 AM.png, 
> Screen Shot 2017-06-11 at 12.11.32 AM.png
>
>
> We need to add the Ozone section to hadoop documentation and also a section 
> on how to get started, that is what are configuration settings needed to 
> start running ozone. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11960) Successfully closed files can stay under-replicated.

2017-06-19 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-11960:
--
Attachment: HDFS-11960-v2.branch-2.txt

The branch-2 patch is identical except for the name change from 
"Reconstruction" to "Replication".

> Successfully closed files can stay under-replicated.
> 
>
> Key: HDFS-11960
> URL: https://issues.apache.org/jira/browse/HDFS-11960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-11960.patch, HDFS-11960-v2.branch-2.txt, 
> HDFS-11960-v2.trunk.txt
>
>
> If a certain set of conditions hold at the time of a file creation, a block 
> of the file can stay under-replicated.  This is because the block is 
> mistakenly taken out of the under-replicated block queue and never gets 
> reevaluated.
> Re-evaluation can be triggered if
> - a replica containing node dies.
> - setrep is called
> - NN repl queues are reinitialized (NN failover or restart)
> If none of these happens, the block stays under-replicated. 
> Here is how it happens.
> 1) A replica is finalized, but the ACK does not reach the upstream in time. 
> IBR is also delayed.
> 2) A close recovery happens, which updates the gen stamp of "healthy" 
> replicas.
> 3) The file is closed with the healthy replicas. It is added to the 
> replication queue.
> 4) A replication is scheduled, so it is added to the pending replication 
> list. The replication target is picked as the failed node in 1).
> 5) The old IBR is finally received for the failed/excluded node. In the 
> meantime, the replication fails, because there is already a finalized replica 
> (with older gen stamp) on the node.
> 6) The IBR processing removes the block from the pending list, adds it to 
> corrupt replicas list, and then issues invalidation. Since the block is in 
> neither replication queue nor pending list, it stays under-replicated.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11997) ChunkManager functions do not use the argument keyName

2017-06-19 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11997:
-

 Summary: ChunkManager functions do not use the argument keyName
 Key: HDFS-11997
 URL: https://issues.apache.org/jira/browse/HDFS-11997
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


{{ChunkManagerImpl}}'s functions i.e. {{writeChunk}} {{readChunk}} 
{{deleteChunk}} all take a {{keyName}} argument, which is not being used by any 
of them.

I think this makes sense because conceptually {{ChunkManager}} should not have 
to know keyName to do anything, probably except for some sort of sanity check 
or logging, which is not there either. We should revisit whether we need it 
here. I think we should remove it to make the Chunk syntax, and the function 
signatures more cleanly abstracted.

Any comments? [~anu]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11997) ChunkManager functions do not use the argument keyName

2017-06-19 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054754#comment-16054754
 ] 

Anu Engineer commented on HDFS-11997:
-

As an API for chunk calls, isn't it better to have these in the API? 
This particular container implementation is not using it, but many others 
might find it useful. If it is just part of the signature, I don't know that it is 
causing any issue.


> ChunkManager functions do not use the argument keyName
> --
>
> Key: HDFS-11997
> URL: https://issues.apache.org/jira/browse/HDFS-11997
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>
> {{ChunkManagerImpl}}'s functions i.e. {{writeChunk}} {{readChunk}} 
> {{deleteChunk}} all take a {{keyName}} argument, which is not being used by 
> any of them.
> I think this makes sense because conceptually {{ChunkManager}} should not 
> have to know keyName to do anything, probably except for some sort of sanity 
> check or logging, which is not there either. We should revisit whether we 
> need it here. I think we should remove it to make the Chunk syntax, and the 
> function signatures more cleanly abstracted.
> Any comments? [~anu]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11963) Ozone: Documentation: Add getting started page

2017-06-19 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054755#comment-16054755
 ] 

Xiaoyu Yao commented on HDFS-11963:
---

Thanks [~anu] for working on this. Patch v5 looks good to me overall. Just a 
few minor issues, +1 after them being fixed.

OzoneCommandShell.md

Line 23-24: Can we add a list of supported commands and link them to the 
detailed examples below? I see we are adding "_" before and after the 
parameters. They show up in the rendered site as "_". Is this intended?

Can we elaborate on the last parameter, "-root"?

Line 106: typo hire->hive
{code}
* `hdfs oz -listtBucket http://localhost:9864/hire`
{code}


Line 123: typo putKey->getKey
Line 129: typo putKey->deleteKey




> Ozone: Documentation: Add getting started page
> --
>
> Key: HDFS-11963
> URL: https://issues.apache.org/jira/browse/HDFS-11963
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-11963-HDFS-7240.001.patch, 
> HDFS-11963-HDFS-7240.002.patch, HDFS-11963-HDFS-7240.003.patch, 
> HDFS-11963-HDFS-7240.004.patch, HDFS-11963-HDFS-7240.005.patch, Screen Shot 
> 2017-06-11 at 12.11.06 AM.png, Screen Shot 2017-06-11 at 12.11.19 AM.png, 
> Screen Shot 2017-06-11 at 12.11.32 AM.png
>
>
> We need to add the Ozone section to hadoop documentation and also a section 
> on how to get started, that is what are configuration settings needed to 
> start running ozone. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11997) ChunkManager functions do not use the argument keyName

2017-06-19 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054779#comment-16054779
 ] 

Chen Liang commented on HDFS-11997:
---

I think from the perspective of abstraction, {{ChunkManager}} should work 
(read/write/delete a chunk) given just the metadata of the chunk. This is not 
causing any issue for now and most likely never will, but I felt that keeping an 
unused field causes confusion. I simply didn't see any case where this field 
should be used by ChunkManager as part of any of the operations. In fact, an 
implementation of ChunkManager that relies on the key name would seem to break 
the abstraction in some way to me...
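
(To make the suggestion concrete, a rough sketch of the trimmed-down shape being argued for; the type and method names below are simplified and hypothetical, not the actual Ozone interfaces:)

{code}
import java.io.IOException;

/** Hypothetical, simplified sketch: chunk operations keyed only by container and chunk metadata. */
public interface ChunkManagerSketch {
  void writeChunk(String containerName, ChunkInfoSketch info, byte[] data) throws IOException;
  byte[] readChunk(String containerName, ChunkInfoSketch info) throws IOException;
  void deleteChunk(String containerName, ChunkInfoSketch info) throws IOException;
}

/** Minimal stand-in for the chunk metadata carried with each call (illustrative). */
class ChunkInfoSketch {
  final String chunkName;
  final long offset;
  final long len;

  ChunkInfoSketch(String chunkName, long offset, long len) {
    this.chunkName = chunkName;
    this.offset = offset;
    this.len = len;
  }
}
{code}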

> ChunkManager functions do not use the argument keyName
> --
>
> Key: HDFS-11997
> URL: https://issues.apache.org/jira/browse/HDFS-11997
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>
> {{ChunkManagerImpl}}'s functions i.e. {{writeChunk}} {{readChunk}} 
> {{deleteChunk}} all take a {{keyName}} argument, which is not being used by 
> any of them.
> I think this makes sense because conceptually {{ChunkManager}} should not 
> have to know keyName to do anything, probably except for some sort of sanity 
> check or logging, which is not there either. We should revisit whether we 
> need it here. I think we should remove it to make the Chunk syntax, and the 
> function signatures more cleanly abstracted.
> Any comments? [~anu]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11998) Enable DFSNetworkTopology as default

2017-06-19 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11998:
-

 Summary: Enable DFSNetworkTopology as default
 Key: HDFS-11998
 URL: https://issues.apache.org/jira/browse/HDFS-11998
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


HDFS-11530 made it configurable to use {{DFSNetworkTopology}}, but 
{{NetworkTopology}} is still the default.

Given the stress testing in HDFS-11923, which shows the correctness of 
DFSNetworkTopology, and the performance testing in HDFS-11535, which shows how 
DFSNetworkTopology can outperform NetworkTopology, I think we are at the point 
where we can and should enable DFSNetworkTopology as the default.

Any comments/thoughts are more than welcome!
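
(If this goes in, switching behavior should come down to a single property in hdfs-site.xml. A sketch, assuming the key introduced by HDFS-11530 is named {{dfs.use.dfs.network.topology}}; please verify the exact name and default against hdfs-default.xml:)

{code}
<!-- hdfs-site.xml: opt in to DFSNetworkTopology explicitly (illustrative; verify key name) -->
<property>
  <name>dfs.use.dfs.network.topology</name>
  <value>true</value>
</property>
{code}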



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11999) Ozone: Clarify error message in case namenode is missing

2017-06-19 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-11999:
---

 Summary: Ozone: Clarify error message in case namenode is missing
 Key: HDFS-11999
 URL: https://issues.apache.org/jira/browse/HDFS-11999
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton


Datanode is failing if namenode config setting is missing even for Ozone with a 
confusing error message:

{code}
14:33:29.176 [main] ERROR o.a.h.hdfs.server.datanode.DataNode - Exception in 
secureMain
java.io.IOException: No services to connect (NameNodes or SCM).
at 
org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:168)
 ~[classes/:na]
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1440)
 [classes/:na]
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:510) 
[classes/:na]
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2802)
 [classes/:na]
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2705)
 [classes/:na]
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2752)
 [classes/:na]
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2896) 
[classes/:na]
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2920) 
[classes/:na]
14:33:29.177 [main] INFO  org.apache.hadoop.util.ExitUtil - Exiting with status 
1: java.io.IOException: No services to connect (NameNodes or SCM).
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11999) Ozone: Clarify startup error message of Datanode in case namenode is missing

2017-06-19 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11999:

Summary: Ozone: Clarify startup error message of Datanode in case namenode 
is missing  (was: Ozone: Clarify error message in case namenode is missing)

> Ozone: Clarify startup error message of Datanode in case namenode is missing
> 
>
> Key: HDFS-11999
> URL: https://issues.apache.org/jira/browse/HDFS-11999
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>
> Datanode is failing if namenode config setting is missing even for Ozone with 
> a confusing error message:
> {code}
> 14:33:29.176 [main] ERROR o.a.h.hdfs.server.datanode.DataNode - Exception in 
> secureMain
> java.io.IOException: No services to connect (NameNodes or SCM).
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:168)
>  ~[classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1440)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:510) 
> [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2802)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2705)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2752)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2896)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2920) 
> [classes/:na]
> 14:33:29.177 [main] INFO  org.apache.hadoop.util.ExitUtil - Exiting with 
> status 1: java.io.IOException: No services to connect (NameNodes or SCM).
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11999) Ozone: Clarify startup error message of Datanode in case namenode is missing

2017-06-19 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11999:

Attachment: HDFS-11999.patch

> Ozone: Clarify startup error message of Datanode in case namenode is missing
> 
>
> Key: HDFS-11999
> URL: https://issues.apache.org/jira/browse/HDFS-11999
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-11999.patch
>
>
> Datanode is failing if namenode config setting is missing even for Ozone with 
> a confusing error message:
> {code}
> 14:33:29.176 [main] ERROR o.a.h.hdfs.server.datanode.DataNode - Exception in 
> secureMain
> java.io.IOException: No services to connect (NameNodes or SCM).
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:168)
>  ~[classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1440)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:510) 
> [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2802)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2705)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2752)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2896)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2920) 
> [classes/:na]
> 14:33:29.177 [main] INFO  org.apache.hadoop.util.ExitUtil - Exiting with 
> status 1: java.io.IOException: No services to connect (NameNodes or SCM).
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11999) Ozone: Clarify startup error message of Datanode in case namenode is missing

2017-06-19 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11999:

Status: Patch Available  (was: Open)

> Ozone: Clarify startup error message of Datanode in case namenode is missing
> 
>
> Key: HDFS-11999
> URL: https://issues.apache.org/jira/browse/HDFS-11999
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-11999.patch
>
>
> Datanode is failing if namenode config setting is missing even for Ozone with 
> a confusing error message:
> {code}
> 14:33:29.176 [main] ERROR o.a.h.hdfs.server.datanode.DataNode - Exception in 
> secureMain
> java.io.IOException: No services to connect (NameNodes or SCM).
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:168)
>  ~[classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1440)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:510) 
> [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2802)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2705)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2752)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2896)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2920) 
> [classes/:na]
> 14:33:29.177 [main] INFO  org.apache.hadoop.util.ExitUtil - Exiting with 
> status 1: java.io.IOException: No services to connect (NameNodes or SCM).
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11997) ChunkManager functions do not use the argument keyName

2017-06-19 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054822#comment-16054822
 ] 

Anu Engineer commented on HDFS-11997:
-

[~vagarychen] Thanks for your comments. You are absolutely correct: ChunkManager 
never really needs to know which key a chunk belongs to. However, some time 
ago we did a prototype for Disaster Recovery (DR) using the containers. In that 
prototype we explored writing the chunks directly from the datanode to S3 
buckets.

If my memory serves me correctly, that is when we added this, since having the 
context made it easier to write the chunk to the remote cluster. If we decide 
to do that in the future, this might be useful.

You are absolutely right that from a layering perspective this does not make 
sense: a chunk never needs to know its parent context. This was only for 
propagating that information over to another cluster.

Right now we have no strong use case for this, and DR has to be re-developed 
anyway (and probably will not need this info). So this is prototyping 
residue that can indeed be removed. Please feel free to remove it; completely 
your call.

Thanks


> ChunkManager functions do not use the argument keyName
> --
>
> Key: HDFS-11997
> URL: https://issues.apache.org/jira/browse/HDFS-11997
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>
> {{ChunkManagerImpl}}'s functions i.e. {{writeChunk}} {{readChunk}} 
> {{deleteChunk}} all take a {{keyName}} argument, which is not being used by 
> any of them.
> I think this makes sense because conceptually {{ChunkManager}} should not 
> have to know keyName to do anything, probably except for some sort of sanity 
> check or logging, which is not there either. We should revisit whether we 
> need it here. I think we should remove it to make the Chunk syntax, and the 
> function signatures more cleanly abstracted.
> Any comments? [~anu]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11999) Ozone: Clarify startup error message of Datanode in case namenode is missing

2017-06-19 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054830#comment-16054830
 ] 

Anu Engineer commented on HDFS-11999:
-

[~elek] Thanks for the patch. Can you please rename your patch to 
HDFS-11999-HDFS-7240.001.patch? The format is JIRA-BRANCH-patch; otherwise 
Jenkins will apply this patch on trunk and cause a build failure.

I am +1 on this patch, so please resubmit and I will commit as soon as we have 
a Jenkins run.

Thank you for the contribution.





> Ozone: Clarify startup error message of Datanode in case namenode is missing
> 
>
> Key: HDFS-11999
> URL: https://issues.apache.org/jira/browse/HDFS-11999
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-11999.patch
>
>
> Datanode is failing if namenode config setting is missing even for Ozone with 
> a confusing error message:
> {code}
> 14:33:29.176 [main] ERROR o.a.h.hdfs.server.datanode.DataNode - Exception in 
> secureMain
> java.io.IOException: No services to connect (NameNodes or SCM).
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:168)
>  ~[classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1440)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:510) 
> [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2802)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2705)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2752)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2896)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2920) 
> [classes/:na]
> 14:33:29.177 [main] INFO  org.apache.hadoop.util.ExitUtil - Exiting with 
> status 1: java.io.IOException: No services to connect (NameNodes or SCM).
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11963) Ozone: Documentation: Add getting started page

2017-06-19 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11963:

Attachment: HDFS-11963-HDFS-7240.006.patch

[~xyao] Thanks for the comments. Patch v6 addresses all comments.

> Ozone: Documentation: Add getting started page
> --
>
> Key: HDFS-11963
> URL: https://issues.apache.org/jira/browse/HDFS-11963
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-11963-HDFS-7240.001.patch, 
> HDFS-11963-HDFS-7240.002.patch, HDFS-11963-HDFS-7240.003.patch, 
> HDFS-11963-HDFS-7240.004.patch, HDFS-11963-HDFS-7240.005.patch, 
> HDFS-11963-HDFS-7240.006.patch, Screen Shot 2017-06-11 at 12.11.06 AM.png, 
> Screen Shot 2017-06-11 at 12.11.19 AM.png, Screen Shot 2017-06-11 at 12.11.32 
> AM.png
>
>
> We need to add the Ozone section to hadoop documentation and also a section 
> on how to get started, that is what are configuration settings needed to 
> start running ozone. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11960) Successfully closed files can stay under-replicated.

2017-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054870#comment-16054870
 ] 

Hadoop QA commented on HDFS-11960:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 16s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  1s{color} | {color:blue} The patch file was not named according to hadoop's naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute for instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m  1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 127 unchanged - 0 fixed = 129 total (was 127) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 97m  5s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m  0s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11960 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12873547/HDFS-11960-v2.trunk.txt |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  findbugs  checkstyle  |
| uname | Linux bb2f07bdf0b9 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 73fb750 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/19954/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
|  Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19954/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19954/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Successfully closed files can stay under-replicated.
> 
>
> Key: HDFS-11960
> URL: https://issues.apache.org/jira/browse/HDFS-11960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
>  

[jira] [Updated] (HDFS-11996) Ozone : add partial read of chunks

2017-06-19 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11996:
--
Attachment: HDFS-11996-HDFS-7240.001.patch

It turns out that partial reads of a chunk are already possible. So instead of 
making any actual change, this patch adds a unit test to illustrate and verify 
how it works.
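
For readers new to the chunk APIs, "partial read" here just means returning a 
sub-range of a chunk instead of its full contents. A minimal sketch of that 
access pattern, written against Hadoop's generic {{FSDataInputStream}} 
positioned-read API as a stand-in (the class below is not the Ozone chunk 
reader exercised by the new unit test):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;

/**
 * Illustration only: read a sub-range of stored data instead of the whole
 * thing. FSDataInputStream stands in for the Ozone chunk reader.
 */
public final class PartialReadSketch {
  public static byte[] readRange(FSDataInputStream in, long offset, int length)
      throws IOException {
    byte[] buf = new byte[length];
    // Positioned read: fills buf with exactly 'length' bytes starting at
    // 'offset', leaving the rest of the chunk untouched.
    in.readFully(offset, buf, 0, length);
    return buf;
  }
}
{code}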

> Ozone : add partial read of chunks
> --
>
> Key: HDFS-11996
> URL: https://issues.apache.org/jira/browse/HDFS-11996
> Project: Hadoop HDFS
>  Issue Type: Sub-task
> Environment: Currently when reading a chunk, it is always the whole 
> chunk that gets returned. However it is possible the reader may only need to 
> read a subset of the chunk. This JIRA adds the partial read of chunks.
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11996-HDFS-7240.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11996) Ozone : add partial read of chunks

2017-06-19 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11996:
--
Status: Patch Available  (was: Open)

> Ozone : add partial read of chunks
> --
>
> Key: HDFS-11996
> URL: https://issues.apache.org/jira/browse/HDFS-11996
> Project: Hadoop HDFS
>  Issue Type: Sub-task
> Environment: Currently when reading a chunk, it is always the whole 
> chunk that gets returned. However it is possible the reader may only need to 
> read a subset of the chunk. This JIRA adds the partial read of chunks.
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11996-HDFS-7240.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11963) Ozone: Documentation: Add getting started page

2017-06-19 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054873#comment-16054873
 ] 

Xiaoyu Yao commented on HDFS-11963:
---

Thanks [~anu] for the update. +1 for v6, pending Jenkins.

> Ozone: Documentation: Add getting started page
> --
>
> Key: HDFS-11963
> URL: https://issues.apache.org/jira/browse/HDFS-11963
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-11963-HDFS-7240.001.patch, 
> HDFS-11963-HDFS-7240.002.patch, HDFS-11963-HDFS-7240.003.patch, 
> HDFS-11963-HDFS-7240.004.patch, HDFS-11963-HDFS-7240.005.patch, 
> HDFS-11963-HDFS-7240.006.patch, Screen Shot 2017-06-11 at 12.11.06 AM.png, 
> Screen Shot 2017-06-11 at 12.11.19 AM.png, Screen Shot 2017-06-11 at 12.11.32 
> AM.png
>
>
> We need to add the Ozone section to hadoop documentation and also a section 
> on how to get started, that is what are configuration settings needed to 
> start running ozone. 
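
For a rough sense of what the getting-started section needs to cover, starting 
Ozone mostly comes down to a handful of properties in {{ozone-site.xml}}. The 
snippet below is only a hedged sketch with placeholder names and values; the 
committed documentation page is the authority on the actual settings:

{code}
<!-- Illustrative ozone-site.xml fragment; property names and values are
     placeholders for this example, not documented defaults. -->
<configuration>
  <property>
    <name>ozone.enabled</name>
    <value>true</value>                 <!-- turn the Ozone services on -->
  </property>
  <property>
    <name>ozone.metadata.dirs</name>
    <value>/data/ozone/metadata</value> <!-- local metadata location -->
  </property>
  <property>
    <name>ozone.scm.names</name>
    <value>scm.example.com</value>      <!-- Storage Container Manager host -->
  </property>
</configuration>
{code}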



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11782) Ozone: KSM: Add listKey

2017-06-19 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054879#comment-16054879
 ] 

Xiaoyu Yao commented on HDFS-11782:
---

+1 for the latest patch, given that we will follow up on the remaining issues 
in HDFS-11984.

> Ozone: KSM: Add listKey
> ---
>
> Key: HDFS-11782
> URL: https://issues.apache.org/jira/browse/HDFS-11782
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: ozone
>Reporter: Anu Engineer
>Assignee: Yiqun Lin
> Attachments: HDFS-11782-HDFS-7240.001.patch, 
> HDFS-11782-HDFS-7240.002.patch, HDFS-11782-HDFS-7240.003.patch, 
> HDFS-11782-HDFS-7240.004.patch
>
>
> Add support for listing keys in a bucket. Just like the other two list 
> operations, this API supports paging via prevKey, prefix and maxKeys.
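
To make the paging contract concrete, the loop below sketches how a caller 
pages through keys with prefix, prevKey and maxKeys. The {{KeyPager}} interface 
is a placeholder for this example, not the real KSM client class:

{code}
import java.util.List;

// Placeholder interface standing in for the KSM/bucket client.
interface KeyPager {
  List<String> listKeys(String prefix, String prevKey, int maxKeys);
}

class ListKeyPagingSketch {
  static void printAllKeys(KeyPager bucket) {
    final String prefix = "photos/";   // only keys under this prefix
    final int maxKeys = 100;           // page size
    String prevKey = null;             // null means start from the beginning
    while (true) {
      List<String> page = bucket.listKeys(prefix, prevKey, maxKeys);
      page.forEach(System.out::println);
      if (page.size() < maxKeys) {
        break;                         // short page: no more keys
      }
      // Resume the next call after the last key returned in this page.
      prevKey = page.get(page.size() - 1);
    }
  }
}
{code}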



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11996) Ozone : add partial read of chunks

2017-06-19 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054900#comment-16054900
 ] 

Anu Engineer commented on HDFS-11996:
-

+1, pending jenkins.

> Ozone : add partial read of chunks
> --
>
> Key: HDFS-11996
> URL: https://issues.apache.org/jira/browse/HDFS-11996
> Project: Hadoop HDFS
>  Issue Type: Sub-task
> Environment: Currently when reading a chunk, it is always the whole 
> chunk that gets returned. However it is possible the reader may only need to 
> read a subset of the chunk. This JIRA adds the partial read of chunks.
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11996-HDFS-7240.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11782) Ozone: KSM: Add listKey

2017-06-19 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11782:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
  Status: Resolved  (was: Patch Available)

Thanks [~linyiqun] for the contribution and all for the reviews. I've committed 
the patch to the feature branch. 

> Ozone: KSM: Add listKey
> ---
>
> Key: HDFS-11782
> URL: https://issues.apache.org/jira/browse/HDFS-11782
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: ozone
>Reporter: Anu Engineer
>Assignee: Yiqun Lin
> Fix For: HDFS-7240
>
> Attachments: HDFS-11782-HDFS-7240.001.patch, 
> HDFS-11782-HDFS-7240.002.patch, HDFS-11782-HDFS-7240.003.patch, 
> HDFS-11782-HDFS-7240.004.patch
>
>
> Add support for listing keys in a bucket. Just like the other two list 
> operations, this API supports paging via prevKey, prefix and maxKeys.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11989) Ozone: add TestKeys with Ratis

2017-06-19 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054910#comment-16054910
 ] 

Xiaoyu Yao commented on HDFS-11989:
---

Thanks [~szetszwo] for working on this. The patch looks good to me. Now that 
listKey has been committed, can you enable testPutAndListKey?


> Ozone: add TestKeys with Ratis
> --
>
> Key: HDFS-11989
> URL: https://issues.apache.org/jira/browse/HDFS-11989
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: HDFS-11989-HDFS-7240.20170618.patch
>
>
> Add a Ratis test similar to TestKeys.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11995) HDFS Architecture documentation incorrectly describes writing to a local temporary file.

2017-06-19 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054913#comment-16054913
 ] 

Masatake Iwasaki commented on HDFS-11995:
-

+1, committing this.

If you say "local buffer in memory" rather than "local file", the statement 
might be not so wrong while the section name "staging" looks misleading.

> HDFS Architecture documentation incorrectly describes writing to a local 
> temporary file.
> 
>
> Key: HDFS-11995
> URL: https://issues.apache.org/jira/browse/HDFS-11995
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Chris Nauroth
>Assignee: Nandakumar
>Priority: Minor
> Attachments: HDFS-11995.000.patch
>
>
> The HDFS Architecture documentation has a section titled "Staging" that 
> describes clients writing to a local temporary file first before interacting 
> with the NameNode to allocate file metadata.  This information is incorrect.  
> (Perhaps it was correct a long time ago, but it is no longer accurate with 
> respect to the current implementation.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11983) Add documentation for metrics in KSMMetrics to OzoneMetrics.md

2017-06-19 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054933#comment-16054933
 ] 

Anu Engineer commented on HDFS-11983:
-

[~linyiqun] Thanks for adding these. Two small nits:

key space manager (KSM) is -*an optional*- a service that *is* similar to the 
Namenode in HDFS. -> Please remove optional.

* VolumeModifies -> VolumeUpdates
* Modify volume property operation -> Update volume property operation

+1, after these fixes.


> Add documentation for metrics in KSMMetrics to OzoneMetrics.md
> --
>
> Key: HDFS-11983
> URL: https://issues.apache.org/jira/browse/HDFS-11983
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation, ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11983-HDFS-7240.001.patch
>
>
> Metrics defined in KSMMetrics are not documented in OzoneMetrics.md. This 
> JIRA will track that work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12000) Ozone: Container : Add key versioning support

2017-06-19 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-12000:
---

 Summary: Ozone: Container : Add key versioning support
 Key: HDFS-12000
 URL: https://issues.apache.org/jira/browse/HDFS-12000
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Assignee: Chen Liang


The REST interface of Ozone supports versioning of keys. This support comes 
from the containers and how chunks are managed. This JIRA tracks that feature; 
a detailed design doc will be posted so that we can discuss it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12001) Remove documentation for "HDFS High Availability with NFS"

2017-06-19 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-12001:


 Summary: Remove documentation for "HDFS High Availability with NFS"
 Key: HDFS-12001
 URL: https://issues.apache.org/jira/browse/HDFS-12001
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Arpit Agarwal


We should consider removing the documentation for _HDFS High Availability with 
NFS_.
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html

Perhaps also consider deprecating support for HA with NFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11995) HDFS Architecture documentation incorrectly describes writing to a local temporary file.

2017-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054942#comment-16054942
 ] 

Hudson commented on HDFS-11995:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11888 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11888/])
HDFS-11995. HDFS Architecture documentation incorrectly describes (iwasakims: 
rev d954a64730c00346476322743462cde857164177)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md


> HDFS Architecture documentation incorrectly describes writing to a local 
> temporary file.
> 
>
> Key: HDFS-11995
> URL: https://issues.apache.org/jira/browse/HDFS-11995
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Chris Nauroth
>Assignee: Nandakumar
>Priority: Minor
> Attachments: HDFS-11995.000.patch
>
>
> The HDFS Architecture documentation has a section titled "Staging" that 
> describes clients writing to a local temporary file first before interacting 
> with the NameNode to allocate file metadata.  This information is incorrect.  
> (Perhaps it was correct a long time ago, but it is no longer accurate with 
> respect to the current implementation.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-8672) Erasure Coding: Add EC-related Metrics to NN (separate striped blocks count from UnderReplicatedBlocks count)

2017-06-19 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-8672.
---
Resolution: Duplicate

Closing this as I believe it duplicates HDFS-10999, which Manoj completed.

> Erasure Coding: Add EC-related Metrics to NN (separate striped blocks count 
> from UnderReplicatedBlocks count)
> -
>
> Key: HDFS-8672
> URL: https://issues.apache.org/jira/browse/HDFS-8672
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Manoj Govindassamy
>Priority: Minor
>  Labels: hdfs-ec-3.0-nice-to-have
>
> 1. {{MissingBlocks}} metric is updated in HDFS-8461 so it includes striped 
> blocks.
> 2. {{CorruptBlocks}} metric is updated in HDFS-8619 so it includes striped 
> blocks.
> 3. {{UnderReplicatedBlocks}} and {{PendingReplicationBlocks}} include 
> striped blocks (HDFS-7912).
> This jira aims to separate the striped blocks count from the 
> {{UnderReplicatedBlocks}} count.
> EC file recovery needs erasure-coding computation, so it's more expensive 
> than block duplication. It's necessary to separate the striped blocks count 
> from the UnderReplicatedBlocks count, so users can know what's going on.
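
As a rough illustration of the split being asked for (the committed work went 
in via HDFS-10999), a metrics source would keep two counters instead of one 
aggregate; the class and method names below are placeholders, not the actual 
NameNode metrics:

{code}
import java.util.concurrent.atomic.AtomicLong;

// Placeholder sketch: separate counters for contiguous (replicated) blocks
// and striped (EC) block groups, instead of one aggregated count.
class LowRedundancyStatsSketch {
  private final AtomicLong replicatedBlocks = new AtomicLong();
  private final AtomicLong ecBlockGroups = new AtomicLong();

  void increment(boolean isStriped) {
    (isStriped ? ecBlockGroups : replicatedBlocks).incrementAndGet();
  }

  long getUnderReplicatedReplicatedBlocks() { return replicatedBlocks.get(); }
  long getUnderReplicatedECBlockGroups()    { return ecBlockGroups.get(); }
}
{code}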



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11670) [SPS]: Add CLI command for satisfy storage policy operations

2017-06-19 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054980#comment-16054980
 ] 

Uma Maheswara Rao G commented on HDFS-11670:


+1 on the latest patch. Thanks [~surendrasingh] for the patch, and thank you 
[~rakeshr] for the reviews. Committing to the branch!

> [SPS]: Add CLI command for satisfy storage policy operations
> 
>
> Key: HDFS-11670
> URL: https://issues.apache.org/jira/browse/HDFS-11670
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11670-HDFS-10285.001.patch, 
> HDFS-11670-HDFS-10285.002.patch, HDFS-11670-HDFS-10285.003.patch, 
> HDFS-11670-HDFS-10285.004.patch, HDFS-11670-HDFS-10285.005.patch
>
>
> This jira is to discuss and implement a set of satisfy storage policy 
> sub-commands. Following is the list of sub-commands:
> # Schedule blocks to move based on the file/directory policy:
> {code}hdfs storagepolicies -satisfyStoragePolicy -path <path>{code}
> # It's good to have one command to check whether SPS is enabled or not. Based 
> on this, the user can decide whether to run the Mover:
> {code}
> hdfs storagepolicies -isSPSRunning
> {code}
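
As an end-to-end usage sketch, the new sub-commands would typically follow a 
{{-setStoragePolicy}} call; the path and policy name below are placeholders, 
and the exact option syntax is whatever the committed patch documents:

{code}
# Illustrative only: pin a directory to ALL_SSD, then ask SPS to move the
# existing blocks to match the policy. Path and policy are placeholders.
hdfs storagepolicies -setStoragePolicy -path /data/hot -policy ALL_SSD
hdfs storagepolicies -satisfyStoragePolicy -path /data/hot

# Check whether SPS is enabled before falling back to the Mover tool.
hdfs storagepolicies -isSPSRunning
{code}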



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11670) [SPS]: Add CLI command for satisfy storage policy operations

2017-06-19 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-11670:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-10285
   Status: Resolved  (was: Patch Available)

Committed to the branch!

> [SPS]: Add CLI command for satisfy storage policy operations
> 
>
> Key: HDFS-11670
> URL: https://issues.apache.org/jira/browse/HDFS-11670
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: HDFS-10285
>
> Attachments: HDFS-11670-HDFS-10285.001.patch, 
> HDFS-11670-HDFS-10285.002.patch, HDFS-11670-HDFS-10285.003.patch, 
> HDFS-11670-HDFS-10285.004.patch, HDFS-11670-HDFS-10285.005.patch
>
>
> This jira is to discuss and implement a set of satisfy storage policy 
> sub-commands. Following is the list of sub-commands:
> # Schedule blocks to move based on the file/directory policy:
> {code}hdfs storagepolicies -satisfyStoragePolicy -path <path>{code}
> # It's good to have one command to check whether SPS is enabled or not. Based 
> on this, the user can decide whether to run the Mover:
> {code}
> hdfs storagepolicies -isSPSRunning
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11606) Add CLI cmd to remove an erasure code policy

2017-06-19 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16055005#comment-16055005
 ] 

Kai Zheng commented on HDFS-11606:
--

The latest patch LGTM, +1. Thanks Tim! We might get rid of the 
{{removedPoliciesByName}} map in the {{ErasureCodingPolicyManager}} later when 
following up on the required fsimage/editlog change.

Will get it in tomorrow if no further comments.

> Add CLI cmd to remove an erasure code policy
> 
>
> Key: HDFS-11606
> URL: https://issues.apache.org/jira/browse/HDFS-11606
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Kai Zheng
>Assignee: Tim Yao
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-11606.01.patch, HDFS-11606.02.patch, 
> HDFS-11606.03.patch
>
>
> This is to develop a CLI cmd allowing the user to remove a user-defined 
> erasure code policy by specifying its name. Note that if the policy is 
> referenced and used by existing HDFS files, the removal should fail with a 
> clear message.
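
As a usage sketch (the policy name and file below are placeholders, and the 
exact flags are whatever the final patch commits):

{code}
# Illustrative only: add a user-defined EC policy from a file, list the
# known policies, then remove the user-defined one by name.
hdfs ec -addPolicies -policyFile /tmp/user_ec_policies.xml
hdfs ec -listPolicies
hdfs ec -removePolicy -policy MyRS-6-3-1024k
{code}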



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-11995) HDFS Architecture documentation incorrectly describes writing to a local temporary file.

2017-06-19 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki resolved HDFS-11995.
-
   Resolution: Fixed
Fix Version/s: 2.8.2
   3.0.0-alpha4
   2.9.0

Committed this to branch-2.8.2 and above. Thanks, [~nandakumar131].

> HDFS Architecture documentation incorrectly describes writing to a local 
> temporary file.
> 
>
> Key: HDFS-11995
> URL: https://issues.apache.org/jira/browse/HDFS-11995
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Chris Nauroth
>Assignee: Nandakumar
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-11995.000.patch
>
>
> The HDFS Architecture documentation has a section titled "Staging" that 
> describes clients writing to a local temporary file first before interacting 
> with the NameNode to allocate file metadata.  This information is incorrect.  
> (Perhaps it was correct a long time ago, but it is no longer accurate with 
> respect to the current implementation.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11916) Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a random EC policy

2017-06-19 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16055025#comment-16055025
 ] 

Takanobu Asanuma commented on HDFS-11916:
-

Thanks for reviewing and committing, [~eddyxu]!

> Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a 
> random EC policy
> 
>
> Key: HDFS-11916
> URL: https://issues.apache.org/jira/browse/HDFS-11916
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-11916.1.patch, HDFS-11916.2.patch, 
> HDFS-11916.3.patch, HDFS-11916.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


