[jira] [Updated] (HDFS-11724) libhdfs compilation is broken on OS X

2017-04-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-11724:

Description: Looks like HDFS-11529 added an include for malloc.h, which 
isn't available on OS X and likely other operating systems.  Many OSes use 
sys/malloc.h, including OS X.  But considering this is supposed to be POSIX, 
then it should be using the define in stdlib.h (as mentioned in pretty much 
every userland man page on malloc).  (was: Looks like HDFS-11529 added an 
include for malloc.h, which isn't available on OS X and likely other operating 
systems.  Many OSes use sys/malloc.h, including OS X.  But considering this is 
supposed to be POSIX, then it should be using the define in stdlib.h.)

> libhdfs compilation is broken on OS X
> -
>
> Key: HDFS-11724
> URL: https://issues.apache.org/jira/browse/HDFS-11724
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> Looks like HDFS-11529 added an include for malloc.h, which isn't available on 
> OS X and likely other operating systems.  Many OSes use sys/malloc.h, 
> including OS X.  But considering this is supposed to be POSIX, then it should 
> be using the define in stdlib.h (as mentioned in pretty much every userland 
> man page on malloc).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11724) libhdfs compilation is broken on OS X

2017-04-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-11724:

Description: Looks like HDFS-11529 added an include for malloc.h, which 
isn't available on OS X and likely other operating systems.  Many OSes use 
sys/malloc.h, including OS X.  But considering this is supposed to be POSIX, 
then it should be using the define in stdlib.h.  (was: Looks like HDFS-11529 
added an include for malloc.h, which isn't available on OS X and likely other 
operating systems.  Many OSes uses sys/malloc.h, including OS X. If we want 
POSIX, then it should be using the one in stdlib.h.)

> libhdfs compilation is broken on OS X
> -
>
> Key: HDFS-11724
> URL: https://issues.apache.org/jira/browse/HDFS-11724
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> Looks like HDFS-11529 added an include for malloc.h, which isn't available on 
> OS X and likely other operating systems.  Many OSes use sys/malloc.h, 
> including OS X.  But considering this is supposed to be POSIX, then it should 
> be using the define in stdlib.h.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11529) Add libHDFS API to return last exception

2017-04-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989779#comment-15989779
 ] 

Allen Wittenauer commented on HDFS-11529:
-

I've filed HDFS-11724 for breaking (at least) OS X and probably others.

> Add libHDFS API to return last exception
> 
>
> Key: HDFS-11529
> URL: https://issues.apache.org/jira/browse/HDFS-11529
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Mukil
>Assignee: Sailesh Mukil
>Priority: Critical
>  Labels: errorhandling, libhdfs
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11529.000.patch, HDFS-11529.001.patch, 
> HDFS-11529.002.patch, HDFS-11529.003.patch, HDFS-11529.004.patch, 
> HDFS-11529.005.patch, HDFS-11529.006.patch
>
>
> libHDFS uses a table to compare exceptions against and returns a 
> corresponding error code to the application in case of an error.
> However, this table is manually populated and many times is disremembered 
> when new exceptions are added.
> This causes libHDFS to return EINTERNAL (or Unknown Error(255)) whenever 
> these exceptions are hit. These are some examples of exceptions that have 
> been observed on an Error(255):
> org.apache.hadoop.ipc.StandbyException (Operation category WRITE is not 
> supported in state standby)
> java.io.EOFException: Cannot seek after EOF
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> It is of course not possible to have an error code for each and every type of 
> exception, so one suggestion of how this can be addressed is by having a call 
> such as hdfsGetLastException() that would return the last exception that a 
> libHDFS thread encountered. This way, an application may choose to call 
> hdfsGetLastException() if it receives EINTERNAL.
> We can make use of the Thread Local Storage to store this information. Also, 
> this makes sure that the current functionality is preserved.
> This is a follow up from HDFS-4997.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11724) libhdfs compilation is broken on OS X

2017-04-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-11724:

Description: Looks like HDFS-11529 added an include for malloc.h, which 
isn't available on OS X and likely other operating systems.  Many OSes uses 
sys/malloc.h, including OS X. If we want POSIX, then it should be using the one 
in stdlib.h.  (was: Looks like HDFS-11529 added an include for malloc.h, which 
isn't available on OS X and likely other operating systems.  Many OSes uses 
sys/malloc.h, including OS X.)

> libhdfs compilation is broken on OS X
> -
>
> Key: HDFS-11724
> URL: https://issues.apache.org/jira/browse/HDFS-11724
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> Looks like HDFS-11529 added an include for malloc.h, which isn't available on 
> OS X and likely other operating systems.  Many OSes uses sys/malloc.h, 
> including OS X. If we want POSIX, then it should be using the one in stdlib.h.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11724) libhdfs compilation is broken on OS X

2017-04-28 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-11724:
---

 Summary: libhdfs compilation is broken on OS X
 Key: HDFS-11724
 URL: https://issues.apache.org/jira/browse/HDFS-11724
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 3.0.0-alpha3
Reporter: Allen Wittenauer
Priority: Blocker


Looks like HDFS-11529 added an include for malloc.h, which isn't available on 
OS X and likely other operating systems.  Many OSes uses sys/malloc.h, 
including OS X.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6984) In Hadoop 3, make FileStatus serialize itself via protobuf

2017-04-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989778#comment-15989778
 ] 

Hadoop QA commented on HDFS-6984:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
24s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
29s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m  
7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m  7s{color} 
| {color:red} root generated 68 new + 788 unchanged - 0 fixed = 856 total (was 
788) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  4s{color} | {color:orange} root: The patch generated 19 new + 772 unchanged 
- 24 fixed = 791 total (was 796) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
37s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 3 new 
+ 2 unchanged - 0 fixed = 5 total (was 2) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 54s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 24s{color} 
| {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hado

[jira] [Updated] (HDFS-11695) [SPS]: Namenode failed to start while loading SPS xAttrs from the edits log.

2017-04-28 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-11695:
--
Status: Patch Available  (was: Open)

> [SPS]: Namenode failed to start while loading SPS xAttrs from the edits log.
> 
>
> Key: HDFS-11695
> URL: https://issues.apache.org/jira/browse/HDFS-11695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Blocker
> Attachments: fsimage.xml, HDFS-11695-HDFS-10285.001.patch
>
>
> {noformat}
> 2017-04-23 13:27:51,971 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.io.IOException: Cannot request to call satisfy storage policy on path 
> /ssl, as this file/dir was already called for satisfying storage policy.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSatisfyStoragePolicy(FSDirAttrOp.java:511)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirXAttrOp.unprotectedSetXAttrs(FSDirXAttrOp.java:284)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:918)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:241)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:150)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11695) [SPS]: Namenode failed to start while loading SPS xAttrs from the edits log.

2017-04-28 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-11695:
--
Attachment: HDFS-11695-HDFS-10285.001.patch

Attached initial patch.
Please review..

> [SPS]: Namenode failed to start while loading SPS xAttrs from the edits log.
> 
>
> Key: HDFS-11695
> URL: https://issues.apache.org/jira/browse/HDFS-11695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Blocker
> Attachments: fsimage.xml, HDFS-11695-HDFS-10285.001.patch
>
>
> {noformat}
> 2017-04-23 13:27:51,971 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.io.IOException: Cannot request to call satisfy storage policy on path 
> /ssl, as this file/dir was already called for satisfying storage policy.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSatisfyStoragePolicy(FSDirAttrOp.java:511)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirXAttrOp.unprotectedSetXAttrs(FSDirXAttrOp.java:284)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:918)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:241)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:150)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11695) [SPS]: Namenode failed to start while loading SPS xAttrs from the edits log.

2017-04-28 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989773#comment-15989773
 ] 

Surendra Singh Lilhore commented on HDFS-11695:
---

Thanks [~umamaheswararao] and [~yuanbo].

Scenario :

1. Create */test/file*.
2. set storage policy for file */test/file* and call satisfyStoragePolicy() API.
3. wait for SPS to remove xAttr for file */test/file*.
4. set storage policy for directory */test* and call satisfyStoragePolicy() API.
5. restart the namenode.

Root cause : 
==
When we will call the satisfyStoragePolicy() API for file or directory, 
namenode will log the {{OP_SET_XATTR}} in edit log.
After finishing the SPS work, it will remove the xAttr from the memory but it 
is not logged in edit log. When we restart the namenode it will load again 
{{OP_SET_XATTR}} from the editlog for */test/file* and */test*.

> [SPS]: Namenode failed to start while loading SPS xAttrs from the edits log.
> 
>
> Key: HDFS-11695
> URL: https://issues.apache.org/jira/browse/HDFS-11695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Blocker
> Attachments: fsimage.xml
>
>
> {noformat}
> 2017-04-23 13:27:51,971 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.io.IOException: Cannot request to call satisfy storage policy on path 
> /ssl, as this file/dir was already called for satisfying storage policy.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSatisfyStoragePolicy(FSDirAttrOp.java:511)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirXAttrOp.unprotectedSetXAttrs(FSDirXAttrOp.java:284)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:918)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:241)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:150)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9807) Add an optional StorageID to writes

2017-04-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989769#comment-15989769
 ] 

Hadoop QA commented on HDFS-9807:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
33s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
54s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-hdfs-project: The patch generated 22 new 
+ 1591 unchanged - 34 fixed = 1613 total (was 1625) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestMetadataVersionOutput |
|   | hadoop.hdfs.server.namenode.TestStartup |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | HDFS-9807 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865630/HDFS-9807.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 910d4d8f1dc6 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/h

[jira] [Commented] (HDFS-11722) Change Datanode file IO profiling sampling to percentage

2017-04-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989759#comment-15989759
 ] 

Hadoop QA commented on HDFS-11722:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 499 unchanged - 0 fixed = 501 total (was 499) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestMetadataVersionOutput |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.namenode.TestStartup |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | HDFS-11722 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865641/HDFS-11722.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 4c76066e2009 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 19a7e94 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19238/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19238/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19238/artifact/pa

[jira] [Commented] (HDFS-11710) hadoop-hdfs-native-client build fails in trunk after HDFS-11529

2017-04-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989745#comment-15989745
 ] 

Hadoop QA commented on HDFS-11710:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m  
4s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
1s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | HDFS-11710 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865658/HDFS-11710.000.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 5de59dceb32e 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 19a7e94 |
| Default Java | 1.8.0_121 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19241/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19241/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> hadoop-hdfs-native-client build fails in trunk after HDFS-11529
> ---
>
> Key: HDFS-11710
> URL: https://issues.apache.org/jira/browse/HDFS-11710
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.0.0-alpha3
>Reporter: Vinayakumar B
>Assignee: Sailesh Mukil
>Priority: Blocker
> Attachments: HDFS-11710.000.patch
>
>
> HDFS-11529 used 'hdfsThreadDestructor()' in jni_helper.c.
> But this function is implemented in only "posix/thread_local_storage.c" NOT 
> in 
> "windows/thread_local_storage.c"
> Fails with following errors
> {noformat}
>  [exec]   hdfs.dir\RelWithDebInfo\thread_local_storage.obj  /machine:x64 
> /debug 
>  [exec]  Creating library 
> D:/hadoop/work/hadoop-hdfs-project/hadoop-hdfs-native-client/target/native/bin/RelWithDebInfo/hdfs.lib
>  and object 
> D:/hadoop/work/hadoop-hdfs-project/hadoop-hdfs-native-client/target/native/bin/RelWithDebInfo/hdfs.exp
>  [exec] jni_helper.obj : error LNK2019: unresolved external symbol 
> hdfsThreadDestru

[jira] [Commented] (HDFS-11718) DFSStripedOutputStream hsync/hflush should not throw UnsupportedOperationException

2017-04-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989742#comment-15989742
 ] 

Hudson commented on HDFS-11718:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11653 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11653/])
HDFS-11718. DFSStripedOutputStream hsync/hflush should not throw (lei: rev 
19a7e94ee47f81557f0db6fb76bdf6bc49944dd0)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java


> DFSStripedOutputStream hsync/hflush should not throw 
> UnsupportedOperationException
> --
>
> Key: HDFS-11718
> URL: https://issues.apache.org/jira/browse/HDFS-11718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11644.01.patch, HDFS-11718.01.patch
>
>
> This is a clone bug of HDFS-11644 to provide a quick fix while a proper fix 
> is being discussed and developed. The quick fix here would be to just turn 
> {{DFSStripedOutputStream#hsync()/hflush()}} as a no-op instead of throwing 
> UnsupportedOperationException. 
> {code}
>   @Override
>   public void hflush() {
> throw new UnsupportedOperationException();
>   }
>   @Override
>   public void hsync() {
> throw new UnsupportedOperationException();
>   }
> {code}
> For more details please refer to the comments in HDFS-11644.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11710) hadoop-hdfs-native-client build fails in trunk after HDFS-11529

2017-04-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989738#comment-15989738
 ] 

John Zhuge commented on HDFS-11710:
---

+1 LGTM.  [~vinayrpet] and [~Sammi], could you please build the patch on 
Windows and possibly run the unit tests? I don't have access to a Windows env 
quickly.

> hadoop-hdfs-native-client build fails in trunk after HDFS-11529
> ---
>
> Key: HDFS-11710
> URL: https://issues.apache.org/jira/browse/HDFS-11710
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.0.0-alpha3
>Reporter: Vinayakumar B
>Assignee: Sailesh Mukil
>Priority: Blocker
> Attachments: HDFS-11710.000.patch
>
>
> HDFS-11529 used 'hdfsThreadDestructor()' in jni_helper.c.
> But this function is implemented in only "posix/thread_local_storage.c" NOT 
> in 
> "windows/thread_local_storage.c"
> Fails with following errors
> {noformat}
>  [exec]   hdfs.dir\RelWithDebInfo\thread_local_storage.obj  /machine:x64 
> /debug 
>  [exec]  Creating library 
> D:/hadoop/work/hadoop-hdfs-project/hadoop-hdfs-native-client/target/native/bin/RelWithDebInfo/hdfs.lib
>  and object 
> D:/hadoop/work/hadoop-hdfs-project/hadoop-hdfs-native-client/target/native/bin/RelWithDebInfo/hdfs.exp
>  [exec] jni_helper.obj : error LNK2019: unresolved external symbol 
> hdfsThreadDestructor referenced in function getJNIEnv 
> [D:\hadoop\work\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfs\hdfs.vcxproj]
>  [exec] 
> D:\hadoop\work\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\bin\RelWithDebInfo\hdfs.dll
>  : fatal error LNK1120: 1 unresolved externals 
> [D:\hadoop\work\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfs\hdfs.vcxproj]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11710) hadoop-hdfs-native-client build fails in trunk after HDFS-11529

2017-04-28 Thread Sailesh Mukil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailesh Mukil updated HDFS-11710:
-
Status: Patch Available  (was: Open)

I've attached the patch within. Since I don't have a Windows dev environment, 
could someone in the community help with testing this?

Thanks in advance.

> hadoop-hdfs-native-client build fails in trunk after HDFS-11529
> ---
>
> Key: HDFS-11710
> URL: https://issues.apache.org/jira/browse/HDFS-11710
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.0.0-alpha3
>Reporter: Vinayakumar B
>Assignee: Sailesh Mukil
>Priority: Blocker
> Attachments: HDFS-11710.000.patch
>
>
> HDFS-11529 used 'hdfsThreadDestructor()' in jni_helper.c.
> But this function is implemented in only "posix/thread_local_storage.c" NOT 
> in 
> "windows/thread_local_storage.c"
> Fails with following errors
> {noformat}
>  [exec]   hdfs.dir\RelWithDebInfo\thread_local_storage.obj  /machine:x64 
> /debug 
>  [exec]  Creating library 
> D:/hadoop/work/hadoop-hdfs-project/hadoop-hdfs-native-client/target/native/bin/RelWithDebInfo/hdfs.lib
>  and object 
> D:/hadoop/work/hadoop-hdfs-project/hadoop-hdfs-native-client/target/native/bin/RelWithDebInfo/hdfs.exp
>  [exec] jni_helper.obj : error LNK2019: unresolved external symbol 
> hdfsThreadDestructor referenced in function getJNIEnv 
> [D:\hadoop\work\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfs\hdfs.vcxproj]
>  [exec] 
> D:\hadoop\work\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\bin\RelWithDebInfo\hdfs.dll
>  : fatal error LNK1120: 1 unresolved externals 
> [D:\hadoop\work\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfs\hdfs.vcxproj]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11710) hadoop-hdfs-native-client build fails in trunk after HDFS-11529

2017-04-28 Thread Sailesh Mukil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989712#comment-15989712
 ] 

Sailesh Mukil commented on HDFS-11710:
--

[~jzhuge] [~Sammi] FYI

> hadoop-hdfs-native-client build fails in trunk after HDFS-11529
> ---
>
> Key: HDFS-11710
> URL: https://issues.apache.org/jira/browse/HDFS-11710
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.0.0-alpha3
>Reporter: Vinayakumar B
>Assignee: Sailesh Mukil
>Priority: Blocker
> Attachments: HDFS-11710.000.patch
>
>
> HDFS-11529 used 'hdfsThreadDestructor()' in jni_helper.c.
> But this function is implemented in only "posix/thread_local_storage.c" NOT 
> in 
> "windows/thread_local_storage.c"
> Fails with following errors
> {noformat}
>  [exec]   hdfs.dir\RelWithDebInfo\thread_local_storage.obj  /machine:x64 
> /debug 
>  [exec]  Creating library 
> D:/hadoop/work/hadoop-hdfs-project/hadoop-hdfs-native-client/target/native/bin/RelWithDebInfo/hdfs.lib
>  and object 
> D:/hadoop/work/hadoop-hdfs-project/hadoop-hdfs-native-client/target/native/bin/RelWithDebInfo/hdfs.exp
>  [exec] jni_helper.obj : error LNK2019: unresolved external symbol 
> hdfsThreadDestructor referenced in function getJNIEnv 
> [D:\hadoop\work\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfs\hdfs.vcxproj]
>  [exec] 
> D:\hadoop\work\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\bin\RelWithDebInfo\hdfs.dll
>  : fatal error LNK1120: 1 unresolved externals 
> [D:\hadoop\work\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfs\hdfs.vcxproj]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11710) hadoop-hdfs-native-client build fails in trunk after HDFS-11529

2017-04-28 Thread Sailesh Mukil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailesh Mukil updated HDFS-11710:
-
Attachment: HDFS-11710.000.patch

> hadoop-hdfs-native-client build fails in trunk after HDFS-11529
> ---
>
> Key: HDFS-11710
> URL: https://issues.apache.org/jira/browse/HDFS-11710
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.0.0-alpha3
>Reporter: Vinayakumar B
>Assignee: Sailesh Mukil
>Priority: Blocker
> Attachments: HDFS-11710.000.patch
>
>
> HDFS-11529 used 'hdfsThreadDestructor()' in jni_helper.c.
> But this function is implemented in only "posix/thread_local_storage.c" NOT 
> in 
> "windows/thread_local_storage.c"
> Fails with following errors
> {noformat}
>  [exec]   hdfs.dir\RelWithDebInfo\thread_local_storage.obj  /machine:x64 
> /debug 
>  [exec]  Creating library 
> D:/hadoop/work/hadoop-hdfs-project/hadoop-hdfs-native-client/target/native/bin/RelWithDebInfo/hdfs.lib
>  and object 
> D:/hadoop/work/hadoop-hdfs-project/hadoop-hdfs-native-client/target/native/bin/RelWithDebInfo/hdfs.exp
>  [exec] jni_helper.obj : error LNK2019: unresolved external symbol 
> hdfsThreadDestructor referenced in function getJNIEnv 
> [D:\hadoop\work\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfs\hdfs.vcxproj]
>  [exec] 
> D:\hadoop\work\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\bin\RelWithDebInfo\hdfs.dll
>  : fatal error LNK1120: 1 unresolved externals 
> [D:\hadoop\work\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfs\hdfs.vcxproj]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11644) DFSStripedOutputStream should not implement Syncable

2017-04-28 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11644:
--
Attachment: HDFS-11644.02.patch

Based on suggestion from [~ste...@apache.org] and Stack, used a new interface 
to query for stream capabilities. [~andrew.wang], can you please take a look ?

> DFSStripedOutputStream should not implement Syncable
> 
>
> Key: HDFS-11644
> URL: https://issues.apache.org/jira/browse/HDFS-11644
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11644.01.patch, HDFS-11644.02.patch
>
>
> FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, 
> calls hsync. Otherwise, it just calls flush. This is used, for instance, by 
> YARN's FileSystemTimelineWriter.
> DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. 
> However, DFSStripedOS throws a runtime exception when the Syncable methods 
> are called.
> We should refactor the inheritance structure so DFSStripedOS does not 
> implement Syncable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6984) In Hadoop 3, make FileStatus serialize itself via protobuf

2017-04-28 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989654#comment-15989654
 ] 

Chris Douglas commented on HDFS-6984:
-

v006 of the patch does the following:
* Moves all the protobuf serialization to library code
* Deprecates Writable APIs for both {{FileStatus}} and {{FsPermission}}
* Where possible, removes {{FsPermissionExtension}}.
** Too many unit tests rely on this for HDFS, and trying to change those caused 
even more bloat to the patch. v006 moves its instantiation to a private 
{{convert}} method. If this is OK, I'll file a followup JIRA to clear up the 
unit tests.
** I don't know how many downstream applications rely on the {{hasAcl}}, 
{{isEncrypted}}, or {{isErasureCoded}} methods on {{FsPermission}}, but these 
are deprecated (rather than removed) in the patch.
** Introduced an intermediate, private {{FlaggedFileStatus}} class to preserve 
the attributes formerly mixed in with the permission bits. This could have been 
in {{LocatedFileStatus}}, but downstream clients may check {{instanceof 
LocatedFileStatus}} and assume the null locations are correct.
** Still need to deprecate {{FsPermission#toExtendedShort}}, will post a 
followup patch with any other checkstyle/findbugs fixes to v006
* Make {{HdfsFileStatus}} extend {{FileStatus}}
** After HADOOP-13895, the {{Serializable}} API bled further into the API
** {{getSymlink}} annoyingly throws an {{IOException}} if {{!isSymlink()}}. 
Overriding it in {{HdfsFileStatus}} required changing the return type, and to 
comply with the contract tests there are some superfluous try/catch statements 
in e.g., the JSON utils
** The JSON code tried to preserve the acl/crypt/ec bits of the 
{{FsPermission}} on {{AclStatus}}. This seems incorrect, but I can try to find 
a solution if it is meaningful.

> In Hadoop 3, make FileStatus serialize itself via protobuf
> --
>
> Key: HDFS-6984
> URL: https://issues.apache.org/jira/browse/HDFS-6984
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6984.001.patch, HDFS-6984.002.patch, 
> HDFS-6984.003.patch, HDFS-6984.004.patch, HDFS-6984.005.patch, 
> HDFS-6984.006.patch, HDFS-6984.nowritable.patch
>
>
> FileStatus was a Writable in Hadoop 2 and earlier.  Originally, we used this 
> to serialize it and send it over the wire.  But in Hadoop 2 and later, we 
> have the protobuf {{HdfsFileStatusProto}} which serves to serialize this 
> information.  The protobuf form is preferable, since it allows us to add new 
> fields in a backwards-compatible way.  Another issue is that already a lot of 
> subclasses of FileStatus don't override the Writable methods of the 
> superclass, breaking the interface contract that read(status.write) should be 
> equal to the original status.
> In Hadoop 3, we should just make FileStatus serialize itself via protobuf so 
> that we don't have to deal with these issues.  It's probably too late to do 
> this in Hadoop 2, since user code may be relying on the existing FileStatus 
> serialization there.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6984) In Hadoop 3, make FileStatus serialize itself via protobuf

2017-04-28 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-6984:

Attachment: HDFS-6984.006.patch

> In Hadoop 3, make FileStatus serialize itself via protobuf
> --
>
> Key: HDFS-6984
> URL: https://issues.apache.org/jira/browse/HDFS-6984
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6984.001.patch, HDFS-6984.002.patch, 
> HDFS-6984.003.patch, HDFS-6984.004.patch, HDFS-6984.005.patch, 
> HDFS-6984.006.patch, HDFS-6984.nowritable.patch
>
>
> FileStatus was a Writable in Hadoop 2 and earlier.  Originally, we used this 
> to serialize it and send it over the wire.  But in Hadoop 2 and later, we 
> have the protobuf {{HdfsFileStatusProto}} which serves to serialize this 
> information.  The protobuf form is preferable, since it allows us to add new 
> fields in a backwards-compatible way.  Another issue is that already a lot of 
> subclasses of FileStatus don't override the Writable methods of the 
> superclass, breaking the interface contract that read(status.write) should be 
> equal to the original status.
> In Hadoop 3, we should just make FileStatus serialize itself via protobuf so 
> that we don't have to deal with these issues.  It's probably too late to do 
> this in Hadoop 2, since user code may be relying on the existing FileStatus 
> serialization there.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11718) DFSStripedOutputStream hsync/hflush should not throw UnsupportedOperationException

2017-04-28 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989626#comment-15989626
 ] 

Manoj Govindassamy edited comment on HDFS-11718 at 4/29/17 12:19 AM:
-

Thanks for the review and commit help [~eddyxu].
[~andrew.wang], I believe this fix needs to be committed to trunk also. Please 
confirm. 


was (Author: manojg):
[~andrew.wang], I believe this fix needs to be committed to trunk also. Please 
confirm. 

> DFSStripedOutputStream hsync/hflush should not throw 
> UnsupportedOperationException
> --
>
> Key: HDFS-11718
> URL: https://issues.apache.org/jira/browse/HDFS-11718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11644.01.patch, HDFS-11718.01.patch
>
>
> This is a clone bug of HDFS-11644 to provide a quick fix while a proper fix 
> is being discussed and developed. The quick fix here would be to just turn 
> {{DFSStripedOutputStream#hsync()/hflush()}} as a no-op instead of throwing 
> UnsupportedOperationException. 
> {code}
>   @Override
>   public void hflush() {
> throw new UnsupportedOperationException();
>   }
>   @Override
>   public void hsync() {
> throw new UnsupportedOperationException();
>   }
> {code}
> For more details please refer to the comments in HDFS-11644.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11718) DFSStripedOutputStream hsync/hflush should not throw UnsupportedOperationException

2017-04-28 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989626#comment-15989626
 ] 

Manoj Govindassamy commented on HDFS-11718:
---

[~andrew.wang], I believe this fix needs to be committed to trunk also. Please 
confirm. 

> DFSStripedOutputStream hsync/hflush should not throw 
> UnsupportedOperationException
> --
>
> Key: HDFS-11718
> URL: https://issues.apache.org/jira/browse/HDFS-11718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11644.01.patch, HDFS-11718.01.patch
>
>
> This is a clone bug of HDFS-11644 to provide a quick fix while a proper fix 
> is being discussed and developed. The quick fix here would be to just turn 
> {{DFSStripedOutputStream#hsync()/hflush()}} as a no-op instead of throwing 
> UnsupportedOperationException. 
> {code}
>   @Override
>   public void hflush() {
> throw new UnsupportedOperationException();
>   }
>   @Override
>   public void hsync() {
> throw new UnsupportedOperationException();
>   }
> {code}
> For more details please refer to the comments in HDFS-11644.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11718) DFSStripedOutputStream hsync/hflush should not throw UnsupportedOperationException

2017-04-28 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-11718:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

+1 for patch 2. Committed to trunk.

Thanks for the contribution, [~manojg]. 

> DFSStripedOutputStream hsync/hflush should not throw 
> UnsupportedOperationException
> --
>
> Key: HDFS-11718
> URL: https://issues.apache.org/jira/browse/HDFS-11718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11644.01.patch, HDFS-11718.01.patch
>
>
> This is a clone bug of HDFS-11644 to provide a quick fix while a proper fix 
> is being discussed and developed. The quick fix here would be to just turn 
> {{DFSStripedOutputStream#hsync()/hflush()}} as a no-op instead of throwing 
> UnsupportedOperationException. 
> {code}
>   @Override
>   public void hflush() {
> throw new UnsupportedOperationException();
>   }
>   @Override
>   public void hsync() {
> throw new UnsupportedOperationException();
>   }
> {code}
> For more details please refer to the comments in HDFS-11644.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11723) Should log a warning message when users try to make certain directories encryption zone

2017-04-28 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11723:
--
Description: 
If a user tries to make the entire /user directory an encryption zone, and if 
trash is enabled, there will be a problem when the user tries to delete 
unencrypted files outside /user. The problem will happen even with the fix in 
HDFS-8831. So we should log a WARN message when users try to make such 
directories encryption zones. Such directories include:
{{/user}}, 
{{/user/$user}} 
{{/user/$user/.Trash}}

Thanks [~xyao] for the offline discussion.


  was:
If a user tries to make the entire /user directory an encryption zone, and if 
trash is enabled, there will be a problem when the user tries to delete an 
unencrypted file from /user to the trash directory. The problem will happen even 
with the fix in HDFS-8831. So we should log a WARN message when users try to 
make such directories encryption zones. Such directories include:
{{/user}}, 
{{/user/$user}} 
{{/user/$user/.Trash}}

Thanks [~xyao] for the offline discussion.



> Should log a warning message when users try to make certain directories 
> encryption zone
> ---
>
> Key: HDFS-11723
> URL: https://issues.apache.org/jira/browse/HDFS-11723
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, hdfs-client
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
>
> If a user tries to make the entire /user directory an encryption zone, and if 
> trash is enabled, there will be a problem when the user tries to delete 
> unencrypted files outside /user. The problem will happen even with the fix in 
> HDFS-8831. So we should log a WARN message when users try to make such 
> directories encryption zones. Such directories include:
> {{/user}}, 
> {{/user/$user}} 
> {{/user/$user/.Trash}}
> Thanks [~xyao] for the offline discussion.
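
As an illustration of the kind of check being proposed (the class name, helper 
name, and exact path patterns below are assumptions, not the committed change):

{code}
// Hypothetical sketch: decide whether creating an encryption zone on this
// path deserves a WARN, because trash-based deletes could break (HDFS-8831).
import java.util.regex.Pattern;

class EncryptionZoneWarningSketch {
  // Matches /user, /user/$user and /user/$user/.Trash
  private static final Pattern TRASH_SENSITIVE =
      Pattern.compile("^/user(/[^/]+(/\\.Trash)?)?$");

  static boolean shouldWarn(String zonePath) {
    return TRASH_SENSITIVE.matcher(zonePath).matches();
  }
}
{code}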



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11722) Change Datanode file IO profiling sampling to percentage

2017-04-28 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11722:
--
Status: Patch Available  (was: Open)

> Change Datanode file IO profiling sampling to percentage
> 
>
> Key: HDFS-11722
> URL: https://issues.apache.org/jira/browse/HDFS-11722
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11722.001.patch
>
>
> Datanode disk IO profiling sampling is controlled by the setting 
> _dfs.datanode.fileio.profiling.sampling.fraction_. Instead of a fraction, we 
> can use a percentage value to make it easier to set.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11722) Change Datanode file IO profiling sampling to percentage

2017-04-28 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11722:
--
Attachment: HDFS-11722.001.patch

> Change Datanode file IO profiling sampling to percentage
> 
>
> Key: HDFS-11722
> URL: https://issues.apache.org/jira/browse/HDFS-11722
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11722.001.patch
>
>
> Datanode disk IO profiling sampling is controlled by the setting 
> _dfs.datanode.fileio.profiling.sampling.fraction_. Instead of a fraction, we 
> can use a percentage value to make it easier to set.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11718) DFSStripedOutputStream hsync/hflush should not throw UnsupportedOperationException

2017-04-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989542#comment-15989542
 ] 

Hadoop QA commented on HDFS-11718:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
50s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
23s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
38s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestMetadataVersionOutput |
|   | hadoop.hdfs.server.namenode.TestStartup |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | HDFS-11718 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865622/HDFS-11718.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 968e0fc0aa4d 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2e52789 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19237/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
 |
| findbugs |

[jira] [Updated] (HDFS-11723) Should log a warning message when users try to make certain directories encryption zone

2017-04-28 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11723:
--
Priority: Minor  (was: Major)

> Should log a warning message when users try to make certain directories 
> encryption zone
> ---
>
> Key: HDFS-11723
> URL: https://issues.apache.org/jira/browse/HDFS-11723
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, hdfs-client
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
>
> If a user tries to make the entire /user directory an encryption zone, and if 
> trash is enabled, there will be a problem when the user tries to delete an 
> unencrypted file from /user to the trash directory. The problem will happen even 
> with the fix in HDFS-8831. So we should log a WARN message when users try to 
> make such directories encryption zones. Such directories include:
> {{/user}}, 
> {{/user/$user}} 
> {{/user/$user/.Trash}}
> Thanks [~xyao] for the offline discussion.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11723) Should log a warning message when users try to make certain directories encryption zone

2017-04-28 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11723:
-

 Summary: Should log a warning message when users try to make 
certain directories encryption zone
 Key: HDFS-11723
 URL: https://issues.apache.org/jira/browse/HDFS-11723
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: encryption, hdfs-client
Reporter: Chen Liang
Assignee: Chen Liang


If a user tries to make the entire /user directory an encryption zone, and if 
trash is enabled, there will be a problem when the user tries to delete an 
unencrypted file from /user to the trash directory. The problem will happen even 
with the fix in HDFS-8831. So we should log a WARN message when users try to 
make such directories encryption zones. Such directories include:
{{/user}}, 
{{/user/$user}} 
{{/user/$user/.Trash}}

Thanks [~xyao] for the offline discussion.




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11722) Change Datanode file IO profiling sampling to percentage

2017-04-28 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-11722:
-

 Summary: Change Datanode file IO profiling sampling to percentage
 Key: HDFS-11722
 URL: https://issues.apache.org/jira/browse/HDFS-11722
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


Datanode disk IO profiling sampling is controlled by the setting 
_dfs.datanode.fileio.profiling.sampling.fraction_. Instead of a fraction, we can 
use a percentage value to make it easier to set.
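
For illustration, a minimal sketch of the percentage-based form (the new key 
name below is an assumption, not the final configuration):

{code}
// Hypothetical sketch: read a percentage (0-100) and derive the sampling
// probability. The key name "...sampling.percentage" is assumed here.
import java.util.concurrent.ThreadLocalRandom;
import org.apache.hadoop.conf.Configuration;

class ProfilingSamplerSketch {
  static boolean shouldSample(Configuration conf) {
    int pct = conf.getInt("dfs.datanode.fileio.profiling.sampling.percentage", 0);
    double fraction = Math.min(100, Math.max(0, pct)) / 100.0; // clamp bad input
    return ThreadLocalRandom.current().nextDouble() < fraction;
  }
}
{code}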



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8498) Blocks can be committed with wrong size

2017-04-28 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989533#comment-15989533
 ] 

Arpit Agarwal commented on HDFS-8498:
-

Hi [~zhz], would you consider resolving this jira and filing a separate one for 
the branch-2.7 commit?

> Blocks can be committed with wrong size
> ---
>
> Key: HDFS-8498
> URL: https://issues.apache.org/jira/browse/HDFS-8498
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.5.0
>Reporter: Daryn Sharp
>Assignee: Jing Zhao
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1
>
> Attachments: HDFS-8498.000.patch, HDFS-8498.001.patch, 
> HDFS-8498.branch-2.001.patch, HDFS-8498.branch-2.7.001.patch, 
> HDFS-8498.branch-2.patch
>
>
> When an IBR for a UC block arrives, the NN updates the expected location's 
> block and replica state _only_ if it's on an unexpected storage for an 
> expected DN.  If it's for an expected storage, only the genstamp is updated.  
> When the block is committed, and the expected locations are verified, only 
> the genstamp is checked.  The size is not checked but it wasn't updated in 
> the expected locations anyway.
> A faulty client may misreport the size when committing the block.  The block 
> is effectively corrupted.  If the NN issues replications, the received IBR is 
> considered corrupt, so the NN invalidates the block and immediately issues 
> another replication.  The NN eventually realizes all the original replicas are 
> corrupt after full BRs are received from the original DNs.
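
To make the gap concrete, this is the kind of commit-time check the description 
implies is missing (names only approximate the NameNode internals; this is not 
the committed patch):

{code}
// Hypothetical sketch: verify the reported replica length as well as the
// genstamp when committing a block, instead of checking the genstamp alone.
boolean isCommittable(Block reported, Block stored) {
  return reported.getGenerationStamp() == stored.getGenerationStamp()
      && reported.getNumBytes() == stored.getNumBytes(); // the missing size check
}
{code}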



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11721) When Standby NameNode is paused using kill -SIGSTOP clients hang rather than moving to the active.

2017-04-28 Thread Thomas Scott (JIRA)
Thomas Scott created HDFS-11721:
---

 Summary: When Standby NameNode is paused using kill -SIGSTOP 
clients hang rather than moving to the active.
 Key: HDFS-11721
 URL: https://issues.apache.org/jira/browse/HDFS-11721
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Thomas Scott
Priority: Minor


Using kill -SIGSTOP on the standby NameNode causes clients to hang rather than 
failing over to the active. To reproduce:

1. Run kill -SIGSTOP 
2. Run hdfs dfs -ls / (this may not show the issue, as the client may try the 
active NameNode first. If this happens, fail over the NameNodes and the issue 
will occur)
3. To force the issue run hdfs dfs -ls hdfs://:8020/ 

This causes the client to hang with no timeout until the standby NameNode is 
resumed.
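
As a client-side mitigation sketch (an assumption about useful tuning, not the 
fix for this bug), bounding the RPC wait keeps a paused NameNode from hanging 
the call forever:

{code}
// Sketch: cap each RPC so a SIGSTOP'ed NameNode fails fast instead of hanging.
// The timeout value is illustrative; failover/retry settings still apply.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RpcTimeoutSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setInt("ipc.client.rpc-timeout.ms", 60000); // fail the call after 60s
    FileSystem fs = FileSystem.get(conf);
    System.out.println(fs.listStatus(new Path("/")).length);
  }
}
{code}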



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11661) GetContentSummary uses excessive amounts of memory

2017-04-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989520#comment-15989520
 ] 

Hadoop QA commented on HDFS-11661:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
54s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 120 unchanged - 0 fixed = 121 total (was 120) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.server.namenode.TestStartup |
|   | hadoop.hdfs.server.namenode.TestMetadataVersionOutput |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | HDFS-11661 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865620/HDFs-11661.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f1e485d1ad22 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / fdf5192 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19236/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19236/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19236/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Updated] (HDFS-11644) DFSStripedOutputStream should not implement Syncable

2017-04-28 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11644:
--
Priority: Major  (was: Blocker)

> DFSStripedOutputStream should not implement Syncable
> 
>
> Key: HDFS-11644
> URL: https://issues.apache.org/jira/browse/HDFS-11644
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11644.01.patch
>
>
> FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, 
> calls hsync. Otherwise, it just calls flush. This is used, for instance, by 
> YARN's FileSystemTimelineWriter.
> DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. 
> However, DFSStripedOS throws a runtime exception when the Syncable methods 
> are called.
> We should refactor the inheritance structure so DFSStripedOS does not 
> implement Syncable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11644) DFSStripedOutputStream should not implement Syncable

2017-04-28 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989508#comment-15989508
 ] 

Manoj Govindassamy commented on HDFS-11644:
---

Downgraded the priority from blocker to major as the other bug HDFS-11718 is 
taking care of the quick fix.

> DFSStripedOutputStream should not implement Syncable
> 
>
> Key: HDFS-11644
> URL: https://issues.apache.org/jira/browse/HDFS-11644
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11644.01.patch
>
>
> FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, 
> calls hsync. Otherwise, it just calls flush. This is used, for instance, by 
> YARN's FileSystemTimelineWriter.
> DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. 
> However, DFSStripedOS throws a runtime exception when the Syncable methods 
> are called.
> We should refactor the inheritance structure so DFSStripedOS does not 
> implement Syncable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9807) Add an optional StorageID to writes

2017-04-28 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9807:
-
Status: Open  (was: Patch Available)

> Add an optional StorageID to writes
> ---
>
> Key: HDFS-9807
> URL: https://issues.apache.org/jira/browse/HDFS-9807
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Chris Douglas
>Assignee: Ewan Higgs
> Attachments: HDFS-9807.001.patch, HDFS-9807.002.patch, 
> HDFS-9807.003.patch, HDFS-9807.004.patch, HDFS-9807.005.patch, 
> HDFS-9807.006.patch, HDFS-9807.007.patch, HDFS-9807.008.patch
>
>
> The {{BlockPlacementPolicy}} considers specific storages, but when the 
> replica is written the DN {{VolumeChoosingPolicy}} is unaware of any 
> preference or constraints from other policies affecting placement. This 
> limits heterogeneity to the declared storage types, which are treated as 
> fungible within the target DN. It should be possible to influence or 
> constrain the DN policy to select a particular storage.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9807) Add an optional StorageID to writes

2017-04-28 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9807:
-
Attachment: HDFS-9807.008.patch

Posting a patch which is a modification of the patch posted by [~ehiggs]. 

The test is now renamed to {{TestNamenodeStorageDirectives}} from 
{{TestDataNodeStorage}} (also moved to the 
{{org.apache.hadoop.hdfs.server.namenode}} package). Also, added a new 
{{TestBlockPlacementPolicy}} and modified the existing 
{{TestVolumeChoosingPolicy}} to ensure that the storage id passed to the 
{{VolumeChoosingPolicy}} is exactly the same as chosen by the 
{{BlockPlacementPolicy}}. Also fixed some of the checkstyle issues from the 
earlier patch.

[~ehiggs] please take a look to see if this makes sense. I also removed the 
{{verifyFileReplicasOnOnlyThreeStorageID}} as it was not being called, and 
changed the visibility of some of the functions in the test to {{private}}.
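
For context, a minimal sketch of the shape of the API under discussion (the 
extra storage-id parameter is the point of this JIRA; the exact signature here 
is an assumption):

{code}
// Hypothetical sketch of a storage-id-aware volume choice. The real interface
// is org.apache.hadoop.hdfs.server.datanode.fsdataset.VolumeChoosingPolicy;
// the storageId hint sketched below is what this JIRA proposes to thread through.
import java.io.IOException;
import java.util.List;

interface StorageAwareVolumeChoosingPolicy<V> {
  V chooseVolume(List<V> candidates, long replicaSize, String storageId)
      throws IOException;
}
{code}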


> Add an optional StorageID to writes
> ---
>
> Key: HDFS-9807
> URL: https://issues.apache.org/jira/browse/HDFS-9807
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Chris Douglas
>Assignee: Ewan Higgs
> Attachments: HDFS-9807.001.patch, HDFS-9807.002.patch, 
> HDFS-9807.003.patch, HDFS-9807.004.patch, HDFS-9807.005.patch, 
> HDFS-9807.006.patch, HDFS-9807.007.patch, HDFS-9807.008.patch
>
>
> The {{BlockPlacementPolicy}} considers specific storages, but when the 
> replica is written the DN {{VolumeChoosingPolicy}} is unaware of any 
> preference or constraints from other policies affecting placement. This 
> limits heterogeneity to the declared storage types, which are treated as 
> fungible within the target DN. It should be possible to influence or 
> constrain the DN policy to select a particular storage.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9807) Add an optional StorageID to writes

2017-04-28 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9807:
-
Status: Patch Available  (was: Open)

> Add an optional StorageID to writes
> ---
>
> Key: HDFS-9807
> URL: https://issues.apache.org/jira/browse/HDFS-9807
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Chris Douglas
>Assignee: Ewan Higgs
> Attachments: HDFS-9807.001.patch, HDFS-9807.002.patch, 
> HDFS-9807.003.patch, HDFS-9807.004.patch, HDFS-9807.005.patch, 
> HDFS-9807.006.patch, HDFS-9807.007.patch, HDFS-9807.008.patch
>
>
> The {{BlockPlacementPolicy}} considers specific storages, but when the 
> replica is written the DN {{VolumeChoosingPolicy}} is unaware of any 
> preference or constraints from other policies affecting placement. This 
> limits heterogeneity to the declared storage types, which are treated as 
> fungible within the target DN. It should be possible to influence or 
> constrain the DN policy to select a particular storage.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11718) DFSStripedOutputStream hsync/hflush should not throw UnsupportedOperationException

2017-04-28 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11718:
--
Description: 
This is a clone bug of HDFS-11644 to provide a quick fix while a proper fix is 
being discussed and developed. The quick fix here would be to just turn 
{{DFSStripedOutputStream#hsync()/hflush()}} into a no-op instead of throwing 
UnsupportedOperationException. 

{code}
  @Override
  public void hflush() {
throw new UnsupportedOperationException();
  }

  @Override
  public void hsync() {
throw new UnsupportedOperationException();
  }
{code}

For more details please refer to the comments in HDFS-11644.

  was:
This is a clone bug of HDFS-11644 to provide a quick fix while a proper fix is 
being discussed and developed. The quick fix here would be to just turn 
{{DFSStripedOutputStream#hsync()/hflush()}} as a no-op instead of throwing 
UnsupportedOperationException. 

  @Override
  public void hflush() {
throw new UnsupportedOperationException();
  }

  @Override
  public void hsync() {
throw new UnsupportedOperationException();
  }

For more details please refer to the comments in HDFS-11644.


> DFSStripedOutputStream hsync/hflush should not throw 
> UnsupportedOperationException
> --
>
> Key: HDFS-11718
> URL: https://issues.apache.org/jira/browse/HDFS-11718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11644.01.patch, HDFS-11718.01.patch
>
>
> This is a clone bug of HDFS-11644 to provide a quick fix while a proper fix 
> is being discussed and developed. The quick fix here would be to just turn 
> {{DFSStripedOutputStream#hsync()/hflush()}} as a no-op instead of throwing 
> UnsupportedOperationException. 
> {code}
>   @Override
>   public void hflush() {
> throw new UnsupportedOperationException();
>   }
>   @Override
>   public void hsync() {
> throw new UnsupportedOperationException();
>   }
> {code}
> For more details please refer to the comments in HDFS-11644.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11719) Arrays.fill() wrong index in BlockSender.readChecksum() exception handling

2017-04-28 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989482#comment-15989482
 ] 

Mingliang Liu commented on HDFS-11719:
--

That sounds like a good fix. Can you provide a patch for this?

> Arrays.fill() wrong index in BlockSender.readChecksum() exception handling
> --
>
> Key: HDFS-11719
> URL: https://issues.apache.org/jira/browse/HDFS-11719
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tao Zhang
>Assignee: Tao Zhang
>
> In the BlockSender.readChecksum() exception handling part:
> Arrays.fill(buf, checksumOffset, checksumLen, (byte) 0);
> Actually the parameters should be: Arrays.fill(buf, fromIndex, toIndex, 
> value);
> So it should be changed to:
> Arrays.fill(buf, checksumOffset, checksumOffset + checksumLen, (byte) 0);



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11718) DFSStripedOutputStream hsync/hflush should not throw UnsupportedOperationException

2017-04-28 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11718:
--
Attachment: HDFS-11718.01.patch

Thanks for the review, [~eddyxu]. Attached v02 patch to make use of try-with-resources.

> DFSStripedOutputStream hsync/hflush should not throw 
> UnsupportedOperationException
> --
>
> Key: HDFS-11718
> URL: https://issues.apache.org/jira/browse/HDFS-11718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11644.01.patch, HDFS-11718.01.patch
>
>
> This is a clone bug of HDFS-11644 to provide a quick fix while a proper fix 
> is being discussed and developed. The quick fix here would be to just turn 
> {{DFSStripedOutputStream#hsync()/hflush()}} as a no-op instead of throwing 
> UnsupportedOperationException. 
>   @Override
>   public void hflush() {
> throw new UnsupportedOperationException();
>   }
>   @Override
>   public void hsync() {
> throw new UnsupportedOperationException();
>   }
> For more details please refer to the comments in HDFS-11644.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11661) GetContentSummary uses excessive amounts of memory

2017-04-28 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-11661:
---
Attachment: HDFs-11661.002.patch

Here's a rev 002 patch. Added a few asserts and checks to make sure it doesn't 
break.

The patch passed all tests on CDH5.11 code, so I think it's ready for precommit 
check on trunk.

> GetContentSummary uses excessive amounts of memory
> --
>
> Key: HDFS-11661
> URL: https://issues.apache.org/jira/browse/HDFS-11661
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Nathan Roberts
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Attachments: HDFS-11661.001.patch, HDFs-11661.002.patch, Heap 
> growth.png
>
>
> ContentSummaryComputationContext::nodeIncluded() is being used to keep track 
> of all INodes visited during the current content summary calculation. This 
> can be all of the INodes in the filesystem, making for a VERY large hash 
> table. This simply won't work on large filesystems. 
> We noticed this after an upgrade: a namenode with ~100 million filesystem 
> objects was spending significantly more time in GC. Fortunately this system 
> had some memory breathing room; other clusters we have will not run with this 
> additional demand on memory.
> This was added as part of HDFS-10797 as a way of keeping track of INodes that 
> have already been accounted for - to avoid double counting.
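
To make the memory issue concrete, this is the shape of the pattern being 
described (an illustrative reconstruction, not the actual code):

{code}
// Illustrative sketch: remembering every visited inode id for the whole
// computation lets the set grow to the size of the namespace.
import java.util.HashSet;
import java.util.Set;

class VisitedTrackerSketch {
  private final Set<Long> included = new HashSet<>();

  /** Returns true if the inode was already counted; otherwise remembers it. */
  boolean nodeIncluded(long inodeId) {
    return !included.add(inodeId); // ~100M entries on a 100M-object namespace
  }
}
{code}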



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11661) GetContentSummary uses excessive amounts of memory

2017-04-28 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-11661:
---
Status: Patch Available  (was: Open)

> GetContentSummary uses excessive amounts of memory
> --
>
> Key: HDFS-11661
> URL: https://issues.apache.org/jira/browse/HDFS-11661
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0-alpha2, 2.8.0
>Reporter: Nathan Roberts
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Attachments: HDFS-11661.001.patch, HDFs-11661.002.patch, Heap 
> growth.png
>
>
> ContentSummaryComputationContext::nodeIncluded() is being used to keep track 
> of all INodes visited during the current content summary calculation. This 
> can be all of the INodes in the filesystem, making for a VERY large hash 
> table. This simply won't work on large filesystems. 
> We noticed this after an upgrade: a namenode with ~100 million filesystem 
> objects was spending significantly more time in GC. Fortunately this system 
> had some memory breathing room; other clusters we have will not run with this 
> additional demand on memory.
> This was added as part of HDFS-10797 as a way of keeping track of INodes that 
> have already been accounted for - to avoid double counting.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10459) getTurnOffTip computes needed block incorrectly for threshold < 1 in b2.7

2017-04-28 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HDFS-10459:
---
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Not a critical fix for 2.7, so closing as won't fix.

As for trunk, after speaking offline with [~daryn] and [~kihwal], it looks like 
truncating (i.e. rounding down) is the easier approach here, so we'll just 
leave this as is and not change anything. The off-by-one error is already fixed 
there. 

> getTurnOffTip computes needed block incorrectly for threshold < 1 in b2.7
> -
>
> Key: HDFS-10459
> URL: https://issues.apache.org/jira/browse/HDFS-10459
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-10459.001.patch, HDFS-10459.002.patch, 
> HDFS-10459.003.patch, HDFS-10459-b2.7.002.patch, HDFS-10459-b2.7.003.patch
>
>
> GetTurnOffTip overstates the number of blocks necessary to come out of safe 
> mode by 1 due to an arbitrary '+1' in the code. 
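
For illustration, the arithmetic at issue (class, variable, and method names 
are hypothetical):

{code}
// Hypothetical sketch of the off-by-one: with threshold < 1 the required count
// is truncated (rounded down), so the extra "+1" overstates what is needed.
class SafeModeTipSketch {
  static long blocksNeeded(double threshold, long totalBlocks, long safeBlocks) {
    long required = (long) (threshold * totalBlocks); // truncate, as trunk does
    return Math.max(0, required - safeBlocks);        // no arbitrary "+1"
  }
}
{code}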



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11529) Add libHDFS API to return last exception

2017-04-28 Thread Sailesh Mukil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989365#comment-15989365
 ] 

Sailesh Mukil commented on HDFS-11529:
--

I will post a fix for HDFS-11710 by tonight.

> Add libHDFS API to return last exception
> 
>
> Key: HDFS-11529
> URL: https://issues.apache.org/jira/browse/HDFS-11529
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Mukil
>Assignee: Sailesh Mukil
>Priority: Critical
>  Labels: errorhandling, libhdfs
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11529.000.patch, HDFS-11529.001.patch, 
> HDFS-11529.002.patch, HDFS-11529.003.patch, HDFS-11529.004.patch, 
> HDFS-11529.005.patch, HDFS-11529.006.patch
>
>
> libHDFS uses a table to compare exceptions against and returns a 
> corresponding error code to the application in case of an error.
> However, this table is populated manually and is often forgotten about 
> when new exceptions are added.
> This causes libHDFS to return EINTERNAL (or Unknown Error(255)) whenever 
> these exceptions are hit. These are some examples of exceptions that have 
> been observed on an Error(255):
> org.apache.hadoop.ipc.StandbyException (Operation category WRITE is not 
> supported in state standby)
> java.io.EOFException: Cannot seek after EOF
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> It is of course not possible to have an error code for each and every type of 
> exception, so one suggestion of how this can be addressed is by having a call 
> such as hdfsGetLastException() that would return the last exception that a 
> libHDFS thread encountered. This way, an application may choose to call 
> hdfsGetLastException() if it receives EINTERNAL.
> We can make use of the Thread Local Storage to store this information. Also, 
> this makes sure that the current functionality is preserved.
> This is a follow up from HDFS-4997.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11718) DFSStripedOutputStream hsync/hflush should not throw UnsupportedOperationException

2017-04-28 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989307#comment-15989307
 ] 

Lei (Eddy) Xu commented on HDFS-11718:
--

Thanks for the patch, [~manojg].  It LGTM. +1 pending Jenkins and after 
addressing these small nits:

* Can we use {{try-with-resources}} for the {{FSDataOutputStream}} used in the 
test?
* Please add comments to the test as a reference, since this JIRA is a hot fix. 

Thanks!


> DFSStripedOutputStream hsync/hflush should not throw 
> UnsupportedOperationException
> --
>
> Key: HDFS-11718
> URL: https://issues.apache.org/jira/browse/HDFS-11718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11644.01.patch
>
>
> This is a clone bug of HDFS-11644 to provide a quick fix while a proper fix 
> is being discussed and developed. The quick fix here would be to just turn 
> {{DFSStripedOutputStream#hsync()/hflush()}} as a no-op instead of throwing 
> UnsupportedOperationException. 
>   @Override
>   public void hflush() {
> throw new UnsupportedOperationException();
>   }
>   @Override
>   public void hsync() {
> throw new UnsupportedOperationException();
>   }
> For more details please refer to the comments in HDFS-11644.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11720) LeaseManager#getINodeWithLeases() should support skipping leases of deleted files with snapshot feature

2017-04-28 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-11720:
-

 Summary: LeaseManager#getINodeWithLeases() should support skipping 
leases of deleted files with snapshot feature
 Key: HDFS-11720
 URL: https://issues.apache.org/jira/browse/HDFS-11720
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0-alpha1
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy


{{LeaseManager#getINodeWithLeases()}} currently returns a set of INodesInPath 
for all the leases in the system. But these leases could also belong to a file 
with the snapshot feature that was just deleted and not yet purged. It would be 
better to have a version of {{LeaseManager#getINodeWithLeases()}} that returns 
the IIP set only for non-deleted files, so that callers like createSnapshot, 
which want to look at open files only, don't trip on the deleted files.
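
A minimal sketch of the filtered variant being proposed (the signature and the 
helpers are assumptions approximating LeaseManager internals):

{code}
// Hypothetical sketch: return INodesInPath only for leases whose files are not
// "deleted but retained in a snapshot". Helper names below are assumed.
Set<INodesInPath> getINodeWithLeases(boolean skipDeletedFiles) {
  Set<INodesInPath> result = new HashSet<>();
  for (Lease lease : getLeases()) {               // assumed accessor
    INodesInPath iip = resolveLeasePath(lease);   // assumed helper
    if (skipDeletedFiles && isDeletedWithSnapshot(iip.getLastINode())) {
      continue;                                   // assumed helper
    }
    result.add(iip);
  }
  return result;
}
{code}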



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11718) DFSStripedOutputStream hsync/hflush should not throw UnsupportedOperationException

2017-04-28 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11718:
--
Attachment: HDFS-11644.01.patch

Attaching v01 patch. [~eddyxu], can you please take a look at the patch?

> DFSStripedOutputStream hsync/hflush should not throw 
> UnsupportedOperationException
> --
>
> Key: HDFS-11718
> URL: https://issues.apache.org/jira/browse/HDFS-11718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11644.01.patch
>
>
> This is a clone bug of HDFS-11644 to provide a quick fix while a proper fix 
> is being discussed and developed. The quick fix here would be to just turn 
> {{DFSStripedOutputStream#hsync()/hflush()}} as a no-op instead of throwing 
> UnsupportedOperationException. 
>   @Override
>   public void hflush() {
> throw new UnsupportedOperationException();
>   }
>   @Override
>   public void hsync() {
> throw new UnsupportedOperationException();
>   }
> For more details please refer to the comments in HDFS-11644.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11718) DFSStripedOutputStream hsync/hflush should not throw UnsupportedOperationException

2017-04-28 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11718:
--
Status: Patch Available  (was: Open)

> DFSStripedOutputStream hsync/hflush should not throw 
> UnsupportedOperationException
> --
>
> Key: HDFS-11718
> URL: https://issues.apache.org/jira/browse/HDFS-11718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11644.01.patch
>
>
> This is a clone bug of HDFS-11644 to provide a quick fix while a proper fix 
> is being discussed and developed. The quick fix here would be to just turn 
> {{DFSStripedOutputStream#hsync()/hflush()}} as a no-op instead of throwing 
> UnsupportedOperationException. 
>   @Override
>   public void hflush() {
> throw new UnsupportedOperationException();
>   }
>   @Override
>   public void hsync() {
> throw new UnsupportedOperationException();
>   }
> For more details please refer to the comments in HDFS-11644.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11718) DFSStripedOutputStream hsync/hflush should not throw UnsupportedOperationException

2017-04-28 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11718:
--
Description: 
This is a clone bug of HDFS-11644 to provide a quick fix while a proper fix is 
being discussed and developed. The quick fix here would be to just turn 
{{DFSStripedOutputStream#hsync()/hflush()}} into a no-op instead of throwing 
UnsupportedOperationException. 

  @Override
  public void hflush() {
throw new UnsupportedOperationException();
  }

  @Override
  public void hsync() {
throw new UnsupportedOperationException();
  }

For more details please refer to the comments in HDFS-11644.

  was:
FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, 
calls hsync. Otherwise, it just calls flush. This is used, for instance, by 
YARN's FileSystemTimelineWriter.

DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. 
However, DFSStripedOS throws a runtime exception when the Syncable methods are 
called.

We should refactor the inheritance structure so DFSStripedOS does not implement 
Syncable.


> DFSStripedOutputStream hsync/hflush should not throw 
> UnsupportedOperationException
> --
>
> Key: HDFS-11718
> URL: https://issues.apache.org/jira/browse/HDFS-11718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
>
> This is a clone bug of HDFS-11644 to provide a quick fix while a proper fix 
> is being discussed and developed. The quick fix here would be to just turn 
> {{DFSStripedOutputStream#hsync()/hflush()}} as a no-op instead of throwing 
> UnsupportedOperationException. 
>   @Override
>   public void hflush() {
> throw new UnsupportedOperationException();
>   }
>   @Override
>   public void hsync() {
> throw new UnsupportedOperationException();
>   }
> For more details please refer to the comments in HDFS-11644.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11719) Arrays.fill() wrong index in BlockSender.readChecksum() exception handling

2017-04-28 Thread Tao Zhang (JIRA)
Tao Zhang created HDFS-11719:


 Summary: Arrays.fill() wrong index in BlockSender.readChecksum() 
exception handling
 Key: HDFS-11719
 URL: https://issues.apache.org/jira/browse/HDFS-11719
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tao Zhang
Assignee: Tao Zhang


In the BlockSender.readChecksum() exception handling part:
Arrays.fill(buf, checksumOffset, checksumLen, (byte) 0);

Actually the parameters should be: Arrays.fill(buf, fromIndex, toIndex, 
value);
So it should be changed to:
Arrays.fill(buf, checksumOffset, checksumOffset + checksumLen, (byte) 0);
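
To make the semantics concrete, a small self-contained demonstration of the 
corrected form:

{code}
import java.util.Arrays;

public class FillDemo {
  public static void main(String[] args) {
    byte[] buf = new byte[8];
    Arrays.fill(buf, (byte) 1);
    int checksumOffset = 2, checksumLen = 3;
    // Arrays.fill(array, fromIndex, toIndex, value): toIndex is exclusive, so
    // zeroing checksumLen bytes at checksumOffset needs offset + len, not len.
    Arrays.fill(buf, checksumOffset, checksumOffset + checksumLen, (byte) 0);
    System.out.println(Arrays.toString(buf)); // [1, 1, 0, 0, 0, 1, 1, 1]
  }
}
{code}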



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11718) DFSStripedOutputStream hsync/hflush should not throw UnsupportedOperationException

2017-04-28 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-11718:
-

 Summary: DFSStripedOutputStream hsync/hflush should not throw 
UnsupportedOperationException
 Key: HDFS-11718
 URL: https://issues.apache.org/jira/browse/HDFS-11718
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0-alpha1
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy
Priority: Blocker


FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, 
calls hsync. Otherwise, it just calls flush. This is used, for instance, by 
YARN's FileSystemTimelineWriter.

DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. 
However, DFSStripedOS throws a runtime exception when the Syncable methods are 
called.

We should refactor the inheritance structure so DFSStripedOS does not implement 
Syncable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11717) Add unit test for HDFS-11709

2017-04-28 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-11717:
---
Attachment: HDFS-11717.000.patch

Yes, very wrong patch. Sorry about that. Thanks for catching, [~shv]. Uploaded 
the correct one.

> Add unit test for HDFS-11709
> 
>
> Key: HDFS-11717
> URL: https://issues.apache.org/jira/browse/HDFS-11717
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ha, namenode
>Affects Versions: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-11717.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11717) Add unit test for HDFS-11709

2017-04-28 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-11717:
---
Attachment: (was: HDFS-11717.000.patch)

> Add unit test for HDFS-11709
> 
>
> Key: HDFS-11717
> URL: https://issues.apache.org/jira/browse/HDFS-11717
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ha, namenode
>Affects Versions: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11717) Add unit test for HDFS-11709

2017-04-28 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989256#comment-15989256
 ] 

Konstantin Shvachko commented on HDFS-11717:


Erik, could you please explain what this has to do with EC policy? Wrong patch?
I thought you would delete the oiv dir or set restrictive permissions on it.

> Add unit test for HDFS-11709
> 
>
> Key: HDFS-11717
> URL: https://issues.apache.org/jira/browse/HDFS-11717
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ha, namenode
>Affects Versions: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-11717.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11714) Newly added NN storage directory won't get initialized and cause space exhaustion

2017-04-28 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-11714:
--
Attachment: HDFS-11714.v2.trunk.patch
HDFS-11714.v2.branch-2.patch

Attaching updated patches. Everything is confined in FSImage as you suggested. 
I think it is safe.  The branch-2 patch only differs slightly in the new test.

> Newly added NN storage directory won't get initialized and cause space 
> exhaustion
> -
>
> Key: HDFS-11714
> URL: https://issues.apache.org/jira/browse/HDFS-11714
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-11714.trunk.patch, HDFS-11714.v2.branch-2.patch, 
> HDFS-11714.v2.trunk.patch
>
>
> When an empty namenode storage directory is detected on normal NN startup, it 
> may not be fully initialized. The new directory is still part of "in-service" 
> NNStorage, and when a checkpoint image is uploaded, a copy will also be written 
> there.  However, the retention manager won't be able to purge old files since 
> it is lacking a VERSION file.  This causes fsimages to pile up in the 
> directory.  With a big namespace, the disk will be filled on the order of 
> days or weeks.
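
For illustration, the shape of the initialization the description calls for (a 
sketch; the method names approximate NNStorage and this is not the committed 
patch):

{code}
// Hypothetical sketch: on startup, format any in-service image directory that
// is missing its VERSION file, so the retention manager can purge old fsimages.
for (Storage.StorageDirectory sd : storage.dirIterable(NameNodeDirType.IMAGE)) {
  if (!sd.getVersionFile().exists()) {
    storage.format(sd); // writes VERSION for the empty directory (assumed API)
  }
}
{code}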



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11713) Use MoveFileEx to allow renaming a file when the destination exists

2017-04-28 Thread Lukas Majercak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989242#comment-15989242
 ] 

Lukas Majercak commented on HDFS-11713:
---

None of the findbugs/unit test warnings seem to be related to the change.

> Use MoveFileEx to allow renaming a file when the destination exists
> ---
>
> Key: HDFS-11713
> URL: https://issues.apache.org/jira/browse/HDFS-11713
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, native, rolling upgrades
>Affects Versions: 2.7.1, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>  Labels: windows
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11713.001.patch
>
>
> {{NativeIO.c#renameTo0}} currently uses the {{MoveFile}} Windows system call, 
> which fails when renaming a file to a destination that already exists.
> This makes the {{TestRollingUpgrade.testRollback}} test fail on Windows, as 
> during that execution a DataNode tries to rename a block's meta file to a 
> destination that exists.
> The proposal is to change to the {{MoveFileEx}} Windows call, passing 
> the {{MOVEFILE_REPLACE_EXISTING}} flag to force the rename.
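
For illustration only, the Java-level analogue of the proposed semantics (the 
actual fix is in the JNI C code, via MoveFileEx with 
MOVEFILE_REPLACE_EXISTING; file names are examples):

{code}
java.nio.file.Files.move(
    java.nio.file.Paths.get("blk_1234.meta.tmp"),       // example source name
    java.nio.file.Paths.get("blk_1234.meta"),           // existing destination
    java.nio.file.StandardCopyOption.REPLACE_EXISTING); // overwrite the target
{code}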



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11714) Newly added NN storage directory won't get initialized and cause space exhaustion

2017-04-28 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989212#comment-15989212
 ] 

Daryn Sharp commented on HDFS-11714:


In FSImage, please correct the inventive word: "tirggered". :)  It would be 
nice if initNewDirs were encapsulated in FSImage, since it's not something the 
servlet should need to know about, but I'm not familiar enough with the code to 
know what would break.  Up to you.

And we need the branch-2 patch.

> Newly added NN storage directory won't get initialized and cause space 
> exhaustion
> -
>
> Key: HDFS-11714
> URL: https://issues.apache.org/jira/browse/HDFS-11714
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-11714.trunk.patch
>
>
> When an empty namenode storage directory is detected on normal NN startup, it 
> may not be fully initialized. The new directory is still part of the "in-service" 
> NNStorage, and when a checkpoint image is uploaded, a copy will also be written 
> there.  However, the retention manager won't be able to purge old files since 
> the directory is lacking a VERSION file.  This causes fsimages to pile up in the 
> directory.  With a big name space, the disk will be filled on the order of 
> days or weeks.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11717) Add unit test for HDFS-11709

2017-04-28 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989173#comment-15989173
 ] 

Erik Krogen edited comment on HDFS-11717 at 4/28/17 5:16 PM:
-

Attaching patch which adds a unit test. Confirmed it fails without HDFS-11709.

[~zhz] or [~shv], care to review?


was (Author: xkrogen):
Attaching patch which adds a unit test. Confirmed it fails without HDFS-11709.

> Add unit test for HDFS-11709
> 
>
> Key: HDFS-11717
> URL: https://issues.apache.org/jira/browse/HDFS-11717
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ha, namenode
>Affects Versions: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-11717.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-11717) Add unit test for HDFS-11709

2017-04-28 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-11717 started by Erik Krogen.
--
> Add unit test for HDFS-11709
> 
>
> Key: HDFS-11717
> URL: https://issues.apache.org/jira/browse/HDFS-11717
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ha, namenode
>Affects Versions: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-11717.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11717) Add unit test for HDFS-11709

2017-04-28 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-11717:
---
Attachment: HDFS-11717.000.patch

Attaching patch which adds a unit test. Confirmed it fails without HDFS-11709.

> Add unit test for HDFS-11709
> 
>
> Key: HDFS-11717
> URL: https://issues.apache.org/jira/browse/HDFS-11717
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ha, namenode
>Affects Versions: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-11717.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11717) Add unit test for HDFS-11709

2017-04-28 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-11717:
---
Status: Patch Available  (was: In Progress)

> Add unit test for HDFS-11709
> 
>
> Key: HDFS-11717
> URL: https://issues.apache.org/jira/browse/HDFS-11717
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ha, namenode
>Affects Versions: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-11717.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11714) Newly added NN storage directory won't get initialized and cause space exhaustion

2017-04-28 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-11714:
--
Target Version/s: 2.7.4, 2.8.1  (was: 2.8.1)

> Newly added NN storage directory won't get initialized and cause space 
> exhaustion
> -
>
> Key: HDFS-11714
> URL: https://issues.apache.org/jira/browse/HDFS-11714
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-11714.trunk.patch
>
>
> When an empty namenode storage directory is detected on normal NN startup, it 
> may not be fully initialized. The new directory is still part of the "in-service" 
> NNStorage, and when a checkpoint image is uploaded, a copy will also be written 
> there.  However, the retention manager won't be able to purge old files since 
> the directory is lacking a VERSION file.  This causes fsimages to pile up in the 
> directory.  With a big name space, the disk will be filled on the order of 
> days or weeks.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11609) Some blocks can be permanently lost if nodes are decommissioned while dead

2017-04-28 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-11609:
--
Attachment: HDFS-11609_v3.trunk.patch
HDFS-11609_v3.branch-2.patch
HDFS-11609_v3.branch-2.7.patch

Attaching patches with the comment corrected.  The code changes are identical 
to v2.

> Some blocks can be permanently lost if nodes are decommissioned while dead
> --
>
> Key: HDFS-11609
> URL: https://issues.apache.org/jira/browse/HDFS-11609
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Blocker
> Attachments: HDFS-11609.branch-2.patch, HDFS-11609.trunk.patch, 
> HDFS-11609_v2.branch-2.patch, HDFS-11609_v2.trunk.patch, 
> HDFS-11609_v3.branch-2.7.patch, HDFS-11609_v3.branch-2.patch, 
> HDFS-11609_v3.trunk.patch
>
>
> When all the nodes containing a replica of a block are decommissioned while 
> they are dead, they get decommissioned right away even if there are missing 
> blocks. This behavior was introduced by HDFS-7374.
> The problem starts when those decommissioned nodes are brought back online. 
> The namenode no longer shows missing blocks, which creates a false sense of 
> cluster health. When the decommissioned nodes are removed and reformatted, 
> the block data is permanently lost. The namenode will report missing blocks 
> after the heartbeat recheck interval (e.g. 10 minutes) from the moment the 
> last node is taken down.
> There are multiple issues in the code. As some cause different behaviors in 
> testing vs. production, it took a while to reproduce in a unit test. I 
> will present an analysis and a proposal soon.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11675) Ozone: SCM CLI: Implement delete container command

2017-04-28 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989118#comment-15989118
 ] 

Xiaoyu Yao commented on HDFS-11675:
---

Looks good to me. +1. I only have a minor question.

DeleteContainerHandler.java Line 59
Do we intend the deleting/deleted info to reach the console window as the CLI 
output? I'm not sure we always set a console appender in the CLI's log4j settings.

{code}
59  LOG.info("Deleting container : {}", containerName);
60  getScmClient().deleteContainer(pipeline, cmd.hasOption(OPT_FORCE));
61  LOG.info("Container {} deleted", containerName);
{code}
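
A hedged sketch of the console-output alternative implied by the question 
(not the patch): if CLI feedback must not depend on a log4j console appender, 
the handler could write to the process output stream directly.

{code}
System.out.println("Deleting container: " + containerName);
getScmClient().deleteContainer(pipeline, cmd.hasOption(OPT_FORCE));
System.out.println("Container " + containerName + " deleted");
{code}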

> Ozone: SCM CLI: Implement delete container command
> --
>
> Key: HDFS-11675
> URL: https://issues.apache.org/jira/browse/HDFS-11675
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: command-line
> Attachments: HDFS-11675-HDFS-7240.001.patch, 
> HDFS-11675-HDFS-7240.002.patch, HDFS-11675-HDFS-7240.003.patch, 
> HDFS-11675-HDFS-7240.004.patch
>
>
> Implement delete container
> {code}
> hdfs scm -container del  -f
> {code}
> Deletes a container if it is empty. The -f option can be used to force a 
> delete of a non-empty container. If the specified container does not exist, 
> print a clear error message.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11609) Some blocks can be permanently lost if nodes are decommissioned while dead

2017-04-28 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989115#comment-15989115
 ] 

Daryn Sharp commented on HDFS-11609:


+1 pending an update to the comment "We do not use already decommissioned nodes 
as a source" to mention that they may be used as a last resort.

> Some blocks can be permanently lost if nodes are decommissioned while dead
> --
>
> Key: HDFS-11609
> URL: https://issues.apache.org/jira/browse/HDFS-11609
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Blocker
> Attachments: HDFS-11609.branch-2.patch, HDFS-11609.trunk.patch, 
> HDFS-11609_v2.branch-2.patch, HDFS-11609_v2.trunk.patch
>
>
> When all the nodes containing a replica of a block are decommissioned while 
> they are dead, they get decommissioned right away even if there are missing 
> blocks. This behavior was introduced by HDFS-7374.
> The problem starts when those decommissioned nodes are brought back online. 
> The namenode no longer shows missing blocks, which creates a false sense of 
> cluster health. When the decommissioned nodes are removed and reformatted, 
> the block data is permanently lost. The namenode will report missing blocks 
> after the heartbeat recheck interval (e.g. 10 minutes) from the moment the 
> last node is taken down.
> There are multiple issues in the code. As some cause different behaviors in 
> testing vs. production, it took a while to reproduce in a unit test. I 
> will present an analysis and a proposal soon.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11709) StandbyCheckpointer should handle a non-existing legacyOivImageDir gracefully

2017-04-28 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989050#comment-15989050
 ] 

Erik Krogen commented on HDFS-11709:


[~shv], sure. Filed HDFS-11717.

> StandbyCheckpointer should handle a non-existing legacyOivImageDir gracefully
> --
>
> Key: HDFS-11709
> URL: https://issues.apache.org/jira/browse/HDFS-11709
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.6.1
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
>Priority: Critical
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1
>
> Attachments: HDFS-11709.000.patch
>
>
> In {{StandbyCheckpointer}}, if the legacy OIV directory is not properly 
> created, or was deleted for some reason (e.g. mis-operation), all checkpoint 
> ops will fail. Not only will the ANN not receive new fsimages, but the JNs will 
> fill up with edit log files and cause the NN to crash.
> {code}
>   // Save the legacy OIV image, if the output dir is defined.
>   String outputDir = checkpointConf.getLegacyOivImageDir();
>   if (outputDir != null && !outputDir.isEmpty()) {
> img.saveLegacyOIVImage(namesystem, outputDir, canceler);
>   }
> {code}
> It doesn't make sense to let such an unimportant part (saving OIV) abort all 
> checkpoints and cause an NN crash (and possibly lose data).
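
A hedged sketch of the graceful handling being asked for (assumed shape, not 
necessarily the committed patch):

{code}
// Treat the legacy OIV save as best-effort so it can never abort checkpointing.
String outputDir = checkpointConf.getLegacyOivImageDir();
if (outputDir != null && !outputDir.isEmpty()) {
  try {
    img.saveLegacyOIVImage(namesystem, outputDir, canceler);
  } catch (IOException ioe) {
    LOG.warn("Could not save legacy OIV image to " + outputDir, ioe);
  }
}
{code}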



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11717) Add unit test for HDFS-11709

2017-04-28 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-11717:
--

 Summary: Add unit test for HDFS-11709
 Key: HDFS-11717
 URL: https://issues.apache.org/jira/browse/HDFS-11717
 Project: Hadoop HDFS
  Issue Type: Task
  Components: ha, namenode
Affects Versions: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1
Reporter: Erik Krogen
Assignee: Erik Krogen
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9807) Add an optional StorageID to writes

2017-04-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989046#comment-15989046
 ] 

Hadoop QA commented on HDFS-9807:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
20s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
38s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-hdfs-project: The patch generated 22 new 
+ 1536 unchanged - 50 fixed = 1558 total (was 1586) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | HDFS-9807 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865553/HDFS-9807.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux cad6cd8a94cc 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cb672a4 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |

[jira] [Assigned] (HDFS-11714) Newly added NN storage directory won't get initialized and cause space exhaustion

2017-04-28 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HDFS-11714:
-

Assignee: Kihwal Lee

> Newly added NN storage directory won't get initialized and cause space 
> exhaustion
> -
>
> Key: HDFS-11714
> URL: https://issues.apache.org/jira/browse/HDFS-11714
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-11714.trunk.patch
>
>
> When an empty namenode storage directory is detected on normal NN startup, it 
> may not be fully initialized. The new directory is still part of the "in-service" 
> NNStorage, and when a checkpoint image is uploaded, a copy will also be written 
> there.  However, the retention manager won't be able to purge old files since 
> the directory is lacking a VERSION file.  This causes fsimages to pile up in the 
> directory.  With a big name space, the disk will be filled on the order of 
> days or weeks.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11675) Ozone: SCM CLI: Implement delete container command

2017-04-28 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989031#comment-15989031
 ] 

Anu Engineer edited comment on HDFS-11675 at 4/28/17 3:42 PM:
--

[~xyao] [~vagarychen] [~yuanbo] I will commit this shortly if there are no 
further comments. Just a quick summary: with this change, we will delete the 
containers on the datanode when you run "del" command, but not inside the SCM. 
So _deleting and re-creating a container will not work_ correctly for now. We 
have filed another jira HDFS-11716 to track and fix that issue.


was (Author: anu):
[~xyao] [~vagarychen] [~yuanbo] I will commit this shortly if there are no 
further comments. Just a qucik summary: with this change, we will delete the 
containers on the datanode when you run "del" command, but not inside the SCM. 
So _deleting and re-creating a container will not work_ correctly for now. We 
have filed another jira HDFS-11716 to track and fix that issue.

> Ozone: SCM CLI: Implement delete container command
> --
>
> Key: HDFS-11675
> URL: https://issues.apache.org/jira/browse/HDFS-11675
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: command-line
> Attachments: HDFS-11675-HDFS-7240.001.patch, 
> HDFS-11675-HDFS-7240.002.patch, HDFS-11675-HDFS-7240.003.patch, 
> HDFS-11675-HDFS-7240.004.patch
>
>
> Implement delete container
> {code}
> hdfs scm -container del  -f
> {code}
> Deletes a container if it is empty. The -f option can be used to force a 
> delete of a non-empty container. If the specified container does not exist, 
> print a clear error message.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11675) Ozone: SCM CLI: Implement delete container command

2017-04-28 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989031#comment-15989031
 ] 

Anu Engineer commented on HDFS-11675:
-

[~xyao] [~vagarychen] [~yuanbo] I will commit this shortly if there are no 
further comments. Just a quick summary: with this change, we will delete the 
containers on the datanode when you run "del" command, but not inside the SCM. 
So _deleting and re-creating a container will not work_ correctly for now. We 
have filed another jira HDFS-11716 to track and fix that issue.

> Ozone: SCM CLI: Implement delete container command
> --
>
> Key: HDFS-11675
> URL: https://issues.apache.org/jira/browse/HDFS-11675
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: command-line
> Attachments: HDFS-11675-HDFS-7240.001.patch, 
> HDFS-11675-HDFS-7240.002.patch, HDFS-11675-HDFS-7240.003.patch, 
> HDFS-11675-HDFS-7240.004.patch
>
>
> Implement delete container
> {code}
> hdfs scm -container del  -f
> {code}
> Deletes a container if it is empty. The -f option can be used to force a 
> delete of a non-empty container. If the specified container does not exist, 
> print a clear error message.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11643) Balancer fencing fails when writing erasure coded lock file

2017-04-28 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988979#comment-15988979
 ] 

Wei-Chiu Chuang commented on HDFS-11643:


Hi [~Sammi],
Thanks for working on this patch. I know Andrew has basically +1'd the last 
patch, but I'd like to ask for a second thought:

I think you need to mark this jira as an incompatible change, because the 
method signature of DFSClient.create() is changed.

I also wonder if it's possible to use CreateFlag instead of adding a new 
parameter shouldReplicate for the same purpose. This would also avoid the 
incompatibility issue; plus, it's really not to my taste to have so many 
parameters. I think it's a good idea to think twice before changing the 
signature of a public API. A sketch of the alternative follows.
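
A hedged sketch of the CreateFlag-based alternative (SHOULD_REPLICATE is a 
hypothetical flag here, not an existing one):

{code}
// Callers opt in through the existing EnumSet<CreateFlag> parameter, so the
// public create() signature stays unchanged.
EnumSet<CreateFlag> flags = EnumSet.of(
    CreateFlag.CREATE, CreateFlag.OVERWRITE,
    CreateFlag.SHOULD_REPLICATE);  // hypothetical: force plain replication
FSDataOutputStream out = dfs.create(lockPath, permission, flags,
    bufferSize, replication, blockSize, progress, checksumOpt);
{code}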



> Balancer fencing fails when writing erasure coded lock file
> ---
>
> Key: HDFS-11643
> URL: https://issues.apache.org/jira/browse/HDFS-11643
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11643.001.patch, HDFS-11643.002.patch, 
> HDFS-11643.003.patch, HDFS-11643.004.patch, HDFS-11643.005.patch, 
> HDFS-11643.006.patch
>
>
> At startup, the balancer writes its hostname to the lock file and calls 
> hflush(). hflush is not supported for EC files, so this fails when the entire 
> filesystem is erasure coded.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11529) Add libHDFS API to return last exception

2017-04-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988934#comment-15988934
 ] 

John Zhuge commented on HDFS-11529:
---

Tracked in HDFS-11710. If nobody posts a quick fix soon, I will revert this 
commit.

> Add libHDFS API to return last exception
> 
>
> Key: HDFS-11529
> URL: https://issues.apache.org/jira/browse/HDFS-11529
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Mukil
>Assignee: Sailesh Mukil
>Priority: Critical
>  Labels: errorhandling, libhdfs
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11529.000.patch, HDFS-11529.001.patch, 
> HDFS-11529.002.patch, HDFS-11529.003.patch, HDFS-11529.004.patch, 
> HDFS-11529.005.patch, HDFS-11529.006.patch
>
>
> libHDFS uses a table to compare exceptions against and returns a 
> corresponding error code to the application in case of an error.
> However, this table is manually populated and is often forgotten 
> when new exceptions are added.
> This causes libHDFS to return EINTERNAL (or Unknown Error(255)) whenever 
> these exceptions are hit. These are some examples of exceptions that have 
> been observed behind an Error(255):
> org.apache.hadoop.ipc.StandbyException (Operation category WRITE is not 
> supported in state standby)
> java.io.EOFException: Cannot seek after EOF
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> It is of course not possible to have an error code for each and every type of 
> exception, so one suggestion is to add a call such as hdfsGetLastException() 
> that would return the last exception that a libHDFS thread encountered. This 
> way, an application may choose to call hdfsGetLastException() if it receives 
> EINTERNAL.
> We can use thread-local storage to hold this information, which also ensures 
> that the current functionality is preserved.
> This is a follow-up from HDFS-4997.
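
A hedged Java illustration of the thread-local "last exception" pattern 
proposed above (libhdfs itself is C and would use pthread thread-local 
storage; all names here are illustrative):

{code}
final class LastError {
  private static final ThreadLocal<String> LAST =
      ThreadLocal.withInitial(() -> "");
  static void record(Throwable t) { LAST.set(t.toString()); }
  static String get() { return LAST.get(); } // analogue of hdfsGetLastException()
}
{code}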



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9807) Add an optional StorageID to writes

2017-04-28 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-9807:
-
Attachment: HDFS-9807.007.patch

Adding a patch that fixes checkstyle issues and adds a nose-to-tail 
(end-to-end) test using {{MiniDFSCluster}}.

Note: the test just checks that the {{storageId}} passed into the 
{{VolumeChoosingPolicy}} is part of the volume list. It doesn't test that it's 
the same as the one sent by the NN as part of the request. I wasn't sure of the 
best way to get the value from the {{BlockPlacementPolicy}} into the 
{{VolumeChoosingPolicy}} for comparison. If you think it's required and have a 
good idea of how to connect them, let me know what you think.

> Add an optional StorageID to writes
> ---
>
> Key: HDFS-9807
> URL: https://issues.apache.org/jira/browse/HDFS-9807
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Chris Douglas
>Assignee: Ewan Higgs
> Attachments: HDFS-9807.001.patch, HDFS-9807.002.patch, 
> HDFS-9807.003.patch, HDFS-9807.004.patch, HDFS-9807.005.patch, 
> HDFS-9807.006.patch, HDFS-9807.007.patch
>
>
> The {{BlockPlacementPolicy}} considers specific storages, but when the 
> replica is written, the DN {{VolumeChoosingPolicy}} is unaware of any 
> preference or constraints from other policies affecting placement. This 
> limits heterogeneity to the declared storage types, which are treated as 
> fungible within the target DN. It should be possible to influence or 
> constrain the DN policy to select a particular storage.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11396) TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out

2017-04-28 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988813#comment-15988813
 ] 

Kihwal Lee commented on HDFS-11396:
---

It could be due to interactions with other tests in 
TestNameNodeMetadataConsistency. When run individually, it passes, at least on 
my machine.

> TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out
> -
>
> Key: HDFS-11396
> URL: https://issues.apache.org/jira/browse/HDFS-11396
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, test
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Priority: Minor
> Attachments: patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/18334/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11714) Newly added NN storage directory won't get initialized and cause space exhaustion

2017-04-28 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988809#comment-15988809
 ] 

Kihwal Lee commented on HDFS-11714:
---

None of the findbugs warnings were introduced by the patch.
TestPipelinesFailover passes when run multiple times on my machine.
The TestNameNodeMetadataConsistency failure is not caused by this patch; see HDFS-11396.

> Newly added NN storage directory won't get initialized and cause space 
> exhaustion
> -
>
> Key: HDFS-11714
> URL: https://issues.apache.org/jira/browse/HDFS-11714
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-11714.trunk.patch
>
>
> When an empty namenode storage directory is detected on normal NN startup, it 
> may not be fully initialized. The new directory is still part of the "in-service" 
> NNStorage, and when a checkpoint image is uploaded, a copy will also be written 
> there.  However, the retention manager won't be able to purge old files since 
> the directory is lacking a VERSION file.  This causes fsimages to pile up in the 
> directory.  With a big name space, the disk will be filled on the order of 
> days or weeks.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11712) Ozone: Reuse ObjectMapper instance to improve the performance

2017-04-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988630#comment-15988630
 ] 

Hadoop QA commented on HDFS-11712:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
29s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.cblock.TestCBlockCLI |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:612578f |
| JIRA Issue | HDFS-11712 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865503/HDFS-11712-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 440ff23dbf5c 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 50dd3a5 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19229/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19229/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19229/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Reuse

[jira] [Commented] (HDFS-8131) Implement a space balanced block placement policy

2017-04-28 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988564#comment-15988564
 ] 

maobaolong commented on HDFS-8131:
--

[~liushaohui]
Hello, thanks for this great improvement! 
I have a question about the config key 
DFS_NAMENODE_AVAILABLE_SPACE_BLOCK_PLACEMENT_POLICY_BALANCED_SPACE_PREFERENCE_FRACTION_DEFAULT.
What happens when this key is set between 0.5 and 1.0? I think it is better to 
always set it to 1.0; is that true? There must be something I don't understand 
well. Please point it out, thank you in advance.

> Implement a space balanced block placement policy
> -
>
> Key: HDFS-8131
> URL: https://issues.apache.org/jira/browse/HDFS-8131
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
>  Labels: BlockPlacementPolicy
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: balanced.png, HDFS-8131.004.patch, HDFS-8131.005.patch, 
> HDFS-8131.006.patch, HDFS-8131-v1.diff, HDFS-8131-v2.diff, HDFS-8131-v3.diff
>
>
> The default block placement policy chooses datanodes for new blocks 
> randomly, which results in unbalanced space-used percentages among datanodes 
> after a cluster expansion. The old datanodes are always at a high space-used 
> percentage while the newly added ones are at a low one.
> Though we can use the external balancer tool to balance space usage, it 
> costs extra network IO and the balancing speed is not easy to control.
> An easy solution is to implement a space-balanced block placement policy 
> which chooses datanodes with low used-space percentages for new blocks with a 
> slightly higher probability. Before long, the used percentages of the 
> datanodes will tend toward balance.
> Suggestions and discussions are welcomed. Thanks
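
A hedged sketch of the weighting idea above (names are illustrative, not the 
actual policy code); the "fraction" parameter plays the role of the 
balanced-space-preference key asked about earlier, where 0.5 degenerates to 
random placement and 1.0 always prefers the emptier node:

{code}
// Between two candidate datanodes, prefer the one with the lower used-space
// percentage with probability "fraction".
DatanodeDescriptor pick(DatanodeDescriptor a, DatanodeDescriptor b,
    double fraction, java.util.Random rand) {
  DatanodeDescriptor emptier =
      a.getDfsUsedPercent() < b.getDfsUsedPercent() ? a : b;
  DatanodeDescriptor fuller = (emptier == a) ? b : a;
  return rand.nextDouble() < fraction ? emptier : fuller;
}
{code}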



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11170) Add builder-based create API to FileSystem

2017-04-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988536#comment-15988536
 ] 

Steve Loughran commented on HDFS-11170:
---

When people make changes to the FileSystem class, can they

# update what they actually do in the filesystem specification markdown files, 
so we have a full declaration of what's meant to happen. Yes, it's extra work, 
but it forces you to think through the cross-FS implementation expectations, 
rather than just have them do whatever HDFS does
# write contract tests that the downstream filesystems can all share
# and when we add a new operation, revisit what is wrong with the 
current one, rather than just repeat the old behavior.

One way to assist this is to just warn me that this is going on; while I 
won't do the work, I can help.

Now some comments on the code, of various levels of importance:

* Why is FileContext left out?
* {{newFSDataOutputStreamBuilder()}} isn't a good name. It describes the 
return value, but not what it is trying to do, which is `createFile()`
* If you look at what is wrong with {{FileSystem.create()}}, the fact that 
callers can expect a directory to be created is a problem; it's very expensive 
on object stores, where we have to walk the tree and look for things above 
(files). If you look at the codepaths of how create() is used, most people 
create the parent dir anyway; they don't rely on this feature. So we could 
make it yet another builder option ("createParentDirs"), have people 
explicitly create it if they want, and leave it off by default (yes, we'll have 
to tweak FileSystem.create somehow, but we can do that across the board in our 
own code)

& on the tests:
* I don't think the HDFS test case needs to bring up a test cluster just for 
one test suite; it's slow. Again, moving this into a new contract test and then 
having HDFS be one of the concrete implementations will sort this out. If it's 
added to {{AbstractContractCreateTest}}, then all filesystems will get this test 
for free, which is what is required if they are all expected to support 
the API.
* Your asserts all get their expected/actual values in the wrong order. Flip 
the order (see the sketch below).
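
A minimal illustration of the assert-ordering point (names hypothetical):

{code}
// JUnit's assertEquals takes (expected, actual); reversing them produces
// misleading "expected X but was Y" failure messages.
assertEquals("wrong file length", expectedLen, status.getLen());      // correct
// not: assertEquals("wrong file length", status.getLen(), expectedLen);
{code}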


I'm going to create a new JIRA to finish up the issues. I really do want this 
to be stable. At the very least, the release notes must indicate that this 
is still stabilising.

Sorry for getting in so late and causing trouble, but I'd only just noticed that 
a change had happened to FileSystem.java after the fact.

> Add builder-based create API to FileSystem
> --
>
> Key: HDFS-11170
> URL: https://issues.apache.org/jira/browse/HDFS-11170
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-11170-00.patch, HDFS-11170-01.patch, 
> HDFS-11170-02.patch, HDFS-11170-03.patch, HDFS-11170-04.patch, 
> HDFS-11170-05.patch, HDFS-11170-06.patch, HDFS-11170-07.patch, 
> HDFS-11170-08.patch, HDFS-11170-branch-2.001.patch
>
>
> FileSystem class supports multiple create functions to help users create files. 
> Some create functions have many parameters, and it's hard for users to 
> remember exactly what these parameters are and their order. This task is to add 
> builder-based create functions to help users create files more easily.
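
A hedged sketch of the builder-style create being added (shown with the 
createFile() name suggested in the comments above; the posted patches name the 
entry point newFSDataOutputStreamBuilder(), and option names may differ):

{code}
FSDataOutputStream out = fs.createFile(new Path("/tmp/demo"))
    .overwrite(true)                  // options read like named parameters
    .replication((short) 3)
    .blockSize(128L * 1024 * 1024)
    .build();                         // no long positional argument list
{code}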



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-9066) expose truncate via webhdfs

2017-04-28 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-9066.

  Resolution: Duplicate
Target Version/s:   (was: )

> expose truncate via webhdfs
> ---
>
> Key: HDFS-9066
> URL: https://issues.apache.org/jira/browse/HDFS-9066
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>
> Truncate should be exposed to WebHDFS.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11712) Ozone: Reuse ObjectMapper instance to improve the performance

2017-04-28 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11712:
-
Attachment: HDFS-11712-HDFS-7240.003.patch

Attaching the v003 patch to fix checkstyle issues and simplify the code.

> Ozone: Reuse ObjectMapper instance to improve the performance
> -
>
> Key: HDFS-11712
> URL: https://issues.apache.org/jira/browse/HDFS-11712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11712-HDFS-7240.001.patch, 
> HDFS-11712-HDFS-7240.002.patch, HDFS-11712-HDFS-7240.003.patch
>
>
> In Ozone, there are many places that use {{ObjectMapper}} to do 
> object-JSON transformation. According to the {{ObjectMapper}} performance page 
> (https://github.com/FasterXML/jackson-docs/wiki/Presentation:-Jackson-Performance),
> {{ObjectMapper}} is a heavyweight object, so it is not good behaviour to 
> create a new instance everywhere.
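
A minimal sketch of the reuse pattern (field and method names illustrative): 
ObjectMapper is thread-safe once configured, so it can be created once and 
shared.

{code}
private static final ObjectMapper MAPPER = new ObjectMapper();

static String toJson(Object value) throws IOException {
  return MAPPER.writeValueAsString(value);  // reuses the shared instance
}
{code}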



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11712) Ozone: Reuse ObjectMapper instance to improve the performance

2017-04-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988322#comment-15988322
 ] 

Hadoop QA commented on HDFS-11712:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:612578f |
| JIRA Issue | HDFS-11712 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865461/HDFS-11712-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 202372b21509 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 50dd3a5 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19228/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19228/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19228/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
http